WO2021232898A1 - System and method for auto-focusing multiple cameras - Google Patents
System and method for auto-focusing multiple cameras
- Publication number
- WO2021232898A1 (PCT/CN2021/080062)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- sub
- unfocused
- search
- search spaces
- Prior art date
- 2020-05-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
- H04N23/662—Transmitting camera control signals through networks, e.g. control via the Internet by using master/slave camera arrangements for affecting the control of camera image capture, e.g. placing the camera in a desirable condition to capture a desired image
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
The present disclosure provides a method and system for auto-focusing multiple cameras. The method encompasses dividing a search space of a first camera and at least one second camera into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera and said at least one second camera. Next, the method comprises searching, by each of said first camera and each of said at least one second camera, said associated sub-search spaces to identify a focus point. Further, the method identifies a focused camera and an unfocused camera from said first camera and at least one second camera, wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. The method then determines a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
Description
FIELD OF INVENTION
The present disclosure generally relates to the field of autofocus technology and more particularly to a system and method for simultaneously and efficiently auto-focusing multiple cameras.
This section is intended to provide information relating to the field of the present disclosure, and thus any approach or functionality described below should not be assumed to qualify as prior art merely by its inclusion in this section.
Media capturing has become a trend in today’s world. Electronic devices nowadays are equipped with multiple technically advanced cameras. These cameras have enabled a number of different media capturing modes along with various media capturing techniques. Various advancements in the field of media capturing have also enabled electronic camera devices to provide focused, clear captured media to a great extent.
Further, although the present solutions are able to capture clear, focused media, said solutions lack the capability to capture said media appropriately and clearly in a short duration of time. Furthermore, the time factor plays an important role in capturing the best possible media; for instance, capturing an object in a moving state requires fast processing by the camera device so that all details of the moving object can be captured efficiently.
The existing media capturing solutions have many limitations and there is a need for improvement in this area of technology.
SUMMARY
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In order to overcome at least a few problems associated with the known solutions described in the previous section, an object of the present disclosure is to provide a novel method and system for auto-focusing multiple cameras simultaneously. It is another object of the present disclosure to provide a faster and better focusing approach to capture media. Yet another object of the present disclosure is to reduce the search range per camera to reduce the search time. One more object of the present disclosure is to improve the efficiency of the auto-focus system of a multiple-camera device.
In order to achieve the afore-mentioned objectives, the present disclosure provides a method and system for auto-focusing multiple cameras.
One aspect of the present disclosure relates to a method for auto-focusing multiple cameras. The method comprises dividing, by a processing unit, a search space of a first camera and at least one second camera into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera and said at least one second camera. The method further comprises searching, by each of said first camera and each of said at least one second camera, said associated sub-search spaces to identify a focus point. The method further encompasses identifying, by the processing unit, a focused camera and an unfocused camera from said first camera and at least one second camera, wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. Thereafter, the processing unit determines a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
Another aspect of the present disclosure relates to a system for auto-focusing multiple cameras. The system comprises a first camera and at least one second camera, wherein said first camera and at least one second camera are associated with a search space. The system further comprises a processing unit configured to divide the search space of said first camera and said at least one second camera into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera and said at least one second camera. Each of said first camera and each of said at least one second camera is configured to search said associated sub-search spaces to identify a focus point. Also, the processing unit is configured to identify a focused camera and an unfocused camera from said first camera and at least one second camera, wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. Further, the processing unit is also configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
Yet another aspect of the present disclosure relates to a user equipment for auto-focusing multiple cameras. The user equipment comprises a first camera and at least one second camera, wherein said first camera and at least one second camera are associated with a search space. The user equipment further comprises a system configured to divide said search space of said first camera and said at least one second camera into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera and said at least one second camera. Further, each of said first camera and each of said at least one second camera is configured to search said associated sub-search spaces to identify a focus point. Also, the system is further configured to identify a focused camera and an unfocused camera from said first camera and at least one second camera, wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. The system is thereafter configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
FIG. 1 illustrates a block diagram of the system [100] for auto-focusing multiple cameras, in accordance with an exemplary embodiment of the present disclosure.
FIG. 2 illustrates an exemplary method [200] for auto-focusing in a system comprising multiple cameras, in accordance with an exemplary embodiment of the present disclosure.
FIG. 3 illustrates an exemplary lens position map, in accordance with an exemplary embodiment of the present disclosure.
FIG. 4 illustrates division of a search space into sub-search spaces, in accordance with an exemplary embodiment of the present disclosure.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
As elaborated in the background section, fast processing of the camera devices is required to capture a moving object clearly, and prior art solutions may fail to capture a particular movement of said object due to slow processing. Media capturing requires capturing media at a particular instant of time in order to capture the exact details of an event; the time taken by prior known solutions for processing is usually high and often leads to missing specific details of the event that was intended to be captured.
A number of solutions to deal with the limitations of capturing focused media have been developed from time to time. For instance, prior art solutions relate to synchronizing autofocus by focusing a slave camera on the basis of a master camera. Such solutions rely on the master camera's autofocus system to focus the slave camera, which makes the entire autofocus system slow, as the same dedicated main autofocus system serves all the cameras. Therefore, the prior art solutions lack a faster auto-focusing implementation for capturing media.
In view of these and other existing limitations, there arises an imperative need to provide a solution to overcome the limitations of prior existing solutions and to provide a more efficient method and system of auto-focusing multiple cameras.
As autofocus is an essential part of any media capturing, the present disclosure provides users a solution for fast autofocusing by segmenting the search length across multiple cameras. Without a focused image/media, the capture will be blurred. To overcome the waiting time for a preview to be focused before capturing any media, the present disclosure allows electronic devices with multiple cameras to make this process much faster. The present disclosure uses a focus position map between each camera in the multiple camera system. Also, the present disclosure reduces the search range per camera to further reduce the search time, by splitting the search range for focus among the different cameras, wherein each camera is responsible for searching in its own range only.
As used herein, “user device”, “smart device”, “multiple camera device” and/or “electronic device” may be any electrical, electronic, electromechanical or computing device or equipment having a plurality of camera units installed on it. The electronic device may include, but is not limited to, a mobile phone, smart phone, laptop, general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure and is obvious to a person skilled in the art.
As used herein, a “processing unit” or “processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
As used herein, a “storage unit” refers to a machine- or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
The present disclosure is further explained in detail below with reference now to the diagrams.
FIG. 1 illustrates a system [100] for auto-focusing multiple cameras, in accordance with an exemplary embodiment of the present disclosure. As shown in Fig. 1, the system [100] comprises at least one first camera [102], a plurality of second cameras [104A], [104B], [104C], ..., [104N] (collectively referred to as second camera(s) [104]) and at least one processing unit [106], wherein all the components are assumed to be connected to each other unless otherwise indicated below. Also, in Fig. 1 only a few units, i.e. the camera units and the processing unit, are shown; however, the system [100] may comprise multiple such units, or any number of said units obvious to a person skilled in the art to implement the features of the present disclosure. Also, there may be one or more subunits of said units of the system [100], and the same may not be shown in Fig. 1 for the purpose of clarity.
Furthermore, the first camera [102] and at least one second camera [104] of the system [100] are associated with a search space. The search space is a search range of the first camera [102] and the at least one second camera [104] to focus at least one object to be captured. In an auto-focus system/camera unit, the search range is from macro to infinity.
Also, one of the first camera [102] and the at least one second camera [104] is a preview camera. The preview camera is the camera whose preview frame is currently being displayed at the electronic device. For instance, if a system has one first camera [102] and three second cameras [104], and the preview of the object to be captured is being displayed by one of the second cameras [104], that second camera [104] is the preview camera. Further, in that event, each of the first camera [102] and the three second cameras [104] is associated with its own search space.
Further, the processing unit [106] of the system [100] is connected to said first camera [102] and the at least one second camera [104]. The processing unit is configured to divide the search space of said first camera [102] and said at least one second camera [104] into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera [102] and said at least one second camera [104]. Also, the processing unit [106] is configured to divide the search space into equal sub-search spaces or unequal sub-search spaces. For instance, the sub-search space of the first camera [102] may be greater than the sub-search space of the second camera [104], or the sub-search space of the second camera [104] may be greater than that of the first camera [102]. The said division of the search space into equal or unequal sub-search spaces may be based on the autofocus power of the first camera [102] and said at least one second camera [104].
Thereafter, each of said first camera [102] and each of said at least one second camera [104] is configured to search said associated sub-search spaces to identify a focus point. In an example, the focus point may be the object that is to be captured; in another example, the focus point may be a state of an object that is to be captured. For instance, while capturing a bike race event, a particular bike may be the focus point. In another event of capturing a chemical reaction, the momentary change in the color and/or state of a substance may be the focus point.
Furthermore, the processing unit [106] is configured to identify a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. The processing unit [106] is also configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map. The lens position map is a relation between said first camera [102] and said at least one second camera [104]. The lens position map is stored on a storage unit of the system [100] and comprises mapping details of the first camera [102] with respect to each of the second cameras [104]. Further, the lens position map is updated after a particular instant of time by the processing unit [106] of the system [100].
Thereafter, the processing unit [106] is further configured to shift a current position of said unfocused camera based on the determined lens position of the unfocused camera. The position is shifted such that the unfocused camera starts focusing on the basis of the identified focus point.
Referring to Fig. 2, an exemplary method flow diagram [200] depicting the method for auto-focusing in a system comprising multiple cameras, in accordance with an exemplary embodiment of the present disclosure, is shown. As shown in Fig. 2, the method begins at step [202]. The present disclosure encompasses that the method begins as soon as the camera units of an electronic device are activated, for instance, when the user opens the camera application of the electronic device to capture an image/video. The present disclosure also encompasses that the method begins upon receiving explicit input from the user; for instance, after the user opens/launches a camera application at the electronic device, the user may provide an input by touching the camera preview screen to indicate a request to begin the auto focus, or by turning on the auto-focus feature of the camera application.
At step [204], the method comprises dividing, by a processing unit [106], a search space of a first camera [102] and at least one second camera [104] of said system into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera [102] and said at least one second camera [104]. The search space is a search range of the first camera [102] and the at least one second camera [104] to focus at least one object to be captured. Also, one of the first camera [102] and the at least one second camera [104] is a preview camera.
Further, the division of the search space of said first camera [102] and said at least one second camera [104] into at least the plurality of sub-search spaces comprises dividing the search space into equal sub-search spaces or unequal sub-search spaces. The said division may be based on the autofocus power of the first camera [102] and said at least one second camera [104]. For instance, the sub-search space of the first camera [102] may be greater than that of the second camera [104], or the sub-search space of the second camera [104] may be greater than that of the first camera [102]. The present disclosure thus divides the search range between camera units, reducing the search range per camera unit and subsequently the search time per camera unit. An exemplary division of the search space into a plurality of sub-search spaces is shown in Figure 4. As indicated in Figure 4, an exemplary search space is divided into three sub-search spaces C1, C2 and C3 for the camera lenses L1, L2 and L3, respectively. The divided sub-search spaces C1, C2 and C3 may in one instance be equal sub-search spaces and in another instance unequal sub-search spaces, depending upon the autofocus power of the corresponding camera lenses L1, L2 and L3, as illustrated in the sketch below.
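By way of a non-limiting sketch only (the 0-1000 lens-position scale, the proportional-split rule and all function names below are assumptions of this example, not details mandated by the disclosure), the division of step [204] into the sub-search spaces C1, C2 and C3 of Figure 4 may be pictured as follows:
```python
def divide_search_space(range_start, range_end, af_powers):
    """Split one lens-position search range [range_start, range_end] into
    contiguous sub-ranges, one per camera, sized in proportion to each
    camera's autofocus power (equal powers yield equal sub-ranges)."""
    total = sum(af_powers)
    sub_spaces, cursor = [], range_start
    for power in af_powers:
        width = (range_end - range_start) * power / total
        sub_spaces.append((cursor, cursor + width))
        cursor += width
    return sub_spaces

# Figure 4 analogue: three lenses L1, L2 and L3 of equal autofocus power
# receive equal sub-search spaces C1, C2 and C3 over an assumed 0-1000
# lens-position scale running from macro to infinity.
C1, C2, C3 = divide_search_space(0, 1000, af_powers=[1, 1, 1])
print(C1, C2, C3)  # approx. (0, 333.3), (333.3, 666.7), (666.7, 1000)
```
Passing unequal autofocus powers (e.g. af_powers=[2, 1, 1]) yields the unequal division described above.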
Next, at step [206], the method comprises searching, by each of said first camera [102] and each of said at least one second camera [104], said associated sub-search spaces to identify a focus point. For example, the focus point may be any moving object that is to be captured; in another example, the focus point may be a state of an object that is to be captured. For instance, while capturing media in a football match, a particular player may be the focus point. In another event of capturing a natural phenomenon such as a thunderstorm, lightning may be the focus point.
In order to identify the focus point in the search spaces, the present disclosure encompasses using the auto-focus mechanism of the camera units, i.e. the first camera [102] and each of said at least one second camera [104]. Typically, the auto-focus mechanism works as follows: sensors detect how far away an object within the camera preview frame is, and this information is used by the camera lens to adjust its focal length.
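The disclosure does not fix a particular focus-search algorithm; the sketch below assumes a common contrast-maximization scan, where each camera scans only its own sub-search space and reports a lens position only when sharpness exceeds a threshold. The camera methods move_lens and measure_sharpness, the step size and the threshold are hypothetical stand-ins for the real camera driver:
```python
from concurrent.futures import ThreadPoolExecutor

def search_sub_space(camera, sub_space, step=10, threshold=0.6):
    """Scan one camera's sub-search space and return the lens position of
    peak sharpness, or None when no focus point is found in this sub-range."""
    start, end = sub_space
    best_pos, best_score = None, threshold
    pos = start
    while pos <= end:
        camera.move_lens(pos)                # hypothetical driver call
        score = camera.measure_sharpness()   # e.g. an image contrast metric
        if score > best_score:
            best_pos, best_score = pos, score
        pos += step
    return best_pos  # None means this camera ends up "unfocused"

def search_all(cameras, sub_spaces):
    """Run every camera's scan concurrently, one sub-range each, so the
    overall search time approaches that of a single sub-range rather than
    that of the full macro-to-infinity range."""
    with ThreadPoolExecutor(max_workers=len(cameras)) as pool:
        return list(pool.map(search_sub_space, cameras, sub_spaces))
```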
Next, at step [208], the method comprises identifying, by the processing unit [106], a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. For instance, if the first camera [102] is unable to identify any focus point in its sub-search space, but one of the second cameras [104] is able to identify the focus point in its sub-search space, then the first camera [102] is considered an unfocused camera while the second camera [104] is considered a focused camera.
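Continuing the same illustrative sketch, step [208] then amounts to partitioning the cameras by whether their scan returned a focus position (the helper below is hypothetical, not part of the disclosure):
```python
def identify_focused_unfocused(cameras, results):
    """Split cameras into focused (scan found a lens position) and
    unfocused (scan returned None), mirroring step [208]."""
    focused = [c for c, r in zip(cameras, results) if r is not None]
    unfocused = [c for c, r in zip(cameras, results) if r is None]
    return focused, unfocused
```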
Further, at step [210], the method comprises determining, by the processing unit [106], a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map. The lens position map is a relation between said first camera [102] and said at least one second camera [104]. The lens position map is stored on a storage unit of the system [100] and comprises mapping details of the first camera [102] with respect to each of the second cameras [104]. The lens positioning map includes, but is not limited to, details such as focal length, camera lens positions and similar camera parameters. Further, the lens position map is updated after a particular instant of time by the processing unit [106] of the system [100]. An exemplary lens position map is shown in Figure 3. Figure 3 shows a graph between the first camera lens position and the second camera lens position, wherein the graph indicates an actual lens position relation between the first camera [102] and the second camera [104], along with an estimated lens position relation between the first camera [102] and the second camera [104].
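As a sketch of this determination only (the calibration pairs and the linear interpolation between them are assumptions of this example; the disclosure specifies only that the map relates the cameras' lens positions), the stored map may be sampled at the focused camera's settled lens position:
```python
from bisect import bisect_left

class LensPositionMap:
    """Estimates one camera's lens position from another camera's settled
    lens position using stored calibration pairs -- an assumed
    representation of the disclosure's lens position map."""

    def __init__(self, pairs):
        # pairs: [(focused_cam_pos, other_cam_pos), ...], sorted by the
        # focused camera's lens position.
        self.xs = [p[0] for p in pairs]
        self.ys = [p[1] for p in pairs]

    def estimate(self, focused_pos):
        """Linearly interpolate between the nearest calibration points
        (the 'estimated lens position relation' of Figure 3)."""
        i = bisect_left(self.xs, focused_pos)
        if i == 0:
            return self.ys[0]
        if i == len(self.xs):
            return self.ys[-1]
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (focused_pos - x0) / (x1 - x0)

# Hypothetical calibration between the first camera and one second camera:
lens_map = LensPositionMap([(0, 30), (250, 260), (500, 520), (1000, 990)])
target = lens_map.estimate(420)  # focused camera settled at position 420
# The unfocused camera's lens is then shifted to `target`, as described next.
```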
Also, the method further comprises shifting a current position of said unfocused camera based on the determined lens position of the unfocused camera. The position is shifted such that the unfocused camera starts focusing on the basis of the identified focus point.
After successfully enabling auto-focusing in the multiple cameras, the method terminates at step [212].
Furthermore, one more aspect of the present disclosure relates to a user equipment for auto-focusing multiple cameras. The user equipment comprises at least one first camera [102], a plurality of second cameras [104] (collectively referred to as second camera [104]) and at least one system [100], wherein all the components are assumed to be connected to each other unless otherwise indicated below.
The first camera [102] and each of the second cameras [104] of the system [100] are associated with a search space. The search space is a search range of the first camera [102] and the at least one second camera [104] to focus at least one object to be captured. Also, one of the first camera [102] and the at least one second camera [104] is a preview camera. Further, the preview camera is the camera on which the preview of an event is currently being displayed.
Further, the system [100] is connected to the first camera [102] and the second camera [104]. The system [100] is configured to divide the search space of said first camera [102] and said second camera [104] into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera [102] and said second camera [104]. Also, the system [100] is configured to divide the search space into equal sub-search spaces or unequal sub-search spaces. The said division may be based on the autofocus power of the first camera [102] and said second camera [104].
Thereafter, each of said first camera [102] and each of said second camera [104] is configured to search said associated sub-search spaces to identify a focus point. Also, the system [100] is further configured to identify a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point. The system [100] is also configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map. The lens position map is a relation between said first camera [102] and said at least one second camera [104]. The lens position map is stored on a storage unit of the system [100] and comprises mapping details of the first camera [102] with respect to each of the second cameras [104]. Further, the lens position map is updated after a particular instant of time by the system [100].
Thereafter, the system [100] is further configured to shift a current position of said unfocused camera based on the determined lens position of the unfocused camera. The position is shifted such that the unfocused camera starts focusing on the basis of the identified focus point.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
Claims (15)
- A method [200] for auto-focusing in a system comprising multiple cameras, the method comprising:
  - dividing, by a processing unit [106], a search space of a first camera and at least one second camera into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera and said at least one second camera;
  - searching, by each of said first camera [102] and each of said at least one second camera [104], said associated sub-search spaces to identify a focus point;
  - identifying, by the processing unit [106], a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point; and
  - determining, by the processing unit [106], a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
- The method as claimed in claim 1, wherein dividing, by the processing unit [106], the search space of said first camera [102] and said at least one second camera [104] into at least the plurality of sub-search spaces comprises dividing the search space into one of equal sub-search spaces and unequal sub-search spaces.
- The method as claimed in claim 1, wherein one of said first camera [102] and said at least one second camera [104] is a preview camera.
- The method as claimed in claim 1, wherein the lens position map is a relation between said first camera [102] and said at least one second camera [104].
- The method as claimed in claim 1, further comprising shifting a current position of said unfocused camera based on the determined lens position of the unfocused camera.
- A system [100] for auto-focusing multiple cameras, the system comprising:
  - a first camera [102] and at least one second camera [104], wherein said first camera [102] and at least one second camera [104] are associated with a search space; and
  - a processing unit [106], configured to divide the search space of said first camera [102] and said at least one second camera [104] into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera [102] and said at least one second camera [104];
  wherein:
  - each of said first camera [102] and each of said at least one second camera [104] is configured to search said associated sub-search spaces to identify a focus point;
  - the processing unit [106] is configured to identify a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point; and
  - the processing unit [106] is configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
- The system as claimed in claim 6, wherein the processing unit [106] is configured to divide the search space into one of equal sub-search spaces and unequal sub-search spaces.
- The system as claimed in claim 6, wherein one of said first camera [102] and said at least one second camera [104] is a preview camera.
- The system as claimed in claim 6, wherein the lens position map is a relation between said first camera [102] and said at least one second camera [104].
- The system as claimed in claim 6, wherein the processing unit [106] is configured to shift a current position of said unfocused camera based on the determined lens position of the unfocused camera.
- A user equipment for auto-focusing multiple cameras, the user equipment comprising:
  - a first camera [102] and at least one second camera [104], wherein said first camera [102] and at least one second camera [104] are associated with a search space; and
  - a system [100], configured to divide said search space of said first camera [102] and said at least one second camera [104] into at least a plurality of sub-search spaces, wherein each of said sub-search spaces is associated with one of said first camera [102] and said at least one second camera [104];
  wherein:
  - each of said first camera [102] and each of said at least one second camera [104] is configured to search said associated sub-search spaces to identify a focus point;
  - the system [100] is configured to identify a focused camera and an unfocused camera from said first camera [102] and at least one second camera [104], wherein said focused camera identifies said focus point and said unfocused camera does not identify said focus point; and
  - the system [100] is configured to determine a lens position of said unfocused camera based on a position of said focused camera and at least one lens positioning map.
- The user equipment as claimed in claim 11, wherein the system [100] is configured to divide the search space into one of equal sub-search spaces and unequal sub-search spaces.
- The user equipment as claimed in claim 11, wherein one of said first camera [102] and said at least one second camera [104] is a preview camera.
- The user equipment as claimed in claim 11, wherein the lens position map is a relation between said first camera [102] and said at least one second camera [104].
- The user equipment as claimed in claim 11, wherein the system [100] is configured to shift a current position of said unfocused camera based on the determined lens position of the unfocused camera.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202041020908 | 2020-05-18 | | |
| IN202041020908 | 2020-05-18 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021232898A1 (en) | 2021-11-25 |
Family
ID=78709161
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/080062 (WO2021232898A1, Ceased) | System and method for auto-focusing multiple cameras | 2020-05-18 | 2021-03-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021232898A1 (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104333702A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | A method, device and terminal for automatic focusing |
| CN104363376A (en) * | 2014-11-28 | 2015-02-18 | 广东欧珀移动通信有限公司 | Continuous focusing method, device and terminal |
| US20170359536A1 (en) * | 2016-06-09 | 2017-12-14 | Lg Electronics Inc. | Moving picture photographing apparatus having dual camera |
| CN106060358A (en) * | 2016-07-20 | 2016-10-26 | 成都微晶景泰科技有限公司 | Scene continuous analysis method and device, and imaging device |
| WO2018147570A1 (en) * | 2017-02-10 | 2018-08-16 | Lg Electronics Inc. | Terminal and method of controlling therefor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21808756; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21808756; Country of ref document: EP; Kind code of ref document: A1 |