WO2012047216A1 - Systems and methods for acquiring and processing image data produced by camera arrays - Google Patents
Systems and methods for acquiring and processing image data produced by camera arrays
- Publication number
- WO2012047216A1 (PCT/US2010/051661)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- image data
- controller
- cameras
- computing device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/58—Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/41—Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/21—Indexing scheme for image data processing or generation, in general involving computational photography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Definitions
- This disclosure is directed to camera arrays, and in particular, to systems for processing images produced by camera arrays.
- Computational photography is a technique for creating images using a computer and a variety of data processing methods applied to a number of digital images taken from different viewpoints.
- The resulting photographic images are, in some cases, the same as those that would have been created by a single camera with characteristics that would be too difficult or expensive to manufacture. For example, it is possible to generate images or video like those captured by cameras with large lenses or unusual optics. It is also possible to create synthetic or virtual images as if they were taken from an inaccessible position.
- Image-based rendering is a related technique used to create photorealistic images of a real scene that is also based on a set of two-dimensional images of the scene captured from different viewpoints; in this case, however, the set of two-dimensional images is used to compose a three-dimensional model that can be used to render images of novel views of the scene not captured in the original set of two-dimensional images.
- The generation of virtual images is based on the concept of light fields, which model the distribution of light intensity and propagation in a scene.
- A fundamental component of these imaging methods is the acquisition of information about the light field from several views. For example, light-field measurements can be performed by moving a single camera to capture images of a static scene from different viewing positions.
- The quality of the synthetic views depends on the number and resolution of the captured views, so it can be much more convenient to capture the light field using arrays with a large number of cameras.
- Figure 10A shows an example image array and enlargements of a row of real images captured by a row of cameras in a camera system.
- Figure 10B shows columns of pixels associated with the row of real images shown in Figure 10A used to interpolate a line.
- Figure 11 shows a flow diagram summarizing a method for generating virtual images of a scene.
- This disclosure is directed to camera systems including a camera array, data buses shared by the cameras, and at least one processor that processes images captured by the camera array.
- The camera array captures the light field of a scene in a set of images that are processed to create a variety of different viewing experiences of the scene using only portions of the image data collected by the camera array.
- The detailed description includes a brief general description of light fields in a first subsection, followed by a description of various camera systems.
- A ray of light traveling through space can be characterized by a vector, the length of which corresponds to the amount of light, or radiance, traveling in the direction of the ray.
- The radiance along a vector representing a ray of light in three-dimensional space is characterized by a plenoptic function, which is often used in computer graphics and computer visualization to characterize an image of a scene from many different viewing locations, at any viewing angle, and at any point in time.
- The plenoptic function can be parameterized as a five-dimensional function with three Cartesian coordinates x, y, and z and two spherical coordinate angles θ and φ. Note that the plenoptic function can also be parameterized by additional dimensions, including time and wavelength.
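In symbols, this parameterization can be written as follows (a conventional rendering of the description above, not notation quoted from the patent):

```latex
L = L(x, y, z, \theta, \phi)
\qquad \text{or, extended with time and wavelength,} \qquad
L = L(x, y, z, \theta, \phi, t, \lambda)
```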
- Figure 1 shows a vector representation of a ray of light passing through a point 101 in three-dimensional space.
- The location of the point 101 in three-dimensional space is characterized by Cartesian coordinates (x, y, z), and the direction of a ray of light that emanates from a point source 102 and passes through the point 101 is represented by a vector v₁ 103.
- The plenoptic function L(v₁) is the radiance of the light passing through the point 101 in the direction represented by the vector v₁ 103.
- A light field at a point in space is a function that characterizes the amount of light traveling in all directions through the point.
- The light field can be treated as a collection of vectors, each vector representing a ray of light emanating from a different light source and corresponding to a different direction in which light passes through the point. Summing all vectors passing through a single point, over an entire sphere of directions, produces a single scalar value at the point, called the total irradiance, and a resultant vector, which is also a five-dimensional plenoptic function.
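Written out, the summation just described takes the following form (a hedged sketch in the notation above; the patent itself gives no formula):

```latex
E(x, y, z) \;=\; \sum_{i} L(\mathbf{v}_i)
\quad\longrightarrow\quad
E(x, y, z) \;=\; \int_{4\pi} L(x, y, z, \theta, \phi)\, d\omega
```

where the sum runs over the discrete rays vᵢ through the point and the integral is its continuous limit over the full sphere of directions.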
- Figure 2 shows a vector representation of two rays of light comprising a light field passing through the point 101 in three-dimensional space.
- The ray represented by the vector v₁ 103 emanates from the point source 102.
- A second ray passing through the same point 101 is represented by a second vector v₂ 201 emanating from a second point source 202, with a plenoptic function L(v₂).
- The two vectors v₁ 103 and v₂ 201 correspond to the light field at the point 101.
- Figure 3 shows an alternative two-plane parameterization of a light field.
- The uv-plane 302 and the xy-plane 304 are parallel planes used to parameterize the rays of light passing through the planes 302 and 304 and are referred to as the light slab.
- The uv-plane 302 is often referred to as the camera plane, and the xy-plane 304 is often referred to as the image plane. Only rays with (u,v) and (x,y) coordinates inside both quadrilaterals are represented in the light slab.
- The light slab represents the beam of light entering one quadrilateral (e.g., the uv-plane 302) and exiting another quadrilateral (e.g., the xy-plane 304).
- Each image pixel (x,y) in the xy-plane 304 corresponds to a ray passing through the planes 302 and 304.
- The plenoptic function L(u,v,x,y) associated with each ray is parameterized by one point in the uv-plane 302 and one point in the xy-plane 304.
- Figure 3 shows an example of two plenoptic function characterizations of rays 306 and 308 passing through the slab.
- Each perspective view image corresponds to all the rays emanating from the wv-plane 302 and arriving at one point in the xy-plane 304.
- The light field is the collection of images taken from points in the xy-plane 304.
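As a concrete illustration, the sampled light slab can be held in a four-dimensional array indexed as L[u, v, x, y]. The following is a minimal sketch (not the patent's implementation), following the convention used below in which (x, y) fixes a camera position and (u, v) indexes pixels, so fixing (x, y) yields one perspective view image:

```python
import numpy as np

U, V, X, Y = 64, 64, 8, 6              # hypothetical sampling resolution
light_field = np.zeros((U, V, X, Y))   # radiance samples L(u, v, x, y)

def perspective_view(L, ix, iy):
    """Return the 2D (u, v) image seen from camera grid position (ix, iy)."""
    return L[:, :, ix, iy]

image = perspective_view(light_field, ix=3, iy=2)   # one 64x64 view
```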
- The plenoptic function can be measured using a digital camera.
- The camera, however, can only be at one position in space, and the data associated with the image is two-dimensional.
- The image is represented as a set of samples of the plenoptic function in a region of a two-dimensional plane embedded in the light field's four-dimensional space.
- A single image can be represented by the fixed x and y position of the camera, with pixel coordinates corresponding to dimensions u and v.
- To sample the full light field, the x and y positions are varied, which can be done by moving the camera or by using an array with many cameras at different positions.
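Continuing the hypothetical sketch above, the frames measured by an X-by-Y grid of cameras can be stacked into the 4D sample array:

```python
import numpy as np

def assemble_light_field(images):
    """images[ix][iy] is the 2D (u, v) frame from the camera at grid
    position (ix, iy); stacking the frames yields the samples L[u, v, x, y]."""
    X, Y = len(images), len(images[0])
    U, V = images[0][0].shape
    L = np.empty((U, V, X, Y))
    for ix in range(X):
        for iy in range(Y):
            L[:, :, ix, iy] = images[ix][iy]
    return L

# e.g., an 8-by-6 grid of 64x64 frames (zeros standing in for captures)
frames = [[np.zeros((64, 64)) for _ in range(6)] for _ in range(8)]
L = assemble_light_field(frames)       # shape (64, 64, 8, 6)
```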
- The present invention relates to systems and methods that intelligently select a protocol for data acquisition and transmission that minimizes cost.
- FIG. 4A shows an example of a first camera system 400.
- The system 400 includes a camera array 402 composed of 48 individual cameras, a camera processor/controller 404, and a computing device 406.
- The system 400 also includes eight camera buses 408-415.
- Each camera bus connects a subset of eight cameras of the camera array 402 to the camera processor/controller 404.
- Each camera in the column of cameras 416-421 is connected to the camera bus 408 and sends image data and receives instructions over the camera bus 408.
- The camera processor/controller 404 includes a controller that manages sending image data captured by the cameras to the computing device 406 and sending instructions from the computing device 406 to the individual cameras.
- The computing device 406 sends instructions to at least one camera to send only the image data (i.e., pixel values) used to generate an image to be processed by the computing device 406.
- The camera processor/controller 404 also controls the operation of each camera of the camera array by sending instructions to each camera over its connected camera bus.
- The computing device 406 can direct each camera to send to the camera processor/controller 404 only the image data used by the computing device 406 to generate virtual images of different perspective views of a scene.
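As an illustration of this selective readout, here is a minimal sketch assuming a hypothetical `RegionRequest` message and `Camera.capture()` interface (the names and message format are invented here, not taken from the patent):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RegionRequest:               # hypothetical request message
    camera_id: int
    rows: slice                    # pixel rows this camera must contribute
    cols: slice                    # pixel columns this camera must contribute

class Camera:                      # stand-in for one camera on a camera bus
    def __init__(self, sensor: np.ndarray):
        self.sensor = sensor       # 2D array standing in for a captured frame
    def capture(self) -> np.ndarray:
        return self.sensor

def gather_image_data(cameras, requests):
    """Controller-side sketch: read back only the requested pixels.
    Cameras not named in any request are never read."""
    return {r.camera_id: cameras[r.camera_id].capture()[r.rows, r.cols]
            for r in requests}

cams = {0: Camera(np.zeros((480, 640)))}
data = gather_image_data(cams, [RegionRequest(0, slice(100, 110), slice(0, 640))])
```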
- FIG. 4B shows an example of a second camera system 450.
- The system 450 includes a camera array 452 composed of 48 individual cameras, a controller 454, and a computing device 456. Like the camera system 400, the example system 450 also includes eight camera buses 458-465. Each camera bus connects a subset of eight cameras of the camera array 452 to the controller 454.
- The controller 454 is connected to the computing device 456 and manages sending image data captured by the cameras to the computing device 456 and sending instructions from the computing device 456 to the individual cameras.
- Each camera in the camera array 452 is connected to at least two other adjacent cameras and includes a processor for performing image processing. Double-headed arrows identify connected cameras that can exchange image data for image processing.
- For example, camera 466 is connected to cameras 467 and 468.
- The camera 466 can exchange image data with cameras 467 and 468 or exchange information about the images captured by other adjacent cameras.
- Each camera is configured to perform image processing and send the processed image data to the controller 454, which forwards the image data to the computing device 456.
- Each camera can be directed by the computing device 456 to process images captured by the camera and adjacent connected cameras in a particular manner and to send the processed images to the computing device 456 when finished.
- The computing device 456 can direct each camera to collect, process, and send to the controller 454 only the image data used by the computing device 456 to generate virtual images of different perspective views of a scene.
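A hedged sketch of this division of labor, assuming hypothetical `SmartCamera` and `Controller` classes (invented here; a simple average of neighboring frames stands in for whatever processing the computing device actually requests):

```python
import numpy as np

class Controller:
    def receive(self, cam_id, data):
        print(cam_id, data.shape)      # forward processed data onward

class SmartCamera:
    def __init__(self, cam_id: int, frame: np.ndarray):
        self.cam_id = cam_id
        self.frame = frame             # most recently captured frame
        self.neighbors = []            # adjacent, connected cameras

    def process_and_send(self, region, controller):
        """Blend this camera's pixels in `region` with its neighbors' and
        forward only the processed result to the controller."""
        patches = [n.frame[region] for n in self.neighbors]
        result = sum(patches, self.frame[region]) / (len(patches) + 1)
        controller.receive(self.cam_id, result)
```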
- Camera arrays are not limited to the 48 cameras of the systems 400 and 450. The camera array of a camera system can include any number of cameras, and the cameras can be arranged in any suitable manner.
- Figures 5A-5D show four examples of arrangements of camera arrays.
- A camera array 502 is composed of a two-dimensional planar arrangement of 80 cameras.
- A camera array 504 is composed of a concave arrangement of 50 cameras.
- A camera array 506 is composed of a linear arrangement of 8 cameras.
- A camera array 508 is composed of an arc-like arrangement of 10 cameras. Note that camera arrays of the present invention are not limited to the arrangements shown in Figures 5A-5D.
- The cameras comprising a camera array of a camera system can be arranged in any suitable manner, including convex, diagonal, or random arrangements, to capture images comprising the light field of a scene.
- Each camera in a camera array can be connected to at least one camera in the same array in order to exchange and process image data as described above for the camera system 450.
- FIG. 6A shows a perspective view of an example two-dimensional planar camera array 602 of a camera system operated to capture different perspective view real images of a cube 604.
- The face 606 of the cube 604 is oriented parallel to the plane of the camera array 602.
- Image array 608 represents perspective view real images of the cube 604 captured by each camera of the camera array 602.
- The perspective view real image of the cube 604 captured by the camera 612 is represented by the real image 610.
- The four perspective view real images 610 and 614-616, identified by X's in the image array 608, are enlarged in Figure 6B.
- The collection of real images presented in the image array 608 is the four-dimensional light field of the cube 604, and subsets of the real images can be used to generate perspective view virtual images of the cube 604 as if the virtual images were captured from different virtual camera locations.
- Image processing of real images captured by a camera array enables the generation of perspective view virtual images as if the virtual images were obtained from virtual cameras located at positions different from the positions of the cameras of the camera array.
- Figure 7 shows a top view of the camera array 602.
- The images captured by cameras in the camera array 602 can be processed to generate three different perspective view virtual images of a scene, as if each of the virtual images were captured by one of the virtual cameras 702-704.
- Figure 8A shows the camera system 400 shown in Figure 4A.
- Figure 8B shows an array of real images 800 of a scene, each real image captured by a corresponding camera in the camera array 402.
- Heavily bordered rectangles 801-804 shown in Figure 8B represent real images captured by cameras 806-809, respectively, shown in Figure 8A.
- Image data associated with real images 801-804 can be used to generate a virtual image 810, shown in Figure 8B, with a perspective view of the scene that is different from the perspective views in the images 801-804.
- The virtual image 810 can be generated using the image data associated with real images 801-804 as if the image 810 had been captured by a virtual camera 812, shown in Figure 8A.
- The computing device 406 may send instructions to the camera processor/controller 404 indicating the perspective view to be generated, or indicating that only certain image data captured by cameras 806-809 is to be used to generate the virtual image 810.
- The camera processor/controller 404 responds to the instructions sent by the computing device 406 by retrieving from the cameras 806-809 only the image data used to generate the virtual image 810, which it collects and sends to the computing device 406.
- The computing device 406 processes the image data to generate the virtual image 810.
- The images captured by the remaining cameras may not be needed in generating the virtual image 810.
- In that case, the camera processor/controller 404 may make no request that the remaining cameras send any image data.
- Furthermore, only certain portions of the image data obtained from the real images used in generating the virtual image may be sent to the computing device 406 for image processing.
- Figure 8C shows the four example real images 801-804 taken of the cube 604 described above with reference to Figure 6. For the sake of simplicity, suppose that each real image contains substantially the same background and front surface, or face 606, image data of the cube 604.
- Because each real image 801-804 is taken from a different perspective view of the cube 604, each real image contains different image data associated with the top surface and side surface of the cube 604.
- The camera processor/controller 404 may therefore request that only one of the cameras send the image data corresponding to the background and front surface 606 of the cube 604, while the image data corresponding to the top surface and side of the cube 604 is sent to the computing device 406 in order to generate the virtual image 814.
- The camera system 450 can be used to generate the virtual image 810 as described above for the camera system 400, except that, because each camera includes at least one processor for image processing, the individual cameras share image data with neighboring cameras so that the image data needed to generate the virtual image 810 is sent from the appropriate cameras.
- Figure 9 shows the camera system 450 shown in Figure 4B.
- The computing device 456 may send instructions to all of the cameras indicating the perspective view to be generated, or indicating that only certain image data captured by cameras 901-904 can be used to generate the virtual image 810.
- The cameras 901-904 capable of supplying the image data used to generate the virtual image 810 respond to the instructions sent by the computing device 456 by sending only the image data to be used in generating the virtual image 810, and the computing device 456 processes the image data received from the cameras 901-904 to generate the virtual image 810.
- The images captured by the remaining cameras may not be needed in generating the virtual image 810. As a result, the remaining cameras take no action in response to the instructions sent by the computing device 456.
- The camera systems of Figures 8-9 are used to select real images that can be used in generating a virtual image. However, camera systems of the present invention can also be used to perform targeted selection of particular pixel data from the real images captured by the cameras of a camera array.
- Figure 10A shows an image array 1000 and enlargements of a row of real images 1001-1008 captured by a row of cameras in a camera system, such as the camera systems 400 and 450.
- Each real image shows a different perspective view of the cube 604 for a fixed y-coordinate value.
- The vertical dotted lines 1011-1018 in each of the corresponding real images 1001-1008 represent a column of pixels for a fixed y-coordinate value.
- Figure 10B shows the columns of pixels 1011-1018.
- Hash-marked pixels, such as pixels 1020, represent a line of an interpolated perspective view of the virtual image 1010, shown in Figure 10A.
- Interpolation is typically performed by selecting certain pixels in a neighborhood of the desired pixels.
- A neighborhood that can be used to interpolate a line in the virtual image 1010 is outlined by the dotted-line box 1022. Note that only a small subset of all the pixels, represented by the shaded pixels 1024 of the columns of pixels 1011-1018, is used to compute the interpolated line. As a result, the camera system is operated so that only the shaded pixels are sent from the corresponding cameras of the camera array to the computing device for interpolating the line.
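A minimal sketch of this kind of line interpolation, assuming (purely for illustration; the patent does not prescribe a scheme) linear blending between the two camera columns that bracket the virtual viewpoint:

```python
import numpy as np

def interpolate_line(columns, positions, virtual_pos):
    """columns[i] is the 1D pixel column taken from the camera at
    positions[i]; return the column a virtual camera at virtual_pos
    (strictly between the first and last positions) would see."""
    positions = np.asarray(positions, dtype=float)
    i = int(np.searchsorted(positions, virtual_pos)) - 1    # left neighbor
    t = (virtual_pos - positions[i]) / (positions[i + 1] - positions[i])
    return (1 - t) * columns[i] + t * columns[i + 1]        # weighted blend

# Only the two bracketing columns (the "shaded pixels") need to be sent:
cols = [np.full(8, v) for v in (0.0, 1.0, 2.0)]             # toy columns
line = interpolate_line(cols, [0.0, 1.0, 2.0], 1.25)        # blends cols[1], cols[2]
```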
- Figure 11 shows a flow diagram summarizing a method for generating virtual images of a scene.
- First, images of a scene are captured using a camera array, each image captured by a camera from a different viewpoint of the scene, as described above with reference to Figure 6.
- Next, each camera is directed to send only a portion of the image data captured by the camera to a computing device for image processing, as described above with reference to Figures 4 and 9.
- Finally, a virtual image of a perspective view of the scene not captured by the cameras is generated, as described above with reference to Figure 8.
- The virtual image is formed from the portions of the image data captured by the cameras and sent to the computing device.
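Tying the steps together, a compact sketch reusing the hypothetical `Camera` interface from the earlier example (`plan` and `compose` are likewise invented names):

```python
def generate_virtual_image(cameras, plan, compose):
    """cameras: mapping camera id -> Camera; plan: mapping camera id ->
    (row_slice, col_slice) naming the portion that camera contributes;
    compose: function that forms the virtual image from the portions."""
    portions = {cid: cameras[cid].capture()[region]   # only partial data moves
                for cid, region in plan.items()}
    return compose(portions)                          # form the virtual view
```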
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to camera systems comprising a camera array, data buses shared by the cameras, and one or more processors that process the images captured by the camera array. In one aspect, a camera system (450) comprises a camera array (452), a controller (454), a number of camera buses (458-465), and a computing device (456) connected to the controller. Each camera is connected to one or more adjacent cameras and includes one or more processors. Each camera bus connects a subset of the cameras of the camera array to the controller. In addition, each camera can be directed to send to the controller only the image data used by the computing device to generate virtual images, and to receive instructions from the controller over a camera bus.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2010/051661 WO2012047216A1 (fr) | 2010-10-06 | 2010-10-06 | Systems and methods for acquiring and processing image data produced by camera arrays |
| US13/878,053 US9232123B2 (en) | 2010-10-06 | 2010-10-06 | Systems and methods for acquiring and processing image data produced by camera arrays |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2010/051661 WO2012047216A1 (fr) | 2010-10-06 | 2010-10-06 | Systems and methods for acquiring and processing image data produced by camera arrays |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012047216A1 (fr) | 2012-04-12 |
Family
ID=45928003
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2010/051661 Ceased WO2012047216A1 (fr) | 2010-10-06 | 2010-10-06 | Systems and methods for acquiring and processing image data produced by camera arrays |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US9232123B2 (fr) |
| WO (1) | WO2012047216A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017112905A1 (fr) * | 2015-12-22 | 2017-06-29 | Google Inc. | Capture and render of virtual reality content employing a light field camera array |
| EP3687155A1 (fr) * | 2019-01-22 | 2020-07-29 | Sick Ag | Modular camera device and method for optical detection |
Families Citing this family (30)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
| US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
| US9001226B1 (en) * | 2012-12-04 | 2015-04-07 | Lytro, Inc. | Capturing and relighting images using multiple devices |
| US9769365B1 (en) * | 2013-02-15 | 2017-09-19 | Red.Com, Inc. | Dense field imaging |
| US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
| US10386614B2 (en) * | 2014-10-31 | 2019-08-20 | Everready Precision Ind. Corp. | Optical apparatus |
| US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
| US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
| US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
| US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
| US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
| US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
| US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
| US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
| US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
| US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
| US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
| US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
| US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
| US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
| CN113504657B (zh) * | 2016-07-15 | 2025-04-18 | Light Field Lab, Inc. | Selective propagation of energy in light field and holographic waveguide arrays |
| US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
| US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
| US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
| US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
| US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
| KR20200116943A (ko) | 2018-01-14 | 2020-10-13 | Light Field Lab, Inc. | Holographic and diffractive optical encoding systems |
| US11650354B2 (en) | 2018-01-14 | 2023-05-16 | Light Field Lab, Inc. | Systems and methods for rendering data from a 3D environment |
| WO2019140348A2 (fr) | 2018-01-14 | 2019-07-18 | Light Field Lab, Inc. | Light field vision-correction device |
| US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20060023714A (ko) * | 2004-09-10 | 2006-03-15 | POSTECH Foundation | Image registration system and image registration method |
| KR20090041843A (ko) * | 2007-10-25 | 2009-04-29 | POSTECH Academy-Industry Foundation | Real-time stereoscopic image registration system using multiple cameras and method thereof |
| US7538797B2 (en) * | 2000-06-28 | 2009-05-26 | Microsoft Corporation | Scene capturing and view rendering based on a longitudinally aligned camera array |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6002743A (en) | 1996-07-17 | 1999-12-14 | Telymonde; Timothy D. | Method and apparatus for image acquisition from a plurality of cameras |
| US6522325B1 (en) * | 1998-04-02 | 2003-02-18 | Kewazinga Corp. | Navigable telepresence method and system utilizing an array of cameras |
| GB2343320B (en) * | 1998-10-31 | 2003-03-26 | Ibm | Camera system for three dimentional images and video |
| US7015954B1 (en) * | 1999-08-09 | 2006-03-21 | Fuji Xerox Co., Ltd. | Automatic video system using multiple cameras |
| US6825845B2 (en) * | 2002-03-28 | 2004-11-30 | Texas Instruments Incorporated | Virtual frame buffer control system |
| US20050237631A1 (en) * | 2004-04-16 | 2005-10-27 | Hiroyuki Shioya | Image pickup apparatus and image pickup method |
| EP1841213A1 (fr) * | 2006-03-29 | 2007-10-03 | THOMSON Licensing | Appareil et méthode de combinaison de signaux vidéo |
-
2010
- 2010-10-06 WO PCT/US2010/051661 patent/WO2012047216A1/fr not_active Ceased
- 2010-10-06 US US13/878,053 patent/US9232123B2/en not_active Expired - Fee Related
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7538797B2 (en) * | 2000-06-28 | 2009-05-26 | Microsoft Corporation | Scene capturing and view rendering based on a longitudinally aligned camera array |
| KR20060023714A (ko) * | 2004-09-10 | 2006-03-15 | POSTECH Foundation | Image registration system and image registration method |
| KR20090041843A (ko) * | 2007-10-25 | 2009-04-29 | POSTECH Academy-Industry Foundation | Real-time stereoscopic image registration system using multiple cameras and method thereof |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017112905A1 (fr) * | 2015-12-22 | 2017-06-29 | Google Inc. | Capture and render of virtual reality content employing a light field camera array |
| US10244227B2 (en) | 2015-12-22 | 2019-03-26 | Google Llc | Capture and render of virtual reality content employing a light field camera array |
| EP3687155A1 (fr) * | 2019-01-22 | 2020-07-29 | Sick Ag | Modular camera device and method for optical detection |
Also Published As
| Publication number | Publication date |
|---|---|
| US9232123B2 (en) | 2016-01-05 |
| US20130188068A1 (en) | 2013-07-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9232123B2 (en) | Systems and methods for acquiring and processing image data produced by camera arrays | |
| Tan et al. | Multiview panoramic cameras using mirror pyramids | |
| US10565734B2 (en) | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline | |
| US8643701B2 (en) | System for executing 3D propagation for depth image-based rendering | |
| US20160309065A1 (en) | Light guided image plane tiled arrays with dense fiber optic bundles for light-field and high resolution image acquisition | |
| WO2019049421A1 (fr) | Dispositif d'étalonnage, système d'étalonnage et procédé d'étalonnage | |
| EP3291532B1 (fr) | Dispositif et procédé de traitement d'image | |
| WO2016175043A1 (fr) | Dispositif de traitement d'images et procédé de traitement d'images | |
| WO2018079283A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image, et programme | |
| US10306146B2 (en) | Image processing apparatus and image processing method | |
| Findeisen et al. | A fast approach for omnidirectional surveillance with multiple virtual perspective views | |
| US10453183B2 (en) | Image processing apparatus and image processing method | |
| JPH09114979A (ja) | Camera system | |
| KR102019879B1 (ko) | Apparatus and method for acquiring in-game 360 VR images using a virtual camera | |
| Lin et al. | Single-view-point omnidirectional catadioptric cone mirror imager | |
| Hua et al. | Design analysis of a high-resolution panoramic camera using conventional imagers and a mirror pyramid | |
| De Villiers | Real-time photogrammetric stitching of high resolution video on COTS hardware | |
| JP4523538B2 (ja) | Stereoscopic video capturing and display device | |
| CN115514877A (zh) | Apparatus and method for denoising from multi-view images | |
| CN114157853A (zh) | Apparatus and method for generating a data representation of a pixel beam | |
| CN113132715B (zh) | Image processing method and apparatus, electronic device, and storage medium | |
| Padjla et al. | Panoramic imaging with SVAVISCA camera-simulations and reality | |
| Lin et al. | Generalized stereo for hybrid omnidirectional and perspective imaging | |
| CN120219148A (zh) | Image conversion method, apparatus, and device | |
| Ergünay et al. | A novel hybrid architecture for real-time omnidirectional image reconstruction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10858237 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13878053 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 10858237 Country of ref document: EP Kind code of ref document: A1 |