WO2018118751A1 - Systems and methods for stereoscopic imaging - Google Patents
Systems and methods for stereoscopic imaging
- Publication number
- WO2018118751A1 (application PCT/US2017/066954, US2017066954W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- signal
- gps
- video
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B7/00—Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/25—Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/75—Circuitry for compensating brightness variation in the scene by influencing optical camera components
Definitions
- the present invention relates, generally, to a camera system for use in three-dimensional stereographic photography and, more particularly, to the use of low resolution imagery for one half of a stereo pair, and high resolution imagery for the other half.
- Presently known stereographic camera systems typically employ a first camera for recording left channel image data and a second camera for recording right channel image data, and a processor for combining the images into a composite three-dimensional image.
- Such systems typically employ cameras having the same resolution.
- Higher quality 3D images require higher resolution cameras, thereby increasing the cost of the overall camera system.
- Systems and methods are thus needed which provide high quality 3D images at low cost.
- the present invention relates to a stereoscopic camera system which includes a low cost, low resolution camera for recording a first channel, and a higher resolution camera for recording a second channel.
- the first channel image data may be used to construct a depth map, whereupon the high resolution image data may be mapped onto the depth map.
- FIG. 1 is a schematic perspective view of an exemplary prior art shutter synchronizing technique using a tether
- FIG. 2 is an exemplary display graphically depicting the timing relationship between a pulse-per-second (PPS) component and the payload data component of a global positioning (GPS) device output signal in accordance with various embodiments;
- FIG. 3 is a schematic block diagram of an exemplary system for synchronizing multiple camera shutters using a PPS signal in accordance with various embodiments
- FIG. 4 is a schematic block diagram of an exemplary system for embedding/threading AHRS, GPS, and/or PPS metadata into image data in accordance with various embodiments;
- FIG.5 is a flow diagram illustrating an exemplary process for synchronizing multiple camera shutters using a PPS signal in accordance with various embodiments
- FIG.6 is a schematic diagram of an exemplary depth map useful in constructing stereoscopic images in accordance with various embodiments
- FIG. 7 is a schematic diagram of an exemplary camera system for mapping high resolution image data to a depth map created using a low resolution camera in accordance with various embodiments;
- FIG. 8 is a schematic diagram of an exemplary stereoscopic image constructed using the system of FIG. 7 in accordance with various embodiments;
- FIG. 9 is a schematic flow diagram illustrating an exemplary method of mapping high resolution image data to a depth map in accordance with various embodiments
- FIG. 10 is a schematic top view of an exemplary camera pivotably mounted about an arm for recording cylindrical panoramic stereo images in accordance with various embodiments
- FIG. 11 is a schematic top view of the camera system of FIG. 10, depicting an object within respective overlapping fields of view of the camera in successive angular positions, in accordance with various embodiments;
- FIG. 12 is a flow diagram of an exemplary process for the geo-spatial mapping of objects using metadata embedded in stereoscopic images taken from a single pivoting camera in accordance with various embodiments;
- FIG. 13 is a schematic diagram of an exemplary system for recording stereo pairs recorded using independently controlled camera platforms in accordance with various embodiments
- FIG.14 is an alternate view of the system of FIG.13, with the lenses tilted downwardly in accordance with various embodiments;
- FIG. 15 is a flow diagram of an exemplary process for maintaining a substantially constant ratio between an object distance and a stereo base in accordance with various embodiments
- FIGS. 16A-D are schematic diagrams of an exemplary camera platform making multiple passes by and recording a scene changing over time in accordance with various embodiments
- FIG. 17 is a graphical depiction of an exemplary scheme for converting a series of videos into a plurality of time lapse movies in accordance with various embodiments; and
- FIG. 18 is a flow diagram of an exemplary process for assembling image frames together from successive videos to create a time lapse movie in accordance with various embodiments.
DETAILED DESCRIPTION OF PREFERRED EXEMPLARY EMBODIMENTS
- The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
TETHERLESS SHUTTER SYNCHRONIZATION
- a global positioning system (GPS) receiver device reports position data as longitude and latitude, measured along radial rays extending from the mass center of the Earth (an oblate spheroid), together with altitude above or below sea level.
- Each of the various GPS satellites knows its own longitude, latitude, and altitude above sea level, and pings the GPS device.
- the device measures the message transmission time to determine its position based on the coordinates received from multiple satellites, coupled with the respective transmission times.
- GPS time is now the world standard time clock, using cesium clocks which express time down to the picosecond.
- An inertial measurement unit provides relative measurements of attitude (yaw, pitch, and roll); that is, differences in these parameters from a previous measurement.
- AHRS systems provide absolute attitude measurements, typically using a three- axis magnetometer, a three-axis accelerometer, and a three-axis gyroscope.
- the output of the gyroscope corresponds to the derivative of the output of the accelerometer; the integral of the gyroscope output yields the output of the accelerometer.
- AHRS systems have recently undergone dramatic cost reductions due to advances in micromachining of piezoelectric and other materials in silicon, enabling applications of micromachined accelerometers and gyroscopes which were heretofore cost prohibitive.
- a GPS receiver to report the geo-spatial coordinates of the camera
- an AHRS to report the orientation of the lens axis within the context of the AHRS reference coordinates, namely: facing North (zero yaw), and parallel to the surface of the Earth (zero pitch and roll).
- the foregoing devices allow image data to be augmented with metadata including pulse-per-second (PPS), GPS coordinates, and AHRS yaw, pitch, and roll information relative to the lens axis for every data frame, as desired.
- the Bosch™ company produces a single chip which outputs AHRS metadata, in combination with a GPS chip which outputs GPS metadata on a first output pin and a PPS "time hack" signal on a second output pin.
- PPS time is an independent metric extracted from the GPS chip and derived from the GPS satellites.
- the 65 nm GPS chip available from Texas Instruments™ supports one pulse-per-second (1PPS) timing, and provides a high precision 1 ms wide pulse whose rising edge is aligned to the GPS time (or UTC time) second boundary.
- the pulse is present on the PPS_OUT pin of TI GPS chips.
- the 1PPS pulse is 100 ms wide and the leading edge is the on-time mark.
- the payload data (e.g., National Marine Electronics Association (NMEA) data) is output separately from the PPS signal.
- multiple camera shutters may be synchronized without the need for a physical tether, by using GPS Pulse Per Second (PPS) signaling.
- two or more camera platforms may be closely synchronized with no need for real time communication channels between them, regardless of the distance between them. This also informs the system exactly when (and where) each frame is taken, both in an absolute sense and relative to the other frames from the same and other cameras.
- Many image systems already contain GPS receivers; hence, using this technique allows multiple cameras to be reliably synchronized with little or no additional hardware costs.
- two or more camera platforms may be precisely synchronized by using the PPS pulse to cause each shutter in a multiple camera system to simultaneously record an image.
- Various embodiments employ a GPS device of the type which has a pin that separately reports a pulse-per-second (PPS) signal; that is, the PPS signal transitions from low to high at precisely the same instant that the GPS measurements were taken.
- the PPS pin exhibits a very low rise time (high transition rate); it transitions from low to high in less than a microsecond, perhaps on the order of a picosecond.
- To synchronize shutters in presently known cameras we need resolution on the order of ten microseconds. Thus, regardless of the physical separation between two devices, they can be synchronized on the order of picoseconds using the techniques described herein.
- the present invention uses the PPS transition to cause a picture (or multiple pictures) to be taken at the PPS transition.
- the PPS signal may be used to cause a picture (or frame) to be recorded at a predetermined amount of time following the PPS transition.
- two or more cameras are configured to take a picture (or video frame) at the same point in the PPS cycle, they will necessarily be synchronized because every GPS device which outputs PPS is necessarily self-referenced to the same world clock signal.
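- As a rough, hedged illustration of this triggering logic (a sketch only, not the patent's implementation), the following Python fragment assumes a hypothetical `pps_pin` object exposing the GPS chip's PPS output and a hypothetical `shutter` controller; both cameras would be given the same target GPS second in advance and fire on the corresponding rising edge:

```python
import time


class PPSTriggeredCamera:
    """Sketch: fire the shutter on the rising edge of a chosen PPS pulse.

    `pps_pin` (exposes read() -> 0 or 1) and `shutter` (exposes fire()) are
    hypothetical stand-ins for the GPS chip's PPS output and the camera's
    shutter controller; the patent does not specify a hardware API.
    """

    def __init__(self, pps_pin, shutter):
        self.pps_pin = pps_pin
        self.shutter = shutter

    def wait_for_rising_edge(self):
        """Poll the PPS pin until a low-to-high transition is observed."""
        prev = self.pps_pin.read()
        while True:
            cur = self.pps_pin.read()
            if prev == 0 and cur == 1:
                return time.time()  # local timestamp, for diagnostics only
            prev = cur

    def record_at_gps_second(self, target_second, current_second):
        """Arm the shutter for a given GPS second, then fire on its PPS edge.

        `current_second` would normally be parsed from the NMEA payload that
        follows each pulse; every participating camera receives the same
        `target_second` in advance, so all shutters fire on the same edge.
        """
        while current_second < target_second:
            self.wait_for_rising_edge()
            current_second += 1  # each pulse marks the start of the next GPS second
        self.shutter.fire()
```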
- Fastest shutter transitions are about ten microseconds; that’s how long it takes the chip to collect photons.
- Mechanical shutters transition using a moving slit (rolling shutter), where the top of the frame is taken at a different time than the bottom of the frame. Synchronizing two cameras to get a stereographic image requires only that the two contributing images be taken at the same time, even if the top of each image is recorded before or after the bottom.
- Ten microseconds is a conservative lower limit for a frame exposure time.
- cameras use a tethering cable to synchronize shutters (generally referred to as "genlock"), which limits the separation between the cameras to the length of the tethering cable.
- PPS metadata may be embedded into every frame, effectively locking together every camera having embedded PPS data, even if the multiple cameras were not knowingly coordinated at the time the images were taken.
- the synchronization is particularly important for 3D or stereoscopic photography, where even small synchronization errors can corrupt the resulting stereoscopic image.
- This technique allows the synchronization of any number of cameras and other recording devices/sensors (e.g., earthquake vibration sensors), for example thousands of synchronized devices, distributed anywhere in the world, provided they are configured to receive a PPS timing signal from GPS satellites.
- the present invention contemplates sending an instruction which effectively says "take a picture on January 3, 2016 at precisely 11:25.47," or "begin recording video precisely upon the occurrence of a PPS rising edge," whereupon all cameras will initiate recording at the rising edge of a specified PPS pulse.
- the images can be retroactively integrated using the embedded time hack metadata.
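- A minimal sketch of such retroactive integration, assuming each recorded frame carries a `pps_time` field in its embedded metadata (the field name and the pairing tolerance are illustrative assumptions, not the patent's format):

```python
def pair_frames_by_pps(frames_a, frames_b, tolerance_s=0.001):
    """Match frames from two independently recorded cameras by embedded PPS time.

    frames_a, frames_b: lists of dicts, each carrying a 'pps_time' key
    (GPS-referenced seconds). Returns (frame_a, frame_b) stereo pairs whose
    embedded time hacks agree to within `tolerance_s`.
    """
    if not frames_b:
        return []
    pairs = []
    for fa in frames_a:
        # nearest neighbour in time from the other camera
        fb = min(frames_b, key=lambda f: abs(f["pps_time"] - fa["pps_time"]))
        if abs(fb["pps_time"] - fa["pps_time"]) <= tolerance_s:
            pairs.append((fa, fb))
    return pairs
```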
- an exemplary prior art shutter synchronizing system 100 includes a first camera 102, a second camera 104, and a tether 106 configured to genlock the two cameras together to thereby synchronize their respective shutters.
- FIG. 2 is an exemplary display 200 illustrating a pulse-per-second (PPS) signal component 202 and a payload signal component 204 of a global positioning (GPS) device output signal.
- FIG. 3 is a schematic block diagram of an exemplary system 300 for synchronizing multiple camera shutters using a PPS signal in accordance with various embodiments. More particularly, the system 300 includes a first camera 302 and a second camera 304, with the shutter execution of both cameras controlled by the same externally received timing signal.
- the first camera 302 includes a first GPS receiver 306 having a PPS output pin 308, a microprocessor 310, and a shutter controller 312.
- the second camera 304 includes a second GPS receiver 320 having a PPS output pin 324, a microprocessor 326, and a shutter controller 328. By configuring the respective processors 310, 326 to execute a "record" instruction based on a particular PPS pulse, the shutters may be precisely controlled without the need for a physical tether extending between the two (or more) cameras.
- FIG. 4 is a schematic block diagram of an exemplary system 400 for embedding/threading AHRS, GPS, PPS, and/or other metadata into image data in accordance with various embodiments.
- the system 400 includes a processor (for example a camera microprocessor) 402 configured to receive multiple inputs, and to output a resulting signal 412.
- a first input 404 comprises image data (e.g., a recorded image or data frame)
- a second input 406 comprises AHRS information
- a third input 408 comprises GPS coordinate information
- a fourth input 410 comprises timing information (e.g., a PPS signal component).
- the resulting output signal 412 may comprise a composite data frame including an image data component 414 and a metadata component 416.
- FIG.5 is a flow diagram illustrating an exemplary process 500 for synchronizing multiple camera shutters using a common timing signal.
- the process 500 includes receiving a timing signal at a first camera (Task 502), receiving the same timing signal at a second camera (Task 504), and simultaneously recording first and second data frames by the first and second cameras, respectively (Task 506).
- Metadata including information relating to the timing signal, position information, and/or attitude (e.g., AHRS) information may then be embedded into the first and/or second data frames (Task 508).
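- One plausible way to assemble the composite data frame of FIG. 4 (image component 414 plus metadata component 416) in software is sketched below; the length-prefixed layout and the field names are illustrative assumptions rather than a format defined by the patent:

```python
import json


def build_composite_frame(image_bytes, gps_coords, ahrs_attitude, pps_time):
    """Thread GPS, AHRS, and PPS metadata into a single data frame.

    image_bytes:   the encoded image or video frame
    gps_coords:    (latitude, longitude, altitude)
    ahrs_attitude: (yaw, pitch, roll) of the lens axis, in degrees
    pps_time:      GPS-referenced time of the shutter actuation, in seconds
    """
    metadata = {
        "gps": {"lat": gps_coords[0], "lon": gps_coords[1], "alt": gps_coords[2]},
        "ahrs": {"yaw": ahrs_attitude[0], "pitch": ahrs_attitude[1], "roll": ahrs_attitude[2]},
        "pps_time": pps_time,
    }
    header = json.dumps(metadata).encode("utf-8")
    # length-prefixed metadata header followed by the image payload
    return len(header).to_bytes(4, "big") + header + image_bytes
```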
- a camera which includes: a lens; a recording plane; a shutter configured to selectively pass photons from the lens to the recording plane; a timing module configured to receive a periodic timing pulse from an external source; and a processor configured to actuate the shutter in response to the timing pulse.
- the timing module comprises a global positioning system (GPS) chip including a pulse per second (PPS) pin at which the periodic timing pulse appears.
- the external source comprises a plurality of GPS satellites.
- the processor is configured to actuate the shutter responsive to a rising edge of the timing pulse.
- the recording plane comprises a photosensitive medium which may include film and/or an array of digital pixels.
- the processor is configured to: execute a sequence of instructions including a shutter actuation instruction; and execute the shutter actuation instruction immediately upon detecting the timing pulse.
- the recording plane is configured to capture a still photographic image and/or a series of video frames.
- the periodic timing pulse comprises a regular repeating series of timing signals each having a duration in the range of 100 milliseconds.
- each timing signal comprises a rising edge having a duration in the range of one nanosecond to one picosecond.
- the GPS chip further comprises a data pin configured to present GPS coordinate data in the range of 100 to 500 milliseconds following each periodic timing pulse.
- a method for controlling the actuation of a camera shutter includes: equipping the camera with a timing module configured to receive a periodic timing signal from a source external to the camera; detecting a leading edge of a unique pulse of the timing signal; and in response to detecting the leading edge, actuating the shutter.
- the timing module comprises a global positioning signal (GPS) receiver including a timing output pin; and the periodic timing signal comprises a pulse-per-second (PPS) signal presented at the timing output pin.
- actuating the shutter comprises exposing a photosensitive medium.
- the duration of each timing pulse is in the range of 100 milliseconds; and the duration of the leading edge is in the range of one nanosecond to one picosecond.
- a method for synchronizing the operation of a first shutter of a first camera with the operation of a second shutter of a second camera without a physical tether between the first and second cameras.
- the method includes: receiving a global positioning system (GPS) pulse-per-second (PPS) signal at the first and second cameras; and in response to a unique timing pulse in the PPS signal, simultaneously actuating the first and second shutters.
- the method further includes: prior to the receipt of the unique timing pulse, receiving, at the first and second cameras, an instruction to actuate a respective shutter when the unique timing pulse is subsequently received.
- simultaneously actuating comprises executing respective actuation instructions at both cameras in response to detecting the rising edge of the unique timing pulse.
- Binocular vision, namely two eyes with overlapping fields of view, facilitates stereoscopic vision and the ability to perceive and measure depth and distance. Eyes located at different lateral positions on the head produce two slightly different images projected onto the retinas. These positional differences produce horizontal disparities which are processed in the visual cortex of the brain to yield depth perception and the mental rendering of three-dimensional structures within a three-dimensional spatial experience. Human stereo vision fuses the left and right views (channels) of a scene into a single "cyclopean" view in the brain; that is, the world appears to be seen from a virtual eye midway between the left and right eye positions.
- stereoscopic photography employs two cameras with their respective axes separated by a distance referred to as the stereo base or inter-axial separation.
- Stereoscopy manifests the illusion of depth in a still image, video, or other two-dimensional display by the presentation of a slightly different image to each eye, whereupon the two images are combined in the brain to yield the perception of depth.
- A“stereo pair” refers to right and left images used to construct a resulting 3D image.
- the respective axes of a left and a right camera lens are offset by a predetermined distance (the stereo base), which may be static or variable.
- a depth map (also referred to as a disparity map) may be constructed from the image data.
- the present inventor proposes using low resolution imagery for one half of a stereo pair (e.g., the left channel), and high resolution imagery for the other half (e.g., the right channel).
- the wide field coverage of a scene in low resolution provides the depth, size, and/or positioning information (3D) for objects to be resolved using high resolution images captured with a narrow field camera.
- pixel data from the high resolution channel may be mapped onto the low resolution channel data using the 3D model, resulting in a high resolution stereo pair of the object imaged for visualization. It will be appreciated, however, that even without this mapping the human brain may actually "see" the cyclopean image in high resolution when the mixed resolution channels are presented visually.
- FIG.6 is a schematic diagram of an exemplary scene 600 useful in constructing stereoscopic images.
- the scene 600 was recorded using a low resolution lens alone or in combination with a high resolution lens.
- the high resolution image data may thereafter be specifically mapped to particular depth zones to yield a resulting 3D image.
- the scene includes a first object 602 (a tree), a second object 604 (a mountain), and a third object 606 (a jet airplane).
- a depth map may be created by subjectively (e.g., manually), algorithmically, or otherwise assigning the objects within the scene to two or more depth zones.
- the first object 602 is assigned to zone 1 closest to the viewer
- the second object 604 is assigned to an intermediate depth zone 2
- the third object is assigned to a far distant zone 3.
- the resulting depth map and corresponding zones may then be used to map the high resolution pixel data associated with the various objects into their corresponding zones.
- a camera assembly 700 includes a first camera 702 having a lens axis 704 and a wide field of view 706 (e.g., 30 to 90 degrees), a second camera 712 having a lens axis 714 and a narrow field of view 716 (e.g., 5 to 25 degrees), and a processing module 750 for combining the two image channels into a composite 3D image.
- the camera axes are separated by a stereo base 720.
- An exemplary fixed stereo base may be in the range of 65 to 3000 mm; a variable stereo base may range from 0.2 to 3 meters.
- the first camera 702 has a target resolution in the range of 1 to 1000 pixels/m, and preferably about 10 to 100 pixels per meter (pixels/m); the second camera has a resolution in the range of 100 to 10,000 pixels/m, and preferably about 100 to 1000 pixels/m.
- the image data captured by the first camera 702 may be referred to as the left side data or first channel data
- the image data captured by the second camera 712 may be referred to as the right side data or second channel data.
- the overlap between the wide field of view 706 and the narrow field of view 716 may be divided into a plurality of regions corresponding to successive distances from the camera assembly, including a first (near field) region 732, a second (intermediate field) region 734, and a third (far field) region 736.
- Those skilled in the art will appreciate that any number of regions corresponding to any number of depth regions may be contemplated.
- The scene may include a first object 722 (e.g., a tree), a second object (e.g., a person), and a third object (e.g., a building).
- Mature and robust techniques have been developed for mapping various elements in a scene to appropriate perceived depth ranges or regions for 3D viewing. (See, for example, http://3dstereophoto.blogspot.com/p/software.html.)
- depth mapping techniques may be employed to create a 3D image 800 in which a first element 822 appears within a first (near field) region, a second element 824 appears within a second (intermediate field) region, and a third element 826 appears within a third (far field) region in the context of a 3D display which integrates the first channel data from the first camera 702 and the second channel data from the second camera 712.
- the high resolution image data for these objects captured with the narrow field camera 712 may be mapped onto these positions. Specifically, pixel data from the high resolution channel may be overlaid onto the low resolution side using the 3D model, resulting in a high resolution stereo pair of the object imaged for visualization.
- FIG. 9 is a schematic flow diagram illustrating an exemplary method 900 of mapping high resolution image data to a depth map.
- the method 900 includes recording a stereoscopic image using a high resolution lens and a low resolution lens (Task 902), and assigning objects in the scene to distance levels (depths) (Task 904).
- the high resolution data associated with each object may then be mapped to the corresponding depth zones identified in TASK 904 (Task 906).
- the resulting three-dimensional image may then be displayed (Task 908).
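- A simplified sketch of Tasks 904 and 906, assuming the objects have already been segmented into pixel masks and given distance estimates from the low resolution channel; the zone boundaries and the resampling of the high resolution image onto a common grid are illustrative assumptions:

```python
import numpy as np


def assign_depth_zone(distance_m, zone_edges=(10.0, 100.0)):
    """Assign an object distance to one of three depth zones (near / intermediate / far)."""
    if distance_m < zone_edges[0]:
        return 1
    if distance_m < zone_edges[1]:
        return 2
    return 3


def map_high_res_onto_depth_map(low_res_zones, high_res, object_masks, object_distances):
    """Overlay high-resolution pixel data onto zones derived from the low-res channel.

    low_res_zones:    HxW integer array of zone labels built from the low resolution channel
    high_res:         HxWx3 high resolution image resampled to the same grid
    object_masks:     {object_id: HxW boolean mask}
    object_distances: {object_id: distance in metres estimated from the low-res channel}
    Returns (composite image, per-pixel zone map) for 3D rendering.
    """
    composite = np.zeros_like(high_res)
    zones = np.array(low_res_zones, copy=True)
    for obj_id, mask in object_masks.items():
        zones[mask] = assign_depth_zone(object_distances[obj_id])
        composite[mask] = high_res[mask]  # carry high-res pixels onto the assigned zone
    return composite, zones
```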
- a low cost stereoscopic camera system can be constructed using an inexpensive low resolution, small lens, wide-field camera for capturing depth and/or positioning information (3D), combined with a comparatively more expensive large lens camera, such as a digital single lens reflex (DSLR) or full cinemagraphic camera, making the resulting stereo camera much less expensive than one constructed of two high end cameras.
- the present invention contemplates using one DSLR and one low cost (e.g., mobile phone type quality) camera to record the stereo pair.
- the high cost DSLR camera is used to record the high resolution image
- the low cost camera is used to obtain the depth information. That is, the high resolution camera determines the resulting image quality, whereas the low resolution camera determines the depth map (because determining depth does not require high resolution).
- a three-dimensional (3-D) camera system which includes: a first camera having a first lens axis, a first field of view, and a first resolution; a second camera having a second lens axis substantially parallel to the first lens axis, a second field of view, and a second resolution; and a stereo base separating the first and second lens axes; wherein the second resolution is substantially higher than the first resolution.
- the first and second cameras are configured to record still images and/or video frames.
- the stereo base comprises a fixed length in the range of 65 to 3000 mm.
- the stereo base is configured to vary in the range of 0.2 to 3 meters.
- the first field of view is in the range of 30 to 90 degrees
- second field of view is in the range of 5 to 25 degrees.
- the first resolution is in the range of 10 to 100 pixels/m
- second resolution is in the range of 100 to 1000 pixels/m.
- the camera system further includes a processor configured to receive first channel image data from the first camera and second channel image data from the second camera, and to combine the first and second channel data into a composite 3D image.
- the processor is configured to construct a depth map using the first channel data, and to map the second channel data onto the depth map.
- the processor is configured to arrange objects for three dimensional viewing based on the first channel data, and to overlay pixel information based on the second channel data onto the arranged objects.
- the processor is configured to overlay high resolution pixel information from the second camera onto objects arranged for viewing based on low resolution information from the first camera.
- the first and second cameras are each configured to receive a pulse-per-second (PPS) signal from an external source; and the processor is configured to synchronize the acquisition of the first and second channel image data based on the PPS signal.
- the first and second cameras are each configured to receive global positioning system (GPS) data from an external source; and the processor is configured to embed the GPS data into the composite 3D image.
- a method of constructing a three-dimensional image comprising: receiving, by a processor, a first signal from a first camera having a first field of view, the first signal characterized by a first resolution; receiving, by the processor, a second signal from a second camera having a second field of view substantially narrower than the first field of view, the second signal characterized by a second resolution substantially greater than the first resolution; and combining the first and second signals into a three-dimensional image.
- the method further includes: constructing a depth map using the first signal; and mapping pixels derived from the second signal onto the depth map.
- In an embodiment, the method further includes: identifying objects from the first signal; arranging the objects for three-dimensional viewing; and overlaying high resolution data from the second signal onto the arranged objects.
- arranging the objects comprises mapping a scene depth range onto a display depth range.
- the method further includes at least one of: maintaining a fixed distance between a first lens axis associated with the first camera and a second lens axis associated with the second camera; and controllably varying the distance between the first and second axes.
- the first field of view is in the range of 30 to 90 degrees; the second field of view is in the range of 5 to 25 degrees; the first resolution is in the range of 10 to 100 pixels/m; and the second resolution is in the range of 100 to 1000 pixels/m.
- a stereographic camera system comprising: a first camera characterized by a first resolution and configured to output a first signal; a second camera characterized by a second resolution substantially higher than the first resolution and configured to output a second signal; and a processor configured to: construct a depth map of objects using the first signal; and map pixel data derived from the second signal onto the objects.
- GPS coordinates and other system parameters are used to derive size and position information for objects in the 3D image.
- various size, distance, and other information may be extracted from the images. This works particularly well for stationary objects, and may also be used for moving objects within the image.
- the stereoscopic images provide two benefits: i) the subjective effects of 3D vision; and ii) the objective measurements useful for object mapping.
- Various embodiments simplify the data capture phase associated with measuring the size and geo-position of objects in the field by relaxing the need for two cameras: stereo pairs are instead recorded by incrementally advancing a single camera about an arc. This allows the location, position, and size of all stationary objects within the entire 360° rotational field of view to be accurately mapped. Starting with the known GPS coordinates of the camera, data from the stereo analysis yields the position and size information for objects in the scene.
- the present invention combines the known GPS coordinates for and angular position of the camera with metadata for the objects being mapped to determine their size and location.
- a stereo base may be derived and software used to reconcile the difference between the actual positions of the camera (which are not parallel) and the traditional horizontally shifted positions typically used in stereo photography.
- the stereo image is used to determine the distance at which the object is located from the camera, and the camera GPS coordinates are projected out to the object to determine the object GPS coordinates.
- the object(s) may then be placed on a geo-spatial map.
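- A flat-earth sketch of that projection step (adequate only for short ranges; the function and parameter names are assumptions, and a geodesic library would be preferred for long distances):

```python
import math

EARTH_RADIUS_M = 6_371_000.0


def project_object_coordinates(cam_lat, cam_lon, bearing_deg, distance_m):
    """Project the camera's GPS position out along a bearing to the object.

    cam_lat, cam_lon: camera coordinates in decimal degrees
    bearing_deg:      direction from the camera to the object, degrees clockwise from north
    distance_m:       object distance obtained from the stereo analysis
    Returns (object_lat, object_lon) under a small-distance flat-earth approximation.
    """
    bearing = math.radians(bearing_deg)
    d_north = distance_m * math.cos(bearing)
    d_east = distance_m * math.sin(bearing)
    obj_lat = cam_lat + math.degrees(d_north / EARTH_RADIUS_M)
    obj_lon = cam_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat))))
    return obj_lat, obj_lon
```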
- the focal length of a lens is classically defined as the distance from the optical center of the lens to the camera sensor (or film plane) when the lens is focused on an object at infinity.
- the angle of view is the angle of subject area that is projected onto the camera's sensor by the lens.
- the field of view is another way of representing the angle of view, but expressed as a measurement of the subject area, rather than an angle.
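- For reference, these standard relationships can be sketched as follows (the 36 mm sensor width used in the example is an assumption, not a parameter from the patent):

```python
import math


def angle_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens focused near infinity."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))


def field_of_view_m(angle_deg, subject_distance_m):
    """Width of the subject area covered at a given distance (the field of view)."""
    return 2 * subject_distance_m * math.tan(math.radians(angle_deg) / 2)

# e.g. a 50 mm lens on a 36 mm wide sensor covers about 39.6 degrees,
# or roughly 72 m of subject width at a distance of 100 m.
```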
- a GPS enabled camera extended from an arm may be pivoted about a gimbal or spindle, and the image data used to map the location and size of all the objects in the cylindrical image.
- Presently known software techniques may be used to evaluate the image data and determine the size of the objects and their distance from the camera. Then, using the GPS coordinates of the camera and its angular position, the objects may be placed in their correct positions on a geospatial map.
- an angular stereoscopic camera system 1000 includes a camera 1002 configured to pivot 1012 about a spindle 1004, with a connecting arm 1003 defining a stereo base distance 1006 between the camera 1002 and the spindle 1004. An object 1008 within the scene to be recorded is disposed at a distance 1010 from the camera 1002.
- FIG. 11 depicts a camera in a first angular position 1106 at a first angle θ1 with respect to magnetic north (or other reference) 1104, and in a second angular position 1108 at a second angle θ2 with respect to magnetic north 1104.
- the arm length (the distance between the camera and the spindle) and the delta angle can be resolved into an effective stereo base.
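- One simple way to resolve the arm length and delta angle into an effective stereo base is to take the chord between the two camera positions; this geometric simplification is an assumption rather than the patent's stated method:

```python
import math


def effective_stereo_base(arm_length_m, angle1_deg, angle2_deg):
    """Chord between two camera positions on a circle of radius `arm_length_m`.

    The camera pivots about the spindle at that radius; the two exposures are
    taken at angles angle1 and angle2 from the reference heading.
    """
    delta = math.radians(abs(angle2_deg - angle1_deg))
    return 2.0 * arm_length_m * math.sin(delta / 2.0)

# e.g. a 1 m arm rotated through 6 degrees yields a base of about 0.105 m,
# suitable (by the ~1/30 rule discussed later) for objects roughly 3 m away.
```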
- the GPS coordinates of the spindle 1102 and/or the camera are embedded into the data frames of the images recorded by the camera.
- the camera records a first image corresponding to a first field of view 1120 in the first position 1106, and records a second image corresponding to a second field of view 1122 in the second position 1108.
- a 3D image of an object 1126 may be constructed from the first and second images, and the size and position may be derived for the object from the foregoing information.
- the object may be placed onto a spatial map.
- a single video camera may thus be mounted on an arm and made to pivot around a center point, with the camera pointing away from the center of rotation.
- the resulting stereo pairs from adjacent images may be used to map objects visible in a stereoscopic cylindrical panorama created using the video recording captured with this system. Not only can all the objects in the scene be positioned geo-spatially on a map using this data, but the size of any of the objects may also be measured from the imagery.
- recorded metadata includes the geographic location (GPS coordinates) of the center of rotation (or the camera), the distance the camera is from the center of rotation (the arm length), the angle of the rotation from true North as a function of time (or as a function of a video frame sequence), and the field of view of the images recorded.
- the rotation may be driven manually or automatically, and with a constant or variable rotational speed as long as the angle is known as a function of time or other reference.
- a synchronous motor may drive the rotation, simplifying the metadata collection.
- the accuracy to which the location of objects in the scene may be determined depends on how far they are from the center of rotation, and the radius at which the camera is mounted from the center of rotation. Longer "camera arms" are required to accurately position or size objects that are further away.
- the FOV of the camera, which is a function of the lens focal length, also has an effect on dimensional accuracy, with longer lenses providing greater accuracy.
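- That dependence on arm length and focal length can be made concrete with the usual stereo range-error approximation (a standard relation, not taken from the patent; the half-pixel disparity error is an assumed example value):

```python
def range_error_m(distance_m, baseline_m, focal_length_px, disparity_error_px=0.5):
    """Approximate depth uncertainty of a stereo measurement.

    Z = f * B / d implies dZ ~= Z**2 * dd / (f * B): the error grows with the
    square of the object distance and shrinks with a longer baseline (camera
    arm) or a longer lens (larger focal length expressed in pixels).
    """
    return (distance_m ** 2) * disparity_error_px / (focal_length_px * baseline_m)

# e.g. at 100 m with a 0.5 m arm-derived baseline and a 2000-pixel focal length,
# a half-pixel disparity error corresponds to roughly 5 m of range uncertainty.
```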
- the position and size information for objects in the scene may be determined in real time.
- the data may be analyzed "after the fact" by using image frames that are further apart in rotation (greater angular differentials) to make multiple cylindrical stereoscopic panoramas, the difference among panoramas being the effective inter-axial lens distance of each image pair.
- this distance may be “chosen” in post processing, so one pan capture can be used for both visualization and for making accurate positional measurements over a wide range of distances.
- real time software can be used interactively to simultaneously visualize, locate, and measure objects in the recorded scene in the context of a Geographic Information System or "virtual world".
- a drone could be programmed to fly a circular path of a given radius around a center point with its camera pointing out so that the visual data collected could be similarly exploited.
- the spindle may be replaced with a gimbal, allowing the camera to orbit in multiple planes, thereby facilitating the mapping of objects within a spherical or semispherical (as opposed to a cylindrical) panoramic scene.
- FIG. 12 is a flow diagram of an exemplary process 1200 for the geo-spatial mapping of objects using metadata embedded in stereoscopic images taken from a single pivoting camera.
- the method 1200 includes gathering left and right images at incremental angular positions and rendering a composite stereoscopic image (Task 1202).
- the object size(s) may be determined from the stereoscopic image (Task 1204), and the object position(s) may be determined from the camera position and the camera arm length (Task 1206).
- the object position and size information may then be mapped to the panorama (Task 1208).
- a system for determining a spatial attribute and a geographic location of an object visible in a cylindrical panoramic scene, comprising: a spindle having a spindle geographic location; a camera having a field of view (FOV) and configured to rotate at a fixed distance about the spindle; and a processor configured to: receive, from the camera, first image data corresponding to a first angular camera position and second image data corresponding to a second angular camera position; derive stereoscopic image data from the first and second image data; determine, using the stereoscopic image data, a spatial attribute of the object; determine, using the spindle geographic location, the fixed distance, and the FOV, an object geographic location; and map the spatial attribute to the cylindrical panoramic scene at the object geographic location.
- the spindle geographic location comprises first global positioning system (GPS) coordinates
- the object geographic location comprises second GPS coordinates
- the system further includes a camera arm connecting the camera to the spindle and defining the fixed distance.
- the camera includes a lens characterized by a focal length, and further wherein the FOV is a function of the focal length.
- the processor is further configured to: receive, from the camera, a plurality of image data frames corresponding to a plurality of angular camera positions, respectively; derive additional stereoscopic image data from the plurality of image data frames; determine additional spatial attributes for a plurality of additional objects, respectively, using the additional stereoscopic image data; and determine additional object geographic locations for the plurality of additional objects, respectively; and map the additional spatial attributes to the cylindrical panoramic scene at the additional object geographic locations, respectively.
- the objects are stationary when the plurality of image data frames are received.
- the system further includes an encoder configured to sense the angular position of the camera and provide a corresponding angular position signal to the processor.
- At least one of the camera and the spindle comprises a GPS receiver configured to supply a GPS signal to the processor.
- the GPS receiver comprises a pulse-per-second (PPS) receiving pin, and further wherein the GPS signal comprises a PPS component.
- the spatial attribute comprises the height of the object.
- the spatial attribute comprises an object dimension substantially orthogonal to a vector bisecting the first and second angular positions.
- the first image data comprises first metadata including indicia of the first angular camera position and the GPS coordinates; and the second image data comprises second metadata including indicia of the second angular camera position and the GPS coordinates.
- a method for determining a spatial attribute and a geographic location of an object visible in a cylindrical panoramic scene comprising the steps of: mounting a camera at a fixed distance from a spindle having a spindle geographic location; recording first image data at a first angular camera position and recording second image data at a second angular camera position; determining size information for the object from the first and second image data; determining geographic information for the object from the spindle geographic location, the fixed distance, and a camera field of view (FOV); and mapping the object size information and the object geographic information onto the cylindrical panoramic scene.
- the spindle geographic location comprises first global positioning system (GPS) coordinates
- the object geographic location comprises second GPS coordinates
- the camera includes a lens characterized by a focal length, and further wherein the FOV is a function of the focal length.
- the method further includes recording a plurality of image data frames corresponding to a plurality of angular camera positions, respectively; determining additional size information for a plurality of additional objects, respectively, using the plurality of image data frames; determining additional object geographic locations for the plurality of additional objects, respectively; and mapping the additional size information to the cylindrical panoramic scene at the additional object geographic locations, respectively.
- the method further includes deriving stereoscopic image data from the first and second image data; and determining the object size information using the stereoscopic image data.
- the method further includes sensing the angular position of the camera using an encoder; and using an output signal from the encoder to derive the stereoscopic image.
- the first image data comprises first metadata including indicia of the first angular camera position and the GPS coordinates; and the second image data comprises second metadata including indicia of the second angular camera position and the GPS coordinates.
- Computer code embodied in a non-transient medium is also provided for determining the size and global positioning system (GPS) coordinates of an object, wherein the computer code, when executed by a processor, is configured to execute the steps of: determining the size of the object from first and second image data recorded at first and second angular positions, respectively, by a camera rotatably mounted at a fixed distance from a spindle; and determining the GPS coordinates of the object from the spindle GPS coordinates, the fixed distance, and a field of view (FOV) of the camera.
STEREO PAIRS RECORDED FROM INDEPENDENT CAMERA PLATFORMS
- the foregoing embodiments generally relate to stereoscopic techniques for mapping and measuring.
- the following relates to 3D visualization, particularly for cinemagraphic applications, which require precise control over the stereo base.
- the respective flight paths of two camera-equipped drones are coordinated to produce real time stereoscopic images.
- A typical rule of thumb is for the stereo base to be approximately 1/30 of the distance from the camera to the object being recorded.
- 3D scenes on the order of one to three meters employ a stereo base in the range of three to ten centimeters.
- 3D scenes recorded at distances on the order of one hundred meters require a stereo base in the range of 3 meters
- 3D scenes recorded at distances on the order of one thousand meters require a stereo base in the range of 30 meters.
- the stereo base is equal to the distance between the cameras.
- the effective stereo base (the distance between the lines of sight) is less than the distance between the cameras. Consequently, in order to maintain the 30:1 ratio between the object distance and the stereo base, the following three parameters must be carefully coordinated: i) the distance between the first and the second camera platforms; ii) the respective camera attitudes (which define the effective stereo base); and iii) the distance between the relevant objects in the scene, on the one hand, and the camera pair on the other hand.
- software systems may be developed using: i) a real time GPS signal indicating camera position to control the drone flight paths; and ii) a real time AHRS signal indicating camera attitude to control the camera orientation.
- one of the cameras is directly controlled (e.g., by a director, producer, field officer) and functions as the "master" camera, while the other camera is designated as the slave and is configured to follow the master by adjusting the slave's geo-location and attitude in a manner calculated to maintain the above-mentioned 30:1 ratio.
- Various embodiments effectively coordinate programmed flight paths and camera attitudes of two otherwise independent camera equipped drone platforms (having GPS and AHRS instruments for real time navigation) such that much of the imagery collected simultaneously by both platforms can be used to create stereo pairs or stereoscopic movies of the scene.
- absolute geo-spatial positioning may be obtained from GPS and AHRS units mounted on each drone, but because two like receivers may be utilized the relative (separation) accuracy will have the precision of near proximity differential GPS measurements (e.g., on the order of a few centimeters).
- the "best" inter-axial distance between the lens axes of the two cameras that form a stereo pair depends primarily on the distance from the cameras to the subject; the further the distance, the wider the inter-axial distance must be. This is particularly important for making geo-spatial and size measurements of objects in the scene utilizing the stereoscopic content.
- the camera shutters may be synchronized using the technique described above.
- the aforementioned technique of coupling a low resolution left channel camera with a high resolution right channel camera may also be employed in the context of stereo pairs recorded from independent drone platforms having coordinated flight paths.
- a system 1300 for recording stereo pairs or stereoscopic movies of a scene 1310 includes a first airborne platform (e.g., drone) 1302 having a first camera 1320 mounted thereto, and a second platform 1304 having a second camera 1350 mounted thereto.
- Each camera includes GPS (preferably providing a PPS signal) and AHRS instruments for real time navigation.
- a first field of view 1303 overlaps with a second field of view 1305 to provide stereoscopic images of an object 1312 located a distance 1340 from the cameras, with the cameras separated by a variable stereo base 1330.
- the stereo base distance is equal to the distance between the cameras, inasmuch as their respective lines of sight are orthogonal to a straight line connecting the cameras.
- when the cameras pivot such that their lines of sight are no longer orthogonal to a straight line extending between the cameras, one or both of the drones must compensate by reducing the stereo base distance accordingly in order to maintain an appropriate ratio between the object distance 1340 and the stereo base 1330 (e.g., in the range of 20:1 to 40:1, and preferably about 30:1).
- FIG. 14 depicts a system 1400 including a first camera having a first FOV 1403 and a second camera 1404 having a second FOV 1405, wherein the respective FOVs overlap in a region 1410 for which 3D visualization may be obtained for an object 1412. More particularly, a first line of sight 1420 is orthogonal to the lens plane of camera 1402, and a second line of sight 1422 is orthogonal to the lens plane of camera 1404. As the cameras tilt away from a straight line 1430 connecting them, the effective stereo base 1424 correspondingly decreases.
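- A sketch of the geometry a flight controller might use to hold the ratio near 30:1; modeling the effective base as the perpendicular distance between parallel, equally tilted lines of sight is a simplifying assumption:

```python
import math


def effective_stereo_base(platform_separation_m, tilt_deg):
    """Perpendicular distance between parallel lines of sight tilted by `tilt_deg`
    away from the normal to the straight line joining the two cameras."""
    return platform_separation_m * math.cos(math.radians(tilt_deg))


def required_separation(object_distance_m, tilt_deg, ratio=30.0):
    """Platform separation needed so the effective base stays at object_distance / ratio."""
    target_base = object_distance_m / ratio
    return target_base / math.cos(math.radians(tilt_deg))

# e.g. for an object 100 m away with the lines of sight tilted 20 degrees,
# the drones would hold a separation of about 3.55 m to keep a ~3.33 m effective base.
```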
- flight paths of one or both platforms may be adjusted to reduce either the object distance 1440, the stereo base 1424, or a combination of both.
- flight adjustments and camera attitudes may be implemented in real time under the direction of an administrator, in accordance with predetermined flight paths, or via a hybrid control scheme which permits ad hoc adjustments to the foregoing parameters, preferably facilitated by real time feedback control of position and/or attitude information from GPS and/or AHRS instrumentation.
- FIG. 15 is a flow diagram of an exemplary process 1500 for maintaining a substantially constant ratio between an object distance and a stereo base distance. More particularly, the process 1500 includes pivotably mounting first and second cameras onto first and second airborne platforms, respectively (Task 1502); and configuring the first and second platforms to fly first and second flight paths, respectively, and configuring the first and second cameras to maintain respective attitudes which maintain a substantially constant ratio between the object distance and the stereo base (Task 1504). The method 1500 further involves recording first and second overlapping images from the first and second cameras, respectively (Task 1506); and constructing a stereoscopic image from the first and second overlapping images (Task 1508).
- a method for constructing a stereoscopic image of an object comprising: pivotably mounting first and second cameras onto first and second airborne platforms, respectively; programming the first and second platforms to fly first and second flight paths, respectively; recording first and second overlapping images from said first and second cameras, respectively, of the object at an object distance; and constructing the stereoscopic image from the first and second overlapping images; wherein the first and second flight paths are configured to maintain a substantially constant ratio between: i) the object distance; and ii) a stereo base distance between the first and second cameras.
- the method further includes: providing the first and second platforms with first and second global positioning system (GPS) receivers configured to output first and second GPS signals, respectively; and using the first and second GPS signals as active feedback to control the first and second flight paths, respectively.
- the method further includes: providing the first camera with a first attitude and heading reference system (AHRS) receiver configured to output a first AHRS signal; and using the first AHRS signal to control a first parameter associated with the first platform.
- the first parameter comprises one of: i) the first camera attitude; and ii) the first flight path.
- the method further includes: providing the first and second cameras with a first and second AHRS receivers configured to output first and second AHRS signals, respectively; and using at least one of the first and second AHRS signals to adjust one of: i) the stereo base distance; and ii) the object distance.
- the method further includes: using at least one of the first and second AHRS signals to control one of: i) the second flight path; and ii) the second camera attitude.
- the method further includes: providing the first and second platforms with first and second global positioning system (GPS) receivers configured to output first and second GPS signals including a pulse-per-second (PPS) signal component, respectively; and using the PPS signal component to synchronize the timing of the recording of the first and second overlapping images.
- the substantially constant ratio is in the range of about 30:1.
- the first camera has a first line of sight and the second camera has a second line of sight
- the method further includes: maintaining the first line of sight substantially parallel to the second line of sight while recording the first and second overlapping images.
- the first flight path comprises a dynamically configurable master path
- the second flight path is configured as a slave to follow the first flight path
- a system for constructing a stereoscopic image of an object located at an object distance from first and second cameras, the system comprising: a first drone supporting the first camera and having a first controller configured to execute a first flight path; a second drone supporting the second camera and having a second controller configured to execute a second flight path; and a processor configured to construct the three-dimensional image from a first image received from the first camera and a second image received from the second camera; wherein the first and second controllers are configured to coordinate the first and second flight paths to maintain a substantially constant ratio between: i) the object distance; and ii) a stereo base distance separating the first and second cameras.
- the ratio is in the range of 30:1.
- the first camera is characterized by a first line of sight orthogonal to a first camera lens plane;
- the second camera is characterized by a second line of sight orthogonal to a second camera lens plane; and the stereo base distance comprises the distance between the first and second lines of sight.
- the first camera includes a first GPS receiver configured to output a first GPS signal; the second camera includes a second GPS receiver configured to output a second GPS signal; the first controller employs closed loop feedback using the first GPS signal to execute the first flight path; and the second controller employs closed loop feedback using the second GPS signal to execute the second flight path.
- the first camera includes a first AHRS module configured to output a first AHRS signal; the second camera includes a second AHRS module configured to output a second AHRS signal; the first controller employs closed loop feedback using the first AHRS signal to control the attitude of the first camera; and the second controller employs closed loop feedback using the second AHRS signal to control the attitude of the second camera.
- the first and second GPS signals include a PPS component, and the PPS component is used to synchronize the recording of the first and second images.
- the first image comprises a frame in a first video sequence
- the second image comprises a frame in a second video sequence
- the stereoscopic image comprises a composite frame in a stereoscopic video sequence
- the first and second controllers are configured to coordinate the respective attitudes of the first and second cameras to maintain a substantially constant ratio between: i) the object distance; and ii) a stereo base distance separating the first and second cameras.
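- The pulse-per-second (PPS) synchronization recited above can be illustrated with a minimal sketch. The `read_pps_pin` and `trigger_shutter` callables are hypothetical stand-ins for whatever GPS-receiver pin and camera interface an actual implementation exposes; the point is only that both cameras key their exposures to the same GPS-disciplined edge.

```python
import time

def wait_for_pps_edge(read_pps_pin) -> float:
    """Block until the PPS line goes high, then return the local timestamp.

    read_pps_pin is a hypothetical callable returning the current pin level.
    """
    while read_pps_pin():       # wait for any ongoing pulse to end
        time.sleep(0.0001)
    while not read_pps_pin():   # wait for the next rising edge
        time.sleep(0.0001)
    return time.monotonic()

def capture_on_next_pps(read_pps_pin, trigger_shutter):
    """Fire the shutter on the next PPS rising edge.

    Because both cameras see the same GPS-disciplined pulse, frames recorded
    this way on two platforms are aligned to well under one frame period.
    """
    edge_time = wait_for_pps_edge(read_pps_pin)
    trigger_shutter()
    return edge_time
```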
- a method is also provided for using the geospatial position and attitude of a master camera mounted on a master drone to control the geospatial position and attitude of a slave camera mounted on a slave drone, the method comprising the steps of: receiving, at a processor, first GPS coordinates from the master camera; determining, based on the first GPS coordinates, second GPS coordinates to maintain a predetermined ratio between an object distance and a stereo base associated with the master and slave cameras; and adjusting a flight path of the slave drone based on the second GPS coordinates.
- the method further includes: receiving, at a processor, first AHRS values associated with the master camera; determining, based on the first AHRS values, second AHRS values to maintain the predetermined ratio; and adjusting the attitude of the slave camera based on the second AHRS values.
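- Taken together, the two steps above amount to computing a slave setpoint from the master camera's reported state. A minimal sketch follows, assuming a flat-earth offset around the master's position and hypothetical field names; a production controller would work in a proper geodetic frame and hand the result to the slave autopilot.

```python
from dataclasses import dataclass
import math

@dataclass
class CameraState:
    lat: float      # degrees
    lon: float      # degrees
    heading: float  # degrees, direction of the line of sight

EARTH_R = 6_371_000.0  # mean Earth radius, metres

def slave_setpoint(master: CameraState, object_distance_m: float,
                   ratio: float = 30.0) -> CameraState:
    """Place the slave one stereo base to the right of the master's line of
    sight, with the same heading, so the two lines of sight stay parallel."""
    base = object_distance_m / ratio
    bearing = math.radians(master.heading + 90.0)  # perpendicular offset
    d_lat = (base * math.cos(bearing)) / EARTH_R
    d_lon = (base * math.sin(bearing)) / (
        EARTH_R * math.cos(math.radians(master.lat)))
    return CameraState(lat=master.lat + math.degrees(d_lat),
                       lon=master.lon + math.degrees(d_lon),
                       heading=master.heading)

# Example: master looking north at an object 300 m away
print(slave_setpoint(CameraState(45.0, -122.0, 0.0), 300.0))
```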
- a single drone may be flown along a consistent path, with a consistently varied camera attitude along that path, periodically in time. For instance, daily flights along the same path at the same solar time each day would produce essentially the same video or photographic result each day if nothing in the scene changes. However, if the scene changes over time, such as during the construction of a bridge or building, then a time-lapse movie, or many such movies, can be assembled from frames taken from each individual video at the same location along the path, yielding time-lapse videos from any, or every, position along the consistent flight path. Frames from various positions may be assembled in such a way that, as the camera's perspective changes along the path, the bridge or building can be seen “growing” into existence. Applications range from entertainment to advertising to “as built” documentation of complex constructions.
- time-lapse stereoscopic movies for visualization and measurement can be constructed using stereo pairs extracted from the motion of the single drone camera in regions where the motion along the flight path is “designed” to optimize the effect.
- some temporal rivalry may be expected due to motion in the scene, but in a great many situations this will not be a significant limitation, particularly if high-frame-rate video is recorded and flight speeds are chosen to reduce the anticipated rivalry.
- by surfing within the transverse time domain, one may view a scene as it changes over time from various perspectives, without compromising the continuity of the original scene as it was recorded over time.
- an exemplary scene 1600 is depicted as it changes over time, such as when a structure (e.g., a bridge) is built.
- the bridge is built over a period of four regular time units, such as solar days.
- a drone flies a first flight path 1604 with an on-board camera exhibiting a predetermined or otherwise known camera attitude at each position over the course of the flight.
- a first video (V1) 1604 is recorded of a road 1602.
- V2 second video
- the drone traverses the same flight path exhibiting the same camera attitude and records a second video (V2) 1606, capturing a first embankment 1612 which has been constructed adjacent the road 1602.
- during a third drone pass, the drone traverses the same flight path exhibiting the same camera attitude and records a third video (V3) 1608, capturing a second embankment 1614 constructed on the other side of the road 1602.
- V3 a third video
- V4 a fourth video
- a first frame may be extracted from each video V1– V4 at position P1 and stitched together to construct a first time lapse transverse movie M1 of the scene as viewed from position P1.
- a second frame may be extracted from each video V1–V4 at position P2 and stitched together to construct a second time lapse transverse movie M2 of the scene as viewed from position P2, and so on. Indeed, any number of time lapse transverse movies may be constructed, up to and including the total number of frames comprising each original video V1–V4.
- the viewer may progress through geo-space from positions P1 through P4 (and all positions in between positions P1– P4), switching between the various videos V1– V4 without loss of continuity.
- the viewer may change perspectives between positions P1–P4 by switching back and forth between movies M1–M4, effectively “freezing” the geo-spatial position from which the scene is viewed, without loss of continuity. That is, by stitching together similarly positioned frames from each of the various original videos, the scene may be virtually recorded from any number of “static” positions, and subsequently viewed from those “static” positions.
- FIG. 17 graphically depicts a plurality of original videos V1– Vj, each comprising a plurality of frames F1– Fn, with each video corresponding to a discrete drone pass over a scene. That is, a first video V1 comprises frames F1– Fn recorded within a first time window T1; a second video V2 comprises frames F1– Fn recorded within a second time window T2, and so on. Videos V1 - Vj may be simultaneously replayed, allowing the viewer to switch back and forth among the videos, much like viewing an instant replay of a sporting event from different cameras without compromising the continuity of the recorded scene.
- each first frame F1 from each video may be stitched together to form a first movie M1 comprising frames V1F1, V2F1 ... VjF1; each second frame F2 from each video may be stitched together to form a second movie M2 comprising frames V1F2, V2F2 ... VjF2, and so on, up to and including a movie Mn comprising the sequence of frames V1Fn, V2Fn ... VjFn.
- the viewer may also view time lapse movies M1 - Mn from any position within the flight path.
- FIG. 18 is a flow diagram of an exemplary process 1800 for assembling image frames together from successive videos to create a time lapse movie using a single drone successively flying a consistent flight path with a consistently varied camera attitude periodically in time.
- the method 1800 includes executing the consistent flight path j times while recording j videos, respectively, with each video comprising n frames (Task 1802); appending the first frame of each of the j videos together to yield a first movie (Task 1804); appending the n-th frame of each of the j videos together to yield an n-th movie (Task 1806); and selectively toggling back and forth among the various j videos and n movies without loss of continuity (Task 1808).
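- The frame regrouping behind Tasks 1802 through 1806 is essentially a transpose of a passes-by-positions array. The sketch below uses plain nested lists and hypothetical names; in practice the frames would be aligned by GPS position and PPS-stamped time rather than by index alone.

```python
def assemble_transverse_movies(videos):
    """Regroup per-pass videos into per-position time-lapse movies.

    videos: list of j videos, each a list of n frames, where videos[v][f]
    is the frame recorded at path position f during pass v.
    Returns a list of n movies, where movies[f] = [V1Ff, V2Ff, ..., VjFf].
    """
    if not videos:
        return []
    n = len(videos[0])
    assert all(len(v) == n for v in videos), "passes must have equal length"
    return [[video[f] for video in videos] for f in range(n)]

# Toy example with string stand-ins for frames: 4 passes, 3 path positions
videos = [[f"V{v}F{f}" for f in range(1, 4)] for v in range(1, 5)]
movies = assemble_transverse_movies(videos)
print(movies[0])  # -> ['V1F1', 'V2F1', 'V3F1', 'V4F1']  (movie M1)
```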
- successive frames within a particular video may be parsed into stereo pairs, and used to construct a stereographic video of the scene.
- a series of stereographic frames from each video may be stitched together into a stereographic movie of the scene from a particular position, as explained above.
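- Parsing a single pass into stereo pairs can be sketched as pairing each frame with the frame recorded a fixed number of positions later, on the assumption that flight speed and frame rate were chosen so that the spacing approximates the desired stereo base. The helper below is hypothetical, not part of the disclosure.

```python
def stereo_pairs_from_pass(frames, baseline_frames=1):
    """Pair frames separated by `baseline_frames` positions along the path.

    frames: the ordered frames of a single pass (a single video).
    baseline_frames: how many frames apart the left/right views are taken;
    larger values give a wider effective stereo base.
    Returns a list of (left, right) tuples usable as stereographic frames.
    """
    return [
        (frames[i], frames[i + baseline_frames])
        for i in range(len(frames) - baseline_frames)
    ]

print(stereo_pairs_from_pass(["F1", "F2", "F3", "F4"], baseline_frames=2))
# -> [('F1', 'F3'), ('F2', 'F4')]
```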
- a method of constructing a time lapse movie comprising: recording a first video of a scene while traversing a predetermined path with a camera exhibiting a known attitude during a first time window; recording a second video of the scene while traversing the predetermined path with the known attitude during a second time window; identifying a first frame at a first position within the first video; identifying a first frame at a first position within the second video; and stitching the first frame from the first video together with the first frame from the second video to form a first time lapse movie.
- the first time window comprises a first unit of time within a first solar day
- the second time window comprises the first unit of time within a second solar day.
- the first and second solar days comprise successive days.
- the method further includes identifying a second frame at a second position within the first video; identifying a second frame at a second position within the second video; and stitching the second frame from the first video together with the second frame from the second video to form a second time lapse movie.
- the method further includes: constructing a first stereoscopic image from the first and second frames of the first video; constructing a second stereoscopic image from the first and second frames of the second video; and stitching the first stereoscopic image together with the second stereoscopic image to form a stereoscopic time lapse movie.
- the method further includes using a GPS signal received from a GPS device associated with the camera to maintain the predetermined path.
- the method further includes using a pulse-per-second (PPS) signal received at the camera to synchronize the recording of the first frame of the first video with the recording of the first frame of the second video.
- PPS pulse-per-second
- the method further includes using an attitude and heading reference system (AHRS) signal received at the camera to maintain the known attitude while recording the first and second videos.
- AHRS attitude and heading reference system
- the method further includes: recording a j-th video of the scene while traversing the predetermined path with the known attitude during a j-th time window; identifying a first frame at a first position within the j-th video; and stitching the first frame from the first video together with the first frame from the second video and the first frame from the j-th video to form the first time lapse movie.
- the method further includes mounting the camera to an airborne platform, such that traversing the predetermined path comprises executing a predetermined flight path.
- the known attitude comprises a constant attitude.
- the known attitude comprises a variable attitude.
- a system for constructing a time lapse movie of a scene, the system comprising: a drone having a video camera pivotably mounted thereon; a control circuit configured to: fly the drone along a consistent flight path during respective first and second passes over the scene; maintain a consistent camera attitude during the first and second passes; record a first video during the first pass and a second video during the second pass; and append a first frame of the first video to a first frame of the second video to form a first time lapse movie.
- the control circuit is configured to execute the first and second passes along the consistent flight path at the same solar time on consecutive solar days.
- the camera comprises a GPS receiver configured to receive a GPS signal from an external source; and the control circuit is configured to execute the consistent flight path using the GPS signal in a closed feedback control loop.
- the camera comprises an AHRS device configured to output an AHRS signal; and the control circuit is configured to maintain the consistent camera attitude using the AHRS signal in a closed feedback control loop.
- the GPS receiver comprises a PPS pin configured to output a PPS signal component to the control circuit; and the control circuit is configured to synchronize the first frame of the first video to the first frame of the second video using the PPS signal.
- the control circuit is further configured to: record a j-th video during a j-th pass over the scene; and append a first frame from the j-th video to the first time lapse movie.
- the control circuit is further configured to selectively switch among the first video, the second video, and the first movie during playback.
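- The closed feedback loops recited for the control circuit (GPS for the flight path, AHRS for the camera attitude) can be summarized with a simple proportional controller. The sketch below is illustrative only, with hypothetical sensor and actuator callables; a real flight controller would typically use a full PID or state-space loop.

```python
def closed_loop_step(setpoint, measurement, gain=0.5):
    """One proportional-control step: output = gain * (setpoint - measurement)."""
    return gain * (setpoint - measurement)

def hold_waypoint_and_attitude(read_gps, read_ahrs, command_velocity,
                               command_gimbal, waypoint, attitude, steps=100):
    """Drive position toward `waypoint` (from GPS) and camera attitude toward
    `attitude` (from AHRS), one axis at a time, using proportional feedback.

    read_gps/read_ahrs return per-axis tuples; command_velocity/command_gimbal
    accept tuples of per-axis corrections. All four callables are hypothetical.
    """
    for _ in range(steps):
        pos = read_gps()    # e.g. (east_m, north_m, up_m)
        att = read_ahrs()   # e.g. (roll_deg, pitch_deg, yaw_deg)
        command_velocity(tuple(closed_loop_step(w, p) for w, p in zip(waypoint, pos)))
        command_gimbal(tuple(closed_loop_step(a, m) for a, m in zip(attitude, att)))

if __name__ == "__main__":
    # Toy demonstration with a simulated drone state.
    state = {"pos": [0.0, 0.0, 0.0], "att": [0.0, 0.0, 0.0]}

    def apply(target, deltas):
        for i, d in enumerate(deltas):
            target[i] += d

    hold_waypoint_and_attitude(
        read_gps=lambda: tuple(state["pos"]),
        read_ahrs=lambda: tuple(state["att"]),
        command_velocity=lambda v: apply(state["pos"], v),
        command_gimbal=lambda g: apply(state["att"], g),
        waypoint=(10.0, 20.0, 30.0),
        attitude=(0.0, -15.0, 90.0),
        steps=50,
    )
    print(state)  # position and attitude have converged on the setpoints
```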
- a method of using a single drone successively flying a consistent flight path with a consistently varied camera attitude periodically in time to produce a time lapse movie includes: executing the consistent flight path j times while recording j videos, respectively, each video comprising n frames; appending the first frame of each of the j videos together to yield a first movie; and appending the n-th frame of each of the j videos together to yield an n-th movie.
- the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations, nor is it intended to be construed as a model that must be literally duplicated.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
Systems, devices, and methods for rendering stereographic images are disclosed, comprising: a first camera characterized by a first resolution and configured to output a first signal; a second camera characterized by a second resolution substantially greater than the first resolution and configured to output a second signal; and a processor configured to construct a depth map of objects using the first signal and to map pixel data derived from the second signal onto the objects.
Applications Claiming Priority (10)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/386,623 US20180174270A1 (en) | 2016-12-21 | 2016-12-21 | Systems and Methods For Mapping Object Sizes and Positions Onto A Cylindrical Panorama Using A Pivoting Stereoscopic Camera |
| US15/386,605 US10084966B2 (en) | 2016-12-21 | 2016-12-21 | Methods and apparatus for synchronizing multiple lens shutters using GPS pulse per second signaling |
| US15/386,605 | 2016-12-21 | ||
| US15/386,623 | 2016-12-21 | ||
| US15/389,879 US20180184063A1 (en) | 2016-12-23 | 2016-12-23 | Systems and Methods For Assembling Time Lapse Movies From Consecutive Scene Sweeps |
| US15/389,868 | 2016-12-23 | ||
| US15/389,868 US20180184073A1 (en) | 2016-12-23 | 2016-12-23 | Systems and Methods For Recording Stereo Pairs From Independent Camera Platforms |
| US15/389,879 | 2016-12-23 | ||
| US15/483,739 US20180295335A1 (en) | 2017-04-10 | 2017-04-10 | Stereographic Imaging System Employing A Wide Field, Low Resolution Camera And A Narrow Field, High Resolution Camera |
| US15/483,739 | 2017-04-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018118751A1 true WO2018118751A1 (fr) | 2018-06-28 |
Family
ID=62627211
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2017/066954 Ceased WO2018118751A1 (fr) | 2016-12-21 | 2017-12-18 | Systèmes et procédés d'imagerie stéréoscopique |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018118751A1 (fr) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080016384A1 (en) * | 2006-06-28 | 2008-01-17 | Smith Jeremy D | System and method for precise absolute time event generation and capture |
| US20080050112A1 (en) * | 2006-08-22 | 2008-02-28 | Sony Ericsson Mobile Communications Ab | Camera shutter |
| US20120229697A1 (en) * | 2011-03-07 | 2012-09-13 | Seiko Epson Corporation | Digital Camera And Exposure Control Method |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112880645A (zh) * | 2021-02-20 | 2021-06-01 | 自然资源部第一海洋研究所 | System and method for constructing a three-dimensional model of a sea wave surface based on stereo mapping |
| CN113408347A (zh) * | 2021-05-14 | 2021-09-17 | 桂林电子科技大学 | Method for detecting changes in distant buildings with a surveillance camera |
| CN113408347B (zh) * | 2021-05-14 | 2022-03-15 | 桂林电子科技大学 | Method for detecting changes in distant buildings with a surveillance camera |
| CN116109782A (zh) * | 2023-04-12 | 2023-05-12 | 中科星图测控技术股份有限公司 | Digital space scene visualization system and method from a GEO orbital viewpoint |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20180295335A1 (en) | Stereographic Imaging System Employing A Wide Field, Low Resolution Camera And A Narrow Field, High Resolution Camera | |
| US20180184073A1 (en) | Systems and Methods For Recording Stereo Pairs From Independent Camera Platforms | |
| US20180184063A1 (en) | Systems and Methods For Assembling Time Lapse Movies From Consecutive Scene Sweeps | |
| US10084966B2 (en) | Methods and apparatus for synchronizing multiple lens shutters using GPS pulse per second signaling | |
| US11012622B2 (en) | Digital 3D/360 degree camera system | |
| US6894809B2 (en) | Multiple angle display produced from remote optical sensing devices | |
| US8780174B1 (en) | Three-dimensional vision system for displaying images taken from a moving vehicle | |
| US8384762B2 (en) | Method and apparatus for displaying stereographic images of a region | |
| US9294755B2 (en) | Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations | |
| ES2996894T3 (en) | A method and corresponding system for generating video-based models of a target such as a dynamic event | |
| JP2013505457A (ja) | Systems and methods for capturing large-area images in detail, including cascaded cameras and/or calibration features | |
| WO2018118751A1 (fr) | Systèmes et procédés d'imagerie stéréoscopique | |
| US20180174270A1 (en) | Systems and Methods For Mapping Object Sizes and Positions Onto A Cylindrical Panorama Using A Pivoting Stereoscopic Camera | |
| JP2010045693A (ja) | Image acquisition system for generating three-dimensional video of a route | |
| CN104864848A (zh) | Aerial digital oblique photogrammetry device with multi-angle array combination | |
| KR101009683B1 (ko) | Panoramic video generation system | |
| Gangapurwala | Methods of stereophotogrammetry: a review | |
| JP2004127322A (ja) | Stereo image forming method and apparatus | |
| EP2175661A1 (fr) | Method and apparatus for producing a visual representation of a region | |
| JPS60263863A (ja) | Method for monitoring surface flow velocity distribution using stereoscopic images | |
| JPH0257649B2 (fr) | ||
| Prakash | Stereoscopic 3D viewing systems using a single sensor camera | |
| JP2005283407A (ja) | Projection display of images with commentary | |
| WO2014183172A1 (fr) | Method for generating spatial stereo images of the Earth's surface | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17882839 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17882839 Country of ref document: EP Kind code of ref document: A1 |