
WO2016000515A1 - Method, device and computer storage medium for shooting a star track video (拍摄星轨视频的方法、装置和计算机存储介质) - Google Patents

Method, device and computer storage medium for shooting a star track video (拍摄星轨视频的方法、装置和计算机存储介质)

Info

Publication number
WO2016000515A1
WO2016000515A1 (application PCT/CN2015/081016)
Authority
WO
WIPO (PCT)
Prior art keywords
image
star
shooting
camera
composite image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2015/081016
Other languages
English (en)
French (fr)
Inventor
刘林汶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to US15/322,631 priority Critical patent/US10244184B2/en
Publication of WO2016000515A1 publication Critical patent/WO2016000515A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N 5/265 Mixing
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between a recording apparatus and a television camera
    • H04N 5/772 Interface circuits between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure
    • H04N 5/91 Television signal processing therefor
    • H04N 9/00 Details of colour television systems
    • H04N 9/79 Processing of colour television signals in connection with recording
    • H04N 9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 9/804 Transformation involving pulse code modulation of the colour picture signal components
    • H04N 9/8042 Transformation involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N 9/82 Transformation with the individual colour picture signal components recorded simultaneously only
    • H04N 9/8205 Transformation involving the multiplexing of an additional signal and the colour video signal
    • H04N 9/8211 Transformation involving the multiplexing of an additional signal and the colour video signal, the additional signal being a sound signal
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/71 Circuitry for evaluating the brightness variation

Definitions

  • the present invention relates to the field of camera technology, and in particular, to a method, device and computer storage medium for capturing a star track video.
  • Star-track shooting captures the running track of the stars and is a favorite shooting technique among astronomy enthusiasts.
  • The exposure usually takes 20 to 60 minutes, which requires a professional camera such as an SLR, because such cameras are equipped with sensor hardware that supports long continuous exposure.
  • Current star-track shooting can only produce photos: the final result is merely a static image showing the running track of the stars, and it cannot produce a dynamic video showing the stars' motion.
  • the embodiment of the invention provides a method for capturing a star track video, including the steps of:
  • the encoded image data is generated as a video file.
  • the image synthesizing the current image with the past image includes:
  • Image synthesis is performed based on the current image and the brightness information of the past image.
  • the performing image synthesis according to the brightness information of the current image and the past image includes:
  • When the camera is a front camera, the step of acquiring an image by the camera every preset time further comprises: mirroring the acquired image.
  • the step of performing the encoding process on the captured composite image further includes:
  • Special effects processing is performed on the captured composite image, which includes basic effect processing, filter effect processing, and/or special scene effect processing.
  • the embodiment of the invention simultaneously provides an apparatus for capturing a star-track video, comprising an image acquisition module, an image synthesis module and a video generation module, wherein:
  • An image acquisition module configured to acquire an image by the camera every preset time
  • An image synthesis module configured to perform image synthesis on a current image and a past image to generate a composite image
  • the video generation module is configured to capture the composite image, perform encoding processing on the captured composite image, and generate the encoded image data into a video file.
  • the image synthesis module is configured to perform image synthesis according to the brightness information of the current image and the past image.
  • the image synthesis module is configured to:
  • the device for capturing a star-track video further includes a mirroring module, and the mirroring module is configured to: determine whether the currently used camera is a front camera and, if so, perform mirroring processing on the collected image.
  • the apparatus for capturing a star-track video further includes a special effect processing module configured to perform special effect processing on the captured composite image, where the special effect processing includes basic effect processing, filter effect processing, and/or special scene effect processing.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to perform at least one of the foregoing methods.
  • In the method for capturing a star-track video provided by an embodiment of the present invention, an image is collected every preset time and the current image is synthesized with the past image into a composite image, so that image synthesis technology simulates a long exposure to obtain a star-track photo (i.e., the composite image); the star-track photos from different times are then encoded and finally combined into a video file, thereby realizing the shooting of a star-track video.
  • the user can use the camera to capture a video showing the running process of the star, or apply to a similar application scenario, which satisfies the diverse needs of the user and improves the user experience.
  • Moreover, since each composite image is encoded as it is captured, there is no need to store the generated composite images, so the video file obtained from the final shooting is not large and does not occupy too much storage space.
  • FIG. 1 is a flow chart provided by a first embodiment of a method for photographing a star-track video of the present invention
  • FIG. 2 is a flow chart provided by a second embodiment of a method for photographing a star track video of the present invention
  • FIG. 3 is a structural block diagram of a first embodiment of the apparatus for shooting a star-track video according to the present invention
  • FIG. 4 is a structural block diagram of a second embodiment of the apparatus for shooting a star-track video according to the present invention.
  • FIG. 5 is a structural block diagram of a third embodiment of the apparatus for shooting a star-track video according to the present invention.
  • FIG. 6 is a structural block diagram of a fourth embodiment of the apparatus for shooting a star-track video according to the present invention.
  • FIG. 7 is a schematic diagram showing the electrical structure of an apparatus for capturing a star-track video according to an embodiment of the present invention.
  • The apparatus for capturing a star-track video of the embodiment of the present invention does not rely on imaging hardware capable of long exposure; instead it uses image synthesis to simulate a long exposure. Parameters such as ISO, picture quality, and scene mode are adjusted and constrained to suit the star-track shooting scene and output to the hardware device; images are then acquired and synthesized, and the composite images are encoded to generate a video file, finally realizing the shooting of the star-track video.
  • the present invention is not limited to shooting a star track, but is also suitable for other similar scenarios.
  • the method includes the following steps:
  • Step S101 After the shooting starts, the camera collects an image every preset time.
  • the embodiment of the invention adds a star-track video shooting mode to the shooting device, and the user can select the star-track video shooting mode or the normal shooting mode for shooting.
  • The star-track video shooting mode is tailored to the requirements of the star-track shooting scene: parameters such as exposure time, ISO, resolution, exposure compensation, and noise reduction are preset, and different parameter sets can also be preset for the different starry-sky scenes over different regions, for the user to select when shooting.
  • the collected image may be cached in the cache module, and the image synthesis module in the subsequent step may read the image from the cache module for synthesis, or directly send the collected image to the image synthesis module in the subsequent step for synthesis.
  • Step S102 Perform image synthesis on the current image and the past image to generate a composite image.
  • the image synthesis module of the photographing device directly receives the acquired image; or reads the image in real time from the cache module for image synthesis, and resets the cache module, and empties the data therein to provide space for subsequent data. Based on the above scheme, the image synthesis module performs image synthesis based on the current image and the luminance information of the past image. Since the camera continuously captures images, the composite image is also continuously generated. The composite image is actually a star-track photo, and the composite image generated at different times shows the star-track effect at different times.
  • The image synthesis module determines whether the brightness of each pixel in the current image is greater than the brightness of the corresponding pixel in the past image and, if so, replaces the pixel in the past image with the pixel from the current image; after all dimmer pixels in the past image have been replaced, the result is the final composite image. That is, the image synthesis of this embodiment is performed by brightness selection: the already-synthesized image (the past image) serves as the base image, and each pixel of the subsequent image that is brighter than the base image replaces the corresponding base pixel.
  • For example, after the first image has been taken it serves as the past image; when the second image (the current image) arrives, the pixels at corresponding positions of the first and second images are compared. If a pixel of the second image is brighter than the corresponding pixel of the first image, that pixel of the second image replaces the pixel at the corresponding position in the first image. A composite image is thus obtained, subsequent images are processed in the same way against the latest composite image, and finally a star-track composite image is obtained.
  • Suppose the image contains pixel unit 1, pixel unit 2, ..., pixel unit n, a total of n pixel units, and that for the 200 pixel units from pixel unit 501 to pixel unit 700 the current image is brighter than the past image. The image synthesis module then replaces the pixels of pixel units 501 to 700 in the past image with the pixels of pixel units 501 to 700 in the current image; when the replacement is complete, a new image, that is, the composite image, is obtained.
  • Compared with highlight superposition, this bright-pixel replacement approach captures the trajectory of stellar motion more clearly, preventing other bright spots next to the star trail from becoming too bright and affecting the star-track effect.
  • The image synthesis module also performs noise reduction on the composite image, and controls the blending ratio of each newly synthesized image according to the exposure level of the existing image to suppress overexposure.
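  • As an illustration only (this code is not from the patent; the function and variable names are ours), the brightness-selection synthesis above amounts to a per-pixel "lighten" blend, which can be sketched in NumPy as:

```python
import numpy as np

def lighten_blend(past: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Replace each pixel of the past (base) image whose luminance is
    lower than the corresponding pixel of the current image."""
    # Approximate per-pixel luminance from RGB (Rec. 601 weights).
    weights = np.array([0.299, 0.587, 0.114])
    past_luma = past.astype(np.float64) @ weights
    cur_luma = current.astype(np.float64) @ weights
    # Where the current pixel is brighter, take it; otherwise keep the base.
    mask = cur_luma > past_luma
    return np.where(mask[..., None], current, past)

# Folding successive frames through this function keeps, at every
# position, the brightest pixel seen so far -- which draws the trails:
# composite = frames[0]
# for frame in frames[1:]:
#     composite = lighten_blend(composite, frame)
```

Because the result at each position is one original pixel rather than a sum, bright pixels never accumulate, which matches the text's point that replacement avoids the over-brightening of highlight superposition.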
  • Step S103 Grab the composite image, and encode the captured composite image.
  • The composite images may be captured continuously or at intervals.
  • Continuously capturing a composite image means that each time a composite image is generated, one image is captured for encoding, that is, all the generated composite images are used as the material of the composite video.
  • the generation of the composite image and the capture of the composite image for the encoding process are performed simultaneously by the two threads, and since the composite image is encoded while being imaged, it is not necessary to store the generated composite image.
  • Interval grabbing refers to selectively capturing a portion of a composite image as a material for a composite video.
  • the interval mode can be a manual interval mode or an automatic interval mode.
  • The manual interval mode provides an operation interface that lets the user trigger a capture, for example by tapping the screen to grab the currently generated composite image (which, when a preview is shown, is the current preview image); the automatic interval mode captures a composite image at a preset time interval, that is, one composite image is grabbed every preset time.
  • The interval at which composite images are grabbed is preferably longer than the interval at which the camera collects images (i.e., the exposure time), to avoid grabbing the same composite image two or more times and to reduce the size of the final synthesized video file.
  • For example, a composite image can be captured every 1 to 2 minutes; the captured composite image is the most recently generated composite image, i.e., the star-track photo at the current time. The captured composite images are then video-encoded into common encodings such as MPEG-4, H264, H263, or VP8 in preparation for later generation of the video file; the method for encoding the composite images is the same as in the prior art and is not repeated here.
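  • A minimal sketch of the automatic interval decision, assuming a hypothetical helper (not named in the patent) and monotonic timestamps in seconds; the grab interval is clamped to be at least the exposure interval, per the constraint above:

```python
def should_grab(last_grab_time: float, now: float,
                grab_interval: float, exposure_interval: float) -> bool:
    """Decide whether to grab the current composite image for encoding.

    The effective grab interval is kept at least as long as the camera's
    exposure interval so the same composite image is never grabbed twice.
    """
    effective_interval = max(grab_interval, exposure_interval)
    return now - last_grab_time >= effective_interval
```

For example, with a 10 s exposure and a 60 s grab interval, a grab 59 s after the previous one is refused, while one at 60 s is allowed.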
  • Step S104 When the shooting ends, the encoded image data is generated as a video file.
  • Video file formats include, but are not limited to, mp4, 3gp, avi, rmvb, and the like.
  • the photo of the star track is encoded and finally synthesized into a video file to realize the shooting of the star track video.
  • the user can use the camera to capture a video showing the running process of the star, or apply to a similar application scenario, which satisfies the diverse needs of the user and improves the user experience.
  • Since each composite image is encoded as it is captured, there is no need to store the generated composite images, so the video file obtained from the final shooting is not large and does not occupy too much storage space.
  • the method includes the following steps:
  • Step S201 After receiving the shooting instruction, start shooting after delaying the preset time
  • Pressing the shooting button introduces slight jitter that would affect the shot, so this embodiment realizes an anti-shake function by delaying the shooting. That is, after the user presses the shooting button and issues a shooting instruction, the device shooting the star-track video does not shoot immediately; it delays for the preset time and starts shooting only after the user-induced jitter has ended.
  • The preset time is preferably 1 to 3 seconds.
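  • The delayed-start anti-shake behavior can be sketched as follows; `start_capture` and the injectable `sleep` are our own illustrative names, not the patent's:

```python
import time

def start_shooting_with_delay(start_capture, delay_s: float = 2.0,
                              sleep=time.sleep) -> None:
    """Wait out the press-induced jitter, then start capturing.

    `delay_s` follows the 1-3 second range suggested in the text;
    `start_capture` is a hypothetical callback that begins image
    acquisition. `sleep` is injectable so the delay can be faked in tests.
    """
    sleep(delay_s)   # let the hand-induced shake settle
    start_capture()  # only now begin collecting frames
```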
  • Step S202 Collecting an image every preset time by using the front camera
  • When the user is shooting a star-track video, the camera needs to face the sky; if the rear camera is used, the screen of the device faces down, which is extremely inconvenient for previewing.
  • the front camera is used for shooting, and the screen of the shooting device is facing upward, so that the user can conveniently view the shooting effect.
  • the user can switch between the front camera and the rear camera as needed.
  • Step S203 Perform mirroring processing on the collected image
  • Alternatively, the device that captures the star-track video may ask the user whether the image needs to be mirrored and perform the corresponding operation according to the user's selection.
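  • Mirroring a front-camera frame is simply a horizontal flip. A sketch, assuming images are NumPy arrays with columns on axis 1 (the helper name is ours):

```python
import numpy as np

def mirror_image(img: np.ndarray) -> np.ndarray:
    """Undo the left-right mirroring of a front-camera frame by
    flipping the image horizontally (axis 1 is the column axis).
    Works for both grayscale (H, W) and color (H, W, C) arrays."""
    return img[:, ::-1]
```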
  • Step S204 Perform image synthesis on the current image and the past image to generate a composite image.
  • Step S205 displaying the composite image in real time
  • the camera displays the composite image in real time on the display screen for the user to preview the current star track effect in real time.
  • The composite image displayed by the photographing device is a compressed small-sized thumbnail, while the full-size image is stored; that is, display and storage run as two threads.
  • the composite image displayed at this time is completely consistent with the actual star-track image, and the user does not need to perform subsequent processing. Since the screen is facing up, the user can easily preview the star track during the shooting process.
  • When the front camera is used for shooting, the composite image may instead be mirrored after it is generated, and the processed composite image then displayed in real time.
  • Step S206 Grab the composite image, and encode the captured composite image.
  • A manual-interval-priority function can be set: even when the automatic interval mode is active, if the user sees a composite image on the current preview interface that they want but that falls between automatic captures, the user can grab it manually, for example by tapping or swiping the screen to capture the currently displayed composite image.
  • Step S207 When the shooting ends, the encoded image data is generated as a video file.
  • In this embodiment, the anti-shake effect is achieved by delaying the shooting, and shooting with the front camera keeps the display facing up, so during shooting the user can conveniently preview the effect in real time and perform manual interval grabbing to produce a star-track video effect the user is satisfied with, further enhancing the user experience.
  • Special effect processing may be performed on the captured composite image before it is encoded; the special effect processing includes basic effect processing, filter effect processing, and/or special scene effect processing.
  • Basic effect processing includes noise reduction and brightness and chromaticity adjustment; filter effect processing includes sketch, negative, and black-and-white effects; special scene effect processing includes processing for common scenes such as weather and starry sky.
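  • Two of the filter effects named above, negative and black-and-white, can be sketched for 8-bit RGB images as follows (illustrative helpers, not the patent's implementation):

```python
import numpy as np

def negative_filter(img: np.ndarray) -> np.ndarray:
    """Negative effect: invert every 8-bit channel value."""
    return 255 - img

def black_and_white_filter(img: np.ndarray) -> np.ndarray:
    """Black-and-white effect: replace each pixel with its luminance
    (Rec. 601 weights), repeated across the three channels."""
    luma = img.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    gray = np.round(luma).astype(np.uint8)
    return np.repeat(gray[..., None], 3, axis=-1)
```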
  • the method further includes: turning on the audio device, receiving the audio data; and encoding the audio data.
  • There are two main sources of audio data: microphone capture or a custom audio file.
  • When the audio source is a custom audio file, the audio file is first decoded to obtain the raw audio data.
  • Special effect processing may also be performed on the received audio data; this includes effect recording, voice changing, pitch changing, and/or speed changing.
  • The specific way to generate the video file is as follows: upon receiving the user's end-of-capture command, the video file is generated from the encoded image data and the encoded audio data, in the video file format set by the user.
  • Composite images that were not grabbed are preferably not stored, so as to save the storage space of the imaging device.
  • the device for capturing a star-track video may be a general digital camera such as a card camera, or may be a terminal such as a mobile phone or a tablet computer having a camera function.
  • The device for capturing a star track video includes an image acquisition module, an image synthesis module, and a video generation module.
  • Image acquisition module configured to call the camera to capture images.
  • the embodiment of the invention adds a star-track video shooting mode, and the user can select the star-track video shooting mode or the normal mode for shooting.
  • When the user selects the star track video shooting mode and presses the shooting button or triggers the virtual shooting button, star track video shooting starts.
  • the image acquisition module calls the camera to collect an image every preset time, and the preset time is equivalent to the exposure time, preferably 5 to 10 seconds. When shooting in the sky, the image acquisition module can automatically set the focus of the camera to infinity.
  • the star-track video shooting mode combines the requirements of the star-track shooting scene, and presets parameters such as exposure time, ISO, resolution, exposure compensation, noise reduction, etc., and may also be based on different starry sky scenes over different regions. Set different parameters for the user to select when shooting. This parameter is output to a related hardware device such as an image acquisition module during shooting so that it can sample or preprocess the acquired image according to the set parameters.
  • the image acquisition module then sends the acquired image to the image synthesis module.
  • The device for capturing the star track video may further include a cache module: the image acquisition module stores the collected images in the cache module, and the subsequent image synthesis module reads the image information directly from the cache module.
  • Image synthesis module configured to combine the current image with the past image to generate a composite image. Based on the above scheme, image synthesis is performed based on the current image and the luminance information of the past image, and the image is continuously generated because the camera continuously acquires the image.
  • The image synthesis module determines whether the brightness of each pixel in the current image is greater than that of the corresponding pixel in the past image and, if so, replaces the pixel in the past image with the pixel from the current image; once all dimmer pixels in the past image have been replaced, the result is the final composite image. That is, the image synthesis of this embodiment is performed by brightness selection: the already-synthesized image (the past image) serves as the base image, and pixels of each subsequent image that are brighter than the base image replace the corresponding base pixels.
  • For example, after the first image has been taken it serves as the past image; when the second image (the current image) arrives, the pixels at corresponding positions of the two images are compared. If a pixel of the second image is brighter than the corresponding pixel of the first image, that pixel of the second image replaces the pixel at the corresponding position in the first image, yielding a composite image; subsequent images are then processed in the same way against the latest composite image, and finally the star-track composite image is obtained.
  • Suppose the image contains pixel unit 1, pixel unit 2, ..., pixel unit n, a total of n pixel units, and that for the 200 pixel units from pixel unit 501 to pixel unit 700 the current image is brighter than the past image. The image synthesis module then replaces the pixels of pixel units 501 to 700 in the past image with the pixels of pixel units 501 to 700 in the current image; when the replacement is complete, a new image, that is, the composite image, is obtained.
  • Compared with highlight superposition, this bright-pixel replacement approach captures the trajectory of stellar motion more clearly, preventing other bright spots beside the star trail from affecting the star-track effect.
  • the image synthesis module can also display the generated composite image in real time through the display screen, or can cache each composite image.
  • the image composition module compresses the composite image into a small-sized thumbnail image and displays it through the display screen.
  • the video generation module is configured to capture the composite image, encode the captured composite image, and generate the encoded image data into a video file.
  • the video generation module may continuously capture the composite image or the spaced captured composite image. Continuously capturing a composite image means that each time a composite image is generated, one image is captured for encoding, that is, all the generated composite images are used as the material of the composite video. Generating the composite image and grabbing the composite image for encoding processing are performed simultaneously by the two threads.
  • Interval grabbing refers to selectively capturing a portion of a composite image as a material for a composite video.
  • the interval mode can be a manual interval mode or an automatic interval mode.
  • the manual interval mode means that the video generation module provides an operation interface for the user to click to trigger the captured image data. For example, when the user clicks on the screen, the video generation module captures the currently generated composite image (when there is a preview, that is, the current preview) Image); automatic interval mode means that the video generation module captures the composite image according to a preset time interval, that is, captures a composite image every preset time.
  • The interval between captures of the composite image is preferably longer than the interval between the images captured by the camera (i.e., the exposure time), to avoid capturing the same composite image two or more times; for example, a composite image may be captured every 10 seconds to 1 minute, the captured image being the most recently generated composite image.
  • The video generation module performs video encoding on the captured composite images, processing them into common video encodings such as MPEG-4, H264, H263, or VP8 in preparation for subsequent generation of the video file; the encoding of the composite images is the same as in the prior art and is not described here.
  • Grabbing a composite image every preset time can also be expressed as: after the camera has collected a preset number of images, one composite image is grabbed.
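  • Counted in frames rather than seconds, this grab rule can be sketched as follows (a hypothetical helper, not from the patent):

```python
def grab_indices(total_frames: int, frames_per_grab: int) -> list:
    """Return the 1-based frame indices after which a composite image
    is grabbed: one grab after every `frames_per_grab` collected images."""
    return [i for i in range(1, total_frames + 1)
            if i % frames_per_grab == 0]
```

For instance, with one grab per 3 collected frames over 10 frames, the grabs happen after frames 3, 6, and 9.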
  • the video generation module may generate the encoded image data into a video file according to a video file format specified by the user, and the video file format includes but is not limited to mp4, 3gp, avi, rmvb, and the like.
  • Composite images that were not grabbed are not stored, so as to save storage space.
  • the apparatus for capturing the star-track video also has an anti-shake function when performing star-track photography.
  • The difference between this embodiment and the first embodiment is the added anti-shake module, which is connected to the image acquisition module. The anti-shake module is configured to receive the shooting instruction and, after delaying for the preset time, transmit it to the image acquisition module, which starts acquiring images upon receiving it. That is, after the user presses the shooting button and issues a shooting instruction, the device shooting the star track video does not shoot immediately; it delays for the preset time and starts shooting only after the user-induced jitter has ended.
  • the preset time may be 1 ⁇ 3S.
  • Implementing the anti-shake function by delaying the start of shooting prevents the slight shake produced when the shooting button is pressed from affecting the result, further improving the user's shooting experience.
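The delayed start can be sketched with a timer; this is an illustration only, with `start_capture` standing in for the device's actual capture routine:

```python
import threading

def delayed_start(start_capture, delay_s=2.0):
    """Arm the shutter: run `start_capture` only after `delay_s` seconds,
    so the shake from pressing the button has died down.  The 2 s
    default sits inside the 1-3 s range given in the text.
    """
    timer = threading.Timer(delay_s, start_capture)
    timer.start()
    return timer
```

The returned timer can also be cancelled (e.g. if the user aborts the shot before the delay elapses).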
  • FIG. 5 shows a third embodiment of the apparatus for shooting a star-trail video according to the present invention.
  • This embodiment differs from the first embodiment in that a mirroring module is added, with the image acquisition module, the mirroring module, and the image synthesis module connected in sequence.
  • The mirroring module is configured to determine whether the camera currently in use is the front camera; if so, it mirrors the acquired image and passes the processed image to the image synthesis module; if not, it passes the image to the image synthesis module without any processing.
  • The apparatus for shooting a star-trail video of this embodiment allows the user to switch freely between the front camera and the rear camera during star-trail photography. Because the star-trail picture captured by the front camera is a mirror image of the actual scene, in this embodiment an image acquired by the front camera is first mirrored by the mirroring module, and only then is the processed image sent to the cache module, or directly to the image synthesis module, which generates the composite image. The composite image generated in this way matches the actual star-trail scene exactly, and the user need not perform any subsequent processing.
  • During star-trail shooting the camera must face the sky. If the rear camera is used, the screen faces down, which makes previewing very inconvenient; with the front camera the screen faces up, and the user can easily check the shooting result.
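The mirroring step amounts to a horizontal flip of each frame. A minimal sketch, with the image represented as rows of pixel values (a real implementation would flip the camera buffer):

```python
def mirror_image(image):
    """Horizontally flip an image given as rows of pixel values.

    The front camera delivers a mirror image of the real scene, so
    each row is reversed before the frame enters composition.
    """
    return [row[::-1] for row in image]
```

Mirroring is an involution: applying it twice restores the original frame, which is why frames from the rear camera can simply skip this step.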
  • In some embodiments, the mirroring module may instead be connected to both the image synthesis module and the video generation module. After determining that the camera currently in use is the front camera, it mirrors the composite image generated by the image synthesis module, and the processed composite image is then displayed in real time.
  • In some embodiments, the mirroring module may be connected only to the video generation module: the video generation module sends the grabbed composite image to the mirroring module for mirroring, and the mirroring module returns the processed composite image to the video generation module for encoding.
  • In some embodiments, the mirroring module may simply ask the user whether mirroring is required and, if so, mirror the acquired image or the composite image.
  • FIG. 6 shows a fourth embodiment of the apparatus for shooting a star-trail video according to the present invention.
  • This embodiment differs from the first embodiment in that a special-effect processing module is added and connected to the video generation module. The video generation module sends the grabbed composite image to the special-effect processing module, which applies special-effect processing to it and returns the processed composite image to the video generation module for encoding.
  • The special-effect processing includes basic effect processing, filter effect processing, and/or special-scene effect processing.
  • Basic effect processing includes noise reduction and brightness and chroma adjustment; filter effect processing includes sketch, negative, and black-and-white effects; special-scene effect processing includes rendering common scenes such as particular weather or a starry sky.
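Two of the filter effects named above can be sketched on grayscale data; these are toy per-pixel versions (function names and the 8-bit luminance representation are assumptions, not from the patent):

```python
def negative(gray):
    """Negative filter: invert each 8-bit luminance value."""
    return [[255 - p for p in row] for row in gray]

def black_and_white(gray, threshold=128):
    """Black-and-white filter: threshold each luminance value."""
    return [[255 if p >= threshold else 0 for p in row] for row in gray]
```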
  • In some embodiments, the video generation module is further configured to turn on an audio device, receive audio data through the audio device, and encode the audio data.
  • There are two main sources of audio data: microphone capture or a user-defined audio file.
  • When the audio source is a user-defined audio file, the video generation module first decodes the audio file to obtain the raw audio data.
  • In some embodiments, the special-effect processing module also applies special-effect processing to the received audio data, including effect recording, voice changing, pitch shifting, and/or speed changing.
  • Upon the user's end-of-shooting instruction, the video generation module generates the video file from the encoded image data and the encoded audio data, in the video file format set by the user.
  • Besides star-trail shooting, the apparatus for shooting a star-trail video and its shooting method according to the embodiments of the present invention can also be applied to other similar application scenarios.
  • Thus, the apparatus for shooting a star-trail video of the present invention captures an image every preset time, composites the current image with the past image into a composite image, and uses image-synthesis technology to simulate a long exposure and obtain star-trail photos (i.e., composite images); it then encodes the star-trail photos of different moments and finally combines them into a video file, realizing the shooting of a star-trail video.
  • The user can thus use the shooting apparatus to capture a video showing the movement of the stars, or apply it to similar scenarios, which satisfies the user's diverse needs and improves the user experience.
  • Because the composite images are encoded while shooting proceeds, the generated composite images need not be stored, so the final video file is not large and does not occupy too much storage space.
  • In addition, an anti-shake effect is achieved by delaying the start of shooting. Shooting with the front camera keeps the display facing up, so during shooting the user can conveniently preview the result in real time and can conveniently grab composite images manually to produce a satisfying star-trail video, further enhancing the user experience.
  • It should be noted that when the apparatus provided by the above embodiments shoots a star-trail video, the division into the above functional modules is given only as an example; in practical applications, the above functions may be assigned to different functional modules as needed.
  • In addition, the apparatus embodiments and the method embodiments for shooting a star-trail video belong to the same concept; their specific implementation is described in detail in the method embodiments and is not repeated here.
  • An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for executing at least one of the foregoing methods for shooting a star-trail video, specifically the methods shown in FIG. 1 and/or FIG. 2 and/or FIG. 3.
  • the computer storage medium may be various types of storage media such as a ROM/RAM, a magnetic disk, an optical disk, a DVD, or a USB flash drive.
  • the computer storage medium may be a non-transitory storage medium.
  • The apparatus for shooting a star-trail video described in the embodiments of the present application, including the image acquisition module, the image synthesis module, and the video generation module, may correspond to any structure capable of performing the above functions, for example various types of processors with information-processing capability.
  • The processor may include an application processor (AP), a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another information-processing structure or chip that can implement the above functions by executing specified code.
  • FIG. 7 is a block diagram showing the main electrical configuration of a camera according to an embodiment of the present invention.
  • The photographic lens 101 is composed of a plurality of optical lenses for forming the subject image, and is a single-focus lens or a zoom lens.
  • The photographic lens 101 can be moved in the optical-axis direction by the lens driving unit 111, which controls the focus position of the photographic lens 101 based on a control signal from the lens drive control unit 112 and, in the case of a zoom lens, also controls the focal length.
  • the lens drive control circuit 112 performs drive control of the lens drive unit 111 in accordance with a control command from the microcomputer 107.
  • An imaging element 102 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 101 on the optical axis of the photographing lens 101.
  • The imaging element 102 functions as an imaging section that captures the subject image and acquires captured image data.
  • Photodiodes constituting each pixel are two-dimensionally arranged in a matrix on the imaging element 102. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
  • The imaging element 102 is connected to an imaging circuit 103, which performs charge-accumulation control and image-signal readout control in the imaging element 102, reduces the reset noise of the read-out image signal (an analog image signal), performs waveform shaping, and further applies gain adjustment and the like to obtain an appropriate signal level.
  • the imaging circuit 103 is connected to the A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal, and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
  • the bus 199 is a transmission path for transmitting various data read or generated inside the camera.
  • In addition to the A/D conversion unit 104, the bus 199 is connected to an image processor 105, a JPEG processor 106, a microcomputer 107, an SDRAM (Synchronous DRAM) 108, a memory interface (hereinafter referred to as memory I/F) 109, and an LCD (Liquid Crystal Display) driver 110.
  • The image processor 105 performs various kinds of image processing on the image data based on the output of the imaging element 102, such as OB subtraction, white-balance adjustment, color-matrix calculation, gamma conversion, color-difference signal processing, noise removal, simultaneous (demosaicing) processing, and edge processing.
  • When image data is recorded on the recording medium 115, the JPEG processor 106 compresses the image data read out from the SDRAM 108 in accordance with the JPEG compression method. The JPEG processor 106 also decompresses JPEG image data for image reproduction and display: the file recorded on the recording medium 115 is read out, decompressed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116.
  • In the present embodiment, the JPEG method is adopted as the image compression/decompression method, but the method is not limited thereto; other compression/decompression methods such as MPEG, TIFF, and H.264 may of course be used.
  • The operation unit 113 includes, but is not limited to, physical or virtual buttons, such as a power button, a shooting button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and various other input buttons and keys. The operation unit detects the operation states of these operation members and outputs the detection results to the microcomputer 107.
  • In addition, a touch panel is provided on the front surface of the LCD 116 serving as the display section; it detects the position touched by the user and outputs that position to the microcomputer 107.
  • The microcomputer 107 executes various processing sequences corresponding to the user's operation based on the detection results from the operation members of the operation unit 113, and likewise based on the detection result of the touch panel on the front of the LCD 116.
  • the flash memory 114 stores programs for executing various processing sequences of the microcomputer 107.
  • the microcomputer 107 performs overall control of the camera in accordance with the program. Further, the flash memory 114 stores various adjustment values of the camera, and the microcomputer 107 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data or the like.
  • the SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
  • the microcomputer 107 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 107 is connected to the operation unit 113 and the flash memory 114.
  • By executing a program, the microcomputer 107 can control the apparatus in this embodiment to perform the following operations:
  • after shooting starts, capturing one image with the camera at every preset interval; compositing the current image with the past image to generate a composite image; grabbing the composite image and encoding the grabbed composite image; and, when shooting ends, generating a video file from the encoded image data.
  • Optionally, compositing the current image with the past image comprises:
  • performing image composition according to brightness information of the current image and the past image.
  • Optionally, performing image composition according to the brightness information of the current image and the past image comprises: determining whether the brightness of the pixel at a given position in the current image is greater than the brightness of the pixel at the same position in the past image; if so, replacing the pixel at that position in the past image with the pixel from the current image, and performing image composition accordingly.
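The brightness-select composition described above can be sketched as a per-pixel "lighten" blend. This is a minimal illustration over luminance values only (a real implementation would operate on the camera's pixel format):

```python
def lighten_composite(base, current):
    """Brightness-select ('lighten') blend: for each position, keep the
    brighter of the two luminance values, so moving stars leave trails
    on the base image while the dark sky stays dark.
    """
    return [
        [cur if cur > old else old for old, cur in zip(base_row, cur_row)]
        for base_row, cur_row in zip(base, current)
    ]
```

Feeding each newly captured frame through this blend accumulates every bright star position into the composite, which is what simulates a single long exposure.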
  • Optionally, the camera is a front camera, and after the step of capturing one image with the camera at every preset interval, the method further comprises: mirroring the image.
  • Optionally, before the step of encoding the grabbed composite image, the method further comprises: applying special-effect processing to the grabbed composite image, the special-effect processing including basic effect processing, filter effect processing, and/or special-scene effect processing.
  • the memory interface 109 is connected to the recording medium 115, and performs control for writing image data and a file header attached to the image data to the recording medium 115 and reading from the recording medium 115.
  • the recording medium 115 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 115 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • The LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM and, when display is required, is read out and displayed on the LCD 116. Alternatively, image data compressed by the JPEG processor 106 is stored in the SDRAM; when display is required, the JPEG processor 106 reads the compressed image data from the SDRAM, decompresses it, and the decompressed image data is displayed on the LCD 116.
  • The LCD 116 is disposed on the back surface of the camera body or the like and performs image display; it is provided with a touch panel that detects the user's touch operations.
  • In the present embodiment, a liquid-crystal display panel (LCD 116) is provided as the display section, but the present invention is not limited thereto; various other display panels, such as organic EL panels, may be employed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus for shooting a star-trail video, and a computer storage medium. The method includes the steps of: after shooting starts, capturing one image with a camera at every preset interval; compositing the current image with the past image to generate a composite image; grabbing the composite image and encoding the grabbed composite image; and, when shooting ends, generating a video file from the encoded image data.

Description

拍摄星轨视频的方法、装置和计算机存储介质 技术领域
本发明涉及摄像技术领域,尤其是涉及一种拍摄星轨视频的方法、装置和计算机存储介质。
背景技术
星轨拍摄可以拍摄出星星的运行轨迹,是深受天文爱好者喜爱的一种拍摄方式。进行星轨拍摄时,曝光时间通常需要20~60分钟,需要专业的摄像装置如单反相机才能实现,因其配置了能够支持长时间持续曝光的感光硬件。目前的星轨拍摄只能拍摄照片,即最终得到的只是一张显示星星的运行轨迹的静态图像,无法拍摄出能够显示星星运行过程的动态视频。
传统意义上的视频拍摄,在夜间进行拍摄时,受光照限制,其拍摄出来的效果很暗。特别是对着星空拍摄时,由于星光非常很黯淡,因此拍摄出来的星空基本上是黑的,根本无法拍摄出星轨视频效果;同时,星星的运行速度很慢,需要长时间拍摄才能获得星轨效果,因此传统的视频拍摄方法会占用大量的存储空间。
因此,现有技术中尚没有实现拍摄星轨视频的解决方案,无法满足用户的多样化需求,影响了用户体验。
发明内容
本发明的期望提供一种拍摄星轨视频的方法、装置和计算机存储介质,旨在实现星轨视频的拍摄,满足用户的多样化需求,提升用户体验。
,本发明实施例提出一种拍摄星轨视频的方法,包括步骤:
拍摄开始后,通过摄像头每隔预设时间采集一张图像;
将当前的图像与过去的图像进行图像合成,生成合成图像;
抓取所述合成图像,并对抓取的合成图像进行编码处理;
拍摄结束时,将编码处理后的图像数据生成为视频文件。
基于上述方案,所述将当前的图像与过去的图像进行图像合成包括:
根据当前的图像与过去的图像的亮度信息进行图像合成。
基于上述方案,所述根据当前的图像与过去的图像的亮度信息进行图像合成包括:
判断同一位置当前的图像中的像素的亮度是否大于过去的图像中的像素的亮度;
若是,则将同一位置过去的图像中的像素替换为当前的图像中的像素,据此进行图像合成。
基于上述方案,所述摄像头为前置摄像头,所述通过摄像头每隔预设时间采集一张图像的步骤之后还包括:对所述图像进行镜像处理。
基于上述方案,所述对抓取的合成图像进行编码处理的步骤之前还包括:
对抓取的合成图像进行特效处理,所述特效处理包括基本效果处理、滤镜效果处理和/或特殊场景效果处理。
本发明实施例同时提出一种拍摄星轨视频的装置,包括图像采集模块、图像合成模块和视频生成模块,其中:
图像采集模块,配置为通过摄像头每隔预设时间采集一张图像;
图像合成模块,配置为将当前的图像与过去的图像进行图像合成,生成合成图像;
视频生成模块,配置为抓取所述合成图像,对抓取的合成图像进行编码处理,并将编码处理后的图像数据生成为视频文件。
基于上述方案,所述图像合成模块配置为:根据当前的图像与过去的 图像的亮度信息进行图像合成。
基于上述方案,所述图像合成模块配置为:
判断同一位置当前的图像中的像素的亮度是否大于过去的图像中的像素的亮度;
若是,则将同一位置过去的图像中的像素替换为当前的图像中的像素,据此进行图像合成。
基于上述方案,所述拍摄星轨视频的装置还包括镜像模块,所述镜像模块配置为:判断当前使用的摄像头是否为前置摄像头,若是,则对采集到的图像进行镜像处理。
基于上述方案,所述拍摄星轨视频的装置还包括特效处理模块模块,所述特效处理模块配置为:对抓取的合成图像进行特效处理,所述特效处理包括基本效果处理、滤镜效果处理和/或特殊场景效果处理。
本发明实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令用于执行上述方法的至少其中之一。
本发明实施例所提供的一种拍摄星轨视频的方法,通过每隔预设时间采集一张图像,并将当前的图像与过去的图像合成为合成图像,采用图像合成技术来模拟长时间曝光进而获得星轨照片(即合成图像),再将不同时刻的星轨照片进行编码处理,最终合成为视频文件,实现了星轨视频的拍摄。使得用户可以利用拍摄装置拍摄出显示星星的运行过程的视频,或者应用于类似的应用场景,满足了用户的多样化需求,提升了用户体验。同时,由于是一边拍摄一边对合成图像进行编码处理,无需存储生成的合成图像,因此最终拍摄获得的视频文件的体积不会很大,不会占用太多的存储空间。
附图说明
图1是本发明拍摄星轨视频的方法第一实施例提供的流程图;
图2是本发明拍摄星轨视频的方法第二实施例提供的流程图;
图3是本发明拍摄星轨视频的装置第一实施例提供的结构框图;
图4是本发明拍摄星轨视频的装置第二实施例提供的结构框图;
图5是本发明拍摄星轨视频的装置第三实施例提供的结构框图;
图6是本发明拍摄星轨视频的装置第四实施例提供的结构框图;
图7是本发明实施例提供的拍摄星轨视频的装置的电气结构示意图。
具体实施方式
以下结合附图对本发明的优选实施例进行详细说明,应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
本发明实施例的拍摄星轨视频的装置不依赖摄像硬件来进行长时间曝光,而是采用图像合成的技术来模拟长时间曝光。并结合星轨拍摄场景的要求,对ISO、画片质量、场景模式等参数加以调整和限制,将参数输出给硬件设备,然后获取图像进行图像合成,并将合成图像进行编码处理生成视频文件,最终实现星轨视频的拍摄。然而,本发明并不限于拍摄星轨,还适合其它类似情景。
参见图1,提出本发明的拍摄星轨视频的方法第一实施例,所述方法包括以下步骤:
步骤S101:拍摄开始后,摄像头每隔预设时间采集一张图像
本发明实施例为拍摄装置增加了一种星轨视频拍摄模式,用户可以选择星轨视频拍摄模式或普通拍摄模式进行拍摄,其中,星轨视频拍摄模式结合星轨拍摄场景的要求,预先设定了曝光时间、ISO、分辨率、曝光补偿、降噪等参数,还可以根据不同地区上空不同的星空场景预设不同的参数,供用户拍摄时进行选择。
当用户选择了星轨视频拍摄模式,按下拍摄按键或触发虚拟拍摄按键后,拍摄装置即开始进行星轨拍摄,利用摄像头每隔预设时间采集一张图像,该预设时间即相当于曝光时间,可选5~10S。可以将采集的图像缓存于缓存模块中,待后续步骤中的图像合成模块从缓存模块中读取图像进行合成,也可以直接将采集到的图像发送给后续步骤中的图像合成模块进行合成。
在进行星空拍摄时,可以自动将焦点设为无限远。
步骤S102:将当前的图像与过去的图像进行图像合成,生成合成图像
拍摄装置的图像合成模块直接接收采集到的图像;或者从缓存模块中实时读取图像进行图像合成,并重置缓存模块,清空其中的数据,为后续数据提供空间。基于上述方案,图像合成模块根据当前的图像与过去的图像的亮度信息进行图像合成。因摄像头持续采集图像,因此合成图像也是持续的生成。所述合成图像实则为星轨照片,不同时刻生成的合成图像显示不同时刻的星轨效果。
在一实施例中,对于同一位置不同时间的像素,图像合成模块判断当前的图像中的像素的亮度是否大于过去的图像中的像素的亮度,若是,则将过去的图像中的像素替换为当前的图像中的像素,在过去的图像中亮度较小的像素全部替换完后,即为最终合成的合成图像。即,本实施例的图像合成采用亮度选择的方式进行,以已经合成的图像(过去的图像)为基础,作为基础图像;然后选择后续图像中亮度比基础图像亮的像素进行替换的合成方法。
比如已经拍摄了第一张图像,这时就以第一张图像(过去的图像)为基础,当第二张图像(当前的图像)到来的时候,将第一张图像与第二张图像的对应位置的像素进行对比,如果第二张的亮度大于第一张的亮度,则提取出第二张图像的像素来替换掉第一张图像对应位置的像素,最后就 得到了一张合成图像,然后又以这张合成图像为基础,对后续图像进行相同的处理,最终得到星轨合成图像。
又如,图像中包括像素单元1、像素单元2…像素单元n共n个像素单元,其中像素单元501~像素单元700共200个像素单元当前的图像的亮度大于过去的图像,图像合成模块则将当前的图像中像素单元501~像素单元700的像素替换过去的图像中像素单元501~像素单元700的像素,替换完成后获得一张新的图像,即合成图像。此种亮点替换的方式,相对于亮点叠加的方式,可以更加清晰的拍摄出星星运动的轨迹,防止星轨旁边的其它亮点过亮而影响星轨效果。
此外,图像合成模块还对合成图像进行降噪处理,同时还根据现有图像的曝光度,控制新合成图像的合成比例,抑制过曝产生。
步骤S103:抓取合成图像,并对抓取的合成图像进行编码处理
具体的,可以连续抓取合成图像或者间隔的抓取合成图像。连续抓取合成图像,是指每生成一张合成图像就抓取一张进行编码处理,即,将生成的所有合成图像都作为合成视频的素材。生成合成图像和抓取合成图像进行编码处理是两个线程同步进行,由于是一边拍摄一边对合成图像进行编码处理,因此无需存储生成的合成图像。
间隔抓取是指选择性的抓取部分合成图像作为合成视频的素材。间隔方式可以是手动间隔模式或者自动间隔模式。其中,手动间隔模式,是指提供操作界面以便用户点击触发抓取图像数据,如点击屏幕,抓取当前生成的合成图像(有预览时,即当前的预览图像);自动间隔模式,是指按照预设的时间间隔抓取合成图像,即每隔预设时间抓取一张合成图像。
抓取合成图像的间隔时间优选长于摄像头采集图像的间隔时间(即曝光时间),避免两次或多次抓取到相同的合成图像,或者减小最终合成的视频文件的大小。例如可以每隔1~2Min抓取一张合成图像,该合成图像即当 前所生成的合成图像,当前时刻的星轨照片。然后对抓取到的合成图像进行视频编码处理,将其处理为MPEG-4、H264、H263、VP8等常见视频编码,以备后续生成视频文件,对合成图像进行编码处理的方法与现有技术相同,在此不再赘述。
此外,每隔预设时间抓取一张合成图像,也可以表述为当摄像头每采集预设张图像后抓取一张合成图像,二者虽然表述方法不同,但实质相同。例如,假设摄像头每隔10S采集一张图像(即曝光时间为10S),拍摄装置在其摄像头每采集3张图像后抓取一张合成图像,实则相当于每隔3*10S=30S后抓取一张合成图像。
步骤S104:拍摄结束时,将编码处理后的图像数据生成为视频文件
生成的视频文件的格式,可以由用户指定。视频文件格式包括但不限于mp4、3gp、avi、rmvb等。
从而,通过每隔预设时间采集一张图像,并将当前的图像与过去的图像合成为合成图像,采用图像合成技术来模拟长时间曝光进而获得星轨照片(即合成图像),再将不同时刻的星轨照片进行编码处理,最终合成为视频文件,实现了星轨视频的拍摄。使得用户可以利用拍摄装置拍摄出显示星星的运行过程的视频,或者应用于类似的应用场景,满足了用户的多样化需求,提升了用户体验。同时,由于是一边拍摄一边对合成图像进行编码处理,无需存储生成的合成图像,因此最终拍摄获得的视频文件的体积不会很大,不会占用太多的存储空间。
参见图2,提出本发明的拍摄星轨视频的方法第二实施例,所述方法包括以下步骤:
步骤S201:接收到拍摄指令后,延迟预设时间后开始拍摄
为了避免按下拍摄按键时产生的轻微抖动影响拍摄效果,本实施例通过延迟拍摄来实现防抖功能。即用户在按下拍摄按键,发出拍摄指令后, 拍摄星轨视频的装置不立即进行拍摄,而是延迟预设时间,待人为产生的抖动结束后,再开始进行拍摄。所述预设时间优选1~3S。
步骤S202:利用前置摄像头每隔预设时间采集一张图像
用户在进行星轨视频拍摄时,摄像头需要面向天空,如果用后置摄像头拍摄,则拍摄装置的屏幕就朝下,用户预览时极不方便。本实施例利用前置摄像头拍摄,则拍摄装置的屏幕朝上,用户就可以方便的查看拍摄效果。当然,用户可以根据需要,在前置摄像头和后置摄像头之间自由切换。
步骤S203:对采集到的图像进行镜像处理
由于前置摄像头捕捉到的星轨画面与实际画面呈镜像关系,有鉴于此本实施例在采集到图像后,先对采集到的图像进行镜像处理,然后才将处理后的图像发送给缓存模块或直接发给图像合成模块,供图像合成模块生成合成图像。在某些实施例中,拍摄星轨视频的装置也可以询问用户是否需要对图像进行镜像处理,根据用户选择执行相应的操作。
步骤S204:将当前的图像与过去的图像进行图像合成,生成合成图像
步骤S205:实时显示该合成图像
拍摄装置在显示屏上实时显示合成图像,供用户实时预览当前的星轨效果。为了达到流畅预览的效果,拍摄装置显示的合成图像为经压缩后的小尺寸的缩略图,全尺寸的图像予以存储,即显示和存储为两个线程。
由于事先对采集到的图像进行了镜像处理,此时显示的合成图像就与实际的星轨画面完全一致,用户无需做后续处理。由于屏幕朝上,在拍摄过程中,用户可以很方便的预览星轨拍摄效果。
在某些实施例中,利用前置摄像头进行拍摄时,也可以在生成了合成图像后,对合成图像进行镜像处理,然后对处理后的合成图像进行实时显示。
步骤S206:抓取合成图像,并对抓取的合成图像进行编码处理
由于屏幕朝上,用户可以方便的看到合成图像的预览效果,因此用户更方便利用手动间隔模式抓取当前预览的合成图像。可以设置手动间隔模式优先功能,即使当前设置了自动间隔模式,但在拍摄过程中,若用户看中了当前预览界面上的一张合成图像,但其又刚好在自动间隔抓取之外,此时用户可以手动抓取该合成图像,如点击或划动屏幕来抓取当前显示的合成图像。
步骤S207:拍摄结束时,将编码处理后的图像数据生成为视频文件
从而,通过延迟拍摄,实现了防抖效果。通过前置摄像头拍摄,使得显示屏朝上,拍摄过程中用户可以方便的实时预览拍摄效果,并可以方便的进行手动间隔抓取,以生成用户满意的星轨视频效果,进一步提升了用户体验。
可选地,针对前述两个实施例,为了提高用户拍摄的趣味性,在对抓取的合成图像进行编码处理之前,还对抓取的合成图像进行特效处理,所述特效处理包括基本效果处理、滤镜效果处理和/或特殊场景效果处理等。其中,基本效果处理,包含减噪、亮度、色度等处理;滤镜效果处理,包含素描、负片、黑白等处理;特殊场景效果处理,包含处理为常见天气、星空等。
可选地,为了在录制视频的同时,用户能够录制声音,抓取合成图像并进行编码处理的同时,还包括:开启音频设备,接收音频数据;对音频数据进行编码处理。音频数据的来源方式主要有两种:麦克风采集或者自定义音频文件。当音频来源为自定义音频文件时,先对音频文件进行解码,得到原始的音频数据。基于上述方案,在对音频数据进行编码处理之前,还对接收到的音频数据进行特效处理,所述特效处理包括特效录音、变声、变调和/或变速等。
在增加了录制音频的功能基础上,生成视频文件的具体方式为:根据 用户拍摄结束指令,将编码处理后的图像数据,以及编码处理后的音频数据,按照用户设定的视频文件格式,生成视频文件。
为了用户操作起来更为方便实用,还可以给用户提供一个操作界面,用来设定抓取合成图像的方式(间隔抓取或连续抓取),间隔抓取时的间隔时间,是否进行特效处理,是否开启录制音频功能等。
本实施例在实际应用中,在对间隔抓取的合成图像进行编码处理的同时,对未抓取的合成图像,优选不进行存储,以便节省拍摄装置的存储空间。
参见图3,提出本发明的拍摄星轨视频的装置第一实施例,所述拍摄星轨视频的装置可以是普通数码相机如卡片相机等,也可以是具有摄像功能的手机、平板电脑等终端设备,所述拍摄星轨视频的装置包括图像采集模块、图像合成模块和视频生成模块。
图像采集模块:配置为调用摄像头采集图像。
本发明实施例增加了一种星轨视频拍摄模式,用户可以选择星轨视频拍摄模式或普通模式进行拍摄。当用户选择了星轨视频拍摄模式,按下拍摄按键或触发虚拟拍摄按键后,则开始进行星轨视频拍摄。图像采集模块调用摄像头每隔预设时间采集一张图像,所述预设时间即相当于曝光时间,优选5~10S。在进行星空拍摄时,图像采集模块可以自动将摄像头的焦点设定为无限远。
本发明实施例中的星轨视频拍摄模式结合星轨拍摄场景的要求,预先设定了曝光时间、ISO、分辨率、曝光补偿、降噪等参数,还可以根据不同地区上空不同的星空场景预设不同的参数,供用户拍摄时进行选择。拍摄时将该参数输出给相关硬件设备如图像采集模块,以使其根据设定参数对采集到的图像进行选样或预处理。
随后图像采集模块将采集到的图像发送给图像合成模块。在某些实施 例中,拍摄星轨视频的装置还可以包括一缓存模块,图像采集模块将采集到的图像存储于缓存模块中,后续图像合成模块直接从缓存模块中读取图像信息。
图像合成模块:配置为将当前的图像与过去的图像进行图像合成,生成合成图像。基于上述方案,根据当前的图像与过去的图像的亮度信息进行图像合成,因摄像头持续采集图像,因此合成图像也是持续的生成。
在一优选实施例中,对于同一位置不同时间的像素,图像合成模块判断当前的图像中的像素的亮度是否大于过去的图像中的像素的亮度,若是,则将过去的图像中的像素替换为当前的图像中的像素,在过去的图像中亮度较小的像素全部替换完后,即为最终合成的合成图像。即,本实施例的图像合成采用亮度选择的方式进行,以已经合成的图像(过去的图像)为基础,作为基础图像;然后选择后续图像中亮度比基础图像亮的像素进行替换的合成方法。
比如已经拍摄了第一张图像,这时就以第一张图像(过去的图像)为基础,当第二张图像(当前的图像)到来的时候,将第一张图像与第二张图像的对应位置的像素进行对比,如果第二张的亮度大于第一张的亮度,则提取出第二张图像的像素来替换掉第一张图像对应位置的像素,最后就得到了一张合成图像,然后又以这张合成图像为基础,对后续图像进行相同的处理,最终得到星轨图。
又如,图像中包括像素单元1、像素单元2…像素单元n共n个像素单元,其中像素单元501~像素单元700共200个像素单元当前的图像的亮度大于过去的图像,图像合成模块则将当前的图像中像素单元501~像素单元700的像素替换过去的图像中像素单元501~像素单元700的像素,替换完成后获得一张新的图像,即合成图像。此种亮点替换的方式,相对于亮点 叠加的方式,可以更加清晰的拍摄出星星运动的轨迹,防止星轨旁边的其它亮点过亮而影响星轨效果。
图像合成模块还可以通过显示屏实时显示生成的合成图像,也可以将每一张合成图像予以缓存。
为了达到流畅预览的效果,图像合成模块将合成图像压缩为小尺寸的缩略图后通过显示屏予以显示。
视频生成模块:配置为抓取合成图像,对抓取的合成图像进行编码处理,将编码处理后的图像数据生成为视频文件。
可选地,视频生成模块可以连续抓取合成图像或者间隔的抓取合成图像。连续抓取合成图像,是指每生成一张合成图像就抓取一张进行编码处理,即,将生成的所有合成图像都作为合成视频的素材。生成合成图像和抓取合成图像进行编码处理是两个线程同步进行。
间隔抓取是指选择性的抓取部分合成图像作为合成视频的素材。间隔方式可以是手动间隔模式或者自动间隔模式。其中,手动间隔模式,是指视频生成模块提供操作界面以便用户点击触发抓取图像数据,如当用户点击屏幕时,视频生成模块则抓取当前生成的合成图像(有预览时,即当前的预览图像);自动间隔模式,是指视频生成模块按照预设的时间间隔抓取合成图像,即每隔预设时间抓取一张合成图像。
抓取合成图像的间隔时间优选长于摄像头采集图像的间隔时间(即曝光时间),避免两次或多次抓取到相同的合成图像,例如每隔10S~1Min抓取一张合成图像,该合成图像即当前所生成的合成图像。然后视频生成模块对抓取到的合成图像进行视频编码处理,将其处理为MPEG-4、H264、H263、VP8等常见视频编码,以备后续生成视频文件,对合成图像进行编码处理的方法与现有技术相同,在此不再赘述。
此外,每隔预设时间抓取一张合成图像,也可以表述为当摄像头每采 集预设张图像后抓取一张合成图像,二者虽然表述方法不同,但实质相同。例如,假设摄像头每隔10S采集一张图像(即曝光时间为10S),拍摄装置在其摄像头每采集3张图像后抓取一张合成图像,实则相当于每隔3*10S=30S后抓取一张合成图像。
视频生成模块可以根据用户指定的视频文件格式,将编码处理后的图像数据生成为视频文件,该视频文件格式包括但不限于mp4、3gp、avi、rmvb等。
本实施例在实际应用中,在对间隔抓取的合成图像进行编码处理的同时,对未抓取的合成图像,不进行存储,以便节省存储空间。
在如图4所示的第二实施例中,拍摄星轨视频的装置在进行星轨拍摄时还具有防抖功能。本实施例与第一实施例的区别是增设了一防抖模块,该防抖模块与图像采集模块连接,所述防抖模块配置为接收拍摄指令,并在接收到拍摄指令后,延迟预设时间之后才将拍摄指令传送给图像采集模块,图像采集模块接收到拍摄指令后才开始采集图像。即用户在按下拍摄按键,发出拍摄指令后,拍摄星轨视频的装置不立即进行拍摄,而是延迟预设时间,待人为产生的抖动结束后,再开始进行拍摄。所述预设时间可选1~3S。
从而,通过延迟拍摄来实现防抖功能,避免按下拍摄按键时产生的轻微抖动影响拍摄效果,进一步提升了用户的拍摄体验。
图5所示为本发明的拍摄星轨视频的装置第三实施例,本实施例与第一实施例的区别是增设了一镜像模块,其中图像采集模块、镜像模块和图像合成模块依次连接,该镜像模块配置为:判断当前使用的摄像头是否为前置摄像头;若是,则对采集到的图像进行镜像处理,将处理后的图像传送给图像合成模块;若否,则不做任何处理,直接将图像传送给图像合成模块。
本实施例的拍摄星轨视频的装置在进行星轨拍摄时,允许用户在前置摄像头和后置摄像头之间自由切换。由于前置摄像头捕捉到的星轨画面与实际画面呈镜像关系,因此本实施例在利用前置摄像头采集到图像后,利用镜像模块先对采集到的图像进行镜像处理,然后才将处理后的图像发送给缓存模块或直接发给图像合成模块,供图像合成模块生成合成图像,此时生成的合成图像就与实际的星轨画面完全一致,用户无需做后续处理。用户在进行星轨拍摄时,摄像头需要面向天空,如果用后置摄像头拍摄,屏幕就朝下,用户预览时极不方便。当利用前置摄像头拍摄时,屏幕朝上,用户就可以方便的查看拍摄效果。
在某些实施例中,镜像模块也可以分别与图像合成模块和视频生成模块连接,在判定当前使用的摄像头为前置摄像头后,对图像合成模块生成的合成图像进行镜像处理,然后将处理后的合成图像予以实时显示。
在某些实施例中,镜像模块也可以只有视频生成模块连接,视频生成模块将抓取到的合成图像发送给镜像模块进行镜像处理,镜像模块将处理后的合成模块返回给视频生成模块进行编码处理。
在某些实施例中,镜像模块也可以直接询问用户是否需要镜像处理,若是,则对采集到的图像或合成图像进行镜像处理。
图6所示为本发明的拍摄星轨视频的装置第三实施例,本实施例与第一实施例的区别是增设了一特效处理模块,其与视频生成模块连接,视频生成模块将抓取到的合成图像发送给特效处理模块,特效处理模块对抓取到的合成图像进行特效处理,再将处理后的合成图像返回给视频生成模块进行编码处理。
所述特效处理包括基本效果处理、滤镜效果处理和/或特殊场景效果处理等。其中,基本效果处理,包含减噪、亮度、色度等处理;滤镜效果处理,包含素描、负片、黑白等处理;特殊场景效果处理,包含处理为常见 天气、星空等。
基于上述方案,视频生成模块还配置为:开启音频设备,通过音频设备接收音频数据;对音频数据进行编码处理。音频数据的来源方式主要有两种:麦克风采集或者自定义音频文件。当音频来源为自定义音频文件时,视频生成模块先对音频文件进行解码,得到原始的音频数据。基于上述方案,在对音频数据进行编码处理之前,特效处理模块还对接收到的音频数据进行特效处理,所述特效处理包括特效录音、变声、变调和/或变速等。最后,视频生成模块根据用户拍摄结束指令,将编码处理后的图像数据,以及编码处理后的音频数据,按照用户设定的视频文件格式,生成视频文件。
本发明实施例的拍摄星轨视频的装置及其拍摄方法,除了应用于星轨拍摄外,也可以应用于其它的类似应用场景。
从而,本发明拍摄星轨视频的装置,通过每隔预设时间采集一张图像,并将当前的图像与过去的图像合成为合成图像,采用图像合成技术来模拟长时间曝光进而获得星轨照片(即合成图像),再将不同时刻的星轨照片进行编码处理,最终合成为视频文件,实现了星轨视频的拍摄。使得用户可以利用拍摄装置拍摄出显示星星的运行过程的视频,或者应用于类似的应用场景,满足了用户的多样化需求,提升了用户体验。同时,由于是一边拍摄一边对合成图像进行编码处理,无需存储生成的合成图像,因此最终拍摄获得的视频文件的体积不会很大,不会占用太多的存储空间。
此外,还通过延迟拍摄,实现了防抖效果。通过前置摄像头拍摄,使得显示屏朝上,拍摄过程中用户可以方便的实时预览拍摄效果,并可以方便的进行手动间隔抓取,以生成用户满意的星轨视频效果,进一步提升了用户体验。
需要说明的是:上述实施例提供的拍摄星轨视频的装置在拍摄星轨视 频时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成。另外,上述实施例提供的拍摄星轨视频的装置与拍摄星轨视频的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本发明实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令用于上述拍摄星轨视频的方法的至少其中之一;具体如图1和/或图2和/或图3所示的方法。
所述计算机存储介质可为如ROM/RAM、磁盘、光盘、DVD或U盘等各种类型的存储介质,本实施例所述计算机存储介质可选为非瞬间存储介质。
值得注意的本申请实施例中所述拍摄星轨视频的装置,图像采集模块、图像合成模块以及视频生成模块,可对应各种能够进行上述功能的结构,比如,各种类型具有信息处理功能的处理器。所述处理器可包括应用处理器AP(AP,Application Processor)、中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)或可编程门阵列(FPGA,Field Programmable Gate Array)等信息处理结构或芯片,所述处理器可通过执行指定代码来实现上述功能。
图7是表示本发明的一个实施方式的相机的主要电气结构的框图。摄影镜头101由用于形成被摄体像的多个光学镜头构成,是单焦点镜头或变焦镜头。摄影镜头101能够通过镜头驱动部111在光轴方向上移动,根据来自镜头驱动控制部112的控制信号,控制摄影镜头101的焦点位置,在变焦镜头的情况下,也控制焦点距离。镜头驱动控制电路112按照来自微型计算机107的控制命令进行镜头驱动部111的驱动控制。
在摄影镜头101的光轴上、由摄影镜头101形成被摄体像的位置附近配置有摄像元件102。摄像元件102发挥作为对被摄体像摄像并取得摄像图像 数据的摄像部的功能。在摄像元件102上二维地呈矩阵状配置有构成各像素的光电二极管。各光电二极管产生与受光量对应的光电转换电流,该光电转换电流由与各光电二极管连接的电容器进行电荷蓄积。各像素的前表面配置有拜耳排列的RGB滤色器。
摄像元件102与摄像电路103连接,该摄像电路103在摄像元件102中进行电荷蓄积控制和图像信号读出控制,对该读出的图像信号(模拟图像信号)降低重置噪声后进行波形整形,进而进行增益提高等以成为适当的信号电平。
摄像电路103与A/D转换部104连接,该A/D转换部104对模拟图像信号进行模数转换,向总线199输出数字图像信号(以下称之为图像数据)。
总线199是用于传送在相机的内部读出或生成的各种数据的传送路径。在总线199连接着上述A/D转换部104,此外还连接着图像处理器105、JPEG处理器106、微型计算机107、SDRAM(Synchronous DRAM)108、存储器接口(以下称之为存储器I/F)109、LCD(液晶显示器:Liquid Crystal Display)驱动器110。
图像处理器105对基于摄像元件102的输出的图像数据进行OB相减处理、白平衡调整、颜色矩阵运算、伽马转换、色差信号处理、噪声去除处理、同时化处理、边缘处理等各种图像处理。JPEG处理器106在将图像数据记录于记录介质115时,按照JPEG压缩方式压缩从SDRAM108读出的图像数据。此外,JPEG处理器106为了进行图像再现显示而进行JPEG图像数据的解压缩。进行解压缩时,读出记录在记录介质115中的文件,在JPEG处理器106中实施了解压缩处理后,将解压缩的图像数据暂时存储于SDRAM108中并在LCD116上进行显示。另外,在本实施方式中,作为图像压缩解压缩方式采用的是JPEG方式,然而压缩解压缩方式不限于此,当然可以采用MPEG、TIFF、H.264等其他的压缩解压缩方式。
操作单元113包括但不限于实体按键或者虚拟按键,该实体或虚拟按键可以为电源按钮、拍照键、编辑按键、动态图像按钮、再现按钮、菜单按钮、十字键、OK按钮、删除按钮、放大按钮等各种输入按钮和各种输入键等操作部材,检测这些操作部材的操作状态,。
将检测结果向微型计算机107输出。此外,在作为显示部的LCD116的前表面设有触摸面板,检测用户的触摸位置,将该触摸位置向微型计算机107输出。微型计算机107根据来自操作单元113的操作部材的检测结果,执行与用户的操作对应的各种处理序列。(同样,可以把这个地方改成计算机107根据LCD116前面的触摸面板的检测结果,执行与用户的操作对应的各种处理序列。
闪存114存储用于执行微型计算机107的各种处理序列的程序。微型计算机107根据该程序进行相机整体的控制。此外,闪存114存储相机的各种调整值,微型计算机107读出调整值,按照该调整值进行相机的控制。SDRAM108是用于对图像数据等进行暂时存储的可电改写的易失性存储器。该SDRAM108暂时存储从A/D转换部104输出的图像数据和在图像处理器105、JPEG处理器106等中进行了处理后的图像数据。
微型计算机107发挥作为该相机整体的控制部的功能,统一控制相机的各种处理序列。微型计算机107连接着操作单元113和闪存114。
所述微型计算机107可通过执行程序控制本实施例中装置执行下列操作:
拍摄开始后,通过摄像头每隔预设时间采集一张图像;
将当前的图像与过去的图像进行图像合成,生成合成图像;
抓取所述合成图像,并对抓取的合成图像进行编码处理;
拍摄结束时,将编码处理后的图像数据生成为视频文件。
可选地,所述将当前的图像与过去的图像进行图像合成包括:
根据当前的图像与过去的图像的亮度信息进行图像合成。
可选地,所述根据当前的图像与过去的图像的亮度信息进行图像合成包括:判断同一位置当前的图像中的像素的亮度是否大于过去的图像中的像素的亮度;若是,则将同一位置过去的图像中的像素替换为当前的图像中的像素,据此进行图像合成。
可选地,所述摄像头为前置摄像头,所述通过摄像头每隔预设时间采集一张图像的步骤之后还包括:对所述图像进行镜像处理。
可选地,所述对抓取的合成图像进行编码处理的步骤之前还包括:对抓取的合成图像进行特效处理,所述特效处理包括基本效果处理、滤镜效果处理和/或特殊场景效果处理。
存储器接口109与记录介质115连接,进行将图像数据和附加在图像数据中的文件头等数据写入记录介质115和从记录介质115中读出的控制。记录介质115例如为能够在相机主体上自由拆装的存储器卡等记录介质,然而不限于此,也可以是内置在相机主体中的硬盘等。
LCD驱动器110与LCD116连接,将由图像处理器105处理后的图像数据存储于SDRAM,需要显示时,读取SDRAM存储的图像数据并在LCD116上显示,或者,JPEG处理器106压缩过的图像数据存储于SDRAM,在需要显示时,JPEG处理器106读取SDRAM的压缩过的图像数据,再进行解压缩,将解压缩后的图像数据通过LCD116进行显示。
LCD116配置在相机主体的背面等上,进行图像显示。该LCD116设有检测用户的触摸操作的触摸面板。另外,作为显示部,在本实施方式中配置的是液晶表示面板(LCD116),然而不限于此,也可以采用有机EL等各种显示面板。
本领域普通技术人员可以理解,实现上述实施例方法中的全部或部分步骤是可以通过程序来控制相关的硬件完成,所述程序可以存储于一计算 机可读取存储介质中,所述存储介质,如ROM/RAM、磁盘、光盘等。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。凡按照本发明原理所作的修改,都应当理解为落入本发明的保护范围。

Claims (11)

  1. A method for shooting a star-trail video, comprising the steps of:
    after shooting starts, capturing one image with a camera at every preset interval;
    compositing the current image with the past image to generate a composite image;
    grabbing the composite image, and encoding the grabbed composite image;
    when shooting ends, generating a video file from the encoded image data.
  2. The method for shooting a star-trail video according to claim 1, wherein compositing the current image with the past image comprises:
    performing image composition according to brightness information of the current image and the past image.
  3. The method for shooting a star-trail video according to claim 2, wherein performing image composition according to the brightness information of the current image and the past image comprises:
    determining whether the brightness of the pixel at a given position in the current image is greater than the brightness of the pixel at the same position in the past image;
    if so, replacing the pixel at that position in the past image with the pixel from the current image, and performing image composition accordingly.
  4. The method for shooting a star-trail video according to any one of claims 1 to 3, wherein the camera is a front camera, and after the step of capturing one image with the camera at every preset interval, the method further comprises: mirroring the image.
  5. The method for shooting a star-trail video according to any one of claims 1 to 3, wherein before the step of encoding the grabbed composite image, the method further comprises:
    applying special-effect processing to the grabbed composite image, the special-effect processing including basic effect processing, filter effect processing, and/or special-scene effect processing.
  6. An apparatus for shooting a star-trail video, comprising an image acquisition module, an image composition module, and a video generation module, wherein:
    the image acquisition module is configured to capture one image with a camera at every preset interval;
    the image composition module is configured to composite the current image with the past image to generate a composite image;
    the video generation module is configured to grab the composite image, encode the grabbed composite image, and generate a video file from the encoded image data.
  7. The apparatus for shooting a star-trail video according to claim 6, wherein the image composition module is configured to perform image composition according to brightness information of the current image and the past image.
  8. The apparatus for shooting a star-trail video according to claim 7, wherein the image composition module is configured to:
    determine whether the brightness of the pixel at a given position in the current image is greater than the brightness of the pixel at the same position in the past image;
    if so, replace the pixel at that position in the past image with the pixel from the current image, and perform image composition accordingly.
  9. The apparatus for shooting a star-trail video according to any one of claims 6 to 8, further comprising a mirroring module configured to determine whether the camera currently in use is a front camera and, if so, mirror the acquired image.
  10. The apparatus for shooting a star-trail video according to any one of claims 6 to 8, further comprising a special-effect processing module configured to apply special-effect processing to the grabbed composite image, the special-effect processing including basic effect processing, filter effect processing, and/or special-scene effect processing.
  11. A computer storage medium storing computer-executable instructions for executing at least one of the methods according to claims 1 to 5.
PCT/CN2015/081016 2014-07-02 2015-06-08 拍摄星轨视频的方法、装置和计算机存储介质 Ceased WO2016000515A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/322,631 US10244184B2 (en) 2014-07-02 2015-06-08 Method and apparatus for shooting star trail video, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410312348.4A CN104079833A (zh) 2014-07-02 2014-07-02 拍摄星轨视频的方法和装置
CN201410312348.4 2014-07-02

Publications (1)

Publication Number Publication Date
WO2016000515A1 true WO2016000515A1 (zh) 2016-01-07

Family

ID=51600852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/081016 Ceased WO2016000515A1 (zh) 2014-07-02 2015-06-08 拍摄星轨视频的方法、装置和计算机存储介质

Country Status (3)

Country Link
US (1) US10244184B2 (zh)
CN (1) CN104079833A (zh)
WO (1) WO2016000515A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079833A (zh) * 2014-07-02 2014-10-01 Shenzhen ZTE Mobile Telecom Co., Ltd. Method and device for shooting star trail video
CN104079835A (zh) * 2014-07-02 2014-10-01 Shenzhen ZTE Mobile Telecom Co., Ltd. Method and device for shooting nebula video
JP2016076869A (ja) * 2014-10-08 2016-05-12 Olympus Corporation Imaging device, imaging method, and program
CN105554361A (zh) * 2014-10-28 2016-05-04 ZTE Corporation Processing method and system for dynamic video shooting
CN104754227A (zh) * 2015-03-26 2015-07-01 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Method and device for shooting video
CN105072350B (zh) * 2015-06-30 2019-09-27 Huawei Technologies Co., Ltd. Photographing method and device
CN106357979A (zh) * 2016-09-19 2017-01-25 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Photographing method, device and terminal
JP2019106647A (ja) * 2017-12-13 2019-06-27 Canon Inc. Image processing apparatus and method, and imaging apparatus
CN108933905A (zh) * 2018-07-26 2018-12-04 Nubia Technology Co., Ltd. Video shooting method, mobile terminal and computer-readable storage medium
CN110913118B (zh) * 2018-09-17 2021-12-17 Tencent Digital (Tianjin) Co., Ltd. Video processing method, device and storage medium
CN109361870A (zh) * 2018-11-28 2019-02-19 Vivo Mobile Communication Co., Ltd. Photographing method and terminal device
CN109743508A (zh) * 2019-01-08 2019-05-10 Shenzhen Aliwei Technology Co., Ltd. Time-lapse photography device and method
CN110995993B (zh) * 2019-12-06 2022-04-15 Beijing Xiaomi Mobile Software Co., Ltd. Star trail video shooting method, star trail video shooting device and storage medium
CN110996030A (zh) * 2019-12-20 2020-04-10 TCL Mobile Communication Technology (Ningbo) Co., Ltd. Video generation method and device, storage medium and terminal equipment
JP7566470B2 (ja) * 2020-02-06 2024-10-15 Canon Inc. Imaging apparatus and control method
CN112040126A (zh) * 2020-08-31 2020-12-04 Vivo Mobile Communication Co., Ltd. Shooting method and device, electronic equipment and readable storage medium
CN112102190A (zh) * 2020-09-14 2020-12-18 Nubia Technology Co., Ltd. Image processing method, mobile terminal and computer-readable storage medium
CN112954201B (zh) 2021-01-28 2022-09-27 Vivo Mobile Communication Co., Ltd. Shooting control method and device, and electronic device
TWI789127B (zh) * 2021-11-22 2023-01-01 Novatek Microelectronics Corp. Image compensation circuit and method
US12493930B2 (en) 2023-01-23 2025-12-09 Samsung Electronics Co., Ltd. System and method for generating astro-lapse video on user device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103312969A (zh) * 2012-03-12 2013-09-18 Casio Computer Co., Ltd. Image composition device and image composition method
CN103797780A (zh) * 2011-09-14 2014-05-14 Ricoh Company, Ltd. Image capturing apparatus
CN103888683A (zh) * 2014-03-24 2014-06-25 Shenzhen ZTE Mobile Telecom Co., Ltd. Mobile terminal and shooting method thereof
CN103905730A (zh) * 2014-03-24 2014-07-02 Shenzhen ZTE Mobile Telecom Co., Ltd. Shooting method of mobile terminal and mobile terminal
CN104079833A (zh) * 2014-07-02 2014-10-01 Shenzhen ZTE Mobile Telecom Co., Ltd. Method and device for shooting star trail video
CN104104798A (zh) * 2014-07-23 2014-10-15 Shenzhen ZTE Mobile Telecom Co., Ltd. Method for shooting light-painting video and mobile terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5113493A (en) * 1987-05-11 1992-05-12 Liberty Life Insurance Co. Full speed animation system for low-speed computers and method
US7948681B2 (en) * 2006-08-30 2011-05-24 Conley Kenneth E Device for displaying a three dimensional image
CN102244722B (zh) * 2010-05-10 2013-09-04 Alpha Imaging Technology Corp. Image capture module and image capture method for avoiding shutter lag
JP6021512B2 (ja) * 2012-08-10 2016-11-09 Olympus Corporation Imaging device
US20140078343A1 (en) * 2012-09-20 2014-03-20 Htc Corporation Methods for generating video and multiple still images simultaneously and apparatuses using the same
CN103595925A (zh) * 2013-11-15 2014-02-19 Shenzhen ZTE Mobile Telecom Co., Ltd. Method and device for compositing photos into a video

Also Published As

Publication number Publication date
US10244184B2 (en) 2019-03-26
CN104079833A (zh) 2014-10-01
US20170134666A1 (en) 2017-05-11

Similar Documents

Publication Publication Date Title
WO2016000515A1 (zh) Method and apparatus for shooting star trail video, and computer storage medium
US8633998B2 (en) Imaging apparatus and display apparatus
US10129488B2 (en) Method for shooting light-painting video, mobile terminal and computer storage medium
WO2016023406A1 (zh) Method for shooting object motion track, mobile terminal and computer storage medium
JP6325841B2 (ja) Imaging device, imaging method, and program
WO2015143841A1 (zh) Mobile terminal and shooting method thereof
CN109005342A (zh) Panoramic shooting method, device and imaging equipment
US20170302848A1 (en) Photographing method, device and computer storage medium
WO2016029746A1 (zh) Shooting method, shooting device and computer storage medium
JP6304293B2 (ja) Image processing device, image processing method and program
US9609167B2 (en) Imaging device capable of temporarily storing a plurality of image data, and control method for an imaging device
KR20130071794A (ko) Digital photographing apparatus and control method thereof
WO2016008359A1 (zh) Method and device for compositing object motion track images, and computer storage medium
WO2016000514A1 (zh) Method and device for shooting nebula video, and computer storage medium
JP5909997B2 (ja) Imaging device and control method for imaging device
US8654204B2 (en) Digital photographing apparatus and method of controlling the same
JP6323022B2 (ja) Image processing device
CN110278375B (zh) Image processing method and device, storage medium and electronic equipment
KR101480407B1 (ko) Digital image processing apparatus, control method thereof, and recording medium storing the control method
WO2016169488A1 (zh) Image processing method and device, computer storage medium and terminal
JP6280780B2 (ja) Imaging device, control method of imaging device, and program
JP5530304B2 (ja) Imaging device and captured image display method
WO2016019786A1 (zh) Method and system for shooting object motion track, and computer storage medium
JP4678273B2 (ja) Imaging device, moving image storage method and moving image storage program
KR101298638B1 (ko) Digital image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15815194

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15322631

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/05/17)

122 Ep: pct application non-entry in european phase

Ref document number: 15815194

Country of ref document: EP

Kind code of ref document: A1