
WO2023115390A1 - Image processing method and device, movable platform, control terminal and system - Google Patents

Image processing method and device, movable platform, control terminal and system

Info

Publication number
WO2023115390A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
displayed
area
image area
movable platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2021/140483
Other languages
English (en)
Chinese (zh)
Inventor
封旭阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Priority to PCT/CN2021/140483
Publication of WO2023115390A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 - Simultaneous control of position or course in three dimensions

Definitions

  • the present application relates to the field of image technology, in particular to an image processing method, device, mobile platform, control terminal and system.
  • take UAVs as an example.
  • UAVs can be equipped with shooting devices, and users can get new shooting angles by using UAVs to shoot.
  • however, the framing and operation of the lens are often limited by the field of view of the shooting device, which increases the difficulty of shooting: multiple shots are often needed to get a satisfactory photo, which greatly reduces the user's shooting experience.
  • one of the purposes of this application is to provide an image processing method, device, mobile platform, control terminal and system, so as to effectively reduce the difficulty of shooting.
  • an image processing method comprising:
  • the image to be displayed includes at least part of the image content captured by the first camera mounted on the movable platform, and all image content collected by the second camera mounted on the movable platform; the first photographing device is used to collect images for obtaining depth information, and the second photographing device is used to collect images displayed to users;
  • the image to be displayed is displayed through a user interface.
  • an image processing device including:
  • memory for storing processor-executable instructions
  • a mobile platform including:
  • a power assembly used to drive the movable platform to move in space
  • the first photographing device is used to collect images for obtaining depth information
  • the second shooting device is used to collect images displayed to the user
  • memory for storing processor-executable instructions
  • a control terminal which communicates with a movable platform, and the control terminal includes:
  • a communication module for communicating with the movable platform
  • memory for storing processor-executable instructions
  • an image processing system comprising:
  • a movable platform communicating with the control terminal, which is equipped with a first photographing device and a second photographing device; the movable platform includes a processor and a memory, the memory is used to store processor-executable instructions; the operations of the first aspect above are implemented when the processor calls the executable instructions.
  • a computer program product including a computer program, and when the computer program is executed by a processor, the above-mentioned steps in the first aspect are implemented.
  • a computer-readable storage medium is provided, and several computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed, the method of the above-mentioned first aspect is executed.
  • the present application provides an image processing method, device, movable platform, control terminal and system, wherein the movable platform is equipped with a first photographing device and a second photographing device; the images collected by the first photographing device are used to acquire depth information, that is, the first photographing device is a photographing device for obstacle avoidance; the images collected by the second photographing device are used to be shown to the user, that is, the second photographing device provides the user with the shooting angle of view of the movable platform.
  • the image to be displayed, displayed on the user interface, includes at least part of the image content captured by the first shooting device and all image content collected by the second shooting device.
  • in addition to presenting all the image content collected by the second shooting device, the user interface also presents at least part of the image content collected by the shooting device used for obstacle avoidance.
  • in this way, the field of view that the user can obtain from the user interface is widened, which helps adjust the framing and/or the pose of the lens during shooting and reduces the difficulty of shooting.
  • FIG. 1 is an application scene of photographing by a mobile platform in the related art.
  • Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 3A is a schematic diagram of an image to be displayed according to an embodiment of the present application.
  • Fig. 3B is a schematic diagram of an image to be displayed according to another embodiment of the present application.
  • Fig. 3C is a schematic diagram of an image to be displayed according to another embodiment of the present application.
  • Fig. 3D is a schematic diagram of an image to be displayed according to another embodiment of the present application.
  • Fig. 4A is a schematic diagram of a user interface according to an embodiment of the present application.
  • Fig. 4B is a schematic diagram of an image to be displayed according to another embodiment of the present application.
  • Fig. 5 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Fig. 6A is a schematic diagram showing a first image and a second image according to an embodiment of the present application.
  • Fig. 6B is a schematic diagram of a first image and a second image according to another embodiment of the present application.
  • Fig. 6C is a schematic diagram of a first image and a second image according to another embodiment of the present application.
  • Fig. 7 is a schematic diagram of a superimposed image according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a spliced image according to an embodiment of the present application.
  • Fig. 9 is a flowchart of an image processing method according to another embodiment of the present application.
  • Fig. 10 is a schematic diagram of a first image shown according to an embodiment of the present application.
  • Fig. 11 is a flowchart of an image processing method according to another embodiment of the present application.
  • Fig. 12 is a structural diagram of an image processing device according to an embodiment of the present application.
  • Fig. 13 is a structural diagram of a movable platform according to an embodiment of the present application.
  • Fig. 14 is a structural diagram of a control terminal according to an embodiment of the present application.
  • Fig. 15 is a structural diagram of an image processing system according to an embodiment of the present application.
  • a mobile platform refers to any device that can move, and may include, but is not limited to, land vehicles, water vehicles, air vehicles, and other types of motorized vehicles.
  • the movable platform may be an unmanned aerial vehicle (UAV), an unmanned vehicle, a robot, and the like.
  • UAVs can be equipped with shooting devices, and users can get new shooting angles by using UAVs to shoot.
  • the shooting process often involves complex camera movements, composition and framing.
  • in traditional shooting scenarios, for example, when a user shoots with a mobile terminal such as a mobile phone, the field of view of the human eye is often larger than the range of the live view (liveview) displayed on the phone's shooting interface, so the user can rely on an understanding of the surrounding environment to plan camera movements, composition and framing in advance.
  • the user can use the control terminal of the UAV to fly the UAV to a more distant location and take pictures.
  • the captured image is transmitted to the control terminal, and the control terminal displays the received image as the liveview picture.
  • the user's field of vision is thus limited to the liveview picture displayed on the control terminal; the user cannot obtain environmental information beyond the liveview picture, nor learn the surrounding environment of the drone.
  • the limited battery life of UAVs also restricts users from using UAVs to make more explorations in the air in advance.
  • relying only on the liveview picture displayed on the control terminal, the user cannot predict moving objects about to enter the picture, let alone what will be displayed after the viewing direction is changed. Therefore, the framing of shooting and the operation of the lens are greatly restricted, which increases the difficulty of shooting: users need to constantly adjust the viewfinder range and lens pose and take multiple shots to get satisfactory photos, which greatly reduces the user's shooting experience.
  • pre-exploration and understanding of the shooting environment can be realized by planning the route and composition in advance.
  • the key points in the route are marked in advance, and the composition is carried out in advance on the key points.
  • such a method has poor flexibility and is not suitable for shooting sports scenes, such as the above-mentioned shooting scene of a high-speed racing car.
  • it takes more time to plan in advance, and it also brings additional power consumption to the drone.
  • Step 210: Acquire an image to be displayed, wherein the image to be displayed includes at least part of the image content captured by the first camera mounted on the movable platform, and all image content collected by the second camera mounted on the movable platform;
  • Step 220: Display the image to be displayed through a user interface.
  • the movable platform is equipped with a first shooting device and a second shooting device.
  • the first photographing device is used for capturing images with depth information, so that the depth information can be used for obstacle avoidance.
  • the image captured by the first photographing device can be used to acquire depth information of an object, such as an obstacle.
  • the first photographing device may adopt a binocular vision system.
  • the principle of the binocular vision system is to use two cameras to form a binocular pair: based on the principle of parallax, imaging equipment obtains two images of the measured object from different positions, and the three-dimensional geometric information of the object is recovered by calculating the position deviation between corresponding image points. That is, two cameras form a binocular pair to perceive the depth information of the scene in a certain direction, as sketched below.
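  • as an illustration only (not part of the original disclosure), the following minimal Python sketch computes depth from a rectified binocular pair with OpenCV; the file names, focal length and baseline are assumed placeholder values.

```python
import cv2
import numpy as np

# Rectified grayscale frames from the left and right cameras of the binocular pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching over the rectified pair yields per-pixel disparity.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

focal_px = 700.0    # focal length in pixels (assumed calibration value)
baseline_m = 0.10   # distance between the two cameras in meters (assumed)

# depth = focal_length * baseline / disparity, valid where disparity > 0.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```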
  • the first shooting device may be a monocular camera, and the camera may collect multiple images at different positions, and use changes of the same object in the multiple images to determine the depth information of the measured object. Through the perceived depth information of the measured object, the obstacle avoidance function of the movable platform can be guaranteed, so that the movable platform can move safely.
  • in order to improve the safety of movement, the movable platform can be equipped with a plurality of first photographing devices with different orientations at different positions for realizing the obstacle avoidance function. Multiple first photographing devices with different orientations can realize obstacle avoidance in more directions.
  • the second shooting device is used to capture images displayed to the user, and these images may be liveview images, that is, the second shooting device may be a shooting device for capturing images and displaying a real-time view to the user, such as a camera.
  • the image captured by the second shooting device can also be stored in the memory of the movable platform. The liveview image and the image stored in the memory of the movable platform can be collection results of the same object at the same time, but the parameters of the two images can differ; for example, due to limited transmission bandwidth, the resolution of the liveview image can be lower than that of the image stored in the memory of the movable platform.
  • the user interface displays image content captured by the second shooting device.
  • at least part of the content captured by the first shooting device can be obtained through the first image captured by the first shooting device; all the image content captured by the second shooting device can be obtained through the second image captured by the second shooting device, or through the first image.
  • the image processing method provided in this embodiment can be applied to a mobile platform, where the image to be displayed is acquired by the mobile platform, and the image to be displayed is displayed through a user interface.
  • the image to be displayed is displayed on the user interface of the movable platform;
  • the image to be displayed is sent to the control terminal through the communication link, and the image to be displayed is displayed on the user interface through the control terminal.
  • the control terminal can be a ground control terminal that can be operated by the user, such as a remote controller, a mobile phone installed with mobile platform control software, a personal computer (Personal Computer, PC), a tablet computer, or a wearable device and the like.
  • if the control terminal is provided with a user interface, such as a remote control with a screen or a mobile phone, the image to be displayed can be displayed on the user interface of the control terminal; if the control terminal is not provided with a user interface, such as a remote control without a screen, a communication connection can be established between the control terminal and another electronic device, and the image to be displayed is shown on the user interface of that device, where the other electronic device can be a portable terminal with a screen, such as a mobile phone or a tablet computer.
  • the image processing method provided in this embodiment can also be applied in a control terminal.
  • the control terminal communicates with the movable platform, and the control terminal can directly or indirectly receive the image to be displayed sent by the movable platform, and display the image to be displayed on the user interface.
  • the control terminal may also receive at least part of the image content captured by the first photographing device and all image content captured by the second photographing device sent by the movable platform. After the image to be displayed is acquired through image processing, the image to be displayed is displayed on the user interface.
  • if the control terminal is provided with a user interface, such as a remote control with a screen or a mobile phone, the image to be displayed can be displayed on the user interface of the control terminal; if the control terminal is not provided with a user interface, such as a remote control without a screen, a communication connection can be established between the control terminal and another electronic device, and the image to be displayed is shown on the user interface of that device, where the other electronic device can be a portable terminal with a screen, such as a mobile phone or a tablet computer.
  • the image to be displayed, displayed on the user interface, includes at least part of image content captured by the first shooting device and all image content captured by the second shooting device.
  • the so-called "at least part of the image content captured by the first photographing device" may be part of the image content captured by the first photographing device, or all of it.
  • in addition to presenting all the image content collected by the second shooting device, the user interface also presents at least part of the image content collected by the first shooting device used for obstacle avoidance.
  • compared with the related art, this embodiment displays more image content on the user interface for the user's reference or prediction, thereby broadening the range of field of view the user can obtain from the user interface; this helps adjust the framing and/or the pose of the lens during shooting and reduces the difficulty of shooting. It provides users with more environmental reference information and assists users in creating with more degrees of freedom.
  • in the related art, the images captured by the first shooting device are only used to obtain depth information for the obstacle avoidance function; in this embodiment, the first shooting device is also used to obtain additional image content, which broadens the user's field of vision and reduces the difficulty of shooting without increasing the cost of the movable platform.
  • adjusting the framing and/or the pose of the lens by means of the additional image content of the first photographing device means adjusting the viewfinder range and/or the lens pose of the second photographing device.
  • the image to be displayed 300 may include a first image area 310 and a second image area 320 .
  • the first image area 310 may include at least part of the image content captured by the first camera; the second image area 320 may include all image content captured by the second camera.
  • the positional relationship between the first image area 310 and the second image area 320 may include one or more of the following positional relationships:
  • the first image area 310 and the second image area 320 may be located on two sides of the image to be displayed 300 .
  • the first image area 310 and the second image area 320 may be separated left and right, or may be separated up and down on both sides of the image 300 to be displayed.
  • the first image area 310 and the second image area 320 may also be separated on both sides of the image to be displayed 300 in other ways, which is not limited in this application.
  • the movable platform may be equipped with multiple first photographing devices with different orientations at different positions for realizing the obstacle avoidance function.
  • image content captured by different first shooting devices may be displayed in different first image areas.
  • the number of the first image areas 310 may be two or more. Taking two first image areas as an example, as shown in FIG. 3B , the two first image areas 310 may be located on two sides of the second image area 320 . As an example, the two first image areas 310 may be separated left and right, or may be separated up and down on both sides of the second image area 320 . Of course, the two first image areas 310 may also be separated on both sides of the second image area 320 in other ways, which is not limited in this application.
  • the first image area 310 may surround the periphery of the second image area 320 .
  • the first image area 310 may include a plurality of first image areas 310 surrounding the second image area 320 .
  • the image content displayed in the plurality of first image areas may be continuous image content or discontinuous image content.
  • the size of the first image area 310 is larger than the size of the second image area 320 , and the first image area 310 includes the second image area 320 .
  • the position of the second image area in the image to be displayed may be determined by one or more of the following methods:
  • Mode 1: Determine the position of the second image area in the image to be displayed according to a first preset parameter. The first preset parameter can be the coordinates of a feature point of the second image area in the image to be displayed, and the feature point can be the center point, a vertex, etc. of the second image area; the first preset parameter can also be the location of the second image area in the image to be displayed.
  • the first preset parameter may also be other parameters capable of determining the position of the second image area in the image to be displayed, which is not limited in this application.
  • Mode 2: Determine the position of the second image area in the image to be displayed according to the position information carried in a first user instruction.
  • the position of the second image area in the image to be displayed can be adjusted according to the wishes of the user, and the user can change the position of the second image area in the image to be displayed by inputting a first user instruction carrying position information.
  • the position information can be the coordinates of a feature point of the second image area in the image to be displayed, and the feature point can be the center point, a vertex, etc. of the second image area; the position information can also be the area occupied by the second image area in the image to be displayed.
  • the location information may also be other parameters capable of determining the location of the second image area in the image to be displayed, which is not limited in this application.
  • the position of the second image area in the image to be displayed may be determined by default according to the first preset parameter, and then the position of the second image area in the image to be displayed is adjusted according to the position information carried in the first user instruction.
  • Mode 3: Determine the position of the second image area in the image to be displayed based on a first obstacle avoidance strategy of the movable platform.
  • the first obstacle avoidance strategy may adjust the position of the second image area in the image to be displayed according to the orientation and number of obstacles. For example, if there are more obstacles in the left front of the movable platform than in the right front, the second image area can be set on the right side of the image to be displayed and the first image area on the left side, so as to show the user the environment in front of the movable platform. This helps the user better control the movement of the movable platform when shooting and avoid collisions with obstacles, so as to better adjust the framing and/or the lens pose, as sketched below.
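  • a hypothetical sketch of such a strategy follows; the function and the obstacle counts are illustrative assumptions, not the patent's own logic.

```python
def place_second_area(obstacles_front_left: int, obstacles_front_right: int) -> str:
    """Place the second image area (live view) away from the obstacle-dense side,
    so the first image area can show the environment on that side."""
    if obstacles_front_left > obstacles_front_right:
        return "right"   # more obstacles on the left: show them in the first image area
    if obstacles_front_right > obstacles_front_left:
        return "left"
    return "center"      # balanced scene: keep the live view centered
```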
  • the second image area 320 may be located in a central area of the image to be displayed 300 .
  • the first image area and the second image area may have differentiated marks on the user interface.
  • the user interface 400 includes a first window 410 and a second window 420 .
  • the first window 410 and the second window 420 can be respectively arranged on the user interface 400 (as shown in FIG. 4A ), and the second window 420 can also be inside the first window 410 (not shown in the figure).
  • the first window 410 displays a first image area 430 of the image to be displayed
  • the second window displays a second image area 440 of the image to be displayed. In this way, the first image area and the second image area are displayed with differential marks on the user interface.
  • the first image area and the second image area may have differential marks on the images to be displayed. For example, different marks may be added to the image to be displayed to distinguish between the first image area and the second image area.
  • the images displayed in the first image area and the second image area are respectively set with different image parameters, such as different contrast, brightness, color, etc., so as to realize differentiated identification.
  • one of the image areas can display a grayscale image and the other image area can display a color image for differentiated identification.
  • the junction of the first image area and the second image area is provided with a first specific mark, so as to realize differentiated marks.
  • a dotted line mark is provided at the junction of the first image area 430 and the second image area 440, indicating that the image area within the dotted frame is the second image area 440 and the image area outside the dotted frame is the first image area 430.
  • the second image area may be provided with a second specific identifier, so as to realize differentiated identification.
  • the second specific identifier may be a text identifier and/or a graphic identifier.
  • the first image area and the second image area may be differentially marked according to one or more combinations of the above examples, so that the user can distinguish the two image areas more clearly.
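  • as a non-authoritative illustration, the sketch below draws one possible combination of such marks with OpenCV (BGR images assumed; border, label text and font are arbitrary choices).

```python
import cv2

def mark_second_area(image, x, y, w, h):
    """Outline the second image area and add a text identifier inside it."""
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 2)  # boundary mark
    cv2.putText(image, "LIVE VIEW", (x + 6, y + 24),                  # second specific identifier
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return image
```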
  • the size of the second image area in the image to be displayed may be determined by one or more of the following methods:
  • Way 1: Determine the size of the second image area in the image to be displayed according to a second preset parameter.
  • the second preset parameter may be the length and/or width of the second image area.
  • the second preset parameter may be the area of the second image region.
  • the second preset parameter may be the ratio of a size parameter of the second image area to the corresponding size parameter of the first image area, the image to be displayed, or the user interface, where the size parameter may include one or more of length, width, and area.
  • the image to be displayed can be displayed on the entire user interface (full-screen mode), or the user interface can be divided into multiple areas or windows, where one area or window is used to display the image to be displayed and other areas or windows are used to display movable platform parameters or other functional virtual buttons.
  • since the second image area includes the real-time view image collected by the second shooting device, in a shooting scene, in order to ensure that the user can see the real-time view clearly, the area of the second image area can be set to satisfy a certain proportional relationship with the user interface or the image to be displayed.
  • Way 2: Determine the size of the second image area in the image to be displayed according to the size information carried in a second user instruction.
  • the size of the second image area in the image to be displayed can be adjusted according to the wishes of the user, and the user can change the size of the second image area in the image to be displayed by inputting a second user instruction carrying size information.
  • the size information may include one or more of the length, width, area, and size ratio of the second image area.
  • the size of the second image area in the image to be displayed may be determined according to the second preset parameter by default, and then the size of the second image area in the image to be displayed is adjusted according to the size information carried in the second user instruction.
  • the above-mentioned first user instruction and the second user instruction may be the same user instruction, and the user instruction carries position information and size information at the same time.
  • the position of the second image area in the image to be displayed can be determined according to the position information
  • the size of the second image area in the image to be displayed can be determined according to the size information.
  • the first user instruction and the second user instruction may also be two different user instructions. As shown in FIG. 5, the user interface 520 only displays the part of the image to be displayed 510 (left figure) that includes the second image area 530. The first user instruction can be a drag instruction: the drag start point and drag end point determine the movement vector of the second image area 530, and the image to be displayed 510 is moved based on the movement vector, so that the second image area moves to the position the user expects.
  • the second user instruction may be a zoom instruction, and the zoom ratio of the image to be displayed 510 may be determined according to the zoom instruction. After the image to be displayed 510 is moved and scaled, the image displayed on the user interface 520 is as shown in the right figure of FIG. 5. Obtaining more environmental information in this way is helpful for making predictions during shooting.
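  • the viewport math implied by these two instructions can be sketched as follows (a simple 2D model with illustrative names; the actual instruction format is not specified in this application).

```python
def apply_drag(offset_xy, drag_start_xy, drag_end_xy):
    """First user instruction: translate the displayed image by the drag movement vector."""
    dx = drag_end_xy[0] - drag_start_xy[0]
    dy = drag_end_xy[1] - drag_start_xy[1]
    return (offset_xy[0] + dx, offset_xy[1] + dy)

def apply_zoom(scale, zoom_ratio, min_scale=0.25, max_scale=4.0):
    """Second user instruction: scale the displayed image, clamped to sane bounds."""
    return max(min_scale, min(max_scale, scale * zoom_ratio))
```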
  • Way 3: Determine the size of the second image area in the image to be displayed based on a second obstacle avoidance strategy of the movable platform.
  • the second obstacle avoidance strategy may adjust the size of the second image area in the image to be displayed according to the orientation and number of obstacles. For example, if there are many obstacles around the movable platform, the second image area in the image to be displayed can be made smaller and the first image area correspondingly larger, so that the user can see more obstacle information, preventing the movable platform from colliding with obstacles during shooting.
  • the above describes how the positional relationship between the first image area and the second image area, the differentiated display of the two areas, and the position and size of the second image area in the image to be displayed can be determined.
  • the image to be displayed can be determined by one or more of the positional relationship of the two areas, the position of the second image area, and the size of the second image area.
  • the image to be displayed includes at least part of image content captured by the first shooting device and all image content captured by the second shooting device.
  • the image to be displayed may be obtained by using the first image captured by the first shooting device, that is, the first image includes all image content captured by the first shooting device and all image content captured by the second shooting device .
  • the first photographing device mounted on the movable platform may be a fisheye camera or other cameras that can cover the photographing area of the second photographing device.
  • the left image is a first image 610 captured by a first camera
  • the right image is a second image 620 captured by a second camera.
  • the shooting area of the first shooting device covers the shooting area of the second shooting device, the first image 610 contains all the image content captured by the second shooting device, that is, all the image content of the second image 620 . Therefore, the image to be displayed can be acquired directly by using the first image.
  • the image to be displayed may be acquired by using the first image captured by the first shooting device and the second image captured by the second shooting device.
  • the first image may at least partially overlap the second image. The so-called "at least partial overlap" may mean, as shown in Figure 6A, that the first image 610 includes all image content of the second image 620; it may also mean, as shown in Figure 6B, that the first image 610 and the second image 620 include the same image content 630.
  • the first image 610 and the second image 620 may also be adjacent to each other.
  • the image to be displayed is acquired based on the image content relationship between the image content captured by the first shooting device and the image content captured by the second shooting device. Or in other words, the image to be displayed is acquired based on the image content relationship between the first image and the second image.
  • Image content relationships may include at least partial overlap, adjacency, or discontinuity.
  • the image content relationship may be determined based on the relative pose relationship between the first camera and the second camera, the field of view of the first camera, and the field of view of the second camera. Based on the relative pose relationship of the two shooting devices and their respective field-of-view sizes, it can be determined whether the field of view of the first shooting device covers that of the second shooting device, or whether the two field-of-view areas partially overlap or are adjacent, so that the image content relationship between the first image and the second image can be judged, as sketched below.
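  • a hedged sketch of this comparison: a full implementation would intersect 3D view frusta derived from the relative pose, but the version below reduces the problem to yaw angles in a single horizontal plane (an assumed simplification, with illustrative names).

```python
def content_relationship(yaw1_deg, fov1_deg, yaw2_deg, fov2_deg):
    """Classify the first camera's view against the second's."""
    lo1, hi1 = yaw1_deg - fov1_deg / 2, yaw1_deg + fov1_deg / 2
    lo2, hi2 = yaw2_deg - fov2_deg / 2, yaw2_deg + fov2_deg / 2
    if lo1 <= lo2 and hi2 <= hi1:
        return "covers"                # first image contains all second-image content
    if hi1 <= lo2 or hi2 <= lo1:
        return "adjacent_or_disjoint"  # no shared content
    return "overlap"                   # partially shared content
```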
  • the second photographing device can be installed on the movable platform through a gimbal, and the relative pose relationship between the first photographing device and the second photographing device can be determined based on the relative pose between the first photographing device and the gimbal, and the attitude of the gimbal.
  • the second shooting device can be fixedly installed on the movable platform, so the relative pose relationship between the first shooting device and the second shooting device can be calibrated in advance.
  • image recognition technology may also be used to determine whether the first image includes the image content of the second image, or whether the first image and the second image have the same image content.
  • for image recognition technology, reference may be made to the related art; this application does not elaborate on it here.
  • the image to be displayed is obtained by performing one or more of superposition processing, splicing processing, cropping processing, image parameter adjustment processing, and de-distortion processing on at least one of the first image and the second image.
  • the first image and the second image may be superimposed.
  • if the first image 610 includes all the image content of the second image 620, the second image can be superimposed on the first image based on the position of the image content of the second image 620 in the first image 610.
  • a superimposed image as shown in FIG. 4B is obtained, wherein the second image is displayed in the second image area 440 ; the first image is displayed in the first image area 430 and the second image area 440 .
  • if the first image 610 and the second image 620 include the same image content 630, then based on the positions of the same image content 630 in the first image 610 and the second image 620 respectively, the first image 610 and the second image 620 are superimposed to obtain the superimposed image shown in FIG. 7.
  • pixel fusion may be performed on the overlapped portion of the first image 610 and the second image 620 to obtain an image with better quality, and the fused image may be displayed as a part of the image to be displayed.
  • if the first image 610 is adjacent to the second image 620, the first image and the second image may be spliced to obtain a spliced image as shown in FIG. 8; both cases are sketched below.
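  • a minimal sketch of the superposition and splicing cases, assuming the second image's position (x, y) in the first image is already known from the pose/FOV step; simple averaging stands in for whatever pixel fusion the platform actually uses.

```python
import cv2
import numpy as np

def superimpose(first, second, x, y):
    """Paste the second image onto the first at (x, y), fusing overlapped pixels."""
    out = first.copy()
    h, w = second.shape[:2]
    region = out[y:y + h, x:x + w].astype(np.uint16)
    fused = (region + second.astype(np.uint16)) // 2  # average fusion of the overlap
    out[y:y + h, x:x + w] = fused.astype(np.uint8)
    return out

def splice(first, second):
    """Adjacent images: concatenate left-right after matching heights."""
    h = min(first.shape[0], second.shape[0])
    return cv2.hconcat([first[:h], second[:h]])
```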
  • the clipping process can be performed on the first image, and the clipping process includes steps as shown in FIG. 9:
  • Step 910: Obtain the first display size of the image to be displayed and/or the second display size corresponding to the image content of the second image in the image to be displayed;
  • Step 920: Crop the first image based on the first display size and/or the second display size.
  • the first display size and the second display size may be preset size parameters, or may be user-adjustable size parameters.
  • the first display size of the image to be displayed may be obtained, and the image to be displayed that satisfies the first display size may be clipped from the first image.
  • the second display size corresponding to the image content of the second image in the image to be displayed may be the size of the second image area in the image to be displayed in the above embodiment. According to the second display size, an image to be displayed that is larger than the second display size can be cut out from the first image.
  • the position of the image content of the second image in the image to be displayed may also be considered when cropping, as sketched below.
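  • the cropping step of FIG. 9 might look like the sketch below: cut a window of the first display size out of the image, keeping the second image's content centered at (cx, cy) inside the crop (all names are this sketch's assumptions).

```python
def crop_to_display(processed, cx, cy, disp_w, disp_h):
    """Cut a disp_w x disp_h window around (cx, cy), clamped to the image bounds."""
    h, w = processed.shape[:2]
    x0 = min(max(cx - disp_w // 2, 0), max(w - disp_w, 0))
    y0 = min(max(cy - disp_h // 2, 0), max(h - disp_h, 0))
    return processed[y0:y0 + disp_h, x0:x0 + disp_w]
```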
  • edge areas of the image are often distorted, or a few pixels at the edge of the image are blurred.
  • edge regions of the first image and/or the second image may be cropped to remove distorted or blurred regions.
  • the edge area can be a preset image area, and those skilled in the art can set the size of the edge area according to actual needs, for example, several pixels.
  • the first photographing device may be a camera for realizing the obstacle avoidance function
  • the second photographing device may be a camera for capturing images and presenting a real-time view to the user.
  • since the two devices serve different purposes, the images captured by them, that is, the first image and the second image, will also differ in image parameters.
  • the image parameters may include one or more of brightness, contrast, resolution, super-resolution, and distortion.
  • since the first image captured by the first photographing device is used to obtain depth information, its resolution may be lower than that of the second image displayed to the user.
  • therefore, image parameter adjustment processing may be performed on the first image and/or the second image, adjusting their image parameters so that the image parameters of the first image and the second image are consistent.
  • for example, an image parameter of one of the first image and the second image may be adjusted according to the corresponding image parameter of the other.
  • the image parameters of the first image are adjusted according to the image parameters of the second image, so that the image parameters of the first image are aligned with the image parameters of the second image.
  • the image parameters of the second image and the image parameters of the first image may be respectively adjusted to preset image parameters.
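  • one possible form of such an adjustment is sketched below (resolution matched by resizing, brightness/contrast by mean/std matching); this is a stand-in for whatever adjustment is actually used, which this application does not prescribe.

```python
import cv2
import numpy as np

def align_first_to_second(first, second):
    """Align the first image's resolution and global brightness/contrast to the second's."""
    first = cv2.resize(first, (second.shape[1], second.shape[0]))  # match resolution
    f = first.astype(np.float32)
    f = (f - f.mean()) / (f.std() + 1e-6)     # normalize the first image
    f = f * second.std() + second.mean()      # re-scale to the second image's statistics
    return np.clip(f, 0, 255).astype(np.uint8)
```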
  • de-distortion processing may be performed on the first image and/or the second image, so as to eliminate imaging distortion caused by inconsistent magnifications at different positions of the optical lens.
  • Those skilled in the art may select a correction algorithm to perform de-distortion processing on the first image and/or the second image according to actual needs, and this application does not discuss further here.
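  • for reference, a minimal de-distortion sketch with OpenCV's pinhole model is given below; the camera matrix and distortion coefficients are placeholder calibration values, and for a fisheye first shooting device cv2.fisheye.undistortImage would be the analogous call.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics
dist = np.array([-0.25, 0.08, 0.0, 0.0])   # assumed k1, k2, p1, p2

img = cv2.imread("first_image.png")        # illustrative path
undistorted = cv2.undistort(img, K, dist)
```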
  • the image to be displayed can be obtained by performing one or more of superposition processing, splicing processing, cropping processing, image parameter adjustment processing, and de-distortion processing on the first image and/or the second image.
  • in some cases, the first image may not be subjected to superposition, splicing, cropping, image parameter adjustment, or de-distortion processing, but may be used directly as the image to be displayed. It can be understood that when there are multiple first shooting devices, the first image may be obtained by splicing the images captured by the multiple first shooting devices.
  • the first shooting device and the second shooting device can be controlled to expose synchronously, so that the first image and the second image are captured by the two shooting devices at the same moment, avoiding recording the states of the same object at different times in the same image.
  • the movable platform can be equipped with a plurality of first photographing devices with different orientations at different positions for realizing the obstacle avoidance function.
  • Multiple first shooting devices with different orientations can realize obstacle avoidance in more directions, and the images collected by multiple first shooting devices with different orientations can be spliced to obtain an image with a larger field of view as shown in Figure 10 .
  • the first image may be an image captured by one of the first shooting devices, or may be an image spliced from images captured by at least two first shooting devices.
  • the first image may be acquired in various ways as required. For example, the images collected by all the first shooting devices may be used; or, to save computing power, only part of the images collected by the first shooting devices may be selected; or the first shooting device whose orientation is the same as that of the second shooting device may be selected; and so on.
  • based on the motion state of the movable platform, the first photographing device used to acquire the first image may be determined from the multiple first photographing devices.
  • the motion state of the movable platform may include one or more of the motion direction, the motion speed, or the relative motion relationship between the movable platform and the shooting target.
  • the first photographing device for acquiring the first image may be determined based on the moving direction of the movable platform. For example, when the movable platform is moving forward, a first photographing device facing the moving direction (forward) may be used to acquire a first image, so that the acquired first image includes environmental information in the moving direction of the movable platform.
  • the first photographing device for acquiring the first image may be determined based on the moving direction and moving speed of the movable platform. For example, when the movable platform is moving forward at high speed, in addition to the first photographing device facing the direction of movement (forward), the first photographing devices facing the left and right sides can also be used, so as to capture a first image with a larger field of view. That is, when the movable platform moves fast, a first image with a relatively large field of view can be acquired; otherwise, a first image with a relatively small field of view can be acquired.
  • the first photographing device for acquiring the first image may be determined based on the relative motion relationship between the movable platform and the photographing target.
  • for example, the photographing target may be a racing car driving at high speed. If the photographing target enters the photographing range of the first photographing device and the second photographing device from the left, the first photographing device facing the photographing target (on the left side) can be used to acquire the first image, so that the acquired first image meets the user's photographing requirements, as sketched below.
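  • a hypothetical selection rule covering the three examples above (direction, speed, and relative motion with the shooting target); the names and the speed threshold are illustrative.

```python
def select_first_devices(direction, speed_mps, target_side=None, fast_mps=5.0):
    """Return the orientations of the first shooting devices to read from."""
    if target_side is not None:
        return [target_side]                 # face the side the shooting target enters from
    if speed_mps >= fast_mps:
        return [direction, "left", "right"]  # fast motion: widen the field of view
    return [direction]                       # default: face the motion direction
```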
  • the image to be displayed, displayed on the user interface, includes at least part of image content captured by the first shooting device and all image content captured by the second shooting device.
  • in addition to presenting all the image content collected by the second shooting device, the user interface also presents at least part of the image content collected by the first shooting device used for obstacle avoidance.
  • compared with the related art, this embodiment displays more image content on the user interface for the user's reference or prediction, thereby broadening the range of field of view the user can obtain from the user interface; this helps adjust the framing and the pose of the lens during shooting and reduces the difficulty of shooting. It provides users with more environmental reference information and assists users in creating with more degrees of freedom.
  • in the related art, the images captured by the first shooting device are only used to obtain depth information for the obstacle avoidance function; in this embodiment, the first shooting device is also used to obtain additional image content, which broadens the user's field of vision and reduces the difficulty of shooting without increasing the cost of the movable platform.
  • Step 1110: Based on the motion state of the movable platform, determine a target first photographing device for capturing the first image from a plurality of first photographing devices;
  • the motion state of the movable platform includes one or more of the motion direction, the motion speed, or the relative motion relationship between the movable platform and the shooting target.
  • Step 1120: Obtain the first image captured by the target first shooting device and the second image captured by the second shooting device;
  • the first image and the second image are respectively images captured by the first photographing device and the second photographing device at the same moment.
  • the second shooting device is installed on the movable platform through a gimbal.
  • Step 1130: Adjust the image parameters of the first image according to the image parameters of the second image, so that the image parameters of the first image are aligned with the image parameters of the second image;
  • the image parameter may be one or more of brightness, contrast, resolution, super-resolution, and distortion.
  • Step 1140: Determine the image content relationship between the first image and the second image based on the pre-calibrated relative pose of the first shooting device and the gimbal, and the attitude of the gimbal;
  • Step 1151: If the first image includes all the image content of the second image, superimpose the second image on the first image based on the position of the image content of the second image in the first image, to obtain a processed image;
  • Step 1152: If the first image and the second image contain the same image content, superimpose the first image and the second image based on the positions of the same image content in each, to obtain a processed image;
  • Step 1153: If the first image is adjacent to the second image, splice the first image and the second image to obtain a processed image;
  • Step 1160: Based on the first display size of the image to be displayed and the second display size and position corresponding to the image content of the second image in the image to be displayed, crop the image to be displayed from the processed image;
  • the image to be displayed includes a first image area and a second image area.
  • the first image area includes at least part of the image content of the first image, and the second image area includes the entire image content of the second image. An end-to-end sketch of this flow is given below.
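  • the following compact sketch chains the helper functions sketched earlier in this description; capture_and_stitch, camera_geometry and locate_content are assumed helpers (not from this application), and error handling and the display step are omitted.

```python
def build_image_to_display(motion_state, first_cams, second_cam, disp_size):
    orientations = select_first_devices(*motion_state)                 # step 1110
    first = capture_and_stitch(first_cams, orientations)               # step 1120 (assumed helper)
    second = second_cam.capture()                                      # step 1120
    first = align_first_to_second(first, second)                       # step 1130
    relation = content_relationship(*camera_geometry(first_cams, second_cam))  # step 1140
    if relation in ("covers", "overlap"):                              # steps 1151/1152
        processed = superimpose(first, second, *locate_content(second, first))
    else:                                                              # step 1153
        processed = splice(first, second)
    cx, cy = locate_content(second, processed)                         # keep the live view in frame
    return crop_to_display(processed, cx, cy, *disp_size)              # step 1160
```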
  • step 1110 may be performed by the movable platform, and steps 1120-1160 may be performed by the movable platform or by a control terminal communicatively connected with the movable platform.
  • the image to be displayed, displayed on the user interface, includes at least part of image content captured by the first shooting device and all image content captured by the second shooting device.
  • in addition to presenting all the image content collected by the second shooting device, the user interface also presents at least part of the image content collected by the first shooting device used for obstacle avoidance.
  • compared with the related art, this embodiment displays more image content on the user interface for the user's reference or prediction, thereby broadening the range of field of view the user can obtain from the user interface; this helps adjust the framing and the pose of the lens during shooting and reduces the difficulty of shooting. It provides users with more environmental reference information and assists users in creating with more degrees of freedom.
  • in the related art, the images captured by the first shooting device are only used to obtain depth information for the obstacle avoidance function; in this embodiment, the first shooting device is also used to obtain additional image content, which broadens the user's field of vision and reduces the difficulty of shooting without increasing the cost of the movable platform.
  • the present application also provides a schematic structural diagram of an image processing device as shown in FIG. 12 .
  • the image processing device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and of course may also include hardware required by other services.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to realize the image processing method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of a movable platform as shown in FIG. 13 .
  • the movable platform includes a fuselage, a power assembly, a first camera, a second camera, a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and of course may also include hardware required by other services.
  • the power assembly is used to drive the movable platform to move in space;
  • the first photographing device is used to collect images for obtaining depth information;
  • the second photographing device is used to collect images displayed to users.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement the image processing method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of a control terminal as shown in FIG. 14 , where the control terminal communicates with a movable platform.
  • the control terminal includes a communication module, a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and of course may also include hardware required by other services.
  • the communication module is used for communicating with the movable platform.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to realize the image processing method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of an image processing system as shown in FIG. 15 .
  • the image processing system includes a control terminal as shown in Figure 14 and a movable platform as shown in Figure 13 communicating with the control terminal; the movable platform is equipped with a first photographing device and a second photographing device; the movable platform includes a processor and a memory, the memory is used to store executable instructions of the processor; when the processor invokes the executable instructions, the image processing method described in any of the above embodiments is implemented.
  • the present application also provides a computer program product, including a computer program; when the computer program is executed by a processor, it can be used to perform the image processing method described in any of the above embodiments.
  • the present application also provides a computer storage medium, where a computer program is stored in the storage medium; when the computer program is executed by a processor, it can be used to perform the image processing method described in any of the above embodiments.
  • since the device embodiments basically correspond to the method embodiments, for related parts, reference may be made to the description of the method embodiments.
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network elements. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

Image processing method and device, movable platform, control terminal and system. A first photographing device and a second photographing device are mounted on the movable platform. An image acquired by the first photographing device is used to obtain depth information, that is, the first photographing device is a photographing device for obstacle avoidance; an image acquired by the second photographing device is used for display to a user. In addition, an image to be displayed on a user interface includes at least part of the image content acquired by the first photographing device and all of the image content acquired by the second photographing device. In this way, in addition to presenting all of the image content acquired by the second photographing device, the user interface also presents at least part of the image content acquired by the photographing device for obstacle avoidance. Consequently, the field of view that a user can obtain from the user interface is widened, the framing and the lens pose can be conveniently adjusted during shooting, and the difficulty of shooting is reduced.
PCT/CN2021/140483 2021-12-22 2021-12-22 Image processing method and device, movable platform, control terminal and system Ceased WO2023115390A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/140483 WO2023115390A1 (fr) 2021-12-22 2021-12-22 Image processing method and device, movable platform, control terminal and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/140483 WO2023115390A1 (fr) 2021-12-22 2021-12-22 Image processing method and device, movable platform, control terminal and system

Publications (1)

Publication Number Publication Date
WO2023115390A1 true WO2023115390A1 (fr) 2023-06-29

Family

ID=86901025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140483 Ceased WO2023115390A1 (fr) 2021-12-22 2021-12-22 Procédé et dispositif de traitement d'image, plateforme mobile, terminal de commande et système

Country Status (1)

Country Link
WO (1) WO2023115390A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960454A (zh) * 2017-03-02 2017-07-18 武汉星巡智能科技有限公司 Depth-of-field obstacle avoidance method and device, and unmanned aerial vehicle
JP2018056940A (ja) * 2016-09-30 2018-04-05 株式会社ニコン Imaging device, display device, electronic apparatus, program, imaging system, display system, and image processing device
CN108521808A (zh) * 2017-10-31 2018-09-11 深圳市大疆创新科技有限公司 Obstacle information display method, display device, unmanned aerial vehicle, and system
CN111835973A (zh) * 2020-07-22 2020-10-27 Oppo(重庆)智能科技有限公司 Photographing method, photographing device, storage medium, and mobile terminal
CN112585939A (zh) * 2019-12-31 2021-03-30 深圳市大疆创新科技有限公司 Image processing method, control method, device, and storage medium


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21968539; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21968539; Country of ref document: EP; Kind code of ref document: A1)