
WO2018188297A1 - Method and device for identifying a three-dimensionally displayed picture - Google Patents

Method and device for identifying a three-dimensionally displayed picture

Info

Publication number
WO2018188297A1
WO2018188297A1 (PCT/CN2017/106811)
Authority
WO
WIPO (PCT)
Prior art keywords
view
pixel
picture
value
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/106811
Other languages
English (en)
Chinese (zh)
Inventor
李聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to JP2019556179A (published as JP2020517025A)
Priority to KR1020197032801A (published as KR20190136068A)
Publication of WO2018188297A1
Current legal status: Ceased

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1415 Digital output to display device; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/356 Image reproducers having separate monoscopic and stereoscopic modes

Definitions

  • the present disclosure relates to the field of image technologies, and in particular, to a method and an apparatus for identifying a three-dimensional display picture.
  • the three-dimensional picture display technology is developing from requiring 3D glasses towards not requiring 3D glasses, that is, towards naked-eye 3D picture display technology, because naked-eye 3D picture display technology frees people from the restraint of 3D glasses and therefore has a great advantage.
  • more and more display devices have both a two-dimensional picture display mode and a three-dimensional picture display mode. Before displaying a picture, the display device needs to select a corresponding display mode according to the type of the picture to be displayed, such as a two-dimensional picture or a three-dimensional picture. At present, a picture to be displayed in three dimensions is manually marked before display; when displaying, the display device uses three-dimensional display for marked pictures and ordinary two-dimensional display for unmarked pictures. Moreover, the identification mark for three-dimensional display is only saved in a database, and the picture itself does not carry a corresponding attribute.
  • as a result, current electronic devices with three-dimensional display are cumbersome to operate when processing pictures for three-dimensional display.
  • embodiments of the present disclosure are expected to provide a method and an apparatus for identifying three-dimensionally displayed pictures, which can automatically recognize pictures for three-dimensional display and improve operation convenience.
  • An embodiment of the present disclosure provides a method for identifying a three-dimensional display picture, the method comprising:
  • extracting a first view and a second view of a target picture;
  • acquiring a difference degree parameter of the first view and the second view, the difference degree parameter being used to indicate a degree of difference between the first view and the second view; and
  • when the difference degree parameter is within a preset range value, identifying the target picture as a picture for three-dimensional display.
  • the method further includes: when the difference degree parameter is not within the preset range value, identifying the target picture as a picture that is not used for three-dimensional display.
  • the acquiring the difference degree parameter of the first view and the second view includes:
  • acquiring, based on preset N deviation values, a pixel difference sum of the first view and the second view for each deviation value, where N is an integer greater than or equal to 1;
  • acquiring a minimum value of the pixel difference sums to obtain a minimum pixel difference sum; and
  • determining the minimum pixel difference sum as the difference degree parameter.
  • the acquiring, for each of the deviation values, the pixel difference sum of the first view and the second view includes:
  • acquiring a first graphic parameter value of each pixel of the first view;
  • acquiring a second graphic parameter value of each pixel of the second view; and
  • calculating the pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value.
  • the calculating the pixel difference sum according to the preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value includes:
  • grouping the i-th pixel in the first view and the j-th pixel in the second view into a pixel pair, wherein the value ranges of i and j are the pixel coordinate value ranges of the first view and the second view respectively, the abscissa of the i-th pixel and the abscissa of the j-th pixel differ by a deviation value d, the ordinates are the same, and the value of d is within a preset deviation interval;
  • for each pixel pair, calculating the square of the difference between the first graphic parameter value of the i-th pixel and the second graphic parameter value of the j-th pixel; and
  • summing the squares of the differences to obtain the pixel difference sum.
  • An embodiment of the present disclosure further provides an identification device for displaying a picture in three dimensions, the device comprising:
  • An extraction module configured to extract a first view and a second view of the target image
  • An acquiring module configured to acquire a difference degree parameter of the first view and the second view, where the difference degree parameter is used to indicate a difference degree between the first view and the second view;
  • the first identifier module is configured to: when the difference degree parameter is within the preset range value, identify the target picture as a picture for three-dimensional display.
  • the device further includes:
  • the second identifier module is configured to: when the difference degree parameter is not within the preset range value, identify the target picture as a picture that is not used for three-dimensional display.
  • the obtaining module includes:
  • a first obtaining submodule configured to acquire, based on the preset N deviation values, a pixel difference sum of the first view and the second view for each deviation value, where N is an integer greater than or equal to 1;
  • a second obtaining submodule configured to obtain a minimum value of each of the pixel difference sums to obtain a minimum pixel difference sum
  • Determining a sub-module set to determine the minimum pixel difference sum as the difference degree parameter.
  • the first obtaining submodule includes:
  • a first acquiring unit configured to acquire a first graphic parameter value of each pixel of the first view image
  • a second acquiring unit configured to acquire a second graphic parameter value of each pixel of the second perspective view
  • the calculating unit is configured to calculate the pixel difference sum according to the preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value.
  • the calculating unit includes:
  • a forming subunit configured to group the i-th pixel in the first view and the j-th pixel in the second view into a pixel pair, wherein the value ranges of i and j are the pixel coordinate value ranges of the first view and the second view respectively, the abscissa of the i-th pixel and the abscissa of the j-th pixel differ by a deviation value d, the ordinates are the same, and the value of d is within a preset deviation interval;
  • a calculating subunit configured to calculate, for each pixel pair, the square of the difference between the first graphic parameter value of the i-th pixel and the second graphic parameter value of the j-th pixel; and
  • a summing subunit configured to sum the squares of the differences to obtain the pixel difference sum.
  • Embodiments of the present disclosure also provide a storage medium configured to store program code for performing the method of any of the above.
  • the method and device for identifying a three-dimensionally displayed picture provided by the embodiments of the present disclosure acquire a first view and a second view of a target picture and, when the difference between the first view and the second view is within a preset range, identify the target picture as a picture for three-dimensional display; in this way, pictures for three-dimensional display can be automatically recognized, which solves the problem of inconvenient operation caused by the inability to automatically recognize pictures for three-dimensional display and improves operation convenience.
  • FIG. 1 is a schematic diagram of a method for identifying a three-dimensional display picture according to Embodiment 1 of the present disclosure;
  • FIG. 2A is a schematic diagram of a method for identifying a three-dimensional display picture according to Embodiment 2 of the present disclosure;
  • FIG. 2B is a schematic diagram of a pixel difference sum acquisition method according to Embodiment 2 of the present disclosure;
  • FIG. 2C is another schematic diagram of the pixel difference sum acquisition method according to Embodiment 2 of the present disclosure;
  • FIG. 3A is a schematic structural diagram of an apparatus for identifying a three-dimensional display picture according to Embodiment 3 of the present disclosure;
  • FIG. 3B is a schematic structural diagram of another apparatus for identifying a three-dimensionally displayed picture according to Embodiment 3 of the present disclosure;
  • FIG. 3C is a schematic structural diagram of an apparatus for acquiring a difference degree parameter according to Embodiment 3 of the present disclosure;
  • FIG. 3D is a schematic structural diagram of an apparatus for acquiring a pixel difference sum according to Embodiment 3 of the present disclosure;
  • FIG. 3E is a schematic structural diagram of an apparatus for calculating a pixel difference sum according to Embodiment 3 of the present disclosure.
  • the suffixes "module", "component" and "unit" used for elements in the following description are merely intended to facilitate the description of the present disclosure and have no specific meaning by themselves.
  • the terminal can be implemented in various forms. For example, the terminals described in the present disclosure may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal; however, those skilled in the art will appreciate that, except for elements specifically intended for mobile purposes, configurations in accordance with embodiments of the present disclosure can also be applied to fixed terminals.
  • This embodiment provides a method for identifying a three-dimensional display picture.
  • the method includes:
  • Step 101 Extract a first view and a second view of the target picture.
  • the target picture may be a picture for two-dimensional display, a picture for three-dimensional display, or a picture for other dimensional display.
  • the first view and the second view are a left eye view and a right eye view.
  • the left eye view and the right eye view are two views of the same scene, viewed from two different perspectives, the left eye and the right eye.
  • the left eye view is a view seen by the left eye in the three-dimensional display effect
  • the right eye view is a view seen by the right eye in the three-dimensional display effect.
  • the left eye view and the right eye view can be laid out in the same picture or in multiple pictures.
  • when the left-eye view and the right-eye view are laid out in the same target picture, they may be arranged in the picture for three-dimensional display in a left-right view format or a top-bottom view format. Of course, they can also be arranged in the same picture according to other formats, and the embodiment of the present disclosure does not limit this. Further, when the left-eye view and the right-eye view are laid out in a plurality of pictures, they may be laid out in two pictures. For a picture for two-dimensional display or a picture for other dimensional display, the first view and the second view are simply two pictures extracted according to the extraction method and do not have the left-eye/right-eye relationship that exists in a picture for three-dimensional display.
  • In one embodiment, the first view and the second view of the target picture may be extracted by the following method: obtain the content of the left half of the target picture to obtain a first view, and obtain the content of the right half of the target picture to obtain a second view, where this first view and second view form the first set of views;
  • obtain the content of the upper half of the target picture to obtain a first view, and obtain the content of the lower half of the target picture to obtain a second view, where this first view and second view form the second set of views.
  • In one embodiment, the content of each half of the target picture may be obtained by copying the target picture to obtain two identical target pictures, recorded as a first target picture and a second target picture; cropping the first target picture left and right to obtain the content of the left and right halves of the target picture; and cropping the second target picture up and down to obtain the content of the upper and lower halves of the target picture.
  • the left-right cropping may be left-right symmetric cutting, and the up-down cropping may be up-down symmetric cutting.
  • In other embodiments, relevant software techniques can be used to obtain the content of each half of the target picture directly from the target picture, which will not be described in detail herein.
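  • For illustration only, the half-splitting extraction described above can be sketched as follows, assuming the target picture has already been decoded into an H x W x 3 numpy array (the function names are illustrative and not part of the disclosure).

```python
import numpy as np

def extract_left_right_views(picture: np.ndarray):
    """Left-right extraction: the left half becomes the first view,
    the right half becomes the second view."""
    w = picture.shape[1]
    half = w // 2
    first_view = picture[:, :half]           # content of the left half
    second_view = picture[:, half:2 * half]  # content of the right half (equal width)
    return first_view, second_view

def extract_top_bottom_views(picture: np.ndarray):
    """Up-down extraction: the upper half becomes the first view,
    the lower half becomes the second view."""
    h = picture.shape[0]
    half = h // 2
    first_view = picture[:half]              # content of the upper half
    second_view = picture[half:2 * half]     # content of the lower half (equal height)
    return first_view, second_view
```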
  • In other embodiments, the first view and the second view may be extracted not strictly according to the content of each half of the target picture; instead, the first view and the second view may be acquired according to their corresponding view formats.
  • the embodiment of the present disclosure does not limit the method for extracting the first view and the second view; the first view and the second view may also be extracted by other methods.
  • In one embodiment, the left-eye view and the right-eye view are arranged in a picture for three-dimensional display in a left-right view format or a top-bottom view format.
  • before extraction it cannot be determined whether the target picture is a picture for three-dimensional display or a picture for two-dimensional display, nor whether it uses a left-right view format or a top-bottom view format. Therefore, when extracting from the target picture, both left-right extraction and up-down extraction need to be performed.
  • the method for extracting the first view and the second view of the target picture can therefore extract two sets of views from one target picture, and both sets of views need to go through the subsequent steps 102 to 104.
  • In one embodiment, in order to reduce the operation steps and improve the operation efficiency, the target picture may be roughly judged first, and only one set of the first view and the second view is extracted after the rough judgment; steps 102 to 104 are then performed only on this set of the first view and the second view.
  • only one set of the first view and the second view may be extracted as follows: before extraction, perform a preliminary analysis of the target picture and roughly determine whether its left and right halves, or its upper and lower halves, are similar; when the left and right halves are similar, determine that the target picture is in the left-right view format and extract the target picture left and right to obtain one set of the first view and the second view; when the upper and lower halves are similar, determine that the target picture is in the top-bottom view format and extract the target picture up and down to obtain one set of the first view and the second view.
  • the target picture may be a picture in the red, green and blue color mode, that is, an RGB (Red Green Blue) picture, or a picture in the hue, saturation and intensity color mode, that is, an HSI (Hue Saturation Intensity) picture, or of course a picture in another color mode. Since different color modes correspond to different picture features, different rough determination methods can be chosen according to the color mode of the target picture.
  • When the target picture is an RGB picture, the target picture may be roughly determined as follows: acquire the gray value of each pixel of the target picture in the red channel to obtain a red gray value matrix; acquire the gray value of each pixel in the green channel to obtain a green gray value matrix; acquire the gray value of each pixel in the blue channel to obtain a blue gray value matrix; then compare the upper and lower halves and the left and right halves of the red gray value matrix, the green gray value matrix and the blue gray value matrix respectively.
  • When the values in the gray value matrices have a top-bottom or left-right symmetry relationship, it is determined that the left and right halves, or the upper and lower halves, of the target picture are similar. The symmetry relationship here means that the gray values are numerically close.
  • the target picture When the target picture is an HSI picture, the target picture may be roughly determined using a method similar to that when the target picture is an RGB picture, and will not be described in detail herein.
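  • The rough left-right / top-bottom similarity judgment described above can be sketched as follows for an RGB picture, comparing the halves of the red, green and blue gray value matrices. The mean-absolute-difference tolerance used to decide that gray values are "numerically close" is an assumed illustrative value, not taken from the disclosure.

```python
import numpy as np

def rough_view_format(picture: np.ndarray, tol: float = 10.0):
    """Return "left-right", "top-bottom" or None depending on which halves of the
    per-channel gray value matrices of the RGB target picture are numerically close."""
    h, w, _ = picture.shape
    half_w, half_h = w // 2, h // 2
    lr_close, tb_close = True, True
    for c in range(3):  # red, green and blue gray value matrices
        channel = picture[:, :, c].astype(np.float64)
        left, right = channel[:, :half_w], channel[:, half_w:2 * half_w]
        top, bottom = channel[:half_h, :], channel[half_h:2 * half_h, :]
        lr_close = lr_close and float(np.mean(np.abs(left - right))) < tol
        tb_close = tb_close and float(np.mean(np.abs(top - bottom))) < tol
    if lr_close:
        return "left-right"
    if tb_close:
        return "top-bottom"
    return None
```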
  • In one embodiment, when only one set of the first view and the second view is extracted and, after performing steps 102 to 104 on that set, the target picture is identified as not being used for three-dimensional display, another set of the first view and the second view may be re-extracted from the target picture and steps 102 to 104 performed again to identify whether the target picture is used for three-dimensional display, so as to improve the recognition accuracy.
  • the three-dimensionally displayed picture in the embodiments of the present disclosure may be a picture for naked-eye three-dimensional display.
  • Step 102 Acquire a difference degree parameter of the first view and the second view.
  • the difference degree parameter is used to indicate the degree of difference between the first view image and the second view.
  • the degree of difference parameter may indicate whether there is a difference between the first view and the second view, and when there is a difference, the size of the difference.
  • the degree of difference parameter may be expressed using a degree of difference in pixels of the first view and the second view in terms of graphical features such as gray value, hue, brightness, or saturation.
  • the degree of difference is the degree of difference between the gray values of the respective pixels of the first view and the second view.
  • the degree of difference parameter may be a number, and the magnitude of the numerical value represents the magnitude of the difference.
  • Exemplarily, when the value range of the difference degree parameter is [0, 3000], 0 can be used to indicate that there is no difference between the first view and the second view; for the remaining values, the difference increases as the value increases.
  • the difference degree parameter may be a value obtained according to a preset operation.
  • the difference degree parameter may be a minimum pixel difference sum of the first view image and the second view image calculated according to the preset operation.
  • the difference parameter can be obtained by acquiring the graphic parameter values of the first view and the second view, and substituting the parameter values into the preset operation.
  • the graphical parameter value is a parameter used to characterize the graphical features of the first perspective view and the second perspective view.
  • the graphic parameter values may be determined based on the color mode of the target picture. Exemplarily, when the target picture is an RGB picture, the graphic parameter value may be the gray value of each color channel; when the target picture is an HSI picture, the graphic parameter value may be the hue, saturation, brightness, and the like.
  • the degree of difference parameter may also be other parameters that can reflect the degree of difference between the first view and the second view.
  • Step 103 When the difference degree parameter is within the preset range value, identify the target picture as a picture for three-dimensional display.
  • the picture for three-dimensional display and the picture for two-dimensional display have the following characteristics in terms of difference:
  • when the target picture is a picture for three-dimensional display, one of the first view and the second view is a left-eye view and the other is a right-eye view. Since the left-eye view and the right-eye view are views of the same scene seen from the two different perspectives of the left eye and the right eye, there is a difference between the first view and the second view in a picture for three-dimensional display, and the difference is within a certain range.
  • In general, a picture for two-dimensional display represents a complete single picture, without the left-eye view and right-eye view of one scene that exist in a picture for three-dimensional display. Therefore, for a picture for two-dimensional display, the difference between the first view and the second view is larger; the difference is greater than the difference between the first view and the second view of a picture for three-dimensional display. In addition, for some special pictures for two-dimensional display, such as a monochrome picture, a picture that is completely left-right symmetric, or a picture that is completely top-bottom symmetric, there is no difference between the first view and the second view.
  • Accordingly, the preset range value may be set for the selected difference degree parameter according to the above difference characteristics of pictures for three-dimensional display.
  • In one embodiment, the preset range value can be obtained through a large number of experiments, so as to ensure the accuracy of the judgment based on the difference degree parameter.
  • Exemplarily, taking the value range [0, 3000] of the difference degree parameter as an example, the preset range value may be set to (0, 1000).
  • when the acquired difference degree parameter is within (0, 1000), the target picture is identified as a picture for three-dimensional display; when the acquired difference degree parameter is not within (0, 1000), the target picture is identified as a picture that is not used for three-dimensional display.
  • In one embodiment, when the target picture is identified as a picture for three-dimensional display, the target picture may be marked and related information may be stored.
  • Exemplarily, the picture may be marked by modifying the picture name.
  • the stored related information may include the view format of the picture, that is, the left-right view format or the top-bottom view format.
  • the picture may also be marked in other manners and other related information may be stored, which is not limited in this embodiment.
  • the target picture is identified as a picture that is not used for three-dimensional display.
  • In one embodiment, when the terminal has the automatic display mode selection function turned on and the terminal receives a target picture (a picture to be displayed), the terminal automatically recognizes the type of the target picture, selects the corresponding display framework, drives the device, and controls the hardware to implement the corresponding display mode; when the terminal has the automatic display mode selection function turned off and the terminal receives a target picture, the terminal does not recognize the type of the target picture, manual recognition is required, and the corresponding display mode is then selected according to the recognition result.
  • the cases in which the terminal receives a target picture include: initializing a picture, a change of picture state, and an operation of displaying a picture.
  • initializing a picture includes loading the picture when the terminal starts up for the first time;
  • a change of picture state includes the creation, replacement or other modification of a picture;
  • an operation of displaying a picture includes the user clicking to display a picture, displaying pictures in batches, and sliding to a certain picture.
  • In one embodiment, for a video, whether the video adopts a three-dimensional video playing mode may be determined by identifying whether a picture included in the video is a picture for three-dimensional display, as sketched below.
  • when the picture included in the video is a picture for three-dimensional display, it is determined to adopt the three-dimensional video playing mode; when the picture included in the video is not a picture for three-dimensional display, it is determined to adopt another video playing mode.
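  • A possible sketch of this video application is shown below: a few frames are sampled and the video is treated as three-dimensional only if every sampled frame is identified as a picture for three-dimensional display. The `is_3d_picture` callback and the sampling policy are assumptions made for illustration.

```python
def choose_video_play_mode(frames, is_3d_picture, sample_count: int = 5) -> str:
    """Decide between the three-dimensional play mode ("3d") and an ordinary
    play mode ("2d") by identifying a few sampled frames of the video."""
    if len(frames) == 0:
        return "2d"
    step = max(1, len(frames) // sample_count)
    sampled = list(frames[::step])[:sample_count]
    return "3d" if all(is_3d_picture(frame) for frame in sampled) else "2d"
```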
  • the method for identifying a three-dimensionally displayed picture provided by the embodiment of the present disclosure acquires a first view and a second view of the target picture and, when the difference between the first view and the second view is within the preset range value, identifies the target picture as a picture for three-dimensional display; in this way, pictures for three-dimensional display can be automatically recognized, which solves the problem of inconvenient operation caused by the inability to automatically recognize pictures for three-dimensional display and achieves the effect of improving operation convenience.
  • This embodiment provides a method for identifying a three-dimensionally displayed picture. Compared with Embodiment 1, this embodiment determines the difference degree parameter of Embodiment 1 as the minimum pixel difference sum and identifies, according to the minimum pixel difference sum, whether the target picture is a picture for three-dimensional display. Referring to FIG. 2A, the method includes:
  • Step 201 Extract a first view and a second view of the target picture.
  • This step is the same as or similar to step 101 and will not be described here.
  • Step 202 Acquire, based on the preset N deviation values, a pixel difference sum of the first view and the second view for each deviation value.
  • the deviation value is used to indicate the parallax range for the left and right eyes of the same scene.
  • the deviation value can be determined based on a large amount of data, so as to obtain a more reasonable value or range of values.
  • Exemplarily, the deviation value can be from 58 mm to 72 mm. Further, in this embodiment, the deviation value can be converted into a number of pixels according to the size of each pixel.
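  • For illustration, the millimetre range can be converted into a pixel count once the physical pixel pitch of the display is known; the 0.26 mm pitch below is an assumed example value, not taken from the disclosure.

```python
def deviation_mm_to_pixels(deviation_mm: float, pixel_pitch_mm: float) -> int:
    """Convert a physical deviation value (in millimetres) into a whole number of pixels."""
    return round(deviation_mm / pixel_pitch_mm)

# With an assumed 0.26 mm pixel pitch, 58 mm and 72 mm map to roughly 223 and 277 pixels.
preset_deviations = [deviation_mm_to_pixels(mm, 0.26) for mm in (58, 65, 72)]
```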
  • N is an integer greater than or equal to 1, that is, at least one deviation value is set.
  • In one embodiment, the more deviation values are used, the more accurate the result; however, too many deviation values increase the computational complexity, so the number of deviation values can be determined according to actual needs.
  • the pixel difference sum of the first view and the second view may be obtained by:
  • Step 2021 Acquire a first graphic parameter value of each pixel of the first view.
  • When the target picture is a picture of the RGB color space, the factors affecting the graphic parameter value of the target picture include the gray value of the target picture in each color channel; when the target picture is a picture of the HSI color space, the factors affecting the graphic parameter value of the target picture are the hue, saturation and brightness of the picture.
  • For pictures of other color spaces, the graphic parameter value of the target picture is affected by other corresponding factors, which will not be described in detail herein.
  • a picture of a commonly used RGB color space is taken as an example for description.
  • the graphic parameter value may include the gray value of at least one color or an average of gray values of several colors.
  • Exemplarily, the first graphic parameter value can be one of the following: the first, second and third types are the gray value of the R, G or B color, respectively; the fourth, fifth and sixth types are the average of the gray values of two of the three colors; the seventh type is the average of the gray values of the R, G and B colors.
  • In general, the more colors are included when calculating the first graphic parameter value, the more accurate the obtained first graphic parameter value.
  • Accordingly, the accuracy of the first, second and third types is similar; the accuracy of the fourth, fifth and sixth types is similar and higher than that of the first, second and third types; the seventh type is the most accurate.
  • Exemplarily, suppose the first view includes 4 pixels and the first graphic parameter value is of the seventh type.
  • the gray values of the three colors red, green and blue included in each pixel are respectively:
  • the 1st pixel: 100, 150 and 200;
  • the 2nd pixel: 150, 120 and 120;
  • the 3rd pixel: 120, 130 and 140;
  • the 4th pixel: 100, 100 and 100.
  • Averaging the three gray values of each pixel gives the first graphic parameter values:
  • the 1st pixel: 150;
  • the 2nd pixel: 130;
  • the 3rd pixel: 130;
  • the 4th pixel: 100.
  • In practice, the number of pixels included in the first view is much larger than 4; it is assumed here that the first view includes 4 pixels only for convenience of description.
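  • The per-pixel averaging in the example above can be reproduced in a few lines; the array below simply contains the four example pixels from the text.

```python
import numpy as np

# The four example pixels of the first view, as (R, G, B) gray values.
first_view_pixels = np.array([
    [100, 150, 200],
    [150, 120, 120],
    [120, 130, 140],
    [100, 100, 100],
], dtype=np.float64)

# Seventh type of graphic parameter value: the average of the R, G and B gray values.
first_graphic_params = first_view_pixels.mean(axis=1)
print(first_graphic_params)  # [150. 130. 130. 100.]
```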
  • Step 2022 Acquire a second graphic parameter value of each pixel of the second perspective view.
  • the second graphical parameter value of each pixel of the second view is obtained by the same method as the first graphical parameter value of each pixel of the first view in step 2021.
  • Exemplarily, the gray values of the three colors red, green and blue included in each pixel of the second view are respectively:
  • the 1st pixel: 110, 160 and 210;
  • the 2nd pixel: 160, 130 and 130;
  • the 3rd pixel: 130, 140 and 150;
  • the 4th pixel: 110, 110 and 110.
  • Averaging the three gray values of each pixel gives the second graphic parameter values:
  • the 1st pixel: 160;
  • the 2nd pixel: 140;
  • the 3rd pixel: 140;
  • the 4th pixel: 110.
  • Step 2023 calculating a pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value.
  • step 2023 can be implemented by the following method:
  • Step 2023a Group the i-th pixel in the first view and the j-th pixel in the second view into a pixel pair.
  • the value ranges of i and j are the pixel coordinate value ranges of the first view and the second view respectively; the abscissa of the i-th pixel and the abscissa of the j-th pixel differ by the deviation value d, the ordinates are the same, and the value of d is within the preset deviation interval.
  • Step 2023b For each pixel pair, calculate the square of the difference between the first graphic parameter value of the i-th pixel and the second graphic parameter value of the j-th pixel.
  • Step 2023c Sum the squares of the differences to obtain a pixel difference sum.
  • the above step 2023a to step 2023c may be expressed by the following two formulas, that is, the preset operation may be the following two formulas:
  • DL(d) = Σ_(x, y) [FL(x, y) - FR(x + d, y)]^2
  • DR(d) = Σ_(x, y) [FL(x, y) - FR(x - d, y)]^2
  • where d is the deviation value; DL(d) and DR(d) are the pixel difference sums corresponding to the deviation value d; FL(x, y) is the first graphic parameter value of the pixel at the coordinate position (x, y) when the first view is placed in a two-dimensional Cartesian coordinate system; FR(x, y) is the second graphic parameter value of the pixel at the coordinate position (x, y) when the second view is placed in the two-dimensional Cartesian coordinate system; and the value range of (x, y) is the pixel coordinate value range of the first view.
  • For pixels whose shifted coordinates fall outside the boundary of a view, a boundary filling method is used to obtain their graphic parameter values; the relevant boundary filling methods are not described in detail here.
  • Since the target picture may be a picture for three-dimensional display, and the correspondence between the first view and the second view on the one hand and the left-eye view and the right-eye view on the other hand is not determined, when calculating the pixel difference sums it is necessary to keep the position of the first view in the two-dimensional Cartesian coordinate system fixed, move the second view along the positive half of the x-axis by the deviation value to obtain one pixel difference sum, and then move the second view along the negative half of the x-axis by the deviation value to obtain another pixel difference sum.
  • Thus, N values of d yield 2N pixel difference sums.
  • N may be set to other values depending on the number of pixels included in the target picture. For example, when N is equal to 3, that is, when d takes three values, six pixel difference sums can be obtained by the above method.
  • Step 203 Acquire a minimum value of each pixel difference sum, obtain a minimum pixel difference sum, and determine a minimum pixel difference sum as a difference degree parameter.
  • the pixel difference sums are used to indicate the degree of difference between the first view and the second view. In one embodiment, for a picture for three-dimensional display, the first view and the second view are two views of the same scene from different perspectives, and the difference between them is caused only by parallax, so the difference between the first view and the second view is small. Therefore, taking the minimum of the pixel difference sums as the difference degree parameter reduces the influence of how the first view and the second view are selected, thereby improving the accuracy of picture recognition.
  • Exemplarily, if the two pixel difference sums obtained are 500 and 2000, the minimum value is 500; 500 is then the minimum pixel difference sum and is determined as the difference degree parameter.
  • In other embodiments, the maximum of the pixel difference sums may also be used as the difference degree parameter, or the average of multiple pixel difference sums may be used as the difference degree parameter, or multiple pixel difference sums may be used simultaneously as the difference degree parameter.
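  • A minimal numpy sketch of steps 202 and 203 is given below: for each preset deviation value d, the second view is shifted by +d and -d pixels along the x-axis, out-of-range positions are handled with edge (boundary) filling, the squared differences of the graphic parameter values are summed, and the minimum of the resulting 2N pixel difference sums is taken as the difference degree parameter. The edge-padding choice and the function names are assumptions made for illustration.

```python
import numpy as np

def pixel_difference_sum(fl: np.ndarray, fr: np.ndarray, d: int) -> float:
    """Sum over all pixel pairs of the squared difference between FL(x, y) and
    FR(x + d, y); d may be negative. fl and fr hold the graphic parameter values
    of the first view and the second view, one value per pixel."""
    w = fl.shape[1]
    pad = abs(d)
    # Boundary filling: extend the second view at its left/right edges so that
    # shifted coordinates outside the view still have a graphic parameter value.
    fr_padded = np.pad(fr, ((0, 0), (pad, pad)), mode="edge")
    fr_shifted = fr_padded[:, pad + d: pad + d + w]
    return float(np.sum((fl - fr_shifted) ** 2))

def minimum_pixel_difference_sum(fl: np.ndarray, fr: np.ndarray, deviations) -> float:
    """Difference degree parameter: the minimum of the 2N pixel difference sums
    obtained from the N preset deviation values and the two shift directions."""
    sums = []
    for d in deviations:
        sums.append(pixel_difference_sum(fl, fr, d))   # shift along the positive x-axis
        sums.append(pixel_difference_sum(fl, fr, -d))  # shift along the negative x-axis
    return min(sums)
```

  • The returned value would then be compared with the preset range value, for example the (0, 1000) interval used as an example in Embodiment 1, to decide whether the target picture is identified as a picture for three-dimensional display.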
  • Step 204 When the minimum pixel difference sum is within the preset range value, the target picture is identified as a picture for three-dimensional display.
  • Exemplarily, suppose the preset range value is 400 to 3000; since 500 is within the preset range value, the target picture is identified as a picture for three-dimensional display.
  • In one embodiment, when multiple difference degree parameters are used simultaneously, the target picture is identified as a picture for three-dimensional display only when all the difference degree parameters are within the preset range value.
  • Step 205 When the minimum pixel difference sum is not within the preset range value, the target picture is identified as a picture that is not used for three-dimensional display.
  • Exemplarily, suppose the preset range value is 1000 to 3000; since 500 is not within the preset range value, the target picture is identified as a picture that is not used for three-dimensional display.
  • In one embodiment, when multiple difference degree parameters are used simultaneously and not all of them are within the preset range value, the target picture is identified as a picture that is not used for three-dimensional display.
  • In other embodiments, the difference degree parameter is not limited to the minimum pixel difference sum and may also be another picture-related parameter, which is not described in detail in the embodiments of the present disclosure.
  • the method for identifying a three-dimensionally displayed picture provided by the embodiment of the present disclosure acquires a first view and a second view of the target picture and, when the difference between the first view and the second view is within the preset range value, identifies the target picture as a picture for three-dimensional display; in this way, pictures for three-dimensional display can be automatically recognized, which solves the problem of inconvenient operation caused by the inability to automatically recognize pictures for three-dimensional display and improves operation convenience.
  • the present embodiment provides an identification device 300 for displaying a picture in three dimensions.
  • the device 300 includes an extraction module 301, an acquisition module 302, and a first identification module 303.
  • the extraction module 301 is configured to extract a first view and a second view of the target picture.
  • the obtaining module 302 is configured to acquire a difference degree parameter of the first view image and the second view image, where the difference degree parameter is used to indicate the degree of difference between the first view image and the second view view.
  • the first identifier module 303 is configured to identify the target image as a picture for three-dimensional display when the difference degree parameter is within the preset range value.
  • the apparatus 300 further includes: a second identifier module 304 configured to: when the difference degree parameter is not within the preset range value, identify the target picture as a picture that is not used for three-dimensional display.
  • the obtaining module 302 includes:
  • the first obtaining sub-module 3021 is configured to acquire, according to the preset N deviation values, a pixel difference sum of the first view image and the second view image for each deviation value, where N is an integer greater than or equal to 1.
  • the second obtaining sub-module 3022 is configured to obtain a minimum value of each pixel difference sum to obtain a minimum pixel difference sum.
  • the determination sub-module 3023 is arranged to determine the minimum pixel difference sum as the difference degree parameter.
  • the first obtaining submodule 3021 includes:
  • the first obtaining unit 3021a is configured to acquire a first graphic parameter value of each pixel of the first view.
  • the second obtaining unit 3021b is configured to acquire a second graphic parameter value of each pixel of the second perspective view.
  • the calculating unit 3021c is configured to calculate the pixel difference sum according to the preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value.
  • the calculating unit 3021c includes:
  • the forming subunit 3021c1 is configured to group the i-th pixel in the first view and the j-th pixel in the second view into a pixel pair.
  • the value ranges of i and j are the pixel coordinate value ranges of the first view and the second view respectively; the abscissa of the i-th pixel and the abscissa of the j-th pixel differ by the deviation value d, the ordinates are the same, and the value of d is within the preset deviation interval.
  • the calculating subunit 3021c2 is configured to calculate, for each pixel pair, the square of the difference between the first graphic parameter value of the i-th pixel and the second graphic parameter value of the j-th pixel.
  • the summation sub-unit 3021c3 is arranged to sum the squares of the differences to obtain a pixel difference sum.
  • This embodiment is a device embodiment corresponding to Embodiment 1 and Embodiment 2.
  • the identification device for a three-dimensionally displayed picture provided by the embodiment of the present disclosure acquires a first view and a second view of the target picture and, when the difference between the first view and the second view is within the preset range value, identifies the target picture as a picture for three-dimensional display; in this way, pictures for three-dimensional display can be automatically recognized, which solves the problem of inconvenient operation caused by the inability to automatically recognize pictures for three-dimensional display and achieves the effect of improving operation convenience.
  • embodiments of the present disclosure can be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the method for identifying a three-dimensionally displayed picture provided by the embodiments of the present disclosure acquires a first view and a second view of the target picture and, when the difference between the first view and the second view is within the preset range value, identifies the target picture as a picture for three-dimensional display; in this way, pictures for three-dimensional display can be automatically recognized, which solves the problem of inconvenient operation caused by the inability to automatically recognize pictures for three-dimensional display and achieves the effect of improving operation convenience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a method and a device for identifying a three-dimensionally displayed picture, the method comprising the following steps: extracting a first view and a second view of a target picture (101); acquiring a difference degree parameter of the first view and the second view, the difference degree parameter being used to indicate a degree of difference between the first view and the second view (102); and when the difference degree parameter is within a preset range value, identifying the target picture as a picture for three-dimensional display (103).
PCT/CN2017/106811 2017-04-12 2017-10-19 Procédé et dispositif d'identification d'une image affichée en trois dimensions Ceased WO2018188297A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019556179A JP2020517025A (ja) 2017-04-12 2017-10-19 3次元表示用画像の識別方法及び装置
KR1020197032801A KR20190136068A (ko) 2017-04-12 2017-10-19 3d 디스플레이 이미지용 식별 방법 및 장치

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710236021.7 2017-04-12
CN201710236021.7A CN108694031B (zh) 2017-04-12 2017-04-12 一种用于三维显示图片的识别方法及装置

Publications (1)

Publication Number Publication Date
WO2018188297A1 true WO2018188297A1 (fr) 2018-10-18

Family

ID=63793102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/106811 Ceased WO2018188297A1 (fr) 2017-04-12 2017-10-19 Procédé et dispositif d'identification d'une image affichée en trois dimensions

Country Status (4)

Country Link
JP (1) JP2020517025A (fr)
KR (1) KR20190136068A (fr)
CN (1) CN108694031B (fr)
WO (1) WO2018188297A1 (fr)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120115825A (ko) * 2011-04-11 2012-10-19 주식회사 케이티 이동 단말의 3d 객체 업데이트 방법
CN102395037B (zh) * 2011-06-30 2014-11-05 深圳超多维光电子有限公司 一种格式识别方法及识别装置
CN102710953A (zh) * 2012-05-08 2012-10-03 深圳Tcl新技术有限公司 自动识别3d视频播放模式的方法和装置
CN104767985A (zh) * 2014-01-07 2015-07-08 冠捷投资有限公司 使用区域分布分析以自动检测三维图像格式的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724521A (zh) * 2011-03-29 2012-10-10 青岛海信电器股份有限公司 立体显示方法及装置
CN103179426A (zh) * 2011-12-21 2013-06-26 联咏科技股份有限公司 自动检测图像格式的方法与应用其的播放方法
CN103051913A (zh) * 2013-01-05 2013-04-17 北京暴风科技股份有限公司 一种3d片源格式自动识别的方法
CN104657966A (zh) * 2013-11-19 2015-05-27 江苏宜清光电科技有限公司 一种3d格式分析方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120387924A (zh) * 2025-06-26 2025-07-29 江苏奥斯汀光电科技股份有限公司 面向电子照片墙风格迁移一致性的视频生成方法

Also Published As

Publication number Publication date
KR20190136068A (ko) 2019-12-09
JP2020517025A (ja) 2020-06-11
CN108694031A (zh) 2018-10-23
CN108694031B (zh) 2021-05-04

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17905815

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019556179

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20197032801

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 17905815

Country of ref document: EP

Kind code of ref document: A1