
WO2023240898A1 - Display device and under-screen photographing processing method

Display device and under-screen photographing processing method

Info

Publication number
WO2023240898A1
WO2023240898A1, PCT/CN2022/129156, CN2022129156W
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
screen
under
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2022/129156
Other languages
English (en)
Chinese (zh)
Inventor
陈昊
朱家兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Govisionox Optoelectronics Co Ltd
Original Assignee
Kunshan Govisionox Optoelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Govisionox Optoelectronics Co Ltd filed Critical Kunshan Govisionox Optoelectronics Co Ltd
Publication of WO2023240898A1 publication Critical patent/WO2023240898A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction

Definitions

  • the present application relates to the field of display technology, and in particular, to a display device and an under-screen photographing processing method.
  • the front camera of the full-screen display device is set in the transparent display area of the display device.
  • When the front camera takes pictures, light enters the front camera only after passing through the transparent display area, resulting in poor image quality and affecting the user experience.
  • the present application provides a display device and an under-screen photographing processing method to improve the quality of photographed images of a first camera disposed under the screen.
  • embodiments of the present application provide an under-screen photographing processing method, applied to a display device.
  • the display device includes a first camera disposed under the screen and a second camera disposed not under the screen.
  • the under-screen photographing processing method includes:
  • embodiments of the present application also provide a display device, including:
  • a first camera arranged under the screen, a second camera arranged not under the screen, and an under-screen photo processing device;
  • the first image acquisition module is used to acquire the first image taken by the user using the first camera
  • a first image restoration determination module configured to use a first image restoration model to process the first image to obtain a first restored image; wherein the first image restoration model is determined based on the difference between images captured by the first camera and the second camera under a first preset condition.
  • the display device further includes a display panel and a third camera, the light incident surface of the third camera is provided with a transparent shield;
  • the under-screen photo processing device also includes:
  • In the embodiment of the present application, a second camera that is not under the screen is provided, and the first image restoration model is determined based on the difference between images captured by the first camera and the second camera under the first preset condition, so that the quality of images captured by the first camera can be restored to a level close to that of images captured by the second camera.
  • The display device system uses the first image restoration model to process the first image to obtain the first restored image. Since the quality of images captured by the second camera is at a normal level, and the quality of the first restored image is close to that level, the first restored image has better quality, which improves the photographing quality of the first camera and the user experience.
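  • The patent text does not specify the internal form of the first image restoration model; the following minimal Python sketch assumes it is reduced to per-pixel gain and offset coefficient arrays (an illustrative simplification suggested by the coefficient-matrix terminology used later in the text), and shows only how such a model would be applied to a first image.

```python
import numpy as np

def apply_restoration_model(first_image: np.ndarray,
                            gain: np.ndarray,
                            offset: np.ndarray) -> np.ndarray:
    """Apply a learned per-pixel gain/offset correction to an under-screen capture.

    first_image : HxWx3 float array in [0, 1] captured by the first (under-screen) camera.
    gain, offset: HxWx3 coefficient arrays standing in for the first image
                  restoration model learned from first/second camera image pairs.
    """
    restored = first_image * gain + offset
    return np.clip(restored, 0.0, 1.0)
```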
  • Figure 2 is a schematic diagram of the non-light emitting side of a display device provided by an embodiment of the present application
  • Figure 3 is a schematic diagram of an under-screen photographing processing method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the non-light-emitting side of yet another display device provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of yet another under-screen photographing processing method provided by an embodiment of the present application.
  • Figure 9 is a process diagram of the generation of the front under-screen image restoration coefficient matrix provided by an embodiment of the present application.
  • The under-screen front camera of an existing full-screen display device cannot capture external images well, because the transparent display area corresponding to the under-screen front camera still contains opaque parts; as a result, the captured images suffer from problems such as darkness, blur, and color stripes. Improving the photographing quality of the under-screen front camera has therefore become a priority in the development of full-screen displays.
  • the first image restoration model is determined based on the difference in images captured by the first camera and the second camera under a first preset condition.
  • The difference between images of the same object captured by the first camera 21 and the second camera 22 at the same position and in the same environment is mainly caused by the occlusion of the light incident surface of the first camera 21 by the transparent display area 11. An artificial intelligence algorithm is used to identify this difference, and through learning and training a first image restoration model is obtained, which restores the image quality of images captured by the first camera 21 to a level close to that of images captured by the second camera 22.
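  • The "learning and training" step is not specified in the text; as an illustrative stand-in, the sketch below fits per-pixel gain and offset coefficients by closed-form least squares over a set of aligned image pairs taken by the first and second cameras (function and parameter names are hypothetical).

```python
import numpy as np

def fit_restoration_coefficients(under_screen_imgs, reference_imgs, eps=1e-6):
    """Fit per-pixel gain/offset so that gain * under_screen + offset ~ reference.

    under_screen_imgs, reference_imgs: lists of aligned HxWx3 float arrays of the
    same scenes captured by the first (under-screen) and second (reference) cameras.
    """
    x = np.stack(under_screen_imgs, axis=0)  # shape (N, H, W, 3)
    y = np.stack(reference_imgs, axis=0)
    x_mean, y_mean = x.mean(axis=0), y.mean(axis=0)
    cov = ((x - x_mean) * (y - y_mean)).mean(axis=0)
    var = ((x - x_mean) ** 2).mean(axis=0)
    gain = cov / (var + eps)          # per-pixel, per-channel slope
    offset = y_mean - gain * x_mean   # per-pixel, per-channel intercept
    return gain, offset
```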
  • the internal structure of the transparent display area 11 meets the following requirements: (1) the non-metallic film layers are all made of high-transmittance materials, and the conductive traces are made of transparent materials (ITO, etc.); (2) where the anode is made of a non-transparent material, its opaque area is reduced; (3) the cathode is a low-transmittance metal material, and optionally precision metal mask evaporation or laser etching is used to change the cathode into a patterned cathode with higher transmittance. The patterned cathode openings are preferably irregular shapes such as circles or ellipses and are arranged in an irregular manner, so as to avoid light diffraction caused by the screen body.
  • the internal structure of the transparent display area 11 meets the following requirements: (1) the pixel definition layer openings and the anodes are irregular shapes such as circles or ellipses, and are arranged in an irregular manner; (2) at least part of the conductive wiring is arranged in the form of curves; (3) the orthographic projection of each TFT device on the substrate at least partially overlaps the orthographic projection of the corresponding sub-pixel anode, and the outer contour of the orthographic projection of each TFT device on the substrate is at least partially a curve, thereby reducing the light diffraction produced by the TFT device; (4) the pixel circuit in the transparent display area 11 may use one TFT device to drive multiple sub-pixels, that is, the number of TFT devices in the transparent display area 11 is reduced and the overall opacity of the TFT device area is reduced, and in this case the TFT device is placed under a sub-pixel anode with a larger area.
  • In the embodiment of the present application, the first image restoration model restores the image quality of images captured by the first camera 21 to a level close to that of images captured by the second camera 22.
  • The display device system uses the first image restoration model to process the first image to obtain the first restored image. Since the quality of images captured by the second camera 22 is at a normal level, and the quality of the first restored image is close to that level, the image quality of the first restored image is better, which improves the photographing quality of the first camera 21 and the user experience.
  • FIG. 5 is a schematic diagram of yet another under-screen photographing processing method provided by an embodiment of the present application. Referring to Figures 1, 4 and 5, the method includes:
  • the first image restoration model is determined based on the difference in images captured by the first camera and the second camera under a first preset condition.
  • the first image conversion model is determined based on the difference between images captured by the first camera and the third camera under a second preset condition
  • the second image restoration model is determined based on the difference between images captured by the second camera and the third camera under a third preset condition.
  • When the user is satisfied with the quality of the first restored image, the user exits the shooting interface of the first camera. When the user is not satisfied with the quality of the first restored image, the user can click the image optimization option of the shooting interface, and the display device system then receives the image optimization instruction.
  • An artificial intelligence algorithm can be used to identify the difference between images captured by the first camera 21 and the third camera 23, and the first image conversion model can be obtained through learning and training; the first image conversion model processes images captured by the first camera 21 into images whose quality level is close to that of images captured by the third camera 23.
  • Similarly, the artificial intelligence algorithm can be used to identify the difference between images captured by the third camera 23 and the second camera 22, and the second image restoration model can be obtained through learning and training; the second image restoration model restores images captured by the third camera 23 to an image quality level close to that of images captured by the second camera 22.
  • the first image conversion model and the second image restoration model are stored in the display device system.
  • The film-layer structures on the light incident surface sides of the third camera and the first camera are similar, so the difference between images captured by the third camera and the first camera is small; the difference is mainly caused by the picture displayed in the transparent display area at the light incident surface of the first camera. The first image conversion model can therefore process this difference in a targeted manner, so that the shooting quality of the first camera is restored to be the same as or close to that of the third camera.
  • the structural difference between the third camera and the second camera is that the light incident surface of the third camera is equipped with a transparent shield.
  • the difference in the images captured by the third camera and the second camera is mainly caused by the transparent shield.
  • the second image restoration model can process this difference in a targeted manner, so that the image quality of images captured by the third camera is restored to be the same as or close to the image quality of images captured by the second camera.
  • This embodiment uses the dual functions of the first image conversion model and the second image restoration model to perform image quality restoration processing on the first image.
  • Because there are two models, the joint image quality restoration has more constraints, and the first image conversion model and the second image restoration model each restore a different image difference, which yields a better image quality restoration effect on the first image. Therefore, in this embodiment of the present application, it is preferable to provide both the second camera 22 and the third camera 23, and to use the first image conversion model and the second image restoration model to process the first image.
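  • The chaining order of the two models is the only detail the text fixes; the following sketch shows that order, with the trained models represented as opaque callables (their internal form is not specified in the patent and is assumed here purely for illustration).

```python
import numpy as np

def optimize_first_image(first_image: np.ndarray,
                         conversion_model,
                         restoration_model) -> np.ndarray:
    """Two-stage processing used on the image-optimization path.

    conversion_model : callable mapping a first-camera image to one whose quality
                       is close to the third (shielded, not-under-screen) camera.
    restoration_model: callable mapping a third-camera-like image to one whose
                       quality is close to the second (unobstructed) camera.
    """
    first_converted = conversion_model(first_image)        # camera 1 -> camera 3 quality
    second_restored = restoration_model(first_converted)   # camera 3 -> camera 2 quality
    return second_restored
```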
  • the first camera 21 is disposed on the non-light-emitting side of the transparent display area 11 of the display panel 10, the light incident surface of the first camera 21 is adjacent to the transparent display area 11, the transparent shielding member 30 has the same structure as the transparent display area 11, and the third camera 23 and the first camera 21 are the same type of camera.
  • the display panel 10 of this embodiment does not include a cover.
  • the third camera 23 can be a rear camera of the display device.
  • the transparent shielding member 30 has the same structure as the transparent display area 11, except that it does not display a picture.
  • the transparent shielding member 30 and the transparent display area 11 may include film layers such as pixel circuits, light-emitting layers, encapsulation layers, touch layers, and polarizers.
  • Figure 6 is a schematic diagram of the production of an organic light-emitting display panel provided by an embodiment of the present application. As shown in Figure 6, when the screen layout is designed, the parts of the substrate that would originally be cut away are used to provide multiple transparent sample areas 40 without affecting the product layout rate.
  • The middle part of each transparent sample area 40 is a transparent shielding member 30 whose appearance and structure are the same as those of the transparent display area 11. Each transparent sample area 40 is placed immediately adjacent to the transparent display area 11 of the corresponding display panel 10, so as to reduce the difference in light transmittance and light diffraction between the transparent shielding member 30 and the transparent display area 11 caused by non-uniform process film formation. In addition to the transparent shielding member 30 in the middle, the transparent sample area 40 also includes an installation area surrounding the middle part; the shape of the installation area is not limited, so that the transparent sample area 40 can be easily placed into the rear lens module of the display device for installation.
  • During manufacturing, the transparent sample area 40 and the corresponding display panel 10 are not separated, and part of the module process is carried out on them as a whole; before the module cover is attached, the transparent sample area 40 and the display panel 10 are separated by cutting, and only the display panel 10 is attached to the cover. Finally, the transparent sample area 40 without the cover plate and the display panel 10 with the cover plate attached are used together in the subsequent assembly of the same display device.
  • The structure of the transparent shielding member 30 is the same as that of the transparent display area 11, and the third camera 23 and the first camera 21 are the same type of camera, so that the film layer structures on the light incident surface sides of the third camera 23 and the first camera 21 are the same. This further reduces the difference between images captured by the third camera 23 and the first camera 21, so that the first image conversion model can better restore the image quality of the first camera 21 to be the same as or close to the image quality captured by the third camera 23.
  • the second camera 22 and the third camera 23 may be the same type of camera, and the third camera 23 and the second camera 22 may both be disposed on the same side of the display panel; for example, they may both be disposed on the non-light-emitting side of the display panel.
  • FIG. 7 is a schematic diagram of another under-screen photographing processing method provided by an embodiment of the present application. Referring to Figure 7, the method includes:
  • If, based on visual experience, the user is satisfied with the quality of the first calibration restored image, the user chooses to exit the shooting interface, and the display device system updates the second image restoration model with the first rear under-screen image restoration coefficient calibration matrix Rear_Re_N+1; if not satisfied, the user can choose the continue-image-calibration option.
  • After receiving the first continue-image-calibration instruction, the display device system uses an artificial intelligence algorithm to iteratively identify the image difference between the first calibration image Normal_Cal_1 and the second calibration image Rear_Cal_1, learns and trains an optimized first rear under-screen image restoration coefficient calibration matrix Rear_Re_N+1, and reprocesses the first converted image to generate a first calibration restored image close to the quality level of images captured by the second camera 22, repeating until the user is satisfied.
  • During image calibration, this embodiment keeps the first image conversion model between the first camera 21 and the third camera 23 unchanged and only calibrates and optimizes the second image restoration model between the third camera 23 and the second camera 22, which reduces the image calibration workload of the display device system and improves its image calibration efficiency.
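  • A minimal sketch of this calibration loop is given below; the refinement step, the matrix application, and the user-satisfaction check are represented as callables because the text does not specify them, and the iteration cap is an added safeguard that is not part of the patent.

```python
def calibrate_restoration_stage(first_converted, normal_cal, rear_cal,
                                rear_re, refine_step, apply_matrix,
                                user_satisfied, max_rounds=10):
    """Keep the conversion model frozen and refine only the rear under-screen
    restoration calibration matrix (Rear_Re) from the calibration image pair.

    normal_cal, rear_cal: calibration shots from the second and third cameras.
    refine_step         : one optimization step on the matrix (stand-in for the
                          unspecified artificial intelligence algorithm).
    apply_matrix        : applies the matrix to the converted first image.
    user_satisfied      : returns True when the user exits the shooting interface
                          instead of requesting further calibration.
    """
    first_cal_restored = apply_matrix(first_converted, rear_re)
    rounds = 0
    while not user_satisfied(first_cal_restored) and rounds < max_rounds:
        rear_re = refine_step(rear_re, normal_cal, rear_cal)     # Rear_Re_N+1
        first_cal_restored = apply_matrix(first_converted, rear_re)
        rounds += 1
    return rear_re, first_cal_restored
```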
  • the under-screen photo processing method of the display device also includes:
  • The display device system displays the second calibration restored image to the user and, after receiving the second image continue-calibration instruction clicked and input by the user, iteratively optimizes the front under-screen image restoration coefficient calibration matrix and uses the iteratively optimized front under-screen image restoration coefficient calibration matrix to reprocess the first image to obtain the second calibration restored image, until the user no longer clicks to input the second image continue-calibration instruction;
  • the first image restoration model is updated according to the front under-screen image restoration coefficient calibration matrix.
  • the user can select the second image calibration option to perform a higher level image calibration.
  • the display device system receives the user's second image calibration instruction.
  • the user can also directly select the second image calibration option when the user is not satisfied with the quality of the first restored image or the second restored image.
  • FIG. 8 is a schematic diagram of an under-screen image calibration process provided by an embodiment of the present application.
  • The display device system starts the second camera 22 after receiving the second image calibration instruction, and a crisscrossing fine grid coordinate system and a central red point appear in the shooting interface.
  • the user uses the fine grid coordinate system and the central red point to locate the center and outer contour of the second calibration object in the shooting interface and capture the fourth calibration image Normal_Cal_2.
  • The display device system uses the front under-screen image restoration coefficient calibration matrix Front_Re_N+1 to process the first image, and generates and displays a second calibration restored image whose quality is close to the image quality level of the second camera 22. The user visually evaluates the second calibration restored image quality; if satisfied, the user chooses to exit the shooting interface, and the display device system updates the first image restoration model with the front under-screen image restoration coefficient calibration matrix Front_Re_N+1. If not satisfied, the user can select the second image continue-calibration option multiple times, and the display device system uses an artificial intelligence algorithm to iteratively identify the image difference between the third calibration image Front_Cal_2 and the fourth calibration image Normal_Cal_2, learns and trains an optimized front under-screen image restoration coefficient calibration matrix Front_Re_N+1, and reprocesses the first image to generate a second calibration restored image close to the quality level of images captured by the second camera 22, until the user is satisfied with the second calibration restored image quality.
  • the first image restoration model can be updated according to the optimized front screen under-screen image restoration coefficient calibration matrix Front_Re_N+1.
  • the under-screen image calibration process also includes:
  • After receiving the user's second image calibration instruction, the display device system starts the third camera 23 and obtains the fifth calibration image obtained by the user using the third camera 23 to capture the second calibration object;
  • The display device system can also control the third camera 23 to start, with a crisscrossing fine grid coordinate system and a central red point appearing in the shooting interface.
  • The user uses the fine grid coordinate system and the central red point to locate the center and outer contour of the second calibration object in the shooting interface, so that its position in the shooting interface is consistent with its position when the second camera 22 took its picture, and the fifth calibration image Rear_Cal_2 is captured.
  • The display device system uses an artificial intelligence algorithm to identify the image difference between the fifth calibration image Rear_Cal_2 and the fourth calibration image Normal_Cal_2, and learns and trains the second rear under-screen image restoration coefficient calibration matrix Rear_Re_N+2; based on the image difference between the third calibration image Front_Cal_2 and the fifth calibration image Rear_Cal_2, it learns and trains the front under-screen image conversion coefficient calibration matrix Front_Con_N+1.
  • The display device system uses the front under-screen image conversion coefficient calibration matrix Front_Con_N+1 to process the first image to obtain the second converted image, and uses the second rear under-screen image restoration coefficient calibration matrix Rear_Re_N+2 to process the second converted image to generate a third calibration restored image close to the image quality level of the second camera 22. The user visually evaluates the third calibration restored image quality; if satisfied, the user chooses to exit the shooting interface, and the display device system updates the first image conversion model and the second image restoration model with the front under-screen image conversion coefficient calibration matrix Front_Con_N+1 and the second rear under-screen image restoration coefficient calibration matrix Rear_Re_N+2, respectively. If not satisfied, the user can select the second image continue-calibration option multiple times; the display device system receives the second image continue-calibration instruction, uses an artificial intelligence algorithm to iteratively identify the image differences between the third calibration image Front_Cal_2 and the fifth calibration image Rear_Cal_2 and between the fifth calibration image Rear_Cal_2 and the fourth calibration image Normal_Cal_2, and reprocesses the first image with the optimized matrices until the user is satisfied.
  • This embodiment can receive the second image calibration instruction clicked and input by the user in real time, capture images of the second calibration object under the same environmental conditions through the first camera 21, the second camera 22, and the third camera 23, and use the artificial intelligence algorithm to update and optimize the first image conversion model and the second image restoration model based on the differences between those images, so that the quality of images captured by the first camera 21 can be better restored and improved.
  • using the first image restoration model to process the first image to obtain the first restored image includes:
  • using a first image conversion model to process the first image to obtain a first converted image, and using a second image restoration model to process the first converted image to obtain a second restored image includes:
  • the front under-screen image conversion coefficient matrix corresponding to the current shooting environmental conditions is selected through the first image conversion model, and the rear under-screen image restoration coefficient matrix corresponding to the current shooting environmental conditions is selected through the second image restoration model; where,
  • the first image conversion model includes a plurality of front-screen under-screen image conversion coefficient matrices corresponding to different environmental conditions, and the second image restoration model includes a plurality of rear-screen under-screen image restoration coefficient matrices corresponding to different environmental conditions.
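  • Representing each model as a set of coefficient matrices keyed by shooting environment can be sketched as a simple lookup; the dictionary keys and the bucketing of environmental conditions below are illustrative assumptions, not taken from the patent.

```python
from typing import Dict, Tuple
import numpy as np

def select_matrices(environment_key: str,
                    conversion_matrices: Dict[str, np.ndarray],
                    restoration_matrices: Dict[str, np.ndarray]
                    ) -> Tuple[np.ndarray, np.ndarray]:
    """Pick the per-environment coefficient matrices for the current shot.

    conversion_matrices : front under-screen image conversion coefficient
                          matrices, one per environmental-condition label.
    restoration_matrices: rear under-screen image restoration coefficient
                          matrices, one per environmental-condition label.
    """
    return conversion_matrices[environment_key], restoration_matrices[environment_key]
```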
  • the front under-screen image restoration coefficient matrix is determined based on the difference between images of the same preset object captured by the first camera 21 and the second camera 22 at the same position and under the same environmental conditions, and the difference between images of the same environmental scene captured at the same position and under the same environmental conditions;
  • the front under-screen image conversion coefficient matrix is determined based on the difference between images of the same preset object captured by the first camera 21 and the third camera 23 at the same position and under the same environmental conditions, and the difference between images of the same environmental scene captured at the same position and under the same environmental conditions;
  • the rear under-screen image restoration coefficient matrix is determined based on the difference between images of the same preset object captured by the second camera 22 and the third camera 23 at the same position and under the same environmental conditions, and the difference between images of the same environmental scene captured at the same position and under the same environmental conditions.
  • the specific determination process of the first image restoration model, the first image conversion model and the second image restoration model includes:
  • After selecting a certain environmental condition and display picture content, namely image acquisition environment condition 1, to simulate the shooting environment, the display device first captures the displayed picture through the first camera 21 to obtain the first object image Front_Obj_1, with the self-portrait mirror function of the first camera 21 turned off. Then the second camera 22 directly captures the environmental scene on the side of the display device facing away from the display and obtains the first environment image Normal_Env_1. Immediately afterwards, the fixture stage controls the third camera 23 to move to the position where the second camera 22 captured the environment image, and the environmental scene is captured to obtain the second environment image Rear_Env_1.
  • the fixture stage first rotates 180°, controls the third camera 23 to move to the position where the first camera 21 captures the object image, captures the display image, and obtains the second object image Rear_Obj_1.
  • the second camera 22 is controlled to move to the position where the first camera 21 captures the object image, captures the display image, and obtains the third object image Normal_Obj_1.
  • The fixture stage is rotated 180° again, and the first camera 21, the second camera 22, and the third camera 23 are controlled to return to their respective initial positions before image collection, thereby completing the entire image collection process of the display device and obtaining, under a given environmental condition, three object images and three environment images, namely the first object image Front_Obj_1, the second object image Rear_Obj_1, the third object image Normal_Obj_1, the first environment image Normal_Env_1, the second environment image Rear_Env_1, and the third environment image Front_Env_1.
  • Figure 9 is a process diagram of the generation of the front under-screen image restoration coefficient matrix provided by an embodiment of the present application.
  • The process includes: identifying, through artificial intelligence algorithms such as neural network methods, the image difference between the first object image Front_Obj_1 and the third object image Normal_Obj_1, and learning and training the front under-screen image restoration coefficient matrix.
  • The front under-screen image restoration coefficient matrix is used to process the first object image Front_Obj_1 to generate a first object restored image, and then an image quality evaluation function, such as Vollath's function, is used to determine whether the quality of the first object restored image is close to the normal image quality level of the third object image Normal_Obj_1.
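  • Vollath's F4 autocorrelation measure is one concrete form such an image quality evaluation function can take; the sketch below computes it and applies a simple closeness check, where the 5% tolerance is an illustrative assumption rather than a value from the patent.

```python
import numpy as np

def vollath_f4(image: np.ndarray) -> float:
    """Vollath's F4 autocorrelation sharpness measure.

    image: 2-D grayscale array (convert RGB to luminance first).
    Larger values indicate a sharper image.
    """
    img = image.astype(np.float64)
    return float(np.sum(img[:-1, :] * img[1:, :]) - np.sum(img[:-2, :] * img[2:, :]))

def close_to_reference(restored: np.ndarray, reference: np.ndarray,
                       tolerance: float = 0.05) -> bool:
    """Judge the restored image 'qualified' when its sharpness score is within a
    relative tolerance of the reference capture's score."""
    score_restored = vollath_f4(restored)
    score_reference = vollath_f4(reference)
    return abs(score_restored - score_reference) <= tolerance * abs(score_reference)
```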
  • Figure 10 is a generation process diagram of the front screen under-screen image conversion coefficient matrix and the rear under-screen image restoration coefficient matrix provided by the embodiment of the present application.
  • The image differences between the first object image Front_Obj_1 and the second object image Rear_Obj_1, and between the second object image Rear_Obj_1 and the third object image Normal_Obj_1, are identified through artificial intelligence algorithms, and the front under-screen image conversion coefficient matrix and the rear under-screen image restoration coefficient matrix are learned and trained from them.
  • The two matrices are used to process the third environment image Front_Env_1 to generate a third environment restored image, and the image quality evaluation function is used to determine whether the quality of the third environment restored image is close to the normal image quality level of the first environment image Normal_Env_1.
  • If judged qualified, the front under-screen image conversion coefficient matrix and the rear under-screen image restoration coefficient matrix that passed the image quality evaluation are output as the coefficient matrices for processing the picture quality of the first camera 21 under environmental condition 1. If judged unqualified, the artificial intelligence algorithm is reused to continuously and iteratively identify the image differences between the first object image Front_Obj_1 and the second object image Rear_Obj_1 and between the second object image Rear_Obj_1 and the third object image Normal_Obj_1, and to learn and train optimized front under-screen image conversion and rear under-screen image restoration coefficient matrices, until the image quality of the third environment restored image generated by processing with the optimized matrices is judged qualified, and the final iteratively optimized front under-screen image conversion coefficient matrix and rear under-screen image restoration coefficient matrix are output.
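  • The overall Figure 10 flow, fitting both matrices from the object images and validating them on the environment images until the quality check passes, can be sketched as follows; all callables and the iteration cap are illustrative assumptions, since the patent does not specify the learning algorithm.

```python
def generate_coefficient_matrices(front_obj, rear_obj, normal_obj,
                                  front_env, normal_env,
                                  front_con, rear_re,
                                  refine_conversion, refine_restoration,
                                  apply_conversion, apply_restoration,
                                  is_qualified, max_iters=20):
    """Iteratively train the conversion (camera 1 -> 3) and restoration
    (camera 3 -> 2) coefficient matrices and validate them by processing
    Front_Env_1 and comparing against Normal_Env_1 (e.g. with a
    Vollath-based closeness check)."""
    for _ in range(max_iters):
        # One refinement pass on each matrix from its object-image pair.
        front_con = refine_conversion(front_con, front_obj, rear_obj)
        rear_re = refine_restoration(rear_re, rear_obj, normal_obj)
        # Validate: process the first-camera environment image with both matrices.
        restored_env = apply_restoration(apply_conversion(front_env, front_con), rear_re)
        if is_qualified(restored_env, normal_env):
            break
    return front_con, rear_re
```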
  • This embodiment also provides a display device, including a first camera disposed under the screen, a second camera disposed not under the screen, and an under-screen photographing processing device;
  • the under-screen photo processing device includes:
  • the first image acquisition module is used to acquire the first image taken by the user using the first camera
  • a first restored image determination module configured to use a first image restoration model to process the first image to obtain a first restored image; wherein the first image restoration model is determined based on the difference between images captured by the first camera and the second camera under a first preset condition.
  • the display device further includes a display panel and a third camera, the light incident surface of the third camera is provided with a transparent shield;
  • the under-screen photo processing device also includes:
  • the image optimization module is configured to, after receiving the user's image optimization instruction, use a first image conversion model to process the first image to obtain a first converted image, and use a second image restoration model to process the first converted image to obtain a second restored image; wherein the first image conversion model is determined based on the difference between images captured by the first camera and the third camera under a second preset condition, the second image restoration model is determined based on the difference between images captured by the second camera and the third camera under a third preset condition, and when the third camera captures an image, light enters the third camera after passing through the transparent shield.
  • the display device also includes:
  • the display panel includes a transparent display area, the first camera is disposed on the non-light emitting side of the transparent display area, and the light incident surface of the first camera is adjacent to the transparent display area.
  • the under-screen photo processing device also includes:
  • a second image display module configured to display the second restored image to the user
  • the first calibration image acquisition module is configured to, after receiving the user's first image calibration instruction, control the start of the second camera, and acquire the first calibration image obtained by the user using the second camera to capture the first calibration object;
  • the second calibration image acquisition module is used to control the activation of the third camera and acquire the second calibration image obtained by the user using the third camera to capture the first calibration object;
  • a first calibration matrix determination module configured to determine a first rear under-screen image recovery coefficient calibration matrix based on the difference between the first calibration image and the second calibration image
  • a first calibration restored image determination module configured to use the first rear under-screen image restoration coefficient calibration matrix to process the first converted image to obtain a first calibration restored image
  • the first iterative optimization module is used to display the first calibration restored image to the user and, after receiving the first image continue-calibration instruction clicked and input by the user, iteratively optimize the first rear under-screen image restoration coefficient calibration matrix and use the iteratively optimized first rear under-screen image restoration coefficient calibration matrix to reprocess the first converted image to obtain the first calibration restored image, until the user no longer clicks to input the first image continue-calibration instruction;
  • the under-screen photo processing device also includes:
  • the third calibration image acquisition module is configured to start the first camera after receiving the second image calibration instruction clicked and input by the user, and acquire the third calibration image of the second calibration object captured by the user using the first camera;
  • a fourth calibration image acquisition module configured to activate the second camera and acquire a fourth calibration image obtained by the user using the second camera to capture the second calibration object
  • a second calibration matrix determination module configured to determine the front-screen under-screen image restoration coefficient calibration matrix based on the difference between the third calibration image and the fourth calibration image
  • a second calibration recovery image determination module is used to process the first image using the front screen under-screen image recovery coefficient calibration matrix to obtain a second calibration recovery image
  • the second iterative optimization module is used to display the second calibration restored image to the user and, after receiving the second image continue-calibration instruction clicked and input by the user, iteratively optimize the front under-screen image restoration coefficient calibration matrix and use the iteratively optimized front under-screen image restoration coefficient calibration matrix to reprocess the first image to obtain the second calibration restored image, until the user no longer clicks to input the second image continue-calibration instruction;
  • the second model update module is used to update the first image restoration model according to the front-screen under-screen image restoration coefficient calibration matrix.
  • the under-screen photo processing device also includes:
  • the fifth calibration image acquisition module is used to start the third camera after receiving the user's second image calibration instruction, and acquire the fifth calibration image obtained by the user using the third camera to shoot the second calibration object;
  • a fourth calibration matrix determination module configured to determine the second rear under-screen image restoration coefficient calibration matrix based on the difference between the fourth calibration image and the fifth calibration image;
  • the third calibration restored image determination module is used to process the first image using the front under-screen image conversion coefficient calibration matrix to obtain a second converted image, and to process the second converted image using the second rear under-screen image restoration coefficient calibration matrix to obtain a third calibration restored image;
  • the third iterative optimization module is used to display the third calibration restored image to the user and, after receiving the user's second image continue-calibration instruction, iteratively optimize the front under-screen image conversion coefficient calibration matrix and the second rear under-screen image restoration coefficient calibration matrix, reprocess the first image using the iteratively optimized front under-screen image conversion coefficient calibration matrix to obtain the second converted image, and process the reprocessed second converted image using the iteratively optimized second rear under-screen image restoration coefficient calibration matrix to obtain the third calibration restored image, until the user no longer clicks to input the second image continue-calibration instruction;
  • a third model update module configured to update the first image conversion model according to the front under-screen image conversion coefficient calibration matrix, and update the second image according to the second rear under-screen image restoration coefficient calibration matrix. Restoration model.
  • the image optimization module is specifically used for:
  • the front under-screen image conversion coefficient matrix corresponding to the current shooting environmental conditions is selected through the first image conversion model, and the rear under-screen image restoration coefficient matrix corresponding to the current shooting environmental conditions is selected through the second image restoration model; wherein the first image conversion model includes a plurality of front under-screen image conversion coefficient matrices corresponding to different environmental conditions, and the second image restoration model includes a plurality of rear under-screen image restoration coefficient matrices corresponding to different environmental conditions.
  • the front under-screen image conversion coefficient matrix is determined based on the difference between images of the same preset object captured by the first camera and the third camera at the same position and under the same environmental conditions, and the difference between images of the same environmental scene captured at the same position and under the same environmental conditions;
  • the rear under-screen image restoration coefficient matrix is determined based on the difference between images of the same preset object captured by the second camera and the third camera at the same position and under the same environmental conditions, and the difference between images of the same environmental scene captured at the same position and under the same environmental conditions.
  • the display device provided by the embodiment of the present invention may be a mobile phone, a wearable device with a display function, a computer, and other display devices.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to a display device and an under-screen photographing processing method. The under-screen photographing processing method is applied to the display device. The display device includes a first camera disposed under the screen and a second camera not disposed under the screen. The under-screen photographing processing method includes: acquiring a first image captured by a user using the first camera; and processing the first image using a first image restoration model to obtain a first restored image, the first image restoration model being determined based on the difference between images captured by the first camera and the second camera under a first preset condition. The present application improves the photographing image quality of the under-screen camera.
PCT/CN2022/129156 2022-06-16 2022-11-02 Display device and under-screen photographing processing method Ceased WO2023240898A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210686835.1A CN115100054B (zh) 2022-06-16 2022-06-16 Display device and under-screen photographing processing method
CN202210686835.1 2022-06-16

Publications (1)

Publication Number Publication Date
WO2023240898A1 (fr)

Family

ID=83290829

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129156 Ceased WO2023240898A1 (fr) 2022-06-16 2022-11-02 Appareil d'affichage et procédé de traitement de photographie sous-écran

Country Status (2)

Country Link
CN (1) CN115100054B (fr)
WO (1) WO2023240898A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100054B (zh) * 2022-06-16 2025-06-13 昆山国显光电有限公司 一种显示装置及屏下拍照处理方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019051683A1 (fr) * 2017-09-13 2019-03-21 深圳传音通讯有限公司 Procédé de photographie de lumière de remplissage, terminal mobile et support de stockage lisible par ordinateur
CN111951192A (zh) * 2020-08-18 2020-11-17 义乌清越光电科技有限公司 一种拍摄图像的处理方法及拍摄设备
CN112004077A (zh) * 2020-08-17 2020-11-27 Oppo(重庆)智能科技有限公司 屏下摄像头的校准方法、装置、存储介质与电子设备
CN112785507A (zh) * 2019-11-07 2021-05-11 上海耕岩智能科技有限公司 图像处理方法及装置、存储介质、终端
CN112887598A (zh) * 2021-01-25 2021-06-01 维沃移动通信有限公司 图像处理方法、装置、拍摄支架、电子设备及可读存储介质
CN115100054A (zh) * 2022-06-16 2022-09-23 昆山国显光电有限公司 一种显示装置及屏下拍照处理方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110581910A (zh) * 2019-09-17 2019-12-17 Oppo广东移动通信有限公司 图像采集方法、装置、终端及存储介质
CN110809115B (zh) * 2019-10-31 2021-04-13 维沃移动通信有限公司 拍摄方法及电子设备
CN113139911B (zh) * 2020-01-20 2024-11-01 北京迈格威科技有限公司 图像处理方法及装置、图像处理模型的训练方法及装置
WO2021258300A1 (fr) * 2020-06-23 2021-12-30 Oppo广东移动通信有限公司 Procédé de commande de photographie en écran, dispositif terminal et support de stockage


Also Published As

Publication number Publication date
CN115100054B (zh) 2025-06-13
CN115100054A (zh) 2022-09-23

Similar Documents

Publication Publication Date Title
US8547421B2 (en) System for adaptive displays
Itoh et al. Occlusion leak compensation for optical see-through displays using a single-layer transmissive spatial light modulator
CN111427166B (zh) 一种光场显示方法及系统、存储介质和显示面板
US20160365394A1 (en) Organic light emitting diode (oled) display apparatus having light sensing function
WO2021179358A1 (fr) Procédé de photographie, support de stockage et dispositif électronique
US20190260963A1 (en) Display panel, display device and image pickup method therefor
CN109143670A (zh) 显示面板、移动终端及其控制方法
CN107452031A (zh) 虚拟光线跟踪方法及光场动态重聚焦显示系统
WO2020155117A1 (fr) Procédé de traitement d'image, support de stockage et dispositif électronique
JP2004294477A (ja) 三次元画像計算方法、三次元画像作成方法及び三次元画像表示装置
WO2019196694A1 (fr) Appareil d'affichage de réalité virtuelle, dispositif d'affichage et procédé de calcul d'angle de visualisation
CN111586273A (zh) 电子设备及图像获取方法
US20090245696A1 (en) Method and apparatus for building compound-eye seeing displays
CN111526278A (zh) 图像处理方法、存储介质及电子设备
WO2021258877A1 (fr) Dispositif électronique, ensemble d'écran d'affichage, et module de caméra
WO2023240898A1 (fr) Appareil d'affichage et procédé de traitement de photographie sous-écran
JP2009210912A (ja) プラネタリウム装置
CN113409700B (zh) 虚拟窗户
WO2019223449A1 (fr) Dispositif d'acquisition d'image, procédé d'acquisition d'image, dispositif électronique, et appareil d'imagerie
CN115546041B (zh) 补光模型的训练方法、图像处理方法及其相关设备
TW202004555A (zh) 圖像採集設備、圖像採集方法、電子設備及成像裝置
US9924144B2 (en) Light field illuminating method, device and system
EP3923564B1 (fr) Dispositif électronique et procédé pour caméra sous-écran réalisant une photographie et un affichage, et support d'informations lisible par ordinateur
CN120495145B (zh) 基于机器视觉的虚拟场景实时渲染方法及其相关设备
CN113038114B (zh) 基于人眼视觉特性的ar仿真系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946566

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22946566

Country of ref document: EP

Kind code of ref document: A1