
CN116503536A - A Light Field Rendering Method Based on Scene Layering - Google Patents


Info

Publication number: CN116503536A
Application number: CN202310764527.0A
Authority: CN
Prior art keywords: resolution, scene, light field, display, image
Legal status: Granted; currently active
Other languages: Chinese (zh)
Other versions: CN116503536B (en)
Inventors: 邢树军, 于迅博, 聂子涵, 高鑫, 黄辉
Current assignees: Shenzhen Zhenxiang Technology Co., Ltd.; Beijing University of Posts and Telecommunications
Original assignees: Shenzhen Zhenxiang Technology Co., Ltd.; Beijing University of Posts and Telecommunications
Filing date: 2023-06-27 (application filed by Shenzhen Zhenxiang Technology Co., Ltd. and Beijing University of Posts and Telecommunications)
Priority date: 2023-06-27
Publication dates: CN116503536A on 2023-07-28; CN116503536B (grant) on 2024-04-05


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 15/50: Lighting effects
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046: Scaling of whole images or parts thereof using neural networks
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4076: Super-resolution using the original low-resolution images to iteratively correct the high-resolution images
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/0475: Generative networks
    • G06N 3/08: Learning methods
    • G06N 3/094: Adversarial learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/139: Format conversion, e.g. of frame-rate or size

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a light field rendering method based on scene layering, which addresses the waste of computing power caused in the prior art by rendering the large-depth portions of a 3D scene at unnecessarily high resolution. The method comprises: using a model rendering tool to experimentally obtain the curve of perceived resolution versus light field display depth in a three-dimensional light field display; selecting depth thresholds from this curve and dividing the scene into different levels; rendering the 3D scene of each level; determining the occlusion relationships between levels to obtain low-resolution parallax images; and performing super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image. Simulation experiments yield the ideal relationship between light field display depth and the display resolution perceived by the human eye, and a deep-learning super-resolution method raises the display resolution of the composite image, achieving a high-quality display with low computing cost.

Description

A Light Field Rendering Method Based on Scene Layering

Technical Field

The invention belongs to the field of three-dimensional display, and in particular relates to a light field rendering method based on scene layering.

Background Art

Three-dimensional light field display technology produces stereoscopic vision through physiological and psychological factors. Common 3D display methods, such as grating 3D display, integral imaging 3D display, and volumetric 3D display, exploit binocular parallax and related physiological and psychological cues to present parallax images on a 2D display panel, so that the viewer perceives the depth of the objects on the screen and a sense of solidity. The clarity of the 3D image depends not only on the display resolution of the light field display system, the distance from the lens array to the LCD, and the distance from the diffusion film to the lens array, but also on the display depth of the system. For content displayed at a small depth in front of or behind the screen, the higher the rendering resolution and display resolution of the image, the clearer the 3D image and the better the viewing experience. For content displayed at a large depth, however, an excessively high image resolution causes ghosting and blur because the voxels are large, degrading the display clarity while also needlessly wasting computing power.

The invention patent document with publication number CN110956689A, published on 2020-04-03, discloses a three-dimensional floating light field display system and display method; see its paragraphs [0048] to [0089]. In one embodiment, a three-dimensional floating light field display system is provided, whose structure is shown in its Figure 1: the system comprises a web terminal, a cloud (including a cloud server and a cloud database), and a dual-screen floating display terminal. That document provides a management system for three-dimensional floating light field display, enabling orderly management of multi-user floating display and ensuring the three-dimensional floating display effect of products.

Although that system achieves a three-dimensional floating display of products, it does not effectively solve the problem addressed here: for content displayed at a large depth, an excessively high image resolution causes ghosting and blur due to large voxels, reducing display clarity and needlessly wasting computing power.

Summary of the Invention

The purpose of the present invention is to provide a light field rendering method based on scene layering, so as to solve the waste of computing power caused in the prior art by rendering the large-depth portions of a 3D scene at high resolution.

To achieve the above purpose, the present invention provides a light field rendering method based on scene layering, whose specific technical solution is as follows:

Step 1: Experimentally obtain the curve of perceived resolution versus light field display depth in a three-dimensional light field display.

Regarding the light field display depth: for a grating three-dimensional display, the display depth refers to the range in front of and behind the screen over which 3D images are displayed; for an integral imaging three-dimensional display, the display depth refers to the distance of the viewed 3D image from the zero-parallax plane.

The experiment is carried out with a model rendering tool. First, the 3D scene is placed and the ambient lighting is set; then the number and positions of the virtual camera array are determined, and the resolution of the output parallax images is set. To obtain an ideal rendering result, the 3D scene is rendered with a ray tracing algorithm. By adjusting the position of the scene's zero plane, the virtual camera array captures images of the 3D model at different depth planes, and computational imaging techniques reproduce the 3D image. The image resolution perceived by the human eye is then measured, yielding the curve of perceived resolution versus display depth.

Step 2: Select the zero plane of the 3D scene and, according to the obtained resolution curve, choose several screen-out and screen-in depths as thresholds to layer the scene, obtaining multiple scene levels.

The zero plane is the plane in the light field display imaging system where the light rays converge and imaging definition is best; the reconstructed three-dimensional scene appears sharpest there. This plane is called the zero plane, or zero-parallax plane.

For the 3D scene reproduced in the simulation experiment, depth planes close to the zero plane correspond to small scene voxels, and depth planes far from the zero plane correspond to large ones; the image resolution observed by the human eye therefore varies with the scene's depth from the zero plane: the farther the scene is from the zero plane, the lower the resolution. According to the resolution curve, several screen-out and screen-in depths are selected as thresholds to layer the scene, obtaining multiple scene levels.
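As a concrete illustration of this threshold-based layering, the following minimal Python sketch assigns a depth value to a level given a sorted threshold list; the threshold values and function names are illustrative assumptions, not part of the patent.

```python
from bisect import bisect_right

def assign_layer(depth_mm: float, thresholds_mm: list) -> int:
    """Return the index of the layer slab [t[i], t[i+1]) containing depth_mm.

    thresholds_mm must be sorted ascending; N+1 thresholds bound N layers.
    Depths are signed distances from the zero plane (negative = behind it).
    """
    if not thresholds_mm[0] <= depth_mm <= thresholds_mm[-1]:
        raise ValueError("depth outside the layered range")
    # First threshold strictly greater than depth_mm marks the slab's far edge.
    return min(bisect_right(thresholds_mm, depth_mm) - 1, len(thresholds_mm) - 2)

# Ten thresholds -> nine layers, matching the embodiment in step 2 below.
thresholds = [-200, -150, -100, -50, -20, 20, 50, 100, 150, 200]
print(assign_layer(-30.0, thresholds))  # layer 3, the slab [-50, -20)
```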

Step 3: Render the 3D scenes of the different levels.

Rendering refers to the process of projecting the models in a three-dimensional scene into a two-dimensional digital image according to the configured environment, lighting, materials, and rendering parameters.

Optionally, the rendering techniques include rasterization, ray tracing, and path tracing algorithms. Preferably, the ray tracing algorithm simulates lighting more realistically, renders reflections, transparency, shadows, and global illumination correctly, and achieves good visual quality.

By adjusting the number of viewpoints and the rendering resolution of each level, 3D scene levels with high complexity and high information content that lie close to the zero plane are rendered at high definition and high resolution, preserving more model detail and achieving a high-quality display; 3D scene levels with low complexity and low information content that lie far from the zero plane are rendered at low definition and low resolution, reducing computation time and saving computing power.
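One way to realize this per-level resolution budget is sketched below in Python; the function f and the base resolution are stand-ins for the experimentally fitted curve of step 1, not values from the patent.

```python
def layer_render_resolution(near_mm, far_mm, f, base=(1920, 1080)):
    """Scale a base resolution by the perceived resolution at the slab's
    mean depth, normalized to the zero plane where f is maximal."""
    scale = f(0.5 * (near_mm + far_mm)) / f(0.0)
    return (max(1, round(base[0] * scale)), max(1, round(base[1] * scale)))

# Example with a crude stand-in for the measured curve:
f = lambda h: 1.0 / (1.0 + abs(h) / 150.0)
print(layer_render_resolution(-50, -20, f))   # near the zero plane -> large
print(layer_render_resolution(150, 200, f))   # far from it -> small
```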

Step 4: Determine the occlusion relationships between the different levels and obtain the parallax images.

The occlusion relationship between levels arises because, after the scene is layered, the levels lie at different distances from the viewpoint and are rendered separately, so objects in different levels may occlude one another. To form parallax images with correct occlusion, the levels are sorted by their distance from the viewpoint from far to near, with levels closer to the viewpoint given higher priority than levels farther away. When drawing a parallax image, higher-priority level images overwrite lower-priority scene images.
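A minimal NumPy sketch of this far-to-near compositing (a painter's algorithm over per-level RGBA renders); the array shapes and the alpha convention are assumptions for illustration.

```python
import numpy as np

def composite_views(layers):
    """layers: iterable of (distance_to_viewpoint, rgba) pairs, where rgba is
    an (H, W, 4) uint8 render of one scene level; any input order."""
    ordered = sorted(layers, key=lambda t: t[0], reverse=True)  # far first
    out = np.zeros_like(ordered[0][1][..., :3])                 # black background
    for _, rgba in ordered:
        covered = rgba[..., 3:4] > 0        # pixels this level actually drew
        out = np.where(covered, rgba[..., :3], out)
    return out
```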

A parallax image set consists of several differing images of the same 3D scene captured from different angles by a camera array, according to the principle of binocular parallax.

Step 5: Perform super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image.

Pixel encoding refers to arranging specific sub-pixels of the parallax image sequence according to a fixed rule, determined by the display characteristics of the three-dimensional light field display, to generate a composite image. The composite image is shown on the display panel of the three-dimensional light field display; following the principle of binocular parallax, the light control unit directs the light emitted by the sub-pixels into different viewpoint display regions, reproducing the stereoscopic image.
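The sketch below illustrates one common sub-pixel mapping of this kind for a slanted lenticular panel, in the spirit of van Berkel's interlacing; the lens pitch and slant values are illustrative assumptions, and a real display would use its own calibrated parameters rather than these.

```python
import numpy as np

def interleave(views, lens_px=6.0, slant=1 / 3):
    """views: (N, H, W, 3) parallax images -> (H, W, 3) composite image."""
    n, h, w, _ = views.shape
    out = np.empty((h, w, 3), dtype=views.dtype)
    for y in range(h):
        for x in range(w):
            for c in range(3):                 # R, G, B sub-pixels
                sub_x = 3 * x + c              # sub-pixel column index
                phase = (sub_x - slant * y) % lens_px
                v = int(phase / lens_px * n)   # which view this sub-pixel shows
                out[y, x, c] = views[v, y, x, c]
    return out
```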

Super-resolution reconstruction is the process of recovering image details and other information from known low-resolution image data using machine learning and related optical knowledge. The super-resolution neural network is structured as a generative adversarial network, consisting of a generator network and a discriminator network. A generative adversarial network is a machine learning framework in which a generator model and a discriminator model compete against, and thereby optimize, each other.

The generator network extracts features from the input low-resolution composite image and performs super-resolution reconstruction; it is composed of several deformable convolution layers, several activation units, one sub-pixel convolution layer, and one residual connection.

Deformable convolution is an unconventional convolution method that introduces offsets into the receptive field of a conventional convolution; the offsets adapt to the shape and size of objects, allowing the receptive field to deform freely with the object.

For an input feature map x, the value of the output feature map y at position P_0 after deformable convolution is:

$$y(P_0) = \sum_{P_n \in R} w(P_n) \cdot x(P_0 + P_n + \Delta P_n),$$

where R = {(-1,-1), (-1,0), ..., (0,1), (1,1)} defines a 3×3 convolution kernel, w(P_n) is the weight at position P_n of the receptive field, and ΔP_n is the offset.

Since the positions obtained after adding the offsets are generally non-integer, bilinear interpolation is used to sample the feature map at the offset position P:

$$x(P) = \sum_{q} G(q, P)\, x(q),$$

where P = P_0 + P_n + ΔP_n is an arbitrary (fractional) position, q ranges over all integer spatial positions of the feature map x, and G(q, P) is the bilinear interpolation kernel.

Deformable convolution better accounts for variation in object shape and size, compensating for the weakness of conventional convolution, whose fixed sampling pattern limits its ability to extract spatial information.
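A hedged sketch of this operation using torchvision's DeformConv2d, which performs exactly the offset sampling of the two formulas above; the patent names no specific library, so this is one possible realization, with illustrative channel counts.

```python
import torch
from torchvision.ops import DeformConv2d

x = torch.randn(1, 16, 64, 64)                             # input feature map
offset_net = torch.nn.Conv2d(16, 2 * 3 * 3, 3, padding=1)  # predicts the ΔP_n
deform = DeformConv2d(16, 32, kernel_size=3, padding=1)

offsets = offset_net(x)   # (1, 18, 64, 64): one (Δy, Δx) pair per kernel tap
y = deform(x, offsets)    # (1, 32, 64, 64); bilinear sampling happens inside
```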

Sub-pixel convolution refers to the process of obtaining a high-resolution feature map from a low-resolution one through convolution followed by recombination across channels. Sub-pixel convolution is used in the last layer of the generator to rearrange the multi-channel feature maps of the low-resolution image into a high-resolution image, avoiding the heavy computation of first interpolating the low-resolution image and then extracting features; it is a choice that saves computing power and reduces the time complexity of the network.
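In PyTorch terms, sub-pixel convolution is a convolution to r²·C channels followed by PixelShuffle, which rearranges (B, C·r², H, W) into (B, C, r·H, r·W); a minimal sketch, with channel widths assumed for illustration:

```python
import torch

r, c = 2, 3
upsample = torch.nn.Sequential(
    torch.nn.Conv2d(64, c * r * r, kernel_size=3, padding=1),
    torch.nn.PixelShuffle(r),
)
hi = upsample(torch.randn(1, 64, 32, 32))  # -> (1, 3, 64, 64)
```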

The input dataset of the discriminator network consists of real data (labeled 1) and the generator's outputs (labeled 0); the discriminator parameters are optimized by minimizing the error between the discriminator's predictions and the true labels. The discriminator network consists of several convolution layers, BN layers, activation units, and fully connected layers, and is used to optimize the parameters of the generator model.
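A minimal training-step sketch of this labeling scheme, assuming a discriminator D that outputs one logit per image (binary cross-entropy with labels 1 for real and 0 for generated); the function signature is an assumption for illustration.

```python
import torch

bce = torch.nn.BCEWithLogitsLoss()

def discriminator_step(D, real, fake, opt_d):
    """One optimization step: real batches labeled 1, generated ones 0."""
    opt_d.zero_grad()
    loss = bce(D(real), torch.ones(real.size(0), 1)) \
         + bce(D(fake.detach()), torch.zeros(fake.size(0), 1))
    loss.backward()
    opt_d.step()
    return loss.item()
```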

The light field rendering method based on scene layering provided by the present invention has the following advantages:

This application uses a model rendering tool to experimentally obtain the curve of perceived resolution versus light field display depth in a three-dimensional light field display; selects depth thresholds from this curve and divides the scene into different levels; renders the 3D scene of each level; determines the occlusion relationships between levels to obtain low-resolution parallax images; and performs super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image. The technical solution obtains, through simulation experiments, the ideal relationship between light field display depth and the display resolution perceived by the human eye, layers the 3D scene accordingly, renders scene levels of different display depths and information content with different viewpoint counts and resolutions, and combines a deep-learning super-resolution method to raise the display resolution of the composite image, achieving a high-quality display with low computing cost.

Description of the Drawings

Figure 1 is a schematic flow chart of the method provided by the present invention;

Figure 2 is a schematic structural diagram of the generator network provided by the present invention;

Figure 3 is a schematic flow chart of the super-resolution neural network provided by the present invention.

In the figures:

1. Experimentally obtain the curve of perceived resolution versus light field display depth in three-dimensional light field display;

2. Select depth thresholds according to the curve of perceived resolution versus light field display depth and divide the scene into different levels;

3. Render the 3D scenes of the different levels;

4. Determine the occlusion relationships between the different levels to obtain low-resolution parallax images;

5. Perform super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image.

Detailed Description of the Embodiments

To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it.

Referring to Figures 1 to 3, the present invention provides a light field rendering method based on scene layering, comprising five steps:

Step 1: Conduct an optical simulation experiment using the open-source 3D graphics software Blender. The steps for rendering and generating stereoscopic content in Blender with the ray tracing algorithm are as follows:

a) Create the 3D scene. Adjust the positions of the objects in the scene and set suitable lighting for it.

b) Place the camera array; the position of the whole array is determined by placing the main camera first. Adjust the position and number of the cameras so that the scene is captured optimally.

c) Set the output path and resolution of the parallax images, and render the scene.

d) Encode the parallax images into a composite image.

To study the curve of perceived resolution versus display depth, a fixed number of virtual cameras arranged in an off-axis configuration capture the 3D scene at several different depth planes, producing multi-viewpoint parallax images. By comparison with a two-dimensional image under the same field-of-view conditions, the resolution of the light-field-captured image as seen by the human eye is measured at every 50 mm change in display depth, yielding the curve of perceived resolution versus display depth.
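A hedged sketch of such a capture loop using Blender's Python API (bpy); the camera count, spacing, and off-axis shift rule here are illustrative assumptions rather than the patent's exact setup.

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_x, scene.render.resolution_y = 1920, 1080

for i in range(15):                                  # 15 viewpoints
    cam_data = bpy.data.cameras.new(f"cam_{i}")
    cam = bpy.data.objects.new(f"cam_{i}", cam_data)
    cam.location = (-0.35 + 0.05 * i, -3.0, 0.0)     # 50 mm baseline spacing
    scene.collection.objects.link(cam)
    cam_data.shift_x = -cam.location.x / 3.0         # off-axis convergence
    scene.camera = cam
    scene.render.filepath = f"//views/view_{i:02d}.png"
    bpy.ops.render.render(write_still=True)          # one parallax image
```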

Step 2: Select the zero plane of the 3D scene as the plane that images most sharply when the three-dimensional scene is reconstructed; according to the curve of perceived resolution versus display depth, set ten depth thresholds, dividing the entire 3D scene into nine levels.

Step 3: Render the 3D scenes of the different levels.

From the curve of perceived resolution versus light field display depth, a function f(h) of perceived resolution versus display depth is fitted. Consistent with the parameters described below, it takes the Gram-Charlier peak form

$$f(h) = y_0 + A\,\exp\!\left(-\frac{z^2}{2}\right)\left(1 + \frac{a_3}{6}H_3(z) + \frac{a_4}{24}H_4(z)\right), \qquad z = \frac{h - h_c}{w},$$

where h is the light field display depth, f(h) is the perceived resolution at display depth h, h_c is the abscissa of the curve's center (h_c = 0), y_0 is the offset, A is the amplitude, w is the full width at which h equals the standard deviation ±σ, and a_3, a_4 are the higher-order shape coefficients. H_3 and H_4 are Hermite polynomials, with H_3(z) = z^3 - 3z and H_4(z) = z^4 - 6z^2 + 3.
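Given measured (depth, perceived resolution) samples, the parameters of this form can be fitted with scipy; the sketch below substitutes synthetic samples for real measurements, and all coefficient values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gram_charlier(h, y0, A, hc, w, a3, a4):
    z = (h - hc) / w
    H3 = z**3 - 3 * z
    H4 = z**4 - 6 * z**2 + 3
    return y0 + A * np.exp(-z**2 / 2) * (1 + a3 / 6 * H3 + a4 / 24 * H4)

depths = np.arange(-300.0, 301.0, 50.0)     # sampled every 50 mm, as in step 1
true = gram_charlier(depths, 0.1, 1.0, 0.0, 120.0, 0.05, -0.02)
noisy = true + 0.01 * np.random.default_rng(0).normal(size=depths.size)

params, _ = curve_fit(gram_charlier, depths, noisy,
                      p0=[0.0, 1.0, 0.0, 100.0, 0.0, 0.0])
print(params)                                # fitted y0, A, hc, w, a3, a4
```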

The number of viewpoints of each scene level is set to 15. According to the function of perceived resolution versus light field display depth, the perceived resolution at each level's average display depth is chosen as that level's rendering resolution. A new-viewpoint image generation method processes the multi-viewpoint images of the level containing the zero plane and of the n levels near it, generating up to 15 new viewpoint images per level; no new viewpoint images are generated for levels far from the zero plane.

Preferably, the new-viewpoint image generation techniques include, but are not limited to, rendering methods based on deep learning, on optical flow, and on parallax or depth images.

Preferably, when rendering the 3D scene of each level, only the scene content of that level lying between its near plane and far plane is rendered; the rendering methods include, but are not limited to, rasterization, path tracing, and ray tracing algorithms.

Step 4: Determine the occlusion relationships between the different levels and obtain the parallax images.

The depth priority of each level is obtained from its distance to the viewpoint. Levels far from the viewpoint have low priority and are drawn first; levels close to the viewpoint have high priority and are drawn last. Scene content close to the viewpoint thus covers scene content far from it, forming parallax images with correct occlusion relationships.

Step 5: Perform super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image.

The rendered parallax images are used as input to the super-resolution neural network with a super-resolution scale factor r = 2, performing super-resolution reconstruction of the parallax images in batches to obtain high-resolution parallax images. According to the display characteristics of the three-dimensional display device, the pixels of the parallax images are then encoded to obtain a high-resolution composite image.

The super-resolution neural network is a generative adversarial network. The generator consists of several deformable convolution layers, activation units, one sub-pixel convolution layer, and one residual connection in series. For a low-resolution parallax image of size H×W, several deformable convolution layers extract its features; after repeated deformable convolution operations the output feature map has size 4×H×W. Sub-pixel convolution is applied to the extracted feature maps, producing a high-resolution feature map of size 2H×2W. The high-resolution feature map is combined, via the residual connection, with an image obtained directly from the input by upsampling, and the high-resolution parallax image is output.
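A hedged PyTorch sketch of this generator (deformable-convolution feature extraction, sub-pixel upscaling with r = 2, and a residual connection to a bilinearly upsampled copy of the input); the layer count and channel widths are assumptions for illustration, not the patent's exact architecture.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class Generator(torch.nn.Module):
    def __init__(self, in_ch=3, feat=64, r=2):
        super().__init__()
        # Each stage pairs an offset-predicting conv with a deformable conv.
        self.offsets = torch.nn.ModuleList(
            [torch.nn.Conv2d(in_ch if i == 0 else feat, 18, 3, padding=1)
             for i in range(3)])
        self.deforms = torch.nn.ModuleList(
            [DeformConv2d(in_ch if i == 0 else feat, feat, 3, padding=1)
             for i in range(3)])
        self.to_subpix = torch.nn.Conv2d(feat, in_ch * r * r, 3, padding=1)
        self.shuffle = torch.nn.PixelShuffle(r)
        self.r = r

    def forward(self, x):
        h = x
        for off, dc in zip(self.offsets, self.deforms):
            h = F.relu(dc(h, off(h)))                 # deformable feature stage
        up = F.interpolate(x, scale_factor=self.r, mode="bilinear",
                           align_corners=False)       # upsampled input branch
        return self.shuffle(self.to_subpix(h)) + up   # residual connection

sr = Generator()(torch.randn(1, 3, 64, 64))           # -> (1, 3, 128, 128)
```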

The discriminator network adopts the structure proposed in C. Ledig et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 4681-4690. Optionally, other methods may be used, or a custom neural network may be built as the discriminator.

According to the display characteristics of the three-dimensional light field display, the sub-pixels of the parallax image sequence are pixel-encoded to obtain a high-resolution composite image. The three-dimensional light field display here is a lenticular (cylindrical lens) grating three-dimensional display.

Preferably, the three-dimensional display hardware consists of a display panel and a backlight, including but not limited to slit grating three-dimensional displays, lenticular grating three-dimensional displays, and integral imaging three-dimensional displays.

The above are only preferred embodiments of the present invention and do not limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A scene-layering-based light field rendering method, comprising the steps of:
step 1, experimentally obtaining the curve of perceived resolution versus light field display depth in three-dimensional light field display;
step 2, selecting depth thresholds according to the curve of perceived resolution versus light field display depth, and dividing the scene into different levels;
step 3, rendering the 3D scenes of the different levels;
step 4, determining the occlusion relationships among the different levels to obtain low-resolution parallax images;
and step 5, performing super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image.
2. The scene-layering-based light field rendering method according to claim 1, wherein the specific steps of rendering the 3D scenes of the different levels are as follows:
obtaining a function of perceived resolution versus light field display depth from the experimentally obtained curve, and setting the rendering resolutions of the scene levels at different display depths according to that function, so that the rendering resolution of each scene level equals the perceived resolution at its current display depth;
generating new viewpoint images for the scene level where the zero plane is located and the n scene levels near the zero plane, so that those levels have highly discretized, high-definition rendering results;
for the other levels far from the zero plane, not generating new viewpoint images, and obtaining low-resolution rendering results of low discretization and low definition;
the generation techniques for new viewpoint images comprising rendering methods based on deep learning, optical flow, and parallax or depth images.
3. The scene-layering-based light field rendering method according to claim 1, wherein the occlusion relationships among the different levels are determined to obtain low-resolution parallax images by the following specific step:
sorting the levels by their distance from the viewpoint to determine their priorities, so that scene-level images close to the viewpoint cover scene-level images far from the viewpoint, obtaining parallax images with correct occlusion relationships.
4. The scene-layering-based light field rendering method according to claim 1, wherein the curve of perceived resolution versus light field display depth in three-dimensional light field display is obtained experimentally as follows:
performing a light field simulation experiment on the 3D scene with a model rendering tool, changing the display depth of the scene by adjusting the position of the zero plane of the virtual scene, and measuring the perceived resolution observed by the human eye in the three-dimensional light field display, thereby obtaining the curve of perceived resolution versus light field display depth.
5. The scene-layering-based light field rendering method according to claim 1, wherein, for the curve of perceived resolution versus light field display depth:
the abscissa of the curve represents the light field display depth, namely the distance between the displayed scene plane and the scene's zero plane; viewed from the viewpoint direction, the display depth is negative if the displayed scene plane lies behind the zero plane and positive if it lies in front of the zero plane; the ordinate of the curve represents the perceived resolution of the human eye.
6. The scene-layering-based light field rendering method according to claim 1, wherein the depth thresholds are selected and the scene is divided into different levels as follows:
selecting the zero plane of the 3D scene, and selecting several screen-out and screen-in depths as thresholds according to the obtained resolution curve, to divide the 3D scene by depth in the model rendering tool and obtain several 3D scene levels;
any 3D scene level requiring two depth thresholds, the plane at the threshold of larger value being the near plane of the scene level and the plane at the threshold of smaller value being its far plane.
7. The scene-layering-based light field rendering method according to claim 1, wherein the specific steps of rendering the 3D scenes of the different levels are as follows:
when rendering the 3D scene of each level, rendering only the scene content of that level between the near plane and the far plane;
the rendering methods comprising rasterization, path tracing, and ray tracing algorithms.
8. The scene-layering-based light field rendering method according to claim 1, wherein there are two ways to perform super-resolution reconstruction and pixel encoding on the parallax images to obtain a high-resolution composite image;
the first way comprises the following specific steps:
taking the parallax images obtained by rendering as the input of the super-resolution neural network;
selecting a super-resolution scale factor and performing super-resolution reconstruction on the parallax images in batches to obtain high-resolution parallax images;
encoding the pixels of the parallax images according to the display characteristics of the three-dimensional display device to obtain a high-resolution composite image;
the second way comprises the following specific steps:
encoding the pixels of the low-resolution parallax images according to the display characteristics of the three-dimensional display device to obtain a low-resolution composite image;
taking the single low-resolution composite image as the input of the super-resolution neural network, and selecting the super-resolution scale factor so that the network outputs a single high-resolution composite image.
9. The scene-layering-based light field rendering method according to claim 8, wherein the super-resolution neural network is a generative adversarial network comprising a generator network and a discriminator network; the generator network consists of several deformable convolution layers, several activation units, one sub-pixel convolution layer, and one residual connection, wherein the deformable convolution layers extract the features of a low-resolution image of input size H×W, the sub-pixel convolution layer converts the low-resolution features into a high-resolution feature map of size rH×rW through a pixel rearrangement operation, and the residual connection adds the high-resolution feature map to an image obtained directly from the input through upsampling, so that the network initially learns an identity mapping of the input through the residual, avoiding the degradation problem of the network, and outputs the final high-resolution reconstructed image; the discriminator network consists of several convolution layers, BN layers, activation units, and fully connected layers, and is used to optimize the parameters of the generator model.
10. The scene-layering-based light field rendering method according to claim 1, wherein the three-dimensional display hardware comprises a slit grating three-dimensional display, a lenticular grating three-dimensional display, or an integral imaging three-dimensional display.
CN202310764527.0A | Priority date: 2023-06-27 | Filing date: 2023-06-27 | A light field rendering method based on scene layering | Active | Granted publication: CN116503536B (en)

Priority Applications (1)

Application Number: CN202310764527.0A | Priority Date: 2023-06-27 | Filing Date: 2023-06-27 | Title: A light field rendering method based on scene layering

Applications Claiming Priority (1)

Application Number: CN202310764527.0A | Priority Date: 2023-06-27 | Filing Date: 2023-06-27 | Title: A light field rendering method based on scene layering

Publications (2)

Publication Number | Publication Date
CN116503536A | 2023-07-28
CN116503536B (en) | 2024-04-05

Family

ID=87316982

Family Applications (1)

Application Number: CN202310764527.0A | Title: A light field rendering method based on scene layering | Status: Active | Granted publication: CN116503536B (en)

Country Status (1)

Country: CN | Document: CN116503536B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333364A (en) * 2023-09-27 2024-01-02 浙江大学 A time-domain upsampling method and device participating in real-time rendering of media
CN118354053A (en) * 2024-04-29 2024-07-16 天翼云科技有限公司 A stereoscopic video communication method suitable for computing resource constrained environment
CN119228966A (en) * 2024-10-08 2024-12-31 南京大学 A digital sand table system using image rendering and a light field image rendering method
CN119478258A (en) * 2025-01-15 2025-02-18 北京邮电大学 A real-time rendering and stereoscopic display method and device suitable for 3D light field interaction
CN119722602A (en) * 2024-12-04 2025-03-28 北京邮电大学 Clarity evaluation method and device for three-dimensional light field display
CN119766980A (en) * 2024-12-13 2025-04-04 北京邮电大学深圳研究院 Three-dimensional scene display method, system, equipment and medium
GB2638245A (en) * 2024-02-16 2025-08-20 Murrell Richard Telepresence system
WO2025192865A1 (en) * 2024-03-14 2025-09-18 삼성전자 주식회사 Image processing device and image encoding and decoding method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6480205B1 (en) * 1998-07-22 2002-11-12 Nvidia Corporation Method and apparatus for occlusion culling in graphics systems
CN105898266A (en) * 2015-11-24 2016-08-24 徐州维林苑文化传媒有限公司 Multi-view mirror-free stereoscopic image rendering method and system
CN106910236A (en) * 2017-01-22 2017-06-30 北京微视酷科技有限责任公司 Rendering indication method and device in a kind of three-dimensional virtual environment
US20210364817A1 (en) * 2018-03-05 2021-11-25 Carnegie Mellon University Display system for rendering a scene with multiple focal planes
CN113748682A (en) * 2019-02-22 2021-12-03 阿瓦龙全息照相技术股份公司 Layered scene decomposition coding and decoding system and method
US20220351751A1 (en) * 2019-09-22 2022-11-03 Mean Cat Entertainment Llc Camera tracking system for live compositing
US11501410B1 (en) * 2022-03-22 2022-11-15 Illuscio, Inc. Systems and methods for dynamically rendering three-dimensional images with varying detail to emulate human vision
CN115359173A (en) * 2022-07-01 2022-11-18 北京邮电大学 Virtual multi-viewpoint video generation method, device, electronic device and storage medium
CN116095294A (en) * 2023-04-10 2023-05-09 深圳臻像科技有限公司 Three-dimensional light field image coding method and system based on depth value rendering resolution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李远航: "Research on Real-Time Encoding and Rendering Methods for Three-Dimensional Light Field Display", China Doctoral Dissertations Full-text Database, Information Science and Technology series, pages 138-165 *
秦志强; 张文阁; 文军; 蒋晓瑜: "Research Progress on Light Field Computation Methods for Integral Imaging", 科技传播 (Public Communication of Science & Technology), no. 02, pages 133-136 *

Also Published As

Publication number: CN116503536B (en) | Publication date: 2024-04-05

Similar Documents

Publication Publication Date Title
CN116503536B (en) A light field rendering method based on scene layering
US11876949B2 (en) Layered scene decomposition CODEC with transparency
Attal et al. Matryodshka: Real-time 6dof video view synthesis using multi-sphere images
TWI813098B (en) Neural blending for novel view synthesis
CN103945208B (en) A kind of parallel synchronous zooming engine for multiple views bore hole 3D display and method
CN108513123B (en) Image array generation method for integrated imaging light field display
CN112233165B (en) A baseline extension implementation method based on multi-plane image learning perspective synthesis
CN102447934A (en) Synthetic method of stereoscopic elements in combined stereoscopic image system collected by sparse lens
CA2540538C (en) Stereoscopic imaging
WO2012140397A2 (en) Three-dimensional display system
CN116708746A (en) Naked eye 3D-based intelligent display processing method
CN113763301B (en) A three-dimensional image synthesis method and device that reduces the probability of miscutting
Hornung et al. Interactive pixel‐accurate free viewpoint rendering from images with silhouette aware sampling
CN111343444A (en) Three-dimensional image generation method and device
Adhikarla et al. Real-time adaptive content retargeting for live multi-view capture and light field display
CN103871094A (en) Swept-volume-based three-dimensional display system data source generating method
CN105120252A (en) Depth perception enhancing method for virtual multi-view drawing
CN114815286B (en) Method, device and equipment for determining parameters of full parallax three-dimensional light field display system
Yan et al. Stereoscopic image generation from light field with disparity scaling and super-resolution
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
Lino 2D image rendering for 3D holoscopic content using disparity-assisted patch blending
Thatte et al. Real-World Virtual Reality With Head-Motion Parallax
CN116385577A (en) Method and device for generating virtual viewpoint image
Li et al. Multi-Sight Passthrough: A Real-Time View Synthesis with Improved Visual Quality for VR
CN119766980B (en) Three-dimensional scene display method, system, equipment and medium

Legal Events

Code: PB01 | Publication
Code: SE01 | Entry into force of request for substantive examination
Code: GR01 | Patent grant