CN116704111A - Image processing method and apparatus - Google Patents
Image processing method and apparatus
- Publication number
- CN116704111A (application CN202211573792.2A)
- Authority
- CN
- China
- Prior art keywords
- original image
- feature points
- matching feature
- images
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
Abstract
The present application provides an image processing method and device that help improve the accuracy of three-dimensional (3D) reconstruction. The method includes: obtaining multiple sets of matching feature points from multiple original images; determining, from the pixel coordinates of the matching feature points in the original images, the 3D coordinates of the matching feature points and the relative camera pose of each original image; determining, from those 3D coordinates and camera poses, the depth information of the matching feature points in each original image; combining that depth information with the original images to determine an ambient light map for each original image; determining the 3D coordinates of a virtual light source from the ambient light maps and the camera poses; obtaining multiple illumination-compensated images based on the virtual light source's 3D coordinates and the ambient light maps; and determining a 3D model of the target object from the illumination-compensated images and the 3D coordinates of the matching feature points.
Description
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method and device.
Background
3D reconstruction refers to using computing techniques to reconstruct real-world scenes or objects into data models that a computer can represent and process. Image-based 3D reconstruction is increasingly widely used because it places low demands on image acquisition equipment and keeps the cost of reconstruction low.
Current image-based 3D reconstruction pipelines mainly comprise: extracting feature points from multi-view images, matching those feature points, reconstructing a sparse point cloud with structure-from-motion (SfM), and performing dense reconstruction with multi-view stereo (MVS). Typically, after sparse reconstruction, the mean color of each matched feature point across the images from different viewpoints is taken as the color of the corresponding 3D point, coloring the sparse point cloud. Dense reconstruction with MVS then proceeds from the colored sparse point cloud to improve the accuracy of the reconstruction result.
However, because of the surface material of the target object and the position or color of the ambient light when the images are captured, images from different viewpoints may show unclear textures, unrealistic colors, or textures that are inconsistent across views. The mean color of a matched feature point across views then does not represent the true color of the target object, so the accuracy of the 3D reconstruction produced by the above method is poor.
Summary of the Invention
The present application provides an image processing method and device that take into account the influence of ambient illumination on the target object, which helps improve the accuracy of 3D reconstruction.
In a first aspect, an image processing method is provided, comprising: acquiring multiple original images of a target object, each captured from a different viewpoint; obtaining multiple sets of matching feature points from the original images, and determining, from the pixel coordinates of the matching feature points in the corresponding original images, the 3D coordinates of the matching feature points and the relative camera pose of each original image; determining the depth information of the matching feature points in each original image from those 3D coordinates and camera poses; determining an ambient light map for each original image from that depth information and the original images; determining the 3D coordinates of a virtual light source from each image's ambient light map and relative camera pose; performing illumination compensation on each original image based on the virtual light source's 3D coordinates and the image's ambient light map, obtaining multiple illumination-compensated images; and determining a 3D model of the target object from the illumination-compensated images and the 3D coordinates of the matching feature points.
The image processing method of the embodiments of the present application considers the influence of ambient illumination on the target object: an ambient light map is introduced into the 3D reconstruction and used to determine the 3D coordinates of a virtual light source, and the original images are illumination-compensated based on the virtual light source coordinates and the ambient light maps. Because the compensated images capture the influence of ambient illumination on the target object, reconstructing the object from them yields a 3D model whose color is closer to the object's true color under the actual ambient lighting, which helps improve the accuracy of the reconstruction and thus the user experience.
It should be understood that the depth information is the perpendicular distance from each set of matching feature points to the imaging plane of the corresponding image acquisition device (for example, a camera).
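The depth described above can be sketched as the z-coordinate of a world point expressed in the camera coordinate frame. A minimal sketch, assuming the relative pose is given as a rotation `R` and translation `t` mapping world coordinates to camera coordinates (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def point_depth(X_world, R, t):
    """Depth of a 3D point: perpendicular distance from the point to the
    camera's imaging plane, i.e. the z-coordinate of the point once it is
    expressed in the camera coordinate frame."""
    X_cam = R @ np.asarray(X_world, dtype=float) + np.asarray(t, dtype=float)
    return X_cam[2]

# Identity pose: the depth is simply the world z-coordinate.
d = point_depth([0.0, 0.0, 4.0], np.eye(3), np.zeros(3))
```

With a non-trivial pose, the same function reports the depth relative to that camera's imaging plane, which is why each image has its own depth values for the same matched 3D point.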
It should also be understood that an ambient light map is an image representing the brightness of each pixel of an original image under the influence of the ambient light sources. Each of the original images has a corresponding ambient light map, so this step yields multiple ambient light maps.
It should also be understood that there may be one virtual light source or multiple virtual light sources.
With reference to the first aspect, in some implementations of the first aspect, the method further comprises: determining updated 3D coordinates of the virtual light source and updated 3D coordinates of the matching feature points based on each image's relative camera pose, the virtual light source's 3D coordinates, and the matching feature points' 3D coordinates. Performing illumination compensation on each original image based on the virtual light source's 3D coordinates and the image's ambient light map then comprises: performing illumination compensation on each original image based on the updated virtual light source coordinates and the image's ambient light map. Determining the 3D model of the target object from the illumination-compensated images and the matching feature points' 3D coordinates comprises: determining the 3D model from the illumination-compensated images and the updated 3D coordinates of the matching feature points.
In the image processing method of the embodiments of the present application, updated 3D coordinates are determined for the virtual light source and the matching feature points, each original image is illumination-compensated accordingly, and the 3D model of the target object is determined from the compensated images and the updated coordinates. Because the updated coordinates have smaller error than the non-updated ones, this helps improve the accuracy of the 3D reconstruction.
With reference to the first aspect, in some implementations of the first aspect, performing illumination compensation on each original image to obtain multiple illumination-compensated images comprises: determining the virtual light source's updated pixel coordinates in each original image from its updated 3D coordinates; displacing the pixels of each image's ambient light map by the difference between the virtual light source's updated pixel coordinates and its original pixel coordinates in that image, obtaining an ambient light compensation map for each original image; and performing illumination compensation on each original image with its ambient light compensation map, obtaining multiple illumination-compensated images.
For example, the image processing device may map each original image's ambient light compensation map onto the corresponding original image and perform illumination compensation on each image through a back-projection operation, obtaining multiple illumination-compensated images.
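The displacement step above can be sketched as translating the ambient light map by the difference between the virtual light source's updated and original pixel coordinates. A minimal NumPy sketch using an integer-pixel shift with zero padding (the patent does not specify the interpolation or padding scheme, so those choices are assumptions):

```python
import numpy as np

def shift_light_map(light_map, old_px, new_px):
    """Displace every pixel of an ambient light map by (new_px - old_px).

    light_map: 2D array of per-pixel brightness.
    old_px, new_px: (row, col) pixel coordinates of the virtual light source
    before and after its 3D position was updated.
    """
    dr = int(new_px[0] - old_px[0])
    dc = int(new_px[1] - old_px[1])
    out = np.zeros_like(light_map)
    h, w = light_map.shape
    # Source and destination slices for the overlapping region after the shift.
    src_r = slice(max(0, -dr), min(h, h - dr))
    dst_r = slice(max(0, dr), min(h, h + dr))
    src_c = slice(max(0, -dc), min(w, w - dc))
    dst_c = slice(max(0, dc), min(w, w + dc))
    out[dst_r, dst_c] = light_map[src_r, src_c]
    return out

m = np.zeros((4, 4))
m[1, 1] = 1.0                           # light-source peak at pixel (1, 1)
shifted = shift_light_map(m, (1, 1), (2, 3))   # peak moves to pixel (2, 3)
```

The shifted map plays the role of the ambient light compensation map, which is then applied to the original image (for example, by the back-projection operation described above).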
With reference to the first aspect, in some implementations of the first aspect, determining the 3D model of the target object based on the multiple illumination-compensated images and the updated 3D coordinates of the multiple sets of matching feature points comprises: taking the updated 3D coordinates of the matching feature points as a colorless sparse 3D point cloud; obtaining the pixel coordinates of the matching feature points in the corresponding illumination-compensated images; coloring the colorless sparse 3D point cloud using those pixel coordinates, obtaining a colored sparse 3D point cloud; and determining the 3D model of the target object from the colored sparse 3D point cloud and the illumination-compensated images.
For example, the image processing device may color the colorless sparse 3D point cloud using the mean of the colors sampled at each set of matching feature points' pixel coordinates in the corresponding illumination-compensated images, obtaining a colored sparse 3D point cloud.
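The coloring step above can be sketched as sampling each 3D point's color at its pixel location in every compensated image where it is observed, then averaging. A minimal NumPy sketch (the data layout and averaging details are assumptions, since the patent only names the operation):

```python
import numpy as np

def color_sparse_cloud(points_xyz, observations, images):
    """Color a sparse 3D point cloud from illumination-compensated images.

    points_xyz:   (N, 3) array of 3D point coordinates (left unchanged).
    observations: for each point, a list of (image_index, row, col) pixel
                  coordinates where that point is observed.
    images:       list of (H, W, 3) illumination-compensated images.
    Returns an (N, 3) array of mean RGB colors, one per 3D point.
    """
    colors = np.zeros((len(points_xyz), 3))
    for i, obs in enumerate(observations):
        samples = [images[k][r, c] for k, r, c in obs]
        colors[i] = np.mean(samples, axis=0)
    return colors

imgs = [np.full((2, 2, 3), 100.0), np.full((2, 2, 3), 200.0)]
pts = np.array([[0.0, 0.0, 1.0]])
obs = [[(0, 0, 0), (1, 1, 1)]]       # the point is seen in both images
rgb = color_sparse_cloud(pts, obs, imgs)   # mean of 100 and 200 per channel
```

Because the inputs are illumination-compensated rather than raw images, the averaged color is less affected by view-dependent lighting than the mean-color scheme described in the background section.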
With reference to the first aspect, in some implementations of the first aspect, obtaining multiple sets of matching feature points based on the multiple original images comprises: extracting feature points from each of the original images; and matching feature points across the original images to obtain the multiple sets of matching feature points.
It should be understood that the image processing device may extract feature points from each original image with a feature extraction algorithm, for example the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), the features-from-accelerated-segment-test corner detector (FAST), or oriented FAST and rotated BRIEF (ORB); the embodiments of the present application do not limit the choice of algorithm.
It should also be understood that the image processing device may match the feature points extracted from the original images with a feature matching strategy, such as brute-force matching or k-nearest-neighbor (KNN) matching, obtaining multiple sets of matching feature points; the embodiments of the present application do not limit the choice of strategy.
With a suitable feature extraction algorithm and feature matching strategy, the image processing device extracts feature points more accurately and obtains more accurate matching results, i.e. the multiple sets of matching feature points above, which helps improve the accuracy of the subsequent 3D reconstruction.
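The matching strategies above can be sketched with a brute-force matcher over ORB-style binary descriptors: for each descriptor in one image, take the nearest neighbor in the other image by Hamming distance and keep it only if it passes a ratio test against the second-nearest neighbor. A minimal NumPy sketch (the ratio threshold is an assumption, not specified by the patent):

```python
import numpy as np

def brute_force_match(desc_a, desc_b, ratio=0.75):
    """Match binary descriptors (uint8 rows) between two images.

    Returns (i, j) index pairs where descriptor i of image A matched
    descriptor j of image B and passed the nearest/second-nearest ratio test.
    """
    # Hamming distance between every pair of descriptors.
    bits_a = np.unpackbits(desc_a, axis=1)[:, None, :]
    bits_b = np.unpackbits(desc_b, axis=1)[None, :, :]
    dist = (bits_a != bits_b).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches

# Two tiny 8-bit descriptors per image; each has one clear counterpart.
a = np.array([[0b00001111], [0b11110000]], dtype=np.uint8)
b = np.array([[0b00001111], [0b10101010]], dtype=np.uint8)
m = brute_force_match(a, b)
```

A KNN strategy would keep the k best candidates per descriptor instead of only the single nearest; cross-checking (matching in both directions and keeping mutual pairs) is another common filter.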
With reference to the first aspect, in some implementations of the first aspect, determining the 3D coordinates of the multiple sets of matching feature points and the relative camera pose of each original image comprises: determining the relative camera pose of each original image by triangulation, based on the matching feature points' pixel coordinates in the corresponding original images and the camera intrinsics of the original images; and determining the 3D coordinates of the matching feature points by triangulation, based on those pixel coordinates and the relative camera poses.
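The second triangulation step above can be sketched with the standard direct linear transform (DLT): given a pixel observation and a 3x4 projection matrix per view, each view contributes two linear constraints, and the 3D point is the null vector of the stacked system. A minimal NumPy sketch (a generic DLT, not necessarily the exact formulation used in the patent):

```python
import numpy as np

def triangulate(pixels, proj_mats):
    """Triangulate one 3D point from its pixel observations in several views.

    pixels:    list of (u, v) pixel coordinates, one per view.
    proj_mats: list of 3x4 camera projection matrices P = K [R | t].
    """
    rows = []
    for (u, v), P in zip(pixels, proj_mats):
        rows.append(u * P[2] - P[0])   # u * (p3 . X) = p1 . X
        rows.append(v * P[2] - P[1])   # v * (p3 . X) = p2 . X
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # null vector of A (homogeneous point)
    return X[:3] / X[3]                 # dehomogenize

# Two identity-intrinsics cameras, the second translated 1 unit along x.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 2.0])
u0 = P0 @ np.append(X_true, 1.0); u0 = u0[:2] / u0[2]
u1 = P1 @ np.append(X_true, 1.0); u1 = u1[:2] / u1[2]
X = triangulate([u0, u1], [P0, P1])
```

With noise-free observations the SVD recovers the point exactly; with real detections it returns the least-squares solution, which is typically refined later by bundle adjustment.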
With reference to the first aspect, in some implementations of the first aspect, determining the depth information of the matching feature points in each original image comprises: inputting the matching feature points' 3D coordinates and each image's relative camera pose into a depth estimation network model, obtaining the depth information of the matching feature points in each original image.
For example, the depth estimation network model may be a convolutional neural network (CNN).
With reference to the first aspect, in some implementations of the first aspect, determining the ambient light map corresponding to each original image comprises: inputting the depth information of the matching feature points in each original image and the multiple original images into an illumination estimation network model, obtaining the ambient light map corresponding to each original image.
For example, the illumination estimation network model may be Gardner's illumination estimation network model.
With reference to the first aspect, in some implementations of the first aspect, determining the 3D coordinates of the virtual light source comprises: determining the pixel coordinates with the smallest pixel magnitude in each original image's ambient light map as the virtual light source's pixel coordinates in that image; and determining the virtual light source's 3D coordinates based on those per-image pixel coordinates and each image's relative camera pose.
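The per-image selection above can be sketched as an arg-min over the ambient light map: the pixel with the smallest magnitude is taken as the virtual light source's pixel coordinates in that view (the 3D position would then be recovered from these per-view pixels and the camera poses). A minimal NumPy sketch:

```python
import numpy as np

def light_source_pixel(light_map):
    """Return the (row, col) pixel with the smallest magnitude in an
    ambient light map, taken as the virtual light source's pixel location
    in that image, per the claim."""
    flat_idx = np.argmin(np.abs(light_map))
    return np.unravel_index(flat_idx, light_map.shape)

m = np.array([[5.0, 3.0],
              [0.5, 4.0]])
rc = light_source_pixel(m)   # smallest magnitude (0.5) is at (1, 0)
```

One such pixel per view, together with the relative camera poses, gives the correspondences needed to place the virtual light source in 3D, for example by the triangulation described for the matching feature points.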
In a second aspect, an image processing device is provided, comprising: an acquisition module configured to acquire multiple original images of a target object, each captured from a different viewpoint; and a processing module configured to: obtain multiple sets of matching feature points from the original images, and determine, from the matching feature points' pixel coordinates in the corresponding original images, their 3D coordinates and the relative camera pose of each original image; determine the depth information of the matching feature points in each original image from those 3D coordinates and camera poses; determine an ambient light map for each original image from that depth information and the original images; determine the 3D coordinates of a virtual light source from each image's ambient light map and relative camera pose; perform illumination compensation on each original image based on the virtual light source's 3D coordinates and the image's ambient light map, obtaining multiple illumination-compensated images; and determine a 3D model of the target object from the illumination-compensated images and the matching feature points' 3D coordinates.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: determine updated 3D coordinates of the virtual light source and updated 3D coordinates of the matching feature points based on each image's relative camera pose, the virtual light source's 3D coordinates, and the matching feature points' 3D coordinates; perform illumination compensation on each original image based on the updated virtual light source coordinates and the image's ambient light map; and determine the 3D model of the target object from the illumination-compensated images and the updated 3D coordinates of the matching feature points.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: determine the virtual light source's updated pixel coordinates in each original image from its updated 3D coordinates; displace the pixels of each image's ambient light map by the difference between the virtual light source's updated and original pixel coordinates in that image, obtaining an ambient light compensation map for each original image; and perform illumination compensation on each original image with its ambient light compensation map, obtaining multiple illumination-compensated images.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to take the updated 3D coordinates of the matching feature points as a colorless sparse 3D point cloud; the acquisition module is further configured to obtain the pixel coordinates of the matching feature points in the corresponding illumination-compensated images; and the processing module is further configured to color the colorless sparse 3D point cloud using those pixel coordinates, obtaining a colored sparse 3D point cloud, and to determine the 3D model of the target object from the colored sparse 3D point cloud and the illumination-compensated images.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: extract feature points from each of the original images; and match feature points across the original images to obtain the multiple sets of matching feature points.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: determine the relative camera pose of each original image by triangulation, based on the matching feature points' pixel coordinates in the corresponding original images and the camera intrinsics of the original images; and determine the 3D coordinates of the matching feature points by triangulation, based on those pixel coordinates and the relative camera poses.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to input the matching feature points' 3D coordinates and each image's relative camera pose into a depth estimation network model, obtaining the depth information of the matching feature points in each original image.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to input the depth information of the matching feature points in each original image and the multiple original images into an illumination estimation network model, obtaining the ambient light map corresponding to each original image.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: determine the pixel coordinates with the smallest pixel magnitude in each original image's ambient light map as the virtual light source's pixel coordinates in that image; and determine the virtual light source's 3D coordinates based on those per-image pixel coordinates and each image's relative camera pose.
In a third aspect, another image processing device is provided, comprising a processor and a memory. The processor is configured to read instructions stored in the memory, and may receive signals through a receiver and transmit signals through a transmitter, so as to execute the method in any possible implementation of the first aspect.
Optionally, there may be one or more processors and one or more memories.
Optionally, the memory may be integrated with the processor, or the memory may be arranged separately from the processor.
In a specific implementation, the memory may be a non-transitory memory, such as a read-only memory (ROM), which may be integrated with the processor on the same chip or arranged on separate chips; the embodiments of the present application do not limit the type of memory or the arrangement of the memory and the processor.
The image processing device in the third aspect may be a chip, and the processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor realized by reading software code stored in a memory, and the memory may be integrated in the processor or located outside the processor and exist independently.
第四方面,提供了一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序(也可以称为代码,或指令),当其在计算机上运行时,使得计算机执行上述第一方面中的任一种可能实现方式中的方法。In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium stores a computer program (also referred to as code, or an instruction), and when it runs on a computer, it causes the computer to perform the above-mentioned first aspect. A method in any of the possible implementations.
第五方面,提供了一种计算机程序产品,计算机程序产品包括:计算机程序(也可以称为代码,或指令),当计算机程序被运行时,使得计算机执行上述第一方面中的任一种可能实现方式中的方法。In a fifth aspect, a computer program product is provided, and the computer program product includes: a computer program (also referred to as code, or an instruction), when the computer program is executed, it causes the computer to perform any one of the possibilities in the first aspect above. method in the implementation.
Description of Drawings

FIG. 1 is a schematic diagram of an application scenario of an embodiment of this application;

FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of this application;

FIG. 3 is a schematic diagram of a virtual light source position provided by an embodiment of this application;

FIG. 4 is a schematic flowchart of another image processing method provided by an embodiment of this application;

FIG. 5 is a schematic block diagram of an image processing apparatus provided by an embodiment of this application;

FIG. 6 is a schematic block diagram of another image processing apparatus provided by an embodiment of this application.

Detailed Description
The technical solutions in this application will be described below with reference to the accompanying drawings.

To clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used in the embodiments of this application to distinguish between identical or similar items whose functions and effects are basically the same. Those skilled in the art will understand that words such as "first" and "second" do not limit the quantity or execution order, and do not imply that the items are necessarily different.

It should be noted that in this application, words such as "exemplarily" or "for example" are used to present an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in this application should not be construed as preferable or advantageous over other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a concrete manner.

In addition, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or a plurality of items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
Three-dimensional (3D) reconstruction technology refers to using computer technology to reconstruct a real-world scene or object into a data model that a computer can represent and process. Image-based 3D reconstruction is used increasingly widely because of its low requirements on acquisition equipment and the low cost of the reconstruction process.

To facilitate understanding of this application, the application scenario involved in the embodiments of this application is introduced below with reference to FIG. 1.

FIG. 1 shows a schematic diagram of an application scenario 100 of an embodiment of this application. The application scenario includes an image acquisition device 101 and an image processing device 102. The image acquisition device 101 may acquire images, from different viewing angles, of a target object to be three-dimensionally reconstructed or of the scene in which the target object is located, and send the images from the different viewing angles to the image processing device 102. Correspondingly, after receiving the images from the different viewing angles, the image processing device 102 may use them to perform 3D reconstruction of the target object or of the scene in which the target object is located.

Optionally, the image acquisition device 101 may use a camera to photograph the target object and directly capture multiple images from different viewing angles; it may also use a camera to record a video containing views of the target object from different angles and then clip images of the target object from different viewing angles out of the video; or it may obtain network pictures of the same target object from different viewing angles from the Internet. This is not limited in the embodiments of this application.

It should be understood that the image acquisition device 101 may specifically be a device including a camera, and the image processing device 102 may specifically be a device with data processing capability. It should also be understood that in the above application scenario, the number of image acquisition devices may be one or more, which is not limited in the embodiments of this application.

FIG. 1 above shows only one possible scenario. In other possible scenarios, the image acquisition device and the image processing device may be combined into one physical device, which for ease of description is referred to in the embodiments of this application as an image processing device. In other words, the image processing device in the embodiments of this application may itself have an image acquisition function and capture images of the target object from different viewing angles, or it may receive images captured by other devices; this is not limited in the embodiments of this application.
At present, the process by which an image processing device performs 3D reconstruction based on images from different viewing angles may include: extracting feature points from the multi-view images, completing feature point matching, reconstructing a sparse point cloud based on structure-from-motion (SfM) technology, and performing dense reconstruction based on multi-view stereo (MVS) technology. Usually, after the sparse point cloud is reconstructed, the mean of the colors of the same feature point in the images from different viewing angles is taken as the color of the corresponding 3D point, and the sparse point cloud is colored accordingly. Based on the colored sparse point cloud, dense reconstruction is performed through multi-view stereo technology to improve the accuracy of the 3D reconstruction result.

However, due to the surface material of the target object and the position or color of the ambient light when the images of the target object are captured, the images from different viewing angles may exhibit unclear textures of the target object, unrealistic colors, or texture inconsistency across views. The mean of the colors of matching feature points in images from different viewing angles does not represent the true color of the target object; therefore, the accuracy of the 3D reconstruction result of the above method is poor.

In view of this, the embodiments of this application propose an image processing method and device. By taking into account the influence of ambient illumination on the target object, an ambient light map is introduced during 3D reconstruction. The ambient light map is used to determine the three-dimensional coordinates of a virtual light source, and illumination compensation is performed on the original images based on the three-dimensional coordinates of the virtual light source and the ambient light map, yielding multiple illumination-compensated images. Since the illumination-compensated images account for the influence of ambient illumination on the target object, performing 3D reconstruction of the target object with the multiple illumination-compensated images improves the accuracy of the 3D reconstruction result.

The image processing method provided by the embodiments of this application is described below with reference to FIG. 2 and FIG. 3.

FIG. 2 is a schematic flowchart of an image processing method 200 provided by an embodiment of this application. The method 200 may be executed by the image processing device 102 shown in FIG. 1, or by other similar devices, which is not limited in the embodiments of this application. For ease of description, these are collectively referred to as an image processing device in the embodiments of this application. The method 200 includes the following steps:
S201: Acquire multiple original images of a target object, where the multiple original images are obtained by photographing the target object from different viewing angles.

It should be understood that the multiple original images may be captured by the image processing device itself (if the image processing device has an image acquisition function), or may be captured by an image acquisition device. In addition, the multiple original images may be panoramic images captured by a panoramic camera, wide-angle images, or images captured by an ordinary camera, which is not limited in this application.
S202: Obtain multiple groups of matching feature points based on the multiple original images, and determine the three-dimensional coordinates of the multiple groups of matching feature points and the relative camera pose corresponding to each original image, based on the pixel coordinates of the multiple groups of matching feature points in the corresponding original images.

It should be understood that the image processing device may extract multiple feature points from each of the multiple original images through a feature extraction algorithm, and match the feature points of the original images through a feature matching strategy to obtain multiple groups of matching feature points.

The matching feature points may be matching feature points in two images. For example, the first original image contains feature point 1-1 and feature point 1-2, and the second original image contains feature point 2-1 and feature point 2-2. Feature point 1-1 and feature point 2-1 are the imaging points of the same physical-space point in the two original images; therefore, feature point 1-1 and feature point 2-1 form a group of matching feature points.

The matching feature points may also be matching feature points in more than two images. For example, the first original image contains feature point 1-1 and feature point 1-2, the second original image contains feature point 2-1 and feature point 2-2, and the third original image contains feature point 3-1 and feature point 3-2. Feature point 1-1, feature point 2-1, and feature point 3-1 are the imaging points of the same physical-space point in the three original images; therefore, feature point 1-1, feature point 2-1, and feature point 3-1 form a group of matching feature points.

In the above example, the image processing device may determine the pixel coordinates of feature point 1-1 in the first original image, the pixel coordinates of feature point 2-1 in the second original image, and the pixel coordinates of feature point 3-1 in the third original image. Since feature point 1-1, feature point 2-1, and feature point 3-1 are the imaging points of the same physical-space point in the three original images, the three-dimensional coordinates of this group of matching feature points are the three-dimensional coordinates of that physical-space point.

The above example describes only one group of matching feature points. In practice, multiple original images may contain one group or multiple groups of matching feature points, which is not limited in the embodiments of this application.
S203: Determine the depth information of the corresponding matching feature points in each original image, based on the three-dimensional coordinates of the multiple groups of matching feature points and the relative camera pose corresponding to each original image.

It should be understood that the depth information is the perpendicular distance from each group of matching feature points to the imaging plane of the corresponding image acquisition device (for example, a camera). Exemplarily, the first original image contains feature point 1-1, which forms a group of matching feature points with feature point 2-1 in the second original image; the perpendicular distance from feature point 1-1 in the first original image to the camera imaging plane is the depth information of that feature point.
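The depth of one matching feature point can be sketched as follows, assuming (as is common, though not specified by the text) that the relative camera pose is given as a world-to-camera rotation R and translation t, so that depth is the z-coordinate of the point in the camera frame:

```python
import numpy as np

def point_depth(point_world, R, t):
    """Depth of a world-space 3D point for a camera with world-to-camera
    rotation R (3x3) and translation t (3,): the z-coordinate of the point
    in the camera frame, i.e. its perpendicular distance to the imaging plane."""
    point_cam = R @ point_world + t
    return float(point_cam[2])
```

Applying this to each group's triangulated 3D coordinates, once per view in which the group appears, gives the per-image depth information used in S204.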
S204: Determine the ambient light map corresponding to each original image, based on the depth information of the corresponding matching feature points in each original image and the multiple original images.

It should be understood that the ambient light map is an image representing the brightness information of each pixel of the original image under the influence of the ambient light source. Each of the multiple original images has a corresponding ambient light map; that is, this step may yield multiple ambient light maps.
S205: Determine the three-dimensional coordinates of the virtual light source, based on the ambient light map corresponding to each original image and the relative camera pose corresponding to each original image.

It should be understood that the number of virtual light sources may be one or more; correspondingly, the number of sets of three-dimensional coordinates of virtual light sources may be one or more, which is not limited in the embodiments of this application.

Optionally, if the multiple original images are captured in an outdoor scene with natural light, the number of virtual light sources may be one; if the multiple original images are captured in an outdoor scene with poor natural light or in an indoor scene, the number of virtual light sources may be one or more.
S206: Perform illumination compensation on each original image based on the three-dimensional coordinates of the virtual light source and the ambient light map corresponding to each original image, to obtain multiple illumination-compensated images.

S207: Determine the three-dimensional model of the target object based on the multiple illumination-compensated images and the three-dimensional coordinates of the multiple groups of matching feature points.

In the image processing method of the embodiments of this application, the influence of ambient illumination on the target object is taken into account: an ambient light map is introduced during 3D reconstruction and used to determine the three-dimensional coordinates of a virtual light source, and illumination compensation is performed on the original images based on the three-dimensional coordinates of the virtual light source and the ambient light map, yielding multiple illumination-compensated images. Since the illumination-compensated images account for the influence of ambient illumination on the target object, performing 3D reconstruction of the target object with the multiple illumination-compensated images can make the color of the resulting 3D model closer to the true color of the target object under the actual ambient lighting, which helps improve the accuracy of the 3D reconstruction result and thus the user experience.
As an optional embodiment, the above method further includes:

determining the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple groups of matching feature points, based on the relative camera pose corresponding to each original image, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple groups of matching feature points;

S206, performing illumination compensation on each original image based on the three-dimensional coordinates of the virtual light source and the ambient light map corresponding to each original image, includes:

performing illumination compensation on each original image based on the updated three-dimensional coordinates of the virtual light source and the ambient light map corresponding to each original image;

S207, determining the three-dimensional model of the target object based on the multiple illumination-compensated images and the three-dimensional coordinates of the multiple groups of matching feature points, includes:

determining the three-dimensional model of the target object based on the multiple illumination-compensated images and the updated three-dimensional coordinates of the multiple groups of matching feature points.
In the image processing method of the embodiments of this application, the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple groups of matching feature points are determined, illumination compensation is performed on each original image, and the three-dimensional model of the target object is determined based on the illumination-compensated images and the updated three-dimensional coordinates of the multiple groups of matching feature points. The updated three-dimensional coordinates of the virtual light source and of the multiple groups of matching feature points have smaller errors than the coordinates before the update, which helps improve the accuracy of the 3D reconstruction result.

It should be understood that the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple groups of matching feature points may also be referred to as the refined three-dimensional coordinates of the virtual light source and the refined three-dimensional coordinates of the multiple groups of matching feature points.

It should be understood that errors exist in the process by which the image processing device calculates the relative camera pose corresponding to each original image, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple groups of matching feature points, and these errors affect the accuracy of the three-dimensional model of the target object. Since the relative camera pose corresponding to each original image, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple groups of matching feature points influence one another, whether the relative camera pose corresponding to each original image is updated affects the three-dimensional coordinates of the virtual light source and of the multiple groups of matching feature points. The image processing device may therefore jointly adjust the relative camera pose corresponding to each original image, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple groups of matching feature points, to obtain the refined relative camera pose corresponding to each original image, the refined three-dimensional coordinates of the virtual light source, and the refined three-dimensional coordinates of the multiple groups of matching feature points, ultimately improving the accuracy of the three-dimensional model of the target object.
As an optional embodiment, in S206 above, performing illumination compensation on each original image to obtain multiple illumination-compensated images includes: determining the updated pixel coordinates of the virtual light source in each original image based on the updated three-dimensional coordinates of the virtual light source; displacing the pixels in the ambient light map corresponding to each original image based on the difference between the updated pixel coordinates of the virtual light source in that original image and the original pixel coordinates of the virtual light source in that original image, to obtain the ambient illumination compensation map corresponding to each original image; and performing illumination compensation on each original image based on its corresponding ambient illumination compensation map, to obtain multiple illumination-compensated images.

Exemplarily, the image processing device may map the ambient illumination compensation map corresponding to each original image onto the corresponding original image and perform illumination compensation on each original image through a back-projection operation, to obtain multiple illumination-compensated images.
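The pixel-displacement step described above can be sketched as follows. This is a crude illustration only: the patent does not specify how image borders are handled, so a wrap-around shift is used here purely for concreteness, and the function name is illustrative:

```python
import numpy as np

def shift_ambient_light_map(light_map, old_px, new_px):
    """Displace every pixel of the ambient light map by the offset between
    the virtual light source's updated pixel coordinate (new_px) and its
    original pixel coordinate (old_px), both given as (row, col).
    Border handling (here: wrap-around via np.roll) is an assumption."""
    dy = new_px[0] - old_px[0]
    dx = new_px[1] - old_px[1]
    return np.roll(light_map, shift=(dy, dx), axis=(0, 1))
```

The resulting shifted map plays the role of the ambient illumination compensation map that is then mapped back onto the original image.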
As an optional embodiment, in S207 above, determining the three-dimensional model of the target object based on the multiple illumination-compensated images and the updated three-dimensional coordinates of the multiple groups of matching feature points includes: determining the updated three-dimensional coordinates of the multiple groups of matching feature points as a colorless sparse three-dimensional point cloud; obtaining the pixel coordinates of the multiple groups of matching feature points in the corresponding illumination-compensated images; coloring the colorless sparse three-dimensional point cloud using the pixel coordinates of the multiple groups of matching feature points in the corresponding illumination-compensated images, to obtain a colored sparse three-dimensional point cloud; and determining the three-dimensional model of the target object based on the colored sparse three-dimensional point cloud and the multiple illumination-compensated images.

Exemplarily, the image processing device may color the colorless sparse three-dimensional point cloud using the mean of the pixel values at the pixel coordinates of the multiple groups of matching feature points in the corresponding illumination-compensated images, to obtain the colored sparse three-dimensional point cloud.
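The coloring of one sparse 3D point can be sketched as follows, assuming (consistent with the earlier description of sparse point cloud coloring) that the color assigned to a point is the mean of the pixel values sampled at its matching feature point's coordinates in each illumination-compensated image:

```python
import numpy as np

def color_point(images, pixel_coords):
    """Color of one sparse 3D point: the mean of the pixel values sampled at
    the matching feature point's (row, col) coordinate in each
    illumination-compensated image in which the point appears."""
    samples = [img[r, c] for img, (r, c) in zip(images, pixel_coords)]
    return np.mean(np.asarray(samples, dtype=np.float64), axis=0)
```

Running this for every group of matching feature points turns the colorless sparse point cloud into the colored sparse point cloud used for dense reconstruction.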
As an optional embodiment, in S202 above, obtaining multiple groups of matching feature points based on the multiple original images includes: extracting the feature points of each of the multiple original images; and performing feature point matching on the multiple original images to obtain multiple groups of matching feature points.

It should be understood that the image processing device may extract the feature points of each original image through a feature extraction algorithm. Exemplarily, the feature extraction algorithm may be a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) algorithm, a features-from-accelerated-segment-test (FAST) corner detection algorithm, an oriented FAST and rotated BRIEF (ORB) algorithm, or another feature point extraction algorithm, which is not limited in the embodiments of this application.

It should be understood that the image processing device may perform feature point matching on the feature points extracted from the multiple original images through a feature matching strategy, to obtain multiple groups of matching feature points. Exemplarily, the feature matching strategy may be a brute-force matching strategy, a k-nearest-neighbor (KNN) matching strategy, or the like, which is not limited in the embodiments of this application.

With a suitable feature extraction algorithm and feature matching strategy, the image processing device can extract feature points more accurately and obtain more accurate feature matching results, namely the multiple groups of matching feature points described above, which helps improve the accuracy of the subsequent 3D reconstruction result.
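The KNN matching strategy mentioned above can be sketched as a brute-force k=2 nearest-neighbor search over descriptors with Lowe's ratio test, which is a common way to filter ambiguous matches (the ratio threshold 0.75 is a conventional value, not one specified by the text; production systems typically use an optimized matcher rather than this loop):

```python
import numpy as np

def knn_match(desc_a, desc_b, ratio=0.75):
    """Brute-force KNN (k=2) descriptor matching with a ratio test:
    for each descriptor in image A, keep its nearest neighbor in image B
    only if it is clearly closer than the second-nearest candidate.
    Returns a list of (index_in_a, index_in_b) matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every B descriptor
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # ratio test rejects ambiguous matches
            matches.append((i, int(best)))
    return matches
```

Each surviving pair corresponds to one candidate group of matching feature points between two views; chaining such pairs across views yields groups spanning more than two images.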
作为一个可选的实施例,上述S202,确定多组匹配特征点的三维坐标和每张原始图像对应的相机相对位姿,包括:基于多组匹配特征点在对应的原始图像中的像素坐标和多张原始图像对应的相机内参,利用三角化方法,确定每张原始图像对应的相机相对位姿;基于多组匹配特征点在对应的原始图像中的像素坐标和每张原始图像对应的相机相对位姿,利用三角化方法,确定多组匹配特征点的三维坐标。As an optional embodiment, in the above S202, determining the three-dimensional coordinates of multiple sets of matching feature points and the relative camera pose corresponding to each original image includes: based on the pixel coordinates of multiple sets of matching feature points in the corresponding original image and The camera internal parameters corresponding to multiple original images, use the triangulation method to determine the relative camera pose corresponding to each original image; based on the pixel coordinates of multiple sets of matching feature points in the corresponding original image and the camera relative to each original image Pose, using the triangulation method to determine the three-dimensional coordinates of multiple sets of matching feature points.
示例性地,第一原始图像中有特征点1-1,第二原始图像中有特征点2-1,特征点1-1和特征点2-1为一组匹配特征点。基于特征点1-1在第一原始图像中的像素坐标、特征点2-1在第二原始图像中的像素坐标、第一原始图像的相机相对位姿(例如预设的单位阵)以及相机内参,通过三角化方法,可以得到第二原始图像对应的相机相对位姿。基于第二原始图像对应的相机相对位姿、特征点1-1在第一原始图像中的像素坐标和特征点2-1在第二原始图像中的像素坐标,通过三角化方法,可以确定匹配特征点的三维坐标。Exemplarily, there is a feature point 1-1 in the first original image, and a feature point 2-1 in the second original image, and the feature point 1-1 and the feature point 2-1 are a set of matching feature points. Based on the pixel coordinates of feature point 1-1 in the first original image, the pixel coordinates of feature point 2-1 in the second original image, the camera relative pose of the first original image (such as the preset unit matrix) and the camera The internal reference, through the triangulation method, can obtain the relative pose of the camera corresponding to the second original image. Based on the relative pose of the camera corresponding to the second original image, the pixel coordinates of feature point 1-1 in the first original image and the pixel coordinates of feature point 2-1 in the second original image, the matching can be determined by triangulation method The three-dimensional coordinates of the feature points.
For example, the first original image contains feature point 1-1, the second original image contains feature point 2-1, and the third original image contains feature point 3-1, where feature point 1-1, feature point 2-1, and feature point 3-1 form one set of matching feature points. Based on the pixel coordinates of feature point 1-1 in the first original image, the pixel coordinates of feature point 2-1 in the second original image, the relative camera pose of the first original image (for example, a preset identity matrix), and the camera intrinsic parameters, the relative camera pose corresponding to the second original image and the three-dimensional coordinates P1' of the matching feature point can be determined by the triangulation method described above. Based on the pixel coordinates of feature point 1-1 in the first original image, the pixel coordinates of feature point 3-1 in the third original image, the relative camera pose of the first original image (for example, a preset identity matrix), and the camera intrinsic parameters, the relative camera pose corresponding to the third original image and the three-dimensional coordinates P1" of the matching feature point can be determined by the same triangulation method. Based on the three-dimensional coordinates P1' and P1", P1 is determined as the three-dimensional coordinates of the corresponding matching feature point by jointly solving the equations.
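The two-view triangulation step above can be sketched as follows. This is an illustrative ray-midpoint formulation, not the embodiment's exact procedure; the intrinsics `fx, fy, cx, cy` and the ray/center inputs are assumptions for the sketch.

```python
def backproject(pixel, fx, fy, cx, cy):
    """Convert pixel coordinates to a normalized ray direction in the
    camera frame, under an assumed pinhole model with intrinsics
    (fx, fy, cx, cy)."""
    u, v = pixel
    d = ((u - cx) / fx, (v - cy) / fy, 1.0)
    n = sum(c * c for c in d) ** 0.5
    return tuple(c / n for c in d)

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a 3D point as the midpoint of the shortest segment
    between the rays c1 + t*d1 and c2 + s*d2 (camera centers c1, c2;
    ray directions d1, d2 in world coordinates)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = tuple(x - y for x, y in zip(c2, c1))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t = (c * e - b * f) / denom
    s = (b * e - a * f) / denom
    p1 = tuple(ci + t * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + s * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(p1, p2))
```

With exactly intersecting rays the midpoint coincides with the intersection; with noisy matches it gives the least-squares closest point between the two rays.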
As an optional embodiment, in S203 above, determining the depth information of the corresponding matching feature points in each original image includes: inputting the three-dimensional coordinates of the multiple sets of matching feature points and the relative camera pose corresponding to each original image into a depth estimation network model, to obtain the depth information of the corresponding matching feature points in each original image.
For example, the depth estimation network model may be a convolutional neural network (CNN).
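The internals of the depth estimation network are not specified here, but the geometric quantity it targets is well defined: the depth of a matching feature point in a given view is the z-component of the point after transforming it into that view's camera frame by the relative pose. A hypothetical sketch of that relationship (rotation `R` is assumed world-to-camera):

```python
def depth_in_view(point_w, R, t):
    """Depth of a world point in a camera whose pose is the rotation R
    (3x3, world -> camera) and translation t: the z-component of
    R @ X + t."""
    x = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i]
         for i in range(3)]
    return x[2]
```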
As an optional embodiment, in S204 above, determining the ambient light map corresponding to each original image includes: inputting the depth information of the corresponding matching feature points in each original image and the multiple original images into an illumination estimation network model, to obtain the ambient light map corresponding to each original image.
For example, the illumination estimation network model may be Gardner's illumination estimation network model.
As an optional embodiment, in S205 above, determining the three-dimensional coordinates of the virtual light source includes: determining the pixel coordinates with the smallest pixel amplitude in the ambient light map corresponding to each original image as the pixel coordinates corresponding to the virtual light source in that original image; and determining the three-dimensional coordinates of the virtual light source based on the pixel coordinates corresponding to the virtual light source in each original image and the relative camera pose corresponding to each original image.
It should be understood that the pixel amplitude indicates the brightness of a single pixel. For example, the pixel amplitude may range from 0 to 255, where a pixel amplitude of 0 may be used to indicate white and a pixel amplitude of 255 may be used to indicate black.
For example, FIG. 3a shows an ambient light map in which there is a virtual light source P, and FIG. 3b shows the position of the virtual light source P in the camera coordinate system. Point P in FIG. 3a is the point whose pixel amplitude is closest to 0, and point P is determined as the position of the virtual light source in the ambient light map of FIG. 3a. The pixel coordinates of the virtual light source in the ambient light map of FIG. 3a can be converted from the pixel coordinate system to the image coordinate system and then to the camera coordinate system, yielding the coordinates of the virtual light source P in the camera coordinate system as shown in FIG. 3b.
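The two operations above, locating the minimum-amplitude pixel and lifting it out of the pixel coordinate system, can be sketched as follows. The pinhole intrinsics and the depth value passed to `pixel_to_camera` are assumptions for the sketch; the patent does not fix these details.

```python
def find_light_pixel(light_map):
    """Return the (row, col) of the smallest pixel amplitude in an
    ambient light map given as a 2D list (0 = white = brightest under
    this document's convention)."""
    best = (0, 0)
    for r, row in enumerate(light_map):
        for c, amp in enumerate(row):
            if amp < light_map[best[0]][best[1]]:
                best = (r, c)
    return best

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Convert pixel coordinates (u, v) at a given depth into
    camera-frame coordinates: pixel -> image plane -> camera frame."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
```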
Optionally, the three-dimensional coordinates of the virtual light source are determined by triangulation, based on the pixel coordinates corresponding to the virtual light source in each original image and the relative camera pose corresponding to each original image.
The embodiments of the present application are described in detail below with reference to FIG. 4, taking 10 original images captured from different viewing angles under outdoor natural light as an example.
FIG. 4 is a schematic flowchart of another image processing method 400 provided by an embodiment of the present application. The method 400 may be executed by the image processing device 102 shown in FIG. 1, or by other similar devices, which is not limited in the embodiments of the present application. The method 400 includes the following steps:
S401: The image processing device acquires 10 original images of a target object, where the 10 original images are obtained by photographing the target object from different viewing angles.
S402: The image processing device extracts a plurality of feature points from each of the above 10 original images, to obtain a plurality of pieces of feature point information for each original image, where the feature point information includes the pixel coordinates of the feature points in the original image.
For example, the image processing device may extract the plurality of feature points from each of the above 10 original images by using the SIFT algorithm, the SURF algorithm, the FAST algorithm, or the ORB algorithm.
S403: The image processing device performs feature point matching on the 10 original images based on the feature point information of the above 10 original images, to obtain multiple sets of matching feature points. A set of matching feature points can be understood as the imaging points, in multiple original images, of the same physical space point on the target object.
For example, the image processing device may match the feature points of the above 10 original images by using a brute-force matcher (BFM) strategy, to obtain the multiple sets of matching feature points.
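A brute-force matcher of the kind named here can be sketched in a few lines: every descriptor in one image is compared against every descriptor in the other, and the nearest neighbor is kept. The squared-L2 metric and the cross-check filter are common choices assumed for this sketch, not requirements of the embodiment.

```python
def sqdist(a, b):
    """Squared L2 distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def brute_force_match(d1, d2):
    """Match descriptor lists d1 and d2 by exhaustive nearest-neighbor
    search, keeping only cross-checked pairs (i is j's nearest and
    vice versa). Returns a list of (index_in_d1, index_in_d2)."""
    matches = []
    for i, a in enumerate(d1):
        j = min(range(len(d2)), key=lambda k: sqdist(a, d2[k]))
        i_back = min(range(len(d1)), key=lambda k: sqdist(d2[j], d1[k]))
        if i_back == i:  # cross-check: mutual nearest neighbors only
            matches.append((i, j))
    return matches
```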
S404: Based on the pixel coordinates of the multiple sets of matching feature points in the corresponding original images and the camera intrinsic parameters corresponding to the original images, the image processing device obtains the three-dimensional coordinates of the multiple sets of matching feature points and the relative camera pose corresponding to each original image.
Optionally, the image processing device may obtain the relative camera pose by triangulation, based on the pixel coordinates, in the corresponding original images, of the matching feature points in each pair of original images and the camera intrinsic parameters. The image processing device may then obtain the three-dimensional coordinates of the matching feature points in each pair of original images by triangulation, based on the obtained relative camera pose and the pixel coordinates of those matching feature points in the corresponding original images.
S405: Based on the three-dimensional coordinates of the matching feature points in each pair of original images and the relative camera pose corresponding to each original image, the image processing device obtains the depth information of the matching feature points in their corresponding original images through the depth estimation network model.
For example, the depth estimation network model may be a CNN.
S406: Based on the above 10 original images and the depth information of the corresponding matching feature points in the 10 original images, the image processing device obtains the ambient light maps corresponding to the 10 original images by using the illumination estimation network model.
For example, the illumination estimation network model may be Gardner's illumination estimation network model.
S407: Based on the ambient light maps corresponding to the 10 original images, the image processing device determines the pixel coordinates with the smallest pixel amplitude in each ambient light map as the pixel coordinates corresponding to the virtual light source in each original image.
Optionally, the image processing device may divide each ambient light map into connected regions according to the pixel amplitudes of the pixels in the map, find, within the regions whose amplitudes are less than or equal to a preset threshold, the point with the smallest pixel amplitude, and take the position of that point as the position of the virtual point light source. The pixel amplitude ranges from 0 to 255: a pixel amplitude of 0 indicates white, and a pixel amplitude of 255 indicates black.
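The connected-region search described above can be sketched with a flood fill over pixels at or below the threshold, tracking the minimum-amplitude pixel inside those regions. The 4-neighbor connectivity is an assumption; the embodiment does not specify one.

```python
def find_light_source(light_map, threshold):
    """Within connected regions (4-connectivity) of pixels whose
    amplitude is <= threshold, return the (row, col) of the pixel
    with the smallest amplitude, or None if no pixel qualifies."""
    h, w = len(light_map), len(light_map[0])
    best, seen = None, set()
    for r in range(h):
        for c in range(w):
            if light_map[r][c] > threshold or (r, c) in seen:
                continue
            stack = [(r, c)]
            seen.add((r, c))
            while stack:  # flood-fill one connected region
                y, x = stack.pop()
                if best is None or light_map[y][x] < light_map[best[0]][best[1]]:
                    best = (y, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and (ny, nx) not in seen
                            and light_map[ny][nx] <= threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
    return best
```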
S408: The image processing device calculates the three-dimensional coordinates of the virtual light source based on the relative camera poses corresponding to the 10 original images and the pixel coordinates corresponding to the virtual light source in each original image.
Optionally, the image processing device may calculate the three-dimensional coordinates of the virtual light source by triangulation, based on the relative camera poses corresponding to the 10 original images and the pixel coordinates corresponding to the virtual light source in each original image.
S409: Based on the relative camera poses corresponding to the 10 original images, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple sets of matching feature points, the image processing device calculates the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple sets of matching feature points by using a bundle optimization equation.
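The bundle optimization equation is not written out in this step; a typical objective of this kind sums squared reprojection errors over all views and all points (with the virtual light source treated as one more point). The following sketch computes that objective under a simple pinhole assumption; the pose convention (`R` world-to-camera, then translation `t`) and intrinsics are assumptions.

```python
def project(p, R, t, fx, fy, cx, cy):
    """Project world point p into a view with pose (R, t) and pinhole
    intrinsics, returning pixel coordinates (u, v)."""
    x = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
    return (fx * x[0] / x[2] + cx, fy * x[1] / x[2] + cy)

def reprojection_error(points, poses, obs, intr):
    """Total squared reprojection error, the quantity a bundle
    optimization minimizes. obs is a list of
    (point_index, view_index, (u, v)) observations."""
    err = 0.0
    for pi, vi, (u, v) in obs:
        R, t = poses[vi]
        pu, pv = project(points[pi], R, t, *intr)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err
```

A solver would then adjust `points` (and optionally `poses`) to drive this error down, yielding the updated three-dimensional coordinates.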
S410: The image processing device performs illumination compensation on the corresponding 10 original images based on the updated three-dimensional coordinates of the virtual light source and the ambient light maps corresponding to the 10 original images, to obtain 10 illumination-compensated images.
S411: The image processing device determines the updated three-dimensional coordinates of the multiple sets of matching feature points as a colorless sparse three-dimensional point cloud, obtains the pixel coordinates of the multiple sets of matching feature points in the corresponding 10 illumination-compensated images, calculates the average of the pixel values sampled at those coordinates in the corresponding illumination-compensated images, and colors the colorless sparse three-dimensional point cloud accordingly, to obtain a colored sparse three-dimensional point cloud.
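The coloring step for one point can be sketched as follows, assuming each illumination-compensated image is a 2D grid of RGB tuples and each match supplies one (row, col) coordinate per image; these data layouts are assumptions for the sketch.

```python
def color_point(images, coords):
    """Color one sparse-cloud point by averaging the RGB values
    sampled at its matching feature point's coordinates in each
    illumination-compensated image."""
    samples = [img[r][c] for img, (r, c) in zip(images, coords)]
    n = len(samples)
    return tuple(sum(s[k] for s in samples) / n for k in range(3))
```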
S412: The image processing device obtains a three-dimensional model of the target object based on the 10 illumination-compensated images and the colored sparse three-dimensional point cloud.
Optionally, the image processing device obtains the three-dimensional model of the target object through a dense reconstruction algorithm, based on the 10 illumination-compensated images and the colored sparse three-dimensional point cloud. The dense reconstruction algorithm may be a multi-view stereo (MVS) algorithm.
In the image processing method of the embodiments of the present application, illumination compensation is performed on each original image by determining the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple sets of matching feature points, and the three-dimensional model of the target object is determined based on the illumination-compensated images and the updated three-dimensional coordinates of the multiple sets of matching feature points. Since the updated three-dimensional coordinates of the virtual light source and of the multiple sets of matching feature points have a smaller error than the non-updated ones, this helps improve the accuracy of the three-dimensional reconstruction result.
The image processing method of the embodiments of the present application has been described in detail above with reference to FIG. 2 to FIG. 4. The image processing device of the embodiments of the present application is described in detail below with reference to FIG. 5 and FIG. 6.
FIG. 5 shows an image processing device 500 provided by an embodiment of the present application. The image processing device 500 includes an acquisition module 501 and a processing module 502.
The acquisition module 501 is configured to acquire multiple original images of a target object, the multiple original images being obtained by photographing the target object from different viewing angles. The processing module 502 is configured to: obtain multiple sets of matching feature points based on the multiple original images, and determine the three-dimensional coordinates of the multiple sets of matching feature points and the relative camera pose corresponding to each original image based on the pixel coordinates of the multiple sets of matching feature points in the corresponding original images; determine the depth information of the corresponding matching feature points in each original image based on the three-dimensional coordinates of the multiple sets of matching feature points and the relative camera pose corresponding to each original image; determine the ambient light map corresponding to each original image based on the depth information of the corresponding matching feature points in each original image and the multiple original images; determine the three-dimensional coordinates of the virtual light source based on the ambient light map corresponding to each original image and the relative camera pose corresponding to each original image; perform illumination compensation on each original image based on the three-dimensional coordinates of the virtual light source and the ambient light map corresponding to each original image, to obtain multiple illumination-compensated images; and determine a three-dimensional model of the target object based on the multiple illumination-compensated images and the three-dimensional coordinates of the multiple sets of matching feature points.
Optionally, the processing module 502 is further configured to: determine the updated three-dimensional coordinates of the virtual light source and the updated three-dimensional coordinates of the multiple sets of matching feature points, based on the relative camera pose corresponding to each original image, the three-dimensional coordinates of the virtual light source, and the three-dimensional coordinates of the multiple sets of matching feature points; perform illumination compensation on each original image based on the updated three-dimensional coordinates of the virtual light source and the ambient light map corresponding to each original image; and determine the three-dimensional model of the target object based on the multiple illumination-compensated images and the updated three-dimensional coordinates of the multiple sets of matching feature points.
Optionally, the processing module 502 is further configured to: determine the updated pixel coordinates of the virtual light source in each original image based on the updated three-dimensional coordinates of the virtual light source; displace the pixels in the ambient light map corresponding to each original image based on the difference between the updated pixel coordinates of the virtual light source in each original image and the pixel coordinates corresponding to the virtual light source in each original image, to obtain an ambient light compensation map corresponding to each original image; and perform illumination compensation on each original image based on its corresponding ambient light compensation map, to obtain the multiple illumination-compensated images.
Optionally, the processing module 502 is further configured to determine the updated three-dimensional coordinates of the multiple sets of matching feature points as a colorless sparse three-dimensional point cloud. The acquisition module 501 is further configured to obtain the pixel coordinates of the multiple sets of matching feature points in the corresponding illumination-compensated images. The processing module is further configured to: color the colorless sparse three-dimensional point cloud by using the pixel coordinates of the multiple sets of matching feature points in the corresponding illumination-compensated images, to obtain a colored sparse three-dimensional point cloud; and determine the three-dimensional model of the target object based on the colored sparse three-dimensional point cloud and the multiple illumination-compensated images.
Optionally, the processing module 502 is further configured to: extract feature points from each of the multiple original images; and perform feature point matching on the multiple original images, to obtain the multiple sets of matching feature points.
Optionally, the processing module 502 is further configured to: determine the relative camera pose corresponding to each original image by triangulation, based on the pixel coordinates of the multiple sets of matching feature points in the corresponding original images and the camera intrinsic parameters corresponding to the multiple original images; and determine the three-dimensional coordinates of the multiple sets of matching feature points by triangulation, based on the pixel coordinates of the multiple sets of matching feature points in the corresponding original images and the relative camera pose corresponding to each original image.
Optionally, the processing module 502 is further configured to input the three-dimensional coordinates of the multiple sets of matching feature points and the relative camera pose corresponding to each original image into the depth estimation network model, to obtain the depth information of the corresponding matching feature points in each original image.
Optionally, the processing module 502 is further configured to input the depth information of the corresponding matching feature points in each original image and the multiple original images into the illumination estimation network model, to obtain the ambient light map corresponding to each original image.
Optionally, the processing module 502 is further configured to: determine the pixel coordinates with the smallest pixel amplitude in the ambient light map corresponding to each original image as the pixel coordinates corresponding to the virtual light source in each original image; and determine the three-dimensional coordinates of the virtual light source based on the pixel coordinates corresponding to the virtual light source in each original image and the relative camera pose corresponding to each original image.
It should be understood that the device 500 here is embodied in the form of functional modules. The term "module" here may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (for example, a shared processor, a dedicated processor, or a group processor) and memory, a combined logic circuit, and/or other suitable components that support the described functions. In an optional example, those skilled in the art can understand that the device 500 may specifically be the image processing device in the foregoing embodiments, and the device 500 may be configured to execute the processes and/or steps corresponding to the image processing device in the foregoing method embodiments. To avoid repetition, details are not repeated here.
The device 500 has the functions of implementing the corresponding steps performed by the image processing device in the foregoing methods. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions.
In the embodiments of the present application, the device 500 in FIG. 5 may also be a chip or a chip system, for example, a system on chip (SoC).
FIG. 6 shows another image processing device 600 provided by an embodiment of the present application. The image processing device 600 includes a processor 601, a transceiver 602, and a memory 603. The processor 601, the transceiver 602, and the memory 603 communicate with each other through an internal connection path. The memory 603 is configured to store instructions, and the processor 601 is configured to execute the instructions stored in the memory 603, to control the transceiver 602 to send and/or receive signals.
It should be understood that the image processing device 600 may specifically be the image processing device in the foregoing embodiments, and may be configured to execute the steps and/or processes corresponding to the image processing device in the foregoing method embodiments. Optionally, the memory 603 may include a read-only memory and a random access memory, and provide instructions and data to the processor 601. A portion of the memory 603 may also include a non-volatile random access memory. For example, the memory 603 may also store information about the device type. The processor 601 may be configured to execute the instructions stored in the memory, and when the processor 601 executes the instructions stored in the memory, the processor 601 is configured to execute the steps and/or processes of the foregoing method embodiments corresponding to the image processing device. The transceiver 602 may include a transmitter and a receiver, where the transmitter may be configured to implement the steps and/or processes of the transceiver for performing sending actions, and the receiver may be configured to implement the steps and/or processes of the transceiver for performing receiving actions.
It should be understood that, in the embodiments of the present application, the processor 601 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
During implementation, the steps of the foregoing methods may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The steps of the methods disclosed with reference to the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor executes the instructions in the memory and completes the steps of the foregoing methods in combination with its hardware. To avoid repetition, details are not described here.
The present application further provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a computer program, and the computer program is used to implement the method corresponding to the image processing device in the foregoing embodiments.
The present application further provides a computer program product. The computer program product includes a computer program (which may also be referred to as code or instructions). When the computer program runs on a computer, the computer can execute the method corresponding to the image processing device in the foregoing embodiments.
Those of ordinary skill in the art may realize that the modules and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as going beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments. Details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of the modules is merely a logical function division, and there may be other division methods in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or modules, and may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional modules in the embodiments of this application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.
Claims (21)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211573792.2A CN116704111B (en) | 2022-12-08 | 2022-12-08 | Image processing method and apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116704111A (en) | 2023-09-05 |
| CN116704111B (en) | 2024-08-27 |
Family
ID=87832750
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211573792.2A Active CN116704111B (en) | 2022-12-08 | 2022-12-08 | Image processing method and apparatus |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116704111B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117911631A (en) * | 2024-03-19 | 2024-04-19 | 广东石油化工学院 | Three-dimensional reconstruction method based on heterogeneous image matching |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109658449A (en) * | 2018-12-03 | 2019-04-19 | 华中科技大学 | A kind of indoor scene three-dimensional rebuilding method based on RGB-D image |
| CN109816782A (en) * | 2019-02-03 | 2019-05-28 | 哈尔滨理工大学 | A 3D reconstruction method of indoor scene based on binocular vision |
| CN110896609A (en) * | 2018-09-27 | 2020-03-20 | 武汉资联虹康科技股份有限公司 | A TMS localization and navigation method for transcranial magnetic stimulation therapy |
| WO2021238923A1 (en) * | 2020-05-25 | 2021-12-02 | 追觅创新科技(苏州)有限公司 | Camera parameter calibration method and device |
| CN114119864A (en) * | 2021-11-09 | 2022-03-01 | 同济大学 | A positioning method and device based on three-dimensional reconstruction and point cloud matching |
| CN114332415A (en) * | 2022-03-09 | 2022-04-12 | 南方电网数字电网研究院有限公司 | Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology |
| CN115035175A (en) * | 2022-05-26 | 2022-09-09 | 华中科技大学 | Three-dimensional model construction data processing method and system |
| CN115205489A (en) * | 2022-06-06 | 2022-10-18 | 广州中思人工智能科技有限公司 | Three-dimensional reconstruction method, system and device in large scene |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117911631A (en) * | 2024-03-19 | 2024-04-19 | 广东石油化工学院 | Three-dimensional reconstruction method based on heterogeneous image matching |
| CN117911631B (en) * | 2024-03-19 | 2024-05-28 | 广东石油化工学院 | A 3D reconstruction method based on heterogeneous image matching |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116704111B (en) | 2024-08-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111598993B (en) | Three-dimensional data reconstruction method and device based on multi-view imaging technology | |
| CN110176032B (en) | Three-dimensional reconstruction method and device | |
| US9269003B2 (en) | Diminished and mediated reality effects from reconstruction | |
| CN107223269B (en) | Three-dimensional scene positioning method and device | |
| CN106940704B (en) | Positioning method and device based on grid map | |
| CN111144349B (en) | Indoor visual relocation method and system | |
| CN110728671B (en) | Vision-Based Dense Reconstruction Methods for Textureless Scenes | |
| CN109472828B (en) | Positioning method, positioning device, electronic equipment and computer readable storage medium | |
| JP2018028899A (en) | Image registration method and system | |
| CN110070598B (en) | Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof | |
| CN113052907B (en) | Positioning method of mobile robot in dynamic environment | |
| CN105654547B (en) | Three-dimensional rebuilding method | |
| WO2021136386A1 (en) | Data processing method, terminal, and server | |
| CN112184811B (en) | Monocular space structured light system structure calibration method and device | |
| CN115035235B (en) | Three-dimensional reconstruction method and device | |
| WO2021035627A1 (en) | Depth map acquisition method and device, and computer storage medium | |
| CN112146647B (en) | Binocular vision positioning method and chip for ground texture | |
| CN115409949A (en) | Model training method, perspective image generation method, device, equipment and medium | |
| CN111882655A (en) | Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction | |
| CN114766039A (en) | Object detection method, object detection device, terminal device, and medium | |
| CN113538538A (en) | Binocular image alignment method, electronic device and computer-readable storage medium | |
| JP2017037426A (en) | Information processing apparatus, information processing method, and program | |
| WO2021100681A1 (en) | Three-dimensional model generation method and three-dimensional model generation device | |
| JP2020004219A (en) | Apparatus, method, and program for generating three-dimensional shape data | |
| CN110310325B (en) | Virtual measurement method, electronic device and computer readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| CP03 | Change of name, title or address | ||
| CP03 | Change of name, title or address |
- Address after: Unit 3401, Unit A, Building 6, Shenye Zhongcheng, No. 8089 Hongli West Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen, Guangdong 518040
- Patentee after: Honor Terminal Co.,Ltd.
- Country or region after: China
- Address before: 3401, Unit A, Building 6, Shenye Zhongcheng, No. 8089 Hongli West Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen, Guangdong
- Patentee before: Honor Device Co.,Ltd.
- Country or region before: China