WO2024183760A1 - Scanning data splicing method and apparatus, and device and medium - Google Patents
- Publication number
- WO2024183760A1 WO2024183760A1 PCT/CN2024/080360 CN2024080360W WO2024183760A1 WO 2024183760 A1 WO2024183760 A1 WO 2024183760A1 CN 2024080360 W CN2024080360 W CN 2024080360W WO 2024183760 A1 WO2024183760 A1 WO 2024183760A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- facial
- result
- scan
- tooth
- scanning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to the field of data processing technology, and in particular to a scan data splicing method, device, equipment and medium.
- the stitching accuracy of the mouth scan data and the face scan data is relatively poor.
- the present disclosure provides a scan data stitching method, device, equipment and medium.
- the present disclosure provides a scan data stitching method, the method comprising:
- the facial scanning mesh result and the above-mentioned mouth scanning mesh result are spliced to obtain an initial splicing result.
- the present disclosure also provides a scan data splicing device, the device comprising:
- the first acquisition module is used to obtain the facial posture corresponding to the user's facial scanning grid result
- a second acquisition module is used to acquire the tooth posture corresponding to the user's mouth scanning grid result
- a splicing module is used to splice the facial scan grid result and the mouth scan grid result based on the facial posture and the tooth posture to obtain an initial splicing result.
- An embodiment of the present disclosure also provides an electronic device, which includes: a processor; a memory for storing executable instructions of the processor; the processor is used to read the executable instructions from the memory and execute the instructions to implement the scan data stitching method provided in the embodiment of the present disclosure.
- the embodiment of the present disclosure further provides a computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to execute the scan data stitching method provided by the embodiment of the present disclosure.
- the scanning data stitching solution provided by the embodiment of the present disclosure obtains the facial pose corresponding to the user's facial scanning grid result, and obtains the tooth pose corresponding to the user's mouth scanning grid result, and stitches the facial scanning grid result and the mouth scanning grid result based on the facial pose and the tooth pose to obtain an initial stitching result.
- FIG1 is a schematic diagram of a flow chart of a scan data stitching method provided by an embodiment of the present disclosure
- FIG. 2 is a flow chart of another scan data stitching method provided by an embodiment of the present disclosure
- FIG3 is a schematic diagram of the structure of a scan data splicing device provided by an embodiment of the present disclosure
- FIG. 4 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present disclosure.
- the embodiments of the present disclosure propose a scan data stitching method, which improves on the existing rough stitching method and increases the stitching success rate in the tooth-stitching scenario involving oral scan data and facial scan data.
- FIG1 is a flow chart of a scan data stitching method provided by an embodiment of the present disclosure, and the method can be performed by a scan data stitching device, wherein the device can be implemented by software and/or hardware, and can generally be integrated in an electronic device. As shown in FIG1 , the method includes:
- Step 101 Obtain the facial pose corresponding to the user's facial scanning grid result.
- the face scanning mesh result refers to the mesh data of the user's face.
- the face point cloud data can be obtained by scanning the user's face with a face scanner or other equipment, and then meshed to obtain the face scanning mesh result.
- the facial pose includes, for example, the frontal face orientation and the side face orientation.
- the facial pose corresponding to the facial scan grid result can be obtained.
- a facial image of the user is obtained, and the facial image is recognized to obtain a dental area and key facial feature points.
- the dental area and key facial feature points are projected onto the facial scan grid result to obtain a facial scan grid result to be processed; calculations are performed based on the facial scan grid result to be processed to obtain the facial pose corresponding to the facial scan grid result.
- facial feature points corresponding to the facial scan grid result are obtained, and the facial pose corresponding to the facial scan grid result is determined by calculation based on the facial feature points.
- the above two methods are only examples of obtaining the facial pose corresponding to the user's facial scan grid result, and the present disclosure does not specifically limit the implementation method of obtaining the facial pose corresponding to the user's facial scan grid result.
- Step 102 Obtain the tooth posture corresponding to the user's mouth scan grid result.
- the mouth scanning grid result refers to the grid data of the user's mouth.
- the mouth of the user can be scanned by a device such as a mouth scanner to obtain the mouth point cloud data, and then the mouth scanning grid result can be obtained by gridding.
- tooth posture refers to the direction of the front teeth area.
- the mouth scanning grid result is projected onto a two-dimensional plane to obtain two-dimensional coordinate points of the mouth.
- the two-dimensional coordinate points of the mouth are processed according to a preset processing method to obtain a two-dimensional curve.
- the parameters corresponding to the two-dimensional curve are fitted based on the dental arch line fitting method to obtain the tooth posture; in other embodiments, the mouth scanning grid result is directly calculated based on a preset posture estimation formula to obtain the tooth posture; the above two methods are only examples of obtaining the tooth posture corresponding to the mouth scanning grid result.
- the embodiments of the present disclosure do not specifically limit the method of obtaining the tooth posture corresponding to the mouth scanning grid result.
- Step 103 splice the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial splicing result.
- the facial scan grid result and the mouth scan grid result are spliced based on the facial pose and the tooth pose to obtain an initial splicing result.
- the mouth scan grid result is converted to a coordinate system corresponding to the facial scan grid result, and feature points of the same target number and the same position are respectively obtained from the facial scan grid result and the mouth scan grid result, and the rigid body transformation matrix between the feature points is determined.
- the facial scan grid result and the mouth scan grid result to be processed are spliced according to the rigid body transformation matrix to obtain an initial splicing result.
- the facial scan grid result is converted to a coordinate system corresponding to the mouth scan grid result
- the upper teeth area is cut from the facial scan grid result
- the front teeth area of a certain width is cut from the mouth scan grid result.
- the cut front teeth area is spliced to the upper teeth area to obtain an initial splicing result.
- the scan data stitching solution provided by the embodiment of the present disclosure obtains the facial pose corresponding to the user's facial scan grid result, and obtains the tooth pose corresponding to the user's mouth scan grid result, and stitches the facial scan grid result and the mouth scan grid result based on the facial pose and the tooth pose to obtain an initial stitching result.
- the above technical solution is adopted, and the facial pose and the tooth pose are used to assist the stitching, so as to improve the accuracy of the initial stitching result while ensuring the stitching efficiency, thereby improving the final stitching success rate and meeting the user's accuracy requirements for the stitching of scan data.
- FIG2 is a flow chart of another scan data stitching method provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment further optimizes the above scan data stitching method. As shown in FIG2 , the method includes:
- Step 201 Obtain a user's facial image, and recognize the facial image to obtain the tooth area and key facial feature points.
- the face image refers to a two-dimensional picture including the user's face, which can be obtained by scanning the user's face through a device such as a face scanner.
- facial images can be recognized by a facial intelligent recognition algorithm in an artificial intelligence recognition algorithm to obtain various areas corresponding to the face, such as the eye area, nose area, and tooth area, etc., and obtain key facial feature points, such as the tip of the nose, the root of the nose, the chin, the center of the mouth, the left corner of the mouth, the outer corner of the left eye, and the inner corner of the right eye, etc. 75 facial feature points.
- the tooth area may include the upper tooth area and the lower tooth area or the entire tooth area.
- the facial scanning grid result mainly exposes the upper teeth, with less exposure of the lower teeth.
- the tooth area generally refers to the upper tooth area. It should be noted that after the upper tooth area is spliced, the lower tooth area can be determined according to the tooth occlusion state, thereby further improving the splicing efficiency.
- Step 202 Project the tooth area and key feature points of the face to the facial scan grid result to obtain the facial scan grid result to be processed.
- after the tooth area and the key facial feature points are obtained from the facial image, they are projected onto the facial scanning grid result to obtain the facial scanning grid result to be processed; that is, the pixel coordinate points of the tooth area and the two-dimensional coordinate points of the key facial feature points are mapped to the facial scanning grid result, thereby obtaining a facial scanning grid result to be processed that includes the tooth area and the key facial feature points.
- the facial image is a two-dimensional image
- the tooth area and the key feature points of the face are both two-dimensional coordinate points in the two-dimensional image coordinate system.
- the tooth area and the key feature points of the face in the image coordinate system can be converted to three-dimensional coordinate points in the scanning camera coordinate system by obtaining the coordinate transformation matrix between the image coordinate system and the scanning camera coordinate system.
- the tooth area and the key feature points of the face are projected onto the facial scanning grid result based on the camera projection principle to obtain the facial scanning grid result to be processed.
- the tooth area and key feature points of the face are projected onto the facial scanning grid result to obtain the facial scanning grid result to be processed, including: obtaining pixel coordinate points corresponding to the tooth area; obtaining feature coordinate points corresponding to the key feature points of the face; projecting the pixel coordinate points and feature coordinate points onto the facial scanning grid result according to the transformation relationship between the image coordinate system and the scanning camera coordinate system to obtain the facial scanning grid result to be processed.
- the dental area and the key facial feature points on the face image are projected onto the facial scanning grid result based on the camera projection principle.
- the pixel coordinate points corresponding to the tooth area are obtained, the feature coordinate points corresponding to the key feature points of the face are obtained, and the pixel coordinate points and the feature coordinate points are projected to the facial scanning grid result according to the transformation relationship between the image coordinate system and the scanning camera coordinate system to obtain the facial scanning grid result to be processed.
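The projection step described above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the function name, the nearest-vertex-to-ray matching, and the pinhole model with intrinsics `K` and camera pose `R`, `t` are all assumptions of the sketch.

```python
import numpy as np

def project_pixels_to_mesh(pixels, K, R, t, vertices):
    """Map 2-D landmark pixels onto a scanned face mesh (illustrative sketch).

    pixels:   (N, 2) pixel coordinates of tooth-area / facial landmarks
    K:        (3, 3) camera intrinsic matrix
    R, t:     rotation (3, 3) and translation (3,) from world to camera
    vertices: (M, 3) face-scan mesh vertices in world coordinates
    Returns the index of the mesh vertex closest to each back-projected ray.
    """
    # Back-project each pixel to a viewing ray in camera coordinates.
    homo = np.hstack([pixels, np.ones((len(pixels), 1))])        # (N, 3)
    rays_cam = (np.linalg.inv(K) @ homo.T).T                     # (N, 3)
    rays_cam /= np.linalg.norm(rays_cam, axis=1, keepdims=True)

    # Express rays in world coordinates; the camera centre is the ray origin.
    origin = -R.T @ t
    rays_world = (R.T @ rays_cam.T).T

    # Nearest vertex to each ray (point-to-line distance).
    indices = []
    for d in rays_world:
        v = vertices - origin                                    # (M, 3)
        along = v @ d                                            # projected length
        perp = v - np.outer(along, d)                            # offset from the ray
        indices.append(int(np.argmin(np.linalg.norm(perp, axis=1))))
    return np.array(indices)
```

In practice the transformation relationship mentioned in the text would come from the scanner's calibration; here `K`, `R`, `t` are simply taken as given.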
- Step 203 Calculate based on the face scan mesh result to be processed to obtain the face pose corresponding to the face scan mesh result.
- each facial feature point in the facial scanning grid result to be processed can be obtained (that is, the facial feature points), and the facial pose can be estimated based on these feature points.
- 75 facial feature points are input into a preset facial pose calculation formula for calculation to obtain facial poses such as the frontal face orientation and the side face orientation.
- multiple facial directions are determined through each facial feature point, and the facial pose is determined based on the multiple facial directions.
- the specific setting is selected according to the application scenario, and the embodiments of the present disclosure do not impose specific restrictions.
- a facial contour, a first feature area, and a second feature area are determined based on facial feature points
- a facial plane is determined based on the facial contour and the first feature area
- a first midline feature point and a second midline feature point are determined based on the facial feature points
- a longitudinal direction of the facial plane is determined based on the first midline feature point and the second midline feature point
- the longitudinal direction of the facial plane is adjusted based on the center of gravity direction of the facial contour and the center of gravity direction of the second feature area to obtain a facial pose corresponding to the facial scanning grid result.
- the first characteristic area may refer to the eye and eyebrow area; the second characteristic area refers to the mouth area; the first midline refers to the nose midline; and the second midline refers to the mouth midline.
- the facial pose corresponding to the facial scanning grid result is calculated based on the facial feature points in three-dimensional space.
- facial pose estimation can be completed based on the facial feature points.
- the facial plane is estimated by taking the vertices of the facial contour and the eye and eyebrow areas, the feature points of the midline of the nose and the midline of the mouth are taken to estimate the longitudinal direction, and the center of gravity of the mouth area and the center of gravity of the facial contour are taken to calibrate the facial orientation, thereby determining the facial pose.
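A minimal numpy sketch of this kind of landmark-based pose estimation: the facial plane is fitted by SVD to the contour and eye/eyebrow vertices, and the longitudinal direction comes from the nose- and mouth-midline points. The exact landmark sets and formulas are not given in the disclosure, so the inputs and return convention here are assumptions.

```python
import numpy as np

def estimate_face_pose(contour_pts, eyebrow_pts, nose_midline, mouth_midline):
    """Illustrative pose estimate from 3-D facial landmarks.

    All inputs are (N, 3) arrays of landmark coordinates. Returns an
    orthonormal (normal, up, right) triple describing the facial pose.
    """
    # Fit the facial plane to contour + eye/eyebrow vertices via SVD.
    pts = np.vstack([contour_pts, eyebrow_pts])
    centred = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # plane normal, i.e. the frontal face orientation.
    normal = np.linalg.svd(centred)[2][-1]

    # Longitudinal (up) direction from the nose and mouth midline points,
    # projected into the facial plane.
    up = nose_midline.mean(axis=0) - mouth_midline.mean(axis=0)
    up -= (up @ normal) * normal
    up /= np.linalg.norm(up)

    right = np.cross(up, normal)
    return normal, up, right
```

The centre-of-gravity calibration described in the text (mouth-area centroid vs. facial-contour centroid) would fix the sign of `normal`; that step is omitted here.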
- Step 204 Project the mouth scanning grid result onto a two-dimensional plane to obtain the two-dimensional coordinate points of the mouth, process the two-dimensional coordinate points of the mouth according to a preset processing method to obtain a two-dimensional curve, fit the parameters corresponding to the two-dimensional curve based on the dental arch line fitting method to obtain the tooth posture.
- the mouth scanning grid results are projected onto a two-dimensional plane to obtain the two-dimensional coordinate points of the mouth, and the two-dimensional coordinate points of the mouth are converted into a two-dimensional curve (expressed by a two-dimensional curve formula) on the two-dimensional plane through dilation and erosion.
- the parameters corresponding to the two-dimensional curve are fitted by the dental arch line fitting method to obtain the tooth posture.
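One plausible shape for the arch-fitting step, assuming a quadratic arch model (the disclosure only names "dental arch line fitting" without specifying the curve family) and skipping the dilation/erosion preprocessing:

```python
import numpy as np

def fit_dental_arch(mouth_vertices):
    """Project the mouth mesh onto a 2-D plane and fit a parabolic arch.

    mouth_vertices: (N, 3) mesh vertices. Representing the arch as
    y = a*x^2 + b*x + c, and treating z as the projection axis, are
    assumptions of this sketch. Returns the fitted coefficients, the
    apex position, and the 2-D front-teeth direction (the tooth posture).
    """
    # Project onto the x-y plane by dropping z.
    xy = mouth_vertices[:, :2]

    # Least-squares quadratic fit of the arch line.
    a, b, c = np.polyfit(xy[:, 0], xy[:, 1], 2)

    # Apex of the parabola = middle of the front teeth; the front-teeth
    # direction points from the arch opening toward the apex.
    apex_x = -b / (2 * a)
    front_dir = np.array([0.0, -np.sign(a)])   # +y if the arch opens downward
    return (a, b, c), apex_x, front_dir
```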
- Step 205 Based on the facial posture and the tooth posture, the facial scan mesh result and the mouth scan mesh result are unified into the same coordinate system to obtain the facial scan mesh result to be spliced and the mouth scan mesh result to be spliced.
- the facial scan grid result and the oral scan grid result are unified into the same coordinate system based on the facial posture and the tooth posture, including: determining the coordinate transformation matrix between the facial scan grid result and the oral scan grid result based on the facial posture and the tooth posture, and transforming the oral scan grid result into the coordinate system corresponding to the facial scan grid result according to the coordinate transformation matrix; or, transforming the facial scan grid result into the coordinate system corresponding to the oral scan grid result according to the coordinate transformation matrix.
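Assuming each pose is available as an orthonormal axis frame plus an origin (a representation chosen for this sketch; the disclosure does not fix one), the coordinate unification of Step 205 could be sketched as:

```python
import numpy as np

def unify_coordinates(face_frame, face_origin, tooth_frame, tooth_origin):
    """Build the 4x4 matrix carrying the mouth scan into the facial scan's
    coordinate system by aligning the tooth pose frame with the facial
    pose frame. Frames are orthonormal 3x3 matrices; origins are (3,).
    """
    R = face_frame @ tooth_frame.T          # rotate tooth axes onto face axes
    t = face_origin - R @ tooth_origin      # then align the origins
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Applying the inverse of `T` instead would realise the alternative branch, converting the facial scan into the mouth scan's coordinate system.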
- Step 206 Determine a first tooth region to be spliced from the facial scan grid result to be spliced, and determine a second tooth region to be spliced of a target width from the mouth scan grid result to be spliced.
- Step 207 Acquire feature points of the same target quantity and position in the first tooth region to be spliced and the second tooth region to be spliced, respectively, and determine the rigid body transformation matrix between the feature points in the first tooth region to be spliced and the feature points in the second tooth region to be spliced, and splice the face scan mesh result to be processed and the mouth scan mesh result according to the rigid body transformation matrix to obtain an initial splicing result.
- after the facial scanning grid result and the mouth scanning grid result are aligned to the same coordinate system, a preset number of feature points are selected from the facial scanning grid result and the mouth scanning grid result respectively, wherein the number of feature points in the facial scanning grid result is equal to the number of feature points in the mouth scanning grid result and their positions are the same; the rigid body transformation matrix between the feature points in the facial scanning grid result and the feature points in the mouth scanning grid result is determined, and the facial scanning grid result and the mouth scanning grid result are spliced according to the rigid body transformation matrix to obtain an initial splicing result.
- the upper teeth area is cut from the facial scan grid result, and the front teeth area of a certain width (set according to the empirical value) is cut from the mouth scan grid result.
- the cut mouth scan grid result is spliced to the facial scan grid result to obtain the initial splicing result. It can be understood that under the premise that the pose has been confirmed, the feature descriptor no longer needs the estimation of the local coordinate system.
- a local coordinate system with a fixed direction can ensure that the pose changes during the splicing process are as small as possible, further improving the accuracy of the splicing process and the splicing success rate.
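The rigid-body-transformation step between matched feature points can be realised with the standard Kabsch/SVD construction; the disclosure does not name an algorithm, so this is one common choice rather than the patented method:

```python
import numpy as np

def rigid_transform(src_pts, dst_pts):
    """Estimate the rigid body transformation (R, t) mapping matched
    feature points src_pts onto dst_pts (Kabsch algorithm).

    Both inputs are (N, 3) arrays with row-wise correspondence, e.g.
    feature points from the mouth-scan and face-scan tooth regions.
    """
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```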
- Step 208 Process the overlapping source scan point cloud and target scan point cloud in the initial stitching result to obtain a rotation and translation matrix, transform the source scan point cloud to the coordinate system of the target scan point cloud according to the rotation and translation matrix, determine the error between the transformed source scan point cloud and the target scan point cloud, until the error is less than or equal to a preset error threshold, and obtain the target stitching result.
- ICP refers to the Iterative Closest Point algorithm.
- the ICP is used to perform stitching adjustment on the initial stitching result, that is, the overlapping source scan point cloud and target scan point cloud in the initial stitching result are processed to obtain a rotation and translation matrix, and the source scan point cloud is transformed into the coordinate system of the target scan point cloud according to the rotation and translation matrix, and the error between the transformed source scan point cloud and the target scan point cloud is determined until the error is less than or equal to a preset error threshold, and the target stitching result is obtained.
- the error threshold is selected and set according to the application scenario.
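A minimal point-to-point ICP loop matching the description in Step 208; the brute-force nearest-neighbour matching and the per-iteration Kabsch solve are simplifications for illustration (a real implementation would use a k-d tree and a tuned error threshold):

```python
import numpy as np

def icp_refine(source, target, max_iter=50, tol=1e-6):
    """Refine the stitch: repeatedly match each source point to its nearest
    target point, solve for the best rotation/translation, and stop once
    the mean error falls below the threshold (or stops improving).
    Returns the accumulated rotation, translation, and final error.
    """
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # Nearest-neighbour correspondences (brute force, for illustration).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[np.argmin(d2, axis=1)]

        # Best rigid transform for the current correspondences (Kabsch).
        sc, mc = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (matched - mc))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mc - R @ sc

        # Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t

        err = np.linalg.norm(src - matched, axis=1).mean()
        if err < tol or abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```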
- the scan data splicing solution provided by the embodiment of the present disclosure obtains the user's facial image and recognizes it to obtain the tooth area and the key facial feature points; projects the tooth area and the key facial feature points onto the facial scanning grid result to obtain the facial scanning grid result to be processed; performs calculation based on the facial scanning grid result to be processed to obtain the facial pose corresponding to the facial scanning grid result; projects the mouth scanning grid result onto a two-dimensional plane to obtain the two-dimensional coordinate points of the mouth; processes the two-dimensional coordinate points of the mouth according to a preset processing method to obtain a two-dimensional curve; fits the parameters corresponding to the two-dimensional curve based on the dental arch line fitting method to obtain the tooth pose; unifies the facial scanning grid result and the mouth scanning grid result into the same coordinate system based on the facial pose and the tooth pose to obtain the facial scanning grid result to be spliced and the mouth scanning grid result to be spliced; determines a first tooth region to be spliced from the facial scanning grid result to be spliced and a second tooth region to be spliced of a target width from the mouth scanning grid result to be spliced; obtains feature points of the same target number and the same positions in the first tooth region to be spliced and the second tooth region to be spliced, respectively; determines the rigid body transformation matrix between the feature points in the first tooth region to be spliced and those in the second tooth region to be spliced; splices the facial scanning grid result to be processed and the mouth scanning grid result according to the rigid body transformation matrix to obtain an initial splicing result; processes the overlapping source scan point cloud and target scan point cloud in the initial splicing result to obtain a rotation and translation matrix; transforms the source scan point cloud into the coordinate system of the target scan point cloud according to the rotation and translation matrix; and determines the error between the transformed source scan point cloud and the target scan point cloud until the error is less than or equal to the preset error threshold, thereby obtaining the target stitching result.
- the above technical solution solves the technical problem of difficult tooth stitching of mouth scan data and facial scan data. Through auxiliary stitching of facial posture and tooth posture, the accuracy of the initial stitching result is improved while ensuring the stitching efficiency, thereby improving the accuracy of the target stitching result, ensuring the success rate of the final scan data stitching, meeting the user's requirements for the accuracy of scan data stitching, and improving the user's scanning stitching experience.
- FIG3 is a schematic diagram of the structure of a scan data splicing device provided by an embodiment of the present disclosure.
- the device can be implemented by software and/or hardware and can generally be integrated in an electronic device. As shown in FIG3 , the device includes:
- the first acquisition module 301 is configured to acquire the facial posture corresponding to the user's facial scanning grid result
- the second acquisition module 302 is configured to acquire the tooth posture corresponding to the mouth scanning grid result
- the stitching module 303 is configured to stitch the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
- the first acquisition module 301 includes:
- An acquisition unit is configured to acquire a facial image of a user, and recognize the facial image to obtain a tooth region and key facial feature points;
- a projection unit is configured to project the tooth area and the key feature points of the face onto the facial scanning grid result to obtain the facial scanning grid result to be processed;
- the calculation unit is configured to perform calculation based on the facial scanning grid result to be processed to obtain the facial pose corresponding to the facial scanning grid result.
- the projection unit is specifically configured as:
- the pixel coordinate points and the feature coordinate points are projected onto the facial scanning grid result to obtain the facial scanning grid result to be processed.
- the computing unit is specifically configured as:
- the longitudinal direction of the facial plane is adjusted based on the centroid direction of the facial contour and the centroid direction of the second feature area to obtain the facial pose corresponding to the facial scanning grid result.
- the second acquisition module 302 is specifically configured to:
- the parameters corresponding to the two-dimensional curve are fitted to obtain the tooth posture.
- the splicing module 303 includes:
- a coordinate unification unit is configured to unify the facial scanning mesh result and the oral scanning mesh result into the same coordinate system based on the facial posture and the tooth posture, so as to obtain the facial scanning mesh result to be spliced and the oral scanning mesh result to be spliced;
- a determination unit is configured to determine a first tooth region to be spliced from the facial scan mesh result to be spliced, and to determine a second tooth region to be spliced of a target width from the mouth scan mesh result to be spliced;
- an acquisition and determination unit configured to acquire the same target number and the same position of feature points in the first tooth region to be spliced and the second tooth region to be spliced, respectively, and determine a rigid body transformation matrix between the feature points in the first tooth region to be spliced and the feature points in the second tooth region to be spliced;
- the splicing unit is configured to splice the face scanning mesh result to be processed and the mouth scanning mesh result according to the rigid body transformation matrix to obtain an initial splicing result.
- the coordinate unification unit is specifically configured as:
- the coordinate transformation matrix between the facial scan mesh result and the mouth scan mesh result is determined based on the facial pose and the tooth pose
- the mouth scan grid result is converted to the coordinate system corresponding to the facial scan grid result according to the coordinate transformation matrix; or, the facial scan grid result is converted to the coordinate system corresponding to the mouth scan grid result according to the coordinate transformation matrix, to obtain the facial scan grid result to be spliced and the mouth scan grid result to be spliced.
- a processing module is configured to process the overlapping source scanning point cloud and target scanning point cloud in the initial stitching result to obtain a rotation and translation matrix
- the stitching adjustment module is configured to transform the source scanning point cloud into the coordinate system of the target scanning point cloud according to the rotation and translation matrix, determine the error between the transformed source scanning point cloud and the target scanning point cloud, until the error is less than or equal to a preset error threshold, and obtain the target stitching result.
- the scan data stitching device provided in the embodiment of the present disclosure can execute the scan data stitching method provided in any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
- the embodiments of the present disclosure also provide a computer program product, including a computer program/instruction, which, when executed by a processor, implements the scan data stitching method provided by any embodiment of the present disclosure.
- FIG4 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present disclosure. Referring specifically to FIG4 below, it shows a schematic diagram of the structure of an electronic device 400 suitable for implementing the embodiment of the present disclosure.
- the electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
- the electronic device shown in FIG4 is merely an example and should not impose any limitations on the functions and scope of use of the embodiment of the present disclosure.
- the electronic device 400 may include a processing device 401 (e.g., a central processing unit); the processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404.
- An input/output (I/O) interface 405 is also connected to the bus 404.
- the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 409.
- the communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data.
- although FIG. 4 shows an electronic device 400 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program can be downloaded and installed from the network through the communication device 409, or installed from the storage device 408, or installed from the ROM 402.
- when the computer program is executed by the processing device 401, the above-mentioned functions defined in the scan data stitching method of the embodiment of the present disclosure are performed.
- the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
- the client and server may communicate using any currently known or future developed network protocol such as HTTP (Hyper Text Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
- Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
- the computer-readable medium may be included in the electronic device, or may exist independently without being installed in the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain a facial pose corresponding to a facial scan mesh result of a user, obtain a tooth pose corresponding to a mouth scan mesh result of the user, and stitch the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
- Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function.
- the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented by software or hardware, wherein the name of a unit does not, in some cases, constitute a limitation on the unit itself.
- exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
- a machine-readable medium may be a tangible medium that can contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- machine-readable storage media may include electrical connections based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- the present disclosure provides an electronic device, including:
- a processor, and a memory for storing instructions executable by the processor;
- the processor is used to read executable instructions from the memory and execute the instructions to implement any scan data stitching method provided in the present disclosure.
- the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute any scan data stitching method provided by the present disclosure.
- the scan data stitching method provided by the present disclosure improves the accuracy of the initial stitching result while ensuring stitching efficiency through the stitching assistance of the facial pose and the tooth pose, thereby increasing the final stitching success rate, and it has strong industrial applicability.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
Abstract
Description
This disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on March 7, 2023, with application number 202310229276.6 and entitled "Scanning data splicing method and apparatus, and device and medium", the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of data processing technology, and in particular to a scan data stitching method, apparatus, device, and medium.
Typically, in application scenarios such as smile design and simulated orthodontics, the mouth scan data needs to be accurately stitched onto the face scan data so that the two are linked, serving as a reference for later orthodontic diagnosis and treatment, medical aesthetics, and other fields. However, the resolution difference between the mouth scan data and the face scan data is large, the proportion of the stitchable area is low, and the tooth data in the face scan data is affected by the surrounding facial data and strongly disturbed by noise, making it difficult to stitch the mouth scan data to the tooth data in the face scan data.
In the related art, the accuracy of stitching mouth scan data to face scan data is relatively poor.
Summary of the invention
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a scan data stitching method, apparatus, device, and medium.
An embodiment of the present disclosure provides a scan data stitching method, the method comprising:
obtaining a facial pose corresponding to a facial scan mesh result of a user;
obtaining a tooth pose corresponding to a mouth scan mesh result of the user;
stitching the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
An embodiment of the present disclosure further provides a scan data stitching apparatus, the apparatus comprising:
a first acquisition module, configured to obtain a facial pose corresponding to a facial scan mesh result of a user;
a second acquisition module, configured to obtain a tooth pose corresponding to a mouth scan mesh result of the user;
a stitching module, configured to stitch the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the scan data stitching method provided in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium, the storage medium storing a computer program, where the computer program is used to execute the scan data stitching method provided in the embodiments of the present disclosure.
According to the scan data stitching solution provided by the embodiments of the present disclosure, a facial pose corresponding to a facial scan mesh result of a user and a tooth pose corresponding to a mouth scan mesh result of the user are obtained, and the facial scan mesh result and the mouth scan mesh result are stitched based on the facial pose and the tooth pose to obtain an initial stitching result.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of a scan data stitching method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of another scan data stitching method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a scan data stitching apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and its variations as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
At present, the proportion of tooth data in face scan data is very low and the stitchable area is small, while the tooth data from a mouth scan is relatively fine. The resolution of face scan data is low, the tooth data in it is often disturbed by noise, and the data may be incomplete. These factors greatly increase the difficulty of stitching; existing stitching methods are computationally expensive, weak against noise, and low in stitching accuracy.
In view of the above problems, an embodiment of the present disclosure proposes a scan data stitching method that, for the tooth stitching scenario of mouth scan data and face scan data, improves the existing coarse stitching method and increases the stitching success rate.
Specifically, FIG. 1 is a flow chart of a scan data stitching method provided by an embodiment of the present disclosure. The method may be performed by a scan data stitching apparatus, where the apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in FIG. 1, the method includes:
Step 101: obtain a facial pose corresponding to a facial scan mesh result of a user.
Here, the facial scan mesh result refers to the mesh data of the user's face; it can be obtained by scanning the user's face with a device such as a face scanner to acquire facial point cloud data and then meshing the point cloud. The facial pose describes, for example, a frontal face orientation or a side face orientation at some angle.
In an embodiment of the present disclosure, after the facial scan mesh result is obtained, the facial pose corresponding to it can be determined. In some implementations, a face image of the user is obtained and recognized to obtain a tooth region and key facial feature points; the tooth region and the key facial feature points are projected onto the facial scan mesh result to obtain a facial scan mesh result to be processed; and a calculation is performed based on the facial scan mesh result to be processed to obtain the facial pose corresponding to the facial scan mesh result.
In other implementations, facial feature points corresponding to the facial scan mesh result are obtained, and the facial pose corresponding to the facial scan mesh result is determined by calculation from the facial feature points. The above two approaches are merely examples of obtaining the facial pose corresponding to the user's facial scan mesh result, and the present disclosure does not specifically limit how this is implemented.
Step 102: obtain a tooth pose corresponding to a mouth scan mesh result of the user.
Here, the mouth scan mesh result refers to the mesh data of the user's mouth; it can be obtained by scanning the user's mouth with a device such as an intraoral scanner to acquire mouth point cloud data and then meshing the point cloud.
In an embodiment of the present disclosure, the tooth pose refers to the direction of the anterior tooth region, and there are many ways to obtain the tooth pose corresponding to the mouth scan mesh result. In some implementations, the mouth scan mesh result is projected onto a two-dimensional plane to obtain two-dimensional mouth coordinate points; the two-dimensional points are processed in a preset manner to obtain a two-dimensional curve; and the parameters of the curve are fitted using a dental arch line fitting method to obtain the tooth pose. In other implementations, the tooth pose is computed directly from the mouth scan mesh result using a preset pose estimation formula. The above two approaches are merely examples, and the embodiments of the present disclosure do not specifically limit how the tooth pose corresponding to the mouth scan mesh result is obtained.
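The projection-and-arch-fitting route above can be sketched in a few lines. This is a minimal illustration assuming numpy; the function name `estimate_tooth_pose`, the choice of the XY plane as the projection plane, and the quadratic arch model are hypothetical simplifications, not the patented method:

```python
import numpy as np

def estimate_tooth_pose(vertices):
    """Sketch: estimate the anterior-teeth direction of a mouth-scan mesh.

    `vertices` is an (N, 3) array of mesh vertex positions. The vertices
    are projected onto the (assumed) occlusal XY plane, a quadratic
    dental-arch curve is fitted, and the direction from the point-cloud
    centroid toward the arch apex is taken as the anterior direction.
    """
    pts2d = vertices[:, :2]                      # project to 2D plane
    # Fit y = a*x^2 + b*x + c as a crude dental-arch model.
    a, b, c = np.polyfit(pts2d[:, 0], pts2d[:, 1], 2)
    apex_x = -b / (2.0 * a)                      # arch apex (front teeth)
    apex = np.array([apex_x, np.polyval([a, b, c], apex_x)])
    centroid = pts2d.mean(axis=0)
    direction = apex - centroid                  # points toward front teeth
    return direction / np.linalg.norm(direction)
```

A real pipeline would first choose the projection plane from the mesh itself (e.g., by a plane fit) rather than assume the XY plane.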
Step 103: stitch the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
In an embodiment of the present disclosure, after the facial pose and the tooth pose are obtained, the facial scan mesh result and the mouth scan mesh result are stitched based on them to obtain an initial stitching result. In some implementations, the mouth scan mesh result is transformed into the coordinate system of the facial scan mesh result; feature points of the same target number and the same positions are obtained from the facial scan mesh result and the mouth scan mesh result respectively; a rigid body transformation matrix between the feature points is determined; and the facial scan mesh result to be processed and the mouth scan mesh result are stitched according to the rigid body transformation matrix to obtain the initial stitching result. In other implementations, the facial scan mesh result is transformed into the coordinate system of the mouth scan mesh result; an upper tooth region is cropped from the facial scan mesh result and an anterior tooth region of a certain width is cropped from the mouth scan mesh result; and, based on fixed-direction feature descriptors (for example, points on the tooth cusps), the cropped anterior tooth region is stitched to the upper tooth region to obtain the initial stitching result. The above two approaches are merely examples, and the embodiments of the present disclosure do not specifically limit the manner of stitching the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain the initial stitching result.
According to the scan data stitching solution provided by the embodiments of the present disclosure, the facial pose corresponding to the user's facial scan mesh result and the tooth pose corresponding to the user's mouth scan mesh result are obtained, and the facial scan mesh result and the mouth scan mesh result are stitched based on the facial pose and the tooth pose to obtain an initial stitching result. With this technical solution, the facial pose and the tooth pose assist the stitching, improving the accuracy of the initial stitching result while ensuring stitching efficiency, thereby increasing the final stitching success rate and meeting users' accuracy requirements for scan data stitching.
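The rigid body transformation between corresponding feature points mentioned in the first approach can be computed with the standard Kabsch algorithm. The sketch below assumes numpy and a hypothetical function name; it shows only the generic best-fit rotation/translation step, not the patent's full stitching pipeline:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch algorithm: best-fit rotation R and translation t mapping
    corresponding point sets src -> dst, both (N, 3) arrays.

    In the stitching step, src would be feature points sampled from the
    mouth-scan mesh and dst the matching points on the facial-scan mesh;
    the recovered R, t is then applied to every mouth-scan vertex.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice this coarse alignment would typically be refined afterwards with an iterative method such as ICP.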
FIG. 2 is a flow chart of another scan data stitching method provided by an embodiment of the present disclosure. On the basis of the above embodiment, this embodiment further optimizes the scan data stitching method. As shown in FIG. 2, the method includes:
Step 201: obtain a face image of the user, and recognize the face image to obtain a tooth region and key facial feature points.
Here, the face image refers to a two-dimensional picture that includes the user's face; it can be obtained by scanning the user's face with a device such as a face scanner.
In the embodiments of the present disclosure, the face image can be recognized in many ways. In some implementations, a face recognition algorithm among artificial intelligence recognition algorithms is used to recognize the face image, obtaining the regions of the face (such as the eye region, the nose region, and the tooth region) and the key facial feature points, for example 75 facial feature points such as the nose tip, the nasion, the chin, the mouth center, the left mouth corner, the outer corner of the left eye, and the inner corner of the right eye.
The tooth region may include an upper tooth region and a lower tooth region, or the entire tooth region. In the embodiments of the present disclosure, the facial scan mesh result mainly exposes the upper teeth and exposes little of the lower teeth, so the tooth region usually refers to the upper tooth region. It should be noted that, after the stitching of the upper tooth region is completed, the lower tooth region can be determined from the occlusion state of the teeth, which further improves stitching efficiency.
Step 202: project the tooth region and the key facial feature points onto the facial scan mesh result to obtain a facial scan mesh result to be processed.
In the embodiments of the present disclosure, after the tooth region and the key facial feature points are obtained from the face image, they need to be projected onto the facial scan mesh result to obtain the facial scan mesh result to be processed; that is, the pixel coordinate points of the tooth region and the two-dimensional coordinate points of the key facial feature points are mapped onto the facial scan mesh result, thereby obtaining a facial scan mesh result to be processed that includes the tooth region and the key facial feature points.
It can be understood that the face image is a two-dimensional image, so the tooth region and the key facial feature points are two-dimensional coordinate points in the image coordinate system. By obtaining the coordinate transformation matrix between the image coordinate system and the scanning camera coordinate system, the tooth region and the key facial feature points in the image coordinate system can be converted into three-dimensional coordinate points in the scanning camera coordinate system; this can also be understood as projecting the tooth region and the key facial feature points onto the facial scan mesh result based on the camera projection principle to obtain the facial scan mesh result to be processed.
In some embodiments, projecting the tooth region and the key facial feature points onto the facial scan mesh result to obtain the facial scan mesh result to be processed includes: obtaining pixel coordinate points corresponding to the tooth region; obtaining feature coordinate points corresponding to the key facial feature points; and projecting the pixel coordinate points and the feature coordinate points onto the facial scan mesh result according to the transformation relationship between the image coordinate system and the scanning camera coordinate system, to obtain the facial scan mesh result to be processed.
Specifically, after the tooth region (usually the set of pixels corresponding to the upper tooth region) and the key facial feature points are recognized on the face image (a two-dimensional face image captured of the user) by an artificial intelligence image recognition algorithm, the tooth region and the key facial feature points are projected onto the facial scan mesh result based on the camera projection principle.
Specifically, the pixel coordinate points corresponding to the tooth region and the feature coordinate points corresponding to the key facial feature points are obtained, and the pixel coordinate points and the feature coordinate points are projected onto the facial scan mesh result according to the transformation relationship between the image coordinate system and the scanning camera coordinate system, to obtain the facial scan mesh result to be processed.
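As an illustration of mapping 2D image points onto the scan mesh via the camera model, the sketch below assumes a pinhole camera with a known 3x3 intrinsic matrix K and approximates the mesh intersection by picking, for each viewing ray, the nearest mesh vertex; the function name and the nearest-vertex shortcut are hypothetical (a real implementation would intersect rays with mesh triangles):

```python
import numpy as np

def project_pixels_to_mesh(pixels, K, vertices):
    """Sketch: map 2D image points (tooth-region pixels / facial
    landmarks) onto a scan mesh given in the scanning-camera frame.

    Each pixel (u, v) defines a viewing ray d ~ K^-1 [u, v, 1]^T through
    the pinhole model (camera center at the origin). The "projection
    onto the mesh" is approximated per ray by the vertex closest to it.
    """
    K_inv = np.linalg.inv(K)
    ones = np.ones((len(pixels), 1))
    rays = (K_inv @ np.hstack([pixels, ones]).T).T
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # Squared distance of every vertex to every ray:
    # ||v||^2 - (v . d)^2 for unit ray direction d.
    proj = vertices @ rays.T                       # (V, P) dot products
    d2 = (vertices ** 2).sum(axis=1, keepdims=True) - proj ** 2
    return vertices[np.argmin(d2, axis=0)]         # one 3D point per pixel
```

This corresponds to the image-coordinate-to-camera-coordinate transformation described above; the intrinsic matrix K would come from the scanner's camera calibration.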
Step 203: perform a calculation based on the facial scan mesh result to be processed to obtain the facial pose corresponding to the facial scan mesh result.
In the embodiments of the present disclosure, once the facial scan mesh result to be processed is obtained, the facial feature points in it, that is, the facial feature points in three-dimensional space, can be obtained, and the facial pose can be estimated from them. For example, the 75 facial feature points are input into a preset facial pose calculation formula to obtain the facial pose, such as a frontal face orientation or a side face orientation at some angle; as another example, multiple facial directions are determined from the facial feature points, and the facial pose is determined from these directions. The specific approach is selected according to the needs of the application scenario, and the embodiments of the present disclosure impose no specific restriction.
在一些实施方式中,基于面部特征点确定面部轮廓、第一特征区域和第二特征区域,基于面部轮廓和第一特征区域确定面部平面,基于面部特征点确定第一中线特征点和第二中线特征点,并基于第一中线特征点和第二中线特征点确定面部平面的纵向方向,基于面部轮廓的重心方向和第二特征区域的重心方向对面部平面的纵向方向进行调整,得到面部扫描网格结果对应的面部位姿。In some embodiments, a facial contour, a first feature area, and a second feature area are determined based on facial feature points, a facial plane is determined based on the facial contour and the first feature area, a first midline feature point and a second midline feature point are determined based on the facial feature points, and a longitudinal direction of the facial plane is determined based on the first midline feature point and the second midline feature point, and the longitudinal direction of the facial plane is adjusted based on the center of gravity direction of the facial contour and the center of gravity direction of the second feature area to obtain a facial pose corresponding to the facial scanning grid result.
其中，第一特征区域可以指的是眼睛眉毛区域；第二特征区域指的是嘴巴区域；第一中线指的是鼻中线；第二中线指的是嘴巴中线。Here, the first feature area may refer to the eye and eyebrow area; the second feature area refers to the mouth area; the first midline refers to the nose midline; and the second midline refers to the mouth midline.
具体地,基于三维空间的面部特征点计算面部扫描网格结果对应的面部位姿,通常,根据面部特征点就能完成面部姿态估计,比如基于75个面部特征点来计算,取面部轮廓与眼睛眉毛区域的顶点估计面部平面,取鼻中线与嘴巴中线的特征点估计纵向方向,取嘴巴区域重心与面部轮廓重心方向来校准面部朝向,从而确定面部位姿。Specifically, the facial pose corresponding to the facial scanning grid result is calculated based on the facial feature points in three-dimensional space. Usually, facial pose estimation can be completed based on the facial feature points. For example, the facial plane is estimated by taking the vertices of the facial contour and the eye and eyebrow areas, the feature points of the midline of the nose and the midline of the mouth are taken to estimate the longitudinal direction, and the center of gravity of the mouth area and the center of gravity of the facial contour are taken to calibrate the facial orientation, thereby determining the facial pose.
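One way to realize the plane-and-midline pose estimation described above is sketched below. This is an assumed implementation, not necessarily the patent's: the facial plane normal comes from an SVD plane fit over the contour and eye/brow vertices, and the longitudinal direction from the nose-midline and mouth-midline landmarks, re-orthogonalized against the plane normal.

```python
import numpy as np

def fit_plane_normal(points):
    # Least-squares plane through a point set via SVD: the right singular
    # vector with the smallest singular value is the plane normal.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def estimate_face_pose(contour_pts, eye_brow_pts, nose_mid, mouth_mid):
    # Facial plane from contour + eye/brow vertices; longitudinal
    # (vertical) axis from nose-midline and mouth-midline landmarks.
    normal = fit_plane_normal(np.vstack([contour_pts, eye_brow_pts]))
    vertical = nose_mid.mean(axis=0) - mouth_mid.mean(axis=0)
    vertical /= np.linalg.norm(vertical)
    # Project the vertical axis into the facial plane and renormalize.
    vertical -= normal * (vertical @ normal)
    vertical /= np.linalg.norm(vertical)
    lateral = np.cross(vertical, normal)
    # Columns: lateral, vertical, normal -> a 3x3 pose rotation.
    return np.stack([lateral, vertical, normal], axis=1)
```

The centroid-based calibration of the facial orientation (mouth-region centroid versus contour centroid) would be an additional sign/direction check on the resulting axes.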
步骤204、将口部扫描网格结果投影至二维平面，得到口部二维坐标点，按照预设的处理方式对口部二维坐标点进行处理，得到二维曲线，基于牙弓线拟合方式对二维曲线对应的参数进行拟合处理，得到牙齿位姿。Step 204: Project the mouth scan mesh result onto a two-dimensional plane to obtain two-dimensional mouth coordinate points; process these points according to a preset processing method to obtain a two-dimensional curve; and fit the parameters corresponding to the two-dimensional curve based on a dental-arch-line fitting method to obtain the tooth pose.
具体地，将口部扫描网格结果投影到二维平面得到口部二维坐标点，在二维平面上通过扩散腐蚀的方式将口部二维坐标点转化为二维曲线(通过二维曲线公式表示)，最后通过牙弓线拟合方式对二维曲线对应的参数进行拟合获取牙齿位姿。Specifically, the mouth scan mesh result is projected onto a two-dimensional plane to obtain two-dimensional mouth coordinate points; on the two-dimensional plane, these points are converted into a two-dimensional curve (expressed by a two-dimensional curve formula) by dilation and erosion (morphological) operations; finally, the parameters corresponding to the two-dimensional curve are fitted by a dental-arch-line fitting method to obtain the tooth pose.
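The arch-line fitting step can be illustrated with a simple polynomial stand-in for the dental arch model. The actual curve family used in practice is not specified here; the polynomial form and the `degree` parameter are assumptions made for this sketch.

```python
import numpy as np

def fit_arch_curve(mouth_vertices, degree=4):
    # Project mouth-scan vertices (N, 3) onto the XY plane (assumed
    # occlusal plane) and fit a polynomial y = f(x) as a simple
    # stand-in for a dental-arch-line model.
    x, y = mouth_vertices[:, 0], mouth_vertices[:, 1]
    coeffs = np.polyfit(x, y, degree)
    return np.poly1d(coeffs)
```

The fitted curve parameters (apex position, symmetry axis) then provide the position and orientation used as the tooth pose.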
步骤205、基于面部位姿和牙齿位姿，将面部扫描网格结果和口部扫描网格结果统一至同一坐标系，得到待拼接面部扫描网格结果和待拼接口部扫描网格结果。Step 205: Based on the facial pose and the tooth pose, the facial scan mesh result and the mouth scan mesh result are unified into the same coordinate system to obtain the facial scan mesh result to be spliced and the mouth scan mesh result to be spliced.
在本公开实施例中,基于面部位姿和牙齿位姿,将面部扫描网格结果和口部扫描网格结果统一至同一坐标系,包括:基于面部位姿和牙齿位姿确定面部扫描网格结果和口部扫描网格结果之间的坐标转换矩阵,根据坐标转换矩阵将口部扫描网格结果转换至面部扫描网格结果对应的坐标系;或,根据坐标转换矩阵将面部扫描网格结果转换至口部扫描网格结果对应的坐标系。In an embodiment of the present disclosure, the facial scan grid result and the oral scan grid result are unified into the same coordinate system based on the facial posture and the tooth posture, including: determining the coordinate transformation matrix between the facial scan grid result and the oral scan grid result based on the facial posture and the tooth posture, and transforming the oral scan grid result into the coordinate system corresponding to the facial scan grid result according to the coordinate transformation matrix; or, transforming the facial scan grid result into the coordinate system corresponding to the oral scan grid result according to the coordinate transformation matrix.
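The coordinate unification can be sketched as composing two pose matrices: if the facial pose and the tooth pose each map their own mesh into a common upright reference frame, chaining one with the inverse of the other yields the mouth-to-face coordinate transformation. A minimal sketch under that assumption (representing each pose as an `(R, t)` pair is illustrative, not the patent's data layout):

```python
import numpy as np

def pose_to_matrix(R, t):
    # Pack a rotation (3x3) and translation (3,) into a 4x4 homogeneous matrix.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def mouth_to_face_transform(face_pose, mouth_pose):
    # Both poses map their own mesh into a shared reference frame;
    # chaining mouth->reference with the inverse of face->reference
    # gives the mouth-to-face coordinate transformation matrix.
    return np.linalg.inv(pose_to_matrix(*face_pose)) @ pose_to_matrix(*mouth_pose)

def apply_transform(T, points):
    # Apply a 4x4 homogeneous transform to (N, 3) points.
    return points @ T[:3, :3].T + T[:3, 3]
```

The symmetric case (transforming the facial mesh into the mouth-scan coordinate system) just swaps the roles of the two poses.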
步骤206、从待拼接面部扫描网格结果中确定第一待拼接牙齿区域，并从待拼接口部扫描网格结果确定目标宽度的第二待拼接牙齿区域。Step 206: Determine a first tooth region to be spliced from the facial scan mesh result to be spliced, and determine a second tooth region to be spliced, of a target width, from the mouth scan mesh result to be spliced.
步骤207、在第一待拼接牙齿区域和第二待拼接牙齿区域中分别获取相同目标数量和相同位置的特征点,并确定第一待拼接牙齿区域中特征点与第二待拼接牙齿区域中特征点之间的刚体变换矩阵,根据刚体变换矩阵对待处理面部扫描网格结果和口部扫描网格结果进行拼接,得到初始拼接结果。Step 207: Acquire feature points of the same target quantity and position in the first tooth region to be spliced and the second tooth region to be spliced, respectively, and determine the rigid body transformation matrix between the feature points in the first tooth region to be spliced and the feature points in the second tooth region to be spliced, and splice the face scan mesh result to be processed and the mouth scan mesh result according to the rigid body transformation matrix to obtain an initial splicing result.
具体地，基于面部位姿和牙齿位姿，将面部扫描网格结果与口部扫描网格结果摆正到同一坐标系，分别在面部扫描网格结果和口部扫描网格结果中选取预设数量的特征点，其中，面部扫描网格结果中特征点的数量与口部扫描网格结果中特征点的数量相等且位置相同，确定面部扫描网格结果中特征点与口部扫描网格结果中特征点之间的刚体变换矩阵，根据刚体变换矩阵将面部扫描网格结果与口部扫描网格结果进行拼接，得到初始拼接结果。Specifically, based on the facial pose and the tooth pose, the facial scan mesh result and the mouth scan mesh result are aligned into the same coordinate system; a preset number of feature points are selected from each of the facial scan mesh result and the mouth scan mesh result, where the feature points in the facial scan mesh result are equal in number and corresponding in position to those in the mouth scan mesh result; the rigid body transformation matrix between the feature points in the facial scan mesh result and those in the mouth scan mesh result is determined; and the facial scan mesh result and the mouth scan mesh result are spliced according to the rigid body transformation matrix to obtain an initial splicing result.
比如从面部扫描网格结果中截取上牙区域，从口部扫描网格结果截取一定宽度(根据经验值设置)的前牙区域，基于固定方向(预先设置)的特征描述子(比如牙尖上的点)，将截取后的口部扫描网格结果拼接到面部扫描网格结果，得到初始拼接结果。可以理解的是，在位姿已经确认的前提下，特征描述子不再需要局部坐标系的估计，固定方向的局部坐标系能确保拼接过程中位姿变化尽可能小，进一步提高拼接成功率。For example, the upper-tooth region is cut from the facial scan mesh result, and a front-tooth region of a certain width (set according to an empirical value) is cut from the mouth scan mesh result; based on feature descriptors with a fixed (preset) direction (such as points on the tooth cusps), the cut mouth scan mesh result is spliced to the facial scan mesh result to obtain the initial splicing result. It can be understood that, given that the poses have already been determined, the feature descriptors no longer require estimation of a local coordinate system; a local coordinate system with a fixed direction ensures that pose changes during splicing are as small as possible, further improving the splicing success rate.
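The rigid body transformation between two equal-size, position-corresponding feature point sets can be computed in closed form; a common choice is the Kabsch algorithm, sketched here as an illustrative (assumed) implementation rather than the patent's own:

```python
import numpy as np

def rigid_transform(src, dst):
    # Kabsch algorithm: best-fit rotation R and translation t such that
    # R @ src_i + t ~= dst_i for paired, equal-length point sets (N, 3).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Packing `R` and `t` into a 4x4 matrix gives the rigid body transformation matrix used to splice the cut mouth scan mesh onto the facial scan mesh.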
步骤208、对初始拼接结果中重叠的源扫描点云和目标扫描点云进行处理,得到旋转平移矩阵,根据旋转平移矩阵将源扫描点云变换到目标扫描点云的坐标系下,确定变换后源扫描点云与目标扫描点云的误差,直到误差小于等于预设的误差阈值,得到目标拼接结果。Step 208: Process the overlapping source scan point cloud and target scan point cloud in the initial stitching result to obtain a rotation and translation matrix, transform the source scan point cloud to the coordinate system of the target scan point cloud according to the rotation and translation matrix, determine the error between the transformed source scan point cloud and the target scan point cloud, until the error is less than or equal to a preset error threshold, and obtain the target stitching result.
具体地,在初始拼接结果上通过ICP(Iterative Closest Point,迭代最近点)算法进行拼接调整,即,对初始拼接结果中重叠的源扫描点云和目标扫描点云进行处理,得到旋转平移矩阵,根据旋转平移矩阵将源扫描点云变换到目标扫描点云的坐标系下,确定变换后源扫描点云与目标扫描点云的误差,直到误差小于等于预设的误差阈值,得到目标拼接结果。其中,误差阈值根据应用场景选择设置。Specifically, the ICP (Iterative Closest Point) algorithm is used to perform stitching adjustment on the initial stitching result, that is, the overlapping source scan point cloud and target scan point cloud in the initial stitching result are processed to obtain a rotation and translation matrix, and the source scan point cloud is transformed into the coordinate system of the target scan point cloud according to the rotation and translation matrix, and the error between the transformed source scan point cloud and the target scan point cloud is determined until the error is less than or equal to a preset error threshold, and the target stitching result is obtained. The error threshold is selected and set according to the application scenario.
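The ICP refinement loop can be sketched as follows. This is a minimal point-to-point variant with brute-force nearest-neighbour matching; production implementations typically use a k-d tree and outlier rejection, and all names here are illustrative:

```python
import numpy as np

def best_rigid(src, dst):
    # Least-squares rigid alignment (Kabsch) of paired point sets.
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp(source, target, max_iter=50, tol=1e-9):
    # Point-to-point ICP: repeatedly pair each source point with its
    # nearest target point, solve the rotation/translation, apply it,
    # and stop once the mean squared residual falls below the threshold.
    src = source.copy()
    for _ in range(max_iter):
        # Brute-force nearest neighbours (a k-d tree would be used in practice).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
        err = ((src - matched) ** 2).sum(-1).mean()
        if err <= tol:
            break
    return src, err
```

Here `tol` plays the role of the preset error threshold: iteration stops once the residual between the transformed source scan point cloud and the target scan point cloud is at or below it.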
本公开实施例提供的扫描数据拼接方案，获取用户的人脸图像，并对人脸图像进行识别，得到牙齿区域和人脸关键特征点，将牙齿区域和人脸关键特征点投影至面部扫描网格结果，得到待处理面部扫描网格结果，基于待处理面部扫描网格结果进行计算，得到面部扫描网格结果对应的面部位姿，将口部扫描网格结果投影至二维平面，得到口部二维坐标点，按照预设的处理方式对口部二维坐标点进行处理，得到二维曲线，基于牙弓线拟合方式对二维曲线对应的参数进行拟合处理，得到牙齿位姿，基于面部位姿和牙齿位姿，将面部扫描网格结果和口部扫描网格结果统一至同一坐标系，得到待拼接面部扫描网格结果和待拼接口部扫描网格结果，从待拼接面部扫描网格结果中确定第一待拼接牙齿区域，并从待拼接口部扫描网格结果确定目标宽度的第二待拼接牙齿区域，在第一待拼接牙齿区域和第二待拼接牙齿区域中分别获取相同目标数量和相同位置的特征点，并确定第一待拼接牙齿区域中特征点与第二待拼接牙齿区域中特征点之间的刚体变换矩阵，根据刚体变换矩阵对待处理面部扫描网格结果和口部扫描网格结果进行拼接，得到初始拼接结果，对初始拼接结果中重叠的源扫描点云和目标扫描点云进行处理，得到旋转平移矩阵，根据旋转平移矩阵将源扫描点云变换到目标扫描点云的坐标系下，确定变换后源扫描点云与目标扫描点云的误差，直到误差小于等于预设的误差阈值，得到目标拼接结果。采用上述技术方案，解决了口部扫描数据与面部扫描数据的牙齿拼接比较难的技术问题，通过面部位姿和牙齿位姿辅助拼接，在保证拼接效率的同时提高初始拼接结果的精确性，从而提高目标拼接结果的精确性，保证最终扫描数据拼接的成功率，满足用户对于扫描数据拼接的精度需求，提升用户扫描拼接体验。The scan data splicing solution provided by the embodiments of the present disclosure obtains a face image of a user; recognizes the face image to obtain a tooth region and key facial feature points; projects the tooth region and the key facial feature points onto the facial scan mesh result to obtain a facial scan mesh result to be processed; performs calculation based on the facial scan mesh result to be processed to obtain the facial pose corresponding to the facial scan mesh result; projects the mouth scan mesh result onto a two-dimensional plane to obtain two-dimensional mouth coordinate points; processes these points according to a preset processing method to obtain a two-dimensional curve; fits the parameters corresponding to the two-dimensional curve based on a dental-arch-line fitting method to obtain the tooth pose; unifies, based on the facial pose and the tooth pose, the facial scan mesh result and the mouth scan mesh result into the same coordinate system to obtain a facial scan mesh result to be spliced and a mouth scan mesh result to be spliced; determines a first tooth region to be spliced from the facial scan mesh result to be spliced, and determines a second tooth region to be spliced, of a target width, from the mouth scan mesh result to be spliced; acquires feature points of the same target number and the same positions in the first tooth region to be spliced and the second tooth region to be spliced, respectively; determines the rigid body transformation matrix between the feature points in the first tooth region to be spliced and those in the second tooth region to be spliced; splices the facial scan mesh result to be processed and the mouth scan mesh result according to the rigid body transformation matrix to obtain an initial splicing result; processes the overlapping source scan point cloud and target scan point cloud in the initial splicing result to obtain a rotation and translation matrix; transforms the source scan point cloud into the coordinate system of the target scan point cloud according to the rotation and translation matrix; and determines the error between the transformed source scan point cloud and the target scan point cloud, until the error is less than or equal to a preset error threshold, to obtain the target splicing result. The above technical solution solves the technical problem that splicing the teeth in mouth scan data with facial scan data is difficult: by assisting the splicing with the facial pose and the tooth pose, the accuracy of the initial splicing result is improved while the splicing efficiency is maintained, which in turn improves the accuracy of the target splicing result, ensures the success rate of the final scan data splicing, meets the user's accuracy requirements for scan data splicing, and improves the user's scanning and splicing experience.
图3为本公开实施例提供的一种扫描数据拼接装置的结构示意图,该装置可由软件和/或硬件实现,一般可集成在电子设备中。如图3所示,该装置包括:FIG3 is a schematic diagram of the structure of a scan data splicing device provided by an embodiment of the present disclosure. The device can be implemented by software and/or hardware and can generally be integrated in an electronic device. As shown in FIG3 , the device includes:
第一获取模块301,被配置为获取用户的面部扫描网格结果对应的面部位姿;The first acquisition module 301 is configured to acquire the facial posture corresponding to the user's facial scanning grid result;
第二获取模块302，被配置为获取口部扫描网格结果对应的牙齿位姿；The second acquisition module 302 is configured to acquire the tooth pose corresponding to the mouth scan mesh result;
拼接模块303,被配置为基于面部位姿和牙齿位姿对面部扫描网格结果和口部扫描网格结果进行拼接,得到初始拼接结果。The stitching module 303 is configured to stitch the facial scan mesh result and the mouth scan mesh result based on the facial pose and the tooth pose to obtain an initial stitching result.
可选的,第一获取模块301,包括:Optionally, the first acquisition module 301 includes:
获取单元,被配置为获取用户的人脸图像,并对人脸图像进行识别,得到牙齿区域和人脸关键特征点;An acquisition unit is configured to acquire a facial image of a user, and recognize the facial image to obtain a tooth region and key facial feature points;
投影单元,被配置为将牙齿区域和人脸关键特征点投影至面部扫描网格结果,得到待处理面部扫描网格结果;A projection unit is configured to project the tooth area and the key feature points of the face onto the facial scanning grid result to obtain the facial scanning grid result to be processed;
计算单元,被配置为基于待处理面部扫描网格结果进行计算,得到面部扫描网格结果对应的面部位姿。The calculation unit is configured to perform calculation based on the face scanning grid result to be processed to obtain the face position and pose corresponding to the face scanning grid result.
可选的,投影单元,具体被配置为:Optionally, the projection unit is specifically configured as:
获取牙齿区域对应的像素坐标点;Get the pixel coordinate points corresponding to the tooth area;
获取人脸关键特征点对应的特征坐标点;Get the feature coordinate points corresponding to the key feature points of the face;
根据图像坐标系和扫描相机坐标系之间的变换关系将像素坐标点和特征坐标点投影至面部扫描网格结果，得到待处理面部扫描网格结果。According to the transformation relationship between the image coordinate system and the scanning camera coordinate system, the pixel coordinate points and the feature coordinate points are projected onto the facial scan mesh result to obtain the facial scan mesh result to be processed.
可选的,计算单元,具体被配置为:Optionally, the computing unit is specifically configured as:
基于面部特征点确定面部轮廓、第一特征区域和第二特征区域;Determine a facial contour, a first feature area, and a second feature area based on the facial feature points;
基于面部轮廓和第一特征区域确定面部平面;determining a facial plane based on the facial contour and the first feature area;
基于面部特征点确定第一中线特征点和第二中线特征点,并基于第一中线特征点和第二中线特征点确定面部平面的纵向方向;Determine a first midline feature point and a second midline feature point based on the facial feature points, and determine a longitudinal direction of the facial plane based on the first midline feature point and the second midline feature point;
基于面部轮廓的重心方向和第二特征区域的重心方向对面部平面的纵向方向进行调整,得到面部扫描网格结果对应的面部位姿。The longitudinal direction of the facial plane is adjusted based on the centroid direction of the facial contour and the centroid direction of the second feature area to obtain the facial pose corresponding to the facial scanning grid result.
可选的,第二获取模块302,具体被配置为:Optionally, the second acquisition module 302 is specifically configured to:
将口部扫描网格结果投影至二维平面,得到口部二维坐标点;Project the mouth scanning grid result onto a two-dimensional plane to obtain the two-dimensional coordinate points of the mouth;
按照预设的处理方式对口部二维坐标点进行处理,得到二维曲线;Process the two-dimensional coordinate points of the mouth according to a preset processing method to obtain a two-dimensional curve;
基于牙弓线拟合方式对二维曲线对应的参数进行拟合处理,得到牙齿位姿。Based on the dental arch line fitting method, the parameters corresponding to the two-dimensional curve are fitted to obtain the tooth posture.
可选的,拼接模块303,包括:Optionally, the splicing module 303 includes:
坐标统一单元,被配置为基于面部位姿和牙齿位姿,将面部扫描网格结果和口部扫描网格结果统一至同一坐标系,得到待拼接面部扫描网格结果和待拼接口部扫描网格结果;A coordinate unification unit is configured to unify the facial scanning mesh result and the oral scanning mesh result into the same coordinate system based on the facial posture and the tooth posture, so as to obtain the facial scanning mesh result to be spliced and the oral scanning mesh result to be spliced;
确定单元，被配置为从待拼接面部扫描网格结果中确定第一待拼接牙齿区域，并从待拼接口部扫描网格结果确定目标宽度的第二待拼接牙齿区域；A determination unit is configured to determine a first tooth region to be spliced from the facial scan mesh result to be spliced, and to determine a second tooth region to be spliced, of a target width, from the mouth scan mesh result to be spliced;
获取确定单元,被配置为在第一待拼接牙齿区域和第二待拼接牙齿区域中分别获取相同目标数量和相同位置的特征点,并确定第一待拼接牙齿区域中特征点与第二待拼接牙齿区域中特征点之间的刚体变换矩阵;an acquisition and determination unit, configured to acquire the same target number and the same position of feature points in the first tooth region to be spliced and the second tooth region to be spliced, respectively, and determine a rigid body transformation matrix between the feature points in the first tooth region to be spliced and the feature points in the second tooth region to be spliced;
拼接单元,被配置为根据刚体变换矩阵对待处理面部扫描网格结果和口部扫描网格结果进行拼接,得到初始拼接结果。The splicing unit is configured to splice the face scanning mesh result to be processed and the mouth scanning mesh result according to the rigid body transformation matrix to obtain an initial splicing result.
可选的,坐标统一单元,具体被配置为: Optionally, the coordinate unification unit is specifically configured as:
基于面部位姿和牙齿位姿确定面部扫描网格结果和口部扫描网格结果之间的坐标转换矩阵；Determine, based on the facial pose and the tooth pose, the coordinate transformation matrix between the facial scan mesh result and the mouth scan mesh result;
根据坐标转换矩阵将口部扫描网格结果转换至面部扫描网格结果对应的坐标系;或,根据坐标转换矩阵将面部扫描网格结果转换至口部扫描网格结果对应的坐标系,得到待拼接面部扫描网格结果和待拼接口部扫描网格结果。The mouth scan grid result is converted to the coordinate system corresponding to the face scan grid result according to the coordinate conversion matrix; or, the face scan grid result is converted to the coordinate system corresponding to the mouth scan grid result according to the coordinate conversion matrix to obtain the face scan grid result to be spliced and the mouth scan grid result to be spliced.
可选的,在获取初始拼接结果之后,还包括:Optionally, after obtaining the initial stitching result, it also includes:
处理模块,被配置为对初始拼接结果中重叠的源扫描点云和目标扫描点云进行处理,得到旋转平移矩阵;A processing module is configured to process the overlapping source scanning point cloud and target scanning point cloud in the initial stitching result to obtain a rotation and translation matrix;
拼接调整模块,被配置为根据旋转平移矩阵将源扫描点云变换到目标扫描点云的坐标系下,确定变换后源扫描点云与目标扫描点云的误差,直到误差小于等于预设的误差阈值,得到目标拼接结果。The stitching adjustment module is configured to transform the source scanning point cloud into the coordinate system of the target scanning point cloud according to the rotation and translation matrix, determine the error between the transformed source scanning point cloud and the target scanning point cloud, until the error is less than or equal to a preset error threshold, and obtain the target stitching result.
本公开实施例所提供的扫描数据拼接装置可执行本公开任意实施例所提供的扫描数据拼接方法,具备执行方法相应的功能模块和有益效果。The scan data stitching device provided in the embodiment of the present disclosure can execute the scan data stitching method provided in any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
本公开实施例还提供了一种计算机程序产品,包括计算机程序/指令,该计算机程序/指令被处理器执行时实现本公开任意实施例所提供的扫描数据拼接方法。The embodiments of the present disclosure also provide a computer program product, including a computer program/instruction, which, when executed by a processor, implements the scan data stitching method provided by any embodiment of the present disclosure.
图4为本公开实施例提供的一种电子设备的结构示意图。下面具体参考图4,其示出了适于用来实现本公开实施例中的电子设备400的结构示意图。本公开实施例中的电子设备400可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图4示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。FIG4 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present disclosure. Referring specifically to FIG4 below, it shows a schematic diagram of the structure of an electronic device 400 suitable for implementing the embodiment of the present disclosure. The electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc. The electronic device shown in FIG4 is merely an example and should not impose any limitations on the functions and scope of use of the embodiment of the present disclosure.
如图4所示，电子设备400可以包括处理装置(例如中央处理器、图形处理器等)401，其可以根据存储在只读存储器(ROM)402中的程序或者从存储装置408加载到随机访问存储器(RAM)403中的程序而执行各种适当的动作和处理。在RAM 403中，还存储有电子设备400操作所需的各种程序和数据。处理装置401、ROM 402以及RAM 403通过总线404彼此相连。输入/输出(I/O)接口405也连接至总线404。As shown in FIG. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. Various programs and data required for the operation of the electronic device 400 are also stored in the RAM 403. The processing device 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
通常,以下装置可以连接至I/O接口405:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置406;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置407;包括例如磁带、硬盘等的存储装置408;以及通信装置409。通信装置409可以允许电子设备400与其他设备进行无线或有线通信以交换数据。虽然图4示出了具有各种装置的电子设备400,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。Typically, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or wired with other devices to exchange data. Although FIG. 4 shows an electronic device 400 with various devices, it should be understood that it is not required to implement or have all the devices shown. More or fewer devices may be implemented or have alternatively.
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置409从网络上被下载和安装,或者从存储装置408被安装,或者从ROM 402被安装。在该计算机程序被处理装置401执行时,执行本公开实施例的扫描数据拼接方法中限定的上述功能。In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from the network through the communication device 409, or installed from the storage device 408, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above-mentioned functions defined in the scan data stitching method of the embodiment of the present disclosure are executed.
需要说明的是，本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF(射频)等等，或者上述的任意合适的组合。It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it may send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
在一些实施方式中,客户端、服务器可以利用诸如HTTP(Hyper Text Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。In some embodiments, the client and server may communicate using any currently known or future developed network protocol such as HTTP (Hyper Text Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。The computer-readable medium may be included in the electronic device, or may exist independently without being installed in the electronic device.
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:获取用户的面部扫描网格结果对应的面部位姿,并获取用户的口部扫描网格结果对应的牙齿位姿,基于面部位姿和牙齿位姿对面部扫描网格结果和口部扫描网格结果进行拼接,得到初始拼接结果。The above-mentioned computer-readable medium carries one or more programs. When the above-mentioned one or more programs are executed by the electronic device, the electronic device: obtains the facial posture corresponding to the user's facial scanning grid result, and obtains the tooth posture corresponding to the user's mouth scanning grid result, and splices the facial scanning grid result and the mouth scanning grid result based on the facial posture and the tooth posture to obtain an initial splicing result.
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码，上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机，或者，可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flow chart and block diagram in the accompanying drawings illustrate the possible architecture, function and operation of the system, method and computer program product according to various embodiments of the present disclosure. In this regard, each square box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some implementations as replacements, the functions marked in the square box can also occur in a sequence different from that marked in the accompanying drawings. For example, two square boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each square box in the block diagram and/or flow chart, and the combination of the square boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs the specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。The units involved in the embodiments described in the present disclosure may be implemented by software or hardware, wherein the name of a unit does not, in some cases, constitute a limitation on the unit itself.
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
在本公开的上下文中，机器可读介质可以是有形的介质，其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备，或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to read the executable instructions from the memory and to execute the instructions so as to implement any of the scan data stitching methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to execute any of the scan data stitching methods provided by the present disclosure.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by substituting the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Furthermore, although operations are depicted in a particular order, this should not be construed as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
In the scan data stitching method provided by the present disclosure, stitching is assisted by the facial pose and the tooth pose, which improves the accuracy of the initial stitching result while maintaining stitching efficiency, thereby increasing the final stitching success rate; the method therefore has strong industrial applicability.
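To make the idea of pose-assisted initial alignment concrete, the following is a minimal, hypothetical sketch, not the patented implementation: all function names and the (R, t) pose representation are assumptions. It shows how a facial pose and a tooth pose, expressed in a common reference frame, can yield an initial rigid transform that maps tooth-scan data into the facial-scan frame before fine registration.

```python
# Illustrative sketch only. Poses are rigid transforms represented as
# (R, t), where R is a 3x3 rotation matrix (list of lists) and t is a
# 3-element translation vector. Nothing here is taken from the patent text.

def mat_vec(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert_pose(pose):
    """Invert a rigid pose (R, t): the inverse is (R^T, -R^T t)."""
    R, t = pose
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return Rt, [-x for x in mat_vec(Rt, t)]

def compose(a, b):
    """Compose rigid poses: apply b first, then a."""
    Ra, ta = a
    Rb, tb = b
    return mat_mul(Ra, Rb), [u + v for u, v in zip(mat_vec(Ra, tb), ta)]

def initial_stitch_transform(facial_pose, tooth_pose):
    """Initial guess mapping tooth-scan coordinates into the facial-scan
    frame: T_init = T_face * inverse(T_tooth). A fine registration step
    (e.g. ICP) would then refine this guess."""
    return compose(facial_pose, invert_pose(tooth_pose))

def transform_point(pose, p):
    """Apply a rigid pose (R, t) to a 3D point."""
    R, t = pose
    return [u + v for u, v in zip(mat_vec(R, p), t)]
```

Because such an initial guess is derived from tracked poses rather than from scan geometry alone, a subsequent fine registration step starts close to the correct alignment, which is the sense in which pose information can improve the accuracy of the initial stitching result without sacrificing efficiency.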
Claims (11)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310229276.6 | 2023-03-07 | ||
| CN202310229276.6A CN116245731A (en) | 2023-03-07 | 2023-03-07 | Method, device, equipment and medium for splicing scanning data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024183760A1 true WO2024183760A1 (en) | 2024-09-12 |
Family
ID=86627606
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/080360 Pending WO2024183760A1 (en) | 2023-03-07 | 2024-03-06 | Scanning data splicing method and apparatus, and device and medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN116245731A (en) |
| WO (1) | WO2024183760A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119399256A | 2025-01-03 | 2025-02-07 | Jilin University | A method for detecting the chopped length of corn stalks based on machine vision |
| CN119559160A | 2024-12-25 | 2025-03-04 | Peking University School of Stomatology | Point cloud-based method, system and device for predicting facial changes after implantation in edentulous jaws |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116245731A (en) * | 2023-03-07 | 2023-06-09 | 先临三维科技股份有限公司 | Method, device, equipment and medium for splicing scanning data |
| CN118674764A (en) * | 2024-05-29 | 2024-09-20 | 先临三维科技股份有限公司 | Method, device, equipment and storage medium for determining width value of dental crown |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103392191A | 2011-02-22 | 2013-11-13 | 3M Innovative Properties Company | Hybrid stitching |
| CN108596980A | 2018-03-29 | 2018-09-28 | Unit 63920 of the Chinese People's Liberation Army | Circular target vision positioning precision assessment method, device, storage medium and processing equipment |
| US10839481B1 | 2018-12-07 | 2020-11-17 | Bellus 3D, Inc. | Automatic marker-less alignment of digital 3D face and jaw models |
| CN114098980A | 2021-11-19 | 2022-03-01 | Wuhan United Imaging Healthcare Surgical Technology Co., Ltd. | Camera pose adjusting method, space registration method, system and storage medium |
| CN114283236A | 2021-12-16 | 2022-04-05 | China University of Geosciences (Wuhan) | Method, device and storage medium for oral cavity scanning by using smart phone |
| CN115457198A | 2022-08-30 | 2022-12-09 | Shining 3D Tech Co., Ltd. | Tooth model generation method, device, electronic device and storage medium |
| CN116245731A | 2023-03-07 | 2023-06-09 | Shining 3D Tech Co., Ltd. | Method, device, equipment and medium for splicing scanning data |
- 2023-03-07: CN application CN202310229276.6A published as CN116245731A (status: Pending)
- 2024-03-06: PCT application PCT/CN2024/080360 published as WO2024183760A1 (status: Pending)
Also Published As
| Publication number | Publication date |
|---|---|
| CN116245731A (en) | 2023-06-09 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24766475; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |