
CN116646052B - Auxiliary acupuncture positioning system and method based on three-dimensional human body model - Google Patents


Info

Publication number
CN116646052B
CN116646052B (application CN202310777244.XA)
Authority
CN
China
Prior art keywords
model
posture
acupuncture
human body
virtual
Prior art date
Legal status
Active
Application number
CN202310777244.XA
Other languages
Chinese (zh)
Other versions
CN116646052A (en)
Inventor
安鹏
吴喜利
王亚峰
王文方
李流云
张涛
李星瑶
高琪
李金娥
Current Assignee
Second Affiliated Hospital Of College Of Medicine Of Xi'an Jiaotong University
Original Assignee
Second Affiliated Hospital Of College Of Medicine Of Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Second Affiliated Hospital Of College Of Medicine Of Xi'an Jiaotong University
Priority to CN202310777244.XA
Publication of CN116646052A
Application granted
Publication of CN116646052B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H39/02 Devices for locating such points
    • A61H39/08 Devices for applying needles to such points, i.e. for acupuncture; Acupuncture needles or accessories therefor
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses an auxiliary acupuncture positioning system and method based on a three-dimensional human body model. The system consists of a head-mounted display device, a three-dimensional human body model, a human body posture recognition module, a human body model alignment module, an acupuncture point positioning module and a user interaction module. The head-mounted display device captures images of the real environment and their three-dimensional structural information. The three-dimensional human body model is a virtual model on which virtual acupuncture points are marked. The human body posture recognition module predicts the patient's posture from the captured images. The human body model alignment module aligns the virtual human body model with the predicted patient posture using a numerical optimization algorithm. The acupuncture point positioning module converts the positions of the virtual acupuncture points onto the real patient's body according to this alignment relationship. The user interaction module provides an interaction mode in which the user sees the virtual human body model and acupuncture points while seeing the real patient's body. The system enables an acupuncture operator to locate acupuncture points more accurately and conveniently, improving the treatment effect.

Description

An auxiliary acupuncture positioning system and method based on a three-dimensional human body model

Technical Field

The present application relates to the field of medical technology, and in particular to an auxiliary acupuncture positioning system and method based on a three-dimensional human body model.

Background

Acupuncture is an ancient medical technique that treats disease or improves physical condition mainly by stimulating specific acupuncture points on the human body. Locating these points accurately, however, is an essential skill that takes considerable time and effort to learn and master. Even experienced acupuncturists may make positioning errors that compromise the effect of the treatment.

The acupuncture assistance systems currently on the market are still mainly based on two-dimensional images, which limits both positioning accuracy and user experience. Although some systems have begun to use three-dimensional human body models, most remain experimental and demand considerable technical skill from their users. In addition, existing systems based on three-dimensional models typically display the model on a computer screen, forcing the operator to watch the screen while working on the patient's body, which reduces both convenience and precision. Finally, the three-dimensional models in existing systems are usually uniform standard models that cannot accommodate individual differences between patients.

With the development of new display technologies such as virtual reality (VR), augmented reality (AR) and mixed reality (MR), a growing number of applications are bringing these technologies into the medical field to improve the efficiency and quality of medical services. They hold particular promise in fields such as acupuncture that require precise manipulation.

Developing a new auxiliary acupuncture positioning system that combines a three-dimensional human body model with display technologies such as VR, AR and MR to improve positioning accuracy is therefore an important direction for the development of this field.

Summary of the Invention

The present application provides an auxiliary acupuncture positioning system based on a three-dimensional human body model to improve the positioning accuracy of acupuncture points.

The system includes: a head-mounted display device with a built-in camera and depth sensor for capturing images of the real environment and its three-dimensional structural information;

a three-dimensional human body model, which is a virtual human body model created on a computer and marked with virtual acupuncture points;

a human body posture recognition module for predicting the patient's posture from the images captured by the head-mounted display device and their three-dimensional structural information;

a human body model alignment module for aligning the virtual human body model to the real patient's body through a numerical optimization algorithm based on the predicted patient posture;

an acupuncture point positioning module for converting the positions of the virtual acupuncture points onto the real patient's body based on the alignment relationship between the virtual human body model and the real patient's body;

a user interaction module for providing an interaction mode in which the user can see the virtual human body model and the acupuncture points mapped onto the real patient's body while seeing the real patient's body.

Optionally, the user interaction module allows the user to select and operate acupuncture points through gestures or voice commands, and displays relevant information on the screen after the user selects an acupuncture point.

Optionally, the human body posture recognition module includes a deep neural network for predicting the patient's posture from the images captured by the head-mounted display device.

Optionally, the human body model alignment module uses a gradient descent algorithm to compute the model parameters that minimize the difference between the virtual human body model and the patient's body.

Optionally, the acupuncture point positioning module calculates the corresponding acupuncture point positions on the real patient's body from the alignment relationship and the positions of the virtual acupuncture points, thereby converting the positions of the virtual acupuncture points onto the real patient's body.

The present application also provides an auxiliary acupuncture positioning method based on a three-dimensional human body model, the method comprising:

capturing images of the patient's real environment with a head-mounted display device and obtaining their three-dimensional structural information;

creating a three-dimensional human body model and marking acupuncture points on the model;

predicting the patient's posture from the images captured by the head-mounted display device;

aligning the virtual human body model to the real patient's body through a numerical optimization algorithm based on the predicted patient posture;

converting the positions of the virtual acupuncture points onto the real patient's body according to the alignment relationship between the virtual human body model and the real patient's body;

providing, through user interaction technology, an interaction mode in which the user can see the virtual human body model and the acupuncture points mapped onto the real patient's body while seeing the real patient's body.

Optionally, the user interaction mode further includes allowing the user to select and operate acupuncture points through gestures or voice commands, and displaying relevant information on the screen after the user selects an acupuncture point.

Optionally, predicting the patient's posture from the images captured by the head-mounted display device includes:

using a deep neural network to predict the patient's posture from the images captured by the head-mounted display device.

Optionally, aligning the virtual human body model to the real patient's body through a numerical optimization algorithm based on the predicted patient posture includes:

using a gradient descent algorithm to find the model parameters that minimize the difference between the virtual human body model and the patient's body;

aligning the virtual human body model to the real patient's body according to the model parameters.
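A minimal sketch of these two steps, assuming the predicted patient posture and the virtual model are both available as matched sets of 3-D keypoints, and restricting the model parameters to a uniform scale and a translation. Both assumptions, and all variable names, are illustrative; the patent does not fix a particular parameterization:

```python
import numpy as np

def align_model(model_pts, patient_pts, lr=0.1, steps=200):
    """Gradient descent on the mean squared distance between the
    transformed model keypoints and the patient keypoints.
    Model parameters here: uniform scale s and translation t."""
    s, t = 1.0, np.zeros(3)
    n = len(model_pts)
    for _ in range(steps):
        residual = s * model_pts + t - patient_pts       # (n, 3)
        grad_s = 2.0 * np.sum(residual * model_pts) / n  # dE/ds
        grad_t = 2.0 * residual.mean(axis=0)             # dE/dt
        s -= lr * grad_s
        t -= lr * grad_t
    return s, t

# toy check: the "patient" is a scaled, shifted copy of the model
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
patient = 1.2 * model + np.array([0.5, -0.3, 0.1])
s, t = align_model(model, patient)   # converges to s = 1.2, t = (0.5, -0.3, 0.1)
```

The objective is a convex quadratic in (s, t), so plain gradient descent with a small fixed step converges; a practical system would fit a richer articulated model, but the structure of the update is the same.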

Optionally, converting the positions of the virtual acupuncture points onto the real patient's body according to the alignment relationship between the virtual human body model and the real patient's body includes:

calculating the corresponding acupuncture point positions on the real patient's body from the alignment relationship and the positions of the virtual acupuncture points, thereby converting the positions of the virtual acupuncture points onto the real patient's body.
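As a hedged illustration of this conversion step, assuming the alignment relationship is available as a single 4x4 homogeneous transform from model coordinates to patient coordinates (the point name and all coordinates below are made up):

```python
import numpy as np

def map_acupoints(acupoints, transform):
    """Apply a 4x4 homogeneous alignment transform (model frame ->
    patient frame) to a dict of named virtual acupoint coordinates."""
    names = list(acupoints)
    pts = np.array([acupoints[n] for n in names])        # (k, 3)
    homog = np.hstack([pts, np.ones((len(pts), 1))])     # (k, 4)
    mapped = (homog @ transform.T)[:, :3]
    return dict(zip(names, mapped))

# toy alignment: uniform scale 1.2 plus a translation, as one matrix
T = np.eye(4)
T[:3, :3] *= 1.2
T[:3, 3] = [0.5, -0.3, 0.1]
virtual = {"hegu_LI4": np.array([0.1, 0.2, 0.0])}  # key name is illustrative
real = map_acupoints(virtual, T)
```

Using one homogeneous matrix keeps the conversion a single matrix multiply even when the alignment includes rotation as well as scale and translation.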

Compared with traditional acupuncture positioning based on two-dimensional pictures or naked-eye recognition, the positioning system provided by the present application uses a three-dimensional human body model to locate acupuncture points more accurately, avoiding positioning errors caused by individual differences or posture changes. Capturing the real environment with a head-mounted display device provides the patient's physical state and environmental information in real time; compared with static pictures or pre-recorded video, this provides richer information and supports more precise positioning. Combining human posture recognition with model alignment technology, i.e. recognizing the patient's actual posture and aligning the three-dimensional model to it, further improves the accuracy of acupuncture point positioning. The user interaction module, built on virtual reality technology, lets the user see the virtual human body model and acupuncture points while observing the real environment, greatly enhancing the user experience.

The beneficial technical effects of the present application include: the system achieves precise acupuncture point positioning, improving the effectiveness and efficiency of acupuncture treatment. In addition, virtual reality technology gives the acupuncturist a more intuitive view while performing the procedure, improving its convenience and precision. Furthermore, the head-mounted display device and depth sensor allow the patient to be monitored in real time so that the acupuncture strategy can be adjusted promptly, better meeting the needs of individualized treatment.

Brief Description of the Drawings

Figure 1 is a schematic diagram of an auxiliary acupuncture positioning system based on a three-dimensional human body model provided by the first embodiment of the present application.

Figure 2 is a schematic diagram of the human body posture recognition module of the first embodiment of the present application.

Figure 3 is a schematic diagram of the posture prediction model of the first embodiment of the present application.

Figure 4 is a flow chart of an auxiliary acupuncture positioning method based on a three-dimensional human body model provided by the second embodiment of the present application.

Detailed Description

Many specific details are set forth in the following description to facilitate a full understanding of the present application. However, the present application can be implemented in many ways other than those described here, and those skilled in the art can make similar extensions without departing from its spirit; the present application is therefore not limited to the specific implementations disclosed below.

The first embodiment of the present application provides an auxiliary acupuncture positioning system based on a three-dimensional human body model. Please refer to Figure 1, which is a schematic diagram of the first embodiment. The system is described in detail below with reference to Figure 1.

The auxiliary acupuncture positioning system 100 includes a head-mounted display device 102 supporting mixed reality, a three-dimensional human body model 104, a human body posture recognition module 106, a human body model alignment module 108, an acupuncture point positioning module 110, and a user interaction module 112.

The head-mounted display device 102 supports mixed reality technology and has a built-in camera and depth sensor for capturing images of the real environment and obtaining its three-dimensional structural information.

Mixed reality (MR) is a new display technology that combines the characteristics of virtual reality (VR) and augmented reality (AR); through advanced devices and systems, it merges the virtual world and the real world to create a new visual experience.

The basic characteristics of MR technology are as follows:

(1) Blending of real and virtual: MR not only embeds virtual objects into the real world but also lets users interact with them, creating an environment that mixes reality and virtuality.

(2) Interaction: MR allows users to interact with the virtual world using natural gestures, voice and other modalities, providing a more natural and intuitive user experience.

(3) Realism: through real-time 3D modeling of the real world, MR technology lets virtual objects appear in the real environment in a more convincing way; for example, it can reproduce the occlusion relationships and lighting effects between virtual objects and real-world objects.

MR technology requires the support of sophisticated computer vision, machine learning, 3D modeling and other techniques, and has a very wide range of applications, for example in gaming and entertainment, remote collaboration, education and training, design and manufacturing, and healthcare.

In mixed reality, virtual objects and real objects are displayed in the same field of view, providing the user with a rich visual experience. The head-mounted display device 102 provided by the present application is designed to achieve this effect.

The head-mounted display device 102 has a microdisplay for presenting virtual images in the user's field of view. The device also includes a headband for securing it to the user's head; the size and shape of the headband can be adjusted to fit the head size and shape of different users.

The camera in the head-mounted display device 102 captures the real environment in the user's field of view, while the depth sensor obtains its three-dimensional structural information. This information is used to build a corresponding model of the real environment in the virtual environment, allowing virtual objects to interact with real objects in a realistic way.

When a user wears the head-mounted display device provided by this embodiment, its built-in camera and depth sensor begin to capture and analyze the real environment in the user's field of view. This information is converted into a model of the environment by an image processing algorithm.

Next, the microdisplay presents the virtual environment and virtual targets in the user's field of view. The displayed virtual targets can be static or dynamic, depending on the needs of the application, and the user can interact with them through the device's input interfaces, such as a touchpad, buttons or a voice recognition system. In this way, the head-mounted display device 102 provides the user with a mixed reality visual experience.

This embodiment provides a three-dimensional human body model 104, a virtual human body model created on a computer and marked with virtual acupuncture points. The implementation of the three-dimensional human body model 104 is described in detail below.

First, a basic three-dimensional human body model is created. The model can be built with any of various known techniques, including but not limited to manual modeling with computer graphics tools or acquiring a real human body with laser scanning technology. The model should include the main parts of the human body, such as the head, chest, abdomen and legs, to provide a basis for the subsequent acupuncture point annotation.

After the human body model is created, virtual acupuncture points are annotated on it. First, relevant information about the acupuncture points is collected, including but not limited to their names, locating methods and the corresponding body parts. This information can be obtained from the traditional Chinese medicine literature.

Next, the collected acupuncture point information is converted into three-dimensional coordinates, which can be done with computer graphics coordinate transformations. These coordinates are then applied to the human body model so that each acupuncture point is annotated at its corresponding position. Each point can be represented by a small sphere or another visual icon, accompanied by its basic information for easy identification and understanding. With these steps, the acupuncture point annotation on the three-dimensional human body model is complete.
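One way to hold such annotations in code, as a sketch only; the field names, example points and coordinates below are illustrative and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Acupoint:
    """One annotated acupuncture point on the 3-D model (illustrative)."""
    name: str        # e.g. "Hegu (LI4)"
    position: tuple  # (x, y, z) in the model's coordinate system
    region: str      # body part used when locating the point
    note: str = ""   # locating method, as collected from the literature

model_points = [
    Acupoint("Hegu (LI4)", (0.42, 0.10, 0.95), "hand",
             "between the 1st and 2nd metacarpal bones"),
    Acupoint("Zusanli (ST36)", (0.12, -0.30, 0.45), "leg",
             "four finger-widths below the kneecap"),
]
```

Keeping the locating note alongside the coordinates lets the user interaction module display the point's basic information when it is selected.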

This embodiment provides a human body posture recognition module 106 for predicting the patient's posture from the images captured by the head-mounted display device and their three-dimensional structural information. The module is described below with reference to Figure 2. It mainly consists of an image preprocessing part 202, a posture prediction model 204 and a post-processing part 206.

Image preprocessing part 202: converts the images captured by the camera of the head-mounted display device into a format suitable for input to the posture prediction model. Specific operations may include cropping, scaling and normalization.

The image preprocessing part can be implemented through the following steps:

Image reception S1001: first, the images captured by the head-mounted display device are taken as raw input. They may be RGB images, depth images, or a combination of the two.

Smart cropping S1002: next, a smart cropping algorithm automatically identifies the position of the human body in the image and crops out that region. The algorithm uses a pre-trained deep learning model, such as a convolutional neural network (CNN), trained on a large number of images annotated with human body positions. During cropping, the image is first fed into the model, which outputs a heat map of the human body's position. The approximate center of the body is then determined from the maximum point of the heat map, and a region of predefined size centred on that point is cropped out. This ensures that the human body sits at the center of the cropped image and that as much of its information as possible is retained.
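The crop-around-the-heat-map-peak step can be sketched as follows. This is a simplified stand-in for the trained CNN: the heat map here is given directly rather than predicted, and the padding strategy is an assumption for handling peaks near the image border:

```python
import numpy as np

def crop_around_peak(image, heatmap, crop=64):
    """Crop a fixed-size window centred on the heat-map maximum,
    padding the image first so the window never runs off the edge."""
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    half = crop // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)))
    cy, cx = cy + half, cx + half           # shift centre into padded frame
    return padded[cy - half:cy + half, cx - half:cx + half]

# toy 128x128 RGB image whose heat map peaks at row 40, column 90
img = np.zeros((128, 128, 3))
hm = np.zeros((128, 128))
hm[40, 90] = 1.0
patch = crop_around_peak(img, hm, crop=64)  # 64x64x3 window around the peak
```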

Scale normalization S1003: since people differ in body size, the cropped image is scale-normalized by uniformly resizing it to a predefined size, for example 256x256 pixels. The processed images are then the same size regardless of the size of the original body.

Pixel value normalization S1004: after scale normalization, the pixel values of the image are also normalized, i.e. restricted to a fixed range such as [0,1] or [-1,1]. This makes model training more stable and prevents extreme pixel values from degrading model performance. Specific methods include min-max normalization and Z-score standardization.
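The two normalization methods named in this step can be sketched as follows (the example values and dataset statistics are illustrative):

```python
import numpy as np

def min_max_normalize(img):
    """Min-max normalization: rescale pixel values into [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)   # epsilon guards a flat image

def z_score_normalize(img, mean, std):
    """Z-score standardization with dataset statistics."""
    return (img - mean) / std

pixels = np.array([0.0, 128.0, 255.0])
scaled = min_max_normalize(pixels)                       # now in [0, 1]
standardized = z_score_normalize(pixels, mean=127.5, std=74.0)
```

In practice the mean and std passed to Z-score standardization are computed once over the training set (often per channel) and reused at inference time.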

Image augmentation S1005: finally, to improve the generalization ability of the model, augmentation operations such as random rotation, translation and scaling can be applied to the image. Many augmentation methods exist, and an appropriate one can be chosen according to the application requirements and data characteristics.

Posture prediction model 204: a pre-trained deep learning model that predicts the posture of the human body from the input image. It may be a convolutional neural network (CNN) or another model suited to processing image data.

Such a model is usually called a pose estimation network; it can be a convolutional neural network (CNN) designed specifically for the human pose estimation task. Its basic structure, described below with reference to Figure 3, is as follows:

Input layer 302: the input to the model is the preprocessed image, for example 256x256 pixels. The image is usually in RGB format, so the number of input channels is 3.

Convolutional layers 304: next comes a series of convolutional layers. Each contains a convolution operation, a nonlinear activation function (such as ReLU) and, optionally, a pooling operation. The convolutions extract features from the image, and pooling reduces the spatial size of the features, lowering the complexity of the model.

Keypoint prediction layer 306: after the convolutional layers comes the keypoint prediction layer, whose task is to predict, for each pixel, the probability that it is a human body keypoint (such as the neck, elbow or knee). It usually consists of a convolution operation followed by a Softmax activation. The convolution has as many output channels as there are keypoints, each channel corresponding to one keypoint's probability map, and the Softmax normalizes the probability map values to between 0 and 1.

关键点偏移向量预测层308:该层与关键点概率预测层306并行。这一层由一个卷积操作组成,输出通道数是关键点数量的两倍,因为每个关键点有x和y两个方向的偏移。这一层的任务是预测每个关键点的偏移向量,即相对于预测坐标的微小变化。Keypoint offset vector prediction layer 308: This layer is parallel to the keypoint probability prediction layer 306. This layer consists of a convolution operation, and the number of output channels is twice the number of keypoints, since each keypoint has an offset in both x and y directions. The task of this layer is to predict the offset vector of each keypoint, that is, a small change relative to the predicted coordinates.

解码层310:在关键点预测层之后,是解码层。解码层的任务是将概率图转化为具体的关键点坐标。具体的做法是找出每个概率图中的最大值点,然后加上对应的偏移向量,将其坐标作为关键点的坐标。Decoding layer 310: After the key point prediction layer, there is the decoding layer. The task of the decoding layer is to convert the probability map into specific key point coordinates. The specific method is to find the maximum value point in each probability map, then add the corresponding offset vector, and use its coordinates as the coordinates of the key point.
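The decoding step described above (combining the outputs of layers 306 and 308) can be sketched as follows — take the peak of each probability map, then add the corresponding offset vector at that location. This is an illustrative NumPy sketch of the decoding logic, not the trained network itself.

```python
import numpy as np

def decode_keypoints(prob_maps, offsets):
    """prob_maps: (K, H, W) per-keypoint probability maps (layer 306).
    offsets:   (2K, H, W) per-keypoint x/y offset maps (layer 308).
    Returns a (K, 2) array of refined (x, y) keypoint coordinates (layer 310)."""
    K, H, W = prob_maps.shape
    coords = np.zeros((K, 2))
    for k in range(K):
        idx = np.argmax(prob_maps[k])      # peak of the k-th probability map
        y, x = divmod(idx, W)
        dx = offsets[2 * k, y, x]          # x-direction offset at the peak
        dy = offsets[2 * k + 1, y, x]      # y-direction offset at the peak
        coords[k] = (x + dx, y + dy)       # refined keypoint coordinate
    return coords

# Toy example: one keypoint whose peak sits at (x=3, y=2) with offset (+0.5, -0.25).
probs = np.zeros((1, 4, 5)); probs[0, 2, 3] = 1.0
offs = np.zeros((2, 4, 5)); offs[0, 2, 3] = 0.5; offs[1, 2, 3] = -0.25
print(decode_keypoints(probs, offs))  # [[3.5  1.75]]
```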

输出层312:最后,模型的输出是所有关键点的坐标。它们构成了人体的姿态。Output layer 312: Finally, the output of the model is the coordinates of all key points. They form the posture of the human body.

在实施过程中,深度学习模型的结构和参数需要经过大量的训练数据来优化。训练数据通常包括带有人体关键点标注的图像,通过反向传播和梯度下降等优化算法,模型的参数可以不断更新,以使模型的预测结果和真实结果尽可能接近。During the implementation process, the structure and parameters of the deep learning model need to be optimized through a large amount of training data. Training data usually includes images with human body key point annotations. Through optimization algorithms such as backpropagation and gradient descent, the parameters of the model can be continuously updated to make the model's prediction results and the real results as close as possible.

后处理部分206:后处理部分用于将姿态预测模型的输出转化为可以直接使用的姿态数据。这可能包括将模型输出的连续数值转化为离散的姿态类别,或者将模型输出的相对坐标转化为绝对坐标。Post-processing part 206: The post-processing part converts the output of the posture prediction model into posture data that can be used directly. This may include converting continuous numerical values output by the model into discrete posture categories, or converting relative coordinates output by the model into absolute coordinates.

后处理部分包括如下实施步骤:The post-processing part includes the following implementation steps:

数据解析S2001:首先,从姿态预测模型的输出中解析出关键点坐标和关键点偏移向量。这些数据以连续数值的形式表示关键点的位置。Data parsing S2001: First, the key point coordinates and key point offset vectors are parsed from the output of the posture prediction model. These data represent the locations of key points as continuous numerical values.

关键点坐标的调整S2002:然后,使用关键点偏移向量来修正关键点坐标,使得预测结果更精确。具体做法是,将关键点坐标加上对应的偏移向量,得到修正后的关键点坐标。Adjustment of key point coordinates S2002: Then, use the key point offset vector to correct the key point coordinates to make the prediction result more accurate. The specific method is to add the key point coordinates to the corresponding offset vector to obtain the corrected key point coordinates.

坐标转换S2003:为了适应针灸定位的需求,需要将关键点坐标从头显设备的坐标系统转换到真实世界的坐标系统。这可以通过以下步骤实现:首先,将修正后的关键点坐标映射到深度图像中,得到关键点的像素坐标和深度值;然后,使用头显设备的内部参数(如焦距和主点坐标)和外部参数(如设备的位置和朝向),将像素坐标和深度值转换为真实世界的坐标。Coordinate conversion S2003: To meet the needs of acupuncture positioning, the key point coordinates need to be converted from the headset's coordinate system to the real-world coordinate system. This is achieved in two steps: first, map the corrected key point coordinates into the depth image to obtain the pixel coordinates and depth values of the key points; then, use the headset's intrinsic parameters (such as focal length and principal point coordinates) and extrinsic parameters (such as the device's position and orientation) to convert the pixel coordinates and depth values into real-world coordinates.
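The conversion in S2003 follows the standard pinhole back-projection: X = (u − cx)·d/fx, Y = (v − cy)·d/fy, Z = d in camera coordinates, followed by the headset's camera-to-world pose. A sketch with made-up intrinsic values (fx, fy, cx, cy) and an identity pose, for illustration only:

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a pixel (u, v) with depth value `depth` into world coordinates.
    cam_to_world is the headset's 4x4 extrinsic pose (camera frame -> world frame)."""
    # Pinhole model: camera-frame coordinates from pixel coordinates and depth.
    p_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth,
                      1.0])                  # homogeneous coordinates
    return (cam_to_world @ p_cam)[:3]

# Identity pose: world frame coincides with the camera frame.
T = np.eye(4)
p = pixel_to_world(u=320, v=240, depth=2.0, fx=500.0, fy=500.0,
                   cx=320.0, cy=240.0, cam_to_world=T)
print(p)  # [0. 0. 2.] — the principal-point pixel maps straight ahead of the camera
```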

坐标过滤S2004:为了提高坐标预测的稳定性,我们可以采用一些过滤算法,如卡尔曼滤波器,对连续的预测结果进行平滑处理。Coordinate filtering S2004: In order to improve the stability of coordinate prediction, we can use some filtering algorithms, such as Kalman filter, to smooth the continuous prediction results.
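The filtering in S2004 can be as simple as a constant-position Kalman filter applied independently to each coordinate. A toy sketch — the process-noise and measurement-noise variances q and r are assumed values, not ones from the source:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """1-D constant-position Kalman filter.
    q: process-noise variance, r: measurement-noise variance (assumed values)."""
    x, p = measurements[0], 1.0          # initial state estimate and variance
    out = [x]
    for z in measurements[1:]:
        p = p + q                        # predict: state unchanged, variance grows
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update toward the new measurement z
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

noisy = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.05])
smooth = kalman_smooth(noisy)
# The filtered track varies less than the raw measurements.
print(np.std(smooth) < np.std(noisy))  # True
```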

身体部位识别S2005:由于针灸点的位置通常是相对于身体部位来定义的,需要识别出关键点对应的身体部位。这可以通过一些预先定义的规则来实现,例如,如果一个关键点位于两个手臂关键点之间的位置,那么可以认定它是胸部的关键点。Body part identification S2005: Since the positions of acupuncture points are usually defined relative to body parts, it is necessary to identify the body parts corresponding to key points. This can be achieved through some pre-defined rules, for example, if a key point is located between two arm key points, then it can be considered to be a chest key point.

姿态类别识别S2006:最后,可以根据关键点的相对位置关系,识别出患者的姿态类别。这可以通过一些预先定义的规则来实现,例如,如果所有的手部关键点都位于头部关键点的下方,那么可以认定患者是坐着的。Posture category recognition S2006: Finally, the patient's posture category can be identified based on the relative positional relationship of key points. This can be achieved through some predefined rules, for example, if all hand key points are located below the head key points, then the patient can be considered seated.
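The rule in S2006 amounts to simple comparisons on keypoint coordinates. A sketch using hypothetical keypoint names; note that in image coordinates y grows downward, so "below" means a larger y value:

```python
def classify_posture(keypoints):
    """keypoints: dict mapping a keypoint name to its (x, y) image coordinate.
    Rule from S2006: if every hand keypoint lies below the head keypoint
    (larger y in image coordinates), classify the posture as 'seated'."""
    head_y = keypoints["head"][1]
    hands = [keypoints[k] for k in ("left_hand", "right_hand")]
    if all(hand[1] > head_y for hand in hands):
        return "seated"
    return "other"

pts = {"head": (100, 50), "left_hand": (80, 200), "right_hand": (120, 210)}
print(classify_posture(pts))  # seated
```

A real system would combine many such rules (or a learned classifier) over all keypoints, but each rule has this comparison-on-relative-positions shape.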

当头显设备102捕获到影像后,影像首先会被送入图像预处理部分。预处理后的影像数据将输入到姿态预测模型中,模型将预测出影像中人体的姿态。最后,后处理部分将预测出的姿态数据转化为可以直接使用的格式。After the head-mounted display device 102 captures an image, the image is first sent to the image preprocessing part. The preprocessed image data is then input into the posture prediction model, which predicts the posture of the human body in the image. Finally, the post-processing part converts the predicted posture data into a format that can be used directly.

通过以上步骤,人体姿态识别模块能够从头显设备的影像中预测出患者的姿态,为后续的针灸点定位提供依据。所述人体姿态识别模块可以在一台计算机中实现,也可以在头显设备中的计算单元中实现。Through the above steps, the human posture recognition module can predict the patient's posture from the image of the headset device, providing a basis for subsequent acupuncture point positioning. The human posture recognition module can be implemented in a computer or in a computing unit in a head-mounted display device.

本实施例提供的人体模型对齐模块108,用于根据预测的患者姿态,通过数值优化算法将虚拟的人体模型对齐到真实的患者身体。The human body model alignment module 108 provided in this embodiment is used to align the virtual human body model to the real patient's body through a numerical optimization algorithm based on the predicted patient posture.

人体模型对齐模块108主要由姿态数据处理部分、对齐算法部分和模型调整部分构成。The human model alignment module 108 is mainly composed of a posture data processing part, an alignment algorithm part and a model adjustment part.

姿态数据处理部分:该部分负责处理来自人体姿态识别模块的数据,将其转化为适合输入对齐算法的格式。这可能包括将姿态类别转化为姿态参数,或者将绝对坐标转化为相对坐标。如果输入的姿态数据是类别形式(例如:站立,坐姿等),则需要将其转化为具体的姿态参数,如关节角度或者关节坐标等。这一步可以通过查表、基于规则的映射或者机器学习模型来实现。如果输入的姿态数据是绝对坐标,需要将其转化为相对于人体模型的坐标。这可以通过减去模型坐标的平均值或者其他预定的参考点来实现。Posture data processing part: This part processes the data from the human posture recognition module and converts it into a format suitable for input to the alignment algorithm. This may include converting posture categories into posture parameters, or converting absolute coordinates into relative coordinates. If the input posture data is categorical (for example, standing, sitting, etc.), it needs to be converted into concrete posture parameters such as joint angles or joint coordinates. This step can be implemented through lookup tables, rule-based mapping, or a machine learning model. If the input posture data is in absolute coordinates, it needs to be converted into coordinates relative to the human body model. This can be achieved by subtracting the mean of the model coordinates or another predetermined reference point.

对齐算法部分:该部分采用数值优化算法,通过优化人体模型的姿态参数使模型与真实人体尽可能对齐。该算法可能是一种迭代优化算法,如梯度下降法,或者其他适合处理此类问题的算法。Alignment algorithm part: This part uses a numerical optimization algorithm to align the model with the real human body as much as possible by optimizing the posture parameters of the human body model. The algorithm may be an iterative optimization algorithm such as gradient descent, or another algorithm suitable for this type of problem.

对齐算法部分可以通过如下步骤实施:The alignment algorithm part can be implemented through the following steps:

(1)设定目标函数:(1) Set the objective function:

假设有预测的姿态参数,表示为向量P(包括了所有关节的位置或角度),和虚拟人体模型的姿态参数,表示为向量M。这里的目标是最小化这两个向量之间的差异。Assume that there are predicted posture parameters, represented by vector P (including the positions or angles of all joints), and posture parameters of the virtual human model, represented by vector M. The goal here is to minimize the difference between these two vectors.

因此,目标函数可以设定为两个向量的欧氏距离的平方,即L=||P-M||²。这个函数的值越小,表示虚拟模型的姿态和真实姿态之间的差异越小。Therefore, the objective function can be set as the squared Euclidean distance between the two vectors, that is, L = ||P - M||². The smaller the value of this function, the smaller the difference between the pose of the virtual model and the real pose.

在这个基础上,为了提高针灸定位的精度,可以增加一个针对针灸点的优化项。假设针灸点在预测姿态和模型姿态下的位置分别为P_a和M_a,可以将针灸点的位置差异也纳入目标函数中,即L=||P-M||²+λ||P_a-M_a||²,其中λ是一个权重参数,用于调节两项的相对重要性。On this basis, in order to improve the accuracy of acupuncture positioning, an optimization term for the acupuncture points can be added. Assuming that the positions of the acupuncture points under the predicted posture and the model posture are P_a and M_a respectively, the positional difference of the acupuncture points can also be included in the objective function, that is, L = ||P - M||² + λ||P_a - M_a||², where λ is a weight parameter used to adjust the relative importance of the two terms.
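The objective L = ||P − M||² + λ||P_a − M_a||² translates directly into code. A minimal sketch with made-up vectors and an assumed weight λ:

```python
import numpy as np

def alignment_loss(P, M, P_a, M_a, lam=0.5):
    """L = ||P - M||^2 + lambda * ||P_a - M_a||^2
    P, M:     predicted vs. model posture parameter vectors
    P_a, M_a: acupoint positions under the predicted and model postures
    lam:      weight balancing the two terms (an assumed value)."""
    return np.sum((P - M) ** 2) + lam * np.sum((P_a - M_a) ** 2)

P  = np.array([0.0, 1.0, 2.0]);  M  = np.array([0.0, 1.5, 2.0])
Pa = np.array([1.0, 1.0]);       Ma = np.array([1.0, 2.0])
print(alignment_loss(P, M, Pa, Ma))  # 0.25 + 0.5 * 1.0 = 0.75
```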

预测姿态下的针灸点位置的数据(P_a)来自于已经建立的针灸知识库和人体姿态识别模块的输出。姿态识别:首先,人体姿态识别模块会通过分析从头显设备获取的真实环境影像,预测出患者的姿态参数P,这可能包括所有关节的位置或角度等。针灸知识库:已经建立的针灸知识库中存储着针灸点相对于各个关节的相对位置信息。这些信息是根据古代经典如《黄帝内经》和现代针灸学的研究成果等编制而成的。针灸点位置预测:根据预测的姿态参数P和针灸知识库,可以计算出预测姿态下的针灸点位置P_a。具体来说,这通常涉及到一些解析或数值的几何计算,例如旋转和平移等。The data (P_a) for predicting the acupuncture point position under the posture comes from the output of the established acupuncture knowledge base and the human posture recognition module. Posture recognition: First, the human posture recognition module predicts the patient's posture parameters P by analyzing the real environment images obtained from the headset device, which may include the positions or angles of all joints, etc. Acupuncture knowledge base: The established acupuncture knowledge base stores the relative position information of acupuncture points relative to each joint. This information is compiled based on ancient classics such as the Huangdi Neijing and modern acupuncture research results. Acupuncture point position prediction: According to the predicted posture parameter P and the acupuncture knowledge base, the acupuncture point position P_a in the predicted posture can be calculated. Specifically, this usually involves some analytical or numerical geometric calculations, such as rotations and translations.

需要注意的是,因为每个人的身体结构都有一定的差异,所以预测的针灸点位置P_a可能并不完全准确。这也是为什么在目标函数中,要加入一个针对针灸点的优化项,以进一步提高针灸定位的精度。It should be noted that because everyone's body structure has certain differences, the predicted acupuncture point location P_a may not be completely accurate. This is why an optimization term for acupuncture points should be added to the objective function to further improve the accuracy of acupuncture positioning.

(2)优化算法:(2) Optimization algorithm:

采用适合的数值优化算法,如梯度下降算法,来最小化目标函数。具体来说,首先随机初始化模型的姿态参数M,然后在每一步迭代中,按照目标函数的梯度方向调整M,使得目标函数的值逐渐减小。A suitable numerical optimization algorithm, such as the gradient descent algorithm, is used to minimize the objective function. Specifically, the attitude parameter M of the model is first randomly initialized, and then in each iteration, M is adjusted according to the gradient direction of the objective function, so that the value of the objective function gradually decreases.

在优化过程中,要考虑到模型的物理约束,比如关节角度的上下限。具体来说,如果在某一步迭代中,模型的某个关节角度超过了其物理上的上下限,那么需要将其调整回到限制范围内。这可以通过在每一步迭代后,对M进行裁剪来实现,即M=min(max(M,lower_bound),upper_bound),其中lower_bound和upper_bound分别表示关节角度的下限和上限。During the optimization process, the physical constraints of the model, such as the upper and lower limits of joint angles, must be taken into consideration. Specifically, if in a certain iteration, a certain joint angle of the model exceeds its physical upper and lower limits, it needs to be adjusted back to the limit range. This can be achieved by clipping M after each iteration step, that is, M = min (max (M, lower_bound), upper_bound), where lower_bound and upper_bound represent the lower and upper limits of the joint angles respectively.
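Putting steps (1) and (2) together: gradient descent on the posture term with joint-limit clipping after every update. A sketch under simplifying assumptions — only the ||P − M||² term is optimized (its gradient with respect to M is −2(P − M)), and the joint bounds are made-up values:

```python
import numpy as np

def align_model(P, lower, upper, lr=0.1, steps=200):
    """Minimize L = ||P - M||^2 over M, clipping M to the joint limits."""
    rng = np.random.default_rng(0)
    M = rng.uniform(lower, upper)                     # random initialization of M
    for _ in range(steps):
        grad = -2.0 * (P - M)                         # dL/dM
        M = M - lr * grad                             # gradient-descent step
        M = np.minimum(np.maximum(M, lower), upper)   # M = min(max(M, lb), ub)
    return M

P = np.array([0.3, 1.2, -0.4])          # target posture parameters (second exceeds the limit)
lower = np.array([-1.0, -1.0, -1.0])
upper = np.array([ 1.0,  1.0,  1.0])
M = align_model(P, lower, upper)
print(np.round(M, 3))  # converges to P where P lies inside the limits,
                       # and saturates at the bound (1.0) where it does not
```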

通过以上的步骤,可以持续优化虚拟人体模型的姿态,使其逐渐接近真实的姿态,同时保证模型的物理合理性,并提高针灸定位的精度。Through the above steps, the posture of the virtual human body model can be continuously optimized to gradually approach the real posture, while ensuring the physical rationality of the model and improving the accuracy of acupuncture positioning.

模型调整部分:模型调整部分根据对齐算法的结果调整虚拟人体模型的姿态,使其与真实人体尽可能对齐。Model adjustment part: The model adjustment part adjusts the posture of the virtual human model according to the results of the alignment algorithm to align it with the real human body as much as possible.

当人体姿态识别模块预测出患者的姿态后,姿态数据首先会被送入姿态数据处理部分进行处理。处理后的数据将输入到对齐算法部分,对齐算法将优化人体模型的姿态参数以使模型尽可能对齐到真实人体。最后,模型调整部分将根据优化结果调整虚拟人体模型的姿态。同时,根据模型姿态的变化,对虚拟人体模型上的针灸点位置进行更新。例如,可以通过线性插值或者其他合适的插值方法,根据周围的肌肉和骨骼的变化,对针灸点的位置进行动态调整。After the human posture recognition module predicts the patient's posture, the posture data will first be sent to the posture data processing part for processing. The processed data will be input to the alignment algorithm part, which will optimize the posture parameters of the human model to make the model align to the real human body as much as possible. Finally, the model adjustment part will adjust the posture of the virtual human model based on the optimization results. At the same time, the acupuncture point positions on the virtual human body model are updated according to changes in the model's posture. For example, the position of the acupuncture point can be dynamically adjusted according to changes in the surrounding muscles and bones through linear interpolation or other suitable interpolation methods.

通过以上步骤,可以实现虚拟人体模型的姿态与真实的患者姿态的对齐。Through the above steps, the posture of the virtual human body model can be aligned with the posture of the real patient.

本实施例提供的针灸点定位模块110,用于根据虚拟的人体模型和真实的患者身体的对齐关系,将虚拟针灸点的位置转换到真实的患者身体上。The acupuncture point positioning module 110 provided in this embodiment is used to convert the position of the virtual acupuncture point to the real patient's body based on the alignment relationship between the virtual human body model and the real patient's body.

针灸点定位模块110主要由对齐数据处理部分、定位算法部分和定位结果输出部分构成。The acupuncture point positioning module 110 is mainly composed of an alignment data processing part, a positioning algorithm part and a positioning result output part.

对齐数据处理部分:该部分首先会接收来自人体模型对齐模块的数据,这些数据包括了虚拟人体模型的关节角度、位置等信息。这部分的工作是将这些姿态参数转化为对齐变换矩阵,也就是将模型姿态参数转化为一个4x4的齐次变换矩阵,该矩阵可以描述模型的旋转、平移和缩放等变换。具体的转化方法可以采用欧拉角或者四元数来描述旋转,平移和缩放可以直接使用对应的向量和数值。转化结果会被送入定位算法部分。Alignment data processing part: This part first receives data from the human model alignment module, which includes the joint angles, positions and other information of the virtual human model. Its job is to convert these posture parameters into an alignment transformation matrix, that is, a 4x4 homogeneous transformation matrix describing the model's rotation, translation, and scaling. The rotation can be described using Euler angles or quaternions, while the translation and scaling can directly use the corresponding vectors and values. The conversion results are fed into the positioning algorithm part.

定位算法部分:该部分接收来自对齐数据处理部分的变换矩阵,和预先设定的虚拟针灸点的位置。虚拟针灸点的位置可以表示为一个三维坐标。定位算法将变换矩阵应用到虚拟针灸点的位置,得到真实的患者身体上对应的针灸点位置。具体的计算过程可以采用矩阵乘法操作,即新的位置=变换矩阵*虚拟针灸点的位置。Positioning algorithm part: This part receives the transformation matrix from the alignment data processing part and the preset position of the virtual acupuncture point. The position of the virtual acupuncture point can be expressed as a three-dimensional coordinate. The positioning algorithm applies the transformation matrix to the location of the virtual acupuncture point to obtain the corresponding acupuncture point location on the real patient's body. The specific calculation process can use matrix multiplication operation, that is, the new position = transformation matrix * the position of the virtual acupuncture point.
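The computation above (new position = transformation matrix × virtual acupoint position) is one matrix-vector product in homogeneous coordinates. A sketch where the translation values are made up for illustration:

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous alignment matrix T to a 3-D acupoint position p."""
    ph = np.append(p, 1.0)          # homogeneous coordinates [x, y, z, 1]
    return (T @ ph)[:3]             # drop the homogeneous component

# A transform that translates by (0.1, -0.2, 0.05), with no rotation or scaling.
T = np.eye(4)
T[:3, 3] = [0.1, -0.2, 0.05]
p_virtual = np.array([0.5, 1.0, 0.2])   # acupoint on the virtual model
p_real = transform_point(T, p_virtual)  # corresponding point on the patient
print(p_real)  # [0.6  0.8  0.25]
```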

定位结果输出部分:负责将定位算法的结果转化为可以直接使用的针灸点位置数据。定位结果可能是一个三维坐标,这部分将三维坐标值转化为相对于参考点(如患者的身体某一部分或者头显设备的摄像头)的距离和角度,或者将其转化为二维图像的像素坐标。具体的转化过程可以通过相应的坐标系转换公式来完成。最终的输出格式可以根据实际需求设定,比如可以以文本或者可视化的形式输出。Positioning result output part: Responsible for converting the results of the positioning algorithm into acupuncture point location data that can be used directly. The positioning result may be a three-dimensional coordinate; this part converts the three-dimensional coordinate value into a distance and angle relative to a reference point (such as a part of the patient's body or the camera of the head-mounted display device), or converts it into the pixel coordinates of a two-dimensional image. The specific conversion process can be completed through the corresponding coordinate-system conversion formulas. The final output format can be set according to actual needs, such as text or visual output.

当人体模型对齐模块完成对齐操作后,对齐数据首先会被送入对齐数据处理部分进行处理。处理后的数据将输入到定位算法部分,定位算法将根据对齐数据和虚拟针灸点的位置计算出真实的患者身体上对应的针灸点位置。最后,定位结果输出部分将定位结果转化为可以直接使用的格式。When the human body model alignment module completes the alignment operation, the alignment data will first be sent to the alignment data processing part for processing. The processed data will be input into the positioning algorithm part, and the positioning algorithm will calculate the corresponding acupuncture point positions on the real patient's body based on the alignment data and the positions of the virtual acupuncture points. Finally, the positioning result output part converts the positioning results into a format that can be used directly.

通过以上步骤,针灸点定位模块能够将虚拟针灸点的位置转换到真实的患者身体上,为针灸师的操作提供依据。Through the above steps, the acupuncture point positioning module can convert the position of the virtual acupuncture point to the real patient's body, providing a basis for the acupuncturist's operation.

本实施例提供的用户交互模块112,用于提供一种用户交互方式,使用户在看到真实的患者身体的同时,也可以看到虚拟的人体模型和转换到真实的患者身体上的针灸点。The user interaction module 112 provided in this embodiment is used to provide a user interaction method, so that the user can see not only the real patient's body, but also the virtual human body model and the acupuncture points converted to the real patient's body.

用户交互模块主要由虚拟现实(VR)显示部分、交互输入部分和交互反馈部分构成。The user interaction module is mainly composed of a virtual reality (VR) display part, an interactive input part and an interactive feedback part.

虚拟现实(VR)显示部分:该部分负责将虚拟的人体模型和虚拟针灸点以及通过头显设备捕捉到的真实环境影像融合显示到用户的视野中。这涉及到一些图像合成技术,如图像融合、图像拼接等。另外,可以将显示部分升级为增强现实(AR)显示,它可以将虚拟的人体模型和针灸点与真实的环境结合起来,并且还可以根据用户的视角动态调整显示内容。这涉及到一些图像融合和三维图像渲染技术,以实现虚拟和真实的无缝融合。Virtual reality (VR) display part: This part is responsible for integrating and displaying the virtual human model, virtual acupuncture points, and real environment images captured through the head-mounted display device into the user's field of vision. This involves some image synthesis technologies, such as image fusion, image splicing, etc. In addition, the display part can be upgraded to an augmented reality (AR) display, which can combine virtual human models and acupuncture points with the real environment, and can also dynamically adjust the display content according to the user's perspective. This involves some image fusion and three-dimensional image rendering technology to achieve a seamless blend of virtual and real.

此外,系统可以将虚拟针灸点以三维箭头或者高亮区域的形式进行显示,提升了针灸点的可见性,为用户提供了直观的视觉指示,帮助用户更好地理解和定位针灸点。这些三维箭头或高亮区域可以根据针灸点的重要性、所处的体位等因素来进行色彩或大小的区分,使得用户能够迅速区分和识别各个针灸点的优先级和相对位置。In addition, the system can display virtual acupuncture points in the form of three-dimensional arrows or highlighted areas, which improves the visibility of acupuncture points, provides users with intuitive visual instructions, and helps users better understand and locate acupuncture points. These three-dimensional arrows or highlighted areas can be distinguished by color or size based on the importance of the acupuncture points, body position and other factors, allowing users to quickly distinguish and identify the priority and relative position of each acupuncture point.

此外,还可以将虚拟针灸点以三维箭头或者高亮区域的形式进行显示,以便于用户更好地理解和定位针灸点。为了进一步提高用户的交互体验,可以引入更多种类的显示方式。例如,可以通过动画效果,例如,使虚拟针灸点处的模型表皮产生波动或发光,以模拟针灸点被刺激时的情景。也可以通过增强现实(AR)技术,为虚拟针灸点添加描述性的文本标签或图形符号,提供关于针灸点名称、作用以及适用疾病的信息。In addition, virtual acupuncture points can also be displayed in the form of three-dimensional arrows or highlighted areas to facilitate users to better understand and locate acupuncture points. In order to further improve the user's interactive experience, more types of display methods can be introduced. For example, animation effects can be used, such as making the model skin at the virtual acupuncture point fluctuate or glow, to simulate the scene when the acupuncture point is stimulated. Augmented reality (AR) technology can also be used to add descriptive text labels or graphic symbols to virtual acupuncture points to provide information about the names, functions and applicable diseases of acupuncture points.

在一些复杂的针灸操作中,还可以使用虚拟现实(VR)技术展示操作的三维动态过程,例如,插针、转针、提针等操作过程,使用户可以从各种角度详细了解操作的步骤和技巧,进一步增强系统的教学和训练功能。此外,结合语音识别技术,用户可以通过语音指令进行模型和针灸点的查询或操作,提供更为方便快捷的交互方式。这些创新的显示方式不仅提高了系统的用户体验,也有助于提升针灸定位的准确性和效率。In some complex acupuncture operations, virtual reality (VR) technology can also be used to display the three-dimensional dynamic process of the operation, such as needle insertion, needle rotation, and needle lifting, so that users can study the steps and techniques of the operation in detail from various angles, further enhancing the system's teaching and training functions. In addition, combined with speech recognition technology, users can query or operate the model and acupuncture points through voice commands, providing a more convenient interaction method. These innovative display methods not only improve the user experience of the system, but also help improve the accuracy and efficiency of acupuncture positioning.

交互输入部分:该部分接收并处理用户的交互输入,如手势操作、语音命令等。这需要运用一些用户输入识别技术,如手势识别、语音识别等。此外,可以增加一种视线追踪技术,使系统能够知道用户正在看哪个针灸点或者模型区域,并将此区域自动放大或者高亮显示。视线追踪技术可以通过专门的设备或者使用一些现有的头显设备(如某些类型的VR设备)中的前置摄像头来实现。Interactive input part: This part receives and processes the user's interactive input, such as gesture operations, voice commands, etc. This requires the use of some user input recognition technologies, such as gesture recognition, voice recognition, etc. In addition, a gaze tracking technology can be added so that the system can know which acupuncture point or model area the user is looking at and automatically enlarge or highlight this area. Gaze tracking technology can be implemented through specialized equipment or using the front-facing camera in some existing headsets (such as certain types of VR equipment).

交互反馈部分:该部分根据用户的交互输入调整虚拟的人体模型和针灸点的显示,以及提供相应的用户反馈。这可以包括改变虚拟物体的位置、颜色、大小等属性,或者提供震动、声音等形式的反馈。另外,可以使用虚拟现实技术在用户选定某个针灸点后,显示一个虚拟的针在选定的点进行刺入的动画,以提供视觉上的反馈。此外,还可以结合触觉反馈设备,比如手持的震动设备或者穿戴式设备,当虚拟的针刺入虚拟模型时,设备会产生震动,使用户能够感受到虚拟的针刺入的感觉,从而提供更加直观和真实的反馈。Interactive feedback part: This part adjusts the display of the virtual human model and acupuncture points based on the user's interactive input, and provides corresponding user feedback. This can include changing the position, color, size, and other attributes of virtual objects, or providing feedback in the form of vibrations, sounds, etc. In addition, virtual reality technology can be used to display an animation of a virtual needle piercing the selected point after the user selects an acupuncture point, to provide visual feedback. Haptic feedback devices, such as handheld vibration devices or wearables, can also be incorporated: when the virtual needle penetrates the virtual model, the device vibrates, allowing the user to feel the virtual needle's insertion and thereby providing more intuitive and realistic feedback.

用户交互模块还可以包括一个自适应的用户辅助系统,它可以根据用户的操作习惯和技术水平,自动调整模型和针灸点的显示方式,以及反馈方式。例如,对于初级用户,系统可以显示更多的指导信息和辅助线,对于高级用户,则可以提供更多的自由度和自定义选项。此系统可以通过收集和分析用户的操作数据来自动调整,也可以让用户手动设定。The user interaction module can also include an adaptive user assistance system, which can automatically adjust the display mode of the model and acupuncture points, as well as the feedback mode according to the user's operating habits and technical level. For example, for novice users, the system can display more guidance information and auxiliary lines, and for advanced users, it can provide more freedom and customization options. This system can automatically adjust by collecting and analyzing user operation data, or it can allow users to set it manually.

用户通过头显设备看到的画面是由虚拟现实(VR)显示部分生成的,该画面包括虚拟的人体模型、针灸点以及真实环境的影像。用户可以通过交互输入部分进行操作,如旋转模型、选定针灸点等。交互输入部分会识别用户的输入,将识别结果发送给交互反馈部分。交互反馈部分根据识别结果调整虚拟物体的显示以及提供反馈,使用户能够感知到他的操作已经被系统接收并做出了响应。The picture that the user sees through the head-mounted display device is generated by the virtual reality (VR) display part, which includes virtual human models, acupuncture points, and images of the real environment. Users can perform operations through the interactive input section, such as rotating the model, selecting acupuncture points, etc. The interactive input part will recognize the user's input and send the recognition results to the interactive feedback part. The interactive feedback part adjusts the display of virtual objects and provides feedback based on the recognition results, so that the user can perceive that his operation has been received and responded to by the system.

通过以上步骤,用户交互模块112能够提供一种直观、易操作的交互方式,使用户在看到真实的患者身体的同时,也可以看到并操作虚拟的人体模型和针灸点。Through the above steps, the user interaction module 112 can provide an intuitive and easy-to-operate interaction method, allowing the user to see and operate the virtual human body model and acupuncture points while seeing the real patient's body.

本申请第二实施例提供一种基于三维人体模型的辅助针灸定位方法。请参看图4,该图为本申请第二实施例的示意图。以下结合图4对本申请第二实施例提供一种基于三维人体模型的辅助针灸定位方法进行详细说明。由于本实施例类似于第一实施例,因此介绍的比较简单,请参考第一实施例的相关部分。The second embodiment of the present application provides an auxiliary acupuncture positioning method based on a three-dimensional human body model. Please refer to Figure 4, which is a schematic diagram of the second embodiment of the present application. An auxiliary acupuncture positioning method based on a three-dimensional human body model according to the second embodiment of the present application will be described in detail below with reference to FIG. 4 . Since this embodiment is similar to the first embodiment, the introduction is relatively simple. Please refer to the relevant parts of the first embodiment.

本实施例提供的一种基于三维人体模型的辅助针灸定位方法,包括如下步骤:This embodiment provides an auxiliary acupuncture positioning method based on a three-dimensional human body model, including the following steps:

S400:利用头显设备,捕捉患者的真实环境的影像并获取其三维结构信息;S400: Use the head-mounted display device to capture images of the patient's real environment and obtain its three-dimensional structural information;

S402:创建三维人体模型,并在模型上标注针灸点;S402: Create a three-dimensional human body model and mark acupuncture points on the model;

S404:从头显设备的影像中预测患者的姿态;S404: Predict the patient's posture from the image of the headset device;

S406:根据预测的患者姿态,通过数值优化算法将虚拟的人体模型对齐到真实的患者身体;S406: According to the predicted patient posture, align the virtual human body model to the real patient's body through a numerical optimization algorithm;

S408:根据虚拟的人体模型和真实的患者身体的对齐关系,将虚拟针灸点的位置转换到真实的患者身体上;S408: Based on the alignment relationship between the virtual human body model and the real patient's body, convert the position of the virtual acupuncture point to the real patient's body;

S410:通过用户交互技术,提供一种用户交互方式,使用户在看到真实的患者身体的同时,也可以看到虚拟的人体模型和转换到真实的患者身体上的针灸点。S410: Provide a user interaction method through user interaction technology, allowing users to see the virtual human body model and the acupuncture points converted to the real patient's body while seeing the real patient's body.

本实施例中,所述用户交互方式还包括允许用户通过手势或语音命令来选择和操作针灸点,并在用户选择了一个针灸点后在屏幕上显示出相关的信息。In this embodiment, the user interaction method also includes allowing the user to select and operate acupuncture points through gestures or voice commands, and display relevant information on the screen after the user selects an acupuncture point.

本实施例中,所述从头显设备的影像中预测患者的姿态包括:In this embodiment, predicting the patient's posture from the image of the headset device includes:

使用一个深度神经网络从头显的影像中预测患者的姿态。Use a deep neural network to predict the patient's posture from the headset's images.

本实施例中,所述根据预测的患者姿态,通过数值优化算法将虚拟的人体模型对齐到真实的患者身体,包括:In this embodiment, the virtual human body model is aligned to the real patient's body through a numerical optimization algorithm based on the predicted patient posture, including:

使用梯度下降算法来找到使虚拟人体模型和患者身体的差异最小的模型参数;Use a gradient descent algorithm to find model parameters that minimize the difference between the virtual human model and the patient's body;

根据所述模型参数,将虚拟的人体模型对齐到真实的患者身体。According to the model parameters, the virtual human body model is aligned to the real patient's body.

本实施例中,所述根据虚拟的人体模型和真实的患者身体的对齐关系,将虚拟针灸点的位置转换到真实的患者身体上,包括:In this embodiment, converting the position of the virtual acupuncture point to the real patient's body based on the alignment relationship between the virtual human body model and the real patient's body includes:

根据所述对齐关系和虚拟针灸点的位置计算出真实的患者身体上对应的针灸点位置,从而将所述虚拟针灸点的位置转换到真实的患者身体上。Calculate the corresponding acupuncture point positions on the real patient's body based on the alignment relationship and the positions of the virtual acupuncture points, thereby converting the positions of the virtual acupuncture points onto the real patient's body.

本申请虽然以较佳实施例公开如上,但其并不是用来限定本申请,任何本领域技术人员在不脱离本申请的精神和范围内,都可以做出可能的变动和修改,因此本申请的保护范围应当以本申请权利要求所界定的范围为准。Although the present application is disclosed as above with preferred embodiments, it is not intended to limit the present application. Any person skilled in the art can make possible changes and modifications without departing from the spirit and scope of the present application. Therefore, the present application The scope of protection shall be subject to the scope defined by the claims of this application.

Claims (10)

1.一种基于三维人体模型的辅助针灸定位系统,该系统包括:1. An auxiliary acupuncture positioning system based on a three-dimensional human body model. The system includes: 头显设备,该设备具有内置的摄像头和深度感应器,用于捕捉真实环境的影像及其三维结构信息;Head-mounted display device, which has a built-in camera and depth sensor to capture images of the real environment and its three-dimensional structure information; 三维人体模型,该模型是由计算机上创建的虚拟的人体模型,并标注有虚拟针灸点;Three-dimensional human body model, which is a virtual human body model created on a computer and marked with virtual acupuncture points; 人体姿态识别模块,用于从头显设备捕捉的影像及其三维结构信息中预测患者的姿态;Human posture recognition module, used to predict the patient's posture from the image captured by the headset device and its three-dimensional structure information; 人体模型对齐模块,用于根据预测的患者姿态,通过数值优化算法将虚拟的人体模型对齐到真实的患者身体;The human body model alignment module is used to align the virtual human body model to the real patient's body through numerical optimization algorithms based on the predicted patient posture; 针灸点定位模块,用于根据虚拟的人体模型和真实的患者身体的对齐关系,将所述虚拟针灸点的位置转换到真实的患者身体上;An acupuncture point positioning module is used to convert the position of the virtual acupuncture point to the real patient's body based on the alignment relationship between the virtual human body model and the real patient's body; 用户交互模块,用于提供一种用户交互方式,使用户在看到真实的患者身体的同时,看到虚拟的人体模型和转换到真实的患者身体上的针灸点;The user interaction module is used to provide a user interaction method that allows the user to see the virtual human body model and the acupuncture points converted to the real patient's body while seeing the real patient's body; 人体姿态识别模块由图像预处理部分、姿态预测模型和后处理部分构成;图像预处理部分通过如下步骤实施:The human posture recognition module consists of an image pre-processing part, a posture prediction model and a post-processing part; the image pre-processing part is implemented through the following steps: 图像接收S1001:首先,将从头显设备捕获的影像信息作为原始输入;Image reception S1001: First, use the image information captured by the headset device as the original input; 
smart cropping S1002: a smart cropping algorithm automatically identifies the position of the human body in the image and crops out that region; the smart cropping algorithm uses a pre-trained deep learning model; during cropping, the image is first fed into the model, which outputs a heat map of the human body's position; the approximate centre of the body is then determined from the maximum point of the heat map; finally, a region of predefined size centred on that point is cropped out;
scale normalization S1003: the image is uniformly scaled to a predefined size;
pixel value normalization S1004: after scale normalization, the pixel values of the image are normalized;
image enhancement S1005: to improve the generalization ability of the model, image augmentation operations are applied;
posture prediction model: the model is a pre-trained deep learning model that predicts the posture of the human body from the input image; its structure is as follows:
input layer: the model's input is the preprocessed image, 256x256 pixels in RGB format, so the number of input channels is 3;
convolutional layers: each convolutional layer comprises a convolution operation, a nonlinear activation function and a pooling operation;
keypoint prediction layer: after the convolutional layers comes the keypoint prediction layer, whose task is to predict, for every pixel, the probability that it is a human body keypoint; it consists of a convolution operation followed by a Softmax activation function; the number of output channels of the convolution equals the number of keypoints, each channel corresponding to one keypoint's probability map; the Softmax function normalizes the probability-map values to the range 0-1;
keypoint offset vector prediction layer: this layer runs in parallel with the keypoint probability prediction layer; it consists of a convolution operation whose number of output channels is twice the number of keypoints, since each keypoint has offsets in both the x and y directions; its task is to predict each keypoint's offset vector, i.e. a small correction relative to the predicted coordinates;
decoding layer: after the keypoint prediction layer comes the decoding layer, which converts the probability maps into concrete keypoint coordinates; specifically, the maximum point of each probability map is found, the corresponding offset vector is added, and the resulting coordinates are taken as the keypoint's coordinates;
output layer: finally, the model outputs the coordinates of all keypoints, which together constitute the posture of the human body;
post-processing part: the post-processing part converts the output of the posture prediction model into directly usable posture data, including converting the continuous values output by the model into discrete posture categories;
the post-processing part comprises the following steps:
data parsing S2001: first, the keypoint coordinates and keypoint offset vectors are parsed from the output of the posture prediction model; these data represent the keypoint positions as continuous values;
keypoint coordinate adjustment S2002: the keypoint offset vectors are then used to correct the keypoint coordinates and make the prediction more accurate; specifically, each keypoint coordinate is added to its corresponding offset vector to obtain the corrected keypoint coordinate;
coordinate conversion S2003: the keypoint coordinates are converted from the coordinate system of the head-mounted display device to the real-world coordinate system, as follows: first, the corrected keypoint coordinates are mapped into the depth image to obtain each keypoint's pixel coordinates and depth value; then, using the intrinsic and extrinsic parameters of the head-mounted display device, the pixel coordinates and depth values are converted into real-world coordinates;
coordinate filtering S2004: a filtering algorithm smooths the consecutive prediction results;
body part recognition S2005: when a keypoint lies between the two arm keypoints, it is identified as a chest keypoint;
posture category recognition S2006: the patient's posture category is identified from the relative positions of the keypoints; if all hand keypoints lie below the head keypoint, the patient is considered to be sitting;
after the head-mounted display device captures an image, the image is first sent to the image preprocessing part; the preprocessed image data is fed into the posture prediction model, which predicts the posture of the human body in the image; finally, the post-processing part converts the predicted posture data into a directly usable format;
the human body model alignment module consists of a posture data processing part, an alignment algorithm part and a model adjustment part;
posture data processing part: this part processes the data from the human posture recognition module, which includes converting posture categories into posture parameters, or converting absolute coordinates into relative coordinates; if the input posture data is in categorical form, it must be converted into concrete posture parameters, which is done via a lookup table, rule-based mapping or a machine learning model; if the input posture data consists of absolute coordinates, they must be converted into coordinates relative to the human body model, which is done by subtracting the mean of the model coordinates or another predetermined reference point;
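The offset correction of step S2002 and the rule-based category recognition of step S2006 can be sketched as follows. This is an illustrative sketch only: the keypoint names, the y-down image-coordinate convention, and the two-category rule are assumptions, not the patent's exact implementation.

```python
import numpy as np

def refine_keypoints(coords, offsets):
    """S2002: add each keypoint's predicted offset vector to its raw
    coordinates to obtain the corrected keypoint coordinates."""
    return np.asarray(coords, dtype=float) + np.asarray(offsets, dtype=float)

def classify_posture(keypoints):
    """S2006: if every hand keypoint lies below the head keypoint
    (larger y value, since image coordinates grow downward), the
    patient is considered to be sitting."""
    head_y = keypoints["head"][1]
    hand_ys = (keypoints["left_hand"][1], keypoints["right_hand"][1])
    return "sitting" if all(y > head_y for y in hand_ys) else "unknown"
```

A real system would apply `refine_keypoints` to the full set of model outputs before the coordinate conversion of S2003.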
alignment algorithm part: this part uses a numerical optimization algorithm to align the model with the real human body by optimizing the posture parameters of the human body model;
the alignment algorithm part is implemented through the following steps:
(1) setting the objective function:
suppose there are predicted posture parameters, represented as a vector P containing the positions or angles of all joints, and the posture parameters of the virtual human body model, represented as a vector M; the goal is to minimize the difference between these two vectors;
the objective function is set to the squared Euclidean distance between the two vectors, i.e. L = ||P - M||²; the smaller the value of this function, the smaller the difference between the virtual model's posture and the real posture;
on this basis, an optimization term for the acupuncture points is added; let the acupuncture point positions under the predicted posture and the model posture be P_a and M_a respectively; the positional difference of the acupuncture points is incorporated into the objective function, i.e. L = ||P - M||² + λ||P_a - M_a||², where λ is a weight parameter that adjusts the relative importance of the two terms;
the acupuncture point positions P_a under the predicted posture come from an established acupuncture knowledge base together with the output of the human posture recognition module; posture recognition: first, the human posture recognition module predicts the patient's posture parameters P by analysing the real-environment images obtained from the head-mounted display device; acupuncture knowledge base: the established acupuncture knowledge base stores the positions of the acupuncture points relative to each joint; acupuncture point position prediction: from the predicted posture parameters P and the acupuncture knowledge base, the acupuncture point positions P_a under the predicted posture are calculated;
(2) optimization algorithm:
a numerical optimization algorithm is used to minimize the objective function; specifically, the model's posture parameters M are first randomly initialized, and in each iteration M is adjusted along the gradient direction of the objective function so that the value of the objective function gradually decreases;
during optimization, the physical constraints of the model, including the upper and lower limits of the joint angles, must be taken into account; if in some iteration a joint angle of the model exceeds its physical limits, it must be adjusted back into the permitted range;
this is achieved by clipping M after each iteration, i.e. M = min(max(M, lower_bound), upper_bound), where lower_bound and upper_bound are the lower and upper limits of the joint angles respectively;
model adjustment part: the model adjustment part adjusts the posture of the virtual human body model according to the result of the alignment algorithm so that it aligns with the real human body;
once the human posture recognition module has predicted the patient's posture, the posture data is first sent to the posture data processing part; the processed data is fed into the alignment algorithm part, which optimizes the posture parameters of the human body model to align the model with the real human body; finally, the model adjustment part adjusts the posture of the virtual human body model according to the optimization result, and the acupuncture point positions on the virtual model are updated according to the change in model posture;
the acupuncture point positioning module consists of an alignment data processing part, a positioning algorithm part and a positioning result output part;
alignment data processing part: this part first receives data from the human body model alignment module, including the joint angles and position information of the virtual human body model; these posture parameters are converted into an alignment transformation, namely a 4x4 homogeneous transformation matrix; the result is sent to the positioning algorithm part;
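The optimization of steps (1) and (2) — gradient descent on L = ||P - M||² + λ||P_a - M_a||² with joint-limit clipping after every step — can be sketched numerically as below. For illustration only, the acupuncture points are modelled as fixed offsets from the joint parameters (M_a = M + offsets); the real system derives M_a from the acupuncture knowledge base, and the learning rate and step count are arbitrary choices.

```python
import numpy as np

def align_model(P, P_a, acu_offsets, lower, upper, lam=0.5, lr=0.1, steps=200):
    """Minimise L(M) = ||P - M||^2 + lam * ||P_a - M_a||^2 over the model
    posture M, where (by assumption) M_a = M + acu_offsets, clipping M to
    the joint limits [lower, upper] after each gradient step."""
    rng = np.random.default_rng(0)
    M = rng.uniform(lower, upper)                      # random initialisation
    for _ in range(steps):
        M_a = M + acu_offsets                          # acupuncture points under M
        grad = 2.0 * (M - P) + 2.0 * lam * (M_a - P_a) # dL/dM
        M = M - lr * grad                              # gradient descent step
        M = np.minimum(np.maximum(M, lower), upper)    # physical joint limits
    return M
```

When the target posture lies inside the joint limits, the iterates converge to M = P, which also drives the acupuncture-point term to zero under this simplified M_a model.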
positioning algorithm part: this part receives the transformation matrix from the alignment data processing part together with the preset positions of the virtual acupuncture points, each represented as a three-dimensional coordinate; the positioning algorithm applies the transformation matrix to each virtual acupuncture point position to obtain the corresponding acupuncture point position on the real patient's body; the calculation is a matrix multiplication, i.e. new position = transformation matrix × virtual acupuncture point position;
positioning result output part: this part converts the result of the positioning algorithm into directly usable acupuncture point position data; the positioning result is a three-dimensional coordinate, which this part converts into a distance and angle relative to a reference point, or into pixel coordinates in a two-dimensional image; the conversion is performed using the corresponding coordinate-system transformation formulas;
once the human body model alignment module has completed the alignment, the alignment data is first sent to the alignment data processing part; the processed data is fed into the positioning algorithm part, which calculates the corresponding acupuncture point positions on the real patient's body from the alignment data and the virtual acupuncture point positions;
finally, the positioning result output part converts the positioning result into a directly usable format.
2. The auxiliary acupuncture positioning system according to claim 1, wherein the user interaction module allows the user to select and operate acupuncture points through gestures or voice commands, and displays relevant information on the screen after the user selects an acupuncture point.
3. The auxiliary acupuncture positioning system according to claim 1, wherein the human posture recognition module comprises a deep neural network for predicting the patient's posture from the head-mounted display images.
4. The auxiliary acupuncture positioning system according to claim 1, wherein the human body model alignment module uses a gradient descent algorithm to compute the model parameters that minimize the difference between the virtual human body model and the patient's body.
5. The auxiliary acupuncture positioning system according to claim 1, wherein the acupuncture point positioning module calculates the corresponding acupuncture point positions on the real patient's body from the alignment relationship and the positions of the virtual acupuncture points, thereby converting the positions of the virtual acupuncture points onto the real patient's body.
6. An auxiliary acupuncture positioning method for the auxiliary acupuncture positioning system based on a three-dimensional human body model according to claim 1, the method comprising:
capturing images of the patient's real environment and acquiring its three-dimensional structure information with a head-mounted display device;
creating a three-dimensional human body model and annotating acupuncture points on the model;
predicting the patient's posture from the head-mounted display images;
aligning the virtual human body model to the real patient's body through a numerical optimization algorithm, based on the predicted patient posture;
converting the positions of the virtual acupuncture points onto the real patient's body, according to the alignment between the virtual human body model and the real patient's body;
providing, through user interaction technology, an interaction mode in which the user sees the virtual human body model and the acupuncture points transferred onto the real patient's body while seeing the real patient's body.
7. The auxiliary acupuncture positioning method according to claim 6, wherein the user interaction mode further comprises allowing the user to select and operate acupuncture points through gestures or voice commands, and displaying relevant information on the screen after the user selects an acupuncture point.
8. The auxiliary acupuncture positioning method according to claim 6, wherein predicting the patient's posture from the head-mounted display images comprises:
using a deep neural network to predict the patient's posture from the head-mounted display images.
9. The auxiliary acupuncture positioning method according to claim 6, wherein aligning the virtual human body model to the real patient's body through a numerical optimization algorithm based on the predicted patient posture comprises:
using a gradient descent algorithm to find the model parameters that minimize the difference between the virtual human body model and the patient's body;
aligning the virtual human body model to the real patient's body according to the model parameters.
10. The auxiliary acupuncture positioning method according to claim 6, wherein converting the positions of the virtual acupuncture points onto the real patient's body according to the alignment between the virtual human body model and the real patient's body comprises:
calculating the corresponding acupuncture point positions on the real patient's body from the alignment relationship and the positions of the virtual acupuncture points, thereby converting the positions of the virtual acupuncture points onto the real patient's body.
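The coordinate transfer in claim 10 — and the positioning algorithm part of claim 1 — amounts to applying the 4x4 homogeneous alignment matrix to each virtual acupuncture point. A minimal sketch follows; the translation values in the example matrix are purely illustrative.

```python
import numpy as np

def locate_acupoint(T, p_model):
    """Map a virtual acupuncture point from model space onto the patient:
    new position = T @ [x, y, z, 1], where T is the 4x4 homogeneous
    alignment transform produced by the model-alignment module."""
    p_h = np.append(np.asarray(p_model, dtype=float), 1.0)  # homogeneous coords
    q = T @ p_h
    return q[:3] / q[3]  # back to 3D (rigid transforms keep q[3] == 1)

# Example alignment: a pure translation by (0.1, 0.0, 0.2) in model units
T = np.eye(4)
T[:3, 3] = [0.1, 0.0, 0.2]
```

The positioning result output part would then convert the returned 3D coordinate into a distance/angle relative to a reference point, or project it into the 2D image using the headset's camera parameters.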
CN202310777244.XA 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model Active CN116646052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310777244.XA CN116646052B (en) 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310777244.XA CN116646052B (en) 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model

Publications (2)

Publication Number Publication Date
CN116646052A CN116646052A (en) 2023-08-25
CN116646052B true CN116646052B (en) 2024-02-09

Family

ID=87624889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310777244.XA Active CN116646052B (en) 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model

Country Status (1)

Country Link
CN (1) CN116646052B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI872764B (en) * 2023-10-30 2025-02-11 慧術科技股份有限公司 Three-dimensional position and motion detection system for human body and object status based on mixed reality and method thereof
TWI896472B (en) * 2023-10-30 2025-09-01 慧術科技股份有限公司 Three-dimensional position and motion detection system for human body and object status based on mixed reality and method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109243575A (en) * 2018-09-17 2019-01-18 华南理工大学 A kind of virtual acupuncture-moxibustion therapy method and system based on mobile interaction and augmented reality
CN111524433A (en) * 2020-05-29 2020-08-11 深圳华鹊景医疗科技有限公司 Acupuncture training system and method
CN112258921A (en) * 2020-11-12 2021-01-22 胡玥 Acupuncture interactive teaching system and method based on virtual and mixed reality
WO2022040920A1 (en) * 2020-08-25 2022-03-03 南京翱翔智能制造科技有限公司 Digital-twin-based ar interactive system and method
KR20220074008A (en) * 2020-11-27 2022-06-03 아주통신(주) System for mixed-reality acupuncture training with dummy and acupuncture controller

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11475630B2 (en) * 2018-10-17 2022-10-18 Midea Group Co., Ltd. System and method for generating acupuncture points on reconstructed 3D human body model for physical therapy


Also Published As

Publication number Publication date
CN116646052A (en) 2023-08-25

Similar Documents

Publication Publication Date Title
US12125149B2 (en) Interfaces for presenting avatars in three-dimensional environments
CN112819947B (en) Three-dimensional face reconstruction method, device, electronic device and storage medium
US12254565B2 (en) Visualization of post-treatment outcomes for medical treatment
CN106859956B (en) A kind of human acupoint identification massage method, device and AR equipment
Gannon et al. Tactum: a skin-centric approach to digital design and fabrication
KR100722229B1 (en) Apparatus and Method for Instant Generation / Control of Virtual Reality Interactive Human Body Model for User-Centered Interface
WO2022056036A2 (en) Methods for manipulating objects in an environment
CN116646052B (en) Auxiliary acupuncture positioning system and method based on three-dimensional human body model
JP2019198638A (en) Measurement information providing system, measurement information providing method, server device, communication terminal, and computer program
Andersen et al. Virtual annotations of the surgical field through an augmented reality transparent display
CN106293082A (en) A Human Anatomy Interactive System Based on Virtual Reality
CN102047199A (en) Interactive virtual reality image generating system
CN114005511A (en) Rehabilitation training method and system, training self-service equipment and storage medium
US20250022237A1 (en) Interfaces for presenting avatars in three-dimensional environments
CN112687131A (en) 3D meridian circulation visual teaching system based on HoloLens
KR20220120731A (en) Method and device for providing affordance healthcare contents using mirror type display
WO2024033768A1 (en) Arcuate imaging for altered reality visualization
CN113703583A (en) Multi-mode cross fusion virtual image fusion system, method and device
CN117435055A (en) Gesture-enhanced eye tracking human-computer interaction method based on spatial stereoscopic display
Hernoux et al. A seamless solution for 3D real-time interaction: design and evaluation
TW201619754A (en) Medical image object-oriented interface auxiliary explanation control system and method thereof
Scheggi et al. Shape and weight rendering for haptic augmented reality
Ha et al. Automatic control of virtual mirrors for precise 3D manipulation in VR
CN118233619A (en) Image adjusting method, device, storage medium and mixed reality equipment
Siegl et al. An augmented reality human–computer interface for object localization in a cognitive vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant