
CN1286438C - Vision registering method for medical robot - Google Patents


Info

Publication number
CN1286438C
CN1286438C CNB2003101224957A CN200310122495A
Authority
CN
China
Prior art keywords
coordinate system
robot
coordinate
gauge point
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2003101224957A
Other languages
Chinese (zh)
Other versions
CN1554315A (en)
Inventor
张玉茹
王田苗
丑武胜
刘军传
张寿红
孟偲
朱艳菊
栾胜
唐粲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNB2003101224957A priority Critical patent/CN1286438C/en
Publication of CN1554315A publication Critical patent/CN1554315A/en
Application granted granted Critical
Publication of CN1286438C publication Critical patent/CN1286438C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Landscapes

  • Manipulator (AREA)

Abstract


The invention discloses a visual registration method for a robot that assists stereotactic neurosurgery. It uses a robot, cameras, a computer, a scanning imaging device, and marker points to register the scan-image coordinate system to the robot coordinate system. For the robot to align with the lesion during surgery according to the surgeon's plan in the scanned image, the relationship between the scan-image coordinate system and the robot coordinate system must be established; this is called registration of the scan-image coordinate system to the robot coordinate system. During the operation, the computer determines the coordinates, in the scan-image coordinate system, of marker points attached to the patient's head, while the cameras measure the coordinates of the same markers in the robot coordinate system; together these complete the registration. Compared with contact measurement methods, this registration method achieves higher positioning accuracy and can measure the marker positions automatically in real time; compared with other non-contact methods, it is inexpensive and simple to operate, which gives it a substantial advantage.

Description

Vision registration method for a medical robot
Technical field
The present invention relates to registration between coordinate systems used in medical surgery, and in particular to a vision registration method for a robot that assists stereotactic neurosurgery.
Background technology
Stereotactic neurosurgery is a branch of modern neurosurgery. Guided by imaging localization and a positioning device, fine surgical instruments such as microelectrodes, puncture needles, or endoscopes are inserted to a designated target in the brain to record electrophysiology, take tissue samples, create lesions, or remove foci, thereby diagnosing and treating diseases of the central nervous system. Traditional stereotactic neurosurgery locates the lesion with a stereotactic frame: before the operation begins, the frame is clamped rigidly to the patient's head. Because the frame is a metal structure, it is visible in the image produced by the scanning imaging device, so the spatial position of a lesion point in the image can be determined in frame coordinates. This method is relatively accurate, but it injures the patient; it is an invasive form of localization.
Robot-assisted stereotactic neurosurgery systems localize instead with reference markers: the operation is planned in the scan-image space on the computer, and a navigator fixed relative to the patient guides the procedure. Compared with traditional frame-based stereotaxy, this eliminates both the pain the frame causes the patient and the inconvenience it causes the surgeon.
For the robot to align with the lesion according to the surgical plan made in the computer's scanned images, the relationship between the computer's scan-image coordinate system and the robot coordinate system must be established; this is called registration between the two coordinate systems. Registration is generally accomplished with reference marker points, which requires obtaining the coordinates of these markers in the robot coordinate system. In existing robot-assisted stereotactic neurosurgery systems, marker localization falls into two classes. One is contact-based, such as a mechanical arm whose tip touches each marker to obtain its coordinates in the robot coordinate system; this is easy to implement and intuitive to operate, but it requires manual operation by the surgeon, its measurement accuracy is subject to human factors, and it cannot measure automatically. The other class is non-contact, such as ultrasonic, electromagnetic, and infrared methods. These can locate the markers automatically in real time, but each has shortcomings: ultrasonic methods are strongly affected by the ambient temperature, airflow, humidity, and propagation medium; electromagnetic methods are strongly affected by environmental electromagnetic interference; and infrared methods require an unobstructed optical path between the cameras and the light-emitting diodes throughout the operation, and are costly. Among these, the infrared approach is the best: it is highly accurate and mature products are widely used, but its high price makes it difficult for domestic hospitals to afford.
Summary of the invention
The object of the present invention is to propose a vision registration method for a robot that assists stereotactic neurosurgery, in which binocular stereo vision measures the coordinates of the marker points in the robot coordinate system, thereby completing the registration of the scan-image coordinate system to the robot coordinate system. The method needs only two cameras to determine the marker coordinates in the robot coordinate system. Compared with contact measurement methods, it achieves higher positioning accuracy and can measure the marker positions automatically in real time; compared with other non-contact methods, it is inexpensive and simple to operate, which gives it a substantial advantage.
In the vision registration method of the present invention for a robot that assists stereotactic neurosurgery, a system comprising a robot, cameras, a computer, and a scanning imaging device uses vision techniques to measure the coordinates of the marker points in the robot coordinate system, thereby completing the registration of the scan-image coordinate system to the robot coordinate system. The method comprises the following steps:
(a) attach marker points to the patient's head;
(b) determine the coordinates of the markers in the scan-image coordinate system:
scan the patient's head with the scanning imaging device, input the resulting images to the computer, and process them with the surgery planning software in the computer to determine the marker coordinates in the image coordinate system;
(c) determine the coordinates of the markers in the robot coordinate system:
input the head images captured by the cameras to the computer, and process them with the positioning software in the computer to determine the marker coordinates in the robot coordinate system; send these coordinates to the surgery planning software;
determining the marker coordinates in the robot coordinate system involves five coordinate systems: the robot coordinate system OXYZ, the camera coordinate system O'X'Y'Z', the undistorted image coordinate system o'u'v', the distorted image coordinate system o''u''v'', and the computer-screen coordinate system o_c uv; combining the transformations between these five coordinate systems yields the coordinates of a marker on the computer screen;
(d) the surgery planning software completes the registration of the scan-image coordinate system to the robot coordinate system.
In the described vision registration method, the number of marker points may be 4.
In the described vision registration method, the cameras perform the measurement of the marker coordinates in the robot coordinate system.
In the described vision registration method, the scanning imaging device may be CT or MRI.
Advantages of the present invention: compared with contact registration methods, this vision registration method achieves higher positioning accuracy and can measure the marker positions automatically in real time; compared with other non-contact registration methods, it is inexpensive and simple to operate, which gives it a substantial advantage.
Description of drawings
Fig. 1 is a flow block diagram of the vision registration method of the present invention.
Fig. 2 is a sketch of the scan-image space, robot space, and patient head of the present invention.
Fig. 3 is a flow chart of determining the marker coordinates in the scan-image coordinate system in the present invention.
Fig. 4 is a sketch of the relationships between the coordinate systems in the vision registration method of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and embodiments.
In robot-assisted stereotactic neurosurgery, after the patient's head is scanned by the scanning imaging device, the images are input to the "surgery planning software" of the computer. The surgeon outlines the lesion in the scanned images with the software and determines the target point and the puncture path. The "surgery planning software" converts the surgeon's plan in the scanned images into a target pose of the robot in the robot coordinate system, and controls the robot to align with the target along the planned path to guide the surgeon's operation. Completing this conversion from the scan-image coordinate system to the robot coordinate system requires establishing the mapping from the scan-image coordinate system to the robot coordinate system, i.e., registering the two coordinate systems.
The present invention is a vision registration method for a robot that assists stereotactic neurosurgery. It uses the robot, cameras, a computer, a scanning imaging device, and marker points on the patient's head to register the scan-image coordinate system to the robot coordinate system, so that during the operation the robot can align with the lesion according to the surgeon's plan in the scanned images. The registration process is shown in Fig. 1.
The specific procedure is:
(a) attach four marker points to the patient's head, then scan the head by CT or MRI to obtain images of the head surface and its internal structure, and input them to the "surgery planning software" of the computer;
(b) the surgeon identifies the positions of the markers in the scanned images in the "surgery planning software" of the computer, which computes the marker coordinates in the scan-image coordinate system;
(c) calibrate the cameras in the robot coordinate system, then capture head images with the cameras and input them to the "vision localization software" of the computer, which computes the marker coordinates in the robot coordinate system and passes these coordinates to the "surgery planning software";
(d) from the marker coordinates in the scan-image coordinate system and the robot coordinate system, the "surgery planning software" computes the relationship between the two coordinate systems, completing the registration. Thereafter, the target coordinates and puncture path that the surgeon has determined in the scan-image coordinate system can be converted into a robot pose, so that the robot can be controlled to reach the desired position.
In the present invention, the "vision localization software" computes the marker coordinates in the robot coordinate system from the images captured by the cameras. The "surgery planning software" determines the marker coordinates in the scan-image coordinate system, receives the robot-frame marker coordinates from the "vision localization software", and completes the registration from the scan-image coordinate system to the robot coordinate system.
(1) Coordinates of the markers in the scan-image coordinate system
The process of determining the marker coordinates in the scan-image coordinate system is shown in Fig. 3. The scanned images are input to the "surgery planning software" in the computer; the operator finds every image containing a marker and marks it with the mouse cursor, and the planning software records the image layer containing the marker and its position within that layer. Inter-layer interpolation over all scanned images converts the layer index of a marker into the w coordinate of the scan-image coordinate system shown in Fig. 2, while the marker's position within the layer gives the u and v coordinates; the three-dimensional coordinates of the marker in the scan-image coordinate system are thus obtained.
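As an illustrative aside, the conversion of a marked layer index and in-plane pixel position into (u, v, w) scan-image coordinates can be sketched as below. The function name, parameters, and the uniform-spacing assumption are mine, not the patent's (the patent obtains w by inter-layer interpolation):

```python
def scan_image_coords(layer_index, u_px, v_px, slice_spacing, pixel_spacing):
    """Convert a marker's layer index and in-plane pixel position into
    (u, v, w) scan-image coordinates, assuming uniform slice spacing
    and square pixels (an illustrative simplification)."""
    u = u_px * pixel_spacing        # in-plane position -> u, in mm
    v = v_px * pixel_spacing        # in-plane position -> v, in mm
    w = layer_index * slice_spacing # layer index -> w, in mm
    return (u, v, w)
```

With sub-slice marker positions, `layer_index` may be fractional, which is where the interpolation mentioned above comes in.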
(2) Determining the marker coordinates in the robot coordinate system
The present invention measures the marker coordinates in the robot coordinate system by vision localization. The method is formulated as follows.
The method involves five coordinate systems, as shown in Fig. 4: OXYZ is the robot coordinate system, O'X'Y'Z' is the camera coordinate system, o'u'v' is the undistorted image coordinate system, o''u''v'' is the distorted image coordinate system, and o_c uv is the computer-screen coordinate system. A coordinate transformation establishes the relation between the screen coordinate system o_c uv and the robot coordinate system OXYZ. Vision localization in this method computes, from a point's coordinates in the screen coordinate system o_c uv, its coordinates in the robot coordinate system OXYZ.
(1) Coordinate-system transformations
(a) Robot coordinate system → camera coordinate system
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (1)$$
$(t_x, t_y, t_z)$ is the translation component, and $(\alpha, \beta, \gamma, t_x, t_y, t_z)$ are the six external parameters of the camera.
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$
is the rotation matrix, containing the three parameters $(\alpha, \beta, \gamma)$, the rotation angles about the X, Y, Z axes respectively; then:
$$R = \begin{bmatrix} \cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma & -\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ -\cos\beta\sin\gamma & -\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma \\ \sin\beta & -\sin\alpha\cos\beta & \cos\alpha\cos\beta \end{bmatrix}$$
(b) Camera coordinate system → undistorted image coordinate system
According to the perspective projection relation of imaging:
$$u' = f \cdot s_x \cdot \frac{X'}{Z'} \qquad (2)$$
$$v' = f \cdot s_y \cdot \frac{Y'}{Z'} \qquad (3)$$
$f$ is the camera focal length; $s_x$, $s_y$ are the camera scale factors, in pixel/mm.
(c) Undistorted image coordinate system → distorted image coordinate system
Camera lenses exhibit two classes of distortion: radial and tangential. Radial distortion is the main cause of image-point offset, so generally only radial distortion is considered.
$$u'' = u' - R_u \qquad (4)$$
$$v'' = v' - R_v \qquad (5)$$
$$R_u = u \times (k_1 r^2 + k_2 r^4 + \cdots) \approx u \times (k_1 r^2 + k_2 r^4) \qquad (6)$$
$$R_v = v \times (k_1 r^2 + k_2 r^4 + \cdots) \approx v \times (k_1 r^2 + k_2 r^4) \qquad (7)$$
$$r = \sqrt{u^2 + v^2} \qquad (8)$$
The radial distortion of the camera is an even polynomial in $r$, whose higher-order terms can be ignored; $k_1$, $k_2$ are the radial distortion coefficients.
(d) Distorted image coordinate system → computer-screen coordinate system
$$u = u'' + u_0 \qquad (9)$$
$$v = v'' + v_0 \qquad (10)$$
$u_0$, $v_0$ are the offsets of the image-coordinate origin relative to the computer-screen coordinate system; $(f, s_x, s_y, k_1, k_2, u_0, v_0)$ are the internal parameters of the camera.
(2) Camera calibration
Because the camera is close to the target in use and the camera's own distortion is small, distortion is not considered in this system. Combining the coordinate transformations above gives the transformation from the robot coordinate system to the computer-screen coordinate system:
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z'} \begin{bmatrix} k_1 & 0 & u_0 \\ 0 & k_2 & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} \qquad (11)$$
where $k_1 = f s_x$, $k_2 = f s_y$. Then
$$u = k_1 \frac{r_{11}X + r_{12}Y + r_{13}Z + t_x}{r_{31}X + r_{32}Y + r_{33}Z + t_z} + u_0$$
$$v = k_2 \frac{r_{21}X + r_{22}Y + r_{23}Z + t_y}{r_{31}X + r_{32}Y + r_{33}Z + t_z} + v_0$$
i.e.
$$u = \frac{X(k_1 r_{11} + u_0 r_{31}) + Y(k_1 r_{12} + u_0 r_{32}) + Z(k_1 r_{13} + u_0 r_{33}) + (k_1 t_x + u_0 t_z)}{r_{31}X + r_{32}Y + r_{33}Z + t_z}$$
$$v = \frac{X(k_2 r_{21} + v_0 r_{31}) + Y(k_2 r_{22} + v_0 r_{32}) + Z(k_2 r_{23} + v_0 r_{33}) + (k_2 t_y + v_0 t_z)}{r_{31}X + r_{32}Y + r_{33}Z + t_z}$$
Let
$$A_1 = \frac{k_1 r_{11} + u_0 r_{31}}{t_z}, \quad A_2 = \frac{k_1 r_{12} + u_0 r_{32}}{t_z}, \quad A_3 = \frac{k_1 r_{13} + u_0 r_{33}}{t_z}, \quad A_4 = \frac{k_1 t_x + u_0 t_z}{t_z}$$
$$A_5 = \frac{r_{31}}{t_z}, \quad A_6 = \frac{r_{32}}{t_z}, \quad A_7 = \frac{r_{33}}{t_z}$$
$$A_8 = \frac{k_2 r_{21} + v_0 r_{31}}{t_z}, \quad A_9 = \frac{k_2 r_{22} + v_0 r_{32}}{t_z}, \quad A_{10} = \frac{k_2 r_{23} + v_0 r_{33}}{t_z}, \quad A_{11} = \frac{k_2 t_y + v_0 t_z}{t_z}$$
Then
$$u = \frac{A_1 X + A_2 Y + A_3 Z + A_4}{A_5 X + A_6 Y + A_7 Z + 1}, \qquad v = \frac{A_8 X + A_9 Y + A_{10} Z + A_{11}}{A_5 X + A_6 Y + A_7 Z + 1}$$
Therefore
$$A_1 X + A_2 Y + A_3 Z + A_4 - A_5 X u - A_6 Y u - A_7 Z u = u \qquad (12)$$
$$A_8 X + A_9 Y + A_{10} Z + A_{11} - A_5 X v - A_6 Y v - A_7 Z v = v \qquad (13)$$
The two equations above contain 11 unknown parameters. Given $n$ ($n \ge 6$) groups of reference points $(u_n, v_n, X_n, Y_n, Z_n)$, let
$$E = \begin{bmatrix} X_1 & Y_1 & Z_1 & 1 & -X_1 u_1 & -Y_1 u_1 & -Z_1 u_1 & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_n & Y_n & Z_n & 1 & -X_n u_n & -Y_n u_n & -Z_n u_n & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -X_1 v_1 & -Y_1 v_1 & -Z_1 v_1 & X_1 & Y_1 & Z_1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & -X_n v_n & -Y_n v_n & -Z_n v_n & X_n & Y_n & Z_n & 1 \end{bmatrix}$$
$$A = [A_1\ A_2\ A_3\ A_4\ A_5\ A_6\ A_7\ A_8\ A_9\ A_{10}\ A_{11}]^T$$
$$C = [u_1\ u_2\ \cdots\ u_n\ v_1\ v_2\ \cdots\ v_n]^T$$
Then
$$E \cdot A = C$$
$$A = E^{+} \cdot C \qquad (14)$$
Let the undetermined parameters of the two cameras be:
$$A_l = [A_{1l}\ A_{2l}\ A_{3l}\ A_{4l}\ A_{5l}\ A_{6l}\ A_{7l}\ A_{8l}\ A_{9l}\ A_{10l}\ A_{11l}]^T$$
$$A_r = [A_{1r}\ A_{2r}\ A_{3r}\ A_{4r}\ A_{5r}\ A_{6r}\ A_{7r}\ A_{8r}\ A_{9r}\ A_{10r}\ A_{11r}]^T$$
Given $n$ ($n \ge 6$) reference points of known robot-frame coordinates within the common field of view of the two cameras, all undetermined parameters of both cameras can be calibrated via (14). During calibration, the needle tip at the end of the mechanical arm can serve as the reference point: the arm is moved to $n$ ($n \ge 6$) different positions to obtain $n$ reference points with known coordinates in the robot coordinate system.
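The calibration step (14) is a linear least-squares problem. The sketch below builds $E$ and $C$ from the reference points of one camera and solves for the 11 parameters with a pseudoinverse; it is an illustrative NumPy reconstruction, and the function name is my own, not the patent's:

```python
import numpy as np

def calibrate(ref_pts, pix_pts):
    """Solve E·A = C in the least-squares sense for the 11 parameters
    A1..A11 of one camera, per equation (14).

    ref_pts: (n, 3) robot-frame coordinates of the reference points
    pix_pts: (n, 2) corresponding screen coordinates, n >= 6
    """
    n = len(ref_pts)
    E = np.zeros((2 * n, 11))
    C = np.zeros(2 * n)
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(ref_pts, pix_pts)):
        # row i encodes equation (12) for point i ...
        E[i] = [X, Y, Z, 1, -X * u, -Y * u, -Z * u, 0, 0, 0, 0]
        # ... and row n+i encodes equation (13)
        E[n + i] = [0, 0, 0, 0, -X * v, -Y * v, -Z * v, X, Y, Z, 1]
        C[i], C[n + i] = u, v
    return np.linalg.pinv(E) @ C  # A = E+ · C
```

Calibrating each of the two cameras against the same robot-frame reference points gives the $A_l$ and $A_r$ vectors used below.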
(3) Computing the marker coordinates
From equations (12) and (13):
$$(A_1 - A_5 u)X + (A_2 - A_6 u)Y + (A_3 - A_7 u)Z = u - A_4 \qquad (15)$$
$$(A_8 - A_5 v)X + (A_9 - A_6 v)Y + (A_{10} - A_7 v)Z = v - A_{11} \qquad (16)$$
Let $(X, Y, Z)$ be the unknown robot-frame coordinates of a point, and let its screen coordinates after imaging by the two cameras be $(u_l, v_l)$ and $(u_r, v_r)$ respectively; then:
$$\begin{bmatrix} A_{1l} - A_{5l} u_l & A_{2l} - A_{6l} u_l & A_{3l} - A_{7l} u_l \\ A_{8l} - A_{5l} v_l & A_{9l} - A_{6l} v_l & A_{10l} - A_{7l} v_l \\ A_{1r} - A_{5r} u_r & A_{2r} - A_{6r} u_r & A_{3r} - A_{7r} u_r \\ A_{8r} - A_{5r} v_r & A_{9r} - A_{6r} v_r & A_{10r} - A_{7r} v_r \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} u_l - A_{4l} \\ v_l - A_{11l} \\ u_r - A_{4r} \\ v_r - A_{11r} \end{bmatrix} \qquad (17)$$
From (17), the robot-frame coordinates of the marker are obtained:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} A_{1l} - A_{5l} u_l & A_{2l} - A_{6l} u_l & A_{3l} - A_{7l} u_l \\ A_{8l} - A_{5l} v_l & A_{9l} - A_{6l} v_l & A_{10l} - A_{7l} v_l \\ A_{1r} - A_{5r} u_r & A_{2r} - A_{6r} u_r & A_{3r} - A_{7r} u_r \\ A_{8r} - A_{5r} v_r & A_{9r} - A_{6r} v_r & A_{10r} - A_{7r} v_r \end{bmatrix}^{+} \begin{bmatrix} u_l - A_{4l} \\ v_l - A_{11l} \\ u_r - A_{4r} \\ v_r - A_{11r} \end{bmatrix} \qquad (18)$$
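The triangulation of equations (15)-(18) can be sketched as below: each camera contributes two linear equations in $(X, Y, Z)$, and the overdetermined 4×3 system is solved with a pseudoinverse. This is an illustrative NumPy reconstruction; the function name is my own, not the patent's:

```python
import numpy as np

def triangulate(Al, Ar, pl, pr):
    """Recover (X, Y, Z) in the robot frame from the pixel coordinates
    of one marker seen by both cameras, per equations (15)-(18).

    Al, Ar: 11-vectors of DLT parameters for the left/right camera
    pl, pr: (u, v) screen coordinates in each camera
    """
    rows, rhs = [], []
    for A, (u, v) in ((Al, pl), (Ar, pr)):
        A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11 = A
        rows.append([A1 - A5 * u, A2 - A6 * u, A3 - A7 * u])   # eq. (15)
        rows.append([A8 - A5 * v, A9 - A6 * v, A10 - A7 * v])  # eq. (16)
        rhs.append(u - A4)
        rhs.append(v - A11)
    # least-squares solution of the 4x3 system, equation (18)
    return np.linalg.pinv(np.array(rows)) @ np.array(rhs)
```

With noisy pixels the pseudoinverse returns the least-squares point, which is why four equations in three unknowns are used rather than an exact 3×3 solve.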
(3) Registration from the scan-image coordinate system to the robot coordinate system
As shown in Fig. 2, let the coordinates of the four markers in the robot coordinate system be $(X_1, Y_1, Z_1)$, $(X_2, Y_2, Z_2)$, $(X_3, Y_3, Z_3)$, $(X_4, Y_4, Z_4)$, and their coordinates in the scan-image coordinate system be $(u_1, v_1, w_1)$, $(u_2, v_2, w_2)$, $(u_3, v_3, w_3)$, $(u_4, v_4, w_4)$. Let $T$ be the transformation matrix from the scan-image coordinate system to the robot coordinate system; then
$$\begin{bmatrix} X_4 - X_1 & X_3 - X_1 & X_2 - X_1 & X_1 \\ Y_4 - Y_1 & Y_3 - Y_1 & Y_2 - Y_1 & Y_1 \\ Z_4 - Z_1 & Z_3 - Z_1 & Z_2 - Z_1 & Z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} = T \begin{bmatrix} u_4 - u_1 & u_3 - u_1 & u_2 - u_1 & u_1 \\ v_4 - v_1 & v_3 - v_1 & v_2 - v_1 & v_1 \\ w_4 - w_1 & w_3 - w_1 & w_2 - w_1 & w_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Solving gives
$$T = \begin{bmatrix} X_4 - X_1 & X_3 - X_1 & X_2 - X_1 & X_1 \\ Y_4 - Y_1 & Y_3 - Y_1 & Y_2 - Y_1 & Y_1 \\ Z_4 - Z_1 & Z_3 - Z_1 & Z_2 - Z_1 & Z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_4 - u_1 & u_3 - u_1 & u_2 - u_1 & u_1 \\ v_4 - v_1 & v_3 - v_1 & v_2 - v_1 & v_1 \\ w_4 - w_1 & w_3 - w_1 & w_2 - w_1 & w_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1}$$
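The registration step above can be sketched directly from the two difference-column matrices. This is an illustrative NumPy reconstruction of the formula for $T$; the function name is my own, not the patent's:

```python
import numpy as np

def registration_matrix(robot_pts, image_pts):
    """Compute the 4x4 transform T mapping scan-image coordinates to
    robot coordinates from four marker correspondences, following the
    difference-column construction above.

    robot_pts: (4, 3) marker coordinates in the robot frame
    image_pts: (4, 3) the same markers in scan-image (u, v, w) coordinates
    """
    def build(p):
        p = np.asarray(p, dtype=float)
        M = np.zeros((4, 4))
        M[:3, 0] = p[3] - p[0]  # fourth marker minus first
        M[:3, 1] = p[2] - p[0]  # third marker minus first
        M[:3, 2] = p[1] - p[0]  # second marker minus first
        M[:3, 3] = p[0]         # first marker, homogeneous column
        M[3, 3] = 1.0
        return M
    Mr, Mi = build(robot_pts), build(image_pts)
    return Mr @ np.linalg.inv(Mi)  # T = Mr · Mi^(-1)
```

Note the four markers must not be coplanar, or $M_i$ is singular; applying $T$ to a homogeneous scan-image point $(u, v, w, 1)^T$ then yields its robot-frame position.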
The mathematical formulation above accurately completes the registration of the scan-image coordinate system to the robot coordinate system. The method needs only two cameras to determine the marker coordinates in the robot coordinate system. Compared with contact measurement methods, it achieves higher positioning accuracy and can measure the marker positions automatically in real time; compared with other non-contact methods, it is inexpensive and simple to operate, which gives it a substantial advantage.
The medical-robot vision registration method of the present invention can also be used in minimally invasive thoracic and abdominal surgery; the registration method and the markers are the same as in robot-assisted stereotactic neurosurgery.

Claims (4)

1. A vision registration method for a robot that assists stereotactic neurosurgery, in which a system comprising a robot, cameras, a computer, and a scanning imaging device uses vision techniques to measure the coordinates, in the robot coordinate system, of marker points attached to the patient's head, thereby completing registration of the scan-image coordinate system to the robot coordinate system, the method comprising the following steps:
(a) attaching marker points to the patient's head;
(b) determining the coordinates of the markers in the scan-image coordinate system:
scanning the patient's head with the scanning imaging device, inputting the resulting images to the computer, and processing them with the surgery planning software in the computer to determine the marker coordinates in the scan-image coordinate system;
(c) determining the coordinates of the markers in the robot coordinate system:
determining the marker coordinates in the robot coordinate system involves five coordinate systems: the robot coordinate system OXYZ, the camera coordinate system O'X'Y'Z', the undistorted image coordinate system o'u'v', the distorted image coordinate system o''u''v'', and the computer-screen coordinate system o_c uv; combining the transformations between these five coordinate systems calibrates the undetermined parameters of the cameras, and from the coordinates of a marker on the computer screen its coordinates in the robot coordinate system are obtained;
inputting the head images captured by the cameras to the computer, and processing them with the positioning software in the computer to determine the marker coordinates in the robot coordinate system, which are sent to the surgery planning software;
(d) the surgery planning software completing the registration of the scan-image coordinate system to the robot coordinate system.
2. The vision registration method according to claim 1, characterized in that the number of marker points is 4.
3. The vision registration method according to claim 1, characterized in that the cameras perform the measurement of the marker coordinates in the robot coordinate system.
4. The vision registration method according to claim 1, characterized in that the scanning imaging device is CT or MRI.
CNB2003101224957A 2003-12-26 2003-12-26 Vision registering method for medical robot Expired - Fee Related CN1286438C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2003101224957A CN1286438C (en) 2003-12-26 2003-12-26 Vision registering method for medical robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2003101224957A CN1286438C (en) 2003-12-26 2003-12-26 Vision registering method for medical robot

Publications (2)

Publication Number Publication Date
CN1554315A CN1554315A (en) 2004-12-15
CN1286438C true CN1286438C (en) 2006-11-29

Family

ID=34338674

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101224957A Expired - Fee Related CN1286438C (en) 2003-12-26 2003-12-26 Vision registering method for medical robot

Country Status (1)

Country Link
CN (1) CN1286438C (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100345525C (en) * 2005-12-07 2007-10-31 嘉兴市第一医院 Framed stereo directed neurosurgery system registration method
CN100464720C (en) * 2005-12-22 2009-03-04 天津市华志计算机应用技术有限公司 Brain surgery robot system and implementation method based on optical tracking closed-loop control
CN102106758B (en) * 2011-03-21 2013-10-16 北京航空航天大学 Automatic visual location device and automatic visual location method for head marks of patient in stereotactic neurosurgery
CN105852970B (en) * 2016-04-29 2019-06-14 北京柏惠维康科技有限公司 Neurosurgical Robot navigation positioning system and method
CN106903665A (en) * 2017-04-18 2017-06-30 中国科学院重庆绿色智能技术研究院 A kind of master-slave mode telesurgery robot control system based on stereoscopic vision
CN110547867A (en) * 2018-05-31 2019-12-10 上海联影医疗科技有限公司 control method, device, equipment, storage medium and system of mechanical arm
CN110559077B (en) * 2018-06-05 2021-08-17 武汉联影智融医疗科技有限公司 Coordinate system registration method, robot control method, device, equipment and medium
CN110664484A (en) * 2019-09-27 2020-01-10 江苏工大博实医用机器人研究发展有限公司 Space registration method and system for robot and image equipment
CN110946659A (en) * 2019-12-25 2020-04-03 武汉中科医疗科技工业技术研究院有限公司 Registration method and system for image space and actual space
CN113041519A (en) * 2019-12-27 2021-06-29 重庆海扶医疗科技股份有限公司 Intelligent space positioning method
CN113729949A (en) * 2021-09-10 2021-12-03 武汉联影智融医疗科技有限公司 Space registration method, surgical robot and space registration system
CN114848170B (en) * 2022-05-09 2024-10-29 浙江睿创精准医疗科技有限公司 Registration method of framed stereotactic system
CN118429547B (en) * 2024-07-04 2024-09-24 浙江大学 Registration method, system and storage medium of framed stereotactic system

Also Published As

Publication number Publication date
CN1554315A (en) 2004-12-15

Similar Documents

Publication Publication Date Title
CN1286438C (en) Vision registering method for medical robot
US9792721B2 (en) Method and device for displaying an object
CN101861600B (en) System and method for quantitative 3D CEUS analysis
CN105078573B (en) Use of Neuronavigation spatial registration method based on hand-held scanner
CN113876426A (en) An intraoperative positioning and tracking system and method combined with a shadowless lamp
CN103829966B (en) For automatically determining the method and system of the position line in detecting image
CN113974830A (en) A surgical navigation system for ultrasound-guided thermal ablation of thyroid tumors
CN1806771A (en) Puncture guiding system and method in computer aided percutaneous nephrostolithotomy
CN112641512A (en) Spatial registration method applied to surgical robot front planning
CN1925793A (en) System for guiding a medical device inside a patient
CN104248454B (en) A kind of two-dimensional ultrasonic image and the coplanar determination methods of puncture needle
CN1760915A (en) Registration of first and second image data of an object
CA3022207A1 (en) Apparatus and method for establishing patient registration using 3d scanner and tracking system
CN1471375A (en) Image processing device and image processing method
CN108245788B (en) A binocular ranging device and method, and an accelerator radiotherapy system including the device
CN113855287B (en) Oral implantation operation robot with evaluation of implantation precision and control method
CN108324369A (en) Method for registering and Use of Neuronavigation equipment in art based on face
CN105055022A (en) Surgical navigation general marking structure and image position obtaining method thereof
JP2010274044A (en) Surgery support apparatus, surgery support method, and surgery support program
Meng et al. An automatic markerless registration method for neurosurgical robotics based on an optical camera
CN117835933A (en) Method and system for registering surgical robot coordinate system and CT machine coordinate system
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN1957834A (en) 3D positioning system and method in endoscopic main body in medical use
CN200970224Y (en) Three dimension positioning system for medical endoscope body
CN114947943B (en) Non-contact ultrasonic probe rapid recalibration method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20061129

Termination date: 20101226