
WO2025046358A1 - Method for providing monitoring and/or control digital information to a medical device, by an external vision system - Google Patents

Method for providing monitoring and/or control digital information to a medical device, by an external vision system

Info

Publication number
WO2025046358A1
WO2025046358A1 (PCT/IB2024/057744)
Authority
WO
WIPO (PCT)
Prior art keywords
encoding
vision system
data
digital data
encoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2024/057744
Other languages
English (en)
Inventor
Emanuele Ruffaldi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medical Microinstruments Inc
Original Assignee
Medical Microinstruments Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medical Microinstruments Inc filed Critical Medical Microinstruments Inc
Publication of WO2025046358A1 publication Critical patent/WO2025046358A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/37 Leader-follower robots
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B25J 9/1689 Teleoperation

Definitions

  • the present invention relates to a method for providing monitoring and/or control digital information to a medical device (e.g., comprising a robotic system for medical or surgical teleoperation), by a vision system and/or image acquisition system which is external to the aforesaid robotic system of the medical device.
  • Endoscopic robotic systems are provided with fully incorporated vision systems, which allow a complete bidirectional exchange of information between the endoscope and the robot, in particular control and monitoring information in addition to optical/visual information, and thus allow an effective control of the vision system and also an optimal control of the robot, based on the control/monitoring information provided by the vision system.
  • robotic systems for surgical or microsurgical teleoperation are typically decoupled from the vision system, and thus comprise vision systems, i.e., digital exoscopes, which are outside the robotic system and transmit only video images of the teleoperation area thereto.
  • optical monitoring/control technical information, such as focal length, zoom factor and lens features, is not available to the robotic system;
  • a solution could be to provide an additional communication channel (in addition to the video transmission means) between the vision system and the robotic system, comprising the related hardware and software capable of managing all the communication protocols, at various levels.
  • Such an object is achieved by a method according to claim 1.
  • FIG. 1A and 1B show a medical system, in accordance with two possible embodiments of the invention, comprising a robotic system for medical or surgical teleoperation and a vision system external to the robotic system;
  • FIG. 2 shows in more detail another embodiment of a medical system, in accordance with the invention.
  • FIG. 3 and 4 show a block diagram of some steps of a method, according to respective embodiments of the present invention.
  • FIG. 5 depicts a video image in which control and/or monitoring digital data is encoded, according to an embodiment of the method of the present invention, in which an enlarged rectangle of the encoding is shown in the upper left;
  • a method for providing monitoring and/or control digital information to a medical device comprising at least one robotic system, by a vision system 120 adapted to acquire images and/or videos of a teleoperation area and send them to the medical device by vision system-to-medical device transmission means (i.e., transmission means from vision system to medical device).
  • the aforesaid vision system 120 is outside (i.e., external to) the robotic system of the medical device.
  • the method firstly includes encoding digital data representative of the aforesaid monitoring and/or control digital information within pixels of a raw video data stream, representative of the images and/or videos acquired by the vision system, by means of steganographic techniques, to generate a multiplexed stream containing videos and data.
  • the method then comprises the steps of transmitting the aforesaid multiplexed stream containing videos and data, by the aforesaid vision system-to-medical device transmission means; then, demultiplexing and decoding the aforesaid encoded and multiplexed digital data; and finally making the demultiplexed digital data available to the robotic system and/or to a control unit associated with the robotic system and/or to display means associated with the vision system.
  • the medical device comprises a robotic system 100 for medical or surgical teleoperation, in which the vision system 120 is adapted to acquire teleoperation images and/or videos and to send them to the robotic system 100 by vision system-to-robotic system transmission means (i.e., transmission means from vision system to robotic system).
  • the vision system 120 comprises a robotic vision head.
  • the step of encoding the digital data comprises encoding and/or multiplexing the digital data in the raw video data stream so that the multiplexed stream, containing videos and data, is substantially undisturbed, from the viewpoint of displaying the video images, with respect to the original raw video data stream.
  • the color distortions of the video contained in the multiplexed data stream are lower than a perceptibility threshold established in accordance with international standards, such as the CIE 1976 standard.
  • the measurement of distortion perception is defined in the CIE 1976 standard, which uses the CIE L*a*b* encoding of each color. Such a measurement is expressed as a "distance", or "CIE distortion level", the value of which is representative of the distortion when tested on a uniform surface of a given color with respect to another. Together with this "CIE distortion level", the aforesaid standard also defines a threshold value of such a level, equal to 2: the JND (Just Noticeable Difference) perceptibility level.
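The "CIE distortion level" above corresponds to the CIE 1976 color difference ΔE*ab: the Euclidean distance between two colors in L*a*b* space, compared against the JND threshold of 2. A minimal sketch (the sample L*a*b* triples are arbitrary, chosen only for illustration):

```python
import math

JND = 2.0  # perceptibility threshold cited in the description


def delta_e_76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))


# A small per-pixel perturbation stays below the JND threshold...
small = delta_e_76((50.0, 0.0, 0.0), (50.0, 1.0, 1.0))
# ...while a larger color shift would be noticeable.
large = delta_e_76((50.0, 0.0, 0.0), (53.0, 2.0, 2.0))
```

Distortions whose ΔE*ab stays below 2 are, per the standard, not noticeable to an observer, which is the sense in which the encoded stream is "substantially undisturbed".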
  • the method is applied to a raw video data stream corresponding to a video signal encoded and transmitted with RGB (Red Green Blue) encoding, for example 8-bit or more per channel, or with YCbCr (Luminance and Chrominance) encoding, for example YCbCr (4:4:4, 4:2:2, 4:2:0), where 4:2:2 and 4:2:0 are spatial subsampling modes of the chrominance space.
  • the method applies, for example, to raw images with signals generated and transmitted in RGB (8-bit or more) or grayscale or YCbCr (4:4:4, 4:2:2, 4:2:0) with each type of ColorSpace (for example: sRGB, BT.709, BT.2020).
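As context for the channel choice made further below, 4:2:0 sub-sampling keeps a single chroma sample per 2x2 block of luma samples, so fine-grained data hidden in Cb or Cr would be averaged away, while the full-resolution Y plane is preserved. A toy sketch (block averaging is one common down-sampling filter; actual pipelines may use other filters):

```python
def subsample_420(plane):
    """Replace each 2x2 block of a chroma plane by its average,
    halving the resolution in both directions, as in 4:2:0."""
    return [[(plane[y][x] + plane[y][x + 1]
              + plane[y + 1][x] + plane[y + 1][x + 1]) // 4
             for x in range(0, len(plane[0]), 2)]
            for y in range(0, len(plane), 2)]
```

A chroma value carefully set to carry a bit (e.g., 240 among neighbors at 16) is blended with its neighbors and lost, which is why per-pixel encoding on Cb/Cr is fragile.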
  • the aforesaid encoding step comprises encoding each bit of digital data to be encoded into one and only one respective pixel of the raw video data stream.
  • the aforesaid step of encoding digital data comprises generating a payload of digital data, protecting the generated payload, and encoding the generated and protected payload.
  • the aforesaid step of demultiplexing the encoded and multiplexed digital data comprises the following sub-steps:
  • the aforesaid step of encoding the digital data or the payload of digital data comprises three levels of encoding:
  • the aforesaid content-level encoding comprises encoding in standard XML/JSON/ASN.1 formats, and said serialized-data-level encoding comprises serialization, encryption and protection.
  • the serialization is performed for example by means of JSON byte encoding or XDR encoding or ASN.1 encoding.
  • the encryption is performed for example by means of standard symmetric algorithms such as DES or Blowfish.
  • the protection is performed for example by means of CRC-8 / CRC-16, or by a cryptographic signature such as the SHA algorithm family.
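As a sketch of the serialization-and-protection level, the following combines JSON byte encoding with a CRC-16 (here the CCITT variant provided by Python's binascii.crc_hqx; the description only names CRC-8/CRC-16 generically, so the exact polynomial and the payload field names are illustrative assumptions):

```python
import binascii
import json
import struct


def protect(payload: dict) -> bytes:
    """Serialize a payload as compact JSON and append a big-endian CRC-16."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    return body + struct.pack(">H", binascii.crc_hqx(body, 0xFFFF))


def verify(frame: bytes) -> dict:
    """Check the trailing CRC-16 and return the decoded payload."""
    body, (crc,) = frame[:-2], struct.unpack(">H", frame[-2:])
    if binascii.crc_hqx(body, 0xFFFF) != crc:
        raise ValueError("CRC mismatch: encoded data corrupted in transit")
    return json.loads(body)
```

A frame corrupted in the video path (e.g., by an overlay drawn over the encoded pixels) is then rejected at the demultiplexing stage instead of being handed to the robotic system as valid data.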
  • the aforesaid pixel-level encoding is performed by means of algorithms combining the byte sequence of the data to be encoded with the pixel sequence of the video signal, taking into account the type of video encoding.
  • the method applies to a raw video stream that is RGB-encoded, and requires that each bit of the data to be encoded is encoded in a respective pixel.
  • the encoding provides "full black" to encode 0 and "full white" to encode 1, or vice versa.
  • the encoding provides using the least significant bit (LSB) of the color pixel to encode the content.
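A minimal sketch of the LSB variant on RGB pixels (embedding in the red channel is an arbitrary choice for illustration; any channel's least significant bit works the same way):

```python
def embed_lsb(pixels, bits):
    """Write each payload bit into the LSB of the red channel of one pixel.

    `pixels` is a list of (r, g, b) tuples; each channel value changes by
    at most 1, which keeps the visual disturbance minimal.
    """
    out = list(pixels)
    for i, bit in enumerate(bits):
        r, g, b = out[i]
        out[i] = ((r & ~1) | bit, g, b)
    return out


def extract_lsb(pixels, n_bits):
    """Read the payload back from the red-channel LSBs."""
    return [pixels[i][0] & 1 for i in range(n_bits)]
```

This matches the one-bit-per-pixel constraint stated above: each data bit occupies exactly one pixel of the raw stream.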
  • the method is applied to a YCbCr-encoded raw video stream, and the encoding of the digital data is performed on the luminance channel Y so as to be robust with respect to the various chrominance down-sampling techniques (for example, 4:2:2 and 4:2:0).
  • each data bit to be encoded is encoded in K bits of a respective pixel of the luminance channel Y, where a data bit 0 corresponds to the value 0 and a data bit 1 to the value 2^K - 1 over said K bits.
  • K = 3, such that a 0 data bit is encoded as 0 on 3 bits (i.e., 000), and a 1 data bit as 7 on 3 bits (i.e., 111).
  • the encoding is entirely performed in YCbCr so as to ensure that the transformations carried out, such as conversion from RGB and then back to YCbCr, or sub-sampling, do not damage it.
  • the decoding work includes extracting the value of the encoded bit by reading the aforesaid K bits, with the following decision criterion which takes into account the possible distortion of the values:
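The K-bit replication and its distortion-tolerant decision criterion can be sketched as follows (the midpoint threshold is one natural reading of the criterion described above, not a formula stated verbatim in the text):

```python
K = 3  # replication factor from the example in the description


def encode_bit(bit, k=K):
    """A 0 data bit becomes the value 0; a 1 data bit becomes 2**k - 1
    (e.g., 000 vs 111 for k = 3), written into k bits of the Y channel."""
    return (1 << k) - 1 if bit else 0


def decode_bit(value, k=K):
    """Decide against the midpoint of the k-bit range, so that moderate
    distortion of the stored value still yields the correct bit."""
    return 1 if value >= (1 << (k - 1)) else 0
```

A stored value distorted from 7 down to 5 by processing of the video signal, for instance, still decodes as a 1 bit.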
  • the method applies to a YCbCr-encoded raw video stream, and the encoding of the digital data is performed on the Y luminance channel.
  • each data bit to be encoded is encoded, as 0 or 1, into a respective single bit of the luminance channel Y of a respective pixel.
  • the method also includes checking if the modified pixel in which the data bit is encoded is a valid pixel, from the viewpoint of RGB video encoding, and thus remains usable as a video pixel.
  • the method includes modifying the Cb and/or Cr channels of the YCbCr pixel so as to obtain an adjusted modified pixel which is a valid pixel, from the viewpoint of RGB video encoding.
  • the YCbCr pixel recognized as invalid according to the RGB encoding (i.e., not having a valid meaning in the RGB encoding) is modified, keeping the value of the channel Y unchanged and modifying the values of the channels Cb and/or Cr (according to several possible options known per se), based on the criterion that the YCbCr pixel modified as indicated above, once reconverted into RGB format, gives rise to a triplet of valid values, which encode a significant RGB pixel.
  • Such an embodiment takes into account the fact that, if a generic set of values (Y,Cb,Cr) is considered and the signal Y is altered, the resulting overall value may be an invalid RGB value, which would lead to making the pixel insignificant, and ultimately to a loss of content.
  • the RGB space is contained as a polyhedron inside the parallelepiped of the YCbCr space. The validity is ensured by reprojecting the YCbCr value inside the subset of space which is mappable onto valid RGBs.
  • Each RGB point has a corresponding YCbCr point, while the opposite is not true.
  • let YCbCr be 8-bit with the limited-range convention, for which Y ranges from 16 to 235 and Cb, Cr from 16 to 240.
  • the YCbCr value (16, 240, 16), considered in the sRGB gamut, is linearly mapped to (-179, 47, 226) in the RGB space; this mapping is invalid.
  • the projection is then completed by returning it to RGB (0, 47, 226), the projected YCbCr of which is (62, 214, 95).
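The worked example above can be reproduced with the limited-range conversion formulas and the coefficients quoted later in the description (KR, KG, KB = 0.2990, 0.5870, 0.1140); clamping the out-of-range RGB triple and converting back is one simple projection, consistent with the numbers in the example:

```python
KR, KG, KB = 0.2990, 0.5870, 0.1140  # luma coefficients from the description


def ycbcr_to_rgb(y, cb, cr):
    """Limited-range 8-bit YCbCr -> RGB; the result may fall outside
    [0, 255], which is exactly the invalidity discussed above."""
    yf = (y - 16) / 219
    pb = (cb - 128) / 224
    pr = (cr - 128) / 224
    r = yf + 2 * (1 - KR) * pr
    b = yf + 2 * (1 - KB) * pb
    g = (yf - KR * r - KB * b) / KG  # from Y' = KR*R + KG*G + KB*B
    return tuple(round(v * 255) for v in (r, g, b))


def rgb_to_ycbcr(r, g, b):
    """8-bit RGB -> limited-range 8-bit YCbCr."""
    rf, gf, bf = r / 255, g / 255, b / 255
    yf = KR * rf + KG * gf + KB * bf
    pb = (bf - yf) / (2 * (1 - KB))
    pr = (rf - yf) / (2 * (1 - KR))
    return (round(16 + 219 * yf),
            round(128 + 224 * pb),
            round(128 + 224 * pr))


def reproject(y, cb, cr):
    """Clamp the RGB image of a YCbCr triple to [0, 255] and map it back,
    yielding a YCbCr value that is representable as a valid RGB pixel."""
    clamped = tuple(min(255, max(0, v)) for v in ycbcr_to_rgb(y, cb, cr))
    return rgb_to_ycbcr(*clamped)
```

With these definitions, ycbcr_to_rgb(16, 240, 16) gives (-179, 47, 226) and reproject(16, 240, 16) gives (62, 214, 95), matching the values in the example above.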
  • This option, in addition to being simpler and ensuring better robustness, also ensures less distortion, since it provides a "CIE parameter" value of less than 2, which, as disclosed above, is below the perceptibility threshold, i.e., the JND (Just Noticeable Difference) level.
  • the distortion is in fact not perceptible, and the modified video stream (containing the encoded digital data) is substantially undisturbed with respect to the original video stream, with respect to the perception of the video image.
  • the method comprises, before the step of encoding the digital data, the step of converting the RGB signal into a YCbCr-encoded raw video stream.
  • the aforesaid step of converting the RGB signal into a YCbCr-encoded raw video stream comprises:
  • integer YCbCr on 8 bits, with range [16, 235] for luminance Y and [16, 240] for chrominances Cb and Cr.
  • KR, KG, KB are constants defined by the color space (e.g., sRGB uses 0.2990, 0.5870, 0.1140).
  • the conversion from YPbPr to YCbCr is trivial and known per se: for example, Y is multiplied by 219 and 16 is added, while Cb and Cr are multiplied by 224 and 128 is added.
  • the YPbPr/YCbCr frame can be interpreted as a new frame, in which the RGBf/RGBu space cube is rotated and tilted.
  • Figure 6 is a representation of YCbCr with respect to RGBu.
  • the method comprises the further steps of:
  • the input signal (e.g., provided by the camera of the vision system) is at least 8-bit;
  • the encoding and insertion of the digital data can be performed at step 1 or step 2.
  • the conversion to YCbCr 4:2:0 is the reason why the encoding is done on the channel Y.
  • step 1 is not performed.
  • the method, before the encoding step, provides a step of selecting the pixels of the video image in which to encode the digital data, based on the geometric position of the pixels in the video image (corresponding to the display on a screen), according to the criterion of minimizing the distortion and/or local impact of the encoded pixels on the video image quality.
  • the aforesaid selection step comprises selecting a plurality of pixels belonging to the same horizontal or vertical line of the video image as the pixels of the video image in which to encode the digital data.
  • the selection step comprises selecting a plurality of pixels belonging to the first half of the first or the last horizontal or vertical line of the video image as the pixels of the video image in which to encode the digital data.
  • the aforesaid plurality of selected pixels comprises a sequence of consecutive pixels at the beginning of the line, or pixels evenly distributed in the first half of the line.
  • the pixels of any single vertical or horizontal line (preferably horizontal) up to the half thereof are encoded.
  • Figure 5 depicts a video image in which control and/or monitoring digital data is encoded, barely visible in the upper left part (at the beginning of the first horizontal line).
  • the example of the figure refers to an encoding of 1 pixel per bit, black/white coding, with CRC-16.
  • the pixels of the first or last line are selected, at width positions from 0 up to W/2, or N pixels of the first or last line evenly distributed, spaced (W/2)/N apart.
  • the distribution of the pixels to be encoded, along the chosen line follows other temporal patterns adapted to avoid detection.
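The geometric selection rules above can be sketched as an index generator (the function name and the choice of returning (x, line) coordinate pairs are illustrative assumptions):

```python
def select_pixels(width, n_bits, line=0, evenly=False):
    """Pick the pixel x-positions on a given line that will carry the bits.

    Consecutive mode uses the first n_bits pixels of the line; the even
    mode spreads n_bits pixels over the first half of the line, spaced
    (width / 2) / n_bits apart, as described above.
    """
    if evenly:
        step = (width // 2) // n_bits
        xs = [i * step for i in range(n_bits)]
    else:
        xs = list(range(n_bits))
    return [(x, line) for x in xs]
```

Confining the encoded pixels to the start of one border line keeps the local impact on image quality minimal, as Figure 5 illustrates.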
  • the aforesaid monitoring and/or control digital information comprises one or more of the following information:
  • optical information related to the operation of the vision system, and/or
  • dynamic optical information related to the operation of the vision system, and/or
  • the aforesaid monitoring and/or control digital information comprises information adapted to guide the operator and/or information related to safety measures.
  • the aforesaid optical information related to the operation of the vision system comprises focal length and/or zoom information, usable by Computer Vision algorithms present in the control unit of the robotic system, for self-adjustment purposes and/or to provide metric information.
  • the aforesaid dynamic optical information related to the operation of the vision system comprises real-time variations of zoom and/or brightness and/or exposure and/or focus parameters communicated to the robotic system by the vision system.
  • the aforesaid dynamic information related to the movement of the vision system comprises information related to the movement of a head of the vision system, and/or changes in focal length and/or zoom, usable by an operator to decide whether to stop the movement of the surgical instrument.
  • the aforesaid pose information of the surgical instrument 170 comprises surgical instrument location information viewable by the operator.
  • the aforesaid signaling and/or monitoring information of the vision system comprises an indication of any faults and/or prioritization information of the information to be displayed.
  • the aforesaid sensor-related information comprises indications that the sensors are Visible Light or ICG (Indocyanine Green in Fluorescence) or other non-visible but clinically relevant spectrum.
  • a medical system 10 (i.e., medical device 10), comprised in the present invention, is described below.
  • such a system comprises a robotic system 100 for medical or surgical teleoperation, a vision system 120 outside (i.e., external to) the robotic system, encoding means, vision system-to-robotic system transmission means, and demultiplexing and decoding means.
  • the robotic system 100 for medical or surgical teleoperation comprises at least one surgical instrument 170, adapted to operate in teleoperation, and further comprises a control unit.
  • the vision system 120 which is outside the robotic system 100 (and/or, in particular, outside the surgical instrument), is configured to acquire images and/or videos of a teleoperation area and to generate a related raw video data stream, representative of the acquired images and/or videos.
  • the encoding means are configured to encode digital data representative of the aforesaid monitoring and/or control digital information within pixels of the raw video data stream, by means of steganographic techniques, and thus generate a multiplexed stream containing videos and data.
  • the vision system-to-robotic system transmission means (i.e., transmission means from vision system to robotic system) are configured to transmit data streams, comprising the aforesaid raw video stream and/or the aforesaid multiplexed stream containing videos and data, from the vision system to the robotic system.
  • the demultiplexing and decoding means are configured to demultiplex and decode the aforesaid encoded and multiplexed digital data and make the demultiplexed digital data available to the robotic system and/or to the control unit of the robotic system and/or to display means 130 associated with the vision system.
  • the medical system further comprises display means 130 associated with the vision system 120, configured to display video images of the teleoperation area from the vision system 120.
  • the vision system 120 comprises one or more image sensors or cameras, having the same viewpoint or two different respective viewpoints, configured to provide video signals representative of two-dimensional or three-dimensional images.
  • the vision system comprises an exoscope.
  • the vision system 120 and/or the display means 130 comprise at least one electronic screen or monitor.
  • the transmission means from vision system to robotic system comprise at least one HDMI cable.
  • the robotic system is a robotic system for surgery further comprising at least one master device 110, adapted to be moved by an operator 150, and at least one slave device comprising the aforesaid surgical instrument 170 adapted to be controlled by the master device 110.
  • the encoding means are configured to carry out the steps of encoding and generating a multiplexed stream containing videos and data, in accordance with the method according to any one of the embodiments disclosed above.
  • the demultiplexing and decoding means are configured to carry out the steps of demultiplexing and decoding the digital data, in accordance with the method according to any one of the embodiments disclosed above.
  • the method and system disclosed above allow the transmission channel already existing between the external vision system and the robotic system to be optimally exploited to provide the latter with all the monitoring and/or control information, coming from the vision system, which the robotic system may need.
  • Such information can comprise, inter alia, optical monitoring/control technical information (such as focal length, zoom factor and lens features) and/or camera head movement information of the vision system.
  • the architecture of the system suggested with the present invention allows an effective integration of information between the vision system and the robotic system.
  • the technical solution of the present invention does not require providing a further dedicated hardware communication channel (with related software capable of managing all the communication protocols, at various levels) between vision system and robotic system, which would be complex and expensive, so as to be impractical.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Mechanical Engineering (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

Described is a method for providing monitoring and/or control digital information to a medical device comprising a robotic system 100 (for example, a robotic system for medical or surgical teleoperation), by a vision system 120 adapted to acquire images and/or videos of a teleoperation area and send them to the robotic system 100 by vision system-to-medical device transmission means. The aforesaid vision system 120 is outside the robotic system 100 of the medical device. The method firstly includes encoding digital data representative of the aforesaid monitoring and/or control digital information within pixels of a raw video data stream, representative of the images and/or videos acquired by the vision system, by means of steganographic techniques, to generate a multiplexed stream containing videos and data. The method then comprises the steps of transmitting the aforesaid multiplexed stream containing videos and data, by the aforesaid vision system-to-robotic system transmission means; then demultiplexing and decoding the aforesaid encoded and multiplexed digital data; and finally making the demultiplexed digital data available to the robotic system and/or to a control unit associated with the robotic system and/or to display means associated with the vision system. Also described is a medical system or device in which the aforesaid method is implemented, comprising a robotic system for medical or surgical teleoperation and a vision system which is outside said robotic system but connected thereto by data transmission means.
PCT/IB2024/057744 2023-09-01 2024-08-09 Method for providing monitoring and/or control digital information to a medical device, by an external vision system Pending WO2025046358A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT202300017994 2023-09-01
IT102023000017994 2023-09-01

Publications (1)

Publication Number Publication Date
WO2025046358A1 (fr) 2025-03-06

Family

ID=88690070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/057744 Pending WO2025046358A1 (fr) 2023-09-01 2024-08-09 Method for providing monitoring and/or control digital information to a medical device, by an external vision system

Country Status (1)

Country Link
WO (1) WO2025046358A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210000557A1 (en) * 2019-07-02 2021-01-07 Intuitive Surgical Operations, Inc. Remotely Controlling A System Using Video
WO2022066797A1 (fr) * 2020-09-23 2022-03-31 Wayne State University Détection, localisation, évaluation et visualisation de saignements dans un champ chirurgical
US20220202508A1 (en) * 2020-10-27 2022-06-30 Verily Life Sciences Llc Techniques for improving processing of video data in a surgical environment
US11587491B1 (en) * 2018-10-25 2023-02-21 Baylor University System and method for a multi-primary wide gamut color system


Similar Documents

Publication Publication Date Title
US11183143B2 (en) Transitioning between video priority and graphics priority
JP6019157B2 (ja) 画像のダイナミックレンジの拡大
JP5992997B2 (ja) 映像符号化信号を発生する方法及び装置
US9986258B2 (en) Efficient encoding of multiple views
JP6563915B2 (ja) Hdr画像のための汎用コードマッピングのためのeotf関数を生成するための方法及び装置、並びにこれらの画像を用いる方法及び処理
CN104486605B (zh) 扩展动态范围和扩展维数图像信号转换
US20120064944A1 (en) Universal stereoscopic file format
JP2013538474A (ja) 3次元画像に対する視差の算出
KR102176398B1 (ko) 영상처리장치 및 영상처리방법
CN110050464B (zh) 图像处理设备、图像处理方法和存储介质
US20130315473A1 (en) Image processing device and image processing method
CN109417588B (zh) 信号处理设备、信号处理方法、相机系统、视频系统和服务器
JP2007081685A (ja) 画像信号処理装置、画像信号処理方法及び画像信号処理システム
JP5975301B2 (ja) 符号化装置、符号化方法、プログラム、および記録媒体
US9319656B2 (en) Apparatus and method for processing 3D video data
TWI524730B (zh) 處理視頻之方法及其系統
WO2025046358A1 (fr) Method for providing monitoring and/or control digital information to a medical device, by an external vision system
JP4173684B2 (ja) 立体画像作成装置
CN115272440A (zh) 一种图像处理方法、设备及系统
JP7555277B2 (ja) 画像表示システム、表示装置、および画像表示方法
JP4329072B2 (ja) テストパターン発生装置、撮像装置、画像出力装置、及び高精細画像表示システム
KR101272264B1 (ko) 색상 공간 roi에 기반을 둔 3차원 영상 압축 방법 및 장치
CN120457671A (zh) 视频电话会议方法及装置
Mai Tone-Mapping High Dynamic Range Images and Videos for Bit-Depth Scalable Coding and 3D Displaying
HK1167970A (en) Method and system for video processing

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24768687

Country of ref document: EP

Kind code of ref document: A1