
WO2025223515A1 - Imaging method and system and surgical robot system - Google Patents

Imaging method and system and surgical robot system

Info

Publication number
WO2025223515A1
WO2025223515A1, PCT/CN2025/090978, CN2025090978W
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
region
signal channel
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2025/090978
Other languages
French (fr)
Chinese (zh)
Inventor
林斌
胡飏
何芳
吴凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cornerstone Technology Shenzhen Ltd
Original Assignee
Cornerstone Technology Shenzhen Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cornerstone Technology Shenzhen Ltd filed Critical Cornerstone Technology Shenzhen Ltd
Publication of WO2025223515A1 publication Critical patent/WO2025223515A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30: Surgical robots
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17: Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25: Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31: Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35: Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics

Definitions

  • This disclosure relates to the field of image processing technology, and in particular to imaging methods and systems and surgical robot systems.
  • a specially designed optical system can be used to excite and detect the special light emitted by the objects, thereby achieving visualization.
  • the target area may include multiple objects, and related technologies struggle to accurately distinguish between different objects visually.
  • embodiments of this disclosure provide an imaging method, the method comprising: illuminating a target region with a light source, the light source including a first light source and a second light source, the target region providing first light under the illumination of the first light source, the target region providing second light under the illumination of the second light source, the target region including a first object and a second object, the first object and the second object providing light with different spectral characteristics under the illumination of the second light source; acquiring an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; and performing image enhancement processing on the acquired image to enhance the image signals of the first region and the second region in the acquired image, thereby obtaining an enhanced image.
  • embodiments of this disclosure provide an imaging system, the system comprising: a light source configured to illuminate a target region, the light source including a first light source and a second light source, the target region providing first light under illumination by the first light source and providing second light under illumination by the second light source, the target region including a first object and a second object, the first object and the second object providing light with different spectral characteristics under illumination by the second light source; an imaging unit configured to acquire an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; and an image processor configured to perform image enhancement processing on the acquired image to enhance the image signals of the first region and the second region in the acquired image, thereby obtaining an enhanced image.
  • embodiments of this disclosure provide a surgical robot system, the system including the aforementioned imaging system, and further including a display unit that receives and displays images from the imaging system.
  • the target area is imaged based on the first light and the second light. After obtaining the image of the target area, the image is further processed to enhance the difference between the image signal of the first area and the image signal of the second area. This allows the user to more intuitively distinguish the first area corresponding to the first object and the second area corresponding to the second object in the image, thereby improving the recognition between the first area and the second area.
  • Figure 1 is a schematic diagram of an imaging system according to an embodiment of the present disclosure.
  • Figure 2 is a schematic diagram of the response intensity of the image sensor of this disclosure to light of different wavelengths.
  • Figure 3 is a schematic diagram of the pixel values of each channel of the pixel point before and after processing according to an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of an endoscope system according to an embodiment of the present disclosure.
  • Figure 5 is a schematic diagram of a surgical robot system according to an embodiment of the present disclosure.
  • Figure 6 is a schematic diagram of the imaging method according to an embodiment of the present disclosure.
  • Figure 7 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
  • although the terms first, second, third, etc. may be used in this disclosure to describe various information, such information should not be limited to these terms. These terms are used only to distinguish information of the same type from one another.
  • first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • the target area may be illuminated by multiple light sources, providing various types of light. Based on these multiple light sources, the target area can be imaged to obtain an image of the target area, thus enabling visualization of objects within the target area.
  • the target area may include multiple objects, and different objects have different light absorption and reflection characteristics, resulting in different image signals in the image. Theoretically, based on the differences between the image signals corresponding to the regions of each object in the image, the regions corresponding to each object can be distinguished from the image. However, these differences are sometimes quite subtle, making it difficult to accurately distinguish the regions corresponding to each object in the image visually.
  • Fluorescence imaging refers to the method of visualizing objects that cannot be directly distinguished by labeling them with fluorescent dyes, and then using a specially designed optical system to excite and detect fluorescence.
  • fluorescent dyes can be injected into the target area through injection, application, etc., and the target area can be illuminated with visible light and excitation light.
  • the target area can reflect visible light under the illumination of the visible light source and emit fluorescence under the illumination of the excitation light source. Imaging can be performed based on the visible light and fluorescence from the target area to obtain an image of the target area, which can then be analyzed and processed.
  • the target area can be the surgical area, and the image of the target area can help doctors distinguish between normal tissues and diseased tissues, thereby improving surgical accuracy.
  • the target area may include multiple objects, and different objects may absorb fluorescent dyes to varying degrees. Therefore, under excitation light, the fluorescence intensity emitted by different objects may differ.
  • multiple objects may include normal in vivo tissue and diseased tissue.
  • diseased tissue absorbs fluorescent dyes more readily than normal in vivo tissue; therefore, normal in vivo tissue emits weaker fluorescence under excitation light, while diseased tissue emits stronger fluorescence.
  • image signals for regions corresponding to normal in vivo tissue and regions corresponding to diseased tissue in the target area image can be distinguished from the image based on the differences in their image signals.
  • because image sensors have different transmittance for visible light and fluorescence, the differences between these image signals are not significant enough, making it difficult to visually and accurately distinguish the image regions corresponding to different objects.
  • fluorescence imaging scenarios are merely illustrative examples. Other methods can be used to achieve visualization in other application scenarios, which will not be listed here. For ease of description, fluorescence imaging technology will be used as an example below.
  • Figure 1 shows a schematic diagram of an imaging system 10, which includes a light source 101, an imaging unit 102, and an image processor 103.
  • the light source 101 includes a first light source 101a and a second light source 101b, both of which can illuminate the target area S.
  • the target area S provides first light under the illumination of the first light source 101a and second light under the illumination of the second light source 101b, the first and second light having different wavelengths.
  • the target area S includes a first object and a second object, which provide light with different spectral characteristics under the illumination of the second light source 101b.
  • the first light source 101a may be a visible light source
  • the second light source 101b may be an excitation source.
  • the target region S provides visible light under the illumination of the visible light source and fluorescence under the illumination of the excitation source, with the visible light and fluorescence having different wavelengths.
  • the first light source 101a being a visible light source and the second light source 101b being an excitation source is merely illustrative, and the first light source 101a and the second light source 101b are not limited to being a visible light source and an excitation source, respectively.
  • the embodiments of this disclosure will be described using the first light source 101a as a visible light source and the second light source 101b as an excitation source as an example. It can be understood that the principles of the embodiments of this disclosure are the same as when the first light source 101a and the second light source 101b are different light sources.
  • the visible light source 101a may emit visible light within a portion of the wavelength range from 380 nm to 780 nm.
  • different wavelengths of visible light La may be selected.
  • the visible light La may be yellow light in the wavelength range of 550 nm to 650 nm, or it may be red light in the wavelength range of 600 nm to 660 nm, or it may be blue light in the wavelength range of 380 nm to 470 nm.
  • the excitation light source 101b can emit excitation light Lb, which can be any excitation light that can cause the target region S to emit fluorescence, such as infrared light, near-infrared light, or ultraviolet light.
  • the target region S when irradiated by visible light La emitted from visible light source 101a, can reflect visible light Lax, which is the first light.
  • the target region S emits fluorescence Lby when irradiated by excitation light Lb emitted from excitation source 101b, which is the second light.
  • the visible light Lax and the fluorescence Lby have different wavelengths.
  • the visible light Lax has a wavelength between 380 nm and 780 nm
  • the fluorescence Lby has a wavelength between 800 nm and 900 nm. It is understood that the values in the above embodiments are merely illustrative and are not intended to limit this disclosure.
  • the target area S includes a first object O1 and a second object O2.
  • Both the first object O1 and the second object O2 reflect visible light Lax when illuminated by visible light La emitted by visible light source 101a.
  • the first object O1 and the second object O2 provide light with different spectral characteristics when illuminated by excitation source 101b. For example, the first object O1 does not emit fluorescence Lby when illuminated by excitation source 101b, while the second object O2 emits fluorescence Lby when illuminated by excitation source 101b.
  • the target area S is a target area in a surgical scene
  • the first object O1 is tissue in the surgical scene that has not been fluorescently labeled
  • the second object O2 is tissue in the surgical scene that has been fluorescently labeled.
  • the first object O1 does not fluoresce under the illumination of the excitation light source 101b
  • the second object O2 fluoresces under the illumination of the excitation light source 101b, thereby enabling the first object O1 and the second object O2 to provide light with different spectral characteristics under the illumination of the excitation light source 101b.
  • fluorescent markers such as indocyanine green, fluorescein, and rhodamine can be used to fluorescently label tissues in a surgical setting.
  • the first object O1 is normal tissue within the surgical area
  • the second object O2 is diseased tissue, such as tumor tissue, within the surgical area.
  • the first object O1 and the second object O2 can also be other types of objects.
  • the diseased tissue within the target area S can be labeled using fluorescent markers (such as indocyanine green, fluorescein, and rhodamine). Normal tissue does not fluoresce under the illumination of the excitation light source 101b, while the diseased tissue labeled with the fluorescent marker fluoresces under the illumination of the excitation light source 101b.
  • Imaging unit 102 acquires an image Is of target region S.
  • imaging unit 102 may include a lens 102a and an image sensor 102b. Visible light Lax and fluorescence Lby from target region S can be captured by lens 102a and imaged by image sensor 102b to obtain image Is.
  • image sensor 102b may include, but is not limited to, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) image sensor.
  • Image Is includes a first region R1 corresponding to the first object O1 and a second region R2 corresponding to the second object O2.
  • the first region R1 and the second region R2 have different image signals.
  • the image signals may include, but are not limited to, at least one of brightness, chroma, saturation, contrast, and boundary pixels of different regions.
  • Imaging unit 102 images the target region S based on the visible light Lax and fluorescence Lby provided by the target region S, and obtains the acquired image Is. Since the first object O1 and the second object O2 provide light with different spectral characteristics under the illumination of the excitation light source 101b, such as the first object O1 not emitting fluorescence under the illumination of the excitation light source 101b, and the second object O2 emitting fluorescence under the illumination of the excitation light source 101b, the first region R1 corresponding to the first object O1 in image Is does not include fluorescence signal (also called non-fluorescent region), and the second region R2 corresponding to the second object O2 in image Is includes fluorescence signal (also called fluorescent region), so that the first region R1 corresponding to the first object O1 and the second region R2 corresponding to the second object O2 in image Is have different image signals.
  • the first object O1 is tissue in a surgical scene that has not been fluorescently labeled
  • the second object O2 is tissue in a surgical scene that has been fluorescently labeled.
  • the first object O1 does not fluoresce under the illumination of the excitation light source 101b
  • the second object O2 fluoresces under the illumination of the excitation light source 101b.
  • the first region R1 corresponding to the first object O1 in the acquired image Is does not have a fluorescent signal and is called a non-fluorescent region
  • the second region R2 corresponding to the second object O2 has a fluorescent signal and is called a fluorescent region.
  • the first region R1 corresponding to the first object O1 and the second region R2 corresponding to the second object O2 in the image Is have different image signals.
  • the image sensor 102b is a single image sensor, meaning the number of image sensors 102b used to image the target region S can be equal to one.
  • This single image sensor may include multiple channels.
  • each channel may image the target region S based on both visible light (Lax) and fluorescence (Lby).
  • at least one of the multiple channels may image the target region S based on visible light (Lax), and at least one other channel may image the target region S based on fluorescence (Lby).
  • the multiple channels may include at least two of a red signal (R) channel, a green signal (G) channel, and a blue signal (B) channel.
  • the multiple channels may form a Bayer array, where each channel can sense color information from the red, green, or blue bands of visible light (Lax), and at least one of the red, green, and blue signal channels may sense fluorescence (Lby), or each of the red, green, and blue signal channels may sense fluorescence (Lby).
  • the multiple channels may also include at least two of a red signal (R) channel, a first green signal (Gr) channel, a second green signal (Gb) channel, and a blue signal (B) channel.
  • the multiple channels can form a Bayer array, where each channel can sense one of the color information in visible light Lax: red, first green, second green, or blue.
  • At least one of the red, green, and blue signal channels can sense fluorescence Lby, or each of the red, green, and blue signal channels can sense fluorescence Lby.
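To make the single-sensor idea concrete, here is a minimal sketch (not from the patent; the per-channel sensitivity values are hypothetical) modeling each Bayer channel's signal as a visible-light component plus a channel-specific fluorescence response:

```python
# Hypothetical per-channel sensitivities to the fluorescence band
# (illustrative values only; real responses depend on the sensor).
FLUOR_SENSITIVITY = {"R": 0.9, "G": 0.7, "B": 0.6}

def channel_signal(visible, fluorescence, channel):
    """A channel of a single Bayer sensor that senses both visible
    light and fluorescence sums the two contributions."""
    return visible + FLUOR_SENSITIVITY[channel] * fluorescence

# A pixel in the fluorescent region carries both components:
signals = {ch: channel_signal(0.40, 0.30, ch) for ch in "RGB"}
# A pixel in the non-fluorescent region carries only visible light:
plain = {ch: channel_signal(0.40, 0.00, ch) for ch in "RGB"}
```

This is why the fluorescent and non-fluorescent regions of the same single-sensor frame end up with different image signals.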
  • Figure 2 shows a schematic diagram of the image sensor 102b sensing visible light Lax and fluorescence Lby.
  • an image Is obtained by the single image sensor includes both fluorescent and non-fluorescent regions.
  • the target area S provides red light and fluorescence under the illumination of red light and excitation light.
  • the imaging unit 102 acquires an image Is of the target area S based on the red light and fluorescence provided by the target area S. Since the absorption rate of red light by the tissue in the target area is generally low in surgical scenarios, the image Is acquired by the imaging unit 102 based on the red light and fluorescence provided by the target area S has high brightness.
  • the target area S provides blue light and fluorescence under the illumination of blue light and excitation light.
  • the imaging unit 102 acquires an image Is of the target area S based on the blue light and fluorescence provided by the target area S. Since the tissue in the target area generally has a high absorption rate of blue light in surgical scenarios, the image Is acquired by the imaging unit 102 based on the blue light and fluorescence provided by the target area S has high contrast and clarity.
  • the target area S provides yellow light and fluorescence under the illumination of yellow light and excitation light.
  • the imaging unit 102 acquires the image Is of the target area S based on the yellow light and fluorescence provided by the target area S. Since the absorption rate of yellow light by the tissue in the target area in the surgical scene is between that of blue light and red light, the image Is acquired by the imaging unit 102 based on the yellow light and fluorescence provided by the target area S can ensure both image brightness and image clarity and contrast.
  • the imaging unit 102 further includes a filter disposed in the optical path before the image sensor 102b, for filtering the excitation light Lbx from the target region S.
  • the visible light Lax and/or fluorescence Lby from the target region S may also include the excitation light Lbx reflected from the target region S.
  • the excitation light Lbx will form stray light and interfere with the imaging process. Therefore, by filtering out the excitation light Lbx included in the visible light Lax and/or fluorescence Lby from the target region S by the filter disposed in the optical path before the image sensor 102b, the interference of the excitation light Lbx on the imaging process is reduced, and the imaging quality is improved.
  • Image sensor 102b can send image Is to image processor 103.
  • Image processor 103 is configured to perform image enhancement processing on image Is to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 in image Is.
  • the image enhancement processing includes, but is not limited to, at least one of the following: white balance processing, linear color correction, and nonlinear color mapping.
  • the image processor 103 directly performs image enhancement processing on the image Is acquired by the imaging unit 102 to obtain an enhanced image. That is, the image processor 103 does not perform composite processing on the acquired image Is during the image enhancement processing.
  • the imaging unit 102 uses a single image sensor to image the target region S based on visible light Lax and fluorescence Lby provided by the target region S, resulting in an image Is that includes both fluorescent and non-fluorescent regions.
  • the image processor 103 only performs enhancement processing on this single image Is.
  • the image processor 103 obtains an enhanced image that visually distinguishes the first and second regions simply by directly performing image enhancement processing on the image Is obtained by the image sensor 102b, without requiring composite processing.
  • the enhanced image obtained by the image processor 103 is directly presented to the observer through a display device, allowing the observer to directly distinguish the first and second regions from the presented enhanced image.
  • the enhanced image from the image processor 103 can also undergo image merging, color adjustment, and other image processing before being presented to the observer through a display device.
  • the image processor 103 performs white balance processing on the image Is, specifically by adjusting the ratio of each color channel in the image Is to amplify the image signal in the second region of the image Is, such as amplifying the fluorescence signal in the fluorescence region of the image Is, thereby amplifying the color difference between the image signal in the first region and the image signal in the second region of the image Is.
  • The color channels include three channels: R, G, and B.
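As an illustrative sketch of this white-balance-style channel scaling (the gain values are hypothetical, not taken from the patent):

```python
import numpy as np

def white_balance(img, gains=(1.0, 1.4, 1.2)):
    """Scale the R, G, and B channels by per-channel gains.

    Boosting the channels that carry most of the fluorescence signal
    amplifies the colour difference between the fluorescent and
    non-fluorescent regions.
    """
    out = img.astype(np.float32) * np.asarray(gains, dtype=np.float32)
    return np.clip(out, 0.0, 1.0)

pixel = np.array([[[0.5, 0.4, 0.3]]], dtype=np.float32)  # one RGB pixel
balanced = white_balance(pixel)
```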
  • the linear color correction processing of the image processor 103 on the image Is specifically includes: correcting the image Is acquired by the imaging unit 102 using a linear color correction matrix to improve the color difference between the first region R1 and the second region R2 in the acquired image Is.
  • a mapping relationship between the color space of the input image and the target color space can be established.
  • linear color correction uses a 3×3 linear color correction matrix. Each element of the linear color correction matrix represents the weight relationship between an input channel and a target color channel. The corrected color value is obtained by multiplying each pixel value of the input image by the linear color correction matrix.
  • the color difference between the fluorescent and non-fluorescent regions in the image Is can be improved, and the color of the fluorescent region can be corrected to the target color, such as green or blue.
  • the color of the non-fluorescent region can be corrected to black and white, or pseudo-color, or color with the highest possible color fidelity.
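A minimal sketch of such a linear correction (the matrix entries are hypothetical; a real matrix would be calibrated for the sensor and the chosen target colours):

```python
import numpy as np

# Hypothetical 3x3 colour-correction matrix: row i holds the weights of
# the input R, G, B channels in output channel i. This example pushes
# green-dominant (fluorescence-enhanced) signal further toward green.
CCM = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 1.1, -0.3],
    [0.0, 0.0, 1.0],
])

def apply_ccm(img, ccm=CCM):
    """Multiply each RGB pixel by the linear colour-correction matrix."""
    h, w, _ = img.shape
    corrected = img.reshape(-1, 3) @ ccm.T
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)

pixel = np.array([[[0.2, 0.6, 0.3]]])  # one green-dominant pixel
out = apply_ccm(pixel)
```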
  • the image processor 103 performs nonlinear color mapping processing on the image Is, specifically including: establishing a color mapping table; and performing a nonlinear transformation on the RGB values of the image Is based on the color mapping table and a preset nonlinear mapping algorithm to enhance the color difference between the first and second regions in the image Is, such as enhancing the color difference between fluorescent and non-fluorescent regions in the image Is.
  • nonlinear three-dimensional color mapping may be used for the nonlinear color mapping processing of the image Is. First, a color mapping table is established.
  • then a preset nonlinear transformation algorithm is used to output the target R′G′B′ values, thereby obtaining an enhanced image.
  • the color difference between fluorescent and non-fluorescent regions in the same image can be improved, and the color of the fluorescent region can be corrected to the target color, such as green or blue.
  • the color of the non-fluorescent region can be corrected to black and white, or pseudo-color, or color with the highest possible color fidelity.
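The disclosure describes a three-dimensional colour mapping table; as a simplified one-dimensional sketch, a per-channel lookup table built from a gamma curve illustrates the same table-plus-nonlinear-transform idea (the gamma value is hypothetical):

```python
import numpy as np

def apply_nonlinear_map(values, gamma=0.5, size=256):
    """Build a lookup table from a nonlinear (gamma) curve and map the
    input channel values through it by linear interpolation."""
    x = np.linspace(0.0, 1.0, size)   # table of input levels
    lut = x ** gamma                  # nonlinear mapping table
    return np.interp(values, x, lut)

channel = np.array([0.0, 0.25, 1.0])
mapped = apply_nonlinear_map(channel)
```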
  • Figure 3 illustrates the signal intensity of each channel before and after image enhancement processing, using visible light (Lax) as the first light and fluorescence (Lby) as an example.
  • the image processor 103 receives each frame image Is from the image sensor 102b.
  • In the first region of image Is, which has no fluorescence, the RGB channels of the pixels contain only visible light information.
  • In the second region of image Is, which has fluorescence, the RGB channels of the pixels contain both visible light and fluorescence information (in Figure 3, fluorescence information is represented by the shaded areas on the R, G, and B channels of the second region).
  • image enhancement processing of image Is may involve adjusting the first region R1 to a colored region and the second region R2 to a green region.
  • the first region R1 may be adjusted to a grayscale region and the second region R2 to a green region.
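A sketch of this kind of region recolouring, assuming the fluorescent-region mask is already known (how the mask is derived is not shown here):

```python
import numpy as np

GREEN = np.array([0.0, 1.0, 0.0])

def recolor(img, fluor_mask):
    """Render non-fluorescent pixels as grayscale and fluorescent
    pixels as a fixed green, as one possible enhancement target."""
    out = img.copy()
    gray = img.mean(axis=-1, keepdims=True)  # per-pixel luminance
    out[~fluor_mask] = gray[~fluor_mask]     # first region -> grayscale
    out[fluor_mask] = GREEN                  # second region -> green
    return out

img = np.array([[[0.9, 0.6, 0.3], [0.2, 0.5, 0.2]]])
mask = np.array([[False, True]])             # second pixel is fluorescent
enhanced = recolor(img, mask)
```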
  • the above embodiment images the target region S based on visible light Lax and fluorescence Lby. After obtaining the image Is of the target region S, image enhancement processing is performed directly on the image Is, thereby enhancing the difference between the image signal of the first region R1 and the image signal of the second region R2. This allows the user to more intuitively distinguish the first region R1 corresponding to the first object O1 and the second region R2 corresponding to the second object O2 in the image Is, improving the recognition between the first region R1 and the second region R2.
  • the visible light source 101a and the excitation source 101b can synchronously illuminate the target region S.
  • the target region S provides mixed light under the synchronous illumination of the visible light source 101a and the excitation source 101b.
  • This mixed light includes visible light Lax and fluorescence Lby provided by the target region S under the synchronous illumination of the visible light source 101a and the excitation source 101b.
  • the lens 102a of the imaging unit 102 captures the mixed light from the target region S.
  • the image sensor 102b images the target region S based on the mixed light captured by the lens 102a, obtaining an image Is.
  • the image processor 103 performs image processing on the image Is obtained by the image sensor 102b to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2.
  • the image sensor 102b is a single image sensor.
  • a visible light source 101a and an excitation source 101b are simultaneously used to illuminate the target region S.
  • the image processor 103 directly processes the image obtained by the image sensor 102b to enhance the difference between the image signals of the first region R1 and the second region R2.
  • Because image enhancement processing is performed directly on the image obtained by the image sensor, without multi-frame processing and synthesis, the complexity of image enhancement processing is reduced. Furthermore, since a single frame carries both visible light and fluorescence information, only that one frame needs to be enhanced, which further simplifies processing and improves image processing efficiency. And since a single image sensor 102b can obtain an image Is that includes both visible light and fluorescence information, hardware size and cost are reduced, and pipelined processing of multiple frames is facilitated, improving the efficiency and flexibility of the image processing process.
  • the imaging system 10 further includes a controller (not shown) configured to control the imaging unit 102 to image the target region S.
  • the controller is further configured to adjust the intensity of the first light source 101a and/or the second light source 101b to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 of the image Is acquired by the imaging unit 102. For example, under illumination by the first light source 101a and the second light source 101b of the same intensity, if the signal intensity of the first light is relatively strong and the signal intensity of the second light is relatively weak, the intensity of the first light source 101a can be reduced, and/or the intensity of the second light source 101b can be increased.
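The adjustment rule in this bullet can be sketched as a small closed-loop controller: compare the two signal strengths and nudge the source intensities in opposite directions. The function name, the tolerance band, and the proportional step size are illustrative assumptions, not values from the disclosure.

```python
def balance_source_intensities(first_signal, second_signal,
                               first_intensity, second_intensity,
                               step=0.1, tolerance=0.05):
    """One iteration of the intensity adjustment described above.

    If the first light's signal dominates, the first source is weakened
    and/or the second source is strengthened, and vice versa; signals
    within the tolerance band leave the intensities unchanged.
    """
    if first_signal > second_signal * (1 + tolerance):
        first_intensity *= (1 - step)   # reduce the dominant first source
        second_intensity *= (1 + step)  # boost the weaker second source
    elif second_signal > first_signal * (1 + tolerance):
        first_intensity *= (1 + step)
        second_intensity *= (1 - step)
    return first_intensity, second_intensity
```

Run once per frame (or per exposure), such a loop would drive the two image signals toward comparable strength, widening the usable difference between the first and second regions.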
  • the endoscope system includes a light source device 110, an imaging module 120, and an image processing device 130.
  • the light source device 110 includes a light source 101 of the imaging system 10
  • the imaging module 120 includes an imaging unit 102 of the imaging system 10
  • the image processing device 130 includes an image processor 103 of the imaging system 10.
  • the endoscope system may further include a display unit (not shown) configured to receive and display images processed by the image processing device 130.
  • the endoscope system may further include a controller (not shown) configured to control the imaging module 120 to image the target region S.
  • the controller is further configured to control the light source device 110 to adjust the intensity of the first light source 101a and/or the second light source 101b to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 of the image Is acquired by the imaging unit 102. For example, under illumination by the first light source 101a and the second light source 101b of the same intensity, if the signal intensity of the first light is relatively strong and the signal intensity of the second light is relatively weak, the intensity of the first light source 101a can be reduced, and/or the intensity of the second light source 101b can be increased.
  • the imaging system 10 described above can be applied to a surgical robot system 20.
  • the surgical robot system 20 may include the imaging system 10 and a display unit 201.
  • the imaging system 10 can send the image Is processed by the image processor 103 to the display unit 201 for display.
  • the imaging unit 102 of the imaging system 10 may be an endoscope or a microscope.
  • the surgical robot system 20 may include a robotic arm system 202, which includes one or more robotic arms 202a.
  • the imaging unit 102 can be held on any one of the robotic arms 202a. By adjusting the end-effector position and orientation of the robotic arm 202a, the orientation of the imaging unit 102 can be changed, thereby controlling the imaging unit 102 to image the target region S in a specific orientation.
  • the surgical robot system 20 may also include a console 203, on which the surgeon can operate to adjust the end-effector position and orientation of the robotic arm 202a.
  • the surgical robot system 20 may also include a controller (not shown) for controlling the imaging system 10 to image the target region S.
  • the controller mentioned above can be the console 203 of the surgical robot system 20, or it can be the control unit integrated into the imaging unit 102.
  • the controller can be the main unit of the endoscope.
  • This disclosure also provides an imaging method. As shown in FIG. 6, the imaging method includes:
  • Step S1: Illuminate the target region S with a light source, which includes a first light source 101a and a second light source 101b.
  • the target region S provides first light under the illumination of the first light source 101a and second light under the illumination of the second light source 101b.
  • the target region S includes a first object O1 and a second object O2.
  • the first object O1 and the second object O2 provide light with different spectral characteristics under the illumination of the second light source 101b. Illuminating the target region S with the light source can be triggered automatically or manually, so that the light source illuminates the target region S once it is turned on.
  • Step S2: Acquire an image Is of the target region S.
  • the image Is includes a first region R1 corresponding to the first object O1 and a second region R2 corresponding to the second object O2.
  • the first region R1 and the second region R2 have different image signals.
  • Step S3: Perform image enhancement processing on the acquired image Is to enhance the image signal of the first region R1 and the image signal of the second region R2, obtaining an enhanced image.
  • the acquired image Is is not subjected to compositing processing during the process of performing image enhancement processing on the acquired image Is to obtain the enhanced image.
  • the acquired image Is is only one image during the image enhancement process.
  • acquiring the image Is of the target region includes: receiving mixed light from the target region S, the mixed light including first light and second light provided by the target region S under the synchronous illumination of the first light source 101a and the second light source 101b; and imaging the target region S based on the mixed light to obtain an image of the target region S.
  • imaging the target region S based on mixed light to obtain an image of the target region S includes: imaging the target region S based on mixed light using a single image sensor to obtain an image of the target region S.
  • the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel.
  • the red signal channel can sense the red band of the first light included in the mixed light
  • the green signal channel can sense the green band of the first light included in the mixed light
  • the blue signal channel can sense the blue band of the first light included in the mixed light
  • at least one of the red, green, and blue signal channels can sense the second light in the mixed light.
  • a single image sensor includes a red signal channel, a green signal channel, and a blue signal channel.
  • the red signal channel can sense the red band of the first light included in the mixed light
  • the green signal channel can sense the green band of the first light included in the mixed light
  • the blue signal channel can sense the blue band of the first light included in the mixed light.
  • Each of the red, green, and blue signal channels can sense the second light in the mixed light.
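As a toy numeric model of the single-sensor arrangement just described — each channel sensing its own visible band of the first light, and all three channels also responding to the near-infrared second light — one might write (the per-channel NIR sensitivities here are invented for illustration):

```python
import numpy as np

# Per-channel sensitivity of a hypothetical single RGB sensor to the
# second (near-infrared) light; the numbers are illustrative only.
NIR_RESPONSE = np.array([0.30, 0.25, 0.20])  # R, G, B

def sensor_response(visible_rgb, nir_intensity):
    """Raw RGB value of one pixel under mixed illumination: the visible
    contribution plus each channel's response to the second light."""
    return np.asarray(visible_rgb, dtype=float) + NIR_RESPONSE * nir_intensity
```

A fluorescing pixel (high `nir_intensity`) is lifted in all three channels relative to a non-fluorescing neighbour, which is exactly the per-region signal difference that the later enhancement step amplifies.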
  • the target region S can reflect the excitation light Lb emitted by the second light source 101b.
  • the excitation light Lbx, i.e., the portion of the excitation light Lb reflected by the target region S, is filtered out before the target region S is imaged based on the first light and the second light from the target region S to obtain its image.
  • image enhancement processing includes at least one of the following: white balance processing, linear color correction, and nonlinear color mapping.
  • white balance processing of image Is includes adjusting the ratios of each color channel in the acquired image Is to amplify the image signal of the second region in the acquired image.
  • the color channels include three channels: R, G, and B.
  • linear color correction processing of image Is includes: correcting the acquired image using a linear color correction matrix to improve the color difference between the first region and the second region in the acquired image.
  • performing nonlinear color mapping processing on image Is includes: establishing a color mapping table, and performing a nonlinear transformation on the acquired image based on the color mapping table and a preset nonlinear transformation algorithm to enhance the color difference between the first region and the second region in the acquired image.
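A minimal sketch of the three enhancement operations listed above, applied to a single frame in the order white balance → linear color correction → nonlinear mapping. The per-channel gains, the 3×3 matrix, and the power-law curve standing in for a color mapping table are placeholder values, not parameters from the disclosure.

```python
import numpy as np

def enhance(image, wb_gains, ccm, gamma=0.8):
    """Single-frame enhancement: white balance, then a linear color
    correction matrix, then a nonlinear mapping (a power law standing
    in for a color mapping table)."""
    img = image.astype(float) / 255.0
    img = img * np.asarray(wb_gains, dtype=float)                  # white balance: per-channel ratio
    img = np.clip(img @ np.asarray(ccm, dtype=float).T, 0.0, 1.0)  # linear color correction
    img = img ** gamma                                             # nonlinear color mapping
    return np.rint(img * 255.0).astype(np.uint8)
```

With identity parameters (unit gains, identity matrix, `gamma=1.0`) the frame passes through unchanged; raising a single channel's gain or choosing an off-diagonal matrix amplifies the color difference between the two regions, as described above.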
  • the image signal includes at least one of the following: brightness, chroma, saturation, contrast, and boundary pixels of different regions.
  • the first light and the second light have different wavelengths.
  • the first light is a portion of the visible spectrum, which ranges from 380 nm to 780 nm, and the second light is in the band from 800 nm to 900 nm.
  • the first light is yellow light with a wavelength between 550 nm and 650 nm; or the first light is red light with a wavelength between 600 nm and 660 nm; or the first light is blue light with a wavelength between 380 nm and 470 nm.
  • image enhancement processing is performed on image Is to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2, including: adjusting the first region R1 to a color region and adjusting the second region R2 to a green region; or adjusting the first region R1 to a grayscale region and adjusting the second region R2 to a green region.
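The second display option above (R1 rendered as a grayscale region, R2 rendered as a green region) can be sketched as follows; the boolean mask marking R2 pixels is assumed to come from an upstream segmentation step that this sketch does not cover.

```python
import numpy as np

def recolor_regions(image, second_region_mask):
    """Render the first region R1 in grayscale and the second region R2
    in green. `second_region_mask` is a boolean array marking R2 pixels."""
    out = image.copy()
    gray = image.mean(axis=-1, keepdims=True).astype(image.dtype)
    out[~second_region_mask] = gray[~second_region_mask]  # R1 -> grayscale
    out[second_region_mask] = (0, 255, 0)                 # R2 -> green
    return out
```

Collapsing R1 to grayscale while painting R2 a saturated green maximizes the visual contrast between the two regions regardless of their original colors.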
  • the method further includes: adjusting the intensity of the first light source 101a and/or the second light source 101b to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2.
  • This disclosure also provides a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the methods described in any of the foregoing embodiments.
  • Figure 7 illustrates a more specific hardware structure diagram of a computing device provided in an embodiment of this disclosure.
  • the device may include: a processor 31, a memory 32, an input/output interface 33, a communication interface 34, and a bus 35.
  • the processor 31, memory 32, input/output interface 33, and communication interface 34 are interconnected internally via the bus 35.
  • the processor 31 can be implemented using a general-purpose central processing unit (CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute relevant programs to implement the technical solutions provided in the embodiments of this disclosure.
  • the processor 31 may also include a graphics card, such as an Nvidia Titan X graphics card or a 1080Ti graphics card.
  • the memory 32 can be implemented as a read-only memory (ROM), random access memory (RAM), static storage device, dynamic storage device, etc.
  • the memory 32 can store the operating system and other applications.
  • the relevant program code is stored in the memory 32 and is called and executed by the processor 31.
  • Input/output interface 33 is used to connect input/output modules to realize information input and output.
  • Input/output modules can be configured as components in the device (not shown in the figure) or externally connected to the device to provide corresponding functions.
  • Input devices may include keyboards, mice, touch screens, microphones, various sensors, etc.
  • output devices may include displays, speakers, vibrators, indicator lights, etc.
  • Communication interface 34 is used to connect a communication module (not shown in the figure) to enable communication between this device and other devices.
  • the communication module can communicate via wired means (such as USB (Universal Serial Bus), network cable, etc.) or wireless means (such as mobile network, Wi-Fi, Bluetooth, etc.).
  • Bus 35 includes a pathway for transmitting information between various components of the device (e.g., processor 31, memory 32, input/output interface 33, and communication interface 34).
  • the above-described device only shows the processor 31, memory 32, input/output interface 33, communication interface 34, and bus 35, in specific implementations, the device may also include other components necessary for normal operation. Furthermore, those skilled in the art will understand that the above-described device may only include the components necessary for implementing the embodiments of this disclosure, and not necessarily all the components shown in the figures.
  • This disclosure also provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the methods described in any of the foregoing embodiments.
  • Computer-readable media, including both permanent and non-permanent, removable and non-removable media, can store information by any method or technology.
  • Information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory, read-only memory, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • a typical implementation device is a computer, which can take the form of a personal computer, laptop computer, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet computer, wearable device, or any combination of these devices.
  • the various embodiments in this disclosure are described in a progressive manner. Similar or identical parts between embodiments can be referred to mutually. Each embodiment focuses on describing the differences from other embodiments.
  • the device embodiments are basically similar to the method embodiments, so the description is relatively simple; relevant parts can be referred to the descriptions in the method embodiments.
  • the device embodiments described above are merely illustrative.
  • the modules described as separate components may or may not be physically separate.
  • the functions of each module can be implemented in one or more software and/or hardware. Alternatively, some or all of the modules can be selected to achieve the purpose of this embodiment according to actual needs. Those skilled in the art can understand and implement this without creative effort.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Robotics (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Endoscopes (AREA)

Abstract

Provided are an imaging method and system and a surgical robot system. The method comprises: irradiating a target area using a light source, wherein the light source comprises a first light source and a second light source, the target area provides first light under the irradiation of the first light source, the target area provides second light under the irradiation of the second light source, the target area comprises a first object and a second object, and the first object and the second object provide light with different spectral characteristics under the irradiation of the second light source; acquiring an image of the target area, wherein the acquired image comprises a first area corresponding to the first object and a second area corresponding to the second object, and the first area and the second area have different image signals; and performing image enhancement processing on the acquired image to enhance the image signal of the first area and the image signal of the second area in the acquired image, thus obtaining an enhanced image.

Description

Imaging method and system and surgical robot system

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on and claims priority to Chinese Patent Application No. 202410513112.0, filed on April 25, 2024, the entire contents of which are hereby incorporated herein by reference.

Technical Field

This disclosure relates to the field of image processing technology, and in particular to imaging methods and systems and surgical robot systems.

Background

In the related art, for objects within a target region that are difficult to observe directly with the naked eye, a specially designed optical system can be used to excite and detect the special light emitted by the objects, thereby achieving visualization. However, the target region may include multiple objects, and it is difficult for the related art to visually distinguish different objects accurately.

Summary of the Invention

In a first aspect, embodiments of this disclosure provide an imaging method, the method comprising: illuminating a target region with a light source, the light source including a first light source and a second light source, the target region providing first light under the illumination of the first light source, the target region providing second light under the illumination of the second light source, the target region including a first object and a second object, the first object and the second object providing light with different spectral characteristics under the illumination of the second light source; acquiring an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; and performing image enhancement processing on the acquired image to enhance the image signal of the first region and the image signal of the second region in the acquired image, thereby obtaining an enhanced image.

In a second aspect, embodiments of this disclosure provide an imaging system, the system comprising: a light source configured to illuminate a target region, the light source including a first light source and a second light source, the target region providing first light under illumination by the first light source and second light under illumination by the second light source, the target region including a first object and a second object, the first object and the second object providing light with different spectral characteristics under illumination by the second light source; an imaging unit configured to acquire an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; and an image processor configured to perform image enhancement processing on the acquired image to enhance the image signal of the first region and the image signal of the second region in the acquired image, thereby obtaining an enhanced image.

In a third aspect, embodiments of this disclosure provide a surgical robot system, the system including the aforementioned imaging system and further including a display unit that receives and displays images from the imaging system.

In the embodiments of this disclosure, the target region is imaged based on the first light and the second light. After the image of the target region is obtained, the image is further processed to enhance the difference between the image signal of the first region and the image signal of the second region. This allows the user to more intuitively distinguish, visually, the first region corresponding to the first object from the second region corresponding to the second object, improving the distinguishability between the two regions.

It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and are not intended to limit this disclosure.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and form part of this disclosure, illustrate embodiments consistent with this disclosure and, together with the description, serve to explain the technical solutions of this disclosure.

Figure 1 is a schematic diagram of an imaging system according to an embodiment of this disclosure.

Figure 2 is a schematic diagram of the response intensity of an image sensor to light of different wavelength bands according to an embodiment of this disclosure.

Figure 3 is a schematic diagram of the pixel values of each channel of a pixel before and after processing according to an embodiment of this disclosure.

Figure 4 is a schematic diagram of an endoscope system according to an embodiment of this disclosure.

Figure 5 is a schematic diagram of a surgical robot system according to an embodiment of this disclosure.

Figure 6 is a schematic diagram of an imaging method according to an embodiment of this disclosure.

Figure 7 is a schematic diagram of a computer device according to an embodiment of this disclosure.

Detailed Description

Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure. Rather, they are merely examples of systems and methods consistent with some aspects of this disclosure as detailed in the appended claims.

The terminology used in this disclosure is for the purpose of describing particular embodiments only and is not intended to limit this disclosure. The singular forms “a,” “said,” and “the” used in this disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term “and/or” as used herein refers to and includes any or all possible combinations of one or more of the associated listed items. In addition, the term “at least one” herein means any one of a plurality, or any combination of at least two of a plurality.

It should be understood that although the terms first, second, third, etc., may be used in this disclosure to describe various information, such information should not be limited to these terms. These terms are used only to distinguish information of the same type from one another. For example, without departing from the scope of this disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word “if” as used herein may be interpreted as “upon,” “when,” or “in response to determining.”

To enable those skilled in the art to better understand the technical solutions in the embodiments of this disclosure, and to make the above objectives, features, and advantages of the embodiments of this disclosure more apparent and understandable, the technical solutions in the embodiments of this disclosure are described in further detail below with reference to the accompanying drawings.

In the related art, for objects within a target region that are difficult to observe directly with the naked eye, a specially designed optical system can be used to excite and detect the special light emitted by the objects, thereby achieving visualization. In practical application scenarios, the target region may be illuminated by multiple light sources and thus provide multiple kinds of light. Based on the multiple kinds of light provided by the target region, the target region can be imaged to obtain an image of the target region, thereby visualizing the objects within it. The target region may include multiple objects, and different objects absorb and reflect light differently, so they present different image signals in the image. In theory, the regions corresponding to the individual objects can be distinguished in the image based on the differences between the image signals of those regions. However, these differences are sometimes quite subtle, making it difficult to visually and accurately distinguish the regions corresponding to the individual objects.

One specific implementation of the aforementioned visualization is fluorescence imaging. Fluorescence imaging refers to a technique in which objects that cannot be directly distinguished are labeled with fluorescent dyes, and a specially designed optical system is then used to excite and detect the fluorescence, thereby achieving visualization. Currently, fluorescence imaging technology is applied in many fields. In some application scenarios, a fluorescent dye can be introduced into the target region by injection, application, or other means, and the target region can be illuminated with a visible light source and an excitation light source. The target region can reflect visible light under the illumination of the visible light source and emit fluorescence under the illumination of the excitation light source. Imaging can be performed based on the visible light and fluorescence from the target region to obtain an image of the target region, which can then be analyzed and processed. For example, in the field of surgery, the target region can be the surgical region, and the image of the target region can help doctors identify normal body tissue and diseased tissue, thereby improving surgical accuracy.

The target region may include multiple objects, and different objects may absorb the fluorescent dye to different degrees. Therefore, under the excitation light, the intensity of the fluorescence emitted by different objects may differ. For example, in the field of surgery, the multiple objects may include normal body tissue and diseased tissue. Typically, diseased tissue absorbs the fluorescent dye at a higher rate than normal body tissue; therefore, normal body tissue emits weaker fluorescence under the excitation light, while diseased tissue emits stronger fluorescence. As a result, the region corresponding to normal body tissue and the region corresponding to diseased tissue in the image of the target region present different image signals. In theory, the individual regions can be distinguished in the image based on the differences between their image signals. However, because the image sensor has different transmittance for visible light and fluorescence, the differences between these image signals are not pronounced enough, making it difficult to visually and accurately distinguish the image regions corresponding to different objects.

It can be understood that the above fluorescence imaging scenario is merely an illustrative example. In other application scenarios, visualization can also be achieved in other ways, which are not enumerated here. For ease of description, fluorescence imaging technology is used as an example below.

为了能够从视觉上准确地区分出不同的对象对应的图像区域,本公开实施例提供一种成像系统10。图1示出了一种成像系统10的示意图,该成像系统10包括光源101、成像单元102和图像处理器103。其中,To accurately distinguish image regions corresponding to different objects visually, embodiments of this disclosure provide an imaging system 10. Figure 1 shows a schematic diagram of an imaging system 10, which includes a light source 101, an imaging unit 102, and an image processor 103.

光源101包括第一光源101a和第二光源101b,第一光源101a和第二光源101b均可以照射目标区域S。目标区域S在第一光源101a的照射下提供第一光,在第二光源101b的照射下提供第二光,该第一光和第二光具有不同的波段。该目标区域S包括第一对象和第二对象,该第一对象和第二对象在第二光源101b的照射下提供不同光谱特性的光。The light source 101 includes a first light source 101a and a second light source 101b, both of which can illuminate the target area S. The target area S receives first light under the illumination of the first light source 101a and second light under the illumination of the second light source 101b, the first and second lights having different wavelengths. The target area S includes a first object and a second object, which provide light with different spectral characteristics under the illumination of the second light source 101b.

In some embodiments, the first light source 101a may be, for example, a visible light source, and the second light source 101b may be, for example, an excitation light source. The target region S provides visible light under illumination by the visible light source and fluorescence under illumination by the excitation light source, the visible light and the fluorescence lying in different wavelength bands. However, this is merely an example: the first light source 101a and the second light source 101b are not limited to a visible light source and an excitation light source, respectively. In the following description, the embodiments of the present disclosure are described taking the first light source 101a as a visible light source and the second light source 101b as an excitation light source. It will be understood that when the first light source 101a and the second light source 101b are other, different light sources, the embodiments operate on the same principles.

In some embodiments, the visible light source 101a may emit visible light within a sub-band of the 380 nm to 780 nm range. Different wavelength bands of the visible light La may be selected according to different requirements. Optionally, the visible light La is yellow light in the 550 nm to 650 nm band, red light in the 600 nm to 660 nm band, or blue light in the 380 nm to 470 nm band.

In some embodiments, the excitation light source 101b can emit excitation light Lb, which may be any excitation light capable of causing the target region S to emit fluorescence, such as infrared light, near-infrared light, or ultraviolet light.

In some embodiments, the target region S, when irradiated by the visible light La emitted by the visible light source 101a, reflects visible light Lax; this visible light Lax is the first light. The target region S emits fluorescence Lby when irradiated by the excitation light Lb emitted by the excitation light source 101b; this fluorescence Lby is the second light. The visible light Lax and the fluorescence Lby lie in different wavelength bands. For example, the visible light Lax lies between 380 nm and 780 nm, while the fluorescence Lby lies between 800 nm and 900 nm. It should be understood that the values in this embodiment are merely illustrative and are not intended to limit the present disclosure.

The target region S includes a first object O1 and a second object O2. Both the first object O1 and the second object O2 reflect visible light Lax when illuminated by the visible light La emitted by the visible light source 101a. The first object O1 and the second object O2 provide light with different spectral characteristics when illuminated by the excitation light source 101b: for example, the first object O1 does not emit fluorescence Lby under illumination by the excitation light source 101b, while the second object O2 does emit fluorescence Lby.

In some embodiments, the target region S is a target region in a surgical scene, the first object O1 is tissue in the surgical scene that has not been fluorescently labeled, and the second object O2 is tissue in the surgical scene that has been fluorescently labeled. The first object O1 therefore does not fluoresce under illumination by the excitation light source 101b, while the second object O2 does, so that the first object O1 and the second object O2 provide light with different spectral characteristics under illumination by the excitation light source 101b.

In some embodiments, fluorescent markers such as indocyanine green, fluorescein, or rhodamine may be used to fluorescently label tissue in the surgical scene. For example, the first object O1 is normal tissue within the surgical region, and the second object O2 is diseased tissue, such as tumor tissue, within the surgical region. In other examples, the first object O1 and the second object O2 may be other types of objects. Taking normal tissue as the first object O1 and diseased tissue as the second object O2 as an example, the diseased tissue within the target region S may be labeled with a fluorescent marker (such as indocyanine green, fluorescein, or rhodamine). The normal tissue does not fluoresce under illumination by the excitation light source 101b, while the diseased tissue labeled with the fluorescent marker does.

The imaging unit 102 acquires an image Is of the target region S. In some embodiments, the imaging unit 102 may include a lens 102a and an image sensor 102b. The visible light Lax and fluorescence Lby from the target region S can be captured by the lens 102a and imaged by the image sensor 102b to obtain the image Is. As an illustrative example, the image sensor 102b may include, but is not limited to, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) image sensor. The image Is includes a first region R1 corresponding to the first object O1 and a second region R2 corresponding to the second object O2, the first region R1 and the second region R2 having different image signals. In some embodiments, the image signals may include, but are not limited to, at least one of brightness, chroma, saturation, contrast, and the boundary pixels between different regions.

The imaging unit 102 images the target region S based on the visible light Lax and the fluorescence Lby provided by the target region S to obtain the acquired image Is. Because the first object O1 and the second object O2 provide light with different spectral characteristics under illumination by the excitation light source 101b (for example, the first object O1 does not fluoresce while the second object O2 does), the first region R1 corresponding to the first object O1 in the image Is contains no fluorescence signal (and is also called the non-fluorescent region), while the second region R2 corresponding to the second object O2 contains a fluorescence signal (and is also called the fluorescent region). The first region R1 and the second region R2 in the image Is thus have different image signals. For example, where the first object O1 is unlabeled tissue in a surgical scene and the second object O2 is fluorescently labeled tissue, the first object O1 does not fluoresce under illumination by the excitation light source 101b while the second object O2 does, so that in the acquired image Is the first region R1 corresponding to the first object O1 carries no fluorescence signal and is called the non-fluorescent region, while the second region R2 corresponding to the second object O2 carries a fluorescence signal and is called the fluorescent region, giving the two regions different image signals.

In some embodiments, the image sensor 102b is a single image sensor; that is, the number of image sensors 102b used to image the target region S may be equal to one. The single image sensor may include multiple channels. In some embodiments, each channel can image the target region S based on both the visible light Lax and the fluorescence Lby. In other embodiments, at least one of the channels can image the target region S based on the visible light Lax, and at least one other channel can image the target region S based on the fluorescence Lby. Optionally, the multiple channels may include at least two of a red signal (R) channel, a green signal (G) channel, and a blue signal (B) channel. The multiple channels may form a Bayer array, with each channel sensing one of the red, green, or blue bands of the visible light Lax; at least one of the red, green, and blue signal channels may sense the fluorescence Lby, or each of them may sense the fluorescence Lby. Optionally, the multiple channels may instead include at least two of a red signal (R) channel, a first green signal (Gr) channel, a second green signal (Gb) channel, and a blue signal (B) channel. These channels may likewise form a Bayer array, with each channel sensing one of the red, first green, second green, or blue color information in the visible light Lax; at least one of the signal channels may sense the fluorescence Lby, or each of them may sense the fluorescence Lby. Figure 2 is a schematic diagram of the image sensor 102b sensing the visible light Lax and the fluorescence Lby.
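As a concrete illustration of the mosaic layout just described, the following sketch splits an RGGB Bayer mosaic into its four sub-channels. The layout and pixel values here are hypothetical placeholders, not taken from the disclosure; the sketch only shows how a single sensor's mosaic carries per-channel information that downstream processing can address separately.

```python
def split_rggb(mosaic):
    """Split a 2D RGGB Bayer mosaic into R, Gr, Gb, B sub-images."""
    r  = [row[0::2] for row in mosaic[0::2]]  # even rows, even cols
    gr = [row[1::2] for row in mosaic[0::2]]  # even rows, odd cols
    gb = [row[0::2] for row in mosaic[1::2]]  # odd rows, even cols
    b  = [row[1::2] for row in mosaic[1::2]]  # odd rows, odd cols
    return r, gr, gb, b

# Hypothetical 4x4 raw mosaic
mosaic = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
    [12, 22, 13, 23],
    [32, 42, 33, 43],
]
r, gr, gb, b = split_rggb(mosaic)
print(r)  # [[10, 11], [12, 13]]
print(b)  # [[40, 41], [42, 43]]
```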

When a single image sensor is used to image the target region S based on the visible light Lax and the fluorescence Lby provided by the target region S to obtain the acquired image Is, the first object in the target region S only reflects visible light and does not fluoresce, while the second object both reflects visible light and fluoresces. Since the single image sensor can sense both the visible light Lax and the fluorescence Lby, the first region of the acquired image Is, corresponding to the first object, contains a visible light signal but no fluorescence signal and is called the non-fluorescent region, while the second region, corresponding to the second object, contains both a visible light signal and a fluorescence signal and is called the fluorescent region. A single image Is obtained by the single image sensor therefore contains both a fluorescent region and a non-fluorescent region.

In some embodiments, if the visible light source 101a emits red light in the 600 nm to 660 nm band, the target region S provides red light and fluorescence under illumination by the red light and the excitation light, and the imaging unit 102 acquires the image Is of the target region S based on that red light and fluorescence. Because tissue in the target region of a surgical scene generally absorbs red light at a low rate, the image Is acquired in this way has relatively high brightness.

In some embodiments, if the visible light source 101a emits blue light in the 380 nm to 470 nm band, the target region S provides blue light and fluorescence under illumination by the blue light and the excitation light, and the imaging unit 102 acquires the image Is of the target region S based on that blue light and fluorescence. Because tissue in the target region of a surgical scene generally absorbs blue light at a high rate, the image Is acquired in this way has relatively high contrast and sharpness.

In some embodiments, if the visible light source 101a emits yellow light in the 550 nm to 650 nm band, the target region S provides yellow light and fluorescence under illumination by the yellow light and the excitation light, and the imaging unit 102 acquires the image Is of the target region S based on that yellow light and fluorescence. Because the absorption rate of yellow light by tissue in the target region of a surgical scene lies between that of blue light and red light, the image Is acquired in this way preserves image brightness while also maintaining sharpness and contrast.

In some embodiments, the imaging unit 102 further includes a filter disposed in the optical path upstream of the image sensor 102b, for filtering out the excitation light Lbx coming from the target region S. In this embodiment, the visible light Lax and/or fluorescence Lby from the target region S may also contain excitation light Lbx reflected by the target region S. When the target region S is imaged, this excitation light Lbx forms stray light and interferes with the imaging process. The filter in the optical path upstream of the image sensor 102b therefore filters out the excitation light Lbx contained in the visible light Lax and/or fluorescence Lby from the target region S, reducing the interference of the excitation light Lbx with the imaging process and improving imaging quality.

The image sensor 102b can send the image Is to the image processor 103. The image processor 103 is configured to perform image enhancement processing on the image Is so as to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 in the image Is. The image enhancement processing includes, but is not limited to, at least one of the following: white balance processing, linear color correction, and nonlinear color mapping.

In some embodiments, the image processor 103 obtains the enhanced image by performing image enhancement processing directly on the image Is acquired by the imaging unit 102; that is, the image processor 103 performs no compositing on the acquired image Is in the course of producing the enhanced image. In this embodiment, the imaging unit 102 uses a single image sensor to image the target region S based on the visible light Lax and fluorescence Lby provided by the target region S, so that the single resulting image Is contains both a fluorescent region and a non-fluorescent region, and the image processor 103 applies enhancement only to that single image Is. The image processor 103 thus obtains an enhanced image in which the first region and the second region can be distinguished visually simply by applying image enhancement processing directly to the single image Is from the image sensor 102b, with no compositing required. In some embodiments, the enhanced image obtained by the image processor 103 is presented to an observer directly on a display device, and the observer can distinguish the first region from the second region directly in the presented image. In other embodiments, after the image processor 103 obtains the enhanced image, further image processing such as image merging and color adjustment may be applied before the image is presented on the display device.

In some embodiments, the white balance processing performed by the image processor 103 on the image Is specifically includes: the image processor 103 adjusts the ratios of the color channels in the image Is to amplify the image signal in the second region of the image Is, for example amplifying the fluorescence signal in the fluorescent region, thereby amplifying the color difference between the image signal of the first region and the image signal of the second region. The color channels include the three channels R, G, and B.
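The channel-ratio adjustment described above can be illustrated with a minimal sketch. The gain values are hypothetical, and the assumption that the fluorescence Lby registers mainly on the G channel is made purely for illustration; it is not specified by the disclosure.

```python
def apply_channel_gains(pixel, gains):
    """Scale an (R, G, B) pixel by per-channel gains, clipping to the 8-bit range."""
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))

# Hypothetical gains: boosting G relative to R and B widens the gap between
# pixels that carry fluorescence on G and pixels that do not.
gains = (0.9, 1.8, 0.9)
non_fluorescent = (120, 110, 100)  # visible-light-only pixel
fluorescent     = (120, 170, 100)  # visible light plus fluorescence on G

print(apply_channel_gains(non_fluorescent, gains))  # (108, 198, 90)
print(apply_channel_gains(fluorescent, gains))      # (108, 255, 90)
```

After the gains, the G-channel gap between the two pixels grows from 60 to 57 clipped counts at full saturation, while the R and B channels stay matched, so the fluorescent pixel stands out as distinctly green.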

In some embodiments, the linear color correction performed by the image processor 103 on the image Is specifically includes: correcting the image Is acquired by the imaging unit 102 using a linear color correction matrix to increase the color difference between the first region R1 and the second region R2 in the acquired image Is. In some examples, a mapping is established between the color space of the input image and a target color space. For an RGB three-channel sensor, linear color correction typically uses a 3×3 linear color correction matrix, each element of which represents the weight relating an input channel to a target color channel; multiplying each pixel value of the input image by the matrix yields the corrected color value. In this embodiment, applying linear color correction to the image Is increases the color difference between the fluorescent and non-fluorescent regions of that image and corrects the color of the fluorescent region to a target color, such as green or blue. The color of the non-fluorescent region may be corrected to black and white, to pseudo-color, or to color with the highest achievable color fidelity.
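The per-pixel 3×3 matrix multiplication described above can be sketched as follows. The matrix entries are hypothetical placeholders chosen only to show the structure, not values from the disclosure.

```python
# Hypothetical 3x3 colour-correction matrix: each row holds the weights
# mapping the input (R, G, B) to one corrected output channel.
CCM = [
    [1.2, -0.1, -0.1],   # weights producing corrected R
    [-0.2, 1.4, -0.2],   # weights producing corrected G
    [-0.1, -0.1, 1.2],   # weights producing corrected B
]

def apply_ccm(pixel, ccm):
    """Multiply an (R, G, B) pixel by a 3x3 colour-correction matrix, clipping to 8-bit."""
    return tuple(
        max(0, min(255, round(sum(w * v for w, v in zip(row, pixel)))))
        for row in ccm
    )

print(apply_ccm((100, 150, 50), CCM))  # (100, 180, 35)
```

A matrix with an off-diagonal structure like this pushes green-dominated (fluorescent) pixels further toward the target green while leaving balanced (non-fluorescent) pixels comparatively unchanged.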

In some embodiments, the nonlinear color mapping performed by the image processor 103 on the image Is specifically includes: establishing a color mapping table, and nonlinearly transforming the RGB values of the image Is based on the color mapping table and a preset nonlinear mapping algorithm so as to enhance the color difference between the first and second regions of the image Is, for example the color difference between the fluorescent and non-fluorescent regions. In some examples, nonlinear three-dimensional color mapping is used: a color mapping table is first established, and for the RGB values of the image Is, a target R'G'B' is output according to the table and the preset nonlinear transformation algorithm, yielding the enhanced image. In this embodiment, nonlinear three-dimensional color mapping increases the color difference between the fluorescent and non-fluorescent regions within a single image and corrects the color of the fluorescent region to a target color, such as green or blue. The color of the non-fluorescent region may be corrected to black and white, to pseudo-color, or to color with the highest achievable color fidelity.
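The table-lookup-then-transform structure described above can be sketched as follows. The mapping table, the quantization step, and the gamma fallback are all hypothetical; real implementations typically interpolate a dense 3D lookup table rather than using one coarse bin.

```python
# Hypothetical mapping table keyed on quantised RGB: strongly green-dominated
# inputs (fluorescence-carrying colours) are pushed to a saturated target green.
MAPPING_TABLE = {
    (1, 2, 1): (0, 255, 0),
}

def quantise(pixel, step=86):
    """Coarsely bin an (R, G, B) pixel for table lookup."""
    return tuple(v // step for v in pixel)

def map_colour(pixel):
    """Look up the quantised colour; otherwise apply a nonlinear (gamma) fallback."""
    key = quantise(pixel)
    if key in MAPPING_TABLE:
        return MAPPING_TABLE[key]
    # Nonlinear fallback: a simple gamma curve on each channel
    return tuple(round(255 * (v / 255) ** 0.8) for v in pixel)

print(map_colour((130, 200, 100)))  # fluorescence-dominated bin -> (0, 255, 0)
print(map_colour((80, 90, 100)))    # non-fluorescent colour -> gamma-mapped
```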

Figure 3 schematically shows the signal intensity of each channel before and after image enhancement processing, taking the case where the first light is visible light Lax and the second light is fluorescence Lby. The image processor 103 receives each frame image Is from the image sensor 102b. In the fluorescence-free first region of the frame, the R, G, and B channels of each pixel contain only visible light information. In the fluorescent second region of the frame, the R, G, and B channels of each pixel contain both visible light and fluorescence information (represented in Figure 3 by the shaded portions of the R, G, and B channels in the second region). In some embodiments, the image enhancement processing applied to the image Is may adjust the first region R1 to a color region and the second region R2 to a green region. Alternatively, in some embodiments, the first region R1 is adjusted to a grayscale region and the second region R2 to a green region. In the above embodiments, after the target region S is imaged based on the visible light Lax and fluorescence Lby to obtain the image Is, image enhancement processing is applied directly to the image Is, enhancing the difference between the image signal of the first region R1 and the image signal of the second region R2. This enables the user to distinguish, more intuitively and by eye, the first region R1 corresponding to the first object O1 from the second region R2 corresponding to the second object O2, improving the distinguishability of the two regions.

In some embodiments, the visible light source 101a and the excitation light source 101b can illuminate the target region S simultaneously, and under this simultaneous illumination the target region S provides mixed light comprising the visible light Lax and the fluorescence Lby. The lens 102a of the imaging unit 102 captures the mixed light from the target region S, the image sensor 102b images the target region S based on the mixed light captured by the lens 102a to obtain the image Is, and the image processor 103 processes the image Is obtained by the image sensor 102b so as to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2. Here the image sensor 102b is a single image sensor.

In this embodiment, the visible light source 101a and the excitation light source 101b illuminate the target region S simultaneously, so that at any given moment the target region S provides both the visible light Lax and the fluorescence Lby. Every frame image Is obtained by the image sensor 102b therefore satisfies: the first region R1 contains visible light information, and the second region R2 contains both visible light and fluorescence information. Every acquired frame is thus free of motion smear and latency problems, improving imaging quality. At the same time, the image processor 103 processes the image from the image sensor 102b directly to enhance the difference between the image signals of the first region R1 and the second region R2. Because the enhancement is applied directly to the sensor output, the enhanced image does not pass through multi-frame processing and compositing, which simplifies the enhancement pipeline; and because a single frame suffices to process both the visible light and fluorescence information, the complexity is reduced further and image processing efficiency is improved. Since a single image sensor 102b can produce an image Is containing both visible light and fluorescence information, hardware volume and cost are effectively reduced, and pipelined processing of multiple frames becomes straightforward, improving the efficiency and flexibility of image processing.

In some embodiments, the imaging system 10 further includes a controller (not shown) configured to control the imaging unit 102 to image the target region S.

In some embodiments, the controller is further configured to adjust the intensity of the first light source 101a and/or the second light source 101b so as to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 in the image Is obtained by the imaging unit 102. For example, if, under illumination by the first light source 101a and the second light source 101b at equal intensity, the signal of the first light is relatively strong while the signal of the second light is relatively weak, the intensity of the first light source 101a can be reduced and/or the intensity of the second light source 101b increased.
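The adjustment rule in this example can be sketched as a simple controller step. The ratio threshold, step size, and normalized intensity scale are hypothetical, and the interface is a stand-in for whatever control channel the light source device actually exposes.

```python
def balance_sources(p1, p2, signal1, signal2, step=0.1, ratio_limit=1.5):
    """Return adjusted (first, second) source intensities, each in [0, 1].

    If the first light's signal dominates the second's by more than
    ratio_limit, dim the first source and brighten the second.
    """
    if signal2 > 0 and signal1 / signal2 > ratio_limit:
        p1 = round(max(0.0, p1 - step), 3)  # dim the first (e.g. visible) source
        p2 = round(min(1.0, p2 + step), 3)  # boost the second (e.g. excitation) source
    return p1, p2

print(balance_sources(0.8, 0.5, signal1=200.0, signal2=50.0))  # (0.7, 0.6)
print(balance_sources(0.8, 0.5, signal1=60.0, signal2=50.0))   # (0.8, 0.5)
```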

Embodiments of the present disclosure further provide an endoscope system. As shown in Figure 4, the endoscope system includes a light source device 110, an imaging module 120, and an image processing device 130. The light source device 110 includes the light source 101 of the imaging system 10, the imaging module 120 includes the imaging unit 102 of the imaging system 10, and the image processing device 130 includes the image processor 103 of the imaging system 10.

In some embodiments, the endoscope system may further include a display unit (not shown) configured to receive and display images processed by the image processing device 130.

In some embodiments, the endoscope system may further include a controller (not shown) configured to control the imaging module 120 to image the target region S.

In some embodiments, the controller is further configured to control the light source device 110 to adjust the intensity of the first light source 101a and/or the second light source 101b so as to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2 in the image Is obtained by the imaging unit 102. For example, if, under illumination by the first light source 101a and the second light source 101b at equal intensity, the signal of the first light is relatively strong while the signal of the second light is relatively weak, the intensity of the first light source 101a can be reduced and/or the intensity of the second light source 101b increased.

As shown in Figure 5, the imaging system 10 described above can be applied to a surgical robot system 20. The surgical robot system 20 may include the imaging system 10 and a display unit 201; the imaging system 10 can send the image Is processed by the image processor 103 to the display unit 201 for display. In some embodiments, the imaging unit 102 of the imaging system 10 may be an endoscope or a microscope. The surgical robot system 20 may include a robotic arm system 202 comprising one or more robotic arms 202a, and the imaging unit 102 can be held by any one of the robotic arms 202a. By adjusting the end position and attitude of the robotic arm 202a, the pose of the imaging unit 102 can be changed, thereby controlling the imaging unit 102 to image the target region S in a specific pose. The surgical robot system 20 may further include a console 203, at which a surgeon can operate to adjust the end position and attitude of the robotic arm 202a. In some embodiments, the surgical robot system 20 may further include a controller (not shown) for controlling the imaging system 10 to image the target region S. This controller may be the console 203 of the surgical robot system 20, or a control unit built into the imaging unit 102; for example, when the imaging unit 102 is an endoscope, the controller may be the endoscope's main unit.

本公开实施例还提供一种成像方法，参见图6，成像方法包括：Embodiments of the present disclosure further provide an imaging method. Referring to FIG. 6, the imaging method includes:

步骤S1:采用光源照射目标区域S，该光源包括第一光源101a和第二光源101b，该目标区域S在第一光源101a的照射下提供第一光，该目标区域S在第二光源101b的照射下提供第二光，该目标区域S包括第一对象O1和第二对象O2，该第一对象O1和该第二对象O2在第二光源101b的照射下提供不同光谱特性的光。其中，采用光源照射目标区域S可以是通过自动或手动方式打开光源，以使光源在打开后照射目标区域S；Step S1: Illuminate the target region S with a light source, which includes a first light source 101a and a second light source 101b. The target region S provides first light under the illumination of the first light source 101a and provides second light under the illumination of the second light source 101b. The target region S includes a first object O1 and a second object O2, which provide light with different spectral characteristics under the illumination of the second light source 101b. Illuminating the target region S with the light source may involve turning on the light source automatically or manually, so that once turned on it illuminates the target region S.

步骤S2:采集目标区域S的图像Is,图像Is包括与第一对象O1对应的第一区域R1和与第二对象O2对应的第二区域R2,第一区域R1和第二区域R2具有不同的图像信号。Step S2: Acquire an image Is of the target region S. The image Is includes a first region R1 corresponding to the first object O1 and a second region R2 corresponding to the second object O2. The first region R1 and the second region R2 have different image signals.

步骤S3:对采集的图像Is进行图像增强处理,以增强第一区域R1的图像信号和第二区域R2的图像信号,得到增强图像。Step S3: Perform image enhancement processing on the acquired image Is to enhance the image signal of the first region R1 and the image signal of the second region R2 to obtain an enhanced image.

上述方法实施例的具体细节详见前述成像系统10的实施例,此处不再赘述。For details of the above method embodiments, please refer to the embodiments of the aforementioned imaging system 10, which will not be repeated here.

在一些实施例中,在对采集的图像Is进行图像增强处理得到增强图像的过程中未对采集的图像Is进行合成处理。In some embodiments, the acquired image Is is not subjected to compositing processing during the process of performing image enhancement processing on the acquired image Is to obtain the enhanced image.

在一些实施例中,在对采集的图像Is进行图像增强处理得到增强图像的过程中,该采集的图像Is在图像增强处理过程中仅为一幅图像。在一些实施例中,采集目标区域的图像Is包括:接收来自目标区域S的混合光,混合光包括目标区域S在第一光源101a和第二光源101b同步照射下提供的第一光和第二光;基于混合光对目标区域S进行成像得到目标区域S的图像。In some embodiments, during the image enhancement process of the acquired image Is to obtain an enhanced image, the acquired image Is is only one image during the image enhancement process. In some embodiments, acquiring the image Is of the target region includes: receiving mixed light from the target region S, the mixed light including first light and second light provided by the target region S under the synchronous illumination of the first light source 101a and the second light source 101b; and imaging the target region S based on the mixed light to obtain an image of the target region S.

在一些实施例中,基于混合光对目标区域S进行成像得到目标区域S的图像包括:通过单图像传感器基于混合光对目标区域S进行成像得到目标区域S的图像。In some embodiments, imaging the target region S based on mixed light to obtain an image of the target region S includes: imaging the target region S based on mixed light using a single image sensor to obtain an image of the target region S.

在一些实施例中，单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道，红色信号通道可感应混合光包括的第一光的红色波段，绿色信号通道可感应混合光包括的第一光中的绿色波段，蓝色信号通道可感应混合光包括的第一光中的蓝色波段，红色信号通道、绿色信号通道和蓝色信号通道中的至少一个通道可感应混合光中的第二光。In some embodiments, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel. The red signal channel can sense the red band of the first light included in the mixed light, the green signal channel can sense the green band of the first light included in the mixed light, the blue signal channel can sense the blue band of the first light included in the mixed light, and at least one of the red, green, and blue signal channels can sense the second light in the mixed light.

在一些实施例中，单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道，红色信号通道可感应混合光包括的第一光的红色波段，绿色信号通道可感应混合光包括的第一光中的绿色波段，蓝色信号通道可感应混合光包括的第一光中的蓝色波段，红色信号通道、绿色信号通道和蓝色信号通道中的每一个通道均可感应混合光中的第二光。In some embodiments, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel. The red signal channel can sense the red band of the first light included in the mixed light, the green signal channel can sense the green band of the first light included in the mixed light, and the blue signal channel can sense the blue band of the first light included in the mixed light. Each of the red, green, and blue signal channels can sense the second light in the mixed light.

在一些实施例中,目标区域S能够反射第二光源101b发出的激发光Lb,目标区域S反射激发光Lb形成的激发光Lbx,在基于来自目标区域S的第一光和第二光对目标区域S进行成像得到目标区域S的图像之前,被过滤掉。In some embodiments, the target region S can reflect the excitation light Lb emitted by the second light source 101b. The excitation light Lbx formed by the reflection of the excitation light Lb by the target region S is filtered out before an image of the target region S is obtained by imaging the target region S based on the first light and the second light from the target region S.

在一些实施例中,图像增强处理包括以下至少一者:白平衡处理、线性颜色校正,以及非线性色彩映射。In some embodiments, image enhancement processing includes at least one of the following: white balance processing, linear color correction, and nonlinear color mapping.

在一些实施例中，对图像Is进行白平衡处理包括：调整采集的图像Is中各颜色通道的比值，以放大采集的图像中第二区域的图像信号。其中各颜色通道包括R、G、B三个颜色通道。In some embodiments, white balance processing of image Is includes adjusting the ratios of the color channels in the acquired image Is to amplify the image signal of the second region in the acquired image. The color channels include three channels: R, G, and B.
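A minimal sketch of such channel-ratio adjustment follows, assuming a floating-point RGB image and per-channel gains; the gain values used in the example are placeholders, not values from the disclosure.

```python
import numpy as np

def white_balance(img, gains=(1.0, 1.0, 1.0)):
    """Rescale the R, G, B channel ratios of an H x W x 3 image in [0, 1].

    Raising one channel's gain amplifies the image signal of a region that
    is bright in that channel (e.g. the second region R2).
    """
    out = img * np.asarray(gains, dtype=float)
    return np.clip(out, 0.0, 1.0)
```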

在一些实施例中,对图像Is进行线性颜色校正处理包括:采用线性色彩校正矩阵对采集的图像进行矫正,以提高采集的图像中第一区域和第二区域的颜色差异。In some embodiments, linear color correction processing of image Is includes: correcting the acquired image using a linear color correction matrix to improve the color difference between the first region and the second region in the acquired image.
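As an illustration, applying a 3 x 3 linear color correction matrix (CCM) per pixel can be sketched as below; the matrix entries are hypothetical, whereas in practice the CCM would be calibrated to widen the color difference between the two regions.

```python
import numpy as np

def apply_ccm(img, ccm):
    """Apply a 3 x 3 linear color correction matrix to an H x W x 3 image.

    Each output pixel is ccm @ [r, g, b], clipped back to [0, 1].
    """
    out = img.reshape(-1, 3) @ np.asarray(ccm, dtype=float).T
    return np.clip(out.reshape(img.shape), 0.0, 1.0)

# Hypothetical saturation-boosting matrix (each row sums to 1.0).
EXAMPLE_CCM = [[1.2, -0.1, -0.1],
               [-0.1, 1.2, -0.1],
               [-0.1, -0.1, 1.2]]
```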

在一些实施例中,对图像Is进行非线性色彩映射处理包括:建立颜色映射表,基于颜色映射表以及预设的非线性变换算法对采集的图像进行非线性变换,以增强采集的图像中第一区域和第二区域的颜色差异。In some embodiments, performing nonlinear color mapping processing on image Is includes: establishing a color mapping table, and performing a nonlinear transformation on the acquired image based on the color mapping table and a preset nonlinear transformation algorithm to enhance the color difference between the first region and the second region in the acquired image.
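One common way to realize such a table-driven nonlinear mapping is a 256-entry lookup table (LUT) for 8-bit pixel values. The sketch below bakes a gamma curve into the table purely as a stand-in; the actual mapping encoded by the color table is not specified here.

```python
import numpy as np

def build_lut(gamma=0.7, size=256):
    """Precompute a nonlinear mapping table for 8-bit pixel values."""
    x = np.arange(size) / (size - 1)
    return np.round((x ** gamma) * (size - 1)).astype(np.uint8)

def apply_lut(img_u8, lut):
    """Remap every pixel of a uint8 image through the lookup table."""
    return lut[img_u8]
```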

在一些实施例中,图像信号包括以下至少一者:亮度、色度、饱和度、对比度、不同区域的边界像素。In some embodiments, the image signal includes at least one of the following: brightness, chroma, saturation, contrast, and boundary pixels of different regions.

在一些实施例中，第一光和第二光的波段不同。In some embodiments, the wavelength bands of the first light and the second light are different.

在一些实施例中,第一光为可见光的部分波段,可见光的波段处于380nm到780nm之间,第二光的波段处于800nm到900nm之间。In some embodiments, the first light is a portion of the visible light spectrum, with the visible light spectrum ranging from 380 nm to 780 nm, and the second light spectrum ranging from 800 nm to 900 nm.

在一些实施例中,第一光为波段在550nm到650nm之间的黄光;或者第一光为波段在600nm到660nm之间的红光;或者第一光为波段在380nm到470nm之间的蓝光。In some embodiments, the first light is yellow light with a wavelength between 550nm and 650nm; or the first light is red light with a wavelength between 600nm and 660nm; or the first light is blue light with a wavelength between 380nm and 470nm.

在一些实施例中,对图像Is进行图像增强处理,以增强第一区域R1的图像信号和第二区域R2的图像信号之间的差异,包括:将第一区域R1调整为彩色区域,并将第二区域R2调整为绿色区域;或者将第一区域R1调整为灰度区域,并将第二区域R2调整为绿色区域。In some embodiments, image enhancement processing is performed on image Is to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2, including: adjusting the first region R1 to a color region and adjusting the second region R2 to a green region; or adjusting the first region R1 to a grayscale region and adjusting the second region R2 to a green region.
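A sketch of the second variant (first region rendered in grayscale, second region rendered in green) is given below. The boolean mask marking the second region is assumed as an input here for illustration; the disclosure itself distinguishes the regions by their differing image signals.

```python
import numpy as np

def recolor(img, second_mask):
    """img: H x W x 3 float image in [0, 1]; second_mask: H x W bool array.

    Pixels outside the mask (first region R1) are converted to grayscale;
    pixels inside the mask (second region R2) are rendered pure green.
    """
    gray = img.mean(axis=-1, keepdims=True)
    out = np.repeat(gray, 3, axis=-1)        # grayscale everywhere
    out[second_mask] = [0.0, 1.0, 0.0]       # overwrite second region with green
    return out
```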

在一些实施例中,所述方法还包括:调整第一光源101a和/或第二光源101b的强度,以增强第一区域R1的图像信号和第二区域R2的图像信号之间的差异。In some embodiments, the method further includes: adjusting the intensity of the first light source 101a and/or the second light source 101b to enhance the difference between the image signal of the first region R1 and the image signal of the second region R2.

本公开实施例还提供一种计算机设备,其至少包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,处理器执行所述程序时实现前述任一实施例所述的方法。This disclosure also provides a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the methods described in any of the foregoing embodiments.

图7示出了本公开实施例所提供的一种更为具体的计算设备硬件结构示意图,该设备可以包括:处理器31、存储器32、输入/输出接口33、通信接口34和总线35。其中处理器31、存储器32、输入/输出接口33和通信接口34通过总线35实现彼此之间在设备内部的通信连接。Figure 7 illustrates a more specific hardware structure diagram of a computing device provided in an embodiment of this disclosure. The device may include: a processor 31, a memory 32, an input/output interface 33, a communication interface 34, and a bus 35. The processor 31, memory 32, input/output interface 33, and communication interface 34 are interconnected internally via the bus 35.

处理器31可以采用通用的中央处理器(Central Processing Unit,CPU)、微处理器、应用专用集成电路(Application Specific Integrated Circuit,ASIC)、或者一个或多个集成电路等方式实现,用于执行相关程序,以实现本公开实施例所提供的技术方案。处理器31还可以包括显卡,所述显卡可以是Nvidia titan X显卡或者1080Ti显卡等。The processor 31 can be implemented using a general-purpose central processing unit (CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute relevant programs to implement the technical solutions provided in the embodiments of this disclosure. The processor 31 may also include a graphics card, such as an Nvidia Titan X graphics card or a 1080Ti graphics card.

存储器32可以采用只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、静态存储设备,动态存储设备等形式实现。存储器32可以存储操作系统和其他应用程序,在通过软件或者固件来实现本公开实施例所提供的技术方案时,相关的程序代码保存在存储器32中,并由处理器31来调用执行。The memory 32 can be implemented as a read-only memory (ROM), random access memory (RAM), static storage device, dynamic storage device, etc. The memory 32 can store the operating system and other applications. When the technical solutions provided in the embodiments of this disclosure are implemented by software or firmware, the relevant program code is stored in the memory 32 and is called and executed by the processor 31.

输入/输出接口33用于连接输入/输出模块,以实现信息输入及输出。输入输出/模块可以作为组件配置在设备中(图中未示出),也可以外接于设备以提供相应功能。其中输入设备可以包括键盘、鼠标、触摸屏、麦克风、各类传感器等,输出设备可以包括显示器、扬声器、振动器、指示灯等。Input/output interface 33 is used to connect input/output modules to realize information input and output. Input/output modules can be configured as components in the device (not shown in the figure) or externally connected to the device to provide corresponding functions. Input devices may include keyboards, mice, touch screens, microphones, various sensors, etc., and output devices may include displays, speakers, vibrators, indicator lights, etc.

通信接口34用于连接通信模块(图中未示出),以实现本设备与其他设备的通信交互。其中通信模块可以通过有线方式(例如USB(通用串行总线,Universal Serial Bus)、网线等)实现通信,也可以通过无线方式(例如移动网络、Wi-Fi、蓝牙等)实现通信。Communication interface 34 is used to connect a communication module (not shown in the figure) to enable communication between this device and other devices. The communication module can communicate via wired means (such as USB (Universal Serial Bus), network cable, etc.) or wireless means (such as mobile network, Wi-Fi, Bluetooth, etc.).

总线35包括一通路,在设备的各个组件(例如处理器31、存储器32、输入/输出接口33和通信接口34)之间传输信息。Bus 35 includes a pathway for transmitting information between various components of the device (e.g., processor 31, memory 32, input/output interface 33, and communication interface 34).

需要说明的是,尽管上述设备仅示出了处理器31、存储器32、输入/输出接口33、通信接口34以及总线35,但是在具体实施过程中,该设备还可以包括实现正常运行所必需的其他组件。此外,本领域的技术人员可以理解的是,上述设备中也可以仅包含实现本公开实施例方案所必需的组件,而不必包含图中所示的全部组件。It should be noted that although the above-described device only shows the processor 31, memory 32, input/output interface 33, communication interface 34, and bus 35, in specific implementations, the device may also include other components necessary for normal operation. Furthermore, those skilled in the art will understand that the above-described device may only include the components necessary for implementing the embodiments of this disclosure, and not necessarily all the components shown in the figures.

本公开实施例还提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现前述任一实施例所述的方法。This disclosure also provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the methods described in any of the foregoing embodiments.

计算机可读介质包括永久性和非永久性、可移动和非可移动媒体，可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括，但不限于相变内存(Phase-Change Random Access Memory,PRAM)、静态随机存取存储器(Static Random-Access Memory,SRAM)、动态随机存取存储器(Dynamic Random Access Memory,DRAM)、其他类型的随机存取存储器、只读存储器、电可擦除可编程只读存储器(Electrically Erasable Programmable Read Only Memory,EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能光盘(Digital Versatile Disc,DVD)或其他光学存储、磁盒式磁带、磁带或磁盘存储或其他磁性存储设备或任何其他非传输介质，可用于存储可以被计算设备访问的信息。按照本文中的界定，计算机可读介质不包括暂存电脑可读媒体(Transitory Media)，如调制的数据信号和载波。Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory, read-only memory, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

通过以上的实施方式的描述可知,本领域的技术人员可以清楚地了解到本公开实施例可借助软件加必需的通用硬件平台的方式来实现。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例各个实施例或者实施例的某些部分所述的方法。As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that the embodiments of this disclosure can be implemented by means of software plus necessary general-purpose hardware platforms. Based on this understanding, the technical solutions of the embodiments of this disclosure, in essence or the part that contributes to the prior art, can be embodied in the form of a software product. This computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions to cause a computer device (which may be a personal computer, server, or network device, etc.) to execute the methods described in various embodiments or some parts of the embodiments of this disclosure.

上述实施例阐明的系统、装置、模块或单元,具体可以由计算机装置或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。The systems, devices, modules, or units described in the above embodiments can be implemented by computer devices or entities, or by products with certain functions. A typical implementation device is a computer, which can take the form of a personal computer, laptop computer, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet computer, wearable device, or any combination of these devices.

本公开中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,在实施本公开实施例方案时可以把各模块的功能在同一个或多个软件和/或硬件中实现。也可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。The various embodiments in this disclosure are described in a progressive manner. Similar or identical parts between embodiments can be referred to mutually. Each embodiment focuses on describing the differences from other embodiments. In particular, the device embodiments are basically similar to the method embodiments, so the description is relatively simple; relevant parts can be referred to the descriptions in the method embodiments. The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate. When implementing the embodiments of this disclosure, the functions of each module can be implemented in one or more software and/or hardware. Alternatively, some or all of the modules can be selected to achieve the purpose of this embodiment according to actual needs. Those skilled in the art can understand and implement this without creative effort.

以上所述仅是本公开实施例的具体实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开实施例原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本公开实施例的保护范围。The above description is merely a specific implementation of the embodiments of this disclosure. It should be noted that for those skilled in the art, several improvements and modifications can be made without departing from the principles of the embodiments of this disclosure, and these improvements and modifications should also be considered within the protection scope of the embodiments of this disclosure.

Claims (35)

一种成像方法，所述方法包括：An imaging method, the method comprising: 采用光源照射目标区域，所述光源包括第一光源和第二光源，所述目标区域在所述第一光源的照射下提供第一光，所述目标区域在所述第二光源的照射下提供第二光，所述目标区域包括第一对象和第二对象，所述第一对象和所述第二对象在所述第二光源的照射下提供不同光谱特性的光；illuminating a target region with a light source, the light source including a first light source and a second light source, the target region providing first light under illumination by the first light source, the target region providing second light under illumination by the second light source, the target region including a first object and a second object, the first object and the second object providing light with different spectral characteristics under illumination by the second light source; 采集所述目标区域的图像，采集的图像包括与所述第一对象对应的第一区域和与所述第二对象对应的第二区域，所述第一区域和所述第二区域具有不同的图像信号；acquiring an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; 对所述采集的图像进行图像增强处理，以增强所述采集的图像中的所述第一区域的图像信号和所述第二区域的图像信号，得到增强图像。performing image enhancement processing on the acquired image to enhance the image signal of the first region and the image signal of the second region in the acquired image, to obtain an enhanced image. 根据权利要求1所述的方法，其中，所述对所述采集的图像进行图像增强处理，得到增强图像包括：The method according to claim 1, wherein performing image enhancement processing on the acquired image to obtain the enhanced image comprises: 在对所述采集的图像进行图像增强处理得到所述增强图像的过程中未对所述采集的图像进行合成处理。performing no compositing processing on the acquired image during the image enhancement processing that yields the enhanced image. 根据权利要求1或2所述的方法，其中，所述采集的图像在所述图像增强处理过程中仅为一幅图像。The method according to claim 1 or 2, wherein the acquired image is only a single image during the image enhancement processing.
根据权利要求1-3中的任一项所述的方法,其中,所述采集目标区域的图像包括:The method according to any one of claims 1-3, wherein acquiring the image of the target area includes: 接收来自所述目标区域的混合光,所述混合光包括所述目标区域在所述第一光源和所述第二光源同步照射下提供的所述第一光和所述第二光;Receive mixed light from the target area, the mixed light including the first light and the second light provided by the target area under synchronous illumination by the first light source and the second light source; 基于所述混合光对所述目标区域进行成像得到所述采集的图像。The acquired image is obtained by imaging the target area based on the mixed light. 根据权利要求4所述的方法,其中,所述基于所述混合光对所述目标区域进行成像得到所述采集的图像包括:According to the method of claim 4, wherein the step of imaging the target region based on the mixed light to obtain the acquired image comprises: 通过单图像传感器基于所述混合光对所述目标区域进行成像得到所述采集的图像。The acquired image is obtained by imaging the target area using a single image sensor based on the mixed light. 根据权利要求5所述的方法,其中,所述单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道,所述红色信号通道用于感应所述混合光中的红色波段,所述绿色信号通道用于感应所述混合光中的绿色波段,所述蓝色信号通道用于感应所述混合光中的蓝光波段,所述红色信号通道、所述绿色信号通道、所述蓝色信号通道中的至少一个通道还用于感应所述混合光中的所述第二光。According to the method of claim 5, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel, wherein the red signal channel is used to sense the red band in the mixed light, the green signal channel is used to sense the green band in the mixed light, the blue signal channel is used to sense the blue band in the mixed light, and at least one of the red signal channel, the green signal channel, and the blue signal channel is further used to sense the second light in the mixed light. 
根据权利要求5所述的方法,其中,所述单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道,所述红色信号通道用于感应所述混合光中的红色波段,所述绿色信号通道用于感应所述混合光中的绿色波段,所述蓝色信号通道用于感应所述混合光中的蓝光波段,所述红色信号通道、所述绿色信号通道和所述蓝色信号通道均还用于感应所述混合光中的所述第二光。According to the method of claim 5, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel, wherein the red signal channel is used to sense the red band in the mixed light, the green signal channel is used to sense the green band in the mixed light, the blue signal channel is used to sense the blue band in the mixed light, and the red signal channel, the green signal channel, and the blue signal channel are all further used to sense the second light in the mixed light. 根据权利要求1-7中的任一项所述的方法,其中,在所述目标区域反射了所述第二光源发出的激发光时,所述目标区域反射的激发光在采集所述目标区域的图像得到所述采集的图像之前被过滤掉。The method according to any one of claims 1-7, wherein, when the target region reflects the excitation light emitted by the second light source, the excitation light reflected by the target region is filtered out before acquiring an image of the target region to obtain the acquired image. 根据权利要求1-8中的任一项所述的方法,其中,所述图像增强处理包括以下至少一者:The method according to any one of claims 1-8, wherein the image enhancement processing comprises at least one of the following: 白平衡处理、线性颜色校正,以及非线性色彩映射。White balance processing, linear color correction, and non-linear color mapping. 根据权利要求9所述的方法,其中,对所述采集的图像进行白平衡处理包括:According to the method of claim 9, the white balance processing of the acquired image includes: 调整所述采集的图像中各颜色通道的比值,以放大所述采集的图像中所述第二区域的图像信号。Adjust the ratio of each color channel in the acquired image to amplify the image signal of the second region in the acquired image. 
根据权利要求9或10所述的方法,其中,对所述采集的图像进行线性颜色校正处理包括:According to the method of claim 9 or 10, the linear color correction processing of the acquired image includes: 采用线性色彩校正矩阵对所述采集的图像进行矫正,以提高所述采集的图像中所述第一区域和所述第二区域的颜色差异。A linear color correction matrix is used to correct the acquired image to improve the color difference between the first region and the second region in the acquired image. 根据权利要求9-11中的任一项所述的方法,其中,对所述采集的图像进行非线性色彩映射处理包括:The method according to any one of claims 9-11, wherein performing nonlinear color mapping processing on the acquired image comprises: 建立颜色映射表,基于所述颜色映射表以及预设的非线性变换算法对所述采集的图像进行非线性变换,以增强所述采集的图像中所述第一区域和所述第二区域的颜色差异。A color mapping table is established, and the acquired image is subjected to nonlinear transformation based on the color mapping table and a preset nonlinear transformation algorithm to enhance the color difference between the first region and the second region in the acquired image. 根据权利要求1-12中的任一项所述的方法,其中,所述采集的图像中的图像信号包括以下至少一者:The method according to any one of claims 1-12, wherein the image signal in the acquired image includes at least one of the following: 亮度、色度、饱和度、对比度、不同区域的边界像素。Brightness, chroma, saturation, contrast, and boundary pixels of different areas. 根据权利要求1-13中的任一项所述的方法,其中,所述第一光包括可见光的部分波段,所述第二光为荧光,所述可见光和所述荧光的波段不同。The method according to any one of claims 1-13, wherein the first light includes a portion of the visible light spectrum, the second light is fluorescence, and the visible light and the fluorescence are in different spectrums. 根据权利要求14所述的方法,其中,所述可见光的波段处于380nm到780nm之间,所述荧光的波段处于800nm到900nm之间。According to the method of claim 14, the visible light wavelength is between 380 nm and 780 nm, and the fluorescence wavelength is between 800 nm and 900 nm. 
根据权利要求15所述的方法,其中,所述可见光包括波段在550nm到650nm之间的黄光;或者,According to the method of claim 15, wherein the visible light comprises yellow light in the wavelength range of 550 nm to 650 nm; or, 所述可见光包括波段在600nm到660nm之间的红光;或者,The visible light includes red light in the wavelength range of 600 nm to 660 nm; or... 所述可见光包括波段在380nm到470nm之间的蓝光。The visible light includes blue light in the wavelength range of 380nm to 470nm. 根据权利要求1-16中的任一项所述的方法,其中,所述对所述采集的图像进行图像增强处理包括:The method according to any one of claims 1-16, wherein the image enhancement processing of the acquired image comprises: 将所述第一区域调整为彩色区域,并将所述第二区域调整为绿色区域;或者,Adjust the first area to a colored area and the second area to a green area; or... 将所述第一区域调整为灰度区域,并将所述第二区域调整为绿色区域。Adjust the first area to a grayscale area and the second area to a green area. 根据权利要求1-17中的任一项所述的方法,其中,所述方法还包括:The method according to any one of claims 1-17, wherein the method further comprises: 调整所述第一光源和/或所述第二光源的强度,以增强所述第一区域的图像信号和所述第二区域的图像信号之间的差异。Adjust the intensity of the first light source and/or the second light source to enhance the difference between the image signal of the first region and the image signal of the second region. 
一种成像系统,所述成像系统包括:An imaging system, the imaging system comprising: 光源,所述光源被配置为照射目标区域,所述光源包括第一光源和第二光源,所述目标区域在所述第一光源的照射下提供第一光,在所述第二光源的照射下可提供第二光,所述目标区域内包括第一对象和第二对象,所述第一对象和所述第二对象在所述第二光源的照射下提供不同光谱特性的光;A light source configured to illuminate a target area, the light source including a first light source and a second light source, the target area providing first light under the illumination of the first light source and providing second light under the illumination of the second light source, the target area including a first object and a second object, the first object and the second object providing light with different spectral characteristics under the illumination of the second light source; 成像单元,被配置为采集所述目标区域的图像,采集的图像包括与所述第一对象对应的第一区域和与所述第二对象对应的第二区域,所述第一区域和所述第二区域具有不同的图像信号;An imaging unit is configured to acquire an image of the target region, the acquired image including a first region corresponding to the first object and a second region corresponding to the second object, the first region and the second region having different image signals; 图像处理器,被配置为对所述采集的图像进行图像增强处理,以增强所述采集的图像中的所述第一区域的图像信号和所述第二区域的图像信号,得到增强图像。An image processor is configured to perform image enhancement processing on the acquired image to enhance the image signals of the first region and the second region in the acquired image, thereby obtaining an enhanced image. 根据权利要求19所述的系统,其中,所述图像处理器还被配置为在对所述采集的图像进行图像增强处理得到增强图像的过程中未对所述采集的图像进行合成处理。The system according to claim 19, wherein the image processor is further configured to not perform compositing processing on the acquired image during the process of performing image enhancement processing on the acquired image to obtain an enhanced image. 根据权利要求19或20所述的系统,其中,所述采集的图像在所述图像增强处理过程中仅为一幅图像。The system according to claim 19 or 20, wherein the acquired image is only one image during the image enhancement process. 
根据权利要求19-21中的任一项所述的系统,其中,所述成像单元包括镜头和图像传感器,所述镜头接收来自所述目标区域的混合光并将所述混合光成像在所述图像传感器上,得到所述目标区域的图像,所述混合光包括所述目标区域在所述第一光源和所述第二光源同步照射下提供的所述第一光和所述第二光。The system according to any one of claims 19-21, wherein the imaging unit includes a lens and an image sensor, the lens receiving mixed light from the target region and imaging the mixed light onto the image sensor to obtain an image of the target region, the mixed light including the first light and the second light provided by the target region under synchronous illumination by the first light source and the second light source. 根据权利要求22所述的系统,其中,所述图像传感器为单图像传感器。The system according to claim 22, wherein the image sensor is a single image sensor. 根据权利要求23所述的系统,其中,所述单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道,所述红色信号通道用于感应所述混合光中的红色波段,所述绿色信号通道用于感应所述混合光中的绿色波段,所述蓝色信号通道用于感应所述混合光中的蓝光波段,所述红色信号通道、所述绿色信号通道、所述蓝色信号通道中的至少一个通道还用于感应所述混合光中的所述第二光。According to the system of claim 23, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel, wherein the red signal channel is used to sense the red band in the mixed light, the green signal channel is used to sense the green band in the mixed light, the blue signal channel is used to sense the blue band in the mixed light, and at least one of the red signal channel, the green signal channel, and the blue signal channel is further used to sense the second light in the mixed light. 
根据权利要求23所述的系统,其中,所述单图像传感器包括红色信号通道、绿色信号通道和蓝色信号通道,所述红色信号通道用于感应所述混合光中的红色波段,所述绿色信号通道用于感应所述混合光中的绿色波段,所述蓝色信号通道用于感应所述混合光中的蓝光波段,所述红色信号通道、所述绿色信号通道和所述蓝色信号通道均还用于感应所述混合光中的所述第二光。According to the system of claim 23, the single image sensor includes a red signal channel, a green signal channel, and a blue signal channel, wherein the red signal channel is used to sense the red band in the mixed light, the green signal channel is used to sense the green band in the mixed light, the blue signal channel is used to sense the blue band in the mixed light, and the red signal channel, the green signal channel, and the blue signal channel are all further used to sense the second light in the mixed light. 根据权利要求22-25中的任一项所述的系统,其特征在于,所述成像单元还包括滤光片,所述滤光片设置在所述图像传感器之前的光路中,用于在基于来自所述目标区域的所述第一光和所述第二光对所述目标区域进行成像得到所述目标区域的图像之前,过滤所述目标区域反射的激发光。The system according to any one of claims 22-25 is characterized in that the imaging unit further includes a filter disposed in the optical path before the image sensor for filtering the excitation light reflected by the target region before imaging the target region based on the first light and the second light from the target region to obtain an image of the target region. 根据权利要求19-26中的任一项所述的系统,其中,所述图像增强处理包括以下至少一者:The system according to any one of claims 19-26, wherein the image enhancement processing includes at least one of the following: 白平衡处理、线性颜色校正,以及非线性色彩映射。White balance processing, linear color correction, and non-linear color mapping. 根据权利要求19-27中的任一项所述的系统,其中,所述图像处理器还被配置为调整所述采集的图像中各颜色通道的比值,以放大所述采集的图像中所述第二区域的图像信号。The system according to any one of claims 19-27, wherein the image processor is further configured to adjust the ratio of each color channel in the acquired image to amplify the image signal of the second region in the acquired image. 
根据权利要求19-28中的任一项所述的系统,其中,所述图像处理器还被配置为采用线性色彩校正矩阵对所述采集的图像进行矫正,以提高所述采集的图像中所述第一区域和所述第二区域的颜色差异。The system according to any one of claims 19-28, wherein the image processor is further configured to correct the acquired image using a linear color correction matrix to improve the color difference between the first region and the second region in the acquired image. 根据权利要求19-29中的任一项所述的系统,其中,所述图像处理器还被配置为建立颜色映射表,基于所述颜色映射表以及预设的非线性变换算法对所述采集的图像进行非线性变换,以增强所述采集的图像中所述第一区域和所述第二区域的颜色差异。The system according to any one of claims 19-29, wherein the image processor is further configured to establish a color mapping table, and to perform a nonlinear transformation on the acquired image based on the color mapping table and a preset nonlinear transformation algorithm to enhance the color difference between the first region and the second region in the acquired image. 根据权利要求19-30中的任一项所述的系统,其中,所述第一光为可见光的部分波段,所述第二光为荧光,所述可见光和所述荧光的波段不同。The system according to any one of claims 19-30, wherein the first light is a portion of the visible light spectrum, the second light is fluorescence, and the visible light and the fluorescence spectrum are different. 根据权利要求31所述的系统,其中,所述可见光包括波段在550nm到650nm之间的黄光;或者,According to the system of claim 31, the visible light comprises yellow light in the wavelength range of 550 nm to 650 nm; or, 所述可见光包括波段在600nm到660nm之间的红光;或者,The visible light includes red light in the wavelength range of 600 nm to 660 nm; or... 所述可见光包括波段在380nm到470nm之间的蓝光。The visible light includes blue light in the wavelength range of 380nm to 470nm. 根据权利要求19-32中的任一项所述的系统,其中,所述图像处理器还被配置为将所述第一区域调整为彩色区域,并将所述第二区域调整为绿色区域;或者,The system according to any one of claims 19-32, wherein the image processor is further configured to adjust the first region as a color region and adjust the second region as a green region; or, 将所述第一区域调整为灰度区域,并将所述第二区域调整为绿色区域。Adjust the first area to a grayscale area and the second area to a green area. 
A surgical robot system, comprising the imaging system according to any one of claims 19-33, and further comprising a display unit configured to receive and display images from the imaging system.

The surgical robot system according to claim 34, further comprising: a controller configured to control the imaging system to image the target region.
PCT/CN2025/090978 2024-04-25 2025-04-24 Imaging method and system and surgical robot system Pending WO2025223515A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410513112.0A CN120837011A (en) 2024-04-25 2024-04-25 Imaging method and system and surgical robot system
CN202410513112.0 2024-04-25

Publications (1)

Publication Number Publication Date
WO2025223515A1 true WO2025223515A1 (en) 2025-10-30

Family

ID=97422617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2025/090978 Pending WO2025223515A1 (en) 2024-04-25 2025-04-24 Imaging method and system and surgical robot system

Country Status (2)

Country Link
CN (1) CN120837011A (en)
WO (1) WO2025223515A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009028136A1 (en) * 2007-08-29 2009-03-05 Panasonic Corporation Fluorescence observation device
CN108095701A (en) * 2018-04-25 2018-06-01 上海凯利泰医疗科技股份有限公司 Image processing system, fluorescence endoscope illumination imaging device and imaging method
CN113208567A (en) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 Multispectral imaging system, imaging method and storage medium
CN115177199A (en) * 2022-06-30 2022-10-14 上海微觅医疗器械有限公司 Image processing system, surgical system, and image processing method

Also Published As

Publication number Publication date
CN120837011A (en) 2025-10-28

Similar Documents

Publication Publication Date Title
JP7314976B2 (en) Imaging device and imaging method
CN101878653B (en) Method and apparatus for achieving panchromatic response from a color-mosaic imager
JP2021510313A (en) Time-correlated light source modulation for endoscopy
CN110893096A (en) Multispectral imaging system and method based on image exposure
CN110236694B (en) Same-screen near-infrared double-spectrum fluorescence imaging method and system based on spectral response characteristics
CN112004454B (en) endoscope system
JP6891304B2 (en) Endoscope system
US20210251570A1 (en) Surgical video creation system
JP6462594B2 (en) Imaging apparatus, image processing apparatus, and image processing method
WO2021177446A1 (en) Signal acquisition apparatus, signal acquisition system, and signal acquisition method
WO2025223515A1 (en) Imaging method and system and surgical robot system
WO2011035092A1 (en) Method and apparatus for wide-band imaging based on narrow-band image data
CN214231268U (en) Endoscopic imaging device and electronic apparatus
KR20190135705A (en) System and method for providing visible ray image and near-infrared ray image, using a single color camera and capable of those images simultaneously
JP2010274048A (en) Fundus camera
WO2025124236A1 (en) Asymmetric binocular endoscope and three-dimensional reconstruction method therefor
JP6476610B2 (en) Dermoscopy imaging apparatus, control method therefor, and program
JP2025517188A (en) Method, processor and medical observation device using two color images and a color camera for fluorescence and white light - Patents.com
JP6398334B2 (en) Dermoscopy imaging device and method of using dermoscopy imaging device
US20230316521A1 (en) Image-based determination of a property of a fluorescing substance
EP3889886A1 (en) Systems, methods and computer programs for a microscope system and for determining a transformation function
US12349863B2 (en) Data processing device and computer-implemented method combining two images and an overlay color using a uniform color space
JP2022064100A (en) Display color correction method
JP4588076B2 (en) Imaging method and imaging system
US20250060576A1 (en) Microscope System and Corresponding Method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25793869

Country of ref document: EP

Kind code of ref document: A1