HK40083562A - Open view, multi-modal, calibrated digital loupe with depth sensing
- Publication number: HK40083562A (application HK62023071839.2)
- Authority: HK (Hong Kong)
Description
Cross Reference to Related Applications
This application claims priority from U.S. Provisional Application No. 62/964,287, entitled "DIGITAL LOUPE WITH CALIBRATED DEPTH SENSING," filed on January 22, 2020, which is incorporated by reference as if fully set forth herein.
Incorporation by Reference
All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety, to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference.
Technical Field
The present invention describes apparatus and methods for improved digital loupes. More specifically, these apparatus and methods allow for a large working-distance range, excellent visual ergonomics, and integration of advanced multi-channel optical imaging modalities.
Background
Surgeons, dentists, jewelers, and others whose work relies on precise hand-eye coordination at a small scale have long used binocular loupes as a visual aid. Such a loupe comprises a pair of non-inverting lens tubes with a working distance of about 0.5 m, i.e., the distance from the user's eye to the nominal convergence point of the optical axes of the two tubes, which in normal use is the position of the object or work area to be viewed. The tubes are typically embedded in the user's eyeglasses in a "near vision" position, similar to the near-vision segment at the bottom of a bifocal lens, and they provide an angular magnification of about 2x to 3x over a relatively limited field of view while allowing peripheral and "distance" vision when the user looks around the tubes.
The term "digital magnifier" has been used to refer to magnifier-like systems commonly used in surgery, where a focal plane array (image sensor) is placed at the focal plane of each lens tube to digitize the image. The digitized image can be converted by various forms of signal processing and then displayed at the focal planes of two eyepieces or oculars, one for each eye. This arrangement forms a binocular Head Mounted Display (HMD) with a digitally generated enlarged view of the work area.
Digital loupes present many challenges and many opportunities. For example, image sensors, displays, and other electronic components add weight, and depth of field may be lost relative to the natural focus adjustment (accommodation) of the eye. However, as will be explained in the context of the present invention, digital technology brings capabilities such as image stabilization, auto-focus, and auto-convergence, which enable magnification approaching that of a surgical microscope, with a flexibility in working distance and a freedom of movement that neither such microscopes nor conventional analog loupes provide. Furthermore, the bifurcation of the sensing/imaging side and the display side of the loupe (achieved by digital rather than optical transfer of information between the two) allows their mounting configurations to be optimized individually. As will be shown, this creates more ergonomic working conditions for the surgeon, such as a more upright head position, and the ability to view an object or work area both directly (i.e., not through the loupe) and through the loupe, simultaneously and in parallel. Finally, with digital techniques, more advanced optical imaging modalities may be included, such as fluorescence imaging or hyperspectral imaging, for example for visualization of tumor margins superimposed on the loupe image.
Embodiments of the present invention aim to address several outstanding challenges of prior art digital loupes. First, for high magnification binocular systems, a condition known as diplopia, or double vision, is known to occur, particularly when the left and right optical axes of the system are not properly aligned. Moreover, at higher magnifications, slight changes in working distance translate into large relative shifts in the positions of the left and right images, so that the human visual system cannot comfortably maintain single, fused vision. The prior art attempts to overcome this problem only incompletely; the present invention overcomes this challenge by combining a distance sensor with a defined field of view and a processor that uses camera calibration information and the measurements from the distance sensor to electronically transform the images. Before addressing other issues, we now review the prior art related to this first prominent challenge.
It has recently been recognized that, just as it is important for a camera to have an auto-focus function to maintain a sharp image when the distance to an object changes, a loupe should automatically adjust its horizontal convergence angle, i.e., the acute angle formed between the optical axes of the left and right tubes when viewed in top projection, so that the optical axes of the left and right tubes converge on the object or work area being viewed. U.S. Pat. No. 5,374,820 teaches the use of a distance sensor on a conventional (analog) loupe to measure distance to an object. The distance measurement is then used to mechanically change the focal length and the convergence angle of the lens tubes or eyepieces in a corresponding manner. However, such mechanical movements are not sufficiently accurate at high magnifications, no combination with calibration information is provided that could be used to correct angular misalignments (horizontal and vertical) of the lens tubes as a function of distance, and the distance sensor does not have a defined field of view. Only adjustment of the convergence angle (i.e., the horizontal convergence angle when viewed in top projection) is provided. The eye is typically more sensitive to image misalignment in the vertical direction, but the patent does not teach a way to overcome any such misalignment, which may be caused by a slightly different tilt of the eyepieces relative to their designed or intended configuration.
WO 2001005161 teaches a digital surgical loupe that dynamically optimizes images based on surgical conditions. It teaches that optimal stereo vision is obtained when the baseline of the stereo camera pair, corresponding to the interpupillary distance (IPD), is about 1/20 of the working distance. Based on the distance inferred from the best-focus settings of the stereo camera pair, the system has motors that first adjust the IPD to within an optimal range and then adjust the horizontal convergence angle (in the plane of the two cameras and the object) so that the cameras of the stereo pair converge on the object. However, using the focus setting as a substitute for a true distance measurement to the object is too inaccurate for a high magnification loupe: for example, the conversion of focus setting to distance may be accurate only to within a few centimeters, whereas distance accuracy better than a few millimeters is required for an optimal system. Furthermore, the use of motors to adjust the IPD and convergence angle results in a system that is bulky and may lack sufficient accuracy, repeatability, stability, and rapid settling to a given convergence angle setting. No camera calibration information is provided that captures the horizontal and vertical misalignments between the commanded horizontal convergence angle and the actual camera orientations, which could be used for active correction. The present invention overcomes these limitations.
Some embodiments of prior art digital loupes and/or augmented reality headsets rely on methods of determining the convergence angle without using distance sensors or direct distance measurements, while other embodiments do not rely on motor driven systems. For example, US 2010/0045783 teaches a method for dynamic virtual convergence in video see-through head mounted displays in a surgical context. Each camera of the stereo pair has a larger field of view than the display used to show its images, and heuristics (e.g., the distance to the point of the estimated scene geometry closest to the viewer, or the distance to a tracked tool) are used to estimate the viewer's gaze distance. The display frusta are electronically transformed to match the estimated gaze distance. In effect, convergence is adjusted virtually, since only a portion of each camera image is shown on each corresponding display, the portion corresponding to the object at which the viewer is looking and depending on the distance to that object. One notable feature of the system described in US 2010/0045783 is the filtering of high temporal frequency components of the gaze distance. However, the user's gaze distance is not accurately measured by a separate sensor. Also, the display frusta are transformed to a convergence angle of 0 degrees, i.e., parallel viewing, as if the object were at infinity, so that the technique can be used with conventional binocular head mounted displays having a relative eyepiece convergence angle of 0 degrees. This method produces a vergence conflict, whereby the horizontal disparity (pixel shift) between the left and right images is the same as if the object were in its original (near) position, but the lack of eye convergence sends a conflicting signal to the brain that the object is far away. Thus, this approach is not useful for comfortably maintaining parallelism between peripheral near vision and enhanced vision, where one may wish to switch between a magnified or enhanced view of the object or work area through the eyepiece or display and a direct view of the object or work area above or below the eyepiece or display, while maintaining the same eye vergence state. The present invention overcomes this limitation by utilizing an eyepiece convergence angle of the head mounted display that nominally matches the actual working distance of the user, and a processor capable of transforming images from the pair of stereo cameras such that the eyes need not substantially change their vergence state when switching between viewing an image of an object or work area through the head mounted display and concurrently viewing the object or work area directly, over a range of working distances.
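To make the virtual-convergence idea concrete, the following minimal sketch (not drawn from any of the cited patents; the baseline, focal length, and small-angle geometry are illustrative assumptions) computes the horizontal offset at which a display-sized window would be cropped from each wider camera image so that the cropped views converge on an object at an estimated gaze distance:

```python
import numpy as np

def convergence_crop_offset_px(gaze_distance_m: float,
                               baseline_m: float = 0.065,     # assumed camera baseline
                               focal_length_px: float = 1400.0) -> int:
    """Horizontal shift of each display window inside its wider camera image
    so the two cropped views converge at gaze_distance_m. Each camera must
    (virtually) rotate by atan((baseline/2)/distance) toward the midline."""
    theta = np.arctan2(baseline_m / 2.0, gaze_distance_m)
    return int(round(focal_length_px * np.tan(theta)))

def crop_for_eye(image: np.ndarray, out_w: int, out_h: int,
                 offset_px: int, left_eye: bool) -> np.ndarray:
    """Crop the display-sized window, shifted toward the midline for each eye.
    (A real system would clamp the window against the image borders.)"""
    h, w = image.shape[:2]
    dx = offset_px if left_eye else -offset_px  # left camera looks right, and vice versa
    x0 = w // 2 - out_w // 2 + dx
    y0 = h // 2 - out_h // 2
    return image[y0:y0 + out_h, x0:x0 + out_w]
```

Filtering the estimated gaze distance before applying the offset, as US 2010/0045783 suggests, would keep the crop from jittering with high-frequency distance noise.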
Some embodiments of the digital loupe use position tracking or image feature tracking to maintain an object within the field of view of both eyes, effectively maintaining convergence of the pair of stereo cameras on that object. US 9967475 B2 teaches a digital loupe that requires an operator to manually select an object of interest in an image, and a processor that determines the line of sight to the object of interest relative to the camera optical axis and repositions and crops a portion of the image based on tracked deviations of the head from that line of sight, so that the object of interest stays in the center of the cropped image. US 20160358327 A1 teaches a digital loupe in which a real-time magnified view of a (dental) work area is provided and automatically tracked, using image feature recognition and micro pan-and-tilt adjustment of an attached camera, to keep it centered within the field of view of a head mounted display. US 9690119 B2 teaches a first optical path and a second optical path on a head-mounted device, wherein the direction of the first optical path is individually adjustable relative to the direction of the second optical path, and a magnified image passes along each. US 9690119 B2 also teaches setting a convergence angle such that single vision occurs at the working distance (e.g., using adjustable mirrors), and it teaches automatic tracking of points within the field of view by identifying implicit or explicit features, but it does not teach the use of direct measurements of distance to an object.
US 9772495 teaches a digital loupe intended to replace traditional surgical loupes and surgical microscopes. It teaches two cameras, each on an axial rotation system, and an illumination module; the axial rotation systems and illumination module are responsive to feedback signals from the two cameras to maintain consistent illumination and a stable image. However, although US 9772495 teaches that the axial rotation modules rotate to allow a desired surgical view to be captured, it does not explain how to determine, manually or automatically, what the desired view includes, nor how to track it as the surgeon moves about. It also states that the images from the two cameras must be aligned to avoid double vision, presumably by rotating the cameras, but no explanation or details are given as to how this is done. In any case, embodiments using position or image feature tracking rely on the ability to derive stable, accurate, and reliable estimates of the distance to the object, which these methods cannot provide. For example, image feature tracking relies on the presence of distinct features in an image, which cannot always be assumed, owing to the presence of relatively featureless or amorphous objects.
A second challenge of prior art digital loupes overcome by the present invention involves the combination of multiple optical imaging modalities. Some modalities (e.g., hyperspectral imaging) depend on measurements of multiple channels for a given image point. Various examples of digital loupes incorporating advanced imaging modalities exist in the prior art; however, multi-channel modalities like hyperspectral imaging are difficult to integrate into digital loupes because of instrument bulk and/or trade-offs involving spatial or temporal resolution. One aspect of the invention is to form a hyperspectral imager or other multi-channel imager (e.g., a Stokes imaging polarimeter) small enough to be included in a digital loupe, without sacrificing light throughput or temporal or spatial resolution, by means of a calibrated array comprising an imaging depth sensor and single-channel imagers. The processor uses the depth image from the imaging depth sensor to remove parallax from the images from the single-channel imagers so that they appear to have been captured from the same viewpoint, as in a more traditional multi-channel, single-viewpoint imager. In the present invention, "channel" may refer to an individual wavelength band, an individual polarization component, or a corresponding concept; alternatively, it may refer to an image acquired from light corresponding to one of these channel concepts. Thus, multispectral and hyperspectral imagers are multi-channel imagers because they image multiple wavelength bands, while Stokes imaging polarimeters are multi-channel imagers because they image multiple polarization components.
Previous embodiments of digital loupes have incorporated multiple optical imaging modalities. For example, US 6032070 teaches the use of various optical methods (different wavelength bands, polarizations, etc.) and digital processing of light reflected from tissue to enhance the contrast of tissue structures beyond those visible to the naked eye. This is done in conjunction with a helmet or head-mounted device so that the contrast-enhanced image is displayed stereoscopically along the user's line of sight. WO 2011002209 A2 teaches a digital loupe that combines magnification and illumination in various spectral bands and has manually adjustable camera convergence. WO 2018235088 A1 teaches a digital loupe with an array of cameras for each eye, all having the same working distance, e.g., on a headband. The different cameras within the array for a given eye may have different magnifications, or may comprise a colour camera and an infrared camera, such that at least two corresponding cameras from the left and right arrays are used to provide a stereoscopic view. A manual controller is used to select a low magnification stereoscopic image, a high magnification stereoscopic image, an infrared stereoscopic image, or the like. Note that although this publication discloses a camera array, it does not teach the use of spatially resolved distance information to fuse images from the array into a single-viewpoint multi-channel image as disclosed by the present invention. For the purposes of the present invention, the individual channels of a multi-channel imager may differ in magnification.
US 10230943 B2 teaches a digital loupe with integrated fluorescence imaging, in which both NIR (fluorescence) and visible light are recorded within one sensor using a modified Bayer pattern, such that pixels sensitive to the visible and infrared bands are tiled on the same sensor. This simplifies the registration of the color RGB and NIR fluorescence images, since they are obtained from the same viewpoint. However, with current image sensor technology, this technique is somewhat impractical because of the significantly different optimal imaging conditions desired for each modality. Each modality may call for a different exposure time, gain, resolution, pixel size, etc., but because the modalities are recorded on the same sensor, they must be recorded under the same conditions if they are to be recorded simultaneously. There is also a loss in spatial resolution for each modality, because the pixels of the image sensor are shared across modalities.
US 2018/0270474 A1 teaches registration of optical imaging with other pre-operative imaging modalities and with topography/depth information from a 3D scanning module. It also teaches that depth information can be used to register images between multiple intraoperative optical imaging modalities such as NIR fluorescence, color RGB, or hyperspectral (using tunable liquid crystal filters or filter wheels), but not between the individual channels of a multi-channel imaging system. While hyperspectral imaging (or other modalities such as imaging polarimetry) can be a potentially valuable source of information in digital loupe systems, the methods proposed in the prior art do not allow for an efficient combination of the miniaturization, temporal resolution, spatial resolution, and light throughput desired for optimal systems.
A third challenge overcome by the present invention relates to ergonomics. A conventional, or analog, surgical loupe includes a pair of non-inverting lens tubes suspended in front of the user's eyes, with the optical axes of the left and right tubes aligned with the optical axes of the user's left and right eyes, respectively. In the prior art, there are three prototypical solutions for suspending these tubes or eyepieces in front of the user's eyes. Each has advantages and disadvantages with respect to the functional attributes of a loupe system, including weight, comfort, field of view, view occlusion and peripheral vision, customization of fit, stability, and adjustability.
For purposes of the present invention, "weight" includes concepts such as the total mass of an analog or digital surgical loupe or other vision aid and the distribution of that mass on the surgeon's head. This has an impact on the comfort of the surgeon. For example, if the mass of such a system is distributed such that, in operation, it moves the combined center of gravity of the system and the surgeon's head significantly forward from the center of gravity of the individual surgeon's head, this will increase the stress on the surgeon's neck relative to the unassisted surgeon. This stress can be uncomfortable for the surgeon, especially in the case of long-term use. Furthermore, while certain areas of the head are more sensitive to pressure than others, for example, the temples and supraorbital areas are affected by over-tightening the headband, and the nose is also affected when used to support the loupes via the nose pads, distributing the weight of the system over a larger area of the surgeon's head generally provides greater comfort than distributing the weight over a smaller area of the surgeon's head.
Field of view, view occlusion, and peripheral vision are also important functional attributes of a loupe system. Here, the field of view refers to the apparent field of view, which is the angular extent of the magnified image presented to the user. This is to be distinguished from the true field of view, which is the angular extent of the unmagnified scene. An eyepiece with a given clear aperture of the front lens surface supports a larger apparent field of view when it is closer to the user's eye, i.e., when the exit pupil distance (eye relief) is smaller, than when it is farther away. However, the closer the eyepiece is to the eye, the more the user's peripheral vision is occluded. An ideal system would not occlude any portion of the user's field of view outside of the apparent field of view of the loupe system. In practice this is not possible, because the eyepiece must be stably mechanically supported in front of, and aligned with, the optical axis of the user's eye. As in the present invention, careful design of the support mechanisms can minimize their intrusion into the user's field of view, thereby preserving the user's perception of an open view of the surroundings.
Finally, the related attributes of customization of fit, stability, and adjustability are important in determining the overall performance of a loupe system. As a general rule, the greater the adjustability of the system's fit, the less the fit must be customized for the user. However, creating a mechanism that is both adjustable and stable typically requires more material, and therefore greater mass, than a stable but non-adjustable system. This extra material can increase the weight and the view occlusion of the system, negatively impacting comfort and visual performance.
We turn now to a description of the designs currently used for analog surgical loupes. The first, which we call the "through-the-lens" mount, has the smallest profile but is the least flexible of the three. A pair of eyeglasses is custom-made for the surgeon through an associated fitting procedure. Working distance, interpupillary distance, declination angle, prescription, frame size, and the precise drilling positions must all be carefully measured and built in at the time of manufacture and cannot be changed subsequently. A hole is drilled in each of the left and right lenses of the eyeglasses, and these holes are used to support the lens tubes in positions precisely aligned with the optical axes of the user's eyes in the near vision position. This level of customization and lack of adjustability is feasible because surgical loupes, like eyeglasses, are not traditionally shared. In addition, a custom loupe incorporates the surgeon's optical prescription in both the eyeglasses and the tubes, so the peripheral field viewed through the eyeglasses remains in focus. This solution has the lowest weight, since no frame beyond the eyeglasses is needed. However, most of the weight is supported by the nose pads resting on the surgeon's nose, and this style therefore becomes uncomfortable at higher magnifications because of the weight of the large objective lenses that these nose pads must support. Furthermore, the placement of the tubes (and thus the maximum declination angle of the eyepieces) is limited to some extent by the surgeon's anatomy, such as the height of the surgeon's nose relative to the eyes. The same apparent field of view is achieved with smaller eyepiece lenses, because the eyepieces can be placed very close to the surgeon's eyes. But if the prescription needs to be changed, the loupe system must be remanufactured. Furthermore, laser safety glasses are not easily integrated with such loupes.
The next style of loupe uses a flip-up or front-lens mount that clips onto the front of the eyeglasses. The eyepieces are supported entirely in front of the eyeglasses via adjustable support arms. This allows more adjustment of the lens position, tilt, etc., and requires less customization. However, the weight of the system, still mainly supported by the nose pads, increases significantly: a larger lens is required to maintain the same apparent field of view because the lens is now farther from the surgeon's eye; more structure is required to support the lens in an adjustable manner; and finally, since the center of gravity is farther forward than with a through-the-lens loupe, a greater force is exerted on the surgeon's nose and there is greater stress on the surgeon's neck. The presence of the support system in front of and over the surgeon's nose partially obscures the surgeon's near-central field of view, giving a less comfortable experience than the through-the-lens style. Flexibility is enhanced, since the eyeglass lenses can be exchanged to change the prescription or to add a laser or other filter. While adjustable eyepiece positioning is enabled by this mount, adjustment is possible only to a relatively small extent, owing to the need to keep the eyepiece support system small to minimize view occlusion, and owing to the relatively short length of the eyepiece support arms. Adjustable tilt is useful because it allows the surgeon to use various cervical spine extension angles while viewing the same magnified working area; but as the lenses protrude more than in a through-the-lens loupe, there is a greater likelihood of interference with a conventional mask.
A third style of loupe also uses a flip-up mount, but the support is on a headband rather than on the front of the eyeglasses. This relieves the nose of supporting the weight of the eyepieces and is therefore suitable for higher magnification and/or prismatic loupes that use a Keplerian rather than a Galilean configuration, in which a prism is used to undo the image inversion caused by the Keplerian tube. A larger support structure and longer support arms are required to hold the eyepieces in front of the surgeon's eyes, adding weight, but this weight can be distributed across the head by the headband structure. However, a longer support arm appears more prominently in the surgeon's peripheral vision, especially at larger eyepiece tilt angles, which is an undesirable feature of this configuration. While longer or larger support structures generally enable longer translation ranges and greater distances between the pivot point and the supported object, and thus greater freedom of adjustment, this comes at the expense of stability, because rotational head motion is amplified by the longer lever arm. On the other hand, personal eyewear, including laser safety glasses, is independent of the loupe system and is therefore easily used in conjunction with it, and such a loupe system can easily be shared between surgeons.
Many of the considerations and tradeoffs that arise in the field of surgical loupes also arise in the field of near-eye displays or head mounted displays, particularly those that provide visual augmentation. These include weight and comfort, stability and adjustability of fit, and preservation (or not) of peripheral vision. US patent publication US 20040113867 A1 teaches a head-mountable display system designed to minimize occlusion of the user's field of view while maintaining the ability to see above and/or below the display. The viewing angle of the display relative to the horizontal, comparable to the declination angle of a loupe, is adjustable, as are various fitting parameters of the system, so as to achieve a more comfortable fit and a better viewing experience in terms of reduced stress and preserved background perception. US 7319437 teaches a lightweight binocular near-eye display that preserves the user's forward peripheral vision; however, it does not specifically describe how to achieve this in a manner flexible enough for a wide range of head sizes and shapes.
The tubes of an analog surgical loupe are sometimes referred to as eyepieces, and the terms "eyepiece" and "ocular" may be used interchangeably to describe the lens system or lens element closest to the user's eye in an optical system designed for a human user. The term "objective lens" is generally used to describe the foremost lens of the lens tube, facing the object or work area. For an analog loupe, the optical axes of the objective and eyepiece are collinear unless the optical path is folded using reflective or refractive means (which add volume and weight). As previously mentioned, an advantage of a digital loupe is that the imaging side, comprising the pair of stereo cameras, and the display side, comprising the binocular near-eye display, diverge into two distinct entities. Information is transferred between them electronically, and there is no requirement that their optical axes be collinear or even aligned. This is advantageous because the means of supporting the two entities can be optimized independently with respect to factors such as adjustability, stability, peripheral vision, and ergonomics. For example, by introducing parallax between, or shifting, the relative viewpoints of the pair of stereo cameras and the eyes of the user, a direct view and an enhanced view of the object can be obtained simultaneously. Furthermore, a lens tube is generally understood to be an afocal optical system that provides angular magnification (the entering and exiting beams are substantially collimated). Thus, an angular offset of the tube results in a magnified angular shift of the image viewed through the tube. For bifurcated objectives and eyepieces, however, we must consider how an angular offset of each of these subsystems affects the viewed image: the angular offset of the objective is magnified when viewed at the eyepiece, while an angular offset of the eyepiece is not magnified. Thus, the stability requirement on the objective is stricter than that on the eyepiece by a factor of the magnification.
Furthermore, the magnification of a lens tube typically comes from the longer focal length of the objective relative to the eyepiece; the objective is therefore correspondingly larger and heavier than the eyepiece. To minimize the forward pull of the center of gravity beyond that of the surgeon's head alone, it is advantageous to mount the stereo camera pair (objectives) of the digital loupe behind the displays (eyepieces/oculars), moving the center of gravity backward in a manner not possible with conventional analog loupes. Furthermore, the only adjustment needed on the objective end is the tilt angle, as opposed to the eyepieces/oculars, which must be precisely aligned with the optical axes of the user's eyes.
Accordingly, there is a need for a new eyepiece support system, usable with analog loupes, digital loupes, head mounted displays, or any head mounted optical system that includes an eyepiece, that retains peripheral vision, and thus the user's perception of background and of an open field of view, while being lightweight, easily adjustable, and stable. The present invention aims to provide such an eyepiece support system, particularly suitable for a digital loupe system, wherein the supports for the eyepieces and for the pair of stereo cameras can be individually optimized and adjusted so that direct and enhanced viewing can take place simultaneously.
Although the prior art devices and methods lay a solid foundation for powerful visual aids for surgery, critical gaps remain in the physical and visual ergonomics of such systems, particularly with respect to: minimizing double vision through stable auto-convergence; preserving peripheral vision and comfortable visual parallelism between a magnified or enhanced view of an object and a direct view of the object, in a comfortable, stable, and easily adjustable form; and incorporating advanced optical imaging modalities, such as hyperspectral and multi-channel fluorescence imaging, without compromising image quality or spatial or temporal resolution. The object of the present invention is to fill these gaps and to provide several key improvements that make digital loupes an attractive and viable tool for enhancing surgeon vision.
Disclosure of Invention
Aspects of the present invention provide a digital loupe that combines freedom of motion and flexible working distance, ergonomic comfort, open peripheral vision, parallelism between magnified (or enhanced) vision and normal unobstructed vision, magnification with high image quality, and (optionally) advanced optical imaging modalities to enhance the surgeon's vision in real time. These aspects achieve these advantages via a particular arrangement of distance sensing, camera calibration, and image translation that presents the surgeon with a stereoscopic enhanced view of the surgical wound in an optimal manner, and via particular means of supporting the eyepieces in front of the surgeon's eyes. Unlike surgical microscopes, the freedom of motion and flexible working distance enabled by aspects of the present invention allow surgeons to quickly and naturally integrate views of the surgical wound from multiple viewpoints. And unlike conventional analog loupes, the open peripheral view and the parallelism of the direct and enhanced views allow the surgeon to maintain maximal background awareness of the surgical procedure, supporting smoother outcomes.
In one embodiment, the digital loupe includes a pair of stereo cameras mounted to the user's head, the pair of stereo cameras including a depth sensing element whose sensing direction nominally bisects the lines of sight, or optical axes, of the cameras of the pair. The depth sensing element may give a single, non-spatially-resolved measurement or a spatially resolved measurement. It may have a defined field of view, which may depend on the magnification of the digital loupe. The digital loupe may include illumination, also nominally directed along the line bisecting the camera lines of sight, whose parameters may be adjusted in response to the distance to the object or subject being viewed. It may also include a binocular head mounted display and a processor in operable communication with the pair of stereo cameras, the depth sensing element, and the binocular head mounted display. The processor may be in operative communication with an illumination controller that controls the illumination source so as to adjust parameters of the illumination, such as intensity and spatial distribution or intensity range, as a function of the distance measured by the distance sensor. The illumination may optionally be pulsed in synchronization with the exposure intervals of the pair of stereo cameras.
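As an illustrative sketch of such distance-driven illumination control (the inverse-square model, parameter names, and limits are assumptions, not taken from the specification), the controller could hold the irradiance at the work area roughly constant by scaling the source output with the square of the measured distance:

```python
def illumination_drive_level(distance_m: float,
                             target_irradiance_w_m2: float = 5.0,  # assumed setpoint
                             max_intensity_w_sr: float = 2.0,      # assumed full-power output
                             ) -> float:
    """Fractional drive level for a small source: irradiance E = I / d^2,
    so the intensity needed is E_target * d^2, clipped to the source's range."""
    level = target_irradiance_w_m2 * distance_m ** 2 / max_intensity_w_sr
    return min(max(level, 0.0), 1.0)
```

Pulsing the source in synchronization with the cameras' exposure intervals, as described above, could be handled by the same controller.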
The lines of sight of the pair of stereo cameras may intersect at a nominal working distance of the user, which may be, for example, an average distance between the eyes and hands of a surgeon in an operating pose, or an average of such distances for a group of surgeons. The difference between the predefined nominal working distance of the system and the actual working distance between the user's eyes and hands should be small. Further, the eyepieces of the binocular head mounted display may have a similar convergence angle, such that the optical axes of the left and right displays intersect at a similar nominal working distance. The head mounted display may have a virtual image distance (the distance between the user's eyes and the virtual image plane formed by the head mounted display optics) similar to the nominal working distance, and it may be designed to preserve the user's peripheral or "distance" vision. For example, the eyepieces of the head mounted display may be placed in the near vision position familiar to users of conventional loupes, with only minimal blurring of peripheral vision. This allows the user to toggle between direct vision of the surgical wound above or below the eyepieces and magnified or enhanced vision through the eyepieces of the head mounted display with only eye rotation (i.e., no head movement) and with minimal changes in visual accommodation and vergence, thus maximizing visual and ergonomic comfort and reducing eye fatigue. Thus, the direct view and the enhanced view are "concurrent." To further support seamless transitions between direct and enhanced vision, the digital loupe system may change one or more of the virtual convergence angle of the images within the eyepieces, the true convergence angle of the eyepieces themselves, and the virtual image distance, in response to information derived from the distance sensor, so as to minimize changes in vergence and visual accommodation when switching between direct and enhanced views of the object.
The processor may store and update calibration information that models the precise alignment of the cameras (e.g., the intrinsic and extrinsic camera matrices used in the pinhole camera model) or of other subsystems of the digital loupe, including the relative positions and orientations of all cameras, distance or other sensors, illumination sources, and displays or eyepieces. Depending on the ergonomic or mechanical degrees of freedom and the relative arrangement of these subsystems in the digital loupe, it may be necessary to track the state of these degrees of freedom in order to have a complete picture of the relative position and orientation of each subsystem. However, such a full picture is required only for some embodiments of the invention.
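A minimal sketch of how such pinhole-model calibration information might be stored (the field names and NumPy representation are assumptions, not the specification's format):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CameraCalibration:
    """Pinhole model for one camera, expressed in a common rig frame."""
    K: np.ndarray      # 3x3 intrinsic matrix (focal lengths, principal point)
    R: np.ndarray      # 3x3 rotation, rig frame -> camera frame
    t: np.ndarray      # 3-vector translation, so x_cam = R @ x_rig + t
    dist: np.ndarray   # lens distortion coefficients (e.g., k1, k2, p1, p2, k3)

@dataclass
class RigCalibration:
    left: CameraCalibration
    right: CameraCalibration
    depth_sensor_origin: np.ndarray  # distance sensor position in the rig frame
    depth_sensor_axis: np.ndarray    # unit vector of its sensing direction
```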
At a minimum, it is important to calibrate the cameras of the pair of stereo cameras, because in practice there are always slight differences between the designed and as-built camera parameters (position, orientation, sensor position relative to the optical axis of the lens, focal length, and pixel size). On top of the designed convergence angle of the stereo camera pair, these small differences manifest as image shifts that vary with the distance to the object being viewed, especially at high magnifications; they can shift the left and right images of the stereo pair enough that the images cannot be viewed directly through the binocular head mounted display without further transformation. With knowledge of the camera calibration information, combined with the distance to the object from the distance sensor, the processor can accurately correct the effects of slight camera misalignment as a function of distance. The processor may translate or transform the images prior to display such that they appear to come from a pair of stereo cameras whose optical axes converge at a point along the optical axis of the distance sensor, at the distance measured by the distance sensor. That is, to the user it appears that both cameras are aimed precisely at the object, directly in front of the stereo camera pair at the distance given by the distance sensor. Because the images are then viewed in a head mounted display whose left and right eyepiece optical axes converge at the nominal working distance, a magnified view of the object will appear in the center of each display, and thus also at the nominal working distance. Since the nominal working distance is the same as, or close to, the actual working distance between the user's eyes and the object, the user can switch between viewing the object directly and viewing it through the eyepieces with minimal change in eye vergence state. The processor may optionally perform a second transformation of the images, based on the measured distance, prior to display, such that the displayed object appears at the actual, rather than the nominal, working distance. This second transformation is equivalent to virtually adjusting the relative convergence angle of the two eyepieces so that the left and right eyes converge at the actual working distance (e.g., as measured by the distance sensor) when viewing the left and right images of the object with single, fused vision. Further, if variable focus eyepieces are used, the processor may adjust the virtual image distance to match the actual working distance. Thus, in this alternative approach, switching between a magnified or enhanced view of the object and a direct view of the object above or below the eyepieces requires no change in visual accommodation or vergence.
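The distance-dependent correction described above can be sketched as follows (assumptions: the pinhole calibration structure from the previous sketch, images already undistorted, and OpenCV for the warp; this is one straightforward realization, not necessarily the only one contemplated). Each image is re-rendered as if its camera had rotated about its own center to aim exactly at the point on the distance sensor's axis at the measured distance; a pure rotation about the camera center corresponds to the homography H = K * R_new * R_old^T * K^-1:

```python
import numpy as np
import cv2

def look_at_rotation(forward, down_hint=np.array([0.0, 1.0, 0.0])):
    """Rotation whose rows map rig coordinates to a camera frame whose z-axis
    points along `forward` (y down, per the usual computer-vision convention;
    degenerate if `forward` is parallel to `down_hint`)."""
    z = forward / np.linalg.norm(forward)
    x = np.cross(down_hint, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def converge_on_point(image, cal, target_rig):
    """Warp `image` so its camera appears to point directly at `target_rig`."""
    center = -cal.R.T @ cal.t                       # camera center in the rig frame
    R_new = look_at_rotation(target_rig - center)
    H = cal.K @ R_new @ cal.R.T @ np.linalg.inv(cal.K)
    return cv2.warpPerspective(image, H, image.shape[1::-1])

# Aim both cameras at the point reported by the distance sensor, d metres out:
# target = rig.depth_sensor_origin + d * rig.depth_sensor_axis
# left_out = converge_on_point(left_img, rig.left, target)
# right_out = converge_on_point(right_img, rig.right, target)
```

The optional second transformation mentioned above would amount to a further horizontal image shift chosen so that the displayed disparity corresponds to the actual, rather than nominal, working distance.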
If an imaging distance or depth sensor is used, or the geometry of the scene is estimated (e.g., by disparity calculation from the pair of stereo cameras, possibly made more accurate with a point depth sensor, or via structure-from-motion algorithms), it becomes possible to fully adjust the scene parallax. One example scenario where this capability is useful is transforming the viewpoints of the cameras of the stereo pair to match the viewpoints of the user's eyes. It is advantageous to mount the cameras as close to the head as possible, to minimize the lever arm relative to the head, as this makes the image most stable; thus, a preferred mounting location is on top of the user's head. The vertical displacement of the pair of stereo cameras relative to the user's eyes introduces vertical parallax into the viewed image, which can be mitigated via an appropriate transformation. Although spatially resolved depth information enables full correction of scene parallax, the average scene parallax may also be corrected using only a point distance sensor. If the relative geometry of the eyepieces and the pair of stereo cameras is known, the average scene parallax can be adjusted according to the measured distance by translating or shifting the image of the object so that it appears as if the pair of stereo cameras were always pointing at the object.
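For the point-sensor case, a minimal sketch (the function names and the fronto-parallel approximation are assumptions): a camera displaced by a lateral offset b from the eye sees a scene at measured distance Z shifted by approximately f*b/Z pixels, so the displayed image can simply be shifted back by that amount:

```python
import numpy as np
import cv2

def correct_average_parallax(image, distance_m, offset_m, focal_px):
    """Cancel the average parallax from a camera mounted offset_m = (x, y)
    metres away from the eye's viewpoint, treating the scene as a plane at
    the measured distance. Signs depend on the rig's coordinate conventions."""
    dx = focal_px * offset_m[0] / distance_m   # horizontal shift, pixels
    dy = focal_px * offset_m[1] / distance_m   # vertical shift, pixels
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(image, M, image.shape[1::-1])
```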
Additional cameras may be used to include other modalities, such as fluorescence imaging, polarization imaging, hyperspectral imaging, and the like. With an imaging depth sensor, for example, an NIR fluorescence imager, a polarization imager, and a color RGB stereo pair can be mounted side by side, and the spatially resolved depth information used to correct parallax and map the fluorescence or other information onto the viewpoints of the pair of stereo cameras, or onto the viewpoints of the user's eyes. The processor may include extrinsic and intrinsic camera calibration matrices, or other camera models, to map properly between different viewpoints with minimal registration error, without requiring computationally expensive and error-prone iterative registration algorithms.
It is an aspect of the present invention to provide a novel form of multi-channel imager better suited to a digital loupe than prior art multi-channel imagers. Here, a multi-channel imager refers to an imaging modality traditionally embodied as a single device, such as a hyperspectral imager or imaging polarimeter, that outputs an "image cube": a stack of individual 2D images, each corresponding to a single channel (such as a wavelength band or polarization component). Such imaging devices can be bulky and heavy, and are therefore not suitable for integration into digital loupes; furthermore, depending on the technique, they may not have sufficient spatial and/or temporal resolution or optical throughput. Instead, by using a calibrated array of miniature cameras, each camera recording one slice or channel of the image cube corresponding to a given modality, information from the imaging depth sensor can be used to remove the parallax of each camera of the array and synthesize the complete image cube as if it had been recorded simultaneously from a single viewpoint. This technique of the present invention has advantages over tiling spectral or polarizing filters at the pixel level, as it preserves spatial resolution and allows flexibility in integration, in the selection of filters, and in independent sensor and exposure parameters. It likewise has advantages over temporally scanned sensors, since no time scanning is involved. Thus, the multi-channel imaging technique of the present invention enables real-time integration of images from multiple different optical imaging modalities within a digital loupe system.
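A condensed sketch of this depth-based parallax removal (an assumed realization using the pinhole calibration sketched earlier, with OpenCV remapping): each channel camera's image is resampled into the depth sensor's viewpoint by back-projecting every depth pixel to 3D and projecting it into the channel camera:

```python
import numpy as np
import cv2

def channel_to_reference_view(channel_img, depth_ref, K_ref, K_ch, R, t):
    """Resample `channel_img` so it appears taken from the depth sensor's
    viewpoint. `depth_ref` holds metric depth per reference pixel; (R, t)
    map reference-frame points into the channel camera frame."""
    h, w = depth_ref.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts = (np.linalg.inv(K_ref) @ pix) * depth_ref.ravel()  # 3D points, reference frame
    p = K_ch @ (R @ pts + t[:, None])                       # project into channel camera
    map_x = (p[0] / p[2]).reshape(h, w).astype(np.float32)
    map_y = (p[1] / p[2]).reshape(h, w).astype(np.float32)
    return cv2.remap(channel_img, map_x, map_y, cv2.INTER_LINEAR)

# Stacking the reprojected channels yields a single-viewpoint image cube:
# cube = np.stack([channel_to_reference_view(img, depth, K_ref, c.K, c.R, c.t)
#                  for img, c in zip(channel_images, channel_cals)])
```

Occlusions (points visible to the depth sensor but hidden from a given channel camera) would need masking in practice; the sketch omits this.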
The invention also relates to an eyepiece support structure particularly suited for use in a digital loupe system. In the present disclosure, the word "eyepiece" may be used to describe any optical element or system of elements mounted in front of the eye for viewing or visualization by the eye, such as a lens tube in the case of an analog loupe, or an eyepiece, with or without an adjacent microdisplay, in the case of a head mounted display or near-eye display. Many embodiments of the present invention are directed to novel means of supporting and aligning eyepieces in front of the user's eyes while improving on the prior art in terms of overall ergonomics, including peripheral vision, comfort, stability, and adjustability. One embodiment of the present invention may occur in the context of a digital loupe system, such that the visual output of such a system can be displayed in a manner that allows comfortable and stable use over the many hours of a surgical procedure, allowing the surgeon to select the most ergonomic operating position while minimizing obstruction of the surgeon's peripheral vision.
Embodiments of the present invention include judicious placement of one or more support arms of the eyepieces relative to the anatomy of the human head. In some embodiments, the eyepiece support arms or systems described herein do not include the lens barrel or direct housing of a lens or eyepiece; rather, they comprise the linkages that mechanically connect the eyepieces to the user, i.e., any number of mechanical links extending away from the eyepieces, starting with the nearest link. Embodiments of the present invention may include eyepiece support arms, structures, or systems that reduce the burden on the nose and other sensitive parts of the head and face while maintaining as much peripheral vision, or as open a field of view, as possible. Some embodiments relate to eyepiece support systems that include multiple hinge points enabling full positional adjustment of the eyepieces or of components of such systems (such as the headband), better enabling such systems to perform as needed. Other embodiments relate to the placement of the eyepiece support arms relative to the wearer's head or relative to the user's field of view. Further embodiments contemplate the relative placement of the pair of stereo cameras such that their tilt can be adjusted separately from the tilt of the eyepieces, thereby enabling a more upright operating pose and parallelism between the view of the object through the eyepieces and the view of the same object, captured by the pair of stereo cameras, above or below the eyepieces.
There is provided a digital loupe system, comprising: a pair of stereo cameras adapted and configured to generate image signals of an object or work area; a distance sensor adapted and configured to obtain a measurement of distance to the object or work area; and a processor operatively connected to the pair of stereo cameras and the distance sensor, wherein the processor includes a memory configured to store camera calibration information related to the pair of stereo cameras, and the processor is configured to perform a transformation on the image signals from the pair of stereo cameras based on the distance measurements from the distance sensor and the camera calibration information.
In some embodiments, the transformation causes the image signals to appear as if they were generated from a pair of stereo cameras having optical axes that converge at a distance corresponding to the distance measurement.
In other embodiments, the distance sensor has an adjustable field of view. In some examples, the field of view of the distance sensor is adjustable based on a magnification of the digital loupe system. In other embodiments, the optical axis of the distance sensor approximately bisects the angle formed by the optical axes of the pair of stereo cameras. In another embodiment, the distance sensor is an imaging distance sensor. In another embodiment, the distance sensor has a narrow collimated beam.
In one embodiment, the pair of stereo cameras is adapted to be mounted on the crown or forehead of the user's head.
In some embodiments, the tilt angle of the pair of stereo cameras is adjustable.
In other embodiments, each camera of the pair has an optical axis, and the optical axes of the pair of stereo cameras are configured to converge at a distance approximately equal to an intended working distance of a user.
In some examples, the digital loupe system further includes a binocular head mounted display including a first display and a second display, the first and second displays being operatively connected to the processor to receive the image signals generated by the pair of stereo cameras from the processor and to display images according to the image signals. In some examples, the transformation causes the images to appear as if the pair of stereo cameras had optical axes that converge at a distance corresponding to the distance measurement. In other embodiments, the head mounted display is configured to have a virtual image distance that approximately corresponds to a working distance of the user. In some embodiments, the displays are mounted at a near vision location. In another embodiment, the processor is further configured to display the image signals in the displays at a spatially variable magnification. The binocular head mounted display may also include an ambient light sensor, the processor being further configured to use signals from the ambient light sensor to adjust display characteristics of the head mounted display. In some examples, the optical axes of the head mounted display converge at a distance that is approximately equal to a working distance of a user.
In some embodiments, the processor of the digital loupe system is further configured to shift the viewpoint of the image signals using distance information from the distance sensor.
In another embodiment, the pair of stereo cameras includes a color camera that provides a color image signal to the processor. In some embodiments, the processor is further configured to process the color image signal using a three-dimensional look-up table. In other examples, the processor is further configured to process the color image signal to remap colors from a region of color space in which the user is less sensitive to color changes to a second region of color space in which the user is more sensitive to color changes.
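A brief sketch of applying such a three-dimensional look-up table (the 17-per-axis grid and nearest-vertex sampling are illustrative simplifications; a production implementation would interpolate trilinearly):

```python
import numpy as np

def apply_3d_lut(image_rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each RGB pixel through an (N, N, N, 3) look-up table by nearest
    vertex. `image_rgb` is float in [0, 1]."""
    n = lut.shape[0]
    idx = np.clip(np.rint(image_rgb * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity LUT; color remapping (e.g., stretching low-sensitivity regions
# of color space into higher-sensitivity ones) would edit these entries:
# identity = np.stack(np.meshgrid(*[np.linspace(0, 1, 17)] * 3,
#                                 indexing="ij"), axis=-1)
```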
In some embodiments, the system is configured to perform image stabilization, either optically at the pair of stereo cameras or electronically at the processor.
In other embodiments, the cameras are configured to maintain focus automatically.
In one embodiment, the system further comprises an illumination source adapted to illuminate the object or work area. In some examples, the illumination source is controlled by an illumination controller that adjusts an illumination parameter based on a measurement of distance from the distance sensor. In other examples, the illumination may be pulsed in synchronization with an exposure interval of the pair of stereo cameras.
In some examples, at least one image sensor of the pair of stereo cameras is an RGB-IR sensor. In another embodiment, at least one image sensor has high dynamic range capability.
In some examples, the system further includes an additional imaging modality different from the imaging modality provided by the pair of stereo cameras. For example, the additional imaging modality may include a multi-channel imaging system.
There is also provided a multi-channel imaging system comprising: an array of at least two cameras, wherein at least two channels are distributed across the at least two cameras; an imaging distance sensor adapted and configured to image a field of view similar to the field of view imaged by the at least two cameras; and a processor configured to store camera calibration information relating to the at least two cameras, wherein the camera calibration information is defined in a coordinate system relative to the imaging distance sensor, wherein the processor is configured to receive image signals from the at least two cameras and depth information from the imaging distance sensor, and to use the depth information and the camera calibration information to correct for parallax between the at least two cameras to provide a multi-channel image that appears to originate from a single viewpoint.
In some embodiments, the system is a multispectral imaging system, the channels correspond to spectral bands, and the multi-channel image comprises a hyperspectral image.
In another embodiment, the system is an imaging polarimeter, the channels correspond to polarization combinations, and the multi-channel image comprises a polarimetry image.
There is also provided a method of obtaining a stereoscopic image of an object, the method comprising: obtaining a first image and a second image of an object with a first camera and a second camera; obtaining a measurement of a distance to the object with a distance sensor; and applying a transformation to the first and second images using the measurements of the distance and using calibration information for the first and second cameras.
In some examples, the method further includes displaying the transformed first and second images on first and second displays, respectively. Additionally, the method may include supporting the first display and the second display in a field of view of the user. In some embodiments, the optical axes of the first and second displays converge at a distance approximately equal to a working distance between the user and the object. In one example, the step of applying a transformation includes virtually adjusting a convergence angle of the first and second displays. In another example, the step of applying a transformation includes causing the first and second images to appear on the first and second displays as if the first and second cameras had optical axes that converged at a distance corresponding to the measurement of the distance.
In some embodiments, the applying step comprises using the measurement of the distance to adjust the field of view of the first and second images.
In other embodiments, the method further comprises using the measurement of the distance to shift the viewpoints of the first and second images.
In some embodiments, the method further comprises changing the magnification of the first and second images, and adjusting the field of view of the distance sensor as the magnification changes.
In another embodiment, the method further comprises changing a distance between the object and the first and second cameras, and adjusting the transformation as the distance changes.
The method may further comprise illuminating the object. In some examples, the illuminating step comprises determining an illumination parameter based on the measurement of the distance, and illuminating the object based on the illumination parameter. In another example, the illuminating step includes pulsing the illumination source in synchronization with the exposure intervals of the first and second cameras.
There is also provided a method of viewing an object, comprising: engaging a head engaging member with a user's head, the head engaging member supporting two cameras above the user's head; placing each of a first display and a second display in a line of sight of an eye of the user; obtaining a first image and a second image of the object with a first camera and a second camera; obtaining a measurement of a distance to the object with a distance sensor supported by the head engaging member; applying a transformation to the first and second images using the measurement of the distance and calibration information of the first and second cameras; and displaying the transformed first image and second image on the first display and the second display, respectively.
In some embodiments, the method further comprises supporting the first display and the second display with the head engaging member.
In one example, the optical axes of the first and second displays converge at a distance that is approximately equal to a working distance between the user and the object.
In one embodiment, the step of applying a transformation comprises virtually adjusting the convergence angle of the first and second displays.
In another example, the step of applying a transformation includes causing the first and second images to appear on the first and second displays as if the first and second cameras had optical axes that converged at a distance corresponding to a measurement of the distance.
In some embodiments, the applying step comprises using the measurement of the distance to adjust the field of view of the first and second images.
In other embodiments, the method further comprises using the measurement of the distance to shift the viewpoints of the first and second images.
In another embodiment, the method further comprises changing the magnification of the first and second images, and adjusting the field of view of the distance sensor as the magnification changes.
In some examples, the method further comprises changing a distance between the object and the first and second cameras, and adjusting the transformation as the distance changes.
In one embodiment, the method further comprises illuminating an object with an illumination source supported by the head engaging member. In some examples, the illuminating step comprises determining an illumination parameter based on the measurement of the distance, and illuminating the object based on the illumination parameter. In other examples, the illuminating step includes pulsing the illumination source in synchronization with an exposure interval of the first and second cameras.
There is also provided a method of obtaining a multi-channel image, the method comprising: obtaining at least first and second images of an object from at least first and second cameras; obtaining a depth image of an object using an imaging depth sensor; and applying a transformation to the at least first and second images based on the depth image and calibration information of the at least first and second cameras, wherein the at least first and second images correspond to a single channel of a multi-channel imaging modality and the transformation eliminates parallax between the at least first and second images.
In some examples, the channels correspond to spectral bands and the multi-channel image comprises a multi-spectral image. In other examples, the channels correspond to polarization combinations and the multi-channel image includes a polarimetry image.
There is also provided a head-mounted system for supporting a pair of eyepieces within a line of sight of a human user, the head-mounted system adapted to be worn by the user, the system comprising: a head engaging member adapted to engage with a user's head, and first and second support arms, each having: a proximal portion supported by the head engaging member, a distal portion arranged to support an eyepiece within a line of sight of a user, and a central portion disposed between the proximal portion and the distal portion, the head-mounted system configured such that when the head engaging member is engaged with a head of a user, the central portion of each support arm is configured to extend laterally and upwardly from the distal portion toward the proximal portion without extending through a region of the user's face above an interior side of the user's eye and below a brow of the user, and the proximal portion of each support arm is arranged and configured to be disposed inside the central portion.
In some embodiments, the proximal portion of each support arm is further configured to be disposed medial to the user's frontotemporal lobe when the head engaging member is engaged with the user's head.
In one embodiment, the central portion of each support arm is further configured to extend rearwardly from the distal portion toward the proximal portion when the head engaging member is engaged with the head of the user without extending through a region of the user's face that is above an inner side of the user's eyes and below the user's glabella.
In some examples, the proximal portions of the first and second support arms are each connected to the head engaging member by a hinge adapted to allow the angle between the support arms and the head engaging member to be changed. In one embodiment, the hinge is adapted to allow the proximal, central and distal portions of the support arm to move over the user's eye when the head engaging member is engaged with the user's head.
In some examples, the first and second support arms are each supported by a sliding connection, thereby allowing the height of the support arms relative to the head engaging member to be changed.
In another embodiment, each of the first and second support arms comprises a plurality of segments. In one embodiment, the system further comprises a connector connecting adjacent segments of each support arm. In some embodiments, the connector is adapted and configured to allow the effective length of the segments of the support arm to be adjusted.
In one example, a distal portion of each of the first and second support arms includes a display bar adapted to be connected to one of the pair of eyepieces. In one embodiment, the display bar of the first support arm is integral with the display bar of the second support arm. In another embodiment, the display bar of the first support arm is not connected to the display bar of the second support arm. In another embodiment, the system further comprises first and second hinges connecting the display bar to a central portion of the first and second support arms, respectively. In one example, the hinge is adapted and configured to allow the tilt angle of an eyepiece attached to the display bar to be changed. In another example, the hinge is adapted and configured to allow the first and second support arms to move towards or away from a user's head.
In some embodiments, the head engaging member comprises a headband. In some examples, the headband is adjustable to fit different user head sizes.
In one embodiment, the head engaging member comprises a plurality of parts adapted to engage the head of a user, the plurality of parts being connected by a flexible connection.
In another embodiment, the head engaging member comprises a connector adapted to connect to a head strap.
In some embodiments, the first and second support arms are both ends of a unitary support arm. In one example, the integral support arm has a goat-horn shape. In another example, the integral support arm has a partially rectangular shape.
In some embodiments, the system further comprises a transparent window attached to the eyepiece support and adapted to protect a user's face.
In other embodiments, the system includes a sensor configured to report the articulation state of the head-mounted system.
In one example, the articulation of the head-mounted system is adapted to be automatically actuated.
In one embodiment, the system further comprises a linkage between the first and second support arms, the linkage configured to actuate a corresponding portion of one of the support arms in response to actuation of a portion of the other support arm.
There is also provided an imaging system adapted to be worn by a human user to provide a view of a work area, the system comprising: a head-mounted subsystem for supporting a pair of eyepieces within a line of sight of a human user, the head-mounted subsystem adapted to be worn by the user, the head-mounted subsystem comprising a head-engaging member adapted to engage a user's head, and first and second support arms, each having a proximal portion supported by the head-engaging member, a distal portion arranged to support an eyepiece within the user's line of sight, and a central portion disposed between the proximal portion and the distal portion, the head-mounted subsystem configured such that when the head-engaging member is engaged with the user's head, the central portion of each support arm is configured to extend laterally and upwardly from the distal portion toward the proximal portion without extending through a region of the user's face above the inside of the user's eye and below the user's glabella, and the proximal portion of each support arm is arranged and configured to be disposed inside the central portion; two cameras supported by the head engaging member; first and second eyepieces supported by the distal portions of the first and second support arms, respectively, so as to be positionable within a user's line of sight when the head engaging member is engaged with the user's head; and a processor adapted and configured to display images obtained by the two cameras on a display of the eyepiece.
In some embodiments, the proximal portion of each support arm is further configured to be disposed inside a user's frontotemporal lobe when the head engaging member is engaged with the user's head.
In one embodiment, the central portion of each support arm is further configured to extend rearwardly from the distal portion toward the proximal portion without extending through a region of the user's face above an inner side of the user's eyes and below the user's glabella when the head engaging member is engaged with the user's head.
In another embodiment, the proximal portion of the first support arm and the proximal portion of the second support arm are each connected to the head engaging member by a hinge adapted to allow the angle between the support arms and the head engaging member to be changed. In some examples, the hinge is adapted to allow the proximal, central and distal portions of the support arm to move above the user's eye when the head engaging member engages the user's head.
In one embodiment, the first and second support arms are each supported by a sliding connection that allows the height of the support arms relative to the head engaging member to be varied.
In some embodiments, each of the first and second support arms comprises a plurality of segments. In one example, the system further comprises a connector connecting adjacent segments of each support arm. In other embodiments, the connector is adapted and configured to allow the effective length of the segments of the support arm to be adjusted.
In one embodiment, the system further comprises a first eyepiece support and a second eyepiece support adapted to vary a distance between the eyepieces.
In some examples, the head-mounted subsystem is configured to allow a tilt angle of the eyepieces relative to a user's line of sight to be changed.
In another embodiment, the distal portion of each of the first and second support arms includes a display bar supporting the first and second eyepieces. In one example, the display bar of the first support arm is integral with the display bar of the second support arm. In another example, the display bar of the first support arm is not connected to the display bar of the second support arm. In one embodiment, the system further comprises first and second hinges connecting the display bar to the central portions of the first and second support arms, respectively. In one embodiment, the hinge is adapted and configured to allow the tilt angle of the eyepieces to be changed. In another embodiment, the hinge is adapted and configured to allow the first and second support arms to move towards or away from the head of the user.
In some examples, the head engaging member comprises a plurality of components adapted to engage the head of a user, the plurality of components being connected by a flexible connection.
In other examples, the first and second support arms are both ends of a unitary support arm.
In some embodiments, each of the first and second support arms has a goat-horn shape.
In another embodiment, each of the first and second support arms has a partially rectangular shape.
In one embodiment, the system further comprises a transparent window attached to the eyepiece support and adapted to protect the user's face.
In another example, the system further comprises a distance sensor supported by the head engaging member.
The system may include a camera mount movable relative to the head engaging member to change a viewing angle of one or both of the cameras.
In one embodiment, the system further comprises a transparent window extending in front of the display and adapted to protect a user's face.
In some embodiments, the system further comprises an illumination source supported by the head engaging member.
In another embodiment, the system includes a sensor configured to report the articulation status of the head mounted subsystem.
In some embodiments, the articulation of the head-mounted subsystem is adapted to be automatically actuated.
In another example, the system includes a linkage between the first and second support arms, the linkage configured to actuate a corresponding portion of one of the support arms in response to actuation of a portion of the other support arm. In one example, the linkage mechanism includes a sensor configured to sense an actuation status of the portion of one of the support arms and report the actuation status to the processor, and an actuator configured to actuate the corresponding portion of the other support arm and receive commands generated by the processor, the processor configured to generate commands to the actuator in response to reports received from the sensor.
In one embodiment, the head engaging member comprises a headband. In some examples, the headband is adjustable to fit different user head sizes.
In another embodiment, the head engaging member comprises a connector adapted to connect to a head strap.
There is also provided a method of viewing a work area, comprising: engaging a head engaging member with a user's head, the head engaging member supporting two cameras above the user's head, placing each of two eyepieces within a user's eye line of sight, the first and second eyepieces being supported by first and second support arms, respectively, the first and second support arms being positioned such that a central portion of each support arm extends laterally and upwardly from the eyepiece toward the head engaging member without extending through a region of the user's face above the inside of the user's eyes and below the user's brow, each of the first and second support arms being supported at a location of the head engaging member that is inboard of the central portions of the first and second support arms, respectively; and displaying an image of the work area obtained by the camera in the eyepiece.
In some examples, the step of supporting comprises supporting each of the first and second support arms at a location of the head engaging member that is medial to a user's frontotemporal lobe.
In one embodiment, when the head engaging member is engaged with the head of the user, the central portion of each support arm also extends rearwardly from the distal side of the eyepiece toward the head engaging member without extending through a region of the user's face that is above the inner side of the user's eyes and below the user's glabella.
In some examples, the method further includes viewing the work area along a line of sight extending over the eyepiece.
In another embodiment, the method further comprises viewing the work area along a line of sight extending below the eyepieces.
In one embodiment, the method further comprises simultaneously viewing the work area through and around the eyepiece.
In another example, the method includes moving the eyepiece upward relative to the user's eye.
In some embodiments, the method includes moving the eyepiece downward relative to the user's eye.
In another example, the method includes varying a distance between the eyepieces.
In some embodiments, the method further comprises adjusting the shape of the head engaging member to fit the head of the user.
In some examples, the method includes moving at least one of the first support arm and the second support arm inboard or outboard.
In another example, the method includes moving the first support arm and the second support arm over the user's eyes.
In some embodiments, the method further comprises obtaining a measurement of the distance from the cameras to the work area and applying a transformation to the images obtained by the cameras to produce transformed images, the displaying step comprising displaying the transformed images on the eyepieces. In one example, the step of obtaining a measurement of the distance from the cameras to the work area is performed using a distance sensor supported by the head engaging member. In another example, the step of applying a transformation includes virtually adjusting the convergence angle of the first and second eyepieces. In one embodiment, the step of applying the transformation comprises causing the first and second images to appear on the first and second eyepieces as if the first and second cameras had optical axes that converged at a distance corresponding to the measurement of the distance.
In one example, the method further comprises illuminating the object. In some examples, the illuminating step comprises determining an illumination parameter based on the measurement of the distance, and illuminating the object based on the illumination parameter. In another example, the illuminating step includes pulsing the illumination source in synchronization with the exposure intervals of the first camera and the second camera.
In another embodiment, the method further comprises automatically moving at least one of the first support arm and the second support arm.
In some embodiments, the method includes automatically moving at least a portion of the second support arm in response to movement of a corresponding portion of the first support arm.
In one example, the method includes sensing an actuation state of one of the support arms.
There is provided a head mounted system for supporting an eyepiece within a line of sight of a human user, the head mounted system adapted to be worn by the user, the system comprising a head engaging member adapted to engage with the head of the user, and a support arm having a proximal portion supported by the head engaging member, a distal portion arranged to support the eyepiece within the line of sight of the user, and a central portion arranged between the proximal portion and the distal portion, the head mounted system being configured such that when the head engaging member is engaged with the head of the user, the central portion of the support arm is configured to extend laterally and upwardly from the distal portion towards the proximal portion without extending through a region of the user's face that is above the medial side of the user's eye and below the glabella of the user, and the proximal portion of the support arm is arranged and configured to be arranged inside the central portion.
In some embodiments, the proximal portion of the support arm is further configured to be disposed medial to the user's frontotemporal lobe when the head engaging member is engaged with the user's head.
In another embodiment, the central portion of the support arm is further configured to extend rearwardly from the distal portion toward the proximal portion when the head engaging member is engaged with the user's head without extending through a region of the user's face that is above an inner side of the user's eyes and below the user's glabella.
In some examples, the proximal portion of the support arm is connected to the head engaging member by a hinge adapted to allow the angle between the support arm and the head engaging member to be changed. In one embodiment, the hinge is adapted to allow the proximal, central and distal portions of the support arm to move over the user's eye when the head engaging member engages the user's head.
In some embodiments, the support arm is supported by a sliding connection, thereby allowing the height of the support arm relative to the head engaging member to be changed.
In another example, the support arm includes a plurality of segments. In some examples, the system further comprises a connector connecting adjacent segments of the support arm. In one example, the connector is adapted and configured to allow the effective length of the segments of the support arm to be adjusted.
In some embodiments, the distal portion of the support arm includes a display bar adapted to connect to one of the pair of eyepieces. In one example, the system further includes a hinge connecting the display bar to a central portion of the support arm.
In some embodiments, the hinge is adapted and configured to allow the tilt angle of an eyepiece attached to the display bar to be changed. In one example, the hinge is adapted and configured to allow the support arm to move towards or away from the head of the user.
In one embodiment, the head engaging member comprises a headband. In some examples, the headband is adjustable to fit different user head sizes.
In another embodiment, the head engaging member comprises a plurality of components adapted to engage the head of a user, the plurality of components being connected by a flexible connector.
In some examples, the head engaging member comprises a connector adapted to connect to a head strap.
In another embodiment, the support arm has a goat-horn shape. In another example, the support arm has a partially rectangular shape.
In some embodiments, the system includes a transparent window attached to the eyepiece support and adapted to protect a user's face.
Drawings
The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
FIG. 1 illustrates a surgeon operating with an exemplary embodiment of the present invention.
Fig. 2 shows a schematic diagram of an embodiment of the invention.
Fig. 3 shows a schematic diagram of an exemplary binocular head mounted display of the present invention, including working distance and convergence angle associated with the virtual image plane.
Fig. 4 is a schematic diagram of a pair of cameras along with a distance sensor whose optical axis nominally bisects the optical axes of the pair of cameras.
Fig. 5 depicts a frontal projection of the head, depicting a preferred region for guiding the eyepiece support arm.
Fig. 6 shows a plot depicting a preferred region for guiding the eyepiece support arm within the field of view of a user's left eye.
Fig. 7A is a perspective view of a digital magnifier system.
Fig. 7B is a side view of the digital magnifier system.
Fig. 7C is a front view of the digital magnifier system.
Fig. 8A-8C illustrate different articulation states of the digital magnifier system.
Fig. 9A-9D further illustrate different articulation states of the digital magnifier system.
Fig. 10A-10B illustrate a segmented headband contemplated for use in a digital magnifier system.
Fig. 11A-11D depict different views and articulation states of the eyepiece support structure.
Fig. 12A-12D depict different views and articulation states of another eyepiece support structure.
Fig. 13A-13D depict different views and articulated states of yet another eyepiece support structure.
Fig. 14 depicts a coupled side arm of the eyepiece support structure.
Fig. 15 depicts a portion of an eyepiece support structure in which the eyepieces are coupled at an incline by a top member.
Fig. 16A-16B illustrate a face mask that can be used with the digital magnifier system of the present invention.
Detailed Description
Fig. 1 depicts a surgeon 100 operating on a wound 110 (i.e., a target tissue site or surgical work area) and wearing an example embodiment of the present invention, which includes a sensing/illumination unit 120 and a binocular Head Mounted Display (HMD) 130. Both the sensing unit 120 and the HMD 130 are operatively connected to a processor (not shown). The sensing unit 120 includes a pair of stereo cameras that capture a stereo image of the wound 110 and transmit it to the HMD 130. The HMD 130 has eyepieces or oculars 131a, 131b that are mounted in the "near" vision position familiar to those wearing conventional surgical loupes and bifocal eyeglasses, so as to retain "normal" or "far" vision. The surgeon 100 may view the wound 110 directly, such as along a "far" vision line of sight above the eyepieces of the HMD 130, or view a magnified version of the wound 110 through the HMD 130. The virtual image distance of the HMD 130 is approximately the same as the working distance from the surgeon's eyes to the wound 110. In addition, the optical axes of the HMD 130 converge at the nominal position of the surgical wound 110 relative to the surgeon 100. Thus, when the surgeon 100 switches between looking directly at the wound 110 and looking through the HMD 130, there is minimal change in the accommodation or convergence of his eyes. As will be explained further below with respect to system ergonomics, the sensing unit 120 is mounted on top of the surgeon's head to provide a stable mounting platform, since the potentially high magnification achieved by the system benefits from stable camera mounting. Furthermore, the displacement of the sensing unit 120 relative to the HMD 130, in a direction transverse to the optical axis of the HMD 130, enables simultaneous presentation of direct and magnified views of the surgical wound 110 in the field of view of the surgeon 100. The surgeon 100 can switch between centering the direct view of the wound 110 in his field of view and centering the magnified view, with only eye rotation and without the need to move his head. Thus, the direct view around the HMD and the magnified view in the HMD are "concurrent". The placement and support of the eyepieces of the HMD 130 are such that an open view of the surgical wound 110 and the surrounding environment of the surgeon 100 is maintained during the surgical procedure for maximum situational awareness.
Note that as used herein, a pair of stereoscopic cameras may include any electronic imaging device that outputs signals that can be viewed stereoscopically with a suitable display. For example, it may include two color RGB cameras with a baseline separation, similar to the separation of a person's two eyes, providing slightly different viewpoints, thus providing a stereoscopic view when presented on a binocular head mounted display. Alternatively, it may comprise two infrared cameras, or other types of cameras or focal plane arrays. As another alternative, it may comprise a single plenoptic (light field) camera, where the signals for the left and right displays are virtually rendered by computing an image derived from the viewpoint offset. As a further alternative, it may comprise a single camera and depth imager, with the combined information from the single camera and depth imager being used to simulate a second viewpoint for stereoscopic vision.
Fig. 2 shows a schematic diagram 200 of an embodiment of the invention. This embodiment includes three main components: a processor 210, a sensing unit 220, and a Head Mounted Display (HMD) 230. The sensing unit 220 may include a pair of stereo cameras 221, a distance sensor 222, and an illumination source 223. The processor 210 may hold camera calibration information in a memory module 211 and may be used to control the magnification settings of embodiments of the present invention based on input from a user, such as voice commands, button presses, gestures, or other means of capturing user intent. The processor 210 may receive information in the form of left and right images from the pair of stereo cameras 221 and distance measurements from the distance sensor 222. The processor 210 may be used to transform the left and right images from the pair of stereo cameras 221 based on the camera calibration information and the distance measurements, in particular to cause the images, when displayed, to converge the eyes at the nominal or actual working distance, and the processor 210 may send the transformed images to the HMD 230 for display. The processor 210 may filter the distance measurements over time, and it may adjust settings of the distance sensor 222, the pair of stereo cameras 221, the illumination source 223, or the HMD 230. For example, it may adjust the integration time or field of view of the distance sensor 222, the exposure time of the pair of stereo cameras 221, or the illumination level or spatial distribution of the illumination source 223, based on image information from the pair of stereo cameras 221, distance measurements from the distance sensor 222, or other information sources such as ambient light sensors. The processor 210 may adjust the focus settings of one or both cameras of the pair of stereo cameras 221 and/or one or both oculars of the HMD 230, and/or it may receive focal length information from the pair of stereo cameras 221 and compare it to the distance measurements of the distance sensor 222. Further, the processor 210 may be used to control and/or perform optical and/or electronic image stabilization. The distance sensor 222 may comprise, for example, a light- or sound-based time-of-flight sensor, a triangulation- or capacitance-based sensor, or any other known means of measuring distance. To the extent that distance information may be calculated from the stereo disparity between images obtained from a calibrated pair of stereo cameras, the function of the distance sensor 222 may be carried out by a pair of stereo cameras (such as the pair of stereo cameras 221) together with the processor 210.
The illumination source 223 may comprise one or more different types of illumination sources, such as a white LED with a phosphor designed to cover most of the visible spectrum, an LED or laser for fluorescence excitation, a plurality of LEDs combined to form a wavelength-tunable illumination source, or an incandescent or plasma source such as a xenon arc lamp. The source may be located on the sensing unit 220, or placed remotely and guided to the sensing unit 220 via a light guide, or placed remotely and directed to the surgical wound by free-space propagation. The processor 210 may pulse the illumination source 223 in synchronization with the exposure intervals of the pair of stereo cameras 221 in order to achieve a shorter exposure time than would be possible with the same average illumination intensity without pulsing; such pulsed operation is a useful strategy for mitigating motion blur at higher magnifications. The processor 210 may control the angular range or angular/spatial distribution of the illumination beam exiting the illumination source 223, potentially in accordance with the distance measured by the distance sensor 222, to match the field of view of the pair of stereo cameras 221, potentially in accordance with the magnification of the digital magnifier system. Variation of the angular and/or spatial extent and/or distribution of illumination may be achieved in a number of ways: by using zoom optics in front of the LED; by using an array of individually addressable LEDs in front of a lens, so that the illumination intensity distribution at the surgical wound is controlled by the intensity setting of each LED; or by using other forms of tunable beam shaping, such as those developed by LensVector™. The illumination source 223 may comprise a plurality of individually addressable LEDs of different wavelengths, where the light is mixed together and directed to the object in the form of a beam. With this arrangement, multispectral images of an object can be captured by time-sequential illumination with different wavelengths or, better for video-rate imaging, by time-multiplexed combinations of wavelengths, as described in Park, Jong-Il, et al.: "Multispectral Imaging Using Multiplexed Illumination," 2007 IEEE 11th International Conference on Computer Vision, IEEE, 2007.
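To make the benefit of pulsed operation concrete, the following minimal sketch (not from the patent; all names and numbers are illustrative) computes the duty cycle and peak power available when the source is pulsed only during the camera exposure window:

```python
def pulse_parameters(exposure_ms, frame_period_ms, avg_power_mw):
    """Concentrate a fixed average optical power into the exposure
    window: duty cycle = exposure / frame period, and the peak power
    scales up by its inverse, allowing a shorter (less motion-blurred)
    exposure at the same image brightness."""
    duty = exposure_ms / frame_period_ms
    peak_power_mw = avg_power_mw / duty
    return duty, peak_power_mw

# Example: a 2 ms exposure at 60 fps (16.7 ms frame period)
duty, peak = pulse_parameters(2.0, 16.7, avg_power_mw=100.0)
# duty ~= 0.12, peak ~= 835 mW during each pulse
```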
Fig. 3 depicts portions of a binocular head mounted display of an embodiment of the present invention. The user's left and right eyes 301a, 301b view the corresponding near-eye displays/eyepieces 331a, 331b of the head-mounted display at a fixed convergence angle 305. The head mount and support structure of the near-eye displays 331a, 331b (such as, for example, one or more of the head-mounted embodiments described below) allow adjustment of the interpupillary distance (IPD) 303 of the displays such that the optical axes of the near-eye displays/eyepieces 331a, 331b (and the centers of the displays) are aligned with the optical axes 302a, 302b of the user's eyes 301a, 301b, thereby projecting the center of each display onto the center of the retina of each corresponding eye. By setting the appropriate focusing distance between the eyepiece optics and the display panel of each near-eye display/eyepiece 331a, 331b, the virtual image 309 of the near-eye displays 331a, 331b is set at a virtual image distance 304 corresponding to the user's nominal working distance. The virtual image 309 at the virtual image distance 304 is also the position where the optical axes 302a, 302b nominally intersect when aligned with the optical axes of the near-eye displays/eyepieces 331a, 331b. Thus, whether the user is looking through the near-eye displays/eyepieces 331a, 331b or directly at an object or work area at the nominal working distance, there is little or no change in the accommodation or convergence of the user's eyes, which facilitates a seamless, comfortable transition between the two views. Furthermore, as will be explained later, the ergonomics of the digital loupe contemplated in the present invention allow both the object or work area and the near-eye displays to be placed in the user's field of view simultaneously, a condition enabled by the lateral displacement of the pair of stereo cameras relative to the optical axes of the near-eye displays.
As described above with respect to the digital magnifier system of fig. 1, some head mounted display systems employ a distance sensor and a pair of stereo cameras to obtain images for display on the near-eye displays. Fig. 4 depicts viewpoint frustums 401a, 401b indicating the orientation and angular field of view of the pair of stereo cameras of the head-mounted digital magnifier system, wherein the viewpoint frustum 411 of the distance sensor has an optical axis 413 that nominally bisects the angle between the optical axes 403a, 403b of the pair of stereo cameras. The optical axes 403a, 403b correspond to the centers of the fields of view of the frustums 401a, 401b. The optical axes 403a, 403b converge toward a point near the nominal working distance of the user of the digital magnifier, such that an object at their nominal convergence point is also at or near the nominal convergence point of the optical axes of the user's eyes (i.e., the convergence point of the optical axes 302a, 302b in fig. 3). For example, referring to fig. 3, the interpupillary distance 303 may be 60mm and the angle 305 may be 0.1rad, so the distance 304 may be about 600mm, corresponding to the nominal working distance. Thus, an object point depicted at the center of each near-eye display 331a, 331b appears to be located at a distance of 600mm from the user's eyes. Referring to fig. 4, ideally, the optical axes 403a, 403b of the stereo camera frustums 401a, 401b nominally converge at the same point 600mm from the user's eyes. In practice, there may be slight angular misalignment of these optical axes from their ideal positions, which will be dealt with later.
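The example numbers above follow from the small-angle relation between interpupillary distance, convergence angle, and working distance:

$$d \approx \frac{\mathrm{IPD}}{\theta} = \frac{60\ \mathrm{mm}}{0.1\ \mathrm{rad}} = 600\ \mathrm{mm}.$$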
The camera frustums 401a, 401b of the pair of stereo cameras may each have a field of view 402a, 402b that is larger than the field of view of the near-eye displays 331a, 331b. Nominally, the near-eye displays 331a, 331b depict a magnified view compared to what is seen by the naked eye. For example, an angular magnification in the range of 2x to 10x may be used. In some embodiments, the magnification may be about 1x, i.e., nominally unmagnified. One way to achieve this magnification is to select a portion of each camera's field of view 402a, 402b for depiction at an enlarged size on each display 331a, 331b (i.e., crop and zoom). Suppose we select a portion of each field of view 402a, 402b around the optical axes 403a, 403b for display. As the magnification of the digital magnifier system increases, the displayed portion of each field of view 402a, 402b shrinks around the respective optical axis 403a, 403b. At high magnification, if the object is not located near the nominal intersection of the optical axes 403a, 403b, the object may disappear from the displayed portions of the fields of view 402a, 402b. Furthermore, if there is a slight misalignment of the optical axes 403a, 403b, for example if they do not intersect, it is not possible to view a magnified object with single (fused) vision, because the magnified object will be displaced differently in the view of each eye 301a, 301b depending on the exact misalignment of each optical axis 403a, 403b.
A solution to both problems is to use information from the distance sensor, represented by the frustum 411 with a potentially adjustable field of view 412 and optical axis 413, together with camera calibration information about the cameras represented by the frustums 401a, 401b, in order to compute a transformation of the images from those cameras before cropping and scaling. For example, assume that an object is located at a distance 414 along the optical axis 413 of the distance sensor. If the cameras represented by the frustums 401a, 401b had optical axes pointing toward the object, e.g., along the axes 404a, 404b, they would register the object at the centers of their fields of view 402a, 402b, and would therefore display the object at the center of each display 331a, 331b, providing comfortable single vision without problems. In reality, however, the optical axes 403a, 403b do not point toward the object, so the object does not appear at the center of the fields of view 402a, 402b, and it is not possible to comfortably view the magnified object through the near-eye displays 331a, 331b without double vision, or perhaps at all.
To remedy this, the system may compute a transformation of the images from the cameras represented by the frustums 401a, 401b that depends on the distance measurement from the distance sensor represented by the frustum 411 and on the camera calibration information (e.g., stored in the memory module 211 of the system of fig. 2), so that the images appear as if the object detected at the distance 414 were at the center of the fields of view 402a, 402b, i.e., as if the axes 404a, 404b were the optical axes of the cameras represented by the frustums 401a, 401b. To this end, we make extensive use of the pinhole camera model, a useful mathematical abstraction relating the positions of points in the physical three-dimensional (3D) object space to positions in the two-dimensional (2D) image space corresponding to pixel coordinates within the image. The operations referenced herein, including camera calibration to obtain a camera matrix and affine or projective transformations to transform images between viewpoints based on camera calibration information, are available as software routines in most computer vision software packages, such as OpenCV. A common practice in such software packages for the operations referenced herein is to use the mathematics of homogeneous coordinates and projective geometry. A 3D object point X may be written in 4D homogeneous coordinates and a 2D image point y may be written in 3D homogeneous coordinates. Ignoring image distortion (methods of dealing with image distortion in this process are known in the art), the mapping between object and image space may be performed by multiplying the object point X by the (3×4) camera matrix to obtain the image point y. The camera matrix comprises extrinsic parameters related to camera position and orientation, and intrinsic parameters related to focal length, the optical center of the image, and pixel size. It is known in the art how to obtain the parameters of such a camera matrix for both single-camera and multi-camera systems by a process called camera calibration (e.g., using the routines available in OpenCV). Camera calibration information may also be obtained using a procedure such as that described in Zhang, Z.: "A Flexible New Technique for Camera Calibration," Microsoft Research Technical Report MSR-TR-98-71 (Dec. 2, 1998). Both the camera matrix and a matrix representing the inverse transformation (mapping from coordinates in a given image to real-world coordinates, up to a scale factor) can be obtained. The inverse transformation is known only up to a scale factor corresponding to the depth, or distance of the object point from the camera. However, if the distance is known, the object point corresponding to an image point of the camera can be unambiguously determined, which facilitates registration of image points recorded from different camera viewpoints but corresponding to the same object point.
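As a concrete illustration of the pinhole-model bookkeeping described above, the following sketch (not part of the patent; an idealized, distortion-free example with made-up parameter values) projects an object point through a camera matrix W = CH and then inverts the mapping when the depth is known:

```python
import numpy as np

# Intrinsic matrix C: focal lengths fx, fy in pixels; principal point (cx, cy)
C = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic matrix H = [R | t]: camera pose relative to the distance sensor
R = np.eye(3)                           # no rotation in this toy example
t = np.array([[-30.0], [0.0], [0.0]])   # camera 30 mm to the left
H = np.hstack([R, t])                   # (3 x 4)

W = C @ H                               # full (3 x 4) camera matrix

# Forward mapping: object point X (4D homogeneous) -> image point y
X = np.array([0.0, 0.0, 600.0, 1.0])    # on the sensor axis, 600 mm away
y = W @ X
y /= y[2]                               # normalize homogeneous coordinates
print("pixel coordinates:", y[:2])      # -> [280. 240.]

# Inverse mapping: an image point fixes a ray; the depth z fixes the scale
ray = np.linalg.inv(C) @ y              # ray direction in the camera frame
z = 600.0
X_cam = ray * (z / ray[2])              # point on the ray at depth z
X_world = R.T @ (X_cam - t.ravel())     # back to the distance-sensor frame
print("recovered object point:", X_world)   # -> [0. 0. 600.]
```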
The camera matrix can be decomposed into a matrix of its intrinsic parameters and a matrix of its extrinsic parameters, the complete camera matrix being the product of the two. The extrinsic parameter matrix corresponds to a rigid transformation that may include both rotation and translation. We denote the camera matrix for each camera i of the pair of stereo cameras represented by frustums 401a, 401b as W_i, which can be decomposed into an intrinsic component C_i and an extrinsic component H_i, so that W_i = C_i H_i. The optical axes 403a, 403b of the cameras represented by frustums 401a, 401b nominally intersect at some working distance, possibly with slight misalignment with respect to their design orientation and with respect to the center of each corresponding image sensor. Assume that the distance sensor represented by frustum 411 is located at the origin of the 3D Cartesian coordinate system and that the distance measurement to the observed object is reported as the point along optical axis 413 with homogeneous coordinates X = (0, 0, z, 1)^T. This point can be converted into an image point using the camera matrix W_i, i.e., y_i = W_i X. The image point y_i is now taken as the new center of the image from camera i, and the cropping and scaling of the image is performed around this new image center. After the image is cropped, scaled, and displayed in the corresponding near-eye display 331a, 331b, the object point corresponding to the intersection of the distance-sensor optical axis 413 with the observed object will appear at the center of each near-eye display 331a, 331b.
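A minimal sketch of this recentered crop-and-zoom (hypothetical function and variable names; assumes the camera matrix W_i and the measured distance z are available, e.g., from the calibration store and the distance sensor):

```python
import numpy as np
import cv2

def recenter_crop_zoom(image, W_i, z, magnification):
    """Crop and zoom camera i's image around the pixel where the
    distance-sensor axis point X = (0, 0, z, 1)^T projects."""
    X = np.array([0.0, 0.0, z, 1.0])
    y = W_i @ X
    cx, cy = y[0] / y[2], y[1] / y[2]       # new image center, in pixels

    h, w = image.shape[:2]
    crop_w, crop_h = int(w / magnification), int(h / magnification)
    x0 = int(round(cx - crop_w / 2))
    y0 = int(round(cy - crop_h / 2))
    # clamp so the crop window stays inside the sensor area
    x0 = max(0, min(x0, w - crop_w))
    y0 = max(0, min(y0, h - crop_h))
    crop = image[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```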
Another way to transform the images from the cameras represented by frustums 401a, 401b is to assume that the entire viewed object is planar and perpendicular to the optical axis 413 at the measured distance z from the distance sensor represented by frustum 411. Each image point of camera i, expressed in homogeneous coordinates as (a, b, 1)^T, is associated via the intrinsic camera matrix with a ray that emanates from the origin of the camera and passes through points expressed in the camera's object-space coordinate system. The ray may be written as (x'w, y'w, w)^T, where the primes indicate that we are in the camera's coordinate system. This coordinate system may be converted to the reference coordinate system of the distance sensor represented by frustum 411 using the inverse of the extrinsic camera matrix. If we assume that the object lies on a plane perpendicular to the optical axis 413 at the measured distance z, we can solve for the parameter w at each image point to obtain the coordinates of the assumed object point corresponding to that image point. This process is equivalent to calculating the intersection of the ray associated with the image point with the hypothetical planar object detected by the distance sensor represented by frustum 411. For each camera i, we can assign an ideal extrinsic camera matrix that aims the center of the camera toward the point X at the measured distance z along the optical axis 413; in fig. 4, if the distance z is given by 414, this corresponds to reorienting the camera frustums 401a, 401b along the axes 404a, 404b. By multiplying the object-point coordinates corresponding to each image point by the ideal extrinsic camera matrix and then by the intrinsic camera matrix, we can convert the image points into new image points as if the camera were aimed at point X and the object were planar. Although similar to the earlier, simpler process of translating a given image to align its center with a point along the optical axis 413, this latter process is more general in that it captures the complete homography between the observed image from camera i and the image that would be seen with the camera in the ideal orientation (e.g., aimed at point X). However, there is no significant difference between the two processes provided that the ideal camera position and orientation are close enough to the actual camera position and orientation.
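The plane-assumption transformation can be sketched as computing, for the four image corners, the ray-plane intersections and their re-projections under the ideal camera, then warping with the induced homography (a sketch under the stated planar assumption; names are hypothetical):

```python
import numpy as np
import cv2

def plane_warp(image, C, H_act, W_ideal, z):
    """Warp camera i's image as if taken by an ideally aimed camera,
    assuming the scene is a plane perpendicular to the distance-sensor
    axis at measured distance z. C, H_act: intrinsic/extrinsic matrices
    of the actual camera (in the distance-sensor frame); W_ideal: full
    camera matrix of the ideal (re-aimed) camera."""
    h, w = image.shape[:2]
    R, t = H_act[:, :3], H_act[:, 3]
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float64)

    dst = []
    for (u, v) in corners:
        ray_cam = np.linalg.inv(C) @ np.array([u, v, 1.0])  # ray, camera frame
        origin = -R.T @ t                   # camera center in reference frame
        direction = R.T @ ray_cam           # ray direction in reference frame
        s = (z - origin[2]) / direction[2]  # intersect the plane Z = z
        X = origin + s * direction          # assumed object point
        y = W_ideal @ np.append(X, 1.0)     # re-project with the ideal camera
        dst.append([y[0] / y[2], y[1] / y[2]])

    Hmg = cv2.getPerspectiveTransform(corners.astype(np.float32),
                                      np.array(dst, dtype=np.float32))
    return cv2.warpPerspective(image, Hmg, (w, h))
```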
After completing the transformations described above, the left and right images of an object or work area are displayed in, and centered with respect to, the left and right eyepieces of a head-mounted display, such as the near-eye displays 331a, 331b of fig. 3. Since the optical axes 302a, 302b of these displays converge at an angle 305 to a point at the nominal working distance 304 (which may be similar to the actual working distance, e.g., distance 414 of fig. 4), the eyes 301a, 301b will not have to change convergence significantly to view the object or work area directly, as compared to viewing it through the near-eye displays 331a, 331b. Further, the processor 210 may adjust the convergence angle 305 of the near-eye displays 331a, 331b, either virtually (by translating the displayed images left and right) or physically (by rotating the near-eye displays 331a, 331b), such that when the left eye 301a and the right eye 301b look through the near-eye displays 331a, 331b, they converge at the actual working distance corresponding to the measurement from the distance sensor 222 represented by the frustum 411. The processor 210 may also virtually or physically change the convergence angle 305 in proportion to changes in the measured distance to the object or work area being viewed. Finally, the processor 210 may change the focus state of the near-eye displays 331a, 331b to match or track the virtual image plane 309 to the actual working distance corresponding to the measurement from the distance sensor 222. In this way, switching between viewing the object directly (e.g., above or below the near-eye displays 331a, 331b) and viewing it through the near-eye displays 331a, 331b requires no, or only minimal, changes in the accommodation and/or vergence state of the eyes 301a, 301b.
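A hedged sketch of the virtual (image-translation) form of this convergence adjustment, using the small-angle model and a hypothetical display scale in pixels per radian:

```python
def convergence_shift_px(ipd_mm, measured_mm, nominal_mm, px_per_rad):
    """Horizontal shift to apply to each eye's image (opposite signs for
    the two eyes) so the displayed scene appears to converge at the
    measured working distance rather than the nominal one.
    Small-angle approximation: total convergence angle ~ IPD / distance."""
    theta_nominal = ipd_mm / nominal_mm     # e.g. 60 / 600 = 0.1 rad
    theta_measured = ipd_mm / measured_mm
    # each eye carries half of the change in convergence angle
    return 0.5 * (theta_measured - theta_nominal) * px_per_rad

# Example: user leans in from 600 mm to 450 mm; a display spanning
# ~1280 px over ~0.7 rad gives roughly 1800 px/rad
shift = convergence_shift_px(60.0, 450.0, 600.0, px_per_rad=1800.0)
# shift ~= 30 px: translate left-eye image by +30 px, right-eye by -30 px
```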
It is a feature of the present invention that the distance sensor represented by the frustum 411 may have a defined field of view 412, which may be adjustable. Distance measurements are then derived only from objects within the field of view 412. The field of view 412 of the distance sensor represented by the frustum 411 may depend on the magnification of the digital magnifier system, decreasing as the magnification increases. This ensures that the field of view 412 of the distance sensor matches (or corresponds to) the field of view displayed to the user on the near-eye displays 331a, 331b. The VL53L1X, a lidar time-of-flight sensor from STMicroelectronics, provides such an adjustable field-of-view feature. However, changing the field of view affects the amount of light collected in a given distance measurement, and thereby the measurement accuracy, and an individual measurement may not be accurate enough on its own, so some form of temporal filtering of the distance measurements is desirable. The distance sensor represented by the frustum 411 may be calibrated to ensure the accuracy of its distance measurements under operating conditions. Further, the camera calibration information (e.g., orientation and position) may be referenced to calibration information of the distance sensor represented by the frustum 411, such as a coordinate system defined by the position and orientation of that distance sensor.
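The temporal filtering and the magnification-matched field of view might be sketched as follows (illustrative only; the patent does not prescribe a particular filter, and the sensor's actual region-of-interest API is not shown):

```python
class FilteredDistance:
    """Exponential moving average of noisy distance readings; alpha
    trades responsiveness against noise (the value here is illustrative)."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = None

    def update(self, measurement_mm):
        if self.value is None:
            self.value = measurement_mm
        else:
            self.value += self.alpha * (measurement_mm - self.value)
        return self.value

def sensor_fov_deg(display_fov_deg, magnification):
    """Shrink the distance sensor's field of view as magnification grows,
    so it matches the field of view shown in the eyepieces."""
    return display_fov_deg / magnification
```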
In some embodiments, it may be preferable to have a distance sensor that utilizes a narrow collimated beam, such as a laser-based time-of-flight distance sensor (e.g., the TF-Luna distance sensor from Benewake, Inc.), so that there is minimal ambiguity about the actual distance measured within the field of view. Typically, time-of-flight sensors report the measured distance based on statistics such as the average time of flight of all collected photons. If the collected photons form a bimodal histogram of photon counts versus distance (e.g., if the active area of the distance measurement spans an edge between foreground and background objects), the average will lie between the two peaks, and the reported distance will not correspond to the center of either peak. Thus, the optics of the distance sensor may be configured with a narrow beam, minimizing the possibility of encountering such an ambiguous distance measurement scenario.
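A toy numerical illustration of that failure mode, assuming 60% of the returns come from a foreground surface at 500 mm and 40% from the background at 1000 mm:

```python
import numpy as np

rng = np.random.default_rng(0)
distances = np.concatenate([rng.normal(500, 5, 600),    # foreground photons
                            rng.normal(1000, 5, 400)])  # background photons
print(round(np.mean(distances)))  # ~700 mm: matches neither surface
```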
A further possibility is enabled if the distance sensor represented by the frustum 411 is an imaging distance sensor providing a spatially resolved point map, or point cloud, over its field of view. Consider the case above of a hypothetical planar object at a measured distance z along the optical axis 413 and perpendicular to that axis. With spatially resolved distance information, we can relax the assumption that the object is planar. The point cloud reported by the imaging distance sensor represents points on the object surface, and these points can be mapped into the camera coordinate system to associate each image point with an object surface point. This means that for each point in the image we can find the exact object point in our reference coordinate system. Thus, we can use a new virtual camera matrix to re-project the object points of a given image so as to see them as if imaged by a virtual camera that may have a different position, orientation, focal length, etc. For example, the sensing unit 120 is worn on the forehead of the surgeon 100, but the HMD 130 is naturally worn in front of the eyes. We can re-project the images derived from the sensing unit 120 as if they were imaged by cameras located at the positions of the eyes of the surgeon 100, particularly if the relative positions and orientations of the cameras and the surgeon's eyes are at least approximately known. In this way, the effective viewpoint of the sensing unit 120 becomes the same as that of the surgeon 100, thereby reducing or eliminating parallax with respect to the surgeon's viewpoint. Even without an imaging distance sensor, it is still useful to perform this operation to remove the average parallax across the image, which can be done by again assuming that the object is planar at a certain distance z along the optical axis 413 and then re-projecting those assumed object points onto the viewpoint of the surgeon 100.
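A simplified sketch of this point-cloud-based re-projection (hypothetical names; a forward "splat" that ignores occlusion and leaves holes that a real renderer would z-buffer and inpaint):

```python
import numpy as np

def reproject_to_virtual_view(image, points_xyz, pixels_uv, W_virtual, out_size):
    """Re-render an image from a virtual viewpoint (e.g., the user's eye),
    given per-pixel object points from an imaging distance sensor.
    points_xyz: (N, 3) object points in the reference coordinate system;
    pixels_uv:  (N, 2) their pixel coordinates in the source image."""
    h_out, w_out = out_size
    out = np.zeros((h_out, w_out, 3), dtype=image.dtype)
    X_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    y = (W_virtual @ X_h.T).T               # project all points at once
    uv_new = y[:, :2] / y[:, 2:3]
    for (u0, v0), (u1, v1) in zip(pixels_uv, uv_new):
        ui, vi = int(round(u1)), int(round(v1))
        if 0 <= ui < w_out and 0 <= vi < h_out:
            out[vi, ui] = image[int(v0), int(u0)]
    return out   # holes and occlusions are left to a real implementation
```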
Returning to fig. 2, note that the processor 210 may be configured to update the camera calibration information stored in the memory 211 during operation of the digital magnifier system, such as by performing the camera calibration routine described in the Zhang publication referenced above. Alternatively, the processor 210 may identify similar features between the cameras of the pair of stereo cameras 221 and adjust the camera calibration information 211 such that, when the processor 210 transforms the images of the pair of stereo cameras 221 using translation or full homography, these similar features appear in similar locations for each eye of the binocular head mounted display 230. This can be done using self-calibration techniques such as those described in Dang, T., et al.: "Continuous Stereo Self-Calibration by Camera Parameter Tracking," IEEE Trans. Image Proc., vol. 18, no. 7 (July 2009). This may be important for correcting slight misalignments of the optical axes of the pair of stereo cameras 221 that may develop over time during operation of the digital loupe system.
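Far simpler than the Kalman-filter parameter tracking of Dang et al., the following sketch illustrates the underlying signal: residual vertical disparity between matched features, which should be near zero for a correctly calibrated stereo pair (hypothetical function name; uses standard OpenCV feature matching):

```python
import cv2
import numpy as np

def vertical_misalignment_px(img_left, img_right):
    """Estimate residual vertical disparity between the stereo images;
    a persistently nonzero value suggests the stored calibration has
    drifted and should be corrected."""
    orb = cv2.ORB_create(500)
    kL, dL = orb.detectAndCompute(img_left, None)
    kR, dR = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(dL, dR)
    dy = [kR[m.trainIdx].pt[1] - kL[m.queryIdx].pt[1] for m in matches]
    return float(np.median(dy))   # median is robust to outlier matches
```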
In another embodiment of the present invention, a multi-channel imager is provided that combines an array of multiple single-channel imagers and uses an imaging depth sensor to remove parallax among the multiple single-channel imagers, such that the multi-channel image appears to be derived from a single camera or viewpoint. The process of mapping one view to another may be the same as that used in the previous embodiments of the present invention. For example, the multi-channel imager may include a processor configured to store camera calibration information relating to at least two cameras, where the calibration information is defined in a coordinate system relative to an imaging distance sensor of the system. The processor of the multi-channel imager may be configured to receive image signals from the cameras and depth information from the imaging distance sensor, and to use the depth information and the camera calibration information to correct for parallax between the cameras to provide a multi-channel image that appears to originate from a single viewpoint. Examples of multi-channel imagers are hyperspectral imagers and Stokes imaging polarimeters. Imaging depth sensors have previously been used to combine images from different modalities; for example, US 2018/0270474 A1 teaches that depth information may be used to register images acquired by a variety of intraoperative optical imaging modalities, such as NIR fluorescence, color RGB, or hyperspectral imaging using tunable liquid crystal filters or mechanical filter wheels. However, depth information has not previously been used to implement a single-modality multi-channel imager. This is a conceptual advance over the prior art: a multi-channel optical imager can be formed from an array of single-channel imagers (nominally arranged in a plane transverse to the line of sight) together with an imaging depth sensor, which provides sufficient information to remove the parallax effects arising from the different positions of the imagers in the array. The output of such a system would comprise a multi-channel image cube, as obtained from a conventional multi-channel imager (i.e., from a single viewpoint).
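A schematic sketch of assembling such a parallax-free multi-channel cube (hypothetical names; one camera per channel, nearest-neighbor sampling, occlusion ignored):

```python
import numpy as np

def parallax_free_cube(channels, cam_matrices, points_xyz, W_ref, out_hw):
    """Assemble a multi-channel image cube that appears to originate from
    the reference viewpoint W_ref. Each channel comes from a different
    camera; the shared point cloud from the imaging depth sensor lets
    every channel be sampled at the same object points."""
    h, w = out_hw
    X_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    y_ref = (W_ref @ X_h.T).T
    uv_ref = np.round(y_ref[:, :2] / y_ref[:, 2:3]).astype(int)

    cube = np.zeros((h, w, len(channels)))
    for c, (img, W_c) in enumerate(zip(channels, cam_matrices)):
        y_c = (W_c @ X_h.T).T               # where each object point lands
        uv_c = np.round(y_c[:, :2] / y_c[:, 2:3]).astype(int)
        for (ur, vr), (uc, vc) in zip(uv_ref, uv_c):
            if 0 <= ur < w and 0 <= vr < h and \
               0 <= uc < img.shape[1] and 0 <= vc < img.shape[0]:
                cube[vr, ur, c] = img[vc, uc]
    return cube
```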
Such a multi-channel imager may be combined with the digital magnifier system of the present invention to provide other intraoperative optical imaging modalities simultaneously within the magnified view of the digital magnifier system. For example, a sensor array of a contemplated multi-channel imaging system may cover a plurality of individual spectral bands, such that with parallax removed, the output will comprise a multispectral or hyperspectral image. The hyperspectral image can be analyzed and compared to prior information to determine the regions of the surgical wound 110 that include cancerous tissue to be resected. An image may be formed indicating the probability of cancerous tissue at each pixel location. This image may be superimposed or combined with the magnified image presented on the HMD 130 of the digital magnifier system using known image fusion techniques, so that the surgeon 100 has a more accurate map of where to resect tissue than from the magnified image alone.
Similarly, the channels of the multi-channel imager may each correspond to an independent Stokes polarization component. Thus, the multi-channel imager may comprise a Stokes imaging polarimeter. A Stokes imaging polarimeter would be a useful addition to a digital magnifier because it can provide a glare-reduced image, either alone or in combination with modified illumination polarization. If used in combination with circularly polarized illumination, the Stokes polarization image can potentially be used to visualize birefringent structures such as nerves, as described in Cha et al.: "Real-time, label-free, intraoperative visualization of peripheral nerves and micro-vasculatures using multimodal optical imaging techniques," Biomedical Optics Express 9(3): 1097 (2018).
Other embodiments of the digital magnifier system provide enhanced functionality relative to the prior art. For example, as mentioned in the background, US 10230943 B2 teaches a digital loupe with integrated fluorescence imaging in which both NIR (fluorescence) and visible (RGB) light are recorded by one sensor having a modified Bayer pattern, where pixels sensitive to the visible and infrared bands are tiled on the same sensor. The paired stereo cameras of the present invention may include one or more such sensors. A limitation of such sensors is that when NIR and visible light are imaged simultaneously, the same exposure, gain, and other settings are used for both. However, some modern image sensors have High Dynamic Range (HDR) capability, making multiple consecutive exposures with different durations. Combining HDR with such RGB-NIR sensors can be exploited to optimize the imaging conditions of the visible and near-infrared channels, e.g., exposure duration, separately.
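A minimal sketch of this idea follows, assuming a hypothetical RGB-NIR sensor that delivers a short and a long HDR exposure of the same mosaic, plus a Boolean mask marking the NIR pixel sites (all names are illustrative assumptions): bright reflected visible light is taken from the short exposure, while the typically dim NIR fluorescence is taken from the long one.

    import numpy as np

    def split_hdr_rgb_nir(short_exp, long_exp, nir_mask):
        # short_exp, long_exp: consecutive HDR exposures of one RGB-NIR mosaic.
        # nir_mask: True at the NIR sites of the modified Bayer pattern.
        visible = np.where(nir_mask, 0, short_exp)  # bright, reflected light
        nir = np.where(nir_mask, long_exp, 0)       # dim fluorescence signal
        return visible, nir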
Some aspects of the present invention are directed to enhancing the user experience of a digital magnifier system. For example, it may be desirable to soften the edges of the displayed image in each eye, e.g., using digital vignetting, so that the eyes are not attracted to sharp edges of the image.
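For illustration, such softening can be as simple as a per-pixel alpha ramp that fades the displayed image to black over a narrow border (a minimal sketch; the border fraction is an arbitrary choice):

    import numpy as np

    def soft_vignette(img, border=0.1):
        # img: H x W x 3 color frame for one eye. Build a linear ramp from 0
        # at the display edge to 1 over `border` of each axis.
        h, w = img.shape[:2]
        ry = np.clip(np.minimum(np.arange(h), np.arange(h)[::-1]) / (border * h), 0, 1)
        rx = np.clip(np.minimum(np.arange(w), np.arange(w)[::-1]) / (border * w), 0, 1)
        mask = np.outer(ry, rx)
        return (img.astype(np.float32) * mask[..., None]).astype(img.dtype)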
The digital magnifier system may include an ambient light sensor that detects the spectrum and/or intensity of ambient light. It is well known that ambient light can affect the viewing experience, so measurements of ambient light can be used to adjust, for example, the white point and brightness settings of a head mounted display of a digital loupe system.
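A crude sketch of such an adjustment, assuming the sensor reports an approximate ambient RGB color and illuminance (a von Kries-style white point nudge plus a brightness scale; real display calibration is considerably more involved):

    import numpy as np

    def adapt_display(img, ambient_rgb, ambient_lux, ref_lux=500.0):
        # Nudge the display white toward the ambient chromaticity and scale
        # brightness with ambient intensity, clamped to a comfortable range.
        white = np.asarray(ambient_rgb, np.float32)
        gains = white / white.max()
        brightness = np.clip(ambient_lux / ref_lux, 0.3, 1.0)
        out = img.astype(np.float32) * gains * brightness
        return np.clip(out, 0, 255).astype(np.uint8)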
It may be useful to present images in a digital magnifier with spatially variable magnification. For example, the central rectangular portion of the image in each near-eye display (possibly a region extending 20% of each display's field of view in each dimension) may be displayed at a significantly higher magnification than the surrounding portions. If this high magnification were used over the entire image, the user might lose the context of the portions of the object surrounding the displayed portion. With spatially variable magnification, however, both high magnification and preservation of context can be achieved.
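One way to realize this is a separable, piecewise-linear remapping in which the central fraction of each output axis samples the source at the higher magnification and the surround is compressed so that the full source still fits the display. The sketch below is illustrative only (the 2x and 20% values echo the example above and are not fixed parameters):

    import numpy as np
    import cv2

    def variable_magnification(img, m=2.0, a=0.2):
        # Map output coordinate t in [-1, 1] to source coordinate s: slope 1/m
        # inside the central fraction `a`, with the remainder compressed so
        # that s(1) = 1 and the whole source remains visible.
        def axis_map(n):
            t = np.linspace(-1, 1, n)
            s = np.where(np.abs(t) <= a,
                         t / m,
                         np.sign(t) * (a / m + (np.abs(t) - a) * (1 - a / m) / (1 - a)))
            return ((s + 1) / 2 * (n - 1)).astype(np.float32)
        h, w = img.shape[:2]
        map_x = np.tile(axis_map(w), (h, 1))
        map_y = np.tile(axis_map(h)[:, None], (1, w))
        return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)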
The processor of the digital magnifier system may implement a color replacement algorithm; the most common is a three-dimensional look-up table that replaces a given color with another color. It is known that the response or sensitivity of the eye to light of different colors and intensities differs significantly from that of standard color cameras. For example, the eye is most sensitive to light intensity variations at green wavelengths, but less sensitive to intensity variations at red and blue wavelengths. Useful information is therefore likely to be lost between the color image that is recorded and the color image that is displayed to the user. Images of surgical procedures are expected to contain many red tones, primarily due to the presence of hemoglobin and other pigments in blood and tissue. Not only is the human eye less sensitive to red wavelengths, but typical electronic displays can have difficulty reproducing the saturated reds found in images of blood, as these can fall outside the display color gamut. In either case, it may be advantageous to shift reds, especially saturated reds, toward green (e.g., to make them yellow) so that the eye can distinguish more subtle changes in red tissue. In effect, this increases the amount of perceptual information available to the user, and it can easily be done with a three-dimensional look-up table. The color replacement may also be dynamic, or may be determined by algorithms that utilize machine learning.
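As a sketch of the look-up-table mechanism (the table size and the hue-shift rule are illustrative assumptions; a production system would interpolate between table entries rather than quantize):

    import numpy as np
    import cv2

    def build_red_to_yellow_lut(n=33, hue_shift=20):
        # 3-D RGB look-up table that pushes saturated reds toward yellow.
        axis = np.linspace(0, 255, n).astype(np.uint8)
        grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
        hsv = cv2.cvtColor(grid.reshape(-1, 1, 3), cv2.COLOR_RGB2HSV)
        h, s = hsv[..., 0], hsv[..., 1]
        red = ((h < 10) | (h > 170)) & (s > 128)  # saturated reds; OpenCV hue is 0-179
        hsv[..., 0] = np.where(red, (h + hue_shift) % 180, h)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB).reshape(n, n, n, 3)

    def apply_lut(img, lut):
        # Replace each pixel by its nearest entry in the 3-D table.
        n = lut.shape[0]
        idx = img.astype(np.int32) * (n - 1) // 255
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]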
Ergonomic enhancements are also provided in various embodiments of the present invention. Fig. 5 shows a frontal projection of a forward-looking person's head 500. Note that the present invention is not limited to configurations that require the user to look forward; for example, the user may have a downward gaze. Vertical lines 510 intersect horizontal line 511 at the pupils of the eyes. Circles 531 and 532 are approximately centered on the pupils, such that objects within circle 531 will appear closer to the visual center of the person shown in fig. 5 than objects within circle 532 but not within circle 531. Objects outside circle 532 will appear within the peripheral vision of the person (i.e., only at the edges of the person's vision) or will not be visible at all. Vertical lines 521 intersect the frontotemporal lobes of the person's head 500 to define regions 522 outside the frontotemporal lobes. The frontotemporal lobe is here defined as the foremost point of the temporal ridge on either side of the frontal bone of the skull; the temporal ridge marks a transition between the more vertical slope on the lateral side of the skull and the more horizontal slope on the medial side. Region 512 lies medial to and above the pupils and extends vertically to about the top edge of the person's peripheral vision, approximately aligned with the brow ridge of head 500, or to the glabella, which is the point between the eyebrows.
When viewed in frontal projection of head 500, prior art eyepiece supports typically encroach upon, intersect, or fit within region 512 and/or region 522. For example, spectacle-like supports utilize temple pieces, supported by the ears, that pass through region 522. Furthermore, existing binocular head-mounted magnifiers include a pair of simple magnifiers mounted in a visor that is attached to a headband at the sides of the head, outside the frontotemporal lobes. Front-lens-mounted or flip-up magnifier systems typically have a support arm that descends from above within region 512 when viewed in frontal projection.
When viewed in frontal projection of head 500, the eyepiece support system or support arm of the present invention can support the eyepiece in the eye's line of sight and then extend laterally outward, rearward, and upward (e.g., radially outward at least with respect to circles 531 and 532) while avoiding intersection with region 512, and then extend to the head engaging member at a location that is inboard of region 522. Auxiliary support arms may intersect regions 512 and/or 522, for example to link together two eyepieces each supported via a main support arm following the pattern described above. A secondary support arm linking the two eyepieces and passing through region 512 may still be substantially out of the user's view if it is routed so that, from the user's point of view, it lies primarily behind the eyepieces' apparent field of view. It is also beneficial if the image viewed through the eyepiece extends to the edge of the eyepiece. Although this blurs the image edge, because the eyepiece edge is close to the eye and not in focus, the presence of this blurred image edge within the user's field of view obscures the eyepiece support arm even further, so that the image appears to float in front of the eye with minimal visible support. In addition, blurring at the edges of the image helps prevent the eyes from being attracted by sharp image edges, which might otherwise interfere with binocular vision by providing conflicting binocular cues when two eyepieces are used in a binocular head mounted display.
Particular head-mounted systems employing eyepiece support arms that conform to the general criteria listed above will be described in further detail below. They are preferred over eyepiece support systems having a main support arm that descends through region 512 because they do not create the same uncomfortable feeling of having something directly in front of the face. Extending the proximal ends of the eyepiece support arms to a medial frontotemporal position enables the head-worn eyepiece support system of the present invention to accommodate different user head widths, which is more easily done if the proximal ends of the support arms extend to the head-engaging member at or near the top of the head rather than the sides of the head. In some embodiments, the two support arms are separate structures supported by the head engaging member. In other embodiments, the two support arms are part of a unitary structure that is centrally supported by the head engaging member and extends distally from the central support point to their respective eyepieces or eyepiece support structures.
Fig. 6 shows a plot 600 of the extent of the field of view 606 of the left eye of a subject. Vertical line 601 and horizontal line 602 intersect at the visual center, corresponding to the fovea. Contour lines 603, 604, and 605 represent specific angular deviations from the visual center, each greater than the previous one. For example, contour 603 represents a 10 degree deviation from the visual center, contour 604 a 30 degree deviation, and contour 605 a 60 degree deviation. A visual area may be designated as lying in one of four quadrants. Those on the same side of vertical line 601 as the subject's nose are labeled "nasal" and those on the same side of vertical line 601 as the subject's left temple are labeled "temporal". Likewise, the area above horizontal line 602 is labeled "upper" and the area below horizontal line 602 is labeled "lower". Thus, the four quadrants are the upper nasal quadrant 610, the upper temporal quadrant 611, the lower temporal quadrant 612, and the lower nasal quadrant 613. The outline of eyepiece 620 is shown centered at the visual center, although this is only a nominal position, and other positions near the visual center are contemplated. The eyepiece 620 is supported by an eyepiece support arm 621.
Embodiments of the present invention include an eyepiece, such as eyepiece 620, supported by an eyepiece support arm, such as support arm 621, that is attached to the eyepiece in a manner that avoids obscuring vision in the upper nasal quadrant 610. The support arm has a more distal portion extending laterally beyond the eyepiece support location, a more proximal portion extending medially toward the head-engaging member, and a central portion extending between the distal and proximal portions at or nearly beyond the periphery of the user's vision. In some embodiments, the support arm may have multiple segments that are movable relative to each other to change the position of the eyepiece it supports and to adjust the system to fit the user's head. From the user's point of view, the eyepiece support arms described herein have the same advantages as those described with reference to fig. 5: minimal obscuration of peripheral vision, particularly in the sensitive region between and above the eyes, and the ability to accommodate a range of head widths.
Figs. 7A-7C depict an embodiment of a digital magnifier system 700 worn on a user's head 701. The head-mounted system of this embodiment may be used to support eyepieces other than digital magnifier eyepieces. Portions of the head-mounted system may also be used to support a single eyepiece using, for example, a single eyepiece support arm and associated structure. Fig. 7A depicts a perspective view, fig. 7B a side view, and fig. 7C a front view. This embodiment includes an adjustable binocular display and support structure 710 and a pair of stereo cameras 720 mounted on a head engaging member 730 on a user's head 701. The adjustable binocular display and support structure has a pair of eyepieces 711a and 711b supported by an adjustable support arm that minimizes interference with the user's vision, as described below. The pair of stereo cameras 720 is mounted in the housing 702 via a rotational hinge 721 at an adjustable tilt angle, so that the cameras 722a, 722b of the pair 720 can be pointed in a desired direction, e.g., at an object or work area. In addition to the pair of stereo cameras 720, a distance sensor 723 and an illumination source 724 are arranged in the housing 702. The cameras 722a, 722b, the distance sensor 723, and the illumination source 724 all have optical axes that converge at a user's nominal working distance (e.g., 50 cm). As described above with reference to figs. 1-4, the cameras 722a, 722b and the distance sensor 723 are controlled by a controller (not shown) to display images, e.g., of a work area or object, on the eyepieces 711a, 711b for viewing by a user wearing the device.
In this embodiment, eyepieces 711a and 711b are supported by a segmented support arm structure that extends proximally from a distal eyepiece support location to the periphery of the user's vision by extending laterally outward, rearward, upward, and medially before coupling to head engaging member 730 at a medial frontotemporal position. In embodiments, the support structure includes an optional display bar, to which the eyepieces are movably attached, and a pair of support arms that may include hinges allowing: adjustment of the lateral position of each eyepiece, e.g., to accommodate the interpupillary distances of different users; coupled adjustment of the vertical tilt angle of the eyepieces; coupled adjustment of the vertical position of the eyepieces; and coupled adjustment of the exit pupil distance of the eyepieces. Further, the gap between the support arms and the sides of the head may be adjustable.
In particular, eyepieces 711a and 711b are each coupled to display bar 712 by a slidable coupling mechanism in order to adjust the interpupillary distance. The display bar 712 forms an eyepiece support arm that depends from the side support arms 715a, 715b and is primarily obscured from the user's view by the eyepieces 711a, 711b, which can display images that extend at least to the edges of the eyepieces 711a, 711b. The convergence angle of the eyepieces may be maintained independently of their sliding position or adjusted with additional hinges (not shown) that rotate each eyepiece inward relative to the other. The display bar 712 extends laterally outward from the eyepieces to connect to the distal ends of the side support arms 715a and 715b via hinges 713a, 713b and hinges 714a, 714b. The display bar 712 can be rotated about the hinges 713a, 713b to adjust the tilt angle of the eyepieces. The tilt angles of the two eyepieces are adjusted together in this way, avoiding vertical divergence (dipvergence) and thus double vision. The hinges 714a, 714b allow the side support arms 715a, 715b to move toward and away from the sides of the user's head.
In the embodiment shown in figs. 7A-7C, the side support arms 715a and 715b each have three straight segments connected by corner connectors 703a, 703b and sliding connectors 716a, 716b. In other embodiments, the side support arms may be a unitary member having straight and/or curved portions. By varying the effective height of the side support arms 715a, 715b, i.e., the distance the side support arms 715a, 715b extend downwardly from the head engaging member, the sliding connectors 716a, 716b enable the vertical height of the eyepieces 711a, 711b relative to the user's head 701 to be adjusted. The side support arms 715a, 715b are rotatably connected to the top support arm 718 via hinges 717a, 717b, the top support arm 718 being coupled to the head engaging member 730 via a rotational hinge 719. The rotational hinge 719 is located on the medial side of the user's frontotemporal lobe when the head engaging member is engaged with the user's head. Similar to hinges 714a, 714b, hinges 717a, 717b allow the side support arms 715a, 715b to move toward and away from the sides of the user's head. The axes of rotation of hinges 714a and 717a are nominally collinear, and the axes of rotation of hinges 714b and 717b are nominally collinear, to enable movement of the side support arms 715a, 715b to adjust the gap between the side support arms and the sides of the user's head. The exit pupil distance, or distance from eyepieces 711a, 711b to the user's face, is adjusted primarily via rotation of the top support arm 718 about hinge 719, which causes the side support arms 715a, 715b and display bar 712 to move toward or away from the user's face. When the head engaging member 730 is engaged with the user's head, the display bar 712 extends laterally outward from the eyepieces 711a, 711b to the side support arms 715a, 715b, and the side support arms 715a, 715b extend rearward and upward from the hinges 713a, 713b to a position at or outside the perimeter of the user's field of view. The side support arms 715a, 715b may also extend laterally outward if they have been rotated away from the user's head about the hinges 714a, 714b and 717a, 717b. The top support arm 718 extends medially from its connection with the side support arms 715a, 715b to the head engaging member 730. Thus, this configuration enables the support arms to extend from the eyepieces to their connection with the head engaging member on the inside of the user's frontotemporal lobes without extending through the region of the user's face that is medial to and above the centers of the user's eyes and below the user's glabella.
Figs. 8A-8C illustrate various hinged states of the embodiment of the digital magnifier system 700 shown in figs. 7A-7C; the front view of fig. 7C is reproduced in fig. 8A for reference. Fig. 8B shows the system 700 adjusted to give the user a larger interpupillary distance relative to the state shown in fig. 8A, which can be achieved by sliding the eyepieces 711a, 711b along the display bar 712. Fig. 8C shows the system 700 with a larger gap between the side arms 715a, 715b and the sides of the wearer's head 701 than in the states shown in figs. 8A and 8B; this state involves a change in the state of the rotational hinges 714a, 714b and 717a, 717b.
Figs. 9A-9D further illustrate various hinged states of the embodiment of the digital magnifier system 700 shown in figs. 7A-7C, with the side view of fig. 7B reproduced in fig. 9A for reference. Fig. 9B shows the system 700 adjusted to an increased camera tilt angle by rotation of the housing 702 about the hinge 721. Figs. 9C and 9D each show a state in which the eyepieces of system 700 have a reduced tilt angle; the configuration of fig. 9D has a smaller tilt and a larger exit pupil distance than the state of fig. 9C. Both states are achieved by rotation of the display bar 712 about the hinges 713a, 713b, adjustment of the side support arms 715a, 715b via the slides 716a, 716b, and rotation of the top support arm 718 about the hinge 719.
It should be understood that the different articulation states of figs. 8A-8C and 9A-9D are representative samples from a continuum of articulation states, and that the surgeon may select an articulation state that provides the best fit and ergonomics in terms of a number of factors by intuitively adjusting the position and tilt of the eyepieces. One way to make the notion of "intuitive" adjustment of eyepiece position and tilt precise is as follows. Each of the operating positions shown in figs. 8A-8C and 9A-9D comprises a particular state of each articulation point, such as each slider and hinge. Each articulation state exists on a one-dimensional continuum, so an operating position is a point in a multidimensional space that is the product of the one-dimensional articulation ranges. An adjustment between two operating positions may be called intuitive if it corresponds to traversing a straight line in this multidimensional space. Ideally, each operating position is uniquely defined by a point in the configuration space.
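To make the configuration-space picture concrete, the following sketch treats an operating position as a vector of articulation values and generates the straight-line, i.e., intuitive, path between two positions (the pose vector shown is a hypothetical example, not a control interface of the actual device):

    import numpy as np

    def intuitive_adjustment(pose_a, pose_b, steps=20):
        # Each pose is a vector of articulation states, e.g.
        # (slider_mm, hinge_713_deg, hinge_719_deg, ...).
        a = np.asarray(pose_a, dtype=float)
        b = np.asarray(pose_b, dtype=float)
        return [a + t * (b - a) for t in np.linspace(0.0, 1.0, steps)]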
The flexibility provided by the various hinges contributes to a number of advantages, one of which is the ability to provide optimal ergonomics for a wide variety of head shapes and sizes and operating styles. The interpupillary distance of the eyepieces 711a, 711b can be adjusted to match the interpupillary distance of any surgeon. Depending on how the head engaging member 730 rests on the surgeon's head 701, the position of the eyepieces 711a, 711b relative to the surgeon's eyes may differ even though all of the hinges are in the same state (e.g., the same sliding positions of the sliding hinges and the same rotational positions of the rotating hinges). Thus, the adjustment ranges of the vertical position and exit pupil distance may be made large enough to account for variations in how the head engaging member 730 may be supported on the surgeon's head 701, as well as for variations in head shapes, sizes, and hairstyles (different hairstyles may result in the head engaging member 730 sitting differently on the surgeon's head 701). Further, compared with the state shown in fig. 8B, the state shown in fig. 8C can accommodate a wider face by spreading the side support arms 715a, 715b.
The articulation gives flexibility in the manner of operation, even for a given surgeon. The adjustable height and tilt of the eyepieces 711a, 711b, in combination with the adjustable tilt of the pair of stereo cameras 720, allow the surgeon to set an operating pose in which she can directly view the surgical or work area with her eyes and then, with only a small eye rotation, view in parallel the magnified or augmented surgical area as displayed in the eyepieces 711a, 711b. She can adjust the height and tilt of the eyepieces 711a, 711b depending on whether she chooses to view the unmagnified surgical field above the eyepieces with a slight upward eye rotation or below the eyepieces with a slight downward eye rotation. The surgeon may choose to operate in either a standing or sitting position by simply adjusting the tilt angle of the pair of stereo cameras 720 to redirect it toward the surgical field. If standing, it may be preferable to look directly at the surgical area below the eyepieces rather than above them, as this maintains a more upright cervical spine, reducing complications associated with a forward-leaning head posture. The optical axes of the pair of stereo cameras 720 and the eyepieces 711a, 711b can be adjusted to converge together at the user's nominal working distance, or they can be adjusted to diverge so that the user can assume a more upright head position by increasing the tilt of the pair of stereo cameras 720 while the cameras remain pointed down at the work area.
A given surgeon may select different hinge states for the side arms 715a, 715b to accommodate various glasses, goggles, or masks. A mask may also be incorporated directly into the frame 710 by attaching one or more transparent windows to the eyepiece support arms. The mask may be configured so that the optical paths from the cameras 720 to the surgical field, and from the user's eyes to the eyepieces 711a, 711b, are not obstructed. It may also have segments attached to the side arms 715a, 715b to provide protection at the sides. It can be removed from the frame and replaced with a different type of mask, for example a mask incorporating laser filters to protect the eyes from the laser wavelengths that may be used during an operation.
Features of head engaging member 730 are shown in figs. 10A-10B. Such a head engaging member has a number of inventive features that are particularly useful for supporting the pair of stereo cameras and the eyepieces of a digital magnifier system, such as the digital magnifier system described above. First, the head engaging member must accommodate a variety of head lengths, head circumferences, and slopes and curvatures of the front and back of the head. Furthermore, it must provide a stable mounting platform for the pair of stereo cameras and the eyepieces that is rigidly and tightly coupled to the surgeon's skull, so that the surgeon's head motion is directly translated into motion of these subsystems without amplification or oscillation caused by long and/or insufficiently rigid lever arms.
The head engaging member 730 has an adjustable circumferential headband 1001 and an adjustable upper strap 1031. A rear channel 1033 receives a pair of flexible strips, including strip 1023a, whose effective length can be adjusted to accommodate different head circumferences using, for example, an actuator 1034 with a rack and pinion mechanism. A flexible support 1032 suspends the rear of the head engaging member 730 over the back of the wearer's head; the flexible support 1032 is compliant so as to accommodate different curvatures and inclinations of the back of the head. The flexible strips, including strip 1023a, include rotational attachments, including attachment 1022a, that allow the angle of the flexible headband extensions 1021a, 1021b to change relative to the angle of the flexible strips. This accommodates differences in the relative slope of the front and rear portions of the head, as the flexible extensions 1021a, 1021b are rigidly coupled to the headband pieces 1020a, 1020b, which are made of a more rigid material. The adjustable strap 1031 accommodates different head lengths and may be used to help set the height at which center piece 1010 rests on the head and to transfer the weight (downward force) of objects mounted thereto more toward the back of the head. The center piece 1010 has mounting points 1040 and 1041 for various accessories, such as a pair of stereo cameras and/or a support frame for the eyepieces, as described above with respect to figs. 7A-7C. The part 1030 serves as an attachment point for the strap 1031. The center piece 1010 is designed to stably engage the head of a user in order to support and maintain the stability of the pair of stereo cameras and the eyepiece support subsystem attached thereto. Note that the center piece 1010 is supported via tension from three directions, i.e., from both sides and from the top, to engage it with the user's head.
The center piece 1010 has a toric curvature that approximates the curvature of the front of an average head. It may comprise a thin layer of a conformal material, such as a gel or foam, that rests on the head without significantly decoupling the piece from the movement of the head. The toric curvature of the side members 1020a, 1020b likewise approximates the curvature of the average head where they would be positioned, and they may also comprise a thin layer of conformal material as described above. These conformal layers serve to better match the shape of the wearer's head. The flexible couplings 1011, 1012 (shown here as rotational hinges) between the side members 1020a, 1020b and the center piece 1010 allow the combination of these components to better match the curvature of the wearer's head over a larger span, where the deviation between the curvature of the average head and that of the wearer's head becomes more pronounced. Thus, the segmented nature of the front of the head engaging member provides a larger surface than a single component could for rigidly and tightly coupling to the user's head, distributing the weight of attached accessories over more support area and thus providing greater comfort.
Those skilled in the art will appreciate that not all of the hinges of digital magnifier system 700 (including its head engaging member 730) are necessary for a given design. The articulations can also be designed in different ways to achieve the same or similar degrees of freedom, and the support points of the eyepiece frame can be moved forward or backward on the skull while still achieving all the objects of the invention. Figs. 11A-11D depict aspects of a different embodiment of the digital loupe 1100 of the present invention in perspective view (fig. 11A), front view (fig. 11B), and side views (figs. 11C-11D). The head-mounted system of this embodiment can be used to support eyepieces other than digital magnifier eyepieces. Portions of the head-mounted system may also be used to support a single eyepiece using, for example, a single eyepiece support arm and associated structure.
Fig. 11D depicts a different state of articulation than that in figs. 11A-11C. The eyepieces 1105a, 1105b are movably supported by the display bar 1104 (e.g., via a sliding connection that allows adjustment of the distance between the eyepieces, as described above), the display bar 1104 being rotatably coupled to the unitary goat-horn support arm 1101 via hinges 1106a and 1106b.
The housing 1190 of the pair of stereo cameras 1192a, 1192b is mounted on the center piece 1110 of the head engaging member 1140. A distance sensor (not shown) may also be disposed in the housing 1190, as described with respect to the above embodiments. As in the embodiment of figs. 10A-10B, the center piece 1110 of the head engaging member 1140 is designed to stably engage the head of a user in order to support and maintain the stability of the pair of stereo cameras and the eyepiece support subsystem attached thereto. The center piece 1110 has a toric curvature that approximates the curvature of the front of an average head. It may comprise a thin layer of a conformal material, such as a gel or foam, that rests on the head without significantly decoupling the piece from the movement of the head. The side members 1120a, 1120b of the head engaging member 1140 are connected to the center piece 1110 via flexible couplings 1111 and 1112 (e.g., rotational hinges). The toric curvature of the side members 1120a, 1120b likewise approximates the curvature of the average head where they would lie, and they may also comprise a thin layer of conformal material as described above. These conformal layers serve to better match the shape of the wearer's head. The head engaging member 1140 may also have a headband and/or an upper strap, such as shown in figs. 10A-10B.
The central portion of the support arm 1101 is connected to the center piece 1110 of the head engaging member 1140 via a rotational hinge 1103 and a slider 1102 to provide positional freedom of the support arm 1101, and the eyepieces supported thereby, in the vertical dimension and the exit pupil distance dimension. When the head engaging member 1140 is engaged with the user's head, the rotational hinge 1103 and the slider 1102 are located on the medial side of the user's frontotemporal lobes. The eyepieces 1105a, 1105b are supported by the movable display bar 1104 and are connected to the display bar 1104 in a manner that allows the distance between the eyepieces to be adjusted. As in the previous embodiment, the display bar, together with the support arm 1101, extends rearward, upward, and inward from the eyepiece support position. In this particular embodiment, display bar 1104 extends laterally and rearwardly from eyepieces 1105a, 1105b, and the two sides of support arm 1101 extend downwardly, rearwardly, and laterally in a three-dimensional curve from their connection to display bar 1104; then upwardly, rearwardly, and laterally outwardly; and finally upwardly and medially toward the hinge 1103 and slider 1102 at the head engaging member, at a position at or beyond the perimeter of the user's field of view. Thus, this configuration enables both sides of the unitary support arm to extend from the eyepieces to the connection with the head engaging member on the inside of the user's frontotemporal lobes without extending through the region of the user's face that is medial to and above the centers of the user's eyes and below the user's glabella.
The articulated state shown in fig. 11D differs from that in fig. 11C in that the eyepieces 1105a, 1105b are higher and closer to the eyes, but still in the eyes' line of sight. This is achieved by a different state of the hinge 1103, a different state of the slider 1102, and different states of the hinges 1106a, 1106b. Each of the two sides of the display bar 1104 and the unitary support arm 1101 extends laterally outward, rearward, and upward from the eyepieces 1105a or 1105b beyond the edge of the user's peripheral vision (more particularly, downward, rearward, and outward; then upward, rearward, and outward; finally upward and inward) while avoiding the portion of the face medial to and above the pupils and below the glabella, after which it finally extends inward toward the center piece 1110 to rest on top of the head, inside the frontotemporal lobes. The goat-horn shape of the support arm 1101 is such that even on the widest face the wearer can still use glasses or a mask, while the arm remains primarily outside the user's peripheral vision. Note that the complete supporting headband is not shown in figs. 11A-11D, 12A-12D, and 13A-13D.
It should be clear that, with an appropriate change in the shape of the support arm 1101, the mounting point on the head may be more toward the back of the head or more toward the front of the head. A combination of two articulations at the mounting point, sliding and/or rotating depending on the exact mounting location and other design considerations, may provide vertical and exit pupil distance positioning of the eyepieces. The articulations for the different adjustments may also include slides and/or hinges on the support arm. For example, in the embodiment of figs. 7A-7C, the slides 716a, 716b of the support arms generally provide vertical position adjustment for the eyepieces, but if the mounting points of the support arms were at the back of the head, a similar slide could be used to adjust the exit pupil distance, while a rotating pivot point would provide primarily vertical adjustment. Such adjustment mechanisms may also be applied to the embodiment of figs. 11A-11D. However, as shown in figs. 7A-7C, mounting points toward the front of the head are generally preferred, as this provides a shorter and therefore more stable support structure. Another way to adjust the interpupillary distance is a sliding mechanism that allows the width of the eyepiece support structure itself to be adjusted, e.g., with both the display bar 1104 and the support arm 1101 elongating at their midpoints.
Figs. 12A-12D depict an alternative embodiment of an eyepiece support structure, such as for a digital magnifier system. The head-mounted system of this embodiment can be used to support eyepieces other than digital magnifier eyepieces. Portions of the head-mounted system may also be used to support a single eyepiece using, for example, a single eyepiece support arm and associated structure. In this embodiment, the head engaging member 1210 has a shape adapted to fit a person's head. As shown, the head engaging member 1210 supports a pair of stereo cameras 1292a, 1292b. The loops 1220, 1221, and 1222 provide connections for a headband and an upper strap (not shown) that hold the head engaging member 1210 against the user's head, as shown in figs. 10A-10B. The vertical slide 1202 and hinge 1203 support the unitary support arm 1201 and may be used to adjust the height and exit pupil distance, respectively, of the eyepieces 1205a, 1205b supported by the support arm 1201. The display bar 1204 supports the eyepieces 1205a, 1205b, and the sliding connection between the eyepieces 1205a, 1205b and the display bar 1204 allows the eyepieces to be adjusted to accommodate various interpupillary distances, as described above. Hinges 1206a, 1206b between the display bar 1204 and the support arm 1201 allow the tilt angle of the eyepieces to be adjusted. Fig. 12D depicts a hinged state, different from that of figs. 12A-12C, in which the vertical slide 1202 and hinge 1203 have been adjusted to provide a more horizontal line of sight with a greater exit pupil distance. The display bar 1204, together with the two sides of the support arm 1201, extends in a partly rectangular path rearward from the eyepieces, then laterally outward, upward, and laterally inward to the hinge 1203 supporting the arm 1201, passing beyond the edge of peripheral vision while avoiding the portion of the face medial to and above the pupils and below the glabella, so that it finally rests on the top of the head inside the frontotemporal lobes.
Figs. 13A-13D provide four views of yet another embodiment of a digital magnifier system according to the present invention. The head-mounted system of this embodiment may be used to support eyepieces other than digital magnifier eyepieces. Portions of the head-mounted system may also be used to support a single eyepiece using, for example, a single eyepiece support arm and associated structure. The display bar 1304 supports eyepieces 1305a, 1305b. The display bar 1304 is coupled to the distal ends of the side support arms 1301a and 1301b via hinges 1306a, 1306b to enable the tilt angle of the eyepieces to be adjusted. The sliding connection between the eyepieces 1305a, 1305b and the display bar 1304 allows the eyepieces to be adjusted to accommodate various interpupillary distances, as described above. It should be noted that the display bar 1304, like the previously described display bars, provides additional stability to the eyepieces by connecting the eyepiece support structure at both bottom and top, i.e., by linking the distal ends of the support arms to the head engaging member in addition to the proximal connection of the support arms to the head engaging member.
The housing 1390 of the pair of stereo cameras 1392a, 1392b is mounted on the center piece 1310 of the head engaging member. A distance sensor (not shown) and/or an illumination source (not shown) may also be disposed in the housing 1390, as described with respect to the above embodiments. As in the embodiment of figs. 10A-10B, the center piece 1310 is designed to stably engage the head of a user in order to support and maintain the stability of the pair of stereo cameras and the eyepiece support subsystem attached thereto. The center piece 1310 has a toric curvature that approximates the curvature of the front of an average head. It may comprise a thin layer of a conformal material, such as a gel or foam, that rests on the head without significantly decoupling the piece from the movement of the head. The side members 1320a, 1320b of the head engaging member are connected to the center piece 1310 via flexible couplings (e.g., rotational hinges) as described above. The toric curvature of the side members 1320a, 1320b likewise approximates the curvature of the average head where they will be located, and they may also comprise a thin layer of conformal material as described above. These conformal layers serve to better match the shape of the wearer's head.
Extending behind the housing 1390 is a support arm engagement member 1330 mounted on a linear slide 1321 to provide adjustment of the exit pupil distance between the eyepieces 1305a, 1305b and the user's head. Support arm engagement member 1330 can slide on linear slide 1321 in a fore-aft direction relative to housing 1390. The side support arms 1301a, 1301b engage with support arm engagement member 1330 via slides 1332a, 1332b. Thus, since the eyepieces 1305a, 1305b are coupled to the support arm engagement member 1330 through the display bar 1304 and the side support arms 1301a, 1301b, articulation of the linear slide 1321 changes the fore-aft position of the eyepieces 1305a, 1305b and thereby the exit pupil distance. The support arms 1301a, 1301b can slide relative to the slides 1332a, 1332b to enable the effective length of the support arms 1301a, 1301b to be adjusted. The curved proximal portions of support arms 1301a, 1301b and the curves of slides 1332a, 1332b follow a circle 1331 (shown in fig. 13C) having a center point some distance behind the user's eyes. By sliding the arms 1301a, 1301b relative to the slides 1332a, 1332b, the eyepieces 1305a, 1305b, which are coupled to the arms 1301a, 1301b via the display bar 1304, also follow this circle, thereby enabling the height of the eyepieces 1305a, 1305b to be adjusted relative to the user's eyes. Fig. 13D shows a different articulation of the support arms 1301a, 1301b relative to the support arm engagement member 1330, so that the eyepieces 1305a, 1305b are positioned higher than in fig. 13C. Fig. 13D also shows a different articulation of the support arm engagement member 1330 relative to the linear slide 1321, as compared to fig. 13C, with the result that the exit pupil distance changes. By moving the display bar 1304 about the hinges 1306a, 1306b, the tilt angle of the eyepieces is also adjusted to a different state in fig. 13D. When the head engaging member engages the user's head, the slides 1332a, 1332b are located on the medial side of the user's frontotemporal lobes. The display bar 1304 and support arms 1301a, 1301b together extend laterally, rearward, and upward from their connection to the eyepieces, and then medially toward support arm engagement member 1330 at a location at or beyond the perimeter of the user's field of view. Thus, the support arms of figs. 13A-13D extend from the eyepieces to their connection with the head engaging member inboard of the user's frontotemporal lobes without extending through the region of the user's face that is medial to and above the centers of the user's eyes and below the user's glabella.
Fig. 14 illustrates a manner in which the rotational states of the two side support arms 1402a, 1402b of a head-mounted eyepiece support are coupled together. The side support arms 1402a, 1402b are similar to the arms 715a, 715b, and changes in rotational state similar to the difference between the hinged states shown in figs. 8B and 8C are envisaged. A change in the rotational state of either of the arms 1402a, 1402b rotates the corresponding pulley 1403a or 1403b, the pulleys being located on top of the rigid member 1401. Rotation of one of the pulleys 1403a, 1403b is transmitted in the opposite direction to the other. Here, the mechanism that transmits the rotational motion is a set of meshing gears 1404a, 1404b connected to the pulleys 1403a, 1403b via belts or push rods. Alternatively, a rotary encoder and motor may be used to measure the articulation of one side arm 1402a, 1402b and actuate the other to match. This mechanism may be used, for example, when there is no structure between a pair of eyepieces (such as the portion of display bar 712 between eyepieces 711a, 711b in figs. 7A-7C) that would make the eyepieces move together.
Fig. 15 depicts a support arm structure having eyepiece supports 1530a, 1530b arranged such that adjusting the tilt angle of one eyepiece support automatically adjusts the tilt angle of the other to match. This mechanism may be used when there is no structure between a pair of eyepieces (such as the portion of display bar 712 between eyepieces 711a, 711b in figs. 7A-7C) that would make the eyepieces move together. Member 1501 rotatably supports members 1502 and 1503 and is itself rigidly coupled to the head of the user. Because they are rotatably connected to linkages 1504a, 1504b and 1505a, 1505b, members 1502 and 1503 remain parallel. The side support arms 1510a, 1510b and 1511a, 1511b may be rotated about linkages 1504a, 1504b and 1505a, 1505b, respectively, and about 1520a, 1520b and 1521a, 1521b, respectively, to adjust their clearance from the user's head. The rotational states of arms 1510a and 1511a may be coupled to each other by a pin 1512a that mates with a ball joint; similarly, arms 1510b and 1511b are coupled by pin 1512b. Eyepiece supports 1530a, 1530b are rotatably coupled to members 1520a, 1520b and 1521a, 1521b, respectively, and because of the parallel linkage, the tilt angles of eyepiece supports 1530a, 1530b must be the same as those of members 1502 and 1503, so adjusting the tilt of one eyepiece causes the same tilt of the other. Alternatively, as described above, the tilt angles of the two eyepieces may be coupled via a pair of sensors/actuators.
Each of the articulations described in the present invention may be actuated manually or automatically, for example with a motor. Each articulation may include sensors to determine its state, for feedback and control purposes or simply to track usage. As previously mentioned, knowing the relative positions and orientations of the different subsystems of the digital magnifier system, e.g., the tilt states of the cameras and/or the eyepieces and the distances between them, may enable compensation for vertical parallax, or at least the average vertical parallax, which varies as a function of distance to the surgical field.
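As a sketch of such a compensation under a simple pinhole approximation (the symbols and the pure-translation model are illustrative assumptions), a vertical offset b between two optical axes produces an average vertical disparity of roughly f·b/z pixels for a target at distance z, which can be cancelled with a vertical image shift:

    import numpy as np
    import cv2

    def compensate_vertical_parallax(img, f_px, baseline_y_m, z_m):
        # f_px: focal length in pixels; baseline_y_m: vertical offset between
        # optical axes (m); z_m: working distance from the distance sensor (m).
        dy = f_px * baseline_y_m / z_m            # average disparity in pixels
        M = np.float32([[1, 0, 0], [0, 1, -dy]])  # shift the image up by dy
        h, w = img.shape[:2]
        return cv2.warpAffine(img, M, (w, h))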
Additional articulations or articulation ranges not yet described are contemplated as aspects of the invention. For example, the invention may include articulations or articulation ranges for substantially or completely removing the eyepieces and/or the eyepiece support structure from the user's field of view. This can be accomplished with the digital magnifier system 700 of figs. 7A-7C by articulating hinge 719 such that the eyepiece support structure 710 is lifted away from the user's field of view. Similarly, for the system 1100 of figs. 11A-11D, the hinge 1103 may be brought into a state in which the eyepieces 1105a, 1105b and support arms 1101, 1104 are completely lifted out of the field of view. One can also imagine a rail system, such as rails at the ends of the arms 1301a, 1301b inserting into slots such as 1302a, 1302b, with sufficient travel to lift the eyepieces and the eyepiece support structure completely out of view.
Figs. 16A-16B illustrate a mask or window that may be used with the digital magnifier system of figs. 7A-10B. For clarity, figs. 16A-16B omit all of the head engaging member of this embodiment except the center piece 1010. The front cover plate 1600 cooperates with the side cover plates 1602a and 1602b to protect the face of the user while wearing the digital magnifier system. Side cover plates 1602a, 1602b are coupled at their tops to portions of the side support arms 715a, 715b so as to preserve the freedom to adjust the height of the support arms. The side cover plates 1602a, 1602b articulate together with the side support arms 715a, 715b, respectively, so that the distance between the cover plates and the user's face adjusts in concert with the same adjustments made to the side support arms 715a, 715b. As shown, the front cover plate 1600 has five facets, including a slanted front facet 1604 with a cutout 1606 that allows the cameras 720 and the distance sensor 723 to view an object or work area without interference from the mask. The front cover plate 1600 may be connected at the top with a hinge allowing it to tilt upward. In other embodiments, the mask may have fewer parts or facets, and alternative means of coupling to the eyepiece support arms and/or the head engaging structure. The mask may be added to any of the other digital magnifier systems or eyepiece support systems described above.
Digital magnifier controls, such as those for changing magnification or starting and stopping video recording, may be actuated via buttons disposed on the eyepiece support arms. This is convenient because the eyepiece support arms are easily covered to maintain sterility; the components of the eyepiece support structure may already need to be covered to enable the surgeon to adjust the various articulations intraoperatively. Alternatively, an articulated member driven by a motor or other actuator may be commanded to different positions hands-free, via voice, gestures, or other means of commanding a digital system.
Placement of digital magnifier system components, such as batteries, at the back of the head may be used to balance the weight of components such as the pair of stereo cameras and the eyepieces. The eyepieces may include built-in heaters or structures that transfer heat dissipated from the display or other electronic components, keeping the eyepieces warm enough to prevent fogging from the user's breath.
The processor of the digital magnifier system may include additional peripherals that enhance the functionality of the system. For example, it may include a wired or wireless interface for sending video signals to and from the head mounted display, so that live video can be streamed from one digital magnifier system to another, or to a server for recording or streaming to a remote location, or from a server for playback. An instructing surgeon at a remote location may use such a setup to annotate the field of view of the operating surgeon, who may be a trainee, or to give a remote demonstration and indicate points of interest. The presence of motion sensing units such as accelerometers, gyroscopes, and/or magnetometers may aid in various functions, for example image stabilization.
For the purposes of this disclosure, the term "processor" is defined to include, but is not necessarily limited to, an instruction execution system, such as a computer/processor-based system, an Application-Specific Integrated Circuit (ASIC), a computing device, or a hardware and/or software system that can fetch or obtain logic from a non-transitory storage medium or a non-transitory computer readable storage medium and execute the instructions contained therein. "Processor" may also include any controller, state machine, microprocessor, cloud-based utility, service or feature, or any other analog, digital, and/or mechanical implementation thereof. When a feature or element is referred to herein as being "on" another feature or element, it can be directly on the other feature or element, or intervening features and/or elements may also be present. In contrast, when a feature or element is referred to as being "directly on" another feature or element, there are no intervening features or elements present. It will also be understood that when a feature or element is referred to as being "connected," "attached," or "coupled" to another feature or element, it can be directly connected, attached, or coupled to the other feature or element, or intervening features or elements may be present. In contrast, when a feature or element is referred to as being "directly connected," "directly attached," or "directly coupled" to another feature or element, there are no intervening features or elements present. Although described or illustrated with respect to one embodiment, the features and elements so described or illustrated may be applied to other embodiments. Those skilled in the art will also appreciate that references to a structure or feature that is disposed "adjacent" another feature may have portions that overlap or underlie the adjacent feature.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. For example, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," "including," "contains" and/or "containing," when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items and may be abbreviated as "/".
Spatially relative terms, such as "under," "below," "beneath," "lower," "over," "upper," and the like, may be used herein for convenience of description to describe one element's or feature's relationship to other elements or features as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, elements described as "under" or "beneath" other elements or features would then be oriented "over" the other elements or features. Thus, the exemplary term "under" can encompass both an orientation of over and under. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms "upward," "downward," "vertical," "horizontal," and the like are used herein for purposes of explanation only unless specifically stated otherwise.
Although the terms "first" and "second" may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms unless context dictates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element, without departing from the teachings of the present invention.
In this specification and the appended claims, unless the context requires otherwise, the word "comprise," and variations such as "comprises" and "comprising," means that the various components can be used together in methods and articles (e.g., compositions and devices including apparatuses and methods). For example, the term "comprising" will be understood to imply the inclusion of any stated elements or steps but not the exclusion of any other elements or steps.
As used herein in the specification and claims, including as used in the examples, and unless otherwise expressly specified, all numbers may be read as if prefaced by the word "about" or "approximately", even if the term does not expressly appear. When values and/or locations are described, the phrase "about" or "approximately" may be used to indicate that the described values and/or locations are within a reasonable expected range of values and/or locations. For example, a numerical value can have a value of +/-0.1% of the value (or range of values), a value of +/-1% of the value (or range of values), a value of +/-2% of the value (or range of values), a value of +/-5% of the value (or range of values), a value of +/-10% of the value (or range of values), and the like. Unless the context indicates otherwise, any numerical value given herein is also to be understood as including about or approximately the stated value. For example, if the value "10" is disclosed, then "about 10" is also disclosed. Any numerical range recited herein is intended to include all sub-ranges subsumed therein. It is also understood that when values are disclosed, as is well understood by those skilled in the art, "less than or equal to" the value, "greater than or equal to the value," and possible ranges between values are also disclosed. For example, if the value "X" is disclosed, "less than or equal to X" and "greater than or equal to X" (e.g., where X is a numerical value) are also disclosed. It should also be understood that throughout this application, data is provided in a number of different formats, and that the data represents endpoints and starting points, and ranges for any combination of data points. For example, if a particular data point "10" and a particular data point "15" are disclosed, it should be understood that greater than, greater than or equal to, less than or equal to, and equal to 10 and 15, and between 10 and 15 are considered disclosed. It is also understood that each unit between two particular units is also disclosed. For example, if 10 and 15 are disclosed, 11, 12, 13 and 14 are also disclosed.
Although various exemplary embodiments are described above, any of numerous variations may be made to the various embodiments without departing from the scope of the invention as described by the claims. For example, in alternative embodiments, the order in which the various described method steps are performed may generally be varied, and in other alternative embodiments, one or more method steps may be skipped altogether. Optional features of various apparatus and system embodiments may be included in some embodiments and not in others. Accordingly, the foregoing description is provided primarily for the purpose of illustration and should not be construed as limiting the scope of the invention, which is set forth in the following claims.
The examples and illustrations included herein show by way of illustration, and not limitation, specific embodiments in which the present subject matter may be practiced. As described above, other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
Claims (60)
1. A digital loupe system, comprising:
a pair of stereo cameras adapted and configured to generate image signals of an object or a work area;
a distance sensor adapted and configured to obtain a measurement of the distance to the object or work area; and
a processor operatively connected to the pair of stereo cameras and the distance sensor,
wherein the processor comprises a memory configured to store camera calibration information related to the pair of stereo cameras, and wherein the processor is configured to perform a conversion on image signals from the pair of stereo cameras based on the camera calibration information and distance measurements from the distance sensor.
2. The digital loupe system of claim 1, wherein said conversion causes said image signals to appear as if they were produced by a pair of stereo cameras whose optical axes converge at a distance corresponding to the distance measurement.
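As an editorial illustration of the transformation recited in claims 1 and 2 (not part of the claims themselves): for a parallel-mounted stereo pair, a pure-rotation homography can re-render each camera as if its optical axis were toed in to converge at the measured working distance. The sketch below assumes parallel mounting, a known intrinsic matrix K from calibration, and NumPy; the function name and numbers are illustrative.

```python
import numpy as np

def convergence_homography(K, baseline_m, distance_m):
    """Homography that re-renders one camera of a parallel stereo pair as if
    its optical axis were toed in to converge at distance_m (hypothetical
    helper; mirror the sign of theta for the other camera)."""
    # Each axis must rotate about the vertical (y) axis by the half-angle
    # subtended by half the baseline at the measured working distance.
    theta = np.arctan2(baseline_m / 2.0, distance_m)
    c, s = np.cos(theta), np.sin(theta)
    R_y = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    # Pure-rotation homography: H = K @ R @ K^-1.
    return K @ R_y @ np.linalg.inv(K)

# Example: ~65 mm baseline, 0.5 m working distance, illustrative intrinsics.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
H = convergence_homography(K, baseline_m=0.065, distance_m=0.5)
# The image would then be warped with H (e.g., cv2.warpPerspective).
```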
3. The digital loupe system of claim 1, wherein said distance sensor has an adjustable field of view.
4. The digital loupe system of claim 3, wherein the field of view of the distance sensor is adjustable based on a magnification of the digital loupe system.
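Claim 4 does not specify the adjustment rule. One plausible choice, sketched below as an assumption rather than the patented method, narrows the distance sensor's field of view in inverse proportion to the angular magnification so the sensor keeps sampling roughly the magnified region of interest.

```python
import math

def sensor_fov_deg(display_fov_deg: float, magnification: float) -> float:
    """Field of view to request from an adjustable distance sensor so it
    covers approximately the region shown at the given magnification,
    using the angular relation tan(fov'/2) = tan(fov/2) / m."""
    half = math.radians(display_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / magnification))

# A 40-degree display field at 3x magnification -> about a 13.8-degree sensor field.
print(round(sensor_fov_deg(40.0, 3.0), 1))
```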
5. The digital loupe system of claim 1, wherein an optical axis of the distance sensor approximately bisects an angle formed by optical axes of the pair of stereo cameras.
6. The digital loupe system of claim 1, wherein the pair of stereo cameras are adapted to be mounted on the crown or forehead of a user's head.
7. The digital loupe system of claim 1, wherein the tilt angle of the pair of stereo cameras is adjustable.
8. The digital loupe system of claim 1, wherein each camera of the pair of stereoscopic cameras has an optical axis, the optical axes of the pair of stereoscopic cameras being configured to converge at a distance approximately equal to an intended working distance of a user.
9. The digital loupe system of claim 1, further comprising a binocular head mounted display including a first display and a second display, the first and second displays being operably connected to the processor to receive from the processor the image signals generated by the pair of stereo cameras and to display images according to the image signals.
10. The digital loupe system of claim 9, wherein the conversion causes the displayed images to appear as if the optical axes of the pair of stereo cameras converge at a distance corresponding to the distance measurement.
11. The digital loupe system of claim 9, wherein said head mounted display is configured to have a virtual image distance substantially corresponding to a working distance of a user.
12. The digital loupe system of claim 9, wherein the eyepieces of the head mounted display are mounted at a near vision position.
13. The digital loupe system of claim 9, wherein said processor is further configured to display said image signals in an eyepiece at a spatially varying magnification.
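The spatially varying magnification of claim 13 can be pictured as a foveated remap: high magnification at the image centre falling off toward the edges so the periphery stays visible. The linear falloff profile and the helper below are illustrative assumptions; the claim does not mandate any particular profile.

```python
import numpy as np

def foveated_remap(width, height, mag_center=3.0, mag_edge=1.0):
    """Build remap grids whose magnification falls linearly with radius from
    the image centre; the result can be fed to cv2.remap or equivalent."""
    cx, cy = width / 2.0, height / 2.0
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx**2 + dy**2) / np.hypot(cx, cy)  # 0 at centre, 1 at corner
    m = mag_center + (mag_edge - mag_center) * r   # assumed linear profile
    map_x = (cx + dx / m).astype(np.float32)       # sampling nearer the centre
    map_y = (cy + dy / m).astype(np.float32)       # magnifies the output there
    return map_x, map_y
```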
14. The digital loupe system of claim 9, further comprising an ambient light sensor, said processor being further configured to adjust display characteristics of said head mounted display using signals from said ambient light sensor.
15. The digital loupe system of claim 9, wherein the optical axes of the head mounted display converge at a distance that is approximately equal to a working distance of a user.
16. The digital loupe system of claim 1, wherein said distance sensor is an imaging distance sensor.
17. The digital loupe system of claim 1, wherein the processor is further configured to offset a viewpoint of the image signals using distance information from the distance sensor.
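For a scene treated as a plane at the measured working distance, the viewpoint offset of claim 17 reduces to the standard parallax relation: pixel shift equals focal length (in pixels) times the camera-to-eye offset divided by distance. A minimal sketch with illustrative names:

```python
def viewpoint_shift_px(focal_px: float, eye_offset_m: float, distance_m: float) -> float:
    """Horizontal pixel shift that re-centres the image on the user's eye
    position rather than the camera position, assuming a fronto-parallel
    scene at the measured distance."""
    return focal_px * eye_offset_m / distance_m

# 800 px focal length, 30 mm crown-to-eye offset, 0.5 m working distance -> 48 px.
shift = viewpoint_shift_px(800.0, 0.030, 0.5)
```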
18. The digital loupe system of claim 1, wherein the pair of stereo cameras includes a color camera that provides a color image signal to the processor.
19. The digital loupe system of claim 18, wherein the processor is further configured to process the color image signal using a three-dimensional look-up table.
20. The digital loupe system of claim 18, wherein said processor is further configured to process said color image signal to remap colors from a first region of color space, in which the user is less sensitive to color changes, to a second region of color space in which the user is more sensitive to color changes.
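A three-dimensional look-up table (claim 19) maps each input RGB triple to an output colour through an N x N x N x 3 array, and the colour remapping of claim 20 could be baked into such a table. The nearest-bin sketch below is a simplification (production code would interpolate trilinearly); the names and the 17-bin size are illustrative.

```python
import numpy as np

def apply_3d_lut(img_u8: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply an N x N x N x 3 colour LUT (values in [0, 1]) to an 8-bit
    RGB image by nearest-bin indexing."""
    n = lut.shape[0]
    idx = (img_u8.astype(np.int32) * (n - 1) + 127) // 255  # nearest bin
    out = lut[idx[..., 0], idx[..., 1], idx[..., 2]]        # (H, W, 3) lookup
    return (out * 255.0 + 0.5).astype(np.uint8)

# Identity LUT with 17 bins per axis (a common LUT size).
g = np.linspace(0.0, 1.0, 17)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
```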
21. The digital loupe system of claim 1, wherein the system is configured to perform image stabilization by optical image stabilization at the pair of stereo cameras or by electronic image stabilization at the processor.
22. The digital loupe system of claim 1, wherein said cameras are configured to automatically maintain focus.
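One way a system could automatically maintain focus, offered here as an assumed mechanism that claim 22 does not mandate, is open-loop: interpolate the lens focus-motor position from the measured working distance using a factory calibration curve.

```python
import numpy as np

def focus_motor_steps(distance_m: float, cal_dist_m, cal_steps) -> int:
    """Interpolate a focus-motor position from the distance sensor reading;
    cal_dist_m and cal_steps are matching, sorted calibration arrays."""
    return int(round(float(np.interp(distance_m, cal_dist_m, cal_steps))))

# Calibration at 0.3 m, 0.5 m and 1.0 m working distances; 0.45 m -> 900 steps.
steps = focus_motor_steps(0.45, [0.3, 0.5, 1.0], [1200, 800, 300])
```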
23. The digital loupe system of claim 1, further comprising an illumination source adapted to illuminate the object or work area.
24. The digital loupe system of claim 23, wherein said illumination source is controlled by an illumination controller, said illumination controller adjusting an illumination parameter based on the distance measurement from said distance sensor.
25. The digital loupe system of claim 23, wherein said illumination source is pulsed in synchronization with the exposure intervals of said pair of stereo cameras.
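Claims 24 and 25 tie illumination to the distance measurement and the camera exposure. One simple rule, sketched below under stated assumptions, scales LED drive with the square of the working distance so irradiance at the object stays roughly constant under the inverse-square law; pulse timing would then be slaved to the sensors' exposure window. Reference values are illustrative.

```python
def led_drive_fraction(distance_m: float,
                       ref_distance_m: float = 0.5,
                       ref_fraction: float = 0.4) -> float:
    """Drive level (0..1) that holds object irradiance roughly constant
    under the inverse-square law."""
    level = ref_fraction * (distance_m / ref_distance_m) ** 2
    return max(0.0, min(1.0, level))

# Moving from 0.5 m to 0.7 m working distance roughly doubles the drive (~0.78).
print(led_drive_fraction(0.7))
```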
26. The digital loupe system of claim 1, wherein at least one image sensor of the pair of stereo cameras is an RGB-IR sensor.
27. The digital loupe system of claim 26, wherein the at least one image sensor has high dynamic range capability.
28. The digital loupe system of claim 1, wherein said system further comprises an additional imaging modality different from the imaging modality provided by said pair of stereo cameras.
29. The digital loupe system of claim 28, wherein the additional imaging modality comprises a multi-channel imaging system.
30. The digital loupe system of claim 1, wherein said distance sensor has a narrow collimated beam.
31. An imaging system adapted to be worn by a human user to provide a view of a work area, the system comprising:
a head-mounted subsystem for supporting a pair of eyepieces within a line of sight of a human user, the head-mounted subsystem adapted to be worn by the user, the head-mounted subsystem comprising:
a head engaging member adapted to engage with a user's head, and
first and second support arms, each of the first and second support arms having:
a proximal portion supported by the head engaging member,
a distal portion configured to support the eyepiece within a user's line of sight, and
a central portion disposed between the proximal portion and the distal portion;
the head-mounted subsystem is configured such that, when the head engaging member is engaged with the head of a user, the central portion of each support arm extends laterally and upwardly from the distal portion toward the proximal portion without extending through a region of the user's face above the inside of the user's eyes and below the user's brow, and the proximal portion of each support arm is arranged and configured to be disposed inside the central portion;
two cameras supported by the head engaging member;
first and second eyepieces supported by the distal portions of the first and second support arms, respectively, so as to be positionable within a user's line of sight when the head engaging member is engaged with the user's head; and
a processor adapted and configured to display images obtained by the two cameras on the displays of the eyepieces.
32. The system of claim 31, wherein the proximal portion of each support arm is further configured to be disposed inside the user's frontotemporal region when the head engaging member is engaged with the user's head.
33. The system of claim 31, wherein the central portion of each support arm is further configured to extend rearwardly from the distal portion toward the proximal portion without extending through a region of the user's face above an inner side of the user's eyes and below the user's brow when the head engaging member is engaged with the user's head.
34. The system of claim 31, wherein the proximal portion of the first support arm and the proximal portion of the second support arm are each connected to the head engaging member by a hinge adapted to allow an angle between the support arm and the head engaging member to change.
35. The system of claim 34, wherein the hinge is adapted to allow the proximal, central, and distal portions of the support arm to move above the user's eye when the head engaging member engages the user's head.
36. The system of claim 31, wherein the first and second support arms are each supported by a sliding connection that allows the height of the support arms relative to the head engaging member to be changed.
37. The system of claim 31, wherein each of the first and second support arms comprises a plurality of segments.
38. The system of claim 37, further comprising a connector connecting adjacent segments of each support arm.
39. The system of claim 38, wherein the connector is adapted and configured to allow adjustment of the effective length of the segments of the support arm.
40. The system of claim 31, further comprising first and second eyepiece supports adapted to vary a distance between the eyepieces.
41. The system of claim 31, wherein the head-mounted subsystem is configured to allow a tilt angle of the eyepieces relative to a user's line of sight to be changed.
42. The system of claim 31, wherein the distal portion of each of the first and second support arms comprises a display bar supporting the first and second eyepieces.
43. The system of claim 42, wherein the display bar of the first support arm is integral with the display bar of the second support arm.
44. The system of claim 42, wherein the display bar of the first support arm is unconnected to the display bar of the second support arm.
45. The system of claim 42, further comprising first and second hinges connecting the display bar to a central portion of the first and second support arms, respectively.
46. The system of claim 45, wherein the hinge is adapted and configured to allow the tilt angle of the eyepiece to be changed.
47. The system of claim 45, wherein the hinge is adapted and configured to allow the first and second support arms to move toward or away from the head of the user.
48. The system of claim 31, wherein the head engaging member comprises a plurality of components adapted to engage the head of the user, the plurality of components connected by a flexible connection.
49. The system of claim 31, wherein the first and second support arms are two ends of a unitary support arm.
50. The system of claim 31, wherein each of the first and second support arms has a goat-horn shape.
51. The system of claim 31, wherein each of the first and second support arms has a partially rectangular shape.
52. The system of claim 31, further comprising a transparent window attached to the support of the eyepiece and adapted to protect a user's face.
53. The system of claim 31, further comprising a distance sensor supported by the head engaging member.
54. The system of claim 31, further comprising a camera mount movable relative to the head engaging member to change a viewing angle of one or both of the cameras.
55. The system of claim 31, further comprising a transparent window extending in front of the display and adapted to protect a user's face.
56. The system of claim 31, further comprising an illumination source supported by the head engaging member.
57. The system of claim 31, further comprising a sensor configured to report a status of a hinge of the head-mounted subsystem.
58. The system of claim 31, wherein the hinges of the head-mounted subsystem are adapted to be automatically actuated.
59. The system of claim 31, further comprising a linkage between the first and second support arms, the linkage configured to actuate a corresponding portion of one of the support arms in response to actuation of a portion of the other support arm.
60. The system according to claim 59, wherein the linkage includes a sensor configured to sense an actuation status of the portion of one of the support arms and report the actuation status to the processor, and an actuator configured to receive commands generated by the processor and to actuate the corresponding portion of the other support arm, the processor being configured to generate commands to the actuator in response to reports received from the sensor.
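Claims 59 and 60 describe a sense-command-actuate loop between the two support arms. The sketch below shows that control flow with hypothetical read_angle/move_to interfaces; it illustrates the loop's logic, not the patented implementation.

```python
class MirroredArmLinkage:
    """Keep one support arm's actuated portion tracking the other: a sensor
    reports the actuation status to the processor, which commands an
    actuator on the corresponding portion of the other arm (claim 60)."""

    def __init__(self, sensor, actuator, tolerance_deg: float = 0.5):
        self.sensor = sensor          # reports actuation status
        self.actuator = actuator      # receives processor-generated commands
        self.tolerance_deg = tolerance_deg

    def step(self) -> None:
        reported = self.sensor.read_angle()    # status report to the processor
        current = self.actuator.current_angle()
        if abs(reported - current) > self.tolerance_deg:
            self.actuator.move_to(reported)    # command to the actuator

# Polled periodically (e.g. at 50 Hz) so folding one arm folds the other.
```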
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/964,287 | 2020-01-22 | 2020-01-22 | Digital loupe with calibrated depth sensing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK40083562A | 2023-06-30 |