WO2024240812A1 - System, method and computer program for controlling a user interface
- Publication number: WO2024240812A1 (application PCT/EP2024/064070)
- Authority: WIPO (PCT)
- Prior art keywords: point, user, user interface, gesture, time
- Legal status: Pending (assumed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Description
- Examples relate to a system, to a method and to a computer program for controlling a user interface.
- Gesture-based input refers to a way of interacting with a computer, mobile device, or any other electronic device through the use of gestures or movements made by the user's body. These gestures are captured by built-in sensors, such as cameras or depth sensors, and are translated into commands that the device understands. Examples of gesture-based input include swiping, pinching, tapping, shaking, or tilting a device to navigate through applications, adjust settings, or perform specific actions. Gesture-based input is becoming more and more common because it is a natural and intuitive way of interacting with technology, and it allows for hands-free or touch-free control, making it useful for people with physical disabilities.
- Modern working environments and living spaces are equipped with multiple devices with screens.
- The Internet of Things is a concept that allows devices to be connected.
- The smart home is a generic platform term that aims to interconnect devices.
- This creates a fragmented information feed requiring the user to look at multiple screens.
- There are multiple concepts for surgical room vision, which all include a surgical ecosystem that interconnects multiple devices and allows information fusion and display concentrated on a single screen or set of screens.
- Various examples of the present disclosure are based on the finding that humans naturally perceive and understand a physical set of devices in terms of their spatial arrangement. This is typically not reflected in the configuration user interface in surgical ecosystems, and thus the configuration of the display of information of different devices of the surgical ecosystem is usually cumbersome.
- This is addressed by enabling contactless control of the transfer of information to be displayed between devices, starting by pointing at a device and dragging and dropping the desired information onto a target device. This is done from the point of view of the user: the system detects where the user is pointing from the user's point of view, and thus determines the information to be displayed and the target device where the information is to be displayed.
- This enables a more intuitive control of which information is to be displayed at which device, such that the configuration can be changed ad-hoc without requiring complex thought by the user.
- This can improve the safety during complex procedures, such as surgical procedures, where the surgeon is unable to spare thought for reconfiguring the information being displayed, while benefitting from the possibility of changing the configuration in an ad-hoc manner.
- the system comprises one or more processors.
- the system is configured to determine a position of a first point of reference based on a position of a first point on the body of a user for a first point of time and a second point of time.
- the system is configured to determine a position of a second point of reference based on a position of a pointing indicator used by the user for the first point of time and the second point of time.
- the system is configured to determine a first device based on the first point of reference and the second point of reference for the first point of time and a second device based on the first point of reference and the second point of reference for the second point of time.
- the system is configured to determine a response based on the determined first and second device.
- the act of determining a response comprises controlling the second device to display information provided by, e.g., displayed on, the first device.
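- As a rough, non-authoritative sketch of this two-timepoint flow (the helper names pointing_ray, find_pointed_device and determine_response, as well as the device dictionary layout, are illustrative assumptions and not part of the disclosure), the source and target device may be determined from the pointing direction at the first and second point of time, and the transfer may then be triggered:

```python
import numpy as np

def pointing_ray(p_body: np.ndarray, p_indicator: np.ndarray):
    """Ray from the first point of reference (on the body) through the second (pointing indicator)."""
    direction = p_indicator - p_body
    return p_body, direction / np.linalg.norm(direction)

def find_pointed_device(origin, direction, devices, max_angle_deg=5.0):
    """Return the device whose known 3D position lies closest to the pointing ray (angular threshold assumed)."""
    best, best_angle = None, max_angle_deg
    for dev in devices:
        to_dev = dev["position"] - origin
        to_dev = to_dev / np.linalg.norm(to_dev)
        angle = np.degrees(np.arccos(np.clip(np.dot(direction, to_dev), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = dev, angle
    return best

def determine_response(refs_t1, refs_t2, devices):
    """refs_t1/refs_t2: (first point of reference, second point of reference) at the first/second point of time."""
    first_device = find_pointed_device(*pointing_ray(*refs_t1), devices)
    second_device = find_pointed_device(*pointing_ray(*refs_t2), devices)
    if first_device is not None and second_device is not None and first_device is not second_device:
        # Control the second device to display information provided by the first device.
        return {"action": "transfer", "source": first_device["id"], "target": second_device["id"]}
    return {"action": "none"}
```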
- the user can select the respective devices from his point of view, which is intuitive.
- selecting the devices by pointing at them is more intuitive than using a configuration menu for this purpose. Therefore, a more intuitive approach for controlling the distribution of information in a multi-device setting is provided, which can improve safety in critical scenarios, such as during surgery, while providing a benefit by the user having desired information at their fingertips.
- the second point of time may be later than the first point of time.
- the gesture being performed by the user may involve first pointing at the first device and then pointing at the second device.
- pointing at the first and second device may be part of a gesture being performed by the user.
- the system may be configured to identify a pre-defined gesture being performed by the user, and to determine the first and second point in time based on the pre-defined hand-gesture.
- the pre-defined gesture may include pointing at the first device at the first point in time and pointing at the second device at the second point in time.
- the pre-defined gesture may be a drag-and-drop gesture.
- Using a drag and drop gesture may be intuitive for the purpose of transferring information from one device to another.
- the proposed concept’s implementation is facilitated if the devices are interconnected, such that the respective information can be provided by the first device to the second device via a network connection.
- the system may be configured to obtain the information provided by the first device from the first device.
- the system may be further configured to provide the information provided by the first device to the second device, for displaying on the second device.
- legacy devices that cannot provide their information via a network can also be integrated.
- the information provided by the first device may be obtained externally, e.g., via a camera sensor.
- the system may be configured to obtain imaging sensor data showing the information provided by the first device from a camera being separate from the first device, the imaging sensor data comprising the information provided by the first device. This enables integration of legacy devices.
- the imaging sensor data may comprise a representation of information shown on a screen of the first device. This representation can then be processed to extract the relevant information, for display on the second device.
- the system may be configured to extract the information provided by the first device from the imaging sensor data, e.g., by cropping the imaging sensor data and/or geometrically transforming the imaging sensor data. This way, the information provided by the first device can be pre-processed for displaying on the second device, resulting in an improved quality of the information displayed on the second device.
- the system may be further configured to provide the information extracted from the imaging sensor data to the second device, for displaying on the second device.
- the respective information may be provided to the second device.
- the system may be configured to provide a display signal for a display device of the second device, the display signal comprising a representation of the information provided by the first device.
- the display signal may be shown by the display device of the second device without requiring additional processing by the second device, which reduces or minimizes a computational effort required by the second device.
- the system may be configured to process at least one of depth sensor data of a depth sensor, imaging sensor data of a camera sensor and stereo imaging sensor data of a stereo camera to determine at least one of the position of the first point of reference and the position of the second point of reference, and a position of one or more devices.
- sensors that enable determination of a 3D position of the respective points such as the stereo imaging sensor data or the depth sensor data, may be particularly suitable for determining the position of the points of reference and/or of the devices.
- the depth sensor data may be used together with (co-registered) imaging sensor data.
- a skeletal model may be used.
- the system may be configured to determine the positions of joints of a skeletal model of the body of the user by processing the respective sensor data, and to determine at least one of the position of the first point of reference and the position of the second point of reference based on the positions of the joints of the skeletal model.
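- A minimal sketch of deriving the two points of reference from such a skeletal model is shown below; the joint names ("left_eye", "right_eye", "right_index_tip", "pointer_tip") are assumptions and depend on the pose-estimation model actually used:

```python
import numpy as np

def points_of_reference(joints: dict, use_pointing_device_tip: bool = False):
    """Derive the two points of reference from a pose-estimation skeleton.

    `joints` maps joint names to 3D positions (np.ndarray); the joint names used
    here are illustrative assumptions, not a fixed convention of the disclosure.
    """
    # First point of reference: e.g., midway between both eyes of the user.
    first_ref = 0.5 * (joints["left_eye"] + joints["right_eye"])
    # Second point of reference: tip of the index finger, or tip of a pointing device.
    key = "pointer_tip" if use_pointing_device_tip else "right_index_tip"
    second_ref = joints[key]
    return first_ref, second_ref
```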
- pre-trained machine-learning models that enable generation of a skeletal model may be used, which may lower the implementation complexity of the system.
- An aspect of the present disclosure relates to a surgical imaging system, comprising the above system and at least one of the first device and the second device.
- An aspect of the present disclosure relates to a computer program with a program code for performing the above method when the computer program is run on a processor.
- Fig. 1a shows a schematic diagram of an example of a system for controlling a user interface;
- Fig. 1b shows a schematic diagram of examples of points of reference when the user interface is shown on a screen;
- Fig. 1c shows a schematic diagram of examples of points of reference when the user interface is projected or is a haptic user interface of a device;
- Fig. 1d shows a schematic diagram of an example of a surgical imaging system comprising the system;
- Fig. 2a shows a schematic diagram of a user dragging information from a vitals monitor to a screen of a surgical microscope system;
- Fig. 2b shows a schematic diagram of a user dragging information from a screen of a surgical microscope system to a surgeon's head-mounted viewer;
- Fig. 2c shows a schematic diagram of a grab gesture;
- Fig. 3a shows a schematic drawing of a display device with ultrasonic emitters;
- Fig. 3b shows a schematic drawing of haptic zones with different tactile feedback;
- Fig. 3c shows a schematic drawing of haptic feedback while moving a microscope;
- Figs. 3d and 3e show schematic drawings of haptic feedback while operating a rotational user interface element via gesture control;
- Fig. 4 shows a flow chart of an example of a method for controlling a user interface;
- Fig. 5 shows a schematic diagram of a system comprising an imaging device and a computer system.
- Fig. 1a shows a schematic diagram of an example of a system 110 for controlling a user interface.
- the system 110 may be implemented as a computer system.
- the system 110 comprises one or more processors 114 and, optionally, one or more interfaces 112 and/or one or more storage devices 116.
- the one or more processors 114 are coupled to the one or more storage devices 116 and to the one or more interfaces 112.
- the functionality of the system 110 may be provided by the one or more processors 114, in conjunction with the one or more interfaces 112 (for exchanging data/information with one or more other components, such as a depth sensor 160, a camera sensor 150, a stereo camera 170, a display device 180, a projection device 185 and/or an emitter 190 of a contactless tactile feedback system), and with the one or more storage devices 116 (for storing information, such as machine-readable instructions of a computer program being executed by the one or more processors).
- the functionality of the one or more processors 114 may be implemented by the one or more processors 114 executing machine-readable instructions. Accordingly, any feature ascribed to the one or more processors 114 may be defined by one or more instructions of a plurality of machine-readable instructions.
- the system 110 may comprise the machine-readable instructions, e.g., within the one or more storage devices 116.
- the system 110 is configured to determine a position of a first point of reference 120 based on a position of a first point 12 on the body of a user 10.
- the system 110 is configured to determine a position of a second point of reference 130 based on a position of a pointing indicator 16 used by the user.
- the system 110 is configured to determine at least one of a position in the user interface, an element of the user interface and a device being accessible via the user interface based on the first point of reference and the second point of reference.
- the system 110 is configured to determine a response based on at least one of the determined position in the user interface, the determined element of the user interface and the determined device.
- the determined response may be to control the user interface based on the determined position in the user interface or element of the user interface.
- Various examples of the proposed concept are based on the finding that, in order to improve the intuitiveness of controlling a user interface by pointing at it, not only the pointing indicator being used (e.g., the hand, or a pointing stick) is to be tracked, but also another point of reference, which is used, together with the pointing indicator, to establish where the user is pointing from the user’s point of view.
- two points of reference are determined - a first point of reference at the body of the person (e.g., at the head/face, or at the sternum), and a second point of reference at a pointing indicator used by the person (e.g., at the hand/index finger, or at a pointing device, such as a pointing stick, pen, pencil, or scalpel).
- the system is configured to determine the position of the first point of reference 120 based on a position of a first point 12 on the body of the user 10.
- this first point 12 on the body of the user 10 is a point that can be used as a starting point for establishing the point of view of the user. Therefore, the first point 12 may preferably be a point at the head/face of the user, such as the forehead, an eye, the position between the two eyes, the nose, or the chin of the user.
- the system may be configured to determine the first point of reference based on the position of one or both eyes 12 of the user (as shown in Figs. 1b and 1c, for example).
- the selection of an eye (or both eyes) as first point of reference may be supported by heuristics or machine-learning-based approaches to further improve the precision.
- the system may be configured to determine the first point of reference based on the position of both eyes 12 of the user (e.g., between the positions of both eyes).
- the system may be configured to determine the first point of reference based on the position of a (system) pre-defined or user-defined eye 12 of the user, e.g., the left or right eye.
- the system may be configured to select one of the eyes based on the second point of reference, and in particular based on the proximity to the second point of reference.
- the system may be configured to determine the first point of reference based on a position of the eye 12 of the user being in closer proximity to the second point of reference.
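- For illustration, such a proximity-based eye selection might look as follows (a sketch only; the function name and inputs are hypothetical):

```python
import numpy as np

def select_eye(left_eye: np.ndarray, right_eye: np.ndarray, second_ref: np.ndarray) -> np.ndarray:
    """Pick the eye position that is closer to the second point of reference,
    e.g., the tip of the pointing indicator."""
    d_left = np.linalg.norm(second_ref - left_eye)
    d_right = np.linalg.norm(second_ref - right_eye)
    return left_eye if d_left <= d_right else right_eye
```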
- the system may be configured to determine a dominant eye of the user, e.g., by processing imaging sensor data or stereo imaging sensor data to determine whether the user occasionally only uses one eye (and closes the other) and selecting the eye that remains open as dominant eye.
- the dominant eye may be determined based on previous attempts of the user at controlling the user interface.
- the system may select an eye as the dominant eye by initially calculating two positions (and intersecting elements) in the user interface based on two different first points of reference based on the positions of both eyes, and determining one of the eyes as the dominant eye if a corresponding position in the user interface intersects with a user interface element.
- the system may learn which eye is dominant from inference and use this eye for future calculations.
- the system may be configured to determine the first point of reference based on a position of the dominant eye 12 of the user.
- the system is configured to determine the position of the second point of reference 130 based on the position of a pointing indicator 16 used by the user.
- the term “pointing indicator” is used, which can refer both to a hand or finger (preferably the index finger) being used to point at the user interface and to a tool that is being used for pointing at the user interface.
- the pointing indicator 16 is an entity (body part or tool) being used to point at the user interface.
- the system may be configured to determine the second point of reference based on a position of a finger 16a of the hand of the user. Additionally, or alternatively, the system may be configured to determine the second point of reference based on a position of a pointing device held by the user.
- both may be an option - if the user uses their hand to point at the user interface, the position of the finger may be used; if, however, the user uses a tool, such as a pen, pencil, pointing stick, scalpel, endoscope etc. that can be considered an extension of the hand, the tool may be used.
- the system may be configured to determine whether the user is holding a pointing device (e.g., tool/object) extending the hand towards the user interface, and to select the finger of the user if the user is pointing towards the user interface without such a tool/object, and to select the pointing device if the user is pointing towards the user interface with a pointing device.
- a machine-learning model may be trained, using supervised learning and (stereo) imaging data and/or depth sensor data and corresponding labels, to perform classification as to whether the user is holding a pointing device, and to perform the selection based on the trained machine-learning model.
- image segmentation techniques may be used.
- the second point of reference may be determined accordingly, based on the position of (a tip of) the finger and based on the position of (a tip of) the pointing device.
- the position of the second point of reference may be adjusted such that the resulting intersection point in the user interface, which is determined when an extrapolated line (in the following also denoted imaginary line) passing through the first and second point of reference is extended up to the user interface, is adjusted in line with the swiveling motion of the finger (and, in some cases, disproportionally to the change in position of the tip of the index finger).
- the system may be configured to process at least one of depth sensor data of a depth sensor 160, imaging sensor data of a camera sensor 150, combined RGB-D sensor data of a combined depth sensor and camera sensor, and stereo imaging sensor data of a stereo camera 170 to determine at least one of the position of the first point of reference and the position of the second point of reference.
- three-dimensional (human) pose estimation may be applied on the respective sensor data to determine the respective points of reference.
- algorithms and machine-learning-based approaches for generating three-dimensional pose-estimation data from two- and three-dimensional video data usually include generation of a three-dimensional skeletal model of the user based on the respective sensor data.
- the system may be configured to determine the positions of joints of a (three-dimensional) skeletal model of the body of the user by processing the respective sensor data.
- the term “skeletal model” might not be understood in a biological sense. Instead, the skeletal model may refer to a pose-estimation skeleton, which is merely modeled after a “biological” skeleton.
- the generated three-dimensional pose-estimation data may be defined by a position of joints of a skeleton in a three-dimensional coordinate system, with the joints of the one or more skeletons being interconnected by limbs.
- the terms “joints” and “limbs” might not be used in their strict biological sense, but with reference to the pose-estimation skeleton referred to in the context of the present disclosure.
- Such a skeletal model may be generated using the respective sensor data specified above.
- the system may be configured to process at least one of the depth sensor data, the imaging sensor data, the combined RGB-D sensor data, and the stereo imaging sensor data to determine the skeletal model of the user.
- machine-learning algorithms and machine-learning models that are known from literature or that are available off the shelf may be used for this purpose. Examples of such algorithms and models are given in the following. For example, Mehta et al.: "VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera" (2017) provide a machine-learning-based three-dimensional pose estimation algorithm for generating a three-dimensional skeletal model from imaging data.
- Zhang et al "Deep Learning Methods for 3D Human Pose Estimation under Different Supervision Paradigms: A Survey” (2021) provide a machine-learning based three-dimensional pose estimation algorithm for generating a three-dimensional skeletal model from RGB-D data.
- Lallemand et al “Human Pose Estimation in Stereo Images” provide a machine-learning based three-dimensional pose estimation algorithm for generating a three-dimensional skeletal model from stereo imaging sensor data.
- an application-specific machine-learning model may be trained, in line with the approaches listed above.
- the resulting skeletal model may now be used to determine the points of reference.
- the system may be configured to determine at least one of the position of the first point of reference and the position of the second point of reference based on the positions of the joints of the skeletal model.
- the skeletal model may include, as joints, one or more of the position of the eyes of the user, the position of the forehead of the user, the position of the tip of the nose of the user, the position of the wrists of the user, the position of the tips of the (index) fingers of the user.
- the skeletal model may include an additional point representing the tip of a pointing device being held by the user.
- the machine-learning model may be trained to include such an additional point in the skeletal model.
- the system may be configured to determine whether the user is using a pointing device to point at the user interface, to determine a position of the tip of the pointing device relative to a point on the hand of the user (based on the skeletal model and based on the respective sensor data), and to add a joint representing the tip of the pointing device to the skeletal model.
- another machine-learning model may be trained to output the position of the tip of the pointing device relative to a point on the hand of the user, e.g., using supervised learning.
- such a machine-learning model may be trained as regressor, using a skeletal model and the respective sensor data as input, and the relative position as desired output.
- the first and second point of reference are now used to determine at least one of a position in the user interface, an element of the user interface and a device being accessible via the user interface.
- the system may be configured to determine the position in the user interface, the element of the user interface or the device by projecting an imaginary line 140 through the first point of reference and the second point of reference towards the user interface.
- the position in the user interface may correspond to the point where the imaginary line intersects the user interface.
- the element of the user interface may correspond to an element of the user interface that overlaps with the point in the user interface.
- the device may correspond to a device that is intersected by the imaginary line.
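- A minimal geometric sketch of this step is given below, assuming a planar user interface with a known point and normal in world coordinates; the conversion of the intersection point into UI-plane coordinates for the hit test is assumed to have been done elsewhere, and the bounding-box layout of the elements is hypothetical:

```python
import numpy as np

def intersect_ui_plane(first_ref, second_ref, plane_point, plane_normal):
    """Project the imaginary line through both points of reference towards the
    (planar) user interface and return the intersection point, or None if the
    user is pointing away from or parallel to the plane."""
    direction = second_ref - first_ref
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # line is parallel to the user interface plane
    t = np.dot(plane_normal, plane_point - first_ref) / denom
    if t <= 0:
        return None  # intersection lies behind the user
    return first_ref + t * direction

def hit_element(ui_point, elements):
    """Return the user interface element whose 2D bounding box contains the
    intersection point (assumed to be expressed in UI-plane coordinates)."""
    for element in elements:
        x0, y0, x1, y1 = element["bbox"]
        if x0 <= ui_point[0] <= x1 and y0 <= ui_point[1] <= y1:
            return element
    return None
```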
- the proposed concept is not limited to using a geometric approach that is based on projecting an imaginary line.
- a look-up table may be used to determine the position, element and/or device, or a machine-learning model may be trained to output the position, from which the element or device can be determined (similar to the case where an imaginary line is used).
- supervised learning may be used to train such a machine-learning model, using coordinates of the first and/or second point of reference, and optionally coordinates of the user interface, as training input and information on a position in the user interface as desired output.
- in the present disclosure, various positions are used, such as the point on the body/first point of reference, the second point of reference, the position in the user interface etc.
- These positions may be defined in a (or several) coordinate systems.
- the first and second point of reference may be defined in a world coordinate system. If the coordinates of the user interface are known, e.g., also defined in the world coordinate system (which may be defined relative to the user interface), the imaginary line can be projected through the first and second points of reference towards the user interface, and the point in the user interface (and the element intersecting with that point) can be determined based on the point of intersection of the imaginary line.
- the user interface is not necessarily limited to being a two-dimensional user interface or a user interface that is being shown on a display device - in some examples, as will become clear in connection with Figs. 1b and 1c, the user interface may be a three-dimensional user interface, which may be either a real-world user interface with haptic/tactile input modalities, or a three-dimensional user interface being shown on a 3D display, a three-dimensional projection or a hologram.
- the system is configured to determine a response based on the determined position in the user interface, the determined element of the user interface and/or the determined device. Such a response may be used to determine, based on the position or movement of the position in the user interface, whether the user is trying to interact with, and thus control, the user interface, or whether the user is not engaged with the user interface. In the latter case, the determined position and/or element of the user interface may be used to control the user interface. In other words, the system may be configured to control the user interface based on the determined position in the user interface or element of the user interface.
- the “element” may be a user interface element, such as a button, a label, a dropdown menu, a radio button, a checkbox, a rotational input, or a slider of a virtual user interface (being displayed on a screen or projected), or a button, a label, a radio button, a rotational knob, or a slider of a haptic/tactile “real-world” user interface (that is arranged, as dedicated input modalities, at a device).
- hand gestures may be used.
- the system may be configured to identify a hand gesture being performed by the user, and to control the user interface based on the determined element and based on the identified hand gesture.
- the system may be configured to process the sensor data, using a machine-learning model, to determine the hand gesture being used.
- a machine-learning model may be trained, using supervised learning, as a classifier, using the respective sensor data as input data and a classification of the gesture as desired output.
- examples of such a machine-learning model and algorithm can be found in Bhushan et al: "An Experimental Analysis of Various Machine Learning Algorithms for Hand Gesture Recognition" (2022).
- Various hand gestures may be identified and used to control the user interface.
- the system may be configured to identify at least one of a clicking gesture, a confirmation gesture, a rotational gesture, a pick gesture, a drop gesture, and a zoom gesture.
- for the clicking gesture, the pointing indicator may be moved intentionally towards the user interface, e.g., for a pre-defined distance in at most a pre-defined time. This gesture may be used to select a user interface element.
- for the rotational gesture, the user may rotate the hand. This gesture may be used to control a rotational input or rotational knob of the user interface.
- the user may first move the pointing indicator intentionally towards an element of the user interface, draw the pointing indicator back towards the body (pick gesture), and move the pointing indicator towards another position in the user interface (drop gesture).
- for the zoom gesture, the user may move the hand, with the palm pointed towards the user interface, towards or away from the user interface.
- the gestures may be implemented differently, and/or used to control different elements of the user interface.
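- As an illustration of how such a gesture might be detected heuristically, the following sketch flags a clicking gesture when the fingertip moves towards the user interface plane by at least a pre-defined distance within at most a pre-defined time; the thresholds and the assumption that the plane normal points towards the user are illustrative, and a trained classifier as referenced above could be used instead:

```python
import numpy as np

def is_clicking_gesture(tip_positions, timestamps, plane_point, plane_normal,
                        min_distance=0.03, max_duration=0.4):
    """Heuristic clicking-gesture detector (sketch): the fingertip moves towards the
    user interface plane by at least `min_distance` metres within at most
    `max_duration` seconds. The plane normal is assumed to point towards the user."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of the fingertip to the user interface plane over time.
    dists = [np.dot(p - plane_point, n) for p in tip_positions]
    for i in range(len(dists)):
        for j in range(i + 1, len(dists)):
            if timestamps[j] - timestamps[i] > max_duration:
                break
            if dists[i] - dists[j] >= min_distance:
                return True
    return False
```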
- the system may be configured to highlight the determined position in the user interface continuously using a positional indicator 230.
- a positional indicator may be included in the representation of the (virtual) user interface itself, e.g., overlaid over the user interface.
- the system may be configured to generate a display signal, with the display signal comprising the user interface (e.g., and the positional indicator).
- the system may be configured to provide the display signal to a display device 180, such as a display screen or a head-mounted display.
- the system may be configured to generate a projection signal, which may comprise (in some cases, where the user interface itself is being projected) the user interface, and the positional indicator.
- the system may be configured to provide the projection signal to a projection device 185 for projecting the projection signal onto the haptic user interface.
- a projection can also be used with “haptic” user interfaces (i.e., user interfaces that are arranged, as haptic input modalities, at the respective devices being controlled), without including a projection of the user interface itself.
- the positional indicator may be provided in a shape that conveys whether the system has determined the correct position or item.
- the system may be configured to generate the positional indicator with the shape of a hand based on a shape of the hand of the user.
- the system may be configured to compute the shape of the hand of the user based on the respective sensor data, and generate the positional indicator based on the shape of the hand of the user (or based on a simplified hand shape).
- the system may be configured to generate the shape of the positional indicator such that, from the point of view of the user, the shape of the positional indicator extends beyond the shape of the hand of the user, e.g., by “exploding” the shape by a pre-defined absolute margin, such that the user can see the shape even if the hand partially obscures the user interface.
- the system may be configured to determine the position of the positional indicator such that, from the point of view of the user, the positional indicator is offset by a pre-defined or user-defined offset (e.g., in two dimensions).
- a visual representation of the identified hand gesture may be shown, e.g., as a pictogram representation of the hand gesture or of the functionality being controlled by the hand gesture.
- the system may be configured to display a visual representation of the identified hand gesture via a display device 180 or via a projection device 185.
- an emitter of a contactless tactile feedback system can be used to give tactile feedback to the user.
- the system may be configured to control an emitter 190 of a contactless tactile feedback system based on at least one of the position in the user interface and the element of the user interface, and to provide tactile feedback to the user using the emitter according to the position in the user interface or according to the element of the user interface.
- the emitter 190 may be one of an ultrasound emitter or array of ultrasound emitters, one or more laser emitters, one or more air emitters, or one or more magnetic field emitters.
- tactile feedback may be given when the user interface is being controlled, e.g., if a button is actuated by a clicking gesture or a rotational input/knob is rotated using a rotational gesture.
- a first level or first pattern of tactile feedback may be given when an element of the user interface or device is selected, and a second level or second pattern of tactile feedback may be given while the user interacts with the element of the user interface or with the device.
- a third level or third pattern of tactile feedback may be given when the user reaches a boundary, e.g., while controlling an element of the user interface or a device via gesture control. For example, if the user controls a virtual rotational element (e.g., rotational knob) via gesture control, the third level or third pattern of tactile feedback may be provided when the rotational element is at an extreme position (e.g., lowest or highest allowed position).
- the third level or third pattern of tactile feedback may be provided when the position of the device reaches a limit, e.g., due to stability restrictions or due to a limit of an actuator.
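- A possible, purely illustrative mapping of these feedback levels to emitter parameters is sketched below; the `emitter.emit(position, intensity, pattern)` call is a hypothetical API, not that of any specific contactless haptics product:

```python
# Sketch: mapping interaction states to tactile feedback levels/patterns.
FEEDBACK = {
    "selected":    {"intensity": 0.3, "pattern": "single_pulse"},   # element or device selected
    "interacting": {"intensity": 0.6, "pattern": "continuous"},     # element is being manipulated
    "boundary":    {"intensity": 1.0, "pattern": "triple_pulse"},   # limit of element or actuator reached
}

def give_feedback(emitter, state: str, fingertip_position):
    """Drive a (hypothetical) contactless haptic emitter according to the current state."""
    params = FEEDBACK.get(state)
    if params is not None:
        emitter.emit(fingertip_position, params["intensity"], params["pattern"])
```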
- in Fig. 1b, an example is shown where the user interface is displayed on a display device (such as a screen).
- Fig. 1b shows a schematic diagram of examples of points of reference when the user interface is shown on a screen.
- Fig. 1b shows a user interface 100.
- Two stereo cameras 170 are arranged above (or below) the user interface 100, having two partially overlapping fields of view 170a, 170b.
- the stereo cameras provide stereo imaging sensor data, which are used to determine the two points of reference - the first point of reference 120 at the position of an eye 12 of a user 10, and the second point of reference 130 at the position of a fingertip 16a (of the index finger) of the hand of the user.
- an imaginary line 140 is projected, pointing at a position in the user interface 100.
- the user interface 100 can be either a projected user interface, e.g., a user interface that is projected onto a projection surface, or the user interface can be a haptic user interface (indicated by the zig-zag shape) that is arranged at one or multiple devices being controlled by the system.
- Fig. 1c shows a schematic diagram of examples of points of reference when the user interface is projected or a haptic user interface of a device.
- a depth sensor 160 and a camera sensor 150 are used to generate RGB-D sensor data, which is processed by the system 110 to determine the first point of reference 120 and the second point of reference 130.
- An imaginary line 140 is projected towards the user interface.
- the user interface is projected by projection device 185 onto a projection surface 185a.
- the projection signal may comprise at least one of a positional indicator for highlighting the determined position of the haptic user interface, an indicator for highlighting the element of the haptic user interface, and a visual representation of the identified hand gesture.
- the user interface may be a haptic, real-world user interface, as indicated by the rotational knob shown as part of the user interface 100.
- the knob may be part of the projected user interface.
- the position in the user interface intersects with the (real -world or projected) rotational knob, such that the rotational knob is determined as user interface element and/or such that the device comprising the rotational knob is determined as device.
- the rotational knob may now be controlled via a hand gesture, e.g., in the projected user interface, or by controlling the control functionality associated with the knob.
- the determined element may be an element of the haptic user interface.
- the determined device may be a device that is accessible (i.e., controllable) via the haptic user interface.
- the system may be configured to control a control functionality associated with the element of the haptic user interface (e.g., without turning the knob or pressing a button of the haptic user interface) and/or associated with the determined device. This way, the user can control the device or user interface element in a way that is not unlike telekinesis.
- Such ways of controlling a user interface are particularly useful in scenarios, where, due to hygiene or as the user is unable to move around, a user is unable to control the user interface by touch.
- a scenario can, for example, be found in surgical theatres, where a surgeon seeks to control a surgical imaging device, such as a surgical microscope or surgical exoscope (also sometimes called an extracorporeal telescope), or other devices, such as an endoscope, an OCT (Optical Coherence Tomography) device.
- Exoscopes are camera-based imaging systems, and in particular camera-based 3D imaging systems, which are suitable for providing images of surgical sites with high magnification and a large depth of field.
- Fig. 1d shows a schematic diagram of an example of a surgical imaging system comprising the system 110.
- the surgical microscope system shown in Fig. 1d comprises a base unit 20 with a haptic user interface (indicated by the knobs and buttons).
- the aforementioned indicators may be projected onto the haptic user interface.
- for example, the surgeon or an assistant may point at the haptic user interface, e.g., the knobs and buttons, and control the control functionality associated with the different elements of the user interface using hand gestures.
- the system 110 may be an integral part of such a surgical imaging system and may be used to control various aspects of the optical imaging system 100.
- the system 110 may be used to process imaging data of an imaging device (e.g., the microscope or exoscope) of the optical imaging system, and to generate a display signal based on the image data. Therefore, the system 110 may also serve as an image processing system of the surgical imaging system.
- the system 110 may be configured to control additional aspects of the surgical imaging system, such as an illumination provided by the surgical imaging system, a placement of the imaging device relative to a surgical site being imaged, or one or more auxiliary surgical devices being controlled via the surgical imaging device.
- a surgical imaging system is an optical imaging system 100 that comprises a (surgical) imaging device, such as a digital stereo microscope, an exoscope or an endoscope, as imaging device.
- the proposed concept is not limited to such embodiments.
- the imaging device is often also called the “optics carrier” of the surgical imaging system.
- the one or more interfaces 112 of the system 110 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities.
- the one or more interfaces 112 may comprise interface circuitry configured to receive and/or transmit information.
- the one or more processors 114 of the system 110 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software.
- the described function of the one or more processors 114 may as well be implemented in software, which is then executed on one or more programmable hardware components.
- Such hardware components may comprise a general-purpose processor, a Digital Signal Processor (DSP), a micro-controller, etc.
- the one or more storage devices 116 of the system 110 may comprise at least one element of the group of a computer readable storage medium, such as a magnetic or optical storage medium, e.g., a hard disk drive, a flash memory, Floppy-Disk, Random Access Memory (RAM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), an Electronically Erasable Programmable Read Only Memory (EEPROM), or a network storage.
- the system may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
- in the following, an application of the concept shown in Figs. 1a to 1d is discussed, in which the concept is used to transfer information between devices of an ecosystem of devices, thereby creating a contactless ecosystem with contactless collective control of an ecosystem of devices.
- for contactless control, an intelligent way to control an ecosystem of connected devices is proposed. This is based around the technique of allowing the user to configure a set of devices and displays so that the information is collected and displayed in a preferred way, using gestures as if the actual physical space is a VR (Virtual Reality) space.
- a surgical room may be equipped with a surgical microscope and a vital monitor (shown in Fig. 2a).
- another medical example is when a user wants to send a video stream from a microscope or endoscope to the head-mounted digital viewer of a surgeon (see Fig. 2b).
- the interaction with the device ecosystem is very intuitive and fast and allows for a more agile adaptation of the ecosystem according to the dynamic shift of needs, as it is easy to add and remove devices when needed or not needed. It also enables more flexibility in the physical arrangement of devices in a room, as the user can collect all the data without the need to have optical access to each device.
- Fig. 2a shows a schematic diagram of a user dragging information 215 from a vitals monitor 210 to a screen 220 of a surgical microscope system using a drag & drop gesture.
- the stereo cameras 170 for detection of the user pointing are shown.
- the user can easily use virtual drag & drop between devices to transfer the display of one device on the monitor as picture in picture.
- Fig. 2b shows a schematic diagram of a user dragging information 225 from a screen 220 of a surgical microscope system to a surgeon’s head-mounted viewer 230.
- the user can easily send a video stream from a screen to a surgeon’s head-mounted digital viewer.
- the system 110 is configured to determine the position of the first point of reference 120 based on the position of the first point 12 on the body of a user 10 for a first point of time and a second point of time.
- the system 110 is configured to determine the position of the second point of reference 130 based on the position of the pointing indicator 16 used by the user for the first point of time and the second point of time.
- the system 110 is configured to determine a first device based on the first point of reference and the second point of reference for the first point of time and a second device based on the first point of reference and the second point of reference for the second point of time.
- the system 110 is configured to determine the response based on the determined first and second device, the act of determining the response comprising controlling the second device to display information 215, 225 (shown in Figs. 2a and 2b) provided by the first device.
- the transfer of information from the first device to the second device is based on the user pointing at both devices.
- the second point of time may be later than the first point of time.
- the user pointing at both devices may be part of an overarching gesture, which includes pointing sequentially at both devices (to indicate which of the devices is to act as a source and which of the devices is to act as display).
- the system may be configured to identify a pre-defined gesture being performed by the user, and to determine the first and second point in time based on the pre-defined hand-gesture.
- This pre-defined gesture may include pointing at the first device at the first point in time and pointing at the second device at the second point in time. In other words, the user first points at the first device, and then at the second device. However, these two pointing actions may be connected by the remainder of the gesture, to indicate that the user does not merely want to select one of the devices for gesture control.
- the predefined gesture may also include a transition between both devices. For example, during the transition, the user may keep their arm outstretched, so that, instead of two separate pointing gestures, a single pointing gesture is performed that is merely moved towards the second device.
- a grab gesture may be used, i.e., the user may point at the first device, perform the grab gesture while pointing at the first device and keep performing the grab gesture until the user points at the second device, where the user releases the grab gesture.
- Fig. 2c shows a schematic diagram of an example grab gesture, in which the index finger and the thumb of the hand are moved towards each other. This gesture may be used for a drag & drop operation, such as the one performed in the present context. Accordingly, the pre-defined gesture may be a drag-and-drop gesture. The start and end of the grab gesture may also define the first and second points in time.
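- A sketch of how the grab-based drag-and-drop gesture could define the two points in time and the source/target devices is given below; the class and the `grab_detected` input (which would come from a gesture classifier) are illustrative assumptions:

```python
class DragAndDropTracker:
    """Sketch of tracking a grab-based drag-and-drop gesture: the first point of time
    is when the grab starts (source device), the second when it is released (target device)."""

    def __init__(self):
        self.grabbing = False
        self.t1 = None
        self.source = None

    def update(self, timestamp, grab_detected: bool, pointed_device):
        if grab_detected and not self.grabbing:
            # Start of grab gesture: first point of time, first (source) device.
            self.grabbing, self.t1, self.source = True, timestamp, pointed_device
        elif not grab_detected and self.grabbing:
            # Release of grab gesture: second point of time, second (target) device.
            self.grabbing = False
            t2, target = timestamp, pointed_device
            if self.source is not None and target is not None and target is not self.source:
                return {"t1": self.t1, "t2": t2, "source": self.source, "target": target}
        return None
```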
- the system 110 can obtain the information provided by the first device from the first device and control the second device to display the information obtained from the first device.
- the system may be configured to obtain the information provided by the first device from the first device.
- an alternative implementation is to use cameras to video capture the screen of a device, e.g., a vital monitor (see Fig. 2a), and then display that information on the screen of the second device (e.g., the microscope screen) without any digital connection between the microscope and the vitals monitor.
- the system may be configured to obtain imaging sensor data showing the information provided by the first device from a camera being separate from the first device, the imaging sensor data comprising the information provided by the first device.
- the providing of the information merely refers to showing the information on a screen of the first device, i.e., the information is provided by showing the information on a screen of the first device, and not digitally via a network.
- the imaging sensor data may comprise a representation of information shown on a screen of the first device. To make this information usable, the imaging sensor data may be processed, and the relevant information may be extracted.
- the system may be configured to extract the information provided by the first device from the imaging sensor data, by cropping the imaging sensor data and/or geometrically transforming the imaging sensor data (e.g., to perform perspective corrections).
- the system may be configured to process the imaging sensor data to determine a portion of the imaging sensor data showing the information, and to crop and/or geometrically transform the imaging sensor data to extract the information.
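- As an illustration of such cropping and perspective correction, the following sketch uses OpenCV to rectify the screen region of a camera frame, assuming the four screen corners have already been located (e.g., via markers or a segmentation model); the function name and output size are hypothetical:

```python
import cv2
import numpy as np

def extract_screen_content(frame: np.ndarray, screen_corners: np.ndarray,
                           out_size=(1280, 720)) -> np.ndarray:
    """Rectify the region of a camera frame showing the first device's screen.

    `screen_corners` are the four corners of the screen in the camera image
    (top-left, top-right, bottom-right, bottom-left); how they are detected
    is outside the scope of this sketch.
    """
    w, h = out_size
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    src = screen_corners.astype(np.float32)
    # Perspective correction: warp the detected screen region to a fronto-parallel view.
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, matrix, (w, h))
```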
- the system is configured to control the second device to display information provided by the first device.
- the system may be configured to provide the information provided by the first device for displaying to the second device.
- the system may be configured to provide a display signal for a display device of the second device, the display signal comprising a representation of the information provided by the first device.
- the display signal may be a signal for driving the display device of the second device, or it may be a video stream to be displayed by the second device, e.g., within a window or dedicated area on the display device of the second device.
- the system drives the display device of the second device.
- the system 110 may be part of the second device.
- the second device is a surgical imaging system 100 (such as a surgical microscope system or a surgical exoscope system).
- a surgical imaging system 100 comprising the system 110 and at least one of the first device and the second device.
- the second device is merely interconnected and part of the same ecosystem as the system, with the second device accepting the display signal / video stream for displaying as part of the ecosystem.
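- A minimal sketch of composing such a picture-in-picture representation into the display signal of the second device is shown below; the window position and size are illustrative assumptions, and the inset is assumed to fit within the target frame:

```python
import cv2
import numpy as np

def compose_picture_in_picture(target_frame: np.ndarray, inset: np.ndarray,
                               top_left=(40, 40), inset_width=480) -> np.ndarray:
    """Overlay the information obtained from the first device as a picture-in-picture
    window onto the display signal of the second device (sketch only)."""
    h, w = inset.shape[:2]
    new_h = int(h * inset_width / w)
    small = cv2.resize(inset, (inset_width, new_h))
    y0, x0 = top_left
    out = target_frame.copy()
    out[y0:y0 + new_h, x0:x0 + inset_width] = small
    return out
```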
- multiple cameras may be placed at different positions and devices to ensure sufficient optical coverage in the whole room, e.g., for both determination of the first and second points of reference, and, optionally, also for providing imaging sensor data showing the screen of a device.
- the system may be configured to process at least one of depth sensor data of a depth sensor 160, imaging sensor data of a camera sensor 150 and stereo imaging sensor data of a stereo camera 170 to determine at least one of the position of the first point of reference and the position of the second point of reference, and a position of one or more devices.
- the system may be configured to determine the positions of joints of a skeletal model of the body of the user by processing the respective sensor data, and to determine at least one of the position of the first point of reference and the position of the second point of reference based on the positions of the joints of the skeletal model.
- each device could be equipped with cameras, and security-style overview cameras could also provide a better overview of the room. The cameras could be used collectively to ensure seamless operation.
- the concept for a contactless ecosystem of devices may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
- Haptic feedback is valuable in medical devices like surgical microscopes because it enhances the surgeon's precision and control during delicate procedures. By providing real-time touch sensations, haptic feedback allows the surgeon to better gauge the applied force and spatial orientation, ultimately improving the accuracy and safety of the operation.
- various technologies may be used to provide haptic feedback.
- ultrasound-based haptic feedback may be used.
- this technology creates contactless haptic feedback through the generation of pressure points in mid-air, allowing users to feel tactile sensations without touching a physical surface.
- Inami, M., Shinoda, H., & Makino, Y. (2016): “Haptoclone (Haptic-Optical Clone) for Mutual Tele-Environment” provide an example of ultrasound-based haptic feedback.
- laser-based haptic feedback may be used.
- this method uses laser-induced plasma to generate haptic feedback by creating a shockwave in the air, producing a touch sensation when interacting with the plasma.
- Futami, R., Asai, Y., & Shinoda, H. (2019): “Hapbeat: Contactless haptic feedback with laser-induced plasma” shows the use of haptic feedback based on laser-induced plasma.
- Another technology generates air vortices. Controlled air vortices are used to deliver targeted haptic feedback without physical contact, enabling users to experience a touch-like sensation from a distance. Sodhi, R., Poupyrev, I., Glisson, M., & Israr, A. provide an example of haptic feedback based on air vortices.
- in the following, a concept for improving the user experience of contactless control by adding contactless haptic feedback is presented.
- a 3D space map may be used that defines the type of feedback to be provided to the user by any type of contactless haptic creation technology such as ultrasound.
- Fig. 3a illustrates an example implementation of the concept using ultrasound emitters 190.
- a contactless control monitor is equipped with ultrasound emitters arranged perimetrically around the monitor frame.
- Fig. 3a shows a schematic drawing of a display device 180 with ultrasonic emitters 190.
- the ultrasound emitters 190 are arranged at a frame of the display device 180, surrounding a display area of the display device.
- Fig. 3b shows a schematic drawing of haptic zones with different tactile feedback, and in particular the concept of zones of feedback.
- 3D zones are defined where the fingertip/hand will receive different types of haptic feedback.
- the virtual button defines two zones: the “button stand-by zone” 320 (when the fingertip is aligned with the button), where a weak haptic feedback informs the user that the fingertip is at a position from which the user can press the virtual button.
- the other zone is the “button activation zone” 330 (when the button is pressed) which provides feedback that the virtual button is pressed (akin to confirmation of a click).
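- For illustration, classifying the fingertip into these two zones could look as follows (a sketch; the zone dimensions are assumptions in metres, and the button normal is assumed to be a unit vector pointing towards the user):

```python
import numpy as np

def button_zone(fingertip, button_center, button_normal,
                standby_depth=0.05, press_depth=0.01, radius=0.04):
    """Classify the fingertip into the haptic zones of a virtual button (sketch)."""
    to_tip = fingertip - button_center
    depth = np.dot(to_tip, button_normal)                  # distance in front of the button
    lateral = np.linalg.norm(to_tip - depth * button_normal)
    if lateral > radius or depth < 0 or depth > standby_depth:
        return "none"       # fingertip is not aligned with the button
    return "activation" if depth <= press_depth else "standby"
```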
- Fig. 3c illustrates an example of haptic feedback in the contactless control of a microscope positioning.
- Fig. 3c shows a schematic drawing of haptic feedback while moving a microscope.
- Fig. 3c shows a microscope 340 (which is equipped with ultrasound emitters, shown as circles), which can be moved between two positions 340a, 340b, which may be considered virtual boundaries and which define an angle adjustment range. While moving the orientation of the microscope, the microscope stays oriented towards a surgical cavity. In the example shown, the user’s hand moves near to the microscope and, when it enters a pre-defined distance, the microscope locks and follows the user’s hand.
- the haptic feedback serves as a guide for the alignment and as confirmation of the user’s actions.
- haptic confirmation 350a is given when the device is engaged, less pronounced haptic feedback (indicated by the short bars between the extreme positions and the middle position) is given during movement, and increasingly strong haptic feedback 350b, 350c is given as haptic warning indicating that the boundary is reached.
- the haptic feedback increases the further the respective bars extend from the arrowed circle indicating the angle.
- a way to describe the concept of programming the haptic feedback is to define 3D boundaries and zones which define the type of haptic feedback.
- the virtual boundaries 340a, 340b are derived from the microscope’s physical limitations of the adjustment range, i.e., the microscope arm cannot move beyond them.
- the virtual boundaries could also be determined by other limitations, such as the alignment with the surgical cavity, i.e., the microscope is restricted to the angles that allow looking into the surgical cavity.
- zones and boundaries positioned relative to the microscope’s position can also be useful.
- a “virtual grabbing zone” may always surround the microscope. This means that there is an area in which the hand can be recognized as a potential control input. In that case, it is useful to provide haptic feedback indicating whether the user’s hand is at the correct spot.
- Figs. 3d and 3e illustrate an example of haptic feedback designed for a specific type of control, in this case a rotation knob.
- Figs. 3d and 3e show schematic drawings of haptic feedback while operating a rotational user interface element via gesture control.
- the haptic feedback pattern shown in Fig. 3e represents the feedback during the rotation of the virtual knob shown in Fig. 3d.
- haptic feedback 360a confirms that the device is engaged.
- the haptic feedback is not restricted to the rotation feedback shown in Figs. 3d and 3e but can also assist the user in aligning the hand, giving the user more confidence in the use of a control element and, consequently, allowing the interface to be used more easily and faster.
- Examples of alignment feedback are to provide haptic confirmation (i) when the hand is aligned, (ii) when the gesture is recognized, (iii) when the hand is becoming misaligned during the use of the control element, and (iv) when the gesture is becoming difficult to recognize.
- the haptic feedback may be used to help the user transfer information between devices, as discussed in connection with Figs. 1a to 2c and 4.
- the system 110 of the discussed figures may be configured to control an emitter of a contactless tactile feedback system, such as one or more ultrasound emitters, one or more laser emitters, one or more air emitters or one or more magnetic field emitters, to provide tactile feedback in response to the user performing the pre-defined gesture (e.g., by giving tactile feedback when the pre-defined gesture is detected to start and when the pre-defined gesture is detected to end, and/or by giving tactile feedback when the first and second device are determined as part of the pre-defined gesture).
- Figs. 3d and 3e illustrate a rotating knob, but the same concept can be extended to other virtual control elements, such as a sliding bar or a switch.
- the goal of the haptic feedback in virtual controls is to resemble the experience of a physical control unit, so the user can have a natural, intuitive, and efficient utilization.
- the user may have a natural and intuitive experience.
- the confirmation may offer certainty to the user about whether the contactless control is activated or not.
- the intuitive and assertive experience may allow the user to use the virtual control faster and more efficiently.
- the confirmation may also reduce errors, i.e., false unintentional activation, misinterpretation of gestures, and activation failures.
- the contactless tactile feedback may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
- Fig. 4 shows a flow chart of an example of a method for controlling a user interface.
- the method comprises determining 410 a position of a first point of reference based on a position of a first point on the body of a user for a first point of time and a second point of time.
- the method comprises determining 420 a position of a second point of reference based on a position of a pointing indicator used by the user for the first point of time and the second point of time.
- the method comprises determining 430 a first device based on the first point of reference and the second point of reference for the first point of time and a second device based on the first point of reference and the second point of reference for the second point of time.
- the method comprises determining 440 a response based on the determined first and second device.
- the act of determining a response comprises controlling 450 the second device to display information provided by the first device.
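- As a non-normative sketch of how the acts 410 to 450 could fit together (the pose-estimation and ray-casting helpers, as well as the device interface, are hypothetical placeholders, not part of the disclosed method):

```python
def control_user_interface(frame_t1, frame_t2, devices, estimate_pose, intersect_ray):
    """Sketch of the method of Fig. 4; all helper callables are assumed to exist elsewhere."""
    # 410/420: determine the first and second point of reference for both points of time
    refs = {}
    for key, frame in (("t1", frame_t1), ("t2", frame_t2)):
        body_point, pointing_indicator = estimate_pose(frame)
        refs[key] = (body_point, pointing_indicator)

    # 430: determine the device pointed at for the first and the second point of time
    first_device = intersect_ray(refs["t1"], devices)
    second_device = intersect_ray(refs["t2"], devices)

    # 440/450: determine the response and control the second device
    if first_device is not None and second_device is not None:
        second_device.display(first_device.get_display_content())
```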
- the method may be performed by a computer system, such as the system 110 discussed in connection with one or more of Figs. 1a to 3e, 5.
- Features introduced in connection with the system and optical imaging system of one of Figs. 1a to 3e, 5 may likewise be included in the corresponding method.
- the method may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
- aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
- an imaging device, such as a microscope or exoscope, may be part of or connected to a system as described in connection with one or more of Figs. 1a to 4.
- Fig. 5 shows a schematic illustration of a system 500 configured to perform a method described herein.
- the system 500 comprises an imaging device 510 (such as a microscope or exoscope) and a computer system 520.
- the imaging device 510 is configured to take images and is connected to the computer system 520.
- the computer system 520 is configured to execute at least a part of a method described herein.
- the computer system 520 may be configured to execute a machine learning algorithm.
- the computer system 520 and imaging device 510 may be separate entities but can also be integrated together in one common housing.
- the computer system 520 may be part of a central processing system of the imaging device 510 and/or the computer system 520 may be part of a subcomponent of the imaging device 510, such as a sensor, an actuator, a camera, or an illumination unit, etc. of the imaging device 510.
- the computer system 520 may be a local computer device (e.g. personal computer, laptop, tablet computer or mobile phone) with one or more processors and one or more storage devices or may be a distributed computer system (e.g., a cloud computing system).
- the computer system 520 may comprise any circuit or combination of circuits.
- the computer system 520 may include one or more processors which can be of any type.
- processor may mean any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), a multi-core processor, a field programmable gate array (FPGA), for example, of a microscope or a microscope component (e.g. camera), or any other type of processor or processing circuit.
- Other types of circuits that may be included in the computer system 520 are a custom circuit, an application-specific integrated circuit (ASIC), or the like, such as, for example, one or more circuits (such as a communication circuit) for use in wireless devices like mobile telephones, tablet computers, laptop computers, two-way radios, and similar electronic systems.
- the computer system 520 may include one or more storage devices, which may include one or more memory elements suitable to the particular application, such as a main memory in the form of random access memory (RAM), one or more hard drives, and/or one or more drives that handle removable media such as compact disks (CD), flash memory cards, digital video disk (DVD), and the like.
- the computer system 520 may also include a display device, one or more speakers, and a keyboard and/or controller, which can include a mouse, trackball, touch screen, voice-recognition device, or any other device that permits a system user to input information into and receive information from the computer system 520.
- the act of determining a response may comprise selecting a target device of two or more devices based on at least one of the determined position in the user interface, the determined element of the user interface and the determined device.
- the act of determining a response may further comprise controlling the target device.
- the proposed concept allows easy, fast, precise, and intuitive control of devices. The user does not need any additional equipment such as handheld devices or wearable eye-trackers. It also supports multiple users, limited only by the resolution and the field of view of the cameras. For example, the proposed concept may be used by multiple users simultaneously.
- the presently discussed application of the proposed concept relates to contactless device control, i.e., contactless control of devices.
- In complex working environments such as surgical rooms, there are multiple devices, and controlling them typically requires physical contact.
- a contactless methodology of fast and intuitive control of devices is presented. This methodology builds upon the techniques introduced in connection with Figs. 1a to 1d, i.e., the use of finger-pointing contactless control for one or multiple devices.
- In Figs. 6a, 6b and 7, additional details are shown on how the techniques introduced in connection with Figs. 1a to 1d can be used to select and control different devices.
- the platform may continuously analyze the users’ positions and gestures until a user puts their hand in the direction of a device (e.g., points towards a device).
- Fig. 6a shows a schematic diagram of an example of a selection of a device.
- Fig. 6a shows the first point of reference 120 (illustrated by the eyes of the user, a surgeon) and the second point of reference 130 (illustrated by the hand of the user).
- the platform continuously analyzes the user’s position and gestures while the user moves their hand, until the user puts their hand in the direction of a device.
- the system may be configured to select the target device by determining which device of the two or more devices the user is pointing at based on the first point of reference and based on the second point of reference. This may be done using the imaginary line extending through the first and second points of reference towards the respective devices, as discussed in connection with Figs. 1a to 1d.
- the system may be configured to determine which device the user is pointing at by projecting an imaginary line 140 through the first point of reference and the second point of reference towards the two or more devices.
- the system may be configured to select one of the two or more devices as target device based on the imaginary line intersecting the respective device.
- the user moves their hand to select a device and accordingly points at one of three devices: an Image-Guided Surgery (IGS) device 610, a surgical microscope 620, and an endoscope control unit 630. If an imaginary line from the first point of reference 120 through the second point of reference 130 is within a device recognition zone or detection area 610a; 620a; 630a around one of the devices 610; 620; 630, the respective device is selected.
- the system may be configured to select one of the two or more devices as target device based on the imaginary line intersecting a detection area 610a, 620a, 630a encompassing the respective device.
- the detection area may extend beyond the device, e.g., by a pre-defined distance surrounding the respective device, or by a pre-defined number of degrees of a circle having its center point at the first point of reference, the pre-defined number of degrees extending from an imaginary line between the first point of reference and a center point of the respective device.
- the latter approach is shown in Fig. 6a.
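- The degree-based detection area described above could, for instance, be implemented by comparing the pointing direction with the direction towards each device centre; the snippet below is an illustrative sketch, and the 10° threshold is an arbitrary assumption:

```python
import numpy as np

def select_target_device(first_ref, second_ref, device_centers, max_angle_deg=10.0):
    """Pick the device whose centre lies closest to the pointing ray.

    first_ref/second_ref: 3D positions of the first and second points of reference.
    device_centers: mapping of device name -> 3D centre position.
    Returns the device name, or None if no device lies within the detection cone.
    """
    pointing = np.asarray(second_ref, float) - np.asarray(first_ref, float)
    pointing /= np.linalg.norm(pointing)
    best, best_angle = None, max_angle_deg
    for name, center in device_centers.items():
        to_device = np.asarray(center, float) - np.asarray(first_ref, float)
        to_device /= np.linalg.norm(to_device)
        angle = np.degrees(np.arccos(np.clip(np.dot(pointing, to_device), -1.0, 1.0)))
        if angle <= best_angle:
            best, best_angle = name, angle
    return best
```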
- the devices have a fixed arrangement, e.g., when they are integrated in a surgical theatre.
- the surgical microscope’s arm may be attached at a fixed point at the ceiling, and the endoscope and the IGS may be integrated or attached to the wall of the operating room.
- a fixed spatial relationship exists between the devices, and therefore potentially the coordinate system of the user interface.
- the system may be configured to determine the position of the two or more devices relative to the user based on a fixed (spatial) relationship between the two or more devices and at least one sensor being used to determine the position of the first and second point of reference. Based on the fixed (spatial) relationship, the system may determine which device the user is pointing towards, e.g., using the imaginary line technique.
- the two or more devices may have visual identifiers (e.g., two-dimensional codes) visibly printed or attached to their respective casings, which may be used to identify the respective devices and to include them for selection and may be used as further point of reference for determining their respective position. Similar to the determination of the position of the first and second points of reference, different sensors may be used for this purpose.
- the system may be configured to process at least one of depth sensor data of a depth sensor 160, imaging sensor data of a camera sensor 150, and stereo imaging sensor data of a stereo camera 170 to determine at least one of the position of the first point of reference, the position of the second point of reference, and a position of one or more devices.
- An indication on each device may be used to indicate the status of each device.
- the indication may be provided by a display or light of the respective device, or the indication may be projected onto the respective device.
- the system may be configured to control the target device to display an indication 625 that the device is selected.
- the system may be configured to control a projection device 185 to project, onto the selected device, an indication that the device is selected.
- a first visual indicator may be shown if the device is selected, and a second visual indicator may be shown if a control command is being performed.
- a third visual indicator may be shown when a device is available for being selected.
- a green light is used to indicate that a device is ready to be selected
- a yellow light is used to indicate that a device is selected and ready to be controlled via gesture controls
- a red light is used to indicate that a device is currently being controlled.
- the indication may change (e.g., from green to yellow) so that the user is sure about the device being selected/activated. Then each user gesture may be executed by the device (see Fig. 6b). Accordingly, an indicator light 625 of the selected device is activated or switched to a different color, e.g., from green to yellow. For example, a green indicator light may indicate that the corresponding device is available but not selected, a yellow indicator light may indicate that the corresponding device is selected and waiting for a command, and a red indicator light may indicate that a command is being executed.
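- A simplified reading of this indicator behaviour is a three-state machine; the state names and transition events below are assumptions for illustration, not a prescribed implementation:

```python
from enum import Enum

class DeviceState(Enum):
    AVAILABLE = "green"   # device can be selected
    SELECTED = "yellow"   # device selected, waiting for a gesture command
    EXECUTING = "red"     # a gesture command is currently being executed

def next_state(state: DeviceState, event: str) -> DeviceState:
    """Advance the indicator state machine on pointing/gesture events."""
    transitions = {
        (DeviceState.AVAILABLE, "pointed_at"): DeviceState.SELECTED,
        (DeviceState.SELECTED, "gesture_recognized"): DeviceState.EXECUTING,
        (DeviceState.EXECUTING, "command_finished"): DeviceState.SELECTED,
        (DeviceState.SELECTED, "deselected"): DeviceState.AVAILABLE,
    }
    return transitions.get((state, event), state)
```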
- Fig. 6b shows a schematic diagram of an example of controlling a selected device.
- the selected device 620 executes (immediately) the user’s gesture command.
- the system may be configured to identify a hand gesture being performed by the user, and to control the target device based on the identified hand gesture.
- the device indicator changes (e.g., the indicator light may become red, coming from yellow) and the device executes the command.
- the user moves the microscope 620 higher by raising the open palm upwards. Examples of gesture commands are shown in Fig. 7.
- Fig. 7 shows schematic drawings of gestures being used for controlling a selected device.
- a click gesture 710 is shown, in which the index finger of the hand is moved towards the palm of the hand, and which may be used to activate a virtual button.
- a grab gesture 720 is shown, in which the index finger and the thumb of the hand are moved towards each other, and which may be used for a drag & drop operation.
- a turn gesture 730 is shown, in which the hand is turned around an axis defined by the arm, and which may be used to adjust a virtual dial (rotational) button.
- a scroll gesture 740 is shown, in which the open palm of the hand points upwards and is raised or lowered, which may be used to move a microscope up or down.
- a rotate gesture 750 is shown, in which the fingers of the hand are rotated towards the inner side of the wrist, and which may be used to rotate an image.
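- The gestures of Fig. 7 could be dispatched to device commands through a simple lookup; the device methods below are hypothetical and only mirror the examples given in the text:

```python
# Hypothetical mapping of recognized gestures (Fig. 7) to device commands.
GESTURE_COMMANDS = {
    "click":  lambda device, _:     device.activate_button(),       # 710
    "grab":   lambda device, delta: device.drag_and_drop(delta),    # 720
    "turn":   lambda device, delta: device.adjust_dial(delta),      # 730
    "scroll": lambda device, delta: device.move_vertically(delta),  # 740
    "rotate": lambda device, delta: device.rotate_image(delta),     # 750
}

def execute_gesture(device, gesture: str, delta=0.0):
    """Look up and execute the command associated with a recognized gesture."""
    handler = GESTURE_COMMANDS.get(gesture)
    if handler is not None:
        handler(device, delta)
```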
- a device may have more than one aspect that is controllable via gesture controls.
- the microscope’s angle towards the surgical site may be adjusted via gesture control, as shown in Fig. 3c.
- the system may disambiguate between the different control functionalities.
- the system may be configured to determine an element of a user interface (i.e., an aspect of the target device that is to be controlled) of the target device based on the first point of reference and based on the second point of reference, and to control a control functionality associated with the element of the user interface based on the identified hand gesture.
- different sections of the respective devices may be associated with different control functionalities.
- Depending on the section of the device the user is pointing at, a different control functionality may be controlled. For example, if the user is pointing at the objective of the microscope, the focus may be controlled. If the user is pointing at the handles of the microscope, the working distance or angle may be controlled. For this purpose, the system may further select the control functionality being controlled based on the gesture being performed.
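- This disambiguation could, for example, be expressed as a per-device table mapping the pointed-at element and the recognized gesture to a control functionality; the element names follow the microscope example above, everything else is an illustrative assumption:

```python
# Illustrative mapping: (pointed-at element, gesture) -> control functionality.
MICROSCOPE_CONTROLS = {
    ("objective", "turn"):   "focus",
    ("handles",   "scroll"): "working_distance",
    ("handles",   "rotate"): "viewing_angle",
}

def resolve_functionality(element: str, gesture: str, table=MICROSCOPE_CONTROLS):
    """Return the functionality to control, or None if the combination is not mapped."""
    return table.get((element, gesture))
```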
- the system may be configured to display a visual representation of the identified hand gesture or of a functionality of the device being controlled by the hand gesture (e.g., an icon representing the identified hand gesture or an icon representing the functionality) via a display device 180 or via a projection device 185.
- the user can be sure that they are controlling the desired functionality.
- the detection of the users’ positions and gestures can be done using sensors on each device, a central sensor system in the room, or a combination of the two.
- the system may be part of one or each of the devices, or the system may be associated with two or more devices.
- the system 110 is assumed to be part of the surgical microscope system, with the other devices also being controllable via the system 110 of the surgical microscope system. However, the system 110 may also be separate from the surgical microscope.
- each device may operate independently using embedded sensors.
- the devices may be interconnected using a peer-to-peer connection or a central communication hub.
- interconnected devices may support “collective control of devices”, e.g., drag and drop the output image of an endoscope to be displayed on a microscope’s monitor.
- Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a processor, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software.
- the implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
- Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
- embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
- the program code may, for example, be stored on a machine readable carrier.
- Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier.
- an embodiment of the present invention is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
- a further embodiment of the invention is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
- the data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
- a further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.
- a further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
- a further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver.
- the receiver may, for example, be a computer, a mobile device, a memory device or the like.
- the apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
- In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein.
- a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein.
- the methods are preferably performed by any hardware apparatus.
- Embodiments may be based on using a machine-learning model or machine-learning algorithm.
- Machine learning may refer to algorithms and statistical models that computer systems may use to perform a specific task without using explicit instructions, instead relying on models and inference.
- a transformation of data may be used, that is inferred from an analysis of historical and/or training data.
- the content of images may be analyzed using a machine-learning model or using a machine-learning algorithm.
- the machine-learning model may be trained using training images as input and training content information as output.
- the machine-learning model "learns" to recognize the content of the images, so the content of images that are not included in the training data can be recognized using the machine-learning model.
- the same principle may be used for other kinds of sensor data as well: by training a machine-learning model using training sensor data and a desired output, the machine-learning model "learns" a transformation between the sensor data and the output, which can be used to provide an output based on non-training sensor data provided to the machine-learning model.
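- As a hedged illustration of this supervised setup applied to gesture recognition (scikit-learn is used purely as an example library, and the randomly generated features stand in for real, labelled hand-landmark data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real training data: flattened hand-landmark coordinates
# per frame (63 values for 21 landmarks), labelled with the performed gesture.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 63))
y = rng.choice(["click", "grab", "turn", "scroll", "rotate"], size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))  # ~chance level on random data
```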
- the provided data (e.g., sensor data, meta data and/or image data) may be pre-processed to obtain a feature vector, which is used as input to the machine-learning model.
- Machine-learning models may be trained using training input data.
- the examples specified above use a training method called "supervised learning".
- supervised learning the machine-learning model is trained using a plurality of training samples, wherein each sample may comprise a plurality of input data values, and a plurality of desired output values, i.e. each training sample is associated with a desired output value.
- the machine-learning model "learns" which output value to provide based on an input sample that is similar to the samples provided during the training.
- semi-supervised learning may be used. In semi-supervised learning, some of the training samples lack a corresponding desired output value.
- Supervised learning may be based on a supervised learning algorithm (e.g., a classification algorithm, a regression algorithm or a similarity learning algorithm).
- Classification algorithms may be used when the outputs are restricted to a limited set of values (categorical variables), i.e. the input is classified to one of the limited set of values.
- Regression algorithms may be used when the outputs may have any numerical value (within a range).
- Similarity learning algorithms may be similar to both classification and regression algorithms but are based on learning from examples using a similarity function that measures how similar or related two objects are.
- unsupervised learning may be used to train the machine-learning model. In unsupervised learning, (only) input data might be supplied, and an unsupervised learning algorithm may be used to find structure in the input data (e.g., by grouping or clustering the input data, finding commonalities in the data).
- Clustering is the assignment of input data comprising a plurality of input values into subsets (clusters) so that input values within the same cluster are similar according to one or more (pre-defined) similarity criteria, while being dissimilar to input values that are included in other clusters.
- Reinforcement learning is a third group of machine-learning algorithms.
- reinforcement learning may be used to train the machine-learning model.
- one or more software actors (called "software agents") are trained to take actions in an environment. Based on the taken actions, a reward is calculated.
- Reinforcement learning is based on training the one or more software agents to choose the actions such, that the cumulative reward is increased, leading to software agents that become better at the task they are given (as evidenced by increasing rewards).
- Feature learning may be used.
- the machine-learning model may at least partially be trained using feature learning, and/or the machine-learning algorithm may comprise a feature learning component.
- Feature learning algorithms which may be called representation learning algorithms, may preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions.
- Feature learning may be based on principal components analysis or cluster analysis, for example.
- anomaly detection (i.e., outlier detection) may be used.
- the machine-learning model may at least partially be trained using anomaly detection, and/or the machine-learning algorithm may comprise an anomaly detection component.
- the machine-learning algorithm may use a decision tree as a predictive model.
- the machine-learning model may be based on a decision tree.
- observations about an item (e.g., a set of input values) may be represented by the branches of the decision tree, and an output value corresponding to the item may be represented by the leaves of the decision tree.
- Decision trees may support both discrete values and continuous values as output values. If discrete values are used, the decision tree may be denoted a classification tree; if continuous values are used, the decision tree may be denoted a regression tree.
- Association rules are a further technique that may be used in machine-learning algorithms. In other words, the machine-learning model may be based on one or more association rules.
- Association rules are created by identifying relationships between variables in large amounts of data.
- the machine-learning algorithm may identify and/or utilize one or more relational rules that represent the knowledge that is derived from the data.
- the rules may e.g. be used to store, manipulate or apply the knowledge.
- Machine-learning algorithms are usually based on a machine-learning model.
- the term “machine-learning algorithm” may denote a set of instructions that may be used to create, train or use a machine-learning model.
- the term “machine-learning model” may denote a data structure and/or set of rules that represents the learned knowledge (e.g. based on the training performed by the machine-learning algorithm).
- the usage of a machine-learning algorithm may imply the usage of an underlying machine-learning model (or of a plurality of underlying machine-learning models).
- the usage of a machine-learning model may imply that the machine-learning model and/or the data structure/set of rules that is the machine-learning model is trained by a machine-learning algorithm.
- the machine-learning model may be an artificial neural network (ANN).
- ANNs are systems that are inspired by biological neural networks, such as can be found in a retina or a brain.
- ANNs comprise a plurality of interconnected nodes and a plurality of connections, so-called edges, between the nodes.
- Each node may represent an artificial neuron.
- Each edge may transmit information, from one node to another.
- the output of a node may be defined as a (non-linear) function of its inputs (e.g. of the sum of its inputs).
- the inputs of a node may be used in the function based on a "weight" of the edge or of the node that provides the input.
- the weight of nodes and/or of edges may be adjusted in the learning process.
- the training of an artificial neural network may comprise adjusting the weights of the nodes and/or edges of the artificial neural network, i.e. to achieve a desired output for a given input.
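- A minimal numeric illustration of such a weight adjustment, for a single linear neuron with a squared-error loss (not the training scheme of any particular embodiment):

```python
import numpy as np

def gradient_step(weights, x, target, learning_rate=0.01):
    """One weight update for a single linear neuron with squared-error loss."""
    prediction = np.dot(weights, x)
    error = prediction - target
    gradient = error * x  # d/dw of 0.5 * (w·x - target)^2
    return weights - learning_rate * gradient
```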
- the machine-learning model may be a support vector machine, a random forest model or a gradient boosting model.
- Support vector machines (i.e., support vector networks) are supervised learning models that may be used for classification and regression analysis.
- Support vector machines may be trained by providing an input with a plurality of training input values that belong to one of two categories.
- the support vector machine may be trained to assign a new input value to one of the two categories.
- the machine-learning model may be a Bayesian network, which is a probabilistic directed acyclic graphical model.
- a Bayesian network may represent a set of random variables and their conditional dependencies using a directed acyclic graph.
- the machine-learning model may be based on a genetic algorithm, which is a search algorithm and heuristic technique that mimics the process of natural selection.
Abstract
Examples relate to a system, a method and a computer program for controlling a user interface. The system is configured to determine a position of a first point of reference based on a position of a first point on the body of a user for a first point of time and a second point of time. The system is configured to determine a position of a second point of reference based on a position of a pointing indicator used by the user for the first point of time and the second point of time. The system is configured to determine a first device based on the first point of reference and the second point of reference for the first point of time and a second device based on the first point of reference and the second point of reference for the second point of time. The system is configured to determine a response based on the determined first and second devices. The act of determining a response comprises controlling the second device to display information displayed on the first device.