EP1730970A1 - Device and method for simultaneously representing virtual and real environment information - Google Patents
Device and method for simultaneously representing virtual and real environment information
- Publication number
- EP1730970A1 (application EP05731750A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- information
- environment
- unit
- detection device
- stored
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
Definitions
- the invention relates to a device and a method for displaying information, in particular augmented reality information, for at least one user.
- such a device is used, for example, in the planning of plants and machines. With the help of this device, a first impression of a planned system or of a planned conversion measure in an already existing environment is to be obtained quickly, already in the planning phase.
- Augmented Reality (AR) is a form of human-technology interaction in which information is faded into a person's field of vision, e.g. via data glasses, thereby expanding the currently perceived reality. This happens depending on the context, i.e. suitable for and derived from the object under consideration, e.g. a component, a tool, a machine, or its location.
- An example of this can be a safety instruction during an assembly/disassembly process.
- There are two common methods for representing augmented reality.
- Optical see-through (OST): the virtual information is superimposed directly into the user's field of vision, and the user can still perceive reality directly.
- In this case, a miniature monitor worn on the head, a so-called head-mounted display, is typically used to display the image information.
- Video see-through (VST): reality is captured using a video camera.
- The virtual information is superimposed into the recorded video image.
- The image information generated in this way can then be displayed with one or more head-mounted displays or with a standard display such as a monitor. This allows multiple users to view the augmented reality, e.g. also at a remote location.
- With video see-through, the video camera and the room detection system can also be mounted on the user's head.
- In today's practice, however, the video camera, and thus advantageously also the room detection system, is mounted on a portable computer (for example a tablet PC).
- The invention is based on the object of specifying a device and a method for displaying information, in particular augmented reality information, for at least one user, which, with reduced expenditure of time and money, allow new systems to be planned or existing systems to be expanded in a currently existing environment, whereby dynamic processes of both the real and the virtual environment can be detected and displayed.
- This object is achieved by a device for displaying information, in particular augmented reality information, for at least one user, with
- at least one room detection unit for detecting a current reality and for generating corresponding room information,
- at least one environment detection unit for detecting an environment and for generating corresponding environment information which characterizes a position and/or an orientation of the device in relation to the environment, and
- at least one processing unit for linking the environment information, the room information and information stored in a first storage medium, which serves to describe at least one object, into a set of image information such that mutual concealments of the current reality and of the object described by the stored information can be made recognizable by at least one playback unit.
- This object is further achieved by a method for displaying information, in particular augmented reality information, for at least one user, in which a current reality is detected with the aid of at least one room detection unit and corresponding room information is generated, an environment is detected with the aid of at least one environment detection unit and corresponding environment information is generated which characterizes a position and/or an orientation of the device in relation to the environment, and, with the help of at least one processing unit, the environment information, the room information and information stored in a first storage medium, which serves to describe at least one object, are linked into a set of image information in such a way that mutual concealments of the current reality and of the object described by the stored information are made recognizable by at least one playback unit.
- In the device and method according to the invention, an AR system known from the prior art is supplemented by a room detection unit which permits continuous detection of the current reality.
- the room detection unit recognizes which areas are in a detection area of the device and what distance these areas are from the device.
- With the help of an environment detection unit, the relative position and orientation of the device in relation to the real environment are detected.
- By means of a processing system, the data generated by the room detection unit and the environment detection unit are linked with a further data set describing at least one object in such a way that mutual concealments are made visible in the user's field of vision when the detected reality and the objects described by the data set are displayed.
- The invention is based on the finding that, when planning new systems or expanding existing systems in a currently existing environment, questions regarding spatial aspects must be answered, such as whether a new system part can be installed in a certain position, what the installation path is, and whether collisions can occur due to moving machine parts and people. These questions can be answered with the aid of AR systems without a complete 3-dimensional model of the really existing environment having to be generated at great expense of time and money.
- The continuous detection of the current reality located in the detection area of the device even enables the virtual arrangement and dynamics present in the real environment to be visualized within the augmented reality. This eliminates the considerable synchronization effort that such a visualization would cause with a complete 3-dimensional modeling of a non-static reality.
- In an advantageous embodiment of the invention, the room detection unit comprises a space detection device, which is provided for detecting surface distance points of the current reality, and a processing unit, which is provided for calculating the room information, the room information in particular describing a 3-dimensional model of the reality.
- In this way, only the information content of the current reality required for calculating any occlusion areas that may occur is recorded and modeled, as a result of which the effort is considerably reduced compared to the creation of a complete 3-dimensional model of a real geometric arrangement.
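The surface distance points can be pictured as a sparse depth buffer: for each sampled viewing ray, only the range to the nearest real surface is kept, which is exactly the information the later occlusion test needs. A minimal sketch of this idea in Python/NumPy follows; all names are hypothetical, since the patent does not prescribe any data format.

```python
import numpy as np

def depth_map_from_samples(samples, width, height):
    """Build a per-pixel depth map from surface distance points.

    samples: iterable of (u, v, distance) tuples, where (u, v) is the
    pixel the measuring ray passes through and distance is the range
    to the nearest real surface along that ray.
    Pixels without a sample stay at +inf (no surface detected there).
    """
    depth = np.full((height, width), np.inf)
    for u, v, d in samples:
        depth[v, u] = min(depth[v, u], d)  # keep the closest hit per pixel
    return depth

# usage: three range samples, e.g. from a laser scanner
real_depth = depth_map_from_samples(
    [(10, 5, 2.4), (11, 5, 2.5), (10, 6, 2.4)], width=640, height=480)
```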
- The environment detection unit advantageously has an environment detection device, which is provided for detecting the position and/or the orientation of the device in relation to the real environment, and a processing unit for calculating the environment information, which describes the position and/or the orientation of the device in relation to the real environment, for example in the form of a matrix.
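Such environment information is commonly represented as a 4x4 homogeneous transform combining the detected orientation and position. A minimal sketch, assuming the orientation is already available as a 3x3 rotation matrix (names hypothetical):

```python
import numpy as np

def pose_matrix(rotation_3x3, position_xyz):
    """4x4 homogeneous matrix describing the device pose in world coordinates."""
    m = np.eye(4)
    m[:3, :3] = rotation_3x3   # orientation of the device
    m[:3, 3] = position_xyz    # position of the device
    return m

# the world-to-device transform is the inverse of the device pose
device_pose = pose_matrix(np.eye(3), [1.0, 0.0, 1.5])
world_to_device = np.linalg.inv(device_pose)
```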
- In a typical application of the device according to the invention, a model of a system and/or of a system part is provided as the object described by the information stored in the first storage medium.
- For example, in the case of a planned installation measure, a virtual model of the system or system part to be installed can be created in order to quickly gain a first impression of the planned system in the reality provided for it.
- In an advantageous embodiment, the playback unit is designed as a head-mounted display, the objects described by the image information generated by the processing unit being superimposed directly into the field of view of the user, while the user continues to directly perceive the part of the current reality that is not covered by these objects.
- This type of presentation of augmented reality information is the so-called optical see-through method.
- the space detection device and the environment detection device are advantageously attached to the user's head so that they rest in relation to the user's eyes.
- The viewing angles of the space detection device and the environment detection device ideally overlap with the current field of vision of the user in such a way that the entire field of vision of the user is captured.
- In contrast, the processing units of the room detection unit and the environment detection unit can be implemented on a computer carried by the user.
- In an alternative advantageous embodiment, the playback unit is designed in such a way that both the objects described by the image information generated by the processing unit and the part of the current reality not hidden by these objects are displayed, the device for this purpose having in particular at least one image acquisition unit, designed for example as a video camera, for capturing the current reality.
- This embodiment enables the presentation of the augmented reality information for multiple users.
- This type of presentation of augmented reality information is the so-called video see-through method.
- Here, the parts of the virtual objects that are described by the image information and not covered by the current reality are faded into the image captured by the video camera and displayed on one or, e.g. using a video splitter, several playback units.
- The playback units can be head-mounted displays or ordinary monitors, which can in particular also be positioned at locations remote from the captured current reality.
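The video see-through mixing step can be pictured as ordinary alpha compositing: the masked virtual rendering, whose occluded pixels have been made transparent, is blended over the camera frame before the result is distributed to the playback units. A minimal sketch under that assumption (hypothetical names, 8-bit images):

```python
import numpy as np

def composite(camera_rgb, virtual_rgba):
    """Fade the non-occluded parts of the virtual objects into the video image."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    mixed = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * camera_rgb
    return mixed.astype(np.uint8)
```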
- the space detection device, the environment detection device and the image acquisition unit can be mounted on the head of a user or on another device, for example a portable computer.
- In order to be able to match the viewing angle and the position of the user as exactly as possible with the position and the orientation of the space detection device in the optical see-through embodiment, a second storage medium is advantageously provided which is used to store calibration information, the calibration information describing geometric deviations between the eye of the user, the position of the playback system, the space detection device and the environment detection device.
- the second storage medium can also be implemented with the first storage medium in the form of a common storage medium.
- Likewise, in the video see-through embodiment, the second storage medium is advantageously provided for storing calibration information, the calibration information describing geometric deviations between the position of the video camera, the position of the playback system, the space detection device and the environment detection device.
- the second storage medium can also be implemented with the first storage medium in the form of a common storage medium.
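One way to picture the calibration information is as a set of fixed homogeneous transforms expressing where the space detection device and the playback unit sit relative to the camera or the user's eye; chaining them brings all measurements into one frame. A sketch with hypothetical, identity-valued matrices standing in for measured calibration data:

```python
import numpy as np

# hypothetical calibration transforms, measured once and stored
SENSOR_TO_EYE = np.eye(4)    # space detection device -> eye/camera frame
DISPLAY_TO_EYE = np.eye(4)   # playback unit -> eye/camera frame

def surface_points_in_eye_frame(points_sensor_xyz):
    """Map detected surface points from sensor coordinates into the
    eye/camera frame, so real and virtual geometry share one system."""
    pts = np.asarray(points_sensor_xyz, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (SENSOR_TO_EYE @ homo.T).T[:, :3]
```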
- A simple calculation of possible concealments of virtual geometric arrangements by arrangements in the current reality is advantageously realized in that the processing unit, on the basis of the information generated by the room detection unit and the environment detection unit as well as the information stored in the storage media, represents the objects described by the room information and by the information stored in the first storage medium in a common coordinate system. On the basis of this common reference system, the processing unit can calculate new image information in which those areas described by the information stored in the first storage medium are masked out that are covered, in the field of vision of the user or of the video camera, by the areas described by the room information.
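In rendering terms, the common reference system enables a per-pixel depth test: wherever the detected real surface lies closer to the viewer than the virtual object, the virtual pixel is masked out. A minimal sketch, assuming both depth maps have already been rendered from the viewer's pose (names hypothetical):

```python
import numpy as np

def mask_occluded(virtual_rgba, virtual_depth, real_depth):
    """Hide virtual pixels that lie behind a detected real surface.

    virtual_rgba:  (H, W, 4) rendered image of the virtual object
    virtual_depth: (H, W) per-pixel depth of the virtual object
    real_depth:    (H, W) per-pixel depth of the captured reality
    """
    occluded = real_depth < virtual_depth  # reality is in front here
    out = virtual_rgba.copy()
    out[occluded, 3] = 0                   # make those pixels transparent
    return out
```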
- The processing unit for linking the environment and room information can be implemented on one computer together with the processing unit of the room detection unit and/or the processing unit of the environment detection unit.
- A virtual system model can be made dynamic with the device according to the invention in that the device has at least one simulation system for generating the information stored in the first storage medium.
- the dynamic processes are calculated by the simulation system.
- the information stored in the first storage medium for describing the virtual objects is continuously adapted in accordance with the data calculated by the simulation system.
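Conceptually, the simulation system runs alongside the renderer and keeps rewriting the stored object description each cycle. A minimal sketch of this coupling, with hypothetical function names (the patent does not specify any update rate or interface):

```python
import time

def run_simulation(storage, simulate_step, period_s=0.04):
    """Continuously adapt the stored virtual-object description.

    storage:       dict standing in for the first storage medium
    simulate_step: function old_model -> new_model, e.g. advancing
                   the joint angles of a simulated robot
    """
    while True:
        storage["virtual_model"] = simulate_step(storage.get("virtual_model"))
        time.sleep(period_s)  # the renderer reads the latest model each frame
```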
- the room detection device of the room detection unit can be designed, for example, as a radar system, as an ultrasound system, as a laser system or as a stereo camera system.
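For the stereo camera variant, surface distance points follow from triangulation: a feature seen with horizontal disparity d pixels between the two views, with focal length f (in pixels) and camera baseline b, lies at depth z = f*b/d. A short sketch with assumed example calibration values:

```python
def stereo_depth(disparity_px, focal_px=800.0, baseline_m=0.12):
    """Depth of a surface point from stereo disparity (triangulation).

    Assumed example calibration: 800 px focal length, 12 cm baseline.
    """
    if disparity_px <= 0:
        return float("inf")  # no parallax: point effectively at infinity
    return focal_px * baseline_m / disparity_px

# a feature shifted 16 px between the two views lies 6 m away
assert abs(stereo_depth(16.0) - 6.0) < 1e-9
```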
- To minimize the required hardware expenditure, the space detection device and the environment detection device can be implemented in a common detection device, in particular in camera-based systems.
- In an embodiment using the video see-through method, an integration of the space detection device and/or the environment detection device into the video camera required for this method is moreover possible.
- FIG. 1 shows a schematic representation of a device for displaying information, in particular augmented reality information, for at least one user
- FIG. 2 shows a schematic representation of an embodiment of the device based on the video see-through method
- FIG. 3 shows an alternative device for displaying information that is based on the video see-through method
- FIG. 4 shows a representation of a typical application scenario of an embodiment of the device shown in FIG. 1.
- FIG. 1 shows a schematic illustration of a device 1 for displaying information, in particular augmented reality information, for at least one user 2.
- the device shown relates to an embodiment which is based on the optical see-through method. With the help of a room detection unit 3, the user 2 detects a current reality 13 that is in his field of vision.
- On the head of the user 2, as part of the room detection unit 3, there is a room detection device 3a which rests relative to the eyes of the user 2. With the help of a processing unit 3b of the room detection unit 3, room information 10 is generated and forwarded to a processing unit 9.
- An environment detection device 4a is also positioned on the head of the user 2, with which the position and the viewing angle of the user 2 can be detected.
- The environment detection device 4a rests in relation to the space detection device 3a and in relation to the eyes of the user 2.
- A processing device 4b generates environment information 5 from the detected position and the detected viewing angle, which is likewise passed on to the processing unit 9.
- In an alternative embodiment of the environment detection unit, the environment detection device comprises a sensor which is positioned on the head of the user and a further detection device which is set up in such a way that it can detect the position and the orientation of the sensor, and thus also of the user, in relation to the current reality.
- Information 8 which describes, for example, a virtual geometric arrangement is stored in a storage medium 7.
- the virtual geometric arrangement can be, for example, the three-dimensional model of a planned system or a planned system part.
- the information 8, which describes the three-dimensional model of such a system, is also fed to the processing unit 9.
- Another storage medium 12 contains calibration information 11, which describes the geometric deviations between the eye of the user 2, the position of a display unit 6 located on the head of the user 2, the space detection device 3a and the environment detection device 4a.
- The processing unit 9 now links the room information 10, the environment information 5, the information 8 describing the three-dimensional model of the virtual system, and the calibration information 11 into a set of image information 14 in such a way that mutual concealments of the current reality and of the planned virtual system can be made recognizable by the playback unit 6.
- Via the playback unit 6, which in this example is designed as a head-mounted display, only an image of the planned virtual system is shown, in which the areas hidden by the current reality 13 in the field of vision of the user 2 are masked out. The part of the current reality 13 that is not covered by the planned virtual system is perceived directly by the user.
- FIG. 2 shows a schematic representation of an embodiment of the device 1 based on the video see-through method. Here, and also in the description of the further figures, the same reference numerals as in FIG. 1 are used.
- the device 1 is supplemented by an image capture unit 18, which is in particular designed as a video camera.
- The playback unit 6 now represents the complete augmented reality. That is, in addition to the image information 14, which describes a model of the virtual system in which the surfaces covered by the current reality are masked out, the part of the current reality 13 captured by the video camera 18 that is not covered by the virtual system is also displayed with the help of the playback unit 6.
- For this purpose, the image of the virtual system, in which the areas covered by the current reality 13 are masked out, is superimposed on the image captured by the video camera 18.
- In this type of embodiment, a conversion unit 15 with a mixing function is used to generate a corresponding signal that can be represented by the playback unit 6. This unit can be implemented in software as well as in hardware, for example as a video card with the appropriate functionality.
- the space capturing device 3a, the environmental capturing device 4a and the image capturing unit 18 are positioned on the head of the user 2.
- The playback unit 6 used to visualize the augmented reality is also attached to the head of the user 2, for example as a head-mounted display.
- In the alternative embodiment according to FIG. 3, the space detection device 3a, the environment detection device 4a, the image acquisition unit 18 and the playback unit 6 are mounted on a portable computer.
- This embodiment enables multiple users to view augmented reality. With the help of a video splitter, the representation of augmented reality on several playback units is also possible.
- FIG. 4 shows a typical application scenario of an embodiment of the device shown in FIG. 1.
- The current reality 13 can be, for example, a conveyor belt.
- The space detection device 3a of the space detection unit 3 detects the part of the conveyor belt 13 that is in the field of vision of the user 2.
- The processing unit 3b of the space detection unit 3 models the surfaces of the conveyor belt 13 that are in the field of vision of the user 2 into a three-dimensional surface model 10b.
- The processing unit 4b of the environment detection unit 4 generates environment information 5 from the detected data, which is represented in the form of a matrix 5b.
- A simulation system 16 continuously generates a data set 8 which describes the 3D model 8b of a virtual geometric arrangement, in this example a robot. In this way, the virtual robot is made dynamic within the augmented reality.
- the corresponding amount of data 8 is stored in a first storage medium 7.
- the previously described calibration information 11 is stored in the form of a matrix 11b or a plurality of matrices in a second storage medium 12.
- The processing unit 9 now uses the calibration information 11 and the environment information 5 to transfer the 3-dimensional surface model 10b of the real arrangement 13 (here the conveyor belt) and the 3-dimensional model 8b of the virtual arrangement (here a virtual robot) into a common coordinate system. Within this coordinate system, the processing unit 9 calculates the concealments of the virtual robot that are caused by the conveyor belt. As a result, the processing unit 9 generates a new data set 14, which in turn describes a virtual model 14b of the robot in which the areas hidden by the conveyor belt are masked out.
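Applied to this scenario, one frame of processing unit 9 reduces to the depth test sketched earlier: the conveyor belt's surface model 10b and the robot model 8b are compared per pixel in the common coordinate system, and the robot pixels behind the belt are masked out to yield model 14b. A toy, self-contained illustration (all values hypothetical):

```python
import numpy as np

def process_frame(real_depth_10b, robot_rgba, robot_depth):
    """One cycle of processing unit 9: mask the robot pixels hidden by
    the conveyor belt, yielding the image information 14 (model 14b)."""
    hidden = real_depth_10b < robot_depth  # belt in front of robot
    model_14b = robot_rgba.copy()
    model_14b[hidden, 3] = 0               # mask out those areas
    return model_14b

# toy 1x2 frame: the belt (2 m) hides the robot (3 m) in the first pixel only
real = np.array([[2.0, np.inf]])
robot_d = np.array([[3.0, 3.0]])
robot = np.full((1, 2, 4), 255, dtype=np.uint8)
out = process_frame(real, robot, robot_d)
assert out[0, 0, 3] == 0 and out[0, 1, 3] == 255
```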
- The model 14b of the virtual robot, in which the areas covered by the conveyor belt are masked out, is converted by means of a video card 15 into a signal which can be represented by the playback unit 6.
- By simultaneously perceiving the model 14b of the virtual robot represented by the playback unit 6 and the real conveyor belt, the user 2 sees a mixed virtual-real image 17, in which the masking of the occluded areas calculated by the processing unit 9 creates the desired 3-dimensional impression.
- the need for a time-consuming and costly 3-dimensional modeling of the conveyor belt is eliminated in the device and the method according to the invention.
- the invention relates to an apparatus and a method for displaying virtual and real environmental information for one or more users, virtual arrangements and real arrangements being displayed in such a way that concealments of the virtual arrangements are made recognizable by real arrangements.
- With the help of an environment detection unit 4, the relative position and orientation of the device in the real environment are detected.
- the reality is continuously recorded and converted into a 3-dimensional surface model.
- A processing system 9 transfers the 3-dimensional surface model of the real arrangement and the 3-dimensional model of the virtual arrangement into a common coordinate system and calculates any areas of the virtual arrangement hidden by the real arrangement.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Description
Device and method for the simultaneous display of virtual and real environment information
The invention relates to a device and a method for displaying information, in particular augmented reality information, for at least one user. Such a device is used, for example, in the planning of plants and machines. With the help of this device, a first impression of a planned system or of a planned conversion measure in an already existing environment is to be obtained quickly, already in the planning phase.
Augmented Reality (AR) is a form of human-technology interaction in which information is faded into a person's field of vision, e.g. via data glasses, thereby expanding the currently perceived reality. This happens depending on the context, i.e. suitable for and derived from the object under consideration, e.g. a component, a tool, a machine, or its location. An example of this can be a safety instruction during an assembly/disassembly process.
There are two common methods for representing augmented reality. With the so-called optical see-through (OST), the virtual information is superimposed directly into the user's field of vision, and the user can still perceive reality directly. In this case, a miniature monitor worn on the head, a so-called head-mounted display, is typically used to display the image information. With the so-called video see-through (VST), reality is captured using a video camera. The virtual information is superimposed into the recorded video image. The image information generated in this way can then be displayed with one or more head-mounted displays or with a standard display such as a monitor. This allows multiple users to view the augmented reality, e.g. also at a remote location. With video see-through, the video camera and the room detection system can also be mounted on the user's head. In today's practice, however, the video camera, and thus advantageously also the room detection system, is mounted on a portable computer (e.g. a tablet PC).
The invention is based on the object of specifying a device and a method for displaying information, in particular augmented reality information, for at least one user, which, with reduced expenditure of time and money, can be used to plan new systems or to expand existing systems in a currently existing environment, whereby dynamic processes of both the real and the virtual environment can be detected and displayed.
This object is achieved by a device for displaying information, in particular augmented reality information, for at least one user, with
• at least one room detection unit for detecting a current reality and for generating corresponding room information,
• at least one environment detection unit for detecting an environment and for generating corresponding environment information which characterizes a position and/or an orientation of the device in relation to the environment,
• at least one processing unit for linking the environment information, the room information and information stored in a first storage medium, which serves to describe at least one object, into a set of image information such that mutual concealments of the current reality and of the object described by the stored information can be made recognizable by at least one playback unit.
This object is further achieved by a method for displaying information, in particular augmented reality information, for at least one user, in which
• a current reality is detected with the aid of at least one room detection unit and corresponding room information is generated,
• an environment is detected with the aid of at least one environment detection unit and corresponding environment information is generated which characterizes a position and/or an orientation of the device in relation to the environment, and
• with the help of at least one processing unit, the environment information, the room information and information stored in a first storage medium, which serves to describe at least one object, are linked into a set of image information in such a way that mutual concealments of the current reality and of the object described by the stored information are made recognizable by at least one playback unit.
In the device and the method according to the invention, an AR system known from the prior art is supplemented by a room detection unit which permits continuous detection of the current reality. The room detection unit recognizes which surfaces are in a detection area of the device and what distance these surfaces have from the device. With the help of an environment detection unit, the relative position and orientation of the device in relation to the real environment are detected. By means of a processing system, the data generated by the room detection unit and the environment detection unit are linked with a further data set describing at least one object in such a way that mutual concealments are made visible in the user's field of vision when the detected reality and the objects described by the data set are displayed.
The invention is based on the finding that, when planning new systems or expanding existing systems in a currently existing environment, questions regarding spatial aspects must be answered, such as:
- Can a new system part be installed in a certain position?
- What is the installation path?
- Can collisions occur due to moving machine parts and people?
These questions can be answered with the device or the method according to the invention with the aid of AR systems, without a complete 3-dimensional model of the really existing environment having to be generated at great expense of time and money. By means of the information describing the detected reality, concealments of the planned system, which is represented in the form of a virtual 3-dimensional model, can be calculated and shown in the augmented reality by masking the concealed areas out.
The continuous detection of the current reality located in the detection area of the device even enables the virtual arrangement and dynamics present in the real environment to be visualized within the augmented reality. This eliminates the considerable synchronization effort that such a visualization would cause with a complete 3-dimensional modeling of a non-static reality.
In an advantageous embodiment of the invention, the room detection unit comprises
• a space detection device, which is provided for detecting surface distance points of the current reality, and
• a processing unit, which is provided for calculating the room information, the room information in particular describing a 3-dimensional model of the reality.
In this way, only the information content of the current reality required for calculating any occlusion areas that may occur is recorded and modeled, as a result of which the effort is considerably reduced compared to the creation of a complete 3-dimensional model of a real geometric arrangement.
The environment detection unit advantageously has
• an environment detection device, which is provided for detecting the position and/or the orientation of the device in relation to the real environment, and
• a processing unit for calculating the environment information, which describes the position and/or the orientation of the device in relation to the real environment, for example in the form of a matrix.
In a typical application of the device according to the invention, a model of a system and/or a system part is provided as the object described by the information stored in the first storage medium. For example, in the case of a planned installation measure, a virtual model of the system to be installed or of the system part to be installed can be created in order to quickly gain a first impression of the planned system in the reality provided for it.
In an advantageous embodiment of the invention, the playback unit is designed as a head-mounted display, the objects described by the image information generated by the processing unit being superimposed directly into the field of view of the user, while the user continues to directly perceive the part of the current reality that is not covered by these objects. This type of presentation of augmented reality information is the so-called optical see-through method. In this case, the space detection device and the environment detection device are advantageously attached to the user's head so that they rest in relation to the user's eyes. The viewing angles of the space detection device and the environment detection device ideally overlap with the current field of vision of the user in such a way that the entire field of vision of the user is captured. In contrast, the processing units of the room detection unit and the environment detection unit can be implemented on a computer carried by the user.
In an alternative advantageous embodiment, the playback unit is designed in such a way that both the objects described by the image information generated by the processing unit and the part of the current reality not hidden by these objects are displayed, the device for this purpose having in particular at least one image acquisition unit, designed for example as a video camera, for capturing the current reality. This embodiment enables the presentation of the augmented reality information for multiple users. This type of presentation of augmented reality information is the so-called video see-through method. Here, the parts of the virtual objects that are described by the image information and not covered by the current reality are faded into the image captured by the video camera and displayed on one or, e.g. using a video splitter, several playback units. The playback units can be head-mounted displays or ordinary monitors, which can in particular also be positioned at locations remote from the captured current reality. In this type of embodiment, the space detection device, the environment detection device and the image acquisition unit can be mounted on the head of a user or on another device, for example a portable computer.
In order to be able to match the viewing angle and the position of the user as exactly as possible with the position and the orientation of the space detection device when the device is implemented using the optical see-through method, a second storage medium is advantageously provided which is used to store calibration information, the calibration information describing geometric deviations between the eye of the user, the position of the playback system, the space detection device and the environment detection device. Alternatively, the second storage medium can also be implemented together with the first storage medium in the form of a common storage medium.
In order to be able to match the viewing angle and the position of the video camera as exactly as possible with the position and the orientation of the space detection device when the device is implemented using the video see-through method, the second storage medium is advantageously provided for storing calibration information, the calibration information describing geometric deviations between the position of the video camera, the position of the playback system, the space detection device and the environment detection device. Alternatively, the second storage medium can also be implemented together with the first storage medium in the form of a common storage medium.
A simple calculation of possible concealments of virtual geometric arrangements by arrangements in the current reality is advantageously realized in that the processing unit, on the basis of the information generated by the room detection unit and the environment detection unit as well as the information stored in the storage media, represents the objects described by the room information and by the information stored in the first storage medium in a common coordinate system. On the basis of this common reference system, the processing unit can calculate new image information in which those areas described by the information stored in the first storage medium are masked out that are covered, in the field of vision of the user or of the video camera, by the areas described by the room information. The processing unit for linking the environment and room information can be implemented on one computer together with the processing unit of the room detection unit and/or the processing unit of the environment detection unit.
A virtual system model can be made dynamic with the device according to the invention in that the device has at least one simulation system for generating the information stored in the first storage medium. The dynamic processes are calculated by the simulation system. The information stored in the first storage medium for describing the virtual objects is continuously adapted in accordance with the data calculated by the simulation system.
The space detection device of the room detection unit can be designed, for example, as a radar system, as an ultrasound system, as a laser system or as a stereo camera system. To minimize the required hardware expenditure, the space detection device and the environment detection device can be implemented in a common detection device, in particular in camera-based systems. In an embodiment using the video see-through method, an integration of the space detection device and/or the environment detection device into the video camera required for this method is moreover possible.
The invention is described and explained in more detail below on the basis of the exemplary embodiments illustrated in the figures. The figures show:
FIG 1 a schematic representation of a device for displaying information, in particular augmented reality information, for at least one user,
FIG 2 a schematic representation of an embodiment of the device based on the video see-through method,
FIG 3 an alternative device for displaying information that is based on the video see-through method, and
FIG 4 a representation of a typical application scenario of an embodiment of the device shown in FIG 1.
FIG 1 shows a schematic illustration of a device 1 for displaying information, in particular augmented reality information, for at least one user 2. The device shown relates to an embodiment which is based on the optical see-through method. With the help of a room detection unit 3, the user 2 detects a current reality 13 that is in his field of vision. On the head of the user 2, as part of the room detection unit 3, there is a room detection device 3a which rests relative to the eyes of the user 2. With the help of a processing unit 3b of the room detection unit 3, room information 10 is generated and forwarded to a processing unit 9.
An environment detection device 4a is also positioned on the head of the user 2, with which the position and the viewing angle of the user 2 can be detected. The environment detection device 4a rests in relation to the space detection device 3a and in relation to the eyes of the user 2. A processing device 4b generates environment information 5 from the detected position and the detected viewing angle, which is likewise passed on to the processing unit 9.
In an alternative embodiment of the environment detection unit 4, the environment detection device comprises a sensor which is positioned on the head of the user and a further detection device which is set up in such a way that it can detect the position and the orientation of the sensor, and thus also of the user, in relation to the current reality.
Information 8 which describes, for example, a virtual geometric arrangement is stored in a storage medium 7. The virtual geometric arrangement can be, for example, the three-dimensional model of a planned system or of a planned system part. The information 8, which describes the three-dimensional model of such a system, is also fed to the processing unit 9.
Another storage medium 12 contains calibration information 11, which describes the geometric deviations between the eye of the user 2, the position of a display unit 6 located on the head of the user 2, the space detection device 3a and the environment detection device 4a.
The processing unit 9 now combines the space information 10, the environment information 5, the information 8 describing the three-dimensional model of the virtual installation, and the calibration information 11 into a set of image information 14 in such a way that mutual occlusions of the current reality and the planned virtual installation are made recognizable by the display unit 6. The display unit 6, implemented in this example as a head-mounted display, shows only an image of the planned virtual installation in which the surfaces occluded by the current reality 13 in the field of vision of the user 2 are masked out. The part of the current reality 13 that is not occluded by the planned virtual installation is perceived directly by the user.
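The masking step can be pictured as a per-pixel depth test. The following is a minimal sketch, not the patent's actual implementation, with hypothetical helper names: given a depth map of the captured real surfaces and the rendered color and depth of the virtual installation, only the virtual pixels lying in front of the real geometry remain opaque, so the rest of the field of vision stays see-through.

```python
import numpy as np

def mask_occluded_virtual(virtual_rgb: np.ndarray,
                          virtual_depth: np.ndarray,
                          real_depth: np.ndarray) -> np.ndarray:
    """Return an RGBA image of the virtual model with occluded pixels hidden."""
    h, w, _ = virtual_rgb.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = virtual_rgb
    # A virtual pixel stays visible only where it lies in front of the
    # real surface; elsewhere alpha 0 lets reality show through.
    visible = virtual_depth < real_depth
    rgba[..., 3] = np.where(visible, 255, 0).astype(np.uint8)
    return rgba
```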
In this way, a mixed virtual-real environment is visualized for the user 2 without a time-consuming and cost-intensive, fully three-dimensional modeling of the real environment being necessary. Dynamic processes within the real environment can also be visualized; only a continuous calculation of the occlusion surfaces has to be carried out.
FIG 2 shows a schematic representation of an embodiment of the device 1 based on the video see-through method. Here, and in the description of the further figures, the same reference signs as in FIG 1 are used. In this embodiment, the device 1 is supplemented by an image capture unit 18, implemented in particular as a video camera. The display unit 6 now presents the complete augmented reality: in addition to the image information 14, which describes a model of the virtual installation with the surfaces occluded by the current reality masked out, the part of the current reality 13 captured by the video camera 18 that is not occluded by the virtual installation is also shown by the display unit 6. For this purpose, the image of the virtual installation, with the surfaces occluded by the current reality 13 masked out, is superimposed on the image captured by the video camera 18. In this type of embodiment, a conversion unit 15 with a mixing function is used to generate a corresponding signal that can be displayed by the display unit 6. This unit can be implemented in software or in hardware, for example as a video card with the appropriate functionality.
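The mixing function of the conversion unit can be read as a straightforward alpha composite. A sketch under that assumption, with illustrative names rather than the unit's actual signal path, combines the occlusion-masked virtual image from the previous sketch with the camera frame:

```python
import numpy as np

def mix_video_see_through(camera_rgb: np.ndarray,
                          virtual_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite the occlusion-masked virtual image over the camera frame."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    # Where alpha is 0 (occluded or no virtual content), the video
    # frame shows through; where alpha is 255, the virtual model wins.
    mixed = (alpha * virtual_rgba[..., :3].astype(np.float32)
             + (1.0 - alpha) * camera_rgb.astype(np.float32))
    return mixed.astype(np.uint8)
```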
In the illustrated embodiment, the space detection device 3a, the environment detection device 4a, and the image capture unit 18 are positioned on the head of the user 2. The display unit 6 used to visualize the augmented reality is also attached to the head of the user 2, for example as a head-mounted display.
FIG 3 shows an alternative device for displaying information, based on the video see-through method. In this embodiment of the device 1, the space detection device 3a, the environment detection device 4a, the image capture unit 18, and the display unit 6 are mounted on a portable computer. This embodiment enables several users to view the augmented reality. With the aid of a video splitter, the augmented reality can moreover be presented on several display units.
FIG 4 shows a typical application scenario of an embodiment of the device shown in FIG 1. In the field of vision of a user 2 there is a current reality 13, which can be a conveyor belt, for example. The space detection device 3a of the space detection unit 3 captures the part of the conveyor belt 13 within the viewing angle of the user 2. The processing unit 3b of the space detection unit 3 models the surfaces of the conveyor belt 13 located in the field of vision of the user 2 into a three-dimensional surface model 10b.
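How the surface model is derived from the captured range data is not spelled out; one plausible first step, assuming a depth-image sensor with pinhole intrinsics (the parameters fx, fy, cx, cy are illustrative, not from the patent), is to back-project each pixel into a 3D point cloud from which the surface model 10b can then be meshed:

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map into an Nx3 point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```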
With the aid of the environment detection unit 4, which is, for example, a commercially available tracking system, the position and the viewing angle of the user 2 relative to the current reality are detected. From this, the processing unit 4b of the environment detection unit 4 generates the information 5, which is represented in the form of a matrix 5b.
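The layout of matrix 5b is not specified; a common convention, assumed here purely for illustration, is a homogeneous 4x4 transform packing the tracked orientation and position:

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 homogeneous pose."""
    m = np.eye(4)
    m[:3, :3] = rotation   # viewing direction of the user
    m[:3, 3] = translation # position of the user
    return m
```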
A simulation system 16 continuously generates a data set 8 describing the 3D model 8b of a virtual geometric arrangement, in this example a robot. In this way, the virtual robot is animated within the augmented reality. The corresponding data set 8 is stored in a first storage medium 7.
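This animation amounts to the data set 8 being refreshed every cycle. A toy stand-in for the simulation system, with purely illustrative motion rather than the simulated robot kinematics, might look like:

```python
import numpy as np

def simulate_robot_frame(base_vertices: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for the simulation system: re-pose the model for time t."""
    c, s = np.cos(t), np.sin(t)
    # Rotate the model about its vertical (z) axis as a placeholder motion;
    # a real simulation would emit the full articulated robot pose instead.
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return base_vertices @ rot.T
```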
The previously described calibration information 11 is stored in a second storage medium 12 in the form of a matrix 11b or of several matrices.
Using the calibration information 11 and the environment information 5, the processing unit 9 now represents the three-dimensional surface model 10b of the real arrangement 13 (here the conveyor belt) and the three-dimensional model 8b of the virtual arrangement (here a virtual robot) in a common coordinate system. Within this coordinate system, the processing unit 9 calculates the occlusions of the virtual robot caused by the conveyor belt. As a result, the processing unit 9 generates a new data set 14, which in turn describes a virtual model 14b of the robot in which the surfaces occluded by the conveyor belt are masked out.
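Bringing both models into one coordinate system can be sketched as chaining the tracked pose (matrix 5b) with the calibration transform (matrix 11b) and applying the result to each model's vertices. The names follow the figure's reference signs, but the composition order is an assumption, not taken from the patent:

```python
import numpy as np

def to_common_frame(vertices: np.ndarray,
                    pose_5b: np.ndarray,
                    calibration_11b: np.ndarray) -> np.ndarray:
    """Transform Nx3 model vertices into the shared display coordinate frame."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # N x 4
    # Chain the tracked world-to-device pose with the device-to-display
    # calibration, then apply the result to row vectors.
    transform = calibration_11b @ pose_5b
    return (homo @ transform.T)[:, :3]
```

Once both the surface model 10b and the robot model 8b are expressed in this frame, the occlusion calculation reduces to the per-pixel depth test sketched earlier.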
The model 14b of the virtual robot, in which the occlusions caused by the conveyor belt are masked out, is converted by means of a video card 15 into a signal that can be displayed by the display unit 6.
By simultaneously perceiving the model 14b of the virtual robot presented by the display unit 6 and the real conveyor belt, the user 2 sees a mixed virtual-real image 17 in which the masking of the surfaces calculated by the processing unit 9 conveys the desired three-dimensional impression. The need for a time-consuming and costly three-dimensional modeling of the conveyor belt is eliminated with the device and method according to the invention.
In summary, the invention relates to a device and a method for displaying virtual and real environment information for one or more users, wherein virtual arrangements and real arrangements are displayed in such a way that occlusions of the virtual arrangements by real arrangements are made recognizable. With the aid of an environment detection unit 4, the relative position and orientation of the device in the real environment are detected. In addition, a space detection unit 3 continuously captures the reality and converts it into a three-dimensional surface model. A processing system 9 transfers the three-dimensional surface model of the real arrangement and the three-dimensional model of the virtual arrangement into a common coordinate system and calculates any surfaces of the virtual arrangement occluded by the real arrangement.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102004016331A DE102004016331B4 (en) | 2004-04-02 | 2004-04-02 | Apparatus and method for concurrently displaying virtual and real environmental information |
| PCT/EP2005/051195 WO2005096638A1 (en) | 2004-04-02 | 2005-03-16 | Device and method for simultaneously representing virtual and real environment information |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP1730970A1 true EP1730970A1 (en) | 2006-12-13 |
Family
ID=34963623
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP05731750A Withdrawn EP1730970A1 (en) | 2004-04-02 | 2005-03-16 | Device and method for simultaneously representing virtual and real environment information |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US8345066B2 (en) |
| EP (1) | EP1730970A1 (en) |
| DE (1) | DE102004016331B4 (en) |
| WO (1) | WO2005096638A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9781697B2 (en) | 2014-06-20 | 2017-10-03 | Samsung Electronics Co., Ltd. | Localization using converged platforms |
Families Citing this family (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102005060980B4 (en) * | 2005-12-20 | 2012-10-25 | Metaio Gmbh | Method and system for determining a collision-free three-dimensional space volume along a movement path with respect to a real environment |
| US8094090B2 (en) * | 2007-10-19 | 2012-01-10 | Southwest Research Institute | Real-time self-visualization system |
| US8606657B2 (en) * | 2009-01-21 | 2013-12-10 | Edgenet, Inc. | Augmented reality method and system for designing environments and buying/selling goods |
| US20170028557A1 (en) * | 2015-07-28 | 2017-02-02 | Comprehensive Engineering Solutions, Inc. | Robotic navigation system and method |
| US20130137076A1 (en) * | 2011-11-30 | 2013-05-30 | Kathryn Stone Perez | Head-mounted display based education and instruction |
| US10055512B2 (en) * | 2012-07-16 | 2018-08-21 | Omc2 Llc | System and method for CNC machines and software |
| FR3000242A1 (en) | 2012-12-21 | 2014-06-27 | France Telecom | METHOD FOR MANAGING A GEOGRAPHIC INFORMATION SYSTEM SUITABLE FOR USE WITH AT LEAST ONE POINTING DEVICE, WITH CREATION OF ASSOCIATIONS BETWEEN DIGITAL OBJECTS |
| FR3000241A1 (en) * | 2012-12-21 | 2014-06-27 | France Telecom | METHOD FOR MANAGING A GEOGRAPHIC INFORMATION SYSTEM ADAPTED TO BE USED WITH AT LEAST ONE POINTING DEVICE, WITH THE CREATION OF PURELY VIRTUAL DIGITAL OBJECTS. |
| US20140375684A1 (en) * | 2013-02-17 | 2014-12-25 | Cherif Atia Algreatly | Augmented Reality Technology |
| WO2015034535A1 (en) | 2013-09-09 | 2015-03-12 | Empire Technology Development, Llc | Augmented reality alteration detector |
| US10229523B2 (en) | 2013-09-09 | 2019-03-12 | Empire Technology Development Llc | Augmented reality alteration detector |
| US9677840B2 (en) | 2014-03-14 | 2017-06-13 | Lineweight Llc | Augmented reality simulator |
| CN204048546U (en) * | 2014-05-02 | 2014-12-31 | 加埃塔诺·朱塞佩·克赛拉 | Hair extensions, microrings and kits comprising the hair extensions |
| DE102017207894A1 (en) * | 2017-05-10 | 2018-11-15 | Krones Aktiengesellschaft | Method and computer system for planning a plant in the field of the beverage processing industry |
| US10816807B2 (en) * | 2017-11-01 | 2020-10-27 | Vrgineers, Inc. | Interactive augmented or virtual reality devices |
| US11501224B2 (en) | 2018-01-24 | 2022-11-15 | Andersen Corporation | Project management system with client interaction |
| US10540821B2 (en) * | 2018-03-09 | 2020-01-21 | Staples, Inc. | Dynamic item placement using 3-dimensional optimization of space |
| CN109165329A (en) * | 2018-07-09 | 2019-01-08 | 中兵勘察设计研究院有限公司 | A kind of the underground pipe network intelligence control technology and system of fusion augmented reality and Internet of Things |
| DE102020201375B3 (en) * | 2020-02-05 | 2021-06-24 | Magna Steyr Fahrzeugtechnik Ag & Co Kg | Method for checking a safety area of a robot |
| DE102021131060B3 (en) | 2021-11-26 | 2022-07-28 | Sick Ag | System and method with a system |
| DE102022100840A1 (en) * | 2022-01-14 | 2023-07-20 | Sick Ag | AUGMENTED REALITY SYSTEM FOR TEACHING A USER OF AN APPLICATION |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5428724A (en) * | 1992-04-29 | 1995-06-27 | Canon Information Systems | Method and apparatus for providing transparency in an object based rasterized image |
| US6151009A (en) * | 1996-08-21 | 2000-11-21 | Carnegie Mellon University | Method and apparatus for merging real and synthetic images |
| US5986674A (en) * | 1996-10-31 | 1999-11-16 | Namco. Ltd. | Three-dimensional game apparatus and information storage medium |
| US5923333A (en) * | 1997-01-06 | 1999-07-13 | Hewlett Packard Company | Fast alpha transparency rendering method |
| US6166744A (en) * | 1997-11-26 | 2000-12-26 | Pathfinder Systems, Inc. | System for combining virtual images with real-world scenes |
| US6369830B1 (en) * | 1999-05-10 | 2002-04-09 | Apple Computer, Inc. | Rendering translucent layers in a display system |
| JP2000350865A (en) * | 1999-06-11 | 2000-12-19 | Mr System Kenkyusho:Kk | Mixed reality space game apparatus, image processing method thereof, and program storage medium |
| US6335765B1 (en) * | 1999-11-08 | 2002-01-01 | Weather Central, Inc. | Virtual presentation system and method |
| DE10127396A1 (en) * | 2000-06-13 | 2001-12-20 | Volkswagen Ag | Method for utilization of old motor vehicles using a sorting plant for removal of operating fluids and dismantling of the vehicle into components parts for sorting uses augmented reality (AR) aids to speed and improve sorting |
| JP2002157607A (en) * | 2000-11-17 | 2002-05-31 | Canon Inc | Image generation system, image generation method, and storage medium |
| JP3406965B2 (en) * | 2000-11-24 | 2003-05-19 | キヤノン株式会社 | Mixed reality presentation device and control method thereof |
| US20020133264A1 (en) * | 2001-01-26 | 2002-09-19 | New Jersey Institute Of Technology | Virtual reality system for creation of design models and generation of numerically controlled machining trajectories |
| DE10240392A1 (en) * | 2002-09-02 | 2004-03-11 | Patron, Günter | A system for determining relative spacing of virtual and real objects e.g. for planning of buildings and manufacturing equipment, requires involving an augmented reality system for environmental real object position |
| CA2556082A1 (en) * | 2004-03-12 | 2005-09-29 | Bracco Imaging S.P.A. | Accuracy evaluation of video-based augmented reality enhanced surgical navigation systems |
2004
- 2004-04-02 DE DE102004016331A patent/DE102004016331B4/en not_active Expired - Fee Related

2005
- 2005-03-16 US US11/547,503 patent/US8345066B2/en not_active Expired - Fee Related
- 2005-03-16 EP EP05731750A patent/EP1730970A1/en not_active Withdrawn
- 2005-03-16 WO PCT/EP2005/051195 patent/WO2005096638A1/en not_active Ceased
Non-Patent Citations (7)
| Title |
|---|
| EINSELE T: "Real-time self-localization in unknown indoor environment using a panorama laser range finder", INTELLIGENT ROBOTS AND SYSTEMS, 1997. IROS '97., PROCEEDINGS OF THE 1997 IEEE/RSJ INTERNATIONAL CONFERENCE ON, GRENOBLE, FRANCE, 7-11 SEPT. 1997, NEW YORK, NY, USA, IEEE, US, vol. 2, 7 September 1997 (1997-09-07), pages 697 - 702, XP010264722, ISBN: 978-0-7803-4119-7, DOI: 10.1109/IROS.1997.655087 * |
| GEIGER C ET AL: "Mobile AR4ALL", AUGMENTED REALITY, 2001. PROCEEDINGS. IEEE AND ACM INTERNATIONAL SYMPOSIUM ON, NEW YORK, NY, USA, 29-30 OCT. 2001, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC, US, 29 October 2001 (2001-10-29), pages 181 - 182, XP010568065, ISBN: 978-0-7695-1375-1, DOI: 10.1109/ISAR.2001.970532 * |
| PIEKARSKI W ET AL: "Interactive augmented reality techniques for construction at a distance of 3D geometry", IPT/EGVE 2003. SEVENTH IMMERSIVE PROJECTION TECHNOLOGY WORKSHOP. NINTH EUROGRAPHICS WORKSHOP ON VIRTUAL ENVIRONMENTS, EUROGRAPHICS ASSOC., AIRE-LA-VILLE, SWITZERLAND, 2003, pages 19 - 28, ISBN: 3-905673-00-2 * |
| REKIMOTO J ET AL: "THE WORLD THROUGH THE COMPUTER: COMPUTER AUGMENTED INTERACTION WITH REAL WORLD ENVIRONMENTS", UIST '95. 8TH ANNUAL SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY. PROCEEDINGS OF THE ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, PITTSBURGH, PA, NOV. 14 - 17, 1995; [ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY], NEW YORK, 14 November 1995 (1995-11-14), pages 29 - 36, XP000634412, ISBN: 978-0-89791-709-4, DOI: 10.1145/215585.215639 * |
| RONALD T AZUMA: "A Survey of Augmented Reality", PRESENCE, CAMBRIDGE, MA, US, 1 August 1997 (1997-08-01), pages 1 - 48, XP002254668, ISSN: 1054-7460 * |
| WAGNER D ET AL: "First steps towards handheld augmented reality", WEARABLE COMPUTERS, 2003. PROCEEDINGS. SEVENTH IEEE INTERNATIONAL SYMPOSIUM ON, 21-23 OCT. 2003, PISCATAWAY, NJ, USA, IEEE, LOS ALAMITOS, CA, USA, 21 October 2003 (2003-10-21), pages 127 - 135, XP010673786, ISBN: 978-0-7695-2034-6, DOI: 10.1109/ISWC.2003.1241402 * |
| ZHAO FENG-JI ET AL: "A mobile robot localization using ultrasonic sensors in indoor environment", ROBOT AND HUMAN COMMUNICATION, 1997. RO-MAN '97. PROCEEDINGS., 6TH IEEE INTERNATIONAL WORKSHOP ON, SENDAI, JAPAN, 29 SEPT.-1 OCT. 1997, NEW YORK, NY, USA, IEEE, US, 29 September 1997 (1997-09-29), pages 52 - 57, XP010263258, ISBN: 978-0-7803-4076-3 * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9781697B2 (en) | 2014-06-20 | 2017-10-03 | Samsung Electronics Co., Ltd. | Localization using converged platforms |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102004016331B4 (en) | 2007-07-05 |
| DE102004016331A1 (en) | 2005-11-03 |
| US8345066B2 (en) | 2013-01-01 |
| US20070202472A1 (en) | 2007-08-30 |
| WO2005096638A1 (en) | 2005-10-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| DE102004016331B4 (en) | Apparatus and method for concurrently displaying virtual and real environmental information | |
| DE102015015503B4 (en) | Robotic system having an augmented reality compatible display | |
| DE102014006732B4 (en) | Image overlay of virtual objects in a camera image | |
| EP1683063A1 (en) | System and method for carrying out and visually displaying simulations in an augmented reality | |
| DE102005021735B4 (en) | Video surveillance system | |
| DE102019105630B4 (en) | Display control device, vehicle surroundings display system and computer program | |
| EP1701233B1 (en) | Generation of virtual worlds based on a real environment | |
| DE102018113336A1 (en) | A method of using a machine to set an augmented reality display environment | |
| WO2020126240A1 (en) | Method for operating an automation technology field device in an augmented-reality/mixed-reality environment | |
| DE102011112617A1 (en) | Cooperative 3D workplace | |
| DE102004046144A1 (en) | Augmented reality system used for planning a production plant layout has camera system to enter units into a central controller where they are combined and then stored in data base | |
| DE102018209377A1 (en) | A method of presenting AR / VR content on a mobile terminal and mobile terminal presenting AR / VR content | |
| EP2831839B1 (en) | Method for automatically operating a monitoring system | |
| DE102004016329A1 (en) | System and method for performing and visualizing simulations in an augmented reality | |
| DE102018118422A1 (en) | METHOD AND SYSTEM FOR PRESENTING DATA FROM A VIDEO CAMERA | |
| DE112019003579T5 (en) | INFORMATION PROCESSING DEVICE, PROGRAM AND INFORMATION PROCESSING METHOD | |
| DE102006004731A1 (en) | Camera position and/or orientation determining method for virtual or augmented reality system, involves detecting real-image object, and determining camera position and/or orientation based on reference image objects and real-image object | |
| DE102005014979B4 (en) | Method and arrangement for planning production facilities | |
| WO2013178358A1 (en) | Method for spatially visualising virtual objects | |
| DE102018106731A1 (en) | Military device and method for operating a military device | |
| EP4172729A1 (en) | Method for displaying a virtual object | |
| DE102007056835A1 (en) | Image processing module for estimating an object position of a surveillance object, method for determining an object position of a surveillance object and computer program | |
| WO2023232819A1 (en) | Generating holograms for display in an augmented reality environment | |
| WO2025031546A1 (en) | Method for the creation of a representation of surroundings representing vehicle surroundings, control device, driver assistance system, and vehicle | |
| WO2023078514A1 (en) | Method and system for ascertaining the position and alignment of a virtual camera of a surroundings monitoring system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | 17P | Request for examination filed | Effective date: 20060726 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): DE FR GB IT |
| | DAX | Request for extension of the european patent (deleted) | |
| | RBV | Designated contracting states (corrected) | Designated state(s): DE FR GB IT |
| | 17Q | First examination report despatched | Effective date: 20090626 |
| | RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: SIEMENS AKTIENGESELLSCHAFT |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| | 18D | Application deemed to be withdrawn | Effective date: 20130319 |