GB2513865A - A method for interacting with an augmented reality scene - Google Patents

A method for interacting with an augmented reality scene

Info

Publication number
GB2513865A
Authority
GB
United Kingdom
Prior art keywords
data
augmented
augmented reality
objects
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1308177.3A
Other versions
GB201308177D0 (en)
Inventor
Ying Wei Peter Zhuo
Sauman Mahata
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PRESENT Pte Ltd
Original Assignee
PRESENT Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PRESENT Pte Ltd filed Critical PRESENT Pte Ltd
Priority to GB1308177.3A
Publication of GB201308177D0
Publication of GB2513865A

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/904 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality scene is provided having one or more objects, each object in the augmented reality scene having data augmented to the object, and one or more sensors are used to detect a request from a user to search at least one of the augmented data, which may be of one or more categories such as picture, video or multimedia. A sequence of the augmented data to be searched is further provided, and searching is performed along said sequence of the augmented data based on a command signal sensed at the one or more sensors. The searched piece may then be displayed based on a command signal received from the sensors, which may be touch sensors. Searching along the sequence of augmented data may be based on rotational touch commands. The one or more objects may be filtered based on proximity to the user, where the filtering may be based on a multi-touch command such as a pinch or spread zoom command.

Description

A Method For Interacting With An Augmented Reality Scene
TECHNICAL FIELD
The present disclosure relates broadly to a method for interacting with an augmented reality scene, to a device for interaction with an augmented reality system and to an augmented reality system.
BACKGROUND
Augmented Reality (AR) typically relates to usage of real world data together with computer generated data. Typically, AR is used to graphically superimpose (or augment) digitally processed information using video imagery onto real world data such as a current scene viewed through an electronic device such as a video viewer.
Typically, the computer generated data may be 3D images or statistical information relating to the real world data. For example, a building may be identified as an object by an AR system and statistical information such as construction date of the building generated by the AR system may be superimposed (or augmented) onto the object when viewed via a screen as a window e.g. through a screen of a video viewer.
Using current technologies, most AR systems are typically vision-based. Vision-based systems utilise image recognition to identify real world objects for data to be augmented onto those objects. The focus of typical AR systems is primarily marketing; for example, AR systems typically provide 3D images such that real world objects appear "magical". Such uses of AR may typically limit transmission of useful information to a user.
Furthermore, AR systems typically provide a point-and-show functionality such that augmented data of an object, typically statistical information, appears after a user has specifically requested the data. This may be a one-way query-and-reply process to the user. Thus, AR systems may have limited ability in combining processed data and real world data for enhancing a user experience.
In addition, in a scenario dealing with multiple real world objects in a current scene, the augmented statistical information may result in the screen being populated with multiple pieces of information, each relating to a different real world object. This may reduce enjoyment of a user.
Hence, in view of the above, there exists a need for a method for interacting with an augmented reality scene, a device for interaction with an augmented reality system and an augmented reality system that seek to address at least one of the above problems.
SUMMARY
In accordance with an aspect of the present invention, there is provided a method for interacting with an augmented reality scene, the method comprising providing the augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; using one or more sensors to detect a request from a user to search at least one of the augmented data; providing a sequence of the augmented data to be searched; and searching along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
The method may further comprise displaying a searched piece of the augmented data based on the command signal received from the one or more sensors.
The method may further comprise providing touch sensors as said one or more sensors.
The searching along said sequence of the augmented data may be based on a rotational touch command sensed at the touch sensors.
The method may further comprise filtering the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
The filtering of the one or more objects from the augmented reality scene may be based on a multi-touch command sensed at the touch sensors.
The method may further comprise providing picture data, video/multimedia data, or both, as the data augmented to each object.
The method may further comprise providing one or more categories of the augmented data to be searched, each category having a searchable sequence.
The sequence may be a chronological sequence.
The providing the augmented reality scene having one or more objects may be based on location information of the one or more objects.
The method may further comprise obtaining the data for augmentation to each object from an augmented reality server.
The method may further comprise obtaining the data for augmentation to each object from a storage medium of a device for interaction with an augmented reality system.
In accordance with another aspect of the present invention, there is provided a device for interaction with an augmented reality system, the device comprising a screen capable of displaying an augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; one or more sensors arranged to detect a request from a user to search at least one of the augmented data; a processor module configured to determine a sequence of the augmented data to be searched; and to search along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
The processor module may be further configured to instruct a display of a searched piece of the augmented data based on the command signal received from the one or more sensors.
Said one or more sensors may comprise touch sensors.
The processor module may be configured to search along said sequence of the augmented data based on a rotational touch command sensed at the touch sensors.
The processor module may be further configured to filter the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
The processor module may be configured to filter the one or more objects from the augmented reality scene based on a multi-touch command sensed at the touch sensors.
The processor module may be further configured to provide picture data, video/multimedia data, or both, as the data augmented to each object.
The processor module may be further configured to determine one or more categories of the augmented data to be searched, each category having a searchable sequence.
The sequence may be a chronological sequence.
The processor module may be further configured to instruct the display of the augmented reality scene having one or more objects based on location information of the one or more objects.
The device may further comprise a communication module capable of obtaining the data for augmentation to each object from an augmented reality server.
The device may further comprise a communication module capable of obtaining the data for augmentation to each object from another device.
The device may further comprise a storage medium, wherein the data for augmentation to each object is obtained from the storage medium.
In accordance with another aspect of the present invention, there is provided a computer readable storage medium having stored thereon instructions for instructing a processor of a device for interaction with an augmented reality system to execute a method for interacting with an augmented reality scene, the method comprising providing the augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; using one or more sensors to detect a request from a user to search at least one of the augmented data; providing a sequence of the augmented data to be searched; and searching along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
The searching along said sequence of the augmented data may be based on a touch command sensed at the one or more sensors.
The method may further comprise filtering the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
The filtering the one or more objects from the augmented reality scene may be based on a multi-touch command sensed at the one or more sensors.
The method may further comprise providing picture data, video/multimedia data, or both, as the data augmented to each object.
The method may further comprise providing one or more categories of the augmented data to be searched, each category having a searchable sequence.
The sequence may be a chronological sequence.
In accordance with another aspect of the present invention, there is provided an augmented reality system, the system comprising the device described above.
The augmented reality system may further comprise an augmented reality server networked to the system, the server configured to implement a social network environment that is capable of allowing different users to share the augmented data.
BRIEF DESCRIPTION OF THE DRAWINGS
Example embodiments of the invention will be better understood and readily apparent to one of ordinary skill in the art from the following written description, by way of example only, and in conjunction with the drawings, in which:
Figure 1 is a schematic diagram illustrating an augmented reality system in an example embodiment.
Figures 2(a) to (f) are schematic drawings illustrating a device interacting with an augmented reality system in an example embodiment.
Figures 3(a) to (f) are schematic drawings illustrating a device interacting with an augmented reality system in another example embodiment.
Figure 4 is a schematic flowchart illustrating an exemplary augmented reality system in an example embodiment.
Figure 5 is a schematic flowchart illustrating an exemplary provision of data relating to an object in an example embodiment.
Figure 6 is a schematic flowchart illustrating an exemplary provision of an augmented scene comprising an object in an example embodiment.
Figure 7 is a schematic flowchart illustrating a method for interacting with an augmented reality scene in an example embodiment.
Figure 8 is a schematic illustration of a communication device suitable for an example embodiment.
Figure 9 is a schematic illustration of a general purpose computer system suitable for an example embodiment.
DETAILED DESCRIPTION
In example embodiments described herein, there can be provided an augmented reality system that can provide a search process within a piece of augmented data based on a detected user request. In addition, the system may provide a filtering process within multiple pieces of augmented data based on a detected user request.
Thus, a user can interact with an augmented reality scene e.g. within the augmented reality system. The user's requests and commands can be detected or sensed by sensors, for example but not limited to, touch sensors. The user can request to search within augmented data via the sensors. The augmented data can be searched along a sequence (e.g. an ordered sequence of the data), based on the user's commands sensed at the sensors. For example, the augmented data can be picture images arranged in a chronological sequence e.g. a timeline.
The real world scene or real world data viewed can be real-time data. The augmented reality scene can thus be real-time, and the searching and/or filtering process can be implemented real-time over the augmented reality scene.
The terms "coupled" or "connected" as used in this description are intended to cover both directly connected or connected through one or more intermediate means, unless otherwise stated.
The description herein may be, in certain portions, explicitly or implicitly described as algorithms and/or functional operations that operate on data within a computer memory or an electronic circuit. These algorithmic descriptions and/or functional operations are usually used by those skilled in the information/data processing arts for efficient description. An algorithm generally relates to a self-consistent sequence of steps leading to a desired result. The algorithmic steps can include physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transmitted, transferred, combined, compared, and otherwise manipulated.
Further, unless specifically stated otherwise, and as would ordinarily be apparent from the following, a person skilled in the art will appreciate that throughout the present specification, discussions utilizing terms such as "scanning", "calculating", "determining", "replacing", "generating", "initializing", "outputting", and the like, refer to the actions and processes of an instructing processor/computer system, or similar electronic circuit/device/component, that manipulates/processes and transforms data represented as physical quantities within the described system into other data similarly represented as physical quantities within the system or other information storage, transmission or display devices etc. The description also discloses relevant device/apparatus for performing the steps of the described methods. Such apparatus may be specifically constructed for the purposes of the methods, or may comprise a general purpose computer/processor or other device selectively activated or reconfigured by a computer program stored in a storage member.
The algorithms and displays described herein are not inherently related to any particular computer or other apparatus. It is understood that general purpose devices/machines may be used in accordance with the teachings herein. Alternatively, the construction of a specialized device/apparatus to perform the method steps may be desired.
In addition, it is submitted that the description also implicitly covers a computer program, in that it would be clear that the steps of the methods described herein may be put into effect by computer code. It will be appreciated that a large variety of programming languages and coding can be used to implement the teachings of the description herein.
Moreover, the computer program if applicable is not limited to any particular control flow and can use different control flows without departing from the scope of the invention.
Furthermore, one or more of the steps of the computer program if applicable may be performed in parallel and/or sequentially. Such a computer program if applicable may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a suitable reader/general purpose computer. In such instances, the computer readable storage medium is non-transitory. Such storage medium also covers all computer-readable media e.g. medium that stores data only for short periods of time and/or only in the presence of power, such as register memory, processor cache and Random Access Memory (RAM) and the like. The computer readable medium may even include a wired medium such as exemplified in the Internet system, or wireless medium such as exemplified in bluetooth technology. The computer program when loaded and executed on a suitable reader effectively results in an apparatus that can implement the steps of the described methods.
The example embodiments may also be implemented as hardware modules. A module is a functional hardware unit designed for use with other components or modules.
For example, a module may be implemented using digital or discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). A person skilled in the art will understand that the example embodiments can also be implemented as a combination of hardware and software modules.
Further, in the description herein, the word "substantially" whenever used is understood to include, but not restricted to, "entirely" or "completely" and the like. In addition, terms such as "comprising", "comprise", and the like whenever used, are intended to be non-restricting descriptive language in that they broadly include elements/components recited after such terms, in addition to other components not explicitly recited. Further, terms such as "about", "approximately" and the like whenever used, typically mean a reasonable variation, for example a variation of +/- 5% of the disclosed value, or a variance of 4% of the disclosed value, or a variance of 3% of the disclosed value, a variance of 2% of the disclosed value or a variance of 1% of the disclosed value.
Furthermore, in the description herein, certain values may be disclosed in a range.
The values showing the end points of a range are intended to illustrate a preferred range.
Whenever a range has been described, it is intended that the range covers and teaches all possible sub-ranges as well as individual numerical values within that range. That is, the end points of a range should not be interpreted as inflexible limitations. For example, a description of a range of 1% to 5% is intended to have specifically disclosed sub-ranges 1% to 2%, 1% to 3%, 1% to 4%, 2% to 3% etc., as well as individually, values within that range such as 1%, 2%, 3%, 4% and 5%. The intention of the above specific disclosure is applicable to any depth/breadth of a range.
Figure 1 is a schematic diagram illustrating an augmented reality system in an example embodiment. The system 102 comprises a server 104 in communication with devices 106, 108, 110. The communication links e.g. 112 may be via Wi-Fi, cellular networks or Long Term Evolution (LTE, 4G) networks. The devices 106, 108, 110 can provide location information to the server 104 to enable the server 104 to retrieve stored data relating to one or more objects at the location information. The stored data corresponds to the location information and further corresponds to the one or more objects.
For ease of description purposes, only one device 106 is described below. It will be appreciated that the other devices 108, 110 can function substantially similarly to the device 106.
In the example embodiment, the device 106 obtains location information relating to a first object 114 and a second object 116. The device 106 can provide a scene showing the objects 114, 116. The scene is a real world view or real world data e.g. of the objects 114, 116 when viewed via a screen of the device 106.
The device 106 transmits the location information to the server 104. The server 104 obtains the location information and determines which stored data corresponds to the location information. In the example embodiment, the location information identifies objects 114 and 116 as being related to the location information. This may, for example, be due to proximity of the objects 114, 116, e.g. about 100 m, to the location information. In the example embodiment, the location information may comprise additional information obtained from e.g. a magnetometer of the device for determining a direction of the real world view or real world data being viewed with respect to the device 106. In other embodiments, directional information may be obtained by other means and may be optional.
The server 104 retrieves stored data relating to each object 114, 116 and transmits the data to the device 106. The device 106 obtains the transmitted data and superimposes/augments/overlays the data graphically over the scene showing the objects 114, 116, to provide an augmented scene. The computation for generation of the augmented data may be performed at the device 106.
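By way of illustration only, the following Python sketch shows one way such a server-side lookup could be implemented, returning stored records for objects near the reported device location. The record fields, the coordinates and the 100 m default radius are assumptions made for this sketch rather than details taken from the embodiment.

```python
import math

# Hypothetical stored records: each object has a location and its augmentable data.
STORED_OBJECTS = [
    {"id": "object_114", "lat": 1.3521, "lon": 103.8198, "data": ["name: Object 114"]},
    {"id": "object_116", "lat": 1.3530, "lon": 103.8210, "data": ["name: Object 116"]},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres (equirectangular approximation)."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)

def objects_near(lat, lon, radius_m=100):
    """Return stored records whose objects lie within radius_m of the device location."""
    return [rec for rec in STORED_OBJECTS
            if distance_m(lat, lon, rec["lat"], rec["lon"]) <= radius_m]
```

With the illustrative data above, objects_near(1.3521, 103.8198) would return only the record for object_114 under the default 100 m radius, while objects_near(1.3521, 103.8198, radius_m=200) would also return object_116.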
In the example embodiment, the objects 114, 116 may be augmented with name or identification data shown graphically above the objects 114, 116. That is, in the example, there are two pieces of augmented data. Each augmented data is shown corresponding to the related object.
In the example embodiment, a user of the device 106 is able to select an object 114 or 116 and to search within the data corresponding to the selected object 114 or 116.
For example, the user can select the object 114 based on the name or identification data. The user is then provided with one or more categories of data corresponding to the selected object 114. The categories may be provided in the form of a pop-up graphic upon selection of the object 114. The user can then search within the one or more categories. For example, one category may be picture images and another category may be video/multimedia recordings. Some or all of the categories comprise a sequence of the data within the respective category. For example, if the data corresponding to the selected object 114 is picture data or picture images arranged according to time stamps tagged to the picture images, the user can search within the picture data along a chronological sequence e.g. timeline (by time/date stamps) to find a desired picture.
In the description, "tagged" is used and can ordinarily be understood by a skilled person. Tagging a time/date stamp to a picture image may constitute concatenating a text string containing DD/mm/YYYY information to the picture image to form picture data. Location information is also tagged, e.g. may also be concatenated, to the picture image as picture data.
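A minimal Python sketch of this tagging step is given below; the dictionary fields and the DD/mm/YYYY formatting are illustrative assumptions about how the picture data could be structured, not a definitive format.

```python
from datetime import datetime

def tag_picture(image_bytes, object_id, lat, lon, taken_at=None):
    """Form 'picture data' by tagging a picture image with a time/date stamp,
    an object identifier and location information, as described above."""
    taken_at = taken_at or datetime.now()
    return {
        "image": image_bytes,                        # the raw picture image
        "timestamp": taken_at.strftime("%d/%m/%Y"),  # DD/mm/YYYY stamp tagged to the image
        "object_id": object_id,                      # object the picture relates to
        "location": (lat, lon),                      # location information tagged to the image
    }
```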
In the example embodiment, the selection of an object e.g. 114 may be provided by a sensing of a user request. This may be via a user touch command sensed by touch sensors coupled to a touch screen of a device e.g. 106. For example, the user may tap the touch screen on object 114 to select the object 114. In an alternative embodiment, selection may be optional, i.e. the user can perform searching on one or more objects simultaneously.
The search process may be performed by a detection/sensing of a user request.
This may be via a user touch command sensed or detected by touch sensors coupled to a touch screen of a device e.g. 106. Preferably, a multi-touch command is designated for the search process. For example, the user may contact the touch screen on the selected object 114 and rotate with a continuous touch command in a clockwise direction to search for data in the sequence in a "forward" direction. Alternatively, for example, the user may contact the touch screen on the selected object 114 and rotate with a continuous touch command in an anti-clockwise direction to search for data in the sequence in a "backward" direction.
In an example, if the data corresponding to the selected object 114 is picture data or picture images arranged in a chronological sequence, according to date/time stamps tagged to the picture images, the user can search within the picture data along e.g. a timeline (by time/date stamps) to find a desired picture.
For example, a rotation with a continuous touch command in a clockwise direction searches for pictures in a direction "forward" in time, i.e. pictures that have been created with date stamps nearer, or more recent, to the present time, as compared to the timeline instance the user is performing the search process from. For example, the user may have entered the timeline in the middle (timeline instance) and a "forward" search searches for pictures e.g. 2 days from the present time. On the other hand, a rotation with a continuous touch command in an anti-clockwise direction searches for pictures in a direction "backward" in time, i.e. pictures that have been created with date stamps further, or less recent, from the present time, as compared to the timeline instance the user is performing the search process from. For example, the user may have entered the timeline in the middle (timeline instance) and a "backward" search (or "rewind") searches for pictures e.g. 2 months from the present time.
Therefore, a search capability is provided for the user to search among categories e.g. along a timeline.
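As a non-authoritative illustration, the forward/backward search just described can be sketched as stepping an index through a chronologically ordered list of picture data. The function and parameter names below are assumptions for this sketch.

```python
def step_in_timeline(pictures, current_index, clockwise):
    """Step through pictures sorted from oldest to newest: a clockwise rotation
    moves 'forward' in time (more recent pictures), an anti-clockwise rotation
    moves 'backward' in time (less recent pictures). The index is clamped so the
    search stops at either end of the sequence."""
    step = 1 if clockwise else -1
    new_index = max(0, min(current_index + step, len(pictures) - 1))
    return new_index, pictures[new_index]
```

Entering the timeline at an intermediate timeline instance simply corresponds to starting with the matching current_index.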
In the example embodiment, the user can also filter the multiple pieces of augmented data. For example, before selecting the object 114, the user can filter out the object 116 such that the augmented data corresponding to object 116 is removed, or faded away, or moved in a translational manner away from the object 114. The filtering process can allow the user to filter out objects and focus on the remaining objects of interest. The filtering process may be useful in a situation of multiple objects each having a corresponding piece of augmented data, such that the scene can be made less cluttered with information. The filter may be based on objects being closer to, or further away, in distance from the user.
The filter process may be performed by a detection of a user request. This may be via a user touch command sensed or detected by touch sensors coupled to a touch screen of a device e.g. 106. Preferably, a multi-touch command is designated for the filter process. The filtering may be based on proximity (or distance) of the user from the objects in the augmented scene.
For example, the user may contact the touch screen with two contact points e.g. with two fingertips, and move the contact points, each with a continuous touch command, away from each other. Such a multi-touch command may signify a "spread" or zoom-in command such that only objects closer in distance to the user are maintained/augmented in the augmented scene. Alternatively, for example, the user may contact the touch screen with two contact points e.g. with two fingertips, and move the contact points, each with a continuous touch command, towards each other. Such a multi-touch command may signify a "pinch" or zoom-out command such that objects further away in distance from the user are also augmented in the augmented scene.
Preferably, with a "pinch" command, not only are objects further from the user augmented in the augmented scene, the objects closer in distance to the user are also made not augmented in the augmented scene.
In an example, with a "spread" command, the user can specify that objects closer in distance to the user be augmented. For example, a multi-touch command moving the contact points, each with a continuous touch command, away from each other instructs the device 106 to augment object 114 that is closer in distance to the user. Preferably, the further object 116 is not augmented in the augmented scene. With a "pinch" command, the user can specify that objects further in distance from the user be augmented. For example, a multi-touch command moving the contact points, each with a continuous touch command, towards each other instructs the device 106 to augment object 116 that is further away in distance from the user, together with the closer object 114. Preferably, the closer object 114 is made not augmented in the augmented scene.
Therefore, a filter capability is provided for the user to filter out objects and focus on the remaining augmented objects of interest, e.g. the user can filter objects based on proximity of the objects to the user.
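One possible sketch of this proximity filter is shown below, assuming each object record carries a precomputed distance from the user; the field names and the banded min/max thresholds are assumptions rather than details of the embodiment.

```python
def filter_by_proximity(objects, max_distance_m, min_distance_m=0.0):
    """Keep only objects whose distance from the user lies inside the given band.
    Each object is assumed to be a dict carrying a precomputed 'distance_m' field."""
    return [obj for obj in objects
            if min_distance_m <= obj["distance_m"] <= max_distance_m]
```

A "spread" gesture would then call this with a smaller max_distance_m, while a "pinch" would enlarge it (and, in the variant above, could also raise min_distance_m so that closer objects drop out).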
Therefore, in the example embodiment, an augmented reality system can be provided that allows users to perform search processes/functions within augmented data using devices. The users can also perform filter processes/functions among augmented data using the devices.
In the example embodiment, data can be transmitted and stored at the server 104 to be used for generating the augmented data. The data may be created using the devices e.g. 106, 108, 110, and alternatively/additionally, using other communication devices such as a personal desktop computer 118 in communication link to the server 104.
For example, a device 106 may be used to create a picture image at a location information relating to an object 114. The picture image can be tagged with a time/date stamp e.g. at the time of creation of the picture image, to form picture data. The picture image can also be tagged with the object 114 and the final picture data is transmitted to the server 104. In another example, a personal desktop computer 118 may be used to transfer a picture image, for example, from a digital camera (not shown) to the server 104. In the transmission process, the personal desktop computer 118 may be used to tag the picture image to the object 114 as a picture data. The personal desktop computer 118 may also be used to add or edit a time/date stamp to be tagged to the picture image to form a final picture data. Optionally, if the location information is not tagged to the picture image during generation of the image, the personal desktop computer 118 can be used to input e.g. location co-ordinates as location information to be tagged to the picture image to form the picture data. Subsequently, the server 104 can retrieve the picture data for generation as augmented data corresponding to the object 114.
Figures 2(a) to (f) are schematic drawings illustrating a device interacting with an augmented reality system in an example embodiment.
In Figure 2(a), the device 202 views a scene comprising objects such as buildings or landmarks 204, 206 via a screen 208. In the example embodiment, the device 202 is a mobile computing device such as a smart phone and the screen 208 is a liquid crystal display (LCD) touch screen. The screen 208 comprises touch screen sensors coupled to the screen 208 for sensing/detecting a user's touch commands. In the example embodiment, the device 202 is in communication with an augmented reality server (not shown) via Wi-fi or 3G cellular network etc. In addition, the device 202 comprises a Global Positioning System (GPS) module (not shown) capable of obtaining location information. The location information can be transmitted to the augmented reality server (not shown).
In the example embodiment, a user of the device 202 can selectably transmit location information to the augmented reality server. This may be via activation of a software application loaded on the device 202. At the augmented reality server, the location information is used to determine stored data corresponding to the location information. The location information is also used to identify buildings 204, 206 as being related to the location information. The determined stored data is retrieved and transmitted from the server to the device 202.
In Figure 2(b), the device 202 receives the data and generates augmented data 210, 212. The augmented data 210, 212 are superimposed/augmented/overlain graphically over the scene by a display module instructed by a processor or processing module of the device 202. In the example embodiment, the augmented data 210, 212 are building names identifying the buildings 204, 206, and displayed graphically in the augmented scene 214.
In Figure 2(c), the user of the device 202 can select the building 206 using a touch command 216. The user request of selection is sensed by the touch screen sensors coupled to the screen 208. In an alternative embodiment, selection may be optional, i.e. the user can perform searching on one or more objects simultaneously.
In Figure 2(d), upon selection of the building 206, a pop-up graphic 218 appears to show the categories of data available for searching by the user. In the example embodiment, in one category, the data is picture images arranged according to individual date or time stamps tagged to the picture images, i.e. in a chronological sequence. As an example, four pictures 226, 228, 230 and 232 are shown arranged in chronological order.
In Figure 2(e), a user request for searching or scrolling backwards in time is detected by the touch screen sensors coupled to the screen 208. In the example embodiment, the user can contact the touch screen at a start point 220 (finger is not shown for ease of illustration) and rotate with a continuous touch command to a reference point 222. The touch screen sensors detect the path 224 taken from the start point 220 to the reference point 222, and determine that the touch command is in an anti-clockwise direction. The detection instructs the device 202 to search for picture data backwards in time. In the example embodiment, the search process is conducted until the user ends the search at the third picture 230 from the picture 226, in a chronological timeline. It can be provided that the selected third picture 230 is then enlarged and displayed on the device 202.
In Figure 2(f), a user request for searching or scrolling forward in time is detected by the touch screen sensors coupled to the screen 208. In the example embodiment, the user can contact the touch screen at a start point 234 (finger is not shown for ease of illustration) and rotate with a continuous touch command to a reference point 236.
The touch screen sensors detect the path 238 taken from the start point 234 to the reference point 236, and determine that the touch command is in a clockwise direction. The detection instructs the device 202 to search for picture data forward in time. In the example embodiment, the search process is conducted forward until the user ends the search at the second picture 228 from the picture 230, in a chronological timeline. It can be provided that the selected second picture 228 is then enlarged and displayed on the device 202.
In the example embodiment, the rotational touch command is calibrated according to the number of picture images arranged in the chronological sequence. For example, a one-quarter arc of a rotational touch command can correspond to searching for one out of the four picture images. A half-arc of a rotational touch command can correspond to moving by two picture images along the sequence. A full 360-degree rotational touch command can correspond to moving by four picture images along the sequence. Optionally, a virtual scroll wheel may be graphically generated for the user to rotate along for the searching function.
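The calibration just described could be sketched as follows, accumulating the angle swept by the touch path around an assumed rotation centre and converting it into a number of images to move. The sampling of the path into (x, y) points and the choice of centre are assumptions for this sketch.

```python
import math

def _angle(point, centre):
    # atan2 in screen coordinates (y increases downwards on a touch screen)
    return math.atan2(point[1] - centre[1], point[0] - centre[0])

def images_to_move(path, centre, total_images):
    """Accumulate the angle swept along a rotational touch path (a list of (x, y)
    touch samples) and convert it to a number of images, calibrated so that a full
    360-degree rotation moves through all total_images (a quarter arc moves one
    image out of four, and so on). With y increasing downwards, a positive total
    corresponds to a clockwise gesture, i.e. a 'forward' search; a negative total
    corresponds to an anti-clockwise, 'backward' search."""
    swept = 0.0
    for prev, curr in zip(path, path[1:]):
        delta = math.degrees(_angle(curr, centre) - _angle(prev, centre))
        # keep each incremental step on the short way round the circle
        delta = (delta + 180) % 360 - 180
        swept += delta
    return round(swept / 360 * total_images)
```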
Therefore, a search capability is provided for the user to search among categories, and along searchable sequences within the categories e.g. along a timeline.
Thus, the device 202 uses a processor module to determine a sequence of the augmented data to be searched and to search along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
Figures 3(a) to (f) are schematic drawings illustrating a device interacting with an augmented reality system in another example embodiment. The device 302 can function substantially identically to the device 202 as described with reference to Figures 2(a) to 2(f).
In addition to providing a user with a search capability, the device 302 can also provide a filter capability/process. In Figure 3(a), the device 302 views a scene comprising objects such as buildings or landmarks 304, 306, 308, 310 via a screen 312. In the example embodiment, the device 302 obtains and generates augmented data substantially identically to the description provided with reference to Figures 2(a) and 2(b).
The user can filter the multiple pieces of augmented data corresponding to the multiple buildings 304, 306, 308, 310. In the example embodiment, the filter is based on buildings being closer to, or further away, in proximity (or distance) from the user. The buildings further away from the user, i.e. buildings 308, 310, are shown shaded for illustration purposes.
In Figure 3(b), a user request for a "spread" or zoom-in command is detected by the touch screen sensors coupled to the screen 312. In the example embodiment, the user contacts the touch screen with two contact start points 314, 316 e.g. with two fingertips, and moves the contact points 314, 316, each with a continuous touch command, away from each other, to respective contact reference points 318, 320. The touch screen sensors detect the paths 322, 324 taken from the start points 314, 316 to the reference points 318, 320. The touch screen sensors determine that the multi-touch command is a "spread" command.
In Figure 3(c), the detection instructs the device 302 to filter away the buildings 308, 310 further in distance away from the user such that only the buildings 304, 306 closer in distance to the user are maintained/augmented in the augmented scene. It may be provided that the touch command is calibrated according to the distance from the user.
For example, a one-centimetre "spread" command can correspond to filtering out buildings more than 200 meters away from the user and only augmenting buildings within 200 meters of the user. A two-centimetre "spread" command can correspond to filtering out buildings more than 100 meters away from the user and only augmenting buildings within 100 meters of the user.
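One way to express such a calibration is sketched below; the base radius, the metres-per-centimetre step and the minimum radius are illustrative assumptions chosen to reproduce the example figures above, not values defined by the embodiment.

```python
def radius_for_gesture(separation_change_cm, base_radius_m=300.0, step_m_per_cm=100.0):
    """Map a multi-touch gesture to an augmentation radius: a 'spread' (fingers moving
    apart, positive separation change) shrinks the radius, while a 'pinch' (negative
    separation change) enlarges it again."""
    radius = base_radius_m - separation_change_cm * step_m_per_cm
    return max(radius, 50.0)  # keep some minimum radius so the scene is never empty
```

With these defaults, a 1 cm spread gives a 200 m radius and a 2 cm spread gives a 100 m radius, while a pinch enlarges the radius and brings further buildings back into the augmented scene.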
In Figure 3(d), a user request for a "pinch" or zoom-out command is detected by the touch screen sensors coupled to the screen 312. In the example embodiment, the user contacts the touch screen with two contact start points 326, 328 e.g. with two fingertips, and moves the contact points 326, 328, each with a continuous touch command, towards each other, to respective contact reference points 330, 332.
The touch screen sensors detect the paths 334, 336 taken from the start points 326, 328 to the reference points 330, 332. The touch screen sensors determine that the multi-touch command is a "pinch" command.
In Figure 3(e), the detection instructs the device 302 to zoom out and bring the buildings 308, 310 further in distance away from the user back into focus. Thus, the augmented scene is substantially similar in setting to the augmented scene of Figure 3(a). It may be provided that the touch command is calibrated according to the distance from the user. For example, a one-centimetre "pinch" command can correspond to augmenting buildings within 100 meters from the user. A two-centimetre "pinch" command can correspond to augmenting buildings within 200 meters from the user.
In an alternative example embodiment, the "pinch" command may be provided such that buildings closer in distance to the user are made not augmented in the augmented scene. In Figure 3(f), the "pinch" command may filter the buildings 304, 306 that are closer in distance to the user and only augment the buildings 308, 310 that are further in distance from the user.
Therefore, a filter capability is provided for the user to filter objects of interest and focus on the remaining augmented objects of interest, e.g. the user can filter objects based on proximity to the user. The device 302 uses a processor module configured to filter the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
In another example embodiment, the augmented reality system may be modified to add social network capabilities, e.g. a social network environment. For example, users can store personal data such as picture data or video data or any multimedia data in a chronological order or along a timeline. Users are also able to share the personal data with acquaintances or friends.
Figure 4 is a schematic flowchart 402 illustrating an exemplary augmented reality system in an example embodiment. At step 404, a first person X can create a "Moment" at a first location at a first time. For example, the first person X can upload to an augmented reality server a picture image relating to a first location such as a landmark and specify a time stamp to be tagged to the picture image. As an alternative, the first person X can obtain a picture image at a current time by using a digital camera at the first location and the time stamp is automatically tagged to the picture image. The picture data can then be transmitted to the augmented reality server for storage.
At step 406, the augmented reality server stores in one or more databases a plurality of different "Moments" belonging to the first person X and at different locations.
The storage may be based on a user identification, such as a unique identification number or mobile number, of the person X. As a default, the Moments are tagged/designated as "Personal Moments" and are only accessible by authentication based on the user identification.
At step 408, the first person X can therefore generate augmented scenes at different locations using the augmented reality server. The stored data allows the first person X to have a journal of different events, e.g. based on different picture images taken at different times/occasions, in chronological order at each location. Thus, the first person X can retrieve or search for data for subsequent viewing.
At step 410, the first person X can utilise a categorisation function to organise the "Moments". For example, the first person X can organise the "Moments" into categories such as "Friends", "Family", or even groups such as "School Mates". The first person X can also use the function to preset ranges to isolate the "Moments". For example, the "Moments" may be set to be viewable if they are generated during the past two weeks, or within a 20-meter radius from a viewing device. Such categorisation/organisation is carried out before steps 412, 414 below.
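A hedged sketch of such preset ranges is given below, filtering a list of "Moment" records by category, age and viewing radius; the record fields and the defaults (two weeks, 20 metres) mirror the examples above but are otherwise assumptions.

```python
from datetime import datetime, timedelta

def visible_moments(moments, category=None, max_age=timedelta(weeks=2), max_radius_m=20.0):
    """Apply preset ranges to a list of 'Moment' records: optionally restrict to a
    category (e.g. 'Friends'), and keep only Moments created within max_age and
    within max_radius_m of the viewing device. The 'category', 'created_at' and
    'distance_m' fields are illustrative assumptions."""
    now = datetime.now()
    kept = []
    for moment in moments:
        if category is not None and moment["category"] != category:
            continue
        if now - moment["created_at"] > max_age:
            continue
        if moment["distance_m"] > max_radius_m:
            continue
        kept.append(moment)
    return kept
```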
At step 412, the first person X can navigate or search through "Moments" along a timeline or in chronological order by using e.g. rotation touch commands (compare e.g. Figures 2(e) and 2(f)).
At step 414, the first person X can navigate or filter different "Moments" through space or based on proximity from viewing distance by using e.g. "spreading" or "pinching" touch commands (compare e.g. Figures 3(b) and 3(d)).
At step 416, the first person X can share access rights to the "Moments" with other users. The access rights may be viewable based on the user identification of the person X. The first person X may specify that the access rights are granted to specific users or to all users.
At step 418, if the access rights have been granted to all users, the "Moments" belonging to the first person X are then tagged as "Public Moments".
At step 420, as the access rights have been granted to all users, any user using the augmented reality system can view the "Moments" belonging to the first person X. For example, another user in proximity to the first location can view the picture data uploaded by first person X, e.g. step 404. As the access rights have been granted to all users, any user can perform the steps 410, 412, 414.
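The access-rights behaviour of steps 416 to 420 could be sketched as a simple visibility check; the "visibility", "owner_id" and "shared_with" fields are assumptions made for illustration.

```python
def can_view(moment, viewer_id):
    """Return True if the viewer may see this Moment: 'Public Moments' are visible
    to all users, otherwise the viewer must be the owner or must have been
    explicitly granted access rights."""
    if moment["visibility"] == "public":
        return True
    return viewer_id == moment["owner_id"] or viewer_id in moment.get("shared_with", [])
```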
Figure 5 is a schematic flowchart 500 illustrating an exemplary provision of data relating to an object in an example embodiment. At step 502, a user registers an account with an augmented reality server. Preferably, the server can implement a social network environment. A user identification is provided corresponding to the account. For example, a mobile smartphone number of the user can be registered with the server and linked to the user's account. The registration may be via the internet using a personal computer system or using a software application for interacting with the augmented reality system.
At step 504, the user obtains a picture image at a first object. For example, the user may use a camera function of a mobile smartphone to obtain a digital photograph of himself/herself at a landmark/building as the picture image. A time stamp can be automatically tagged to the picture image upon the digital photograph being taken. For example, the tagging may be performed by a processing module of the mobile smartphone upon instruction by, for example, a software application for interacting with the augmented reality system. The location information of the landmark/building can also be automatically tagged to the picture image e.g. using a location function such as a Global Positioning System (GPS) function of the mobile smartphone. Optionally, the user may be provided with other options such as adding other information to the generated data. For example, the user can use a software application for interacting with the augmented reality system to add a title to the picture image, or tag/identify other persons in the picture image etc.
At step 506, the user can transmit the final picture data to the server. For example, the user can activate a software application for interacting with the augmented reality system and transmit the picture data to the server, with the transmission corresponding to the user identification.
At step 508, the server receives the picture data and stores the picture data in a database. The picture data is linked to the user identification and corresponds to information such as, but not limited to, picture time stamp, location information etc.
Figure 6 is a schematic flowchart 600 illustrating an exemplary provision of augmented data relating to an object in an example embodiment.
At step 602, a user views a scene of a first object and requests for augmented data. For example, the user can view a real world scene having a landmark/building via a liquid crystal display (LCD) touchscreen of a mobile smartphone. The user can activate a software application for interacting with an augmented reality system and request for augmented data. As another example, at step 602, the user may activate a software application for interacting with an augmented reality system. The software application can derive location coordinates/information during activation.
At step 604, location information e.g. relating to the first object is obtained. For example, upon detecting a request for augmented data, the software application can obtain location information e.g. using a location function such as a Global Positioning System (GPS) function of the mobile smartphone.
At step 606, the location information is transmitted to an augmented reality server. For example, the location information is transmitted via the software application and a user identification e.g. the mobile smartphone number is tagged to the location information for use by the server. The server retrieves stored data relating to the first object based on the location information and based on the user identification.
At step 608, stored data is transmitted to the user from the augmented reality server.
At step 610, the transmitted data is used for generation of augmented data, and is superimposed/augmented/overlain graphically over the scene showing the first object.
For example, the mobile smartphone, upon receiving the transmitted data, processes the data at a processing module upon instruction by a software application in interaction with the augmented reality system, and the processing module instructs a display module of the smartphone to graphically display data over the first object such that the screen of the smartphone shows an augmented scene.
At step 612, the user is allowed to interact with the augmented scene of the first object. For example, the user can use rotational touch commands sensed by touch sensors coupled to the LCD screen of the smartphone to search through e.g. picture images arranged along a chronological sequence e.g. timeline. The user can also use "pinch" or "spread" touch commands sensed by touch sensors coupled to the LCD screen of the smartphone to allow/remove other objects with augmented data with respect to the augmented scene.
Figure 7 is a schematic flowchart 700 illustrating a method for interacting with an augmented reality scene in an example embodiment. At step 702, the augmented reality scene is provided having one or more objects, each object in the augmented reality scene having data augmented to the object. At step 704, one or more sensors are used to detect a request from a user to search at least one of the augmented data. At step 706, a sequence of the augmented data to be searched is provided. At step 708, a search is conducted along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
In the above example embodiments, although a server is described as being present to facilitate the augmented reality system, the example embodiments are not limited as such. That is, an augmented reality system can be provided without a server, e.g. decentralised. In such an example embodiment, the data for augmentation may be stored locally e.g. in a storage medium such as a flash memory card provided with the device for interaction with the augmented reality scene. For social networking environments, one device can allow selective access to data for augmentation by allowing access to the stored data by another device, e.g. via Bluetooth, by sending a data packet to said another device via a communication port or module, or by file transfer via cloud storage. For example, each user/device can register a personal file depository with a cloud service for storing data for augmentation. Download links may then be selectively provided to other users/devices for access to the stored data.
In the above example embodiments, the objects are generally taken to comprise buildings, landmarks etc. However, the example embodiments are not limited as such.
The objects may also comprise temporary monuments, areas of disaster etc. In addition, it will be appreciated that the search function is not limited strictly to searching. For example, the search function can also mean scrolling or browsing along a sequence. Furthermore, the searching is not limited to starting at the data at the start or end of a sequence. Rather, a sequence instance can mean that the searching is started at e.g. middle of the sequence. For example, the searching may start at the middle of a chronological sequence.
In the above example embodiments, it is described that names or identification data are shown graphically above the objects. It will be appreciated that the example embodiments are not limited as such. That is, for example, the objects may be augmented with other graphics such as 3-D images or moving images etc. In the above example embodiments, the augmented data is described to comprise picture data, video/multimedia data such as recordings etc. However, it will be appreciated that other forms of data may also be used, and in other categories.
In the above example embodiments, devices used for interaction with the augmented reality scene, server or system can be mobile or electronic devices such as mobile smartphones, tablet computing devices, interactive video viewers, video eye-wear, Wi-Fi enabled portable players etc. In the above example embodiments, location information may be obtained using a GPS module. However, it will be appreciated that the example embodiments are not limited as such. That is, the objects can be identified by vision-based technology or using object identifiers placed at the objects such as commemorative plates etc. Location information may also be determined by cellular or Wi-Fi technologies, e.g. triangulation calculation using cell base stations.
In the above example embodiments, the sensors and user commands are not limited to touch sensors and commands, and can include other forms. For example, motion detectors may be used to detect gestures. Retina eye detectors may be used to detect eye movements. Video detectors may be used to detect facial movements such as head tilting etc. Sound detectors may be used to detect voice commands.
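One way of accommodating these different modalities is to translate each raw sensor reading into the same small set of command signals before it reaches the search and filter logic. The sketch below is purely illustrative; the sensor labels, gesture names and mapping are assumptions rather than part of the described embodiments.

```python
# Illustrative mapping from heterogeneous sensor readings to a
# sensor-independent set of command signals.
from enum import Enum, auto
from typing import Optional


class Command(Enum):
    SEARCH_FORWARD = auto()
    SEARCH_BACKWARD = auto()
    FILTER_NEARER = auto()
    FILTER_FURTHER = auto()


def interpret(sensor_type: str, reading: str) -> Optional[Command]:
    """Translate a raw sensor reading into a device-independent command signal."""
    table = {
        ("touch", "rotate_clockwise"): Command.SEARCH_FORWARD,
        ("touch", "rotate_anticlockwise"): Command.SEARCH_BACKWARD,
        ("touch", "pinch"): Command.FILTER_NEARER,
        ("touch", "spread"): Command.FILTER_FURTHER,
        ("voice", "next"): Command.SEARCH_FORWARD,
        ("voice", "previous"): Command.SEARCH_BACKWARD,
        ("video", "head_tilt_right"): Command.SEARCH_FORWARD,
        ("video", "head_tilt_left"): Command.SEARCH_BACKWARD,
    }
    return table.get((sensor_type, reading))
```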
In the above example embodiments, the touch commands are not limited to those described. That is, other touch commands or multi-touch commands may be designated to perform the search and/or filter functions described.
In addition, to lessen the impact of inaccuracies in augmented reality positioning, the example embodiments can be provided such that the augmented data in proximity to the user are grouped as closely as possible to the relevant objects, so that the user can still find the objects visually and enjoy the AR experience. To lessen the impact of elevation, i.e. data created at objects at different elevation points, the example embodiments can be provided such that the augmented scene is made insensitive to elevation (e.g. tilting of the viewer or device).
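The following sketch illustrates both mitigations under stated assumptions: augmented data is snapped to the nearest object within an assumed positioning-error radius, and distances are computed on the ground plane only, so elevation differences and device tilt have no effect. The function names, the error radius and the equirectangular approximation are illustrative choices, not part of the described embodiments.

```python
# Illustrative grouping of augmented data to nearby objects, ignoring elevation.
import math

EARTH_RADIUS_M = 6_371_000.0


def ground_distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation for short distances; elevation is
    # deliberately excluded, so data captured at different heights and
    # tilting of the device do not change the result.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)


def snap_to_object(item_lat, item_lon, objects, error_radius_m=25.0):
    """Group an augmented-data item with the nearest object, provided it lies
    within the assumed AR positioning-error radius.

    `objects` is an iterable of (name, lat, lon) tuples.
    """
    best, best_d = None, float("inf")
    for obj in objects:
        d = ground_distance_m(item_lat, item_lon, obj[1], obj[2])
        if d < best_d:
            best, best_d = obj, d
    return best if best_d <= error_radius_m else None
```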
Thus, in the described example embodiments, an AR system may be provided. For users' devices, a software application may be provided for interacting with the AR system. In some embodiments, a social networking function may be provided with the software application that allows collecting and sharing moments that users create with e.g. friends, family and loved ones of the users. These moments may be events, experiences etc. captured by e.g. picture images, video recordings etc. The content may be user-generated.
In these example embodiments, navigation of the moments can be provided through space (proximity to the user) and time with e.g. multi-touch gestures. For example, navigation through space may be achieved by using "pinching" and "spreading" gestures. For example, navigation through time may be achieved by using "rotate" gestures.
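A sketch of how such gestures might drive the space and time navigation is shown below, assuming gesture deltas arrive as simple numbers (a scale factor for pinch/spread and an angle for rotation); the class name, limits and detent size are illustrative assumptions.

```python
# Illustrative gesture handling for navigation through space (proximity
# radius) and time (position in a chronological sequence).
class MomentNavigator:
    def __init__(self, radius_m: float = 500.0, time_index: int = 0):
        self.radius_m = radius_m      # proximity filter around the user
        self.time_index = time_index  # position in a chronological sequence

    def on_pinch_or_spread(self, scale: float) -> None:
        # Pinching (scale < 1) narrows the search radius, spreading
        # (scale > 1) widens it: navigation through space.
        self.radius_m = max(10.0, min(5000.0, self.radius_m * scale))

    def on_rotate(self, degrees: float, detent: float = 30.0) -> None:
        # Every `detent` degrees of rotation steps one item along the
        # chronological sequence: navigation through time.
        self.time_index += int(degrees / detent)
```

In this sketch, pinching below a scale of 1 narrows the proximity filter around the user, spreading widens it, and each full detent of rotation steps one moment forwards or backwards in the chronological sequence.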
The inventors have recognized that the navigation of time-based data in an AR environment can provide an improvement over other AR systems. For example, an AR system with such capabilities can allow keeping of e.g. a visual diary of moments in life, and can allow users to, for example, reminisce and remember the past, see the world as it evolves over time and share moments to motivate or inspire other users.
Different example embodiments can be implemented in the context of data structure, program modules, program and computer instructions executed in a communication device.
An exemplary communication device is briefly disclosed herein. One or more example embodiments may be embodied in one or more communication devices e.g. 800, such as is schematically illustrated in Figure 8.
One or more example embodiments may be implemented as software, such as a computer program being executed within a communication device 800, and instructing the communication device 800 to conduct a method of an example embodiment.
The communication device 800 comprises a processor module 802, an input module such as a touchscreen interface or a keypad 804 and an output module such as a display 806 on a touchscreen.
The processor module 802 can process information and data for the communication device 800 e.g. generation of augmented data. The processor module 802 is also capable of instructing graphical data for output at the output module e.g. display 806.
The input module 804 may, for example, be coupled to touch screen sensors in turn coupled to the display 806, for sensing touch commands on a touchscreen e.g. 806. In addition, the input module 804 is coupled to a sound input unit 832 e.g. a microphone, for detecting sounds or speech of a user. The sound input unit 832 can be used for a user to make voice calls. The sound input unit 832 can also be additionally configured to detect voice commands of a user.
A camera/video unit 830 is provided to enable a user to obtain digital picture images and/or video/multimedia recordings. The camera/video unit 830 can be placed on the front face of the communication device 800 or the back face of the communication device 800.
There may even be more than one camera/video unit provided for the communication device 800 (e.g. one on the front face and one on the back face).
The camera/video unit 830 is also configured to provide real world data or a real world scene on the display 806, for viewing by a user.
The camera/video unit 830 can also be additionally configured to detect motion such as facial gestures or eye retina motion, as user commands. Facial gestures may include e.g. tilting of a user's head etc.

A locator unit 834 is provided e.g. to obtain location information and/or directional information. The locator unit 834 can comprise a Global Positioning System (GPS) module and/or a compass module. It will be appreciated that location information can also be obtained via other means within the communication device 800. For example, location information may be obtained by using cellular networks or by using Wi-fi cells etc.

The processor module 802 is coupled to a first communication unit 808 for communication with a cellular network 810. The first communication unit 808 can include, but is not limited to, a subscriber identity module (SIM) card loading bay. The cellular network 810 can, for example, be a 3G or 4G network.
The processor module 802 is further coupled to a second communication unit 812 for connection to a network 814. For example, the second communication unit 812 can enable access to e.g. the Internet or other network systems such as Local Area Network (LAN) or Wide Area Network (WAN) or a personal network. The network 814 can comprise a server e.g. an AR server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant. Networking environments may be found in offices, enterprise-wide computer networks and home computer systems etc. The second communication unit 812 can include, but is not limited to, a wireless network card or an Ethernet network cable port. The second communication unit 812 can also be a modem/router unit and may be any type of modem/router such as a cable-type modem or a satellite-type modem.
It will be appreciated that network connections shown are exemplary and other ways of establishing a communications link between computers can be used. The existence of any of various protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP and the like, is presumed, and the communication device 800 can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Furthermore, any of various web browsers can be used to display and manipulate data on web pages.
The processor module 802 in the example includes a processor 816, a Random Access Memory (RAM) 818 and a Read Only Memory (ROM) 820. The ROM 820 can be a system memory storing basic input/output system (BIOS) information. The RAM 818 can store one or more program modules such as operating systems, application programs and program data.
The RAM 818 can also store one or more databases e.g. databases for storing picture images, picture data, video/multimedia recordings etc. The processor module 802 also includes a number of Input/Output (I/O) interfaces, for example I/O interface 822 to the display 806, and I/O interface 824 to the keypad 804.
The components of the processor module 802 typically communicate and interface/couple connectedly via an interconnected bus 826 and in a manner known to the person skilled in the relevant art. The bus 826 can be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
It will be appreciated that other devices can also be connected to the system bus 826.
For example, a universal serial bus (USB) interface can be used for coupling an accessory of the communication device, such as a card reader, to the system bus 826.
The application program is typically supplied to the user of the communication device 800 encoded on a data storage medium such as a flash memory module or memory card/stick and read utilising a corresponding memory reader-writer of a data storage device 828. The data storage medium is not limited to being portable and can include instances of being embedded in the communication device 800.
The application program is read and controlled in its execution by the processor 816.
Intermediate storage of program data may be accomplished using RAM 818. The method(s) of the example embodiments can be implemented as computer readable instructions, computer executable components, or software modules. One or more software modules may alternatively be used. These can include an executable program, a data link library, a configuration file, a database, a graphical image, a binary data file, a text data file, an object file, a source code file, or the like. When one or more processor modules execute one or more of the software modules, the software modules interact to cause one or more processor modules to perform according to the teachings herein.
The operation of the communication device 800 can be controlled by a variety of different program modules. Examples of program modules are routines, programs, objects, components, data structures, libraries, etc. that perform particular tasks or implement particular abstract data types.
The example embodiments may also be practiced with other computer system configurations, including handheld devices, multiprocessor systems/servers, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants, mobile telephones and the like.
Furthermore, the example embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wireless or wired communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Different example embodiments can be implemented in the context of data structure, program modules, program and computer instructions executed in a computer implemented environment. A general purpose computing environment is briefly disclosed herein. One or more example embodiments may be embodied in one or more computer systems, such as is schematically illustrated in Figure 9.
For example, an augmented reality server may be embodied by such a system.
In addition, one or more example embodiments may be implemented as software, such as a computer program being executed within a computer system 900, and instructing the computer system 900 to conduct a method of an example embodiment.
The computer system 900 comprises a computer unit 902, input modules such as a keyboard 904 and a pointing device 906, and a plurality of output devices such as a display 908 and a printer 910. A user can interact with the computer unit 902 using the above devices.
The pointing device can be implemented with a mouse, track ball, pen device or any similar device. One or more other input devices (not shown) such as a joystick, game pad, satellite dish, scanner, touch sensitive screen or the like can also be connected to the computer unit 902. The display 908 may include a cathode ray tube (CRT), liquid crystal display (LCD), field emission display (FED), plasma display or any other device that produces an image that is viewable by the user.
The computer unit 902 can be connected to a computer network 912 via a suitable transceiver device 914, to enable access to e.g. the Internet or other network systems such as Local Area Network (LAN) or Wide Area Network (WAN) or a personal network. The network 912 can comprise a server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant.
Networking environments may be found in offices, enterprise-wide computer networks and home computer systems etc. The transceiver device 914 can be a modem/router unit located within or external to the computer unit 902, and may be any type of modem/router such as a cable modem or a satellite modem.
It will be appreciated that network connections shown are exemplary and other ways of establishing a communications link between computers can be used. The existence of any of various protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP and the like, is presumed, and the computer unit 902 can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Furthermore, any of various web browsers can be used to display and manipulate data on web pages.
The computer unit 902 in the example comprises a processor 918, a Random Access Memory (RAM) 920 and a Read Only Memory (ROM) 922. The ROM 922 can be a system memory storing basic input/output system (BIOS) information. The RAM 920 can store one or more program modules such as operating systems, application programs and program data.
The processor 918 can facilitate instructing the storage of data, retrieval of data, referencing of data, searching of data, transmission/receipt of data etc. The RAM 920 can also store one or more databases e.g. databases for storing picture images, picture data, video/multimedia recordings, user identification, location information etc.

The computer unit 902 further comprises a number of Input/Output (I/O) interface units, for example I/O interface unit 924 to the display 908, and I/O interface unit 926 to the keyboard 904. The components of the computer unit 902 typically communicate and interface/couple connectedly via an interconnected system bus 928 and in a manner known to the person skilled in the relevant art. The bus 928 can be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
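To make the role of these databases concrete, the sketch below shows one kind of query an AR server could answer: return the moments stored near a user's reported location, grouped by category and ordered chronologically, ready to be searched along a sequence on the device. The table name, columns and bounding-box size are assumptions for illustration only, not the actual server schema.

```python
# Illustrative server-side query over a hypothetical `moments` table.
import sqlite3


def nearby_moments(db_path: str, lat: float, lon: float, box_deg: float = 0.005):
    """Return {category: [(timestamp, payload), ...]} for moments inside a
    small bounding box around the user, each list in chronological order."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """
        SELECT category, timestamp, payload
        FROM moments
        WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
        ORDER BY category, timestamp
        """,
        (lat - box_deg, lat + box_deg, lon - box_deg, lon + box_deg),
    ).fetchall()
    conn.close()
    result = {}
    for category, timestamp, payload in rows:
        result.setdefault(category, []).append((timestamp, payload))
    return result
```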
It will be appreciated that other devices can also be connected to the system bus 928.
For example, a universal serial bus (USB) interface can be used for coupling a video or digital camera to the system bus 928. An IEEE 1394 interface may be used to couple additional devices to the computer unit 902. Other manufacturer interfaces are also possible such as FireWire developed by Apple Computer and i.Link developed by Sony. Coupling of devices to the system bus 928 can also be via a parallel port, a game port, a PCI board or any other interface used to couple an input device to a computer. It will also be appreciated that, while the components are not shown in the figure, sound/audio can be recorded and reproduced with a microphone and a speaker. A sound card may be used to couple a microphone and a speaker to the system bus 928. It will be appreciated that several peripheral devices can be coupled to the system bus 928 via alternative interfaces simultaneously.
An application program can be supplied to the user of the computer system 900 being encoded/stored on a data storage medium such as a CD-ROM or flash memory carrier.
The application program can be read using a corresponding data storage medium drive of a data storage device 930. The data storage medium is not limited to being portable and can include instances of being embedded in the computer unit 902. The data storage device 930 can comprise a hard disk interface unit and/or a removable memory interface unit (both not shown in detail) respectively coupling a hard disk drive and/or a removable memory drive to the system bus 928. This can enable reading/writing of data. Examples of removable memory drives include magnetic disk drives and optical disk drives. The drives and their associated computer-readable media, such as a floppy disk, provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computer unit 902. It will be appreciated that the computer unit 902 may include several of such drives. Furthermore, the computer unit 902 may include drives for interfacing with other types of computer readable media.
The application program is read and controlled in its execution by the processor 918.
Intermediate storage of program data may be accomplished using RAM 920. The method(s) of the example embodiments can be implemented as computer readable instructions, computer executable components, or software modules. One or more software modules may alternatively be used. These can include an executable program, a data link library, a configuration file, a database, a graphical image, a binary data file, a text data file, an object file, a source code file, or the like. When one or more computer processors execute one or more of the software modules, the software modules interact to cause one or more computer systems to perform according to the teachings herein.
The operation of the computer unit 902 can be controlled by a variety of different program modules. Examples of program modules are routines, programs, objects, components, data structures, libraries, etc. that perform particular tasks or implement particular abstract data types. The example embodiments may also be practiced with other computer system configurations, including handheld devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants, mobile telephones and the like.
Furthermore, the example embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wireless or wired communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
It will be appreciated by a person skilled in the art that other variations and/or modifications may be made to the specific embodiments without departing from the scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.

Claims (34)

1. A method for interacting with an augmented reality scene, the method comprising: providing the augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; using one or more sensors to detect a request from a user to search at least one of the augmented data; providing a sequence of the augmented data to be searched; and searching along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
2. The method as claimed in claim 1, further comprising displaying a searched piece of the augmented data based on the command signal received from the one or more sensors.
3. The method as claimed in claims 1 or 2, further comprising providing touch sensors as said one or more sensors.
4. The method as claimed in claim 3, wherein the searching along said sequence of the augmented data is based on a rotational touch command sensed at the touch sensors.
5. The method as claimed in claims 3 or 4, further comprising filtering the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
6. The method as claimed in claim 5, wherein the filtering the one or more objects from the augmented reality scene is based on a multi-touch command sensed at the touch sensors.
7. The method as claimed in any one of claims 1 to 6, further comprising providing picture data, video/multimedia data, or both, as the data augmented to each object.
8. The method as claimed in any one of claims 1 to 7, further comprising providing one or more categories of the augmented data to be searched, each category having a searchable sequence.
9. The method as claimed in any one of claims 1 to 8, wherein the sequence is a chronological sequence.
10. The method as claimed in any one of claims 1 to 9, wherein the providing the augmented reality scene having one or more objects is based on location information of the one or more objects.
11. The method as claimed in any one of claims 1 to 10, further comprising obtaining the data for augmentation to each object from an augmented reality server.
12. The method as claimed in any one of claims 1 to 10, further comprising obtaining the data for augmentation to each object from a storage medium of a device for interaction with an augmented reality system.
13. A device for interaction with an augmented reality system, the device comprising: a screen capable of displaying an augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; one or more sensors arranged to detect a request from a user to search at least one of the augmented data; a processor module configured to determine a sequence of the augmented data to be searched; and to search along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
14. The device as claimed in claim 13, wherein the processor module is further configured to instruct a display of a searched piece of the augmented data based on the command signal received from the one or more sensors.
15. The device as claimed in claims 13 or 14, wherein said one or more sensors comprises touch sensors.
16. The device as claimed in claim 15, wherein the processor module is configured to search along said sequence of the augmented data based on a rotational touch command sensed at the touch sensors.
17. The device as claimed in claims 15 or 16, wherein the processor module is further configured to filter the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
18. The device as claimed in claim 17, wherein the processor module is configured to filter the one or more objects from the augmented reality scene based on a multi-touch command sensed at the touch sensors.
19. The device as claimed in any one of claims 13 to 18, wherein the processor module is further configured to provide picture data, video/multimedia data, or both, as the data augmented to each object.
20. The device as claimed in any one of claims 13 to 19, wherein the processor module is further configured to determine one or more categories of the augmented data to be searched, each category having a searchable sequence.
21. The device as claimed in any one of claims 13 to 20, wherein the sequence is a chronological sequence.
22. The device as claimed in any one of claims 13 to 21, wherein the processor module is further configured to instruct the display of the augmented reality scene having one or more objects based on location information of the one or more objects.
23. The device as claimed in any one of claims 13 to 22, further comprising a communication module capable of obtaining the data for augmentation to each object from an augmented reality server.
24. The device as claimed in any one of claims 13 to 22, further comprising a communication module capable of obtaining the data for augmentation to each object from another device.
25. The device as claimed in any one of claims 13 to 22, further comprising a storage medium, wherein the data for augmentation to each object is obtained from the storage medium.
26. A computer readable storage medium having stored thereon instructions for instructing a processor of a device for interaction with an augmented reality system to execute a method for interacting with an augmented reality scene, the method comprising: providing the augmented reality scene having one or more objects, each object in the augmented reality scene having data augmented to the object; using one or more sensors to detect a request from a user to search at least one of the augmented data; providing a sequence of the augmented data to be searched; and searching along said sequence of the augmented data based on a command signal sensed at the one or more sensors.
27. The computer readable storage medium as claimed in claim 26, wherein the searching along said sequence of the augmented data is based on a touch command sensed at the one or more sensors.
28. The computer readable storage medium as claimed in claims 26 or 27, the method further comprising filtering the one or more objects from the augmented reality scene based on a proximity from the user to said one or more objects.
29. The computer readable storage medium as claimed in claim 28, wherein the filtering the one or more objects from the augmented reality scene is based on a multi-touch command sensed at the one or more sensors.
30. The computer readable storage medium as claimed in any one of claims 26 to 29, the method further comprising providing picture data, video/multimedia data, or both, as the data augmented to each object.
31. The computer readable storage medium as claimed in any one of claims 26 to 30, the method further comprising providing one or more categories of the augmented data to be searched, each category having a searchable sequence.
32. The computer readable storage medium as claimed in any one of claims 26 to 31, wherein the sequence is a chronological sequence.
33. An augmented reality system, the system comprising the device as claimed in any one of claims 1 to 25.
34. The augmented reality system as claimed in claim 33, further comprising an augmented reality server networked to the system, the server configured to implement a social network environment that is capable of allowing different users to share the augmented data.
GB1308177.3A 2013-05-07 2013-05-07 A method for interacting with an augmented reality scene Withdrawn GB2513865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1308177.3A GB2513865A (en) 2013-05-07 2013-05-07 A method for interacting with an augmented reality scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1308177.3A GB2513865A (en) 2013-05-07 2013-05-07 A method for interacting with an augmented reality scene

Publications (2)

Publication Number Publication Date
GB201308177D0 GB201308177D0 (en) 2013-06-12
GB2513865A true GB2513865A (en) 2014-11-12

Family

ID=48627382

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1308177.3A Withdrawn GB2513865A (en) 2013-05-07 2013-05-07 A method for interacting with an augmented reality scene

Country Status (1)

Country Link
GB (1) GB2513865A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120188179A1 (en) * 2010-12-10 2012-07-26 Sony Ericsson Mobile Communications Ab Touch sensitive display
US20120223966A1 (en) * 2010-12-28 2012-09-06 Pantech Co., Ltd. Terminal to provide augmented reality
US20120231814A1 (en) * 2011-03-08 2012-09-13 Bank Of America Corporation Real-time analysis involving real estate listings
US20130016123A1 (en) * 2011-07-15 2013-01-17 Mark Skarulis Systems and methods for an augmented reality platform

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018118657A1 (en) * 2016-12-21 2018-06-28 Pcms Holdings, Inc. Systems and methods for selecting spheres of relevance for presenting augmented reality information
US10930082B2 (en) 2016-12-21 2021-02-23 Pcms Holdings, Inc. Systems and methods for selecting spheres of relevance for presenting augmented reality information
US11113895B2 (en) 2016-12-21 2021-09-07 Pcms Holdings, Inc. Systems and methods for selecting spheres of relevance for presenting augmented reality information
US10867181B2 (en) 2017-02-20 2020-12-15 Pcms Holdings, Inc. Dynamically presenting augmented reality information for reducing peak cognitive demand
WO2021052660A1 (en) 2019-09-19 2021-03-25 Robert Bosch Gmbh Method and device for processing an image recorded by a camera
WO2023049042A1 (en) * 2021-09-24 2023-03-30 Chinook Labs Llc Methods and systems for tracking contexts

Also Published As

Publication number Publication date
GB201308177D0 (en) 2013-06-12

Similar Documents

Publication Publication Date Title
CN112074797B (en) System and method for anchoring virtual objects to physical locations
US9661214B2 (en) Depth determination using camera focus
EP4191385B1 (en) Surface aware lens
CN109313812B (en) Shared experience with contextual enhancements
US10147399B1 (en) Adaptive fiducials for image match recognition and tracking
US8769437B2 (en) Method, apparatus and computer program product for displaying virtual media items in a visual media
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CA2804096C (en) Methods, apparatuses and computer program products for automatically generating suggested information layers in augmented reality
US9880640B2 (en) Multi-dimensional interface
US9992429B2 (en) Video pinning
CN116671121A (en) AR content for multi-video clip capture
US20130083055A1 (en) 3D Position Tracking for Panoramic Imagery Navigation
EP2811731B1 (en) Electronic device for editing dual image and method thereof
CN107771312A (en) Select events based on user input and current context
CN104350736A (en) Augmented reality arrangement of nearby location information
KR20160112898A (en) Method and apparatus for providing dynamic service based augmented reality
KR20140133640A (en) Method and apparatus for providing contents including augmented reality information
US9600720B1 (en) Using available data to assist in object recognition
US10074216B2 (en) Information processing to display information based on position of the real object in the image
CN106250421A (en) A kind of method shooting process and terminal
JP6617547B2 (en) Image management system, image management method, and program
GB2513865A (en) A method for interacting with an augmented reality scene
CN109791432A (en) Postponing state changes of information affecting a graphical user interface until during an inattentive condition
US10915778B2 (en) User interface framework for multi-selection and operation of non-consecutive segmented information
KR102100667B1 (en) Apparatus and method for generating an image in a portable terminal

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)