US20240070299A1 - Revealing collaborative object using countdown timer
- Publication number
- US20240070299A1 (application US 17/900,792)
- Authority
- US
- United States
- Prior art keywords
- processor
- users
- access
- collaborative object
- collaborative
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6209—Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/103—Workflow collaboration or project management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2113—Multi-level security, e.g. mandatory access control
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2137—Time limited access, e.g. to a computer or data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
Definitions
- Examples set forth in the present disclosure relate to the field of virtual reality for electronic devices, including mobile devices and wearable devices such as eyewear devices. More particularly, but not by way of limitation, the present disclosure describes a collaborative method with selective access.
- Many types of computers and electronic devices available today such as mobile devices (e.g., smartphones, tablets, and laptops), handheld devices, and wearable devices (e.g., smart glasses, digital eyewear, headwear, headgear, and head-mounted displays), include a variety of cameras, sensors, wireless transceivers, input systems, and displays.
- Graphical user interfaces allow the user to interact with displayed content, including virtual objects and graphical elements such as icons, taskbars, list boxes, menus, buttons, and selection control elements like cursors, pointers, handles, and sliders.
- VR: virtual reality
- AR: augmented reality
- XR: cross reality
- Collaborative tools are available to users of VR technology.
- the collaborative tools enable users to virtually meet in a collaborative session.
- users can communicate with one another in a virtual setting.
- FIG. 1 A is a side view (right) of an example hardware configuration of an eyewear device suitable for use in an example collaboration system;
- FIG. 1 B is a perspective, partly sectional view of a right corner of the eyewear device of FIG. 1 A depicting a right visible-light camera, and a circuit board;
- FIG. 1 C is a side view (left) of an example hardware configuration of the eyewear device of FIG. 1 A , which shows a left visible-light camera;
- FIG. 1 D is a perspective, partly sectional view of a left corner of the eyewear device of FIG. 1 C depicting the left visible-light camera, and a circuit board;
- FIGS. 2 A and 2 B are rear views of example hardware configurations of an eyewear device utilized in an example collaboration system
- FIG. 3 is a diagrammatic depiction of a three-dimensional scene, a left raw image captured by a left visible-light camera, and a right raw image captured by a right visible-light camera;
- FIG. 4 is a functional block diagram of an example collaboration system including a wearable device (e.g., an eyewear device) and a server system connected via various networks;
- FIG. 5 is a diagrammatic representation of an example hardware configuration for a mobile device suitable for use in the example system of FIG. 4 ;
- FIG. 6 is a schematic illustration of a user in an example environment for use in describing simultaneous localization and mapping
- FIG. 7 is a perspective illustration of an example collaborative object in the form of a box that may be manipulated with a hand;
- FIG. 8 is a perspective illustration of an example first hand gesture associated with an opening gesture for opening the box depicted in FIG. 7 ;
- FIG. 9 is a perspective illustration of an example second hand gesture associated with a closing gesture for closing the box depicted in FIG. 7 ;
- FIG. 10 is a flow chart listing the steps in an example collaboration method
- FIG. 11 is a flow chart listing the steps of an example selective collaboration object access method.
- FIG. 12 is a flow chart including steps of a method for use in the collaboration application.
- a collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users.
- a processor provides users with access to a collaborative object using respective physically remote devices, and associates virtual content received from the users with the collaborative object during a collaboration period.
- the processor maintains a timer including a countdown indicative of when the collaboration period ends for associating virtual content with the collaborative object.
- the processor provides the users with access to the collaborative object with associated virtual content at the end of the collaboration period.
- the processor serves a time indicator for display on the physically remote devices, the time indicator representing the countdown indicative of when the collaboration period ends.
- the time indicator may be a countdown time, or a timeline.
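By way of illustration, the timer-gated behavior described in the preceding paragraphs can be sketched in a few lines of code. The following Python sketch is not part of the disclosure; the class name CollaborativeSession and its methods are hypothetical, and a production system would persist state on the server system 498 rather than in local memory.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CollaborativeSession:
    """Minimal sketch of a timer-gated collaborative object (e.g., a virtual time capsule)."""
    host: str
    participants: set              # users granted access via their remote devices
    collaboration_seconds: float   # length of the collaboration period
    start_time: float = field(default_factory=time.time)
    contents: list = field(default_factory=list)   # (user, virtual content) pairs

    def seconds_remaining(self) -> float:
        """Countdown indicative of when the collaboration period ends (the time indicator)."""
        return max(0.0, self.start_time + self.collaboration_seconds - time.time())

    def add_content(self, user: str, content) -> bool:
        """Associate virtual content with the object only during the collaboration period."""
        if user in self.participants and self.seconds_remaining() > 0:
            self.contents.append((user, content))
            return True
        return False

    def reveal(self, user: str):
        """Provide access to the object with its associated content once the countdown expires."""
        if user in self.participants and self.seconds_remaining() == 0:
            return list(self.contents)
        return None
```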
- the session creator (i.e., the host) and other approved participants can access the contents of a session (e.g., which may be recorded using an application such as the lens cloud feature, available from Snap Inc. of Santa Monica, California).
- “Coupled” or “connected” as used herein refer to any logical, optical, physical, or electrical connection, including a link or the like by which the electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected system element.
- coupled or connected elements or devices are not necessarily directly connected to one another and may be separated by intermediate components, elements, or communication media, one or more of which may modify, manipulate, or carry the electrical signals.
- on means directly supported by an element or indirectly supported by the element through another element that is integrated into or supported by the element.
- proximal is used to describe an item or part of an item that is situated near, adjacent, or next to an object or person, or that is closer relative to other parts of the item; the part situated farther away may be described as “distal.” For example, the end of an item nearest an object may be referred to as the proximal end, and the opposite end as the distal end.
- the eyewear device may be oriented in any other direction suitable to the particular application of the eyewear device; for example, up, down, sideways, or any other orientation.
- any directional term such as front, rear, inward, outward, toward, left, right, lateral, longitudinal, up, down, upper, lower, top, bottom, side, horizontal, vertical, and diagonal are used by way of example only, and are not limiting as to the direction or orientation of any camera or inertial measurement unit as constructed or as otherwise described herein.
- Advanced AR technologies such as computer vision and object tracking, may be used to produce a perceptually enriched and immersive experience.
- Computer vision algorithms extract three-dimensional data about the physical world from the data captured in digital images or video.
- Object recognition and tracking algorithms are used to detect an object in a digital image or video, estimate its orientation or pose, and track its movement over time. Hand and finger recognition and tracking in real time is one of the most challenging and processing-intensive tasks in the field of computer vision.
- pose refers to the static position and orientation of an object at a particular instant in time.
- gesture refers to the active movement of an object, such as a hand, through a series of poses, sometimes to convey a signal or idea.
- pose and gesture are sometimes used interchangeably in the field of computer vision and augmented reality.
- the terms “pose” or “gesture” are intended to be inclusive of both poses and gestures; in other words, the use of one term does not exclude the other.
- FIG. 1 A is a side view (right) and FIG. 1 C is a side view (left) of an example hardware configuration of an eyewear device 100 that includes a touch-sensitive input device or touchpad 181 .
- the touchpad 181 may have a boundary that is subtle and not easily seen; alternatively, the boundary may be plainly visible or include a raised or otherwise tactile edge that provides feedback to the user about the location and boundary of the touchpad 181 .
- the eyewear device 100 may include a touchpad on the left side.
- the surface of the touchpad 181 is configured to detect finger touches, taps, and gestures (e.g., moving touches) for use with a GUI displayed by the eyewear device, on an image display, to allow the user to navigate through and select menu options in an intuitive manner, which enhances and simplifies the user experience.
- Detection of finger inputs on the touchpad 181 can enable several functions. For example, touching anywhere on the touchpad 181 may cause the GUI to display or highlight an item on the image display, which may be projected onto at least one of the optical assemblies 180 A, 180 B. Double tapping on the touchpad 181 may select an item or icon. Sliding or swiping a finger in a particular direction (e.g., from front to back, back to front, up to down, or down to up) may cause the items or icons to slide or scroll in a particular direction; for example, to move to a next item, icon, video, image, page, or slide. Sliding the finger in another direction may slide or scroll in the opposite direction; for example, to move to a previous item, icon, video, image, page, or slide.
- the touchpad 181 can be virtually anywhere on the eyewear device 100 .
- an identified finger gesture of a single tap on the touchpad 181 initiates selection or pressing of a graphical user interface element in the image presented on the image display of the optical assembly 180 A, 180 B.
- An adjustment to the image presented on the image display of the optical assembly 180 A, 180 B based on the identified finger gesture can be a primary action which selects or submits the graphical user interface element on the image display of the optical assembly 180 A, 180 B for further display or execution.
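A simple dispatcher illustrates how detected touchpad inputs might map onto the GUI actions just described. This is a hypothetical sketch; the gesture names and the ui_state fields are assumptions, not the device's actual event API.

```python
def handle_touchpad_event(gesture: str, ui_state: dict) -> dict:
    """Map a detected touchpad gesture onto a GUI action (illustrative only)."""
    items, index = ui_state["items"], ui_state["index"]
    if gesture == "tap":                      # highlight the current item
        ui_state["highlighted"] = items[index]
    elif gesture == "double_tap":             # select or press the highlighted item
        ui_state["selected"] = items[index]
    elif gesture == "swipe_forward":          # scroll to the next item, icon, page, or slide
        ui_state["index"] = min(index + 1, len(items) - 1)
    elif gesture == "swipe_backward":         # scroll back to the previous one
        ui_state["index"] = max(index - 1, 0)
    return ui_state
```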
- the eyewear device 100 includes a left visible-light camera 114 A and a right visible-light camera 114 B.
- the two cameras 114 A, 114 B capture image information for a scene from two separate viewpoints.
- the two captured images may be used to project a three-dimensional display onto an image display for viewing with 3D glasses.
- the eyewear device 100 includes a right optical assembly 180 B with an image display to present images, such as depth images.
- the eyewear device 100 can include multiple visible-light cameras 114 A, 114 B that form a passive type of three-dimensional camera, such as stereo camera, of which the right visible-light camera 114 B is located on a right corner 110 B and, as shown in FIGS. 1 C-D , a left visible-light camera 114 A is located on a left corner 110 A.
- Left and right visible-light cameras 114 A, 114 B are sensitive to the visible-light range wavelength.
- Each of the visible-light cameras 114 A, 114 B has a different frontward-facing field of view; the fields of view overlap to enable generation of three-dimensional depth images. For example, the left visible-light camera 114 A captures a left field of view 111 A and the right visible-light camera 114 B captures a right field of view 111 B.
- a “field of view” is the part of the scene that is visible through the camera at a particular position and orientation in space.
- the fields of view 111 A and 111 B have an overlapping field of view 304 ( FIG. 3 ).
- Objects or object features outside the field of view 111 A, 111 B when the visible-light camera captures the image are not recorded in a raw image (e.g., photograph or picture).
- the field of view describes the angle range or extent over which the image sensor of the visible-light camera 114 A, 114 B picks up electromagnetic radiation of a given scene in a captured image of the given scene.
- Field of view can be expressed as the angular size of the view cone; i.e., an angle of view.
- the angle of view can be measured horizontally, vertically, or diagonally.
- one or both visible-light cameras 114 A, 114 B has a field of view of 100° and a resolution of 480×480 pixels.
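For a pinhole-camera approximation, the angle of view follows from the sensor dimension and the focal length; the sketch below is generic and is not tied to the actual optics of cameras 114 A, 114 B.

```python
import math

def angle_of_view(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Angle of view, in degrees, along one sensor dimension (pinhole approximation)."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Measured horizontally, vertically, or diagonally by passing the corresponding
# sensor dimension; e.g., angle_of_view(4.8, 2.0) is roughly 100 degrees.
```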
- the “angle of coverage” describes the angle range that a lens of visible-light cameras 114 A, 114 B or infrared camera 410 (see FIG. 2 A ) can effectively image.
- the camera lens produces an image circle that is large enough to cover the film or sensor of the camera completely, possibly including some vignetting (e.g., a darkening of the image toward the edges when compared to the center). If the angle of coverage of the camera lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edge, and the effective angle of view will be limited to the angle of coverage.
- Examples of such visible-light cameras 114 A, 114 B include digital camera elements such as a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor and a digital VGA (video graphics array) camera capable of resolutions of 480p (e.g., 640×480 pixels), 720p, 1080p, or greater.
- Other examples include visible-light cameras 114 A, 114 B that can capture high-definition (HD) video at a high frame rate (e.g., thirty to sixty frames per second, or more) and store the recording at a resolution of 1216 by 1216 pixels (or greater).
- the eyewear device 100 may capture image sensor data from the visible-light cameras 114 A, 114 B along with geolocation data, digitized by an image processor, for storage in a memory.
- the visible-light cameras 114 A, 114 B capture respective left and right raw images in the two-dimensional space domain that comprise a matrix of pixels on a two-dimensional coordinate system that includes an X-axis for horizontal position and a Y-axis for vertical position.
- Each pixel includes a color attribute value (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); and a position attribute (e.g., an X-axis coordinate and a Y-axis coordinate).
- the image processor 412 may be coupled to the visible-light cameras 114 A, 114 B to receive and store the visual image information.
- the image processor 412 controls operation of the visible-light cameras 114 A, 114 B to act as a stereo camera simulating human binocular vision and may add a timestamp to each image.
- the timestamp on each pair of images allows display of the images together as part of a three-dimensional projection.
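The timestamp-based pairing can be pictured with a small sketch. The RawImage container and the tolerance value below are hypothetical; they simply show that left and right frames bearing matching timestamps are the ones presented together as a stereo pair.

```python
from dataclasses import dataclass

@dataclass
class RawImage:
    pixels: list        # matrix of (R, G, B) values indexed by (x, y)
    camera: str         # "left" or "right"
    timestamp_ns: int   # added by the image processor

def is_stereo_pair(left: RawImage, right: RawImage, tolerance_ns: int = 1_000_000) -> bool:
    """Frames whose timestamps agree within a tolerance are displayed together
    as part of a three-dimensional projection."""
    return abs(left.timestamp_ns - right.timestamp_ns) <= tolerance_ns
```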
- Three-dimensional projections produce an immersive, life-like experience that is desirable in a variety of contexts, including virtual reality (VR) and video gaming.
- FIG. 1 B is a perspective, cross-sectional view of a right corner 110 B of the eyewear device 100 of FIG. 1 A depicting the right visible-light camera 114 B of the camera system, and a circuit board.
- FIG. 1 C is a side view (left) of an example hardware configuration of an eyewear device 100 of FIG. 1 A , which shows a left visible-light camera 114 A of the camera system.
- FIG. 1 D is a perspective, cross-sectional view of a left corner 110 A of the eyewear device of FIG. 1 C depicting the left visible-light camera 114 A of the three-dimensional camera, and a circuit board.
- the eyewear device 100 includes the right visible-light camera 114 B and a circuit board 140 B, which may be a flexible printed circuit board (PCB).
- a right hinge 126 B connects the right corner 110 B to a right temple 125 B of the eyewear device 100 .
- components of the right visible-light camera 114 B, the flexible PCB 140 B, or other electrical connectors or contacts may be located on the right temple 125 B or the right hinge 126 B.
- Construction and placement of the left visible-light camera 114 A is substantially similar to the right visible-light camera 114 B, except the connections and coupling are on the left lateral side 170 A.
- a left hinge 126 A connects the left corner 110 A to a left temple 125 A of the eyewear device 100 .
- components of the left visible-light camera 114 A, the flexible PCB 140 A, or other electrical connectors or contacts may be located on the left temple 125 A or the left hinge 126 A.
- the left and right corners 110 A and 110 B each include a corner body 190 and a corner cap, with the corner caps omitted in the cross-sections of FIGS. 1 B and 1 D .
- various interconnected circuit boards 140 A and 140 B, such as PCBs or flexible PCBs, include controller circuits for the left and right visible-light cameras 114 A and 114 B, microphone(s), low-power wireless circuitry (e.g., for wireless short-range network communication via Bluetooth™), and high-speed wireless circuitry (e.g., for wireless local area network communication via Wi-Fi).
- the corners 110 A, 110 B may be integrated into the frame 105 on the respective lateral sides 170 A, 170 B (as illustrated) or implemented as separate components attached to the frame 105 on the respective sides 170 A, 170 B. Alternatively, the corners 110 A, 110 B may be integrated into temples 125 A, 125 B attached to the frame 105 .
- the left and right visible-light cameras 114 A and 114 B are coupled to or disposed on respective flexible PCBs 140 A and 140 B and are covered by a visible-light camera cover lens, which is aimed through opening(s) formed in the frame 105 .
- the left and right rims 107 A and 107 B of the frame 105 are connected to the left and right corners 110 A and 110 B and include the openings for the visible-light camera cover lenses.
- the frame 105 includes a front side configured to face outward and away from the eye of the user.
- the opening for the visible-light camera cover lens is formed on and through the front or outward-facing side of the frame 105 .
- the left and right visible-light cameras 114 A and 114 B each has a respective outward-facing field of view 111 A and 111 B with a line of sight or perspective that is correlated with the respective left and right eyes of the user of the eyewear device 100 .
- the visible-light camera cover lens can also be adhered to a front side or outward-facing surface of the right corner 110 B in which an opening is formed with an outward-facing angle of coverage, but in a different outwardly direction.
- the coupling can also be indirect via intervening components.
- FIGS. 2 A and 2 B are perspective views, from the rear, of example hardware configurations of the eyewear device 100 , including two different types of image displays.
- the eyewear device 100 is sized and shaped in a form configured for wearing by a user; the form of eyeglasses is shown in the example.
- the eyewear device 100 can take other forms and may incorporate other types of frameworks; for example, a headgear, a headset, or a helmet.
- eyewear device 100 includes a frame 105 including a left rim 107 A connected to a right rim 107 B via a bridge 106 adapted to be supported by a nose of the user.
- the left and right rims 107 A, 107 B include respective apertures 175 A, 175 B, which hold a respective optical element 180 A, 180 B, such as a lens and a display device.
- the term “lens” is meant to include transparent or translucent pieces of glass or plastic having curved or flat surfaces that cause light to converge or diverge or that cause little or no convergence or divergence.
- eyewear device 100 can include other arrangements, such as a single optical element (or it may not include any optical element 180 A, 180 B), depending on the application or the intended user of the eyewear device 100 .
- eyewear device 100 includes a left corner 110 A adjacent the left lateral side 170 A of the frame 105 and a right corner 110 B adjacent the right lateral side 170 B of the frame 105 .
- each optical assembly 180 A, 180 B includes an integrated image display. As shown in FIG. 2 A , each optical assembly 180 A, 180 B includes a suitable display matrix 177 , such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or any other such display. Each optical assembly 180 A, 180 B also includes an optical layer or layers 176 , which can include lenses, optical coatings, prisms, mirrors, waveguides, optical strips, and other optical components in any combination.
- the optical layers 176 A, 176 B, . . . 176 N (shown as 176 A-N in FIG. 2 A ) can include a prism of suitable size and configuration, having a first surface that receives light from the display matrix 177 and a second surface that directs the light toward the eye of the user.
- the prism of the optical layers 176 A-N extends over all or at least a portion of the respective apertures 175 A, 175 B formed in the left and right rims 107 A, 107 B to permit the user to see the second surface of the prism when the eye of the user is viewing through the corresponding left and right rims 107 A, 107 B.
- the first surface of the prism of the optical layers 176 A-N faces upwardly from the frame 105 and the display matrix 177 overlies the prism so that photons and light emitted by the display matrix 177 impinge the first surface.
- the prism is sized and shaped so that the light is refracted within the prism and is directed toward the eye of the user by the second surface of the prism of the optical layers 176 A-N.
- the second surface of the prism of the optical layers 176 A-N can be convex to direct the light toward the center of the eye.
- the prism can optionally be sized and shaped to magnify the image projected by the display matrix 177 , and the light travels through the prism so that the image viewed from the second surface is larger in one or more dimensions than the image emitted from the display matrix 177 .
- the optical layers 176 A-N may include an LCD layer that is transparent (keeping the lens open) unless and until a voltage is applied that makes the layer opaque (closing or blocking the lens).
- the image processor 412 ( FIG. 4 ) on the eyewear device 100 may execute programming to apply the voltage to the LCD layer in order to produce an active shutter system, making the eyewear device 100 suitable for viewing visual content when displayed as a three-dimensional projection. Technologies other than LCD may be used for the active shutter mode, including other types of reactive layers that are responsive to a voltage or another type of input.
- the image display device of optical assembly 180 A, 180 B includes a projection image display as shown in FIG. 2 B .
- Each optical assembly 180 A, 180 B includes a laser projector 150 , which is a three-color laser projector using a scanning mirror or galvanometer.
- an optical source such as a laser projector 150 is disposed in or on one of the temples 125 A, 125 B of the eyewear device 100 .
- Optical assembly 180 B in this example includes one or more optical strips 155 A, 155 B, . . . 155 N (shown as 155 A-N in FIG. 2 B ) which are spaced apart and across the width of the lens of each optical assembly 180 A, 180 B or across a depth of the lens between the front surface and the rear surface of the lens.
- As the photons projected by the laser projector 150 travel across the lens of each optical assembly 180 A, 180 B, the photons encounter the optical strips 155 A-N. When a particular photon encounters a particular optical strip, the photon is either redirected toward the user's eye, or it passes to the next optical strip.
- a combination of modulation of laser projector 150 , and modulation of optical strips may control specific photons or beams of light.
- a processor controls optical strips 155 A-N by initiating mechanical, acoustic, or electromagnetic signals.
- the eyewear device 100 can include other arrangements, such as a single or three optical assemblies, or each optical assembly 180 A, 180 B may have a different arrangement depending on the application or intended user of the eyewear device 100 .
- the eyewear device 100 shown in FIG. 2 B may include two projectors, a left projector (not shown) and a right projector 150 .
- the left optical assembly 180 A may include a left display matrix 177 or a left set of optical strips (not shown) which are configured to interact with light from the left projector.
- the right optical assembly 180 B may include a right display matrix (not shown) or a right set of optical strips 155 A, 155 B, . . . 155 N which are configured to interact with light from the right projector 150 .
- the eyewear device 100 includes a left display and a right display.
- FIG. 3 is a diagrammatic depiction of a three-dimensional scene 306 , a left raw image 302 A captured by a left visible-light camera 114 A, and a right raw image 302 B captured by a right visible-light camera 114 B.
- the left field of view 111 A may overlap, as shown, with the right field of view 111 B.
- the overlapping field of view 304 represents that portion of the image captured by both cameras 114 A, 114 B.
- the term ‘overlapping’ when referring to field of view means the matrix of pixels in the generated raw images overlap by thirty percent (30%) or more.
- ‘Substantially overlapping’ means the matrix of pixels in the generated raw images—or in the infrared image of scene—overlap by fifty percent (50%) or more.
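The percentage thresholds above can be checked with a simple calculation. The sketch treats each raw image as a set of world-space pixel coordinates; how that set is obtained is outside the scope of this illustration.

```python
def overlap_fraction(left_pixels: set, right_pixels: set) -> float:
    """Fraction of pixel locations common to both raw images (illustrative only)."""
    if not left_pixels or not right_pixels:
        return 0.0
    return len(left_pixels & right_pixels) / min(len(left_pixels), len(right_pixels))

# Per the definitions above: >= 0.30 counts as overlapping,
# and >= 0.50 counts as substantially overlapping.
```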
- the two raw images 302 A, 302 B may be processed to include a timestamp, which allows the images to be displayed together as part of a three-dimensional projection.
- a pair of raw red, green, and blue (RGB) images are captured of a real scene 306 at a given moment in time—a left raw image 302 A captured by the left camera 114 A and right raw image 302 B captured by the right camera 114 B.
- the pair of raw images 302 A, 302 B are processed (e.g., by the image processor 412 )
- depth images are generated.
- the generated depth images may be viewed on an optical assembly 180 A, 180 B of an eyewear device, on another display (e.g., the image display 580 on a mobile device 401 ), or on a screen.
- the generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance).
- Each vertex may include a color attribute (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); a position attribute (e.g., an X location coordinate, a Y location coordinate, and a Z location coordinate); a texture attribute; a reflectance attribute; or a combination thereof.
- the texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of color or intensities in a region of vertices of the depth image.
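The vertex attributes listed above can be grouped into a single record; the field names in this sketch are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DepthVertex:
    x: float                               # horizontal position (length)
    y: float                               # vertical position (height)
    z: float                               # depth (distance)
    color: Tuple[float, float, float]      # red, green, and blue light values
    texture: Optional[float] = None        # perceived texture in a region of vertices
    reflectance: Optional[float] = None
```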
- FIG. 4 is a functional block diagram of an example collaboration system 400 that includes a wearable device (e.g., an eyewear device 100 ), a mobile device 401 , and a server system 498 connected via various networks 495 such as the Internet.
- the server system 498 may be one or more computing devices as part of a service or network computing system, for example, that include a processor, a memory, and network communication interface to communicate over the network 495 with an eyewear device 100 and a mobile device 401 .
- the server system 498 includes a server processor 499 that may be configured to host collaboration sessions. Functionality of the eyewear device 100 or mobile device 401 described herein, such as collaboration processing and serving collaborative objects to users, can be performed by the processor 499 of the server system 498 .
- the eyewear device 100 includes one or more visible-light cameras 114 A, 114 B that capture still images, video images, or both still and video images, as described herein.
- the cameras 114 A, 114 B may have a direct memory access (DMA) to high-speed circuitry 430 and function as a stereo camera.
- the cameras 114 A, 114 B may be used to capture initial-depth images that may be rendered into three-dimensional (3D) models that are texture-mapped images of a red, green, and blue (RGB) imaged scene.
- the device 100 may also include a depth sensor 213 , which uses infrared signals to estimate the position of objects relative to the device 100 .
- the depth sensor, in some examples, includes one or more infrared emitter(s) 215 and infrared camera(s) 410 .
- the eyewear device 100 further includes two image displays of each optical assembly 180 A, 180 B (one associated with the left side 170 A and one associated with the right side 170 B).
- the eyewear device 100 also includes an image display driver 442 , an image processor 412 , low-power circuitry 420 , and high-speed circuitry 430 .
- the image displays of each optical assembly 180 A, 180 B are for presenting images, including still images, video images, or still and video images.
- the image display driver 442 is coupled to the image displays of each optical assembly 180 A, 180 B in order to control the display of images.
- the eyewear device 100 additionally includes one or more microphones (not shown) and one or more speakers 413 (e.g., one associated with the left side of the eyewear device and another associated with the right side of the eyewear device).
- the speakers 413 may be incorporated into the frame 105 , temples 125 , or corners 110 of the eyewear device 100 .
- the one or more speakers 413 are driven by an audio processor 414 and audio driver 415 under control of low-power circuitry 420 , high-speed circuitry 430 , or both.
- the speakers 413 are for presenting audio signals including, for example, a beat track.
- the audio processor 414 is coupled to the microphones and the speakers 413 in order to control the respective capture and presentation of sound.
- the components shown in FIG. 4 for the eyewear device 100 are located on one or more circuit boards, for example a printed circuit board (PCB) or flexible printed circuit (FPC), located in the frame or temples.
- the depicted components can be located in the corners, rims, hinges, or bridge of the eyewear device 100 .
- high-speed circuitry 430 includes a high-speed processor 432 , a memory 434 , and high-speed wireless circuitry 436 .
- the image display driver 442 is coupled to the high-speed circuitry 430 and operated by the high-speed processor 432 in order to drive the left and right image displays of each optical assembly 180 A, 180 B.
- High-speed processor 432 may be any processor capable of managing high-speed communications and operation of any general computing system needed for eyewear device 100 .
- High-speed processor 432 includes processing resources needed for managing high-speed data transfers on high-speed wireless connection 437 to a wireless local area network (WLAN) using high-speed wireless circuitry 436 .
- the high-speed processor 432 executes an operating system such as a LINUX operating system or other such operating system of the eyewear device 100 and the operating system is stored in memory 434 for execution. In addition to any other responsibilities, the high-speed processor 432 executes a software architecture for the eyewear device 100 that is used to manage data transfers with high-speed wireless circuitry 436 .
- high-speed wireless circuitry 436 is configured to implement Institute of Electrical and Electronic Engineers (IEEE) 802.11 communication standards, also referred to herein as Wi-Fi. In other examples, other high-speed communications standards may be implemented by high-speed wireless circuitry 436 .
- the low-power circuitry 420 includes a low-power processor 422 and low-power wireless circuitry 424 .
- the low-power wireless circuitry 424 and the high-speed wireless circuitry 436 of the eyewear device 100 can include short-range transceivers (Bluetooth™ or Bluetooth Low-Energy (BLE)) and wireless wide-area or local-area network transceivers (e.g., cellular or Wi-Fi).
- Mobile device 401 including the transceivers communicating via a low-power wireless connection 425 and the high-speed wireless connection 437 , may be implemented using details of the architecture of the eyewear device 100 , as can other elements of the network 495 .
- Memory 434 includes any storage device capable of storing various data and applications, including, among other things, camera data generated by the left and right visible-light cameras 114 A, 114 B, the infrared camera(s) 410 , the image processor 412 , and images generated for display by the image display driver 442 on the image display of each optical assembly 180 A, 180 B.
- the memory 434 is shown as integrated with high-speed circuitry 430 , the memory 434 in other examples may be an independent, standalone element of the eyewear device 100 .
- electrical routing lines may provide a connection through a chip that includes the high-speed processor 432 from the image processor 412 or low-power processor 422 to the memory 434 .
- the high-speed processor 432 may manage addressing of memory 434 such that the low-power processor 422 will boot the high-speed processor 432 any time that a read or write operation involving memory 434 is needed.
- the high-speed processor 432 of the eyewear device 100 can be coupled to the camera system (visible-light cameras 114 A, 114 B), the image display driver 442 , the user input device 491 , and the memory 434 .
- the output components of the eyewear device 100 include visual elements, such as the left and right image displays associated with each lens or optical assembly 180 A, 180 B as described in FIGS. 2 A and 2 B (e.g., a display such as a liquid crystal display (LCD), a plasma display panel (PDP), a light emitting diode (LED) display, a projector, or a waveguide).
- the eyewear device 100 may include a user-facing indicator (e.g., an LED, a loudspeaker 413 , or a vibrating actuator), or an outward-facing signal (e.g., an LED, a loudspeaker 413 ).
- the image displays of each optical assembly 180 A, 180 B are driven by the image display driver 442 .
- the output components of the eyewear device 100 further include additional indicators such as audible elements (e.g., loudspeakers 413 ), tactile components (e.g., an actuator such as a vibratory motor to generate haptic feedback), and other signal generators.
- the device 100 may include a user-facing set of indicators, and an outward-facing set of signals.
- the user-facing set of indicators are configured to be seen or otherwise sensed by the user of the device 100 .
- the device 100 may include an LED display positioned so the user can see it, one or more speakers 413 positioned to generate a sound the user can hear, or an actuator to provide haptic feedback the user can feel.
- the outward-facing set of signals are configured to be seen or otherwise sensed by an observer near the device 100 .
- the device 100 may include an LED, a loudspeaker, or an actuator that is configured and positioned to be sensed by an observer.
- the input components of the eyewear device 100 may include alphanumeric input components (e.g., a touch screen or touchpad configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric-configured elements), pointer-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a button switch, a touch screen or touchpad that senses the location, force or location and force of touches or touch gestures, or other tactile-configured elements), visual input components (e.g., cameras 114 / 420 ), and audio input components (e.g., a microphone), and the like.
- the mobile device 401 and the server system 498 may include alphanumeric, pointer-based, tactile, audio, and other input components.
- the eyewear device 100 includes a collection of motion-sensing components referred to as an inertial measurement unit 472 .
- the motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip.
- the inertial measurement unit (IMU) 472 in some example configurations includes an accelerometer, a gyroscope, and a magnetometer.
- the accelerometer senses the linear acceleration of the device 100 (including the acceleration due to gravity) relative to three orthogonal axes (x, y, z).
- the gyroscope senses the angular velocity of the device 100 about three axes of rotation (pitch, roll, yaw).
- the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw).
- the magnetometer if present, senses the heading of the device 100 relative to magnetic north.
- the position of the device 100 may be determined by location sensors, such as a GPS unit, one or more transceivers to generate relative position coordinates, altitude sensors or barometers, and other orientation sensors.
- Such positioning system coordinates can also be received over the wireless connections 425 , 437 from the mobile device 401 via the low-power wireless circuitry 424 or the high-speed wireless circuitry 436 .
- the IMU 472 may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and computes a number of useful values about the position, orientation, and motion of the device 100 .
- the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z); and integrated again to obtain the position of the device 100 (in linear coordinates, x, y, and z).
- the angular velocity data from the gyroscope can be integrated to obtain the position of the device 100 (in spherical coordinates).
- the programming for computing these useful values may be stored in memory 434 and executed by the high-speed processor 432 of the eyewear device 100 .
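A bare-bones dead-reckoning loop shows the double integration described above. It is a sketch only: a real implementation would subtract gravity, correct for sensor bias and drift, and fuse gyroscope and magnetometer data.

```python
def integrate_imu(accel_samples, dt):
    """Integrate accelerometer data once for velocity and again for position (per axis)."""
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    for sample in accel_samples:            # each sample is (ax, ay, az)
        for axis in range(3):
            velocity[axis] += sample[axis] * dt
            position[axis] += velocity[axis] * dt
    return position, velocity
```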
- the eyewear device 100 may optionally include additional peripheral sensors, such as biometric sensors, specialty sensors, or display elements integrated with eyewear device 100 .
- peripheral device elements may include any I/O components including output components, motion components, position components, or any other such elements described herein.
- the biometric sensors may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), to measure bio signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), or to identify a person (e.g., identification based on voice, retina, facial characteristics, fingerprints, or electrical bio signals such as electroencephalogram data), and the like.
- the mobile device 401 may be a smartphone, tablet, laptop computer, access point, or any other such device capable of connecting with eyewear device 100 using both a low-power wireless connection 425 and a high-speed wireless connection 437 .
- Mobile device 401 is connected to server system 498 and network 495 .
- the network 495 may include any combination of wired and wireless connections.
- the illustrated collaboration system 400 includes a computing device, such as mobile device 401 , coupled to an eyewear device 100 over a network.
- the collaboration system 400 includes a memory for storing instructions and a processor for executing the instructions. Execution of the instructions of the collaboration system 400 by the processor 432 configures the eyewear device 100 to cooperate with the mobile device 401 .
- the collaboration system 400 may utilize the memory 434 of the eyewear device 100 or the memory elements 540 A, 540 B, 540 C of the mobile device 401 ( FIG. 5 ). Also, the collaboration system 400 may utilize the processor elements 432 , 422 of the eyewear device 100 or the central processing unit (CPU) 540 of the mobile device 401 ( FIG. 5 ).
- collaboration system 400 may further utilize the memory and processor elements of the server system 498 .
- the memory and processing functions of the collaboration system 400 can be shared or distributed across the processors and memories of the eyewear device 100 , the mobile device 401 , and the server system 498 to implement functionality described herein.
- the memory 434 includes or is coupled to a hand gesture library 480 , as described herein.
- the process of detecting a hand shape or gesture involves comparing the pixel-level data in one or more frames of video data captured by the eyewear device 100 or mobile device 401 to the hand shapes and gestures stored in the library 480 until a good match is found.
- a gesture may be a static gesture that can be detected in one or a few frames of data, or a dynamic gesture that is detected over the course of two or more frames of data.
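Matching against the hand gesture library 480 can be sketched as a nearest-template search. The feature representation and similarity metric below are placeholders for whatever pixel-level comparison the detector actually uses.

```python
def match_gesture(observed_features, gesture_library, threshold=0.9):
    """Return the name of the best-matching stored gesture, or None if no good match is found."""
    def similarity(a, b):
        # toy metric: inverse of mean absolute difference, mapped into (0, 1]
        diff = sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)
        return 1.0 / (1.0 + diff)

    best_name, best_score = None, 0.0
    for name, template in gesture_library.items():
        score = similarity(observed_features, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```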
- the memory 434 additionally includes, in some example implementations, an element animation application 910 , a localization system 915 , an image processing system 920 , and a collaboration application 925 .
- the element animation application 910 configures the processor 432 to control the movement of a series of virtual items 700 on a display in response to detecting one or more inputs, e.g., IMU data, captured images, and hand shapes or gestures.
- the localization system 915 configures the processor 432 to obtain localization data for use in determining the position of the eyewear device 100 relative to the physical environment.
- the localization data may be derived from a series of images, an IMU unit 472 , a GPS unit, or a combination thereof.
- the image processing system 920 configures the processor 432 to present a captured image on a display of an optical assembly 180 A, 180 B in cooperation with the image display driver 442 and the image processor 412 .
- the collaboration application 925 configures the processor 432 to implement collaboration functions described herein.
- FIG. 5 is a high-level functional block diagram of an example mobile device 401 .
- Mobile device 401 includes a flash memory 540 A which stores programming to be executed by the CPU 540 to perform all or a subset of the functions described herein.
- the mobile device 401 may include a camera 570 that comprises at least two visible-light cameras (first and second visible-light cameras with overlapping fields of view) or at least one visible-light camera and a depth sensor with substantially overlapping fields of view.
- the mobile device 401 may additionally include a speaker 571 .
- Flash memory 540 A may further include multiple images or video, which are generated via the camera 570 .
- the mobile device 401 includes an image display 580 , a mobile display driver 582 to control the image display 580 , and a display controller 584 .
- the image display 580 includes a user input layer 591 (e.g., a touchscreen) that is layered on top of or otherwise integrated into the screen used by the image display 580 .
- FIG. 5 therefore provides a block diagram illustration of the example mobile device 401 with a user interface that includes a touchscreen input layer 591 for receiving input (by touch, multi-touch, or gesture, and the like, by hand, stylus, or other tool), a camera 570 for capturing images of objects (including hands of the user and potential virtual content), and an image display 580 for displaying content.
- the mobile device 401 includes at least one digital transceiver (XCVR) 510 , shown as WWAN XCVRs, for digital wireless communications via a wide-area wireless mobile communication network.
- the mobile device 401 also includes additional digital or analog transceivers, such as short-range transceivers (XCVRs) 520 for short-range network communication, such as via NFC, VLC, DECT, ZigBee, Bluetooth™, or Wi-Fi.
- short range XCVRs 520 may take the form of any available two-way wireless local area network (WLAN) transceiver of a type that is compatible with one or more standard protocols of communication implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11.
- the mobile device 401 can include a global positioning system (GPS) receiver.
- the eyewear device 100 or the mobile device 401 can utilize either or both the short range XCVRs 520 and WWAN XCVRs 510 for generating location coordinates for positioning.
- cellular network, Wi-Fi, or Bluetooth™ based positioning systems can generate very accurate location coordinates, particularly when used in combination.
- Such location coordinates can be transmitted between the eyewear device 100 or mobile device 401 over one or more network connections via XCVRs 510 , 520 .
- the mobile device 401 in some examples includes a collection of motion-sensing components referred to as an inertial measurement unit (IMU) 572 for sensing the position, orientation, and motion of the client device 401 .
- the motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip.
- the inertial measurement unit (IMU) 572 in some example configurations includes an accelerometer, a gyroscope, and a magnetometer.
- the accelerometer senses the linear acceleration of the client device 401 (including the acceleration due to gravity) relative to three orthogonal axes (x, y, z).
- the gyroscope senses the angular velocity of the client device 401 about three axes of rotation (pitch, roll, yaw). Together, the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw).
- the magnetometer if present, senses the heading of the client device 401 relative to magnetic north.
- the IMU 572 may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and computes a number of useful values about the position, orientation, and motion of the client device 401 .
- the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z); and integrated again to obtain the position of the client device 401 (in linear coordinates, x, y, and z).
- the angular velocity data from the gyroscope can be integrated to obtain the position of the client device 401 (in spherical coordinates).
- the programming for computing these useful values may be stored in one or more memory elements 540 A, 540 B, 540 C and executed by the CPU 540 of the client device 401 .
- the transceivers 510 , 520 conform to one or more of the various digital wireless communication standards utilized by modern mobile networks.
- WWAN transceivers 510 include (but are not limited to) transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and 3rd Generation Partnership Project (3GPP) network technologies including, for example and without limitation, 3GPP type 2 (or 3GPP2) and Long Term Evolution (LTE), at times referred to as “4G.”
- the transceivers 510 , 520 provide two-way wireless communication of information including digitized audio signals, still image and video signals, web page information for display as well as web-related inputs, and various types of mobile message communications to/from the mobile device 401 .
- the mobile device 401 further includes a microprocessor that functions as a central processing unit (CPU); shown as CPU 540 in FIG. 5 .
- a processor is a circuit having elements structured and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components could be used, the examples utilize components forming a programmable CPU.
- a microprocessor for example includes one or more integrated circuit (IC) chips incorporating the electronic elements to perform the functions of the CPU.
- the CPU 540 may be based on any known or available microprocessor architecture, such as a Reduced Instruction Set Computing (RISC) architecture using ARM, as commonly used today in mobile devices and other portable electronic devices. Of course, other arrangements of processor circuitry may be used to form the CPU 540 or processor hardware in smartphones, laptop computers, and tablets.
- the CPU 540 serves as a programmable host controller for the mobile device 401 by configuring the mobile device 401 to perform various operations, for example, in accordance with instructions or programming executable by CPU 540 .
- operations may include various general operations of the mobile device, as well as operations related to the programming for applications on the mobile device.
- Although a processor may be configured by use of hardwired logic, typical processors in mobile devices are general processing circuits configured by execution of programming.
- the mobile device 401 includes a memory or storage system, for storing programming and data.
- the memory system may include a flash memory 540 A, a random-access memory (RAM) 540 B, and other memory components 540 C, as needed.
- the RAM 540 B serves as short-term storage for instructions and data being handled by the CPU 540 , e.g., as a working data processing memory.
- the flash memory 540 A typically provides longer-term storage.
- the flash memory 540 A is used to store programming or instructions for execution by the CPU 540 .
- the mobile device 401 stores and runs a mobile operating system through which specific applications are executed. Examples of mobile operating systems include Google Android, Apple iOS (for iPhone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry OS, or the like.
- the CPU 540 of the mobile device 401 may be coupled to a camera system 570 , a mobile display driver 582 , a user input layer 591 , and a memory 540 A.
- Components and functionality of the eyewear device 100 described herein can be incorporated into the mobile device 401 .
- components and functionality of the mobile device 401 described herein may be incorporated into the eyewear device 100 .
- the processor 432 within the eyewear device 100 or the processor 540 within the mobile device 401 may construct a map of the environment surrounding the respective device, determine a location of the device within the mapped environment, and determine a relative position of the device to one or more objects in the mapped environment.
- the processor 432 / 540 may construct the map and determine location and position information using a conventional simultaneous localization and mapping (SLAM) algorithm applied to data received from one or more sensors.
- Sensor data includes images received from one or both of the cameras 114 A, 114 B or camera(s) 570 , distance(s) received from a laser range finder, position information received from a GPS unit, motion and acceleration data received from an IMU 472 / 572 , or a combination of data from such sensors, or from other sensors that provide data useful in determining positional information.
- a SLAM algorithm is used to construct and update a map of an environment, while simultaneously tracking and updating the location of a device (or a user) within the mapped environment.
- the mathematical solution can be approximated using various statistical methods, such as particle filters, Kalman filters, extended Kalman filters, and covariance intersection.
- the SLAM algorithm updates the map and the location of objects at least as frequently as the frame rate; in other words, calculating and updating the mapping and localization thirty times per second.
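The per-frame predict-and-correct cycle can be reduced to a toy update. The blend factor below stands in for the statistical filters named above (particle filters, Kalman filters, and the like) and is an assumption for illustration, not the actual algorithm.

```python
def localization_step(pose, velocity, observed_pose, dt, gain=0.3):
    """One simplified update per camera frame: predict the new pose from prior motion,
    then blend in the pose implied by the latest sensor observations."""
    predicted = [p + v * dt for p, v in zip(pose, velocity)]
    corrected = [p + gain * (o - p) for p, o in zip(predicted, observed_pose)]
    new_velocity = [(c - p) / dt for c, p in zip(corrected, pose)]
    return corrected, new_velocity
```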
- FIG. 6 depicts an example physical environment 600 along with elements that are useful when using a SLAM application and other types of tracking applications (e.g., natural feature tracking (NFT)).
- a user 602 of eyewear device 100 is present in an example physical environment 600 (which, in FIG. 6 , is an interior room).
- the processor 432 of the eyewear device 100 determines its position with respect to one or more objects 604 within the environment 600 using captured images, constructs a map of the environment 600 using a coordinate system (x, y, z) for the environment 600 , and determines its position within the coordinate system.
- the processor 432 determines a head pose (roll, pitch, and yaw) of the eyewear device 100 within the environment by using two or more location points (e.g., three location points 606 a , 606 b , and 606 c ) associated with a single object 604 a , or by using one or more location points 606 associated with two or more objects 604 a , 604 b , 604 c .
- the processor 432 of the eyewear device 100 may position a virtual object 608 (such as the key shown in FIG. 6) within the environment 600 for viewing during an augmented reality experience such as a collaborative augmented reality experience where each user has a respective augmented reality device (e.g., eyewear device 100 or mobile device 401).
- the localization system 915 in some examples associates a virtual marker 610 a with a virtual object 608 in the environment 600 .
- markers are registered at locations in the environment to assist devices with the task of tracking and updating the location of users, devices, and objects (virtual and physical) in a mapped environment. Markers are sometimes registered to a high-contrast physical object, such as a relatively dark object (e.g., the framed picture 604 a) mounted on a lighter-colored wall, to assist cameras and other sensors with the task of detecting the marker.
- the markers may be preassigned or may be assigned by the eyewear device 100 upon entering the environment.
- Markers can be encoded with or otherwise linked to information.
- a marker might include position information, a physical code (such as a bar code or a QR code; either visible to the user or hidden), or a combination thereof.
- a set of data associated with the marker is stored in the memory 434 of the eyewear device 100 .
- the set of data includes information about the marker 610 a , the marker's position (location and orientation), one or more virtual objects, or a combination thereof.
- the marker position may include three-dimensional coordinates for one or more marker landmarks 616 a , such as the corner of the generally rectangular marker 610 a shown in FIG. 6 .
- the marker location may be expressed relative to real-world geographic coordinates, a system of marker coordinates, a position of the eyewear device 100 , or other coordinate system.
- the one or more virtual objects associated with the marker 610 a may include any of a variety of material, including still images, video, audio, tactile feedback, executable applications, interactive user interfaces and experiences, and combinations or sequences of such material. Any type of content capable of being stored in a memory and retrieved when the marker 610 a is encountered or associated with an assigned marker may be classified as a virtual object in this context.
- the key 608 shown in FIG. 6 for example, is a virtual object displayed as a still image, either 2D or 3D, at a marker location.
- the marker 610 a may be registered in memory as being located near and associated with a physical object 604 a (e.g., the framed work of art shown in FIG. 6). In another example, the marker may be registered in memory as being at a particular position with respect to the eyewear device 100.
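- A minimal sketch of how such a marker record might be organized in memory follows; the field names, coordinate conventions, and example values are illustrative assumptions, not the data structure defined in this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Marker:
    marker_id: str
    # Location and orientation, e.g., in real-world geographic coordinates,
    # marker coordinates, or coordinates relative to the eyewear device.
    position_xyz: tuple          # (x, y, z) of a marker landmark such as a corner
    orientation_rpy: tuple       # (roll, pitch, yaw)
    coordinate_frame: str = "environment"
    # Content to present when the marker is encountered: still images, video,
    # audio, executable applications, interactive experiences, and the like.
    virtual_objects: list = field(default_factory=list)

key_marker = Marker(
    marker_id="610a",
    position_xyz=(1.2, 0.8, 2.5),
    orientation_rpy=(0.0, 0.0, 0.0),
    virtual_objects=["key_608_still_image"],
)
```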
- FIGS. 7 , 8 , and 9 are illustrations of an example collaborative object 700 being developed by adding virtual content 702 during a collaboration period of a collaboration session for use in describing the steps of the methods illustrated in FIGS. 10 and 11 below (e.g., to create a virtual time capsule).
- although a box is used for the collaborative object 700 in many of the examples described herein, any virtual object may be selected for use as the collaborative object 700.
- FIG. 7 provides a perspective view of an example collaborative object 700 in the form of a box in a first state (closed) that may be manipulated in three dimensions 701 with a hand 651 (e.g., through detected gestures in images or touch inputs on a touchscreen) based on corresponding movements in three dimensions 681 .
- the hand 651 may be rotated to rotate the collaborative object.
- an extended index finger may be detected adjacent the collaborative object and a corresponding audible signal will be presented via a speaker when the user makes a tapping gesture.
- the location of the eyewear device may also be tracked in three dimensions 840 within an environment 600 so that the overlays generated for presentation on the display 180 B are more realistic.
- the hand 651 may be predefined to be the left hand, as shown.
- the system includes a process for selecting and setting the hand, right or left, which will serve as the hand 651 to be detected.
- FIG. 8 provides a perspective view of the example collaborative object 700 in a second state (open) with associated virtual content 702 (watch face 702 a , urn 702 b , book 702 c , other virtual content 702 d - f ) added during a collaboration period.
- the hand 652 is illustrated in the open position. This position of the hand 652 or the transition of the hand 651 in the relaxed position ( FIG. 7 ) to the hand 652 in the open position may be set to correspond to opening the collaborative object 700 such that when this hand position or hand gesture is detected, the collaborative object 700 transitions to an open state.
- FIG. 9 provides a perspective view of the example collaborative object 700 in the first state (closed) with virtual content 702 added to exterior surfaces of the collaborative object 700 .
- the hand 653 is illustrated in a closed position. This position of the hand 653 or the transition of the hand 651 in the relaxed position ( FIG. 7 ) to the hand 653 in the closed position may be set to correspond to closing the collaborative object 700 such that when this hand position or hand gesture is detected, the collaborative object 700 transitions to a closed state.
- the process of detecting and tracking includes detecting the hand 651 / 652 / 653 , over time, in various postures, in a set or series of captured frames of video data.
- detecting refers to and includes detecting a hand in as few as one frame of video data, as well as detecting the hand, over time, in a subset or series of frames of video data.
- the process includes detecting a hand 651 in a particular posture in one or more of the captured frames of video data.
- the process includes detecting the hand 651 / 652 / 653 , over time, in various postures, in a subset or series of captured frames of video data.
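- The sketch below illustrates one way a sequence of per-frame hand postures could be mapped to open and close actions for the collaborative object; the posture labels, the persistence threshold, and the function name are assumptions for illustration only, not the detection logic of this disclosure.

```python
def object_state_from_postures(postures, current_state="closed", min_frames=3):
    """postures: per-frame labels such as "relaxed", "open", or "closed",
    e.g., produced by a hand feature model applied to captured video frames.
    A transition is accepted only after it persists for min_frames frames."""
    run_label, run_length = None, 0
    state = current_state
    for label in postures:
        if label == run_label:
            run_length += 1
        else:
            run_label, run_length = label, 1
        if run_length >= min_frames:
            if run_label == "open":
                state = "open"      # e.g., hand 652 detected: open the box
            elif run_label == "closed":
                state = "closed"    # e.g., hand 653 detected: close the box
    return state

print(object_state_from_postures(["relaxed", "open", "open", "open"]))  # open
```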
- FIG. 10 is a flow chart 1000 depicting an example method of developing a collaborative object 700 during a collaboration period of a collaboration session including multiple physically remote devices (e.g., eyewear devices 100 , mobile devices 401 , or a combination thereof).
- the steps of FIG. 10 are performed by the processor 499 of the server system 498 (see FIG. 4 ) accessible by the physically remote devices.
- one or more steps may be performed by processors 432 and 540 of the physically remote devices or a combination of processors of the server system 498 and the physically remote device(s) (acting as a processor to implement the step(s)).
- One or more of the steps shown and described may be performed simultaneously, in a series, in an order other than shown and described, or in conjunction with additional steps. Some steps may be omitted or, in some applications, repeated.
- the processor receives user parameters for the collaborative session.
- the processor receives user parameters from a physically remote device of a host user where the host user designates the user parameters through their physically remote device during a server system 498 connection.
- the user parameters include identifiers for the users that are permitted to access the collaborative session.
- User parameters may also include access levels identifying what individuals have access to during the collaborative session. Additional details regarding setting up and maintaining access levels are described below with reference to the steps of flow chart 1100 ( FIG. 11 ).
- the processor receives object parameters.
- the processor receives object parameters from a physically remote device of a host user (or other user with suitable access level) where the user designates the object parameters through their physically remote device during a server system 498 connection.
- the object parameters include identifiers identifying the object to be used as the collaborative object 700 (e.g., a box as illustrated in FIGS. 7 - 9 ).
- object parameters may include a material for the object (e.g., cardboard, metal, glass) or a time parameter providing a time window or deadline (e.g., in the form of a clock value 710 or a time bar 712 ) during which the virtual content 702 can be added to the collaborative object 700 (after which the virtual content 702 can no longer be added to the collaborative object 700 ).
- the processor of the server system 498 may present the physically remote device via network 495 with a list of available virtual objects 702 for selection by the user through their device, which is received by the processor of the server system 498 upon selection.
- the user may send a virtual object 702 (e.g., a 3D image) they generated on their physically remote device to the server system 498, where the processor 499 of the server system 498 designates the received virtual object 702 as the collaborative object 700 upon receipt.
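- A minimal sketch of what an object-parameters payload sent to the server system might contain follows; the field names and values are hypothetical and are not a format defined in this disclosure.

```python
import json

object_parameters = {
    "object_type": "box",                      # which virtual object serves as 700
    "material": "cardboard",                   # e.g., cardboard, metal, glass
    "collaboration_period_s": 7 * 24 * 3600,   # time window for adding content
    "time_display": ["clock_value", "time_bar"],  # e.g., elements 710 and/or 712
}

request_body = json.dumps(object_parameters)   # body of the request to the server
print(request_body)
```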
- the processor provides access to the collaborative object 700 .
- the processor provides access to the collaborative object 700 through server connections with the physically remote devices based on the access level associated with each of the devices.
- the processor 499 develops the collaborative object 700 responsive to the object parameters received and stores the collaborative object 700 in a location accessible to the physically remote devices.
- the processor provides access to the collaborative object 700 based on the level of access associated with the user of the physically remote device.
- the processor sends a file containing the collaborative object 700 that the physically remote device uses to generate an overlay for presentation on a display of the physically remote devices, such as display 180 A-B of the eyewear device 100 or display 580 of the mobile device 401.
- the user may then interact with the representation of the collaborative object 700 on their display (e.g., using a hand gesture such as depicted in FIG. 7).
- the processor receives design parameters.
- the processor receives design parameters from the physically remote devices having suitable permission levels accessing the collaborative object 700 .
- the processor sends a file to the physically remote device containing the collaborative object 700 .
- the physically remote device generates an overlay for display that the user can interact with to add design parameters to the collaborative object 700 .
- the user may then, for example, select an image (e.g., from their camera, such as camera 114 A-B or camera 570 ) and select a surface of the collaborative object 700 where, upon selection of the surface, the selected image is associated with the collaborative object 700 on the physically remote device.
- the added/changed design parameter is communicated by the physically remote device to the server system 498 via network 495.
- the processor updates the collaborative object 700 responsive to the design parameters.
- the processor updates the collaborative object 700 in response to changes received from the physically remote devices via network 495 .
- the processor upon receipt of the added/changed design parameter from the physically remote device, associates the added/changed design parameter with the collaborative object 700 in the location accessible to the physically remote devices.
- the processor receives virtual content 702 .
- the processor receives virtual content 702 from the physically remote devices having suitable permission levels for accessing the collaborative object 700 .
- the processor sends a file to the physically remote device containing the collaborative object 700 .
- the physically remote device generates an overlay for display that the user can interact with to add virtual content 702 to the collaborative object 700 .
- the user may then, for example, add visual virtual content 702 by selecting an image (e.g., from their camera) and performing an action (e.g., drag and drop the image on the collaborative object 700 or double tap on the object) to associate the virtual content 702 with the collaborative object 700 on the physically remote device.
- audio virtual content may be added to the video virtual content by, for example, pressing and holding the video virtual content and speaking into a microphone where the audio received while depressing the video virtual content is associated with the video virtual content.
- the added virtual content 702 is communicated by the physically remote device to the server system 498 via network 495 .
- the users may associate virtual content 702 with the collaborative object 700 by, for example, dragging and dropping the virtual content 702 onto a surface of the collaborative object 700 .
- the eyewear device 100 may recognize hand gestures and the user may manipulate the displayed collaborative object 700 on display 180 A-B and select the virtual content 702 via hand gestures captured and processed by the eyewear device 100 .
- the mobile device 401 may interpret instructions received via the touchscreen 580 of the mobile device 401 .
- the user may manipulate the collaborative object 700 and select the virtual content 702 by touching/tapping the touchscreen 580 with their finger to select the virtual content 702 and by dragging their finger to move the virtual content 702 onto the collaborative object 700 (which may associate the virtual content 702 with the collaborative object 700 ).
- the processor associates the virtual content 702 with the collaborative object 700 .
- the processor associates the virtual content 702 with the collaborative object 700 by updating the collaborative object 700 in response to changes received from the physically remote devices.
- the processor upon receipt of the added virtual content 702 from the physically remote device, associates the added virtual content 702 with the collaborative object 700 in the location accessible to the physically remote devices.
- the processor stores the collaborative object 700 .
- the processor 499 stores the collaborative object 700 in memory accessible to the physically remote devices via a network 495 during a collaborative session.
- the processor provides access to the collaborative object 700 .
- the processor 499 provides access to the collaborative object 700 in memory accessible to the physically remote devices via a network 495 .
- the processor checks credentials (e.g., user ID) of users requesting access and permits access if the credentials match credentials associated with the collaborative session for the collaborative object 700 .
- the processor presents the collaborative object 700 .
- the processor 499 presents the collaborative object 700 to the physically remote devices via the network 495 .
- the processor sends a file including the collaborative object 700 (and associated virtual content or links to such content) to a physically remote device having access to the collaborative session in response to a request from the physically remote device, which the physically remote device uses to generate an overlay including the collaborative object 700 and associated virtual content for presentation on the display of the remote physical devices.
- the associated virtual content is presented all at once when the collaborative object 700 is placed in an open state.
- the associated virtual content is presented in sequential order based on time stamps added when the virtual content was associated with the collaborative object 700 .
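- For the sequential-reveal case, a server-side sketch might simply order the associated content by the timestamp recorded when each item was added; the structure and names below are illustrative assumptions only.

```python
from datetime import datetime, timezone

virtual_content = [
    {"id": "702b", "added_at": datetime(2022, 9, 2, 14, 5, tzinfo=timezone.utc)},
    {"id": "702a", "added_at": datetime(2022, 9, 1, 9, 30, tzinfo=timezone.utc)},
]

def presentation_order(items, sequential=True):
    """Return items for presentation all at once (as stored) or in the
    sequential order given by their added-at timestamps."""
    return sorted(items, key=lambda i: i["added_at"]) if sequential else items

for item in presentation_order(virtual_content):
    print(item["id"])  # 702a, then 702b
```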
- FIG. 11 is a flow chart listing the steps of an example selective collaboration object access method.
- the steps of FIG. 11 are performed by the processor 499 of the server system 498 (see FIG. 4 ) accessible by the physically remote devices.
- one or more steps may be performed by processors 432 and 540 of the physically remote devices or a combination of processors 499 of the server system 498 and the physically remote device(s) (acting as a processor to implement the step(s)).
- One or more of the steps shown and described may be performed simultaneously, in a series, in an order other than shown and described, or in conjunction with additional steps. Some steps may be omitted or, in some applications, repeated.
- the processor receives user identifiers.
- the processor receives user identifiers for users to be associated with a collaborative session.
- a user accesses the server system 498 using a physically remote device.
- the user creates a collaborative session and designates other users for participation in the collaboration.
- the host may invite another user (user B) to participate in a collaboration to prepare content for another user (user C) with the intent to provide that user with the content at a later date.
- the processor receives access parameters for the users.
- the processor receives access parameters for the users from the host or another user with acceptable access levels.
- the access parameters indicate a respective access level to the collaborative object 700 of each of the users that allows the respective access level of at least one of the users to be different than the respective access level of another user.
- the host may have a first access level (enabling access to access the collaborative object 700 in order to associate virtual content 704 and view associated virtual content 704 during a collaboration period) and another user may have a second level of access to the collaborative object 700 that is less than the first level of access (e.g., it only permits access after the collaboration period has ended; or it permits access to the collaborative object 700 during the collaboration period, but not associated virtual content 704 until after the collaboration period has ended).
- the host receives an access level enabling access by default and the host may grant other users access rights by providing the access rights to the processor 499 of the server system 498 via a physically remote device.
- the processor maintains a table of access parameters.
- the processor maintains a table in cloud storage including an identifier for each of the users and their respective access levels based on access parameters supplied by the host.
- the processor may initially create a table including the information identified in TABLE 1 below:
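- A minimal sketch of the kind of access table such a step could maintain, keyed by user identifier with a per-user permission level, is shown below; the user IDs and values are hypothetical and are not the contents of TABLE 1.

```python
# Hypothetical initial access table maintained in cloud storage.
access_table = {
    "A_ID": {"permission": "Yes"},   # e.g., the host
    "B_ID": {"permission": "No"},    # e.g., an invited collaborator
    "C_ID": {"permission": "No"},    # e.g., the intended recipient
    # D_ID is intentionally absent: unknown users receive no access at all.
}
```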
- the processor maintains a timer.
- the processor may receive an initial time value from the host.
- the timer may be used to track the time remaining during a collaboration period.
- the processor provides a time value corresponding to the initial time or the tracked time to the physically remote devices, which may use the time value to generate overlays depicting or representing time remaining (e.g., a clock value 710 or a time bar 712 ).
- the processor provides access to collaborative object 700 .
- the processor provides access to the collaborative object 700 by the users based on their respective access levels.
- the processor will compare the user identification (ID) for the user to values in the table. If the user's ID (e.g., D_ID) is not found on the table, that user will not be able to access the collaborative object 700 . If another user (e.g., C_ID) is found in the table, but has a “No” permission level, that user will only be provided with access commensurate with that level of access. If another user (e.g., A_ID) is found in the table with a “Yes” permission level, that user will be provided with access to the collaborative object 700 commensurate with that level of access.
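- A sketch of that lookup follows; the function, field names, and return values are illustrative assumptions, not an interface defined in this disclosure.

```python
access_table = {"A_ID": {"permission": "Yes"}, "C_ID": {"permission": "No"}}

def access_for(user_id, table):
    """Return the access granted to user_id based on the maintained table."""
    entry = table.get(user_id)
    if entry is None:                 # e.g., D_ID: not listed, no access
        return "none"
    if entry["permission"] == "No":   # e.g., C_ID: only limited access
        return "limited"
    return "full"                     # e.g., A_ID: full access to object 700

print(access_for("D_ID", access_table), access_for("A_ID", access_table))
```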
- the processor identifies an access level change.
- the processor identifies an access level change for at least one other of the users.
- the host grants another user (e.g., B_ID) access rights by sending an access rights change request to the processor 499 of the server system 498, including the user ID to be changed and the new access level, via a physically remote device.
- all users with a “Yes” permission level must request an access level change for a user with another permission level.
- the access level change is identified by the processor upon receipt of the request(s).
- the processor may identify a change based on the expiration of a collaboration period or a preset “reveal” time (e.g., based on a monitored timer). For example, where user A and user B are preparing content for user C, user A and user B may initially have a “Yes” permission level and the processor may automatically identify an access level change request for user C once the collaboration period has ended or the reveal time has been reached.
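- A sketch of how such an automatic change could be keyed off the monitored timer is given below; the reveal logic, names, and values are assumptions for illustration only.

```python
import time

def maybe_reveal(access_table, recipient_id, reveal_time_s, now=None):
    """Grant the recipient access once the collaboration period has ended
    or the preset reveal time has been reached."""
    now = time.time() if now is None else now
    if now >= reveal_time_s and access_table[recipient_id]["permission"] == "No":
        access_table[recipient_id]["permission"] = "Yes"   # access level change
    return access_table

table = {"C_ID": {"permission": "No"}}
print(maybe_reveal(table, "C_ID", reveal_time_s=0))  # C_ID now has "Yes"
```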
- the processor changes the respective access level.
- the processor changes the respective access level of the at least one other of the users responsive to the access level change.
- the processor updates the table as shown in TABLE 2 below:
- FIG. 12 is a flow chart 1200 including steps of a method for use in the collaboration application 925 .
- a processor 499 of server system 498 enables users to associate the virtual content 702 with the collaborative object 700 via network 495 , where the processor 499 maintains a timer 704 with the clock value 710 indicative of when a collaboration period ends (e.g., to create a sense of urgency, excitement and motivation among the users during a collaboration session).
- the collaboration session includes a collaboration period during which virtual content 702 can be associated with the collaborative object 700, followed by an access period during which the virtual content 702 is fixed for viewing by users and can no longer be added.
- the collaboration period may be an object parameter specified by a user (see, for example, block 1004 of FIG. 10 and the related description).
- the processor 499 provides the users with access to the collaborative object 700 during the session. This is seen in FIGS. 7 - 9 .
- the users are provided authorization via network 495 to join the session and collaborate on the generation of the collaborative object 700 .
- the processor 499 serves, via network 495 , the collaborative object 700 to the physically remote devices, which present the collaborative object 700 to the user, e.g., as an overlay on the display 180 of the eyewear device 100 or the display 580 of the mobile device 401 .
- users cannot access the associated virtual content 702 received from other users until the collaboration period ends, such as when the clock value 710 is zero or the time bar 712 is completed.
- a subset of users can access the associated virtual content 702 added by others in that subset of users.
- the processor 499 enables the users to associate the virtual content 702 with the collaborative object 700 .
- An example of this is depicted in FIGS. 7 - 9 .
- the users contribute to the joint collaboration by using their physically remote devices to add and modify virtual content 702 at chosen locations of the collaborative object 700, as shown in the display of their respective devices, such as on display 180 A-B of the eyewear device 100 and display 580 of the mobile device 401.
- the processor 499 maintains the timer 704 with the clock value 710, which is indicative of when the collaboration period ends. This is seen in FIGS. 7 - 9 .
- the timer 704 displays the clock value 710 as a countdown on the respective device display, e.g., to create excitement and a sense of urgency to complete the generation of the collaborative object 700 .
- the countdown is indicative of when the collaboration period ends.
- the countdown can be a countdown of time and can be presented as the time value 710 , a corresponding timeline shown as the time bar 712 , or both on a device display.
- the duration of the collaboration period may be a week, or as short as a few hours.
- the countdown can change color when the countdown is close to the end of the collaboration period, such as changing from a green color to a red color.
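- An illustrative sketch of deriving a countdown display value, a time-bar fill, and a color for the overlay is shown below; the thresholds, names, and color choices are assumptions, not values specified in this disclosure.

```python
def countdown_display(remaining_s, total_s, warn_fraction=0.1):
    """Return the clock value text, a 0-1 fill for a time bar, and a color
    that switches from green to red near the end of the collaboration period."""
    minutes, seconds = divmod(max(int(remaining_s), 0), 60)
    hours, minutes = divmod(minutes, 60)
    clock_value = f"{hours:02d}:{minutes:02d}:{seconds:02d}"
    time_bar = 1.0 - (remaining_s / total_s) if total_s else 1.0
    color = "red" if remaining_s <= warn_fraction * total_s else "green"
    return clock_value, time_bar, color

# Example: five minutes remain out of a one-week collaboration period.
print(countdown_display(remaining_s=300, total_s=7 * 24 * 3600))
```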
- the processor 499 provides the users access to the collaborative object 700 and the associated virtual content 702 at the end of the collaboration period. This is seen in FIGS. 7 - 9 .
- the processor 499 provides all users with access (e.g., by changing access rights) to the finished collaborative object 700 in order to reveal the results of the collaboration to all users.
- Machine learning refers to an algorithm that improves incrementally through experience. By processing a large number of different input datasets, a machine-learning algorithm can develop improved generalizations about particular datasets, and then use those generalizations to produce an accurate output or solution when processing a new dataset. Broadly speaking, a machine-learning algorithm includes one or more parameters that will adjust or change in response to new experiences, thereby improving the algorithm incrementally; a process similar to learning.
- Deep learning refers to a class of machine-learning methods that are based on or modeled after artificial neural networks.
- An artificial neural network is a computing system made up of a number of simple, highly interconnected processing elements (nodes), which process information by their dynamic state response to external inputs.
- a large artificial neural network might have hundreds or thousands of nodes.
- a convolutional neural network is a type of neural network that is frequently applied to analyzing visual images, including digital photographs and video.
- the connectivity pattern between nodes in a CNN is typically modeled after the organization of the human visual cortex, which includes individual neurons arranged to respond to overlapping regions in a visual field.
- a neural network that is suitable for use in the determining process described herein is based on one of the following architectures: VGG16, VGG19, ResNet50, Inception V3, Xception, or other CNN-compatible architectures.
- the processor 432 determines whether a detected series of hand shapes substantially matches a predefined hand gesture using a machine-trained algorithm referred to as a hand feature model.
- the processor 432 is configured to access the hand feature model, trained through machine learning, and applies the hand feature model to identify and locate features of the hand shape in one or more frames of the video data.
- the trained hand feature model receives a frame of video data which contains a detected hand shape and abstracts the image in the frame into layers for analysis. Data in each layer is compared to hand gesture data stored in the hand gesture library 480 , layer by layer, based on the trained hand feature model, until a good match is identified.
- the layer-by-layer image analysis is executed using a convolutional neural network.
- the CNN identifies learned features (e.g., hand landmarks, sets of joint coordinates, and the like).
- the image is transformed into a plurality of images, in which the learned features are each accentuated in a respective sub-image.
- the sizes and resolution of the images and sub-images are reduced in order to isolate portions of each image that include a possible feature of interest (e.g., a possible palm shape, a possible finger joint).
- the values and comparisons of images from the non-output layers are used to classify the image in the frame.
- Classification refers to the process of using a trained model to classify an image according to the detected hand shape. For example, an image may be classified as a “touching action” if the detected series of bimanual hand shapes matches the touching gesture stored in the library 480 .
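- As a simplified stand-in for the trained hand feature model described above (it is not the layer-by-layer CNN analysis of this disclosure), the sketch below labels a detected hand shape by matching an extracted landmark vector against stored gesture templates; the library contents, threshold, and names are hypothetical.

```python
import numpy as np

# Hypothetical gesture library: each entry is a flattened set of hand
# landmark coordinates representing a predefined gesture (cf. library 480).
gesture_library = {
    "touching_action": np.array([0.1, 0.2, 0.4, 0.5, 0.7, 0.8]),
    "open_hand":       np.array([0.0, 0.9, 0.2, 0.9, 0.4, 0.9]),
}

def classify_hand_shape(landmarks, library, threshold=0.25):
    """Label the detected hand shape with the closest library gesture,
    or None if no stored gesture is a sufficiently good match."""
    best_label, best_dist = None, float("inf")
    for label, template in library.items():
        dist = np.linalg.norm(landmarks - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

detected = np.array([0.12, 0.21, 0.41, 0.52, 0.68, 0.79])
print(classify_hand_shape(detected, gesture_library))  # touching_action
```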
- any of the functionality described herein for the eyewear device 100 , the mobile device 401 , and the server system 498 can be embodied in one or more computer software applications or sets of programming instructions, as described herein.
- As used herein, the terms “function,” “functions,” “application,” “applications,” “instruction,” “instructions,” and “programming” refer to program(s) that execute functions defined in the programs.
- Various programming languages can be employed to develop one or more of the applications, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language).
- a third-party application may include mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system.
- the third-party application can invoke API calls provided by the operating system to facilitate functionality described herein.
- Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer devices or the like, such as may be used to implement the client device, media gateway, transcoder, etc. shown in the drawings.
- Volatile storage media include dynamic memory, such as main memory of such a computer platform.
- Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system.
- Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications.
- Computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
- any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. Such amounts are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like may vary by as much as plus or minus ten percent from the stated amount or range.
Abstract
Description
- Examples set forth in the present disclosure relate to the field of virtual reality for electronic devices, including mobile devices and wearable devices such as eyewear devices. More particularly, but not by way of limitation, the present disclosure describes a collaborative method with selective access.
- Many types of computers and electronic devices available today, such as mobile devices (e.g., smartphones, tablets, and laptops), handheld devices, and wearable devices (e.g., smart glasses, digital eyewear, headwear, headgear, and head-mounted displays), include a variety of cameras, sensors, wireless transceivers, input systems, and displays.
- Graphical user interfaces allow the user to interact with displayed content, including virtual objects and graphical elements such as icons, taskbars, list boxes, menus, buttons, and selection control elements like cursors, pointers, handles, and sliders.
- Virtual reality (VR) technology generates a complete virtual environment including realistic images, sometimes presented on a VR headset or other head-mounted display. VR experiences allow a user to move through the virtual environment and interact with virtual objects. Augmented reality (AR) is a type of VR technology that combines real objects in a physical environment with virtual objects and displays the combination to a user. The combined display gives the impression that the virtual objects are authentically present in the environment, especially when the virtual objects appear and behave like the real objects. Cross reality (XR) is generally understood as an umbrella term referring to systems that include or combine elements from AR, VR, and MR (mixed reality) environments.
- Collaborative tools are available to users of VR technology. The collaborative tools enable users to virtually meet in a collaborative session. During a collaborative session, users can communicate with one another in a virtual setting.
- Features of the various examples described will be readily understood from the following detailed description, in which reference is made to the figures. A reference numeral is used with each element in the description and throughout the several views of the drawing. When a plurality of similar elements is present, a single reference numeral may be assigned to like elements, with an added lower-case letter referring to a specific element.
- The various elements shown in the figures are not drawn to scale unless otherwise indicated. The dimensions of the various elements may be enlarged or reduced in the interest of clarity. The several figures depict one or more implementations and are presented by way of example only and should not be construed as limiting. Included in the drawing are the following figures:
- FIG. 1A is a side view (right) of an example hardware configuration of an eyewear device suitable for use in an example collaboration system;
- FIG. 1B is a perspective, partly sectional view of a right corner of the eyewear device of FIG. 1A depicting a right visible-light camera, and a circuit board;
- FIG. 1C is a side view (left) of an example hardware configuration of the eyewear device of FIG. 1A, which shows a left visible-light camera;
- FIG. 1D is a perspective, partly sectional view of a left corner of the eyewear device of FIG. 1C depicting the left visible-light camera, and a circuit board;
- FIGS. 2A and 2B are rear views of example hardware configurations of an eyewear device utilized in an example collaboration system;
- FIG. 3 is a diagrammatic depiction of a three-dimensional scene, a left raw image captured by a left visible-light camera, and a right raw image captured by a right visible-light camera;
- FIG. 4 is a functional block diagram of an example collaboration system including a wearable device (e.g., an eyewear device) and a server system connected via various networks;
- FIG. 5 is a diagrammatic representation of an example hardware configuration for a mobile device suitable for use in the example system of FIG. 4;
- FIG. 6 is a schematic illustration of a user in an example environment for use in describing simultaneous localization and mapping;
- FIG. 7 is a perspective illustration of an example collaborative object in the form of a box that may be manipulated with a hand;
- FIG. 8 is a perspective illustration of an example first hand gesture associated with an opening gesture for opening the box depicted in FIG. 7;
- FIG. 9 is a perspective illustration of an example second hand gesture associated with a closing gesture for closing the box depicted in FIG. 7;
- FIG. 10 is a flow chart listing the steps in an example collaboration method;
- FIG. 11 is a flow chart listing the steps of an example selective collaboration object access method; and
- FIG. 12 is a flow chart including steps of a method for use in the collaboration application.
- A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a processor provides users with access to a collaborative object using respective physically remote devices, and associates virtual content received from the users with the collaborative object during a collaboration period. The processor maintains a timer including a countdown indicative of when the collaboration period ends for associating virtual content with the collaborative object. The processor provides the users with access to the collaborative object with associated virtual content at the end of the collaboration period. The processor serves a time indicator for display on the physically remote devices, the time indicator representing the countdown indicative of when the collaboration period ends. The time indicator may be a countdown time or a timeline. The session creator (i.e., the host) and other approved participants can access the contents of a session (e.g., which may be recorded using an application such as the lens cloud feature available from Snap Inc. of Santa Monica, California).
- The following detailed description includes systems, methods, techniques, instruction sequences, and computing machine program products illustrative of examples set forth in the disclosure. Numerous details and examples are included for the purpose of providing a thorough understanding of the disclosed subject matter and its relevant teachings. Those skilled in the relevant art, however, may understand how to apply the relevant teachings without such details. Aspects of the disclosed subject matter are not limited to the specific devices, systems, and methods described because the relevant teachings can be applied or practiced in a variety of ways. The terminology and nomenclature used herein are for the purpose of describing particular aspects only and are not intended to be limiting. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
- The terms “coupled” or “connected” as used herein refer to any logical, optical, physical, or electrical connection, including a link or the like by which the electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected system element. Unless described otherwise, coupled or connected elements or devices are not necessarily directly connected to one another and may be separated by intermediate components, elements, or communication media, one or more of which may modify, manipulate, or carry the electrical signals. The term “on” means directly supported by an element or indirectly supported by the element through another element that is integrated into or supported by the element.
- The term “proximal” is used to describe an item or part of an item that is situated near, adjacent, or next to an object or person; or that is closer relative to other parts of the item, which may be described as “distal.” For example, the end of an item nearest an object may be referred to as the proximal end, whereas the generally opposing end may be referred to as the distal end.
- The orientations of the eyewear device, other mobile devices, associated components and any other devices incorporating a camera, an inertial measurement unit, or both such as shown in any of the drawings, are given by way of example only, for illustration and discussion purposes. In operation, the eyewear device may be oriented in any other direction suitable to the particular application of the eyewear device; for example, up, down, sideways, or any other orientation. Also, to the extent used herein, any directional term, such as front, rear, inward, outward, toward, left, right, lateral, longitudinal, up, down, upper, lower, top, bottom, side, horizontal, vertical, and diagonal are used by way of example only, and are not limiting as to the direction or orientation of any camera or inertial measurement unit as constructed or as otherwise described herein.
- Advanced AR technologies, such as computer vision and object tracking, may be used to produce a perceptually enriched and immersive experience. Computer vision algorithms extract three-dimensional data about the physical world from the data captured in digital images or video. Object recognition and tracking algorithms are used to detect an object in a digital image or video, estimate its orientation or pose, and track its movement over time. Hand and finger recognition and tracking in real time is one of the most challenging and processing-intensive tasks in the field of computer vision.
- The term “pose” refers to the static position and orientation of an object at a particular instant in time. The term “gesture” refers to the active movement of an object, such as a hand, through a series of poses, sometimes to convey a signal or idea. The terms, pose and gesture, are sometimes used interchangeably in the field of computer vision and augmented reality. As used herein, the terms “pose” or “gesture” (or variations thereof) are intended to be inclusive of both poses and gestures; in other words, the use of one term does not exclude the other.
- Additional objects, advantages and novel features of the examples will be set forth in part in the following description, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims.
- Reference now is made in detail to the examples illustrated in the accompanying drawings and discussed below.
- FIG. 1A is a side view (right) and FIG. 1C is a side view (left) of an example hardware configuration of an eyewear device 100 that includes a touch-sensitive input device or touchpad 181. As shown, the touchpad 181 may have a boundary that is subtle and not easily seen; alternatively, the boundary may be plainly visible or include a raised or otherwise tactile edge that provides feedback to the user about the location and boundary of the touchpad 181. In other implementations, the eyewear device 100 may include a touchpad on the left side.
- The surface of the touchpad 181 is configured to detect finger touches, taps, and gestures (e.g., moving touches) for use with a GUI displayed by the eyewear device, on an image display, to allow the user to navigate through and select menu options in an intuitive manner, which enhances and simplifies the user experience.
- Detection of finger inputs on the touchpad 181 can enable several functions. For example, touching anywhere on the touchpad 181 may cause the GUI to display or highlight an item on the image display, which may be projected onto at least one of the optical assemblies 180A, 180B. Double tapping on the touchpad 181 may select an item or icon. Sliding or swiping a finger in a particular direction (e.g., from front to back, back to front, up to down, or down to up) may cause the items or icons to slide or scroll in a particular direction; for example, to move to a next item, icon, video, image, page, or slide. Sliding the finger in another direction may slide or scroll in the opposite direction; for example, to move to a previous item, icon, video, image, page, or slide. The touchpad 181 can be virtually anywhere on the eyewear device 100.
- In one example, an identified finger gesture of a single tap on the touchpad 181 initiates selection or pressing of a graphical user interface element in the image presented on the image display of the optical assembly 180A, 180B. An adjustment to the image presented on the image display of the optical assembly 180A, 180B based on the identified finger gesture can be a primary action which selects or submits the graphical user interface element on the image display of the optical assembly 180A, 180B for further display or execution.
- As shown, the eyewear device 100 includes a left visible-light camera 114A and a right visible-light camera 114B. As further described herein, the two cameras 114A, 114B capture image information for a scene from two separate viewpoints. The two captured images may be used to project a three-dimensional display onto an image display for viewing with 3D glasses.
- The eyewear device 100 includes a right optical assembly 180B with an image display to present images, such as depth images. As shown in FIGS. 1A and 1C, the eyewear device 100 can include multiple visible-light cameras 114A, 114B that form a passive type of three-dimensional camera, such as a stereo camera, of which the right visible-light camera 114B is located on a right corner 110B and, as shown in FIGS. 1C-D, a left visible-light camera 114A is located on a left corner 110A.
- Left and right visible-light cameras 114A, 114B are sensitive to the visible-light range wavelength. Each of the visible-light cameras 114A, 114B has a different frontward facing field of view which are overlapping to enable generation of three-dimensional depth images; for example, left visible-light camera 114A captures a left field of view 111A and right visible-light camera 114B captures a right field of view 111B. Generally, a "field of view" is the part of the scene that is visible through the camera at a particular position and orientation in space. The fields of view 111A and 111B have an overlapping field of view 304 (FIG. 3). Objects or object features outside the field of view 111A, 111B when the visible-light camera captures the image are not recorded in a raw image (e.g., photograph or picture). The field of view describes an angle range or extent at which the image sensor of the visible-light camera 114A, 114B picks up electromagnetic radiation of a given scene in a captured image of the given scene. Field of view can be expressed as the angular size of the view cone; i.e., an angle of view. The angle of view can be measured horizontally, vertically, or diagonally.
- In an example configuration, one or both visible-light cameras 114A, 114B has a field of view of 100° and a resolution of 480×480 pixels. The "angle of coverage" describes the angle range that a lens of visible-light cameras 114A, 114B or infrared camera 410 (see FIG. 2A) can effectively image. Typically, the camera lens produces an image circle that is large enough to cover the film or sensor of the camera completely, possibly including some vignetting (e.g., a darkening of the image toward the edges when compared to the center). If the angle of coverage of the camera lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edge, and the effective angle of view will be limited to the angle of coverage.
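- As a side illustration of the geometry behind these terms (standard camera geometry, not a formula taken from this disclosure), the angle of view for a sensor dimension d behind a lens of focal length f is 2·arctan(d / 2f), and the usable field of view cannot exceed the lens's angle of coverage; the sensor and lens values below are hypothetical.

```python
import math

def angle_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Angle of view for one sensor dimension (horizontal, vertical, or diagonal)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

def effective_fov_deg(sensor_dim_mm, focal_length_mm, angle_of_coverage_deg):
    """The usable field of view is capped by the lens's angle of coverage."""
    return min(angle_of_view_deg(sensor_dim_mm, focal_length_mm), angle_of_coverage_deg)

# Hypothetical numbers: a 4.6 mm-wide sensor behind a 1.9 mm lens (~101°).
print(round(effective_fov_deg(4.6, 1.9, angle_of_coverage_deg=110), 1))
```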
- Examples of such visible-light cameras 114A, 114B include digital camera elements such as a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor and a digital VGA camera (video graphics array) capable of resolutions of 480p (e.g., 640×480 pixels), 720p, 1080p, or greater. Other examples include visible-light cameras 114A, 114B that can capture high-definition (HD) video at a high frame rate (e.g., thirty to sixty frames per second, or more) and store the recording at a resolution of 1216 by 1216 pixels (or greater).
- The eyewear device 100 may capture image sensor data from the visible-light cameras 114A, 114B along with geolocation data, digitized by an image processor, for storage in a memory. The visible-light cameras 114A, 114B capture respective left and right raw images in the two-dimensional space domain that comprise a matrix of pixels on a two-dimensional coordinate system that includes an X-axis for horizontal position and a Y-axis for vertical position. Each pixel includes a color attribute value (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); and a position attribute (e.g., an X-axis coordinate and a Y-axis coordinate).
- In order to capture stereo images for later display as a three-dimensional projection, the image processor 412 (FIG. 4) may be coupled to the visible-light cameras 114A, 114B to receive and store the visual image information. The image processor 412, or another processor, controls operation of the visible-light cameras 114A, 114B to act as a stereo camera simulating human binocular vision and may add a timestamp to each image. The timestamp on each pair of images allows display of the images together as part of a three-dimensional projection. Three-dimensional projections produce an immersive, life-like experience that is desirable in a variety of contexts, including virtual reality (VR) and video gaming.
- FIG. 1B is a perspective, cross-sectional view of a right corner 110B of the eyewear device 100 of FIG. 1A depicting the right visible-light camera 114B of the camera system, and a circuit board. FIG. 1C is a side view (left) of an example hardware configuration of an eyewear device 100 of FIG. 1A, which shows a left visible-light camera 114A of the camera system. FIG. 1D is a perspective, cross-sectional view of a left corner 110A of the eyewear device of FIG. 1C depicting the left visible-light camera 114A of the three-dimensional camera, and a circuit board.
- As shown in the example of FIG. 1B, the eyewear device 100 includes the right visible-light camera 114B and a circuit board 140B, which may be a flexible printed circuit board (PCB). A right hinge 126B connects the right corner 110B to a right temple 125B of the eyewear device 100. In some examples, components of the right visible-light camera 114B, the flexible PCB 140B, or other electrical connectors or contacts may be located on the right temple 125B or the right hinge 126B.
- Construction and placement of the left visible-light camera 114A is substantially similar to the right visible-light camera 114B, except the connections and coupling are on the left lateral side 170A. A left hinge 126A connects the left corner 110A to a left temple 125A of the eyewear device 100. In some examples, components of the left visible-light camera 114A, the flexible PCB 140A, or other electrical connectors or contacts may be located on the left temple 125A or the left hinge 126A.
- The left and right corners 110A and 110B each include a corner body 190 and a corner cap, with the corner caps omitted in the cross-sections of FIGS. 1B and 1D. Disposed inside the left and right corners 110A and 110B are various interconnected circuit boards 140A and 140B, such as PCBs or flexible PCBs, that include controller circuits for left and right visible-light cameras 114A and 114B, microphone(s), low-power wireless circuitry (e.g., for wireless short range network communication via Bluetooth™), and high-speed wireless circuitry (e.g., for wireless local area network communication via Wi-Fi). The corners 110A, 110B may be integrated into the frame 105 on the respective lateral sides 170A, 170B (as illustrated) or implemented as separate components attached to the frame 105 on the respective sides 170A, 170B. Alternatively, the corners 110A, 110B may be integrated into temples 125A, 125B attached to the frame 105.
- The left and right visible-light cameras 114A and 114B are coupled to or disposed on respective flexible PCBs 140A and 140B and are covered by a visible-light camera cover lens, which is aimed through opening(s) formed in the frame 105. For example, the left and right rims 107A and 107B of the frame 105 are connected to the left and right corners 110A and 110B and include the openings for the visible-light camera cover lenses. The frame 105 includes a front side configured to face outward and away from the eye of the user. The opening for the visible-light camera cover lens is formed on and through the front or outward-facing side of the frame 105. In the example, the left and right visible-light cameras 114A and 114B each has a respective outward-facing field of view 111A and 111B with a line of sight or perspective that is correlated with the respective left and right eyes of the user of the eyewear device 100. The visible-light camera cover lens can also be adhered to a front side or outward-facing surface of the right corner 110B in which an opening is formed with an outward-facing angle of coverage, but in a different outwardly direction. The coupling can also be indirect via intervening components.
- FIGS. 2A and 2B are perspective views, from the rear, of example hardware configurations of the eyewear device 100, including two different types of image displays. The eyewear device 100 is sized and shaped in a form configured for wearing by a user; the form of eyeglasses is shown in the example. The eyewear device 100 can take other forms and may incorporate other types of frameworks; for example, a headgear, a headset, or a helmet.
- In the eyeglasses example, eyewear device 100 includes a frame 105 including a left rim 107A connected to a right rim 107B via a bridge 106 adapted to be supported by a nose of the user. The left and right rims 107A, 107B include respective apertures 175A, 175B, which hold a respective optical element 180A, 180B, such as a lens and a display device. As used herein, the term "lens" is meant to include transparent or translucent pieces of glass or plastic having curved or flat surfaces that cause light to converge or diverge or that cause little or no convergence or divergence.
- Although shown as having two optical elements 180A, 180B, the eyewear device 100 can include other arrangements, such as a single optical element (or it may not include any optical element 180A, 180B), depending on the application or the intended user of the eyewear device 100. As further shown, eyewear device 100 includes a left corner 110A adjacent the left lateral side 170A of the frame 105 and a right corner 110B adjacent the right lateral side 170B of the frame 105.
- In one example, the image display of optical assembly 180A, 180B includes an integrated image display. As shown in FIG. 2A, each optical assembly 180A, 180B includes a suitable display matrix 177, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or any other such display. Each optical assembly 180A, 180B also includes an optical layer or layers 176, which can include lenses, optical coatings, prisms, mirrors, waveguides, optical strips, and other optical components in any combination. The optical layers 176A, 176B, . . . 176N (shown as 176A-N in FIG. 2A and herein) can include a prism having a suitable size and configuration and including a first surface for receiving light from a display matrix and a second surface for emitting light to the eye of the user. The prism of the optical layers 176A-N extends over all or at least a portion of the respective apertures 175A, 175B formed in the left and right rims 107A, 107B to permit the user to see the second surface of the prism when the eye of the user is viewing through the corresponding left and right rims 107A, 107B. The first surface of the prism of the optical layers 176A-N faces upwardly from the frame 105 and the display matrix 177 overlies the prism so that photons and light emitted by the display matrix 177 impinge the first surface. The prism is sized and shaped so that the light is refracted within the prism and is directed toward the eye of the user by the second surface of the prism of the optical layers 176A-N. In this regard, the second surface of the prism of the optical layers 176A-N can be convex to direct the light toward the center of the eye. The prism can optionally be sized and shaped to magnify the image projected by the display matrix 177, and the light travels through the prism so that the image viewed from the second surface is larger in one or more dimensions than the image emitted from the display matrix 177.
- In one example, the optical layers 176A-N may include an LCD layer that is transparent (keeping the lens open) unless and until a voltage is applied that makes the layer opaque (closing or blocking the lens). The image processor 412 (FIG. 4) on the eyewear device 100 may execute programming to apply the voltage to the LCD layer in order to produce an active shutter system, making the eyewear device 100 suitable for viewing visual content when displayed as a three-dimensional projection. Technologies other than LCD may be used for the active shutter mode, including other types of reactive layers that are responsive to a voltage or another type of input.
optical assembly 180A, 180B includes a projection image display as shown in FIG. 2B. Each optical assembly 180A, 180B includes a laser projector 150, which is a three-color laser projector using a scanning mirror or galvanometer. During operation, an optical source such as a laser projector 150 is disposed in or on one of the temples 125A, 125B of the eyewear device 100. Optical assembly 180B in this example includes one or more optical strips 155A, 155B, . . . 155N (shown as 155A-N in FIG. 2B) which are spaced apart and across the width of the lens of each optical assembly 180A, 180B or across a depth of the lens between the front surface and the rear surface of the lens. - As the photons projected by the laser projector 150 travel across the lens of each optical assembly 180A, 180B, the photons encounter the optical strips 155A-N. When a particular photon encounters a particular optical strip, the photon is either redirected toward the user's eye, or it passes to the next optical strip. A combination of modulation of the laser projector 150 and modulation of the optical strips may control specific photons or beams of light. In an example, a processor controls the optical strips 155A-N by initiating mechanical, acoustic, or electromagnetic signals. Although shown as having two optical assemblies 180A, 180B, the eyewear device 100 can include other arrangements, such as a single or three optical assemblies, or each optical assembly 180A, 180B may have a different arrangement depending on the application or intended user of the eyewear device 100. - In another example, the
eyewear device 100 shown inFIG. 2B may include two projectors, a left projector (not shown) and aright projector 150. The leftoptical assembly 180A may include aleft display matrix 177 or a left set of optical strips (not shown) which are configured to interact with light from the left projector. Similarly, the rightoptical assembly 180B may include a right display matrix (not shown) or a right set ofoptical strips 155A, 155B, . . . 155N which are configured to interact with light from theright projector 150. In this example, theeyewear device 100 includes a left display and a right display. -
FIG. 3 is a diagrammatic depiction of a three-dimensional scene 306, a left raw image 302A captured by a left visible-light camera 114A, and a right raw image 302B captured by a right visible-light camera 114B. The left field of view 111A may overlap, as shown, with the right field of view 111B. The overlapping field of view 304 represents that portion of the image captured by both cameras 114A, 114B. The term 'overlapping' when referring to field of view means the matrix of pixels in the generated raw images overlap by thirty percent (30%) or more. 'Substantially overlapping' means the matrix of pixels in the generated raw images, or in the infrared image of a scene, overlap by fifty percent (50%) or more. As described herein, the two raw images 302A, 302B may be processed to include a timestamp, which allows the images to be displayed together as part of a three-dimensional projection. - For the capture of stereo images, as illustrated in FIG. 3, a pair of raw red, green, and blue (RGB) images are captured of a real scene 306 at a given moment in time, namely a left raw image 302A captured by the left camera 114A and a right raw image 302B captured by the right camera 114B. When the pair of raw images 302A, 302B are processed (e.g., by the image processor 412), depth images are generated. The generated depth images may be viewed on an optical assembly 180A, 180B of an eyewear device, on another display (e.g., the image display 580 on a mobile device 401), or on a screen. - The generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance). Each vertex may include a color attribute (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); a position attribute (e.g., an X location coordinate, a Y location coordinate, and a Z location coordinate); a texture attribute; a reflectance attribute; or a combination thereof. The texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of color or intensities in a region of vertices of the depth image.
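The 'overlapping' and 'substantially overlapping' thresholds described above lend themselves to a simple numerical check. The following is a minimal sketch, not taken from the patent: the boolean coverage masks, the function name, and the way the masks are derived from the fields of view 111A, 111B are illustrative assumptions.

```python
# Minimal sketch (not from the patent text): checking the 'overlapping' (30% or
# more) and 'substantially overlapping' (50% or more) criteria for a pair of raw
# images. Mask shapes and names are illustrative assumptions.
import numpy as np

OVERLAPPING = 0.30
SUBSTANTIALLY_OVERLAPPING = 0.50

def overlap_fraction(left_mask: np.ndarray, right_mask: np.ndarray) -> float:
    """Fraction of the pixel matrix covered by both raw images.

    Each mask is a boolean matrix marking pixels of the scene captured by the
    respective camera (e.g., derived from the fields of view 111A, 111B).
    """
    both = np.logical_and(left_mask, right_mask).sum()
    return both / left_mask.size

# Example: two 480x640 masks whose shared columns cover 40% of the frame.
left = np.zeros((480, 640), dtype=bool);  left[:, :448] = True
right = np.zeros((480, 640), dtype=bool); right[:, 192:] = True
f = overlap_fraction(left, right)
print(f, f >= OVERLAPPING, f >= SUBSTANTIALLY_OVERLAPPING)   # 0.4 True False
```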
-
FIG. 4 is a functional block diagram of anexample collaboration system 400 that includes a wearable device (e.g., an eyewear device 100), amobile device 401, and aserver system 498 connected viavarious networks 495 such as the Internet. Theserver system 498 may be one or more computing devices as part of a service or network computing system, for example, that include a processor, a memory, and network communication interface to communicate over thenetwork 495 with aneyewear device 100 and amobile device 401. Theserver system 498 includes aserver processor 499 that may be configured to host collaboration sessions. Functionality of theeyewear device 100 ormobile device 401 described herein, such as collaboration processing and serving collaborative objects to users, can be performed by theprocessor 499 of theserver system 498. - The
eyewear device 100 includes one or more visible-light cameras 114A, 114B that capture still images, video images, or both still and video images, as described herein. The cameras 114A, 114B may have a direct memory access (DMA) to high-speed circuitry 430 and function as a stereo camera. The cameras 114A, 114B may be used to capture initial-depth images that may be rendered into three-dimensional (3D) models that are texture-mapped images of a red, green, and blue (RGB) imaged scene. The device 100 may also include a depth sensor 213, which uses infrared signals to estimate the position of objects relative to the device 100. The depth sensor in some examples includes one or more infrared emitter(s) 215 and infrared camera(s) 410. - The eyewear device 100 further includes two image displays of each optical assembly 180A, 180B (one associated with the left side 170A and one associated with the right side 170B). The eyewear device 100 also includes an image display driver 442, an image processor 412, low-power circuitry 420, and high-speed circuitry 430. The image displays of each optical assembly 180A, 180B are for presenting images, including still images, video images, or still and video images. The image display driver 442 is coupled to the image displays of each optical assembly 180A, 180B in order to control the display of images. - The eyewear device 100 additionally includes one or more microphones (not shown) and one or more speakers 413 (e.g., one associated with the left side of the eyewear device and another associated with the right side of the eyewear device). The speakers 413 may be incorporated into the frame 105, temples 125, or corners 110 of the eyewear device 100. The one or more speakers 413 are driven by an audio processor 414 and audio driver 415 under control of low-power circuitry 420, high-speed circuitry 430, or both. The speakers 413 are for presenting audio signals including, for example, a beat track. The audio processor 414 is coupled to the microphones and the speakers 413 in order to control the respective capture and presentation of sound. - The components shown in
FIG. 4 for theeyewear device 100 are located on one or more circuit boards, for example a printed circuit board (PCB) or flexible printed circuit (FPC), located in the frame or temples. Alternatively, or additionally, the depicted components can be located in the corners, rims, hinges, or bridge of theeyewear device 100. - As shown in
FIG. 4, high-speed circuitry 430 includes a high-speed processor 432, a memory 434, and high-speed wireless circuitry 436. In the example, the image display driver 442 is coupled to the high-speed circuitry 430 and operated by the high-speed processor 432 in order to drive the left and right image displays of each optical assembly 180A, 180B. High-speed processor 432 may be any processor capable of managing high-speed communications and operation of any general computing system needed for eyewear device 100. High-speed processor 432 includes processing resources needed for managing high-speed data transfers on high-speed wireless connection 437 to a wireless local area network (WLAN) using high-speed wireless circuitry 436. - In some examples, the high-
speed processor 432 executes an operating system such as a LINUX operating system or other such operating system of theeyewear device 100 and the operating system is stored inmemory 434 for execution. In addition to any other responsibilities, the high-speed processor 432 executes a software architecture for theeyewear device 100 that is used to manage data transfers with high-speed wireless circuitry 436. In some examples, high-speed wireless circuitry 436 is configured to implement Institute of Electrical and Electronic Engineers (IEEE) 802.11 communication standards, also referred to herein as Wi-Fi. In other examples, other high-speed communications standards may be implemented by high-speed wireless circuitry 436. - The low-
power circuitry 420 includes a low-power processor 422 and low-power wireless circuitry 424. The low-power wireless circuitry 424 and the high-speed wireless circuitry 436 of theeyewear device 100 can include short-range transceivers (Bluetooth™ or Bluetooth Low-Energy (BLE)) and wireless wide, local, or wide-area network transceivers (e.g., cellular or Wi-Fi).Mobile device 401, including the transceivers communicating via a low-power wireless connection 425 and the high-speed wireless connection 437, may be implemented using details of the architecture of theeyewear device 100, as can other elements of thenetwork 495. -
Memory 434 includes any storage device capable of storing various data and applications, including, among other things, camera data generated by the left and right visible-light cameras 114A, 114B, the infrared camera(s) 410, the image processor 412, and images generated for display by the image display driver 442 on the image display of each optical assembly 180A, 180B. Although the memory 434 is shown as integrated with high-speed circuitry 430, the memory 434 in other examples may be an independent, standalone element of the eyewear device 100. In certain such examples, electrical routing lines may provide a connection through a chip that includes the high-speed processor 432 from the image processor 412 or low-power processor 422 to the memory 434. In other examples, the high-speed processor 432 may manage addressing of memory 434 such that the low-power processor 422 will boot the high-speed processor 432 any time that a read or write operation involving memory 434 is needed. - As shown in FIG. 4, the high-speed processor 432 of the eyewear device 100 can be coupled to the camera system (visible-light cameras 114A, 114B), the image display driver 442, the user input device 491, and the memory 434. - The output components of the
eyewear device 100 include visual elements, such as the left and right image displays associated with each lens or optical assembly 180A, 180B as described in FIGS. 2A and 2B (e.g., a display such as a liquid crystal display (LCD), a plasma display panel (PDP), a light emitting diode (LED) display, a projector, or a waveguide). The eyewear device 100 may include a user-facing indicator (e.g., an LED, a loudspeaker 413, or a vibrating actuator), or an outward-facing signal (e.g., an LED, a loudspeaker 413). The image displays of each optical assembly 180A, 180B are driven by the image display driver 442. In some example configurations, the output components of the eyewear device 100 further include additional indicators such as audible elements (e.g., loudspeakers 413), tactile components (e.g., an actuator such as a vibratory motor to generate haptic feedback), and other signal generators. For example, the device 100 may include a user-facing set of indicators and an outward-facing set of signals. The user-facing set of indicators are configured to be seen or otherwise sensed by the user of the device 100. For example, the device 100 may include an LED display positioned so the user can see it, one or more speakers 413 positioned to generate a sound the user can hear, or an actuator to provide haptic feedback the user can feel. The outward-facing set of signals are configured to be seen or otherwise sensed by an observer near the device 100. Similarly, the device 100 may include an LED, a loudspeaker, or an actuator that is configured and positioned to be sensed by an observer. - The input components of the
eyewear device 100 may include alphanumeric input components (e.g., a touch screen or touchpad configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric-configured elements), pointer-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a button switch, a touch screen or touchpad that senses the location, force or location and force of touches or touch gestures, or other tactile-configured elements), visual input components (e.g., cameras 114/420), and audio input components (e.g., a microphone), and the like. Themobile device 401 and theserver system 498 may include alphanumeric, pointer-based, tactile, audio, and other input components. - In some examples, the
eyewear device 100 includes a collection of motion-sensing components referred to as an inertial measurement unit 472. The motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip. The inertial measurement unit (IMU) 472 in some example configurations includes an accelerometer, a gyroscope, and a magnetometer. The accelerometer senses the linear acceleration of the device 100 (including the acceleration due to gravity) relative to three orthogonal axes (x, y, z). The gyroscope senses the angular velocity of the device 100 about three axes of rotation (pitch, roll, yaw). Together, the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw). The magnetometer, if present, senses the heading of the device 100 relative to magnetic north. The position of the device 100 may be determined by location sensors, such as a GPS unit, one or more transceivers to generate relative position coordinates, altitude sensors or barometers, and other orientation sensors. Such positioning system coordinates can also be received over the wireless connections 425, 437 from the mobile device 401 via the low-power wireless circuitry 424 or the high-speed wireless circuitry 436. - The
IMU 472 may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and computes a number of useful values about the position, orientation, and motion of the device 100. For example, the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z), and integrated again to obtain the position of the device 100 (in linear coordinates x, y, and z). The angular velocity data from the gyroscope can be integrated to obtain the position of the device 100 (in spherical coordinates). The programming for computing these useful values may be stored in memory 434 and executed by the high-speed processor 432 of the eyewear device 100.
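To make the double-integration step above concrete, the following is a minimal sketch, not taken from the patent, of integrating accelerometer samples twice to estimate per-axis velocity and position; the class name, field names, and the simple Euler integration are illustrative assumptions rather than the actual programming of the IMU 472.

```python
# Hedged sketch: double-integrating accelerometer samples to estimate velocity
# and position per axis (x, y, z). Names and the Euler integration scheme are
# assumptions, not the firmware of the IMU 472.
from dataclasses import dataclass

@dataclass
class ImuSample:
    ax: float  # linear acceleration on x (gravity removed), m/s^2
    ay: float
    az: float
    dt: float  # seconds since the previous sample

def integrate(samples):
    """Return (velocity, position) tuples after integrating all samples."""
    vx = vy = vz = 0.0
    px = py = pz = 0.0
    for s in samples:
        # First integration: acceleration -> velocity
        vx += s.ax * s.dt; vy += s.ay * s.dt; vz += s.az * s.dt
        # Second integration: velocity -> position (linear coordinates x, y, z)
        px += vx * s.dt; py += vy * s.dt; pz += vz * s.dt
    return (vx, vy, vz), (px, py, pz)

velocity, position = integrate([ImuSample(0.0, 0.0, 0.5, 0.01) for _ in range(100)])
print(velocity, position)
```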
- The eyewear device 100 may optionally include additional peripheral sensors, such as biometric sensors, specialty sensors, or display elements integrated with eyewear device 100. For example, peripheral device elements may include any I/O components including output components, motion components, position components, or any other such elements described herein. For example, the biometric sensors may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), to measure bio signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), or to identify a person (e.g., identification based on voice, retina, facial characteristics, fingerprints, or electrical bio signals such as electroencephalogram data), and the like. - The
mobile device 401 may be a smartphone, tablet, laptop computer, access point, or any other such device capable of connecting witheyewear device 100 using both a low-power wireless connection 425 and a high-speed wireless connection 437.Mobile device 401 is connected toserver system 498 andnetwork 495. Thenetwork 495 may include any combination of wired and wireless connections. - The illustrated
collaboration system 400, as shown in FIG. 4, includes a computing device, such as mobile device 401, coupled to an eyewear device 100 over a network. The collaboration system 400 includes a memory for storing instructions and a processor for executing the instructions. Execution of the instructions of the collaboration system 400 by the processor 432 configures the eyewear device 100 to cooperate with the mobile device 401. The collaboration system 400 may utilize the memory 434 of the eyewear device 100 or the memory elements 540A, 540B, 540C of the mobile device 401 (FIG. 5). Also, the collaboration system 400 may utilize the processor elements 432, 422 of the eyewear device 100 or the central processing unit (CPU) 540 of the mobile device 401 (FIG. 5). In addition, the collaboration system 400 may further utilize the memory and processor elements of the server system 498. In this aspect, the memory and processing functions of the collaboration system 400 can be shared or distributed across the processors and memories of the eyewear device 100, the mobile device 401, and the server system 498 to implement functionality described herein. - The
memory 434, in some example implementations, includes or is coupled to a hand gesture library 480, as described herein. The process of detecting a hand shape or gesture, in some implementations, involves comparing the pixel-level data in one or more frames of video data captured by the eyewear device 100 or mobile device 401 to the hand shapes and gestures stored in the library 480 until a good match is found. A gesture may be a static gesture that can be detected in one or a few frames of data or a dynamic gesture that is detected over the course of two or more frames of data. - The
memory 434 additionally includes, in some example implementations, an element animation application 910, a localization system 915, an image processing system 920, and a collaboration application 925. In a collaboration system 400 in which a camera is capturing frames of video data 900, the element animation application 910 configures the processor 432 to control the movement of a series of virtual items 700 on a display in response to detecting one or more inputs, e.g., IMU data, captured images, and hand shapes or gestures. The localization system 915 configures the processor 432 to obtain localization data for use in determining the position of the eyewear device 100 relative to the physical environment. The localization data may be derived from a series of images, an IMU unit 472, a GPS unit, or a combination thereof. The image processing system 920 configures the processor 432 to present a captured image on a display of an optical assembly 180A, 180B in cooperation with the image display driver 442 and the image processor 412. The collaboration application 925 configures the processor 432 to implement collaboration functions described herein. -
FIG. 5 is a high-level functional block diagram of an examplemobile device 401.Mobile device 401 includes aflash memory 540A which stores programming to be executed by theCPU 540 to perform all or a subset of the functions described herein. - The
mobile device 401 may include acamera 570 that comprises at least two visible-light cameras (first and second visible-light cameras with overlapping fields of view) or at least one visible-light camera and a depth sensor with substantially overlapping fields of view. Themobile device 401 may additionally include aspeaker 571.Flash memory 540A may further include multiple images or video, which are generated via thecamera 570. - As shown, the
mobile device 401 includes animage display 580, amobile display driver 582 to control theimage display 580, and adisplay controller 584. In the example ofFIG. 5 , theimage display 580 includes a user input layer 591 (e.g., a touchscreen) that is layered on top of or otherwise integrated into the screen used by theimage display 580. - Examples of touchscreen-type mobile devices that may be used include (but are not limited to) a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or other portable device. However, the structure and operation of the touchscreen-type devices is provided by way of example; the subject technology as described herein is not intended to be limited thereto. For purposes of this discussion,
FIG. 5 therefore provides a block diagram illustration of the examplemobile device 401 with a user interface that includes a touchscreen input layer 891 for receiving input (by touch, multi-touch, or gesture, and the like, by hand, stylus, or other tool), acamera 570 for capturing images of objects (including hands of the user and potential virtual content), and animage display 580 for displaying content. - As shown in
FIG. 5 , themobile device 401 includes at least one digital transceiver (XCVR) 510, shown as WWAN XCVRs, for digital wireless communications via a wide-area wireless mobile communication network. Themobile device 401 also includes additional digital or analog transceivers, such as short-range transceivers (XCVRs) 520 for short-range network communication, such as via NFC, VLC, DECT, ZigBee, Bluetooth™, or Wi-Fi. For example,short range XCVRs 520 may take the form of any available two-way wireless local area network (WLAN) transceiver of a type that is compatible with one or more standard protocols of communication implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11. - To generate location coordinates for positioning of the
mobile device 401, the mobile device 401 can include a global positioning system (GPS) receiver. Alternatively, or additionally, the eyewear device 100 or the mobile device 401 can utilize either or both the short range XCVRs 520 and WWAN XCVRs 510 for generating location coordinates for positioning. For example, cellular network, Wi-Fi, or Bluetooth™ based positioning systems can generate very accurate location coordinates, particularly when used in combination. Such location coordinates can be transmitted between the eyewear device 100 or mobile device 401 over one or more network connections via XCVRs 510, 520. - The
mobile device 401 in some examples includes a collection of motion-sensing components referred to as an inertial measurement unit (IMU) 572 for sensing the position, orientation, and motion of theclient device 401. The motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip. The inertial measurement unit (IMU) 572 in some example configurations includes an accelerometer, a gyroscope, and a magnetometer. The accelerometer senses the linear acceleration of the client device 401 (including the acceleration due to gravity) relative to three orthogonal axes (x, y, z). The gyroscope senses the angular velocity of theclient device 401 about three axes of rotation (pitch, roll, yaw). Together, the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw). The magnetometer, if present, senses the heading of theclient device 401 relative to magnetic north. - The
IMU 572 may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and computes a number of useful values about the position, orientation, and motion of the client device 401. For example, the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z), and integrated again to obtain the position of the client device 401 (in linear coordinates x, y, and z). The angular velocity data from the gyroscope can be integrated to obtain the position of the client device 401 (in spherical coordinates). The programming for computing these useful values may be stored in one or more memory elements 540A, 540B, 540C and executed by the CPU 540 of the client device 401. - The
transceivers 510, 520 (i.e., the network communication interface) conform to one or more of the various digital wireless communication standards utilized by modern mobile networks. Examples of WWAN transceivers 510 include (but are not limited to) transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and 3rd Generation Partnership Project (3GPP) network technologies including, for example and without limitation, 3GPP type 2 (or 3GPP2) and LTE, at times referred to as "4G." For example, the transceivers 510, 520 provide two-way wireless communication of information including digitized audio signals, still image and video signals, web page information for display as well as web-related inputs, and various types of mobile message communications to/from the mobile device 401. - The
mobile device 401 further includes a microprocessor that functions as a central processing unit (CPU); shown asCPU 540 inFIG. 5 . A processor is a circuit having elements structured and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components could be used, the examples utilize components forming a programmable CPU. A microprocessor for example includes one or more integrated circuit (IC) chips incorporating the electronic elements to perform the functions of the CPU. TheCPU 540, for example, may be based on any known or available microprocessor architecture, such as a Reduced Instruction Set Computing (RISC) using an ARM architecture, as commonly used today in mobile devices and other portable electronic devices. Of course, other arrangements of processor circuitry may be used to form theCPU 540 or processor hardware in smartphone, laptop computer, and tablet. - The
CPU 540 serves as a programmable host controller for themobile device 401 by configuring themobile device 401 to perform various operations, for example, in accordance with instructions or programming executable byCPU 540. For example, such operations may include various general operations of the mobile device, as well as operations related to the programming for applications on the mobile device. Although a processor may be configured by use of hardwired logic, typical processors in mobile devices are general processing circuits configured by execution of programming. - The
mobile device 401 includes a memory or storage system, for storing programming and data. In the example, the memory system may include aflash memory 540A, a random-access memory (RAM) 540B, andother memory components 540C, as needed. TheRAM 540B serves as short-term storage for instructions and data being handled by theCPU 540, e.g., as a working data processing memory. Theflash memory 540A typically provides longer-term storage. - Hence, in the example of
mobile device 401, theflash memory 540A is used to store programming or instructions for execution by theCPU 540. Depending on the type of device, themobile device 401 stores and runs a mobile operating system through which specific applications are executed. Examples of mobile operating systems include Google Android, Apple iOS (for iPhone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry OS, or the like. - As shown in
FIG. 5 , theCPU 540 of themobile device 401 may be coupled to acamera system 570, amobile display driver 582, auser input layer 591, and amemory 540A. Components and functionality of theeyewear device 100 described herein can be incorporated into themobile device 401. Likewise, components and functionality of themobile device 401 described herein may be incorporated into theeyewear device 100. - The
processor 432 within the eyewear device 100 or the processor 540 within the mobile device 401 may construct a map of the environment surrounding the respective device, determine a location of the device within the mapped environment, and determine a relative position of the device to one or more objects in the mapped environment. The processor 432/540 may construct the map and determine location and position information using a conventional simultaneous localization and mapping (SLAM) algorithm applied to data received from one or more sensors. Sensor data includes images received from one or both of the cameras 114A, 114B or camera(s) 570, distance(s) received from a laser range finder, position information received from a GPS unit, motion and acceleration data received from an IMU 472/572, or a combination of data from such sensors, or from other sensors that provide data useful in determining positional information. - In the context of augmented reality, a SLAM algorithm is used to construct and update a map of an environment, while simultaneously tracking and updating the location of a device (or a user) within the mapped environment. The mathematical solution can be approximated using various statistical methods, such as particle filters, Kalman filters, extended Kalman filters, and covariance intersection. In a system that includes a high-definition (HD) video camera that captures video at a high frame rate (e.g., thirty frames per second), the SLAM algorithm updates the map and the location of objects at least as frequently as the frame rate; in other words, calculating and updating the mapping and localization thirty times per second.
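The per-frame predict-and-correct cycle mentioned above can be illustrated with a deliberately simplified, one-dimensional Kalman-style update. The sketch below is not the patent's SLAM algorithm; the variable names, noise values, and the single scalar state are illustrative assumptions that only show the shape of an update running once per camera frame.

```python
# Illustrative sketch only: a one-dimensional Kalman-style predict/correct loop,
# standing in for the per-frame update a SLAM pipeline runs at the camera frame
# rate (e.g., 30 frames per second). Names and noise values are assumptions.
def predict(x, p, velocity, dt, process_noise):
    """Propagate the position estimate using IMU-style motion data."""
    return x + velocity * dt, p + process_noise

def correct(x, p, measurement, measurement_noise):
    """Blend in a camera-derived position measurement."""
    k = p / (p + measurement_noise)            # Kalman gain
    return x + k * (measurement - x), (1.0 - k) * p

x, p = 0.0, 1.0                                # initial estimate and variance
dt = 1.0 / 30.0                                # one frame at 30 frames per second
for frame in range(30):                        # one second of tracking
    x, p = predict(x, p, velocity=0.6, dt=dt, process_noise=0.01)
    x, p = correct(x, p, measurement=0.02 * (frame + 1), measurement_noise=0.05)
print(round(x, 3), round(p, 4))
```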
-
FIG. 6 depicts an example physical environment 600 along with elements that are useful when using a SLAM application and other types of tracking applications (e.g., natural feature tracking (NFT)). Although the following example is provided with reference to eyewear device 100, the example can be implemented in a similar manner in a mobile device 401. A user 602 of eyewear device 100 is present in an example physical environment 600 (which, in FIG. 6, is an interior room). The processor 432 of the eyewear device 100 determines its position with respect to one or more objects 604 within the environment 600 using captured images, constructs a map of the environment 600 using a coordinate system (x, y, z) for the environment 600, and determines its position within the coordinate system. Additionally, the processor 432 determines a head pose (roll, pitch, and yaw) of the eyewear device 100 within the environment by using two or more location points (e.g., three location points 606 a, 606 b, and 606 c) associated with a single object 604 a, or by using one or more location points 606 associated with two or more objects 604 a, 604 b, 604 c. The processor 432 of the eyewear device 100 may position a virtual object 608 (such as the key shown in FIG. 6) within the environment 600 for viewing during an augmented reality experience such as a collaborative augmented reality experience where each user has a respective augmented reality device (e.g., eyewear device 100 or mobile device 401). - The
localization system 915 in some examples associates a virtual marker 610 a with a virtual object 608 in the environment 600. In augmented reality, markers are registered at locations in the environment to assist devices with the task of tracking and updating the location of users, devices, and objects (virtual and physical) in a mapped environment. Markers are sometimes registered to a high-contrast physical object, such as a relatively dark object (e.g., the framed picture 604 a) mounted on a lighter-colored wall, to assist cameras and other sensors with the task of detecting the marker. The markers may be preassigned or may be assigned by the eyewear device 100 upon entering the environment. - Markers can be encoded with or otherwise linked to information. A marker might include position information, a physical code (such as a bar code or a QR code; either visible to the user or hidden), or a combination thereof. A set of data associated with the marker is stored in the
memory 434 of the eyewear device 100. The set of data includes information about the marker 610 a, the marker's position (location and orientation), one or more virtual objects, or a combination thereof. The marker position may include three-dimensional coordinates for one or more marker landmarks 616 a, such as the corner of the generally rectangular marker 610 a shown in FIG. 6. The marker location may be expressed relative to real-world geographic coordinates, a system of marker coordinates, a position of the eyewear device 100, or other coordinate system. The one or more virtual objects associated with the marker 610 a may include any of a variety of material, including still images, video, audio, tactile feedback, executable applications, interactive user interfaces and experiences, and combinations or sequences of such material. Any type of content capable of being stored in a memory and retrieved when the marker 610 a is encountered or associated with an assigned marker may be classified as a virtual object in this context. The key 608 shown in FIG. 6, for example, is a virtual object displayed as a still image, either 2D or 3D, at a marker location.
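The per-marker data set described above can be pictured as a small record. The following is a hedged sketch of one possible layout; the class names, field names, and example values are illustrative assumptions rather than the patent's actual data structure.

```python
# Hedged sketch of one way the per-marker data set described above could be laid
# out in memory. Field names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualObject:
    name: str                 # e.g., "key 608"
    kind: str                 # "still_image", "video", "audio", "application", ...
    asset_uri: str            # where the content is stored

@dataclass
class MarkerRecord:
    marker_id: str                                   # e.g., "610a"
    position: Tuple[float, float, float]             # location in marker/world coordinates
    orientation: Tuple[float, float, float]          # roll, pitch, yaw
    landmarks: List[Tuple[float, float, float]] = field(default_factory=list)   # e.g., corners 616a
    virtual_objects: List[VirtualObject] = field(default_factory=list)

marker_610a = MarkerRecord(
    marker_id="610a",
    position=(1.2, 0.8, 2.5),
    orientation=(0.0, 0.0, 0.0),
    landmarks=[(1.0, 0.6, 2.5), (1.4, 0.6, 2.5), (1.4, 1.0, 2.5), (1.0, 1.0, 2.5)],
    virtual_objects=[VirtualObject("key 608", "still_image", "assets/key_608.png")],
)
print(marker_610a.marker_id, len(marker_610a.virtual_objects))
```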
- In one example, the marker 610 a may be registered in memory as being located near and associated with a physical object 604 a (e.g., the framed work of art shown in FIG. 6). In another example, the marker may be registered in memory as being a particular position with respect to the eyewear device 100. -
FIGS. 7, 8, and 9 are illustrations of an examplecollaborative object 700 being developed by adding virtual content 702 during a collaboration period of a collaboration session for use in describing the steps of the methods illustrated inFIGS. 10 and 11 below (e.g., to create a virtual time capsule). Although a box is used for thecollaborative object 700 in many of the examples described herein, any virtual object may be selected for use as thecollaborative object 700. Additionally, although the illustrations depict aneyewear device 100 as the physically remote device it is to be understood that the functionality described below can be implemented using other physically remote devices such asmobile device 401. -
FIG. 7 provides a perspective view of an example collaborative object 700 in the form of a box in a first state (closed) that may be manipulated in three dimensions 701 with a hand 651 (e.g., through detected gestures in images or touch inputs on a touchscreen) based on corresponding movements in three dimensions 681. For example, the hand 651 may be rotated to rotate the collaborative object. In some examples, an extended index finger may be detected adjacent the collaborative object and a corresponding audible signal will be presented via a speaker when the user makes a tapping gesture. The location of the eyewear device may also be tracked in three dimensions 840 within an environment 600 so that the overlays generated for presentation on the display 180B are more realistic. The hand 651 may be predefined to be the left hand, as shown. In some implementations, the system includes a process for selecting and setting the hand, right or left, which will serve as the hand 651 to be detected. -
FIG. 8 provides a perspective view of the examplecollaborative object 700 in a second state (open) with associated virtual content 702 (watch face 702 a,urn 702 b,book 702 c, othervirtual content 702 d-f) added during a collaboration period. Thehand 652 is illustrated in the open position. This position of thehand 652 or the transition of thehand 651 in the relaxed position (FIG. 7 ) to thehand 652 in the open position may be set to correspond to opening thecollaborative object 700 such that when this hand position or hand gesture is detected, thecollaborative object 700 transitions to an open state. -
FIG. 9 provides a perspective view of the examplecollaborative object 700 in the first state (closed) with virtual content 702 added to exterior surfaces of thecollaborative object 700. Thehand 653 is illustrated in a closed position. This position of thehand 653 or the transition of thehand 651 in the relaxed position (FIG. 7 ) to thehand 653 in the closed position may be set to correspond to closing thecollaborative object 700 such that when this hand position or hand gesture is detected, thecollaborative object 700 transitions to a closed state. - The process of detecting and tracking includes detecting the
hand 651/652/653, over time, in various postures, in a set or series of captured frames of video data. In this context, detecting refers to and includes detecting a hand in as few as one frame of video data, as well as detecting the hand, over time, in a subset or series of frames of video data. Accordingly, in some implementations, the process includes detecting ahand 651 in a particular posture in one or more of the captured frames of video data. In other implementations, the process includes detecting thehand 651/652/653, over time, in various postures, in a subset or series of captured frames of video data. -
FIG. 10 is a flow chart 1000 depicting an example method of developing a collaborative object 700 during a collaboration period of a collaboration session including multiple physically remote devices (e.g., eyewear devices 100, mobile devices 401, or a combination thereof). In an example, the steps of FIG. 10 are performed by the processor 499 of the server system 498 (see FIG. 4) accessible by the physically remote devices. In other examples, one or more steps may be performed by processors 432 and 540 of the physically remote devices or a combination of the processors of the server system 498 and the physically remote device(s) (acting as a processor to implement the step(s)). One or more of the steps shown and described may be performed simultaneously, in a series, in an order other than shown and described, or in conjunction with additional steps. Some steps may be omitted or, in some applications, repeated. - At
block 1002, the processor receives user parameters for the collaborative session. In an example, the processor receives user parameters from a physically remote device of a host user, where the host user designates the user parameters through their physically remote device during a server system 498 connection. The user parameters include identifiers for the users that are permitted to access the collaborative session. User parameters may also include access levels identifying what individuals have access to during the collaborative session. Additional details regarding setting up and maintaining access levels are described below with reference to the steps of flow chart 1100 (FIG. 11). - At
block 1004, the processor receives object parameters. In an example, the processor receives object parameters from a physically remote device of a host user (or other user with a suitable access level), where the user designates the object parameters through their physically remote device during a server system 498 connection. The object parameters include identifiers identifying the object to be used as the collaborative object 700 (e.g., a box as illustrated in FIGS. 7-9). Other object parameters may include a material for the object (e.g., cardboard, metal, glass) or a time parameter providing a time window or deadline (e.g., in the form of a clock value 710 or a time bar 712) during which the virtual content 702 can be added to the collaborative object 700 (after which the virtual content 702 can no longer be added to the collaborative object 700).
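The object parameters just described (an object identifier, a material, and a collaboration time window) could be packaged in many ways. The following is a minimal, hedged sketch of one possible representation sent to the collaboration server; the field names, the JSON shape, and the deadline handling are illustrative assumptions.

```python
# Hedged sketch: one possible representation of the block 1004 object parameters
# (object identifier, material, and a collaboration time window). Field names,
# the JSON shape, and the deadline handling are assumptions.
import json
from datetime import datetime, timedelta, timezone

def build_object_parameters(object_id: str, material: str, collaboration_hours: float) -> str:
    """Package object parameters for submission to the collaboration server."""
    deadline = datetime.now(timezone.utc) + timedelta(hours=collaboration_hours)
    params = {
        "object_id": object_id,                 # e.g., "box" as in FIGS. 7-9
        "material": material,                   # e.g., "cardboard", "metal", "glass"
        "deadline_utc": deadline.isoformat(),   # after this, content can no longer be added
    }
    return json.dumps(params)

print(build_object_parameters("box", "cardboard", collaboration_hours=48))
```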
- The processor of the server system 498 may present the physically remote device via network 495 with a list of available virtual objects 702 for selection by the user through their device, and the selection is received by the processor of the server system 498. Alternatively, the user may send a virtual object 702 (e.g., a 3D image) they generated on their physically remote device to the server system 498, where the processor 499 of the server system 498 designates the received virtual object 702 as the collaborative object 700 upon receipt. - At
block 1006, the processor provides access to the collaborative object 700. The processor provides access to the collaborative object 700 through server connections with the physically remote devices based on the access level associated with each of the devices. In an example, the processor 499 develops the collaborative object 700 responsive to the object parameters received and stores the collaborative object 700 in a location accessible to the physically remote devices. The processor provides access to the collaborative object 700 based on the level of access associated with the user of the physically remote device. When a user accesses the collaborative object 700, the processor sends a file containing the collaborative object 700 that the physically remote device uses to generate an overlay for presentation on a display of the physically remote devices, such as display 180A-B of the eyewear device 100 or display 580 of the mobile device 401. The user may then interact with the representation of the collaborative object 700 on their display (e.g., using a hand gesture such as depicted in FIG. 7). - At
block 1008, the processor receives design parameters. The processor receives design parameters from the physically remote devices having suitable permission levels accessing the collaborative object 700. In an example, the processor sends a file to the physically remote device containing the collaborative object 700. The physically remote device generates an overlay for display that the user can interact with to add design parameters to the collaborative object 700. The user may then, for example, select an image (e.g., from their camera, such as camera 114A-B or camera 570) and select a surface of the collaborative object 700 where, upon selection of the surface, the selected image is associated with the collaborative object 700 on the physically remote device. When the user makes a change on their device, the added/changed design parameter is communicated by the physically remote device to the server system 498 via network 495. - At
block 1010, the processor updates thecollaborative object 700 responsive to the design parameters. The processor updates thecollaborative object 700 in response to changes received from the physically remote devices vianetwork 495. In an example, upon receipt of the added/changed design parameter from the physically remote device, the processor associates the added/changed design parameter with thecollaborative object 700 in the location accessible to the physically remote devices. - At
block 1012, the processor receives virtual content 702. The processor receives virtual content 702 from the physically remote devices having suitable permission levels for accessing thecollaborative object 700. In an example, the processor sends a file to the physically remote device containing thecollaborative object 700. The physically remote device generates an overlay for display that the user can interact with to add virtual content 702 to thecollaborative object 700. The user may then, for example, add visual virtual content 702 by selecting an image (e.g., from their camera) and performing an action (e.g., drag and drop the image on thecollaborative object 700 or double tap on the object) to associate the virtual content 702 with thecollaborative object 700 on the physically remote device. Additionally, audio virtual content may be added to the video virtual content by, for example, pressing and holding the video virtual content and speaking into a microphone where the audio received while depressing the video virtual content is associated with the video virtual content. When the user makes a change on their device, the added virtual content 702 is communicated by the physically remote device to theserver system 498 vianetwork 495. - In an example, the users may associate virtual content 702 with the
collaborative object 700 by, for example, dragging and dropping the virtual content 702 onto a surface of thecollaborative object 700. In one example, using aneyewear device 100, theeyewear device 100 may recognize hand gestures and the user may manipulate the displayedcollaborative object 700 ondisplay 180A-B and select the virtual content 702 via hand gestures captured and processed by theeyewear device 100. In another example, using amobile device 401, themobile device 401 may interpret instructions received via thetouchscreen 580 of themobile device 401. The user may manipulate thecollaborative object 700 and select the virtual content 702 by touching/tapping thetouchscreen 580 with their finger to select the virtual content 702 and by dragging their finger to move the virtual content 702 onto the collaborative object 700 (which may associate the virtual content 702 with the collaborative object 700). - At
block 1014, the processor associates the virtual content 702 with thecollaborative object 700. The processor associates the virtual content 702 with thecollaborative object 700 by updating thecollaborative object 700 in response to changes received from the physically remote devices. In an example, upon receipt of the added virtual content 702 from the physically remote device, the processor associates the added virtual content 702 with thecollaborative object 700 in the location accessible to the physically remote devices. - At
block 1016, the processor stores thecollaborative object 700. Theprocessor 499 stores thecollaborative object 700 in memory accessible to the physically remote devices via anetwork 495 during a collaborative session. - At
block 1018, the processor provides access to thecollaborative object 700. Theprocessor 499 provides access to thecollaborative object 700 in memory accessible to the physically remote devices via anetwork 495. In an example, the processor checks credentials (e.g., user ID) of users requesting access and permits access if the credentials match credentials associated with the collaborative session for thecollaborative object 700. - At
block 1020, the processor presents the collaborative object 700. The processor 499 presents the collaborative object 700 to the physically remote devices via the network 495. In an example, the processor sends a file including the collaborative object 700 (and associated virtual content or links to such content) to a physically remote device having access to the collaborative session in response to a request from the physically remote device, which the physically remote device uses to generate an overlay including the collaborative object 700 and associated virtual content for presentation on the display of the remote physical devices. In one example, the associated virtual content is presented all at once when the collaborative object 700 is placed in an open state. In another example, the associated virtual content is presented in sequential order based on time stamps added when the virtual content was associated with the collaborative object 700.
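The sequential-reveal option just described amounts to sorting the associated virtual content by the timestamp recorded when each item was added. The following is a hedged sketch of that ordering; the function name, the tuple layout, and the example dates are illustrative assumptions.

```python
# Hedged sketch of the sequential-reveal option described at block 1020:
# associated virtual content is presented in the order it was added to the
# collaborative object. Names and the tuple layout are assumptions.
from datetime import datetime

# (timestamp added, content label) pairs associated with the collaborative object
associated_content = [
    (datetime(2022, 8, 30, 14, 5), "watch face 702a"),
    (datetime(2022, 8, 29, 9, 12), "urn 702b"),
    (datetime(2022, 8, 31, 18, 40), "book 702c"),
]

def reveal_in_sequence(content):
    """Yield virtual content in the order it was associated with the object."""
    for added_at, label in sorted(content, key=lambda item: item[0]):
        yield added_at, label

for added_at, label in reveal_in_sequence(associated_content):
    print(added_at.isoformat(), label)
```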
- FIG. 11 is a flow chart listing the steps of an example selective collaboration object access method. In an example, the steps of FIG. 11 are performed by the processor 499 of the server system 498 (see FIG. 4) accessible by the physically remote devices. In other examples, one or more steps may be performed by processors 432 and 540 of the physically remote devices or a combination of processors 499 of the server system 498 and the physically remote device(s) (acting as a processor to implement the step(s)). One or more of the steps shown and described may be performed simultaneously, in a series, in an order other than shown and described, or in conjunction with additional steps. Some steps may be omitted or, in some applications, repeated. - At
block 1102, the processor receives user identifiers. The processor receives user identifiers for users to be associated with a collaborative session. In one example, a user (the host) accesses the server system 498 using a physically remote device. Using the physically remote device, the user creates a collaborative session and designates other users for participation in the collaboration. For example, the host (user A) may invite another user (user B) to participate in a collaboration to prepare content for another user (user C) with the intent to provide that user with the content at a later date. - At
block 1104, the processor receives access parameters for the users. The processor receives access parameters for the users from the host or another user with acceptable access levels. The access parameters indicate a respective access level to the collaborative object 700 of each of the users that allows the respective access level of at least one of the users to be different than the respective access level of another user. For example, the host may have a first access level (enabling access to the collaborative object 700 in order to associate virtual content 704 and view associated virtual content 704 during a collaboration period) and another user may have a second level of access to the collaborative object 700 that is less than the first level of access (e.g., it only permits access after the collaboration period has ended; or it permits access to the collaborative object 700 during the collaboration period, but not to associated virtual content 704 until after the collaboration period has ended). In one example, the host receives an access level enabling access by default, and the host may grant other users access rights by providing the access rights to the processor 499 of the server system 498 via a physically remote device. - At
block 1106, the processor maintains a table of access parameters. In an example, the processor maintains a table in cloud storage including an identifier for each of the users and their respective access levels based on access parameters supplied by the host. In the example where the host (user A; A_ID) invites another user (user B; B_ID) to collaborate on a project for yet another user (user C; C_ID), the processor may initially create a table including the information identified in TABLE 1 below: -
TABLE 1

    USER      PERMISSION
    A_ID      Yes
    B_ID      No
    C_ID      No
In TABLE 1, a permission level of "Yes" indicates a first level of access and a permission level of "No" indicates a second level of access. As shown in TABLE 1, the host (A_ID) initially is the only user with access rights providing access to, for example, the virtual content 704 associated with the collaborative object 700 or the ability to add virtual content 704.
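The table of access parameters described at block 1106 can be pictured as a simple mapping from user identifier to permission level. The following is a hedged sketch of such a table held in memory; the function names and the string permission values are illustrative assumptions (the text above describes the table as living in cloud storage).

```python
# Hedged sketch of the access-parameter table from TABLE 1 maintained as a simple
# mapping. Function names and string values are assumptions.
from typing import Optional

permissions = {"A_ID": "Yes", "B_ID": "No", "C_ID": "No"}   # TABLE 1

def set_permission(table: dict, user_id: str, level: str) -> None:
    """Record (or update) a user's permission level in the table."""
    table[user_id] = level

def permission_for(table: dict, user_id: str) -> Optional[str]:
    """Return the stored permission level, or None if the user is unknown."""
    return table.get(user_id)

print(permission_for(permissions, "A_ID"))   # Yes
print(permission_for(permissions, "D_ID"))   # None: not part of the session
```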
- At block 1108, the processor maintains a timer. The processor may receive an initial time value from the host. The timer may be used to track the time remaining during a collaboration period. In one example, the processor provides a time value corresponding to the initial time or the tracked time to the physically remote devices, which may use the time value to generate overlays depicting or representing time remaining (e.g., a clock value 710 or a time bar 712). - At
block 1110, the processor provides access to the collaborative object 700. The processor provides access to the collaborative object 700 by the users based on their respective access levels. In an example, when the user attempts to access the collaborative object 700, the processor will compare the user identification (ID) for the user to values in the table. If the user's ID (e.g., D_ID) is not found in the table, that user will not be able to access the collaborative object 700. If another user (e.g., C_ID) is found in the table, but has a "No" permission level, that user will only be provided with access commensurate with that level of access. If another user (e.g., A_ID) is found in the table with a "Yes" permission level, that user will be provided with access to the collaborative object 700 commensurate with that level of access. 
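The lookup logic just described can be layered on the kind of permission table sketched after TABLE 1. The following hedged sketch shows that check; the access-level names returned here ("denied", "limited", "full") are illustrative assumptions rather than terms from the patent.

```python
# Hedged sketch of the block 1110 access check against a permission table of the
# kind sketched above. Returned access-level names are assumptions.
def access_for(table: dict, user_id: str) -> str:
    """Map a requesting user's table entry to the access they receive."""
    level = table.get(user_id)
    if level is None:          # e.g., D_ID: not in the table at all
        return "denied"
    if level == "Yes":         # e.g., A_ID: first level of access
        return "full"          # may view and associate virtual content
    return "limited"           # e.g., C_ID: second level of access only

table = {"A_ID": "Yes", "B_ID": "No", "C_ID": "No"}
for user in ("A_ID", "C_ID", "D_ID"):
    print(user, access_for(table, user))
```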
- At block 1112, the processor identifies an access level change. The processor identifies an access level change for at least one other of the users. In one example, the host grants another user (e.g., B_ID) access rights by sending an access rights change request, including the user ID to be changed and the new access level, to the processor 499 of the server system 498 via a physically remote device. In another example, all users with a "Yes" permission level must request an access level change for a user with another permission level. The access level change is identified by the processor upon receipt of the request(s). In another example, the processor may identify a change based on the expiration of a collaboration period or a preset "reveal" time (e.g., based on a monitored timer). For example, where user A and user B are preparing content for user C, user A and user B may initially have a "Yes" permission level and the processor may automatically identify an access level change request for user C once the collaboration period has ended or the reveal time has been reached. - At
block 1114, the processor changes the respective access level. The processor changes the respective access level of the at least one other of the users responsive to the access level change. In response to an identified access level change (e.g., changing the B_ID permission level to "Yes"), the processor updates the table as shown in TABLE 2 below: -
TABLE 2

    USER      PERMISSION
    A_ID      Yes
    B_ID      Yes
    C_ID      No
-
FIG. 12 is a flow chart 1200 including steps of a method for use in the collaboration application 925. A processor 499 of server system 498 enables users to associate the virtual content 702 with the collaborative object 700 via network 495, where the processor 499 maintains a timer 704 with the clock value 710 indicative of when a collaboration period ends (e.g., to create a sense of urgency, excitement, and motivation among the users during a collaboration session). In one example, the collaboration session includes a collaboration period, during which virtual content 702 can be associated with the collaborative object 700, followed by an access period, during which the virtual content 702 is fixed for viewing by users and can no longer be added. The collaboration period may be an object parameter specified by a user (see, for example, block 1004 of FIG. 10 and the related description). - At
block 1202, theprocessor 499 provides the users with access to thecollaborative object 700 during the session. This is seen inFIGS. 7-9 . In an example, the users are provided authorization vianetwork 495 to join the session and collaborate on the generation of thecollaborative object 700. Theprocessor 499 serves, vianetwork 495, thecollaborative object 700 to the physically remote devices, which present thecollaborative object 700 to the user, e.g., as an overlay on the display 180 of theeyewear device 100 or thedisplay 580 of themobile device 401. In one example, users cannot access the associated virtual content 702 received from other users until the collaboration period ends, such as when theclock value 710 is zero or thetime bar 712 is completed. In another example, a subset of users can access the associated virtual content 702 added by others in that subset of users. - At
block 1204, the processor 499 enables the users to associate the virtual content 702 with the collaborative object 700. An example of this is depicted in FIGS. 7-9. In an example, the users contribute to the joint collaboration by using their physically remote devices to add and modify virtual content 702 at chosen locations of the collaborative object 700, which is shown in the display of their respective devices, such as on display 180A-B of the eyewear device 100 and display 580 of the mobile device 401. - At
block 1206, the processor 499 maintains the timer 704 with the clock value 710, which is indicative of when the collaboration period ends. This is seen in FIGS. 7-9. In an example, the timer 704 displays the clock value 710 as a countdown on the respective device display, e.g., to create excitement and a sense of urgency to complete the generation of the collaborative object 700. The countdown is indicative of when the collaboration period ends. In an example, the countdown can be a countdown of time and can be presented as the time value 710, a corresponding timeline shown as the time bar 712, or both on a device display. In an example, the duration of the collaboration period may be a week, or as short as a few hours. The countdown can change color when the countdown is close to the end of the collaboration period, such as changing from a green color to a red color.
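The countdown behavior described at block 1206 (a clock value, a proportional time bar, and a color that switches from green to red near the end) can be sketched compactly. The following is a hedged, illustrative sketch; the function name, bar characters, and the 10% color threshold are assumptions rather than values from the patent.

```python
# Hedged sketch of the block 1206 countdown: remaining time formatted as a clock
# value (like 710), a proportional time bar (like 712), and a color that flips
# from green to red near the end. Names and the 10% threshold are assumptions.
def countdown_overlay(remaining_s: float, total_s: float, bar_width: int = 20):
    remaining_s = max(0.0, remaining_s)
    hours, rest = divmod(int(remaining_s), 3600)
    minutes, seconds = divmod(rest, 60)
    clock_value = f"{hours:02d}:{minutes:02d}:{seconds:02d}"        # clock value
    filled = int(bar_width * (1.0 - remaining_s / total_s))
    time_bar = "#" * filled + "-" * (bar_width - filled)            # time bar
    color = "red" if remaining_s <= 0.10 * total_s else "green"     # near the end
    return clock_value, time_bar, color

total = 48 * 3600                       # a two-day collaboration period
for remaining in (total, total // 2, total // 20):
    print(countdown_overlay(remaining, total))
```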
- At block 1208, the processor 499 provides the users access to the collaborative object 700 and the associated virtual content 702 at the end of the collaboration period. This is seen in FIGS. 7-9. In an example, at the end of the collaboration period, such as when the countdown expires (e.g., the clock value 710 shows zero or the time bar 712 reaches the right end of the window), the processor 499 provides all users with access (e.g., by changing access rights) to the finished collaborative object 700 in order to reveal the results of the collaboration to all users.
- Machine learning refers to an algorithm that improves incrementally through experience. By processing a large number of different input datasets, a machine-learning algorithm can develop improved generalizations about particular datasets, and then use those generalizations to produce an accurate output or solution when processing a new dataset. Broadly speaking, a machine-learning algorithm includes one or more parameters that adjust or change in response to new experiences, thereby improving the algorithm incrementally, in a process similar to learning.
- In the context of computer vision, mathematical models attempt to emulate the tasks accomplished by the human visual system, with the goal of using computers to extract information from an image and achieve an accurate understanding of the contents of the image. Computer vision algorithms have been developed for a variety of fields, including artificial intelligence and autonomous navigation, to extract and analyze data in digital images and video.
- Deep learning refers to a class of machine-learning methods that are based on or modeled after artificial neural networks. An artificial neural network is a computing system made up of a number of simple, highly interconnected processing elements (nodes), which process information by their dynamic state response to external inputs. A large artificial neural network might have hundreds or thousands of nodes.
- A convolutional neural network (CNN) is a type of neural network that is frequently applied to analyzing visual images, including digital photographs and video. The connectivity pattern between nodes in a CNN is typically modeled after the organization of the human visual cortex, which includes individual neurons arranged to respond to overlapping regions in a visual field. A neural network that is suitable for use in the determining process described herein is based on one of the following architectures: VGG16, VGG19, ResNet50, Inception V3, Xception, or another CNN-compatible architecture.
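As a non-authoritative illustration, the sketch below instantiates one of the named backbones (VGG16) as the basis for a small hand-shape classifier using Keras; the framework choice, input size, and number of gesture classes are assumptions rather than details of this disclosure.

```python
# Hypothetical sketch: using a named CNN backbone (VGG16) as the base of a
# hand-shape classifier. Layer sizes and class count are assumed for illustration.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # e.g., 10 hand-gesture classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```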
- In the machine-learning example, at
block 1008 and block 1012, the processor 432 determines whether a detected series of hand shapes substantially matches a predefined hand gesture using a machine-trained algorithm referred to as a hand feature model. The processor 432 is configured to access the hand feature model, trained through machine learning, and to apply the hand feature model to identify and locate features of the hand shape in one or more frames of the video data.
- In one example implementation, the trained hand feature model receives a frame of video data which contains a detected hand shape and abstracts the image in the frame into layers for analysis. Data in each layer is compared to hand gesture data stored in the
hand gesture library 480, layer by layer, based on the trained hand feature model, until a good match is identified.
- In one example, the layer-by-layer image analysis is executed using a convolutional neural network. In a first convolution layer, the CNN identifies learned features (e.g., hand landmarks, sets of joint coordinates, and the like). In a second convolution layer, the image is transformed into a plurality of images, in which the learned features are each accentuated in a respective sub-image. In a pooling layer, the sizes and resolutions of the images and sub-images are reduced in order to isolate portions of each image that include a possible feature of interest (e.g., a possible palm shape, a possible finger joint). The values and comparisons of images from the non-output layers are used to classify the image in the frame. Classification, as used herein, refers to the process of using a trained model to classify an image according to the detected hand shape. For example, an image may be classified as a "touching action" if the detected series of bimanual hand shapes matches the touching gesture stored in the
library 480.
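A rough sketch of applying a trained hand feature model to classify a video frame against a gesture library follows; the model file, label set, input size, and preprocessing steps are hypothetical and are not taken from this disclosure.

```python
# Hypothetical sketch: classifying a detected hand shape in a video frame with a
# trained CNN and mapping the result to a gesture label. Paths and labels are assumed.
import numpy as np
import tensorflow as tf

GESTURE_LIBRARY = {0: "touching action", 1: "scissor gesture", 2: "open palm"}  # assumed labels

model = tf.keras.models.load_model("hand_feature_model.h5")  # hypothetical trained model

def classify_frame(frame: np.ndarray) -> str:
    """Classify one video frame according to the detected hand shape."""
    x = tf.image.resize(frame, (224, 224)) / 255.0   # scale to the assumed model input size
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    return GESTURE_LIBRARY[int(np.argmax(probs))]
```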
- Any of the functionality described herein for the eyewear device 100, the mobile device 401, and the server system 498 can be embodied in one or more computer software applications or sets of programming instructions, as described herein. According to some examples, "function," "functions," "application," "applications," "instruction," "instructions," or "programming" are program(s) that execute functions defined in the programs. Various programming languages can be employed to develop one or more of the applications, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, a third-party application (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may include mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application can invoke API calls provided by the operating system to facilitate functionality described herein.
- Hence, a machine-readable medium may take many forms of tangible storage medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer devices or the like, such as may be used to implement the client device, media gateway, transcoder, etc. shown in the drawings. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
- Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.
- It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “includes,” “including,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
- Unless otherwise stated, any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. Such amounts are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like may vary by as much as plus or minus ten percent from the stated amount or range.
- In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, the subject matter to be protected lies in less than all features of any single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
- While the foregoing has described what are considered to be the best mode and other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that they may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all modifications and variations that fall within the true scope of the present concepts.
Claims (20)
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/900,792 US20240070299A1 (en) | 2022-08-31 | 2022-08-31 | Revealing collaborative object using countdown timer |
| KR1020257010139A KR20250053949A (en) | 2022-08-31 | 2023-07-21 | Optional collaboration object access |
| EP23861058.8A EP4581457A1 (en) | 2022-08-31 | 2023-07-21 | Revealing collaborative object using countdown timer |
| PCT/US2023/028387 WO2024049575A1 (en) | 2022-08-31 | 2023-07-21 | Revealing collaborative object using countdown timer |
| CN202380062929.4A CN119948433A (en) | 2022-08-31 | 2023-07-21 | Use countdown timer to show collaborating objects |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/900,792 US20240070299A1 (en) | 2022-08-31 | 2022-08-31 | Revealing collaborative object using countdown timer |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20240070299A1 (en) | 2024-02-29 |
Family
ID=89996802
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/900,792 Pending US20240070299A1 (en) | 2022-08-31 | 2022-08-31 | Revealing collaborative object using countdown timer |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20240070299A1 (en) |
| EP (1) | EP4581457A1 (en) |
| KR (1) | KR20250053949A (en) |
| CN (1) | CN119948433A (en) |
| WO (1) | WO2024049575A1 (en) |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10429923B1 (en) * | 2015-02-13 | 2019-10-01 | Ultrahaptics IP Two Limited | Interaction engine for creating a realistic experience in virtual reality/augmented reality environments |
| US11256336B2 (en) * | 2020-06-29 | 2022-02-22 | Facebook Technologies, Llc | Integration of artificial reality interaction modes |
| WO2022147146A1 (en) * | 2021-01-04 | 2022-07-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments |
- 2022
  - 2022-08-31 US US17/900,792 patent/US20240070299A1/en active Pending
- 2023
  - 2023-07-21 KR KR1020257010139A patent/KR20250053949A/en active Pending
  - 2023-07-21 WO PCT/US2023/028387 patent/WO2024049575A1/en not_active Ceased
  - 2023-07-21 CN CN202380062929.4A patent/CN119948433A/en active Pending
  - 2023-07-21 EP EP23861058.8A patent/EP4581457A1/en active Pending
Patent Citations (78)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6275863B1 (en) * | 1999-01-25 | 2001-08-14 | International Business Machines Corp. | System and method for programming and executing long running transactions |
| US6535909B1 (en) * | 1999-11-18 | 2003-03-18 | Contigo Software, Inc. | System and method for record and playback of collaborative Web browsing session |
| US20090178073A1 (en) * | 2001-01-02 | 2009-07-09 | Nds Limited | Method and system for control of broadcast content access |
| US20030105734A1 (en) * | 2001-11-16 | 2003-06-05 | Hitchen Stephen M. | Collaborative file access management system |
| US7725490B2 (en) * | 2001-11-16 | 2010-05-25 | Crucian Global Services, Inc. | Collaborative file access management system |
| US20100235763A1 (en) * | 2002-10-31 | 2010-09-16 | Litera Technology Llc. | Collaborative hierarchical document development and review system |
| US9280553B2 (en) * | 2003-02-28 | 2016-03-08 | Microsoft Technology Licensing, Llc | Method to delay locking of server files on edit |
| US7360164B2 (en) * | 2003-03-03 | 2008-04-15 | Sap Ag | Collaboration launchpad |
| US20070033419A1 (en) * | 2003-07-07 | 2007-02-08 | Cryptography Research, Inc. | Reprogrammable security for controlling piracy and enabling interactive content |
| US20060047814A1 (en) * | 2004-08-27 | 2006-03-02 | Cisco Technology, Inc. | System and method for managing end user approval for charging in a network environment |
| US20060080432A1 (en) * | 2004-09-03 | 2006-04-13 | Spataro Jared M | Systems and methods for collaboration |
| US20120260347A1 (en) * | 2005-07-08 | 2012-10-11 | At&T Intellectual Property I, L.P. | Methods, Systems, and Devices for Securing Content |
| US20080013793A1 (en) * | 2006-07-13 | 2008-01-17 | Northrop Grumman Corporation | Gesture recognition simulation system and method |
| US20120293544A1 (en) * | 2011-05-18 | 2012-11-22 | Kabushiki Kaisha Toshiba | Image display apparatus and method of selecting image region using the same |
| US9824335B1 (en) * | 2011-06-16 | 2017-11-21 | Google Inc. | Integrated calendar and conference application for document management |
| US8752138B1 (en) * | 2011-08-31 | 2014-06-10 | Google Inc. | Securing user contact information in collaboration session |
| US8468129B2 (en) * | 2011-09-23 | 2013-06-18 | Loyal3 Holdings, Inc. | Asynchronous replication of databases of peer networks |
| US8788680B1 (en) * | 2012-01-30 | 2014-07-22 | Google Inc. | Virtual collaboration session access |
| US20130219458A1 (en) * | 2012-02-17 | 2013-08-22 | Vasudevan Ramanathan | Methods and systems for secure digital content distribution and analytical reporting |
| US9137220B2 (en) * | 2012-10-19 | 2015-09-15 | International Business Machines Corporation | Secure sharing and collaborative editing of documents in cloud based applications |
| US9607438B2 (en) * | 2012-10-22 | 2017-03-28 | Open Text Corporation | Collaborative augmented reality |
| US20140280498A1 (en) * | 2013-03-14 | 2014-09-18 | Synacor, Inc. | Media sharing communications system |
| US20150020151A1 (en) * | 2013-07-09 | 2015-01-15 | Contentraven, Llc | Systems and methods for trusted sharing |
| US20150035823A1 (en) * | 2013-07-31 | 2015-02-05 | Splunk Inc. | Systems and Methods for Using a Three-Dimensional, First Person Display to Convey Data to a User |
| US20150113571A1 (en) * | 2013-10-22 | 2015-04-23 | Time Warner Cable Enterprises Llc | Methods and apparatus for content switching |
| US20150135332A1 (en) * | 2013-11-11 | 2015-05-14 | Adobe Systems Incorporated | Deferred Delivery of Electronic Signature Agreements |
| US9398059B2 (en) * | 2013-11-22 | 2016-07-19 | Dell Products, L.P. | Managing information and content sharing in a virtual collaboration session |
| US20160044073A1 (en) * | 2014-03-26 | 2016-02-11 | Unanimous A.I., Inc. | Suggestion and background modes for real-time collaborative intelligence systems |
| US11288711B1 (en) * | 2014-04-29 | 2022-03-29 | Groupon, Inc. | Collaborative editing service |
| US10574442B2 (en) * | 2014-08-29 | 2020-02-25 | Box, Inc. | Enhanced remote key management for an enterprise in a cloud-based environment |
| US20180012032A1 (en) * | 2014-10-23 | 2018-01-11 | Pageproof.Com Limited | Encrypted collaboration system and method |
| US20160306816A1 (en) * | 2014-10-29 | 2016-10-20 | Leonard Morales, JR. | System and method for publishing online posts |
| US20160337291A1 (en) * | 2015-05-15 | 2016-11-17 | Samsung Electronics Co., Ltd. | User terminal apparatus, server, and control method thereof |
| US20170054756A1 (en) * | 2015-08-21 | 2017-02-23 | PushPull Technology Limited | Data collaboration |
| US20180183810A1 (en) * | 2015-08-21 | 2018-06-28 | PushPull Technology Limited | Data Collaboration |
| US20170118271A1 (en) * | 2015-10-22 | 2017-04-27 | Ricoh Company, Ltd. | Approach For Sharing Electronic Documents During Electronic Meetings |
| US20170186064A1 (en) * | 2015-12-29 | 2017-06-29 | Dassault Systemes | Personalizing products with social collaboration |
| US20180267940A1 (en) * | 2016-06-28 | 2018-09-20 | Hancom Inc. | Document collaboration apparatus for supporting simultaneous editing of styles for objects and operating method thereof |
| US20180077099A1 (en) * | 2016-09-14 | 2018-03-15 | International Business Machines Corporation | Electronic meeting management |
| US10552801B2 (en) * | 2016-09-27 | 2020-02-04 | Cisco Technology, Inc. | Hard stop indicator in a collaboration session |
| US20180095635A1 (en) * | 2016-10-04 | 2018-04-05 | Facebook, Inc. | Controls and Interfaces for User Interactions in Virtual Spaces |
| US20180096507A1 (en) * | 2016-10-04 | 2018-04-05 | Facebook, Inc. | Controls and Interfaces for User Interactions in Virtual Spaces |
| US20180095636A1 (en) * | 2016-10-04 | 2018-04-05 | Facebook, Inc. | Controls and Interfaces for User Interactions in Virtual Spaces |
| US12380406B1 (en) * | 2017-01-09 | 2025-08-05 | Sykes Enterprises, Incorporated | Adaptive workspace environment |
| US20190108578A1 (en) * | 2017-09-13 | 2019-04-11 | Magical Technologies, Llc | Systems and methods of rewards object spawning and augmented reality commerce platform supporting multiple seller entities |
| US20190099653A1 (en) * | 2017-10-03 | 2019-04-04 | Fanmountain Llc | Systems, devices, and methods employing the same for enhancing audience engagement in a competition or performance |
| US20190130656A1 (en) * | 2017-11-01 | 2019-05-02 | Tsunami VR, Inc. | Systems and methods for adding notations to virtual objects in a virtual environment |
| US20190220177A1 (en) * | 2018-01-16 | 2019-07-18 | Salesforce.Com, Inc. | Accessibility lock and accessibility pause |
| US20230258463A1 (en) * | 2018-04-06 | 2023-08-17 | State Farm Mutual Automobile Insurance Company | Methods and systems for response vehicle deployment |
| US11704652B2 (en) * | 2018-06-21 | 2023-07-18 | Supertab Ag | Method and system for augmented feature purchase |
| US20180350144A1 (en) * | 2018-07-27 | 2018-12-06 | Yogesh Rathod | Generating, recording, simulating, displaying and sharing user related real world activities, actions, events, participations, transactions, status, experience, expressions, scenes, sharing, interactions with entities and associated plurality types of data in virtual world |
| US20200117705A1 (en) * | 2018-10-15 | 2020-04-16 | Dropbox, Inc. | Version history for offline edits |
| US20200374146A1 (en) * | 2019-05-24 | 2020-11-26 | Microsoft Technology Licensing, Llc | Generation of intelligent summaries of shared content based on a contextual analysis of user engagement |
| US20210004491A1 (en) * | 2019-07-03 | 2021-01-07 | Ooma, Inc. | Securing access to user data stored in a cloud computing environment |
| US20210019944A1 (en) * | 2019-07-16 | 2021-01-21 | Robert E. McKeever | Systems and methods for universal augmented reality architecture and development |
| US11604898B2 (en) * | 2019-08-20 | 2023-03-14 | Google Llc | Secure online collaboration |
| US12189823B1 (en) * | 2019-08-20 | 2025-01-07 | Google Llc | Secure online collaboration |
| US20210240372A1 (en) * | 2020-01-31 | 2021-08-05 | Dropbox, Inc. | Data storage scheme switching in a distributed data storage system |
| US20210241529A1 (en) * | 2020-02-05 | 2021-08-05 | Snap Inc. | Augmented reality session creation using skeleton tracking |
| US11012720B1 (en) * | 2020-03-23 | 2021-05-18 | Rovi Guides, Inc. | Systems and methods for managing storage of media content item |
| US10956868B1 (en) * | 2020-06-29 | 2021-03-23 | 5th Kind LLC | Virtual reality collaborative workspace that is dynamically generated from a digital asset management workflow |
| US11178376B1 (en) * | 2020-09-04 | 2021-11-16 | Facebook Technologies, Llc | Metering for display modes in artificial reality |
| US20220086238A1 (en) * | 2020-09-14 | 2022-03-17 | Box, Inc. | Platform-agnostic drag-and-drop operations |
| US20220198765A1 (en) * | 2020-12-22 | 2022-06-23 | Arkh, Inc. | Spatially Aware Environment Interaction |
| US20220222900A1 (en) * | 2021-01-14 | 2022-07-14 | Taqtile, Inc. | Coordinating operations within an xr environment from remote locations |
| US20220231847A1 (en) * | 2021-01-19 | 2022-07-21 | Bank Of America Corporation | Collaborative architecture for secure data sharing |
| US20220317830A1 (en) * | 2021-03-31 | 2022-10-06 | Verizon Patent And Licensing Inc. | Methods and Systems for Providing a Communication Interface to Operate in 2D and 3D Modes |
| US20230041862A1 (en) * | 2021-08-03 | 2023-02-09 | Zhejiang Lab | Cloud-side collaborative multi-mode private data circulation method based on smart contract |
| US20230141680A1 (en) * | 2021-11-10 | 2023-05-11 | Pencil Learning Technologies, Inc. | Multi-user collaborative interfaces for streaming video |
| US11677908B2 (en) * | 2021-11-15 | 2023-06-13 | Lemon Inc. | Methods and systems for facilitating a collaborative work environment |
| US20230244802A1 (en) * | 2022-01-31 | 2023-08-03 | Salesforce, Inc. | Managing permissions for collaborative shared documents |
| US12348499B2 (en) * | 2022-02-23 | 2025-07-01 | Microsoft Technology Licensing, Llc | Secure collaboration with file encryption on download |
| US20240070243A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Authenticating a selective collaborative object |
| US20240069642A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Scissor hand gesture for a collaborative object |
| US20240071020A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Real-world responsiveness of a collaborative object |
| US12019773B2 (en) * | 2022-08-31 | 2024-06-25 | Snap Inc. | Timelapse of generating a collaborative object |
| US12148114B2 (en) * | 2022-08-31 | 2024-11-19 | Snap Inc. | Real-world responsiveness of a collaborative object |
| US20240070301A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Timelapse of generating a collaborative object |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240070300A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Selective collaborative object access based on timestamp |
| US12493705B2 (en) * | 2022-08-31 | 2025-12-09 | Snap Inc. | Selective collaborative object access based on timestamp |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4581457A1 (en) | 2025-07-09 |
| CN119948433A (en) | 2025-05-06 |
| KR20250053949A (en) | 2025-04-22 |
| WO2024049575A1 (en) | 2024-03-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12086324B2 (en) | | Micro hand gestures for controlling virtual and graphical elements |
| US20240320353A1 (en) | | Timelapse of generating a collaborative object |
| US20240393885A1 (en) | | Scissor hand gesture for a collaborative object |
| US12148114B2 (en) | | Real-world responsiveness of a collaborative object |
| US20250307370A1 (en) | | Authenticating a selective collaborative object |
| US20250200201A1 (en) | | Selective collaborative object access |
| US20240070299A1 (en) | | Revealing collaborative object using countdown timer |
| US20240069643A1 (en) | | Physical gesture interaction with objects based on intuitive design |
| US12493705B2 (en) | | Selective collaborative object access based on timestamp |
| US12505239B2 (en) | | Collaborative object associated with a geographical location |
| US20240070302A1 (en) | | Collaborative object associated with a geographical location |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION COUNTED, NOT YET MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION COUNTED, NOT YET MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |