
CN106462250B - Computerized system and method for layering content in a user interface - Google Patents


Info

Publication number
CN106462250B
CN106462250B (application CN201580031865.7A)
Authority
CN
China
Prior art keywords
occluded
virtual environment
occluding
rendering
cast shadow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201580031865.7A
Other languages
Chinese (zh)
Other versions
CN106462250A (en)
Inventor
阿瑞尔·萨克泰-泽尔策
克里斯蒂安·罗伯逊
乔恩·威利
约翰·尼古拉斯·吉特科夫
扎卡里·吉布森
大卫·霍·允·邱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Publication of CN106462250A publication Critical patent/CN106462250A/en
Application granted granted Critical
Publication of CN106462250B publication Critical patent/CN106462250B/en

Classifications

    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/0486: Drag-and-drop
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T 15/60: Shadow generation
    • G06F 2203/04802: 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)

Abstract

Computer-implemented systems and methods are provided for analyzing and determining properties of a virtual environment rendered on a display. The disclosed embodiments include, for example, a method of rendering a virtual environment, the method including operations performed with one or more processors. The operations of the method may include generating a plurality of object layers, the object layers representing allowable height values. The method may also include populating the environment with a plurality of objects, wherein each object is associated with a height value corresponding to one object layer. The method may further include determining whether any two objects form an occluded pair. The method may also include calculating a cast shadow index for each occluded pair that reflects a magnitude of a height difference between the occluding object and the occluded object. The method may also include rendering the virtual environment according to the calculated cast shadow indices.

Description

Computerized system and method for layering content in a user interface
Cross-Reference to Related Applications
This application claims the benefit of priority of U.S. provisional patent application No. 62/016,630, filed June 24, 2014, which is incorporated herein by reference in its entirety.
Background
The present disclosure relates generally to computerized systems and methods for displaying content to a user. More particularly, and not by way of limitation, the disclosed embodiments relate to systems and methods for displaying content in a virtual environment, including a virtual three-dimensional environment.
Today, graphical user interfaces reflect an important method for delivering content and information to users. In the modern digital age, users interact with these interfaces on a variety of devices, including computers, mobile phones, televisions, personal digital assistants, handheld systems, radios, music players, printers, tablets, kiosks, and other devices. Many conventional interfaces typically display content to a user in two dimensions.
Disclosure of Invention
The disclosed embodiments include systems and methods for analyzing, rendering, processing, and determining properties of objects within a virtual environment, including a virtual three-dimensional interactive environment. Aspects of the disclosed embodiments provide systems and methods for creating an object layer within a virtual environment, determining a virtual height of an object based on the object layer, and rendering a projection onto an occluded object based on a (e.g., virtual) apparent height difference between two objects. Aspects of the disclosed embodiments also provide methods and systems for processing object operations within a virtual environment to conform to user expectations.
The disclosed embodiments include, for example, a system for rendering a virtual environment, the system including a memory storing a set of instructions and one or more processors coupled to the memory, the one or more processors configured to execute the set of instructions to perform one or more operations. The operations may include generating a plurality of object layers in the virtual environment, the object layers representing allowable height values within the virtual environment. The operations may also include populating the environment with a plurality of objects, wherein each object is associated with a height value corresponding to one of the object layers. The operations may also include determining whether any two objects form an occluded pair, wherein an occluded pair includes an occluding object and an occluded object, and wherein the occluding object is associated with an occluding object layer having a greater height value than an occluded object layer associated with the occluded object. The operations may also include determining a cast shadow index for each occluded pair, the cast shadow index reflecting a magnitude of a height difference between the occluding object layer and the occluded object layer. The operations may also include rendering the virtual environment according to the calculated cast shadow indices.
The disclosed embodiments may also include, for example, a method for rendering a virtual environment, the method comprising operations performed by one or more processors. Operations of the method may include generating a plurality of object layers in the virtual environment, the object layers representing allowable height values within the virtual environment. The method may also include populating the environment with a plurality of objects, wherein each object is associated with a height value corresponding to one of the object layers. The method may further include determining whether any two objects form an occluded pair, wherein an occluded pair includes an occluding object and an occluded object, and wherein the occluding object is associated with an occluding object layer having a greater height value than an occluded object layer associated with the occluded object. The method may further include determining a cast shadow index for each occluded pair, the cast shadow index reflecting a magnitude of a height difference between the occluding object layer and the occluded object layer. The method may also include rendering the virtual environment according to the calculated cast shadow indices.
The disclosed embodiments may also include, for example, a system for rendering a drag-and-drop process in a virtual three-dimensional environment displayed on a mobile device. The system may include a memory storing a set of instructions and one or more processors configured to execute the set of instructions to perform one or more operations. The operations may include generating a plurality of object layers representing allowable height values within the virtual environment. The operations may also include detecting that a user has pressed a drag object associated with a drag object layer corresponding to a height value less than that of a drop container layer associated with a drop container object. The operations may also include rendering a new drop container having a height value less than that of the drag object layer. The operations may also include detecting a drag-and-drop action and rendering the virtual three-dimensional environment according to the detected drag-and-drop action.
The disclosed embodiments may also include, for example, methods for rendering a drag-and-drop process in a virtual three-dimensional environment displayed on a mobile device. The method may include generating a plurality of object layers representing allowable height values within the virtual environment. The method may further include detecting that a user has pressed a drag object associated with a drag object layer corresponding to a height value less than that of a drop container layer associated with a drop container object. The method may also include rendering a new drop container having a height value less than that of the drag object layer. The method may further include detecting a drag-and-drop action and rendering the virtual three-dimensional environment according to the detected drag-and-drop action.
Additional features and advantages of the disclosed embodiments are set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosed embodiments. The features and advantages of the disclosed embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosed embodiments, as claimed.
The accompanying drawings form a part of the specification. The accompanying drawings illustrate some embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosed embodiments as set forth in the appended claims.
Drawings
FIG. 1 illustrates an exemplary three-dimensional graphical user interface displayed on a client device consistent with disclosed embodiments.
FIG. 2 illustrates an exemplary computer system for performing the processes consistent with the disclosed embodiments.
FIG. 3 illustrates an exemplary three-dimensional graphical user interface displayed on a client device consistent with the disclosed embodiments.
FIG. 4 illustrates an exemplary system priority hierarchy for rendering objects in an interface consistent with the disclosed embodiments.
FIG. 5 illustrates a flow chart of an exemplary process for rendering projections based on object height differences consistent with the disclosed embodiments.
FIG. 6 illustrates an exemplary object layer environment and rendering effects consistent with the disclosed embodiments.
Fig. 7A-7D illustrate an exemplary occluding object processing environment and rendering effects consistent with the disclosed embodiments.
FIG. 8 illustrates a flow diagram of an exemplary object layer creation and rendering process consistent with the disclosed embodiments.
FIGS. 9A-9D illustrate an exemplary drag-and-drop process in an occluded environment consistent with the disclosed embodiments.
FIG. 9E illustrates a flow diagram of an exemplary drag-and-drop process in an occluded environment consistent with the disclosed embodiments.
FIGS. 10A-10B illustrate an exemplary window object rendering environment consistent with the disclosed embodiments.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure that are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
The disclosed embodiments relate to methods and apparatus for analyzing, determining, rendering, and processing properties of objects within a virtual environment, including a virtual three-dimensional interactive environment. In some aspects, the disclosed embodiments may determine these properties based on virtual heights, priority parameters, created object layers, and/or other such information consistent with the disclosed embodiments. In certain aspects, the disclosed embodiments may perform these processes to provide a virtual three-dimensional environment that enhances the user experience beyond what conventional environments and interfaces provide. In some aspects, the disclosed embodiments render the virtual environment with calculated shadows cast on occluded objects to give the appearance of depth. The disclosed embodiments may also cast shadows in accordance with object height, which may be based on the generated object layers, thereby matching the expectations users form from real-world objects and lighting.
Determining object heights and rendering projections in a virtual interactive environment, including a virtual three-dimensional interactive environment, may provide one or more advantages. For example, in a virtual environment where projections may be the primary cues for representing object depth, it may be advantageous to provide processes for rendering, processing, and manipulating the projections in a consistent, standardized, and/or aesthetically pleasing manner. Further, it may be advantageous to create a virtual environment in which the projection reflects the user experience with physical shadows and lighting properties to enhance the user experience and provide a greater sense of immersion.
FIG. 1 illustrates an exemplary three-dimensional graphical user interface displayed on a client device consistent with disclosed embodiments. In some aspects, a three-dimensional interface may differ from a traditional two-dimensional interface in that it allows objects to be associated with height values. In some embodiments, the three-dimensional virtual environment may be associated with an environment depth reflecting the maximum apparent height difference between any two rendered objects. In some embodiments, the three-dimensional virtual environment may include a virtual camera providing a perspective for viewing and rendering the three-dimensional environment. In some aspects, a device displaying such a three-dimensional scene may be configured to indicate the depth of an object via one or more processes (such as projections, occlusion, etc.) consistent with the disclosed embodiments.
FIG. 2 illustrates a block diagram of an exemplary computer system 200 that may be implemented consistent with certain aspects of the disclosed embodiments. For example, in some aspects, computer system 200 may reflect a computer system associated with a device (e.g., a client device of fig. 3) that performs one or more processes disclosed herein. In some embodiments, the computer system 200 may include one or more processors 202 connected to a communications backbone 206, such as a bus or an external communications network (e.g., any medium of digital data communications, such as a LAN, MAN, WAN, cellular network, WiFi network, NFC link, bluetooth, GSM network, PCS network, I/O connection, any wired connection such as USB, and any associated protocol such as HTTP, TCP/IP, RFID, etc.).
In some aspects, computer system 200 may include a main memory 208. Main memory 208 may include a Random Access Memory (RAM), which represents a tangible and non-transitory computer-readable medium that stores a computer program, set of instructions, code, or data for execution by processor 202. Such instructions, computer programs, etc. may include machine code (e.g., produced by a compiler) and/or files containing code that processor 202 may execute with an interpreter and, when executed by processor 202, may cause processor 202 to perform one or more processes or functions consistent with the disclosed embodiments.
In some aspects, main memory 208 may also include or be connected to secondary memory 210. The secondary memory 210 may include a disk drive 212 (e.g., HDD, SSD) and/or a removable storage drive 214, such as a tape drive, flash memory, optical drive, CD/DVD drive, etc. Removable storage drive 214 may read from or write to removable storage unit 218 in a manner known to those skilled in the art. Removable storage unit 218 may represent a magnetic tape, an optical disk, or other storage medium that is read by and written to by removable storage drive 214. Removable storage unit 218 may represent a tangible and non-transitory computer-readable medium that stores a computer program, set of instructions, code or data for execution by processor 202.
In other embodiments, secondary memory 210 may include other means for causing computer programs or other program instructions to be loaded into computer system 200. Such means may include, for example, other removable storage units 218 or interfaces 220. Examples of such means may include a removable memory chip (e.g., EPROM, RAM, ROM, DRAM, EEPROM, flash memory devices, or other volatile or non-volatile memory devices) and associated socket, or other removable storage unit 218 and interface 220 that allow instructions and data to be transferred from the removable storage unit 218 to computer system 200.
Computer system 200 may also include one or more communication interfaces 224. Communications interface 224 may allow software and data to be transferred between computer system 200 and external systems (e.g., in addition to backbone 206). Communication interface 224 may include a modem, a network interface (e.g., an ethernet card), a communications port, a PCMCIA slot and card, or the like. Communication interface 224 may transmit software and data in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communication interface 224. These signals may be provided to communications interface 224 via a communications path (i.e., channel 228). Channel 228 carries signals and may be implemented using wires, cables, optical fibers, RF links, and/or other communication channels. In one embodiment, the signal comprises a data packet sent to the processor 202. Information representing the processed packets may also be sent in the form of signals from processor 202 over communication path 228.
In certain aspects, the computer-implemented methods described herein may be implemented on a single processor of a computer system, such as processor 202 of computer system 200. In other embodiments, these computer-implemented methods may be implemented using one or more processors within a single computer system and/or one or more processors within a separate computer system in communication with a network.
In certain embodiments with respect to FIG. 2, the terms "storage device" and "storage medium" may refer to particular devices, including, but not limited to, main memory 208, secondary memory 210, a hard disk installed in hard disk drive 212, and removable storage unit 218. Moreover, the term "computer-readable medium" can refer to devices including, but not limited to, any combination of hard disk drives 212, main memory 208 and secondary memory 210, and removable storage unit 218, which can provide a computer program and/or a set of instructions to processor 202 of computer system 200. Such computer programs and sets of instructions may be stored on one or more computer readable media. In certain aspects, computer programs and sets of instructions may also be received via communications interface 224 and stored on one or more computer-readable media.
FIG. 3 illustrates an exemplary virtual three-dimensional graphical user interface displayed on a device consistent with the disclosed embodiments. In certain aspects, a device (e.g., client device 310) may include, be associated with, and/or interact with one or more displays (e.g., display 320) for displaying one or more interaction objects (e.g., interaction objects 332A and 332B) to a user.
In some aspects, client device 310 may comprise any computer device, data processing device, or display device consistent with the disclosed embodiments. For example, the device 310 may include a personal computer, a laptop computer, a tablet computer, a notebook computer, a handheld computer, a personal digital assistant, a portable navigation device, a mobile phone, a wearable device, an embedded device, a smart phone, a television, a somatosensory display, a handheld system, a digital radio, a music player, a printer, a kiosk, and any other or alternative computer device capable of processing information and providing information to a display. In certain aspects, client device 310 may be implemented with one or more processors, a computer-based system (e.g., the exemplary computer system of fig. 2), or a display system (e.g., the display described with respect to display 320). In some aspects, client device 310 may include one or more client devices.
In certain aspects, client device 310 may include, be associated with, or interact with one or more displays 320. In some aspects, the display 320 may include a display device or panel for depicting information. For example, the display 320 may include one or more Cathode Ray Tube (CRT) displays, Liquid Crystal Displays (LCDs), plasma displays, Light Emitting Diode (LED) displays, touch screen displays, projection displays (e.g., images projected on a screen or surface, holographic images, etc.), Organic Light Emitting Diode (OLED) displays, Field Emission Displays (FEDs), active matrix displays, Vacuum Fluorescent Displays (VFDs), three-dimensional (3D) displays, electronic paper (electronic ink) displays, micro displays, or any combination of these displays. In some embodiments, display 320 may be included in client device 310. In other embodiments, display 320 may constitute a stand-alone device in communication with client device 310 over a communication network (e.g., as discussed above with respect to FIG. 2).
In certain aspects, the apparatus 310 may be configured to display and render a graphical user interface for providing data, information, pictures, videos, applications, windows, views, objects, and the like to a user. In some embodiments, the interface may include one or more interaction objects (e.g., objects 332A and/or 332B). In some aspects, an interaction object may represent one or more items, units, or packages of information displayed on the interface. For example, the interaction objects (e.g., object 332A) may include application windows (e.g., windows associated with iOS, Microsoft Windows, Google Android, Apple OS X, other proprietary windowing systems, etc.), views, buttons, text boxes, icons, pictures, videos, fields, search fields, notification bars, object containers, or any other visual cue capable of providing information and/or receiving input. In some aspects, an interaction object may include, contain, or comprise other interaction objects. For example, the interaction objects associated with an application window may include other interaction objects associated with the application (e.g., buttons, fields, text, etc.).
As shown in FIG. 3, device 310 may be configured to display one or more depth indicators (e.g., indicators 334A and 334B) on display 320. In some embodiments, a depth indicator may reflect a graphical or visual indication of the apparent depth or height of the corresponding interactive object. In some embodiments, for example, the depth indicators may take the form of projections or inward shadows representing the corresponding interactive objects positioned above or below one another in the virtual three-dimensional environment (e.g., as shown with respect to indicators 334A and 334B). The nature, size, shape, color, range, intensity, consistency, uniformity, opacity, gradient, saturation, brightness, etc. of the displayed depth indicator (e.g., projection) may be determined by processes consistent with the disclosed embodiments. In some aspects, these parameters may vary according to the relative virtual heights of their corresponding interactive objects and other objects (e.g., other interactive objects and other depth indicators) rendered on the interface.
The apparatus 310 may be configured to allow rendered objects (e.g., interactive objects) to be located at any virtual height within the virtual environment. In some embodiments, the virtual height may reflect an apparent height at which the rendered object sits above a bottom position (e.g., representing a lowest possible height value, such as 0, -1, etc.). In some embodiments, the virtual environment may be associated with an environment depth reflecting the maximum apparent height difference between any two rendered objects. In some embodiments, this environment depth may be located some virtual distance from a virtual camera viewing, displaying, and rendering the virtual environment.
In some embodiments, the apparatus 310 may be configured to allow objects (e.g., interactive objects) to be located only within one or more object layers contained in the virtual environment (e.g., within virtual boundaries of the environment). In some aspects, the object layer may represent an allowable height value that the rendered object may have. In some aspects, the object layer may include a continuum (e.g., allowing all possible height values within the virtual environment). In other embodiments, the object layer may include discrete values (e.g., objects may only be located at certain heights within the virtual environment). In some embodiments, the processes implemented in client device 310 may be configured to change the relative height of an object (e.g., change the object layer in which the object resides) in response to, for example, user input, system processes, received data, or other triggers consistent with the disclosed embodiments.
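By way of illustration only (this sketch is not part of the original disclosure), the discrete-layer model described above can be approximated in code by snapping a requested height to the nearest allowed object layer; the Kotlin types, layer names, and height values below are assumptions:

```kotlin
import kotlin.math.abs

// Hypothetical sketch: constraining an object's height to the nearest allowed
// object layer. Layer heights are in density-independent pixels (dp) and are
// illustrative values, not taken from the patent.
data class ObjectLayer(val name: String, val heightDp: Float)

class LayeredEnvironment(private val layers: List<ObjectLayer>) {
    // Snap a requested height to the closest allowed layer (discrete model).
    fun snapToLayer(requestedHeightDp: Float): ObjectLayer =
        layers.minByOrNull { abs(it.heightDp - requestedHeightDp) }
            ?: error("Environment has no object layers")
}

fun main() {
    val env = LayeredEnvironment(
        listOf(
            ObjectLayer("base", 0f),
            ObjectLayer("resting", 2f),
            ObjectLayer("focused", 6f),
            ObjectLayer("pressed", 12f),
        )
    )
    println(env.snapToLayer(7.5f)) // prints the "focused" layer under these assumed values
}
```

A continuous model would simply skip the snapping step and accept any height between the environment's bottom and screen surfaces.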
In some aspects, the object layers may form a hierarchy of allowable height values (e.g., object layers) that represents how the device 310 may render objects in a virtual scene. For example, FIG. 4 illustrates an exemplary system priority hierarchy 400 for rendering objects in an interface consistent with the disclosed embodiments. In some aspects, hierarchy 400 may include one or more volumes that reflect levels of height values (e.g., object layer heights) within the virtual environment. In some embodiments, objects associated with higher priority volumes will appear to the user to be in front of objects associated with lower priority volumes, because higher priority volumes occupy higher object layers than lower priority volumes. In some aspects, a volume may be associated with one or more object layers. Further, in some aspects, different volumes may be associated with different numbers of object layers.
In some embodiments, the volume may be located within a boundary of an environment depth associated with the virtual environment. For example, in one embodiment, a volume (and any object layer associated with the volume) may be located within a virtual boundary defined by a bottom surface 414 (e.g., which represents the lowest height value that an object may have) and a screen surface 412 (e.g., which represents the highest height value that an object may have). In some embodiments, the depth of the virtual environment may be determined from the difference in height of the screen face 412 and the bottom face 414. Consistent with the disclosed embodiments, apparatus 310 may be configured to render objects only within the allowed object layers located between faces 412 and 414.
In some aspects, the volume may represent a general height value corresponding to a level of the rendered object. For example, in some embodiments, the apparatus 310 may be configured to associate rendering objects for high priority system processes (e.g., certain system overlays, alarms, notification bars, and/or any objects included therein) with the system volume 402. In this example, objects located within system volume 402 will be located at an elevation above those objects in other volumes (e.g., these objects will be located in an object layer above the other objects). In some aspects, objects located within the object layer associated with system volume 402 will appear to the user to be "in front of" objects within other volumes because those objects are located in an object layer above the object layer of lower height value. In this manner, each object and/or each object layer may be associated with a respective volume based on the level of the object and/or the level of the object layer.
In another example, the hierarchy 400 may include a context switch volume 404 located below the high-priority system volume 402. In certain aspects, context switch volumes may be associated with certain system and application functions processed on client device 310. For example, in one embodiment, the context switch volume 404 may be associated with: functionality related to intent disambiguation, contemplated content views associated with applications or other volumes (e.g., volumes 406A, 406B, 406C, etc.), and the like.
The hierarchy 400 may also include one or more application volumes (e.g., volumes 406A, 406B, 406C, etc.) associated with lower height values than the context switch volume 404. In some embodiments, an application volume may include objects that are typically associated with application processes running on the client device 310. For example, an application volume may include objects associated with an application (such as text, buttons, fields, windows, views, etc.) or any other object located within a running application.
The hierarchy 400 may include other volumes, such as an application switching volume 408 and a low priority system volume 410. In some aspects, for example, the application switching volume may reflect an object layer associated with recently opened applications or with the process of switching between applications. In some aspects, the low priority system volume 410 may contain objects that are always present, sit in the background, or otherwise belong to the system rather than to any application.
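As a non-limiting illustration of the hierarchy of FIG. 4, the volumes described above could be modeled as an ordered set of height bands; the Kotlin sketch below uses assumed dp ranges that are not taken from the patent:

```kotlin
// Hypothetical sketch of the priority hierarchy of FIG. 4: volumes ordered from
// lowest to highest height, each spanning a band of allowed object layers.
// The dp ranges are illustrative assumptions, not values from the patent.
enum class Volume(val layerRangeDp: ClosedFloatingPointRange<Float>) {
    LOW_PRIORITY_SYSTEM(0f..1f),    // always-present background objects (410)
    APP_SWITCHING(1f..2f),          // recently used applications (408)
    APPLICATION(2f..8f),            // ordinary application content (406A-406C)
    CONTEXT_SWITCH(8f..9f),         // intent disambiguation, content previews (404)
    HIGH_PRIORITY_SYSTEM(9f..10f);  // overlays, alarms, notification bar (402)

    // A volume whose layers start at or above another volume's top appears in front of it.
    fun isAbove(other: Volume): Boolean =
        layerRangeDp.start >= other.layerRangeDp.endInclusive
}
```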
Although FIG. 4 illustrates certain volumes as having a certain size, name, and priority, it should be appreciated that the hierarchy 400 may include any number of volumes located within any allowable height of the virtual environment. Moreover, the use of certain terms with respect to volumes (e.g., "system volumes") is intended to be illustrative and not limiting.
The disclosed embodiments may be implemented to provide systems and methods for determining height values between objects and rendering an environment accordingly. FIG. 5 sets forth a flow chart illustrating an exemplary process for rendering projections based on object height differences consistent with embodiments of the present disclosure. In certain aspects, process 500 may be implemented in a client device 310 implementing one or more computer systems or processors (e.g., computer system 200 of fig. 2).
As shown in FIG. 5, process 500 may include obtaining the heights of two objects rendered in an environment (step 502). In some aspects, process 500 may determine these heights based on the object layer in which the object resides. For example, if the object is located within an object layer associated with a particular height, the process 500 may determine that the object is located at a height equal to the object layer in which it is located. In other embodiments, the process 500 may determine the height of the object based on other parameters associated with the object (e.g., height values stored in a memory, main memory, etc. of the device 310, such as a height associated with a z-coordinate or location).
In some embodiments, the process 500 may compare the heights of the two objects to determine a height difference associated with the two objects (step 504). In some embodiments, the height difference may reflect the apparent distance of one object above or below another object in the virtual environment. For example, if a first object is located in an object layer having a height value of 10 units and a second object is located in an object layer having a height value of 4 units, the process 500 may determine that the first object is located 6 units above the second object. In some embodiments, the units associated with such measurements may constitute density independent pixels, although other groups of units (e.g., centimeters, arbitrary units, etc.) are possible.
In some aspects, the process 500 may include calculating a cast shadow index between two objects based on a height difference between the objects (step 506). In some embodiments, the cast shadow index may represent a characteristic of a shadow cast by an object of a higher height value on an object of a lower height value. In some aspects, the cast shadow index may reflect the magnitude of the height difference between the two objects. For example, the cast shadow index may represent the intensity, color, gradient, size, shape, brightness, etc. of the projection to convey various apparent height differences between two objects in the environment. For example, a first object located directly above a second object may cast a smaller, less pronounced, weaker, or lighter shadow than a third object located at a higher elevation than the first object. In some aspects, objects located in the same and/or adjacent layers cannot cast shadows on each other (e.g., as described with respect to FIGS. 7A-7D). For example, the apparatus 310 may determine that the cast shadow index is zero for two objects whose height difference does not exceed a threshold.
In certain aspects, the process 500 may also be configured to set the cast shadow index to other values. For example, in one embodiment, the apparatus 310 may be configured to determine that certain types of objects (e.g., toolbars, system objects, etc.), objects located within certain object layers, objects having a height difference below some threshold, etc., do not cast a shadow at all. For example, the apparatus 310 may determine that the occluding object is associated with a particular type or kind of object (e.g., a toolbar, a system object, etc.), and may zero all shadows produced by that type (e.g., by modifying the cast shadow index to zero). In another example, the apparatus 310 may be configured to override, limit, increase, change, or otherwise modify the calculated cast shadow indices to a minimum or maximum value (e.g., a shadow intensity limit) for an object of a certain type, located in a certain layer, or having a certain height difference. The apparatus 310 may, for example, limit the cast shadow index associated with this object to a particular shadow intensity limit or range, or the like.
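For illustration, a minimal sketch of how a renderer might map a height difference to a cast shadow index, including the special cases above (non-casting object types and shadow intensity limits); the linear model, threshold, limits, and ObjectKind categories are assumptions, not values from the disclosure:

```kotlin
// Hypothetical mapping from a height difference to a cast shadow index (steps 504-506),
// including non-casting object kinds and shadow intensity limits. The linear model,
// threshold, limits, and ObjectKind set are illustrative assumptions.
data class CastShadowIndex(val opacity: Float, val blurRadiusDp: Float)

enum class ObjectKind { NORMAL, TOOLBAR, SYSTEM }  // TOOLBAR and SYSTEM cast no shadow here

fun castShadowIndex(
    heightDiffDp: Float,              // occluding layer height minus occluded layer height
    occluderKind: ObjectKind,
    minOpacity: Float = 0.05f,        // assumed shadow intensity limits
    maxOpacity: Float = 0.40f,
    thresholdDp: Float = 1.0f         // same or adjacent layers cast no shadow
): CastShadowIndex {
    if (heightDiffDp <= thresholdDp || occluderKind != ObjectKind.NORMAL) {
        return CastShadowIndex(0f, 0f)
    }
    // Larger height differences yield stronger and softer (more blurred) shadows.
    val opacity = (heightDiffDp / 24f).coerceIn(minOpacity, maxOpacity)
    val blurRadius = heightDiffDp * 0.75f
    return CastShadowIndex(opacity, blurRadius)
}
```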
In some embodiments, process 500 may include rendering projections in the virtual environment based on the cast shadow indices (step 508). In some aspects, rendering the projection may include rendering the projection on an object associated with the lower height value (e.g., an occluded object). In some aspects, the rendered projections may be combined with others. For example, if the first object is located above the second object, which in turn is located above the third object, the process 500 may combine the projection from the first object onto the third object with the projection from the second object onto the third object.
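A combination step such as the one described above might be sketched as follows, reusing the CastShadowIndex type from the previous sketch; the additive opacity model and the aggregate limit are assumptions:

```kotlin
// Hypothetical combination of several shadows falling on the same occluded object
// (step 508), e.g., a first object above a second, both above a third.
fun combineShadows(
    indices: List<CastShadowIndex>,
    aggregateOpacityLimit: Float = 0.5f   // assumed aggregate shadow intensity limit
): CastShadowIndex {
    val opacity = indices.sumOf { it.opacity.toDouble() }
        .toFloat()
        .coerceAtMost(aggregateOpacityLimit)
    val blurRadius = indices.maxOfOrNull { it.blurRadiusDp } ?: 0f
    return CastShadowIndex(opacity, blurRadius)
}
```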
FIG. 6 illustrates an exemplary object layer environment 600 and rendering effects consistent with the disclosed embodiments. In some aspects, environment 600 may include a floor 602 that represents the lowest permissible height that objects within environment 600 may have (e.g., the lowest possible object within environment 600). In some aspects, the base 602 may represent the floor 414 associated with a particular environment, but such a relationship is not required. For example, the base 602 may be located above the floor 414, on, for example, the lowest possible object layer associated with the application volume.
In some embodiments, the environment 600 may include objects in a resting state 604 in an object layer located above the base 602. In some aspects, the object layer associated with the resting state 604 may be directly above the base 602 (e.g., have a cast shadow index of zero), but such a relationship is not required. In certain aspects, the apparatus 310 may be configured to determine a height difference between the object in the resting state 604 and the base 602 and render the scene accordingly (e.g., as described with respect to FIG. 5). For example, as shown in FIG. 6, the apparatus 310 may perform a process to determine that an object in the resting state 604 is located in an object layer directly above the base 602 and calculate a cast shadow index (e.g., as shown with respect to the object 610) reflecting that the object does not cast a shadow on the base 602.
In some embodiments, the environment 600 may include objects in one or more object layers above the object layer associated with the resting state 604 and/or the base 602. For example, environment 600 may include objects located in the focused state 606 and/or the pressed state 608. In some aspects, the apparatus 310 may be configured to render objects in focus to highlight, emphasize, or set apart certain objects in the environment 600. The device 310 may determine to place an object in focus based on instructions in an application or operating system running on the device. Similarly, in some embodiments, the apparatus 310 may be configured to move an object that the user presses (e.g., immediately, after a threshold amount of time, etc.) into the object layer associated with the pressed state 608. Further, the device 310 may place an object in the pressed state 608 when the user provides input consistent with a drag-and-drop action (e.g., proceeding in accordance with FIGS. 9A-9E).
The apparatus 310 may be configured to calculate height differences between objects in these states and render the scene accordingly. For example, the apparatus 310 may be configured to determine that the relative height difference between the object in the pressed state 608 and the base 602 is greater than the height difference between the object in the focused state 606 and the base 602. In this example, the apparatus 310 may render the environment in a manner that visually informs the user of this information (e.g., consistent with the cast shadow index). For example, as shown in FIG. 6, the shadow associated with object 614 is larger than the shadow associated with object 612, which indicates that object 614 is located in an object layer above object 612. The device 310 may render the scene to reflect this information in any manner consistent with the disclosed embodiments (e.g., through the intensity, color, range, saturation, etc. of the projection).
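As an illustrative sketch only, the states of FIG. 6 could be mapped to object layers and updated on a press gesture as follows; the layer heights and the long-press threshold are assumptions:

```kotlin
// Hypothetical mapping of the states in FIG. 6 to object layers, and a press handler
// that lifts an object into the pressed layer. Heights and threshold are assumed.
enum class ObjectState(val layerHeightDp: Float) {
    RESTING(1f), FOCUSED(4f), PRESSED(10f)
}

class InteractiveObject(var state: ObjectState = ObjectState.RESTING)

fun onPress(obj: InteractiveObject, heldMillis: Long, longPressMillis: Long = 200L) {
    // Lifting the object into the pressed layer increases its height difference from
    // the base, so the renderer recomputes a larger cast shadow (see process 500).
    if (heldMillis >= longPressMillis) obj.state = ObjectState.PRESSED
}
```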
As used herein, the terms "focus state," "depressed state," and "resting state" are intended to be illustrative and not limiting. Further, while fig. 6 shows three possible states (e.g., object layers) that an object may take, the disclosed embodiments provide any number of such states, object layers, and permissible height values for rendering the object.
The disclosed embodiments may provide systems and methods for rendering objects and projections where occlusions are present. For example, FIGS. 7A-7D illustrate an exemplary occluded object operating environment and rendering effects consistent with the disclosed embodiments. As shown in FIG. 7A, apparatus 310 may render an environment in which occluding object 702 partially occludes a second object (e.g., an object in the resting state 604). In some aspects, the occluding object may comprise any interactive object consistent with the disclosed embodiments. Device 310 may determine that an occluding object occludes an occluded object based on, for example, determining that the lateral extent (e.g., in x and y coordinates) to which the occluding object covers the other object exceeds some threshold amount and determining that the height of the occluding object is greater than the height of the occluded object. In some aspects, the apparatus 310 may be configured to render the environment consistent with the disclosed embodiments (e.g., as shown in FIG. 7B).
For example, the apparatus 310 may be configured to determine that an object in the resting state 604 does not cast a shadow on the base 602 (e.g., because the height difference falls below a threshold distance). Apparatus 310 may also determine that occluding object 702 has a greater height value (e.g., is located in a higher object layer) than the resting object and the base 602. In this example, apparatus 310 may also be configured to determine that occluding object 702 does not cast a shadow on objects below it (e.g., the base 602 and the object in the resting state 604) based on other factors consistent with the disclosed embodiments (e.g., object 702 is an instance of an object class that does not cast a shadow).
The apparatus 310 may be configured to change an object's position (e.g., change the object layer in which the object is located) in response to user input, system processes, or any other trigger consistent with the disclosed embodiments. As shown in FIG. 7C, for example, the device 310 may be configured to move an object from the resting state 604 (or another state) to the pressed state 608 in response to a user input (e.g., a user touching, pressing down, or clicking on the object, or doing so for a threshold amount of time, etc.). In the example shown in FIG. 7C, the pressed state 608 is associated with a greater height value than the resting state 604. In some aspects, the apparatus 310 may be configured to render the environment according to an updated location reflecting the moved object. For example, as shown in FIG. 7D, the apparatus 310 may determine that an object moved to the pressed state 608 will now cast a shadow on the base 602. In this example, the object layer associated with the pressed state is located at a height below occluding object 702. In some aspects, the apparatus 310 may prevent an object in the pressed state 608 from being located in an object layer above the occluding object 702 due to, for example, the object categories associated with the object in the pressed state 608 and the occluding object 702 or any other process consistent with the disclosed embodiments.
FIG. 8 illustrates a flow diagram of an exemplary object layer creation and rendering process 800 consistent with the disclosed embodiments. In certain aspects, process 800 may be implemented in a client device 310 implementing one or more computer systems or processors (e.g., computer system 200 of fig. 2).
In some embodiments, process 800 may include generating a plurality of object layers in a virtual environment to which objects are populated (step 802). In some aspects, the generated object layers may form discrete levels of height within the virtual environment. In other aspects, the object layer may constitute a continuous height value within the environment (e.g., the object may take any height value between the bottom surface 414 and the screen surface 412).
In some aspects, process 800 may include populating a virtual environment with one or more objects (e.g., interactive objects) (step 804). In some embodiments, populating an environment may include designating each object as a particular object layer, designating each object as a particular height, designating each object as a particular priority volume, or any other such process for specifying, representing absolute, relative, or approximate heights of objects within a virtual environment.
In some aspects, process 800 may include determining whether any object that is filled or visible in the virtual environment obscures another object (step 806). In certain embodiments, the process 800 may determine whether one object occludes another object by, for example, comparing coordinates of the two objects (e.g., x, y, and/or z positions in a cartesian coordinate system), height values of two object layers associated with the two objects, determining camera properties associated with a virtual camera of the viewing environment (e.g., camera position, camera view angle, camera field of view, etc.), and/or the like. For example, process 800 may determine that an occluding object occludes an occluded object by determining that the lateral extent (e.g., x and y coordinates) covered by the object is above some threshold amount, comparing the heights thereof (e.g., the heights of their respective object layers), and determining that one object on one layer is above or below another object. Because an occlusion may require at least two objects in the environment, a set of objects that occlude one another may be referred to as an "occlusion pair," and this description is not limiting.
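An occlusion test along the lines of step 806 might be sketched as follows; the Rect type, field names, and overlap threshold are assumptions made for illustration:

```kotlin
// Hypothetical occlusion test for step 806: two objects form an occlusion pair when
// their lateral footprints overlap by more than a threshold fraction and they sit on
// different object layers.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val area: Float get() = (right - left) * (bottom - top)
}

data class SceneObject(val id: String, val bounds: Rect, val layerHeightDp: Float)

fun overlapArea(a: Rect, b: Rect): Float {
    val w = (minOf(a.right, b.right) - maxOf(a.left, b.left)).coerceAtLeast(0f)
    val h = (minOf(a.bottom, b.bottom) - maxOf(a.top, b.top)).coerceAtLeast(0f)
    return w * h
}

fun formsOccludedPair(a: SceneObject, b: SceneObject, overlapThreshold: Float = 0.05f): Boolean {
    if (a.layerHeightDp == b.layerHeightDp) return false  // same layer: no occlusion
    val lower = if (a.layerHeightDp < b.layerHeightDp) a else b
    // The lower object is occluded when enough of its footprint is covered.
    return overlapArea(a.bounds, b.bounds) / lower.bounds.area > overlapThreshold
}
```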
In certain aspects, process 800 may include calculating a cast shadow index for each occlusion pair consistent with the disclosed embodiments (e.g., consistent with the process described with respect to FIG. 5) (step 808). In some embodiments, the process 800 may then render the virtual three-dimensional environment according to the one or more cast shadow indices (step 810). In some aspects, for example, rendering the scene may include determining a sum or net effect of all projections based on the cast shadow indices of each occlusion pair, and rendering the scene accordingly. For example, the process 800 may add, multiply, or otherwise combine the cast shadow indices of each object in the scene (e.g., by combining the indices of each occlusion pair of which the object is a member) to render them appropriately. In some embodiments, the process 800 may also limit the combined cast shadow indices to stay within a maximum and minimum value (e.g., a shadow intensity limit or an aggregate shadow intensity limit).
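Tying the preceding sketches together, an end-to-end pass over process 800 (steps 806-810) might look like the following; the kindOf lookup and the println-based draw call are placeholders rather than any actual rendering API:

```kotlin
// Hypothetical end-to-end sketch of process 800, reusing SceneObject, formsOccludedPair,
// castShadowIndex, and combineShadows from the earlier sketches.
fun renderScene(objects: List<SceneObject>, kindOf: (SceneObject) -> ObjectKind) {
    // Steps 806-808: find occlusion pairs and compute a cast shadow index for each pair.
    val shadowsOn = mutableMapOf<String, MutableList<CastShadowIndex>>()
    for (occluder in objects) {
        for (occluded in objects) {
            if (occluder === occluded) continue
            if (occluder.layerHeightDp > occluded.layerHeightDp &&
                formsOccludedPair(occluder, occluded)
            ) {
                val index = castShadowIndex(
                    occluder.layerHeightDp - occluded.layerHeightDp, kindOf(occluder)
                )
                shadowsOn.getOrPut(occluded.id) { mutableListOf() }.add(index)
            }
        }
    }
    // Step 810: draw objects from the lowest layer up, each with its combined shadow.
    for (obj in objects.sortedBy { it.layerHeightDp }) {
        val shadow = combineShadows(shadowsOn[obj.id].orEmpty())
        println("draw ${obj.id} at ${obj.layerHeightDp} dp with shadow $shadow")
    }
}
```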
The disclosed embodiments also provide methods and systems for processing user interactions, system processes, and object manipulations of the presence of occluded objects and rendering scenes accordingly. For example, fig. 9A-9D illustrate block diagrams of an exemplary drag-and-drop process in a shaded environment consistent with the disclosed embodiments. Similarly, FIG. 9E illustrates a flow diagram of an exemplary drag-and-drop process consistent with the disclosed embodiments. In certain aspects, the processes described with reference to fig. 9A-9E may be implemented in a client device 310 implementing one or more computer systems or processors (e.g., computer system 200 of fig. 2).
For example, FIG. 9A illustrates an exemplary environment in which an object in the pressed state 608 must be dragged and dropped into a container (drop container) associated with a floating action object 902. As shown in the example of FIG. 9A, the object in the pressed state 608 may be occluded by occluding object 702, and the floating action object 902 may be located at a greater height than the object in the pressed state 608. In this example, the floating action object 902 may represent a drop container associated with an object layer having a height value greater than and/or equal to the height value of the object in the pressed state 608 and/or the occluding object 702. In some aspects, the floating action object 902 may comprise any interactive object for handling drag-and-drop functionality consistent with the disclosed embodiments.
In some aspects, the apparatus 310 may be configured to manipulate objects in the environment to allow a user to drop an object in the pressed state 608 into a container associated with the floating action object 902. The device 310 may determine to perform such manipulation based on any of the processes of the disclosed embodiments. For example, the apparatus 310 may determine that the object has remained in the pressed state 608 for a threshold period of time, and that an object at a greater height in the environment (e.g., the floating action object 902) is associated with a drop container object category.
As shown in FIG. 9B, the device 310 may be configured to remove, delete, slide, fade out, or move occluding object 702 away from the object in the pressed state 608. In some aspects, device 310 may place (e.g., generate and display) contextual action object 906 at a location below the object in the pressed state 608 in place of occluding object 702. In some aspects, contextual action object 906 may represent an object that is visually the same as or similar to occluding object 702, but that is located at an elevation below the object in the pressed state 608. In some embodiments, device 310 may create and render contextual action object 906 such that it has shading characteristics similar or identical to those of occluding object 702. For example, device 310 may maintain, associate, and/or specify in contextual action object 906 a value for any cast shadow index associated with occluding object 702, despite contextual action object 906 existing in a different object layer. In this way, contextual action object 906 may inherit the visible and rendering properties of its source, occluding object 702. In other aspects, such replacement or maintenance is not required, such that, for example, the device renders the contextual action object 906 as if it were a new object located in its current layer, independent of the occluding object 702.
In some embodiments, apparatus 310 may also create, generate, display, and place contextual floating action object 904 at an elevation below floating action object 902 and the object in the pressed state 608. In some aspects, contextual floating action object 904 may inherit properties from its source floating action object 902, such as the same or similar appearance and/or one or more cast shadow indices associated with floating action object 902. Apparatus 310 may render contextual floating action object 904 using any process consistent with the disclosed embodiments. For example, contextual floating action object 904 may appear, fade in, fly in from a side of the display, expand from a center point based on the location of floating action object 902 (e.g., as shown in FIGS. 9B and 9C), and so on. The nature of contextual floating action object 904 itself may also be independent of the nature of floating action object 902. For example, device 310 may render contextual floating action object 904 such that it has a cast shadow index based on its own object layer, and not the cast shadow index of floating action object 902.
In certain aspects, and as shown in the exemplary environment of FIG. 9C, device 310 may modify the appearance of occluding object 702. For example, apparatus 310 may cause occluding object 702 to temporarily or permanently disappear from the scene. In some embodiments, this removal may leave the user free to view and manipulate the object in the pressed state 608. In some embodiments, the device 310 may also modify, remove, eliminate, fade out, slide, move, and/or reduce the appearance of the floating action object 902, and similarly modify, add, emphasize, and/or change the appearance of the contextual floating action object 904 (e.g., to indicate that it may receive the object in the pressed state 608).
In some aspects, as shown in fig. 9D, device 310 may render the environment with the object in the pressed state 608, the contextual floating action object 904 (e.g., containing the same data as the removed floating action object 902), and/or the contextual action object 906 (e.g., containing the same data as the removed occluding object 702), after modifying (e.g., reducing, removing, or changing the appearance of) the floating action object 902. In this manner, the apparatus 310 may be configured to allow users, systems, and other processes to freely manipulate, interact with, or cooperate with objects without regard to shading, cast shadows, height values, or other rendering effects of the environment.
FIG. 9E illustrates a flow diagram of certain aspects of the foregoing embodiments in an exemplary drag-and-drop process 900. In certain aspects, process 900 may be implemented in a client device 310 implementing one or more computer systems or processors (e.g., computer system 200 of fig. 2).
Process 900 may include generating a plurality of object layers (step 910). For example, the object layers may represent discrete or continuous allowable height values within a virtual environment rendered on a display device. In some aspects, process 900 may also include populating the environment with one or more objects consistent with the disclosed embodiments (step 920), such as described with respect to fig. 8 and other embodiments disclosed herein.
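As a non-limiting illustration of steps 910 and 920 (not part of the original disclosure; the layer values and object names are hypothetical), a discrete set of object layers and a populated environment might be sketched as follows.

```python
from dataclasses import dataclass
from typing import List

# Step 910: a discrete set of allowable height values (arbitrary units)
OBJECT_LAYERS: List[float] = [0.0, 2.0, 4.0, 6.0, 8.0]

@dataclass
class SceneObject:
    name: str
    layer_index: int          # position within OBJECT_LAYERS

    @property
    def height(self) -> float:
        return OBJECT_LAYERS[self.layer_index]

# Step 920: populate the environment with user-selectable objects
def populate_environment() -> List[SceneObject]:
    return [
        SceneObject("background_card", 0),
        SceneObject("drag_object_608", 1),
        SceneObject("occluding_card_702", 2),
        SceneObject("floating_action_902", 4),
    ]

environment = populate_environment()
for obj in environment:
    print(obj.name, obj.height)
```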
In some embodiments, process 900 may include detecting that the user has pressed the drag object (step 930). In some aspects, the process 900 may determine that the user has provided input representing a selection or press of the drag object for some threshold amount of time (e.g., one second). Additionally or alternatively, the process 900 may determine that the drag object has been in the pressed state 608 for a threshold amount of time, or perform any other detection consistent with the disclosed embodiments. In one aspect, for example, the process 900 may determine whether any drop containers are currently displayed on the device and, if so, the heights of the drop containers, thereby determining whether further processing is required. For example, further processing may be required when the height of one of the drop containers is greater than the height of the drag object (e.g., based on the height of its respective object layer), when one of the drop containers is completely obscured, and so on.
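The detection logic of step 930 might be sketched as follows (illustrative only; the threshold value, function names, and parameters are hypothetical).

```python
import time
from typing import List, Optional

PRESS_HOLD_THRESHOLD_S = 1.0   # e.g., one second, as suggested above

def is_press_and_hold(press_started_at: float, now: Optional[float] = None) -> bool:
    """Step 930: has the drag object been held in the pressed state long enough?"""
    now = time.monotonic() if now is None else now
    return (now - press_started_at) >= PRESS_HOLD_THRESHOLD_S

def needs_further_processing(drag_object_height: float,
                             drop_container_heights: List[float],
                             any_container_fully_obscured: bool) -> bool:
    """Decide whether the additional drag-and-drop handling described above applies:
    when a displayed drop container sits at a greater height than the drag object,
    or when a drop container is completely obscured."""
    if any_container_fully_obscured:
        return True
    return any(h > drag_object_height for h in drop_container_heights)

# Usage: a drop container at height 8.0 sits above a drag object at height 2.0
print(needs_further_processing(2.0, [8.0, 4.0], any_container_fully_obscured=False))  # True
```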
Process 900 may include determining whether the object in the pressed state is occluded by another object (e.g., as described with respect to the process of fig. 8 or other processes herein). When the process 900 determines that the object in the pressed state 608 is occluded by one or more other objects (e.g., occluding object 702), the process 900 may remove the occluding object 702 from the display (step 940). This removal may take any form consistent with the disclosed embodiments, such as sliding the occluding object 702 off the display, fading it out, changing its color or transparency, making it disappear, etc. (e.g., as described with respect to fig. 9A-9D).
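A minimal sketch of one removal option of step 940, a frame-by-frame fade-out, is shown below (illustrative only; the frame count and names are hypothetical).

```python
def fade_out_steps(start_opacity: float = 1.0, duration_frames: int = 12):
    """Yield per-frame opacity values for fading an occluding object out of the
    display (step 940). A renderer would apply each value on successive frames
    and remove the object once its opacity reaches zero."""
    for frame in range(1, duration_frames + 1):
        yield start_opacity * (1.0 - frame / duration_frames)

# Usage: drive the occluding object's opacity, then drop it from the scene
opacities = list(fade_out_steps())
assert opacities[-1] == 0.0
```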
In some aspects, when process 900 determines that one or more drop containers are associated with an object layer having a height greater than that of the pressed drag object 608, process 900 may render one or more new drop containers (e.g., contextual floating action object 904) in the environment (step 950). The new drop container may be located (e.g., designated) in an object layer having a height less than that of the pressed drag object 608 (e.g., as shown in fig. 9A-9C). In some embodiments, the new drop container may be visually similar or identical to the one or more drop containers (e.g., floating action object 902) located above the pressed drag object 608. In some embodiments, for example, the new drop container (e.g., contextual floating action object 904) may be assigned or inherit one or more cast shadow indices of the higher drop container (e.g., floating action object 902), despite occupying a lower object layer. In some aspects, process 900 may also include modifying the appearance of the one or more drop containers (e.g., floating action object 902) located above the pressed drag object, including removing them from the display, changing their color or transparency, or any other modification consistent with the disclosed embodiments.
Process 900 may handle a drag-and-drop action consistent with the disclosed embodiments (step 960). In some aspects, handling the drag-and-drop action may include detecting user input reflecting a drag action on the display and rendering the environment accordingly (e.g., moving objects and interfaces associated with the drag object on the screen in response to the user input, updating cast shadow indices as the drag object occludes and is occluded by other objects, etc.). For example, process 900 may detect that a user has dragged the object in the pressed state 608 over a drop container (e.g., contextual floating action object 904), detect input signaling that the dragged object has been dropped (e.g., the object is no longer in a pressed state), and perform the processing necessary to complete the drop of the object into the container as defined by the application and operating system running on device 310.
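The release handling described for step 960 might be sketched as follows (illustrative only; the bounds type, callbacks, and coordinates are hypothetical and stand in for application- or operating-system-defined drop handling).

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

def handle_release(drag_position: Tuple[float, float], drop_container_bounds: Rect,
                   on_drop: Callable[[], None], on_cancel: Callable[[], None]) -> None:
    """When the press ends, either drop into the container under the pointer
    or cancel and restore the original state."""
    px, py = drag_position
    if drop_container_bounds.contains(px, py):
        on_drop()      # application/OS-defined drop handling
    else:
        on_cancel()    # e.g., animate the drag object back and restore layers

# Usage
bounds = Rect(100, 400, 160, 160)
handle_release((150, 450), bounds,
               on_drop=lambda: print("dropped into container"),
               on_cancel=lambda: print("drop cancelled"))
```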
In another example, the process 900 may modify and/or update one or more cast shadow indices of the drag object (e.g., for each occlusion pair of which the drag object is a member) in response to detecting and rendering the drag-and-drop action. In one embodiment, for example, the process 900 may set one, several, or all of the cast shadow indices of the dragged object to zero or another predetermined value, or otherwise limit the range of acceptable cast shadow indices (e.g., limit such indices to a predetermined range of values). Further, process 900 may modify one or more cast shadow indices of the drop containers (e.g., floating action object 902, contextual floating action object 904, etc.), occluding object 702, and/or contextual action object 906 in a similar manner (e.g., set the cast shadow index of each occlusion pair in which such an object is a member to zero, or limit these values to predetermined limits or ranges). In some aspects, process 900 may remove the temporary new drop container 904 and/or contextual action object 906 from the display and restore these objects to their original state. For example, the process 900 may return the floating action object 902 and the occluding object 702 to their original object layers, as shown in FIG. 9A.
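One way to suppress and later restore cast shadow indices during a drag, as described above, is sketched below (illustrative only; the pair keys and index values are hypothetical).

```python
from typing import Dict, Tuple

PairKey = Tuple[str, str]   # (occluding object, occluded object)

def suppress_indices_for(obj: str,
                         indices: Dict[PairKey, float],
                         limit: float = 0.0) -> Dict[PairKey, float]:
    """Clamp every cast shadow index of any occlusion pair that `obj` belongs to.

    With limit=0.0 the object neither casts nor receives a rendered shadow
    while the drag is in progress; a non-zero limit restricts the indices to a
    predetermined range instead.
    """
    return {pair: (min(value, limit) if obj in pair else value)
            for pair, value in indices.items()}

# Example: suppress the drag object's shadows for the duration of the drag
original = {("fab_902", "card_702"): 0.4,
            ("card_702", "drag_608"): 0.6,
            ("drag_608", "background"): 0.3}
during_drag = suppress_indices_for("drag_608", original, limit=0.0)
after_drop = original          # restoring simply reinstates the saved indices
```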
The disclosed embodiments also provide methods and systems for manipulating, processing, and rendering nested objects. For example, fig. 10A-10B illustrate an exemplary window object rendering environment 1000 consistent with the disclosed embodiments. In certain aspects, the processes described with reference to fig. 10A-10D may be implemented in a client device 310 implementing one or more computer systems or processors (e.g., computer system 200 of fig. 2).
In some embodiments, environment 1000 may include a window object 1002 that includes view objects 1004A and 1004B. In some aspects, view objects 1004A and 1004B may themselves comprise or contain nested view objects 1006A and 1006B, respectively. In some embodiments, window object 1002, as well as view objects 1004A and 1004B and nested view objects 1006A and 1006B, may comprise any interactive object (e.g., an application window, view, button, etc.) consistent with the disclosed embodiments. As shown in fig. 10A, the apparatus 310 may be configured to determine the heights of objects within a scene and render the environment accordingly (e.g., as represented by the presence of cast shadows).
In certain aspects, the apparatus 310 may be configured to track the heights of objects rendered in the environment in order to manipulate, process, and render nested view objects that occlude one another. In some embodiments, the apparatus 310 may perform these steps by, for example, maintaining the heights of all objects in the scene, specifying priorities for certain objects of a particular class or category (e.g., application windows, application views, etc.), having nested objects inherit characteristics from their parents, and so forth. For example, as shown in fig. 10B, device 310 may be configured to determine that view object 1004A has a height value greater than that of view object 1004B, and render these objects accordingly (e.g., render view object 1004A and its nested view object 1006A above view object 1004B and its nested view object 1006B).
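A minimal sketch of nested view objects whose heights are resolved relative to their parents, so that whole subtrees render and reorder together, is shown below (illustrative only; the view names and height values are hypothetical).

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple

@dataclass
class View:
    name: str
    height: float                       # height relative to the parent
    children: List["View"] = field(default_factory=list)

def flatten(view: View, base_height: float = 0.0) -> Iterator[Tuple[str, float]]:
    """Resolve absolute heights for a nested view hierarchy.

    A nested view inherits its parent's height as a base, so a child always
    renders above its parent, and an entire subtree reorders together when
    the height of its root changes.
    """
    absolute = base_height + view.height
    yield view.name, absolute
    for child in view.children:
        yield from flatten(child, absolute)

window = View("window_1002", 0.0, [
    View("view_1004B", 1.0, [View("nested_1006B", 0.5)]),
    View("view_1004A", 2.0, [View("nested_1006A", 0.5)]),
])

# Painter's order: draw lower objects first
for name, h in sorted(flatten(window), key=lambda pair: pair[1]):
    print(name, h)
```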
The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure may be implemented in hardware alone.
Computer programs based on the written description and methods of this specification are within the skill of a software developer. The various programs or program modules may be created using a variety of programming techniques. For example, program segments or program modules may be designed in or by means of Java, C++, assembly language, or any such programming language. One or more of such software segments or modules may be integrated into a device system or existing communication software.
Moreover, although illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, and/or alterations based on the present disclosure. The elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to the examples described in this specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Moreover, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps.
The features and advantages of the present disclosure are apparent from the detailed description, and thus, it is intended by the appended claims to cover all systems and methods falling within the true spirit and scope of the present disclosure. As used herein, the indefinite articles "a" and "an" mean "one or more" in open-ended claims containing the transitional phrases "comprising," "including," and/or "having." Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to as falling within the scope of the disclosure.
Those skilled in the art will recognize other embodiments from consideration of the description and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims.

Claims (20)

1. A system for rendering a virtual environment, the system comprising:
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
generating a plurality of object layers representing allowable height values within the virtual environment;
populating the virtual environment with a plurality of user-selectable objects, wherein each of the user-selectable objects is associated with a height value corresponding to one of the object layers;
determining whether two user-selectable objects form an occlusion pair, wherein an occlusion pair comprises an occluding object and an occluded object, and wherein the occluding object is associated with an occluding object layer having a greater height value than an occluded object layer associated with the occluded object;
calculating a cast shadow index for each occlusion pair, the cast shadow index reflecting a magnitude of a height difference between the occluding object layer and the occluded object layer; and
rendering the virtual environment for display in accordance with the calculated cast shadow indices, wherein rendering the virtual environment comprises:
casting a shadow on an occluded object based on the cast shadow index for each occlusion pair to give the appearance of depth;
determining a net effect of cast shadows in the virtual environment by combining the calculated cast shadow indices; and
limiting the combined cast shadow indices to an aggregate shadow intensity limit.
2. The system of claim 1, wherein calculating the cast shadow index comprises:
obtaining a height value associated with each object in the virtual environment;
comparing the obtained height values for each occlusion pair to obtain a magnitude of a height difference between the occluding object layer and the occluded object layer; and
calculating the cast shadow index based on the magnitude of the height difference.
3. The system of claim 2, wherein the object layers form a discrete set of values within the virtual environment.
4. The system of claim 2, wherein the cast shadow index represents at least one of a size, color, shape, or intensity of a cast shadow rendered within the virtual environment.
5. The system of claim 2, wherein calculating the cast shadow index comprises:
determining an occluding object class associated with the occluding object; and
modifying the cast shadow indices based on the occluding object class.
6. The system of claim 5, wherein modifying the cast shadow index comprises changing the cast shadow index to represent that the occluding object does not cast a shadow onto the occluded object.
7. A computer-implemented method for rendering a virtual environment, the method comprising:
generating a plurality of object layers representing allowable height values within the virtual environment;
populating the virtual environment with a plurality of user-selectable objects, wherein each of the user-selectable objects is associated with a height value corresponding to one of the object layers;
determining whether two user-selectable objects form an occlusion pair, wherein an occlusion pair comprises an occluding object and an occluded object, and wherein the occluding object is associated with an occluding object layer having a greater height value than an occluded object layer associated with the occluded object;
calculating a cast shadow index for each occlusion pair, the cast shadow index reflecting a magnitude of a height difference between the occluding object layer and the occluded object layer; and
rendering the virtual environment for display in accordance with the calculated cast shadow indices, wherein rendering the virtual environment comprises:
casting a shadow on an occluded object based on the cast shadow index for each occlusion pair to give the appearance of depth;
determining a net effect of cast shadows in the virtual environment by combining the calculated cast shadow indices; and
limiting the combined cast shadow indices to an aggregate shadow intensity limit.
8. The computer-implemented method of claim 7, wherein calculating the cast shadow index comprises:
obtaining a height value associated with each object in the virtual environment;
comparing the obtained height values for each occlusion pair to obtain a magnitude of a height difference between the occluding object layer and the occluded object layer; and
calculating the cast shadow index based on the magnitude of the height difference.
9. The computer-implemented method of claim 8, wherein the object layers form a discrete set of values within the virtual environment.
10. The computer-implemented method of claim 8, wherein the cast shadow index represents at least one of a size, color, shape, or intensity of a cast shadow rendered within the virtual environment.
11. The computer-implemented method of claim 8, wherein calculating the cast shadow index comprises:
determining an occluding object class associated with the occluding object; and
modifying the cast shadow indices based on the occluding object class.
12. The computer-implemented method of claim 11, wherein modifying the cast shadow index comprises changing the cast shadow index to represent that the occluding object does not cast a shadow onto the occluded object.
13. A system for rendering a drag-and-drop process in a virtual environment, the system comprising:
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
generating a plurality of object layers representing allowable height values relative to a floor within the virtual environment;
detecting that a user has pressed a drag object associated with an occluded object layer corresponding to a smaller height value than an occluding object layer associated with a drop container object;
calculating a cast shadow index reflecting a magnitude of a height difference between the occluded object layer and the occluding object layer;
rendering a new drop container having a height value less than the occluded object layer; and
rendering the virtual environment for display in accordance with the detected drag-and-drop action, wherein rendering the virtual environment comprises:
casting a shadow on the drag object based on the cast shadow index to give the appearance of depth;
determining a net effect of cast shadows in the virtual environment by combining the cast shadow index with one or more other cast shadow indices calculated for one or more occlusion pairs; and
limiting the combined cast shadow indices to an aggregate shadow intensity limit.
14. The system of claim 13, wherein the operations further comprise:
detecting that the dragged object is occluded in the virtual environment by an occluding object associated with the occluding object layer;
removing the occluding object from display; and
modifying an appearance of the drop container object.
15. The system of claim 14, wherein modifying the appearance of the drop container object comprises removing the drop container object from display, and wherein the operations further comprise:
generating a contextual action object that is visually the same as the occluding object, the contextual action object located in a contextual object layer having a lesser height value than the occluded object layer; and
rendering the contextual action object for display.
16. The system of claim 15, wherein generating the contextual action object comprises specifying, for the contextual action object, one or more cast shadow indices associated with the occluding object; and wherein rendering the virtual environment comprises modifying the one or more cast shadow indices in response to detecting the drag-and-drop action.
17. A computer-implemented method for rendering a drag-and-drop process in a virtual environment, the method comprising:
generating a plurality of object layers representing allowable height values within the virtual environment;
detecting that a user has pressed a drag object associated with an occluded object layer corresponding to a smaller height value than an occluding object layer associated with a drop container object;
calculating a cast shadow index reflecting a magnitude of a height difference between the occluded object layer and the occluding object layer;
rendering a new drop container having a height value less than the occluded object layer; and
rendering the virtual environment according to the detected drag-and-drop action, wherein rendering the virtual environment comprises:
casting a shadow on the drag object based on the cast shadow index to give the appearance of depth;
determining a net effect of cast shadows in the virtual environment by combining the cast shadow index with one or more other cast shadow indices calculated for one or more occlusion pairs; and
limiting the combined cast shadow indices to an aggregate shadow intensity limit.
18. The computer-implemented method of claim 17, further comprising:
detecting that the dragged object is occluded in the virtual environment by an occluding object associated with the occluding object layer;
removing the occluding object from display; and
modifying an appearance of the drop container object.
19. The computer-implemented method of claim 18, wherein modifying the appearance of the drop container object comprises removing the drop container object from display, and wherein the method further comprises:
generating a contextual action object that is visually the same as the occluding object, the contextual action object located in a contextual object layer having a lesser height value than the occluded object layer; and
rendering the contextual action object for display.
20. The computer-implemented method of claim 19, wherein generating the contextual action object comprises specifying, for the contextual action object, one or more cast shadow indices associated with the occluding object; and wherein rendering the virtual environment comprises modifying the one or more cast shadow indices in response to detecting the drag-and-drop action.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201462016630P 2014-06-24 2014-06-24
US62/016,630 2014-06-24
PCT/US2015/037178 WO2015200323A1 (en) 2014-06-24 2015-06-23 Computerized systems and methods for layering content in a user interface

Publications (2)

Publication Number Publication Date
CN106462250A CN106462250A (en) 2017-02-22
CN106462250B true CN106462250B (en) 2020-04-24

Family

ID=53539931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580031865.7A Active CN106462250B (en) 2014-06-24 2015-06-23 Computerized system and method for layering content in a user interface

Country Status (4)

Country Link
US (1) US9990763B2 (en)
EP (1) EP3161597A1 (en)
CN (1) CN106462250B (en)
WO (1) WO2015200323A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107223270B (en) * 2016-12-28 2021-09-03 达闼机器人有限公司 Display data processing method and device
CN108629030B (en) * 2018-05-09 2019-11-19 成都四方伟业软件股份有限公司 Data display method and device
CN109002241B (en) * 2018-06-29 2019-06-18 掌阅科技股份有限公司 View staggered floor display methods, electronic equipment and storage medium
US12169895B2 (en) * 2021-10-15 2024-12-17 Adobe Inc. Generating shadows for digital objects within digital images utilizing a height map

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6184865B1 (en) * 1996-10-23 2001-02-06 International Business Machines Corporation Capacitive pointing stick apparatus for symbol manipulation in a graphical user interface
US6915490B1 (en) * 2000-09-29 2005-07-05 Apple Computer Inc. Method for dragging and dropping between multiple layered windows
CN103677086A (en) * 2012-09-05 2014-03-26 优三第科技开发(深圳)有限公司 Electronic device
CN103873277A (en) * 2012-12-12 2014-06-18 中国科学院声学研究所 Layered network topology visualizing method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6957395B1 (en) * 2000-01-04 2005-10-18 Apple Computer, Inc. Computer interface having a single window mode of operation
US7616201B2 (en) * 2005-11-23 2009-11-10 Autodesk, Inc. Casting shadows
US8947452B1 (en) * 2006-12-07 2015-02-03 Disney Enterprises, Inc. Mechanism for displaying visual clues to stacking order during a drag and drop operation
US8213680B2 (en) * 2010-03-19 2012-07-03 Microsoft Corporation Proxy training data for human body tracking
US20120218395A1 (en) * 2011-02-25 2012-08-30 Microsoft Corporation User interface presentation and interactions
US10417018B2 (en) * 2011-05-27 2019-09-17 Microsoft Technology Licensing, Llc Navigation of immersive and desktop shells

Also Published As

Publication number Publication date
WO2015200323A1 (en) 2015-12-30
CN106462250A (en) 2017-02-22
EP3161597A1 (en) 2017-05-03
US20150371436A1 (en) 2015-12-24
US9990763B2 (en) 2018-06-05


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: American California

Applicant after: Google limited liability company

Address before: American California

Applicant before: Google Inc.

GR01 Patent grant