WO2017120300A1 - Content delivery systems and methods - Google Patents
Content delivery systems and methods
- Publication number
- WO2017120300A1 (PCT Application No. PCT/US2017/012284)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- different images
- content
- user
- displayed
- different
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4821—End-user interface for program selection using a grid, e.g. sorted out by channel and broadcast time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
- H04N21/42206—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
- H04N21/42222—Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
Definitions
- This application describes, among other things, a method and system for dynamically displaying, discovering, scanning and interacting with content across a wide variety of platforms.
- The television was tuned to the desired channel by adjusting a tuner knob and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as "channel surfing" whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
- Some remote units have "soft" buttons that can be programmed with the expert commands. These soft buttons sometimes have accompanying LCD displays to indicate their action. These too have the flaw that they are difficult to use without looking away from the TV to the remote control. Yet another flaw in these remote units is the use of modes in an attempt to reduce the number of buttons.
- In a moded remote unit, a special button exists to select whether the remote should communicate with the TV, DVD player, cable set-top box, VCR, etc. This causes many usability issues, including sending commands to the wrong device and forcing the user to look at the remote to make sure that it is in the right mode, and it does not provide any simplification to the integration of multiple devices.
- The most advanced of these universal remote units provide some integration by allowing the user to program sequences of commands to multiple devices into the remote. This is such a difficult task that many users hire professional installers to program their universal remote units.
- Of particular interest are the remote devices usable to interact with such frameworks, as well as other applications, systems and methods for these remote devices for interacting with such frameworks.
- Various different types of remote devices can be used with such frameworks including, for example, trackballs, "mouse"-type pointing devices, light pens, etc.
- One category of such remote devices is 3D pointing devices with scroll wheels.
- 3D pointing is used in this specification to refer to the ability of an input device to move in three (or more) dimensions in the air in front of, e.g., a display screen, and the corresponding ability of the user interface to translate those motions directly into user interface commands, e.g., movement of a cursor on the display screen.
- The transfer of data between the 3D pointing device and another device may be performed wirelessly or via a wire connecting the 3D pointing device to that other device.
- “3D pointing” differs from, e.g., conventional computer mouse pointing techniques which use a surface, e.g., a desk surface or mousepad, as a proxy surface from which relative movement of the mouse is translated into cursor movement on the computer display screen.
- An example of a 3D pointing device can be found in U.S. Patent Application No. 11/119,663, the disclosure of which is incorporated here by reference.
- Systems and methods according to the present invention describe dynamically discovering and displaying content, represented by a plurality of different images on a graphical user interface. Based on the interaction of the user, whether an explicit interaction or no interaction at all, content is updated and displayed to a user for further manipulation.
- One method comprises: displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
- A method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images; receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images.
- A system for dynamically displaying content to a user, comprising: a 3D pointing device; a device configured to display a graphical user interface; a processor associated with the device and configured to receive inputs for dynamically displaying the content, wherein the content is represented by a plurality of different images, the processor configured to: display the plurality of different images on the graphical user interface; dynamically update the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images (where all of the images need not necessarily change simultaneously); receive input via at least one sensor in the 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface of the device; determine, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically update the displayed content on the graphical user interface of the device to include the one of the plurality of different images and change a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; select another one of the plurality of different images; and display the content represented by the another one of the plurality of different images.
- FIG. 1 depicts a conventional remote control unit for a media system
- FIG. 2 depicts an exemplary media system in which exemplary embodiments of the present invention can be implemented
- FIGS. 3A and 3B show a 3D pointing device according to an exemplary embodiment of the present invention
- FIG. 4 depicts another exemplary 3D pointing device
- FIG. 5 illustrates a user employing a 3D pointing device to provide input to a user interface on a television according to an exemplary embodiment of the present invention
- FIG. 6 depicts an initial user interface displaying a plurality of different images
- FIGS. 7 A and 7B are examples of a Snap or Snapshot visualization of a content item
- FIG. 8 depicts another user interface displaying a plurality of different images
- FIG. 9 depicts another user interface displaying a plurality of different images
- FIG. 10 depicts a further user interface displaying a plurality of different images
- FIG. 11 depicts a further user interface displaying a plurality of different images
- FIG. 12 depicts a further user interface displaying a plurality of different images in the watch list
- FIG. 13 depicts a further user interface displaying additional detail regarding one of the plurality of different images
- FIG. 14 depicts a further user interface displaying a plurality of different images
- FIG. 15 depicts a method for dynamically displaying and updating content according to one of the embodiments herein;
- FIG. 16 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
- FIG. 17 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
- FIG. 18 depicts a content delivery system in which exemplary embodiments of the present invention can be implemented
- FIG. 19 depicts a brief overview of the method
- an exemplary aggregated media system 200 in which the present invention can be implemented will first be described with respect to Figure 2. Those skilled in the art will appreciate, however, that the present invention is not restricted to implementation in this type of media system and that more or fewer components can be included therein.
- an input/output (I/O) bus 210 connects the system components in the media system 200 together.
- the I/O bus 210 represents any of a number of different mechanisms and techniques for routing signals between the media system components.
- the I/O bus 210 may include an appropriate number of independent audio "patch" cables that route audio signals, coaxial cables that route video signals, two-wire serial lines or infrared or radio frequency transceivers that route control signals, optical fiber or any other routing mechanisms that route other types of signals.
- the media system 200 includes a television/monitor 212, a video cassette recorder (VCR) 214, digital video disk (DVD) recorder/playback device 216, audio/video tuner 218 and compact disk player 220 coupled to the I/O bus 210.
- the VCR 214, DVD 216 and compact disk player 220 may be single disk or single cassette devices, or alternatively may be multiple disk or multiple cassette devices. They may be independent units or integrated together.
- the media system 200 includes a microphone/speaker system 222, video camera 224 and a wireless I/O control device 226. According to exemplary embodiments, the wireless I/O control device 226 is a 3D pointing device.
- the wireless I/O control device 226 can communicate with the media system 200 using, e.g., an IR or RF transmitter or transceiver. Alternatively, the I/O control device can be connected to the media system 200 via a wire.
- the media system 200 also includes a system controller 228.
- the system controller 228 operates to store and display media system data available from a plurality of media system data sources and to control a wide variety of features associated with each of the system components. As shown in Figure 2, system controller 228 is coupled, either directly or indirectly, to each of the system components, as necessary, through I/O bus 210.
- system controller 228 is configured with a wireless communication transmitter (or transceiver), which is capable of communicating with the system components via IR signals or RF signals. Regardless of the control medium, the system controller 228 is configured to control the media components of the media system 200 via a graphical user interface described below.
- media system 200 may be configured to receive media items from various media sources and service providers.
- media system 200 receives media input from and, optionally, sends information to, any or all of the following sources: cable broadcast 230, satellite broadcast 232 (e.g., via a satellite dish), very high frequency (VHF) or ultra-high frequency (UHF) radio frequency communication of the broadcast television networks 234 (e.g., via an aerial antenna), telephone network 236 and cable modem 238 (or another source of Internet content).
- the media system 200 may be an entertainment system.
- the media components and media sources illustrated and described with respect to Figure 2 are purely exemplary and that media system 200 may include more or fewer of both.
- other types of inputs to the system include AM/FM radio and satellite radio.
- remote devices and interaction techniques between remote devices and user interfaces in accordance with the present invention can be used in conjunction with other types of systems, for example computer systems including, e.g., a display, a processor and a memory system or with various other systems and applications.
- remote devices which operate as 3D pointers are of particular interest for the present specification, although the present invention is not limited to systems including 3D pointers.
- Such devices enable the translation of movement of the device, e.g., linear movement, rotational movement, acceleration or any combination thereof, into commands to a user interface.
- Remote devices which operate as 3D pointers are examples of motion sensing devices which enable the translation of movement, e.g., pointing or gestures, into commands to a user interface.
- An exemplary 3D pointing device 300 is depicted in Figures 3A-3B.
- user movement of the 3D pointing device can be defined, for example, in terms of a combination of x-axis attitude (roll), y-axis elevation (pitch) and/or z-axis heading (yaw) motion of the 3D pointing device 300.
- the 3D pointing device 300 includes two buttons 302 and 304 as well as a scroll wheel 306, although other physical configurations are possible.
- 3D pointing device 300 can be held by a user in front of a display 308 and motion of the 3D pointing device 300 will be sensed by sensors inside the device 300 (described below with respect to Figure 3B) and translated by the 3D pointing device 300 into output which is usable to interact with the information displayed on display 308, e.g., to move the cursor 310 on the display 308.
- rotation of the 3D pointing device 300 about the y-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the y2 axis of the display 308.
- rotation of the 3D pointing device 300 about the z-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the x2 axis of the display 308.
- Motion of the device can be sensed using various sensors, e.g., gyroscopes, angular rotation sensors, accelerometers, magnetometers, etc. It will be appreciated by those skilled in the art that one or more of each or some of these sensors can be employed within device 300. According to one purely illustrative example, two rotational sensors 320 and 322 and one accelerometer 324 can be employed as sensors in 3D pointing device 300 as shown in Figure 3B. Although this example employs inertial sensors, it will be appreciated that other motion sensing devices and systems are not so limited, and examples of other types of sensors are mentioned above.
- the rotational sensors 320, 322 can be 1-D, 2-D or 3-D sensors.
- the accelerometer 324 can, for example, be a 3-axis linear accelerometer, although a 2-axis linear accelerometer could be used by assuming that the device is measuring gravity and mathematically computing the remaining third value. Additionally, the accelerometer(s) and rotational sensor(s) could be packaged together into a single sensor package. Other variations of sensors and sensor packages may also be used in conjunction with these examples.
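- As a concrete illustration of the 2-axis trick described above, a minimal sketch (the function name and the quasi-static assumption are ours, not from the application): if the device is assumed to be measuring only gravity, the missing third axis can be recovered from the constraint ax^2 + ay^2 + az^2 = g^2.

```python
import math

def third_axis_from_gravity(ax, ay, g=9.81):
    """Recover the magnitude of the missing accelerometer axis, assuming the
    device is quasi-static so the only acceleration measured is gravity:
    ax^2 + ay^2 + az^2 = g^2. The sign of az is ambiguous and would have to
    come from other sensors or from continuity with previous samples."""
    az_squared = g * g - ax * ax - ay * ay
    return math.sqrt(max(0.0, az_squared))  # clamp sensor noise that pushes it below 0
```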
- a handheld motion sensing device is not limited to the industrial design illustrated in Figures 3A and 3B, but can instead be deployed in any industrial form factor, another example of which is illustrated as Figure 4.
- the 3D pointing device 400 includes a ring-shaped housing 401, two buttons 402 and 404, as well as a scroll wheel 406 and grip 407, although other exemplary embodiments may include other physical configurations.
- the region 408 which includes the two buttons 402 and 404 and scroll wheel 406 is referred to herein as the "control area" 408, which is disposed on an outer portion of the ring-shaped housing 401. More details regarding this exemplary handheld motion sensing device can be found in U.S. Patent Application Serial No.
- the handheld motion sensing device may also include one or more audio sensing devices, e.g., microphone 410.
- Such motion sensing devices 300, 400 have numerous applications including, for example, usage in the so-called "10 foot" interface between a sofa and a television in the typical living room as shown in Figure 5.
- As the 3D pointing device 400 moves between different positions, that movement is detected by one or more sensors within 3D pointing device 400 and transmitted to the television 520 (or associated system component, e.g., a set-top box (not shown)). Movement of the 3D pointing device 400 can, for example, be translated into movement of a cursor 540 displayed on the television 520 and which is used to interact with a user interface, e.g., the Peak Content Delivery Service.
- the television 520 can also include one or more microphones (two of which 544 and 546 are illustrated in Figure 5).
- input can be provided to the user interface via gesture input, tremor input, voice input, touch input, stylus input, eye tracking input, facial recognition, and user and/or device context, for example.
- the input device can be worn by the user.
- the user interface could be on a television, a computer, a tablet, a cell phone, a device worn by the user, an Augmented Reality or Virtual Reality system, or any other type of computing device or handheld device.
- the user interface is on a handheld device or a device worn by the user, for example, the user could provide input by moving the handheld device.
- the embodiments described herein include, but are not limited to, a content selection input device and content delivery output device which are physically separated from one another.
- 3D pointing device 300 can be used to interact with the display 308 in a number of ways other than (or in addition to) cursor movement, for example it can control cursor fading, volume or media transport (play, pause, fast-forward and rewind). For example, pressing the scroll wheel 306 (the scroll wheel also operating in this case as a switch), could cause the device to switch from one mode to another.
- the device could cause the content to play or pause.
- moving the scroll wheel could allow fast-forwarding or rewinding of the content displayed on the UI.
- the system can be programmed to recognize gestures, e.g., predetermined movement patterns, to convey commands in addition to cursor movement.
- other input commands e.g., a zoom-in or zoom-out on a particular region of a display (e.g., actuated by pressing button 302 to zoom-in or button 304 to zoom-out or by using the scroll wheel 306), may also be available to the user.
- the user may use the scroll wheel on the 3D pointer device in a scrolling mode.
- When operating in scrolling mode, the cursor can be displayed in a default representation, e.g., as an arrow on the user interface. While in scroll mode, rotation of the scroll wheel on the 3D pointing device (or other pointing device if a 3D pointer is not used) has the effect of scrolling the content which is currently being viewed by the user vertically, i.e., up and down.
- the GUI screen (also referred to herein as a "UI view," which terms refer to a currently displayed set of UI objects) seen on television 520 is a home view.
- the home view displays a plurality of applications 522, e.g., "Photos", “Music”, “Recorded”, “Guide”, “Live TV”, “On Demand”, and “Settings”, which are selectable by the user by way of interaction with the user interface via the 3D pointing device 400.
- Such user interactions can include, for example, pointing, scrolling, clicking or various combinations thereof.
- For exemplary pointing, scrolling and clicking interactions which can be used in conjunction with exemplary embodiments of the present invention, the interested reader is directed to U.S.
- Figure 5 illustrates various icons for accessing content
- the method for accessing content as described below could be implemented by selecting any icon or logging into a system to display the initial view.
- other forms of input as discussed above could be used to display a certain UI view, e.g., gestures, voice recognition, etc.
- user interfaces may use, at least in part, zooming techniques for moving between user interface views.
- the next "highest" user interface view could be reached by actuating an object on the Ul view which is one zoom level higher than the currently displayed Ul view.
- zooming and/or panning could be implemented by moving the scroll wheel 306.
- the zooming transition effect can be performed by progressive scaling and displaying of at least some of the UI objects displayed on the current UI view to provide a visual impression of movement of those UI objects away from an observer.
- user interfaces may zoom in in response to user interaction with the user interface which will, likewise, result in the progressive scaling and display of UI objects that provide the visual impression of movement toward an observer. More information relating to zoomable user interfaces can be found in U.S. Patent Application Serial No. 10/768,432, filed on January 30, 2004, entitled "A Control Framework with a Zoomable Graphical User Interface for Organizing, Selecting and Launching Media Items," and U.S. Patent Application Serial No. 09/829,263, filed on April 9, 2001, entitled "Interactive Content Guide for Television Programming," the disclosures of which are incorporated here by reference.
- Movement within the user interface between different user interface views is not limited to zooming.
- Other non-zooming techniques can be used to transition between user interface views.
- panning can be performed by progressive translation and display of at least some of the user interface objects which are currently displayed in a user interface view. This provides the visual impression of lateral movement of those user interface objects to an observer.
- PeakTM as a verb
- PeakTM can be defined as a semantic merge of the noun “peak” meaning “mountain top” and the homophone verb “peek” meaning to see.
- PeakTM as a noun
- PeakTM can be defined as the view of a collection of content from a particular semantic vantage point. For example, if the semantic vantage point (PeakTM) is "1980's Drama Movies," peaking that content will lead to a stream of cover art and related metadata organized across a viewing screen in a pleasing way.
- the content category could also be music-related in which case the art would be music album covers or something else related to items in that category.
- An example usage of PeakTM is "I could go on an ad for a movie (e.g., "Captivating!" - USA Today) so I peaked it."
- Another example is "I was in the mood for a suspense movie so I peaked for one.” Peaking is different from searching because searching helps you find content, but Peaking helps find the content for you.
- PeakTM is a content discovery application and service for multiple platforms including smart TVs, PCs, mobile phones and the like. PeakTM allows the user to discover new content of various types including video, audio, entertainment destinations (e.g., restaurants, theaters). PeakTM's user interface shows images of content on the screen, which vary over time. The images remain on the screen for several seconds and then disappear unless the user interacts with them. There is no static grid that splits the screen; rather, the grid is dynamic where the size and shape of new images that appear vary over time. The content that populates the image rectangles comes from a database that is either local or online. Each image can show, for example, the title of the content and a rating. When the cursor is hovering over an image, that title remains still and does not disappear like the others.
- Figure 6 illustrates an initial explore screen 600 displaying content represented by a plurality of different images 602.
- the initial screen 600 can display a default semantic PeakTM, such as "all" or the user can set the initial screen to display content based on the user's personal preferences and/or settings, the user's viewed content history, or potentially all history of other users of the system.
- the images 602 could be random images. If the user shows no interest in any of the images 602, then the images 602 are automatically replaced with new and different images representing different content, which is displayed to the user in another UI view. The images 602 can all be replaced with new and different images at the same time.
- each of the images 602 can be replaced with new and different images at different intervals, so that one image is replaced at a time with a new image and content.
- the viewer has more time to contemplate the content on the screen.
- This cycle of new UI views with new content and images continues until the user indicates an interest in any of the images or content.
- the new UI views can be cycled to update every few or several seconds automatically.
- the timing of the updates to new UI views could be determined based on a user's preferences and/or settings, or learned by the system based on the user's past browsing history/usage. Further, the user could intentionally pick a different semantic PeakTM. In any event, once a semantic PeakTM is selected, it is remembered in the user's personal list.
- the system may remember the personal history for each user, such as what the user liked and did not like, what content metadata they looked at more, what options the user preferred, etc. The system may accomplish this by determining relevant content, the metadata for that content, and presentation rules, etc. In addition, the system allows for the creation and publishing of a semantic PeakTM to others. As shown in Fig. 6, each of the plurality of images 602 could be displayed indicating the title 604 of the content, as well as indicating a rating 606 of the content. In this example of Fig. 6, each of the plurality of images 602 could display the movie's cover art or a scene or character in the program.
- each of the plurality of images 602 may be displayed in a (generally) rectangular or square shape, where each of the shapes can be of different sizes and at different screen locations.
- rectangular and square shapes are illustrated in Fig. 6, the shapes could be any shape or combination of shapes, e.g., a teardrop and/or a circle, or pyramid shape.
- With a dynamic screen layout, one can display oversized visuals, like having a single movie poster take up one-third or even one-half of the screen 600. Since the images and/or content can cycle out and be replaced by images of varying sizes, the user will be getting the benefit of a large and visually arresting display.
- While Fig. 6 and the other content views discussed below represent programs such as movies or shows, the content could also be advertisements, documents, music, photos, games, recipes, books, travel, online dating, restaurants, shopping, theatre tickets, local events, social media, or job listings, for example.
- the content represented on the display could be related to more than one type of content.
- content and images representing movies, theatre tickets, and advertisements could be displayed simultaneously.
- Snapshots or Snaps: For each display mechanism (such as PeakTM) and content type, a concise visual display of a subset of relevant metadata needs to be constructed. The template shape is then constructed (possibly dynamically as required) whenever that particular content item needs to be shown on the screen.
- An example of a Snap for a restaurant is shown in Figure 7A.
- An example of a Snap for a movie is shown in Figure 7B.
- the Snap allows a coherent presentation of the most relevant information about a particular content item so that the user can instinctively browse across several relevant information facets in parallel. Note that while the particular embodiments in this patent involve using a single Snap construct per content item display, the designer could easily decide to selectively use one of many Snap constructs or even use one that morphs over time in a single display so as to display additional relevant information.
- both the layout and the content displayed on the screen autonomously change as the user watches.
- each particular collection/layout displayed on the UI stays stable for a few to several seconds, before changing and updating to new content and images.
- the user or the system could choose to generally select the older pieces of content to change out at a given instant in time, or can just randomly choose the content to change in order to make the display more visually interesting.
- the view presented in Fig. 6 could change to new content presented in Fig. 8.
- the content has dynamically updated from that of Fig. 6 to present new content in a new UI view 800.
- each of the plurality of images 810 has changed to a different size, different location, and different image.
- the transition from the UI screen shown in Fig. 6 to that of Fig. 8 could happen gradually, i.e., by individually cycling through (replacing) individual images.
- the user has moved the 3D pointing device to hover over image 802 representing "Jurassic World.” Once the user has hovered over one of the plurality of different images, the image could relay additional features to the user.
- For example, a border could be displayed around the image. This border could be of different colors.
- the image 802 could be enlarged relative to its original displayed size to visually convey to the user that the user is hovering over the image for possible further input.
- Other features could be used to make image 802 stand out visually as one in which the user may be interested.
- Because the user has random access to any part of the UI, corners or edges of the image 802, for example, could be linked to additional features.
- the image 802 will then display icons 806, 808.
- One such feature is an icon 806 (PeakTM, for example) on the corner of the content image 802, that if selected will navigate the view to another UI screen that displays additional content represented by a plurality of images, which are similar to the image 802 originally selected.
- just hovering over the image 802 could indicate interest wherein the images 810 are automatically replaced with content related to image 802.
- While FIG. 8 displays an image 802 with two icons 806, 808, the image 802 could display additional or different icons for selection by the user for additional features. Alternatively, the image 802 could display no icons, and instead various input, such as selecting a button on the input device, could provide access and selection of additional features.
- the UI display 900 has been automatically updated to present new content represented by a plurality of different images 908.
- image 902 of Fig. 9 is highlighted to indicate the user's interest in "The Hunger Games: Mockingjay - Part 1," for example.
- a cursor 904 is displayed, where movement of the 3D pointing device corresponds to movement of the cursor 904.
- the cursor 904 is moved over the icon 906 represented by a flag. The user can select the icon 906 by pressing a button on the 3D pointing device, for example, to add this program to the user's short list or watch list.
- a new UI view 1000 is displayed. As shown in Fig. 10, because the user has highlighted image 802 "Jurassic World," by hovering over the image 802, for example, in Fig. 8, the highlighted image 802 "Jurassic World" remains on the screen 1000, while the remaining images 810 of Fig. 8, for example, have updated to reflect new and different images 1004 in Fig. 10. Also, in this UI view 1000, a cursor 1006 is displayed, where movement of the 3D pointing device corresponds to movement of the cursor 1006.
- the cursor 1006 is moved over the icon 1008 represented by the PeakTM symbol.
- the user can select the icon 1008 by pressing a button on the 3D pointing device, for example, to add this program to the user's PeakTM list.
- Fig. 11 illustrates another display view 1100 of the Peak screen, where an icon 1102 (PeakTM icon, for example) is displayed differently than that of Fig. 10, for example.
- the content is dynamically updated to display content represented by different images 1104, wherein the content is similar to "The Man from U.N.C.L.E.," for example.
- While the related content 1104 is presented based on similar movies in this example, the related content that is displayed could be determined from other relationships.
- Another feature illustrated in Fig. 11 is the number of items that are listed in the short list or watch list, as designated by the number next to the icon 1106 represented by a flag.
- Selection of the icon 1106 represented by a flag in Fig. 11, for example, results in a new UI view 1200 of Fig. 12, where the icon 1106 may remain in the new UI view 1200.
- the eight images 1202 representing programs are displayed in the short list or watch list view 1200.
- While the images 1202 are displayed in a grid format where each image is displayed in equal size, the images 1202 can alternatively be displayed in different sizes or shapes and at different locations as discussed above. Further, selection of an image could be accomplished by moving the cursor 1204 over any of the images 1202 and pressing a button, for example, on the 3D pointing device.
- Fig. 13 displays additional details regarding the content, such as the title, date, user rating, parental rating, content time length, and a brief description of the content.
- this UI view 1300 could include an icon 1302 for playing the content.
- While Fig. 13 displays certain details, as well as an icon 1302 for playing the content, the information displayed could list different or additional details and/or additional or different icons.
- a user is not required to access this UI view 1300 to play the desired content. Instead, the user could select the content represented by an image of any of the UI views of Figs. 6 and 8-12 by pressing a button on the 3D pointing device, for example.
- the dynamic content method could also be implemented using a grid display where a user manipulates up, down, left, and right buttons or via other input to move from one part of the grid to another.
- this dynamic content method could be implemented in a text-based system. For example, a user enters a text-based search in a search engine on the Internet. However, the terms the user is entering do not succinctly match the terms related to the desired search results. In this example, the user enters "classic movies," and the search results displays a variety of different types of classic movies, such as those from the 1970's, those from the 1940's, film noir movies, and black and white movies.
- Each content item, e.g., a movie or a restaurant, is described by a set of mixed data attributes.
- the initial step (Part 1) of an algorithm to implement the method is to take the mixed data attributes and produce normalized metric data facets, i.e., facet generation. Those facets are then used in Part 2 (content item selection) to drive the actual selection process, e.g., of the content images which are displayed on the UI, and cycled through, as described previously.
- Part 2: content item selection.
- This section will describe how that process works for each common data type, based on one-to-one mappings.
- the method could further include many-to-one mapping. There are likely some useful facets that are inherently formed from several attributes at once. There is nothing inherent in this architecture that prohibits that or makes that unwieldy.
- the desired output range is 0 to 1.
- the perspective mapping function in this case maps the raw attribute value into that range. For ordinal data, the data is ordered by rank. Assuming that the rank has a meaning in the sense that the third element is more similar to the first element than to the tenth element, and that there is always at least one element that is first in the list (has value 1 indicating that it is first), the mapping is again fairly straight-forward: in the perspective mapping proposed here, the facet value of 1 is assigned to the top rank item and 0 to the bottom rank item.
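- The mapping formulas themselves are elided in this excerpt. A minimal sketch consistent with the surrounding description (min-max normalization for metric data, and a linear rank mapping that sends the top rank to a facet value of 1 and the bottom rank to 0) might look like the following; the function names are ours:

```python
def metric_facet(value, lo, hi):
    """Min-max normalize a metric attribute into the facet range [0, 1]."""
    if hi == lo:  # degenerate attribute: every item has the same value
        return 0.0
    return (value - lo) / (hi - lo)

def ordinal_facet(rank, n_items):
    """Map a 1-based rank onto [0, 1]: rank 1 -> 1.0, rank n_items -> 0.0."""
    if n_items == 1:
        return 1.0
    return (n_items - rank) / (n_items - 1)

# Example: an item ranked 3rd out of 10 gets facet value (10 - 3) / 9 ~= 0.78.
```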
- For categorical data, the perspective mappings required are more complex. Furthermore, multiple perspectives might be meaningful for any given attribute. The following are some examples of perspective mappings based on categorical data.
- One category could be the year a program was released.
- the year a film was made looks like a metric (and technically is one) but, from a cognitive perspective, it behaves more like a category.
- the real attribute of interest is whether or not a film is "modern” or "classic” or “early,” for example.
- the perspective mappings for those attributes could be as follows:
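- The actual mappings are not reproduced here. One plausible sketch treats each era as a soft category; the year boundaries and the linear ramps between them are purely illustrative assumptions:

```python
def year_facets(year):
    """Map a release year onto "modern"/"classic"/"early" facet values in [0, 1].

    The era boundaries and the soft ramps between them are illustrative
    assumptions; the application does not specify them.
    """
    def ramp(x, start, end):
        # Linear ramp from 0 at `start` to 1 at `end`, clamped to [0, 1].
        return min(1.0, max(0.0, (x - start) / (end - start)))

    return {
        "early":   1.0 - ramp(year, 1940, 1960),
        "classic": ramp(year, 1940, 1960) * (1.0 - ramp(year, 1980, 2000)),
        "modern":  ramp(year, 1980, 2000),
    }

# year_facets(1930) -> early 1.0; year_facets(1970) -> classic 1.0;
# year_facets(2010) -> modern 1.0; years near a boundary get blended values.
```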
- Another category could be the genre of the program.
- Film genre is clearly categorical and is not metric at all. In fact, by itself and with no preconceptions, it is virtually impossible to say whether a "horror" movie is more similar to a "romance” than to a "comedy.” Since the system is seeking to understand the similarity (or distance) between two movies, for example, the system could have some mechanism or prism through which the system can determine similarity. For example, if the system knew the user's goal was emotional diversion, then fantasy might be quite similar to drama to achieve that goal. If the user's goal was to be inspired, then action might be quite similar to drama. The user's goal gives the system the necessary perspective needed to judge similarity of one genre to another.
- Machine Learning could be used to iterate these mappings.
- An Expert or Oracle of Delphi approach could be used to determine an original mapping.
- Another possibility is to leverage text analysis of reviews or movie advertisements or plot descriptions to determine a good mapping.
- A Python dict (dictionary) is set forth below:
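- The dictionary itself does not survive in this excerpt. An illustrative sketch of what a goal-conditioned genre-similarity dict might look like follows; every key and value below is hypothetical:

```python
# Hypothetical goal-conditioned genre similarity, keyed first by the user's
# goal and then by an unordered genre pair. The [0, 1] values are invented
# for illustration only.
GENRE_SIMILARITY = {
    "emotional_diversion": {
        frozenset(["fantasy", "drama"]): 0.8,
        frozenset(["horror", "comedy"]): 0.2,
    },
    "inspiration": {
        frozenset(["action", "drama"]): 0.7,
        frozenset(["romance", "horror"]): 0.1,
    },
}

def genre_similarity(goal, genre_a, genre_b):
    """Look up how similar two genres are, given the user's current goal."""
    if genre_a == genre_b:
        return 1.0
    return GENRE_SIMILARITY.get(goal, {}).get(frozenset([genre_a, genre_b]), 0.0)
```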
- Another category could be actors/actresses. However, just based on their identification number or name in the system, one has virtually no ability to determine how similar two actors are to each other. So, for actors too, perspective mapping is needed.
- One possible perspective could be the distribution of film genres in which the actors/actresses have starred.
- the primary similarity metric would then be sim(i, j) = Σk Hi(k) * Hj(k), where Hi(k) is the fraction of actor i's films in genre k. In other words, it is the dot product of the two histograms and is sometimes called the cosine metric.
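- A short sketch of this histogram dot product (the "cosine metric," as the text calls it), with hypothetical filmographies:

```python
def actor_similarity(hist_a, hist_b):
    """Dot product of two normalized genre histograms.

    Each histogram maps genre -> fraction of the actor's films in that genre,
    so the values of each histogram sum to 1.
    """
    return sum(hist_a[g] * hist_b.get(g, 0.0) for g in hist_a)

# Hypothetical filmographies:
stallone = {"action": 0.8, "drama": 0.15, "comedy": 0.05}
streep   = {"drama": 0.5, "comedy": 0.3, "romance": 0.2}
print(actor_similarity(stallone, streep))  # 0.8*0 + 0.15*0.5 + 0.05*0.3 = 0.09
```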
- Many other perspectives are possible such as one that determines whether an actor is dominant in a particular genre (e.g., Sylvester Stallone with action) or whether an actor is a broad character actor with no particular preference (e.g., Meryl Streep).
- Nf: the number of facets that are to be used for computing similarity.
- Wvi: a weight vector for view vi of the relative importance of each facet in comparing similarity metrics between two content items.
- Nv: the number of views used to select the next set of content items to display.
- Np: the number of elements in Gp.
- To compute the similarity between two content items, the system must first decide how to compute the similarity of a single facet from two content items. Since every facet is normalized to a range of [0, 1], the straight-forward method would be to use the Manhattan or Euclidean measure. In equation form, this would mean either |fk(a) - fk(b)| (Manhattan) or (fk(a) - fk(b))^2 (Euclidean) for facet k of content items a and b.
- the weight vector wk allows the algorithm and system to adjust the relative importance of each of the facets to the overall computation.
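- Putting the per-facet measure and the weight vector wk together, a minimal sketch of the overall similarity; converting per-facet distance into per-facet similarity as 1 minus the (squared) difference, and aggregating as a weighted average, are our assumptions about one reasonable reading:

```python
def content_similarity(facets_a, facets_b, weights, measure="manhattan"):
    """Weighted similarity between two content items over shared facets.

    Each facet value is in [0, 1], so 1 - |difference| is a per-facet
    similarity; `weights` plays the role of the weight vector w_k in the text.
    """
    num, den = 0.0, 0.0
    for k, w in weights.items():
        d = abs(facets_a[k] - facets_b[k])
        if measure == "euclidean":
            d = d * d
        num += w * (1.0 - d)  # per-facet similarity, weighted
        den += w
    return num / den if den else 0.0
```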
- One way to implement this method is to process the user's reaction to the particular PeakTM session as the base for deciding which new content to suggest.
- user preferences and/or group behavior could be learned over time.
- the base is the content that has been presented to the user in this session, indicated as the Gp content.
- the system can find content that is most appropriate to show based on the user's behavior to date.
- A goal of PeakTM and the dynamic content delivery method is to avoid requiring the user to do anything, i.e., to provide passive content navigation where the system generates the inputs rather than putting the cognitive load on the user to continuously refine, e.g., directed queries.
- the system can learn when an item is displayed, but no action by the user is taken.
- the system can learn a bit more.
- the system assumes that more interaction with the item indicates more interest by the user (except for the case when the user explicitly downgrades the film). For example, when the user acts on one film (image or item of interest) in the layout, the other images or items of interest in the layout are deemed to be of less interest.
- An example for assessing user interaction could be to assign α values as shown in Table 3 below.
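- Table 3 itself is not reproduced in this excerpt, so the α values below are invented for illustration. The sketch just encodes the rule stated above: the acted-on item gets the α for its interaction type, while the other items in the layout are scored as passively ignored:

```python
# Hypothetical alpha values per interaction type (Table 3 is not reproduced
# in this excerpt; these numbers are illustrative only).
ALPHA = {
    "no selection of any item on screen": -0.1,
    "hover over item":                     0.3,
    "open item details":                   0.6,
    "add item to watch list":              0.8,
    "play item":                           1.0,
    "explicit downgrade":                 -1.0,
}

def score_layout(layout_items, acted_item, interaction):
    """Assign an interest score to each displayed item.

    The acted-on item receives the alpha for its interaction type; per the
    text, the other items in the layout are deemed to be of less interest.
    """
    passive = ALPHA["no selection of any item on screen"]
    return {item: (ALPHA.get(interaction, 0.0) if item == acted_item else passive)
            for item in layout_items}
```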
- a user may be presented with a UI view such as that of Figure 6.
- the "type of interaction" by the user is "no selection of any item on screen”
- some or all of the plurality of images 602 are automatically cycled out and replaced with a new set of plurality of images 1400 as set forth in Figure 14.
- the plurality of images 1402 are of different shapes, different locations, and different images than that of the plurality of images 602 of Figure 6.
- each View is expressed in terms of a weight vector, for example.
- the View weight vector sets the relative importance of each of the facets in determining similarity between candidate content and the content history. In essence, then, the View weight vector can be thought as a basis vector in the overall facet space.
- the system may constantly recalculate the rankings and resort the items to be presented, based on interaction or lack thereof. If size is used to represent popularity, the α setting may not affect the size of the image.
- the α setting of a non-interacted item that is being replaced would determine what item (or set of items) gets displayed next, but the item's popularity relative to the general populace is orthogonal to that - your top item may or may not be popular with the crowd. For example, non-interaction with an item uses the α setting to re-score all items with metadata overlap and re-rank everything. That new ranking can be used to determine size when the new item cycles in.
- Other visualizations could do things very differently in how they choose to display size.
- display size determines the α setting, but the α setting may or may not determine the display size of future items.
- the content provider could determine the size of an image based on whether the content provider desires to promote certain content. Hence, if certain content is promoted, that image for the content item may be of a larger size than other images displayed to grab the viewer's attention.
- the group of the similarity metrics for a given View Vj is defined as follows:
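- The defining formula is elided here. One reading consistent with the notation above: score every candidate item by its similarity (under the view's weight vector) to the session history Gp, and keep the top scorers as that view's contribution to the next layout group L. The mean-over-history aggregation and the function names are assumptions; `content_similarity` is the sketch given earlier:

```python
def view_similarity_group(candidates, history, view_weights, n_keep):
    """Rank candidate facet dicts by mean similarity to the session history
    G_p under one view's weight vector W_v; keep the top n_keep items for
    the next layout group L."""
    def score(facets):
        if not history:
            return 0.0
        return sum(content_similarity(facets, h, view_weights)
                   for h in history) / len(history)
    return sorted(candidates, key=score, reverse=True)[:n_keep]
```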
- the content in the layout group L is then shown/presented to the user. Based on the user's reactions, the next iteration of interesting content is prepared for the user as the next iteration of layout group L.
- the content set Gp is updated to include this set of content for this session of PeakTM operation.
- Figure 15 illustrates a method of one of the exemplary embodiments of the invention.
- the UI view is presented displaying content associated with a plurality of different images.
- input is determined. If, at step S102, it is determined that input is received where a user indicates an interest in an image by interacting with that image, then at step S106 that image of interest remains on the screen, the remaining images are replaced by images that represent content related to the image in which the user has expressed an interest, and the result is displayed in a new UI view at S100. Alternatively, if, at step S102, it is determined that no input has been received or there has been no interaction by the user, then the UI view is updated at S104 to update all of the images and replace them with new images.
- At step S108, it is determined if a selection has been made by the user with respect to one of the images. If an image has been selected by the user, then the content is displayed at step S110. If no image has been selected by the user at S108, then all of the images are updated again at S104 and displayed in a new UI view at S100.
- Another method is shown in Fig. 16, which sets forth a method for dynamically displaying content to a user on a graphical user interface, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
- Another method is shown in Fig. 17, which sets forth a method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images; receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images.
- the PeakTM Content Delivery Service can be implemented using one or more processors 1800 that are connected to one or more input devices 1802 and one or more output devices 1804 as shown in Figure 18.
- Processor(s) 1800 are thus specially programmed to present PeakTM Content Delivery Service user interface screens which change over time as described above, both randomly and in response to a user's random access (pointing) cursor movements and/or button selections of content elements, flags and/or PeakTM icons as described above.
- input device 1802 could thus be (or include) a 3D pointing device and output device 1804 could thus be (or include) a television, AR/VR device, mobile phone or the like.
- processor(s) 1800 could reside within the television itself or a set-top box or another device connected to the television, like a game console or the user's smart phone. If used in a tablet, the processor(s) 1800, input device(s) 1802, and output device(s) 1804 could all reside within a single housing and be portable.
- elements of the Peak Content Delivery Service could be pushed to the local system 1800, 1802, and 1804 from a remotely located server 1806 via, e.g., the Internet or a cable or satellite media connection.
- the PeakTM Content Delivery Service is explicitly imagined as a multiuser, multi-platform system with the ability to learn relationships and interests across a collection of users and content.
- the PeakTM Content Delivery Service could of course be implemented for just a single user with the learning then restricted to that particular user.
- Figure 19 illustrates a brief overview of the PeakTM Content Delivery Service method 1900.
- the metadata from various Content Sources 1902 as well as Global Context 1904 such as weather drives the system's User Interface operation shown in the right-hand side of the diagram.
- the loop starts with an auto-generated query 1916 of the metadata 1906 for the first set of content to show the user.
- the appropriate content is then selected and ordered 1910 for presentation (a group that here is referred to as the Layout group).
- the machine learning 1912 determines the views and perspectives 1914 presented to the user.
- a Snap 1908 is formed for display to the user.
- the last step involves the user either deliberately requesting more information on particular displayed content or simply waiting for something more interesting to be displayed.
- a new auto-generated query 1916 is formed and the loop begins again. The result is a mostly passive, guided journey through content of potential interest to the user - a journey that is both rewarding and fun.
- a determination or indication of a user's interest in a particular image (or other discoverable content) can be based on one or more inputs or actions including, but not limited to, cursor position, cursor movement, remote control input (e.g., button press, button release, scroll wheel movement, OFN detections), voice input, eye-tracking, selection, hovering, focus placement on the image, etc.
- a user's lack of interest may also be determined or indicated by a lack of one or more of these inputs.
- "interest” can be valued at different levels based upon a number or quality of inputs made by the user with respect to a given image (or other discoverable content).
- general context can enhance the significance of user action or inaction. For example, if a particular content item takes up half the screen and the user still does not indicate interest, that indicates a higher level of disinterest than if the content item only took up 1/10th of the screen. Conversely, if a user deliberately selects a content item even though its visual representation (Snap) is the smallest on the screen, that indicates a higher level of interest than if the content item took up half the screen. Similarly, a user's pattern of interest compared with general popularity can indicate a different level of interest. If the user selects an item on screen that is among the least popular shown, that is more significant than if the user picks one that is the most popular shown.
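- A sketch of how those contextual heuristics might modulate a raw α score; the specific scaling factors are illustrative assumptions, but the directions follow the text (ignoring a large item counts more, deliberately picking a small or unpopular item counts more):

```python
def contextual_interest(alpha, screen_fraction, popularity_rank, n_shown):
    """Adjust a raw interest score `alpha` by display context.

    screen_fraction: portion of the screen the item's Snap occupied (0..1).
    popularity_rank: 1 = most popular item currently shown, n_shown = least.
    All scaling constants are invented for illustration.
    """
    if alpha < 0:
        # Ignoring a big item signals stronger disinterest than ignoring a small one.
        return alpha * (1.0 + screen_fraction)
    # Deliberately selecting a small Snap, or an unpopular one, signals stronger interest.
    size_boost = 2.0 - screen_fraction
    unpopularity = (popularity_rank - 1) / max(1, n_shown - 1)  # 0 = most popular shown
    return alpha * size_boost * (1.0 + unpopularity)
```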
- Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by one or more processors executing sequences of instructions contained in a memory device. Such instructions may be read into the memory device from other computer-readable mediums such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wire circuitry may be used in place of or in combination with software instructions to implement the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Systems and methods according to the present invention describe dynamically discovering and displaying content, represented by a plurality of different images on a graphical user interface. Based on the interaction of the user, whether an explicit interaction or no interaction at all, content is updated and displayed to a user for further manipulation.
Description
CONTENT DELIVERY SYSTEMS AND METHODS
RELATED APPLICATIONS
[0001] This application is related to, and claims priority from, U.S. Provisional Patent Application No. 62/274,989, entitled "Content Delivery Systems and Methods," to Daniel S. Simpkins et al., filed on January 5, 2016, the disclosure of which is
incorporated here by reference.
BACKGROUND
[0002] This application describes, among other things, a method and system for dynamically displaying, discovering, scanning and interacting with content across a wide variety of platforms.
[0003] Technologies associated with the communication of information have evolved rapidly over the last several decades. Television, cellular telephony, the Internet and optical communication techniques (to name just a few things) combine to inundate consumers with available information and entertainment options. Taking television as an example, the last three decades have seen the introduction of cable television service, satellite television service, pay-per-view movies, and video-on-demand. Whereas television viewers of the 1960s could typically receive perhaps four or five over-the-air TV channels on their television sets, today's TV watchers have the opportunity to select from hundreds, thousands, and potentially millions of channels of shows and information. Streaming technology, which can be used on a television by just plugging in a device, or by any device with a connection to the Internet, for example, gives a viewer thousands of programs from which to choose.
[0004] The technological ability to provide so much information and content to end users provides both opportunities and challenges to system designers and service providers. One challenge is that while end users typically prefer having more choices rather than fewer, this preference is counterweighted by their desire that the selection
process be both fast and simple. Unfortunately, the development of the systems and interfaces by which end users access media items has resulted in selection processes which are neither fast nor simple. Consider again the example of television programs. When television was in its infancy, determining which program to watch was a relatively simple process primarily due to the small number of choices. One would consult a printed guide which was formatted, for example, as a series of columns and rows which showed the correspondence between (1) nearby television channels, (2) programs being transmitted on those channels and (3) date and time. The television was tuned to the desired channel by adjusting a tuner knob and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as "channel surfing" whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
[0005] Despite the fact that the number of channels and amount of viewable content has dramatically increased, the generally available user interface, control device options and frameworks for televisions have not changed much over the last 30 years. Printed guides are still the most prevalent mechanism for conveying programming information. The multiple button remote control with up and down arrows is still the most prevalent channel/content selection mechanism. The reaction of those who design and implement the TV user interface to the increase in available media content has been a straightforward extension of the existing selection procedures and interface objects. Thus, the number of rows in the printed guides has been increased to accommodate more channels. The number of buttons on the remote control devices has been increased to support additional functionality and content handling, e.g., as shown in Figure 1. However, this approach has significantly increased both the time required for a viewer to review the available information and the complexity of actions required to implement a selection. Arguably, the cumbersome nature of the existing interface has hampered commercial implementation of some services, e.g., video-on-demand, since consumers are resistant to new services that will add complexity to an interface that they view as already too slow and complex.
[0006] In addition to increases in bandwidth and content, the user interface bottleneck problem is being exacerbated by the aggregation of technologies.
Consumers are reacting positively to having the option of buying integrated systems rather than a number of segregable components. An example of this trend is the combination television/VCR/DVD in which three previously independent components are frequently sold today as an integrated unit. This trend is likely to continue, potentially with an end result that most if not all of the communication devices currently found in the household will be packaged together as an integrated unit, e.g., a
television/VCR/DVD/internet access/radio/stereo unit. Even those who continue to buy separate components will likely desire seamless control of, and interworking between, the separate components. With this increased aggregation comes the potential for more complexity in the user interface. For example, when so-called "universal" remote units were introduced, e.g., to combine the functionality of TV remote units and VCR remote units, the number of buttons on these universal remote units was typically more than the number of buttons on either the TV remote unit or VCR remote unit individually. This added number of buttons and functionality makes it very difficult to control anything but the simplest aspects of a TV or VCR without hunting for exactly the right button on the remote. Many times, these universal remotes do not provide enough buttons to access many levels of control or features unique to certain TVs. In these cases, the original device remote unit is still needed, and the original hassle of handling multiple remotes remains due to user interface issues arising from the complexity of
aggregation. Some remote units have addressed this problem by adding "soft" buttons that can be programmed with the expert commands. These soft buttons sometimes have accompanying LCD displays to indicate their action. These too have the flaw that they are difficult to use without looking away from the TV to the remote control. Yet another flaw in these remote units is the use of modes in an attempt to reduce the number of buttons. In these "moded" universal remote units, a special button exists to
select whether the remote should communicate with the TV, DVD player, cable set-top box, VCR, etc. This causes many usability issues, including sending commands to the wrong device and forcing the user to look at the remote to make sure that it is in the right mode, and it does not provide any simplification of the integration of multiple devices. The most advanced of these universal remote units provide some integration by allowing the user to program sequences of commands to multiple devices into the remote. This is such a difficult task that many users hire professional installers to program their universal remote units.
[0007] Some attempts have also been made to modernize the screen interface between end users and media systems. However, these attempts typically suffer from, among other drawbacks, an inability to easily scale between large collections of media items and small collections of media items. For example, interfaces which rely on lists of items may work well for small collections of media items, but are tedious to browse for large collections of media items. Interfaces which rely on hierarchical navigation (e.g., tree structures) may be speedier to traverse than list interfaces for large collections of media items, but are not readily adaptable to small collections of media items. Additionally, users tend to lose interest in selection processes wherein the user has to move through three or more layers in a tree structure. For all of these cases, current remote units make this selection process even more tedious by forcing the user to repeatedly depress the up and down buttons to navigate the list or hierarchies. When selection skipping controls are available such as page up and page down, the user usually has to look at the remote to find these special buttons or be trained to know that they even exist. Accordingly, organizing frameworks, techniques and systems which simplify the control and screen interface between users and media systems as well as accelerate the selection process, while at the same time permitting service providers to take advantage of the increases in available bandwidth to end user equipment by facilitating the supply of a large number of media items and new services to the user have been proposed in U.S. Patent Application Serial No. 10/768,432, filed on January 30, 2004, entitled "A Control Framework with a Zoomable Graphical User Interface for
Organizing, Selecting and Launching Media Items", the disclosure of which is incorporated here by reference.
[0008] Of particular interest for this specification are the remote devices usable to interact with such frameworks, as well as other applications, systems and methods for these remote devices for interacting with such frameworks. As mentioned in the above-incorporated application, various different types of remote devices can be used with such frameworks including, for example, trackballs, "mouse"-type pointing devices, light pens, etc. However, another category of remote devices which can be used with such frameworks (and other applications) is 3D pointing devices with scroll wheels. The phrase "3D pointing" is used in this specification to refer to the ability of an input device to move in three (or more) dimensions in the air in front of, e.g., a display screen, and the corresponding ability of the user interface to translate those motions directly into user interface commands, e.g., movement of a cursor on the display screen. The transfer of data between the 3D pointing device and another device may be performed wirelessly or via a wire connecting the two devices. Thus "3D pointing" differs from, e.g., conventional computer mouse pointing techniques which use a surface, e.g., a desk surface or mousepad, as a proxy surface from which relative movement of the mouse is translated into cursor movement on the computer display screen. An example of a 3D pointing device can be found in U.S. Patent Application No. 11/119,663, the disclosure of which is incorporated here by reference.
[0009] In addition, because users can access thousands of items, users can easily become frustrated when searching for an item of interest. As such, users may lose patience because of the extra time required for searching content and end up choosing a program, for example, that may not necessarily be one matched to the user's specific interests. This is especially true for users who may not know exactly what they want or do not know the exact terms to be used in searching for content. Further, these problems affect not just content viewing on a television, but across other platforms as well such as mobile phones, personal computers, web devices, AR/VR and the like.
[0010] Therefore, a content discovery method which dynamically displays and updates content by learning about a user's current requirements and overall preference, is needed to overcome the drawbacks discussed above.
SUMMARY
[0011] Systems and methods according to the present invention describe dynamically discovering and displaying content, represented by a plurality of different images on a graphical user interface. Based on the interaction of the user, whether an explicit interaction or no interaction at all, content is updated and displayed to a user for further manipulation.
[0012] According to an exemplary embodiment of the invention, a method is described for dynamically displaying content to a user on a graphical user interface, wherein the content is represented by a plurality of different images, comprising:
displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
[0013] According to another exemplary embodiment of the invention, a method is described for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images; receiving input via at least one sensor in a 3D
pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
[0014] According to another exemplary embodiment of the invention, a system is described for dynamically displaying content to a user, comprising: a 3D pointing device; a device configured to display a graphical user interface; a processor associated with the device and configured to receive inputs for dynamically displaying the content, wherein the content is represented by a plurality of different images, the processor configured to: display the plurality of different images on the graphical user interface; dynamically update the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images (where all of the images need not necessarily change simultaneously); receive input via at least one sensor in the 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface of the device; determine, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images;
dynamically update the displayed content on the graphical user interface of the device to include the one of the plurality of different images and change a remainder of the
plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; select another one of the plurality of different images; and display the content represented by the another one of the plurality of different images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings illustrate exemplary embodiments of the present invention, wherein:
[0016] FIG. 1 depicts a conventional remote control unit for a media system;
[0017] FIG. 2 depicts an exemplary media system in which exemplary
embodiments of the present invention can be implemented;
[0018] FIGS. 3A and 3B show a 3D pointing device according to an exemplary embodiment of the present invention;
[0019] FIG. 4 depicts another exemplary 3D pointing device;
[0020] FIG. 5 illustrates a user employing a 3D pointing device to provide input to a user interface on a television according to an exemplary embodiment of the present invention;
[0021] FIG. 6 depicts an initial user interface displaying a plurality of different images;
[0022] FIGS. 7A and 7B are examples of a Snap or Snapshot visualization of a content item;
[0023] FIG. 8 depicts another user interface displaying a plurality of different images;
[0024] FIG. 9 depicts another user interface displaying a plurality of different images;
[0025] FIG. 10 depicts a further user interface displaying a plurality of different images;
[0026] FIG. 11 depicts a further user interface displaying a plurality of different images;
[0027] FIG. 12 depicts a further user interface displaying a plurality of different images in the watch list;
[0028] FIG. 13 depicts a further user interface displaying additional detail regarding one of the plurality of different images;
[0029] FIG. 14 depicts a further user interface displaying a plurality of different images;
[0030] FIG. 15 depicts a method for dynamically displaying and updating content according to one of the embodiments herein;
[0031] FIG. 16 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
[0032] FIG. 17 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
[0033] FIG. 18 depicts a content delivery system in which exemplary
embodiments of the present invention can be implemented; and
[0034] FIG. 19 depicts a brief overview of the method.
DETAILED DESCRIPTION
[0035] The following detailed description of the invention refers to the
accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
[0036] In order to provide some context for this discussion, an exemplary aggregated media system 200 in which the present invention can be implemented will first be described with respect to Figure 2. Those skilled in the art will appreciate, however, that the present invention is not restricted to implementation in this type of media system and that more or fewer components can be included therein. Therein, an input/output (I/O) bus 210 connects the system components in the media system 200 together. The I/O bus 210 represents any of a number of different mechanisms and techniques for routing signals between the media system components. For example, the I/O bus 210 may include an appropriate number of independent audio "patch" cables that route audio signals, coaxial cables that route video signals, two-wire serial lines or infrared or radio frequency transceivers that route control signals, optical fiber or any other routing mechanisms that route other types of signals.
[0037] In this exemplary embodiment, the media system 200 includes a television/monitor 212, a video cassette recorder (VCR) 214, digital video disk (DVD) recorder/playback device 216, audio/video tuner 218 and compact disk player 220 coupled to the I/O bus 210. The VCR 214, DVD 216 and compact disk player 220 may be single disk or single cassette devices, or alternatively may be multiple disk or multiple cassette devices. They may be independent units or integrated together. In addition, the media system 200 includes a microphone/speaker system 222, video camera 224 and a wireless I/O control device 226. According to exemplary
embodiments of the present invention, the wireless I/O control device 226 is a 3D pointing device. The wireless I/O control device 226 can communicate with the media system 200 using, e.g., an IR or RF transmitter or transceiver. Alternatively, the I/O control device can be connected to the media system 200 via a wire.
[0038] The media system 200 also includes a system controller 228. According to one exemplary embodiment of the present invention, the system controller 228 operates to store and display media system data available from a plurality of media system data sources and to control a wide variety of features associated with each of the system components. As shown in Figure 2, system controller 228 is coupled, either directly or indirectly, to each of the system components, as necessary, through I/O bus 210. In one exemplary embodiment, in addition to or in place of I/O bus 210, system controller 228 is configured with a wireless communication transmitter (or transceiver), which is capable of communicating with the system components via IR signals or RF signals. Regardless of the control medium, the system controller 228 is configured to control the media components of the media system 200 via a graphical user interface described below.
[0039] As further illustrated in Figure 2, media system 200 may be configured to receive media items from various media sources and service providers. In this exemplary embodiment, media system 200 receives media input from and, optionally, sends information to, any or all of the following sources: cable broadcast 230, satellite broadcast 232 (e.g., via a satellite dish), very high frequency (VHF) or ultra-high frequency (UHF) radio frequency communication of the broadcast television networks 234 (e.g., via an aerial antenna), telephone network 236 and cable modem 238 (or another source of Internet content). The media system 200 may be an entertainment system. Those skilled in the art will appreciate that the media components and media sources illustrated and described with respect to Figure 2 are purely exemplary and that media system 200 may include more or fewer of both. For example, other types of inputs to the system include AM/FM radio and satellite radio.
[0040] More details regarding this exemplary media system and frameworks associated therewith can be found in the above-incorporated by reference U.S. Patent Application "A Control Framework with a Zoomable Graphical User Interface for
Organizing, Selecting and Launching Media Items". Alternatively, remote devices and interaction techniques between remote devices and user interfaces in accordance with
the present invention can be used in conjunction with other types of systems, for example computer systems including, e.g., a display, a processor and a memory system or with various other systems and applications.
[0041] As mentioned in the Background section, remote devices which operate as 3D pointers are of particular interest for the present specification, although the present invention is not limited to systems including 3D pointers. Such devices enable the translation of movement of the device, e.g., linear movement, rotational movement, acceleration or any combination thereof, into commands to a user interface.
[0042] Remote devices which operate as 3D pointers are examples of motion sensing devices which enable the translation of movement, e.g., pointing or gestures, into commands to a user interface. An exemplary 3D pointing device 300 is depicted in Figures 3A-3B. Therein, user movement of the 3D pointing device can be defined, for example, in terms of a combination of x-axis attitude (roll), y-axis elevation (pitch) and/or z-axis heading (yaw) motion of the 3D pointing device 300. In the example of Figure 3A, the 3D pointing device 300 includes two buttons 302 and 304 as well as a scroll wheel 306, although other physical configurations are possible. In this example, 3D pointing device 300 can be held by a user in front of a display 308 and motion of the 3D pointing device 300 will be sensed by sensors inside the device 300 (described below with respect to Figure 3B) and translated by the 3D pointing device 300 into output which is usable to interact with the information displayed on display 308, e.g., to move the cursor 310 on the display 308. For example, rotation of the 3D pointing device 300 about the y-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the y2 axis of the display 308. Likewise, rotation of the 3D pointing device 300 about the z-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the x2 axis of the display 308.
[0043] Numerous different types of sensors can be employed within device 300 to sense its motion, e.g., gyroscopes, angular rotation sensors, accelerometers, magnetometers, etc. It will be appreciated by those skilled in the art that one or more of
each or some of these sensors can be employed within device 300. According to one purely illustrative example, two rotational sensors 320 and 322 and one accelerometer 324 can be employed as sensors in 3D pointing device 300 as shown in Figure 3B. Although this example employs inertial sensors, it will be appreciated that other motion sensing devices and systems are not so limited, and examples of other types of sensors are mentioned above. The rotational sensors 320, 322 can be 1-D, 2-D or 3-D sensors. The accelerometer 324 can, for example, be a 3-axis linear accelerometer, although a 2-axis linear accelerometer could be used by assuming that the device is measuring gravity and mathematically computing the remaining third value. Additionally, the accelerometer(s) and rotational sensor(s) could be packaged together into a single sensor package. Other variations of sensors and sensor packages may also be used in conjunction with these examples.
[0044] A handheld motion sensing device is not limited to the industrial design illustrated in Figures 3A and 3B, but can instead be deployed in any industrial form factor, another example of which is illustrated in Figure 4. In the example of Figure 4, the 3D pointing device 400 includes a ring-shaped housing 401, two buttons 402 and 404, as well as a scroll wheel 406 and grip 407, although other exemplary embodiments may include other physical configurations. The region 408 which includes the two buttons 402 and 404 and scroll wheel 406 is referred to herein as the "control area" 408, which is disposed on an outer portion of the ring-shaped housing 401. More details regarding this exemplary handheld motion sensing device can be found in U.S. Patent Application Serial No. 11/480,662, entitled "3D Pointing Devices," filed on July 3, 2006, the disclosure of which is incorporated here by reference. In accordance with further embodiments described below, the handheld motion sensing device may also include one or more audio sensing devices, e.g., microphone 410.
[0045] A number of permutations and variations relating to 3D pointing devices can be implemented in systems according to exemplary embodiments of the present invention. The interested reader is referred to U.S. Patent Application Serial No.
11/119,663, entitled (as amended) "3D Pointing Devices and Methods," filed on May 2, 2005, U.S. Patent Application Serial No. 11/119,719, entitled (as amended) "3D Pointing Devices with Tilt Compensation and Improved Usability," also filed on May 2, 2005, U.S. Patent Application Serial No. 11/119,987, entitled (as amended) "Methods and Devices for Removing Unintentional Movement in 3D Pointing Devices," also filed on May 2, 2005, and U.S. Patent Application Serial No. 11/119,688, entitled "Methods and Devices for Identifying Users Based on Tremor," also filed on May 2, 2005, the disclosures of which are incorporated here by reference, for more details regarding exemplary 3D pointing devices which can be used in conjunction with exemplary embodiments of the present invention.
[0046] Such motion sensing devices 300, 400 have numerous applications including, for example, usage in the so-called "10 foot" interface between a sofa and a television in the typical living room as shown in Figure 5. Therein, as the 3D pointing device 400 moves between different positions, that movement is detected by one or more sensors within 3D pointing device 400 and transmitted to the television 520 (or associated system component, e.g., a set-top box (not shown)). Movement of the 3D pointing device 400 can, for example, be translated into movement of a cursor 540 displayed on the television 520 and which is used to interact with a user interface, e.g., the Peak Content Delivery Service. Additionally, in support of embodiments described below wherein audio sensing is performed in conjunction with motion sensing, the television 520 can also include one or more microphones (two of which 544 and 546 are illustrated in Figure 5).
[0047] In addition to the 3D pointing devices as described in Figs. 3A-3B and 4, input can be provided to the user interface via gesture input, tremor input, voice input, touch input, stylus input, eye tracking input, facial recognition, and user and/or device context, for example. Further, the input device can be worn by the user. Moreover, the user interface could be on a television, a computer, a tablet, a cell phone, a device worn by the user, an Augmented Reality or Virtual Reality system, or any other type of computing device or handheld device. When the user interface is on a handheld device or a device worn by the user, for example, the user could provide input by moving the
handheld device. Thus, the embodiments described herein include, but are not limited to, a content selection input device and content delivery output device which are physically separated from one another.
[0048] Referring again to Figure 3A, an exemplary relationship between movement of the 3D pointing device 300 and corresponding cursor movement on a user interface will now be described. In addition, or as an alternative, to moving a cursor on the user interface, the device could move any object on the user interface. Likewise, movement of the 3D pointing device 300 could hover over the user interface without displaying or controlling a cursor, yet still provide the user with a visual display as to where on the user interface the device 300 is hovering. Rotation of the 3D pointing device 300 about the y-axis can be sensed by the 3D pointing device 300 and
translated into an output usable by the system to move cursor 310 along the y2 axis of the display 308. Likewise, rotation of the 3D pointing device 300 about the z-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the x2 axis of the display 308. It will be appreciated that the output of 3D pointing device 300 can be used to interact with the display 308 in a number of ways other than (or in addition to) cursor movement, for example it can control cursor fading, volume or media transport (play, pause, fast-forward and rewind). For example, pressing the scroll wheel 306 (the scroll wheel also operating in this case as a switch) could cause the device to switch from one mode to another. Further, pressing the scroll wheel or another button on the device could cause the content to play or pause. Likewise, moving the scroll wheel (or pressing a button) could allow fast-forwarding or rewinding of the content displayed on the UI. Additionally, the system can be programmed to recognize gestures, e.g., predetermined movement patterns, to convey commands in addition to cursor movement. Moreover, other input commands, e.g., a zoom-in or zoom-out on a particular region of a display (e.g., actuated by pressing button 302 to zoom-in or button 304 to zoom-out or by using the scroll wheel 306), may also be available to the user. Further, the user may use the scroll wheel on the 3D pointer device in a scrolling mode. When operating in scrolling mode, the cursor
can be displayed in a default representation, e.g., as an arrow on the user interface. While in scroll mode, rotation of the scroll wheel on the 3D pointing device (or other pointing device if a 3D pointer is not used) has the effect of scrolling the content which is currently being viewed by the user vertically, i.e., up and down.
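As a concrete illustration of the rotation-to-cursor mapping described above, consider the following minimal sketch. The gain constant, function name, and angular-rate inputs are assumptions for illustration; a real device would also apply tilt compensation and tremor removal as described in the incorporated applications:

```python
def rotation_to_cursor(pitch_rate: float, yaw_rate: float, dt: float,
                       gain: float = 800.0) -> tuple[float, float]:
    """Map sensed device rotation to cursor displacement in pixels.

    pitch_rate and yaw_rate are angular rates (rad/s) about the device's
    y- and z-axes; gain is a hypothetical pixels-per-radian scale factor.
    """
    dx = gain * yaw_rate * dt    # z-axis (yaw) rotation drives the x2 axis
    dy = gain * pitch_rate * dt  # y-axis (pitch) rotation drives the y2 axis
    return dx, dy
```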
[0049] Returning now to the application illustrated in Figure 5, the GUI screen (also referred to herein as a "UI view", which terms refer to a currently displayed set of UI objects) seen on television 520 is a home view. In this particular exemplary embodiment, the home view displays a plurality of applications 522, e.g., "Photos", "Music", "Recorded", "Guide", "Live TV", "On Demand", and "Settings", which are selectable by the user by way of interaction with the user interface via the 3D pointing device 400. Such user interactions can include, for example, pointing, scrolling, clicking or various combinations thereof. For more details regarding exemplary pointing, scrolling and clicking interactions which can be used in conjunction with exemplary embodiments of the present invention, the interested reader is directed to U.S.
Published Patent Application No. 20060250358, entitled "Methods And Systems For Scrolling And Pointing In User Interface," to Frank J. Wroblewski, filed on May 4, 2006, the disclosure of which is incorporated here by reference.
[0050] Although Figure 5 illustrates various icons for accessing content, the method for accessing content as described below could be implemented by selecting any icon or logging into a system to display the initial view. Alternatively, other forms of input, as discussed above, could be used to display a certain UI view, e.g., gestures, voice recognition, etc.
[0051] In addition, the relationship between a currently displayed user interface view and its next "highest" user interface view will depend upon the particular user interface implementation. According to exemplary embodiments of the present invention, user interfaces may use, at least in part, zooming techniques for moving between user interface views. In the context of such user interfaces, the next "highest" user interface view could be reached by actuating an object on the UI view which is one zoom level higher than the currently displayed UI view. As discussed above with respect to Fig. 3A, for example, zooming and/or panning could be implemented by moving the scroll wheel 306. The zooming transition effect can be performed by progressive scaling and displaying of at least some of the UI objects displayed on the current UI view to provide a visual impression of movement of those UI objects away from an observer. In another functional aspect of the present invention, user interfaces may zoom-in in response to user interaction with the user interface which will, likewise, result in the progressive scaling and display of UI objects that provide the visual impression of movement toward an observer. More information relating to zoomable user interfaces can be found in U.S. Patent Application Serial No. 10/768,432, filed on January 30, 2004, entitled "A Control Framework with a Zoomable Graphical User Interface for Organizing, Selecting and Launching Media Items," and U.S. Patent Application Serial No. 09/829,263, filed on April 9, 2001, entitled "Interactive Content Guide for Television Programming," the disclosures of which are incorporated here by reference.
[0052] Movement within the user interface between different user interface views is not limited to zooming. Other non-zooming techniques can be used to transition between user interface views. For example, panning can be performed by progressive translation and display of at least some of the user interface objects which are currently displayed in a user interface view. This provides the visual impression of lateral movement of those user interface objects to an observer.
[0053] The content discovery method and system will now be described.
[0054] The dynamic display content method is described as a Peak™ content discovery method and system. Peak, as a verb, can be defined as a semantic merge of the noun "peak" meaning "mountain top" and the homophone verb "peek" meaning to see. Peak™, as a noun, can be defined as the view of a collection of content from a particular semantic vantage point. For example, if the semantic vantage point (Peak™) is "1980's Drama Movies," peaking that content will lead to a stream of cover art and related metadata organized across a viewing screen in a pleasing way. Of course, the content category could also be music-related in which case the art would be music
album covers or something else related to items in that category. Each piece of content will stay on the screen for a limited time and then disappear. At any one time, a number of different pieces of content may be shown. Each one is selectable by the user for further action. Other types of content and visualizations are possible. So, for example, you might even see snippets from reviews that could go on an ad for a movie (e.g., "Captivating!" - USA Today). An example usage of Peak™ is "I couldn't remember the title of the movie so I peaked it." Another example is "I was in the mood for a suspense movie so I peaked for one." Peaking is different from searching because searching helps you find content, but Peaking helps find the content for you.
[0055] Generally speaking, Peak™ is a content discovery application and service for multiple platforms including smart TVs, PCs, mobile phones and the like. Peak™ allows the user to discover new content of various types including video, audio, and entertainment destinations (e.g., restaurants, theaters). Peak™'s user interface shows images of content on the screen, which vary over time. The images remain on the screen for several seconds and then disappear unless the user interacts with them. There is no static grid that splits the screen; rather, the grid is dynamic, with the size and shape of new images that appear varying over time. The content that populates the image rectangles comes from a database that is either local or online. Each image can show, for example, the title of the content and a rating. When the cursor is hovering over an image, that image remains still and does not disappear like the
surrounding images. The following discusses these and alternative embodiments of the dynamic method and system.
[0056] Figure 6 illustrates an initial explore screen 600 displaying content represented by a plurality of different images 602. The initial screen 600 can display a default semantic Peak™, such as "all," or the user can set the initial screen to display content based on the user's personal preferences and/or settings, the user's viewed content history, or potentially all history of other users of the system. Also, when the initial screen 600 is displayed, the images 602 could be random images. If the user shows no interest in any of the images 602, then the images 602 are automatically replaced with new and different images representing different content in another UI view. The images 602 can all be replaced with new and different images at the same time. Alternatively, each of the images 602 can be replaced with new and different images at different intervals, so that one image is replaced at a time with a new image and content. Thus, the viewer has more time to contemplate the content on the screen. This cycle of new UI views with new content and images continues until the user indicates an interest in any of the images or content. The new UI views can be cycled to update every few or several seconds automatically. The timing of the updates to new UI views could be determined based on a user's preferences and/or settings, or learned by the system based on the user's past browsing history/usage. Further, the user could intentionally pick a different semantic Peak™. In any event, once a semantic Peak™ is selected, it is remembered in the user's personal list. The system may remember the personal history for each user, such as what the user liked and did not like, what content metadata they looked at more, what options the user preferred, etc. The system may accomplish this by determining relevant content, the metadata for that content, and presentation rules, etc. In addition, the system allows for the creation and publishing of a semantic Peak™ to others. As shown in Fig. 6, each of the plurality of images 602 could be displayed indicating the title 604 of the content, as well as indicating a rating 606 of the content. In this example of Fig. 6, each of the plurality of images 602 could display the movie's cover art or a scene or character in the program.
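The cycling behavior described in this paragraph can be sketched as a simple loop. The next_item and has_interest callables are hypothetical hooks standing in for the content selection and hover/selection detection described elsewhere in this document, and the five-second dwell is only an example value:

```python
import random
import time

def cycle_images(screen, next_item, has_interest, dwell_seconds=5.0,
                 staggered=True):
    """Replace displayed items until the user shows interest.

    screen       -- mutable list of currently displayed content items
    next_item    -- callable returning a fresh content item
    has_interest -- callable returning True once the user hovers or selects
    With staggered=True one item changes per tick, giving the viewer more
    time to contemplate each one; otherwise the whole view is refreshed.
    """
    while not has_interest():
        time.sleep(dwell_seconds)
        if staggered:
            slot = random.randrange(len(screen))  # or choose the oldest item
            screen[slot] = next_item()
        else:
            screen[:] = [next_item() for _ in screen]
```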
[0057] As also shown in Fig. 6, each of the plurality of images 602 may be displayed in a (generally) rectangular or square shape, where each of the shapes can be of different sizes and at different screen locations. Although rectangular and square shapes are illustrated in Fig. 6, the shapes could be any shape or combination of shapes, e.g., a teardrop and/or a circle, or pyramid shape. By having a dynamic screen layout, one can display oversized visuals, like having a single movie poster take up one-third or even one-half of the screen 600. Since the images and/or content can cycle out and be replaced by images of varying sizes, the user will be getting the benefit of a large and visually arresting display.
[0058] Although Fig. 6, and the other content views discussed below, represent programs such as movies or shows, the content could also be advertisements, documents, music, photos, games, recipes, books, travel, online dating, restaurants, shopping, theatre tickets, local events, social media, or job listings, for example.
Moreover, the content represented on the display could be related to more than one type of content. For example, content and images representing movies, theatre tickets, and advertisements could be displayed simultaneously.
[0059] The concept of Snapshots or Snaps is now explained. For each display mechanism such as Peak™ and content type, a concise visual display of a subset of relevant metadata needs to be constructed. The template shape is then constructed (possibly dynamically as required) whenever that particular content item needs to be shown on the screen. An example of a Snap for a restaurant is shown in Figure 7A. An example of a Snap for a movie is shown in Figure 7B. In each case, the Snap allows a coherent presentation of the most relevant information about a particular content item so that the user can instinctively browse across several relevant information facets in parallel. Note that while the particular embodiments in this patent involve using a single Snap construct per content item display, the designer could easily decide to selectively use one of many Snap constructs or even use one that morphs over time in a single display so as to display additional relevant information.
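A Snap, then, is essentially a templated bundle of the most relevant metadata for a content item. A minimal sketch of such a construct follows; the field names and example values are illustrative assumptions, not the actual templates of Figs. 7A and 7B:

```python
from dataclasses import dataclass

@dataclass
class MovieSnap:
    """Hypothetical Snap template for a movie content item (cf. Fig. 7B)."""
    title: str
    rating: float        # e.g., an aggregated review score
    cover_art_url: str
    width: int           # template shape, chosen per layout slot
    height: int

snap = MovieSnap("Jurassic World", 4.0, "https://example.com/jw.jpg", 320, 180)
```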
[0060] Both the layout and the content displayed on the screen change autonomously as the user watches. In one exemplary implementation, each particular collection/layout displayed on the UI stays stable for a few to several seconds before changing and updating to new content and images. However, the user or the system could choose to generally select the older pieces of content to change out at a given instant in time, or can just randomly choose the content to change in order to make the display more visually interesting. For example, the view presented in Fig. 6 could change to the new content presented in Fig. 8.
[0061] As shown in Fig. 8, the content has dynamically updated from that of Fig. 6 to present new content in a new UI view 800. Again, it is noted that each of the plurality of images 810 has changed to a different size, a different location, and a different image. Note that the transition from the UI screen shown in Fig. 6 to that of Fig. 8 could happen gradually, i.e., by individually cycling through (replacing) individual images. However, it is possible that one or more of the previously displayed images remain on the UI after the update. In this exemplary embodiment, for example, the user has moved the 3D pointing device to hover over image 802 representing "Jurassic World." Once the user has hovered over one of the plurality of different images, the image could reveal additional features to the user. For example, in Fig. 8, "Jurassic World" is visually highlighted by a border 804 surrounding the image 802. This border could be of different colors, for example. Alternatively, the image 802 could be enlarged relative to its original displayed size to visually convey to the user that the user is hovering over the image for possible further input. Other features could be
implemented to make image 802 stand out visually as one in which the user may be interested. Although the user has random access to any part of the UI, corners or edges of the image 802, for example, could be linked to additional features. For example, once a user has indicated an interest in an image or content, via hovering or cursor movement, the image 802 will then display icons 806, 808. One such feature is an icon 806 (Peak™, for example) on the corner of the content image 802 that, if selected, will navigate the view to another UI screen that displays additional content represented by a plurality of images which are similar to the image 802 originally selected. Alternatively, just hovering over the image 802 could indicate interest, wherein the images 810 are automatically replaced with content related to image 802. This can be done using metadata, mapping, and the algorithms discussed below. Another corner may be populated with an icon 808 (flag, for example) that, when selected, moves the selected image to a short list or watch list that can be accessed at any time for faster and easier comparison and selection. Although Fig. 8 displays an image 802 with two icons 806, 808, the image 802 could display additional or different icons for selection by the user for additional features. Alternatively, the image 802 could display no icons, and instead various input, such as selecting a button on the input device, could provide access to and selection of additional features.
[0062] As shown in Fig. 9, the UI display 900 has been automatically updated to present new content represented by a plurality of different images 908. Like image 802 of Fig. 8, image 902 of Fig. 9 is highlighted to indicate the user's interest in "The Hunger Games: Mockingjay - Part 1," for example. Also, in this UI view 900, a cursor 904 is displayed, where movement of the 3D pointing device corresponds to movement of the cursor 904. As shown in Fig. 9, the cursor 904 is moved over the icon 906 represented by a flag. The user can select the icon 906 by pressing a button on the 3D pointing device, for example, to add this program to the user's short list or watch list.
[0063] In Fig. 10, a new UI view 1000 is displayed. As shown in Fig. 10, because the user has highlighted image 802 "Jurassic World," by hovering over the image 802, for example, in Fig. 8, the highlighted image 802 "Jurassic World" remains on the screen 1000, while the remaining images 810 of Fig. 8, for example, have updated to reflect new and different images 1004 in Fig. 10. Also, in this UI view 1000, a cursor 1006 is displayed, where movement of the 3D pointing device corresponds to
movement of the cursor 1006. As shown in Fig. 10, the cursor 1006 is moved over the icon 1008 represented by the Peak™ symbol. The user can select the icon 1008 by pressing a button on the 3D pointing device, for example, to add this program to the user's Peak™ list.
[0064] Fig. 11 illustrates another display view 1100 of the Peak screen, where an icon 1102 (Peak™ icon, for example) is displayed differently than that of Fig. 10, for example. In this UI presentation 1100, the content is dynamically updated to display content represented by different images 1104, wherein the content is similar to "The Man from U.N.C.L.E.," for example. Although the related content 1104 could be presented based on similar movies in this example, the related content that is dynamically provided to the user could be related by or include the same genres, subgenres, actors/actresses, director, or release date. Additionally, since content could be books or restaurants, for example, the related content presented to the user could be based on writer, chef, location, or type of food. Another feature illustrated in Fig. 11 is the number of items that are listed in the short list or watch list, as designated by the number next to the icon 1106 represented by a flag.
[0065] Selection of the icon 1106 represented by a flag on Fig. 11, for example, results in a new UI view 1200 of Fig. 12, where the icon 1106 may remain in the new UI view 1200. As shown in Fig. 12, the eight images 1202 representing programs are displayed in the short list or watch list view 1200. Although the images 1202 are displayed in a grid format where each image is displayed in equal size, the images 1202 can be displayed in different sizes or shapes and at different locations as discussed above. Further, selection of an image could be accomplished by moving the cursor 1204 over any of the images 1202 and pressing a button, for example, on the 3D pointing device.
[0066] Selection of any content represented by an image on the UI views of Figs. 6 and 8 to 12, for example, could result in a new UI view 1300 as shown in Fig. 13. Fig. 13 displays additional details regarding the content, such as the title, date, user rating, parental rating, content time length, and a brief description of the content. In addition, this UI view 1300 could include an icon 1302 for playing the content. Although Fig. 13 displays certain details, as well as an icon 1302 for playing the content, the information displayed could list different or additional details and/or additional or different icons. Moreover, a user is not required to access this UI view 1300 to play the desired content. Instead, the user could select the content represented by an image on any of the UI views of Figs. 6 and 8-12 by pressing a button on the 3D pointing device, for example.
[0067] Some of the above-discussed examples and embodiments are implemented using a 3D pointing device which has random access to a UI on a television. However, the dynamic content method could also be implemented using a grid display where a user manipulates up, down, left, and right buttons or via other input to move from one part of the grid to another. In addition, this dynamic content method could be implemented in a text-based system. For example, a user enters a text-based
search in a search engine on the Internet. However, the terms the user is entering do not succinctly match the terms related to the desired search results. In this example, the user enters "classic movies," and the search results display a variety of different types of classic movies, such as those from the 1970's, those from the 1940's, film noir movies, and black and white movies. Once the user selects an item related to film noir movies, for example, this selection is remembered by the system and other related film noir results are then dynamically displayed to the user for further review. As such, although the user knew what they were interested in, e.g., film noir movies, the user did not know the exact search terms. Instead, the system remembers the user's interests and dynamically displays the desired content. Further, since the system remembers and learns what the user likes, the user is not presented with unrelated and undesired items. Likewise, if the user exits out of the item selected, the user is not presented with the original listing of items. Instead, the user is presented with new and related items.
[0068] More detailed methods for implementing the above-discussed embodiments are now discussed.
[0069] First, though, let's briefly introduce some terminology used in this section. Each content item (e.g., a movie or a restaurant) is individually labelled and is associated with a set of attribute values. Attribute values for a movie would include its title, its genre, year of release and so on while attribute values for a restaurant would include its name, its food genre, rough expense category and so on.
[0070] Since attributes are almost by definition free-form, a further construct is adopted to make algorithm construction more straight-forward. Each free-form attribute is associated with a normalized facet. The normalized facets are then used in computing algorithms for similarity and distance. More information on mapping attributes to facets is given below.
[0071] The initial step (Part 1) of an algorithm to implement the method is to take the mixed data attributes and produce normalized metric data facets, i.e., facet generation. Those facets are then used in Part 2 (content item selection) to drive the actual selection process, e.g., of the content images which are displayed on the UI, and
cycled through, as described previously. This section will describe how that process works for each common data type, based on one-to-one mappings. Although not discussed herein, the method could further include many-to-one mapping. There are likely some useful facets that are inherently formed from several attributes at once. There is nothing inherent in this architecture that prohibits that or makes that unwieldy.
[0072] Table 1 below provides an overview of term definitions.
TABLE 1
[0073] For metric attribute values, all that is required here is to normalize the data. In this example, the desired output range is 0 to 1. The perspective mapping function in this case is:

f = p(a) = (a - min(a)) / (max(a) - min(a))

For ordinal data, the data is given by rank. Assuming that the rank has a meaning in the sense that the third element is more similar to the first element than to the tenth element, and that there is always at least one element that is first in the list (has value 1 indicating that it is first), the mapping is again fairly straight-forward. In the perspective mapping proposed here, the facet value of 1 is assigned to the top rank item and 0 to the bottom rank item. The perspective mapping then is:

f = p(a) = 1 - (a - 1) / (max(a) - 1)
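In code, these two perspective mappings might look like the following minimal sketch (the min-max form of the metric mapping is an assumption reconstructed from the stated 0-to-1 output range):

```python
def metric_facet(value: float, values: list[float]) -> float:
    """Min-max normalization of a metric attribute into [0, 1]."""
    lo, hi = min(values), max(values)
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def ordinal_facet(rank: int, ranks: list[int]) -> float:
    """Rank 1 maps to facet value 1; the bottom rank maps to 0."""
    bottom = max(ranks)
    return 1.0 if bottom == 1 else 1.0 - (rank - 1) / (bottom - 1)
```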
[0074] For categorical data, the perspective mappings required are more complex. Furthermore, multiple perspectives might be meaningful for any given attribute. The following are some examples of perspective mappings based on categorical data.
[0075] One category could be the year a program was released. The year a film was made looks like a metric (and technically is one) but, from a cognitive perspective, it behaves more like a category. In this case, the real attribute of interest is whether or not a film is "modern" or "classic" or "early," for example. The perspective mappings for those attributes could be as follows:
f_classic = p_classic(a) = ((a > 1930) && (a < 1960)) ? 1 : 0

f_early = p_early(a) = (a < 1930) ? 1 : 0
[0076] Another category could be the genre of the program. Film genre is clearly categorical and is not metric at all. In fact, by itself and with no preconceptions, it is virtually impossible to say whether a "horror" movie is more similar to a "romance" than to a "comedy." Since the system is seeking to understand the similarity (or distance) between two movies, for example, the system could have some mechanism or prism through which the system can determine similarity. For example, if the system knew the user's goal was emotional diversion, then fantasy might be quite similar to drama to achieve that goal. If the user's goal was to be inspired, then action might be quite similar to drama. The user's goal gives the system the necessary perspective needed to judge similarity of one genre to another.
[0077] Machine Learning could be used to iterate these mappings. An Expert or Oracle of Delphi approach could be used to determine an original mapping. Another possibility is to leverage text analysis of reviews or movie advertisements or plot descriptions to determine a good mapping. An example using the syntax of a Python dict (dictionary) is set forth below:
f_inspirational = p_inspirational(a) = a in {'drama': 0.8, 'action': 0.8, 'fantasy': 0.8, 'horror': 0.1, 'comedy': 0.3, 'science-fiction': 0.5}

f_character-study = p_character-study(a) = a in {'drama': 0.8, 'action': 0.2, 'fantasy': 0.3, 'horror': 0.5, 'comedy': 0.5, 'science-fiction': 0.3, 'romance': 0.6}
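Read as executable Python, these categorical perspective mappings become simple predicates and dictionary lookups. In the sketch below, the 0.0 default for unlisted genres is an assumption; the source lists only the genres shown:

```python
def f_classic(year: int) -> float:
    return 1.0 if 1930 < year < 1960 else 0.0

def f_early(year: int) -> float:
    return 1.0 if year < 1930 else 0.0

INSPIRATIONAL = {'drama': 0.8, 'action': 0.8, 'fantasy': 0.8,
                 'horror': 0.1, 'comedy': 0.3, 'science-fiction': 0.5}

def f_inspirational(genre: str) -> float:
    # Unlisted genres default to 0.0 (an assumption).
    return INSPIRATIONAL.get(genre, 0.0)
```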
[0078] Another category could be actors/actresses. However, just based on their identification number or name in the system, one has virtually no ability to determine how similar two actors are to each other. So, for actors too, perspective mapping is needed. One possible perspective could be the distribution of film genres in which the actors/actresses have starred. Let H_i be the histogram vector of content genres in which actor i has starred. Double counting of genres is allowed but the whole vector is normalized so that ||H_i|| = 1. The primary similarity metric would then be:

S_ij = Σ_k (H_i(k) * H_j(k))

In other words, it is the dot product of the two histograms and is sometimes called the cosine metric. Many other perspectives are possible, such as one that determines whether an actor is dominant in a particular genre (e.g., Sylvester Stallone with action) or whether an actor is a broad character actor with no particular preference (e.g., Meryl Streep).
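The following sketch computes this actor-similarity perspective; the shape of the filmography records is a hypothetical placeholder:

```python
from collections import Counter
import math

def genre_histogram(filmography: list[dict]) -> dict[str, float]:
    """Normalized histogram of genres an actor has starred in (||H|| = 1)."""
    counts = Counter(g for film in filmography for g in film['genres'])
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {g: c / norm for g, c in counts.items()}

def actor_similarity(h_i: dict[str, float], h_j: dict[str, float]) -> float:
    """Dot product (cosine metric) of two normalized genre histograms."""
    return sum(v * h_j.get(g, 0.0) for g, v in h_i.items())
```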
[0079] At this point, the facets describing the content items that are normalized and metric in nature have been determined. Given that, it would be straight-forward to run algorithms like k-means clustering, PCA or many other techniques to enable the system to select the appropriate content items to show next. Any of those approaches could be used. However, discussed below is an expert-initialized system that could adapt over time using Machine Learning. Prior to discussing this technique, Table 2 sets forth some additional terminology.
| Term | Definition |
|---|---|
| s_ijk | Single-facet similarity metric between facet value f_ik (the value of facet k for content item e_i) and facet value f_jk (the value of facet k for content item e_j) |
| S_ij^w | Similarity metric between content items e_i and e_j based on weight vector (or multi-dimensional facet basis vector, if you prefer) w |
| D_ij^w | Distance metric between content items e_i and e_j based on weight vector (or multi-dimensional facet basis vector, if you prefer) w. Directly related to the similarity metric. |
| N_f | The number of facets that are to be used for computing similarities. Since a facet can have multiple perspectives, this number counts each perspective as an individual facet. |
| v_i | A "view" of the content similarities, defined by its weight vector w_vi giving the relative importance of each facet for this view |
| w_vi | A weight vector for view v_i of the relative importance of each facet in comparing similarity metrics between two content items |
| N_v | The number of views used to select the next set of content items to display |
| G_P | The group (set) of content items e_n that are part of the current Peak session |
| N_P | The number of elements in G_P |
| α_i | The relative importance of element i in group G_P to the selection of the next set of content to display |
| max_top(n, G) | A function that returns the n elements of G with the highest similarity metrics |
| s_i,vj | The similarity measure of content item e_i for view v_j on the Peak group G_P |
| G_vj | The group (set) of similarity metrics s_i,vj for all content items e_i |
| N_vj | The number of elements from group G_vj that will be incorporated into the Layout set of content to display |
| L | The "layout" group (set) of content to display next |

TABLE 2
[0080] To compute the similarity between two content items, the system must first decide how to compute the similarity of a single facet from two content items. Since every facet is normalized to the range [0, 1], the straightforward method would be to use the Manhattan or Euclidean measure. In equation form, this would mean either

s_ijk = 1 - |f_ik - f_jk|   or   s_ijk = 1 - (f_ik - f_jk)^2
With that atomic-level similarity metric, the overall similarity between the two content items can be computed. In equation form, this becomes:

S_ij^w = (Σ_{k=1}^{N_f} w_k · s_ijk) / (Σ_{k=1}^{N_f} w_k)
In the above equation, the weight vector w (with components w_k) allows the algorithm and system to adjust the relative importance of each of the facets to the overall computation.
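By way of illustration, the two formulas above could be implemented as follows; this is a sketch assuming facet values already normalized to [0, 1] and the Manhattan form of s_ijk.

```python
# Illustrative sketch of the single-facet similarity s_ijk (Manhattan form)
# and the weighted overall similarity S_ij^w. Facet vectors and weights are
# assumed to be equal-length lists of values already normalized to [0, 1].

def facet_similarity(f_ik: float, f_jk: float) -> float:
    """s_ijk = 1 - |f_ik - f_jk| (Manhattan form)."""
    return 1.0 - abs(f_ik - f_jk)

def overall_similarity(f_i: list[float], f_j: list[float],
                       w: list[float]) -> float:
    """S_ij^w: weighted average of the single-facet similarities."""
    num = sum(wk * facet_similarity(a, b) for wk, a, b in zip(w, f_i, f_j))
    return num / sum(w)

# Two items agreeing on the heavily weighted first facet score highly.
print(overall_similarity([0.9, 0.2], [0.8, 0.7], w=[0.8, 0.2]))  # 0.82
```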
[0081] For some algorithms it is better to think in terms of distance, while for others similarity works better. However, the two are tightly coupled and so, for example, with similarity normalized to [0, 1], a formula that relates similarity to distance is D_ij^w = 1 - S_ij^w.
[0082] One way to implement this method is to use the user's reaction to the particular Peak™ session as the base for deciding which new content to suggest. In addition, user preferences and/or group behavior could be learned over time. For example, the base is the content that has been presented to the user in this session, indicated as the G_P content. Using similarity measures, the system can find the content that is most appropriate to show based on the user's behavior to date.
[0083] One way to measure user preferences during the Peak™ session is now discussed. The parameter α_i determines the relative importance that new content be similar to content item e_i in G_P. This section describes how α_i is calculated.
[0084] One goal of Peak™ and the dynamic content delivery method is to avoid requiring the user to do anything, i.e., to provide passive content navigation where the system generates the inputs rather than putting the cognitive load on the user to continuously refine, e.g., directed queries. As such, there is only a limited amount the system can learn when an item is displayed but no action is taken by the user. When the user undertakes an action, the system can learn a bit more. In this example, the system assumes that more interaction with an item indicates more interest by the user (except for the case when the user explicitly downgrades the film). For example, when the user acts on one film (image or item of interest) in the layout, the other images or items of interest in the layout are deemed to be of less interest. One example for assessing user interaction could be to assign α_i values as shown in Table 3 below.
[0085] Using the information in Table 3, for example, a user may be presented with a UI view such as that of Figure 6. However, if the "type of interaction" by the user is "no selection of any item on screen," then some or all of the plurality of images 602 are automatically cycled out and replaced with a new plurality of images 1400 as set forth in Figure 14. As illustrated in Figure 14, the plurality of images 1402 are of different shapes, at different locations, and are different images than the plurality of images 602 of Figure 6.
[0086] In selecting the content for display, this algorithm imagines a collection of individual similarity-based selection algorithms operating in parallel. Each individual algorithm is referred to as a View, where each View is expressed in terms of a weight vector, for example. The View weight vector sets the relative importance of each of the facets in determining similarity between candidate content and the content history. In essence, then, the View weight vector can be thought of as a basis vector in the overall facet space.
[0087] The system may constantly recalculate the rankings and re-sort the items to be presented, based on interaction or lack thereof. If size is used to represent popularity, the α setting may not affect the size of the image. The α setting of a non-interacted item that is being replaced would determine what item (or set of items) gets displayed next, but the item's popularity relative to the general populace is orthogonal to that: the user's top item may or may not be popular with the crowd. For example, non-interaction with an item uses the α setting to re-score all items with metadata overlap and re-rank everything. That new ranking can be used to determine size when the new item cycles in. Other visualizations could do things very differently in how they choose to display size. In other words, it might always be the case that display size determines the α setting, but the α setting may or may not determine the display size of future items. Alternatively, or in addition, the content provider could determine the size of an image based on whether the content provider desires to promote certain content. Hence, if certain content is promoted, the image for that content item may be of a larger size than the other displayed images to grab the viewer's attention.
[0088] To compute the similarity of potential content to the Peak™ group, the following equation could be used, for example, to relate a potential content item e_i to the user's reaction to the Peak™ group of content:

s_i,vj = (Σ_{n=1}^{N_P} α_n · S_in^{w_vj}) / (Σ_{n=1}^{N_P} α_n)
[0089] In forming the layout group, the group of the similarity metrics for a given view v_j is defined as follows:

G_vj = { s_i,vj : ∀ i, e_i ∈ E \ G_P }

Note that any item already shown to the user (part of G_P) is explicitly not included in G_vj. Then, the layout group L is defined as follows:

L = ∪_{j=1}^{N_v} max_top(N_vj, G_vj)
This then is the set of new content to show to the user. It is this set of new content that the system judges, based on observation of the user in this session and across prior sessions, as well as observation of other users, to be most intriguing to the user at this moment in time. The content in the layout group L is then shown/presented to the user. Based on the user's reactions, the next iteration of intriguing content is prepared as the next iteration of layout group L. As each layout group is shown, the content set G_P is updated to include that set of content for this session of Peak™ operation.
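A compact sketch of this selection step is set forth below. It assumes the α-weighted group similarity reconstructed in paragraph [0088] and represents content items as plain facet vectors, so all names here are illustrative rather than the specification's API.

```python
# Illustrative sketch of the layout-group construction: for each view,
# score every candidate against the Peak group (alpha-weighted), take the
# top N per view, and union the results. All names are hypothetical.

def overall_similarity(f_i, f_j, w):
    """S_ij^w: weighted average of single-facet similarities (Manhattan)."""
    num = sum(wk * (1.0 - abs(a - b)) for wk, a, b in zip(w, f_i, f_j))
    return num / sum(w)

def group_similarity(candidate, peak_group, alphas, w_v):
    """s_i,vj: alpha-weighted similarity of a candidate to the Peak group."""
    num = sum(a * overall_similarity(candidate, item, w_v)
              for a, item in zip(alphas, peak_group))
    return num / sum(alphas)

def layout_group(candidates, peak_group, alphas, views, n_per_view):
    """L: union over views v_j of max_top(N_vj, G_vj)."""
    layout = set()
    for w_v, n in zip(views, n_per_view):
        ranked = sorted(candidates,  # candidates: {item_id: facet vector}
                        key=lambda cid: group_similarity(
                            candidates[cid], peak_group, alphas, w_v),
                        reverse=True)
        layout.update(ranked[:n])  # max_top(n, G_vj)
    return layout
```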
[0090] Although various algorithms are discussed above to determine relevant content based on user preferences which can be learned by the system, one of ordinary skill in the art would appreciate that other methods, algorithms and/or mappings can be used to implement the dynamic content delivery method.
[0091] Figure 15 illustrates a method of one of the exemplary embodiments of the invention. At step S100, the UI view is presented displaying content associated with a plurality of different images. At step S102, input is determined. If, at step S102, it is determined that input is received where a user indicates an interest in an image by interacting with that image, then at step S106 that image of interest remains on the screen and the remaining images are replaced by images that represent content related to the image in which the user has expressed an interest, and these are displayed in a new UI view at S100. Alternatively, if at step S102 it is determined that no input has been received or there has been no interaction by the user, then the UI view is updated at S104 to update all of the images and replace them with new images. At step S108, it is determined whether a selection has been made by the user with respect to one of the images. If an image has been selected by the user, then the content is displayed at step S110. If no image has been selected by the user at S108, then all of the images are updated again at S104 and displayed in a new UI view at S100.
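For illustration, the Figure 15 flow could be sketched as a simple event loop; the event queue and helper functions below are hypothetical stand-ins for the similarity machinery described above.

```python
import queue

# Sketch of the Figure 15 flow: present a layout (S100), check for input
# (S102), refresh everything on no interaction (S104), keep the item of
# interest and relate the rest to it (S106), or play a selected item
# (S108/S110). The event queue and helpers are hypothetical stand-ins.

def peak_session(events: "queue.Queue", first_layout, related_to,
                 fresh_layout, show, cycle_seconds: float = 5.0):
    layout = first_layout
    while True:
        show(layout)                                        # S100
        try:
            kind, item = events.get(timeout=cycle_seconds)  # S102
        except queue.Empty:
            layout = fresh_layout()                         # S104: replace all
            continue
        if kind == 'interest':                              # S106: keep + relate
            layout = [item] + related_to(item, count=len(layout) - 1)
        elif kind == 'select':                              # S108
            return item                                     # S110: display it
```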
[0092] Another method is shown in Fig. 16, which sets forth a method for dynamically displaying content to a user on a graphical user interface, wherein the
content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
[0093] Another method is shown in Fig. 17, which sets forth a method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images; receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
[0094] The Peak™ Content Delivery Service can be implemented using one or more processors 1800 that are connected to one or more input devices 1802 and one or more output devices 1804 as shown in Figure 18. Processor(s) 1800 are thus specially programmed to present Peak™ Content Delivery Service user interface screens which change over time as described above, both randomly and in response to a user's random access (pointing) cursor movements and/or button selections of content elements, flags and/or Peak™ icons as described above. If used in a system like that of Figure 5, input device 1802 could thus be (or include) a 3D pointing device and output device 1804 could thus be (or include) a television, AR/VR device, mobile phone or the like. In such an embodiment, processor(s) 1800 could reside within the television itself or a set-top box or another device connected to the television, like a game console or the user's smart phone. If used in a tablet, the processor(s) 1800, input device(s) 1802, and output device(s) 1804 could all reside within a single housing and be portable.
Optionally, elements of the Peak Content Delivery Service could be pushed to the local system 1800, 1802, and 1804 from a remotely located server 1806 via, e.g., the Internet or a cable or satellite media connection.
[0095] The Peak™ Content Delivery Service is explicitly imagined as a multi-user, multi-platform system with the ability to learn relationships and interests across a collection of users and content. The Peak™ Content Delivery Service could of course be implemented for just a single user, with the learning then restricted to that particular user. Figure 19 illustrates a brief overview of the Peak™ Content Delivery Service method 1900. The metadata from various Content Sources 1902, as well as Global Context 1904 such as weather, drives the system's User Interface operation shown in the right-hand side of the diagram. The loop starts with an auto-generated query 1916 of the metadata 1906 for the first set of content to show the user. The appropriate content is then selected and ordered 1910 for presentation (a group that here is referred to as the Layout group). The machine learning 1912 determines the views and perspectives 1914 presented to the user. For each content item, a Snap 1908 is formed for display to the user. The last step involves the user either deliberately requesting more information on particular displayed content or simply waiting for something more interesting to be displayed. In either of those two cases, with the benefit of machine learning 1912, a new auto-generated query 1916 is formed and the loop begins again. The result is a mostly passive, guided journey through content of potential interest to the user - a journey that is both rewarding and fun.
[0096] As used herein and described briefly in Table 3, a determination or indication of a user's interest in a particular image (or other discoverable content) can be based on one or more inputs or actions including, but not limited to, cursor position, cursor movement, remote control input (e.g., button press, button release, scroll wheel movement, OFN detections), voice input, eye-tracking, selection, hovering, focus placement on the image, etc. Similarly, a user's lack of interest may also be determined or indicated by a lack of one or more of these inputs. As described earlier, "interest" can be valued at different levels based upon the number or quality of inputs made by the user with respect to a given image (or other discoverable content). Further, general context can enhance the significance of user action or inaction. For example, if a particular content item takes up half the screen and the user still does not indicate interest, that indicates a higher level of disinterest than if the content item only took up 1/10th of the screen. Conversely, if a user deliberately selects a content item even though its visual representation (Snap) is the smallest on the screen, that indicates a higher level of interest than if the content item took up half the screen. Similarly, a user's pattern of interest compared with general popularity can indicate a different level of interest. If the user selects an item on screen that is among the least popular shown, that is more significant than if the user picks one that is the most popular shown.
[0097] Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by one or more processors executing sequences of instructions contained in a memory device. Such instructions may be read into the memory device from other computer-readable media such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the present invention.
[0098] Numerous variations of the afore-described exemplary embodiments are contemplated. The above-described exemplary embodiments are intended to be illustrative in all respects, rather than restrictive, of the present invention. Thus the present invention is capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. All such variations and modifications are considered to be within the scope and spirit of the present invention as defined by the following claims. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items.
Claims
1. A method for dynamically displaying content to a user on a graphical user interface, wherein the content is represented by a plurality of different images, comprising:
receiving an input from the user;
determining that the user has an interest in one of the plurality of different images;
dynamically updating the displayed content on the graphical user interface to include the one of said plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of said plurality of different images;
selecting another one of the plurality of different images; and
displaying the content represented by the another one of the plurality of different images.
2. The method of claim 1, wherein if no input is received indicating interest in any of the plurality of different images, all of the plurality of different images are dynamically updated to replace the displayed plurality of different images with a new displayed plurality of different images.
3. The method of claim 2, wherein based on whether or not the user has indicated an interest in one of the plurality of different images, a value α_i is assigned to each image to determine a visual layout of the new displayed plurality of different images.
4. The method of claim 1, wherein the input is provided by a 3D pointing device, where movement of the 3D pointing device corresponds to movement of a cursor and the 3D pointing device comprises at least one button.
5. The method of claim 1, wherein the input is provided by one of the user touching the graphical user interface screen on a tablet or the user providing voice input.
6. The method of claim 1, wherein the additional plurality of different images are related to the one of the plurality of different images by representing content that has similar metadata.
7. The method of claim 1, wherein the additional plurality of different images are related to the one of said plurality of different images by representing content, wherein the content is movie content, and determination of related movie content is based on a same genre, a same actor/actress, or a same director.
8. The method of claim 1, wherein the plurality of different images are randomly displayed in a dynamic visual layout where each of the plurality of different images are of different shapes and of different sizes and displayed at different locations, wherein the shape, the size, and the location of each of the plurality of different images are random.
9. The method of claim 8, wherein the dynamic visual layout is automatically updated every several seconds to display a new plurality of different images.
10. The method of claim 9, wherein a user can update preference settings to change an amount of time between the automatic updates.
11. The method of claim 1, wherein once the user indicates interest in one of the plurality of different images, the one of the plurality of different images is displayed with an icon and the user can select the icon to add the selected one of said plurality of different images to the user's watch list.
12. The method of claim 1, wherein the content is one of photos, music, books, shopping, advertising, restaurants, events, travel, job openings, service providers, online dating, finance, games, social media, movies, or shows.
13. A method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising:
displaying the plurality of different images on the graphical user interface;
dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with new displayed plurality of different images;
receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on said device;
determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images;
dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images;
selecting another one of the plurality of different images; and
displaying the content represented by the another one of the said plurality of different images.
14. The method of claim 13, wherein the additional plurality of different images are
related to the one of the plurality of different images by representing content that has similar metadata.
15. The method of claim 13, wherein the plurality of different images are randomly displayed in a dynamic visual layout where each of the plurality of different images are of different shapes and of different sizes and displayed at different locations, wherein the shape, the size, and the location of each of the plurality of different images are random.
16. The method of claim 13, wherein the device is a smart television, a tablet, or a personal computer.
17. The method of claim 13, wherein once the user indicates interest in one of the plurality of different images, the one of said plurality of different images is displayed with an icon and the user can select the icon to add the selected one of the plurality of different images to the user's watch list.
18. The method of claim 13, wherein the content is one of photos, music, books, shopping, advertising, restaurants, events, travel, job openings, service providers, online dating, finance, games, social media, movies, or shows.
19. A system for dynamically displaying content to a user, comprising:
a 3D pointing device;
a device configured to display a graphical user interface;
a processor associated with the device and configured to receive inputs for dynamically displaying said content, wherein said content is represented by a plurality of different images, the processor configured to:
display the plurality of different images on the graphical user interface;
dynamically update the displayed plurality of different images automatically every
several seconds to replace said displayed plurality of different images with new displayed plurality of different images;
receive input via at least one sensor in the 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface of the device;
determine, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images;
dynamically update the displayed content on the graphical user interface of the device to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of said plurality of different images;
select another one of the plurality of different images; and
display the content represented by the another one of the plurality of different images.
20. The system of claim 19, wherein the additional plurality of different images are related to the one of said plurality of different images by representing content that has similar metadata.
21. The system of claim 19, wherein the plurality of different images are randomly displayed in a dynamic visual layout where each of the plurality of different images are of different shapes and of different sizes and displayed at different locations, wherein the shape, the size, and the location of each of the plurality of different images are random.
22. The system of claim 19, wherein the device is a smart television, a tablet, or a
personal computer.
23. The system of claim 19, wherein once the user indicates interest in one of the plurality of different images, the one of the plurality of different images is displayed with an icon and the user can select the icon to add the selected one of the plurality of different images to the user's watch list.
24. The system of claim 19, wherein the content is one of photos, music, books, shopping, advertising, restaurants, events, travel, job openings, service providers, online dating, finance, games, social media, movies, or shows.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662274989P | 2016-01-05 | 2016-01-05 | |
| US62/274,989 | 2016-01-05 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017120300A1 true WO2017120300A1 (en) | 2017-07-13 |
Family
ID=59274433
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2017/012284 Ceased WO2017120300A1 (en) | 2016-01-05 | 2017-01-05 | Content delivery systems and methods |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017120300A1 (en) |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6175362B1 (en) * | 1997-07-21 | 2001-01-16 | Samsung Electronics Co., Ltd. | TV graphical user interface providing selection among various lists of TV channels |
| US6295646B1 (en) * | 1998-09-30 | 2001-09-25 | Intel Corporation | Method and apparatus for displaying video data and corresponding entertainment data for multiple entertainment selection sources |
| US7797713B2 (en) * | 2007-09-05 | 2010-09-14 | Sony Corporation | GUI with dynamic thumbnail grid navigation for internet TV |
| US7839385B2 (en) * | 2005-02-14 | 2010-11-23 | Hillcrest Laboratories, Inc. | Methods and systems for enhancing television applications using 3D pointing |
| US20110219395A1 (en) * | 2006-08-29 | 2011-09-08 | Hillcrest Laboratories, Inc. | Pointing Capability and Associated User Interface Elements for Television User Interfaces |
| US20120086711A1 (en) * | 2010-10-12 | 2012-04-12 | Samsung Electronics Co., Ltd. | Method of displaying content list using 3d gui and 3d display apparatus applied to the same |
| US8261209B2 (en) * | 2007-08-06 | 2012-09-04 | Apple Inc. | Updating content display based on cursor position |
| US20130097542A1 (en) * | 2011-04-21 | 2013-04-18 | Panasonic Corporation | Categorizing apparatus and categorizing method |
| US8760400B2 (en) * | 2007-09-07 | 2014-06-24 | Apple Inc. | Gui applications for use with 3D remote controller |
| US20140337749A1 (en) * | 2013-05-10 | 2014-11-13 | Samsung Electronics Co., Ltd. | Display apparatus and graphic user interface screen providing method thereof |
| WO2014194148A2 (en) * | 2013-05-29 | 2014-12-04 | Weijie Zhang | Systems and methods involving gesture based user interaction, user interface and/or other features |
| US8935630B2 (en) * | 2005-05-04 | 2015-01-13 | Hillcrest Laboratories, Inc. | Methods and systems for scrolling and pointing in user interfaces |
| US20150074552A1 (en) * | 2013-09-10 | 2015-03-12 | Opentv, Inc | System and method of displaying content and related social media data |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17736304; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17736304; Country of ref document: EP; Kind code of ref document: A1 |
Ref document number: 17736304 Country of ref document: EP Kind code of ref document: A1 |