US12430342B2 - Computerized system and method for high-quality and high-ranking digital content discovery - Google Patents
Computerized system and method for high-quality and high-ranking digital content discovery
- Publication number
- US12430342B2 US16/681,992 US201916681992A
- Authority
- US
- United States
- Prior art keywords
- computing device
- image
- content
- images
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24578—Query processing with adaptation to user needs using ranking
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/248—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0277—Online advertisement
Definitions
- embodiments of disclosed systems and methods provide improvements to a number of technology areas, for example those related to systems and processes that handle or process content for users or business entities, and provide for improved user loyalty, improved content publishing, improved advertising opportunities, improved content search results, and the like.
- the disclosed systems and methods enable a more robust, accurate electronic network-based search for content to be performed by leveraging the quality and accuracy of the search methodologies discussed herein.
- a non-transitory computer-readable storage medium tangibly storing thereon, or having tangibly encoded thereon, computer readable instructions that when executed cause at least one processor to perform a method for a novel and improved framework for obtaining highly-relevant and high-quality results when performing digital content discovery on a network.
- a system comprising one or more computing devices configured to provide functionality in accordance with such embodiments.
- functionality is embodied in steps of a method performed by at least one computing device.
- program code (or program logic) executed by a processor(s) of a computing device to implement functionality in accordance with one or more such embodiments is embodied in, by and/or on a non-transitory computer-readable medium.
- a “network” should be understood to refer to a network that may couple devices so that communications may be exchanged, such as between a server and a client device or other types of devices, including between wireless devices coupled via a wireless network, for example.
- a network may also include mass storage, such as network attached storage (NAS), a storage area network (SAN), or other forms of computer or machine readable media, for example.
- a network may include the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), wire-line type connections, wireless type connections, cellular or any combination thereof.
- sub-networks which may employ differing architectures or may be compliant or compatible with differing protocols, may interoperate within a larger network.
- Various types of devices may, for example, be made available to provide an interoperable capability for differing architectures or protocols.
- a router may provide a link between otherwise separate and independent LANs.
- a communication link or channel may include, for example, analog telephone lines, such as a twisted wire pair, a coaxial cable, full or fractional digital lines including T1, T2, T3, or T4 type lines, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communication links or channels, such as may be known to those skilled in the art.
- a computing device or other related electronic devices may be remotely coupled to a network, such as via a wired or wireless line or link, for example.
- a “wireless network” should be understood to couple client devices with a network.
- a wireless network may employ stand-alone ad-hoc networks, mesh networks, Wireless LAN (WLAN) networks, cellular networks, or the like.
- a wireless network may further include a system of terminals, gateways, routers, or the like coupled by wireless radio links, or the like, which may move freely, randomly or organize themselves arbitrarily, such that network topology may change, at times even rapidly.
- a wireless network may further employ a plurality of network access technologies, including Wi-Fi, Long Term Evolution (LTE), WLAN, Wireless Router (WR) mesh, or 2nd, 3rd, or 4th generation (2G, 3G, or 4G) cellular technology, or the like.
- Network access technologies may enable wide area coverage for devices, such as client devices with varying degrees of mobility, for example.
- a computing device may be capable of sending or receiving signals, such as via a wired or wireless network, or may be capable of processing or storing signals, such as in memory as physical memory states, and may, therefore, operate as a server.
- devices capable of operating as a server may include, as examples, dedicated rack-mounted servers, desktop computers, laptop computers, set top boxes, integrated devices combining various features, such as two or more features of the foregoing devices, or the like.
- Servers may vary widely in configuration or capabilities, but generally a server may include one or more central processing units and memory.
- a client (or consumer or user) device may include a computing device capable of sending or receiving signals, such as via a wired or a wireless network.
- a client device may, for example, include a desktop computer or a portable device, such as a cellular telephone, a smart phone, a display pager, a radio frequency (RF) device, an infrared (IR) device, a Near Field Communication (NFC) device, a Personal Digital Assistant (PDA), a handheld computer, a tablet computer, a phablet, a laptop computer, a set top box, a wearable computer, a smart watch, an integrated or distributed device combining various features, such as features of the foregoing devices, or the like.
- a client device may also include or execute an application to perform a variety of possible tasks, such as browsing, searching, playing, streaming or displaying various forms of content, including locally stored or uploaded images and/or video, or games (such as fantasy sports leagues).
- an “advertisement” should be understood to include, but not be limited to, digital media content embodied as a media item that provides information provided by another user, service, third party, entity, and the like.
- Such digital ad content can include any type of known or to be known media renderable by a computing device, including, but not limited to, video, text, audio, images, and/or any other type of known or to be known multi-media item or object.
- the digital ad content can be formatted as hyperlinked multi-media content that provides deep-linking features and/or capabilities. Therefore, while some content is referred to as an advertisement, it is still a digital media item that is renderable by a computing device, and such digital media item comprises content relaying promotional content provided by a network associated party.
- a relevance threshold can be set by a user, site administrator, artist creating/capturing the content, the system, service or platform hosting the content, or some combination thereof.
- relevancy can be quantified (or scored). For example, as discussed above, an image's relevancy can be determined via implementation of a logistic loss function which quantifies an image's parameters or features. In another non-limiting example, relevancy can be based on a discounted cumulative gain (DCG) measure of ranking quality, as discussed in more detail below.
- DCG can measure the effectiveness of web search engine algorithms or related applications by analyzing the returned results against a graded relevance scale of content items in a search engine result set.
- DCG measures the usefulness, or gain, of a content item based on its position in the result list. The gain is accumulated from the top of the result list to the bottom with the gain of each result discounted at lower ranks.
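- as a non-limiting illustration of the DCG computation just described (the function name, the top-k cutoff and the log2 discount below are assumptions consistent with the common DCG formulation rather than the patent's exact Equation 3), a graded result list can be scored as follows:

```python
import math

def dcg(grades, k=10):
    """Discounted cumulative gain over the top-k graded results.

    grades: relevance scores G_i listed in rank order (best-ranked first);
    each gain is discounted by log2(position + 1).
    """
    return sum(g / math.log2(i + 1) for i, g in enumerate(grades[:k], start=1))

# A list with the most relevant items at the top scores higher than the
# same grades in reverse order.
print(dcg([10, 7, 3, 1, 0]))  # ~16.3
print(dcg([0, 1, 3, 7, 10]))  # ~9.0
```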
- the disclosed systems and methods provide a unified framework of pair-wise and logistic loss functions being implemented within machine-learned ranking (MLR) systems and methods which construct ranking models of an image collection that are used when performing image retrieval upon receiving a search query.
- a web-enabled mobile device may include a browser application that is configured to receive and to send web pages, web-based messages, and the like.
- the browser application may be configured to receive and display graphics, text, multimedia, and the like, employing virtually any web based language, including a wireless application protocol messages (WAP), and the like.
- the browser application is enabled to employ Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript, Standard Generalized Markup Language (SGML), HyperText Markup Language (HTML), eXtensible Markup Language (XML), and the like, to display and send a message.
- Client devices 101 - 104 may include a computing device capable of sending or receiving signals, such as via a wired or wireless network, or capable of processing or storing signals, such as in memory as physical memory states, and may, therefore, operate as a server.
- Content server 106 can further provide a variety of services that include, but are not limited to, streaming and/or downloading media services, search services, email services, photo services, web services, social networking services, news services, third-party services, audio services, video services, instant messaging (IM) services, SMS services, MMS services, FTP services, voice over IP (VOIP) services, or the like.
- services, for example a mail application and/or email platform, can be provided via the application server 108 , whereby a user is able to utilize such a service upon the user being authenticated, verified or identified by the service.
- Examples of content may include images, text, audio, video, or the like, which may be processed in the form of physical signals, such as electrical signals, for example, or may be stored in memory, as physical states, for example.
- One approach to presenting targeted advertisements includes employing demographic characteristics (e.g., age, income, gender, occupation, etc.) for predicting user behavior, such as by group. Advertisements may be presented to users in a targeted audience based at least in part upon predicted user behavior(s).
- Another approach includes profile-type ad targeting.
- user profiles specific to a user may be generated to model user behavior, for example, by tracking a user's path through a web site or network of sites, and compiling a profile based at least in part on pages or advertisements ultimately delivered.
- a correlation may be identified, such as for user purchases, for example. An identified correlation may be used to target potential purchasers by targeting content or advertisements to particular users.
- a presentation system may collect descriptive content about types of advertisements presented to users. A broad range of descriptive content may be gathered, including content specific to an advertising presentation system. Advertising analytics gathered may be transmitted to locations remote to an advertising presentation system for storage or for further evaluation. Where advertising analytics transmittal is not immediately available, gathered advertising analytics may be stored by an advertising presentation system until transmittal of those advertising analytics becomes available.
- Servers 106 , 108 , 120 and 130 may be capable of sending or receiving signals, such as via a wired or wireless network, or may be capable of processing or storing signals, such as in memory as physical memory states.
- Devices capable of operating as a server may include, as examples, dedicated rack-mounted servers, desktop computers, laptop computers, set top boxes, integrated devices combining various features, such as two or more features of the foregoing devices, or the like.
- Servers may vary widely in configuration or capabilities, but generally, a server may include one or more central processing units and memory.
- a server may also include one or more mass storage devices, one or more power supplies, one or more wired or wireless network interfaces, one or more input/output interfaces, or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
- Although FIG. 1 illustrates servers 106 , 108 , 120 and 130 as single computing devices, respectively, the disclosure is not so limited. For example, one or more functions of servers 106 , 108 , 120 and/or 130 may be distributed across one or more distinct computing devices. Moreover, in one embodiment, servers 106 , 108 , 120 and/or 130 may be integrated into a single computing device, without departing from the scope of the present disclosure.
- Power supply 226 provides power to Client device 200 .
- a rechargeable or non-rechargeable battery may be used to provide power.
- the power may also be provided by an external power source, such as an AC adapter or a powered docking cradle that supplements and/or recharges a battery.
- Keypad 256 may comprise any input device arranged to receive input from a user.
- keypad 256 may include a push button numeric dial, or a keyboard.
- Keypad 256 may also include command buttons that are associated with selecting and sending images.
- Illuminator 258 may provide a status indication and/or provide light. Illuminator 258 may remain active for specific periods of time or in response to events. For example, when illuminator 258 is active, it may backlight the buttons on keypad 256 and stay on while the client device is powered. Also, illuminator 258 may backlight these buttons in various patterns when particular actions are performed, such as dialing another client device. Illuminator 258 may also cause light sources positioned within a transparent or translucent case of the client device to illuminate in response to actions.
- Client device 200 also comprises input/output interface 260 for communicating with external devices, such as a headset, or other input or output devices not shown in FIG. 2 .
- Input/output interface 260 can utilize one or more communication technologies, such as USB, infrared, Bluetooth™, or the like.
- Haptic interface 262 is arranged to provide tactile feedback to a user of the client device. For example, the haptic interface may be employed to vibrate client device 200 in a particular way when the Client device 200 receives a communication from another user.
- Mass memory 230 includes a RAM 232 , a ROM 234 , and other storage means. Mass memory 230 illustrates another example of computer storage media for storage of information such as computer readable instructions, data structures, program modules or other data. Mass memory 230 stores a basic input/output system (“BIOS”) 240 for controlling low-level operation of Client device 200 . The mass memory also stores an operating system 241 for controlling the operation of Client device 200 . It will be appreciated that this component may include a general purpose operating system such as a version of UNIX, or LINUX™, or a specialized client communication operating system such as Windows Client™, or the Symbian® operating system. The operating system may include, or interface with, a Java virtual machine module that enables control of hardware components and/or operating system operations via Java application programs.
- Memory 230 further includes one or more data stores, which can be utilized by Client device 200 to store, among other things, applications 242 and/or other data.
- data stores may be employed to store information that describes various capabilities of Client device 200 . The information may then be provided to another device based on any of a variety of events, including being sent as part of a header during a communication, sent upon request, or the like. At least a portion of the capability information may also be stored on a disk drive or other storage medium (not shown) within Client device 200 .
- Applications 242 may include computer executable instructions which, when executed by Client device 200 , transmit, receive, and/or otherwise process audio, video, images, and enable telecommunication with a server and/or another user of another client device.
- Other examples of application programs or “apps” in some embodiments include browsers, calendars, contact managers, task managers, transcoders, photo management, database programs, word processing programs, security applications, spreadsheet programs, games, search programs, and so forth.
- Applications 242 may further include search client 245 that is configured to send, to receive, and/or to otherwise process a search query and/or search result using any known or to be known communication protocols. Although a single search client 245 is illustrated it should be clear that multiple search clients may be employed. For example, one search client may be configured to enter a search query message, where another search client manages search results, and yet another search client is configured to manage serving advertisements, IMs, emails, and other types of known messages, or the like.
- FIG. 3 is a block diagram illustrating the components for performing the systems and methods discussed herein.
- FIG. 3 includes a search engine 300 , network 315 and database 320 .
- the search engine 300 can be a special purpose machine or processor and could be hosted by an application server, content server, social networking server, web server, search server, content provider, email service provider, ad server, user's computing device, and the like, or any combination thereof.
- search engine 300 can be embodied as a stand-alone application that executes on a user device.
- the search engine 300 can function as an application installed on the user's device, and in some embodiments, such application can be a web-based application accessed by the user device over a network.
- the search engine 300 can be installed as an augmenting script, program or application to another media and/or content serving application, such as, for example, Yahoo!® Search, Yahoo!® Mail, Flickr®, Tumblr®, Twitter®, Instagram®, SnapChat®, Facebook®, Amazon®, EBay® and the like.
- Database 320 comprises a dataset of data and metadata associated with local and/or network information related to users, services, applications, user-generated content, third party provided content and the like. Such information can be stored and indexed in the database 320 independently and/or as a linked or associated dataset. As discussed above, it should be understood that the data (and metadata) in the database 320 can be any type of information and type, whether known or to be known, without departing from the scope of the present disclosure.
- the data (and metadata) in the database 320 can be any type of information related to a user, content, a device, an application, a service provider, a content provider, whether known or to be known, without departing from the scope of the present disclosure.
- database 320 can comprise information associated with content providers, such as, but not limited to, content generating and hosting sites or providers that enable users to search for, upload, download, share, edit or otherwise avail users to content (e.g., Yahoo!® Search, Yahoo!® Mobile applications, Yahoo!® Mail, Flickr®, Tumblr®, Twitter®, Instagram®, SnapChat®, Facebook®, and the like).
- Such sites may also enable users to search for and purchase products or services based on information provided by those sites, such as, for example, Amazon®, EBay® and the like.
- database 320 can comprise data and metadata associated with such content information from one and/or an assortment of media hosting sites.
- the information stored in database 320 can be represented as an n-dimensional vector (or feature vector) for each stored data/metadata item, where the information associated with, for example, the stored images can correspond to a node(s) on the vector of an image.
- database 320 can store and index content information as a linked set of data and metadata, where the data and metadata relationship can be stored as the n-dimensional vector discussed above.
- Such storage can be realized through any known or to be known vector or array storage, including but not limited to, a hash tree, queue, stack, VList, or any other type of known or to be known dynamic memory allocation technique or technology.
- the information can be analyzed, stored and indexed according to any known or to be known computational analysis technique or algorithm, such as, but not limited to, word2vec analysis, cluster analysis, data mining, Bayesian network analysis, Hidden Markov models, artificial neural network analysis, logical model and/or tree analysis, and the like.
- database 320 can be a single database housing information associated with one or more services and/or content providers, and in some embodiments, database 320 can be configured as a linked set of data stores that provides such information, as each datastore in the set is associated with and/or unique to a specific service and/or content provider.
- the network 315 can be any type of network such as, but not limited to, a wireless network, a local area network (LAN), wide area network (WAN), the Internet, or a combination thereof.
- the network 315 facilitates connectivity of the search engine 300 , and the database of stored resources 320 .
- the search engine 300 and database 320 can be directly connected by any known or to be known method of connecting and/or enabling communication between such devices and resources.
- the principal processor, server, or combination of devices that comprises hardware programmed in accordance with the special purpose functions herein is referred to for convenience as search engine 300 , and includes training data module 302 , learning module 304 , determination module 306 and media identification module 308 .
- engine(s) and modules discussed herein are non-exhaustive, as additional or fewer engines and/or modules (or sub-modules) may be applicable to the embodiments of the systems and methods discussed.
- the operations, configurations and functionalities of each module, and their role within embodiments of the present disclosure will be discussed with reference to FIGS. 4 - 5 .
- the disclosed systems and methods effectuate cost effective, accurate and computationally efficient identification of high-quality, top-ranked images.
- the images can be images of and/or associated with a user generated content (UGC) collection, a provider or service generated collection, or some combination thereof.
- the identification of content will be focused on discovering digital images; however, it should not be construed as limiting, as any known or to be known type of content, media and/or multi-media (e.g., text, video, audio, multi-media, RSS feeds, graphics interchange format (GIF) files, and the like) is applicable to the disclosed systems and methods discussed herein.
- the “learning to rank” framework is a manner of ranking search results using a machine learning model in which all the content items being searched (referred to as “documents”) are represented by feature vectors.
- the feature vector, as discussed above, for each document comprises information reflecting the relevance of the document to a query.
- Typical features used in learning to rank include, but are not limited to, the frequencies of the query terms in the document, the BM25 and PageRank scores, the relationship between the document and other documents, and the like.
- the retrieved documents are images and the features not only reflect the relevance of the images but also the quality of the images.
- the features used to reflect the relevance of the images include image-specific features, such as, but not limited to, frequencies of the query terms in the filename of the image, object saliency scores in the image, face position in the images, and the like.
- quality-indicating features can be identified and/or extracted from feature vectors, such as, but not limited to, sharpness, contrast, saturation, emotion arousal scores, and the like.
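- purely as a sketch of how such relevance-oriented and quality-oriented features might be combined into one vector (the field names, the particular features and the helper signature here are illustrative assumptions, not the patent's fixed feature set):

```python
import numpy as np

def image_feature_vector(query_terms, image_meta, image_quality):
    """Assemble a d-dimensional feature vector for one (query, image) pair.

    image_meta: dict with metadata such as 'filename' and 'title'.
    image_quality: dict with precomputed scores such as 'sharpness',
                   'contrast' and 'saturation'.
    """
    query = [t.lower() for t in query_terms]
    filename_terms = image_meta.get("filename", "").lower().replace("_", " ").split()
    title_terms = image_meta.get("title", "").lower().split()
    # Relevance-oriented features: query-term overlap with image metadata.
    filename_freq = sum(filename_terms.count(t) for t in query)
    title_overlap = len(set(query) & set(title_terms))
    # Quality-oriented features, taken directly from the quality scores.
    quality = [float(image_quality.get(k, 0.0))
               for k in ("sharpness", "contrast", "saturation")]
    return np.array([filename_freq, title_overlap, *quality], dtype=float)
```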
- a predetermined threshold value, for example, more than 300.
- the learning to rank framework, which is executed by the search engine 300 , finds a ranking function f(x) which assigns a score to each query-document pair (q, d) and then ranks the documents according to the score.
- Process 400 begins with Step 402 where training data associated with search queries and images is identified.
- the training data is composed of a set of queries and a set of images.
- the training data will be compiled into a set of triples: ⁇ query, document, grade ⁇ , where the “grade” indicates the degree of relevance of the document to the query.
- the identification of the training data enables the determination of the ranking function f(x). That is, as evidenced from the below discussion, the learning to rank framework implements (or adapts) a machine learning algorithm, methodology or technique that will result in minimizing a loss function which is constructed from the training data.
- the machine learning algorithm, methodology or technique can be, but is not limited to, gradient boosting, as discussed below.
- each grade can be one element in an ordinal set.
- a set of PEGFB labels ⁇ perfect, excellent, good, fair, bad ⁇ .
- such labels can be judged and applied to the query-document pairing by search editors.
- the training data used in the instant disclosure, in some embodiments, is identified (and collected) in the following way (Step 404 ):
- an assigned relevance label is applied.
- such label can be applied by a human editor.
- the labels of relevance include, but are not limited to, “Highly Relevant”, “Moderately Relevant” and “Not Relevant”.
- a label of quality is also assigned.
- such label can be applied by a human editor.
- the labels of quality include, but are not limited to, “Exceptional”, “Professional”, “Good”, “Fair” and “Bad”.
- the disclosed systems and methods are able to reduce the confusion in the image search editorial judging process, which leads to a more efficient usage of the editorial resource(s).
- in Step 406 , a training data set is compiled, which is represented as follows: {(x_j^q, y_j^q)}, where q goes from 1 to n (the number of queries), j goes from 1 to m_q (the number of images for query q), x_j^q ∈ R^d is the d-dimensional feature vector for the pair of query q and the j-th image for query q, and y_j^q is the combined label (e.g., PEGFB) for x_j^q.
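- a minimal sketch of compiling such a training set from editorial judgments (the tuple layout and helper names are illustrative assumptions; the dictionary encodes the relevance-by-quality mapping of Table 1, reproduced in the Description below):

```python
# Relevance label -> (quality label -> combined PEGFB label), per Table 1.
PEGFB_MAP = {
    "Highly Relevant": {"Exceptional": "Perfect", "Professional": "Excellent",
                        "Good": "Good", "Fair": "Fair", "Bad": "Bad"},
    "Moderately Relevant": {"Exceptional": "Fair", "Professional": "Fair",
                            "Good": "Fair", "Fair": "Fair", "Bad": "Bad"},
    "Not Relevant": {q: "Bad"
                     for q in ("Exceptional", "Professional", "Good", "Fair", "Bad")},
}

def compile_training_set(judgments, featurize):
    """judgments: iterable of (query, image, relevance_label, quality_label)
    tuples produced by the editors; featurize(query, image) returns x_j^q.

    Returns a list of (feature_vector, combined_PEGFB_label) pairs.
    """
    training = []
    for query, image, relevance, quality in judgments:
        y = PEGFB_MAP[relevance][quality]   # combined label y_j^q
        training.append((featurize(query, image), y))
    return training
```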
- where F is the search space for the candidate ranking function and L is the loss function.
- F is the sum of decision trees utilized by the editors when determining the grade of query-image pairs.
- the number of quality factors used for analyzing an image is a predetermined number, for example, 20; therefore, in such embodiments, F is then a maximum of 20.
- the key factor in the determination of the ranking function is the design of the loss function L because the implementation of the loss function leads to a high-quality ranking function.
- a logistic loss function and a pair-wise loss function are simultaneously applied to the compiled training data set.
- the first one is a point-wise loss function:
- the second category of loss functions is called pair-wise loss function:
- the disclosed systems and methods combine the pair-wise loss and the logistic loss into the framework for the novel learning to rank functionality discussed herein (as in Step 408 ).
- the logistic loss aims at reducing non-relevant images in top results, while the pair-wise loss incorporates the preference of image quality.
- a pair-wise loss function (Equation 1) is utilized, where the order y_i^q > y_j^q is determined by the PEGFB label from Table 1. Combining the two loss functions results in a loss function which can be used to learn the ranking function f(x):
- a parameter in the combined loss (Equation 2) balances the contributions of the two kinds of loss functions.
- G_i represents the relevance score assigned to the document at position i on the 5-point PEGFB scale: 10 for “Perfect,” 7 for “Excellent,” 3 for “Good,” 1 for “Fair,” and 0 for “Bad.” A higher degree of relevance corresponds to a higher value of DCG.
- QDCG is used as a primary measurement of search engine performance. For each image in the ranked list of N images (N is set to 10, for example), a label of relevance and a label of quality is given by the editors. As discussed above in Step 404 , the labels of relevance include “Highly Relevant”, “Moderately Relevant” and “Not Relevant”; and the labels of quality include “Exceptional”, “Professional”, “Good”, “Fair” and “Bad”. The two labels are then combined and mapped into the PEGFB labels according to Table 1. QDCG is then defined using the definition of DCG in Equation 3 and the scores G_i given by Table 1. QDCG not only measures the relevance performance of an image search engine but also the quality of the ranked image list.
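- a brief sketch of the QDCG measurement (the score table repeats the G_i values quoted above; the function name and the log2 discount are assumptions mirroring the DCG sketch earlier):

```python
import math

# 5-point PEGFB scores G_i, as given above.
PEGFB_SCORE = {"Perfect": 10, "Excellent": 7, "Good": 3, "Fair": 1, "Bad": 0}

def qdcg(pegfb_labels, n=10):
    """QDCG over the top-n results, given their combined PEGFB labels
    (already mapped from the relevance and quality labels via Table 1)."""
    return sum(PEGFB_SCORE[label] / math.log2(i + 1)
               for i, label in enumerate(pegfb_labels[:n], start=1))

print(qdcg(["Perfect", "Excellent", "Fair", "Bad"]))  # ~14.9
```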
- in Step 410 , a gradient boosting algorithm (e.g., Gbrank) is applied to Equation 2 (from Step 408 ) to optimize (e.g., minimize) the loss function constructed from the training data.
- in Step 412 , the ranking function f(x) is determined from the output of Step 410 (e.g., f(x) is produced from applying gradient boosting to the combined loss function).
- the ranking function f(x) can be deployed at runtime of an image search. That is, for example, once a user types a query q, the query is sent to search engine 300 which returns a set of candidate images. For each candidate image, the search engine 300 collects its features and represents them as feature vectors x_i and then applies the ranking function f to calculate the ranking scores f(x_i). The candidate images are then returned as a ranked set where the order is determined by the ranking scores.
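- a minimal runtime sketch of this deployment step (ranking_function stands in for the learned f(x), and featurize for the engine's feature collection; both names are assumptions):

```python
def rank_candidates(query, candidate_images, featurize, ranking_function):
    """Score every candidate image with the learned ranking function f and
    return the candidates ordered by descending ranking score f(x_i)."""
    scored = [(ranking_function(featurize(query, image)), image)
              for image in candidate_images]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [image for _, image in scored]
```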
- Process 500 details steps performed in accordance with some embodiments of the present disclosure for applying the determined ranking function to a search query in order to return the highest ranked (e.g., relevant) and highest-quality images.
- Process 500 is executed at runtime, or when a query is received or detected whereupon an image search to satisfy the query is performed.
- Process 500 is performed by the search engine 300 , and specifically by the determination module 306 (which performs Steps 502 and 506 - 510 ) and the media identification module 308 (which performs Steps 504 and 512 - 514 ), which are special purpose modules as detailed below.
- the search engine 300 , through modules 306 and 308 , implements the learned ranking function (from Process 400 ) in order to identify high-quality, highly-relevant (accurate or high-ranking) content (e.g., images).
- the identification of content will be discovering digital images; however, it should not be construed as limiting, as any known or to be known type of content, media and/or multi-media (e.g., text, video, audio, multi-media, RSS feeds, graphics interchange format (GIF) files, and the like) is applicable to the disclosed systems and methods discussed herein.
- Process 500 begins with Step 502 where a query for an image is received.
- the query for an image or images is ultimately a request for content to be communicated to a user.
- the request can be based on any type of known or to be known process for triggering a request to serve or communicate content to a user, such as, for example, a user requesting content, a user browsing a particular web page and being determined to receive particular content based on such browsing, the user's location, the user's interests derived from his/her user profile, the user's mail activity, the user's search activity, the user's social networking activity, the user's media rendering activity, and the like, or some combination thereof.
- Step 502 's request (or query) therefore, comprises information related to the type of content that should be provided to the user (or desired by the user), which can be based on the activity of the user, as discussed above.
- the request can comprise any type of data that can be used as a search, such as, but not limited to, text, audio, images, video, and/or any other type of multi-media content that can be used as a basis for searching for other content.
- a search is performed based on the received query (or request) and a set of candidate images are identified.
- the candidate images are identified from the search using any known or to be known type of image (or document) retrieval in which the top related images to the query are returned.
- data and/or metadata of the stored images are analyzed in relation to the query, and those images that have data/metadata related to the query (at or above a threshold) are identified as candidate images.
- data/metadata can include, but is not limited to, the title of the image, author of the image, subject, tags, annotations and the like.
- the query can be translated into a feature vector, as discussed above, where the nodes of the query feature vector are compared against the features of the images.
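- one illustrative way such a comparison could be realized (cosine similarity and the threshold parameter are assumptions chosen for the sketch; the disclosure does not mandate a particular comparison measure):

```python
import numpy as np

def candidate_indices(query_vec, image_vecs, threshold=0.5):
    """Return the indices of images whose feature vectors relate to the
    query feature vector at or above the threshold (cosine similarity)."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    hits = []
    for idx, vec in enumerate(image_vecs):
        similarity = float(q @ (vec / (np.linalg.norm(vec) + 1e-12)))
        if similarity >= threshold:
            hits.append(idx)
    return hits
```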
- each identified candidate image is translated into a feature vector, where each feature vector is a d-dimensional vector having its features as corresponding nodes on the vector.
- features of the images can include, but are not limited to, resolution, focus, pixel quality, size, dimension, color scheme, exposure, white balance and the like.
- the features can also, or alternatively include, the title of the image, author of the image, subject, tags, annotations and the like.
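- a rough sketch of deriving a few of the quality-oriented features named above from raw pixel data (the specific estimators, Laplacian variance for sharpness and grayscale spread for contrast, are common stand-ins and not the patent's prescribed measures):

```python
import numpy as np

def quality_features(rgb):
    """rgb: H x W x 3 array of floats in [0, 1].

    Returns (sharpness, contrast, saturation) estimates for the image."""
    gray = rgb.mean(axis=2)
    # Sharpness: variance of a simple Laplacian response on interior pixels.
    laplacian = (gray[:-2, 1:-1] + gray[2:, 1:-1] + gray[1:-1, :-2]
                 + gray[1:-1, 2:] - 4.0 * gray[1:-1, 1:-1])
    sharpness = float(laplacian.var())
    # Contrast: spread of the luminance values.
    contrast = float(gray.std())
    # Saturation: average per-pixel (max - min) channel difference.
    saturation = float((rgb.max(axis=2) - rgb.min(axis=2)).mean())
    return sharpness, contrast, saturation
```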
- Step 508 the ranking function determined from Process 400 is applied to each candidate image's feature vector.
- the application of the ranking function results in the calculation of a score for each candidate image.
- the score is calculated for the query image pair—i.e., a score for the pair including: the query received in Step 502 , and the image(s) identified in Step 504 .
- the score is calculated for the candidate image and then assigned to a query-(candidate) image pair.
- Step 510 the candidate images (or the query image pairs) are ranked according to their calculated scores.
- the candidate images that have scores of greater value than other candidate images are ranked higher in such ranking.
- Step 512 an image set is compiled (or determined) based on the result of the ranking occurring in Step 510 .
- the image set (referred to as a search result), therefore, comprises a ranked set of images, where the images having higher-valued scores are ranked higher in the image set than those images with lower scores.
- the compiled image set (or search result) is communicated to a user in response to the received query.
- the communicated image set comprises the high-quality and highest ranked (or relevant) images to the query, thereby providing the user with an improved search experience.
- the disclosed systems and methods as understood by those of skill in the art, can be implemented over mobile platforms; therefore, while conventional systems will dedicate resources to speed and efficiency in providing users content, the disclosed systems and methods can ensure that while speed and efficiency will be maintained, quality and relevance of the content provided to the user will not suffer as a result.
- FIG. 6 is a work flow example 600 for serving relevant digital media content associated with advertisements (e.g., digital advertisement content) based on the information associated with the identified media (or content), as discussed above in relation to FIGS. 3 - 5 .
- search information can include, but is not limited to, analyzed information (i.e., information associated with and/or derived from the stored images), the identity, context and/or type of media content being rendered and/or provided to a user, the content of such media, search results, search queries, and the like, and/or some combination thereof.
- Step 602 search information is identified.
- the search information can be based on any of the information from the search process outlined above with respect to FIGS. 3 - 5 .
- Process 600 will refer to a single provided/identified content object (e.g., an image or set of images) as the basis for serving a digital advertisement(s); however, it should not be construed as limiting, as any number of search sessions, identified content items, and/or quantities of information related to applications on a user device and/or media renderable via such applications can form such basis, without departing from the scope of the instant disclosure.
- a context is determined based on the identified search information. This context forms a basis for serving advertisements related to the search information.
- the context can be determined by determining a category which the search information of Step 602 represents. For example, the category can be related to the content type of the media being searched for, identified, selected or rendered.
- the identification of the context from Step 604 can occur before, during and/or after the analysis detailed above with respect to Processes 400 - 500 , or some combination thereof.
- the context (e.g., content/context data) is communicated (or shared) to an advertisement platform comprising an advertisement server 130 and an ad database.
- upon receipt of the context, the advertisement server 130 performs a search for a relevant advertisement within the associated ad database. The search for an advertisement is based at least on the identified context.
- Step 608 the advertisement server 130 searches the ad database for a digital advertisement(s) that matches the identified context.
- an advertisement is selected (or retrieved) based on the results of Step 608 .
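- a hedged sketch of the context-to-advertisement matching described in Steps 608 - 610 (the ad-record fields and the exact-category match rule are assumptions about the ad database schema made only for illustration):

```python
def select_advertisement(context_category, ad_database):
    """ad_database: iterable of dicts, each with 'category' and 'ad' fields.

    Returns the first advertisement whose category matches the identified
    context, or None when no match is found."""
    for record in ad_database:
        if record.get("category") == context_category:
            return record.get("ad")
    return None
```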
- the selected advertisement can be modified to conform to attributes of the page, message or method upon which the advertisement will be displayed, and/or to the application and/or device for which it will be displayed.
- the selected advertisement is shared or communicated via the application the user is utilizing to search for and/or render the media.
- Step 612 the selected advertisement is sent directly to each user's computing device.
- the selected advertisement is displayed in conjunction with the rendered and/or identified media on the user's device and/or within the application being used to search for and/or render the media.
- internal architecture 700 of a computing device(s), computing system, computing platform and the like includes one or more processing units, processors, or processing cores, (also referred to herein as CPUs) 712 , which interface with at least one computer bus 702 .
- Memory 704 interfaces with computer bus 702 so as to provide information stored in memory 704 to CPU 712 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., one or more of process flows described herein.
- CPU 712 first loads computer executable process steps from storage, e.g., memory 704 , computer readable storage medium/media 706 , removable media drive, and/or other storage device.
- CPU 712 can then execute the stored process steps in order to execute the loaded computer-executable process steps.
- Stored data e.g., data stored by a storage device, can be accessed by CPU 712 during the execution of computer-executable process steps.
- Persistent storage can be used to store an operating system and one or more application programs. Persistent storage can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage can further include program modules and data files used to implement one or more embodiments of the present disclosure, e.g., listing selection module(s), targeting information collection module(s), and listing notification module(s), the functionality and use of which in the implementation of the present disclosure are discussed in detail herein.
- Network link 728 typically provides information communication using transmission media through one or more networks to other devices that use or process the information.
- network link 728 may provide a connection through local network 724 to a host computer 726 or to equipment operated by a Network or Internet Service Provider (ISP) 730 .
- ISP equipment in turn provides data communication services through the public, worldwide packet-switching communication network of networks now commonly referred to as the Internet 732 .
- At least some embodiments of the present disclosure are related to the use of computer system 700 for implementing some or all of the techniques described herein. According to one embodiment, those techniques are performed by computer system 700 in response to processing unit 712 executing one or more sequences of one or more processor instructions contained in memory 704 . Such instructions, also called computer instructions, software and program code, may be read into memory 704 from another computer-readable medium 706 such as storage device or network link. Execution of the sequences of instructions contained in memory 704 causes processing unit 712 to perform one or more of the method steps described herein. In alternative embodiments, hardware, such as ASIC, may be used in place of or in combination with software. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software, unless otherwise explicitly stated herein.
- the signals transmitted over network link and other networks through communications interface carry information to and from computer system 700 .
- Computer system 700 can send and receive information, including program code, through the networks, among others, through network link and communications interface.
- a server host transmits program code for a particular application, requested by a message sent from computer, through Internet, ISP equipment, local network and communications interface.
- the received code may be executed by processing unit 712 as it is received, or may be stored in memory 704 or in storage device or other non-volatile storage for later execution, or both.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Library & Information Science (AREA)
- Game Theory and Decision Science (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Economics (AREA)
- Entrepreneurship & Innovation (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Transfer Between Computers (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Table 1 (mapping of editorial relevance and quality labels to the combined PEGFB label):

| | Exceptional | Professional | Good | Fair | Bad |
|---|---|---|---|---|---|
| Highly Relevant | Perfect | Excellent | Good | Fair | Bad |
| Moderately Relevant | Fair | Fair | Fair | Fair | Bad |
| Not Relevant | Bad | Bad | Bad | Bad | Bad |
where F is the search space for the candidate ranking function and L is the loss function. According to some embodiments, F is the sum of decision trees utilized by the editors when determining the grade of query-image pairs. In some embodiments, the number of quality factors used for analyzing an image (from Step 404) is a predetermined number, for example, 20; therefore, in such embodiments, F is then a maximum of 20.
- the first category is a point-wise loss function, where l is a regression loss l(x, y) := ∥x−y∥², or l is a logistic loss l(x, y) := log(1 + exp(−yx)).
- the second category is a pair-wise loss function (Equation 1), where l is a quadratic hinge loss function l(t) := max(0, 1−t)².
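For reference, the loss functions these excerpts describe can be written out in the notation of the training set defined above; the summation forms and the balancing parameter λ below are a hedged reconstruction consistent with the surrounding text (the patent's exact Equations 1 and 2 are not reproduced on this page):

```latex
% Point-wise loss over the training pairs (x_j^q, y_j^q):
L_{\mathrm{point}}(f) = \sum_{q=1}^{n} \sum_{j=1}^{m_q} l\bigl(f(x_j^q),\, y_j^q\bigr),
\qquad l(x, y) := \|x - y\|^2 \ \text{or}\ l(x, y) := \log\bigl(1 + \exp(-yx)\bigr).

% Pair-wise loss over pairs ordered by their PEGFB labels (Equation 1):
L_{\mathrm{pair}}(f) = \sum_{q=1}^{n} \sum_{y_i^q > y_j^q} l\bigl(f(x_i^q) - f(x_j^q)\bigr),
\qquad l(t) := \max(0,\, 1 - t)^2.

% Combined loss used to learn the ranking function (Equation 2):
L(f) = L_{\mathrm{pair}}(f) + \lambda\, L_{\mathrm{point}}(f).
```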
Claims (21)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/681,992 US12430342B2 (en) | 2016-03-18 | 2019-11-13 | Computerized system and method for high-quality and high-ranking digital content discovery |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/074,028 US10482091B2 (en) | 2016-03-18 | 2016-03-18 | Computerized system and method for high-quality and high-ranking digital content discovery |
| US16/681,992 US12430342B2 (en) | 2016-03-18 | 2019-11-13 | Computerized system and method for high-quality and high-ranking digital content discovery |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/074,028 Continuation US10482091B2 (en) | 2016-03-18 | 2016-03-18 | Computerized system and method for high-quality and high-ranking digital content discovery |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200081896A1 US20200081896A1 (en) | 2020-03-12 |
| US12430342B2 true US12430342B2 (en) | 2025-09-30 |
Family
ID=59847809
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/074,028 Active 2037-04-29 US10482091B2 (en) | 2016-03-18 | 2016-03-18 | Computerized system and method for high-quality and high-ranking digital content discovery |
| US16/681,992 Active 2037-02-28 US12430342B2 (en) | 2016-03-18 | 2019-11-13 | Computerized system and method for high-quality and high-ranking digital content discovery |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/074,028 Active 2037-04-29 US10482091B2 (en) | 2016-03-18 | 2016-03-18 | Computerized system and method for high-quality and high-ranking digital content discovery |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US10482091B2 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10102206B2 (en) | 2016-03-31 | 2018-10-16 | Dropbox, Inc. | Intelligently identifying and presenting digital documents |
| CN106021374A (en) * | 2016-05-11 | 2016-10-12 | 百度在线网络技术(北京)有限公司 | Underlay recall method and device for query result |
| US10958695B2 (en) | 2016-06-21 | 2021-03-23 | Google Llc | Methods, systems, and media for recommending content based on network conditions |
| US10083379B2 (en) * | 2016-09-27 | 2018-09-25 | Facebook, Inc. | Training image-recognition systems based on search queries on online social networks |
| US10257128B2 (en) * | 2016-11-28 | 2019-04-09 | Microsoft Technology Licensing, Llc | Presenting messages to participants based on neighborhoods |
| US10956453B2 (en) * | 2017-05-24 | 2021-03-23 | International Business Machines Corporation | Method to estimate the deletability of data objects |
| CN108875907B (en) * | 2018-04-23 | 2022-02-18 | 北方工业大学 | Fingerprint identification method and device based on deep learning |
| US10789288B1 (en) * | 2018-05-17 | 2020-09-29 | Shutterstock, Inc. | Relational model based natural language querying to identify object relationships in scene |
| US10878291B2 (en) * | 2019-03-28 | 2020-12-29 | International Business Machines Corporation | Visually guided query processing |
| US11029984B2 (en) * | 2019-04-27 | 2021-06-08 | EMC IP Holding Company LLC | Method and system for managing and using data confidence in a decentralized computing platform |
| KR102350610B1 (en) | 2019-12-26 | 2022-01-14 | 고려대학교 산학협력단 | Method for raw-to-rgb mapping using two-stage u-net with misaligned data, recording medium and device for performing the method |
| CN111914822B (en) * | 2020-07-23 | 2023-11-17 | 腾讯科技(深圳)有限公司 | Text image labeling method, device, computer readable storage medium and equipment |
| CN112258285A (en) * | 2020-10-26 | 2021-01-22 | 北京沃东天骏信息技术有限公司 | Content recommendation method and device, equipment and storage medium |
| US20240232613A1 (en) * | 2023-01-08 | 2024-07-11 | Near Intelligence Holdings, Inc. | Method for performing deep similarity modelling on client data to derive behavioral attributes at an entity level |
Priority and Related Applications
| Date | Application | Publication | Status |
|---|---|---|---|
| 2016-03-18 | US 15/074,028 | US10482091B2 | Active |
| 2019-11-13 | US 16/681,992 | US12430342B2 | Active |
Patent Citations (61)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060015496A1 (en) * | 2003-11-26 | 2006-01-19 | Yesvideo, Inc. | Process-response statistical modeling of a visual image for use in determining similarity between visual images |
| US7836050B2 (en) * | 2006-01-25 | 2010-11-16 | Microsoft Corporation | Ranking content based on relevance and quality |
| US20070174872A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Ranking content based on relevance and quality |
| US20070209025A1 (en) * | 2006-01-25 | 2007-09-06 | Microsoft Corporation | User interface for viewing images |
| US20070239632A1 (en) * | 2006-03-17 | 2007-10-11 | Microsoft Corporation | Efficiency of training for ranking systems |
| US20070239764A1 (en) * | 2006-03-31 | 2007-10-11 | Fuji Photo Film Co., Ltd. | Method and apparatus for performing constrained spectral clustering of digital image data |
| US8090222B1 (en) * | 2006-11-15 | 2012-01-03 | Google Inc. | Selection of an image or images most representative of a set of images |
| US20080285860A1 (en) * | 2007-05-07 | 2008-11-20 | The Penn State Research Foundation | Studying aesthetics in photographic images using a computational approach |
| US20090319507A1 (en) * | 2008-06-19 | 2009-12-24 | Yahoo! Inc. | Methods and apparatuses for adapting a ranking function of a search engine for use with a specific domain |
| US20100082617A1 (en) * | 2008-09-24 | 2010-04-01 | Microsoft Corporation | Pair-wise ranking model for information retrieval |
| US20100082510A1 (en) * | 2008-10-01 | 2010-04-01 | Microsoft Corporation | Training a search result ranker with automatically-generated samples |
| US9053115B1 (en) * | 2009-04-20 | 2015-06-09 | Google Inc. | Query image search |
| US8429173B1 (en) * | 2009-04-20 | 2013-04-23 | Google Inc. | Method, system, and computer readable medium for identifying result images based on an image query |
| US8566331B1 (en) * | 2009-05-29 | 2013-10-22 | Google Inc. | Ordering image search results |
| US8209330B1 (en) * | 2009-05-29 | 2012-06-26 | Google Inc. | Ordering image search results |
| US8370282B1 (en) * | 2009-07-22 | 2013-02-05 | Google Inc. | Image quality measures |
| US8738553B1 (en) * | 2009-07-22 | 2014-05-27 | Google Inc. | Image selection based on image quality |
| CA2714523A1 (en) * | 2009-09-02 | 2011-03-02 | Sophia Learning, Llc | Teaching and learning system |
| US20110064308A1 (en) * | 2009-09-15 | 2011-03-17 | Tandent Vision Science, Inc. | Method and system for learning a same-material constraint in an image |
| US20130121584A1 (en) * | 2009-09-18 | 2013-05-16 | Lubomir D. Bourdev | System and Method for Using Contextual Features to Improve Face Recognition in Digital Images |
| US20110087673A1 (en) * | 2009-10-09 | 2011-04-14 | Yahoo!, Inc., a Delaware corporation | Methods and systems relating to ranking functions for multiple domains |
| US20110116690A1 (en) * | 2009-11-18 | 2011-05-19 | Google Inc. | Automatically Mining Person Models of Celebrities for Visual Search Applications |
| US20150161178A1 (en) * | 2009-12-07 | 2015-06-11 | Google Inc. | Distributed Image Search |
| US20110145175A1 (en) * | 2009-12-14 | 2011-06-16 | Massachusetts Institute Of Technology | Methods, Systems and Media Utilizing Ranking Techniques in Machine Learning |
| US20110170768A1 (en) * | 2010-01-11 | 2011-07-14 | Tandent Vision Science, Inc. | Image segregation system with method for handling textures |
| US20110196859A1 (en) * | 2010-02-05 | 2011-08-11 | Microsoft Corporation | Visual Search Reranking |
| US20140321761A1 (en) * | 2010-02-08 | 2014-10-30 | Microsoft Corporation | Intelligent Image Search Results Summarization and Browsing |
| US20110194761A1 (en) * | 2010-02-08 | 2011-08-11 | Microsoft Corporation | Intelligent Image Search Results Summarization and Browsing |
| US20120011112A1 (en) * | 2010-07-06 | 2012-01-12 | Yahoo! Inc. | Ranking specialization for a search |
| US20130208978A1 (en) * | 2010-10-19 | 2013-08-15 | 3M Innovative Properties Company | Continuous charting of non-uniformity severity for detecting variability in web-based materials |
| US8532377B2 (en) * | 2010-12-22 | 2013-09-10 | Xerox Corporation | Image ranking based on abstract concepts |
| US20120271806A1 (en) * | 2011-04-21 | 2012-10-25 | Microsoft Corporation | Generating domain-based training data for tail queries |
| US20120290566A1 (en) * | 2011-05-12 | 2012-11-15 | Google Inc. | Dynamic image display area and image display within web search results |
| US20120303615A1 (en) * | 2011-05-24 | 2012-11-29 | Ebay Inc. | Image-based popularity prediction |
| US8909625B1 (en) * | 2011-06-02 | 2014-12-09 | Google Inc. | Image search |
| WO2013044019A1 (en) * | 2011-09-23 | 2013-03-28 | Alibaba Group Holding Limited | Image quality analysis for searches |
| US20150169999A1 (en) * | 2011-09-30 | 2015-06-18 | Google Inc. | Refining Image Relevance Models |
| US20140250110A1 (en) * | 2011-11-25 | 2014-09-04 | Linjun Yang | Image attractiveness based indexing and searching |
| US20150161268A1 (en) * | 2012-03-20 | 2015-06-11 | Google Inc. | Image display within web search results |
| US20130343642A1 (en) * | 2012-06-21 | 2013-12-26 | Siemens Corporation | Machine-learnt person re-identification |
| WO2014012662A1 (en) * | 2012-07-20 | 2014-01-23 | Eth Zurich | Selecting a set of representative images |
| US20140093174A1 (en) * | 2012-09-28 | 2014-04-03 | Canon Kabushiki Kaisha | Systems and methods for image management |
| US20140105505A1 (en) * | 2012-10-15 | 2014-04-17 | Google Inc. | Near duplicate images |
| US20150169575A1 (en) * | 2013-02-05 | 2015-06-18 | Google Inc. | Scoring images related to entities |
| US20140351264A1 (en) * | 2013-05-21 | 2014-11-27 | Xerox Corporation | Methods and systems for ranking images using semantic and aesthetic models |
| US20150248429A1 (en) * | 2014-02-28 | 2015-09-03 | Microsoft Corporation | Generation of visual representations for electronic content items |
| US20150363636A1 (en) * | 2014-06-12 | 2015-12-17 | Canon Kabushiki Kaisha | Image recognition system, image recognition apparatus, image recognition method, and computer program |
| US20170243082A1 (en) * | 2014-06-20 | 2017-08-24 | Google Inc. | Fine-grained image similarity |
| US9552549B1 (en) * | 2014-07-28 | 2017-01-24 | Google Inc. | Ranking approach to train deep neural nets for multilabel image annotation |
| US20170262478A1 (en) * | 2014-09-09 | 2017-09-14 | Thomson Licensing | Method and apparatus for image retrieval with feature learning |
| US20160078507A1 (en) * | 2014-09-12 | 2016-03-17 | Gurudatta Horantur Shivaswamy | Mapping products between different taxonomies |
| US9659384B2 (en) * | 2014-10-03 | 2017-05-23 | EyeEm Mobile GmbH. | Systems, methods, and computer program products for searching and sorting images by aesthetic quality |
| US20160098844A1 (en) * | 2014-10-03 | 2016-04-07 | EyeEm Mobile GmbH | Systems, methods, and computer program products for searching and sorting images by aesthetic quality |
| US20160098403A1 (en) * | 2014-10-06 | 2016-04-07 | Fujitsu Limited | Document ranking apparatus, method and computer program |
| US9171352B1 (en) * | 2014-12-04 | 2015-10-27 | Google Inc. | Automatic processing of images |
| US20160321283A1 (en) * | 2015-04-28 | 2016-11-03 | Microsoft Technology Licensing, Llc | Relevance group suggestions |
| US20160321522A1 (en) * | 2015-04-30 | 2016-11-03 | Canon Kabushiki Kaisha | Devices, systems, and methods for pairwise multi-task feature learning |
| US20170011279A1 (en) * | 2015-07-07 | 2017-01-12 | Xerox Corporation | Latent embeddings for word images and their semantics |
| US20170039452A1 (en) * | 2015-08-03 | 2017-02-09 | Yahoo! Inc. | Computerized method and system for automated determination of high quality digital content |
| US20170255647A1 (en) * | 2016-03-01 | 2017-09-07 | Baidu Usa Llc | Method for selecting images for matching with content based on metadata of images and content in real-time in response to search queries |
| US20180061459A1 (en) * | 2016-08-30 | 2018-03-01 | Yahoo Holdings, Inc. | Computerized system and method for automatically generating high-quality digital content thumbnails from digital video |
Non-Patent Citations (4)
| Title |
|---|
| Keke Chen, Ya Zhang, Zhaohui Zheng, Hongyuan Zha and Gordon Sun, "Adapting ranking functions to user preference," 2008 IEEE 24th International Conference on Data Engineering Workshop, Cancun, 2008, pp. 580-587, doi: 10.1109/ICDEW.2008.4498384. (Year: 2008). * |
| Murray, Naila, et al. "Learning to rank images using semantic and aesthetic labels." BMVC. 2012 (Year: 2012). * |
| San Pedro, Jose, Tom Yeh, and Nuria Oliver. "Leveraging user comments for aesthetic aware image search reranking." Proceedings of the 21st international conference on World Wide Web. 2012. (Year: 2012). * |
| Sculley, "Combined Regression and Ranking," Jul. 25-28, 2010 ACM, 9 pages (2010). |
Also Published As
| Publication number | Publication date |
|---|---|
| US20170270122A1 (en) | 2017-09-21 |
| US10482091B2 (en) | 2019-11-19 |
| US20200081896A1 (en) | 2020-03-12 |
Similar Documents
| Publication | Title |
|---|---|
| US12430342B2 (en) | Computerized system and method for high-quality and high-ranking digital content discovery |
| US12120076B2 (en) | Computerized system and method for automatically determining and providing digital content within an electronic communication system |
| US12067607B2 (en) | Neural contextual bandit based computational recommendation method and apparatus |
| US10565771B2 (en) | Automatic video segment selection method and apparatus |
| US10896355B2 (en) | Automatic canonical digital image selection method and apparatus |
| US10867221B2 (en) | Computerized method and system for automated determination of high quality digital content |
| US11281725B2 (en) | Computerized system and method for automatically generating and providing interactive query suggestions within an electronic mail system |
| US10652311B2 (en) | Computerized system and method for determining and communicating media content to a user based on a physical location of the user |
| US10664484B2 (en) | Computerized system and method for optimizing the display of electronic content card information when providing users digital content |
| US11194856B2 (en) | Computerized system and method for automatically identifying and providing digital content based on physical geographic location data |
| US10430718B2 (en) | Automatic social media content timeline summarization method and apparatus |
| US20190140997A1 (en) | Computerized system and method for automatically performing an implicit message search |
| US11263664B2 (en) | Computerized system and method for augmenting search terms for increased efficiency and effectiveness in identifying content |
| US10878023B2 (en) | Generic card feature extraction based on card rendering as an image |
| US12199957B2 (en) | Automatic privacy-aware machine learning method and apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | AS | Assignment | Owner name: YAHOO HOLDINGS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAHOO! INC.;REEL/FRAME:051179/0462 Effective date: 20170613 Owner name: YAHOO! INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HE, YUNLONG;YIN, DAWEI;CHANG, YI;REEL/FRAME:051172/0146 Effective date: 20160317 Owner name: OATH INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAHOO HOLDINGS, INC.;REEL/FRAME:051180/0001 Effective date: 20180124 |
| | AS | Assignment | Owner name: VERIZON MEDIA INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OATH INC.;REEL/FRAME:054258/0635 Effective date: 20201005 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | AS | Assignment | Owner name: YAHOO AD TECH LLC, VIRGINIA Free format text: CHANGE OF NAME;ASSIGNOR:VERIZON MEDIA INC.;REEL/FRAME:059472/0163 Effective date: 20211102 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |