
EP2768168A1 - Method to recommend social network threads - Google Patents

Method to recommend social network threads

Info

Publication number
EP2768168A1
Authority
EP
European Patent Office
Prior art keywords
multimedia
fragment
social network
threads
descriptive elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13290032.5A
Other languages
German (de)
French (fr)
Inventor
Abdelkrim Hebbar
Yann Gaste
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alcatel Lucent SAS
Original Assignee
Alcatel Lucent SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alcatel Lucent SAS filed Critical Alcatel Lucent SAS
Priority to EP13290032.5A
Publication of EP2768168A1
Legal status: Withdrawn (current)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/37Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/56Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/58Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The object of the invention is a method to recommend social network threads (11a, 11b, 11c), with the steps :
- a fragment extraction unit (13) extracts a multimedia fragment of the multimedia flow using changes in multimedia flow content properties to delimit the multimedia fragment,
- a multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements,
- a thread recommendation unit (17) uses the multimedia fragment descriptor to find social network threads (11a, 11b, 11c) related to the multimedia fragment,
- the thread recommendation unit (17) recommends the network threads (11a, 11b, 11c) related to the multimedia fragment.

Description

  • The invention relates to a method to create recommendations for social threads in a social network, in particular in relationship with a multimedia stream.
  • It has been observed that the activity on social networks, such as Facebook and Twitter, spikes during major television events such as sport matches and competitions, live music shows and the like.
  • For example, the activity of users on Twitter spiked at record heights of around three thousand tweets (messages posted on threads) per second when the Japanese team scored against Cameroon on June 14th, 2010 during the FIFA World Cup, at the NBA finals on June 17th, 2010, and at the Japanese victory over Denmark in the World Cup, while the absolute record of 7196 tweets per second was reached during the 2011 FIFA Women's World Cup final between Japan and the United States.
  • However, to find social network threads related to the currently watched TV or multimedia stream, the user must manually search the social networks using topics and tags as keywords, possibly missing relevant threads.
  • It is known to use automated recommendations based on the current activity of the threads, by picking so called "hot topics" with high activity, or to base recommendations on evaluations of social network friends to pick social network threads.
  • Such methods mostly rely on historical data, or ratings of other social network users, and do not allow real time following of the social network threads in the social network that are related to a currently attended network stream.
  • Therefore, these recommendation methods are not suited, in particular, to socially augmented media flows, such as television debates and sports events, where at least one dedicated social thread is followed throughout the show or event, and may be integrated as a feedback or question source in the broadcast.
  • In order to overcome the aforementioned drawbacks, the object of the invention is a method to recommend social network threads to a user attending a multimedia flow, comprising the steps :
    • a fragment extraction unit extracts a multimedia fragment of the multimedia flow using changes in multimedia flow content properties to delimit the multimedia fragment,
    • a multimedia analysis unit analyses content of the multimedia fragment to fill a fragment descriptor containing descriptive elements,
    • a thread recommendation unit uses the multimedia fragment descriptor to find social network threads related to the multimedia fragment,
    • the thread recommendation unit recommends the network threads related to the multimedia fragment.
  • The method according to the invention may also present one or more of the following characteristics, taken separately and/or in combination.
  • The multimedia flow comprises one of the following : a video flow, an audio flow, a slide show, a text stream.
  • The step in which the multimedia analysis unit analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein an object recognition subunit performs an object recognition on a picture of the multimedia fragment to extract recognized object names as descriptive elements.
  • The step in which the multimedia analysis unit analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a face recognition subunit performs a face recognition on a picture of the multimedia fragment to extract names of recognized people as descriptive elements.
  • The step in which the multimedia analysis unit analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a sound recognition subunit performs a sound recognition on a sound extract of the multimedia fragment to extract recognized sound source names as descriptive elements.
  • The step in which the multimedia analysis unit analyses content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a speech-to-text conversion unit converts speech from a sound extract of the multimedia fragment to extract speech words as descriptive elements.
  • The step in which the multimedia analysis unit analyses content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein an optical character recognition unit performs an optical character recognition on a picture of the multimedia fragment to extract character chains as descriptive elements.
  • The step in which the multimedia analysis unit analyses content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a chromatic analysis subunit performs a chromatic analysis on a picture of the multimedia fragment to extract colour patterns as descriptive elements.
  • It further comprises the following steps :
    • a social network crawler analyses the social network threads on the social network platform,
    • the social network crawler extracts descriptive elements from the analysed social network threads and uses said descriptive elements to fill social network thread descriptors,
    • the filled social network thread descriptors are stored on a database to be compared to multimedia fragment descriptors to find social network threads related to the multimedia fragment.
  • Said steps of analysing the social network threads on the social network platform and of extracting descriptive elements from the analysed social network threads may further comprise the steps :
    • the social network crawler emerges a conversation tree structure of the social network thread,
    • the social network crawler uses structural and topological properties of the conversation tree structure as descriptive elements.
  • The steps of analysing the social network threads on the social network platform and of extracting descriptive elements from the analysed social network threads may comprise the steps:
    • the social network crawler establishes chronological evolution properties of the social network threads,
    • said chronological evolution properties are compared to multimedia flow chronological evolution properties of the multimedia flow using structure similarity metrics.
  • The invention also has as its object the associated data storage medium storing a machine executable program for performing a method to recommend social threads as previously described, and the associated network node configured to perform a method to recommend social network threads to a user as previously described.
  • Other characteristics and advantages of the invention will appear upon reading the description of the following figures, among which:
    • figure 1 is a schematic representation of the different apparatus elements involved in the method,
    • figure 2 is a schematic flowchart of the main steps of a procedure to create social topic descriptors used in the invention,
    • figure 3 is a schematic flowchart of the main steps of the method according to the invention.
  • On all figures, the same element is referred to with the same number.
  • On figure 1 is schematically represented a home network 100 with a multimedia receiver 1, comprising in this example a television 3, acting as multimedia display unit, and a base-station 5 connected to the internet and receiving a multimedia stream, here an IPTV (Internet Protocol TeleVision) stream. As an alternative, the multimedia stream may be any other kind of video stream, an audio stream, a slideshow, a text stream such as a news title stream, and the like.
  • The home network 100 further comprises a social network browsing node 7, connected via a social network client 9 to a social network database 11, the social network client 9 and database 11 being outside of the home network, for example on the internet. The home network 100 also comprises a fragment extraction unit 13 and a fragment analysis unit 15, here implemented in the base-station 5. It comprises in addition a recommendation unit 17.
  • The recommendation unit 17 recommends specific network threads, found using search queries in a social network thread database, by proposing said threads to the user for reading; the threads are selected according to specific rules, chosen for example as a function of the interests of the user and used in the search queries.
  • The social network browsing node 7, for example a personal computer or a smartphone, is connected to the same network as the multimedia receiver 1, for example within the home or local network, which is built around the base station 5 of the user's home, the base station acting as a communication centralizer and as a bridge to the internet.
  • The social network browsing node 7 is also connected to a social network via a social network client 9, for example a micro-blogging platform browser, which accesses data stored on remote servers on the internet that form at least part of the content of the social network.
  • The multimedia display unit 1 and the social network browser 7 may be fused in a single device, for example a personal computer, a laptop, an electronic pad, a smartphone, a television in particular IP television, or be two distinct devices, for example a television and a smartphone as previously discussed. Connection between the two is assumed, either directly or via a centralizing element such as the base-station 5 or a server.
  • The social network client 9 is used to access a social network database 11 comprising multiple social threads 11a to 11c, which are aggregations of essentially user posted messages, possibly in reply to each other, and possibly with enclosed or embedded contents such as pictures, short videos, links, etc. The social network database 11 is in particular located on remote servers forming a social network platform and is accessed via the internet.
  • The base station 5 receives IP packets from a multimedia flow provider via the internet, and converts said packets in multimedia content. Said content is then displayed on the television 3.
  • The base station 5 comprises in the discussed embodiment a fragment extraction unit 13, and a fragment analysis unit 15.
  • The unit 13 is configured to extract parts of the multimedia content and forward them as fragments. In particular, the fragment extraction unit 13 is configured to isolate multimedia fragments, containing frames or pictures, scenes or sound extracts, which form whenever possible a consistent unit.
  • The fragment extraction unit 13, like all units and subunits mentioned afterwards, comprises memory and processing means configured to perform the functions it corresponds to, and may be distributed in the devices in the considered network 100 or in internet servers, with possibly more than one unit in a device, while the dedicated memory and processing means may be shared with other, possibly unrelated, functions and units.
  • The fragment extraction unit 13 is configured to use statistical and media analysis subunits to extract fragments from the multimedia flow, in particular coherent fragments, corresponding to a relatively complete and independent content. A coherent fragment is for example obtained by grouping in one fragment the multimedia flow content between the appearance and disappearance of specific characteristics in the multimedia content (colour, face or object on screen, speaking voice, etc.).
  • Such a fragment is for example a single scene, which the unit recognizes as such using fade to black or massive change in the images due to changes in framing as triggers. The fragment extraction unit 13 may also use a face recognition subunit to identify a person on screen, and use the presence of said person as a continuity indicator meaning that the content is still part of the same fragment.
  • Considering for example a multimedia stream containing a political debate on television, in which two candidates face each other, a first and most simple way for the fragment extraction unit 13 to isolate fragments is to recognize the changes in the candidate on screen or the speaking voice, so that a fragment corresponds to the single speeches the candidates hold.
  • Otherwise or in parallel, the fragment extraction unit 13 may use the statistics of the changes in framing and shots as indicators for continuity or for changing the fragment. In the case of the political debate, a speech of a candidate will often correspond to long single shots either of the candidate himself or of the attending public, while a heated discussion between the candidates will correspond to fast alternating shots of said candidates and other intervening people, with similar alternation patterns in the speaking voices.
  • The speech and the discussion each form a single and separate fragment and may thus be recognized as such by analysing the frame-changing frequency.
  • This statistical change in framing duration is an indicator for ending the current fragment and grouping the following content in a new one, so as to have one fragment corresponding to the speech, and the next one corresponding to the discussion.
  • The fragment extraction unit 13 may be configured to use changes in specific statistics and characteristics of the multimedia content as triggers to set a marker in the multimedia flow time line which delimits one fragment from the next.
  • In particular, the fragment extraction unit 13 may be configured to store temporarily on memory, in particular cache memory, the multimedia content or packets. The stored data is then processed, so that fragment delimitation markers may be established. The delimited fragments are then sent to the other units or subunits for further processing.
  • For audio or radio flows, the fragment extraction unit 13 may use a change in content nature, for example a transition from speech to music, a recurring jingle, or a change in the speaking voices as fragment delimitation.
  • If no marker can be set, a systematic fractioning can be done using predetermined time intervals (e.g. 30 seconds or one minute) to allow real time or almost real time recommendation, and to avoid sending and processing multimedia fragments that are too heavy in data amount. Also, to avoid fragments that are too numerous yet contain too little content for a meaningful search to be conducted, a minimal length (e.g. 3 seconds) may be predetermined for the fragments.
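  • By way of a purely illustrative, non-limiting sketch (not part of the claimed method), the delimitation logic described above may be expressed as follows; the frame_difference scoring callable and the 0.6 cut threshold are assumptions introduced for the example only, while the 3 second minimal length and 30 second systematic interval follow the values mentioned above:

        from dataclasses import dataclass, field

        @dataclass
        class Fragment:
            start: float                      # seconds from the start of the flow
            end: float
            frames: list = field(default_factory=list)

        def delimit_fragments(frames, timestamps, frame_difference,
                              cut_threshold=0.6, min_len=3.0, max_len=30.0):
            """Group frames into fragments, cutting on large inter-frame changes.

            frame_difference is a callable(prev, cur) -> change score in [0, 1],
            an assumed stand-in for the statistical/media analysis subunits.
            A cut is only accepted after min_len seconds, and a systematic cut
            is forced after max_len seconds if no marker could be set.
            """
            if not frames:
                return []
            fragments, start_idx = [], 0
            for i in range(1, len(frames)):
                elapsed = timestamps[i] - timestamps[start_idx]
                big_change = frame_difference(frames[i - 1], frames[i]) >= cut_threshold
                if (big_change and elapsed >= min_len) or elapsed >= max_len:
                    fragments.append(Fragment(timestamps[start_idx], timestamps[i],
                                              frames[start_idx:i]))
                    start_idx = i
            fragments.append(Fragment(timestamps[start_idx], timestamps[-1],
                                      frames[start_idx:]))
            return fragments

  • A fragment obtained this way may then be thinned, as described in the next paragraph, for example by keeping only every second or third image (fragment.frames[::2] or fragment.frames[::3]).
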
  • The fragment extraction unit 13 may also hash the fragments, for example by keeping only every second or third image of a video stream to make the fragments lighter and easier to process. In the case of an audio or radio flow any suitable sound file compression procedure may be used.
  • As an example, the fragment extraction unit 13 may comprise an application downloaded and run on the social network browsing node 7, and/or a program stored and run on the base station 5 or on the social network platform.
  • The extracted multimedia fragments are sent to a multimedia analysis unit 15, which converts the multimedia fragments into data usable in search queries, by extracting descriptive elements and properties from the fragments. For example, if the fragments comprise a picture of an actor, the analysis unit may use a face recognition subunit to identify said actor, and isolate his name as a relevant descriptive element to use in search queries.
  • Other possible data sources include object recognition on the pictures, music and sound recognition on the audio track, optical character recognition (OCR), speech-to-text conversion, chromatic analysis and the like. To perform all this processing, the multimedia analysis unit 15 comprises the associated means to process multimedia data, which may include subunits associated to the aforementioned functions. Said subunits may be shared with the fragment extraction unit 13 for the extracting of the multimedia fragments.
  • The processes and techniques used to analyse the multimedia stream are of course varying according to the nature of said stream.
  • For example, if the multimedia stream is a radio or, more generally, a sound stream, the multimedia analysis unit 15 will run voice detection methods to identify the presence of speech. If speech is detected, the analysis unit 15 uses a speech-to-text conversion subunit to transcribe the sound into searchable words. If no speech is detected, or in addition or in parallel to the speech-to-text transcription, the analysis unit 15 may use sound or music recognition subunits, for example to identify the title and artist of a played music track, or the name of a sound source (instrument, animal, etc.), and use them as searchable words.
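  • As a minimal, non-limiting sketch of this audio branch, assuming the voice detection, speech-to-text and sound recognition subunits are available as callables (their concrete implementations are not specified here):

        def describe_audio(sound, detect_speech, speech_to_text, recognize_sound):
            """Return searchable words extracted from one audio fragment.

            detect_speech, speech_to_text and recognize_sound stand in for the
            voice detection, transcription and sound/music recognition subunits.
            """
            words = []
            if detect_speech(sound):
                # Speech detected: transcribe it into searchable words.
                words.extend(speech_to_text(sound).split())
            # Sound or music recognition may run instead of, or in addition to,
            # the transcription (e.g. ["track title", "artist", "piano"]).
            words.extend(recognize_sound(sound))
            return words
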
  • If the multimedia stream is a video stream or slide show, the multimedia analysis unit 15 will use for example face or object recognition subunits, to identify people or objects on the multimedia fragment. The names obtained are then used as searchable terms.
  • The multimedia analysis unit 15 may also use structural or descriptive metadata. For example, on an IPTV program, the title, a short synopsis and/or the cast and authors and producers may be forwarded to the user in a transparent fashion, by multiplexing the data with the multimedia content. Said data may for example serve when the user requests information on the program he is currently watching using an integrated information display function of his television.
  • Such metadata about multimedia content is most of the time accessible via application programming interfaces (API), often open and/or public application programming interfaces. Such metadata is added in particular by the multimedia flow provider as an additional service.
  • The isolated descriptive elements are used to fill a multimedia fragment descriptor, which contains keywords and statistics extracted from the multimedia fragment, for example a set of weights associated to the keywords and attributed according to the number of occurrences of the considered keyword, or the chronological evolution of specific properties and characteristics of the multimedia fragment.
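  • A minimal sketch of such a fragment descriptor, assuming the keyword weights are simply normalised occurrence counts (the weighting scheme is an assumption made for the example):

        from collections import Counter

        def build_fragment_descriptor(keywords, properties_timeline=None):
            """Fill a fragment descriptor from the isolated descriptive elements.

            keywords            -- list of descriptive elements (names, words, ...)
            properties_timeline -- optional chronological record of fragment properties
            """
            counts = Counter(keywords)
            total = sum(counts.values()) or 1
            return {
                "keywords": {word: n / total for word, n in counts.items()},
                "timeline": properties_timeline or [],
            }

        # Example: build_fragment_descriptor(["debate", "candidate", "debate"])
        # yields keyword weights of 2/3 for "debate" and 1/3 for "candidate".
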
  • The metrics used to search for related threads 11a, 11b, 11c using the descriptor may comprise known metrics such as the objective image quality index (OQI), structural similarity (SSIM) or the Czekanowski distance (CZD). In particular, threads 11a, 11b, 11c with elements within a predetermined distance of the descriptive elements of the fragment descriptor according to the chosen metrics are considered as relating to the content of the multimedia fragment.
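  • As an illustration of this distance criterion, a sketch of the Czekanowski distance applied to two non-negative feature vectors (for instance occurrence counts of shared descriptive elements); the 0.35 threshold is an arbitrary assumption for the example, not a value from the description:

        import numpy as np

        def czekanowski_distance(x, y):
            """1 - 2 * sum(min(x_i, y_i)) / sum(x_i + y_i); identical vectors give 0."""
            x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
            denom = np.sum(x + y)
            if denom == 0:
                return 0.0
            return 1.0 - 2.0 * np.sum(np.minimum(x, y)) / denom

        def is_related(thread_elements, fragment_elements, max_distance=0.35):
            """A thread whose elements fall within the predetermined distance of the
            fragment's descriptive elements is considered related to the fragment."""
            return czekanowski_distance(thread_elements, fragment_elements) <= max_distance
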
  • The fragment descriptor is then sent to a social thread recommendation unit 17 which is configured to use the fragment descriptor in a search query for social network threads 11a, 11b, 11c which are consequently bound to the content of the currently attended multimedia stream.
  • The thread recommendation unit 17 is therefore connected to the social network client 9, configured to give recommended thread identifiers in answer to a query containing recommendation rules.
  • The recommendation rules are elaborated using the fragment descriptor and the descriptive elements thereof, possibly taking into account the profile, preferences and personal settings of the user. To do this, the recommendation unit 17 is connected to a profile and settings depository 19 containing the profile, preferences and setting rules of the user.
  • Such rules may include user preferences for ordering the results from most to least interesting: nationality, search language, current geographical position and any other elements relevant for the ordering.
  • Identifiers of the returned recommended social threads 11a, 11b, 11c are then forwarded to the social browsing node 7. The social network browsing node 7 then only needs to search for the threads 11a, 11b, or 11c associated to the stored identifiers and display at least part of their content for the user to pick the ones he is interested in.
  • The recommendation unit 17 is connected to a recommendation database 21, which is configured to store the multimedia fragment descriptors and at least part of the returned results.
  • If a search query to the social network client 9 does not yield sufficient results, meaning at least one social thread within a certain distance from the descriptive elements of the multimedia fragment descriptor, the recommendation unit 17 may search within the recommendation database 21 for the previously established fragment descriptor closest to the current one, and use the returned results corresponding to said closest descriptor.
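  • A minimal sketch of this fallback, assuming the recommendation database 21 is exposed as an iterable of (descriptor, stored results) pairs and that a descriptor distance function is available (both are assumptions for the example):

        def fallback_results(current_descriptor, recommendation_db, distance):
            """Reuse the results stored with the closest previously established descriptor.

            recommendation_db -- iterable of (fragment_descriptor, stored_results) pairs
            distance          -- callable(descriptor_a, descriptor_b) -> float
            """
            closest = min(recommendation_db,
                          key=lambda entry: distance(current_descriptor, entry[0]),
                          default=None)
            return closest[1] if closest else []
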
  • Since the search in the content of the social threads is time and resource consuming, a preparatory work may be done to elaborate summarized descriptors of the social threads.
  • Connected to the social network database 21 is a social network crawler 23, for systematic and thorough browsing of the social thread contents, in particular to forward and update the social network threads that are followed by users. The social network crawler 23 is in particular used to conduct intensive analysis and ordering of the network threads 11a, 11b, 11c to establish thread properties.
  • Figure 2 represents in a schematic fashion the main steps of a particular embodiment of a method 200 to analyse the content of social threads 11a, 11b and 11c stored on the social network database 11 using the social network crawler 23 in order to make the search for specific threads less time and resource consuming, and thus reach minimal delay between media fragment display and associated recommendation.
  • On figure 2, the first step 201 for a gross filtering of topic candidates is the listing of their titles and topics by the network crawler 23. This implies that the threads 11a, 11b, 11c have previously been given one or more topic descriptors, related to their content and containing for example titles and/or tagged keywords. The social network crawler 23 may in particular be configured to store possible synonym words, general associated themes and lexical fields relating to the titles and/or keywords.
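  • The gross filtering of step 201 may, as a non-limiting sketch, be reduced to a keyword overlap test; the dictionary shape of the threads (with "title" and "topics" fields) and of the synonym store is an assumption made for the example:

        def gross_filter(threads, query_terms, synonyms):
            """Keep only threads whose titles or topic keywords overlap with the
            query terms, or with their stored synonyms and lexical-field words."""
            expanded = set(query_terms)
            for term in query_terms:
                expanded.update(synonyms.get(term, []))
            return [t for t in threads
                    if expanded & set(t["topics"])
                    or expanded & set(t["title"].lower().split())]
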
  • The following step 203 is the analysis of the content: recurrent words are listed with the number of times they occur in the considered thread, and the embedded content is analysed using similar functions and subunits as for extracting significant elements from the multimedia fragment content: face, object and character recognition for embedded images and videos, music and sound recognition for audio tracks.
  • The third step 205 is the emerging of the data structures in the selected social threads 11a, 11b, 11c. The emerging of the data structures is the conversion from the initial, chronological linear graph of the messages ("timeline") to a conversation tree structure, where the semantic relationship of the messages is taken in consideration.
  • For example, in a conversation tree structure, the messages posted in response to a first one are directly related to said first message, without consideration of their posting time or of possible unrelated messages posted in between.
  • The ramified structure of the obtained graph, with the replies "sprouting" from the initial message, gives it its name of conversation tree. To proceed to the conversion, the social crawler 23 uses a semantic analysis subunit and/or metadata of the messages, which may be associated with a tag stating that the considered message is a reply, and identifies the message to which it replies.
  • For example in micro-blogging threads 11a, 11b, 11c it is common to start a reply to a message from a user user1 with the character chain @user1. The social network crawler may be configured to identify such character chains starting with the @ sign, to search for messages from a user whose name is written after the @, and to link the considered message to the identified one as a reply.
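  • A minimal sketch of this emergence for micro-blogging threads, assuming each message is a dictionary with "id", "author" and "text" fields and that a reply is linked to the addressed user's most recent earlier message (both assumptions made for the example):

        import re
        from collections import defaultdict

        def emerge_conversation_tree(timeline):
            """Convert the chronological message list into a conversation tree.

            Returns a mapping parent_id -> list of child message ids; messages
            that reply to nobody sprout from the root, represented by the key None.
            """
            children = defaultdict(list)   # parent id -> replies
            last_post_by = {}              # author -> id of that author's latest message
            for msg in timeline:           # timeline is ordered oldest first
                match = re.match(r"@(\w+)", msg["text"])
                parent = last_post_by.get(match.group(1)) if match else None
                children[parent].append(msg["id"])
                last_post_by[msg["author"]] = msg["id"]
            return children
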
  • Once the data structure is emerged, the third step 205 continues with the establishing of statistical and topological thread structure properties from the emerged structure.
  • The statistical and topological thread structure properties are for example the number of intervening users, as an indicator of popularity, or the average number of posts per user, as an indicator that the thread is followed by its contributors.
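  • These properties may be computed, as a non-limiting sketch, directly from the chronological message list and from the emerged tree (the conversation tree is represented here as a parent-to-children mapping, as in the sketch above):

        def thread_structure_properties(timeline, children):
            """Statistical and topological properties of an emerged thread.

            timeline -- chronological list of messages (dictionaries with "author")
            children -- parent_id -> list of child ids, with None as the root key
            """
            users = {msg["author"] for msg in timeline}
            n_posts = len(timeline)
            n_replies = sum(len(v) for k, v in children.items() if k is not None)
            return {
                "intervening_users": len(users),                  # popularity indicator
                "posts_per_user": n_posts / max(len(users), 1),   # how closely it is followed
                "avg_replies_per_post": n_replies / n_posts if n_posts else 0.0,
            }
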
  • In particular, said thread properties may comprise chronological evolution properties. For example, this implies listing the time frames in which the number of posts is at or below average, and the time frames in which activity spikes. The chronological evolution properties may be used to find similarity patterns with the multimedia fragment contents.
  • Properties that vary over time may for example include the number of intervening users, the average length of the discussions (posts in reply to each other) in the conversation tree, the average time between a post and the next reply to the considered post, the average number of replies to each post, and the like.
  • Said chronological evolution properties may in particular be compared to multimedia flow chronological evolution properties. Said multimedia flow chronological evolution properties are established by analysing the chronological succession of fragments. The chronological evolution of both multimedia flow and social network thread may then be compared using structure similarity metrics.
  • In the case of on-demand or, more generally, not live content, the chronological properties of the multimedia flow and of the network thread are advantageously compared using a relative time line, since, unlike for live content, no explicit synchronization may be assumed.
  • Returning to the example of the political debate, monologue speeches of one candidate will probably correspond to low activity periods, while heated debates often correspond to activity spikes. By using pattern similarity detecting subunits such criteria may be used to link fragments and threads.
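  • As a non-limiting sketch of such a comparison, the thread activity and the multimedia flow events (posts, shot changes, speaker changes, ...) may be binned on a relative time line and compared with a simple similarity score; the 60 second bucket and the use of a cosine score as structure similarity metric are assumptions made for the example:

        def activity_profile(event_times, start, end, bucket=60.0):
            """Per-interval event counts (posts, shot changes, ...) over [start, end)."""
            n_buckets = max(int((end - start) // bucket) + 1, 1)
            profile = [0] * n_buckets
            for t in event_times:
                if start <= t < end:
                    profile[int((t - start) // bucket)] += 1
            return profile

        def profile_similarity(a, b):
            """Cosine similarity between two activity profiles of equal length,
            used here as a simple stand-in for a structure similarity metric."""
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = sum(x * x for x in a) ** 0.5
            norm_b = sum(y * y for y in b) ** 0.5
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
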
  • All the extracted thread relevant data is put together in a social network thread descriptor in step 207; the thread descriptor is stored on a database (not represented on figure 2) for future use by the social network clients 9 or by internal search engines of the social network platform.
  • The search for network threads 11a, 11b, 11c that correspond to a multimedia fragment may accordingly be performed by comparing the social topic descriptors and the multimedia fragment descriptors. Consequently, the thorough and resource consuming search in the social threads 11a, 11b, 11c needs only be done periodically, for example once a predetermined amount of modifications (updates) is detected on the network threads 11a, 11b, 11c or at predetermined time intervals.
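  • A minimal sketch of this descriptor comparison, assuming both descriptors carry the keyword-to-weight mapping of the earlier sketches (a weighted keyword overlap is used here as a simple, assumed similarity score):

        def descriptor_match(fragment_descriptor, thread_descriptor):
            """Score a social topic descriptor against a multimedia fragment descriptor
            by weighted keyword overlap; a higher score means a closer relation."""
            f_kw = fragment_descriptor["keywords"]   # word -> weight
            t_kw = thread_descriptor["keywords"]
            return sum(min(f_kw[w], t_kw[w]) for w in f_kw.keys() & t_kw.keys())
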
  • Figure 3 is a schematic flow chart representing the main steps of the method 300 to recommend social threads 11a, 11b, 11c.
  • The first step 301 is the acknowledgement that a multimedia stream is attended. This corresponds to the launching of the application or program corresponding to the method. If no clear and non-empty fragment can be identified, an error message may be sent, prompting the user to check for problems in the multimedia flow reception.
  • Once a multimedia stream is recognized and content is identified, said content is analysed in step 303 to extract relevant fragments, using the previously described units and subunits.
  • The following step 305 is the analysis of the fragment content to identify whether a significant fragment has been extracted. This may include an estimation, in particular a rough estimation, of the accessible content (keywords, names, metrics, etc.) for the fragment descriptor, and comparing that estimation to a threshold value. If no descriptor or only an insufficient descriptor can be established, the method prescribes a return to the previous step 303 to isolate a new multimedia fragment.
  • If the fragment is deemed significant (containing at least a predetermined amount of content), the fragment descriptor is established in step 307. Possible embodiments of this step have previously been described.
  • In step 309, the fragment descriptor is searched for new content in comparison to the previously established descriptors. If no new content is identified, a new fragment is extracted in a return to step 303.
  • If new multimedia fragment content is identified in the descriptor, in step 311 network thread descriptors are collected from a social thread descriptor database, for example using a main keyword or semantic element to collect only the descriptors of a set of potentially related social network threads 11a, 11b, 11c in a preliminary, rapid sorting.
  • In the following step 313 the similarities between the social thread descriptors and the multimedia fragment descriptors are computed using any known method.
  • If a social thread is identified as linked to the current multimedia fragment (both descriptors being within a certain distance of each other), said network thread is recommended in step 315. An identifier of the social network thread is forwarded to the social network browsing node 7 to be used to access the content of said thread by the social network browsing node 7 if the user wants to read the identified social network thread.
  • If no social network thread descriptor is close enough to the one of the multimedia fragment, the similarity computing element (either the base station 5, the social network browsing node 7 or a server of the social network) may request in step 317 a refreshing of the social network thread descriptors.
  • The refreshing of the social network thread descriptors comprises the steps 319 to 323.
  • Said refreshing starts with step 319, in which the network crawler 23 checks the social network database DB for new topics.
  • If a thread is newly identified as being related to the multimedia fragment, said thread is analysed in step 321 to establish a complete descriptor of the thread, said descriptor being sent to a social network thread descriptor database DB. Said database DB is then used for subsequent searches so as not to require a complete search and analysis of the social network threads 11a, 11b, 11c.
  • If no new topic is identified as relating to the multimedia fragment, the network crawler 23 may check already identified threads 11a, 11b, 11c for recent activity (new messages) in a step 323. If a sufficient amount of new content is detected in a previously recommended thread, said social network thread is recommended again (step 321).
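  • The overall loop of steps 303 to 323 may be summarised by the following non-limiting sketch; the bundling of the units into a single object, the database interface and the thresholds are all assumptions made for the example:

        def recommendation_loop(flow, units, db, min_keywords=5, max_distance=0.35):
            """End-to-end sketch: extract a fragment, build and check its descriptor,
            compare it against stored thread descriptors, then recommend or refresh."""
            while flow.is_attended():
                fragment = units.extract_fragment(flow)                 # step 303
                if fragment is None:
                    continue
                descriptor = units.analyse(fragment)                    # steps 305-307
                if len(descriptor["keywords"]) < min_keywords:
                    continue                                            # not significant
                if not db.has_new_content(descriptor):                  # step 309
                    continue
                candidates = db.collect_thread_descriptors(descriptor)  # step 311
                related = [t for t in candidates                        # step 313
                           if units.distance(descriptor, t) <= max_distance]
                if related:
                    units.recommend(related)                            # step 315
                else:
                    units.refresh_thread_descriptors(descriptor)        # steps 317-323
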
  • The social network thread descriptors established or updated during the social network crawling are all stored in a database DB following the recommendation, or failure to identify a recommended social network thread.
  • The method proposed by the invention allows real time recommendation of social network threads 11a, 11b, 11c based on a multimedia flow attended by a user. This is in particular useful in the case of socially augmented multimedia flows where dedicated social network threads 11a, 11b, 11c are used as feedback or content source.
  • The method in particular allows finding social network threads relating to live multimedia flows, where no historical data or recommendation systems based on likes and dislikes may yield the searched threads.

Claims (13)

  1. Method to recommend social network threads (11a, 11b, 11c) to a user attending a multimedia flow, comprising the steps :
    - a fragment extraction unit (13) extracts a multimedia fragment of the multimedia flow using changes in multimedia flow content properties to delimit the multimedia fragment,
    - a multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements,
    - a thread recommendation unit (17) uses the multimedia fragment descriptor to find social network threads (11a, 11b, 11c) related to the multimedia fragment,
    - the thread recommendation unit (17) recommends the social network threads (11a, 11b, 11c) related to the multimedia fragment.
  2. Method according to claim 1, wherein the multimedia flow comprises one of the following : a video flow, an audio flow, a slide show, a text stream.
  3. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein an object recognition subunit performs an object recognition on a picture of the multimedia fragment to extract recognized object names as descriptive elements.
  4. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a face recognition subunit performs a face recognition on a picture of the multimedia fragment to extract names of recognized people as descriptive elements.
  5. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a sound recognition subunit performs a sound recognition on a sound extract of the multimedia fragment to extract recognized sound source names as descriptive elements.
  6. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a speech-to-text conversion unit converts speech from a sound extract of the multimedia fragment to extract speech words as descriptive elements.
  7. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein an optical character recognition unit performs an optical character recognition on a picture of the multimedia fragment to extract character chains as descriptive elements.
  8. Method according to claim 1, wherein the step in which the multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements comprises a step wherein a chromatic analysis subunit performs a chromatic analysis on a picture of the multimedia fragment to extract colour patterns as descriptive elements.
  9. Method according to any of the precedent claims, wherein it further comprises the following steps :
    - a social network crawler (23) analyses the social network threads (11a, 11b, 11c) on the social network platform,
    - the social network crawler (23) extracts descriptive elements from the analysed social network threads (11a, 11b, 11c) and uses said descriptive elements to fill social network thread descriptors,
    - the filled social network thread descriptors are stored on a database (DB) to be compared to multimedia fragment descriptors to find social network threads (11a, 11b, 11c) related to the multimedia fragment.
  10. Method according to claim 9, wherein the steps of analysing the social network threads (11a, 11b, 11c) on the social network platform and of extracting descriptive elements from the analysed social network threads (11a, 11b, 11c) comprise the steps :
    - the social network crawler (23) emerges a conversation tree structure of the social network thread,
    - the social network crawler (23) uses structural and topological properties of the conversation tree structure as descriptive elements.
  11. Method according to claim 9 or 10, wherein the steps of analysing the social network threads (11a, 11b, 11c) on the social network platform and of extracting descriptive elements from the analysed social network threads (11a, 11b, 11c) comprise the steps:
    - the social network crawler (23) establishes chronological evolution properties of the social network threads (11a, 11b, 11c),
    - said chronological evolution properties are compared to multimedia flow chronological evolution properties of the multimedia flow using structure similarity metrics.
  12. Data storage medium storing a machine executable program for performing a method to recommend social threads (11a, 11b, 11c) comprising the steps :
    - a fragment extraction unit (13) extracts a multimedia fragment of the multimedia flow using changes in multimedia flow content properties to delimit the multimedia fragment,
    - a multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements,
    - a thread recommendation unit (17) uses the multimedia fragment descriptor to find social network threads (11a, 11b, 11c) related to the multimedia fragment,
    - the thread recommendation unit (17) recommends the network threads (11a, 11b, 11c) related to the multimedia fragment.
  13. Network node configured to perform a method to recommend social network threads (11a, 11b, 11c) to a user, with the steps :
    - a fragment extraction unit (13) extracts a multimedia fragment of the multimedia flow using changes in multimedia flow content properties to delimit the multimedia fragment,
    - a multimedia analysis unit (15) analyses the content of the multimedia fragment to fill a fragment descriptor containing descriptive elements,
    - a thread recommendation unit (17) uses the multimedia fragment descriptor to find social network threads (11a, 11b, 11c) related to the multimedia fragment,
    - the thread recommendation unit (17) recommends the network threads (11a, 11b, 11c) related to the multimedia fragment.
EP13290032.5A 2013-02-18 2013-02-18 Method to recommend social network threads Withdrawn EP2768168A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13290032.5A EP2768168A1 (en) 2013-02-18 2013-02-18 Method to recommend social network threads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP13290032.5A EP2768168A1 (en) 2013-02-18 2013-02-18 Method to recommend social network threads

Publications (1)

Publication Number Publication Date
EP2768168A1 true EP2768168A1 (en) 2014-08-20

Family

ID=47790113

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13290032.5A Withdrawn EP2768168A1 (en) 2013-02-18 2013-02-18 Method to recommend social network threads

Country Status (1)

Country Link
EP (1) EP2768168A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040170392A1 (en) * 2003-02-19 2004-09-02 Lie Lu Automatic detection and segmentation of music videos in an audio/video stream
US20100325218A1 (en) * 2009-06-22 2010-12-23 Nokia Corporation Method and apparatus for determining social networking relationships
US20110282906A1 (en) * 2010-05-14 2011-11-17 Rovi Technologies Corporation Systems and methods for performing a search based on a media content snapshot image
US20120054795A1 (en) * 2010-08-31 2012-03-01 Samsung Electronics Co., Ltd. Method and apparatus for providing preferred broadcast information
US20120317241A1 (en) * 2011-06-08 2012-12-13 Shazam Entertainment Ltd. Methods and Systems for Performing Comparisons of Received Data and Providing a Follow-On Service Based on the Comparisons
US20120317046A1 (en) * 2011-06-10 2012-12-13 Myslinski Lucas J Candidate fact checking method and system
US20120331496A1 (en) * 2011-06-22 2012-12-27 Steven Copertino Methods and apparatus for presenting social network content in conjunction with video content

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130218

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20150220

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALCATEL LUCENT

17Q First examination report despatched

Effective date: 20180323

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20181003