WO2024025142A1 - Electronic device for providing video call service and control method thereof - Google Patents
Electronic device for providing video call service and control method thereof
- Publication number
- WO2024025142A1 (PCT application no. PCT/KR2023/008039)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- participant
- image
- gesture
- display
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Definitions
- This disclosure relates to an electronic device that provides a video call service, and more particularly to an electronic device that provides a content sharing mode for sharing content with a plurality of participants during a video call, and a method of controlling the same.
- Video call services have been provided with a content sharing function that allows conversations or meetings to be conducted while sharing content with multiple participants.
- According to an embodiment of the present disclosure, an electronic device providing a video call service includes: a communication interface; a display; a memory; and at least one processor operatively coupled to the communication interface, the display, and the memory.
- When entering a content sharing mode for sharing content during a video call with a plurality of participants, the at least one processor controls the display to display the shared content in a main area and to display an image received from a terminal device corresponding to at least one participant among the plurality of participants in a sub-area.
- When at least one of a voice and a gesture of a first participant among the plurality of participants is detected, the at least one processor controls the display to display, in the main area, a first image received from the terminal device corresponding to the first participant, based on at least one of the detected voice and gesture.
- According to an embodiment of the present disclosure, a method of controlling an electronic device providing a video call service includes: when entering a content sharing mode for sharing content during a video call with a plurality of participants, displaying the shared content in a main area and displaying an image received from a terminal device corresponding to at least one participant among the plurality of participants in a sub-area; and, when at least one of a voice and a gesture of a first participant among the plurality of participants is detected, displaying in the main area a first image received from the terminal device corresponding to the first participant, based on at least one of the detected voice and gesture.
- According to another embodiment of the present disclosure, the control method includes: while conducting a video call with a plurality of participants, upon entering a content sharing mode for sharing content, displaying the shared content in a main area and displaying an image received from a terminal device corresponding to at least one participant among the plurality of participants in a sub-area; and, when at least one of the voice and gesture of the first participant among the plurality of participants is detected, displaying in the main area the first image received from the terminal device corresponding to the first participant, based on at least one of the detected voice and gesture.
- FIG. 1 is a block diagram showing the configuration of an electronic device according to an embodiment of the present disclosure
- FIG. 2 is a flowchart illustrating the operation of a content sharing mode while performing a video call, according to an embodiment of the present disclosure
- FIG. 3 is a diagram illustrating a screen displayed while operating in a normal call mode according to an embodiment of the present disclosure
- FIG. 4 is a diagram illustrating a screen displayed while operating in content sharing mode, according to an embodiment of the present disclosure
- FIG. 5 is a diagram illustrating a screen displayed when an utterance from a participant with utterance intent is detected while operating in content sharing mode, according to an embodiment of the present disclosure
- FIG. 6 is a diagram illustrating a screen displayed when speech from a participant who does not intend to speak is detected while operating in content sharing mode, according to an embodiment of the present disclosure
- FIG. 7 is a diagram illustrating a screen displayed when the speech of a participant with speech intent ends while operating in content sharing mode, according to an embodiment of the present disclosure
- FIGS. 8A and 8B are diagrams for explaining a method of configuring a screen according to the number of participants according to an embodiment of the present disclosure
- FIGS. 9 to 17 are diagrams for explaining embodiments of controlling an electronic device according to a user's voice or gesture while operating in a content sharing mode, according to various embodiments of the present disclosure
- FIG. 18 is a diagram illustrating an embodiment in which speech from a participant with speech intent is detected while operating in a content sharing mode displaying a plurality of content screens, according to an embodiment of the present disclosure
- FIG. 19 is a flowchart illustrating a control method of an electronic device providing a video call service according to an embodiment of the present disclosure.
- FIG. 20 is a diagram for explaining a video call system including a server and a plurality of terminal devices according to an embodiment of the present disclosure.
- In this document, expressions such as “have,” “may have,” “includes,” or “may include” indicate the existence of the corresponding feature (e.g., a numerical value, function, operation, or component such as a part), and do not rule out the existence of additional features.
- In this document, expressions such as “A or B,” “at least one of A or/and B,” or “one or more of A or/and B” may include all possible combinations of the items listed together.
- For example, “A or B,” “at least one of A and B,” or “at least one of A or B” may refer to all cases including (1) at least one A, (2) at least one B, or (3) both at least one A and at least one B.
- Expressions such as “first” and “second” used in this document can modify various components regardless of order and/or importance; they are used only to distinguish one component from another and do not limit the components.
- For example, a first user device and a second user device may represent different user devices regardless of order or importance.
- For example, a first component may be renamed a second component without departing from the scope of rights described in this document, and similarly, the second component may also be renamed the first component.
- Terms such as “module,” “unit,” and “part” used in this document refer to components that perform at least one function or operation, and these components may be implemented in hardware or software, or through a combination of hardware and software. In addition, a plurality of “modules,” “units,” “parts,” etc. may be integrated into at least one module or chip and implemented as at least one processor, except in cases where each needs to be implemented with individual specific hardware.
- When a component (e.g., a first component) is described as being “(operatively or communicatively) coupled with/to” or “connected to” another component (e.g., a second component), the component may be directly connected to the other component or may be connected through yet another component (e.g., a third component). In contrast, when a component (e.g., a first component) is described as being “directly coupled” or “directly connected” to another component (e.g., a second component), it may be understood that no other component (e.g., a third component) exists between them.
- The expression “configured to” used in this document may be used interchangeably with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of,” depending on the situation.
- The term “configured (or set to)” may not necessarily mean “specifically designed to” in hardware.
- In some situations, the expression “a device configured to” may mean that the device is “capable of” working with other devices or components.
- For example, the phrase “processor configured (or set) to perform A, B, and C” may refer to a dedicated processor for performing the operations (e.g., an embedded processor), or a general-purpose processor that can perform the operations by executing one or more software programs stored in a memory device.
- FIG. 1 is a block diagram showing the configuration of an electronic device 100 according to an embodiment of the present disclosure.
- The electronic device 100 includes a display 110, a camera 120, a speaker 130, a microphone 140, a communication interface 150, a memory 160, an input interface 170, and at least one processor 180.
- At this time, the electronic device 100 may be a TV capable of providing a video call service, but this is only an embodiment; it may be implemented as various electronic devices such as smartphones, tablet PCs, laptop PCs, and desktop PCs.
- the configuration of the electronic device 100 is not limited to the configuration shown in FIG. 1, and of course, additional configurations that are obvious to those skilled in the art may be added.
- The display 110 can display various information.
- In particular, the display 110 may provide an execution screen of a video call application while performing a video call function.
- At this time, the execution screen of the video call application may include a main area that displays one of the plurality of participants or content, and a sub-area that displays at least some of the plurality of participants.
- The size of the image displayed in the main area may be larger than the size of the image displayed in the sub-area.
- While the video call application operates in a first mode (e.g., a normal call mode), the video of a participant (e.g., the speaker or host) may be displayed in the main area. While the video call application operates in a second mode (e.g., a content sharing mode), at least one of a participant's video and the shared content may be displayed in the main area.
- Meanwhile, the display 110 may be implemented as a Liquid Crystal Display (LCD) panel, Organic Light Emitting Diodes (OLED), etc., and in some cases, the display 110 may also be implemented as a flexible display, a transparent display, etc.
- The display 110 according to the present disclosure is not limited to a specific type.
- The camera 120 is configured to acquire images taken around the electronic device 100. At this time, the camera 120 may be placed in one area (for example, the bezel or the display 110) of the electronic device 100 and may capture the front direction of the electronic device 100. In particular, while a video call application is running, the electronic device 100 may obtain an image by photographing at least one participant located in front of the electronic device 100.
- The camera 120 may be placed on the front of the electronic device 100, but this is only an example; it may of course also be placed on the back of the electronic device 100. Additionally, a plurality of cameras 120 may be provided depending on the type of electronic device 100, and the camera 120 may be provided outside the electronic device 100 and electrically connected to the electronic device 100.
- The speaker 130 can output various voice messages and audio.
- In particular, the speaker 130 can output audio received from a terminal device corresponding to another participant while a video call application is running, and can output the audio of content.
- The speaker 130 may be provided inside the electronic device 100, but this is only an example; it may be provided outside the electronic device 100 and electrically connected to the electronic device 100.
- The microphone 140 can acquire the user's voice.
- In particular, the microphone 140 can acquire the user's voice while the video call application is running and transmit it to a server or external terminal device through the communication interface 150.
- The microphone 140 may be provided inside the electronic device 100, but this is only an example; it may be provided outside the electronic device 100 and electrically connected to the electronic device 100.
- The communication interface 150 includes at least one circuit and can communicate with various types of external devices or servers.
- For example, the communication interface 150 may include a BLE (Bluetooth Low Energy) module, a Wi-Fi communication module, a cellular communication module, a 3G (3rd generation) mobile communication module, a 4G (4th generation) mobile communication module, and a 4G LTE (Long Term Evolution) communication module.
- In particular, the communication interface 150 can transmit video data acquired by the camera 120 and audio data acquired through the microphone 140 to an external server or terminal device while the video call application is running, and can receive video data and audio data from an external terminal device.
- In the above-described embodiment, a communication connection with an external device is made through a wireless communication module, but a communication connection with an external device can of course also be made through a wired communication module.
- The memory 160 may store an operating system (OS) for controlling the overall operation of the components of the electronic device 100, and instructions or data related to the components of the electronic device 100.
- In particular, the memory 160 can store various modules for performing video call functions.
- Various modules for performing video call functions, stored in non-volatile memory, may load data for performing various operations into volatile memory.
- Here, loading refers to an operation of loading data stored in non-volatile memory into volatile memory and storing it so that the at least one processor 180 can access it.
- The memory 160 may be implemented as non-volatile memory (e.g., a hard disk, a solid state drive (SSD), or flash memory), volatile memory (which may also include memory in the at least one processor 180), etc.
- The input interface 170 includes a circuit, and the at least one processor 180 may receive a user command for controlling the operation of the electronic device 100 through the input interface 170.
- The input interface 170 may be implemented as a remote control, but this is only an example; it may also consist of a touch screen, buttons, a keyboard, a mouse, etc.
- At least one processor 180 may control the electronic device 100 according to at least one instruction stored in the memory 160.
- When entering a content sharing mode for sharing content during a video call with a plurality of participants, the at least one processor 180 controls the display 110 to display the shared content in the main area and to display, in the sub-area, the image received from the terminal device corresponding to at least one participant among the plurality of participants.
- When at least one of the voice and gesture of a first participant among the plurality of participants is detected, the at least one processor 180 controls the display 110 to display, in the main area, the first image received from the terminal device corresponding to the first participant, based on at least one of the detected voice and gesture.
- The at least one processor 180 may determine the speech intention of the first participant based on the volume and speech time of the detected voice, and the detected gesture. Specifically, if the volume of the detected voice is greater than or equal to a preset value, the detected speech time is greater than or equal to a threshold time, or the detected gesture is a predefined gesture, it can be determined that the first participant has a speech intention; a sketch of this rule is shown below.
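- For illustration, the determination rule above can be expressed compactly in code. The following is a minimal Python sketch; the threshold names and values (`VOLUME_THRESHOLD`, `MIN_SPEECH_SECONDS`) and the gesture labels are assumptions, since the disclosure does not specify concrete values.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical thresholds; the disclosure does not fix concrete values.
VOLUME_THRESHOLD = 0.6            # preset voice volume (normalized 0..1)
MIN_SPEECH_SECONDS = 2.0          # threshold speech time in seconds
INTENT_GESTURES = {"raise_hand", "clench_fist"}   # predefined gestures

@dataclass
class Detection:
    volume: Optional[float] = None          # detected voice volume, if any
    speech_seconds: Optional[float] = None  # detected speech time, if any
    gesture: Optional[str] = None           # label from the gesture model

def has_speech_intent(d: Detection) -> bool:
    """Speech intention exists if any disclosed condition holds:
    volume >= preset value, speech time >= threshold time,
    or the gesture is one of the predefined gestures."""
    if d.volume is not None and d.volume >= VOLUME_THRESHOLD:
        return True
    if d.speech_seconds is not None and d.speech_seconds >= MIN_SPEECH_SECONDS:
        return True
    return d.gesture in INTENT_GESTURES
```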
- If it is determined that the first participant has a speech intention, the at least one processor 180 may control the display 110 to display the first image received from the terminal device corresponding to the first participant in the main area.
- At this time, the display 110 may be controlled to display the shared content on the first image received from the terminal device corresponding to the first participant displayed in the main area.
- Additionally, the at least one processor 180 may control the display 110 to remove, based on at least one of the detected voice and gesture, the second image received from the terminal device corresponding to the first participant previously displayed in the sub-area, and to display the first image received from the terminal device corresponding to the first participant in the main area.
- The first image displayed in the main area and the second image displayed in the sub-area are both images received from the terminal device corresponding to the first participant, but the size of the first image displayed in the main area may be larger than the size of the second image displayed in the sub-area.
- If it is determined that the first participant has no speech intention, the at least one processor 180 can control the display 110 to maintain the shared content in the main area and to provide an image effect on the second image received from the terminal device corresponding to the first participant displayed in the sub-area. At this time, the image effect may correspond to the type of at least one of the detected voice and gesture.
- When the first participant's speech ends, the at least one processor 180 can control the display 110 to display the shared content again in the main area and to display the image received from the terminal device corresponding to the first participant in the sub-area.
- When entering a content sharing mode for sharing a plurality of contents, the at least one processor 180 can control the display 110 to display the multiple shared contents in the main area and to display the image received from the terminal device corresponding to at least one participant among the multiple participants in the sub-area.
- When at least one of the voice and gesture of a second participant is detected, the at least one processor 180 can remove one of the plurality of shared contents displayed in the main area based on at least one of the detected voice and gesture, and control the display 110 to display the third image received from the terminal device corresponding to the second participant in the area where the removed shared content was displayed.
- A method of controlling an electronic device according to a user's voice or gesture while operating in content sharing mode will be described with reference to FIGS. 2 to 7.
- Figure 2 is a flowchart for explaining the operation of the content sharing mode while performing a video call, according to an embodiment of the present disclosure.
- First, the electronic device 100 can start a video call (S210). Specifically, when the user runs the video call application and inputs a user interaction to execute the video call function, or inputs a user interaction in response to a video call request received from an external terminal device, the electronic device 100 can start the video call. At this time, multiple participants can participate in the video call.
- While performing the video call, the electronic device 100 may display a video call screen including a main area and a sub-area. For example, as shown in FIG. 3, the electronic device 100 can display the image 310 received from the terminal device corresponding to participant 1 in the main area, and display the images 320-1 to 320-4 received from the terminal devices corresponding to participants 2 to 5 in the sub-area.
- Participant 1 may be a participant making a speech or a participant hosting a video call.
- The sub-area may be displayed on the right side of the main area, as shown in FIG. 3, but this is only an example, and the sub-area may be displayed above or below the main area.
- While performing the video call, the electronic device 100 may enter the content sharing mode (S220). Specifically, when an interaction for sharing content is input by one participant among the plurality of participants, the electronic device 100 may enter the content sharing mode. Upon entering the content sharing mode, the electronic device 100 may display shared content in the main area. Specifically, upon entering the content sharing mode, the electronic device 100 may remove the image 310 of participant 1 displayed in the main area and display the shared content 410, as shown in FIG. 4. At this time, in the sub-area, the image 320-4 received from the terminal device corresponding to participant 5 may be removed, and the images 420-1 and 320-1 to 320-3 received from the terminal devices corresponding to participants 1 to 4 may be displayed.
- While operating in the content sharing mode, the electronic device 100 may detect at least one of the voice and gesture of the first participant among the plurality of participants (S230). At this time, the electronic device 100 may extract the voice from audio received from the outside and analyze the volume and speech time of the extracted voice; a sketch of one such analysis follows below. Alternatively, the electronic device 100 may detect the type of gesture made by the user by inputting the image received from the external terminal device into a gesture recognition model.
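- As an illustration of the voice analysis step, the sketch below derives a volume and a speech time from a raw audio buffer using a simple energy measure. This is not the disclosed implementation; the frame length and energy threshold are assumptions.

```python
import numpy as np

FRAME_SECONDS = 0.02       # assumed analysis frame length
ENERGY_THRESHOLD = 0.01    # assumed RMS level treated as voiced speech

def analyze_voice(samples: np.ndarray, sample_rate: int) -> tuple:
    """Return (volume, speech_seconds) for a mono audio buffer.
    Volume is the peak frame RMS; speech time is the total duration
    of frames whose RMS exceeds the threshold."""
    frame_len = int(sample_rate * FRAME_SECONDS)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))
    volume = float(rms.max()) if n_frames else 0.0
    speech_seconds = float(np.sum(rms > ENERGY_THRESHOLD)) * FRAME_SECONDS
    return volume, speech_seconds
```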
- Then, the electronic device 100 may determine whether the first participant's speech intention exists (S240). Specifically, the electronic device 100 may determine whether the first participant's speech intention exists based on the volume and speech time of the analyzed voice and the detected gesture.
- For example, if the volume of the detected voice is greater than or equal to a preset value, the electronic device 100 may determine that the first participant has an intention to speak.
- At this time, the preset value may be at least one of the average voice volume of the plurality of participants or a predefined target volume value. If the detected voice volume is less than the preset value, the electronic device 100 may determine that the first participant does not intend to speak.
- Additionally, if the speech time of the detected voice is greater than or equal to a threshold time, the electronic device 100 may determine that the first participant has an intention to speak. If the speech time is less than the threshold time, the electronic device 100 may determine that the first participant does not intend to speak.
- Additionally, if the detected gesture is a predefined gesture, the electronic device 100 may determine that the first participant has an intention to speak.
- At this time, the predefined gesture may include a gesture in which the user clenches a fist or a gesture in which the user raises a hand.
- However, this is only an example, and the electronic device 100 may input the participant's video into a trained artificial intelligence model to determine whether the participant in the video has a speaking intention.
- For example, shouts, cheers, applause, exclamations, etc. can be judged as speech with no speech intention, and gestures such as shaking the palm or drawing a heart can be judged as gestures with no speech intention.
- If it is determined that the first participant has a speech intention, the electronic device 100 may display the first participant's image in the main area (S250). For example, when a voice is detected from participant 3 and it is determined that speech intent exists in participant 3's voice, the electronic device 100 can display the image 510-1 received from the terminal device corresponding to participant 3 in the main area, as shown in FIG. 5. At this time, the previously displayed shared content 510-2 can be reduced in size and displayed on the image 510-1 received from the terminal device corresponding to participant 3. That is, the shared content 510-2 may be displayed in a picture-in-picture (PIP) format on the image 510-1 received from the terminal device corresponding to participant 3.
- Additionally, the image 320-2 received from the terminal device corresponding to participant 3, previously displayed in the sub-area, is removed, and the images 420-1, 320-1, 320-3, and 320-4 received from the terminal devices corresponding to participants 1, 2, 4, and 5 may be displayed. A sketch of this screen transition follows.
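- The screen transition of S250 amounts to swapping which stream occupies the main area and overlaying the shrunken shared content as a PIP. Below is a minimal sketch of that state change; the layout model and names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CallScreen:
    main: str                    # stream shown full-size in the main area
    pip: str = ""                # stream shown reduced, on top of the main area
    sub: list = field(default_factory=list)   # streams shown in the sub-area

def promote_speaker(screen: CallScreen, participant: str) -> CallScreen:
    """Move the intending speaker's video to the main area and show the
    previously shared content as a reduced picture-in-picture overlay."""
    shared = screen.main
    remaining = [s for s in screen.sub if s != participant]  # drop the speaker's tile
    return CallScreen(main=participant, pip=shared, sub=remaining)

# Example: promote_speaker(CallScreen("content", sub=["p1", "p2", "p3", "p4"]), "p3")
# yields CallScreen(main="p3", pip="content", sub=["p1", "p2", "p4"]).
```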
- If it is determined that the first participant has no speech intention, the electronic device 100 may display an image effect on the image of the first participant displayed in the sub-area (S260). For example, if a voice is detected from participant 3 but it is determined that there is no speech intention in participant 3's voice, the electronic device 100 can maintain the shared content 410 displayed in the main area and provide an image effect that shakes the image 320-3 received from the terminal device corresponding to participant 3 displayed in the sub-area.
- At this time, the image effect may correspond to the type of the first participant's voice (audio) or gesture. For example, for a shout or cheer the image effect may be an effect that shakes the image, for applause the image effect may provide an icon of an applause image, and for a gesture drawing a heart the image effect may provide an icon of a heart image.
- Meanwhile, if the electronic device 100 detects a voice from participant 3 but determines that there is no speech intention in participant 3's voice, the electronic device 100 can determine whether the image received from the terminal device corresponding to participant 3 is displayed in the sub-area. As shown in FIG. 6, when the image received from the terminal device corresponding to participant 3 is displayed in the sub-area, the electronic device 100 can provide an image effect on the image 320-3 received from the terminal device corresponding to participant 3 displayed in the sub-area. However, when the image received from the terminal device corresponding to participant 3 is not displayed in the sub-area, the electronic device 100 may provide an indicator, such as an icon, in the main area.
- In the above, the image effect was described as an effect that shakes the image, but this is only an example; various image effects may be provided, such as an effect that changes the color and brightness of the image, an effect that blinks the image, and an effect that highlights the image. One way to model this effect selection is sketched below.
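- One simple way to model the effect selection is a lookup table keyed by the classified voice or gesture type. The labels below are assumptions; the disclosure only requires that the effect correspond to the detected type.

```python
# Assumed labels produced by the voice/gesture classifiers.
IMAGE_EFFECTS = {
    "shout": "shake",
    "cheer": "shake",
    "applause": "applause_icon",
    "heart_gesture": "heart_icon",
    "palm_shake": "shake",
}

def effect_for(event_type: str) -> str:
    # Fall back to a generic highlight for unmapped non-intent events.
    return IMAGE_EFFECTS.get(event_type, "highlight")
```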
- When participant 3's speech ends, the electronic device 100 can display the shared content again in the main area and display the image received from the terminal device corresponding to participant 3 in the sub-area.
- For example, as shown in FIG. 7, the electronic device 100 may display the shared content 410 again in the main area and display the image 320-3 received from the terminal device corresponding to participant 3 in the sub-area.
- At this time, the image 420-1 received from the terminal device corresponding to participant 1 is removed from the sub-area, and the images 320-1, 320-3, 320-4, and 320-2 received from the terminal devices corresponding to participants 2, 4, 5, and 3 may be displayed.
- FIGS. 8A and 8B are diagrams for explaining a method of configuring a screen according to the number of participants, according to an embodiment of the present disclosure.
- The video call screen includes a main area and a sub-area.
- The sub-area may display images received from the terminal devices corresponding to at least some of the plurality of participants. At this time, a preset number of images can be displayed in the sub-area. For example, as shown in FIGS. 3 to 7, up to four images can be displayed in the sub-area.
- When the number of participants does not exceed the preset number, videos received from the terminal devices corresponding to all participants may be displayed in the sub-area during content sharing mode.
- For example, as shown in FIG. 8A, the electronic device 100 can display the shared content 810 in the main area and display the images 820-1 to 820-3 received from the terminal devices corresponding to the three participants in the sub-area.
- However, when the number of participants exceeds the preset number, videos received from terminal devices corresponding to only some of the plurality of participants may be displayed in the sub-area during the content sharing mode.
- For example, as shown in FIG. 8B, the electronic device 100 can display the shared content 830 in the main area and display, in the sub-area, the images 840-1 to 840-4 received from the terminal devices corresponding to 4 of the 8 participants.
- At this time, the images 850 received from the terminals corresponding to the remaining four participants are not displayed, but can be displayed in the sub-area by a user interaction that changes the sub-area (for example, a drag interaction or an interaction that selects a direction button on the remote control); a paging sketch follows below.
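- The sub-area behavior above is essentially pagination over the participant list: a fixed number of tiles is visible, and the drag or direction-button interaction moves to the next page. A minimal sketch, assuming a visible-tile limit of four as in the figures:

```python
MAX_SUB_TILES = 4  # maximum images shown at once in the sub-area

def visible_tiles(participants: list, page: int) -> list:
    """Return the participant videos visible on the given sub-area page."""
    start = page * MAX_SUB_TILES
    return participants[start:start + MAX_SUB_TILES]

# With 8 participants, page 0 shows the first four and page 1 the rest:
# visible_tiles([f"p{i}" for i in range(1, 9)], 1) == ["p5", "p6", "p7", "p8"]
```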
- An icon indicating the current participant and an indicator 860 indicating the number of participants may be displayed on the video call screen.
- FIG. 9 is a diagram illustrating an example in which a voice without speaking intent is detected during content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 910 in the main area, as shown in (a) of FIG. 9, and display the images 920-1 to 920-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When participant 1 utters a voice with a volume less than the preset value for less than the threshold time, the electronic device 100 can provide a shaking image effect on the image 920-1 received from the terminal device corresponding to participant 1, as shown in (b) and (c) of FIG. 9.
- After a preset time (e.g., 3 seconds), the electronic device 100 can remove the image effect provided on the image 920-1 received from the terminal device corresponding to participant 1, as shown in (d) of FIG. 9.
- FIG. 10 is a diagram illustrating an example in which a voice with speaking intent is detected during content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1010 in the main area, as shown in (a) of FIG. 10, and display the images 1020-1 to 1020-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When a voice with speaking intent is detected from participant 4, the electronic device 100 can display the image 1030-1 received from the terminal device corresponding to participant 4 in the main area, as shown in (b) of FIG. 10, and reduce the size of the existing shared content to display the reduced shared content 1030-2 on the image 1030-1.
- Additionally, the electronic device 100 may remove the image 1020-4 received from the terminal device corresponding to participant 4, provided in the sub-area, from the sub-area.
- When participant 4's speech ends, the electronic device 100 can enlarge and display the shared content 1010 again in the main area, as shown in (c) of FIG. 10, remove the image 1030-1 received from the terminal device corresponding to participant 4 displayed in the main area, and display the image 1020-4 received from the terminal device corresponding to participant 4 in the sub-area.
- FIG. 11 is a diagram illustrating an embodiment in which both a voice with speaking intention and a voice without speaking intention are detected during a content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1110 in the main area, as shown in (a) of FIG. 11, and display the images 1120-1 to 1120-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When a voice with speaking intent is detected from participant 1, the electronic device 100 displays the image 1130-1 received from the terminal device corresponding to participant 1 in the main area, as shown in (b) of FIG. 11, and can reduce the size of the existing shared content to display the reduced shared content 1130-2 on the image 1130-1. Additionally, the electronic device 100 may remove the image 1120-1 received from the terminal device corresponding to participant 1, provided in the sub-area, from the sub-area. Additionally, when a voice without speaking intent is detected from participant 4, the electronic device 100 may provide an image effect in which the image 1120-4 received from the terminal device corresponding to participant 4 is shaken, as shown in (b) of FIG. 11.
- When participant 1's speech ends, the electronic device 100 can enlarge and display the shared content 1110 again in the main area, as shown in (c) of FIG. 11, remove the image 1130-1 received from the terminal device corresponding to participant 1 displayed in the main area, and display the image 1120-1 received from the terminal device corresponding to participant 1 in the sub-area.
- At this time, the image 1120-1 received from the terminal device corresponding to participant 1 may be displayed in the bottom area rather than the previously displayed area.
- Additionally, the electronic device 100 may remove the image effect provided on the image 1120-4 received from the terminal device corresponding to participant 4.
- FIG. 12 is a diagram illustrating an example in which a gesture with utterance intent is detected during content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1210 in the main area, as shown in (a) of FIG. 12, and display the images 1220-1 to 1220-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When participant 1 makes a predefined gesture, the electronic device 100 may recognize participant 1's gesture through a gesture recognition model. Then, the electronic device 100 identifies the speech intention of participant 1, displays the image 1230-1 received from the terminal device corresponding to participant 1 in the main area, as shown in (b) of FIG. 12, and can reduce the size of the existing shared content to display the reduced shared content 1230-2 on the image 1230-1. Additionally, the electronic device 100 may remove the image 1220-1 received from the terminal device corresponding to participant 1, provided in the sub-area, from the sub-area.
- Afterwards, the electronic device 100 identifies participant 1's intention to end the speech, enlarges and displays the shared content 1210 again in the main area, as shown in (d) of FIG. 12, removes the image 1230-1 received from the terminal device corresponding to participant 1 displayed in the main area, and may display the image 1220-1 received from the terminal device corresponding to participant 1 in the sub-area. At this time, the image 1220-1 received from the terminal device corresponding to participant 1 may be displayed in the previously displayed area or in the bottom area.
- FIG. 13 is a diagram illustrating an example in which a gesture without speech intent is detected during content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1310 in the main area, as shown in (a) of FIG. 13, and display the images 1320-1 to 1320-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When participant 1 makes a gesture, the electronic device 100 may recognize participant 1's gesture through a gesture recognition model. Then, the electronic device 100 identifies that participant 1 has no intention to speak and can provide a shaking image effect on the image 1320-1 received from the terminal device corresponding to participant 1, as shown in (b) and (c) of FIG. 13.
- After a preset time (e.g., 3 seconds), the electronic device 100 can remove the image effect provided on the image 1320-1 received from the terminal device corresponding to participant 1, as shown in (d) of FIG. 13.
- FIG. 14 is a diagram illustrating an embodiment in which both voices with speaking intention and voices without speaking intention of two or more people are detected during content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1410 in the main area, as shown in (a) of FIG. 14, and display the images 1420-1 to 1420-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When a voice without speaking intent is detected from participant 4, the electronic device 100 may provide a shaking image effect on the image 1420-4 received from the terminal device corresponding to participant 4, as shown in (b) of FIG. 14.
- Meanwhile, the electronic device 100 may recognize participant 1's gesture through a gesture recognition model. Then, the electronic device 100 identifies the speech intention of participant 1 and displays the image 1430-1 received from the terminal device corresponding to participant 1 in the main area, as shown in (d) of FIG. 14. In addition, the size of the previously provided shared content can be reduced and the reduced shared content 1430-2 can be displayed on the image 1430-1. Additionally, the electronic device 100 may remove the image 1420-1 received from the terminal device corresponding to participant 1, provided in the sub-area, from the sub-area.
- Afterwards, the electronic device 100 identifies participant 1's intention to end the speech, enlarges and displays the shared content 1410 again in the main area, as shown in (e) of FIG. 14, removes the image 1430-1 received from the terminal device corresponding to participant 1 displayed in the main area, and may display the image 1420-1 received from the terminal device corresponding to participant 1 in the sub-area.
- FIG. 15 is a diagram illustrating an example in which a voice with an intention to speak is detected from a participant not included in a sub-area during a content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1510 in the main area, as shown in (a) of FIG. 15, and display the images 1520-1 to 1520-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- At this time, the video call may include not only participants 1 to 4 but also seven more participants 1530.
- When a voice with speaking intent is detected from participant 5, the electronic device 100 can display the image 1540-1 received from the terminal device corresponding to participant 5 in the main area and reduce the size of the previously provided shared content to display the reduced shared content 1540-2 on the image 1540-1.
- At this time, the electronic device 100 may maintain the images 1520-1 to 1520-4 received from the terminal devices corresponding to participants 1 to 4, provided in the sub-area.
- When participant 5's speech ends, the electronic device 100 can enlarge and display the shared content 1510 again in the main area, as shown in (c) of FIG. 15, remove the image 1540-1 received from the terminal device corresponding to participant 5 displayed in the main area, and display the image 1520-5 received from the terminal device corresponding to participant 5 in the sub-area.
- At this time, the image 1520-5 received from the terminal device corresponding to participant 5 may be displayed in the bottom area of the sub-area, as shown in FIG. 15, or alternatively in the top area of the sub-area. In either case, the previously displayed image 1520-4 received from the terminal device corresponding to participant 4 may be removed from the sub-area.
- FIG. 16 is a diagram illustrating an embodiment in which a gesture without an intention to speak is detected by a participant not included in a sub-area during a content sharing mode, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 can display the shared content 1610 in the main area, as shown in (a) of FIG. 16, and display the images 1620-1 to 1620-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- At this time, the video call may include not only participants 1 to 4 but also seven more participants 1630.
- When participant 5 makes a gesture, the electronic device 100 can recognize participant 5's gesture through a gesture recognition model. Then, the electronic device 100 identifies that participant 5 has no intention to speak and can display an indicator (e.g., a heart) 1640 corresponding to the type of gesture in the main area, as shown in (b) of FIG. 16. The indicator 1640 corresponding to the type of gesture may be removed after a preset time or when the gesture ends. At this time, the images 1620-1 to 1620-4 received from the terminal devices corresponding to participants 1 to 4, provided in the sub-area, may be maintained.
- FIG. 17 is a diagram illustrating an embodiment in which a gesture without speaking intention by a participant not included in the sub-area is detected while a participant with speaking intention is speaking during content sharing mode, according to an embodiment of the present disclosure.
- While operating in content sharing mode, the electronic device 100 displays the image 1710-1 received from the terminal device corresponding to the participant with the intention to speak and the shared content 1710-2 in the main area, as shown in (a) of FIG. 17, and may display the images 1720-1 to 1720-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- At this time, the video call may include not only participants 1 to 4 but also seven more participants 1730.
- When participant 5 and participant 6 make gestures, the electronic device 100 may recognize the gestures of participant 5 and participant 6 through a gesture recognition model. Then, the electronic device 100 identifies that participant 5 and participant 6 do not intend to speak and, as shown in (b) of FIG. 17, can display two indicators (e.g., hearts) 1740 corresponding to the type of gesture in the main area, matching the number of identified gestures. The indicators 1740 corresponding to the type of gesture may be removed after a preset time or when the gestures end. At this time, the images 1720-1 to 1720-4 received from the terminal devices corresponding to participants 1 to 4, provided in the sub-area, may be maintained.
- FIG. 18 is a diagram illustrating an embodiment in which a voice with speaking intent is detected during a content sharing mode in which a plurality of shared contents are provided in the main area, according to an embodiment of the present disclosure.
- Specifically, the electronic device 100 displays the first shared content 1810 and the second shared content 1820 in the main area, as shown in (a) of FIG. 18, and may display the images 1830-1 to 1830-4 received from the terminal devices corresponding to participants 1 to 4 in the sub-area.
- When a voice with speaking intent is detected from participant 4, the electronic device 100 can display the image 1840 received from the terminal device corresponding to participant 4 in the area of the main area where the second shared content 1820 is displayed, as shown in (b) of FIG. 18.
- At this time, the first shared content 1810 provided in the main area can be maintained.
- Meanwhile, the voice uttered by participant 4 may be a voice related to the first shared content 1810.
- That is, the electronic device 100 may display the image received from the terminal device corresponding to participant 4 in the area where the second shared content 1820, which is unrelated to the voice uttered by participant 4, is displayed.
- Conversely, if the voice uttered by participant 4 is related to the second shared content 1820, the electronic device 100 can display the image received from the terminal device corresponding to participant 4 in the area displaying the first shared content 1810. Additionally, the electronic device 100 may remove the image 1830-4 received from the terminal device corresponding to participant 4, provided in the sub-area, from the sub-area.
- Meanwhile, in the above-described embodiment, the first shared content 1810 and participant 4's image 1840 are described as being the same size, but this is only an example; as described above, the size of the first shared content 1810 may be reduced and the reduced first shared content may be displayed on participant 4's image 1840. A sketch of the area-selection logic follows.
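- When several shared contents occupy the main area, the device hands one content's area over to the speaker's video while keeping the content the utterance relates to. The sketch below models that selection with a hypothetical per-content relevance score; how relevance is computed is not specified in the disclosure.

```python
def area_to_replace(contents: list, relevance: dict) -> str:
    """Pick the shared-content area to replace with the speaker's video.
    The content least related to the utterance is replaced, so the
    content being discussed stays on screen."""
    return min(contents, key=lambda c: relevance.get(c, 0.0))

# Example: with the utterance judged relevant to content_1,
# content_2's area is handed over to the speaker's video.
print(area_to_replace(["content_1", "content_2"],
                      {"content_1": 0.9, "content_2": 0.1}))  # -> content_2
```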
- FIG. 19 is a flowchart illustrating a method of controlling an electronic device that provides a video call service, according to an embodiment of the present disclosure.
- First, the electronic device 100 enters a content sharing mode for sharing content during a video call with a plurality of participants (S1910).
- Then, the electronic device 100 displays the shared content in the main area and displays an image received from a terminal device corresponding to at least one participant among the plurality of participants in the sub-area (S1920).
- The electronic device 100 detects at least one of the voice and gesture of the first participant among the plurality of participants (S1930).
- Then, the electronic device 100 displays, in the main area, the first image received from the terminal device corresponding to the first participant, based on at least one of the detected voice and gesture (S1940).
- At this time, the electronic device 100 may determine the first participant's speech intention based on the volume and speech time of the detected voice, and the detected gesture. Specifically, the electronic device 100 can determine that the first participant has an intention to speak if the volume of the detected voice is greater than or equal to a preset value, the detected speech time is greater than or equal to a threshold time, or the detected gesture is a predefined gesture.
- If it is determined that the first participant has an intention to speak, the electronic device 100 may display the first image in the main area. If it is determined that the first participant does not intend to speak, the electronic device 100 can maintain the shared content in the main area and provide an image effect on the second image received from the terminal device corresponding to the first participant displayed in the sub-area. At this time, the image effect may correspond to the type of at least one of the detected voice and gesture.
- When at least one of the voice and gesture of the first participant among the plurality of participants is detected, the electronic device 100 can reduce the size of the shared content based on at least one of the detected voice and gesture and display the reduced shared content on the first image displayed in the main area. Additionally, when at least one of the voice and gesture of the first participant among the plurality of participants is detected, the electronic device 100 can remove, based on at least one of the detected voice and gesture, the second image received from the terminal device corresponding to the first participant displayed in the sub-area, and display the first image in the main area. At this time, the size of the first image may be larger than the size of the second image.
- When the first participant's speech ends, the electronic device 100 can display the shared content again in the main area and display the image received from the terminal device corresponding to the first participant in the sub-area.
- In the above-described embodiments, when speech intent is detected, the video of the participant with the intention to speak is displayed in the main area and the size of the shared content is reduced; however, this is only one implementation, and the video of the participant with speaking intent may continue to be displayed in the sub-area. That is, if a user setting is entered so that shared content is continuously displayed in the main area during the content sharing mode, the electronic device 100 can continue to display the video of the participant with speaking intent in the sub-area even if a voice or gesture with speaking intent is detected. At this time, the electronic device 100 may provide an image effect (for example, a highlight display) to the image of the participant with speaking intent displayed in the sub-area.
- In the above-described embodiments, the electronic device 100 controls operations during the content sharing mode, but this is only one embodiment, and at least some of the operations during the content sharing mode may be performed by an external server.
- FIG. 20 is a diagram for explaining a video call system including a server and a plurality of terminal devices according to an embodiment of the present disclosure.
- The first to fourth terminal devices 2010-1 to 2010-4 may perform a video call function through the server 2020.
- Specifically, the server 2020 can receive video and audio from the first to fourth terminal devices 2010-1 to 2010-4, generate a video call screen based on the received video, and transmit it to the first to fourth terminal devices 2010-1 to 2010-4.
- At this time, the server 2020 can analyze the video or audio received from at least one of the plurality of terminal devices 2010-1 to 2010-4 to determine whether a participant with speaking intent exists.
- When a participant with speaking intent is identified, the server 2020 can arrange the image received from the terminal device corresponding to that participant in the main area, as shown in FIG. 5, and transmit the resulting video call screen to the plurality of terminal devices 2010-1 to 2010-4.
- When a voice or gesture without speaking intent is detected, the server 2020 can provide an image effect in the sub-area, as shown in FIG. 6, and transmit the resulting video call screen to the plurality of terminal devices 2010-1 to 2010-4. A schematic sketch of this server-side composition follows.
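- Schematically, the server-side variant folds the flow of FIG. 2 into the compositing path: analyze each incoming stream, promote an intending speaker to the main area, and attach image effects for non-intent events. The sketch below is an assumption-level outline; the analysis stage is injected as a callable rather than implemented, and all names are hypothetical.

```python
from typing import Callable, Optional

def compose_call_screen(
    streams: dict,                                      # participant -> (video, audio)
    shared_content: str,
    detect: Callable[[bytes, bytes], Optional[dict]],   # analysis stage (injected)
) -> dict:
    """Per-frame composition mirroring S230-S260: promote an intending
    speaker to the main area; otherwise record a sub-area image effect."""
    main = shared_content
    effects = {}
    for participant, (video, audio) in streams.items():
        result = detect(video, audio)    # e.g. gesture model + voice analysis
        if result is None:
            continue
        if result.get("intent"):
            main = participant           # speaker's video takes the main area
        else:
            effects[participant] = result.get("effect", "shake")
    return {"main": main, "sub_effects": effects}
```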
- Meanwhile, functions related to artificial intelligence according to the present disclosure (e.g., learning and inference functions for a neural network model) are operated through at least one processor and the memory of the electronic device 100.
- The processor may consist of one or multiple processors.
- At this time, the one or more processors may include at least one of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a Neural Processing Unit (NPU), but are not limited to the examples of processors described above.
- A CPU is a general-purpose processor that can perform not only general calculations but also artificial intelligence calculations, and can efficiently execute complex programs through a multi-layer cache structure. CPUs are advantageous for serial processing, in which previous and subsequent calculation results are organically connected through sequential calculations.
- The general-purpose processor is not limited to the above-described examples, except where specified as the above-described CPU.
- A GPU is a processor for large-scale operations, such as the floating-point operations used in graphics processing, and can perform large-scale operations in parallel by integrating a large number of cores.
- GPUs may be more advantageous than CPUs in parallel processing methods such as convolution operations.
- Additionally, the GPU can be used as a co-processor to supplement the functions of the CPU.
- The processor for large-scale computation is not limited to the above-described examples, except where specified as the above-described GPU.
- An NPU is a processor specialized for artificial intelligence calculations using an artificial neural network, and each layer constituting the artificial neural network may be implemented in hardware (e.g., silicon). Because the NPU is designed specifically to a company's requirements, it has a lower degree of freedom than a CPU or GPU, but it can efficiently process the artificial intelligence calculations the company requires. As a processor specialized for artificial intelligence calculations, an NPU may be implemented in various forms such as a Tensor Processing Unit (TPU), an Intelligence Processing Unit (IPU), or a Vision Processing Unit (VPU).
- The artificial intelligence processor is not limited to the examples described above, except where it is specified as the above-described NPU.
- one or more processors may be implemented as a System on Chip (SoC).
- the SoC may further include memory and a network interface such as a bus for data communication between the processor and memory.
- The electronic device 100 may perform artificial-intelligence-related operations (e.g., operations related to learning or inference of an artificial intelligence model) using some of the plurality of processors.
- The electronic device 100 may perform artificial-intelligence-related operations using at least one of a GPU, an NPU, a VPU, a TPU, or a hardware accelerator specialized for artificial intelligence operations such as convolution and matrix multiplication, among the plurality of processors.
- However, this is only an example; calculations related to artificial intelligence may of course be processed using a general-purpose processor such as a CPU.
- In addition, the electronic device 100 may perform calculations for functions related to artificial intelligence using multiple cores (e.g., dual-core, quad-core) included in one processor.
- In particular, the electronic device 100 can perform artificial intelligence operations such as convolution and matrix multiplication in parallel using the multiple cores included in the processor.
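- As a rough illustration of splitting such an operation across multiple cores (a sketch using only the Python standard library, not the device's actual implementation):

```python
# Illustrative only: splitting a matrix multiplication across the rows
# of A so that multiple cores can work in parallel.
from concurrent.futures import ProcessPoolExecutor

def row_times_matrix(args):
    row, B = args
    # One output row: dot product of `row` with each column of B.
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def parallel_matmul(A, B, workers=4):
    # Each row of A is dispatched to the worker pool independently.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row_times_matrix, [(row, B) for row in A]))

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]
```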
- One or more processors control input data to be processed according to predefined operation rules or artificial intelligence models stored in memory.
- Predefined operation rules or artificial intelligence models are characterized by being created through learning.
- Here, being created through learning means that a predefined operation rule or artificial intelligence model with desired characteristics is created by applying a learning algorithm to a large amount of training data.
- This learning may be performed on the device itself that performs the artificial intelligence according to the present disclosure, or may be performed through a separate server/system.
- An artificial intelligence model may be composed of multiple neural network layers. At least one layer has at least one weight value, and the operation of the layer is performed using the operation result of the previous layer and at least one defined operation.
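- This layer-by-layer computation can be sketched as follows; the weighted-sum-plus-ReLU operation is one common choice of defined operation, used here only for illustration:

```python
# Generic illustration of the layer computation described above: each
# layer combines the previous layer's result with its own weight values
# through a defined operation (here, a weighted sum followed by ReLU).
def relu(x):
    return [max(0.0, v) for v in x]

def layer_forward(prev_output, weights, biases):
    """One layer: out[j] = relu(sum_i prev_output[i] * weights[i][j] + biases[j])."""
    out = [sum(p * w for p, w in zip(prev_output, col)) + b
           for col, b in zip(zip(*weights), biases)]
    return relu(out)

def model_forward(x, layers):
    """The model is composed of multiple such neural network layers;
    layers is a list of (weights, biases) pairs applied in order."""
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

if __name__ == "__main__":
    layers = [([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1])]  # one 2->2 layer
    print(model_forward([1.0, 2.0], layers))  # [0.7, 1.5]
```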
- Examples of neural networks include a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), a Restricted Boltzmann Machine (RBM), a Deep Belief Network (DBN), and a Bidirectional Recurrent Deep Neural Network (BRDNN).
- A learning algorithm is a method of training a target device (e.g., a TV) using a large amount of training data so that the target device can make decisions or predictions on its own.
- Examples of learning algorithms include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning; the learning algorithm in the present disclosure is not limited to the examples described above, except where it is specified.
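- For instance, a supervised learning step on such training data might look like the following minimal sketch (gradient descent fitting a single weight; purely illustrative, not the disclosure's method):

```python
# Minimal supervised-learning sketch: gradient descent fits a single
# weight so that y ≈ w * x, mirroring the idea of creating an operation
# rule by applying a learning algorithm to training data.
def train(samples, lr=0.01, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y          # prediction error on one sample
            w -= lr * error * x        # gradient step on squared error
    return w

# Training data generated by the rule y = 2x; learning recovers w ≈ 2.
print(train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]))
```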
- Computer program products are commodities and can be traded between sellers and buyers.
- The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), distributed through an application store (e.g., Play Store™), or distributed directly (e.g., downloaded or uploaded) online between two user devices (e.g., smartphones).
- In the case of online distribution, at least a portion of the computer program product (e.g., a downloadable app) may be at least temporarily stored in, or temporarily created on, a machine-readable storage medium such as the memory of a manufacturer's server, an application store's server, or a relay server.
- Methods according to various embodiments of the present disclosure may be implemented as software including instructions stored in a machine-readable storage medium that can be read by a machine (e.g., a computer).
- The machine is a device capable of calling a stored instruction from the storage medium and operating according to the called instruction, and may include an electronic device (e.g., a TV) according to the disclosed embodiments.
- a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
- Here, 'non-transitory storage medium' simply means that the medium is a tangible device and does not contain signals (e.g., electromagnetic waves); the term does not distinguish between a case where data is stored semi-permanently in the storage medium and a case where it is stored temporarily.
- a 'non-transitory storage medium' may include a buffer where data is temporarily stored.
- When an instruction is executed by the processor, the processor may perform the function corresponding to the instruction directly or by using other components under the control of the processor.
- Instructions may contain code generated or executed by a compiler or interpreter.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Social Psychology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to an electronic device for providing a video call service, and a control method therefor. The electronic device comprises: a communication interface; a display; a memory; and at least one processor operatively connected to the communication interface, the display, and the memory. Upon entering a content sharing mode for sharing content during a video call with a plurality of participants, the at least one processor controls the display to display the shared content on a main area and to display, on a sub-area, an image received from a terminal device corresponding to a participant among the plurality of participants. When a voice and/or a gesture of a first participant among the plurality of participants is detected, the at least one processor controls the display to display a first image, received from a terminal device corresponding to the first participant, on the main area based on the detected voice and/or the detected gesture.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020220091592A KR20240014179A (ko) | 2022-07-25 | 2022-07-25 | 화상 통화 서비스를 제공하는 전자 장치 및 이의 제어 방법 |
| KR10-2022-0091592 | 2022-07-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024025142A1 true WO2024025142A1 (fr) | 2024-02-01 |
Family
ID=89706965
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2023/008039 Ceased WO2024025142A1 (fr) | 2022-07-25 | 2023-06-12 | Dispositif électronique de fourniture de service d'appel vidéo et son procédé de commande |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR20240014179A (fr) |
| WO (1) | WO2024025142A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20250122120A (ko) * | 2024-02-06 | 2025-08-13 | 삼성전자주식회사 | 전자 장치 및 이의 제어 방법 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150341545A1 (en) * | 2013-01-09 | 2015-11-26 | Lg Electronics Inc. | Voice tracking apparatus and control method therefor |
| KR20160071732A (ko) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | 음성 입력을 처리하는 방법 및 장치 |
| KR20170064242A (ko) * | 2015-12-01 | 2017-06-09 | 삼성전자주식회사 | 영상통화를 제공하는 전자 장치 및 방법 |
| KR20170082349A (ko) * | 2016-01-06 | 2017-07-14 | 삼성전자주식회사 | 디스플레이 장치 및 그 제어 방법 |
| KR20220009318A (ko) * | 2020-07-14 | 2022-01-24 | (주)날리지포인트 | 화상 회의 서비스 제공 장치 및 방법 |
- 2022-07-25 KR KR1020220091592A patent/KR20240014179A/ko active Pending
- 2023-06-12 WO PCT/KR2023/008039 patent/WO2024025142A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| KR20240014179A (ko) | 2024-02-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020256460A1 (fr) | Appareil et procédé pour commander un affichage sur la base d'un cycle de fonctionnement en fréquence réglé différemment en fonction de la fréquence | |
| WO2019143227A1 (fr) | Dispositif électronique produisant une image en rapport avec un texte et son procédé de fonctionnement | |
| WO2019164232A1 (fr) | Dispositif électronique, procédé de traitement d'image associé et support d'enregistrement lisible par ordinateur | |
| WO2021172832A1 (fr) | Procédé de modification d'image basée sur la reconnaissance des gestes, et dispositif électronique prenant en charge celui-ci | |
| WO2018117428A1 (fr) | Procédé et appareil de filtrage de vidéo | |
| WO2020017875A1 (fr) | Appareil électronique, procédé de traitement d'image et support d'enregistrement lisible par ordinateur | |
| WO2019177344A1 (fr) | Appareil électronique et son procédé de commande | |
| WO2019017687A1 (fr) | Procédé de fonctionnement d'un service de reconnaissance de la parole, et dispositif électronique et serveur le prenant en charge | |
| WO2020060223A1 (fr) | Dispositif et procédé de fourniture d'informations de traduction d'application | |
| EP3698258A1 (fr) | Appareil électronique et son procédé de commande | |
| WO2022197089A1 (fr) | Dispositif électronique et procédé de commande de dispositif électronique | |
| EP3545685A1 (fr) | Procédé et appareil de filtrage de vidéo | |
| WO2022039366A1 (fr) | Dispositif électronique et son procédé de commande | |
| WO2019190171A1 (fr) | Dispositif électronique et procédé de commande associé | |
| WO2024025142A1 (fr) | Dispositif électronique de fourniture de service d'appel vidéo et son procédé de commande | |
| EP3469790A1 (fr) | Appareil d'affichage et procédé de commande correspondant | |
| WO2020166796A1 (fr) | Dispositif électronique et procédé de commande associé | |
| WO2020171547A1 (fr) | Procédé de gestion de tâches multiples et son dispositif électronique | |
| WO2021145693A1 (fr) | Dispositif électronique de traitement de données d'image et procédé de traitement de données d'image | |
| WO2020130734A1 (fr) | Dispositif électronique permettant la fourniture d'une réaction en fonction d'un état d'utilisateur et procédé de fonctionnement correspondant | |
| WO2023054913A1 (fr) | Dispositif électronique qui identifie une force tactile et son procédé de fonctionnement | |
| WO2020204572A1 (fr) | Dispositif électronique et son procédé de commande | |
| WO2015105215A1 (fr) | Procédé et appareil de modification de média par entrée tactile | |
| WO2024096394A1 (fr) | Dispositif électronique et procédé de commande associé | |
| WO2024106942A1 (fr) | Dispositif électronique et procédé de commande de dispositif électronique |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23846802; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 23846802; Country of ref document: EP; Kind code of ref document: A1 |