WO2023282371A1 - Server and method for providing a multilingual subtitle service using an artificial intelligence learning model, and control method of the server - Google Patents
- Publication number
- WO2023282371A1 (PCT/KR2021/008757)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- worker
- content
- terminal device
- user terminal
- Prior art date
- Legal status: Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q10/06311—Scheduling, planning or task assignment for a person or group
- G06Q10/063112—Skill-based matching of a person or a group to a task
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06398—Performance of employee with respect to a job function
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/08—Auctions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/278—Content descriptor database or directory service for end-user access
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/263—Language identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/42—Data-driven translation
- G06F40/44—Statistical methods, e.g. probability models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/51—Translation evaluation
Definitions
- the present invention relates to a server and method for providing a multilingual subtitle service using an artificial intelligence learning model, and to a control method of the server, and more particularly, to a service system capable of providing a multilingual subtitle service by handling subtitle content job requests and job execution for various content videos in an integrated manner, including a dedicated multilingual subtitle content creation tool for online content videos, and to a method of controlling the server.
- the present invention has been devised in response to the above-described needs, and an object of the present invention is to provide a caption service through which caption content work for a content video can be requested and performed in an integrated manner.
- a server providing a caption service includes a communication unit for performing data communication with at least one of a first user terminal device of a client requesting translation of a content video and a second user terminal device of a worker performing the translation work, a storage unit for storing a worker search list based on learned worker information and an artificial intelligence learning model for evaluating the worker's task ability, and a controller for controlling the communication unit to input image information about the content video into the artificial intelligence learning model according to the client's worker recommendation command, obtain a worker list of workers capable of translation, and transmit the obtained worker list to the first user terminal device.
- the worker information includes at least one of profile information for each worker, subtitle content completed by each worker, and task grade information evaluated for each worker.
- the artificial intelligence learning model may include a data learning unit that learns the worker information stored in the storage unit and classifies translatable categories for each worker and each field based on the learned worker information, and a data acquisition unit that obtains a list of workers capable of translating the content video based on the image information and the worker information learned through the data learning unit.
- the control unit inputs the image information to the artificial intelligence learning model, transmits the worker list obtained through the data acquisition unit to the first user terminal device, and, when a selection command for at least one worker included in the worker list is received from the first user terminal device, may control the communication unit to transmit a task assignment message to the second user terminal device of the worker corresponding to the selection command, and the image information may include at least one of address information, title information, and description information of the content video.
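- As a minimal sketch of the recommendation-and-assignment flow described above (all class, function, and field names below are hypothetical illustrations, not identifiers from the disclosure), the server-side logic could look roughly like this:
```python
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: str
    categories: set      # fields the learning model has classified as translatable by this worker
    task_grade: float    # evaluated task-ability score

@dataclass
class VideoInfo:
    address: str         # address information of the content video
    title: str = ""
    description: str = ""

def recommend_workers(video: VideoInfo, workers: list, model) -> list:
    """Input the image information into the (assumed) learning model and return
    a worker list ordered by suitability for the classified category."""
    category = model.classify_category(video)           # e.g. "medical", "game", "movie subtitles"
    candidates = [w for w in workers if category in w.categories]
    return sorted(candidates, key=lambda w: w.task_grade, reverse=True)

def assign_task(worker: Worker, video: VideoInfo, send_message) -> None:
    """On receiving the client's selection command, push a task-assignment
    message toward the selected worker's terminal device."""
    send_message(worker.worker_id, {"type": "task_assignment",
                                    "video_address": video.address})
```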
- when translation request information including at least one of an automatic translation command, working condition information, and image information is received from the first user terminal device, the control unit acquires the content video based on address information included in the image information, extracts audio data from image frames of the acquired content video, and inputs the audio data to the artificial intelligence learning model; the artificial intelligence learning model includes a language recognition unit that recognizes a first language related to the extracted audio data, and the data acquisition unit may obtain a second language requested by the client from the first language recognized through the language recognition unit.
- the artificial intelligence learning model may further include a translation verification unit that converts the second language acquired through the data acquisition unit, or the language corresponding to the subtitle contents updated in the storage unit, into a language suitable for the context based on previously learned subtitle contents.
- the data acquisition unit may obtain a task level value of the worker who worked on the subtitle contents by using at least one of the verification result corrected by the translation verification unit in relation to the updated subtitle contents, working period information for the subtitle contents, and evaluation information of users using the subtitle contents.
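- A plain, hedged sketch of how such a task level value might be combined from these signals (the weights, scales, and parameter names are illustrative assumptions, not values given in the disclosure):
```python
def task_level_value(correction_count: int, subtitle_line_count: int,
                     days_taken: float, days_allowed: float,
                     user_rating: float) -> float:
    """Fold the verification result (error corrections), the working-period
    information, and the user evaluation into one score between 0 and 1."""
    accuracy = 1.0 - min(correction_count / max(subtitle_line_count, 1), 1.0)
    timeliness = min(days_allowed / max(days_taken, 1e-6), 1.0)
    satisfaction = max(0.0, min(user_rating / 5.0, 1.0))   # assumes a 5-point rating scale
    # illustrative weighting; the disclosure does not specify how the factors are mixed
    return 0.5 * accuracy + 0.3 * timeliness + 0.2 * satisfaction

# example: 4 corrections over 200 lines, finished in 5 of 7 allowed days, rated 4.5/5
print(round(task_level_value(4, 200, 5, 7, 4.5), 3))
```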
- the control unit may include a task generator that determines validity of the content video based on the image information and then generates and stores task information related to the translation request information whose validity has been verified, and a task execution unit that provides a subtitle content creation tool for translating a content video related to at least one piece of translation request information stored in the storage unit to the second user terminal device and stores the caption contents in the storage unit when caption contents for which the translation work has been completed are received from the second user terminal device, wherein the image information may include at least one of address information for the content video, title information, and description information about the content video.
- the storage unit includes a temporary storage unit for storing subtitle contents for which the translation work has been completed and a final storage unit for storing subtitle contents for which translation verification has been completed among the subtitle contents stored in the temporary storage unit.
- An inspection tool for verifying the translation of the language included in at least one subtitle content stored in the temporary storage unit is provided to the third user terminal device of the inspector, and when an inspection completion message is received from the third user terminal device, the subtitle content for which inspection has been completed may be stored in the final storage unit.
- when a registration request command is received from the second user terminal device of an unregistered worker, the control unit may further include a member management unit that evaluates the test capability of the unregistered worker to generate task grade information of the unregistered worker, generates profile information including the task grade information and at least one of personal information, history information, cost information, and evaluation information provided by the unregistered worker, and stores the generated profile information in the storage unit.
- a method for providing a caption service using an artificial intelligence learning model in a server includes performing data communication with at least one of a first user terminal device of a client requesting translation of a content video and a second user terminal device of a worker performing the translation task, inputting image information about the content video into an artificial intelligence learning model according to the client's worker recommendation command, acquiring, using the artificial intelligence learning model, a worker list of workers capable of translating the content video, and transmitting the obtained worker list to the first user terminal device, wherein the worker information includes at least one of profile information for each worker, subtitle content completed by each worker, and task grade information evaluated for each worker.
- the acquiring step may include learning worker information stored in the storage unit through a data learning unit of the artificial intelligence learning model, classifying translatable categories for each worker and field based on the learned worker information, and obtaining a list of workers capable of translating the content video by inputting the learned worker information and the image information to the data acquisition unit of the artificial intelligence learning model.
- the inputting may include inputting the image information to the artificial intelligence learning model when translation request information including at least one of worker request information, working condition information, and image information is received, and, when a selection command for at least one worker included in the worker list is received from the first user terminal device, transmitting a task assignment message to a second user terminal device of a worker corresponding to the selection command, wherein the image information may include at least one of address information about the content video, title information, and description information about the content video.
- the inputting may include, when translation request information including at least one of an automatic translation command, working condition information, and image information is received from the first user terminal device, acquiring the content video based on address information included in the image information and inputting it to the language recognition unit of the artificial intelligence learning model, and the method may further include inputting the first language recognized by the language recognition unit to the data acquisition unit and acquiring a second language requested by the client from the first language.
- the method may further include converting the second language, or the language corresponding to the subtitle contents updated in the storage unit, into a language suitable for the context.
- the method may further include obtaining a task level value of the worker who worked on the subtitle contents by using, in the data acquisition unit, at least one of the verification result corrected by the translation verification unit in relation to the updated subtitle contents, working period information for the subtitle contents, and evaluation information of users using the subtitle contents.
- a control method of a server providing a caption service includes receiving translation request information including at least one of working condition information and image information from a first user terminal device of a client, determining validity of the content video based on the image information and then storing the translation request information whose validity has been verified, transmitting, in response to a request from a worker, a caption content creation tool for a translation job on the content video related to at least one piece of translation request information stored in the storage unit to a second user terminal device of the worker, and storing the caption contents when caption contents for which the translation job has been completed are received from the second user terminal device, wherein the image information includes at least one of address information, title information, and description information about the content video.
- the storing may include storing the subtitle contents for which the translation work has been completed in a temporary storage unit, transmitting, according to a work request of an inspector, an inspection work tool for checking the translation of the language included in at least one subtitle content stored in the temporary storage unit to a third user terminal device of the inspector, and storing the subtitle content for which inspection has been completed in the final storage unit when an inspection completion message is received from the third user terminal device.
- the method further includes registering worker information including profile information and task grade information of an unregistered worker and updating the registered worker information, wherein the registering includes generating the profile information including at least one of personal information, history information, cost information, and evaluation information of the unregistered worker, and, when a registration request command is received from the second user terminal device of the unregistered worker, evaluating the test capability of the unregistered worker to generate task grade information of the unregistered worker, and the updating includes updating the task grade information of the registered worker by using at least one of work period information on subtitle contents completed by the registered worker, error correction information, and evaluation information of users using the subtitle contents.
- a server providing a caption service may provide a caption service capable of integrally requesting and executing caption content operations for content images through an artificial intelligence learning model.
- the present invention mediates, in a server, among a requester who requests subtitle content work, including source-language and translation-target-language work, for video content distributed online in the form of various resources, a worker who performs the source-language and translation-target-language work, and an inspector of the generated subtitle content, thereby providing a new revenue model for workers and inspectors and reducing the cost of subtitle content generation for the requester.
- FIG. 1 is an exemplary view of a caption service system according to an embodiment of the present invention
- FIG. 2 is a block diagram of a server providing a caption service according to an embodiment of the present invention
- FIG. 3 is a detailed block diagram of a storage unit according to an embodiment of the present invention.
- FIG. 4 is a detailed block diagram of an artificial intelligence learning model according to an embodiment of the present invention.
- FIG. 5 is a detailed block diagram of a data learning unit and a data acquisition unit according to an embodiment of the present invention.
- FIG. 6 is a detailed block diagram of a control unit according to an embodiment of the present invention.
- FIG. 7 is a block diagram of a user terminal device according to an embodiment of the present invention.
- FIGS. 8 and 9 are exemplary views of displaying a caption service window for a client in a user terminal device according to an embodiment of the present invention.
- FIGS. 10 to 12 are exemplary views of displaying a caption service window for an operator in a user terminal device according to an embodiment of the present invention.
- FIG. 13 is a flowchart of a method for providing a caption service using an artificial intelligence learning model in a server according to an embodiment of the present invention
- FIG. 14 is a flowchart of a control method of a server providing a caption service according to the present invention.
- expressions such as “A or B,” “at least one of A and/and B,” or “one or more of A or/and B” may include all possible combinations of the items listed together.
- when a certain component (e.g., a first component) is referred to as being connected to another component (e.g., a second component), the certain component may be directly connected to the other component or connected through yet another component (e.g., a third component).
- when an element (e.g., a first element) is referred to as being directly connected to another element (e.g., a second element), it may be understood that no other element (e.g., a third element) exists between the element and the other element.
- the expression "configured to" may be used interchangeably with, depending on the circumstances, "suitable for," "having the capacity to," "designed to," "adapted to," "made to," or "capable of."
- the term “configured (or set) to” may not necessarily mean only “specifically designed to” hardware.
- the phrase “device configured to” may mean that the device is “capable of” in conjunction with other devices or components.
- for example, the phrase "processor configured (or set) to perform A, B, and C" may refer to a dedicated processor (e.g., an embedded processor) for performing the operations, or a general-purpose processor (e.g., a CPU or application processor) capable of performing the operations by executing one or more software programs stored in a memory device.
- Electronic devices may include, for example, at least one of a smart phone, a tablet PC, a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a PDA, a portable multimedia player (PMP), an MP3 player, a medical device, a camera, or a wearable device.
- a wearable device may be in the form of an accessory (e.g., a watch, ring, bracelet, anklet, necklace, eyeglasses, contact lens, or head-mounted device (HMD)) or a type integrated into textiles or clothing.
- the electronic device may include, for example, at least one of a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSyncTM, Apple TVTM, or Google TVTM), a game console (e.g., XboxTM, PlayStationTM), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.
- the electronic device may include at least one of various medical devices (e.g., portable medical measuring devices such as a blood glucose meter, heart rate monitor, blood pressure monitor, or body temperature monitor, a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, an imager, or an ultrasonicator), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automobile infotainment device, marine electronic equipment (e.g., navigation devices for ships, gyrocompasses, etc.), avionics, security devices, head units for vehicles, industrial or home robots, drones, ATMs in financial institutions, point of sale (POS) terminals in stores, or IoT devices (e.g., light bulbs, various sensors, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, hot water tanks, heaters, boilers, etc.).
- the term user may refer to a person using an electronic device or a device using an electronic device (eg, an artificial intelligence electronic device).
- FIG. 1 is an exemplary diagram of a caption service system according to an embodiment of the present invention.
- the caption service system 1000 includes a server 100 providing caption services and first and second user terminal devices 200 - 1 and 200 - 2 .
- the server 100 is a server that provides various services for a project requested by a client.
- the project may be one that generates subtitle content for the original language (hereinafter referred to as the source language) of the content video requested by the client (transcription), or one that translates the source language of the content video into the language requested by the client.
- the first user terminal device 200-1 is a terminal device of a client requesting a translation work for a content video, and the second user terminal device 200-2 is a terminal device of a worker performing the translation work for the content video requested by the client.
- the first and second user terminal devices 200-1 and 200-2 may be electronic devices such as desktops, laptops, tablet PCs, and smart phones capable of accessing the Internet.
- the server 100 providing the caption service inputs image information about the content video requested by the client into the artificial intelligence learning model to obtain a worker list of workers capable of translation, and transmits the obtained worker list to the first user terminal device 200-1 of the client.
- the client may select a worker for the content image from the worker list displayed through the first user terminal device 200-1.
- when the server 100 receives a selection command for the worker selected by the client from the first user terminal device 200-1, the server 100 sends a task assignment message for the translation work on the content video requested by the client to the second user terminal device 200-2 of the corresponding worker.
- the operator can access the server 100 through the second user terminal device 200-2 and perform a translation job for the content image requested by the client.
- when the server 100 receives an automatic translation command for the content video requested by the client from the client's first user terminal device 200-1, the server 100 inputs the corresponding content video into the artificial intelligence learning model, recognizes a first language related to the audio data included in the content video, and may automatically translate the recognized first language into a second language requested by the client.
- the server 100 determines whether the content video requested by the client is valid, and then creates and stores task information related to the translation request whose validity has been verified.
- the operator can check the task information stored in the server 100 through the second user terminal device 200-2 and perform a translation job for the content image requested by the client.
- FIG. 2 is a block diagram of a server providing a caption service according to an embodiment of the present invention.
- a server 100 providing a caption service includes a communication unit 110 , a storage unit 120 and a control unit 130 .
- the communication unit 110 performs data communication with at least one of the client's first user terminal device 200-1 requesting translation of the content video and the worker's second user terminal device 200-2 performing the translation work.
- the communication unit 110 may be connected to an external network according to a wireless communication protocol such as IEEE to perform data communication with the first and second user terminal devices 200-1 and 200-2, or may perform data communication with the first and second user terminal devices 200-1 and 200-2 through a relay device (not shown).
- the storage unit 120 stores a worker search list based on the learned worker information and an artificial intelligence learning model that evaluates the worker's task ability.
- the worker information may include at least one of completed subtitle content for each worker and evaluated task grade information for each worker.
- the control unit 130 controls the overall operation of each component constituting the server 100 .
- the control unit 130 obtains a worker list of workers capable of translation by inputting image information about the content video into the artificial intelligence learning model according to the client's worker recommendation command, and controls the communication unit 110 to transmit the obtained worker list to the first user terminal device 200-1.
- the content video may be in a video file format such as MP4, AVI, or MOV.
- the artificial intelligence learning model may be an AI model that learns the worker information stored in the storage unit 120, classifies translatable categories for each worker and field based on the learned worker information, and provides a list of workers capable of translating the content video requested by the client within the classified categories.
- how the artificial intelligence learning model learns the information stored in the storage unit 120 and outputs a result based on the learned information and the input information will be described in detail with reference to FIGS. 3 and 4.
- FIG. 3 is a detailed block diagram of a storage unit according to an embodiment of the present invention
- FIG. 4 is a detailed block diagram of an artificial intelligence learning model according to an embodiment of the present invention.
- the storage unit 120 includes a member information storage unit 310, a task information storage unit 320, an artificial intelligence learning model 330, a temporary storage unit 340, and a final storage unit 350.
- the member information storage unit 310 may store profile information about all pre-registered members.
- the pre-registered members may include a client who has requested translation work for the content video, a worker who performs the translation work requested by the client, and an inspector who verifies (inspects) the subtitle contents completed by the worker.
- Profile information of such members may include at least one of ID information, contact information, e-mail information, payment information, gender, and age information of pre-registered members. Meanwhile, when the member is a worker or inspector, the member information storage unit 310 may further store worker information including at least one of task grade information and history information in addition to profile information of the worker or inspector.
- the task information storage unit 320 stores task information generated in relation to translation request information for a content image requested by a pre-registered client.
- the translation request information may include at least one of operator request information, working condition information, and image information
- the image information may include at least one of address information, title information, and description information about the content image requested by the client.
- the description information may be detailed information about the content video
- the subtitle condition information may include at least one of a language to be translated, a request date for completion of the translation work, a translation work cost, a worker level, and translation work difficulty information.
- the translation request information may include a content video file.
- the task information storage unit 320 may separately store task information not assigned to a worker, task information assigned to a worker, and task information completed by a worker.
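- For illustration only, the separation of unassigned, assigned, and completed task information could be modeled along these lines (the class is a hypothetical in-memory stand-in, not part of the disclosed server):
```python
from enum import Enum

class TaskState(Enum):
    UNASSIGNED = "unassigned"
    ASSIGNED = "assigned"
    COMPLETED = "completed"

class TaskInfoStore:
    """Toy stand-in for the task information storage unit 320."""
    def __init__(self):
        self._tasks = {}   # task_id -> {"request": dict, "state": TaskState, "worker": str or None}

    def add(self, task_id, translation_request):
        self._tasks[task_id] = {"request": translation_request,
                                "state": TaskState.UNASSIGNED, "worker": None}

    def assign(self, task_id, worker_id):
        self._tasks[task_id].update(state=TaskState.ASSIGNED, worker=worker_id)

    def complete(self, task_id):
        self._tasks[task_id]["state"] = TaskState.COMPLETED

    def by_state(self, state):
        return [tid for tid, t in self._tasks.items() if t["state"] is state]
```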
- the artificial intelligence learning model 330 learns profile information of members stored in the member information storage unit 310 and task information stored in the task information storage unit 320, and provides various services related to translation work based on the learned information.
- the temporary storage unit 340 temporarily stores subtitle contents for which translation work has been completed by a worker or subtitle contents automatically translated through the artificial intelligence learning model 330, and the final storage unit stores, among the subtitle contents in the temporary storage unit, the subtitle contents verified by an inspector or by the artificial intelligence learning model 330.
- the aforementioned artificial intelligence learning model 330 includes a data learning unit 331, a data acquisition unit 332, a language recognition unit 333, and a task evaluation unit 334, as shown in FIG. 4.
- the data learning unit 331 learns worker profile information and worker information stored in the member information storage unit 121 of the storage unit 120, and classifies translatable categories for each worker and each field based on the learned worker information.
- the data learning unit 331 may classify the main translation field as the medical field based on the profile information and worker information of the first worker.
- the data acquisition unit 332 acquires a list of operators that can be translated for the content image requested by the client based on the image information and the operator information learned through the data learning unit 331 .
- control unit 130 inputs the image information to the artificial intelligence learning model 330 when translation request information including at least one of worker request information, work condition information, and image information is received.
- the image information may include at least one of address information for a content image, title information, and description information for the corresponding content image.
- the data acquisition unit 332 may classify a category for the content video requested by the client based on the image information input to the artificial intelligence learning model 330 and the worker information learned through the data learning unit 331, and acquire a worker list of workers corresponding to the classified category.
- the control unit 130 transmits the worker list acquired through the data acquisition unit 332 to the client's first user terminal device 200-1, and when a selection command for a worker in the worker list is received from the first user terminal device 200-1, transmits a task assignment message to the second user terminal device 200-2 of the worker corresponding to the selection command through the communication unit 110.
- the worker accesses the server 100 through the second user terminal device 200-2 and performs the translation job for the content video requested by the client.
- the language recognition unit 333 recognizes a first language related to audio data extracted from an image frame of an input content image.
- the control unit 130 acquires the content video based on the address information included in the image information, extracts audio data from the image data of the acquired content video, and inputs the extracted audio data to the artificial intelligence learning model. Accordingly, the language recognition unit 333 may recognize the first language from the input data. The first language recognized through the language recognition unit 333 is input to the data acquisition unit 332, and the data acquisition unit 332 obtains a second language converted from the first language based on the input first language and the language information pre-learned through the data learning unit 331.
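- Written out as a pipeline with injected helpers (none of these function names come from the disclosure; they stand in for the video acquisition, audio extraction, language recognition, and translation steps), the automatic flow reads roughly as follows:
```python
from typing import Callable, List

def auto_translate(address: str, target_language: str,
                   fetch_audio: Callable[[str], bytes],
                   recognize_language: Callable[[bytes], str],
                   transcribe: Callable[[bytes, str], List[str]],
                   translate: Callable[[str, str, str], str]) -> List[str]:
    """Acquire the content video via its address information, pull out the audio
    data, recognize the first (source) language, and return subtitle lines
    converted into the second language requested by the client."""
    audio = fetch_audio(address)                  # acquire the video and extract audio data
    first_language = recognize_language(audio)    # role of the language recognition unit 333
    lines = transcribe(audio, first_language)     # source-language subtitle text
    # role of the data acquisition unit 332: obtain the second language requested by the client
    return [translate(line, first_language, target_language) for line in lines]
```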
- the translation verification unit 334 converts the second language acquired through the data acquisition unit 332, or the language corresponding to the subtitle contents updated in the storage unit 120, into a language suitable for the context based on the pre-learned subtitle contents.
- when at least one of the verification result error-corrected by the translation verification unit 334 in relation to the subtitle contents updated in the storage unit 120, the work period information on the subtitle contents, and the evaluation information of users using the subtitle contents is input, the data acquisition unit 332 may obtain a task level value of the worker who worked on the subtitle contents using the input information.
- the components constituting the aforementioned artificial intelligence learning model 330 may be implemented as a software module or manufactured in the form of at least one hardware chip and installed in the server 100.
- the data learning unit 331 and the data acquisition unit 332 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as a part of an existing general-purpose processor (e.g., a CPU or application processor) or a graphics-only processor (e.g., a GPU) and mounted on the server 100.
- the dedicated hardware chip for artificial intelligence is a dedicated processor specialized in probability calculation, and can quickly process calculation tasks in the field of artificial intelligence such as machine learning with higher parallel processing performance than conventional general-purpose processors.
- the software modules may be stored in a non-transitory computer-readable medium.
- the software module may be provided by an Operating System (OS) or by a predetermined application, or part of the software module may be provided by the OS and the other part by a predetermined application.
- FIG. 5 is a detailed block diagram of a data learning unit and a data acquisition unit according to an embodiment of the present invention.
- the data learning unit 331 may include a training data acquisition unit 331-1 and a model learning unit 331-4.
- the data learning unit 331 may selectively further include at least one of a training data pre-processing unit 331-2, a training data selection unit 331-3, and a model evaluation unit 331-5.
- the training data acquisition unit 331-1 may acquire training data necessary for the first model and the second model.
- the learning data acquisition unit 331-1 may acquire, as learning data, profile information of members including clients, workers, and inspectors registered in the server 100, worker information, subtitle content for which translation work has been completed by workers, language information for each country, and the like.
- the model learning unit 331-4 may use the training data to learn criteria for how to classify translatable field categories for each worker, how to convert a first language into a second language, and how to evaluate each worker's task ability. For example, the model learning unit 331-4 may train an artificial intelligence learning model through supervised learning using at least a part of the learning data as a criterion. Alternatively, the model learning unit 331-4 may, for example, train the artificial intelligence learning model through unsupervised learning, which learns by itself using the learning data without any guidance to discover the criteria for determining a situation.
- model learning unit 331 - 4 may train the artificial intelligence learning model through reinforcement learning using, for example, feedback on whether a result of situational judgment according to learning is correct.
- model learning unit 331-4 may train an artificial intelligence learning model using, for example, a learning algorithm including error back-propagation or gradient descent.
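- As a hedged, toy illustration of the supervised, gradient-descent style of training named above (here fitting a linear scorer for a worker's suitability from a few made-up numeric features; the feature set and data are assumptions, not training data from the disclosure):
```python
def train_linear_scorer(samples, epochs=500, lr=0.1):
    """samples: list of (feature_vector, target_score) pairs.
    Updates the weights by plain gradient descent on the squared error."""
    n_features = len(samples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, target in samples:
            pred = bias + sum(w * x for w, x in zip(weights, features))
            error = pred - target
            weights = [w - lr * error * x for w, x in zip(weights, features)]
            bias -= lr * error
    return weights, bias

# made-up features, each scaled to [0, 1]: [completed-job count, user rating, on-time ratio]
data = [([0.40, 0.90, 0.90], 0.85), ([0.10, 0.60, 0.60], 0.40), ([0.83, 0.98, 0.95], 0.95)]
weights, bias = train_linear_scorer(data)
```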
- the model learning unit 331 - 4 may determine an AI learning model having a high correlation between input training data and basic training data as an AI learning model to be learned.
- the basic learning data may be pre-classified for each type of data
- the artificial intelligence model may be pre-built for each type of data.
- the basic training data may be pre-classified according to various criteria such as the region where the training data was created, the time the training data was created, the size of the training data, the genre of the training data, the creator of the training data, and the type of object in the training data.
- the model learning unit 331-4 may store the learned artificial intelligence learning model.
- the model learning unit 331 - 4 may store the learned artificial intelligence learning model in the storage unit 120 .
- the model learning unit 331 - 4 may store the learned artificial intelligence learning model in a memory of an artificial intelligence server (not shown) connected to the server 100 through a wired or wireless network.
- the data learning unit 331 may further include a learning data pre-processing unit 331-2 and a learning data selection unit 331-3.
- the learning data pre-processing unit 331-2 may pre-process the acquired data so that the acquired data can be used for learning for worker recommendation, automatic translation, and task ability evaluation of the worker.
- the learning data pre-processing unit 331-2 may process the corresponding data into a preset format so that the model learning unit 331-4 can use the acquired data.
- the learning data selector 331-3 may select data necessary for learning from data acquired by the learning data acquisition unit 331-1 or data preprocessed by the learning data preprocessor 331-2.
- the selected learning data may be provided to the model learning unit 331-4.
- the learning data selection unit 331-3 may select learning data necessary for learning from among the acquired or pre-processed data according to a predetermined selection criterion. Also, the learning data selection unit 331-3 may select learning data according to a selection criterion preset by learning by the model learning unit 331-4.
- the data learning unit 331 may further include a model evaluation unit 331-5 to improve the recognition result of the artificial intelligence learning model.
- the model evaluation unit 331-5 inputs evaluation data to the artificial intelligence learning model and, if the recognition result output for the evaluation data does not satisfy a predetermined criterion, may cause the model learning unit 331-4 to learn again.
- the evaluation data may be predefined data for evaluating the artificial intelligence model.
- the model evaluation unit 331-5 may evaluate that the predetermined criterion is not satisfied when the number or ratio of evaluation data for which the recognition result is not accurate, among the recognition results of the trained AI learning model for the evaluation data, exceeds a preset threshold.
- the model evaluation unit 331-5 evaluates whether each learned artificial intelligence learning model satisfies the predetermined criterion, and may determine a model that satisfies the predetermined criterion as the final artificial intelligence learning model. In addition, the model evaluation unit 331-5 may determine one learning model, or a preset number of learning models, in descending order of evaluation score, as the final artificial intelligence learning model.
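- The pass/fail behaviour described for the model evaluation unit 331-5 can be sketched as follows (the threshold values are assumptions for illustration; the disclosure only states that a preset threshold is used):
```python
def satisfies_criterion(predictions, ground_truth, max_error_ratio=0.2):
    """Return False (i.e. the model should be sent back to the model learning
    unit for re-training) when the ratio of inaccurate recognition results on
    the evaluation data exceeds the preset threshold."""
    errors = sum(1 for p, t in zip(predictions, ground_truth) if p != t)
    return errors / max(len(ground_truth), 1) <= max_error_ratio

def select_final_model(evaluated_models, passing_score=0.8):
    """evaluated_models: list of (model_name, evaluation_score) pairs.
    Keep only models that satisfy the criterion and return the highest-scoring
    one as the final artificial intelligence learning model."""
    passing = [m for m in evaluated_models if m[1] >= passing_score]
    return max(passing, key=lambda m: m[1], default=None)
```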
- the data acquisition unit 332 may include an input data acquisition unit 332-1 and a provision unit 332-4, as shown in (b) of FIG. 5.
- the data acquisition unit 332 may selectively further include at least one of an input data pre-processing unit 332-2, an input data selection unit 332-3, and a model updating unit 332-5.
- the input data acquisition unit 332-1 may obtain data necessary for obtaining information on worker recommendation, automatic translation, and task ability evaluation of the worker.
- the provision unit 332-4 may obtain various information such as worker recommendation, automatic translation, and evaluation of the worker's task ability by applying the input data obtained from the input data acquisition unit 332-1 to the learned artificial intelligence learning model as an input value. The providing unit 332-4 may also obtain a recognition result by applying the data selected by the input data pre-processing unit 332-2 or the input data selection unit 332-3 to the artificial intelligence learning model as an input value.
- Recognition results may be determined by an artificial intelligence learning model.
- the provision unit 332-4 may obtain (or estimate) a category for a field that the corresponding worker can translate by applying the worker profile information and worker information obtained from the input data acquisition unit 332-1 to the learned first model.
- the data acquisition unit 332 may further include an input data pre-processing unit 332-2 and an input data selection unit 332-3 in order to improve the recognition results of the artificial intelligence learning model or to save resources or time for providing the recognition results.
- the input data pre-processing unit 332-2 may pre-process the acquired data so that the acquired data can be used as input to the first and second models.
- the input data pre-processing unit 332-2 may process the acquired data into a predefined format so that the providing unit 332-4 can use the acquired data to obtain information about worker recommendation, automatic translation, and evaluation of the worker's task ability.
- the input data selection unit 332-3 may select data necessary for situation determination from among the data acquired by the input data acquisition unit 332-1 or the data preprocessed by the input data pre-processing unit 332-2. The selected data may be provided to the provision unit 332-4. The input data selection unit 332-3 may select some or all of the obtained or preprocessed data according to a predetermined selection criterion for determining the situation. In addition, the input data selection unit 332-3 may select data according to a selection criterion set by learning by the model learning unit 331-4.
- the model updating unit 332-5 may control the artificial intelligence learning model to be updated based on the evaluation of the recognition result provided by the providing unit 332-4.
- the model updating unit 332-5 may provide the recognition result provided by the providing unit 332-4 to the model learning unit 331-4 and request that the model learning unit additionally train or update the artificial intelligence learning model.
- control unit 130 may include a configuration as shown in FIG. 6 .
- FIG. 6 is a detailed block diagram of a control unit according to an embodiment of the present invention.
- control unit 130 includes a task creation unit 131 , a task execution unit 132 and a member management unit 133 .
- the task generator 131 determines whether the content image is valid based on the image information. Then, task information related to the translation request information whose validity has been verified is created and stored in the storage unit 120 .
- the image information may include at least one of address information, title information, and description information about the content image.
- when the translation request information for the content video requested by the client is received from the first user terminal device 200-1, the task generating unit 131 determines the validity of the corresponding content video based on the address information included in the translation request information. As a result of the determination, if the validity of the content video is verified, the task generating unit 131 generates task information for the received translation request information and stores it in the storage unit 120, and if the validity of the corresponding content video is not verified, an operation-unavailable message is transmitted to the first user terminal device 200-1 through the communication unit 110.
- alternatively, the task generating unit 131 may generate task information for the corresponding translation request information and store it in the storage unit 120 without determining whether the received translation request information is valid.
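- For illustration, a hedged sketch of the validity check on the address information described above, using only the Python standard library (the acceptance rules and message names are assumptions, not requirements stated in the disclosure):
```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def content_video_is_valid(address: str, timeout: float = 5.0) -> bool:
    """Rough check of the address information in the image information: the URL
    must be well formed and the resource must answer an HTTP request."""
    parsed = urlparse(address)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    try:
        with urlopen(Request(address, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (OSError, ValueError):
        return False

def handle_translation_request(request: dict, task_store, notify_client) -> None:
    """Create and store task information only when the content video checks out;
    otherwise send an operation-unavailable style message back to the client."""
    if content_video_is_valid(request["image_info"]["address"]):
        task_store.add(request["task_id"], request)
    else:
        notify_client(request["client_id"], "operation_unavailable")
```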
- the task execution unit 132 provides a caption content creation tool for a translation job of a content image related to at least one piece of translation request information stored in the storage unit 120 to the translator's second user terminal device 200-2, When caption content for which translation work has been completed is received from the second user terminal device 200 - 2 , the received caption content is stored in the storage unit 120 .
- the storage unit 120 stores the subtitle contents for which translation work has been completed in the temporary storage unit 121, and among the subtitle contents stored in the temporary storage unit 121, the subtitle contents for which translation verification has been completed may be stored in the final storage unit 122. Therefore, the task execution unit 132 provides a third user terminal device (not shown) of the inspector with a verification work tool for verifying the translation of the language included in at least one subtitle content stored in the temporary storage unit 121, and when a verification completion message is received from the third user terminal device (not shown), the verified subtitle content may be stored in the final storage unit 122.
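- A minimal sketch of the temporary-to-final hand-off around the inspection step (class and method names are hypothetical illustrations):
```python
class SubtitleStore:
    """Toy stand-in for the temporary storage unit 121 and final storage unit 122."""
    def __init__(self):
        self.temporary = {}   # task_id -> subtitle content awaiting inspection
        self.final = {}       # task_id -> subtitle content whose inspection is complete

    def save_completed_translation(self, task_id, subtitles):
        """Called when caption content finished by the worker arrives."""
        self.temporary[task_id] = subtitles

    def on_inspection_complete(self, task_id):
        """Called when the verification completion message arrives from the
        inspector's terminal: promote the content to final storage."""
        self.final[task_id] = self.temporary.pop(task_id)
```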
- when a registration request command is received from the second user terminal device 200-2 of an unregistered worker, the member management unit 133 evaluates the test capability of the unregistered worker and generates task grade information of the unregistered worker. Thereafter, the member management unit 133 generates profile information including the task grade information and at least one of the personal information, history information, cost information, and evaluation information provided by the unregistered worker, and stores it in the member information storage unit 121 of the storage unit 120.
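- The registration path for an unregistered worker, written out as a hedged sketch (the grading scale, score ranges, and field names are assumptions, not values from the disclosure):
```python
def grade_from_test(test_score: float) -> str:
    """Map a capability-test score (assumed to be 0-100) to a task grade."""
    if test_score >= 90:
        return "A"
    if test_score >= 70:
        return "B"
    return "C"

def register_worker(personal_info: dict, history: list, cost_info: dict,
                    evaluation_info: dict, test_score: float, member_store: dict) -> dict:
    """Evaluate the capability test, build the profile information, and keep it
    in the member information storage (here a plain dict for illustration)."""
    profile = {
        "personal_info": personal_info,
        "history": history,
        "cost_info": cost_info,
        "evaluation_info": evaluation_info,
        "task_grade": grade_from_test(test_score),
    }
    member_store[personal_info["id"]] = profile
    return profile
```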
- the member management unit 133 may update task grade information of a pre-registered worker by using at least one of the work period information, the error correction information, and the evaluation information of users using the subtitle content, for subtitle content that has been completed by the pre-registered worker.
- the member management unit 133 may refer to the translation request information of the client who requested the corresponding subtitle content, perform a task performance evaluation of the corresponding worker by comparing the client's requested completion date with the worker's task duration for the subtitle content, and update the task grade information of the corresponding worker based on the performed task performance evaluation information.
- the member management unit 133 may update the task grade information of the corresponding worker based on the above-described task performance evaluation information and the inspection evaluation information of the inspector who verifies the subtitle content completed by the worker.
- the member management unit 133 may also update the task grade information of the worker who worked on the subtitle content based on the above-described task performance evaluation information, the inspection evaluation information, and the evaluation information of users using the corresponding subtitle content.
- the user terminal device 200 may be the client's first user terminal device 200 - 1 or the worker's second user terminal device 200 - 2 .
- FIG. 7 is a block diagram of a user terminal device according to an embodiment of the present invention.
- the user terminal device 200 may be an electronic device such as a desktop, laptop, smart phone, or tablet. As shown in FIG. 7 , such a user terminal device 200 may include a communication unit 210, an input unit 220, a display unit 230, and a control unit 240.
- the communication unit 210 performs data communication with the server 100 providing the caption service, and may transmit and receive project information on the content video requested by the client, caption content completed by the worker, information on the caption content reviewed by the inspector, and the like.
- the communication unit 210 may include a wireless communication module, such as a short-distance communication module or a wireless LAN module, and a connector including at least one of wired communication modules such as a high-definition multimedia interface (HDMI), a universal serial bus (USB), and an Institute of Electrical and Electronics Engineers (IEEE) 1394 interface.
- the input unit 220 is an input means for receiving various user commands and transmitting them to the control unit 240 to be described later.
- through the input unit 220, a user command for accessing the server 100 providing the subtitle service, a translation request command for requesting a translation work for the client's content video, a worker's task assignment command for the requested translation work, and the like may be input.
- Such an input unit 220 may include a microphone (not shown) for receiving user voice commands, an operation unit (not shown) implemented as a keypad equipped with various function keys, numeric keys, special keys, text keys, and the like, and a touch input unit (not shown) that receives a user's touch command through the display unit 230 described below.
- the display unit 230 displays various content images, an execution icon corresponding to each registered application, and an execution screen of an application corresponding to the selected icon.
- the display unit 230 displays a caption service window provided by the server 100 when the user terminal device 200 accesses the server 100 providing the caption service.
- the display unit 230 may be implemented with a liquid crystal display (LCD), an organic light emitting display (OLED), or the like.
- the display unit 230 may be implemented in the form of a touch screen forming a mutually layered structure together with a touch input unit (not shown) that receives a user's touch command.
- the control unit 240 controls the overall operation of each component constituting the user terminal device 200 .
- the controller 240 controls the display unit 230 to display a caption service window according to a user command input through the input unit 220 .
- the display unit 230 may display the closed caption service window provided in the server 100 according to the control command of the controller 240 .
- the user may request a translation job for the content video or perform a translation job for the requested content video through the caption service window displayed on the user terminal device 200 .
- the user may perform an inspection work on translated caption content through a caption service window displayed on the user terminal device 200 or perform a capability evaluation test of an unregistered operator.
- FIGS. 8 and 9 are exemplary views of displaying a caption service window for a requester in a user terminal device according to an embodiment of the present invention.
- the client's first user terminal device 200-1 accessing the server 100 providing the caption service may display a caption service window 800 provided by the server 100.
- a project creation UI (Project Center) 810 for a client may be displayed on one area of the caption service window 800 .
- the first user terminal device 200-1 displays a first client window 820 for inputting translation request information for the content video to be requested by the client.
- the first client window 820 may include at least one of a first UI 821 for setting at least one of a source-language work for the source language included in the content video and a translation work into the language requested by the client, and a second UI 822 for inputting address information of the content video. Accordingly, the client may set the type of subtitle to be inserted into the content video through the first UI 821 and input address information about the content video through the second UI 822.
- the first client window 820 may further include a third UI 823 for setting categories (movie subtitles, games, documents, etc.) for content images requested by the client. Accordingly, the client may set a category for a content image to be requested through the third UI 823 of the first client window 820 .
- the first user terminal device 200-1 transmits the translation request information set by the client to the server 100, and the server 100 transmits the translation request information set by the client to the server 100. It is possible to generate task information for the translation request information received from ) and store it in the task information storage unit 320 of the storage unit 120 .
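As a rough Python sketch, the request-to-task flow described above might be modeled as follows. The field names, the in-memory task store, and the example values are assumptions made for illustration; they are not taken from the specification.

```python
from dataclasses import dataclass
from typing import List, Optional
import itertools

@dataclass
class ImageInfo:
    address: str                     # URL of the content video (entered via the second UI 822)
    title: Optional[str] = None
    description: Optional[str] = None

@dataclass
class TranslationRequest:
    source_language: str             # chosen through the first UI 821
    target_language: str
    image_info: ImageInfo
    category: Optional[str] = None   # e.g. "movie subtitles", set via the third UI 823

_task_ids = itertools.count(1)
TASK_STORE: List[dict] = []          # stand-in for the task information storage unit 320

def register_task(request: TranslationRequest) -> dict:
    """Generate task information for a received translation request and store it."""
    task = {"task_id": next(_task_ids), "status": "requested", "request": request}
    TASK_STORE.append(task)
    return task

task = register_task(TranslationRequest(
    source_language="ko", target_language="en",
    image_info=ImageInfo(address="https://example.com/video/123", title="Sample clip"),
    category="movie subtitles"))
print(task["task_id"], task["status"])   # 1 requested
```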
- The first user terminal device 200-1 displays a second client window 830 for searching the client's projects.
- The second client window 830 may include at least one of a first UI 831 for setting the progress status of a project the client has requested for translation, a second UI 832 for searching by project execution period, a third UI 833 for selecting the source language of the content video, a fourth UI 834 for selecting the translation language, and a fifth UI 835 for searching for a specific project.
- The second client window 830 may further include a sixth UI 836 that displays a search result corresponding to the client's user command for at least one of the first to fifth UIs 831 to 835.
- For example, the client may request a search for projects, among those the client has commissioned, for which the translation work on the subtitle content has been completed.
- In this case, the first user terminal device 200-1 may receive from the server 100 a search result list of projects for which the translation work has been completed and display the list on the sixth UI 836.
- The first user terminal device 200-1 can thus display a search result list for project A on the sixth UI 836 based on the information received from the server 100.
- The client may also request, through the first UI 831, a search for projects among those the client has commissioned in which work on the subtitle content is still in progress.
- In this case, the first user terminal device 200-1 can display a search result list for project B on the sixth UI 836 based on the information received from the server 100.
- FIGS. 10 to 12 are exemplary views showing a caption service window for a worker displayed on a user terminal device according to an embodiment of the present invention.
- The worker's second user terminal device 200-2 may display the caption service window 800 provided by the server 100.
- A project task UI (Career Center) 840 for the worker may be displayed in one area of the subtitle service window 800, and when an icon for searching tasks to work on is selected from among the icons included in the project task UI 840, the second user terminal device 200-2 displays a first worker window 850 for searching tasks on which translation work is possible.
- The first worker window 850 may include at least one of a first UI 851 for selecting the source language included in the content image, a second UI 852 for selecting the language the client has requested for translation, a third UI 853 for selecting the work period, a fourth UI 854 for selecting the category of the content image, and a fifth UI 855 for setting the difficulty level of the task.
- The first worker window 850 may further include a sixth UI 856 for displaying a search result corresponding to the worker's user command for at least one of the first to fifth UIs 851 to 855.
- For example, the worker may select the source language of the content video as a first language (Korean) through the first UI 851, select the language requested for translation as a second language (English) through the second UI 852, and select the work period as one month through the third UI 853.
- In this case, the second user terminal device 200-2 receives from the server 100 a search result list of at least one piece of task information corresponding to the worker's user command, and can display it on the sixth UI 856.
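The filtering behind such a task search might look like the minimal Python sketch below; the task fields and the sample records are invented purely for illustration.

```python
# Each dict stands in for one stored piece of task information.
tasks = [
    {"id": 1, "source": "ko", "target": "en", "months": 1, "category": "movie subtitles", "difficulty": 2},
    {"id": 2, "source": "ko", "target": "ja", "months": 2, "category": "games", "difficulty": 4},
    {"id": 3, "source": "ko", "target": "en", "months": 1, "category": "documents", "difficulty": 1},
]

def search_tasks(source=None, target=None, months=None, category=None, max_difficulty=None):
    """Return tasks matching every criterion the worker actually set (None = ignore)."""
    result = []
    for t in tasks:
        if source and t["source"] != source:
            continue
        if target and t["target"] != target:
            continue
        if months and t["months"] != months:
            continue
        if category and t["category"] != category:
            continue
        if max_difficulty is not None and t["difficulty"] > max_difficulty:
            continue
        result.append(t)
    return result

# Worker picks Korean -> English with a one-month work period (first to third UIs).
print(search_tasks(source="ko", target="en", months=1))   # tasks 1 and 3
```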
- When a selection command for the project search icon 842 is input by the worker, the second user terminal device 200-2 may display a second worker window 860 for searching the projects assigned to the worker.
- The second worker window 860 may include at least one of a first UI 861 for selecting the task type, a second UI 862 for searching by task progress state, a third UI 863 for selecting the remaining work period, a fourth UI 864 for selecting the source language included in the content video, and a fifth UI 865 for selecting the language to be translated into.
- The second worker window 860 may further include a sixth UI 866 that displays a search result corresponding to the worker's user command for at least one of the first to fifth UIs 861 to 865.
- For example, the worker may select a translation-related task type through the first UI 861, select the in-progress work state through the second UI 862, and select the translation language as a second language (English) through the fifth UI 865.
- In this case, the second user terminal device 200-2 receives from the server 100 a search result list of projects for which translation-related tasks are being performed in the second language, and can display it on the sixth UI 866.
- As shown in FIG. 12, the second user terminal device 200-2 may display a subtitle content creation tool window 870 for performing work on a selected project.
- Specifically, when the worker inputs a selection command for a first project among the projects included in the search result list, the second user terminal device 200-2 displays the content video related to the selected first project in a first area 871 of the subtitle content creation tool window 870, displays a subtitle content creation tool UI for generating subtitle content for each section of the content video in a second area 872, and displays, in a third area 873, a subtitle content editing tool UI for editing the playback period of the per-section subtitle content generated through the creation tool UI.
- Accordingly, through the second area 872, the worker may translate the source language of the content video displayed in the first area 871, section by section, into the language requested by the client.
- For example, the source language may be included in a first section (00:00:00 to 00:00:03) of the content video.
- In this case, the worker can translate the source language of the first section into the language requested by the client through the second area 872.
- In this way, the second area 872 may display first to third subtitle contents 01 to 03 for each section of the content video translated by the worker.
- Meanwhile, based on the time information corresponding to each of the first to third subtitle contents, the second user terminal device 200-2 may display the playback period of the first to third subtitle contents 01 to 03 against the playback time of the content video in the form of bars in the third area 873. Therefore, the worker can edit the playback period of each of the first to third subtitle contents by adjusting the length of the bar corresponding to each of them displayed in the third area 873.
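A simple way to picture the per-section subtitle data and the playback-period editing described above is the Python sketch below. The data structure and the editing rule are assumptions for illustration; the patent does not prescribe either.

```python
from dataclasses import dataclass

@dataclass
class SubtitleSection:
    index: int
    start: float      # playback start within the content video, in seconds
    end: float        # playback end, in seconds
    text: str         # translated caption for this section

    def duration(self) -> float:
        return self.end - self.start

sections = [
    SubtitleSection(1, 0.0, 3.0, "Hello, welcome back."),
    SubtitleSection(2, 3.0, 6.5, "Today we look at the new release."),
    SubtitleSection(3, 6.5, 9.0, "Let's get started."),
]

def edit_playback_period(section: SubtitleSection, new_start: float, new_end: float) -> None:
    """Adjust a section's playback period, refusing obviously invalid ranges."""
    if new_end <= new_start:
        raise ValueError("end must come after start")
    section.start, section.end = new_start, new_end

# Stretch the first section from 3 to 3.5 seconds, as a worker might by dragging a bar.
edit_playback_period(sections[0], 0.0, 3.5)
print(sections[0].duration())   # 3.5
```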
- So far, the operation in which the client's first user terminal device 200-1 registers project information for the content image requested by the client in the server 100 through the subtitle service window, and the operation in which the worker's second user terminal device 200-2 performs translation work for a project previously registered in the server 100 through the subtitle service window, have been described in detail.
- Hereinafter, a method of providing a caption service using an artificial intelligence learning model in the server 100 according to the present invention and a method of controlling the server 100 that provides the caption service will be described in detail.
- FIG. 13 is a flowchart of a method for providing a caption service using an artificial intelligence learning model in a server according to an embodiment of the present invention.
- First, the server 100 performs data communication with at least one of the first user terminal device 200-1 of a client requesting translation of a content image and the second user terminal device 200-2 of a worker performing the translation work (S1310).
- The server 100 then inputs image information about the content image requested by the client into the artificial intelligence learning model (S1320 and S1330).
- Specifically, when the server 100 receives translation request information including at least one of worker request information, working condition information, and image information from the first user terminal device 200-1, it inputs the image information included in the translation request information into the artificial intelligence learning model.
- Here, the image information may include at least one of address information of the content image, title information, and description information of the content image.
- Thereafter, the server 100 obtains, through the artificial intelligence learning model, a worker list of workers capable of translating the content image, and transmits the obtained worker list to the first user terminal device 200-1 (S1340).
- Specifically, the artificial intelligence learning model described above learns the worker information stored in the storage unit through the data learning unit and, based on the learned worker information, classifies the categories each worker can translate by field. Therefore, when image information about the content image the client has requested to be translated is input, the artificial intelligence learning model inputs the worker information learned through the data learning unit and the input image information into the data acquisition unit of the artificial intelligence learning model, and thereby acquires a list of workers capable of translating the content image.
- Here, the worker information may include at least one of profile information for each worker, subtitle content completed by each worker, and task grade information evaluated for each worker.
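As a toy stand-in for the data learning and data acquisition units, the worker-list lookup might be shaped like the sketch below. Real behaviour would come from a trained model, so the scoring rule and the worker records here are assumptions used only to show the flow.

```python
workers = [
    {"name": "worker_a", "grade": 4, "categories": {"movie subtitles", "documents"}, "pair": ("ko", "en")},
    {"name": "worker_b", "grade": 2, "categories": {"games"},                        "pair": ("ko", "en")},
    {"name": "worker_c", "grade": 5, "categories": {"movie subtitles"},              "pair": ("ko", "ja")},
]

def acquire_worker_list(category: str, source: str, target: str, limit: int = 10):
    """Return workers able to translate the requested pair and category, best task grade first."""
    candidates = [
        w for w in workers
        if w["pair"] == (source, target) and category in w["categories"]
    ]
    return sorted(candidates, key=lambda w: w["grade"], reverse=True)[:limit]

# Client asks for Korean -> English movie subtitles; the list is sent back to the client.
print([w["name"] for w in acquire_worker_list("movie subtitles", "ko", "en")])  # ['worker_a']
```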
- The server 100 transmits the obtained worker list to the first user terminal device 200-1. Accordingly, the client may select, from the worker list displayed on the first user terminal device 200-1, the worker to whom the translation job will be requested.
- When such a selection command is input, the first user terminal device 200-1 transmits the input selection command to the server 100.
- When a selection command for at least one worker included in the worker list is received from the first user terminal device 200-1, the server 100 transmits a task assignment message to the second user terminal device 200-2 of the worker corresponding to the selection command (S1350). Accordingly, the worker may access the server 100 through the second user terminal device 200-2 and perform the translation job for the content image requested by the client.
- Meanwhile, in step S1320, when the server 100 receives translation request information including at least one of an automatic translation command, working condition information, and image information from the first user terminal device 200-1, the server 100 obtains the content image based on the address information included in the image information, and inputs the image data of the obtained content image into the language recognition unit of the artificial intelligence learning model (S1360 and S1370).
- When the first language included in the content image is recognized through the language recognition unit, the server 100 inputs the recognized first language into the data acquisition unit to acquire, from the first language, the second language requested by the client (S1380).
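Schematically, the automatic-translation path S1360 to S1380 could be wired together as below. The functions `recognize_language()` and `translate()` are placeholders for the language recognition unit and data acquisition unit, which the patent describes but does not implement in code.

```python
def fetch_content(address: str) -> bytes:
    # In practice this would download the content video found at the address information.
    return b"fake video bytes for " + address.encode()

def recognize_language(content: bytes) -> str:
    # Placeholder: a real system would run speech/language identification here.
    return "ko"

def translate(first_language: str, target_language: str, content: bytes) -> str:
    # Placeholder: a real system would run speech recognition plus machine translation
    # from the recognized first language into the language requested by the client.
    return f"[subtitles translated from {first_language} to {target_language}]"

def auto_translate(address: str, requested_language: str) -> str:
    content = fetch_content(address)              # S1360: obtain the content image
    first_language = recognize_language(content)  # S1370: recognize the first language
    return translate(first_language, requested_language, content)  # S1380

print(auto_translate("https://example.com/video/123", "en"))
```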
- Thereafter, the server 100 inputs the acquired second language into the translation verification unit of the artificial intelligence learning model.
- Accordingly, the translation verification unit may generate subtitle content for the content video requested by the client by analyzing the input second language and converting it into language appropriate to the context.
- In addition, when subtitle content is updated in the storage unit, the translation verification unit may convert the language corresponding to the updated subtitle content into language appropriate to the context.
- In this case, the server 100 may input, into the data acquisition unit, at least one of the error-corrected inspection result produced by the translation verification unit for the updated subtitle content, the work period information for the subtitle content, and the evaluation information of users who used the subtitle content, and may obtain a task level value of the worker who worked on the subtitle content based on the input information.
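The patent names the signals that feed the task level value but not how they are combined; the weighted score below is an invented example formula, shown only to make the inputs concrete.

```python
def task_level_value(error_corrections: int, sections: int,
                     days_taken: float, days_allowed: float,
                     user_rating: float) -> float:
    """Combine inspection, schedule, and user-evaluation signals into one 0..5 score."""
    accuracy = 1.0 - min(error_corrections / max(sections, 1), 1.0)   # fewer corrections is better
    timeliness = min(days_allowed / max(days_taken, 0.1), 1.0)        # finishing on time caps at 1
    rating = max(0.0, min(user_rating / 5.0, 1.0))                    # normalize a 0..5 rating
    return round(5.0 * (0.5 * accuracy + 0.2 * timeliness + 0.3 * rating), 2)

# Example: 3 corrections over 120 sections, delivered in 20 of 30 allowed days, rated 4.2/5.
print(task_level_value(error_corrections=3, sections=120,
                       days_taken=20, days_allowed=30, user_rating=4.2))
```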
- FIG. 14 is a flowchart of a control method of a server providing a caption service according to the present invention.
- First, the server 100 providing the caption service receives translation request information including at least one of working condition information and image information from the client's first user terminal device 200-1 (S1410).
- The server 100 then determines whether the content video requested by the client is valid based on the image information included in the received translation request information, and generates and stores task information for the translation request information whose validity has been verified (S1420).
- Here, the image information may include at least one of address information of the content image, title information, and description information of the content image.
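A minimal validity check might look as follows, assuming that "validity" means the address information points at a well-formed, reachable URL; the patent leaves the exact test open.

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def is_valid_content_address(address: str, timeout: float = 5.0) -> bool:
    parsed = urlparse(address)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False                      # malformed address information
    try:
        with urlopen(Request(address, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400      # the content image is reachable
    except OSError:                       # network errors, timeouts, HTTP errors
        return False

# Task information would be generated only for requests that pass a check like this.
print(is_valid_content_address("not a url"))   # False
```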
- Thereafter, at the worker's request, the server 100 transmits a subtitle content creation tool for translation work on a content image related to at least one piece of translation request information stored in the storage unit to the worker's second user terminal device 200-2.
- Accordingly, the worker may perform the translation job for the content image requested by the client using the subtitle content creation tool displayed on the second user terminal device 200-2.
- When the translation work is completed, the second user terminal device 200-2 transmits the subtitle content for which the translation work has been completed to the server 100.
- However, the present invention is not limited thereto; the subtitle content translated by the worker may be stored in the server 100 in real time, and when the translation work is completed, the second user terminal device 200-2 may transmit a completion message for the translation work to the server 100.
- When the server 100 receives the subtitle content for which the translation work has been completed, or receives the translation work completion message, from the second user terminal device 200-2, the server 100 stores the subtitle content translated by the worker (S1440).
- Specifically, the server 100 may store the subtitle content for which the translation work has been completed in a temporary storage unit, and may store, in a final storage unit, the subtitle content that has been inspected by an inspector among the subtitle contents stored in the temporary storage unit.
- To this end, when an inspection work command is received from the inspector's user terminal device 200, the server 100 provides a list of the subtitle contents stored in the temporary storage unit to the inspector's user terminal device 200. Accordingly, the inspector's user terminal device 200 may display the list received from the server 100.
- When the server 100 receives, from the inspector's user terminal device 200, a selection command for a first subtitle content among the subtitle contents included in the list, the server 100 transmits web information of an inspection work tool for the first subtitle content to the inspector's user terminal device 200. Accordingly, the inspector inspects the first subtitle content through the inspector's user terminal device 200, and when the inspection is completed, the inspector's user terminal device 200 transmits a message indicating that the inspection of the first subtitle content has been completed to the subtitle service server 100.
- When such an inspection completion message is received, the subtitle service server 100 deletes the first subtitle content stored in the temporary storage unit and stores the first subtitle content, for which the inspector has completed the inspection, in the final storage unit.
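The two-stage storage flow above can be pictured with the sketch below: translated subtitles wait in temporary storage until an inspector signs off, then move to final storage. The dict-based stores are illustrative stand-ins for the patent's storage units.

```python
temporary_storage = {}   # subtitle_id -> subtitle content awaiting inspection
final_storage = {}       # subtitle_id -> inspected subtitle content

def store_translated(subtitle_id: str, content: str) -> None:
    temporary_storage[subtitle_id] = content

def complete_inspection(subtitle_id: str) -> None:
    """On an inspection-completion message, move the content out of temporary storage."""
    if subtitle_id not in temporary_storage:
        raise KeyError(f"no pending subtitle content with id {subtitle_id}")
    final_storage[subtitle_id] = temporary_storage.pop(subtitle_id)

store_translated("project-A-ep01", "1\n00:00:00,000 --> 00:00:03,000\nHello.")
complete_inspection("project-A-ep01")
print(list(temporary_storage), list(final_storage))   # [] ['project-A-ep01']
```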
- Meanwhile, the server 100 may register the profile information and worker information of an unregistered worker. Specifically, when a registration request command is received from the second user terminal device 200-2 of an unregistered worker, the server 100 evaluates the capability test of the unregistered worker and generates task grade information for the unregistered worker. Thereafter, the server 100 may generate and register profile information including at least one of the personal information, history information, cost information, and evaluation information provided by the unregistered worker, together with worker information including the task grade information.
- In addition, the server 100 may update the worker information of registered workers. Specifically, the server 100 may update the task grade information of a previously registered worker using at least one of the work period information, error correction information, and user evaluation information for subtitle content completed by the registered worker.
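An illustrative member-management flow is sketched below; the capability-test scoring and the grade-update rule are assumptions, since the patent only names the inputs that feed them.

```python
profiles = {}   # worker id -> profile / worker information

def register_worker(worker_id: str, personal_info: dict, test_score: float) -> dict:
    """Evaluate an unregistered worker's capability test and register a profile."""
    grade = 1 + int(min(max(test_score, 0.0), 100.0) // 25)   # maps a 0..100 score to grades 1..5
    profiles[worker_id] = {"personal": personal_info, "grade": grade, "completed": []}
    return profiles[worker_id]

def update_grade(worker_id: str, new_task_level: float) -> None:
    """Blend the latest task level value into the registered worker's grade."""
    p = profiles[worker_id]
    p["completed"].append(new_task_level)
    p["grade"] = round(0.7 * p["grade"] + 0.3 * new_task_level, 1)

register_worker("w-17", {"name": "Jane"}, test_score=82)
update_grade("w-17", new_task_level=4.7)
print(profiles["w-17"]["grade"])   # 4.2
```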
- Meanwhile, the control method of the server 100 described above may be coded in software and stored in a non-transitory readable medium.
- Such a non-transitory readable medium may be mounted in and used by various devices.
- A non-transitory readable medium is not a medium that stores data for a short moment, such as a register, a cache, or a memory, but a medium that stores data semi-permanently and can be read by a device. Specifically, it may be a CD, a DVD, a hard disk, a Blu-ray disc, a USB, a memory card, a ROM, or the like.
Claims (18)
- A server for providing a subtitle service, the server comprising: a communication unit that performs data communication with at least one of a first user terminal device of a client requesting translation of a content image and a second user terminal device of a worker performing the translation work; a storage unit that stores an artificial intelligence learning model that produces a worker search list and evaluates a worker's task capability based on learned worker information; and a control unit that, according to a worker recommendation command of the client, inputs image information about the content image into the artificial intelligence learning model to obtain a worker list of workers capable of translation, and controls the communication unit to transmit the obtained worker list to the first user terminal device, wherein the worker information includes at least one of profile information for each worker, subtitle content completed by each worker, and task grade information evaluated for each worker.
- The server of claim 1, wherein the artificial intelligence learning model comprises: a data learning unit that learns the worker information stored in the storage unit and classifies, based on the learned worker information, the categories each worker can translate by field; and a data acquisition unit that obtains a list of workers capable of translating the content image based on the image information and the worker information learned through the data learning unit.
- The server of claim 2, wherein, when translation request information including at least one of worker request information, working condition information, and image information is received, the control unit inputs the image information into the artificial intelligence learning model and transmits the worker list obtained through the data acquisition unit to the first user terminal device, and, when a selection command for at least one worker included in the worker list is received from the first user terminal device, controls the communication unit to transmit a task assignment message to the second user terminal device of the worker corresponding to the selection command, and wherein the image information includes at least one of address information of the content image, title information, and description information of the content image.
- The server of claim 2, wherein, when translation request information including at least one of an automatic translation command, working condition information, and image information is received from the first user terminal device, the control unit obtains the content image based on the address information included in the image information, extracts audio data from the image frames, and inputs the extracted audio data into the artificial intelligence learning model, wherein the artificial intelligence learning model further comprises a language recognition unit that recognizes a first language related to the input content data, and wherein the data acquisition unit obtains the second language requested by the client from the first language recognized through the language recognition unit.
- The server of claim 4, wherein the artificial intelligence learning model further comprises a translation verification unit that converts, based on previously learned subtitle content, the second language obtained through the data acquisition unit, or the language corresponding to subtitle content updated in the storage unit, into language appropriate to the context.
- The server of claim 5, wherein the data acquisition unit obtains a task level value of the worker who worked on the subtitle content using at least one of an error-corrected inspection result from the translation verification unit for the updated subtitle content, work period information for the subtitle content, and evaluation information of users who used the subtitle content.
- The server of claim 1, wherein the control unit comprises: a task creation unit that, when translation request information including at least one of working condition information and image information is received from the first user terminal device, determines whether the content image is valid based on the image information and then generates and stores task information related to the translation request information whose validity has been verified; and a task execution unit that provides a subtitle content creation tool for translation work on a content image related to at least one piece of translation request information stored in the storage unit to the second user terminal device and, when subtitle content for which the translation work has been completed is received from the second user terminal device, stores the subtitle content in the storage unit, wherein the image information includes at least one of address information of the content image, title information, and description information of the content image.
- The server of claim 7, wherein the storage unit comprises: a temporary storage unit that stores the subtitle content for which the translation work has been completed; and a final storage unit that stores, among the subtitle contents stored in the temporary storage unit, subtitle content for which translation inspection has been completed, and wherein the task execution unit provides an inspection work tool for translation inspection of the language included in at least one subtitle content stored in the temporary storage unit to a third user terminal device of an inspector and, when an inspection completion message is received from the third user terminal device, stores the inspected subtitle content in the final storage unit.
- The server of claim 7, wherein the control unit further comprises a member management unit that, when a registration request command is received from a second user terminal device of an unregistered worker, evaluates the test capability of the unregistered worker to generate task grade information for the unregistered worker, generates profile information including the task grade information and at least one of personal information, history information, cost information, and evaluation information provided by the unregistered worker, and stores the profile information in the storage unit.
- A method of providing a subtitle service using an artificial intelligence learning model in a server, the method comprising: performing data communication with at least one of a first user terminal device of a client requesting translation of a content image and a second user terminal device of a worker performing the translation work; inputting, according to a worker recommendation command of the client, image information about the content image into the artificial intelligence learning model; obtaining, through the artificial intelligence learning model, a worker list of workers capable of translating the content image; and transmitting the obtained worker list to the first user terminal device, wherein the worker information includes at least one of profile information for each worker, subtitle content completed by each worker, and task grade information evaluated for each worker.
- The method of claim 10, wherein the obtaining comprises learning the worker information stored in the storage unit through a data learning unit of the artificial intelligence learning model, classifying the categories each worker can translate by field based on the learned worker information, and inputting the learned worker information and the image information into a data acquisition unit of the artificial intelligence learning model to obtain a list of workers capable of translating the content image.
- The method of claim 11, wherein the inputting comprises inputting the image information into the artificial intelligence learning model when translation request information including at least one of worker request information, working condition information, and image information is received, the method further comprising transmitting, when a selection command for at least one worker included in the worker list is received from the first user terminal device, a task assignment message to the second user terminal device of the worker corresponding to the selection command, wherein the image information includes at least one of address information of the content image, title information, and description information of the content image.
- The method of claim 11, wherein the inputting comprises, when translation request information including at least one of an automatic translation command, working condition information, and image information is received from the first user terminal device, obtaining the content image based on the address information included in the image information and inputting it into a language recognition unit of the artificial intelligence learning model, the method further comprising, when a first language related to the audio data of the content image is recognized through the language recognition unit, inputting the first language into the data acquisition unit to obtain, from the first language, the second language requested by the client.
- The method of claim 13, further comprising converting, in a translation verification unit of the artificial intelligence learning model, the second language, or the language corresponding to subtitle content updated in the storage unit, into language appropriate to the context based on previously learned subtitle content.
- The method of claim 14, further comprising obtaining, in the data acquisition unit, a task level value of the worker who worked on the subtitle content using at least one of an error-corrected inspection result from the translation verification unit for the updated subtitle content, work period information for the subtitle content, and evaluation information of users who used the subtitle content.
- A method of controlling a server providing a subtitle service, the method comprising: receiving translation request information including at least one of working condition information and image information from a first user terminal device of a client; determining whether the content image is valid based on the image information and then storing the translation request information whose validity has been verified; transmitting, according to a worker's request, a subtitle content creation tool for translation work on a content image related to at least one piece of translation request information stored in the storage unit to a second user terminal device of the worker; and storing, when subtitle content for which the translation work has been completed is received from the second user terminal device, the subtitle content, wherein the image information includes at least one of address information of the content image, title information, and description information of the content image.
- The method of claim 16, wherein the storing comprises: storing the subtitle content for which the translation work has been completed in a temporary storage unit; transmitting, according to an inspector's work request, an inspection work tool for translation inspection of the language included in at least one subtitle content stored in the temporary storage unit to a third user terminal device of the inspector; and storing, when an inspection completion message is received from the third user terminal device, the inspected subtitle content in the final storage unit.
- The method of claim 16, further comprising: registering worker information including profile information and task grade information of an unregistered worker; and updating the registered worker information, wherein the registering comprises generating the profile information including at least one of personal information, history information, cost information, and evaluation information of the unregistered worker, and, when a registration request command is received from a second user terminal device of the unregistered worker, evaluating the test capability of the unregistered worker to generate the task grade information of the unregistered worker, and wherein the updating comprises updating the task grade information of the registered worker using at least one of work period information, error correction information, and user evaluation information for subtitle content completed by the registered worker.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/430,056 US11966712B2 (en) | 2021-07-04 | 2021-07-08 | Server and method for providing multilingual subtitle service using artificial intelligence learning model, and method for controlling server |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2021-0087498 | 2021-07-04 | ||
| KR1020210087498A KR102431383B1 (ko) | 2021-07-04 | 2021-07-04 | Server and method for providing multilingual subtitle service using artificial intelligence learning model, and method for controlling server |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023282371A1 true WO2023282371A1 (ko) | 2023-01-12 |
Family
ID=82846877
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2021/008757 Ceased WO2023282371A1 (ko) | 2021-07-08 | Server and method for providing multilingual subtitle service using artificial intelligence learning model, and method for controlling server |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11966712B2 (ko) |
| KR (1) | KR102431383B1 (ko) |
| WO (1) | WO2023282371A1 (ko) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12141353B1 (en) | 2023-07-07 | 2024-11-12 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for displaying dynamic closed-captioning content |
| KR102726106B1 (ko) * | 2024-02-13 | 2024-11-05 | 주식회사 유엑스플러스코퍼레이션 | Apparatus for providing translated subtitles and method for providing translated subtitles using the same |
| KR102800075B1 (ko) | 2024-11-27 | 2025-04-29 | (주)페르소나에이아이 | Method for building a multilingual AI doc system including Hindi, English, and Korean |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101802674B1 (ko) * | 2016-11-24 | 2017-11-28 | 유남민 | Translation management system |
| KR20190141331A (ko) * | 2018-06-14 | 2019-12-24 | 이동준 | System and method for providing video subtitle translation service |
| KR20200142282A (ko) * | 2019-06-12 | 2020-12-22 | 삼성전자주식회사 | Electronic device for providing content translation service and control method thereof |
| KR102244448B1 (ko) * | 2020-10-05 | 2021-04-27 | 주식회사 플리토 | Method for providing professional translation service platform |
| KR102258000B1 (ko) * | 2020-10-05 | 2021-05-31 | 주식회사 플리토 | Method and server for providing image translation service in association with a plurality of user terminals |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102118643B (zh) * | 2009-12-30 | 2015-05-27 | 新奥特(北京)视频技术有限公司 | 一种网络字幕播放系统及其播放方法 |
| US9026446B2 (en) * | 2011-06-10 | 2015-05-05 | Morgan Fiumi | System for generating captions for live video broadcasts |
| US10909329B2 (en) * | 2015-05-21 | 2021-02-02 | Baidu Usa Llc | Multilingual image question answering |
| US11113599B2 (en) * | 2017-06-22 | 2021-09-07 | Adobe Inc. | Image captioning utilizing semantic text modeling and adversarial learning |
2021
- 2021-07-04 KR KR1020210087498A patent/KR102431383B1/ko active Active
- 2021-07-08 US US17/430,056 patent/US11966712B2/en active Active
- 2021-07-08 WO PCT/KR2021/008757 patent/WO2023282371A1/ko not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| US11966712B2 (en) | 2024-04-23 |
| US20240005105A1 (en) | 2024-01-04 |
| KR102431383B1 (ko) | 2022-08-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020262958A1 (en) | Electronic apparatus and control method thereof | |
| WO2019203488A1 (en) | Electronic device and method for controlling the electronic device thereof | |
| WO2020017898A1 (en) | Electronic apparatus and control method thereof | |
| WO2016028042A1 (en) | Method of providing visual sound image and electronic device implementing the same | |
| WO2015167160A1 (en) | Command displaying method and command displaying device | |
| WO2023282371A1 (ko) | Server and method for providing multilingual subtitle service using artificial intelligence learning model, and method for controlling server | |
| WO2014107006A1 (en) | Display apparatus and control method thereof | |
| WO2019027259A1 (en) | APPARATUS AND METHOD FOR PROVIDING SUMMARY INFORMATION USING ARTIFICIAL INTELLIGENCE MODEL | |
| WO2019177344A1 (en) | Electronic apparatus and controlling method thereof | |
| WO2016093552A2 (en) | Terminal device and data processing method thereof | |
| WO2019083275A1 (ko) | Electronic device for searching for related images and control method therefor | |
| EP3635605A1 (en) | Electronic device and method for controlling the electronic device | |
| EP3602334A1 (en) | Apparatus and method for providing summarized information using an artificial intelligence model | |
| WO2016126007A1 (en) | Method and device for searching for image | |
| WO2020159288A1 (ko) | Electronic device and control method therefor | |
| WO2020040517A1 (en) | Electronic apparatus and control method thereof | |
| WO2019235793A1 (en) | Electronic device and method for providing information related to image to application through input unit | |
| EP3230902A2 (en) | Terminal device and data processing method thereof | |
| WO2023132657A1 (ko) | Apparatus, method, and program for providing product trend prediction service | |
| WO2016190652A1 (en) | Electronic device, information providing system and information providing method thereof | |
| WO2020190103A1 (en) | Method and system for providing personalized multimodal objects in real time | |
| WO2025005499A1 (ko) | Apparatus and method for providing artificial intelligence-based foley sound | |
| WO2019107674A1 (en) | Computing apparatus and information input method of the computing apparatus | |
| WO2016036049A1 (ko) | Search service providing apparatus, system, method, and computer program | |
| WO2022145946A1 (ko) | Language learning system and method based on artificial intelligence recommendation of learning videos and example sentences | |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | WWE | Wipo information: entry into national phase | Ref document number: 17430056; Country of ref document: US |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21949411; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21949411; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.06.2024) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21949411; Country of ref document: EP; Kind code of ref document: A1 |