
WO2025037801A1 - Apparatus and method for constructing a summary model that summarizes a patent document, and patent document summarization apparatus using the summary model - Google Patents

Info

Publication number
WO2025037801A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
model
transfer learning
topic
patent document
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/011221
Other languages
English (en)
Korean (ko)
Inventor
이성주
박상현
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SNU R&DB Foundation
Original Assignee
Seoul National University R&DB Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020240096112A (external priority: KR20250026736A)
Application filed by Seoul National University R&DB Foundation filed Critical Seoul National University R&DB Foundation
Publication of WO2025037801A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/31: Indexing; data structures therefor; storage structures
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/338: Presentation of query results
    • G06F 16/34: Browsing; visualisation therefor
    • G06F 16/36: Creation of semantic tools, e.g. ontology or thesauri
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/096: Transfer learning

Definitions

  • The present invention relates to an apparatus and method for constructing a summary model that summarizes a patent document, and to a patent document summarization apparatus that summarizes a patent document using the summary model.
  • As patent applications increase, a vast amount of basic data for trend analysis and strategy formulation is created.
  • However, patents are composed of complex technical terms and contain a wide variety of information, so extracting the necessary technical information from a large number of patents takes considerable time and money.
  • Quantitative technical analysis requires users to define rules in the preprocessing stage to distinguish the various types of technical information contained in patent documents, and these rules must be configured differently depending on the information required. Because users must create new rules for each kind of technical information to be analyzed, this approach incurs high costs and has limited scalability.
  • To solve the above-mentioned problem, the present invention provides a summary model construction apparatus and method that builds a summary model, formed with the structure of a large language model (LLM), which summarizes and outputs the input data based on a predetermined summary topic when the contents of at least one section of a patent document are input.
  • A summary model construction device comprises: a memory in which a summary model construction program for constructing a summary model is stored; and a processor executing the summary model construction program stored in the memory. The summary model construction program inputs a plurality of patent data into a transfer learning model through a plurality of prompts in which summary conditions for different summary topics are set, matches the patent data with output data of the transfer learning model based on the summary conditions to generate summary data for each summary topic, and transfer-learns the summary data for each summary topic into the transfer learning model to construct a summary model. The summary model is formed in the structure of a large language model (LLM) and, when at least one section content of a patent document is input, summarizes and outputs the input data based on a predetermined summary topic; the patent data includes at least one section content among the plurality of section contents of a predetermined patent document.
  • A method for constructing a summary model using a summary model construction device includes the steps of: inputting a plurality of patent data into a transfer learning model through a plurality of prompts in which summary conditions for different summary topics are set; matching the patent data with output data of the transfer learning model based on the summary conditions to generate summary data for each summary topic; and constructing a summary model by transfer learning the summary data for each summary topic into the transfer learning model. The summary model is formed in the structure of a large language model (LLM) and, when input data including at least one section content of a patent document is input, summarizes and outputs the input data based on a predetermined summary topic; the patent data includes at least one section content among the plurality of section contents of a predetermined patent document.
  • A patent document summarization device using a summary model comprises: a memory storing a summary program; and a processor executing the summary program stored in the memory. The summary program receives a plurality of patent documents and at least one summary topic, inputs the plurality of patent documents into a summary model to generate summary data for each patent document based on the summary topic, and visualizes and outputs analysis data for the plurality of summary data analyzed according to analysis conditions. The summary model is formed as a large language model (LLM) and, when input data including at least one section content of a patent document is input, summarizes and outputs the input data based on a predetermined summary topic. The summary model is constructed by inputting a plurality of patent data into a transfer learning model through a plurality of prompts in which summary conditions for different summary topics are set, matching the output data of the transfer learning model based on the summary conditions with the patent data to generate summary data for each summary topic, and transfer learning the summary data for each summary topic into the transfer learning model.
  • According to the present invention, patent documents can be easily summarized based on a summary topic without the need to apply rules for distinguishing the various types of technical information included in patent documents, thereby saving cost and time.
  • Figure 1 is a conceptual diagram schematically illustrating a summary model construction device according to one embodiment of the present invention.
  • Figure 2 is an example of prompts used in the summary model building process.
  • Figure 3 is a flowchart for explaining a method for constructing a summary model according to one embodiment of the present invention.
  • Figure 4 is a conceptual diagram schematically illustrating a patent document summarization device according to one embodiment of the present invention.
  • Figures 5 and 6 are examples of visualizing analysis data for summary data.
  • The terms first, second, etc. used in this specification are used only for the purpose of distinguishing one component from another, and do not limit the order or relationship of the components.
  • For example, a first component of the present invention may be referred to as a second component, and similarly, the second component may also be referred to as the first component.
  • Figure 1 is a conceptual diagram schematically illustrating a summary model construction device according to one embodiment of the present invention.
  • The summary model construction device (100) builds a summary model that is formed with the structure of a large language model (LLM) and that, when the contents of at least one section of a patent document are input, summarizes and outputs the corresponding input data based on a predetermined summary topic.
  • The summary model construction device (100) includes a memory (110) and a processor (120).
  • The memory (110) stores a summary model construction program that constructs a summary model.
  • The summary model construction program inputs a plurality of patent data into a transfer learning model through a plurality of prompts in which summary conditions for different summary topics are set, matches the patent data with output data of the transfer learning model based on the summary conditions to generate summary data for each summary topic, and transfer-learns the summary data for each summary topic into the transfer learning model to construct a summary model.
  • The patent data includes at least one section content among the contents of a plurality of sections of a given patent document.
  • The memory (110) should be interpreted as a general term for nonvolatile storage devices, which maintain stored information even when power is not supplied, and volatile storage devices, which require power to maintain stored information.
  • The memory (110) may temporarily or permanently store data processed by the processor (120).
  • The memory (110) may include magnetic storage media or flash storage media in addition to a volatile storage device that requires power to maintain stored information, but the scope of the present invention is not limited thereto.
  • The processor (120) executes the summary model construction program stored in the memory (110) so that the summary model is trained to summarize and output the input data based on a predetermined summary topic when the contents of at least one section of a patent document are input.
  • The processor (120) may be implemented in the form of a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), etc., but the scope of the present invention is not limited thereto.
  • The processor (120) executes the summary model construction program stored in the memory (110) and controls the hardware of the summary model construction device (100) according to the execution of the program. That is, the processor (120) can perform hardware control functions required by the program, such as file system management, memory allocation, networking, basic libraries, timers, device control (display, media, input devices, 3D, etc.), and other utilities.
  • Hereinafter, the operation of the summary model building program in building the summary model is described.
  • The summary model building program inputs multiple patent data into the transfer learning model through multiple prompts in which summary conditions for different summary topics are set.
  • A summary topic is a topic to be summarized, such as a technical field or a technical component.
  • A summary condition is set in the prompt and may consist of example sentences of the input data and the corresponding output data for the summary topic.
  • The patent data includes at least one section content among the multiple section contents of a given patent document, and may include the contents of sections such as the title of the invention, the technical field, the abstract, and claim 1.
  • The summary model is formed with the structure of a large language model (LLM); when the contents of at least one section of a patent document are input, the input data is summarized and output based on a predetermined summary topic.
  • Figure 2 is an example of a prompt in which a summary condition for summarizing a technical field is set.
  • In the prompt (10) illustrated in Figure 2, a technical field is set as the summary topic (11), and as summary conditions (12), the title of the invention, an abstract, and the first claim are given as input data, with the corresponding output data set as an example sentence.
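The prompt layout described for Figure 2 can be sketched as follows. This is an illustrative assumption, not the actual prompt of the disclosure: the section labels, the example input/output pair, and the `build_prompt` helper are all hypothetical.

```python
# Hypothetical one-shot prompt in the style described for Figure 2: a summary
# topic, a summary condition (example input and output), and the patent
# sections supplied as input data. All field names are illustrative.

EXAMPLE_INPUT = (
    "Title: Apparatus for cooling battery cells\n"
    "Abstract: A cooling apparatus for a battery pack ...\n"
    "Claim 1: A battery pack comprising a coolant channel ..."
)
EXAMPLE_OUTPUT = "Battery thermal management"

def build_prompt(topic: str, title: str, abstract: str, claim1: str) -> str:
    """Assemble a one-shot prompt for the given summary topic."""
    return (
        f"Summarize the {topic} of the following patent.\n\n"
        f"### Example\nInput:\n{EXAMPLE_INPUT}\nOutput: {EXAMPLE_OUTPUT}\n\n"
        f"### Task\nInput:\n"
        f"Title: {title}\nAbstract: {abstract}\nClaim 1: {claim1}\n"
        f"Output:"
    )

prompt = build_prompt(
    "technical field",
    "Summary model construction device",
    "A device that builds an LLM-based summary model ...",
    "A summary model construction device comprising a memory and a processor ...",
)
print(prompt)
```

The model's completion after the final `Output:` marker would then serve as the summary for that topic.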
  • The summary model building program applies multiple patent data to the transfer learning model through multiple prompts with different summary topics and summary conditions, and generates summary data for each summary topic using the output data of the transfer learning model.
  • The summary data is data that matches the output data of the transfer learning model with the patent data corresponding to that output data, and may match one or more of the section contents included in the patent data with the output data of the transfer learning model.
  • For example, when the transfer learning model summarizes multiple patent data by technical field through the prompt in Figure 2, the summary data by technical field may match the output data with the title of the invention, the abstract, and the first claim.
  • The summary model building program stores the summary data for each summary topic and transfer-learns the multiple summary data for each summary topic into the transfer learning model to build the summary model.
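The data-generation step described above can be sketched as a small loop: for each summary topic, each patent's sections pass through the transfer learning model, and each output is matched back to the input it came from, yielding one training record per (patent, topic) pair. The `stub_model` function and the record layout are hypothetical stand-ins, not the disclosed implementation.

```python
# Sketch of topic-wise summary-data generation: match each model output back
# to the patent sections that produced it. The model is stubbed.

def stub_model(topic: str, sections: dict) -> str:
    # Placeholder for the transfer learning model's completion.
    return f"summary[{topic}] of {sections['title']}"

def generate_summary_data(patents, topics):
    records = []
    for topic in topics:
        for sections in patents:
            output = stub_model(topic, sections)
            # Match the output data with the patent data it corresponds to.
            records.append({"topic": topic, "input": sections, "output": output})
    return records

patents = [{"title": "P1", "abstract": "A1", "claim1": "C1"},
           {"title": "P2", "abstract": "A2", "claim1": "C2"}]
data = generate_summary_data(patents, ["technical field", "technical component"])
print(len(data))  # one record per (patent, topic) pair
```

The resulting records are exactly the (input, output) pairs that would later be used to fine-tune the transfer learning model.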
  • The summary model building program can build a summary model using multiple transfer learning models.
  • The summary model building program can input multiple patent data through prompts in which summary conditions for different summary topics are set for each of the multiple transfer learning models, and can generate summary data for each summary topic using the output data of each transfer learning model.
  • The summary model building program can then build a summary model by transfer learning the summary data for each summary topic into one of the multiple transfer learning models.
  • Alternatively, the summary model construction program can construct a summary model using a single transfer learning model.
  • The summary model construction program can sequentially apply multiple prompts, each with a summary condition set for a different summary topic, to the transfer learning model, and input multiple patent data into the transfer learning model through these prompts to perform learning for each summary topic.
  • The transfer learning model for which learning has been completed can then be set as the summary model.
  • The summary model construction program can also receive summary data from another summary model construction device (100) and construct a summary model by transfer learning that data.
  • Transfer learning can generally use LoRA (Low-Rank Adaptation), a technique for reducing the number of trainable parameters that is mainly used for transfer learning of large language models (LLMs).
  • LoRA is an algorithm that dramatically reduces the number of trainable parameters of large language models (LLMs) by decomposing the weight update matrix into low-rank factors, and the hyperparameters required for optimal learning (e.g., batch size, learning rate, number of epochs) can be determined so as to minimize the cross-entropy loss function.
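A minimal numerical sketch of the parameter reduction LoRA provides, under assumed dimensions (1024 x 1024 weight, rank r = 8, alpha = 16); this illustrates the low-rank decomposition only and is not the training code of the disclosure.

```python
# LoRA keeps the pretrained weight W frozen and learns two low-rank factors
# B (d_out x r) and A (r x d_in); the adapted weight is W + (alpha / r) * B @ A.
import numpy as np

d_out, d_in, r, alpha = 1024, 1024, 8, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
B = np.zeros((d_out, r))             # B starts at zero, so at initialization
A = rng.normal(size=(r, d_in))       # the adapted layer equals W exactly
W_adapted = W + (alpha / r) * (B @ A)

full_params = d_out * d_in           # parameters of a full update
lora_params = d_out * r + r * d_in   # parameters LoRA actually trains
print(full_params, lora_params)      # 1048576 vs 16384 trainable parameters
assert np.allclose(W_adapted, W)     # zero-init B: no change before training
```

Here the trainable parameter count drops by a factor of 64, which is the kind of reduction that makes fine-tuning an LLM on per-topic summary data practical.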
  • The style of the summary conditions set in the prompts for different summary topics can be made the same, thereby unifying the style of the summary data and fixing the style of the output data of the summary model.
  • A summary model trained in this way summarizes multiple patent documents written in different styles or from different viewpoints in the same style, so the output data summarizing each patent document has a unified style and can be readily used for the subsequent analysis of large numbers of patent documents.
  • The communication module (130) may include the hardware and software required to transmit and receive signals, such as control signals or data signals, through wired or wireless connections with other network devices in order to exchange data and signals with external devices.
  • The database (140) may store various data for the operation of the summary model construction program. For example, it may store data required for, or generated by, the operation of the program, such as the summary data generated using the plurality of prompts in which each summary topic and summary condition is set, and the output data produced by the transfer learning model through those prompts.
  • Figure 3 is a flowchart for explaining a method for constructing a summary model according to one embodiment of the present invention.
  • The summary model construction method (S100) is a method of constructing a summary model using a plurality of transfer learning models.
  • The summary model construction device (100) inputs a plurality of patent data into a plurality of transfer learning models through a plurality of prompts in which summary conditions for different summary topics are set (step S110), and matches the patent data with the output data of each transfer learning model based on the summary conditions to generate summary data for each summary topic (step S120). Then, the summary data for each summary topic is transfer-learned into one of the plurality of transfer learning models to construct a summary model (step S130).
  • The summary model is formed with the structure of a large language model (LLM); when input data including the contents of at least one section of a patent document is input, the input data is summarized and output based on a predetermined summary topic.
  • The patent data includes the contents of at least one section among the multiple sections of a predetermined patent document, and may include the contents of sections such as the title of the invention, the technical field, the abstract, and claim 1.
  • First, the process of inputting multiple patent data into the transfer learning models through multiple prompts (step S110) is described.
  • The summary model building device (100) sets each of a plurality of prompts (10), in which different summary topics (11) and summary conditions (12) for those topics are defined, to each of a plurality of transfer learning models, and inputs a plurality of patent data into the transfer learning models through the set prompts (10).
  • As in the prompt (10) of Figure 2, a technical field is set as the summary topic (11), the title of the invention, an abstract, and the first claim are given as input data under the summary conditions (12), and the corresponding output data is set as an example sentence; a plurality of patent data can then be input into the transfer learning model through such a prompt.
  • The multiple transfer learning models may be stored in different summary model building devices (100) or in a single summary model building device (100).
  • The summary model construction device (100) generates summary data for each summary topic using the output data of the transfer learning model based on the summary topic and summary conditions set in the prompt.
  • The summary data is data that matches the output data of the transfer learning model with the patent data corresponding to that output data, and may match one or more of the section contents included in the patent data with the output data of the transfer learning model.
  • For example, when the transfer learning model summarizes multiple patent data by technical field through the prompt in Figure 2, the summary data by technical field may match the output data with the title of the invention, the abstract, and the first claim.
  • The summary model building device (100) stores such summary data for each summary topic, and builds a summary model by transfer learning the multiple summary data for each summary topic into one of the multiple transfer learning models.
  • When summary data generated by other summary model building devices (100) is transmitted, the receiving summary model building device (100) collects the summary data for each topic and performs transfer learning on its stored transfer learning model to build the summary model.
  • The summary model construction device (100) can also construct a summary model using a single transfer learning model. Multiple prompts with summary conditions for different summary topics are sequentially set in the transfer learning model, and multiple patent data are input into the transfer learning model through the set prompts to perform learning. Finally, the trained transfer learning model can be set as the summary model.
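The single-model variant described above can be sketched as a sequential loop over topics. `TrainableModel` is a hypothetical stand-in that only records what it would be trained on; a real implementation would run gradient updates (e.g., LoRA fine-tuning) at each step.

```python
# Sketch of sequential, topic-by-topic training of one transfer learning
# model: each topic's prompt yields a batch of examples, and the same model
# accumulates training rounds until it becomes the summary model.

class TrainableModel:
    def __init__(self):
        self.trained_on = []  # (topic, number of examples), in training order

    def fine_tune(self, topic, examples):
        # Stand-in for an actual fine-tuning round on this topic's data.
        self.trained_on.append((topic, len(examples)))

model = TrainableModel()
patents = ["patent_1", "patent_2", "patent_3"]
for topic in ["technical field", "technical component"]:
    # Build one training example per patent for this topic's prompt.
    examples = [f"{topic}: {p}" for p in patents]
    model.fine_tune(topic, examples)

print(model.trained_on)
# After the loop, the single model has been trained on every topic in turn.
```

This contrasts with the multi-model variant, where each topic's summary data is generated by a separate transfer learning model before being consolidated into one.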
  • Figure 4 is a conceptual diagram schematically illustrating a patent document summarization device according to one embodiment of the present invention.
  • The patent document summarization device (200) receives a plurality of patent documents and one or more summary topics, inputs the plurality of patent documents into a summary model to generate summary data for each patent document based on the summary topic, and visualizes and outputs analysis data for the plurality of summary data analyzed according to analysis conditions.
  • The patent document summarization device (200) includes a memory (210) and a processor (220).
  • The memory (210) stores a summary program that summarizes a plurality of patent documents based on a summary topic.
  • The summary program receives a plurality of patent documents and at least one summary topic, inputs the plurality of patent documents into a summary model to generate summary data for each patent document based on the summary topic, and visualizes and outputs analysis data for the plurality of summary data analyzed according to analysis conditions.
  • The memory (210) should be interpreted as a general term for nonvolatile storage devices, which maintain stored information even when power is not supplied, and volatile storage devices, which require power to maintain stored information.
  • The memory (210) may temporarily or permanently store data processed by the processor (220).
  • The memory (210) may include magnetic storage media or flash storage media in addition to a volatile storage device that requires power to maintain stored information, but the scope of the present invention is not limited thereto.
  • The processor (220) executes the summary program stored in the memory (210) to summarize at least one patent document based on at least one summary topic.
  • The summary program receives a plurality of patent documents and at least one summary topic, inputs the plurality of patent documents into the summary model, and generates summary data for each patent document based on the summary topic.
  • The summary model is formed as a large language model (LLM) and is constructed by inputting multiple patent data into a transfer learning model through multiple prompts in which summary conditions for different summary topics are set, matching the output data of the transfer learning model based on the summary conditions with the patent data to generate summary data for each summary topic, and transfer learning the summary data for each summary topic into the transfer learning model. Since the specific learning process of the summary model is the same as the method (S100) for constructing a summary model using the summary model construction device (100) described above, a detailed description is omitted.
  • The processor (220) visualizes and outputs analysis data for a plurality of summary data analyzed according to analysis conditions. For example, if the analysis conditions are set to classify patents into similar technical fields, a plurality of patent documents can be visualized as classified into similar technical fields, as shown in Figure 5.
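The classification step illustrated by Figure 5 can be sketched as follows, under the simplifying assumption that technical-field summaries are grouped by exact string match; a real system would more plausibly cluster summary embeddings. The document identifiers and field labels below are invented for illustration.

```python
# Sketch of grouping topic-wise summary data for visualization: once every
# patent document has a technical-field summary, documents sharing the same
# field can be collected into one group (exact-match grouping is a
# simplifying assumption).
from collections import defaultdict

summaries = [
    {"doc": "WO-A", "technical field": "battery cooling"},
    {"doc": "WO-B", "technical field": "text summarization"},
    {"doc": "WO-C", "technical field": "battery cooling"},
]

groups = defaultdict(list)
for s in summaries:
    groups[s["technical field"]].append(s["doc"])

for field, docs in sorted(groups.items()):
    print(f"{field}: {docs}")
# battery cooling: ['WO-A', 'WO-C']
# text summarization: ['WO-B']
```

Each group can then be rendered as one cluster in a chart such as the one shown in Figure 5.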
  • The processor (220) may be implemented in the form of a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), etc., but the scope of the present invention is not limited thereto.
  • The processor (220) executes the summary program stored in the memory (210) and controls the hardware of the patent document summarization device (200) according to the execution of the program. That is, the processor (220) can perform hardware control functions required by the program, such as file system management, memory allocation, networking, basic libraries, timers, device control (display, media, input devices, 3D, etc.), and other utilities.
  • The communication module (230) may include the hardware and software required to transmit and receive signals, such as control signals or data signals, through wired or wireless connections with other network devices in order to exchange data and signals with external devices.
  • The database (240) may store various data for the operation of the summary program. For example, data required for the operation of the summary model may be stored.
  • The present invention can also be implemented in the form of a non-transitory storage medium containing computer-executable instructions, such as program modules executed by a computer.
  • A computer-readable medium can be any available medium that can be accessed by a computer, and includes volatile and nonvolatile media as well as removable and non-removable media.
  • The computer-readable medium can include a computer storage medium.
  • A computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a summary model construction apparatus for building a summary model that summarizes a patent document with respect to a given topic, comprising: a memory that stores a summary model construction program for building a summary model; and a processor that executes the summary model construction program stored in the memory. The summary model construction program applies a plurality of patent data items to the input of a transfer learning model through a plurality of prompts in which summary conditions for different summary topics are set, generates summary data for each summary topic by matching the patent data with output data of the transfer learning model based on the summary conditions, and builds a summary model by transfer learning the summary data for each summary topic into the transfer learning model. The summary model is formed in the structure of a large language model (LLM) and, when at least one section content of a patent document is input, summarizes and outputs the input data based on a given summary topic, the patent data comprising at least one section content among a plurality of section contents of a given patent document.
PCT/KR2024/011221 2023-08-17 2024-07-31 Apparatus and method for constructing a summary model that summarizes a patent document, and patent document summarization apparatus using the summary model Pending WO2025037801A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2023-0107335 2023-08-17
KR20230107335 2023-08-17
KR1020240096112A KR20250026736A (ko) 2023-08-17 2024-07-22 Apparatus and method for constructing a summary model that summarizes patent documents, and patent document summarization apparatus using the summary model
KR10-2024-0096112 2024-07-22

Publications (1)

Publication Number Publication Date
WO2025037801A1 true WO2025037801A1 (fr) 2025-02-20

Family

ID=94632486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/011221 Pending WO2025037801A1 (fr) 2023-08-17 2024-07-31 Appareil et procédé de construction d'un modèle de résumé qui résume un document de brevet, et appareil de résumé de document de brevet utilisant un modèle de résumé

Country Status (1)

Country Link
WO (1) WO2025037801A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010224639A (ja) * 2009-03-19 2010-10-07 Fuji Xerox Co Ltd Summary document creation support system and program
JP2020067987A (ja) * 2018-10-26 2020-04-30 楽天株式会社 Summary creation device, summary creation method, and program
KR20210120796A (ko) * 2020-03-27 2021-10-07 네이버 주식회사 Unsupervised aspect-based abstractive summarization of multiple documents
US20220067284A1 (en) * 2020-08-28 2022-03-03 Salesforce.Com, Inc. Systems and methods for controllable text summarization
KR20220060699A (ko) * 2020-11-05 2022-05-12 한국과학기술정보연구원 Method and apparatus for providing academic information based on matching of paper abstracts with body text


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PARK, SANG-HYUN: "Research on the Methodology for Extracting Technology Development Issues-Solutions Based on Patent Information", 2023 Korean Institute of Industrial Engineers / Korean Academic Society of Business Administration Joint Spring Conference, 2 June 2023 (2023-06-02), XP093280254 *

Similar Documents

Publication Publication Date Title
KR930009619B1 (ko) Binary tree parallel processing apparatus
WO2017039086A1 (fr) Deep learning modularization system based on an internet expansion module, and image recognition method using same
US10469588B2 (en) Methods and apparatus for iterative nonspecific distributed runtime architecture and its application to cloud intelligence
WO2013035904A1 (fr) System and method for pipeline processing of biometric information analysis
DE112011101469T5 (de) Compiling software for a hierarchical distributed processing system
WO2021049706A1 (fr) Ensemble question answering system and method
ITRM940789A1 (it) Method and process for inter-machine communication, and generalized method for program repair therefor
US5465319A (en) Architecture for a computer system used for processing knowledge
DE102018208267A1 (de) Technology for using control dependency graphs for converting control flow programs into dataflow programs
WO2023200059A1 (fr) Method for providing a design recommendation and final proposal for a product to be marketed, and apparatus therefor
CN110147397A (zh) System docking method and device, management system, terminal device, and storage medium
WO2009116748A2 (fr) Method and apparatus for software development based on a container of reserved components
WO2022004978A1 (fr) System and method for architectural decoration design tasks
WO2025037801A1 (fr) Apparatus and method for constructing a summary model that summarizes a patent document, and patent document summarization apparatus using the summary model
WO2023101368A1 (fr) Multi-robot task processing method and apparatus for assigning tasks to robots
WO2024147593A1 (fr) Image conversion apparatus and method
Lalejini et al. Tag-based regulation of modules in genetic programming improves context-dependent problem solving
CN113505069B (zh) Test data analysis method and system
WO2023214608A1 (fr) Quantum circuit simulation hardware
Ebert et al. Distributed Petri nets for model-driven verifiable robotic applications in ROS
WO2018216828A1 (fr) Energy big data management system and method therefor
del Rosal et al. Simulating NEPs in a cluster with jNEP
Iozzia Hands-on Deep Learning with Apache Spark: Build and Deploy Distributed Deep Learning Applications on Apache Spark
WO2024219527A1 (fr) Method and system for managing master data attributes
CN113778447B (zh) Service compatibility method and device for a front-end system, electronic device, and storage medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24854353

Country of ref document: EP

Kind code of ref document: A1