CN118819617A - Configuration method and device of large model application intelligent agent - Google Patents
- Publication number: CN118819617A
- Application number: CN202410855629.8A
- Authority: CN (China)
- Prior art keywords: agent, service, metadata, metadata information, configuration
- Prior art date: 2024-06-27
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/70—Software maintenance or management
- G06F8/71—Version control; Configuration management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a method and device for configuring a large-model application agent.
Background Art
A large-model application agent is an agent that relies on current large language models, multimodal models, and similar technologies, and is guided by prompts (instructions), combined with a purpose-built vertical knowledge system, to produce text or image output for a specific domain. The instructions may take the form of questions, task descriptions, conditions, and so on. After receiving an instruction, the agent performs reasoning and generation using its pre-trained model together with the corresponding vertical-domain knowledge base, so as to produce output reflecting domain-specific expertise.
As large-model technology is adopted more and more widely in enterprises, and as more enterprises are no longer satisfied with general-purpose knowledge output, building an enterprise's own large-model agent on top of its internal knowledge system has become an increasingly urgent need and challenge. In fields such as healthcare, law, or finance, for example, the agent must understand and answer domain-specific questions more accurately.
At present, the mainstream way to develop application agents is to program them in Python, which is relatively inefficient and complex; for people without a technical background, developing an agent this way is not feasible.
Summary of the Invention
In view of this, the present application proposes a method and device for configuring a large-model application agent to solve the above problems.

In one aspect, the present application proposes a method for configuring a large-model application agent, comprising the following steps:

receiving an agent configuration instruction, the agent configuration instruction being obtained from configuration performed in a web front end;

configuring corresponding agent metadata information according to the agent configuration instruction and storing it in a metadata service, the agent metadata information including knowledge base metadata information, plug-in metadata information, workflow metadata information, and database metadata information; and

calling a large-model agent service to create an agent framework, and binding the agent metadata information to the agent framework to create the agent.
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service includes:

calling a knowledge base service according to the agent configuration instruction, and storing knowledge documents in the knowledge base service in a preset manner; and

calling the metadata service, and storing the knowledge base metadata information corresponding to the knowledge documents in the metadata service.

As an optional embodiment of the present application, the preset storage manner includes: vectorizing the knowledge documents with a preset model and storing them in a corresponding vector knowledge base.

As an optional embodiment of the present application, the preset storage manner further includes: extracting keywords of the knowledge in the knowledge documents using text-frequency, classification, and clustering algorithms, and storing the knowledge and the keywords in a corresponding search engine.
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes:

calling a plug-in service according to the agent configuration instruction, configuring the corresponding plug-in, and persisting it in the plug-in service; and

calling the metadata service, and storing the plug-in metadata information corresponding to the persisted plug-in in the metadata service.

As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes:

calling a database service according to the agent configuration instruction to create a database and data tables; and

calling the metadata service, and storing the database metadata information corresponding to the database and data tables in the metadata service.

As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes:

calling a workflow service according to the agent configuration instruction to configure a workflow; and

calling the metadata service, and storing the workflow metadata information corresponding to the workflow in the metadata service.

As an optional embodiment of the present application, the workflow includes a number of work nodes used to connect to corresponding third-party business systems.

As an optional embodiment of the present application, when the large-model agent service is called to create the agent framework, guide words are configured in the web front end and matched to the corresponding large language model in a preset manner.
In another aspect, the present application provides a device for implementing the configuration method of the large-model application agent described in any of the above items, comprising:

an instruction receiving module configured to receive an agent configuration instruction, the agent configuration instruction being obtained from configuration performed in a web front end;

a metadata information configuration module configured to configure corresponding agent metadata information according to the agent configuration instruction and store it in a metadata service, the agent metadata information including knowledge base metadata information, plug-in metadata information, workflow metadata information, and database metadata information; and

an agent creation module configured to call a large-model agent service to create an agent framework, and to bind the agent metadata information to the agent framework to create the agent.
Technical effects of the present invention:

The present application proposes a method for configuring a large-model application agent, which includes receiving an agent configuration instruction obtained from configuration performed in a web front end; configuring corresponding agent metadata information according to the instruction and storing it in a metadata service, the agent metadata information including knowledge base, plug-in, workflow, and database metadata information; and calling a large-model agent service to create an agent framework and binding the agent metadata information to that framework to create the agent. Through this web-based visual configuration mechanism, the present application greatly reduces the difficulty of developing an agent, shielding the underlying large language model, knowledge base, and other technologies so that business personnel without a technical background can also develop agents conveniently.
Further features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain its principles.
FIG. 1 is a flowchart of the method for configuring a large-model application agent according to an embodiment of the present application;

FIG. 2 is a schematic diagram of the implementation flow of the method for configuring a large-model application agent according to an embodiment of the present application;

FIG. 3 is a flowchart of the method for using a large-model application agent according to an embodiment of the present application;

FIG. 4 is a schematic diagram of the implementation flow of the method for using a large-model application agent according to an embodiment of the present application;

FIG. 5 is an overall topology diagram of the large-model application agent according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise specified.

The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

In addition, numerous specific details are given in the following embodiments in order to better illustrate the present disclosure. Those skilled in the art will understand that the disclosure can also be practiced without certain of these details. In some instances, methods, means, components, and circuits well known to those skilled in the art are not described in detail so as to highlight the subject matter of the disclosure.

In the configuration method of the present application, a web front end and a back-end server together form an agent development platform; the back-end server includes a metadata service, a large-model agent service, a knowledge base service, a plug-in service, a workflow service, and a database service. While interacting with the platform, users efficiently and accurately build the agent they want based on the platform's knowledge base construction, workflow configuration, and agent framework configuration capabilities. When configuring and developing an agent, an administrator calls the back-end interfaces through the web front end, saves the agent information in the metadata service, and the data's meta-information is displayed in web form, so that business personnel without a technical background can quickly develop a model application agent that meets business needs, effectively lowering the development barrier for users.
Embodiment 1
As shown in FIG. 1 and FIG. 2, in one aspect the present application proposes a method for configuring a large-model application agent, comprising the following steps:

S1. Receive an agent configuration instruction; the agent configuration instruction is obtained from configuration performed in the web front end.

In this step, in order to develop a complete agent, the platform first receives the agent configuration instruction sent from the web front end and calls the corresponding back-end services, so that the back end can perform knowledge base, database, workflow, and plug-in configuration according to the instruction.

S2. Configure the corresponding agent metadata information according to the agent configuration instruction and store it in the metadata service; the agent metadata information includes knowledge base metadata information, plug-in metadata information, workflow metadata information, and database metadata information.

In this step, the platform administrator sends the agent configuration instruction to the back-end services through the web front end. After receiving it, the back end configures the corresponding agent metadata information and stores all of it in the metadata service, so that later user queries can look up the configured metadata by calling the metadata service and then call the corresponding back-end services to retrieve the required information.

S3. Call the large-model agent service to create an agent framework, bind the agent metadata information to the agent framework, and create the agent.

In this step, the platform administrator calls the large-model agent service (i.e., the large-model "agent" service) through the web front end, creates the agent framework, and sets all the configurations required to create the agent. After the framework is created, the large-model agent service calls the metadata service, retrieves the agent metadata stored in step S2, and binds it to the framework, finally producing a complete agent with knowledge base, plug-in, workflow, and database query capabilities.
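A minimal sketch of what steps S2 and S3 could look like in code. The patent does not specify an implementation, so the class names, field names, and in-memory store below are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMetadata:
    # The four metadata categories named in step S2 (hypothetical field names).
    knowledge_base: dict = field(default_factory=dict)
    plugins: dict = field(default_factory=dict)
    workflows: dict = field(default_factory=dict)
    databases: dict = field(default_factory=dict)

class MetadataService:
    """In-memory stand-in for the metadata service."""
    def __init__(self):
        self._store: dict[str, AgentMetadata] = {}
    def save(self, agent_id: str, metadata: AgentMetadata) -> None:
        self._store[agent_id] = metadata
    def load(self, agent_id: str) -> AgentMetadata:
        return self._store[agent_id]

def create_agent(agent_id: str, config_instruction: dict, metadata_svc: MetadataService) -> dict:
    # S2: configure metadata from the web front end's instruction and store it.
    metadata = AgentMetadata(
        knowledge_base=config_instruction.get("knowledge_base", {}),
        plugins=config_instruction.get("plugins", {}),
        workflows=config_instruction.get("workflows", {}),
        databases=config_instruction.get("databases", {}),
    )
    metadata_svc.save(agent_id, metadata)
    # S3: create the agent framework and bind the stored metadata to it.
    framework = {"agent_id": agent_id, "prompt": config_instruction.get("prompt", "")}
    framework["metadata"] = metadata_svc.load(agent_id)
    return framework
```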
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service includes: calling the knowledge base service according to the agent configuration instruction and storing knowledge documents in the knowledge base service in a preset manner; and calling the metadata service and storing the knowledge base metadata information corresponding to the knowledge documents in the metadata service.

It should be noted that the knowledge base is configured to expand the agent's knowledge of its vertical business domain. When configuring knowledge base metadata, the platform administrator calls the knowledge base service through the web front end and uploads knowledge files to it. After receiving the uploaded knowledge documents, the knowledge base service saves them in the preset storage manner. After saving the documents, the knowledge base service calls the metadata service and sends it the documents' storage metadata, and the metadata service stores the corresponding knowledge base metadata information.
As an optional embodiment of the present application, the preset storage manner includes: vectorizing the knowledge documents with a preset model and storing them in a corresponding vector knowledge base.

It should be noted that, in one scenario, after receiving the uploaded knowledge documents the knowledge base service vectorizes the knowledge with the preset model and then saves it in the corresponding vector store. The preset model is preferably the text2vec-bge-large-chinese model, which effectively improves the recall rate for Chinese knowledge. After vectorization with text2vec-bge-large-chinese, the knowledge is saved in a Milvus vector knowledge base.
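A minimal sketch of this vectorization path, assuming a running Milvus instance, a pre-created collection with `id`, `text`, and `vector` fields, and the publicly available `shibing624/text2vec-bge-large-chinese` checkpoint; the patent names the model and Milvus but not the client code.

```python
from sentence_transformers import SentenceTransformer
from pymilvus import MilvusClient

# Assumed checkpoint for text2vec-bge-large-chinese; adjust to the actual deployment.
encoder = SentenceTransformer("shibing624/text2vec-bge-large-chinese")
milvus = MilvusClient(uri="http://localhost:19530")  # assumed local Milvus endpoint

def store_document_chunks(chunks: list[str], collection: str = "kb_chunks") -> None:
    # Embed each knowledge chunk and write text + vector into the vector knowledge base.
    vectors = encoder.encode(chunks, normalize_embeddings=True)
    rows = [{"id": i, "text": text, "vector": vec.tolist()}
            for i, (text, vec) in enumerate(zip(chunks, vectors))]
    milvus.insert(collection_name=collection, data=rows)
```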
As an optional embodiment of the present application, the preset storage manner further includes: extracting keywords of the knowledge in the knowledge documents using text-frequency, classification, and clustering algorithms, and storing the knowledge and the keywords in a corresponding search engine.

It should be noted that, in another scenario, text-frequency, classification, and clustering algorithms are used to extract keywords of the knowledge, which are saved in an Elasticsearch (ES) search engine, ultimately giving a hybrid recall scheme that combines semantics with keywords and greatly improves the recall rate.
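The patent only names "text frequency, classification and clustering algorithms" and an ES search engine; the sketch below substitutes a plain TF-IDF keyword pick and the official Elasticsearch Python client as stand-ins, so the index name and endpoint are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed ES endpoint

def index_with_keywords(doc_id: str, text: str, top_k: int = 10, index: str = "kb_keywords") -> None:
    # Rank the document's terms by TF-IDF (with a single document this reduces to term frequency)
    # and keep the top_k as its keywords.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform([text])
    terms = vectorizer.get_feature_names_out()
    scores = tfidf.toarray()[0]
    keywords = [terms[i] for i in scores.argsort()[::-1][:top_k]]
    # Store the raw knowledge plus its keywords so keyword recall can hit the document later.
    es.index(index=index, id=doc_id, document={"text": text, "keywords": keywords})
```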
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes: calling the plug-in service according to the agent configuration instruction, configuring the corresponding plug-in, and persisting it in the plug-in service; and calling the metadata service and storing the plug-in metadata information corresponding to the persisted plug-in in the metadata service.

It should be noted that plug-in configuration is used to expand the agent's ability to obtain real-time information. When configuring plug-in metadata, the platform administrator calls the plug-in service through the web front end; configuring plug-ins gives the agent specific capabilities such as information lookup, image recognition, and parsing. The plug-in service persists the corresponding plug-in into service memory using class-loading technology. After persisting the plug-in, the plug-in service calls the metadata service, sends it the plug-in's persistence metadata, and the metadata service stores that metadata.
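The "class load" mechanism described here is a JVM-style technique; as a language-neutral illustration, the sketch below does the analogous thing in Python with `importlib`: load a plugin module from a file path and keep it in an in-memory registry. The file layout and the `register()` entry point are assumptions, not part of the patent.

```python
import importlib.util

PLUGIN_REGISTRY: dict[str, object] = {}  # in-memory "persisted" plugins

def load_plugin(name: str, path: str):
    # Dynamically load the plugin source file as a module (Python analogue of class loading).
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Assumed convention: each plugin module exposes a register() factory.
    PLUGIN_REGISTRY[name] = module.register()
    return PLUGIN_REGISTRY[name]
```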
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes: calling the database service according to the agent configuration instruction to create a database and data tables; and calling the metadata service and storing the database metadata information corresponding to the database and data tables in the metadata service.

It should be noted that, when configuring database metadata, the platform administrator calls the database service through the web front end to create the database and data tables; after receiving the creation information, the database service calls a MySQL database to create them. After the database and tables are created, the database service calls the metadata service, sends it the metadata of the created database and tables, and the metadata service stores the corresponding metadata.
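A minimal sketch of what the database service might execute against MySQL when it receives a create instruction; the connection parameters and the example table schema are placeholders, not taken from the patent.

```python
import pymysql

def create_agent_database(db_name: str, table_ddl: str) -> None:
    # Credentials/host are placeholders for the real database service configuration.
    conn = pymysql.connect(host="localhost", user="agent_svc", password="***", autocommit=True)
    try:
        with conn.cursor() as cur:
            # Names come from trusted administrator configuration in this sketch.
            cur.execute(f"CREATE DATABASE IF NOT EXISTS {db_name}")
            cur.execute(f"USE {db_name}")
            cur.execute(table_ddl)  # e.g. a CREATE TABLE statement supplied via the web front end
    finally:
        conn.close()

# Example table definition an administrator might configure (illustrative only):
PATIENT_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS patients (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(64),
    department VARCHAR(64)
)
"""
```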
As an optional embodiment of the present application, configuring the corresponding agent metadata information according to the agent configuration instruction and storing it in the metadata service further includes: calling the workflow service according to the agent configuration instruction to configure a workflow; and calling the metadata service and storing the workflow metadata information corresponding to the workflow in the metadata service.

As an optional embodiment of the present application, the workflow includes a number of work nodes used to connect to corresponding third-party business systems.

It should be noted that workflow configuration gives the agent the ability to interact with external business systems. When configuring workflow metadata, the platform administrator calls the workflow service through the web front end and configures a workflow that can connect to a third-party business system. After receiving the workflow nodes, the workflow service configures them into the Activiti workflow engine. After saving the workflow, the workflow service calls the metadata service, sends it the workflow's storage metadata, and the metadata service stores the corresponding workflow metadata information.
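The patent states that workflow nodes are handed to the Activiti engine but does not give a wire format. The sketch below only shows one plausible node description the workflow service might accept before translating it into a BPMN process definition for Activiti; every key name is hypothetical.

```python
# Hypothetical node layout for the smart-registration workflow described later in the text.
registration_workflow = {
    "name": "smart_registration",
    "nodes": [
        {"id": "start", "type": "start"},
        {"id": "parse", "type": "semantic_parse", "output": ["name", "time", "depart"]},
        {"id": "register", "type": "http_call", "system": "registration_api"},
        {"id": "end", "type": "end"},
    ],
    # Edges define the execution order the engine should follow.
    "edges": [("start", "parse"), ("parse", "register"), ("register", "end")],
}
```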
As an optional embodiment of the present application, when the large-model agent service is called to create the agent framework, guide words are configured in the web front end and matched to the corresponding large language model in a preset manner.

It should be particularly noted that the platform administrator calls the large-model agent service through the web front end to create the agent framework, including configuring the prompt words, choosing which large language model to use, and setting the opening message, and then submits the prompt words, opening message, and large language model metadata to the metadata service for storage as agent metadata. The large language model in this embodiment may be, for example, GPT-4 or ERNIE Bot (Wenxin Yiyan). The administrator enters the prompt words through the web front end; after receiving them, the back end passes the prompt words to the large language model, giving the agent the ability to answer questions in the vertical domain. The opening message entered by the administrator determines the first message the agent sends when it starts a conversation with a user. In other words, the prompt configuration defines the agent's role, capabilities, requirements, and constraints, and the large-model agent service lets the agent flexibly use various existing large language models for answer inference. Further, after creating the agent framework, the large-model agent service calls the metadata service, retrieves the other agent metadata produced in the steps above, and binds it to the framework, finally producing a complete agent with knowledge base, plug-in, workflow, and database query capabilities.
The method of the present application is described in detail below using the development of a hospital assistant agent as an example.

1. Build the hospital agent framework.
The agent development platform provides agent configuration capabilities; guide words can be configured through the web front end, for example the hospital agent guide words:

## Role: Hospital Assistant

## Introduction:

Your name is xx. You are a hospital management expert and can provide a series of services to patients, hospital leadership, and hospital facility staff.

## Services

You have the following service capabilities:

### VR navigation: xxxxxx.

### Smart registration: xxxxx.

### Lab report lookup: xxxxx.

### Patient services: xxxxx.

### Medical consultation: xxxxx.

## Constraints:

You must think through the user's question step by step before giving an answer;

You are not allowed to add fabricated content to your answers;

Apart from the services listed, you have no other capabilities; when a user's request exceeds these boundaries, you must refuse it and tell the user that you are still learning those service capabilities.
After receiving the guide word information, the back end submits the guide words to the large language model as a prompt (instruction). For example, if the web front end inputs "What can you do for me?", the back end submits the question together with the guide words to the large language model and returns to the web front end: "Hello! My name is xx. As a hospital management expert, I can provide you with the following services:"

### VR navigation: xxxxxx.

### Smart registration: xxxxx.

### Lab report lookup: xxxxx.

### Patient services: xxxxx.

### Medical consultation: xxxxx.

This gives the agent the ability to answer questions in the hospital vertical domain.
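A sketch of how the back end might combine the configured guide words with a user question before invoking the selected model. The `GUIDE_WORDS` string is an abridged English rendering of the prompt above, and `call_llm` is a hypothetical gateway, since the patent deliberately hides the concrete model API behind the agent service.

```python
GUIDE_WORDS = """## Role: hospital assistant
## Services: VR navigation, smart registration, lab report lookup, patient services, medical consultation
## Constraints: think step by step, do not fabricate, refuse requests outside the listed services."""

def call_llm(messages: list[dict]) -> str:
    """Hypothetical gateway to the configured large language model (GPT-4, ERNIE Bot, ...)."""
    raise NotImplementedError("wire this to the model selected in the agent framework")

def answer(question: str) -> str:
    # Assumed chat-style message format; the real platform hides the actual model call.
    messages = [
        {"role": "system", "content": GUIDE_WORDS},
        {"role": "user", "content": question},
    ]
    return call_llm(messages)

# answer("What can you do for me?") would then return the service list shown above.
```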
2. Bind a knowledge base to the hospital agent. The specific configuration process for building a private knowledge base system for the hospital agent is as follows:

The agent development platform provides knowledge file upload; supported formats include pdf, doc, docx, ppt, pptx, xlsx, markdown, png, jpg, and txt. Hospital-related information is uploaded to the knowledge base through the web front end, including basic hospital information such as buildings, departments, and canteens, as well as location information and medical consultation information. After receiving the uploaded knowledge, the back end stores it in two ways. The first is to vectorize the knowledge with the text2vec-bge-large-chinese model and save it in the Milvus vector knowledge base; this model effectively improves the recall rate for Chinese knowledge. The second is to extract keywords of the knowledge using text-frequency, classification, and clustering algorithms and save them in the ES search engine. The result is a hybrid semantic-plus-keyword recall scheme that greatly improves the recall rate.
3. Further, a synonym lexicon is built for the hospital's private knowledge base.

Because the hospital knowledge system contains many synonyms — for example, 治疗, 医治, 诊疗, and 诊治 can all mean "treatment" and may all appear in the hospital knowledge base — the platform provides synonym maintenance: knowledge entity words and their synonyms are entered through the web front end to establish a one-to-many mapping between them. After receiving the input, the back end likewise vectorizes the entity words and synonyms with the text2vec-bge-large-chinese model and saves them in the Milvus vector knowledge base. Synonyms entered by users later will all resolve to the same knowledge entity word: for example, if a user enters 诊治 or 医治 in the web front end, the back end first vectorizes it and then queries the Milvus vector knowledge base by cosine distance, which uniformly returns the canonical entity word 治疗 ("treatment"). This normalization reduces knowledge ambiguity and effectively improves the knowledge hit rate.
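A simplified stand-in for the synonym lookup: the platform embeds entity words with text2vec-bge-large-chinese and queries Milvus by cosine distance, while the sketch below does the same cosine comparison in memory with NumPy. The model checkpoint and the example entity list are assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("shibing624/text2vec-bge-large-chinese")  # assumed checkpoint

# Canonical knowledge entity words maintained through the web front end (illustrative entry).
ENTITY_WORDS = ["治疗"]  # "treatment"
entity_vecs = encoder.encode(ENTITY_WORDS, normalize_embeddings=True)

def normalize_term(user_term: str) -> str:
    # Embed the user's wording ("诊治", "医治", ...) and return the closest canonical entity word.
    v = encoder.encode([user_term], normalize_embeddings=True)[0]
    sims = entity_vecs @ v  # cosine similarity, since all vectors are L2-normalized
    return ENTITY_WORDS[int(np.argmax(sims))]
```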
4. Run hit tests on the hospital's private knowledge.

The agent development platform provides a self-service hit-testing capability: the user enters a piece of text in the web front end and runs a recall test against the knowledge base to check whether the recalled data is correct.

For example, the user enters: "Where is the neurology department?" After receiving the message, the back end first vectorizes the question and then queries the knowledge in two ways. The first matches keywords against the ES search engine; the second matches the vectorized semantics against the Milvus knowledge base. The two result sets are combined and then re-ordered with a ReRank re-ranking step to further ensure the validity and accuracy of the results, and the final output is returned to the web front end: "The neurology department is in Room 205 on the 3rd floor of the hospital building."
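A minimal sketch of the hybrid recall step, assuming the ES and Milvus queries have already returned scored hits; the patent does not name the ReRank model, so a simple normalized-score merge stands in for the re-ranking stage here.

```python
def hybrid_recall(keyword_hits: list[dict], vector_hits: list[dict], top_k: int = 5) -> list[str]:
    """keyword_hits / vector_hits: [{"text": ..., "score": ...}] from ES and Milvus respectively."""
    def normalize(hits: list[dict]) -> dict[str, float]:
        if not hits:
            return {}
        top = max(h["score"] for h in hits) or 1.0
        return {h["text"]: h["score"] / top for h in hits}

    kw, vec = normalize(keyword_hits), normalize(vector_hits)
    # Merge the two channels; texts found by both get a boosted (summed) score.
    merged = {t: kw.get(t, 0.0) + vec.get(t, 0.0) for t in set(kw) | set(vec)}
    # Stand-in for the ReRank step: order by the combined score and keep the best top_k.
    return sorted(merged, key=merged.get, reverse=True)[:top_k]
```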
5. Connect the agent development platform to third-party systems.

The agent development platform provides workflow configuration, which lets the agent connect quickly to third-party business systems and can be used in business scenarios such as smart registration and lab report lookup.

Taking smart registration as an example: a workflow is configured in the web front end with the following nodes — a start node, a semantic-parsing node, a node that calls the registration system API, and an end node. The user then enters "I want to book a dentistry appointment for 3 p.m. this afternoon" in the web front end. After receiving the message, the back end calls the semantic-parsing node, which uses the large model to return a JSON data structure {"name": "李某", "time": "2024-05-15 15:00:00", "depart": "牙科"}; the registration API node then submits this structure to the registration system as required by its API, and once registration succeeds the result is returned to the web front end, telling the user: "Hello, your registration was successful; please come for your appointment on time."
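A sketch of the two middle workflow nodes in this example: the semantic-parse node asks the large model for the JSON structure, and the API node posts it to the registration system. Both `call_llm` and the registration endpoint URL are hypothetical placeholders, not details from the patent.

```python
import json
import requests

REGISTRATION_API = "https://registration.example.com/api/book"  # placeholder third-party endpoint

def call_llm(prompt: str) -> str:
    """Hypothetical gateway to the configured large language model."""
    raise NotImplementedError

def smart_registration(user_text: str) -> str:
    # Semantic-parse node: ask the model to extract name / time / depart as JSON.
    prompt = ("Extract the patient name, appointment time and department from the request "
              "and answer only with JSON keys name, time, depart.\nRequest: " + user_text)
    booking = json.loads(call_llm(prompt))  # e.g. {"name": ..., "time": ..., "depart": ...}
    # API node: submit the structure to the registration system as required by its API.
    resp = requests.post(REGISTRATION_API, json=booking, timeout=10)
    resp.raise_for_status()
    return "Your registration was successful; please arrive on time."
```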
As can be seen, the agent development platform of the present application provides agent framework configuration, knowledge base configuration, workflow configuration, and related capabilities, and can efficiently and accurately build a hospital agent with service capabilities in the hospital vertical domain. Moreover, the web-based visual configuration mechanism greatly reduces the difficulty of developing an agent and shields the underlying large language model, knowledge base, and other technologies, so that business personnel without a technical background can develop agents conveniently. Maintainability is also high: agents can be iterated quickly, which greatly increases flexibility. In other words, with the configuration method of the present application, users do not need to care about the technical means of invoking large-model capabilities — for example, which API of a given large model to call when submitting prompt words, or which development language to use to interact with it. There are many kinds of large models and each is called differently, but the agent configuration shields these calling conventions so that users can use large models transparently. Likewise, users do not need to care about the technical means of using the knowledge base — for example, how knowledge documents are segmented, vectorized, and recalled, or which knowledge base technology is used to store them — so they can use the knowledge base transparently as well.

It should be noted that the overall topology of the intelligent development platform of the present application is shown in FIG. 5 and includes the agent, knowledge base, plug-ins, workflows, large language model, and metadata. Specifically, a configured agent is first obtained through manual configuration. During configuration, the agent's meta-information is stored in the metadata service, including knowledge base metadata, plug-in metadata, database metadata, workflow metadata, and the opening message, prompt words, and large language model metadata corresponding to the agent framework; all of this metadata is ultimately stored in the database by calling the metadata service. Afterwards, when the configured agent is used and a user submits a query instruction to the agent development platform, the platform retrieves relevant information from the knowledge base according to the user's instruction, receives the data returned by the knowledge base, and then submits that information together with the user's query to the large language model, which uses its semantic understanding to produce the final answer — that is, the agent answers the question.
Embodiment 2
As shown in FIG. 3 and FIG. 4, the present application also provides a method for using a large-model application agent, implemented with an agent configured by any of the configuration methods described above, comprising the following steps:

S10. The end user sends a query question to the configured agent through the web front end, which calls the large-model agent service.

S20. The large-model agent service calls the metadata service according to the query question and looks up the configured metadata information, which includes knowledge base metadata information, plug-in metadata information, workflow metadata information, and database metadata information.

S30. The large-model agent service calls the knowledge base service according to the knowledge base metadata information to obtain the corresponding knowledge base information.

S40. The large-model agent service calls the plug-in service according to the plug-in metadata information to obtain the corresponding plug-in information.

S50. The large-model agent service calls the workflow service according to the workflow metadata information to obtain the corresponding workflow information.

S60. The large-model agent service calls the database service according to the database metadata information to query the required data.

S70. The large-model agent service submits the knowledge base information, plug-in information, workflow information, and required data, together with the prompt words and large language model metadata obtained from the metadata service, to the large language model for inference, and finally returns the answer to the end user.
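Putting steps S10–S70 together as a concrete sketch: every service handle below is a hypothetical stub standing in for the backend services named in the method, and the metadata object reuses the `AgentMetadata` shape sketched under Embodiment 1.

```python
def handle_query(agent_id: str, question: str, services: dict) -> str:
    """services: hypothetical handles for the metadata, knowledge_base, plugins,
    workflow, database, and llm services of the platform."""
    meta = services["metadata"].load(agent_id)                                   # S20: fetch configured metadata
    kb_info = services["knowledge_base"].search(meta.knowledge_base, question)   # S30: knowledge recall
    plugin_info = services["plugins"].invoke(meta.plugins, question)             # S40: plug-in lookup
    workflow_info = services["workflow"].run(meta.workflows, question)           # S50: workflow execution
    db_rows = services["database"].query(meta.databases, question)               # S60: database query
    # S70: hand everything, plus the configured prompt words, to the selected large language model.
    context = {
        "knowledge": kb_info,
        "plugins": plugin_info,
        "workflow": workflow_info,
        "data": db_rows,
    }
    prompt = getattr(meta, "prompt", "")  # placeholder: prompt words stored in the metadata service
    return services["llm"].complete(prompt=prompt, question=question, context=context)
```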
It should be noted that in step S10 the interaction between the end user and the agent development platform may be text or voice. That is, during text or voice interaction with the agent, the end user's questions are automatically understood and answered according to the configured large language model, prompt (instructions), tool plug-ins, workflows, and knowledge base.

In summary, by configuring the agent's metadata information — including the agent's role definition, the knowledge bases it uses, and the large language model it calls — the present application intercepts each user-submitted instruction, dynamically looks up the relevant information according to the configured metadata, and submits the retrieved information together with the question to the configured large language model to infer the final answer.

It should be noted that, although the above is presented as an example, those skilled in the art will understand that the present disclosure is not limited thereto. Users may configure the system flexibly according to the actual application scenario, as long as the technical functions of the present application can be realized with the above technical methods.
Embodiment 3
Based on the implementation principle of Embodiment 1, in another aspect the present application provides a device for implementing the configuration method of the large-model application agent described in any of the above items, comprising:

an instruction receiving module configured to receive an agent configuration instruction, the agent configuration instruction being obtained from configuration performed in a web front end;

a metadata information configuration module configured to configure corresponding agent metadata information according to the agent configuration instruction and store it in a metadata service, the agent metadata information including knowledge base metadata information, plug-in metadata information, workflow metadata information, and database metadata information; and

an agent creation module configured to call a large-model agent service to create an agent framework, and to bind the agent metadata information to the agent framework to create the agent.

Obviously, those skilled in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the above control methods. The modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed across a network of multiple computing devices; optionally, they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device, or they can each be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. The present invention is thus not limited to any specific combination of hardware and software.

Those skilled in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the above control methods. The storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium can also include a combination of the above kinds of memory.
Embodiment 4
Furthermore, in another aspect the present application provides a control system, comprising:

a processor; and

a memory for storing instructions executable by the processor;

wherein the processor is configured to implement the configuration method of the large-model application agent described in any of the above items when executing the executable instructions.

The control system of the embodiments of the present disclosure includes a processor and a memory for storing processor-executable instructions, the processor being configured to implement any of the above configuration methods of the large-model application agent when executing the executable instructions.

It should be pointed out that there may be one or more processors. The control system of the embodiments of the present disclosure may also include an input device and an output device. The processor, memory, input device, and output device may be connected by a bus or in other ways, which is not specifically limited here.

As a computer-readable storage medium, the memory can be used to store software programs, computer-executable programs, and various modules, such as the programs or modules corresponding to the configuration method of the large-model application agent in the embodiments of the present disclosure. The processor runs the software programs or modules stored in the memory to execute the various functional applications and data processing of the control system.

The input device can be used to receive input numbers or signals, where a signal may be a key signal related to user settings and function control of the device/terminal/server. The output device can include a display device such as a display screen.
Embodiment 5
In another aspect, the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, the computer program instructions, when executed by a processor, implementing the configuration method of the large-model application agent described in any of the above items.

The embodiments of the present disclosure have been described above. The description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410855629.8A CN118819617A (en) | 2024-06-27 | 2024-06-27 | Configuration method and device of large model application intelligent agent |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118819617A true CN118819617A (en) | 2024-10-22 |
Family
ID=93083588
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410855629.8A Pending CN118819617A (en) | 2024-06-27 | 2024-06-27 | Configuration method and device of large model application intelligent agent |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118819617A (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119692954A (en) * | 2025-02-24 | 2025-03-25 | 中图科信数智技术(北京)有限公司 | Intelligent workflow interaction configuration method and system based on large model |
| CN119692954B (en) * | 2025-02-24 | 2025-10-17 | 中图科信数智技术(北京)有限公司 | Intelligent workflow interaction configuration method and system based on large model |
| CN120338291A (en) * | 2025-06-18 | 2025-07-18 | 深圳润世华软件和信息技术服务有限公司 | Park comprehensive energy intelligent monitoring method and related equipment |
| CN120743254A (en) * | 2025-08-28 | 2025-10-03 | 广东金赋科技股份有限公司 | Visual creation and debugging method and system based on AI intelligent agent |
| CN120743254B (en) * | 2025-08-28 | 2025-11-14 | 广东金赋科技股份有限公司 | Visual creation and debugging method and system based on AI intelligent agent |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |