
CN119809874A - Teaching data analysis method, device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN119809874A
Authority
CN
China
Prior art keywords
teaching
scene
data
event
events
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311315869.0A
Other languages
Chinese (zh)
Inventor
林文滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202311315869.0A
Publication of CN119809874A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The present application discloses a teaching data analysis method, device, computer storage medium, and electronic device, relating to the field of educational informatization technology. The method comprises: obtaining teaching data and identifying a plurality of basic teaching events from the teaching data; aggregating the plurality of basic teaching events to obtain at least one teaching scene; classifying the at least one basic teaching event included in each teaching scene according to the type information corresponding to that scene, to obtain at least one teaching event included in each teaching scene; and analyzing the teaching data according to the at least one teaching event corresponding to each teaching scene, to obtain an analysis result corresponding to the teaching data. The method improves the accuracy and efficiency of teaching data analysis.

Description

Teaching data analysis method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of educational informatization technology, and in particular, to a method and apparatus for analyzing teaching data, a storage medium, and an electronic device.
Background
Since classroom video analysis technology emerged in the 1970s, studying the teaching process through video analysis has become a hot topic in educational informatization. To adapt to the analysis of classroom teaching behaviors in an informatized teaching environment, coding systems with information technology features have been continuously proposed; typical classroom interaction analysis methods include the Flanders Interaction Analysis System (FIAS) and S-T classroom teaching analysis.
In traditional classroom behavior analysis methods, most of the analysis process is completed manually rather than automatically, leading to problems such as a single data modality, complex operation, and low efficiency.
Disclosure of Invention
The embodiments of the present application provide a teaching data analysis method, apparatus, storage medium, and electronic device, which can classify basic teaching events more accurately by combining them with teaching scenes to obtain teaching events, and then analyze the teaching data according to those teaching events, improving the accuracy of teaching data analysis. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for analyzing teaching data, where the method includes:
acquiring teaching data, and identifying a plurality of basic teaching events according to the teaching data;
aggregating the plurality of basic teaching events to obtain at least one teaching scene;
classifying, according to the type information corresponding to each teaching scene, the at least one basic teaching event included in each teaching scene, to obtain at least one teaching event included in each teaching scene; and
analyzing the teaching data according to the at least one teaching event corresponding to each of the at least one teaching scene, to obtain an analysis result corresponding to the teaching data.
In a second aspect, an embodiment of the present application provides a teaching data analysis apparatus, including:
a recognition module, configured to acquire teaching data and recognize a plurality of basic teaching events according to the teaching data;
an aggregation module, configured to aggregate the plurality of basic teaching events to obtain at least one teaching scene;
a classification module, configured to classify, according to the type information corresponding to each teaching scene, the at least one basic teaching event included in each teaching scene, to obtain at least one teaching event included in each teaching scene; and
an analysis module, configured to analyze the teaching data according to the at least one teaching event corresponding to each teaching scene, to obtain an analysis result corresponding to the teaching data.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-described method steps.
The technical scheme provided by the embodiments of the application has the beneficial effects that at least:
In the embodiments of the present application, after the teaching data is obtained, it is recognized to obtain a plurality of basic teaching events, and these basic teaching events are then aggregated into teaching scenes that have regular meaning and can be understood by a user. Because the basic teaching events included in different teaching scenes do not carry identical meanings, the at least one basic teaching event in each teaching scene is classified in combination with the type information corresponding to that scene, yielding the at least one teaching event included in each scene. The teaching data is then analyzed according to the at least one teaching event corresponding to each teaching scene to obtain an analysis result corresponding to the teaching data. Since the division into teaching events can be performed automatically, the efficiency of teaching data analysis is improved. Moreover, the teaching events are further subdivided from the basic teaching events, so the content they characterize is more specific and the classification of events is refined, while combining the teaching scenes improves the accuracy of the obtained teaching events.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a teaching data analysis method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a scene change provided by an embodiment of the application;
FIG. 3 is a schematic representation of a code provided by an embodiment of the present application;
fig. 4 is a flow chart of a teaching data analysis method according to an embodiment of the present application;
fig. 5 is a schematic content diagram of a question-answer scenario provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a teaching data analysis device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless expressly specified and limited otherwise, "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed steps or elements but may include other steps or elements not listed or inherent to it. The specific meanings of the above terms in the present application will be understood by those of ordinary skill in the art on a case-by-case basis. Furthermore, unless otherwise indicated, "a plurality of" and "at least two" mean two or more, and "at least one" means one or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, "A and/or B" may indicate A alone, both A and B, or B alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals according to the embodiments of the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, teaching data and the like referred to in this specification are acquired with sufficient authorization.
In one embodiment, as shown in fig. 1, a flow chart of a teaching data analysis method according to the present application is provided. The method may be implemented by a computer program and may run on a von Neumann system-based teaching data analysis device. The computer program may be integrated in an application or may run as a stand-alone tool-type application.
Specifically, the teaching data analysis method includes:
S101, acquiring teaching data, and identifying a plurality of basic teaching events according to the teaching data.
In this embodiment, the teaching data is data carrying teaching content, used to represent what occurs during classroom teaching, and may include content such as the behaviors of teachers and students. For example, the teaching data is a classroom teaching video, classroom teaching pictures, and the like. A basic teaching event can be understood as an easily discernible basic event occurring during teaching. For example, the basic teaching event is a basic event such as a teacher speaking or a student speaking; such a basic event only needs to distinguish the speaking subject.
In one embodiment, the tutorial data includes tutorial vision data and tutorial voice data.
Specifically, teaching visual data and teaching voice data in the classroom teaching process are obtained, and a plurality of basic teaching events are identified from them. Further, the teaching visual data and teaching voice data can be collected through sensors such as cameras and microphones arranged in the classroom, and the teaching data collected in real time from the classroom is recognized to obtain the plurality of basic teaching events.
In another embodiment, acquiring teaching data and identifying a plurality of basic teaching events according to the teaching data comprises: acquiring the teaching data; recognizing the teaching visual data and the teaching voice data respectively to obtain a first recognition result corresponding to the teaching visual data and a second recognition result corresponding to the teaching voice data; and obtaining the plurality of basic teaching events according to the first recognition result and the second recognition result.
The teaching visual data is recognized to obtain the corresponding first recognition result; the teaching visual data can be recognized through computer vision (CV), and the obtained first recognition result serves as a basic action event representing the action of the subject corresponding to the teaching visual data. For example, the first recognition result is a basic action event such as an on-stage event or a stand-up event.
Computer vision refers to using cameras and computers in place of human eyes to perform machine vision tasks such as recognizing, tracking, and measuring targets, with further graphics processing so that the computer produces an image more suitable for human observation or for transmission to an instrument for detection.
The teaching voice data is recognized to obtain the corresponding second recognition result; the teaching voice data can be recognized through automatic speech recognition (ASR), and the obtained second recognition result serves as a basic voice event representing the speech of the subject corresponding to the teaching voice data. For example, the second recognition result is a basic voice event such as a teacher speaking or a teacher asking a question.
Automatic speech recognition converts the lexical content of human speech into computer-readable input such as key presses, binary codes, or character sequences; it can convert a natural speech signal into machine-recognizable text information.
Specifically, the acquired teaching visual data and teaching voice data are recognized respectively to obtain the first recognition result corresponding to the teaching visual data and the second recognition result corresponding to the teaching voice data, and the plurality of basic teaching events are obtained by combining the two results. That is, the teaching visual data and the teaching voice data are recognized respectively to obtain basic action events and basic voice events, and the plurality of basic teaching events are obtained from those basic action events and basic voice events.
In this embodiment, the acquired teaching visual data and the acquired teaching voice data are respectively identified to obtain a first identification result and a second identification result, the first identification result and the second identification result have an association relationship, and a plurality of basic teaching events are obtained by combining the two results, so that the accuracy of identifying the teaching data is improved.
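As a minimal sketch of combining the two recognition results (all class, function, and event-label names below are illustrative assumptions, not from the patent), the vision-derived and speech-derived events can simply be merged into one time-ordered stream of basic teaching events:

```python
from dataclasses import dataclass

@dataclass
class BasicEvent:
    source: str   # "vision" (CV result) or "speech" (ASR result)
    label: str    # hypothetical label, e.g. "student_stand"
    start: float  # seconds from the start of the lesson
    end: float

def merge_events(vision_events, speech_events):
    """Merge the first (vision) and second (speech) recognition
    results into one time-ordered stream of basic teaching events."""
    return sorted(vision_events + speech_events, key=lambda e: e.start)

vision = [BasicEvent("vision", "student_stand", 12.0, 14.0)]
speech = [BasicEvent("speech", "teacher_question", 10.0, 13.0),
          BasicEvent("speech", "student_speak", 14.5, 22.0)]

events = merge_events(vision, speech)
print([e.label for e in events])
# ['teacher_question', 'student_stand', 'student_speak']
```

Ordering by start time preserves the before/after relationships between events that the later scene aggregation depends on.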
S102, carrying out aggregation processing on a plurality of basic teaching events to obtain at least one teaching scene.
In this embodiment, a teaching scene is a scene in the classroom teaching process that has regular meaning and can be understood by a user. For example, the teaching scene is a question scene, an on-stage interaction scene, and the like.
In this embodiment, the plurality of basic teaching events are aggregated to obtain at least one teaching scene. In information science, aggregation refers to selecting, analyzing, and classifying related data and finally analyzing them to obtain a required result; it mainly refers to a data conversion process that produces a scalar value from an array. In the present application, the aggregation processing uses complex event processing (CEP), an analysis technology based on event streams in a dynamic environment. Events are usually meaningful state changes; by analyzing the relationships among events with techniques such as filtering, correlation, and aggregation, detection rules are formulated according to the temporal and aggregation relationships between events, sequences of events meeting the rules are continuously queried from the event stream, and more complex composite events are finally obtained through analysis.
In one embodiment, aggregating the plurality of basic teaching events to obtain at least one teaching scene comprises: aggregating the plurality of basic teaching events according to a preset conversion rule, to obtain the at least one teaching scene.
The preset conversion rule is set according to the association relationships between basic teaching events: when the basic teaching events satisfy the preset conversion rule, the current teaching scene is converted to the next teaching scene according to the sequential relationships between the events. By converting teaching scenes through the preset conversion rules, the basic teaching events are aggregated to obtain at least one teaching scene. For example, when a student speaking event and a student standing event occur after a teacher question event, the teaching scene is converted into a question-answer scene.
A basic teaching event is an action or language event that frequently occurs in the classroom teaching process, and may be an action/language event of a teacher or of a student. For example, the basic teaching event is a student hand-raising event, a student standing event, a student on-stage event, a student speaking event, a teacher speaking event, and the like. As shown in fig. 2, fig. 2 is a schematic diagram of a scene transition according to an embodiment of the present application. Fig. 2 includes a plurality of basic teaching events, specifically basic teaching events such as a teacher question event, a student speaking event, a student standing event, and a teacher speaking event, as well as teaching scenes, namely a question scene and a question-answer scene. When a teacher question event is detected, the scene is converted into the question scene; after entering the question scene, when the basic teaching events satisfy the student standing event and the student speaking event, the scene is converted into the question-answer scene; and when a teacher speaking event is detected in the question scene, the teaching scene remains unchanged. The types and numbers of basic teaching events and teaching scenes shown in fig. 2 are only schematic, and the present disclosure is not limited thereto.
Specifically, the plurality of basic teaching events are aggregated according to the preset conversion rule to obtain at least one teaching scene. For example, a finite state machine (FSM) is used to aggregate the plurality of basic teaching events according to the preset conversion rule, so as to obtain the at least one teaching scene.
A finite state machine is simply referred to as a state machine. A state machine has three components: State, Event, and Action; an Event triggers a State transition and the execution of an Action. Executing an action is not mandatory; a state may be transferred without any action being specified. In general, a state machine is a mathematical model that represents a finite number of states, the transitions between those states, and the execution of actions. It may be represented by the formula State(S) + Event(E) => Action(A) + State(S1), i.e., in state S, receiving event E causes the state to transition to S1, accompanied by the execution of action A. In the embodiment of the application, a finite state machine is used to aggregate the plurality of basic teaching events: the teaching scenes to be converted are set in the State part, and the Events are the basic teaching events.
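The scene transitions of fig. 2 can be sketched as a small table-driven finite state machine. This is an illustrative assumption of how such an FSM might look (the scene and event names are made up, and the simplified table transitions on the student speaking event without strictly verifying that a standing event preceded it):

```python
# Transition table: (current_scene, basic_event) -> next_scene.
# Events not listed leave the scene unchanged (e.g. a teacher
# speaking event inside the question scene, as in fig. 2).
TRANSITIONS = {
    ("lecture", "teacher_question"): "question",
    ("question", "student_speak"): "question_answer",
    ("question_answer", "teacher_question"): "question",
}

def aggregate_scenes(events, start="lecture"):
    """Run the FSM over a stream of basic teaching events and
    return the sequence of teaching scenes entered."""
    scene, scenes = start, [start]
    for ev in events:
        nxt = TRANSITIONS.get((scene, ev), scene)  # default: stay put
        if nxt != scene:
            scenes.append(nxt)
        scene = nxt
    return scenes

print(aggregate_scenes(
    ["teacher_question", "teacher_speak", "student_stand", "student_speak"]))
# ['lecture', 'question', 'question_answer']
```

The default of "stay in the current scene" mirrors the fig. 2 behavior where a teacher speaking event inside the question scene leaves the scene unchanged.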
In the embodiment of the application, the plurality of basic teaching events are aggregated into teaching scenes by setting preset conversion conditions and the corresponding teaching scenes. The teaching scenes are easy to understand and logically coherent, and dividing the basic teaching events by teaching scene makes further processing convenient.
S103, respectively classifying at least one basic teaching event included in each teaching scene according to the type information corresponding to each teaching scene to obtain at least one teaching event included in each teaching scene.
In this embodiment, a teaching event is an event obtained by subdividing a basic teaching event, and the content it represents is more specific than that of the basic teaching event. For example, the basic teaching event may be a teacher speaking, and the corresponding teaching event may be accepting feelings, praising or encouraging, giving directions or instructions, and the like.
Specifically, at least one basic teaching event included in each teaching scene is respectively classified and processed by combining type information corresponding to different teaching scenes, so as to obtain at least one teaching event included in each teaching scene. For example, the student speaking events in the question-answering scene are classified as student passive speaking events, and at least the student passive speaking events included in the question-answering scene are obtained.
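A minimal sketch of such scene-dependent classification is a lookup keyed by scene type (the rule table and labels below are illustrative assumptions; the patent's example is that a student speaking event inside a question-answer scene becomes a student passive speaking event):

```python
# Scene-dependent refinement of basic events into teaching events.
# Only the question_answer/student_speak mapping comes from the
# patent's example; the other entries are hypothetical.
SCENE_RULES = {
    "question_answer": {"student_speak": "student_passive_speak"},
    "lecture": {"teacher_speak": "teacher_lecturing",
                "student_speak": "student_active_speak"},
}

def classify(scene_type, basic_event):
    """Map a basic teaching event to a finer teaching event using
    the type information of the scene that contains it; events with
    no rule pass through unchanged."""
    return SCENE_RULES.get(scene_type, {}).get(basic_event, basic_event)

print(classify("question_answer", "student_speak"))  # student_passive_speak
print(classify("lecture", "student_speak"))          # student_active_speak
```

The same basic event thus yields different teaching events in different scenes, which is exactly why the scene type information is needed before classification.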
S104, analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data.
In this embodiment, the teaching data is analyzed according to the at least one teaching event obtained through recognition and conversion, so as to obtain the analysis result corresponding to the teaching data. For example, after student passive speaking events and student active speaking events are recognized and converted, the proportion of passive to active speaking events can be calculated, as can the proportion of student speaking time in the whole class duration, so that the teacher's classroom teaching style represented by the teaching data can be obtained through analysis.
Further, teaching advice can be provided to the teacher according to the analysis results. For example, the advice may be to pay more attention to encouraging or praising students during teaching.
In one embodiment, analyzing the teaching data according to the at least one teaching event corresponding to each teaching scene to obtain the analysis result comprises: encoding the at least one teaching event corresponding to each teaching scene in sequence, according to the chronological order in which those events occur, to obtain encoded data corresponding to each teaching scene; constructing a teaching analysis matrix from the at least one piece of encoded data; and analyzing the teaching analysis matrix to obtain the analysis result corresponding to the teaching data.
As shown in FIG. 3, FIG. 3 is a schematic diagram of a code provided in an embodiment of the present application. The codes are divided into three categories: teacher language, student language, and silence or confusion. Teacher language includes accepting feelings, praising or encouraging, accepting or using student ideas, questioning, lecturing, giving directions or instructions, and criticizing or justifying authority; these events correspond to codes 1-7 in order, where 1-4 correspond to indirect influence and 5-7 to direct influence. Student language comprises passive speaking and active speaking, corresponding to codes 8 and 9 respectively. Silence or confusion corresponds to invalid language and is coded 10.
Specifically, the teaching events are encoded in sequence according to their occurrence times to obtain the corresponding encoded data, a teaching analysis matrix is constructed from the encoded data, and the teaching analysis matrix is analyzed to obtain the analysis result. For example, encoding in order of occurrence yields encoded data consisting of a series of codes, and adjacent codes are combined into code pairs. Thus, apart from the first and last codes, every code appears in two code pairs, and the frequency of each code pair is recorded in the teaching analysis matrix. The teaching analysis matrix is then analyzed to obtain the analysis result corresponding to the teaching data.
The rows and columns of the teaching analysis matrix correspond to the values in the code pairs. For example, if the code pair (8, 4) appears only once, 1 is filled in at row 8, column 4 of the teaching analysis matrix.
The teaching analysis matrix is analyzed to obtain the analysis result corresponding to the teaching data, which includes obtaining an analysis result by observing the distribution within the matrix. For example, the region where rows 1-3 intersect columns 1-3 is the active-interaction region: if the records in this region are dense, it reflects emotional interaction between the teacher and students and is a presentation of active teacher-student interaction. Desired analysis results can also be obtained by computing the proportions of codes corresponding to particular event content; for example, the ratio of the counts recorded in rows 1-7 to the total counts in rows 1-10 gives the teacher language ratio in classroom teaching.
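A minimal sketch of building the matrix and computing the teacher language ratio (the code sequence below is a made-up lesson fragment; the pair-counting rule and the rows-1-7 ratio follow the description above):

```python
def analysis_matrix(codes, size=10):
    """Build the size x size teaching analysis matrix: each adjacent
    code pair (a, b) increments row a, column b (codes are 1-based,
    so index with code - 1)."""
    m = [[0] * size for _ in range(size)]
    for a, b in zip(codes, codes[1:]):
        m[a - 1][b - 1] += 1
    return m

codes = [4, 8, 8, 8, 4, 8, 2, 5, 5, 10]  # illustrative fragment
m = analysis_matrix(codes)

total = sum(sum(row) for row in m)
teacher_counts = sum(sum(row) for row in m[:7])  # rows for codes 1-7
teacher_ratio = teacher_counts / total  # 5 of 9 pairs start with a teacher code

print(m[7][3])  # pair (8, 4) appears once, so row 8 column 4 holds 1
```

Note that with 10 codes there are 9 adjacent pairs, so every code except the first and last indeed appears in exactly two pairs.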
In this embodiment, recording teaching events by converting them into codes avoids the large number of repeated data records that directly recording continuous behavior would produce, which improves recording efficiency, eliminates the need to store large amounts of repeated encoded data, and saves storage space.
In another embodiment, encoding the at least one teaching event corresponding to each teaching scene in sequence according to the chronological order in which those events occur, to obtain the encoded data corresponding to each teaching scene, comprises: encoding the at least one teaching event corresponding to each teaching scene in sequence according to both the chronological order in which the events occur and the duration corresponding to each event, so as to obtain the encoded data corresponding to each teaching scene.
Specifically, encoding is performed in sequence according to the occurrence time and duration of the teaching events to obtain the encoded data corresponding to the teaching data. The encoded data characterizes not only the corresponding teaching events but also their occurrence times and durations, converting complex classroom data into intuitive, concise encoded data. Analyzing the encoded data to obtain the analysis result of the teaching data improves the efficiency of classroom teaching analysis.
In another embodiment, encoding the at least one teaching event corresponding to each teaching scene according to its chronological order and duration comprises: continuously encoding the at least one teaching event corresponding to each teaching scene at a preset time interval, according to the chronological order in which the events occur and the duration corresponding to each event, so as to obtain the encoded data corresponding to each teaching scene.
Specifically, according to the occurrence time and duration of each teaching event, the event is continuously encoded at a preset time interval to obtain the encoded data corresponding to the teaching data. The preset time interval divides a teaching event into a plurality of segments, each of duration equal to the preset time interval, and each segment corresponds to one code. It will be appreciated that the smaller the preset time interval, the higher the resulting encoding precision. For example, if a teaching event is a student passive speaking event with a duration of 8 seconds and the preset time interval is 1 second, the code corresponding to the event is "8 8 8 8 8 8 8 8".
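A minimal sketch of this interval-based expansion (the function name and the rounding choice are assumptions; only the 8-second/1-second example comes from the text):

```python
def encode_event(code, duration, interval=1.0):
    """Expand one teaching event into repeated codes, one code per
    preset time interval of its duration."""
    n = max(1, round(duration / interval))  # at least one code per event
    return [code] * n

# 8-second student passive speaking event (code 8), 1 s interval:
print(encode_event(8, 8.0, 1.0))  # [8, 8, 8, 8, 8, 8, 8, 8]
# The same event sampled at the traditional 3 s loses resolution:
print(encode_event(8, 8.0, 3.0))  # [8, 8, 8]
```

Shrinking the interval parameter directly trades storage for encoding precision, which is the effect the embodiment relies on.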
In this embodiment, encoding is performed according to the occurrence time and duration of the teaching event at a preset time interval. Compared with the 3-second sampling of the traditional recording method, setting the preset time interval improves the encoding precision and thus the accuracy of the teaching data analysis.
In the embodiment of the application, after the teaching data is obtained, the teaching data is identified and processed to obtain a plurality of basic teaching events, and the plurality of basic teaching events are further aggregated and processed to obtain a teaching scene which has a certain rule meaning and can be understood by a user. The content meanings of the basic teaching events included in different teaching scenes are not identical, so that at least one basic teaching event in each teaching scene is respectively classified and processed by combining the type information corresponding to each teaching scene, and at least one teaching event included in each teaching scene is obtained. And analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data. The teaching data analysis provided by the application can be automatically processed when the teaching events are divided, so that the efficiency of teaching data analysis is improved. The teaching events are further classified from the basic teaching events, the content of the teaching event characterization is more specific compared with the basic teaching events, the classification of the events is refined, and meanwhile, the accuracy of obtaining the teaching events is improved by combining the teaching scenes. Furthermore, the teaching events are encoded at shorter time intervals compared with the traditional teaching data analysis mode, so that encoded data with higher precision is obtained, and the precision of an analysis result is improved.
Referring to fig. 4, fig. 4 is a flow chart of a teaching data analysis method according to an embodiment of the application. The method may be implemented by means of a computer program and may run on a teaching data analysis device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a stand-alone utility application.
Specifically, the teaching data analysis method includes:
S201, acquiring teaching data, and identifying a plurality of basic teaching events according to the teaching data.
Specifically, S201 and S101 are identical, and will not be described here again.
S202, aggregation processing is carried out on a plurality of basic teaching events, and at least one teaching scene is obtained.
Specifically, S202 and S102 are identical, and will not be described here again.
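The aggregation of S202 is not detailed in this passage (it refers back to S102). A minimal sketch, assuming one simple pattern rule in which a teacher question immediately followed by student speech forms a question-answer scene, might look like this; the event/scene names and the rule itself are hypothetical:

```python
def aggregate_events(events: list[dict]) -> list[dict]:
    """Group consecutive basic teaching events into teaching scenes.

    Illustrative rule only: a "teacher_question" event immediately
    followed by a "student_speaking" event is aggregated into one
    question-answer scene; any other event forms its own scene.
    """
    scenes = []
    i = 0
    while i < len(events):
        if (events[i]["type"] == "teacher_question"
                and i + 1 < len(events)
                and events[i + 1]["type"] == "student_speaking"):
            scenes.append({"scene": "question_answer", "events": events[i:i + 2]})
            i += 2  # both events consumed by the scene
        else:
            scenes.append({"scene": "other", "events": [events[i]]})
            i += 1
    return scenes
```

A real system would use a richer set of preset conversion rules, but the shape is the same: scan the event stream and fold recognized patterns into scenes.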
S203, determining a preset classification rule corresponding to the target teaching scene according to the type information corresponding to the target teaching scene in at least one teaching scene.
The preset classification rules are used for classifying the basic teaching events. For example, one preset classification rule matches a teacher speaking event against a plurality of imperative keywords such as "quiet" and "listen carefully"; when at least one keyword is matched, the teacher speaking event is classified into the corresponding category.
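A keyword rule like the one just described can be sketched as below. The keyword list, category names, and fall-back behaviour are assumptions for illustration, not the actual rule set of the application:

```python
# Hypothetical imperative keywords for a "giving directions" rule.
DIRECTIVE_KEYWORDS = ("quiet", "pay attention", "listen carefully")

def classify_teacher_speech(transcript: str) -> str:
    """Classify a teacher speaking event by keyword matching.

    Returns "giving directions" when any imperative keyword is
    present, otherwise falls back to the generic "teaching" category.
    """
    text = transcript.lower()
    if any(kw in text for kw in DIRECTIVE_KEYWORDS):
        return "giving directions"
    return "teaching"

print(classify_teacher_speech("Please be quiet and open your books"))
```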
The type information is used for representing the type of the target teaching scene, for example, different types such as a question-answer scene or an on-stage interaction scene.
Specifically, the at least one teaching scene comprises a target teaching scene; before the at least one basic teaching event included in the target teaching scene is classified, a preset classification rule corresponding to the target teaching scene is determined according to the type information corresponding to the target teaching scene.
S204, respectively classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene to obtain at least one teaching event included in the target teaching scene.
In this embodiment, according to the type information corresponding to the target teaching scene, a preset classification rule corresponding to the target teaching scene is determined. And further respectively classifying at least one basic teaching event included in the target teaching scene according to the preset classification rule to obtain at least one teaching event included in the target teaching scene.
Specifically, basic teaching events in the target teaching scene are classified according to preset classification rules corresponding to the target teaching scene. For example, the basic teaching event is a student speaking event, the target teaching scene is a question-answer scene, and the student speaking event in the question-answer scene is classified as a student passive speaking event in the teaching event according to a preset classification rule corresponding to the question-answer scene.
In another embodiment, the preset classification rule comprises a classification processing mode and an event type set, wherein the event type set comprises a plurality of teaching events and the preset classification rules corresponding to different teaching scenes are not identical. Classifying the at least one basic teaching event included in the target teaching scene according to the preset classification rule corresponding to the target teaching scene to obtain the at least one teaching event included in the target teaching scene comprises: processing the at least one basic teaching event according to the classification processing mode corresponding to the target teaching scene, to obtain at least one teaching event, included in the target teaching scene, that belongs to the event type set corresponding to the target teaching scene.
The preset classification rule comprises a classification processing mode and an event type set. The classification processing mode is used for classifying basic teaching events into corresponding teaching events. The event type set comprises all the teaching event types into which basic teaching events can be classified according to the classification processing mode. For example, the event type set includes: accepting emotion, praising or encouraging, accepting or using a student's ideas, asking questions, lecturing, giving directions or instructions, criticizing or maintaining authority, student active speaking, student passive speaking, and silence or invalid speech.
Specifically, the preset classification rules corresponding to different teaching scenes are not identical. The at least one basic teaching event in the target teaching scene is classified according to the preset classification rule corresponding to the target teaching scene, that is, processed according to the classification processing mode corresponding to that scene, to obtain at least one teaching event, included in the target teaching scene, that belongs to the event type set corresponding to the target teaching scene. For example, a student speaking event in a question-answer scene is classified as a student passive speaking event according to the corresponding preset classification rule, while a student speaking event in a classroom exercise scene is classified as a student active speaking event.
In this embodiment, the classification processing modes and event type sets corresponding to different teaching scenes are not identical; teaching events with teaching significance are extracted in combination with the recognized teaching scene, which improves the recognition accuracy.
In another embodiment, the preset classification rule includes a plurality of classification processing modes, and the classification processing modes between at least one basic teaching event included in the teaching scene are not identical.
Specifically, the classification processing modes for different basic teaching events are not identical; the preset classification rule comprises multiple classification processing modes, each used to process the corresponding basic teaching event. For example, as shown in fig. 5, fig. 5 is a schematic diagram of a question-answer scene provided by an embodiment of the present application, which includes a teacher questioning event 510, a student speaking event 520, a teacher speaking event 530, and a student standing event 540, occurring in the sequential order shown in fig. 5. The classification processing modes for the teacher questioning event 510, the student speaking event 520 and the teacher speaking event 530 are different. The teacher questioning event 510 is classified as an asking questions event, and the student speaking event 520 is classified as a student passive speaking event. The teacher speaking event 530 is matched against a plurality of complimentary keywords: when at least one keyword is matched, the teacher speaking event 530 is classified as a praising or encouraging event; if no keyword is matched, the text of the teacher speaking event 530 is further compared with that of the student speaking event 520, and when the text similarity exceeds a first preset threshold, the teacher speaking event 530 is classified as an accepting or using a student's ideas event.
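The two-stage rule for the teacher speaking event 530 (keyword match first, text similarity second) can be sketched as follows. The keyword list, the similarity measure (`difflib.SequenceMatcher`), the threshold value, and all names are assumptions for illustration; the application does not specify how text similarity or the first preset threshold are computed:

```python
from difflib import SequenceMatcher

# Hypothetical complimentary keywords and similarity threshold.
PRAISE_KEYWORDS = ("well done", "great", "excellent")
SIMILARITY_THRESHOLD = 0.6  # stand-in for the "first preset threshold"

def classify_followup(teacher_text: str, student_text: str) -> str:
    """Classify the teacher speech that follows a student answer.

    Stage 1: complimentary keyword match -> praising or encouraging.
    Stage 2: high text overlap with the student's answer suggests the
    teacher is restating the student's idea.
    """
    lowered = teacher_text.lower()
    if any(kw in lowered for kw in PRAISE_KEYWORDS):
        return "praising or encouraging"
    ratio = SequenceMatcher(None, lowered, student_text.lower()).ratio()
    if ratio > SIMILARITY_THRESHOLD:
        return "accepting or using student ideas"
    return "teaching"  # fall-back category
```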
In this embodiment, different basic teaching events are classified by different classification processing modes, so as to obtain teaching events, and the recognition accuracy of the teaching events is improved.
Further, in addition to the praising or encouraging events, accepting or using a student's ideas events, asking questions events and student passive speaking events described above in connection with the question-answer scene, there are various other classification processing modes. For example: a teacher speaking event is matched against a plurality of imperative keywords, and when at least one keyword is matched, the teacher speaking event is classified as a giving directions or instructions event; in combination with a classroom-noise scene, a teacher speaking event after which the classroom quiets down is classified as a criticizing or maintaining authority event; a student speaking event in a scene other than a questioning or question-answer scene is classified as a student active speaking event; it is judged whether the text similarity between a teacher speaking event and the student speaking event preceding it exceeds a second preset threshold, and when it does, the teacher speaking event is classified as an accepting emotion event; and when the preset classification rules cannot otherwise classify a teacher speaking event, it is classified as a lecturing event.
S205, analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data.
Specifically, S205 and S104 are identical, and will not be described here again.
In the embodiment of the application, after the teaching data is obtained, the teaching data is identified and processed to obtain a plurality of basic teaching events, and the plurality of basic teaching events are further aggregated and processed to obtain a teaching scene which has a certain rule meaning and can be understood by a user. The content meanings of the basic teaching events included in different teaching scenes are not identical, so that at least one basic teaching event in each teaching scene is respectively classified and processed by combining the type information corresponding to each teaching scene, and at least one teaching event included in each teaching scene is obtained. And analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data. The teaching data analysis provided by the application can be automatically processed when the teaching events are divided, so that the efficiency of teaching data analysis is improved. The method comprises the steps of obtaining preset classification rules according to type information of teaching scenes, wherein the preset classification rules corresponding to different teaching scenes are not identical, and classification processing modes corresponding to different basic teaching events are not identical. The teaching events with teaching significance are identified and extracted by combining different teaching scenes, so that the identification accuracy is improved, and the accuracy of analysis results is further improved.
Fig. 6 schematically illustrates a structural diagram of a teaching data analysis device according to an embodiment of the present application. The teaching data analysis device may be implemented as all or part of the device by software, hardware, or a combination of both. As shown in fig. 6, the teaching data analyzing apparatus 60 may include an identifying module 601, an aggregating module 602, a classifying module 603, and an analyzing module 604. Wherein:
The recognition module 601 is configured to obtain teaching data, and recognize a plurality of basic teaching events according to the teaching data.
And the aggregation module 602 is configured to aggregate the plurality of basic teaching events to obtain at least one teaching scene.
The classification module 603 is configured to perform classification processing on at least one basic teaching event included in each teaching scene according to type information corresponding to each teaching scene, so as to obtain at least one teaching event included in each teaching scene.
And the analysis module 604 is configured to perform analysis processing on the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively, so as to obtain an analysis result corresponding to the teaching data.
In some possible embodiments, classification module 603 includes:
And the determining unit is used for determining a preset classification rule corresponding to the target teaching scene according to the type information corresponding to the target teaching scene in the at least one teaching scene.
The classification unit is used for respectively classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene to obtain at least one teaching event included in the target teaching scene.
In some possible embodiments, the preset classification rule includes a classification processing manner and an event type set, where the event type set includes a plurality of teaching events, and the preset classification rules corresponding to different teaching scenes are not identical, and the classification unit includes:
And the classifying sub-unit is used for classifying at least one basic teaching event included in the target teaching scene according to a preset classifying rule corresponding to the target teaching scene and respectively processing the basic teaching event according to a classifying processing mode corresponding to the target teaching scene to obtain at least one teaching event included in the target teaching scene and belonging to an event type set corresponding to the target teaching scene.
In some possible embodiments, the preset classification rule includes multiple classification processing manners, and the classification processing manners between at least one basic teaching event included in the teaching scene are not identical.
In some possible embodiments, the analysis module 604 includes:
The coding unit is used for coding the at least one teaching event corresponding to each teaching scene in sequence according to the time sequence of the at least one teaching event corresponding to each teaching scene in sequence, so as to obtain coded data corresponding to each teaching scene.
And the construction unit is used for constructing a teaching analysis matrix according to at least one piece of coded data.
And the analysis unit is used for carrying out analysis processing on the teaching analysis matrix to obtain an analysis result corresponding to the teaching data.
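The construction and analysis units above can be sketched as a transition matrix over the coded event sequence. This is a minimal sketch assuming a Flanders-style matrix in which cell (i, j) counts how often code i is immediately followed by code j; the application does not specify the matrix form, so the function, category count, and derived statistic are illustrative:

```python
def build_analysis_matrix(codes: list[int], n_categories: int = 10) -> list[list[int]]:
    """Build a transition matrix from a per-interval code sequence.

    Cell [i][j] counts how often code i+1 is immediately followed
    by code j+1 (codes are assumed to run from 1 to n_categories).
    """
    m = [[0] * n_categories for _ in range(n_categories)]
    for a, b in zip(codes, codes[1:]):  # each adjacent pair is one transition
        m[a - 1][b - 1] += 1
    return m

# Question (4) -> student response (8) -> praise (2) -> lecture (5)
codes = [4, 4, 8, 8, 8, 2, 5, 5]
matrix = build_analysis_matrix(codes)
total = sum(sum(row) for row in matrix)  # total number of transitions
```

Ratios over regions of such a matrix (e.g. the share of transitions originating from teacher-talk rows) are one conventional way to turn the coded data into an analysis result.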
In some possible embodiments, the coding unit comprises:
The coding subunit is configured to sequentially code at least one teaching event corresponding to each teaching scene according to a time sequence of sequentially occurring at least one teaching event corresponding to each teaching scene and a duration corresponding to each at least one teaching event, so as to obtain coded data corresponding to each teaching scene.
In some possible embodiments, the coding subunit is configured to sequentially perform continuous coding on at least one teaching event corresponding to each teaching scene according to a time sequence of occurrence of at least one teaching event corresponding to each teaching scene and a duration corresponding to each at least one teaching event, so as to obtain coded data corresponding to each teaching scene.
In some possible embodiments, the aggregation module 602 includes:
and the aggregation unit is used for carrying out aggregation processing on the plurality of basic teaching events according to a preset conversion rule to obtain at least one teaching scene.
In some possible embodiments, the tutorial data includes tutorial vision data and tutorial voice data.
In some possible embodiments, the identification module 601 includes:
And the acquisition unit is used for acquiring the teaching data.
And the recognition unit is used for recognizing the teaching visual data and the teaching voice data respectively to obtain a first recognition result corresponding to the teaching visual data and a second recognition result corresponding to the teaching voice data.
And the combination unit is used for obtaining a plurality of basic teaching events according to the first identification result and the second identification result.
In the embodiment of the application, after the teaching data is obtained, the teaching data is identified and processed to obtain a plurality of basic teaching events, and the plurality of basic teaching events are further aggregated and processed to obtain a teaching scene which has a certain rule meaning and can be understood by a user. The content meanings of the basic teaching events included in different teaching scenes are not identical, so that at least one basic teaching event in each teaching scene is respectively classified and processed by combining the type information corresponding to each teaching scene, and at least one teaching event included in each teaching scene is obtained. And analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data. The teaching data analysis provided by the application can be automatically processed when the teaching events are divided, so that the efficiency of teaching data analysis is improved. The method comprises the steps of obtaining preset classification rules according to type information of teaching scenes, wherein the preset classification rules corresponding to different teaching scenes are not identical, and classification processing modes corresponding to different basic teaching events are not identical. The teaching events with teaching significance are identified and extracted by combining different teaching scenes, so that the identification accuracy is improved, and the accuracy of analysis results is also improved. Furthermore, the teaching events are encoded at shorter time intervals compared with the traditional teaching data analysis mode, so that encoded data with higher precision is obtained, and the precision of an analysis result is improved.
It should be noted that, when the teaching data analysis device provided in the foregoing embodiment performs the teaching data analysis method, only the division of the foregoing functional modules is used as an example, and in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the teaching data analysis device and the teaching data analysis method provided in the foregoing embodiments belong to the same concept, which embody detailed implementation procedures in the method embodiments, and are not described herein again.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein, which when executed on a computer or processor, cause the computer or processor to perform one or more of the steps of the embodiments shown in fig. 1-5 described above. The respective constituent modules of the above-described teaching data analysis device may be stored in the computer-readable storage medium if implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, the flows or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (Digital Versatile Disc, DVD)), or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)), or the like.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program, which may be stored in a computer-readable storage medium, instructing relevant hardware, and which, when executed, may comprise the embodiment methods as described above. The storage medium includes various media capable of storing program codes such as a Read Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above-described embodiments are merely illustrative of the preferred embodiments of the present application and are not intended to limit the scope of the present application, and various modifications and improvements made by those skilled in the art to the technical solution of the present application should fall within the scope of protection defined by the claims of the present application without departing from the design spirit of the present application.
The present disclosure further provides a computer program product, where at least one instruction is stored, where the at least one instruction is loaded by the processor and executed by the processor to perform the teaching data analysis method according to the embodiment shown in fig. 1 to 5, and the specific execution process may refer to the specific description of the embodiment shown in fig. 1 to 5, which is not repeated herein.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device 70 may include at least one processor 701, at least one network interface 704, a user interface 703, memory 705, and at least one communication bus 702.
Wherein the communication bus 702 is used to enable connected communications between these components.
The user interface 703 may include a Display screen (Display), a Camera (Camera), and the optional user interface 703 may further include a standard wired interface, and a wireless interface.
The network interface 704 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 701 may include one or more processing cores. The processor 701 utilizes various interfaces and lines to connect various portions of the overall electronic device 70, performs various functions of the electronic device 70, and processes data by executing instructions, programs, code sets, or instruction sets stored in the memory 705 and invoking data stored in the memory 705. Alternatively, the processor 701 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), or programmable logic array (Programmable Logic Array, PLA). The processor 701 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing contents required to be displayed by the display screen; and the modem is used for processing wireless communication. It will be appreciated that the modem may not be integrated into the processor 701 and may be implemented by a single chip.
The memory 705 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 705 includes a non-transitory computer-readable storage medium. The memory 705 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 705 may include a stored program area and a stored data area: the stored program area may store instructions for implementing an operating system, instructions for at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, etc.; the stored data area may store data related to the various method embodiments described above, etc. The memory 705 may also optionally be at least one storage device located remotely from the processor 701. As shown in fig. 7, the memory 705, which is a type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a teaching data analysis application program.
In the electronic device 70 shown in fig. 7, the user interface 703 is mainly used for providing an input interface for a user to obtain data input by the user, while the processor 701 may be used for calling a teaching data analysis application program stored in the memory 705 and specifically performing the following operations:
acquiring teaching data and identifying a plurality of basic teaching events according to the teaching data;
the plurality of basic teaching events are aggregated to obtain at least one teaching scene;
According to the type information corresponding to each teaching scene, respectively classifying at least one basic teaching event included in each teaching scene to obtain at least one teaching event included in each teaching scene;
and according to at least one teaching event corresponding to at least one teaching scene, analyzing and processing the teaching data to obtain an analysis result corresponding to the teaching data.
In some possible embodiments, the processor 701 executes the classification processing on at least one basic teaching event included in each teaching scene according to the type information corresponding to each teaching scene, so as to obtain at least one teaching event included in each teaching scene, and specifically executes:
determining a preset classification rule corresponding to the target teaching scene according to the type information corresponding to the target teaching scene in the at least one teaching scene;
And respectively classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene to obtain at least one teaching event included in the target teaching scene.
In some possible embodiments, the preset classification rule includes a classification processing manner and an event type set, where the event type set includes a plurality of teaching events, the preset classification rules corresponding to different teaching scenes are not identical, the processor 701 executes the classification processing on at least one basic teaching event included in the target teaching scene according to the preset classification rule corresponding to the target teaching scene, so as to obtain at least one teaching event included in the target teaching scene, and specifically performs:
And classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene, and obtaining at least one teaching event included in the target teaching scene and belonging to an event type set corresponding to the target teaching scene.
In some possible embodiments, the preset classification rule includes multiple classification processing manners, and the classification processing manners between at least one basic teaching event included in the teaching scene are not identical.
In some possible embodiments, the processor 701 executes the at least one teaching event corresponding to at least one teaching scene, performs analysis processing on the teaching data to obtain an analysis result corresponding to the teaching data, and specifically performs:
Coding at least one teaching event corresponding to each teaching scene in sequence according to the time sequence of the at least one teaching event corresponding to each teaching scene in sequence, so as to obtain coding data corresponding to each teaching scene;
constructing a teaching analysis matrix according to at least one piece of coded data;
and analyzing and processing the teaching analysis matrix to obtain an analysis result corresponding to the teaching data.
In some possible embodiments, the processor 701 executes the time sequence of sequentially generating the at least one teaching event corresponding to each teaching scene, sequentially encodes the at least one teaching event corresponding to each teaching scene, and obtains encoded data corresponding to each teaching scene, and specifically performs:
sequentially encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur and the duration respectively corresponding to each teaching event, to obtain encoded data corresponding to each teaching scene.
In some possible embodiments, when sequentially encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur and the duration respectively corresponding to each teaching event, to obtain encoded data corresponding to each teaching scene, the processor 701 specifically performs:
continuously encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur and the duration corresponding to each teaching event, to obtain encoded data corresponding to each teaching scene.
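One plausible reading of duration-aware continuous encoding is to sample at a fixed short time step and emit one code per step, so that longer events occupy proportionally more slots in the encoded data. The step length, event names, and codes below are assumptions; the patent does not specify them.

```python
STEP_SECONDS = 3  # assumed sampling interval; shorter steps give finer precision

def encode_with_duration(timed_events, code_of, step=STEP_SECONDS):
    """Encode (event, duration_seconds) pairs, given in time order, by
    repeating each event's code once per time step of its duration.
    Events shorter than one step still get a single slot."""
    codes = []
    for event, duration in timed_events:
        slots = max(1, round(duration / step))
        codes.extend([code_of[event]] * slots)
    return codes
```

Under this sketch, halving the step length doubles the resolution of the encoded data, which is consistent with the idea that encoding at shorter time intervals yields higher-precision analysis.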
In some possible embodiments, when aggregating the plurality of basic teaching events to obtain at least one teaching scene, the processor 701 specifically performs:
aggregating the plurality of basic teaching events according to a preset conversion rule to obtain at least one teaching scene.
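The patent does not give the form of the preset conversion rule; one simple sketch, with hypothetical scene names and trigger-event sets, groups consecutive basic events that map to the same scene type into a single scene:

```python
# Hypothetical conversion rules: each pairs a scene type with the set of
# basic events that indicate it.
CONVERSION_RULES = [
    ("lecture", {"speech", "writing"}),
    ("exercise", {"reading", "answering"}),
]

def aggregate(basic_events):
    """Group consecutive basic events of the same scene type into
    (scene_type, events) pairs; unmatched events fall into 'other'."""
    scenes = []
    for event in basic_events:
        scene_type = next(
            (name for name, triggers in CONVERSION_RULES if event in triggers),
            "other")
        if scenes and scenes[-1][0] == scene_type:
            scenes[-1][1].append(event)   # extend the current scene
        else:
            scenes.append((scene_type, [event]))  # start a new scene
    return scenes
```

This run-length grouping turns a flat event stream into user-understandable scenes, which is the role the aggregation step plays in the method.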
In some possible embodiments, the teaching data includes teaching visual data and teaching voice data.
In some possible embodiments, when acquiring teaching data and identifying a plurality of basic teaching events according to the teaching data, the processor 701 specifically performs:
acquiring teaching data;
recognizing the teaching visual data and the teaching voice data respectively, to obtain a first recognition result corresponding to the teaching visual data and a second recognition result corresponding to the teaching voice data;
and obtaining a plurality of basic teaching events according to the first recognition result and the second recognition result.
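A plausible way to combine the two recognition results into basic teaching events is to align them by timestamp. The result format `(timestamp_seconds, label)`, the tolerance, and the labels below are all assumptions for illustration.

```python
def merge_recognition(visual_results, voice_results, tolerance=1.0):
    """Merge visual and voice recognition results, each a list of
    (timestamp_seconds, label), into basic events of the form
    (timestamp, visual_label, voice_label). Results within `tolerance`
    seconds of each other are paired; the rest keep None for the
    missing modality."""
    voice = list(voice_results)
    merged = []
    for t, v_label in visual_results:
        match = next(((s, a) for s, a in voice if abs(s - t) <= tolerance), None)
        if match is not None:
            voice.remove(match)               # pair each voice result once
            merged.append((t, v_label, match[1]))
        else:
            merged.append((t, v_label, None))
    merged.extend((s, None, a) for s, a in voice)  # voice-only events
    return sorted(merged, key=lambda e: e[0])
```

Pairing the modalities like this lets a downstream rule distinguish, for example, a teacher standing silently from a teacher standing while speaking, which a single modality could not.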
In an embodiment of the present application, the electronic device 70 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an Ultra-Mobile Personal Computer (UMPC), a handheld computer, a netbook, a Personal Digital Assistant (PDA), a wearable electronic device, a virtual reality device, or the like.
In the embodiments of the present application, after the teaching data is acquired, the teaching data is recognized to obtain a plurality of basic teaching events, and the plurality of basic teaching events are further aggregated into teaching scenes that carry regular meaning and can be understood by a user. Because the content meanings of the basic teaching events included in different teaching scenes are not identical, the at least one basic teaching event in each teaching scene is classified in combination with the type information corresponding to that teaching scene, to obtain the at least one teaching event included in each teaching scene. The teaching data is then analyzed according to the at least one teaching event respectively corresponding to the at least one teaching scene, to obtain an analysis result corresponding to the teaching data. The teaching data analysis provided by the present application divides teaching events automatically, which improves the efficiency of teaching data analysis. The preset classification rule is obtained according to the type information of the teaching scene; the preset classification rules corresponding to different teaching scenes are not identical, and the classification processing manners corresponding to different basic teaching events are not identical. Teaching events with teaching significance are recognized and extracted in combination with the different teaching scenes, which improves both the recognition accuracy and the accuracy of the analysis result. Furthermore, compared with conventional teaching data analysis, the teaching events are encoded at shorter time intervals, so that encoded data with higher precision is obtained, which improves the precision of the analysis result.

Claims (13)

1. A method of teaching data analysis, the method comprising:
acquiring teaching data and identifying a plurality of basic teaching events according to the teaching data;
aggregating the plurality of basic teaching events to obtain at least one teaching scene;
According to the type information corresponding to each teaching scene, respectively classifying at least one basic teaching event included in each teaching scene to obtain at least one teaching event included in each teaching scene;
and according to at least one teaching event corresponding to at least one teaching scene, analyzing and processing the teaching data to obtain an analysis result corresponding to the teaching data.
2. The method for analyzing teaching data according to claim 1, wherein the classifying the at least one basic teaching event included in each teaching scene according to the type information corresponding to each teaching scene to obtain the at least one teaching event included in each teaching scene includes:
determining a preset classification rule corresponding to the target teaching scene according to the type information corresponding to the target teaching scene in the at least one teaching scene;
And respectively classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene to obtain at least one teaching event included in the target teaching scene.
3. The teaching data analysis method according to claim 2, wherein the preset classification rules comprise a classification processing mode and an event type set, the event type set comprises a plurality of teaching events, and the preset classification rules corresponding to different teaching scenes are not identical;
The classifying process is performed on at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene, so as to obtain at least one teaching event included in the target teaching scene, including:
And classifying at least one basic teaching event included in the target teaching scene according to a preset classification rule corresponding to the target teaching scene, and obtaining at least one teaching event included in the target teaching scene and belonging to an event type set corresponding to the target teaching scene.
4. The teaching data analysis method according to claim 3, wherein the preset classification rule includes a plurality of classification processing modes, and the classification processing modes applied to different basic teaching events included in the teaching scene are not identical.
5. The method for analyzing teaching data according to claim 1, wherein the analyzing the teaching data according to at least one teaching event corresponding to at least one teaching scene to obtain an analysis result corresponding to the teaching data includes:
sequentially encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur, to obtain encoded data corresponding to each teaching scene;
constructing a teaching analysis matrix according to the at least one piece of encoded data;
and analyzing the teaching analysis matrix to obtain an analysis result corresponding to the teaching data.
6. The method for analyzing teaching data according to claim 5, wherein the sequentially encoding the at least one teaching event corresponding to each teaching scene according to a time sequence in which the at least one teaching event corresponding to each teaching scene sequentially occurs, to obtain encoded data corresponding to each teaching scene, includes:
sequentially encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur and the duration respectively corresponding to each teaching event, to obtain encoded data corresponding to each teaching scene.
7. The method for analyzing teaching data according to claim 6, wherein the sequentially encoding at least one teaching event corresponding to each teaching scene according to a time sequence in which the at least one teaching event corresponding to each teaching scene sequentially occurs and a duration corresponding to each at least one teaching event, to obtain encoded data corresponding to each teaching scene, includes:
continuously encoding the at least one teaching event corresponding to each teaching scene according to the time order in which those teaching events occur and the duration corresponding to each teaching event, to obtain encoded data corresponding to each teaching scene.
8. The method for analyzing teaching data according to claim 1, wherein the aggregating the plurality of basic teaching events to obtain at least one teaching scene comprises:
and carrying out aggregation processing on the plurality of basic teaching events according to a preset conversion rule to obtain at least one teaching scene.
9. The teaching data analysis method according to claim 1, wherein the teaching data includes teaching visual data and teaching voice data.
10. The method of claim 9, wherein the obtaining teaching data and identifying a plurality of base teaching events from the teaching data comprises:
Acquiring teaching data;
recognizing the teaching visual data and the teaching voice data respectively, to obtain a first recognition result corresponding to the teaching visual data and a second recognition result corresponding to the teaching voice data;
and obtaining a plurality of basic teaching events according to the first recognition result and the second recognition result.
11. A teaching data analysis device, the device comprising:
the recognition module is used for acquiring teaching data and recognizing a plurality of basic teaching events according to the teaching data;
The aggregation module is used for carrying out aggregation processing on the plurality of basic teaching events to obtain at least one teaching scene;
The classification module is used for respectively classifying at least one basic teaching event included in each teaching scene according to the type information corresponding to each teaching scene to obtain at least one teaching event included in each teaching scene;
and the analysis module is used for analyzing and processing the teaching data according to at least one teaching event corresponding to at least one teaching scene respectively to obtain an analysis result corresponding to the teaching data.
12. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of claims 1 to 10.
13. An electronic device comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-10.
CN202311315869.0A 2023-10-11 2023-10-11 Teaching data analysis method, device, storage medium and electronic equipment Pending CN119809874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311315869.0A CN119809874A (en) 2023-10-11 2023-10-11 Teaching data analysis method, device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN119809874A (en) 2025-04-11

Family

ID=95273036



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination