Course content generation system and editing device for online interactive teaching

Document No.: 1816095 · Publication date: 2021-11-09

Note: This invention, "Course content generation system and editing device for online interactive teaching", was designed and created by 孙洪伟, 吴哲 and 王宇航 on 2021-06-23.

Abstract: The invention relates to a course content generation system and an editing device for online interactive teaching, wherein the course content generation system comprises a material management module, a scene editor and a course content generation module. The material management module is used for storing all materials used for generating the course content, each material being provided with at least one identifiable label. The scene editor is used for calling materials from the material management module according to the teaching information, forming at least one editable object, and editing the multi-dimensional attributes of each object to obtain a content scene corresponding to the course content. The course content generation module is used for configuring a path of a content scene according to the teaching information and obtaining online interactive course content played at the client. Both the course content generation system and the editing device can simplify the prior-art operation flow for generating course content, improve generation efficiency, and enable production at scale.

1. A course content generation system for online interactive teaching, comprising: a material management module, a scene editor and a course content generation module;

the material management module is used for storing all materials used for generating the course content, and each material is provided with at least one label capable of being identified;

the scene editor is used for calling materials from the material management module according to the teaching information, forming at least one editable object, and editing the multi-dimensional attributes of each object to obtain a content scene corresponding to the course content;

and the course content generation module is used for configuring the path of the content scene according to the teaching information and generating the course content for online interaction played at the client.

2. The course content generation system of claim 1,

the material management module comprises: a first material acquisition unit and a material storage unit;

the first material acquisition unit is used for receiving original materials uploaded by a user based on a first interface; a plurality of label options for a user to select are displayed in the first interface;

the material storage unit is used for storing original materials carrying at least one label selected by a user;

alternatively,

the material management module comprises: a second material acquisition unit, a label determination unit and a material storage unit;

the second material acquisition unit is used for receiving batch materials uploaded after a user triggers a batch uploading button;

the label determining unit is used for configuring labels for the batched materials, so that each material in the batched materials carries at least one label meeting the label specification;

and the material storage unit is used for storing the material carrying at least one label.

3. The course content generation system of claim 1,

the scene editor includes: a material calling unit, a scene editing unit and a scene preview unit;

the material calling unit is used for calling and displaying more than one material from the material management module according to the teaching information;

the scene editing unit is used for converting the called materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene;

a scene preview unit for displaying the generated content scene;

alternatively,

the scene editor includes: a material calling unit and a scene editing unit;

the material calling unit is used for calling and displaying more than one material from the material management module according to the teaching information;

and the scene editing unit is used for converting the called materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene.

4. The course content generation system of claim 3, wherein the scene editing unit is specifically configured to,

receiving a moving instruction of any material in the material display area, responding to the moving instruction, and moving the material corresponding to the moving instruction to a stage area of a scene editing unit to generate an object with a unique identifier; the material in the material display area responds to at least one moving instruction to generate at least one object, and the identification of each object is unique;

editing the multi-dimensional attributes of each object in the stage area of the scene editing unit to generate a content scene; the stage area is a visualization area for editing multi-dimensional attributes of objects to generate content scenes.

5. The course content generation system of claim 4, wherein the scene editing unit is configured to edit the multi-dimensional attributes of the objects in the stage area of the scene editing unit, which comprises:

calling a display frame of the event dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the event dimension attribute is information of an operation triggered by a specified action executed by an object at a specified moment, and the event dimension attribute comprises: double click, single click, long press or drag;

and/or calling a display frame of the behavior dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the behavior dimension attribute is behavior information which can be executed by the object, and the behavior dimension attribute comprises one or more of the following items: displaying, hiding, switching models, shifting behaviors, playing behaviors, pausing behaviors, specifying logic information among behaviors and entering a level;

and/or calling a display frame of the basic information attribute of the current processing object, responding to an adjusting instruction of a user, and adjusting at least one piece of configuration information in the display frame; the base information attributes include one or more of: name, initial position, initial state, tag state, and tag size;

and/or calling a display frame of monitoring information attributes of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame, wherein the monitoring information attributes are information used for judging whether the event of the monitoring object is successfully triggered and information used for judging whether the behavior of the monitoring object is completed.

6. The course content generation system of claim 1,

the course content generation module comprises: a node generation unit and a content generation unit;

the node generating unit is used for determining a plurality of nodes for realizing the path of the course content based on the teaching information and binding a content scene with at least one node;

and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

7. The course content generation system of claim 6,

the course content generation module further comprises: a determination condition generation unit;

and the judging condition generating unit is used for determining the judging condition among the adjacent nodes in the path according to the number of the nodes and the logical relationship of the content scene bound by each node, or determining the judging condition among the adjacent nodes in the path according to the number of the nodes, the logical relationship of the content scene bound by each node and the capability case value of the content scene.

8. An editing device for online interactive teaching, comprising:

and the scene editor is used for acquiring more than one material according to the teaching information, forming editable objects, visually editing the multi-dimensional attributes of each object, and acquiring content scenes corresponding to the course content, wherein the course content is online interactive courseware which corresponds to the teaching information and can be played at a client.

9. A course content generating apparatus for online interactive teaching, comprising:

the course content generating module is used for configuring a path of a content scene according to the teaching information and generating the course content which corresponds to the teaching information and can be played on the client side in an online interaction manner;

the content scene is a scene corresponding to the course content, which is generated by converting and editing more than one material in a scene editing mode based on the teaching information.

10. A computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are executed to implement a course content generation process in a course content generation system of online interactive teaching as claimed in any one of claims 1 to 7, or the computer-executable instructions are executed to implement a course content generation process in an editing apparatus of online interactive teaching as claimed in claim 8, or the computer-executable instructions are executed to implement a course content generation process in a course content generation apparatus of online interactive teaching as claimed in claim 9.

Technical Field

The invention relates to the technical field of internet online teaching, in particular to a course content generation system and an editing device for online interactive teaching.

Background

The courseware making process of the existing online interactive class is as follows: an editor must classify and name the elements used in multiple types of course content (i.e. interactive teaching content) in an Excel form and fill the elements in one by one; the ID of each element must be defined so that no ID is repeated, and element names may not contain Chinese characters or special symbols. A single form can hold more than 500 elements, and each Excel form contains more than 4,000 items, such as initial state, post-movement state and coordinates, that must be filled in and confirmed manually. Checking this content is an enormous amount of work, and any filling error affects the actual output of the course content.

Moreover, the edited Excel table in the prior art must still go through a coding step before the materials, as elements, can execute the events and behaviors required by the course content, and this coding step must be performed by a professional technician. The prior-art flow for generating course content is therefore complex and inefficient.

Therefore, the existing scheme for generating course content suffers from a complex operation flow, low efficiency and a high technical threshold, and cannot achieve rapid, efficient production.

Disclosure of Invention

Technical problem to be solved

In view of the above drawbacks and deficiencies of the prior art, the present invention provides a course content generation system for online interactive teaching.

(II) technical scheme

To achieve this purpose, the invention adopts the following main technical scheme:

in a first aspect, an embodiment of the present invention provides a course content generating system for online interactive teaching, including: a material management module, a scene editor and a course content generation module;

the material management module is used for storing all materials used for generating the course content, and each material is provided with at least one label capable of being identified;

the scene editor is used for calling materials from the material management module according to the teaching information, forming at least one editable object, editing each object in a multi-dimensional attribute mode, namely editing each dimensional attribute of each object, and obtaining a content scene corresponding to the course content;

and the course content generation module is used for configuring the path of the content scene according to the teaching information and generating the course content for online interaction played at the client.

Optionally, the material type in the material management module includes one or more of the following items:

a file in a picture format; a file in video format; a file in an audio format; an animation type file; an interactive file.

Optionally, the material management module includes: a first material acquisition unit and a material storage unit; the first material acquisition unit is used for receiving original materials uploaded by a user based on a first interface; a plurality of label options for the user to select are displayed in the first interface;

the material storage unit is used for storing original materials carrying at least one label selected by a user;

or, the material management module comprises: a second material acquisition unit, a label determination unit and a material storage unit;

the second material acquisition unit is used for receiving batch materials uploaded after a user triggers a batch uploading button;

the label determining unit is used for configuring labels for the batched materials, so that each material in the batched materials carries at least one label meeting the label specification;

and the material storage unit is used for storing the material carrying at least one label.
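
The label determination step above can be sketched in Python as follows. The tag vocabulary, field names and fallback label are assumptions made for illustration, not part of the described invention; the point is that every material in a batch ends up carrying at least one label that conforms to the label specification:

```python
# Hypothetical label specification: the allowed tag vocabulary is an assumption.
TAG_SPEC = {"animal", "number", "letter", "background", "audio"}

def configure_labels(batch, default_label="background"):
    """Ensure every material in a batch upload carries at least one
    label conforming to the tag specification (sketch only)."""
    labelled = []
    for material in batch:
        # keep only labels that conform to the specification
        labels = {t for t in material.get("labels", []) if t in TAG_SPEC}
        if not labels:  # fall back so no material is stored untagged
            labels = {default_label}
        labelled.append({**material, "labels": sorted(labels)})
    return labelled

batch = [
    {"name": "cat.png", "labels": ["animal", "cute"]},  # "cute" violates the spec
    {"name": "bgm.mp3", "labels": []},                  # untagged upload
]
stored = configure_labels(batch)
```

A design choice worth noting: dropping non-conforming labels rather than rejecting the upload keeps batch uploads from failing on a single bad tag.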

Optionally, the material management module further includes: and the label modifying unit is used for modifying the label of at least one material in the material management module.

Optionally, the scene editor comprises: a material calling unit, a scene editing unit and a scene preview unit;

the material calling unit is used for calling more than one material from the material management module according to the teaching information and displaying the material in a material display area;

the scene editing unit is used for converting the called materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene;

a scene preview unit for displaying the generated content scene;

alternatively,

the scene editor includes: a material calling unit and a scene editing unit;

the material calling unit is used for calling more than one material from the material management module according to the teaching information and displaying the material in a material display area;

and the scene editing unit is used for converting the called materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene.

Optionally, the material retrieval unit is specifically configured to,

receiving a file with a material label uploaded by a user, calling a corresponding material from the material management module based on the material label, and displaying the corresponding material in the material display area;

alternatively,

and receiving a material query instruction input by a user in the material display area, wherein the material query instruction comprises a material label, responding to the material query instruction, calling a material corresponding to the material label from the material management module, and displaying the material in the material display area.

Optionally, the scene editing unit is specifically configured to,

receiving a moving instruction of any material in the material display area, responding to the moving instruction, and moving the material corresponding to the moving instruction to a stage area of a scene editing unit to generate an object with a unique identifier; the material in the material display area responds to at least one moving instruction to generate at least one object, and the identification of each object is unique;

editing the multi-dimensional attributes of each object in the stage area of the scene editing unit to generate a content scene; the stage area is a visualization area used for editing multi-dimensional attributes of objects and generating content scenes.
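
The move-instruction behavior described above can be sketched as follows; the class and method names are illustrative assumptions. The key point is that an identifier is allocated per object, not per material, so the same material can back any number of editable objects:

```python
import itertools

class Stage:
    """Minimal sketch of the stage area: each move instruction turns a
    material into a new object with a unique identifier."""
    def __init__(self):
        self._counter = itertools.count(1)  # monotonic source of unique ids
        self.objects = {}

    def move_to_stage(self, material_name):
        # the identifier is unique per object, not per material
        object_id = f"obj-{next(self._counter)}"
        self.objects[object_id] = {"material": material_name, "attributes": {}}
        return object_id

stage = Stage()
a = stage.move_to_stage("cat.png")
b = stage.move_to_stage("cat.png")  # same material, a second editable object
```

This mirrors how the scene editor removes the prior-art constraint that each element ID be globally unique in the Excel form: uniqueness is enforced on objects at the moment they are created.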

Optionally, the scene editing unit is configured to edit the multidimensional attribute of each object in the stage area of the scene editing unit, and includes:

calling a display frame of the event dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the event dimension attribute is information of an operation triggered by a specified action executed by an object at a specified moment, and the event dimension attribute comprises: double click, single click, long press or drag;

and/or calling a display frame of the behavior dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the behavior dimension attribute is behavior information which can be executed by the object, and the behavior dimension attribute comprises one or more of the following items: displaying, hiding, switching models, shifting behaviors, playing behaviors, pausing behaviors, specifying logic information among behaviors and entering a level;

and/or calling a display frame of the basic information attribute of the current processing object, responding to an adjusting instruction of a user, and adjusting at least one piece of configuration information in the display frame; the base information attributes include one or more of: name, initial position, initial state, tag state, and tag size;

and/or calling a display frame of monitoring information attributes of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame, wherein the monitoring information attributes are information used for judging whether the event of the monitoring object is successfully triggered and information used for judging whether the behavior of the monitoring object is completed.
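
A minimal illustration of the four attribute dimensions above, assuming a simple dictionary layout (the field names and values are invented for the example and do not reflect the actual data format of the invention):

```python
# Sketch of one object's multi-dimensional attributes; all names are assumptions.
npc_object = {
    "basic":    {"name": "NPC", "initial_position": [120, 80],
                 "initial_state": "idle", "tag_size": [64, 64]},
    "event":    {"trigger": "click"},       # one of: double click, click, long press, drag
    "behavior": {"on_trigger": ["show", "play"],
                 "logic": "sequential"},    # logic information between behaviors
    "monitor":  {"event_triggered": False,  # was the event successfully triggered?
                 "behavior_completed": False},  # was the behavior completed?
}

def adjust(obj, dimension, key, value):
    """Apply a user's adjustment instruction to one piece of configuration
    information in the given dimension's display frame."""
    obj[dimension][key] = value
    return obj

# e.g. the user switches the event dimension from click to drag
adjust(npc_object, "event", "trigger", "drag")
```
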

Optionally, the course content generating module includes a node generating unit and a content generating unit;

the node generation unit is used for determining a plurality of nodes in a path for realizing the course content based on the teaching information and binding a content scene with at least one node;

and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

Optionally, the course content generating module further includes: a determination condition generation unit;

and the judging condition generating unit is used for determining the judging condition among the adjacent nodes in the path according to the number of the nodes and the logical relationship of the content scene bound by each node, or determining the judging condition among the adjacent nodes in the path according to the number of the nodes, the logical relationship of the content scene bound by each node and the capability case value of the content scene.

Optionally, each node has a unique identifier, and each node binds to one content scene;

and/or binding more than two nodes to the same content scene;

and/or the format of the course content obtained by the content generation unit is JSON format;

and/or a first virtual node used for marking the starting state of the course content is arranged on the content scene bound by the starting node of each path, and a second virtual node used for marking the ending state of the course content is arranged on the content scene bound by the ending node of the path.
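
Assuming the JSON layout uses explicit node and edge lists (the field names and the condition syntax here are invented for illustration, not taken from the invention), the graph-structured course content, its judgment conditions and its virtual start/end nodes might look like:

```python
import json

course = {
    "nodes": [
        {"id": "start", "virtual": True},         # first virtual node: start state
        {"id": "n1", "scene": "scene-intro"},
        {"id": "n2", "scene": "scene-practice"},  # two nodes may bind the same scene
        {"id": "n3", "scene": "scene-practice"},
        {"id": "end", "virtual": True},           # second virtual node: end state
    ],
    "edges": [
        {"from": "start", "to": "n1", "condition": None},
        {"from": "n1", "to": "n2", "condition": "score >= 80"},
        {"from": "n1", "to": "n3", "condition": "score < 80"},
        {"from": "n2", "to": "end", "condition": None},
        {"from": "n3", "to": "end", "condition": None},
    ],
}

def next_node(course, current, score):
    """Pick the next node by evaluating each outgoing edge's judgment
    condition against the student's capability value (a score here)."""
    for edge in course["edges"]:
        if edge["from"] != current:
            continue
        cond = edge["condition"]
        # eval() is used only for brevity in this sketch
        if cond is None or eval(cond, {}, {"score": score}):
            return edge["to"]
    return None

payload = json.dumps(course)  # the course content is delivered as JSON
```

This graph structure is what enables the personalized paths mentioned later: a high-scoring and a low-scoring student leave `n1` along different edges but both reach the same end state.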

In a second aspect, the present invention further provides an editing apparatus for online interactive teaching, including:

and the scene editor is used for acquiring more than one material according to the teaching information, forming editable objects, editing the multi-dimensional attributes of each object, and acquiring visual content scenes corresponding to the course content, wherein the course content is online interactive courseware which corresponds to the teaching information and can be played at a client.

Optionally, the editing apparatus further includes: and the course content generation module is used for configuring the path of the content scene according to the teaching information and generating the course content for online interaction played at the client.

Optionally, the scene editor comprises: a third material acquisition unit, a scene editing unit and a scene preview unit;

the third material acquisition unit is used for acquiring a material corresponding to the teaching information and displaying the material in a material display area;

the scene editing unit is used for converting the acquired materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene;

a scene preview unit that displays the generated content scene;

alternatively, the scene editor includes: a third material acquisition unit and a scene editing unit;

the third material acquisition unit is used for acquiring a material corresponding to the teaching information and displaying the material in a material display area;

and the scene editing unit is used for converting the acquired material into more than one object and editing the multi-dimensional attributes of each object to generate a content scene.

Optionally, the scene editing unit is specifically configured to receive a moving instruction for any material in the material display area and, in response to the moving instruction, move the corresponding material to a stage area of the scene editing unit to generate an object with a unique identifier; a material in the material display area responds to at least one moving instruction to generate at least one object, and the identifier of each object is unique;

editing the multi-dimensional attributes of each object in the stage area of the scene editing unit to generate a content scene; the stage area is a visualization area used for editing multi-dimensional attributes of objects and generating content scenes.

Optionally, the scene editing unit is configured to edit the multidimensional attribute of each object in the stage area of the scene editing unit, and includes:

calling a display frame of the event dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the event dimension attribute is information of an operation triggered by a specified action executed by an object at a specified moment, and the event dimension attribute comprises: double click, single click, long press or drag;

and/or calling a display frame of the behavior dimension attribute of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame; the behavior dimension attribute is behavior information which can be executed by the object, and the behavior dimension attribute comprises one or more of the following items: displaying, hiding, switching models, shifting behaviors, playing behaviors, pausing behaviors, specifying logic information among behaviors and entering a level;

and/or calling a display frame of the basic information attribute of the current processing object, responding to an adjusting instruction of a user, and adjusting at least one piece of configuration information in the display frame; the base information attributes include one or more of: name, initial position, initial state, tag state, and tag size;

and/or calling a display frame of monitoring information attributes of the current processing object, responding to an adjustment instruction of a user, and adjusting at least one piece of configuration information in the display frame, wherein the monitoring information attributes are information used for judging whether the event of the monitoring object is successfully triggered and information used for judging whether the behavior of the monitoring object is completed.

Optionally, the course content generating module includes a node generating unit and a content generating unit;

the node generation unit is used for determining a plurality of nodes in a path for realizing the course content based on the teaching information and binding a content scene with at least one node;

and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

Optionally, the course content generating module further includes: a determination condition generation unit;

and the judging condition generating unit is used for determining the judging condition among the adjacent nodes in the path according to the number of the nodes and the logical relationship of the content scene bound by each node, or determining the judging condition among the adjacent nodes in the path according to the number of the nodes, the logical relationship of the content scene bound by each node and the capability case value of the content scene.

Optionally, each node has a unique identifier, and each node binds to one content scene;

and/or binding more than two nodes to the same content scene;

and/or the format of the course content obtained by the content generation unit is JSON format;

and/or a first virtual node used for marking the starting state of the course content is arranged on the content scene bound by the starting node of each path, and a second virtual node used for marking the ending state of the course content is arranged on the content scene bound by the ending node of the path.

In a third aspect, the present invention further provides a device for generating course content for online interactive teaching, including:

the course content generating module is used for configuring a path of a content scene according to the teaching information and generating the course content which corresponds to the teaching information and can be played on the client side in an online interaction manner;

and the content scene is a visual scene which is generated by converting and editing more than one material in a scene editing mode based on the teaching information and corresponds to the course content.

Optionally, the course content generating module includes a node generating unit and a content generating unit;

the node generation unit is used for determining a plurality of nodes in a path for realizing the course content based on the teaching information and binding a content scene with at least one node;

and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

Optionally, the course content generating module further includes: a determination condition generation unit;

and the judging condition generating unit is used for determining the judging condition among the adjacent nodes in the path according to the number of the nodes and the logical relationship of the content scene bound by each node, or determining the judging condition among the adjacent nodes in the path according to the number of the nodes, the logical relationship of the content scene bound by each node and the capability case value of the content scene.

Optionally, each node has a unique identifier, and each node binds to one content scene;

and/or binding more than two nodes to the same content scene;

and/or the format of the course content obtained by the content generation unit is JSON format;

and/or a first virtual node used for marking the starting state of the course content is arranged on the content scene bound by the starting node of each path, and a second virtual node used for marking the ending state of the course content is arranged on the content scene bound by the ending node of the path.

In a fourth aspect, the present invention further provides a computer storage medium, which stores computer-executable instructions, where the computer-executable instructions are executed to implement a course content generation process in the course content generation system of online interactive teaching as described in any one of the above first aspects, or the computer-executable instructions are executed to implement a course content generation process in an editing apparatus of online interactive teaching as described in any one of the above second aspects, or the computer-executable instructions are executed to implement a course content generation process in a course content generation apparatus of online interactive teaching as described in any one of the above third aspects.

(III) advantageous effects

By means of the scene editor and the course content generation module, the course content generation system can rapidly configure the multi-dimensional attributes of each object, reduce the configuration difficulty for operators and improve the generation efficiency of course content.

In the scene editor, materials are converted into objects, and one material can form a plurality of objects, which improves material utilization and overcomes the prior-art requirement that material IDs be unique. In addition, object attributes are edited by means of the display frame corresponding to each dimension attribute, which removes the prior-art need to write code for the editing process, lowers the technical threshold of operation and improves the output efficiency of course content.

In addition, the nodes in the course content generation module bind content scenes, and one content scene can be configured to bind a plurality of nodes, which effectively enables the reuse of content scenes. The resulting course content has a graph data structure, which allows personalized teaching content to be configured and recommended according to the learning situation of a student at the current node.

Drawings

Fig. 1 is a schematic diagram of a course content generation system for online interactive teaching according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of a scene editor provided in the present invention;

Fig. 3 is a schematic diagram of a first interface for a user to upload a material according to an embodiment of the present invention;

Figs. 4A and 4B are schematic diagrams of display frames of specified dimension attributes for the NPC object in Fig. 3;

Fig. 5 is a diagram illustrating a multi-node, multi-path data structure according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of the judgment conditions in the paths of Fig. 5.

Detailed Description

For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.

In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

In the prior art, course content is configured and edited in an Excel spreadsheet and then converted into a JSON file, a process that requires teaching and research personnel with a programming background; if the course content is written directly in code, an ordinary instructor cannot complete it at all. The technical requirements for editing course content are therefore very high, and processing efficiency is very low.

In particular, a story-driven online interactive class involves a large number of materials, and the existing Excel-based editing can only output a single piece of course content at a time, so large-scale production cannot be achieved.

In addition, course content configured through an Excel spreadsheet in the prior art cannot be previewed in real time. For example, after a technician uploads all the materials, the material names must correspond one-to-one with the element names in the spreadsheet, and the technician must convert the spreadsheet into a JSON file in the required format and package it into course content before its effect can be previewed on a client. Each time a problem is found during checking, the whole process must be executed again, which makes the experience unfriendly for course content producers and the production efficiency very low.

Therefore, a course content generation system for online interactive teaching is needed that can generate course content quickly and efficiently, reduce the difficulty of course content configuration, and enrich the interactivity of the course content.

Example one

An embodiment of the present invention provides a course content generation system for online interactive teaching. As shown in Fig. 1, which illustrates the architecture of the course content generation system, the system of this embodiment may include: a material management module, a scene editor, and a course content generation module.

The material management module of this embodiment is configured to store all the materials used for generating course content, where each material is provided with at least one identifiable tag. For example, tags may be carried in the name of each material and serve as identifiers for retrieving target materials in the scene editor. Of course, carrying tags in the name is only an example, and the arrangement is set according to actual needs. It should be noted that a material in this embodiment may be a basic item of information drawn by a designer for generating course content, such as a cartoon character, a short video, a dialog box, or a vertical calculation layout.

The scene editor is configured to retrieve one or more materials from the material management module according to the teaching information, form editable objects from them, and edit the multi-dimensional attributes of each object to obtain one or more content scenes corresponding to the course content. In this embodiment, all editing in the scene editor is visual. The teaching information may be text previously created by teaching and research staff, for example in Word or PPT form, which predefines the teaching modes/teaching scenes, the logic for switching between them, the specific information of each object to be edited in each teaching mode, and the like. The teaching information may be imported into the scene editor in advance or retrieved in advance from a designated storage area; this embodiment does not limit how the scene editor obtains it.

The course content generation module is configured to set up the paths between content scenes according to the teaching information and generate online interactive course content to be played at a client. For example, the generated content scenes may be arranged according to the teaching modes in the teaching information to obtain course content that supports online interaction. Typically, the course content may be in JSON format so that it can be displayed or played directly at the client. Of course, this embodiment does not limit the format of the course content; any format that can be conveniently displayed or played at the client may be used.
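To illustrate the path configuration described above, the following is a minimal, hypothetical sketch (not the system's actual data model): each node binds a content scene, edges between nodes carry judgment conditions, and the next scene is chosen from the learner's current state, which also shows how one scene can be reused by several nodes.

```python
# Hypothetical sketch of a multi-node, multi-path course structure:
# nodes bind content scenes; edges carry judgment conditions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PathEdge:
    target: str                        # id of the next node
    condition: Callable[[dict], bool]  # judgment condition on learner state

@dataclass
class CourseNode:
    node_id: str
    scene_id: str                      # content scene bound to this node
    edges: List[PathEdge] = field(default_factory=list)

def next_node(nodes: Dict[str, CourseNode], current: str, state: dict) -> str:
    """Follow the first edge whose judgment condition the learner state satisfies."""
    for edge in nodes[current].edges:
        if edge.condition(state):
            return edge.target
    return current  # no condition met: stay on the current node

# Example: branch on a quiz score recorded in the learner state.
nodes = {
    "n1": CourseNode("n1", "scene_intro", [
        PathEdge("n2", lambda s: s["score"] >= 60),
        PathEdge("n3", lambda s: s["score"] < 60),
    ]),
    "n2": CourseNode("n2", "scene_advanced"),
    "n3": CourseNode("n3", "scene_review"),
}
print(next_node(nodes, "n1", {"score": 80}))  # n2
```

The node ids, scene ids, and the `score` field are illustrative assumptions; the point is only that a graph of condition-labelled edges supports personalized recommendation as described.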

By means of the scene editor and the course content generation module, the course content generation system of this embodiment can rapidly configure the multi-dimensional attributes of each object, which reduces the configuration difficulty for operators, improves the generation efficiency of course content, and effectively enables batch processing of course content.

The course content generation system requires no coding by technicians, and solves the prior-art problems that material IDs cannot be repeated and that operation and checking are very cumbersome.

To better understand the functions of each module in the course content generation system, each module of the course content generation system is described in detail below.

1. Material management module

In this embodiment, each material stored in the material management module has at least one tag named according to a preset rule (the tag specification). Tags that meet the specification allow a target material to be found quickly during scene editing, saving search time for subsequent editors and enabling materials to be retrieved quickly and in batches. By standardizing material tags, this embodiment creates the conditions for large-scale production of course content and thus improves output efficiency.

In a specific implementation process, the name of each material in the material management module may include a tag, for example, the tags of the materials are combined to form the name of the material, which facilitates subsequent classification and search.

Generally, an artist can complete the production of various types of materials through a material production program in advance, and then import/upload the drawn materials by means of a material management module. In order to better explain the process of uploading the material to the material management module, the following description is made by two specific embodiments.

In one embodiment, the material management module comprises: the device comprises a first material acquisition unit and a material storage unit;

the first material acquisition unit is configured to receive original materials uploaded by a user through a first interface. As shown in Fig. 3, the first interface presents a plurality of tag options, as well as custom tag options, for the user to select. In Fig. 3, the entries with option boxes, such as grade one, grade two, spring, summer, and map, are all tag options; accordingly, grade one, grade two, spring, summer, and map may each serve as a tag.

The material storage unit is used for storing original materials carrying a plurality of tags selected by a user and/or custom tags. The material storage unit may store the materials by classification according to the tags.

In another embodiment, in order to facilitate users to upload materials in batch, the material management module of this embodiment may include: the second material acquisition unit, the label determination unit and the material storage unit;

the second material acquisition unit is configured to receive materials uploaded in batches after a user triggers a batch upload button. For example, the batch upload button may be displayed in a designated area of a second interface, and after the user triggers it, materials can be uploaded in batches. The second interface may be the same as or different from the first interface, as actual requirements dictate.

The label determining unit is used for configuring labels of the batched materials, so that each material in the batched materials carries at least one label meeting the label specification;

the material storage unit is used for storing the material carrying at least one label, for example, the material is classified and stored according to the label or the material type.

The material storage unit in this embodiment may store the material in a classified manner, for example, the material may be stored in a classified manner according to a file format, or may be stored in a classified manner according to a tag of the material.
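As a minimal sketch of the tag determination and classified storage just described, the following assumes a tag specification of the form `<category>-<value>` (the text does not give the actual rule): batch-uploaded materials are validated against the specification and then bucketed by their first tag.

```python
# Illustrative sketch: validate batch-uploaded material tags against a
# hypothetical preset tag specification, then store by classification.
import re
from collections import defaultdict

TAG_RE = re.compile(r"^[a-z]+-[a-z0-9]+$")  # assumed "<category>-<value>" rule

def store_materials(materials):
    """materials: list of (name, [tags]); returns tag -> [names] buckets."""
    buckets = defaultdict(list)
    for name, tags in materials:
        if not tags or not all(TAG_RE.match(t) for t in tags):
            raise ValueError(f"material {name!r} violates the tag specification")
        buckets[tags[0]].append(name)   # classify by first tag
    return dict(buckets)

print(store_materials([("npc_a", ["grade-one", "season-spring"]),
                       ("map_1", ["grade-one", "type-map"])]))
# {'grade-one': ['npc_a', 'map_1']}
```

Rejecting non-conforming tags at upload time is what keeps later batch retrieval reliable, as the text emphasizes.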

In practical applications, the material management module may further include: a label modification unit;

the tag modification unit is configured to edit the tag of at least one material in the material management module, for example to modify a tag name or a custom tag. Through this secondary editing of material tags, all tags in the material management module can be made to conform to the preset tag specification, which facilitates fast, batched retrieval of materials in subsequent operations.

One or more material types may be involved in this embodiment. The materials stored in the material management module are those uploaded by the designer in association with the teaching information, and they can be repeatedly retrieved and reused in subsequent course content generation. The material types and the specific content of the materials may be configured according to actual needs, and this embodiment does not limit them.

From the designer's perspective, materials can be divided into common materials and general materials. A common material is one whose frequency of use in content scene generation is less than a first threshold (i.e., used infrequently), while a general material is one whose frequency of use is equal to or greater than the first threshold. For example, a core character of the course content, which appears in almost every content scene throughout the course, is a general material. Accordingly, material tags in this embodiment can include types such as general, core character, and assistant character, which are not shown in Fig. 3. The purpose of tagging materials in this embodiment is to screen target materials in the scene editor quickly and conveniently, so that the time an editor spends searching can be greatly reduced: a target material can be found quickly simply by confirming its tag.
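The common/general split above reduces to a single threshold comparison; a sketch, with the threshold value assumed since the text does not specify one:

```python
# Sketch of the common/general split: a material used in at least
# `threshold` content scenes is "general", otherwise "common".
def classify(usage_count: int, threshold: int = 10) -> str:
    # threshold value is an assumption; the text only names a "first threshold"
    return "general" if usage_count >= threshold else "common"

print(classify(3))    # common  (e.g. a one-off prop)
print(classify(50))   # general (e.g. a core character in every scene)
```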

In a specific implementation, the tag combination of a material can be used as its name, for example "person name + age stage + clothing + specific scene". The material management module thus standardizes material management, which ensures the accuracy of subsequent pipeline production and improves production efficiency.
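The naming convention can be sketched as follows; the tag keys and separator are assumptions made for illustration, matching the "person name + age stage + clothing + specific scene" order given above:

```python
# Illustrative sketch: compose a material name from its tags in a fixed order.
def material_name(tags: dict) -> str:
    order = ["person", "age_stage", "clothing", "scene"]  # hypothetical tag keys
    return "_".join(tags[k] for k in order if k in tags)

name = material_name({"person": "coreA", "age_stage": "grade1",
                      "clothing": "spring", "scene": "map"})
print(name)  # coreA_grade1_spring_map
```

Because the name is built deterministically from the tags, a material can later be matched back to its tags (and vice versa) during batch import.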

The material management module of this embodiment can store various types of materials, facilitates their reuse in other course content, saves the designers' operating cost and time, and enables fast and convenient management.

2. Scene editor

In this embodiment, all editing of content scenes in the scene editor is visual. To facilitate the editor's operation, the scene editor is divided into three areas. The first area is the material display area (the material library in Fig. 2); the second is the stage area, a visualization area for editing objects; and the third is the attribute editing area for editing the multi-dimensional attributes of objects, such as the display frame shown in Fig. 2, in which standard configuration items are pre-designed.

In other embodiments, any layout that enables visual editing of content scenes may serve as the scene editor, which is not limited to the three areas shown in Fig. 2. The three areas in this embodiment are used for convenience of description and do not limit the visual operation of the scene editor.

The scene editor of the present embodiment may include: the system comprises a material calling unit, a scene editing unit and a scene previewing unit;

the material calling unit is used for calling more than one material from the material management module according to the teaching information and displaying the material in a material display area;

the scene editing unit is used for converting the called materials into more than one object and editing the multi-dimensional attributes of each object to generate more than one content scene;

and the scene preview unit is configured to display the generated content scene. For example, a content scene may be converted into a JSON file for presentation, or presented in a browser page.

Of course, in other embodiments, the scene editor may include: the material calling unit and the scene editing unit do not preview scenes.

In this embodiment, the scene editor preferably includes the scene preview unit so that editing can be previewed, and corrected, in real time. The scene preview unit improves editors' working efficiency and reduces errors discovered after a content scene has been edited. When the "preview" button in the upper right corner of Fig. 2 is triggered, the scene preview unit is executed: for example, the current content scene is converted into a JSON file for display, so that the editor can check whether the currently edited content scene meets the requirements of the teaching information and whether the behavior of each object in it is correct.

To better understand the scene editor of the present embodiment, the material display area and the stage area are described below separately.

2.1 Material display area/Material library

When the scene editor is used for scene editing, materials used in the scene need to be called from the material management module to the material display area of the scene editor, and then editing is carried out to obtain a plurality of content scenes.

To retrieve materials into the material display area, the material calling unit of this embodiment may receive a file carrying a plurality of material tags, such as an Excel file, uploaded by a user, and for each material tag in the file retrieve the corresponding material from the material management module and display it in the material display area; that is, materials are imported in batches. The file here may be an Excel template pre-established by the editor, and this embodiment does not limit its form or content.

In practical application, the material calling unit is further configured to receive a material query instruction with a tag, which is input by a user in the material display area, and respond to the material query instruction to display a target material matched with the material query instruction in the material display area. The method can realize the inquiry and the import of a single material by an editor.

That is to say, the material calling unit can, on the one hand, import the materials required for the course content into the material display area in batches and, on the other hand, import a single target material into the material display area, improving the user experience.

In practice, at the initial stage of course creation an editor typically prepares an Excel file marking the names of the materials used in each content scene, such as character names, prop names, and dialog information; all these names carry tags. Once the Excel file is finished, the materials can be imported directly into the scene editor during scene editing: the material calling unit automatically imports the materials with matching names from the material management module into the material display area according to the material names in the Excel file, achieving automatic matching and import. This Excel import saves course editors the tedious process of searching for and importing each target material one by one, and improves the editing efficiency of course content.

In this embodiment, the same material can only be imported once: the material calling unit filters the imported materials according to their tags, effectively preventing the same material from being imported multiple times.
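The name-matched batch import and the duplicate filter can be sketched together as follows (the function and field names are illustrative, not the system's API):

```python
# Hypothetical sketch of batch import: names listed in the editor's file are
# matched against the material library by name, and already-imported names
# are filtered out so the same material is never imported twice.
def import_materials(requested_names, library, already_imported):
    imported, missing = [], []
    seen = set(already_imported)
    for name in requested_names:
        if name in seen:
            continue                 # filter duplicate imports
        if name in library:
            imported.append(name)
            seen.add(name)
        else:
            missing.append(name)     # report unmatched names to the editor
    return imported, missing

lib = {"coreA_grade1": "...", "video_intro": "..."}
print(import_materials(["coreA_grade1", "video_intro", "coreA_grade1", "propX"],
                       lib, already_imported=["video_intro"]))
# (['coreA_grade1'], ['propX'])
```

Reporting the `missing` list back to the editor is an assumed convenience; the text only specifies matching by name and one-time import.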

2.2 Stage area

The stage area in this embodiment is a visualization area for editing the multi-dimensional attributes of objects and generating content scenes. For example, an editor may move an imported material to the stage area (e.g., by a drag or double-click operation) and edit it there, for instance editing its position, name, behavior, event, and other multi-dimensional attributes, thereby generating a content scene. As shown in Fig. 2, the stage area is the visualization area where objects generated from materials are displayed and edited. In this embodiment, the size of the stage area may be 1920 × 1080, and its display size can be adaptively adjusted according to the window size of the scene editor, so that the content scenes it generates can be played at any client screen resolution.

Specifically, the scene editing unit is configured to receive a movement instruction from the user for any material in the material display area and, in response, move that material to the stage area to generate an editable object with a unique identifier;

the scene editing unit in this embodiment is further configured to edit the multi-dimensional attributes of each object in the stage area in a visual manner to generate a content scene.

For example, editing the multi-dimensional attributes of each object in the stage area in a visual manner may include:

1) invoking the display frame of the event dimension attribute of the object currently being processed and, in response to an adjustment instruction from the user, adjusting at least one piece of configuration information in the display frame, for example the information of a standard configuration item.

The event dimension attribute in this embodiment is information describing an operation the object executes when triggered by a specified action at a specified time, and includes: double click, single click, long press, drag, and the like.

2) invoking the display frame of the behavior dimension attribute of the object currently being processed and, in response to an adjustment instruction from the user, adjusting at least one piece of configuration information in the display frame, for example the information of one of several predefined standard configuration items.

The behavior dimension attribute is information on the behaviors the object can execute, and may include: display, hide, switch style, displacement behavior, play behavior, pause behavior, specified logic between behaviors, entering a checkpoint, and the like.

3) invoking the display frame of the basic information attribute of the object currently being processed and, in response to an adjustment instruction from the user, adjusting at least one piece of configuration information in the display frame, for example the information of one of several predefined standard configuration items.

In this embodiment, the basic information attribute may include: name, initial position, initial state, tag state, tag size, and the like.

4) invoking the display frame of the monitoring information attribute of the object currently being processed and, in response to an adjustment instruction from the user, adjusting at least one piece of configuration information in the display frame, for example the information of one of several predefined standard configuration items.

The monitoring information attribute can be understood as information on whether an event of the object has been successfully triggered and whether a behavior of the object has been completed.

The identifier of each object in the stage area is unique, and one material in the material display area can generate a plurality of objects in response to successive movement instructions; that is, objects are converted from materials. The same material may be moved multiple times to create multiple objects in the stage area, each with a unique identifier. Editing an object in the stage area does not affect the material in the material management module, and the same material can be reused repeatedly to generate different objects, which solves the prior-art problem of duplicated material IDs.

In this embodiment, when an object in a content scene is edited, first, a material to be used in the current content scene may be imported into a material display area by using a material import button of the material display area; then, the material in the material display area moves to the stage area in response to the drag instruction (i.e., the movement instruction), and an object is generated after the material moves to the stage area. Then, the object is edited in the stage area.

To facilitate the user's operation, the same object can also be copied to generate a plurality of objects in the stage area, each with a different identifier. Because each object generated from a material carries a unique identifier, conflicts between events or behaviors of different objects are avoided; compared with prior-art schemes, this improves both the reuse rate of materials and operability for the user.
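The material-to-object conversion above can be sketched as a simple factory; the identifier scheme (`material#counter`) is an assumption made for illustration:

```python
# Sketch: each drag of the same material produces a new stage object with a
# unique identifier, so events/behaviors on different objects never conflict.
import itertools

_counter = itertools.count(1)

def create_object(material_name: str) -> dict:
    return {"id": f"{material_name}#{next(_counter)}",
            "material": material_name,
            "attributes": {}}        # filled in later via the property editor

a = create_object("coreA")
b = create_object("coreA")           # same material, distinct object
print(a["id"], b["id"])              # coreA#1 coreA#2
```

Note that editing `a["attributes"]` leaves `b` and the underlying material untouched, mirroring the text's point that object editing never affects the material library.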

2.3 Property editing area

In this embodiment, a display frame for one dimension attribute of the current object is displayed in the attribute editing area, such as the display frame shown on the right side of Fig. 2 and in Figs. 4A and 4B.

Each display frame displays a plurality of standard configuration items (i.e., option boxes for adjusting configuration information). For example, the width, height, X, Y, initial state, click, and shape of the walnut shown in Fig. 4A, and the movement, speed, and so on in Fig. 4B, are all standard configuration items, for which the user can input information or select preset configuration information. The user can thus edit object attributes simply by adjusting the configuration information of the standard configuration items; the operation is simple, and it overcomes the prior-art drawback that editing had to be done through coding.

The display frame shown in Fig. 4A aggregates event attributes, behavior attributes, basic information attributes, and the like. This embodiment does not limit which attribute or attributes are displayed in a display frame; this is adjusted according to actual needs.

In this embodiment, the attributes of objects can also be edited in batches, which effectively improves the production efficiency of course content. This embodiment does not limit the manner or order in which standard configuration items are displayed, nor the manner in which an attribute display frame is triggered; these can be defined according to actual needs.

In this embodiment, the teaching information may be a teaching information file previously created by a teacher, for example in Word or PDF format, in which a plurality of teaching modes (or teaching scenes) are designed, together with information such as the objects used in each teaching mode and their attributes. The scene editor can then edit content scenes according to the teaching information, adjusting each dimension attribute of each object during editing, so as to obtain content scenes that match the teaching information and can be played at the client.

In this embodiment, configuring the standard configuration items in the display frames makes it possible to edit the dimension attributes of each object in the stage area, so the abstract, complex logic in the teaching information is converted into the configuration information of standard configuration items, which simplifies the work of editing personnel and improves output efficiency.

The minimum granularity of editing in the scene editor is the content scene, i.e., a scene with display content, within which video, audio, NPCs, and levels can be edited in combination. Mixed editing in this embodiment refers to editing different dimension attributes of a plurality of objects, such as editing an event of a first object, a behavior and an event of a second object, and the monitoring information of a third object; which object moves first and which moves later can be controlled by the order and durations of the objects. A level here can be understood as an interactive behavior within a content scene. In the prior art, by contrast, such mixed arrangements could only be expressed by writing code sequentially.

Because every dimension attribute of an object in the stage area can be edited quickly, the scene editor of this embodiment provides strong mixed-editing capability: for example, editing the event and behavior attributes of NPCs and animations, such as display, hide, move, and style switching; editing the behavior attributes of video, audio, and dialog groups, such as display, hide, and automatic play; and editing behavior attributes such as whether to enter a level. These mixed-editing functions are abstracted into standard configuration items in the display frames of each object's attributes, so an editor can realize complex animation and logic effects simply by modifying the configuration information of the standard configuration items, and content scenes with complex behaviors/events can be produced.

The multi-dimensional attributes of an object in this embodiment may include: the basic information attribute, the event attribute (event dimension attribute), the behavior attribute (behavior dimension attribute), and the monitoring information attribute.

The basic information attribute may include: name, initial position, initial state, tag state, and tag size;

the event attribute is information describing an operation the object executes when triggered by a specified action at a specified time, and includes: double click, single click, long press, drag, and the like;

the behavior attribute is information on the behaviors the object can execute, and includes: display, hide, switch style, displacement behavior, play behavior, pause behavior, logic between specified behaviors (such as the logic between double click and play, or between selecting an answer and entering a checkpoint), entering a checkpoint, issuing rewards, starting assignments, and the like;

the monitoring information attribute is information on whether an event of the monitored object has been successfully triggered and whether a behavior of the monitored object has been completed.
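The four attribute dimensions above can be collected into a single object record; the field names below are illustrative, not the system's actual schema:

```python
# Sketch of the four attribute dimensions of a stage object.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StageObject:
    # basic information attributes
    name: str
    position: tuple = (0, 0)
    initial_state: str = "visible"
    # event attributes: trigger -> behaviors to run, in order
    events: Dict[str, List[str]] = field(default_factory=dict)
    # behavior attributes the object supports
    behaviors: List[str] = field(default_factory=list)
    # monitoring attributes: events/behaviors whose completion is listened to
    listeners: List[str] = field(default_factory=list)

npc = StageObject("coreA",
                  behaviors=["show", "hide", "switch_style"],
                  events={"click": ["play_video_1", "hide_label"]},
                  listeners=["play_video_1.finished"])
print(npc.events["click"])  # ['play_video_1', 'hide_label']
```

Grouping the dimensions this way mirrors the display frames: each frame edits one slice of the same record.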

That is to say, an event is an operation triggered on an object by a certain action at a certain time, and a behavior is an action the object can execute; for example, an NPC (non-player character) can execute display, hide, style-switching, and displacement behaviors, a video can execute a play behavior, and a dialog group can execute play and hide behaviors. Events and behaviors of an object can be set as monitoring information. Following the requirements of the teaching information, the editor sets the logic and order of occurrence of attributes such as events, behaviors, and monitoring information when configuring a content scene.

For example, in one teaching mode, clicking the label above core character A plays video explanation 1, and the label above the core character is hidden after video explanation 1 finishes. In the corresponding edited content scene, the objects are core character A and video explanation 1, and the attributes of the objects may include: the label, the click event, the play behavior, the logical order between click and play, the monitoring information for play completion, the hide behavior, and so on.

After the configuration information of each attribute has been adjusted through the display frames of the object attributes, the edited content scene behaves as follows: when a click on the label above core character A is detected, video 1 is played and the label is then hidden. That is, the user's click event is monitored first, and one or more object behaviors are then executed in sequence.

Of course, in a specific application, in order to better configure the attributes of each object, the editor's configuration of events and behaviors can be further divided into simple event-behavior configuration and complex event-behavior configuration. Generally, a simple event-behavior configuration has only one trigger condition, after which one or more behaviors can be executed, either simultaneously or sequentially. For example: on detecting that event A has occurred, sequentially hide object K and show object C;

In a complex event-behavior configuration, several trigger conditions must be configured, and only when the set logic is satisfied are the subsequent behaviors executed, for example: on detecting that both event A and event B have occurred, execute a behavior. Compared with simple event behaviors, complex event behaviors require several listened-for events to be triggered according to a certain logic before the subsequent behaviors are executed.
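A complex event-behavior configuration with AND logic between two triggers can be sketched as follows; the class name and its fields are illustrative assumptions.

```python
class ComplexBinding:
    """Behaviors fire only after all required events have been observed (AND logic)."""
    def __init__(self, required_events, behaviors):
        self.required = set(required_events)
        self.seen = set()
        self.behaviors = behaviors
        self.fired = False

    def notify(self, event):
        """Record an observed event; execute behaviors once the logic is satisfied."""
        self.seen.add(event)
        if not self.fired and self.required <= self.seen:
            self.fired = True
            for behavior in self.behaviors:
                behavior()

log = []
binding = ComplexBinding({"event_A", "event_B"}, [lambda: log.append("run")])
binding.notify("event_A")   # only one trigger seen: nothing executes yet
binding.notify("event_B")   # both triggers seen: behaviors execute once
print(log)                  # ['run']
```

Other logics (OR, ordered sequences) would replace the subset test `self.required <= self.seen` with the corresponding check, which is why the patent describes the set logic as part of the configuration.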

Whether simple or complex, the event-behavior configuration is realized through the object's multi-dimensional attribute display frame. This simplifies and visualizes even complex configurations, effectively lowers the technical threshold for editors, improves operability, and raises the generation efficiency of course content.

The functions of the scene editing unit in the scene editor have been described in detail above. After the scene editing unit completes a content scene, the scene preview unit can preview it to verify whether the content scene is correct and meets the design requirements of the teaching information.

For example, the "preview" button in the upper right corner of fig. 2 converts the content scene edited in the current stage area into a JSON file for presentation.

In this embodiment, all edited content scenes in the stage area support real-time preview. When the preview button in the upper right corner of the stage area is clicked, the scene preview unit converts the content scene into a file that a browser can play and calls the browser to play it; alternatively, the scene preview unit can generate a JSON file of the content scene that the front end (such as the student client) can recognize and jump to a browser page to preview the effect. The scene editor of this embodiment thus supports both content scene configuration and preview, achieving what-you-see-is-what-you-get configuration.
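The JSON round-trip described above might look like the following sketch. The schema (field names such as `scene_id`, `objects`, `events`) is an assumption for illustration; the patent does not specify the file format beyond JSON.

```python
import json

# The content scene as the editor might serialize it; field names are assumed.
scene_config = {
    "scene_id": "scene_001",
    "objects": [
        {"name": "core_character_A", "type": "npc",
         "attributes": {"visible": True, "position": [120, 80]}},
        {"name": "video_1", "type": "video",
         "attributes": {"src": "video_1.mp4", "autoplay": False}},
    ],
    "events": [
        {"listen": "click:label_A",
         "behaviors": [{"target": "video_1", "action": "play"},
                       {"target": "label_A", "action": "hide"}]},
    ],
}

payload = json.dumps(scene_config, ensure_ascii=False, indent=2)
restored = json.loads(payload)          # what the browser/front end would parse
print(restored["events"][0]["listen"])  # click:label_A
```

Because the same JSON payload drives both the preview and the student client, the preview is faithful to what will eventually be played, which is the basis of the what-you-see-is-what-you-get claim.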

In the scene editor of this embodiment, materials are converted into objects, and the objects are then configured with multi-dimensional attributes in a visual manner. This standardizes the configuration of each attribute, reduces the difficulty of generating course content, improves configuration efficiency, and effectively removes the need, present in the prior art, for technical staff to write code.

3. Course content generation module

As described for the scene editor, the teaching information is prepared in advance by teaching and research personnel and may comprise a plurality of teaching scenes/teaching modes with logical relationships/paths. Accordingly, the course content generation module in this embodiment configures one or more content scenes based on the logical relationships/paths in the teaching information, and obtains course content that is played at the client and matched to each student.

Specifically, the course content generating module may include: a node generation unit and a content generation unit; the node generation unit is used for determining a plurality of nodes in a path for realizing course content based on the teaching information and binding a content scene with at least one node; and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

Alternatively, in another specific implementation, the course content generating module may include: a node generation unit, a determination condition generation unit, and a content generation unit;

the node generation unit is used for determining a plurality of nodes in a path for realizing the course content based on the teaching information and binding a content scene with at least one node;

the judgment condition generating unit is used for determining the judgment conditions between adjacent nodes in the path according to the number of nodes and the logical relationship of the content scene bound to each node, or according to the number of nodes, the logical relationship of the content scene bound to each node, and the capability case value of the content scene;

and the content generating unit is used for generating a path of each node permutation and combination according to the judgment condition between the adjacent nodes in the path to obtain the course content.

In another embodiment, the lesson content generation module can include: a node generation unit, a determination condition generation unit, and a content generation unit;

the node generation unit is used for determining the nodes and the content scenes corresponding to the nodes based on the teaching scene/teaching mode, and binding each node to its corresponding content scene;

the judgment condition generating unit is used for determining the judgment conditions between adjacent nodes in the path based on the association relationship and number of nodes of the teaching scene/teaching mode and the logical relationship of the content scene bound to each node; or for determining the path and the judgment conditions between adjacent nodes in the path based on the association relationship and number of nodes of the teaching scene/teaching mode and the logical relationship of the content scene bound to each node; or for determining the path and the judgment conditions between adjacent nodes in the path based on the association relationship and number of nodes of the teaching scene/teaching mode, the logical relationship of the content scene bound to each node, and the capability case value of the content scene bound to each node;

and the content generating unit is used for generating the paths formed by arranging and combining the nodes according to the judgment conditions between adjacent nodes in the path, thereby obtaining the course content. The course content may be in JSON format and is sent to the client of any student for playing.
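The output of the content generating unit might resemble the following sketch: nodes binding content scenes, plus edges carrying judgment conditions, serialized as one JSON payload for the client. The schema and the string form of the conditions are assumptions for illustration.

```python
import json

# Nodes each bind a content scene; edges carry the judgment conditions
# (here stored as strings the client would evaluate).
nodes = [
    {"node_id": 1, "scene_id": "scene_intro"},
    {"node_id": 2, "scene_id": "scene_remedial"},
    {"node_id": 3, "scene_id": "scene_advanced"},
]
edges = [
    {"from": 1, "to": 2, "condition": "attempts > 5"},
    {"from": 1, "to": 3, "condition": "attempts <= 5"},
]
course_content = json.dumps(
    {"course_id": "demo_course", "nodes": nodes, "edges": edges}, indent=2
)

parsed = json.loads(course_content)                 # what the student client receives
print(len(parsed["nodes"]), len(parsed["edges"]))   # 3 2
```

Storing conditions on the edges rather than inside the nodes is what lets several nodes share one content scene while still participating in different paths.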

Each node in the above embodiments has a unique identifier, and each node binds one content scene. In operation, a node can also be unbound from its content scene so that a different content scene can be selected for binding.

Of course, two or more nodes can also be bound to the same content scene; in this way a student can learn the course content cyclically, which helps consolidate knowledge. In this embodiment, jumping between nodes in the course content is jumping along the path to which the nodes belong according to the judgment conditions, so different students can learn different course content, and personalized playing for each student is achieved.

As shown in fig. 5, fig. 5 shows a data structure of a plurality of nodes and of the paths formed by the judgment conditions; this may be the data structure of a particular course content. Starting from the start node (e.g., node 1), a jump is made to the next node (e.g., node 2 or node 3) according to a judgment condition, and different judgment conditions lead to different next nodes. For example, suppose the course content displayed at the client is a level-passing scenario. If the current student has made 1-5 attempts at the level, the judgment condition of the path containing node 3 is satisfied, and the student is taken to node 3; if the current student has made more than 5 attempts, the judgment condition of the path containing node 2 is satisfied, and the student is taken to node 2. That is, the number of attempts, and whether the level was passed, are the judgment conditions for selecting different paths. Different students therefore trigger different judgment conditions and learn different course content, so course content can be configured in a personalized manner, giving the course content a "thousand faces for a thousand people" character.
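The attempt-count jump just described can be sketched as a small runtime traversal. This is illustrative only; the `Node` class and its methods are assumptions, not the patent's data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Node:
    node_id: int          # unique identifier
    scene_id: str         # bound content scene
    edges: list = field(default_factory=list)   # (condition, next_node) pairs

    def bind_edge(self, condition: Callable[[dict], bool], target: "Node"):
        self.edges.append((condition, target))

    def next_node(self, state: dict) -> Optional["Node"]:
        """Jump along the first edge whose judgment condition holds for `state`."""
        for condition, target in self.edges:
            if condition(state):
                return target
        return None

n1 = Node(1, "scene_start")
n2 = Node(2, "scene_remedial")
n3 = Node(3, "scene_advanced")
n1.bind_edge(lambda s: s["attempts"] > 5, n2)    # more than 5 attempts -> node 2
n1.bind_edge(lambda s: s["attempts"] <= 5, n3)   # 1-5 attempts -> node 3

print(n1.next_node({"attempts": 7}).scene_id)    # scene_remedial
print(n1.next_node({"attempts": 2}).scene_id)    # scene_advanced
```

The student's answering state (here, `attempts`) is what selects the path at run time, so the same course content plays differently for different students.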

After a node is bound to a content scene, the judgment condition can be generated through a display frame editable by the editor. As shown in fig. 6, a plurality of configuration items are displayed; these items can be fixed in advance or customized according to the content scene bound to the node. The editor adjusts or fills in each configuration item in the display frame, thereby generating the edge of the node, i.e., the judgment condition. Referring to figs. 5 and 6, if the path through node 1, node 3, node 5, and node 6 in fig. 5 has path name 2, then when generating the judgment condition of node 3, the button belonging to the judgment condition generating unit can be triggered, a display frame for generating the judgment condition is shown on the current operation interface, and the configuration items are filled in one by one. When path name 2 is selected in fig. 6, the node in path name 2 that needs to execute judgment condition 7 is selected; if the selected node is node 3 in fig. 6, an object in node 3, or a specific attribute of that object, is used as a variable, a dynamic range is set for the variable, and judgment condition 7 is thereby generated. Of course, in other embodiments the judgment condition may be generated automatically from information input by the user. Generating the judgment condition through a display frame overcomes the prior-art drawback of having to implement judgment conditions in code; and because the configuration items of the display frame can be customized, the judgment conditions can be configured flexibly, improving the flexibility of the course content and reducing the configuration difficulty.

In practical applications, for the course content of a specific student, the setting of the judgment conditions may further depend on learning-situation data, i.e., the historical learning data of that student. On one hand, the judgment condition generating unit can set judgment conditions according to the capability case value; on the other hand, it can obtain matched content scenes and the nodes bound to them based on the student's learning-situation data, and then determine the node paths and the judgment conditions within those paths according to the capability case values of the content scenes. Courses can thus be set according to different learning abilities: students of different abilities follow different learning paths, with learning content of different difficulty levels. In this embodiment the learning target serves as the guide, so that the best teaching effect is achieved within the limited knowledge points of the course content and the teaching target is realized.

That is, the judgment condition generating unit may further add a path according to a student's historical learning situation, where the judgment conditions between adjacent nodes in the added path are determined from the historical learning situation and the capability case values of the content scenes bound to the nodes. The capability case value of a content scene in this embodiment is an evaluation value obtained by evaluating the knowledge points of the content scene with a preset evaluation algorithm. Of course, the judgment conditions in the newly added path are related to the student's degree of mastery of each knowledge point.

In addition, so that the teaching information can fully cover every student, the judgment condition generating unit also provides a path without any judgment condition. This path contains a small number of nodes and connects the start node, at least one middle node, and the end node of the course content, so the course runs through from start to end. A student whose situation is not covered by the teaching information enters this path; course content carrying such a condition-free path can therefore be used by any student, whatever their learning ability.
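One way to realize such a fallback path, sketched below under the assumption that it is implemented as an unconditional edge tried after all conditional edges; the function and edge names are illustrative.

```python
def pick_next(edges, state):
    """Return the target of the first edge whose condition holds for `state`."""
    for condition, target in edges:
        if condition(state):
            return target
    raise RuntimeError("no matching edge")

# Edges leaving the start node: one conditional path, plus a fallback edge
# whose condition is always true, so every student can reach the end node.
edges_from_start = [
    (lambda s: s.get("score", 0) >= 80, "advanced_node"),
    (lambda s: True, "default_node"),
]

print(pick_next(edges_from_start, {"score": 95}))  # advanced_node
print(pick_next(edges_from_start, {}))             # default_node
```

Placing the always-true edge last means it only catches students no other judgment condition covers, which matches the coverage guarantee described above.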

It should be noted that, in the paths formed by the nodes in this embodiment, the judgment condition between adjacent nodes serves as an edge connecting the two nodes, and the resulting data structure resembles a graph data structure. In practical applications, the teaching scenes in the teaching information can likewise be set in the manner of a graph data structure, so that the nodes and edges in the finally generated course content form a matching graph data structure. The course content is then consistent with the information from the teaching and research personnel, the matching degree is high, the course content is more flexible, and it can adapt to students of different learning abilities.

In addition, to better mark the start and end states of each course content, a first virtual node marking the start state of the course content is attached to the content scene bound to the start node of each path, and a second virtual node marking the end state is attached to the content scene bound to the end node of the path. The start and end states of the course content are thus effectively recorded by means of virtual nodes.

In summary, the material management module in this embodiment uploads and manages the materials required for the course content, the scene editor edits the content scenes required for the course content, and the course content generation module configures the logical relationships according to which the content scenes are executed. The JSON file of the course content can then be played at the client (i.e., the front end), so that different students complete different learning paths and personalized, "thousand faces for a thousand people" teaching is realized.

Compared with the prior art, the course content generation system for online interactive teaching of the present invention can produce course content with complex interactive effects. In a specific implementation, objects converted from materials are functionally edited so that interactive effects are abstracted and generalized; the unfolding of scenes and the interaction of scenarios within the course are all realized by editing the multi-dimensional attributes of objects to obtain a plurality of content scenes, and the content scenes are then connected in series as paths in a graph data structure to obtain the final course content. When presented to a student, the course content can embody both complex and simple judgment logic, and it is presented according to the student's answering process at each node.

The system of this embodiment can improve the efficiency of course content production by more than 60%, effectively reducing the production cost and configuration difficulty of course content and overcoming the prior-art drawback that configuration code must be written by technical personnel. Because the scene editor of this embodiment edits content scenes through standard configuration items, the configuration threshold of course content is effectively lowered and configuration efficiency is improved.

Example two

The embodiment of the present invention provides an editing apparatus for online interactive teaching, which may comprise: a scene editor; the scene editor is used for calling one or more materials according to the teaching information, forming editable objects, and editing the multi-dimensional attributes of each object in a visual manner to obtain visual content scenes corresponding to the course content, where the course content is online interactive courseware that corresponds to the teaching information and can be played at a client.

For example, the scene editor may comprise: a third material acquisition unit, a scene editing unit and a scene preview unit;

the third material acquisition unit is used for acquiring a material corresponding to the teaching information and displaying the material in a material display area;

the scene editing unit is used for converting the acquired materials into more than one object and editing the multi-dimensional attributes of each object to generate a content scene;

and the scene preview unit is used for displaying the generated content scene.

Of course, in another embodiment, the scene editor may include: a third material acquisition unit and a scene editing unit.

It can be understood that, in this embodiment, a material may be downloaded and called from the public network by the third material obtaining unit in the scene editor, or drawn in a drawing program associated with the editing apparatus; that is, the third material obtaining unit may download materials in real time, upload materials, and so on. This embodiment does not limit the source or generation manner of the materials.

Generally, the materials can be uniformly managed and stored by a material management module as in the first embodiment, and then called for use in the scene editor.

The function of the scene editor in this embodiment is consistent with the implementation manner in the first embodiment, and reference is made to the description of the first embodiment, which is not described in detail here.

In a specific implementation process, the editing apparatus of this embodiment may further include: and the course content generation module is used for configuring the path of the content scene according to the teaching information and generating the course content for online interaction played at the client. The function of the course content generating module in this embodiment is substantially the same as that of the course content generating module in the first embodiment, and reference is made to the description of the above embodiment, and details thereof are not described here.

By means of the scene editor, the editing apparatus of this embodiment can quickly configure the multi-dimensional attributes of each object, reduce the configuration difficulty for operators, improve the generation efficiency of course content, and effectively realize the processing of course content.

The editing apparatus requires no coding by technical personnel, and solves the prior-art problems that material IDs cannot be repeated and that operation and inspection are very cumbersome.

EXAMPLE III

The embodiment of the invention provides a course content generation device for online interactive teaching, which can comprise: a course content generation module; the course content generation module is used for configuring a path of a content scene according to the teaching information and generating the course content which corresponds to the teaching information and can be played on the client side in an online interaction manner;

and the content scene is a visual scene which is generated by converting and editing more than one material in a scene editing mode based on the teaching information and corresponds to the course content.

The course content generating module of this embodiment includes a node generating unit and a content generating unit, or in other embodiments, the course content generating module includes a node generating unit, a determination condition generating unit and a content generating unit. The functions or roles of the units in the course content generating module in this embodiment are the same as those described in the first embodiment, and reference is made to the above description, and the present embodiment is not described in detail again.

In addition, in the course content generating apparatus of this embodiment, each node is bound to a content scene, and a plurality of nodes can be bound to the same configured content scene, so content scenes can be effectively reused. The course content finally formed has a graph data structure, and personalized teaching content can be configured and recommended according to the student's learning situation at the current node.

Example four

An embodiment of the present invention further provides an online teaching system, comprising: a plurality of clients for playing teaching content online, together with the course content generating system of the first embodiment, the editing apparatus of the second embodiment, or the course content generating apparatus of the third embodiment. The teaching system can generate course content for playing at the clients, and any client can play the course content according to the trigger instruction of the current student.

In a specific application, the client may be understood as the front end, while the course content generating system/editing apparatus/course content generating apparatus of this embodiment may be implemented at the back end. The course content edited at the back end is realized visually, and compared with course content edited in an excel table in the prior art, it occupies less space and runs more efficiently.

The above-mentioned course content generating system/editing means/course content generating means are all realized by a computer program running on an electronic device, the computer program being stored in a memory, and the stored computer program being executed by a processor to thereby execute the flow of course content generation of any of the above-mentioned embodiments.

Embodiments of the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The terms first, second, third and the like are used for convenience only and do not denote any order; they are to be understood as part of the names of the components.

Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.

While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.
