Method and device for network teaching management based on Internet of things

Document No.: 1923414    Publication date: 2021-12-03

Note: This technology, "Method and device for network teaching management based on the Internet of Things" (基于物联网进行网络教学管理的方法及装置), was designed and created by Fan Jun (范骏) on 2021-09-09. Abstract: The invention discloses a method for network teaching management based on the Internet of Things, which comprises the following steps: arranging a plurality of student devices and a teacher end in a classroom, and connecting the student ends with the student devices; collecting action parameters of the teacher user at the teacher end; sending the action parameters of the teacher user to the student ends; collecting action parameters of the student users at the student ends; and driving the student devices to act synchronously based on the action parameters of the student users. The invention provides an interactive platform for student users and teacher users and links the student devices to simulate a real classroom, so that the teacher user alone conducts live teaching in the classroom while the student devices simulating the student users form a realistic teaching scene. On the one hand this gives the teacher user the teaching experience of a real classroom; on the other hand it gives the student users a realistic learning environment, improves their sense of immersion, and effectively improves the learning effect of network teaching.

1. A method for network teaching management based on the Internet of Things, characterized by comprising the following steps:

step S11, arranging a plurality of student devices and a teacher end in a classroom, and connecting the student end with the student devices;

step S12, collecting the action parameters of the teacher user based on the teacher end, wherein the action parameters of the teacher user at least comprise:

a first sound parameter generated from the collected voice of the teacher user;

a first arm action parameter generated from the collected arm actions of the teacher user;

step S13, sending the action parameters of the teacher user to the student end;

identifying the student user required by the teacher user to respond, based on the arm action parameters of the teacher user;

step S14, collecting the action parameters of the student user based on the student end, wherein the action parameters of the student user comprise:

a second sound parameter generated from the collected voice of the student user;

a second arm action parameter generated from the collected arm actions of the student user;

a leg action parameter, used for determining the posture of the student user, the posture at least comprising a sitting posture or a standing posture;

a head action parameter, used for determining the head action of the student user, the head action at least comprising head shaking and nodding;

step S15, driving the student device to act synchronously based on the action parameters of the student user;

driving the arm of the student device to make an arm action of the same type as that of the student user based on the second arm action parameter;

driving the legs of the student device to make a leg action of the same type as that of the student user based on the leg action parameter;

driving the head of the student device to make a head action of the same type as that of the student user based on the head action parameter;

and driving the student device to produce the same voice as the student user based on the second sound parameter.

2. The method for network teaching management based on the Internet of Things according to claim 1, wherein identifying the student user required by the teacher user to respond comprises: performing voice recognition on the first sound parameter, so that the recognized student name word points to the student user required by the teacher user to respond;

or taking the student user at the student end connected to the student device pointed at by the collected arm action of the teacher user as the student user required to respond.

3. The method for network teaching management based on the Internet of Things according to claim 1, wherein the second sound parameter at least comprises audio data, the audio data being the collected human voice of the student user;

an intensity parameter is attached to the second sound parameter when the student device is driven to produce the same voice as the student user based on the second sound parameter, and the intensity of the sound emitted by the student device is controlled based on the intensity parameter;

the calculation formula of the intensity parameter is as follows:

where Q denotes the sound intensity, d denotes the horizontal distance from the student device to the podium, k denotes the reference intensity, Q and k are in dB, and e denotes the natural constant.

4. The method for managing the internet of things-based network teaching according to claim 1, wherein the arm of the student device is driven to perform the same type of arm movement as the student user based on the second arm movement parameters, wherein the second arm movement parameters at least include the type of arm movement, the speed of movement, and the time of movement;

the types of actions include:

hand-raising, clapping and hand-waving;

for a hand-raising action, the speed of the action is the time the student user takes to complete one arm raise, and the time of the action is the total time the student user's arm remains raised;

for a clapping action, the speed of the action is the time the student user takes to complete one clap, and the time of the action is the total time the student user claps;

for a hand-waving action, the speed of the action is the time the student user takes to complete one wave, and the time of the action is the total time the student user waves.

5. The method for network teaching management based on the Internet of Things according to claim 4, wherein for the clapping action the simulation is performed together with sound production: clapping audio collected from the student user, or pre-stored clapping audio, is played, and the playback sound intensity can be calculated by the following formula:

where Z_i denotes the sound production intensity when the student device simulates the clapping action of the i-th student user, k_a denotes the reference sound production intensity, X_i denotes the speed of the clapping action of the i-th student user, X̄ denotes the sample mean of the clapping action samples of the student users, and σ_x denotes the sample standard deviation of the clapping action samples of the student users.

6. A device for network teaching management based on the Internet of Things, characterized by comprising:

a student end, used for collecting action parameters of a student user;

a student device, used for simulating the actions of the student user based on the action parameters of the student user;

a teacher end, used for collecting action parameters of a teacher user;

and a cloud platform, connected with the student end, the teacher end and the student device, and used for receiving and sending data.

7. The device for managing network teaching based on internet of things of claim 6, wherein the teacher end at least comprises:

the first audio acquisition unit is used for acquiring voice data of a teacher user;

and the first arm action acquisition unit is used for acquiring the first arm action parameters of the teacher user.

8. The device for network teaching management based on internet of things of claim 6, wherein the student end at least comprises:

the second audio acquisition unit is used for acquiring the human voice data of the student user;

the limb action acquisition unit is used for acquiring second arm action parameters, leg action parameters and head action parameters of the student user;

the limb action acquisition unit comprises a second arm action acquisition unit for acquiring the second arm action parameters of the student user, a leg action acquisition unit for acquiring the leg action parameters of the student user, and a head action acquisition unit for acquiring the head actions of the student user.

9. The device for managing network teaching based on internet of things of claim 6, wherein the student device at least comprises:

a head unit for simulating head movements of a student user;

the arm unit is used for simulating the arm action of a student user;

a leg unit for simulating leg movements of a student user;

and the sound production unit is used for simulating the human voice of the student user to produce sound.

10. The device for network teaching management based on internet of things of claim 6, wherein the cloud platform at least comprises:

the receiving unit is used for receiving data of the teacher end and the student end;

a transmission unit for transmitting data to the student device;

a guidance unit, which generates guidance information based on the action parameters of the teacher user and transmits the guidance information to the student end of the student user required by the teacher user to respond;

and the control parameter generating unit is used for generating control parameters for controlling the action of the student equipment based on the action parameters of the student user.

Technical Field

The invention relates to the technical field of network teaching, in particular to a method for managing network teaching based on the Internet of things.

Background

Network teaching is a teaching mode that, under the guidance of certain teaching theories and ideas, achieves teaching objectives through the application of multimedia and network technology, through multi-party and multi-directional interaction among teachers, students and media, and through the collection, transmission, processing and sharing of teaching information in various media;

the network teaching is mainly divided into a teaching type and a demonstration type, wherein: the teaching mode is characterized in that a teacher is used as a center to give lessons systematically. The teaching mode is a new development of the traditional class teaching in the network teaching. The lecture-type teaching mode is a teaching mode mainly based on lectures performed using a network as a communication tool for teachers and students. The teaching mode of the teaching network realized by the Internet can be divided into a synchronous mode and an asynchronous mode. The synchronous teaching mode is the same as the traditional teaching mode except that the teacher and the students are not in the same place for class, the students can listen to the teacher for teaching at the same time, and the teacher and the students have some simple communication. Asynchronous teaching can be simply realized by using the Internet Web service and the e-mail service, the mode is that teachers compile teaching materials such as teaching requirements, teaching contents, teaching evaluation and the like into HTML files to be stored on a Web server, and students can achieve the purpose of learning by browsing the pages. The mode is characterized in that the teaching activities can be carried out 24 hours all day long, each student can determine the learning time, content and progress according to the actual condition of the student, and can download the learning content on the internet or ask for teaching to teachers at any time. The main defects are lack of real-time interactivity and high requirements on learning consciousness and initiative of students.

The demonstration type network teaching mode is that a teacher demonstrates various teaching information to students by using a network according to the teaching requirement, wherein the teaching information can be CAI courseware loaded by the teacher or teaching information from a campus network or the Internet.

In both the lecture type and the demonstration type of network teaching, students essentially only receive information; the interaction between students and teachers and the interaction between students and the classroom environment that exist in real classroom teaching are lacking, so students cannot obtain a real learning experience, which is a major factor in the low learning efficiency of network teaching.

Disclosure of Invention

The invention provides a method for network teaching management based on the Internet of things, which solves the technical problems in the related technology.

According to one aspect of the invention, a method for network teaching management based on the Internet of things is provided, which comprises the following steps:

step S11, arranging a plurality of student devices and a teacher end in a classroom, and connecting the student end with the student devices;

step S12, collecting the action parameters of the teacher user based on the teacher end, wherein the action parameters of the teacher user at least comprise:

a first sound parameter generated from the collected voice of the teacher user;

a first arm action parameter generated from the collected arm actions of the teacher user;

step S13, sending the action parameters of the teacher user to the student end;

identifying the student user required by the teacher user to respond, based on the arm action parameters of the teacher user;

step S14, collecting the action parameters of the student user based on the student end, wherein the action parameters of the student user comprise:

a second sound parameter generated from the collected voice of the student user;

a second arm action parameter generated from the collected arm actions of the student user;

a leg action parameter, used for determining the posture of the student user, the posture at least comprising a sitting posture or a standing posture;

a head action parameter, used for determining the head action of the student user, the head action at least comprising head shaking and nodding;

step S15, driving the student device to act synchronously based on the action parameters of the student user;

driving the arm of the student device to make an arm action of the same type as that of the student user based on the second arm action parameter;

driving the legs of the student device to make a leg action of the same type as that of the student user based on the leg action parameter;

driving the head of the student device to make a head action of the same type as that of the student user based on the head action parameter;

and driving the student device to produce the same voice as the student user based on the second sound parameter.

Further, identifying the student user required by the teacher user to respond based on the arm action parameters of the teacher user comprises: performing voice recognition on the first sound parameter, so that the recognized student name word points to the student user required by the teacher user to respond;

or taking the student user at the student end connected to the student device pointed at by the collected arm action of the teacher user as the student user required to respond.

Further, the second sound parameter comprises at least audio data, the audio data being the collected human voice of the student user;

an intensity parameter is attached to the second sound parameter when the student device is driven to produce the same voice as the student user based on the second sound parameter, and the intensity of the sound emitted by the student device is controlled based on the intensity parameter;

the calculation formula of the intensity parameter is as follows:

where Q denotes the sound intensity, d denotes the horizontal distance from the student device to the podium, k denotes the reference intensity, Q and k are in dB, and e denotes the natural constant.

Further, driving the arm of the student device to make the same type of arm motion as the student user based on second arm motion parameters, wherein the second arm motion parameters at least comprise the type of arm motion, the speed of motion and the time of motion;

the types of actions include:

hand-raising, clapping and hand-waving;

for a hand-raising action, the speed of the action is the time the student user takes to complete one arm raise, and the time of the action is the total time the student user's arm remains raised;

for a clapping action, the speed of the action is the time the student user takes to complete one clap, and the time of the action is the total time the student user claps;

for a hand-waving action, the speed of the action is the time the student user takes to complete one wave, and the time of the action is the total time the student user waves.

Further, for the clapping action the simulation is performed together with sound production: clapping audio collected from the student user, or pre-stored clapping audio, is played, and the playback sound intensity can be calculated by the following formula:

where Z_i denotes the sound production intensity when the student device simulates the clapping action of the i-th student user, k_a denotes the reference sound production intensity, X_i denotes the speed of the clapping action of the i-th student user, X̄ denotes the sample mean of the clapping action samples of the student users, and σ_x denotes the sample standard deviation of the clapping action samples of the student users.

According to an aspect of the present invention, there is provided an apparatus for performing network teaching management based on the internet of things, including:

a student end, used for collecting action parameters of a student user;

a student device, used for simulating the actions of the student user based on the action parameters of the student user;

a teacher end, used for collecting action parameters of a teacher user;

and a cloud platform, connected with the student end, the teacher end and the student device, and used for receiving and sending data.

Further, the teacher end includes at least:

the first audio acquisition unit is used for acquiring voice data of a teacher user;

and the first arm action acquisition unit is used for acquiring the first arm action parameters of the teacher user.

Further, the student terminal includes at least:

the second audio acquisition unit is used for acquiring the human voice data of the student user;

the limb action acquisition unit is used for acquiring second arm action parameters, leg action parameters and head action parameters of the student user;

the limb action acquisition unit comprises a second arm action acquisition unit for acquiring the second arm action parameters of the student user, a leg action acquisition unit for acquiring the leg action parameters of the student user, and a head action acquisition unit for acquiring the head actions of the student user.

Further, the student device includes at least:

a head unit for simulating head movements of a student user;

the arm unit is used for simulating the arm action of a student user;

a leg unit for simulating leg movements of a student user;

and the sound production unit is used for simulating the human voice of the student user to produce sound.

Further, the cloud platform includes at least:

the receiving unit is used for receiving data of the teacher end and the student end;

a transmission unit for transmitting data to the student device;

a guidance unit, which generates guidance information based on the action parameters of the teacher user and transmits the guidance information to the student end of the student user required by the teacher user to respond;

and the control parameter generating unit is used for generating control parameters for controlling the action of the student equipment based on the action parameters of the student user.

The invention has the beneficial effects that:

the invention provides an interactive platform for student users and teacher users, and links the student devices to simulate a real classroom, so that the teacher user alone can conduct live teaching in the classroom while the student devices simulating the student users form a realistic teaching scene; on the one hand this provides the teacher user with the teaching experience of a real classroom, and on the other hand it provides the student users with a realistic learning environment, improves the student users' sense of immersion, and greatly improves the real-time interaction between the student users and the teacher user compared with traditional network teaching that resembles a video conference or recorded-and-broadcast teaching.

Drawings

Fig. 1 is a first flowchart of a method for performing network teaching management based on the internet of things according to an embodiment of the present invention;

fig. 2 is a first classroom arrangement schematic diagram of a method for network teaching management based on the internet of things according to an embodiment of the invention;

fig. 3 is a first schematic structural diagram of an apparatus for performing network teaching management based on the internet of things according to an embodiment of the present invention;

fig. 4 is a second schematic structural diagram of the device for performing network teaching management based on the internet of things according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a teacher end module of the device for performing network teaching management based on the internet of things according to the embodiment of the present invention;

fig. 6 is a schematic structural diagram of a module at a student end of the device for performing network teaching management based on the internet of things according to the embodiment of the invention;

fig. 7 is a schematic block structure diagram of student equipment of the device for network teaching management based on the internet of things according to the embodiment of the present invention;

fig. 8 is a schematic structural diagram of a module of a cloud platform of the apparatus for performing network teaching management based on the internet of things according to the embodiment of the present invention;

fig. 9 is a second flowchart of a method for performing network teaching management based on the internet of things according to an embodiment of the present invention;

fig. 10 is a second classroom arrangement schematic diagram of a method for network teaching management based on the internet of things according to an embodiment of the invention;

fig. 11 is a third schematic structural diagram of an apparatus for performing network teaching management based on the internet of things according to an embodiment of the present invention;

fig. 12 is a fourth schematic structural diagram of an apparatus for performing network teaching management based on the internet of things according to an embodiment of the present invention.

In the figure: a teacher terminal 100, a student terminal 200, a student device 300, a cloud platform 400, a teacher device 500, a first audio acquisition unit 110, a first arm motion acquisition unit 120, a second audio acquisition unit 210, a limb motion acquisition unit 220, a second arm motion acquisition unit 221, a leg motion acquisition unit 222, a head motion acquisition unit 223, a head unit 310, an arm unit 320, a leg unit 330, a sound production unit 340, a receiving unit 410, a transmission unit 420, a guidance unit 430, a control parameter generation unit 440, an arm control unit 441, a head control unit 442, a leg control unit 443, and a sound production control unit 444.

Detailed Description

The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and thereby implement the subject matter described herein, and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may also be combined in other examples.

In this embodiment, a method for performing network teaching management based on the internet of things is provided. Fig. 1 is a schematic flow chart of the method; as shown in fig. 1, the method includes the following steps:

step S11, placing a plurality of student devices 300 and the teacher terminal 100 in the classroom, and connecting the student terminals 200 and the student devices 300;

step S12, collecting the action parameters of the teacher user based on the teacher terminal 100, where the action parameters of the teacher user at least include:

a first sound parameter generated from the collected voice of the teacher user;

a first arm action parameter generated from the collected arm actions of the teacher user;

step S13, sending the action parameters of the teacher user to the student terminal 200;

identifying the student user required by the teacher user to respond, based on the arm action parameters of the teacher user;

specifically, voice recognition performed on the first sound parameter can point to the student user required by the teacher user to respond through the recognized student name word;

or the student user at the student terminal 200 connected to the student device 300 pointed at by the collected arm action of the teacher user is taken as the student user required to respond;

more specifically, as an example, the teacher terminal 100 includes an infrared transmitting device provided on the arm of the teacher user, and each student device 300 includes an infrared receiving device that cooperates with the infrared transmitting device. Through the cooperation of the infrared transmitting device and the infrared receiving device, the student device 300 pointed at by the teacher's arm is identified, and the corresponding student user is then prompted.
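
As an illustration of this pointing mechanism, the following sketch resolves which student device 300 the teacher's arm is aimed at from the infrared signal strength reported by each device. It is only a sketch under assumed conventions: the function name, the normalized signal values, the threshold and the device-to-student mapping are not given in the patent.

```python
from typing import Dict, Optional

# Hypothetical sketch of resolving which student device 300 the teacher's arm points at,
# assuming each device reports a normalized signal strength from its infrared receiver.
def resolve_pointed_student(ir_readings: Dict[str, float],
                            device_to_student: Dict[str, str],
                            threshold: float = 0.5) -> Optional[str]:
    """Return the student-end ID bound to the device with the strongest infrared reading."""
    if not ir_readings:
        return None
    device_id, strength = max(ir_readings.items(), key=lambda kv: kv[1])
    if strength < threshold:
        return None  # the teacher is not clearly pointing at any student device
    return device_to_student.get(device_id)

# Example: "seat_03" sees the strongest signal, so the student user bound to it is prompted.
print(resolve_pointed_student({"seat_01": 0.1, "seat_03": 0.9},
                              {"seat_01": "student_end_A", "seat_03": "student_end_B"}))
```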

Step S14, collecting the action parameters of the student user based on the student terminal 200, where the action parameters of the student user include:

a second sound parameter generated from the collected voice of the student user;

a second arm action parameter generated from the collected arm actions of the student user;

a leg action parameter, used for determining the posture of the student user, the posture at least comprising a sitting posture or a standing posture;

a head action parameter, used for determining the head action of the student user, the head action at least comprising head shaking and nodding;

step S15, driving the student device 300 to act synchronously based on the action parameters of the student user;

driving the arm of the student device 300 to make an arm action of the same type as that of the student user based on the second arm action parameter;

driving the legs of the student device 300 to make a leg action of the same type as that of the student user based on the leg action parameter;

driving the head of the student device 300 to make a head action of the same type as that of the student user based on the head action parameter;

driving the student device 300 to produce the same voice as the student user based on the second sound parameter;

for example, the second sound parameter at least includes audio data, the audio data being the collected human voice of the student user;

an intensity parameter is attached to the second sound parameter when the student device 300 is driven to produce the same voice as the student user based on the second sound parameter, and the intensity of the sound emitted by the student device 300 is controlled based on the intensity parameter;

the intensity parameter is obtained as follows: it is determined based on the distance between the student device 300 and the podium and on the reference intensity; for a classroom with a length of 8 m, a width of 6 m and a ceiling height of 4 m, the following formula can be used as a reference:

where Q represents the sound intensity, d represents the horizontal distance from the student device 300 to the podium, k represents the reference intensity, Q and k are in dB, e represents the natural constant, and the remaining parameters may be treated as dimensionless;

for the above classroom, the reference intensity can be 40 dB;
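
The quantities Q, d and k are defined above, but the formula itself is not reproduced in this text. Purely as a hedged illustration of how an intensity parameter could be attached to the second sound parameter, the sketch below assumes an exponential attenuation of the reference intensity with distance; the function name, the attenuation constant D0 and the exponential form are assumptions, not the patent's formula.

```python
import math

# Illustrative sketch only: the exponential decay and the reference distance D0 are
# assumptions made purely for demonstration; substitute the patent's actual formula
# where indicated.
D0 = 8.0  # hypothetical attenuation constant, roughly the classroom length in metres

def intensity_from_distance(d: float, k: float = 40.0) -> float:
    """Return an assumed sound intensity Q in dB for a student device d metres from the podium."""
    return k * math.exp(-d / D0)  # <-- replace with the actual formula from the patent drawing

# Attach the intensity parameter to the second sound parameter before sending it to the device.
second_sound_parameter = {"audio": b"<collected voice>",
                          "intensity_db": intensity_from_distance(3.0)}
print(round(second_sound_parameter["intensity_db"], 1))
```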

driving the arm of the student device 300 to make the same type of arm motion as the student user based on second arm motion parameters, wherein the second arm motion parameters at least comprise the type of arm motion, the speed of the motion and the time of the motion;

the types of actions include:

hand-raising, clapping, hand-waving, and the like;

for a hand-raising action, the speed of the action is the time the student user takes to complete one arm raise, and the time of the action is the total time the student user's arm remains raised;

for a clapping action, the speed of the action is the time the student user takes to complete one clap, and the time of the action is the total time the student user claps;

for a hand-waving action, the speed of the action is the time the student user takes to complete one wave, and the time of the action is the total time the student user waves;
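
A minimal sketch of how the second arm action parameter described above (type of action, speed of the action, time of the action) could be represented and used to derive a repetition count for the student device; the field names and the helper method are assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of the second arm action parameter as described above; the field names
# are assumptions, not terms used in the patent.
@dataclass
class ArmActionParameter:
    action_type: str             # "raise_hand", "clap" or "wave"
    single_action_time_s: float  # time to complete one raise / clap / wave ("speed of the action")
    total_time_s: float          # total duration of the action ("time of the action")

    def repetitions(self) -> int:
        """Approximate number of repetitions the student device 300 should reproduce."""
        if self.single_action_time_s <= 0:
            return 0
        return max(1, round(self.total_time_s / self.single_action_time_s))

clap = ArmActionParameter("clap", single_action_time_s=0.4, total_time_s=3.0)
print(clap.repetitions())  # roughly 8 claps over 3 seconds
```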

As a further scheme, for the clapping action, since the student device 300 is a mechanical device whose joints and structure can hardly reproduce a real clap, the simulation is performed together with sound production: clapping audio collected from the student user, or pre-stored clapping audio, is played, and the playback sound intensity can be calculated by the following formula:

where Z_i denotes the sound production intensity when the student device 300 simulates the clapping action of the i-th student user, k_a denotes the reference sound production intensity, X_i denotes the speed of the clapping action of the i-th student user, X̄ denotes the sample mean of the clapping action samples of the student users, and σ_x denotes the sample standard deviation of the clapping action samples of the student users;

where i = 1, 2, 3, ..., N, and N is a positive integer;

Z_i and k_a are in dB, e represents the natural constant, and the remaining parameters may be treated as dimensionless;

based on this formula, the student device 300 can be controlled to produce sound synchronously while simulating the clapping action of the student user; matching the sound intensity to the speed of the student user's clapping provides a more realistic simulation and a student feedback scene close to that of a real classroom.
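
The clapping-intensity formula itself is not reproduced in this text; what the definitions above establish is that Z_i depends on the reference intensity k_a and on the clapping speed X_i standardized by the sample mean X̄ and the sample standard deviation σ_x. The sketch below only illustrates that relationship; the exponential mapping and the damping factor are assumptions.

```python
import math
from statistics import mean, stdev
from typing import List

# Illustrative only: the standardization of X_i by the sample mean and sample standard
# deviation follows the definitions above, but the mapping to an intensity in dB is an
# assumption standing in for the patent's (unreproduced) formula.
def clap_intensity(k_a: float, x_i: float, samples: List[float]) -> float:
    x_bar = mean(samples)              # sample mean of the clapping-speed samples
    sigma_x = stdev(samples)           # sample standard deviation of the clapping-speed samples
    z_score = (x_i - x_bar) / sigma_x  # standardized clapping speed of the i-th student user
    return k_a * math.exp(-z_score / 4)  # faster-than-average claps (smaller X_i) play louder

speeds = [0.35, 0.40, 0.45, 0.50]      # seconds per clap for several student users
print(round(clap_intensity(k_a=40.0, x_i=0.35, samples=speeds), 1))
```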

The following classroom interaction scenarios can be achieved, for example, based on the simulation of the student users' arm actions:

students raise their hands to answer questions;

students applaud the teacher or other students;

students wave their hands to signal refusal;

as a further scheme, the student device 300 comprises an image acquisition unit for acquiring images and a sound acquisition unit for acquiring audio; the image acquisition unit acquires video of the classroom and sends it to the student terminal 200 for display, and the sound acquisition unit acquires audio of the classroom and sends it to the student terminal 200 for playback, so that the student terminal 200 can reproduce the experience of the whole classroom;
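
A minimal sketch of how the image and sound acquisition units might package classroom video and audio for the student terminal 200, assuming the capture callbacks and the transport already exist; the packet format is an assumption, since the patent does not fix a protocol.

```python
import json
import time
from typing import Callable

# Minimal sketch, assuming the classroom video frame and audio chunk are already captured as
# raw bytes by the image and sound acquisition units; "send" stands for whatever transport
# links the student device 300 to the student terminal 200.
def stream_classroom(capture_frame: Callable[[], bytes],
                     capture_audio: Callable[[], bytes],
                     send: Callable[[bytes], None],
                     fps: int = 15) -> None:
    """Package classroom video and audio and push them to the remote student terminal."""
    while True:
        packet = {
            "timestamp": time.time(),
            "video_frame": capture_frame().hex(),  # displayed on the student terminal 200
            "audio_chunk": capture_audio().hex(),  # played back on the student terminal 200
        }
        send(json.dumps(packet).encode("utf-8"))
        time.sleep(1 / fps)
```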

as shown in fig. 2 to 7, based on the method for performing network teaching management based on the internet of things, the present invention further provides a device for performing network teaching management based on the internet of things, including:

the student terminal 200 is used for collecting action parameters of student users;

a student device 300 for simulating an action of a student user based on an action parameter of the student user;

a teacher terminal 100 for collecting motion parameters of a teacher user;

a cloud platform 400 connected to the student terminal 200, the teacher terminal 100, and the student devices 300, and configured to receive and transmit data;

the student devices 300 can be interconnected through local near-field communication, such as a CAN bus or a WiFi network, and then connected to the cloud platform 400 through the Internet;
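
As one possible realization of the device-to-cloud leg of this connectivity, the sketch below lets a student device report collected action parameters to the cloud platform 400 and poll for control parameters over HTTP; the endpoints, JSON fields and device ID are assumptions, since the patent only requires an Internet connection to the cloud platform.

```python
import requests

# Hedged sketch: the HTTP endpoints, JSON fields and device ID below are assumptions,
# not interfaces defined by the patent.
CLOUD = "https://cloud-platform.example.com"   # hypothetical cloud platform 400 address
DEVICE_ID = "seat_03"

def report_action(action: dict) -> None:
    """Send a collected student action parameter upstream to the cloud platform 400."""
    requests.post(f"{CLOUD}/devices/{DEVICE_ID}/actions", json=action, timeout=5)

def fetch_control_parameters() -> dict:
    """Poll the cloud platform 400 for control parameters addressed to this student device."""
    resp = requests.get(f"{CLOUD}/devices/{DEVICE_ID}/control", timeout=5)
    resp.raise_for_status()
    return resp.json()

# Example use (against a real deployment):
# report_action({"type": "raise_hand", "total_time_s": 4.0})
# print(fetch_control_parameters())
```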

the teacher end 100 includes at least:

a first audio collecting unit 110 for collecting vocal data of the teacher user;

a first arm motion acquisition unit 120 for acquiring a first arm motion parameter of the teacher user;

the student terminal 200 includes at least:

a second audio collecting unit 210 for collecting vocal data of the student user;

a limb movement acquisition unit 220 for acquiring second arm movement parameters, leg movement parameters and head movement parameters of the student user;

the limb action acquisition unit 220 comprises a second arm action acquisition unit 221 for acquiring the second arm action parameters of the student user, a leg action acquisition unit 222 for acquiring the leg action parameters of the student user, and a head action acquisition unit 223 for acquiring the head actions of the student user;

motion sensing is a conventional technique in this field; for reference, an acceleration sensor can serve as the hardware basis of these acquisition units;
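
As a rough, hypothetical sketch of how short windows of accelerometer samples from such a unit could be mapped to the three arm action types; the thresholds and rules are chosen purely for illustration and are not the patent's method.

```python
from typing import List

# Rough sketch only: the patent just notes that an acceleration sensor can serve as the
# hardware basis of the limb action acquisition unit 220; the thresholds and the very
# simple rules below are illustrative assumptions.
def classify_arm_action(vertical_acc: List[float], horizontal_acc: List[float]) -> str:
    """Guess the arm action type from short windows of accelerometer samples (m/s^2)."""
    v_peak = max(abs(a) for a in vertical_acc)
    h_peak = max(abs(a) for a in horizontal_acc)
    zero_crossings = sum(1 for a, b in zip(horizontal_acc, horizontal_acc[1:]) if a * b < 0)

    if v_peak > 6.0 and h_peak < 3.0:
        return "raise_hand"          # one strong vertical movement
    if zero_crossings > 6 and h_peak > 4.0:
        return "wave"                # repeated side-to-side movement
    return "clap"                    # short, repeated impacts otherwise

print(classify_arm_action([0.2, 7.5, 1.0], [0.1, 0.3, 0.2]))  # -> "raise_hand"
```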

the student device 300 includes at least:

a head unit 310 for simulating head movements of a student user;

an arm unit 320 for simulating arm movements of a student user;

a leg unit 330 for simulating leg movements of a student user;

the sound production unit 340 is used for simulating the human voice of the student user to produce sound;

the cloud platform 400 includes at least:

a receiving unit 410 for receiving data of the teacher end 100 and the student end 200;

a transmission unit 420 for transmitting data to the student device 300;

a guidance unit 430, which generates guidance information based on the action parameters of the teacher user and transmits it to the student terminal 200 of the student user required by the teacher user to respond;

the guidance information may be an audio prompt, or a text or image prompt presented on the display of the student terminal 200.

A control parameter generation unit 440 for generating a control parameter for controlling the action of the student device 300 based on the action parameter of the student user;

the control parameter generation unit 440 includes:

an arm control unit 441 for generating control parameters for controlling the arm unit 320 of the student device 300;

a head control unit 442 for generating control parameters for controlling the head unit 310 of the student device 300;

a leg control unit 443 for generating control parameters for controlling the leg unit 330 of the student device 300;

a sound production control unit 444 for generating control parameters for controlling the sound production unit 340 of the student device 300;
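
As an illustrative sketch of how the control parameter generation unit 440 might dispatch the collected action parameters to the four control sub-units listed above; the command dictionary format is an assumption.

```python
from typing import Dict, List

# Sketch of the control parameter generation unit 440, dispatching each collected action
# parameter to the sub-unit that drives the matching part of the student device 300.
def generate_control_parameters(action_params: Dict) -> List[Dict]:
    commands = []
    if "arm" in action_params:       # handled by the arm control unit 441
        commands.append({"unit": "arm_unit_320", **action_params["arm"]})
    if "head" in action_params:      # handled by the head control unit 442
        commands.append({"unit": "head_unit_310", **action_params["head"]})
    if "leg" in action_params:       # handled by the leg control unit 443
        commands.append({"unit": "leg_unit_330", **action_params["leg"]})
    if "sound" in action_params:     # handled by the sound production control unit 444
        commands.append({"unit": "sound_unit_340", **action_params["sound"]})
    return commands

print(generate_control_parameters({"arm": {"type": "raise_hand", "total_time_s": 4.0},
                                   "sound": {"intensity_db": 35.0}}))
```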

the above can form an Internet of Things system that simulates a real classroom; based on this system, an interactive platform for student users and teacher users is provided, and the student devices 300 are linked to carry out real classroom simulation, so that the teacher user alone can conduct live teaching in the classroom while the student devices 300 simulating the student users form a realistic teaching scene; on the one hand this provides the teacher user with the teaching experience of a real classroom, and on the other hand it provides the student users with a realistic learning environment and improves their sense of immersion; compared with traditional network teaching that resembles a video conference or recorded-and-broadcast teaching, the real-time interaction between the student users and the teacher user is greatly improved; the whole system can be reused and is suitable for network teaching of various subjects;

as another approach, the teacher user may also teach remotely; for this scenario, a further method for performing network teaching management based on the internet of things is provided, as shown in fig. 9, which includes the following steps:

step S91, placing a plurality of student devices 300 and a teacher device in the classroom, connecting the student terminals 200 with the student devices 300, and connecting the teacher device with the teacher terminal 100;

step S92, collecting the action parameters of the teacher user based on the teacher terminal 100, where the action parameters of the teacher user at least include:

a first sound parameter generated from the collected voice of the teacher user;

a first arm action parameter generated from the collected arm actions of the teacher user;

step S93, driving the teacher device to synchronously act based on the action parameters of the teacher user;

wherein the first arm motion parameters at least comprise the type of arm motion, the speed of the motion and the time of the motion;

the types of actions include:

hand-raising, clapping, hand-waving, and the like;

for a hand-raising action, the speed of the action is the time the teacher user takes to complete one arm raise, and the time of the action is the total time the teacher user's arm remains raised;

for a clapping action, the speed of the action is the time the teacher user takes to complete one clap, and the time of the action is the total time the teacher user claps;

for a hand-waving action, the speed of the action is the time the teacher user takes to complete one wave, and the time of the action is the total time the teacher user waves;

As a further scheme, for the clapping action, since the teacher device is a mechanical device whose joints and structure can hardly reproduce a real clap, the simulation is performed together with sound production: clapping audio collected from the teacher user, or pre-stored clapping audio, is played, and the playback sound intensity can be calculated by the following formula:

where Z_s denotes the sound production intensity when the teacher device simulates the s-th clapping action of the teacher user, k_a denotes the reference sound production intensity, X_s denotes the speed of the s-th clapping action of the teacher user, X̄ denotes the sample mean of the teacher user's clapping action samples, and σ_x denotes the sample standard deviation of the teacher user's clapping action samples;

where s = 1, 2, 3, ..., N, and N is a positive integer;

Z_s and k_a are in dB, e represents the natural constant, and the remaining parameters may be treated as dimensionless;

based on this formula, the teacher device can be controlled to produce sound synchronously while simulating the clapping action of the teacher user; matching the sound intensity to the speed of the teacher user's clapping provides a more realistic simulation and a teacher feedback scene close to that of a real classroom.

Step S94, based on the action parameters of the teacher user, the student users required by the teacher user to respond can be identified;

specifically, voice recognition performed on the first sound parameter can point to the student user required by the teacher user to respond through the recognized student name word;

or the student user at the student terminal 200 connected to the student device 300 pointed at by the arm of the teacher device, which is driven by the collected arm action of the teacher user, is taken as the student user required to respond;

more specifically, as an example, an infrared transmitting device is provided on the arm of the teacher device, and each student device 300 includes an infrared receiving device that cooperates with it. Through the cooperation of the infrared transmitting device and the infrared receiving device, the student device 300 pointed at by the arm of the teacher device is identified, and the corresponding student user is then prompted.

Step S95, collecting the action parameters of the student users required by the teacher user to respond based on the student end 200, where the action parameters of the student users include:

a second sound parameter generated from the collected voice of the student user;

a second arm action parameter generated from the collected arm actions of the student user;

a leg action parameter, used for determining the posture of the student user, the posture at least comprising a sitting posture or a standing posture;

a head action parameter, used for determining the head action of the student user, the head action at least comprising head shaking and nodding;

step S96, driving the student device 300 to act synchronously based on the action parameters of the student user;

driving the arm of the student device 300 to make an arm action of the same type as that of the student user based on the second arm action parameter;

driving the legs of the student device 300 to make a leg action of the same type as that of the student user based on the leg action parameter;

driving the head of the student device 300 to make a head action of the same type as that of the student user based on the head action parameter;

driving the student device 300 to produce the same voice as the student user based on the second sound parameter;

for example, the second sound parameter at least includes audio data, the audio data being the collected human voice of the student user;

an intensity parameter is attached to the second sound parameter when the student device 300 is driven to produce the same voice as the student user based on the second sound parameter, and the intensity of the sound emitted by the student device 300 is controlled based on the intensity parameter;

driving the arm of the student device 300 to make the same type of arm motion as the student user based on second arm motion parameters, wherein the second arm motion parameters at least comprise the type of arm motion, the speed of the motion and the time of the motion;

the types of actions include:

hand-raising, clapping, hand-waving, and the like;

for a hand-raising action, the speed of the action is the time the student user takes to complete one arm raise, and the time of the action is the total time the student user's arm remains raised;

for a clapping action, the speed of the action is the time the student user takes to complete one clap, and the time of the action is the total time the student user claps;

for a hand-waving action, the speed of the action is the time the student user takes to complete one wave, and the time of the action is the total time the student user waves;

as shown in fig. 10 to 12, based on the above method, the apparatus for performing network teaching management based on the internet of things of the present invention further includes a teacher device 500 disposed in a classroom, where the teacher device 500 is disposed with reference to the student devices 300;

through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present embodiment or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g. a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method of the embodiments.

In the description of the present invention, it is to be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.

In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.

In the present invention, unless otherwise expressly stated or limited, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through an intermediate. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.

In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above should not be understood to necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

The embodiments of the present invention have been described with reference to the drawings, but the present invention is not limited to the above-mentioned specific embodiments, which are only illustrative and not restrictive, and those skilled in the art can make many forms without departing from the spirit and scope of the present invention and the protection scope of the claims.
