Data interaction method, device, equipment and storage medium

Document No.: 1023800  Publication date: 2020-10-27

Reading note: this technology, Data interaction method, device, equipment and storage medium, was designed and created by 燕宇飞 on 2020-07-20. The application discloses a data interaction method, apparatus, device, and storage medium, relating to the technical fields of artificial intelligence, image processing, and virtual reality in computing. The specific implementation scheme is as follows: receiving human body model data of a target object, wherein the human body model data is generated by a to-be-detected end by collecting human body feature point data; obtaining, according to the human body model data of the target object, a first virtual reality human body model to be displayed in a virtual reality scene; displaying the first virtual reality human body model and performing data interaction for it to obtain a data interaction result; and sending the data interaction result to the target object at the to-be-detected end. The present application improves the operability of remote communication while ensuring user privacy.

1. A data interaction method, comprising:

receiving human body model data of a target object, wherein the human body model data is generated by a to-be-detected end by collecting human body feature point data;

obtaining a first virtual reality human body model to be displayed in a virtual reality scene according to the human body model data of the target object;

displaying the first virtual reality human body model, and performing data interaction for the first virtual reality human body model to obtain a data interaction result;

and sending the data interaction result to the target object of the end to be detected.

2. The method of claim 1, wherein

at least one observation aid is provided in the virtual reality scene; and

the performing data interaction for the first virtual reality human body model to obtain a data interaction result comprises:

performing data interaction for the first virtual reality human body model based on a currently selected observation aid, to obtain observation information of the first virtual reality human body model.

3. The method of claim 2, wherein,

the observation aid comprises at least one of a measuring tool, a line-drawing tool, and a marking tool.

4. The method of claim 1, wherein

at least one surgical tool is provided in the virtual reality scene; and

the performing data interaction for the first virtual reality human body model to obtain a data interaction result comprises:

performing data interaction for the first virtual reality human body model based on a currently selected surgical tool, to obtain a second virtual reality human body model.

5. The method of claim 4, further comprising:

generating a virtual surgery result at least according to the first virtual reality human body model and the second virtual reality human body model, wherein the virtual surgery result comprises at least one of a surgical plan and a post-operative effect diagram.

6. A data interaction method, comprising:

collecting human body feature point data through a camera, and generating human body model data of a target object according to the human body feature point data;

sending the human body model data of the target object to a data processing end, so that a first virtual reality human body model displayed in a virtual reality scene is generated at the data processing end according to the human body model data of the target object, and data interaction is carried out on the first virtual reality human body model;

and receiving a data interaction result sent by the data processing terminal.

7. A data interaction device, comprising:

the model data receiving module is used for receiving human body model data of a target object, wherein the human body model data is generated by acquiring human body characteristic point data of the end to be detected;

the scene model generation module is used for obtaining a first virtual reality human body model to be displayed in a virtual reality scene according to the human body model data of the target object;

the data interaction module is used for displaying the first virtual reality human body model, and performing data interaction for the first virtual reality human body model to obtain a data interaction result;

and the sending module is used for sending the data interaction result to the target object of the end to be detected.

8. The apparatus of claim 7, wherein

at least one observation aid is provided in the virtual reality scene; and

the data interaction module is configured to perform data interaction for the first virtual reality human body model based on a currently selected observation aid, to obtain observation information of the first virtual reality human body model.

9. The apparatus of claim 8, wherein,

the observation aid comprises at least one of a measuring tool, a line-drawing tool, and a marking tool.

10. The apparatus of claim 7, wherein

at least one surgical tool is provided in the virtual reality scene; and

the data interaction module is configured to perform data interaction for the first virtual reality human body model based on a currently selected surgical tool, to obtain a second virtual reality human body model.

11. The apparatus of claim 10, further comprising:

a virtual surgery result generation module, configured to generate a virtual surgery result at least according to the first virtual reality human body model and the second virtual reality human body model, where the virtual surgery result comprises at least one of a surgical plan and a post-operative effect diagram.

12. A data interaction device, comprising:

the model data generation module is used for acquiring human body feature point data through a camera and generating human body model data of a target object according to the human body feature point data;

the model data sending module is used for sending the human body model data of the target object to a data processing end so as to generate a first virtual reality human body model shown in a virtual reality scene at the data processing end according to the human body model data of the target object and carry out data interaction aiming at the first virtual reality human body model;

and the data interaction result receiving module is used for receiving the data interaction result sent by the data processing terminal.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.

14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.

Technical Field

The present application relates to the field of artificial intelligence in computer technology, and in particular to the field of image processing and virtual reality technology.

Background

At present, remote consultation mainly works by collecting a patient's image data in real time through a camera, collecting the patient's voice data through a microphone, synthesizing these into video data, and displaying the video on the doctor's side. Much like a video call, the doctor and the patient communicate through images and sound, achieving the purpose of a remote face-to-face consultation.

Disclosure of Invention

The application provides a data interaction method, a data interaction device, data interaction equipment and a storage medium.

According to an aspect of the present application, there is provided a data interaction method, including:

receiving human body model data of a target object, wherein the human body model data is generated by a to-be-detected end by collecting human body feature point data;

obtaining a first virtual reality human body model to be displayed in a virtual reality scene according to the human body model data of the target object;

displaying the first virtual reality human body model, and performing data interaction for the first virtual reality human body model to obtain a data interaction result;

and sending the data interaction result to the target object at the to-be-detected end.

According to another aspect of the present application, there is provided a data interaction method, including:

collecting human body feature point data through a camera, and generating human body model data of a target object according to the human body feature point data;

sending the human body model data of the target object to a data processing end, so that the data processing end generates, according to the human body model data of the target object, a first virtual reality human body model displayed in a virtual reality scene, and performs data interaction for the first virtual reality human body model;

and receiving the data interaction result sent by the data processing end.

According to another aspect of the present application, there is provided a data interaction apparatus, including:

the model data receiving module is used for receiving human body model data of the target object, wherein the human body model data are generated by acquiring human body characteristic point data of the end to be detected;

the scene model generation module is used for obtaining a first virtual reality human body model to be displayed in a virtual reality scene according to the human body model data of the target object;

the data interaction module is used for displaying the first virtual reality human body model, and performing data interaction for the first virtual reality human body model to obtain a data interaction result;

and the sending module is used for sending the data interaction result to the target object of the end to be detected.

According to another aspect of the present application, there is provided a data interaction apparatus, including:

the model data generation module is used for acquiring human body feature point data through the camera and generating human body model data of the target object according to the human body feature point data;

the model data sending module is used for sending the human body model data of the target object to the data processing end, so that the data processing end generates, according to the human body model data of the target object, a first virtual reality human body model displayed in a virtual reality scene, and performs data interaction for the first virtual reality human body model;

and the data interaction result receiving module is used for receiving the data interaction result sent by the data processing terminal.

The technology of the present application improves both the privacy of user data and the operability of remote communication.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:

FIG. 1 is a first flowchart of a data interaction method provided according to an embodiment of the present application;

FIG. 2 is a second flowchart of a data interaction method provided according to an embodiment of the present application;

FIG. 3 is a flow chart III of a data interaction method provided according to an embodiment of the present application;

FIG. 4 is a first block diagram of a data interaction apparatus provided according to an embodiment of the present application;

FIG. 5 is a second block diagram of a data interaction apparatus provided according to an embodiment of the present application;

FIG. 6 is a third block diagram of a data interaction apparatus provided according to an embodiment of the present application;

FIG. 7 is a block diagram of an electronic device for implementing a method of data interaction of an embodiment of the present application.

Detailed Description

The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.

Referring to fig. 1, an embodiment of the present application provides a data interaction method, which relates to virtual reality (VR) technology. The data interaction method includes:

s101, receiving human body model data of a target object, wherein the human body model data are generated by acquiring human body feature point data of a to-be-detected end;

s102, obtaining a first virtual reality human body model to be displayed in a virtual reality scene according to human body model data of a target object;

s103, displaying the first virtual reality human body model, and carrying out data interaction aiming at the first virtual reality human body model to obtain a data interaction result;

and S104, sending a data interaction result to the target object of the end to be detected.
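The four steps S101 to S104 can be sketched in code. This is a minimal illustration only, assuming a dictionary representation of the model data; the function and field names (`handle_consultation`, `feature_points`, `target_id`) are hypothetical, not from the source.

```python
def handle_consultation(model_data, interact):
    """Hypothetical sketch of steps S101-S104 at the data processing end."""
    # S101: receive human body model data (here simply passed in as a dict)
    feature_points = model_data["feature_points"]

    # S102: build the first virtual reality human body model from the data
    first_model = {"vertices": list(feature_points)}

    # S103: display the model and perform data interaction on it;
    # `interact` stands in for the doctor's interactive operations
    result = interact(first_model)

    # S104: package the data interaction result for the to-be-detected end
    return {"target": model_data.get("target_id"), "result": result}

# usage: a trivial interaction that just counts model vertices
reply = handle_consultation(
    {"target_id": "patient-1", "feature_points": [(0, 0, 0), (1, 0, 0)]},
    interact=lambda model: {"n_vertices": len(model["vertices"])},
)
```

In a real system the `interact` step would be the VR session itself; here it is a callable so the control flow of the four steps stands alone.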

This scheme has at least the following beneficial effects. (1) The human body model data transmitted by the to-be-detected end involves only human body morphological features, not the user's real image information, which fully protects the private information of the target object, especially private parts of the body. (2) Because the human body model data is collected offline (the to-be-detected end collects the human body feature point data, generates the human body model data, and only then transmits it), the requirement on real-time bandwidth is low. (3) In the virtual reality scene, data interaction can be performed on the first virtual reality human body model, realizing a communication process almost the same as face-to-face communication and offering more operational possibilities for remote communication. For example, lines can be drawn on the virtual face of the virtual reality human body model, and the target object can synchronously experience the post-operative effect through VR simulation. (4) Compared with remote communication based on text, images, or video data, human body model data generated from human body feature point data provides higher precision.

The application can be applied to remote consultation between doctors and patients. In such a data interaction scenario, the to-be-detected end may be the patient end, and correspondingly the data processing end that receives the human body model data of the target object sent by the to-be-detected end may be the doctor end. The doctor end receives the human body feature data collected at the patient end and can use it to generate a virtual reality human body model in a virtual reality (VR) consulting room, perform a series of interactive operations on the model to obtain a consultation result, and return the consultation result to the patient end.

The method is not limited to the remote-consultation scenario and can be extended to various other medical scenarios, such as remote medical teaching or medical discussion, where multiple doctor ends enter the same virtual reality scene, all see the first virtual reality human body model, and perform a series of interactive operations on it, achieving teaching, academic discussion, collaborative diagnosis, and similar effects. Further, the method can be extended beyond medical scenarios, for example to clothing customization. In that data interaction scenario, the to-be-detected end may be the customer end, and correspondingly the data processing end that receives the human body model data of the target object may be the clothing designer end. The clothing designer end can perform a series of interactive operations on the virtual reality human body model to obtain a clothing customization result and return it to the customer end.

Optionally, the data processing end implementing the embodiments of the present application includes electronic devices capable of supporting virtual reality (VR) technology, for example a mobile phone combined with a head-mounted device having a virtual reality interaction function, a personal computer combined with such a head-mounted device, or a VR all-in-one headset.

Taking a mobile phone plus a head-mounted device with a virtual reality interaction function as an example: a doctor sees a consultation request from a patient on the mobile phone and chooses, through the phone, to enter a virtual reality consulting room to see the patient. The doctor then puts on the head-mounted device paired with the phone and enters the virtual reality scene it provides, where the virtual reality human body model of the target patient is displayed, and the doctor can interact with the model to carry out the consultation.

In one embodiment, the method shown in fig. 1 further comprises: collecting the user's movement start position and movement distance information at preset intervals, and adjusting the display view angle of the first virtual reality human body model according to that information.

In a traditional face-to-face consultation, the doctor must verbally guide the patient into various postures, or describe positions, so that the patient cooperates in showing the part the doctor wants to see; if the doctor expresses this poorly or there is a language barrier, the effect of the consultation suffers. In this embodiment, the user can observe the first virtual reality human body model from all directions simply by moving, improving the convenience of the doctor's consultation.
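As a rough illustration of how the sampled movement data could drive the display view angle, the sketch below assumes the user's position is tracked in the horizontal plane and the model is rotated to keep facing the user; the function and its geometry are illustrative assumptions, not the patented implementation.

```python
import math

def viewing_angle(start_pos, current_pos, model_center=(0.0, 0.0)):
    """Return (view angle in degrees, distance moved) computed from the
    user's sampled positions in the horizontal plane. Illustrative only."""
    # direction from the model center to where the user now stands
    dx = current_pos[0] - model_center[0]
    dz = current_pos[1] - model_center[1]
    angle = math.degrees(math.atan2(dz, dx)) % 360.0
    # how far the user moved since the last sample
    moved = math.dist(start_pos, current_pos)
    return angle, moved
```

A renderer would apply `angle` as the model's yaw each sampling interval, so walking around the model exposes every side of it.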

In one embodiment, performing data interaction on the first virtual reality human body model in step S103 to obtain a data interaction result includes: collecting user action information, and generating operation information for the first virtual reality human body model according to the user action information.

Optionally, the user action information may be collected by a sensor of the virtual reality device; or the user may be photographed by a camera, and consecutive frames of user images analyzed to determine the user action information.
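One plausible way to turn collected user action information into operation information on the model is a simple lookup table, as sketched below; the gesture and operation names are hypothetical, not from the source.

```python
# Hypothetical mapping from recognized user actions to model operations.
ACTION_TO_OPERATION = {
    "circle": "select_region",
    "click": "pick_tool",
    "drag": "rotate_model",
    "pinch": "zoom_model",
}

def operations_from_actions(actions):
    """Turn a stream of recognized user actions into operations on the
    model, silently ignoring actions with no mapped operation."""
    return [ACTION_TO_OPERATION[a] for a in actions if a in ACTION_TO_OPERATION]
```

A production system would attach parameters (screen coordinates, pressure, timing) to each action, but the dispatch structure would look similar.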

In one embodiment, at least one observation aid is provided in the virtual reality scene.

In step S103, performing data interaction on the first virtual reality human body model to obtain a data interaction result then includes: performing data interaction on the first virtual reality human body model based on the currently selected observation aid, to obtain observation information of the first virtual reality human body model.

Optionally, the user selects an observation aid shown in the virtual reality scene through a gesture such as circling or clicking.

A doctor often needs to know the size or proportion of certain parts for further diagnosis, which normally requires face-to-face line-drawing and measurement; a conventional video consultation cannot provide such procedures. The present application improves the convenience of observing and assessing the virtual reality human body model by providing observation aids.

In addition, in the virtual reality scene provided by this embodiment, the virtual reality human body model restores the real size of the human body, so sizes or proportions obtained from the model are very close to the user's real data. This improves the accuracy of observing and assessing the virtual reality human body model.

In one embodiment, the observation aid comprises at least one of a measuring tool, a line-drawing tool, and a marking tool.

The measuring tool allows specific dimensions of the virtual reality human body model to be measured, providing data support for the doctor to determine a diagnosis scheme. The line-drawing tool allows a target adjustment area to be outlined. The marking tool allows marks to be added to the virtual reality human body model.

The application provides multiple observation aids, which greatly improves the operability of the consultation. For example, a doctor can outline a hair-transplant area on the scalp region of the human body model with the line-drawing tool, and then measure the area of that region with the measuring tool to further determine a surgical plan.
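To illustrate the kind of computation the measuring tool could perform in the hair-transplant example, the sketch below measures the distance between two feature points and the area of a planar region outlined with the line-drawing tool, using the shoelace formula; both helpers are assumptions, not from the source.

```python
import math

def measure_distance(p, q):
    """Measuring tool: distance between two feature points (model units)."""
    return math.dist(p, q)

def region_area(outline):
    """Area of a planar region outlined with the line-drawing tool,
    computed with the shoelace formula over the outline's vertices."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(outline, outline[1:] + outline[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0
```

Since the model restores the real size of the body, values returned by such tools can be read directly in real-world units.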

In one embodiment, at least one surgical tool is provided in the virtual reality scene.

Referring to fig. 2, in step S103, performing data interaction on the first virtual reality human body model to obtain a data interaction result includes: performing data interaction on the first virtual reality human body model based on the currently selected surgical tool, to obtain a second virtual reality human body model.

Through the surgical tools, the user can perform a virtual operation on the virtual reality human body model and obtain a virtual operation effect, conveniently letting the customer understand the post-operative result. This applies especially in the medical aesthetics field, where patients care greatly about the post-operative effect. Through virtual surgery, doctor and patient can both see the expected outcome, greatly improving the efficiency of and experience in the consultation.
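A minimal sketch of how a surgical-tool interaction could derive the second model from the first without mutating it; representing the edit as per-vertex offsets is an assumption made for illustration.

```python
import copy

def apply_virtual_surgery(first_model, vertex_offsets):
    """Derive the second virtual reality human body model by applying
    per-vertex offsets (a stand-in for a surgical-tool interaction) to a
    deep copy of the first model; the first model is left untouched."""
    second_model = copy.deepcopy(first_model)
    for idx, (dx, dy, dz) in vertex_offsets.items():
        x, y, z = second_model["vertices"][idx]
        second_model["vertices"][idx] = (x + dx, y + dy, z + dz)
    return second_model
```

Keeping the first model intact matters: both models are needed later to generate the virtual surgery result.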

Optionally, the method shown in fig. 1 further includes: receiving an adjustment requirement from the to-be-detected end. Then, in step S103, the user can perform corresponding operations on the first virtual reality human body model based on that requirement, so that the data interaction result of step S103 satisfies it.

In another embodiment, the virtual reality scene may further provide a plurality of adjustment templates, where each adjustment template includes information such as an adjustment position and an adjustment size. The method shown in fig. 1 then further includes: adjusting the first virtual reality human body model based on the selected adjustment template to obtain a second virtual reality human body model.

The user can directly invoke a pre-stored adjustment template to adjust the first virtual reality human body model, reducing the workload of performing a virtual operation.
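The pre-stored adjustment templates might look like the following sketch, where each template records an adjustment position and size; all template and field names here are hypothetical.

```python
# Hypothetical pre-stored adjustment templates: each records where to
# adjust and by how much (names and fields are illustrative).
TEMPLATES = {
    "nose_bridge_raise": {"region": "nose_bridge", "offset_mm": 2.0},
    "chin_reduce": {"region": "chin", "offset_mm": -3.0},
}

def apply_template(first_model, template_name):
    """Apply a pre-stored template to the first model, yielding the second
    model with the template recorded in its adjustment history."""
    template = TEMPLATES[template_name]
    second_model = dict(first_model)
    second_model["adjustments"] = first_model.get("adjustments", []) + [template]
    return second_model
```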

In step S104, the data interaction result may be sent to the to-be-detected end so that the patient there can see the operation effect; feedback from the to-be-detected end may then be received, and further adjustments made according to that feedback.

In one embodiment, referring to fig. 2, the method of fig. 1 further comprises: S201, generating a virtual surgery result at least according to the first virtual reality human body model and the second virtual reality human body model, wherein the virtual surgery result comprises at least one of a surgical plan and a post-operative effect diagram.

In the above embodiment, first, since both the first and second virtual reality human body models carry accurate data, accurate adjustment data can be derived from them and a surgical plan then generated, greatly reducing the doctor's workload. Second, the two models can be presented as a before-and-after comparison, so the operation effect is shown intuitively.
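Step S201 could be realized by diffing the two models, as in this illustrative sketch; representing the surgical plan as per-vertex deltas and the effect diagram as a before/after pair is an assumption, not the source's definition.

```python
def virtual_surgery_result(first_model, second_model):
    """Derive a surgical plan (which vertices moved, and by how much) and
    a before/after pair serving as the post-operative effect diagram."""
    plan = {}
    pairs = zip(first_model["vertices"], second_model["vertices"])
    for i, ((x1, y1, z1), (x2, y2, z2)) in enumerate(pairs):
        if (x1, y1, z1) != (x2, y2, z2):
            plan[i] = (x2 - x1, y2 - y1, z2 - z1)
    return {"surgical_plan": plan,
            "effect_diagram": (first_model, second_model)}
```

Because only changed vertices enter the plan, the output directly lists the adjustments the doctor made during the virtual surgery.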

Based on the data interaction method provided by the embodiments of the present application, after the patient is modeled in three dimensions, a doctor can use virtual reality technology to remotely carry out a process equivalent to a face-to-face consultation; for example, lines can be drawn on the modeled virtual face, and the patient can synchronously experience the post-operative effect through virtual reality simulation. The consultation is no longer limited to the current image-text and video forms; a more vivid, immersive mode of diagnosis and treatment is provided for doctors and patients, which can greatly improve the efficiency of and experience in consultations.

Referring to fig. 3, an embodiment of the present application provides a data interaction method applicable to the to-be-detected end. The data interaction method includes:

s301, collecting human body feature point data through a camera, and generating human body model data of a target object according to the human body feature point data;

s302, sending the human body model data of the target object to a data processing end, generating a first virtual reality human body model displayed in a virtual reality scene according to the human body model data of the target object at the data processing end, and carrying out data interaction aiming at the first virtual reality human body model;

and S303, receiving a data interaction result sent by the data processing terminal.
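A minimal sketch of step S301 at the to-be-detected end: packing captured feature points, and nothing else, into human body model data; the field names are hypothetical.

```python
def build_model_data(feature_points, target_id):
    """Pack captured human body feature points into human body model data;
    only geometry is kept (no image pixels), preserving the user's privacy."""
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    zs = [p[2] for p in feature_points]
    return {
        "target_id": target_id,
        "feature_points": list(feature_points),
        # a bounding box lets the receiving end restore the real body size
        "bbox": (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs)),
    }
```

The resulting dictionary is what step S302 would serialize and send to the data processing end.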

This scheme has at least the following beneficial effects. (1) The human body model data transmitted by the to-be-detected end involves only human body morphological features, not the user's real image information, which fully protects the user's private information, especially private parts of the body. (2) Because the human body model data is collected offline (the to-be-detected end collects the human body feature point data, generates the human body model data, and only then transmits it), the requirement on real-time bandwidth is low. (3) Providing the human body model data to the data processing end allows data interaction to be performed there on the first virtual reality human body model, realizing a communication process almost the same as face-to-face communication and offering more operational possibilities for remote communication. (4) Compared with remote communication based on text, images, or video data, human body model data generated from human body feature point data provides higher precision.

Optionally, the to-be-detected end may be a mobile device such as a mobile phone, a tablet computer, an intelligent wearable device, or a portable computer; the user is thus not limited by place or equipment and can conveniently and quickly collect human body feature point data.

Optionally, the camera device configured at the to-be-detected end may be a depth camera, so as to collect human body feature point data with higher accuracy. It may also be an ordinary camera, in which case the two-dimensional feature points it collects are converted into three-dimensional feature points using a 2D-to-3D conversion technique.
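The 2D-to-3D conversion can be illustrated with standard pinhole back-projection, assuming known camera intrinsics (fx, fy, cx, cy) and a depth value per point (measured by a depth camera, or estimated for an ordinary camera); this is textbook geometry, not the source's specific technique.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D feature point (u, v) with a depth value into a 3D point
    using the pinhole camera model: x = (u - cx) * z / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every detected 2D feature point yields the 3D feature point set from which the human body model data is built.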

For parts of this embodiment not described in detail, reference may be made to the description of the to-be-detected end in the foregoing embodiments, and details are not repeated here.

The following is a consultation example combining the data interaction methods of the data processing end and the to-be-detected end, where the patient end is the to-be-detected end and the doctor end is the data processing end.

(I) Working equipment:

The patient end may be a mobile phone used for three-dimensional scanning and modeling of the human body; it collects only feature points and stores no images, avoiding privacy disclosure.

The doctor end may be a mobile phone together with a worn head-mounted device having a virtual reality (VR) interaction function.

(II) Working process:

(1) The patient initiates a request at the patient end to enter the virtual consulting room. After entering, human body feature point data is collected through the patient end's camera, and human body model data is generated based on the feature point data.

(2) The patient end can use the human body model data to display a three-dimensional model of the human body for the patient to view. The patient end may also receive consultation information entered by the user, such as the location to be improved and the patient's concerns. The patient end then enters a waiting state for the doctor's visit.

(3) The doctor enters the virtual consulting room by wearing a head-mounted device with a virtual reality interaction function. Based on the human body model data, the virtual reality human body model is displayed in the virtual consulting room. To improve the realism of the experience, a consulting-room scene is arranged in the virtual environment, and the size of the virtual reality human body model can be larger than that of the real patient.

(4) The doctor observes and makes judgments using the virtual reality human body model reconstructed at the doctor end, and during this process can carry out a series of interactive behaviors with the model through the doctor end's virtual reality interaction function, for example measuring the proportions of a part, or drawing lines on certain parts to aid judgment.

(5) According to the patient's requests and the problems the patient raises, the doctor uses the doctor end's virtual reality interaction function to perform a virtual operation, or to operate on specific parts so as to simulate the post-operative effect.

(6) The patient receives the changes to the human body model through the patient end and experiences the post-operative effect, and can send feedback to the doctor end according to the patient's preferences so that the scheme is revised to varying degrees, until a satisfactory scheme is finally reached.

(7) The doctor end produces a diagnosis report, a precise operation plan, and an operation effect chart according to the data recorded for the finally agreed plan.

Based on this example, the present application provides a brand-new interaction mode for remote consultation scenarios while protecting user privacy. Although the two parties to the consultation are not face to face, the experience surpasses a real one: data and parameters from reality are imported into the digital world, and the two parties can conveniently and directly use virtual reality interaction throughout the entire process, from consultation and selection of a diagnosis and treatment plan to experiencing the post-operative effect, greatly improving the experience and value of the consultation.

Referring to fig. 4, fig. 4 is a data interaction apparatus 400 provided in an embodiment of the present application, and the apparatus includes:

the model data receiving module 401 is configured to receive human body model data of a target object, where the human body model data is generated by acquiring human body feature point data at a to-be-detected end;

a scene model generating module 402, configured to obtain a first virtual reality human body model to be displayed in a virtual reality scene according to human body model data of a target object;

the data interaction module 403 is configured to display the first virtual reality human body model, perform data interaction for the first virtual reality human body model, and obtain a data interaction result;

the sending module 404 is configured to send a data interaction result to the target object at the to-be-detected end.
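The four modules of apparatus 400 could be composed at the data processing end roughly as follows. The transport abstraction, message shapes, and scene representation are placeholders introduced for this sketch; the patent does not prescribe them.

```python
class DataInteractionApparatus:
    """Sketch of apparatus 400 at the data processing (doctor) end."""
    def __init__(self, transport):
        self.transport = transport

    def receive_model_data(self):              # model data receiving module 401
        return self.transport.receive()

    def build_scene_model(self, model_data):   # scene model generating module 402
        return {"scene": "virtual_consultation_room", "model": model_data}

    def interact(self, scene_model, action):   # data interaction module 403
        return {"model": scene_model["model"], "action": action}

    def send_result(self, result):             # sending module 404
        self.transport.send(result)

class LoopbackTransport:
    """Stand-in transport used only for demonstration."""
    def __init__(self, inbound):
        self.inbound, self.outbound = inbound, None
    def receive(self):
        return self.inbound
    def send(self, msg):
        self.outbound = msg

transport = LoopbackTransport({"points": {"nose_tip": (0, 0.5, -0.25)}})
apparatus = DataInteractionApparatus(transport)
scene = apparatus.build_scene_model(apparatus.receive_model_data())
apparatus.send_result(apparatus.interact(scene, "measure"))
```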

In one embodiment, at least one viewing aid is provided in a virtual reality scene;

the data interaction module 403 is configured to perform data interaction for the first virtual reality human body model based on the currently selected observation auxiliary tool, so as to obtain observation information of the first virtual reality human body model.

In one embodiment, the viewing aid includes at least one of a measuring tool, a marking tool, and a line drawing tool.

In one embodiment, at least one surgical tool is provided in a virtual reality scene;

and a data interaction module 403, configured to perform data interaction on the first virtual reality human body model based on the currently selected surgical tool, so as to obtain a second virtual reality human body model.

In one embodiment, referring to fig. 5, the data interaction apparatus 500 further includes:

a virtual operation result generating module 501, configured to generate a virtual operation result according to at least the first virtual reality human body model and the second virtual reality human body model, where the virtual operation result includes at least one of an operation scheme and an operation effect graph.
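A hedged sketch of module 501: deriving a virtual operation result from the first (pre-operative) and second (simulated post-operative) models. Here a per-vertex displacement list stands in for the "operation scheme", and a before/after pair stands in for the operation effect graph; both representations are assumptions made for illustration.

```python
def generate_operation_result(first_model, second_model):
    """Combine the two models into a virtual operation result."""
    # The per-vertex displacement captures exactly what the virtual
    # operation changed, standing in for the operation scheme.
    scheme = [
        tuple(round(b - a, 6) for a, b in zip(va, vb))
        for va, vb in zip(first_model, second_model)
    ]
    effect_graph = {"before": first_model, "after": second_model}
    return {"scheme": scheme, "effect_graph": effect_graph}

result = generate_operation_result(
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.02, 0.0), (1.0, 0.0, 0.0)])
```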

Referring to fig. 6, fig. 6 is a data interaction apparatus 600 provided in an embodiment of the present application, the apparatus including:

the model data generation module 601 is configured to acquire human body feature point data through a camera, and generate human body model data of a target object according to the human body feature point data;

a model data sending module 602, configured to send the body model data of the target object to a data processing end, so as to generate, at the data processing end, a first virtual reality body model shown in a virtual reality scene according to the body model data of the target object, and perform data interaction with respect to the first virtual reality body model;

and a data interaction result receiving module 603, configured to receive a data interaction result sent by the data processing end.
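A companion sketch for apparatus 600 at the patient end: generate model data (camera capture is stubbed out here), send it toward the data processing end, and receive the interaction result back. The transport and message shapes are illustrative assumptions, not the patent's prescribed protocol.

```python
class PatientEndApparatus:
    """Sketch of apparatus 600 at the patient (to-be-detected) end."""
    def __init__(self, transport):
        self.transport = transport

    def generate_model_data(self, feature_points):  # model data generation module 601
        # A real implementation would derive the points from camera frames.
        return {"points": feature_points}

    def send_model_data(self, model_data):          # model data sending module 602
        self.transport.send(model_data)

    def receive_result(self):                       # result receiving module 603
        return self.transport.receive()

class EchoTransport:
    """Stand-in for the channel between patient end and data processing end."""
    def __init__(self):
        self.sent = None
    def send(self, msg):
        self.sent = msg
    def receive(self):
        # Pretend the data processing end replied with an interaction result.
        return {"result": "plan_approved", "based_on": self.sent}

patient_end = PatientEndApparatus(EchoTransport())
patient_end.send_model_data(
    patient_end.generate_model_data({"nose_tip": (0, 0.5, -0.25)}))
result = patient_end.receive_result()
```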

According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.

Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.

As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.

The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of data interaction provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of data interaction provided herein.

The memory 702, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of data interaction in the embodiments of the present application (e.g., the model data receiving module 401, the scene model generating module 402, the data interaction module 403, the transmitting module 404). The processor 701 executes various functional applications of the server and data processing, i.e., a method of implementing data interaction in the above-described method embodiments, by executing non-transitory software programs, instructions, and modules stored in the memory 702.

The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for data interaction, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected to a data-interacting electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The electronic device of the method of data interaction may further comprise: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.

The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for data interaction, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and no limitation is imposed herein.

The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
