Data merging method, data splitting method, apparatus, device, and storage medium


Abstract: This technology, "Data merging method, data splitting method, apparatus, device, and storage medium," was created by 董超峰 and 邵振军 on 2020-03-27. One or more embodiments of the present specification provide a data merging method, a data splitting method, an apparatus, a device, and a storage medium, where the data merging method includes: acquiring stream data split into a plurality of data blocks, wherein each data block is assigned a sequence number; parsing each data block to obtain its sequence number; determining a target position of the data block in an array used for caching the stream data according to the sequence number and the length of the array; and storing the data block at the target position, thereby merging the acquired data blocks.

1. A data merging method, comprising:

acquiring stream data split into a plurality of data blocks, wherein each data block is assigned a sequence number;

parsing each data block to obtain its sequence number;

determining a target position of the data block in an array used for caching the stream data according to the sequence number and the length of the array;

and storing the data block at the target position, so as to merge the acquired data blocks.

2. The method of claim 1, wherein determining the target position of the data block in the array according to the sequence number and the length of the array used for caching the stream data comprises:

taking as the target position the position in the array corresponding to the value obtained by taking the sequence number modulo the length of the array.

3. The method of claim 1, wherein the array is a heap array, the method further comprising:

after the data blocks are stored at their target positions, taking the data blocks out of the array in increasing order of the values corresponding to the positions in the array, wherein the value corresponding to the target position is m+1 and m is a natural number;

and after one data block is taken out, adding 1 to the value of m+1 and comparing the value of m+1 with the sequence number of the data block taken out this time; if the value of m+1 is not equal to the sequence number of the data block taken out this time, putting the data block back into the array, waiting for a preset duration, and continuing to acquire the data block at the target position within the preset duration; and if the waiting time exceeds the preset duration, or the value of m+1 is equal to the sequence number of the data block taken out this time, continuing to acquire the data block at the position in the array whose corresponding value is m+2.

4. The method according to any one of claims 1 to 3, further comprising:

before acquiring the stream data split into a plurality of data blocks, receiving multi-path stream data from a device side, wherein the device side splits the stream data into the plurality of data blocks and each data block is assigned a resource identifier;

and distributing the received data blocks with the same resource identifier to the same task processing object, wherein one task processing object corresponds to one independent array.

5. A data splitting method, comprising:

splitting streaming data to be sent to a receiving end into a plurality of data blocks;

establishing a multi-path connection with the receiving end;

and packing the plurality of data blocks and then transmitting them to the receiving end over the multi-path connection, wherein each packed data block is assigned a sequence number.

6. The method of claim 5, wherein each of the packed data blocks is assigned a resource identifier.

7. A data merging apparatus, comprising:

an obtaining module, configured to obtain stream data that is split into a plurality of data blocks, where each data block is assigned a sequence number;

a parsing module, configured to parse each data block to obtain its sequence number;

a determining module, configured to determine a target position of the data block in an array used for caching the stream data according to the sequence number and the length of the array;

and a merging module, configured to store the data block at the target position so as to merge the acquired data blocks.

8. A data splitting apparatus, comprising:

a splitting module, configured to split stream data to be sent to a receiving end into a plurality of data blocks;

a connection module, configured to establish a multi-path connection with the receiving end;

and a sending module, configured to pack the plurality of data blocks and send them to the receiving end over the multi-path connection, wherein each packed data block is assigned a sequence number.

9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the data merging method according to any one of claims 1 to 4 or the data splitting method according to claim 5 or 6 when executing the program.

10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the data merging method of any one of claims 1 to 4 or the data splitting method of claim 5 or 6.

Technical Field

One or more embodiments of the present disclosure relate to the field of communications technologies, and in particular, to a data merging method, a data splitting method, an apparatus, a device, and a storage medium.

Background

In the 4G/5G era, users have higher requirements for the timeliness of streaming data. The streaming data pushed by a user is typically on the order of tens or hundreds of gigabytes, so timeliness is difficult to guarantee under existing bandwidth conditions.

Disclosure of Invention

In view of this, one or more embodiments of the present specification propose a data merging method, a data splitting method, an apparatus, a device and a storage medium.

One or more embodiments of the present specification provide a data merging method, including: acquiring stream data split into a plurality of data blocks, wherein each data block is assigned a sequence number; parsing each data block to obtain its sequence number; determining a target position of the data block in an array used for caching the stream data according to the sequence number and the length of the array; and storing the data block at the target position so as to merge the acquired data blocks.

Optionally, determining the target position of the data block in the array according to the sequence number and the length of the array used for caching the stream data includes: taking as the target position the position in the array corresponding to the value obtained by taking the sequence number modulo the length of the array.

Optionally, the array is a heap array, and the method further includes: after the data blocks are stored at their target positions, taking the data blocks out of the array in increasing order of the values corresponding to the positions in the array, where the value corresponding to the target position is m+1 and m is a natural number; and after one data block is taken out, adding 1 to the value of m+1 and comparing the value of m+1 with the sequence number of the data block taken out this time; if the value of m+1 is not equal to the sequence number of the data block taken out this time, putting the data block back into the array, waiting for a preset duration, and continuing to acquire the data block at the target position within the preset duration; and if the waiting time exceeds the preset duration, or the value of m+1 is equal to the sequence number of the data block taken out this time, continuing to acquire the data block at the position in the array whose corresponding value is m+2.

Optionally, the method further includes: before acquiring the stream data split into a plurality of data blocks, receiving multi-path stream data from a device side, where the device side splits the stream data into the plurality of data blocks and each data block is assigned a resource identifier; and distributing the received data blocks with the same resource identifier to the same task processing object, where one task processing object corresponds to one independent array.

One or more embodiments of the present specification further provide a data splitting method, including: splitting stream data to be sent to a receiving end into a plurality of data blocks; establishing a multi-path connection with the receiving end; and packing the plurality of data blocks and then transmitting them to the receiving end over the multi-path connection, wherein each packed data block is assigned a sequence number.

Optionally, each packed data block is assigned a resource identifier.

One or more embodiments of the present specification also provide a data merging apparatus, including: an obtaining module, configured to obtain stream data that is split into a plurality of data blocks, where each data block is assigned a sequence number; a parsing module, configured to parse each data block to obtain its sequence number; a determining module, configured to determine a target position of the data block in an array used for caching the stream data according to the sequence number and the length of the array; and a merging module, configured to store the data block at the target position so as to merge the acquired data blocks.

One or more embodiments of the present specification further provide a data splitting apparatus, including: a splitting module, configured to split stream data to be sent to a receiving end into a plurality of data blocks; a connection module, configured to establish a multi-path connection with the receiving end; and a sending module, configured to pack the plurality of data blocks and send them to the receiving end over the multi-path connection, where each packed data block is assigned a sequence number.

One or more embodiments of the present specification also provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement any one of the data merging methods or any one of the data splitting methods described above.

One or more embodiments of the present specification also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform any one of the data merging methods described above or any one of the data splitting methods described above.

As can be seen from the above, in the data merging method provided in one or more embodiments of the present specification, after receiving stream data that has been split into multiple data blocks, the server parses the data blocks to obtain their sequence numbers and determines the position of each data block in an array used for caching the stream data according to the sequence number and the length of the array. The received block data can thus be reassembled into the original stream data, which minimizes delay and ensures the timeliness of the stream data.

Drawings

In order to more clearly illustrate one or more embodiments of the present specification or the solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description illustrate only one or more embodiments of the present specification, and other drawings can be obtained from them by those skilled in the art without inventive effort.

FIG. 1 is a schematic diagram of a communication system shown in one or more embodiments herein;

FIG. 2 is a flow diagram of a data merging method according to one or more embodiments of the present specification;

FIG. 3 is a data structure diagram of the splitting protocol according to one or more embodiments of the present specification;

FIG. 4 is a flow diagram illustrating a method of data splitting in accordance with one or more embodiments of the disclosure;

FIG. 5 is a schematic diagram illustrating information interaction between a server and a device according to one or more embodiments of the present specification;

FIG. 6 is a schematic diagram of a receive channel structure shown in one or more embodiments of the present description;

FIG. 7 is a diagram of a data set structure shown in one or more embodiments of the present description;

FIG. 8 is a schematic diagram of an exit channel structure according to one or more embodiments of the present specification;

FIG. 9 is a block diagram of a data merging apparatus according to one or more embodiments of the present specification;

FIG. 10 is a block diagram of a data splitting apparatus, as shown in one or more embodiments of the present description;

FIG. 11 is a schematic diagram of a more specific hardware structure of an electronic device according to one or more embodiments of the present specification.

Detailed Description

For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.

It is to be noted that unless otherwise defined, technical or scientific terms used in one or more embodiments of the present specification should have the ordinary meaning as understood by those of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in one or more embodiments of the specification is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.

One or more embodiments of the present specification provide a data merging method that may be implemented based on a communication system. As shown in FIG. 1, the communication system may include a device side and a server side. The device side may be a 4K/8K acquisition device held by a user, for example, a mobile terminal with a data acquisition function. After acquiring stream data, the acquisition device can split it into a plurality of data blocks according to data size or time, assign each data block a sequence number and a resource identifier, pack the split streaming-media data (such as Stream1 (data stream 1) to Stream3 (data stream 3) shown in FIG. 1) according to an agreed protocol, and send it to the server over multiple paths (three paths are taken as an example in FIG. 1). After receiving the split stream data, the server processes data blocks with the same resource ID (hereinafter also referred to as resource identifier) in the same service thread. For example, in the figure, service thread 1 is responsible for processing the data blocks in stream data set 1, which all belong to the same resource, and service thread 2 is responsible for processing the data blocks in stream data set 2, which likewise belong to the same resource. The position of each data block in the array used for caching the data blocks is determined according to its sequence number, and the data block is placed at the corresponding position in the array, thereby merging the data.

FIG. 2 is a flowchart illustrating a data merging method according to one or more embodiments of the present specification, which may be performed by a server. As shown in FIG. 2, the method includes:

step 202: acquiring stream data split into a plurality of data blocks, wherein each data block is assigned a sequence number;

Optionally, the stream data may be a video stream, and the data blocks may be assigned sequence numbers at the sending end, which may be, for example, the acquisition device.

In an example, after splitting the acquired data into a plurality of data blocks, the sending end transmits the data to the server through a splitting protocol. The fields of the splitting protocol are shown in FIG. 3 and described as follows:

Resource ID (resource identifier) identifies the resource to which the currently uploaded data belongs; it is the identity credential of the resource and occupies 7 bytes.

Payload Protocol identifies the protocol type of the payload data and occupies 1 byte; for example, 1 identifies RTP (Real-Time Transport Protocol), 2 identifies RTMP (Real-Time Messaging Protocol), and so on.

Sequence Number records the order of the current payload block and occupies 4 bytes.

Length records the length of the transmitted payload and occupies 4 bytes; to ensure real-time performance, the length can be determined by the acquisition end according to the actual situation.

Payload is the stream data actually transmitted.

With this splitting protocol, the data collected by the device side can be pushed to the service server promptly and correctly, and the server can reassemble the received block data into the original stream data, which minimizes delay and ensures the timeliness of the stream data.
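As a concrete illustration of this header layout, the following Go sketch packs one data block. Only the field sizes come from the text above; the struct and function names and the big-endian byte order are assumptions made for the example.

```go
// Sketch of packing one block of the splitting protocol described above.
// Field sizes follow the text (7 + 1 + 4 + 4 bytes); byte order is assumed.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

const (
	PayloadProtocolRTP  = 1 // payload carries RTP
	PayloadProtocolRTMP = 2 // payload carries RTMP
)

// Block is one split data block before packing.
type Block struct {
	ResourceID      [7]byte // identifies the resource the data belongs to
	PayloadProtocol byte    // protocol type of the payload (e.g. 1 = RTP, 2 = RTMP)
	SequenceNumber  uint32  // order of this payload block within the stream
	Payload         []byte  // stream data actually carried by this block
}

// Pack serializes the 16-byte header followed by the payload.
func (b *Block) Pack() ([]byte, error) {
	buf := new(bytes.Buffer)
	buf.Write(b.ResourceID[:])       // 7-byte resource ID
	buf.WriteByte(b.PayloadProtocol) // 1-byte payload protocol
	// 4-byte sequence number and 4-byte payload length
	if err := binary.Write(buf, binary.BigEndian, b.SequenceNumber); err != nil {
		return nil, err
	}
	if err := binary.Write(buf, binary.BigEndian, uint32(len(b.Payload))); err != nil {
		return nil, err
	}
	buf.Write(b.Payload)
	return buf.Bytes(), nil
}

func main() {
	blk := Block{PayloadProtocol: PayloadProtocolRTP, SequenceNumber: 0, Payload: []byte("frame data")}
	copy(blk.ResourceID[:], "res0001")
	packed, _ := blk.Pack()
	fmt.Printf("packed %d bytes (header 16 + payload %d)\n", len(packed), len(blk.Payload))
}
```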

Step 204: parsing the data block to obtain the sequence number;

Continuing with the above example, the server can parse the obtained data blocks according to the above splitting protocol to obtain their sequence numbers.

Step 206: determining the target position of the data block in the array according to the sequence number and the length of the array used for caching the stream data;

Optionally, in one or more embodiments of the present specification, determining the target position of the data block in the array according to the sequence number and the length of the array used for caching the stream data may include: taking as the target position the position in the array corresponding to the value obtained by taking the sequence number modulo the length of the array. For example, assuming the length of the array is 16, for a data block with sequence number 0, 0 modulo 16 is 0, so the current data block is placed at the position corresponding to 0 in the array; for a data block with sequence number 17, 17 modulo 16 is 1, so the current data block is placed at the position corresponding to 1 in the array.

In another example, the length of the array used for caching the stream data may be preset to a fixed value, for example, 1024.
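A minimal sketch of this position calculation in Go, using the length-16 example above (the function name is illustrative):

```go
package main

import "fmt"

// targetPosition maps a block's sequence number to its slot in the cache
// array, as described above: the sequence number modulo the array length.
func targetPosition(sequenceNumber uint32, arrayLen int) int {
	return int(sequenceNumber % uint32(arrayLen))
}

func main() {
	// With an array of length 16, sequence number 0 lands in slot 0 and
	// sequence number 17 lands in slot 1, matching the example above.
	fmt.Println(targetPosition(0, 16))  // 0
	fmt.Println(targetPosition(17, 16)) // 1
}
```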

Step 208: storing the data block at the target position so as to merge the acquired data blocks.

With the data merging method of one or more embodiments of the present specification, after receiving stream data that has been split into multiple data blocks, the server parses the data blocks to obtain their sequence numbers and determines the position of each data block in the array according to its sequence number and the length of the array used for caching the stream data. The received block data can thus be reassembled into the original stream data, which minimizes delay and ensures the timeliness of the stream data.

In one or more embodiments of the present disclosure, the array may be a heap array, and a min-heap may be used, for example. In a heap array, each element of the array is a heap whose operations have time complexity O(log N), so caching the stream data with heaps is efficient.

In one or more embodiments of the present description, the data merging method may further include:

After the data blocks are stored at their target positions, the data blocks are taken out of the array in increasing order of the values corresponding to the positions in the array, where the value corresponding to the target position is m+1 and m is a natural number. Each time a data block is taken out, 1 is added to the value of m+1, and the value of m+1 is compared with the sequence number of the data block taken out this time. If they are not equal, the data block is put back into the array, and the data block at the target position continues to be acquired for a preset duration; if the waiting time exceeds the preset duration, or the value of m+1 equals the sequence number of the data block taken out this time, the data block at the position in the array whose corresponding value is m+2 is acquired next. In one example, a variable m is set to designate the sequence number of the data block currently being processed after merging, and the found data blocks are merged successively in increasing order of m to ensure their continuity. To improve processing efficiency, an atomic operation can be used for this variable, where an atomic operation is an operation that cannot be interrupted by the thread scheduling mechanism: once started, it runs to the end without any context switch (switch to another thread) in between. In this example, the array is again a heap array. The thread calculates the heap-array position designated by m+1, takes the heap object at that position and pops one data block, and compares the data block's sequence number with m+1. If they are equal, the desired data has been found; otherwise, the data block is pushed back into the heap object, and for at most 1 second (an example of the preset duration) the thread pops a data block from the heap at the position designated by m+1 every 1 millisecond until the data block with sequence number m+1 is obtained. If that data block is not obtained within 1 second, a log entry records that the data block was lost, and the thread continues by calculating the array position designated by m+2 and acquiring the data block at that position.
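The following Go sketch illustrates the retrieval loop described above. It is a simplification: all buffered blocks live in a single min-heap keyed on the sequence number rather than in an array of per-position heaps, and sequence numbers are assumed to start at 1; the 1 ms polling interval and 1 s timeout follow the example values in the text.

```go
package main

import (
	"container/heap"
	"fmt"
	"log"
	"sync"
	"time"
)

// Block is one split data block with its sequence number.
type Block struct {
	Seq     uint64
	Payload []byte
}

// blockHeap is a min-heap of blocks ordered by sequence number.
type blockHeap []*Block

func (h blockHeap) Len() int           { return len(h) }
func (h blockHeap) Less(i, j int) bool { return h[i].Seq < h[j].Seq }
func (h blockHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *blockHeap) Push(x any)        { *h = append(*h, x.(*Block)) }
func (h *blockHeap) Pop() any {
	old := *h
	b := old[len(old)-1]
	*h = old[:len(old)-1]
	return b
}

// Merger buffers out-of-order blocks and emits them in sequence order.
type Merger struct {
	mu   sync.Mutex
	heap blockHeap
	m    uint64 // sequence number of the last block emitted (blocks start at 1 here)
}

func (mg *Merger) Put(b *Block) {
	mg.mu.Lock()
	heap.Push(&mg.heap, b)
	mg.mu.Unlock()
}

// tryTake pops the smallest buffered block; if its sequence number is not
// the wanted one it is pushed back and nil is returned.
func (mg *Merger) tryTake(want uint64) *Block {
	mg.mu.Lock()
	defer mg.mu.Unlock()
	if mg.heap.Len() == 0 {
		return nil
	}
	b := heap.Pop(&mg.heap).(*Block)
	if b.Seq != want {
		heap.Push(&mg.heap, b) // not the block we are waiting for yet
		return nil
	}
	return b
}

// Next returns the block numbered m+1, polling every 1 ms for up to 1 s.
// If it never shows up, the block is treated as lost and skipped.
func (mg *Merger) Next() *Block {
	want := mg.m + 1
	deadline := time.Now().Add(time.Second)
	for time.Now().Before(deadline) {
		if b := mg.tryTake(want); b != nil {
			mg.m = want
			return b
		}
		time.Sleep(time.Millisecond)
	}
	log.Printf("block %d lost, skipping", want)
	mg.m = want // move on to m+2 on the next call
	return nil
}

func main() {
	mg := &Merger{}
	for _, seq := range []uint64{3, 1, 2} { // blocks arrive out of order
		mg.Put(&Block{Seq: seq})
	}
	for i := 0; i < 3; i++ {
		if b := mg.Next(); b != nil {
			fmt.Println("emitted block", b.Seq)
		}
	}
}
```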

In one or more embodiments of the present description, the data merging method may further include:

Before acquiring the stream data that has been split into a plurality of data blocks, multi-path stream data is received from the device side, where the device side splits the stream data into the plurality of data blocks and each data block is assigned a resource identifier; the received data blocks with the same resource identifier are distributed to the same task processing object, and each task processing object corresponds to one independent array. In one example, the above data merging method is implemented with distributed servers. The device side establishes a multi-path TCP (Transmission Control Protocol) connection with an index server and sends the split stream data to the index server through the multi-path TCP connection. The index server determines, according to the resource ID in the stream data, whether the streams belong to the same video stream, and assigns the TCP connections belonging to the same resource ID to the same merge server. On the merge server, an independent thread is allocated to each connection for data reception, and a heap-array object is maintained internally for storing the data blocks. In another example, the device side directly establishes a multi-path TCP connection with the merge server and sends the split stream data to it through the multi-path TCP connection; the merge server allocates an independent block-service processing object according to the resource ID in each data block, and this object is responsible for collecting the pushed block data of that resource ID. In this example, one task processing object corresponds to one independent array used for caching the block data of one resource ID, so the data is processed stream by stream and data processing efficiency is improved.
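A minimal Go sketch of the per-resource dispatching described above: every block with the same resource ID is routed to the same handler, and each handler owns its own buffer. All names and the channel capacity are illustrative assumptions.

```go
package main

import "sync"

// Block is one received data block carrying its resource ID.
type Block struct {
	ResourceID string
	Seq        uint32
	Payload    []byte
}

// taskHandler owns one independent buffer (the per-resource array/heap)
// and a goroutine that merges the blocks of a single resource.
type taskHandler struct {
	in chan *Block
}

func newTaskHandler() *taskHandler {
	h := &taskHandler{in: make(chan *Block, 1024)}
	go func() {
		for blk := range h.in {
			_ = blk // merge into this handler's own array (see the merging logic above)
		}
	}()
	return h
}

// Dispatcher routes every block with the same resource ID to the same handler.
type Dispatcher struct {
	mu       sync.Mutex
	handlers map[string]*taskHandler
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{handlers: make(map[string]*taskHandler)}
}

func (d *Dispatcher) Dispatch(blk *Block) {
	d.mu.Lock()
	h, ok := d.handlers[blk.ResourceID]
	if !ok {
		h = newTaskHandler()
		d.handlers[blk.ResourceID] = h
	}
	d.mu.Unlock()
	h.in <- blk
}

func main() {
	d := NewDispatcher()
	d.Dispatch(&Block{ResourceID: "res0001", Seq: 1})
	d.Dispatch(&Block{ResourceID: "res0002", Seq: 1}) // goes to a different handler
}
```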

FIG. 4 is a flowchart illustrating a data splitting method according to one or more embodiments of the present specification, which may be performed by the device side. As shown in FIG. 4, the method includes:

Step 402: splitting stream data to be sent to a receiving end into a plurality of data blocks. For example, the stream data may be split into a plurality of data blocks according to data size or time.

Step 404: establishing multi-path connection with the receiving end; as described above, a multi-way TCP connection may be established with an index server or a merge server, for example.

Step 406: and packaging the plurality of data blocks through the multi-path connection, and then transmitting the plurality of data blocks to the receiving end in a multi-path manner, wherein the serial numbers of the packaged data blocks are assigned.

For example, the device side may pack the split data blocks and send them to the server side using the splitting protocol. The device side needs to guarantee the uniqueness of each split data block and the timeliness of pushing.

In one or more embodiments of the present specification, each packed data block is assigned a resource identifier, where data blocks having the same resource identifier belong to the same resource.
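The device-side behavior described in steps 402 to 406 might look roughly like the following Go sketch. The block size, the number of paths, the server address, and the pack stub (which stands in for the protocol packing shown earlier) are all illustrative assumptions.

```go
package main

import (
	"log"
	"net"
)

const (
	blockSize  = 64 * 1024            // illustrative split size; the text leaves this to the device
	numPaths   = 3                    // three connections, as in the figures
	serverAddr = "merge-server:9000" // hypothetical receiving-end address
)

// pack serializes one block using the splitting protocol (resource ID,
// payload protocol, sequence number, length, payload); see the header
// sketch earlier in this document. Here it is left as a stub.
func pack(resourceID string, seq uint32, payload []byte) []byte {
	return payload // placeholder: a real implementation prepends the 16-byte header
}

func main() {
	// Establish the multi-path connection with the receiving end.
	conns := make([]net.Conn, numPaths)
	for i := range conns {
		c, err := net.Dial("tcp", serverAddr)
		if err != nil {
			log.Fatal(err)
		}
		conns[i] = c
	}

	stream := make([]byte, 10*blockSize) // stands in for the captured stream data

	// Split the stream into blocks, assign increasing sequence numbers,
	// and send the packed blocks round-robin across the connections.
	seq := uint32(0)
	for off := 0; off < len(stream); off += blockSize {
		end := off + blockSize
		if end > len(stream) {
			end = len(stream)
		}
		packed := pack("res0001", seq, stream[off:end])
		conn := conns[int(seq)%numPaths]
		if _, err := conn.Write(packed); err != nil {
			log.Fatal(err)
		}
		seq++
	}
}
```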

In order to show the information interaction between the device and the server during data transmission and reception, the data merging method and the data splitting method will be further described below with reference to FIG. 5 by way of an example.

In this example, a 4K/8K video-on-demand scenario is taken as an example. In this scenario, a user wants to watch a certain live video, and the operations performed by the device side and the server side are as follows:

the device side collects data;

the device side splits the stream data according to a certain policy, divides the split stream data into multiple paths, such as Stream1, Stream2 and Stream3, and pushes them to the server side;

the device side supports packing the collected video stream data according to the agreed protocol and uploading it to the server side over multiple paths (three or more; FIG. 5 takes three paths as an example). This can include the following steps: establishing a multi-path TCP connection with the merge server; cutting the stream data into data blocks, specifying a unique ID for each data block (one example of the above sequence number), and setting the necessary parameters; packing the data blocks according to the agreed protocol and sending them to the server;

the back-end server allocates an independent block-service processing object according to the resource ID in each data block; this object is responsible for collecting the pushed data blocks with that resource ID. As shown in FIG. 5, the data sets corresponding to service handles 1 to N (an example of resource IDs) are processed by N different coroutines, respectively;

the service processing object also needs to initialize some internal work, such as starting a coroutine group to monitor the receiving channel and starting an independent coroutine to monitor the heap-array data. The receiving channel is shown in FIG. 6; the data blocks in this channel are out of order and large in volume;

for each pushed video stream, the merge server has a resident background thread responsible for monitoring whether data has been stored in the heap array; if so, a linear ordering check is started.

During ordering, m is counted from 0; each time a data block is obtained, the value of m is increased by 1 and the data block is placed into the exit channel;

As shown in FIG. 7, after the service coroutine reads a data block from the receiving channel, it checks whether the sequence number of the data block equals the value of m+1. If so, the data block is put directly into the exit channel; if not, the sequence number of the data block is taken modulo the length of the heap array, and the remainder is used as the array index at which the block data is placed.

The exit channel is a queue; when data exists in the queue, a thread is started to push the data in the queue to the remote server in order. The data in the exit channel consists of sequenced block data, as shown in FIG. 8.
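A minimal Go sketch of such an exit-channel drain, assuming the blocks arriving on the channel are already in sequence order; the push function is a placeholder for the actual transfer to the remote server.

```go
package main

import "log"

// Block is one already-ordered data block taken from the exit channel.
type Block struct {
	Seq     uint64
	Payload []byte
}

// push is assumed to forward one block to the remote server; the real
// transfer mechanism is outside the scope of this sketch.
func push(b *Block) error {
	log.Printf("pushing block %d (%d bytes)", b.Seq, len(b.Payload))
	return nil
}

// drainExitChannel forwards blocks one after another; they arrive here
// already in sequence order, so no reordering is needed.
func drainExitChannel(out <-chan *Block) {
	for b := range out {
		if err := push(b); err != nil {
			log.Printf("push of block %d failed: %v", b.Seq, err)
		}
	}
}

func main() {
	out := make(chan *Block, 4)
	for seq := uint64(1); seq <= 3; seq++ {
		out <- &Block{Seq: seq, Payload: []byte("data")}
	}
	close(out)
	drainExitChannel(out) // drain the already-ordered blocks
}
```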

If packet loss occurs, that is, a certain data block cannot be found, the maximum time the service coroutine needs to wait may be 1 second.

It should be noted that the structures shown in FIGS. 6-8 require supporting business processing; for example, data put into the entry channel needs to be validated, and the block data in the set handled by the independent coroutine needs to be processed by a business coroutine.

FIG. 9 is a block diagram illustrating a data merging apparatus according to one or more embodiments of the present disclosure. As shown in FIG. 9, the apparatus 90 includes:

an obtaining module 92, configured to obtain stream data that is split into a plurality of data blocks, where each data block is assigned a sequence number;

a parsing module 94, configured to parse each data block to obtain its sequence number;

a determining module 96, configured to determine a target position of the data block in the array according to the sequence number and the length of the array used for caching the stream data;

and a merging module 98, configured to store the data block in the target position, so as to merge the obtained data blocks.

FIG. 10 is a block diagram illustrating a data splitting apparatus according to one or more embodiments of the present specification. As shown in FIG. 10, the apparatus 100 includes:

a splitting module 103, configured to split stream data to be sent to a receiving end into multiple data blocks;

a connection module 105, configured to establish a multi-path connection with the receiving end;

a sending module 107, configured to pack the plurality of data blocks and send them to the receiving end over the multi-path connection, where each packed data block is assigned a sequence number.

One or more embodiments of the present specification provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing any one of the data merging methods or the data splitting methods described above when executing the program.

It should be noted that the method of one or more embodiments of the present disclosure may be performed by a single device, such as a computer or server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may perform only one or more steps of the method of one or more embodiments of the present disclosure, and the devices may interact with each other to complete the method.

The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.

For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the modules may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.

The apparatus of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.

Fig. 11 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.

The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.

The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.

The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.

The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).

Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.

It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.

Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

Those of ordinary skill in the art will understand that the discussion of any embodiment above is meant to be exemplary only and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the spirit of the present disclosure, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of different aspects of one or more embodiments of the present specification as described above, which are not provided in detail for the sake of brevity.

In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion, and so as not to obscure one or more embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the understanding of one or more embodiments of the present description, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the one or more embodiments of the present description are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that one or more embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.

While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.

It is intended that the one or more embodiments of the present specification embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of one or more embodiments of the present disclosure are intended to be included within the scope of the present disclosure.
