Data processing method and computing device

Document No.: 169037, published: 2021-10-29

Reading note: This technology, "Data processing method and computing device," was designed and created by Peng Kun on 2020-06-12. Its main content is as follows: The application provides a data processing method and computing device, wherein the method comprises: a plurality of Trusted Applications (TAs) in a Trusted Execution Environment (TEE) respectively acquire a plurality of data fragments of a first user, wherein the data fragments form the data of the first user; each TA in the plurality of TAs processes the data fragment it acquired to obtain a respective result; and the plurality of TAs respectively feed back their respective results, which are used to determine the result corresponding to the data of the first user. The technical solution of the application can further ensure the secure isolation of data in the TEE.

1. A method of data processing, comprising:

a plurality of Trusted Applications (TA) in a Trusted Execution Environment (TEE) respectively acquire a plurality of data fragments of a first user, wherein the data fragments form data of the first user;

each TA in the plurality of TAs processes the data fragment acquired by the TA to obtain a respective result;

and the plurality of TAs respectively feed back respective results, and the results fed back by the plurality of TAs are used for determining the result corresponding to the data of the first user.

2. The method according to claim 1, wherein the obtaining of the plurality of data fragments by the plurality of trusted applications TA in the trusted execution environment TEE comprises:

the TAs respectively obtain respective ciphertexts through encryption channels;

and the TAs decrypt the respective ciphertexts respectively to obtain respective data segments.

3. The method according to claim 1 or 2, wherein the plurality of TAs respectively feed back respective results, comprising:

and the TAs respectively feed back the respective encryption results to the client of the first user through an encryption channel.

4. The method according to any of claims 1-3, wherein the plurality of TAs respectively feed back respective results, comprising:

the plurality of TAs respectively feed back respective results to the first TA;

and the first TA determines a result corresponding to the data of the first user according to the results fed back by the plurality of TAs.

5. The method of claim 4, wherein the plurality of TAs respectively feed back respective results to the first TA, comprising:

when the plurality of TAs includes the first TA, other TAs of the plurality of TAs, except the first TA, feed back respective results to the first TA through a TEE channel.

6. The method according to any one of claims 1 to 5, further comprising:

changing the identification IDs of the plurality of TAs.

7. The method of claim 6, further comprising:

simultaneously changing the identification IDs of the plurality of TAs for processing the plurality of data pieces of the first user and the identification IDs of the plurality of TAs for processing the plurality of data pieces of the second user.

8. A system for data processing, comprising a client and a plurality of trusted applications (TAs) in a trusted execution environment (TEE), wherein:

the plurality of TAs respectively acquire a plurality of data fragments of the first user from the client, wherein the plurality of data fragments form data of the first user;

each TA in the plurality of TAs processes the data fragment acquired by the TA to obtain a respective result;

and the plurality of TAs respectively feed back respective results, and the results fed back by the plurality of TAs are used for determining the result corresponding to the data of the first user.

9. The system of claim 8, wherein the TAs are specifically configured to:

respectively acquiring respective ciphertexts from the client through encryption channels;

and respectively decrypting the respective ciphertexts to obtain respective data segments.

10. The system according to claim 8 or 9, wherein the plurality of TAs are specifically configured to:

respectively feeding back the respective encrypted results to the client through an encryption channel;

the client is specifically configured to: and determining a result corresponding to the data of the first user according to the decrypted result of each TA.

11. The system according to any of claims 8 to 10, wherein the plurality of TAs are specifically configured to: feeding back respective results to the first TA respectively;

the first TA is used for determining a result corresponding to the data of the first user according to the result fed back by the plurality of TAs.

12. The system of claim 11, wherein when the plurality of TAs includes the first TA, other ones of the plurality of TAs other than the first TA feed back respective results to the first TA via a TEE channel.

13. The system according to any one of claims 8 to 12, further comprising:

an operating system for changing the identification IDs of the plurality of TAs.

14. The system of claim 13, wherein the operating system is further configured to:

simultaneously changing the identification IDs of the plurality of TAs for processing the plurality of data pieces of the first user and the identification IDs of the plurality of TAs for processing the plurality of data pieces of the second user.

15. The system of any one of claims 8 to 14, wherein the client is further configured to split the data of the first user into the plurality of data segments.

16. A trusted application TA, wherein the TA is one of a plurality of trusted application TAs in a trusted execution environment TEE, the TA comprising:

an acquisition module, a processing module, and a feedback module, wherein the acquisition module is used for acquiring one data fragment of a plurality of data fragments of a first user, and the plurality of data fragments form the data of the first user;

the processing module is used for processing the acquired data fragment to obtain a result;

and the feedback module is used for feeding back the result, and the result is used for determining the result corresponding to the data of the first user.

17. The TA of claim 16, wherein the acquisition module is specifically configured to:

obtaining a ciphertext through an encryption channel;

the processing module is further configured to: and decrypting the ciphertext to obtain the data fragment.

18. A TA as claimed in claim 16 or 17, wherein the feedback module is specifically configured to:

and feeding back the encrypted result to the client of the first user through an encryption channel.

19. A TA as claimed in any one of claims 16 to 18, wherein the feedback module is specifically configured to:

and feeding back the result to a first TA, wherein the first TA is used for determining a result corresponding to the data of the first user according to the result fed back by the plurality of TAs.

20. A computing device comprising a processor and a memory; the processor executes instructions in the memory to cause the computing device to perform the method of any of claims 1-7.

21. A computing device comprising a processor and a memory; the processor executes the instructions in the memory to cause the computing device to deploy the TA of any of claims 16-19.

22. A computer-readable storage medium comprising instructions; the instructions are for implementing the method of any one of claims 1 to 7.

23. A computer-readable storage medium comprising instructions; the instructions are for implementing a TA according to any of claims 16 to 19.

Technical Field

The present application relates to the field of data processing technologies, and in particular, to a data processing method and a computing device.

Background

A system is deployed on a mobile device, and the system includes a Rich Execution Environment (REE) and a Trusted Execution Environment (TEE). A TEE is a separate, isolated, secure operating environment within the mobile device (e.g., smartphone, tablet, smart tv, etc.) that is logically isolated from a REE. The TEE provides a more secure space for the execution of code and data and ensures confidentiality and security of the user's sensitive information (e.g., the user's code and data).

To prevent different TAs from freely reading and accessing each other's data, the related technical solution encrypts each TA's data in the memory. On the one hand, encrypting different TAs' data in the memory requires the cooperation of multiple software and hardware components, which increases implementation complexity and reduces the utilization of the memory and the central processing unit (CPU). On the other hand, even if the memory is encrypted, the data is restored to plaintext when it is loaded into the CPU to participate in computation, so a security risk remains: the data isolation of the TEE in the above related technical solutions is not completely reliable, and the plaintext data is leaked once the isolation fails.

Therefore, how to further ensure the security isolation of data in the TEE becomes a problem which needs to be solved urgently.

Disclosure of Invention

The application provides a data processing method and computing device, which can further ensure the secure isolation of data in a TEE.

In a first aspect, a method for data processing is provided, including: a plurality of Trusted Applications (TA) in a Trusted Execution Environment (TEE) respectively acquire a plurality of data fragments of a first user, wherein the data fragments form data of the first user; each TA in the plurality of TAs processes the data fragment acquired by the TA to obtain a respective result; and the plurality of TAs respectively feed back respective results, and the results fed back by the plurality of TAs are used for determining the result corresponding to the data of the first user.

In this technical solution, the user randomly divides the task data into multiple fragments, actively shares the fragments with multiple TAs, and runs the multiple TAs in the TEE environment. After distributed computation, the TAs feed back their respective results so that the result corresponding to the user's data can be combined from them. Even if an attacker breaks the TEE's protection and steals one TA's data from the memory, the attacker obtains only one random fragment of the user's task data and cannot recover the task data itself. Therefore, on the one hand, by processing fragmented plaintext data in the TEE, the data in the memory remains protected even after the memory of a particular TA is exposed, which strengthens privacy protection while remaining simple and efficient. On the other hand, the distributed computation performed by the multiple TAs can also improve computation efficiency.
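The fragment-and-combine idea can be sketched for a linear task (a simplifying assumption; the patent does not fix the computation): the client splits the data into random additive fragments, each TA processes its fragment independently, and summing the per-TA results recovers the result on the full data. All function names and the 64-bit modulus below are illustrative.

```python
import secrets

MOD = 2**64

def split_into_fragments(value: int, n: int) -> list[int]:
    # Split `value` into n uniformly random additive fragments (mod MOD);
    # any n-1 fragments together reveal nothing about `value`.
    fragments = [secrets.randbelow(MOD) for _ in range(n - 1)]
    fragments.append((value - sum(fragments)) % MOD)
    return fragments

def ta_process(fragment: int, weight: int) -> int:
    # Each TA applies the (linear) task to its own fragment in isolation.
    return (fragment * weight) % MOD

def combine(results: list[int]) -> int:
    # Combining the per-TA results yields the result on the original data.
    return sum(results) % MOD

data, weight = 1234, 3
frags = split_into_fragments(data, n=4)
partials = [ta_process(f, weight) for f in frags]
assert combine(partials) == (data * weight) % MOD
```

Non-linear tasks would need a richer sharing scheme (such as the threshold secret sharing described in the terminology section), which is why this sketch restricts itself to a linear example.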

In a possible implementation, TA1 in the TEE acquires a first data fragment of the first user and processes it to obtain a first result; TA2 in the TEE acquires a second data fragment of the first user and processes it to obtain a second result; TA1 and TA2 then feed back the first result and the second result respectively, which are used to determine the result corresponding to the data of the first user.

In another possible implementation manner, the multiple TAs respectively obtain respective ciphertexts through encryption channels; and the TAs decrypt the respective ciphertexts respectively to obtain respective data segments.

In this technical solution, the user encrypts the data through the client before importing it into the TEE environment, which strengthens data security and the privacy protection effect.
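A minimal sketch of the encrypted channel between the client and a TA, using a SHA-256-based XOR keystream purely for illustration (an assumption; a real deployment would negotiate an authenticated cipher such as AES-GCM with the TEE, which this sketch does not implement):

```python
import hashlib
import secrets

def derive_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Expand key + nonce into a keystream with SHA-256 in counter mode.
    # Illustration only: no authentication, so tampering goes undetected.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    # Client side: encrypt one data fragment under a fresh nonce.
    nonce = secrets.token_bytes(16)
    ks = derive_keystream(key, nonce, len(plaintext))
    return nonce, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # TA side: recover the fragment from the received ciphertext.
    ks = derive_keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = secrets.token_bytes(32)  # per-TA session key (hypothetical)
nonce, ct = encrypt(key, b"fragment-1 of user data")
assert decrypt(key, nonce, ct) == b"fragment-1 of user data"
```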

In another possible implementation manner, the TAs respectively feed back the respective encryption results to the client of the first user through an encryption channel.

In the above technical solution, the multiple TAs may feed back their respective encrypted results to the user's client; the client decrypts the TAs' encrypted results and obtains the result corresponding to the first user's data from the decrypted results. In this way, data security and the privacy protection effect are strengthened while the results are fed back from the multiple TAs to the user.

In another possible implementation manner, the multiple TAs respectively feed back respective results to the first TA; and the first TA determines a result corresponding to the data of the first user according to the results fed back by the plurality of TAs.

In the above technical solution, if the computing capability or security of a user's client is limited, the client's task may be completed by one TA in the TEE environment. Because the TEE environment is relatively secure, determining the result corresponding to the first user's data inside the TEE improves data security.

In another possible implementation manner, when the plurality of TAs includes the first TA, other TAs of the plurality of TAs except the first TA feed back respective results to the first TA through a TEE channel.

In another possible implementation manner, when the plurality of TAs includes the first TA, the first TA locally obtains a result of the first TA.

In another possible implementation manner, the method further includes: changing the identification IDs of the plurality of TAs.

In the above technical solution, an obfuscation mechanism is added: the IDs of a user's multiple TAs can be changed. Even when all of a user's TA memories are broken and all of the user's data fragments are leaked, this additional layer of protection prevents an attacker from determining which TAs belong to the same user, thereby further increasing data security.

In another possible implementation manner, the method further includes: simultaneously changing the identification IDs of the plurality of TAs for processing the plurality of data pieces of the first user and the identification IDs of the plurality of TAs for processing the plurality of data pieces of the second user.

In the above technical solution, an obfuscation mechanism is added: the IDs of multiple TAs of multiple users can be changed at the same time. Even when all of one user's TA memories are broken and all of that user's data fragments are leaked, this additional layer of protection prevents an attacker from determining which TAs belong to the same user, thereby further increasing data security.
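The simultaneous ID change might look like the following sketch, in which fresh random IDs are assigned to the combined pool of both users' TAs so that an outside observer cannot link any renamed TA to a user; the function name and ID format are hypothetical:

```python
import secrets

def remix_ids(user_a_tas, user_b_tas):
    # Assign fresh, distinct random IDs to both users' TAs at once.
    # Only the trusted OS retains the old-to-new mapping needed to
    # associate a TA with its user.
    pool = list(user_a_tas) + list(user_b_tas)
    fresh = []
    while len(fresh) < len(pool):
        candidate = f"ta-{secrets.token_hex(4)}"
        if candidate not in fresh:
            fresh.append(candidate)
    return dict(zip(pool, fresh))

mapping = remix_ids(["u1-ta1", "u1-ta2"], ["u2-ta1", "u2-ta2"])
assert len(set(mapping.values())) == 4  # every TA got a distinct new ID
```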

In a second aspect, a system for data processing is provided, comprising a client and a plurality of trusted applications (TAs) in a trusted execution environment (TEE), wherein:

the plurality of TAs respectively acquire a plurality of data fragments of the first user from the client, wherein the plurality of data fragments form data of the first user;

each TA in the plurality of TAs processes the data fragment acquired by the TA to obtain a respective result;

and the plurality of TAs respectively feed back respective results, and the results fed back by the plurality of TAs are used for determining the result corresponding to the data of the first user.

In one possible implementation, the TAs are specifically configured to: respectively acquiring respective ciphertexts from the client through encryption channels; and respectively decrypting the respective ciphertexts to obtain respective data segments.

In another possible implementation manner, the plurality of TAs are specifically configured to: respectively feeding back the respective encrypted results to the client through an encryption channel; the client is specifically configured to: and determining a result corresponding to the data of the first user according to the decrypted result of each TA.

In another possible implementation manner, the plurality of TAs are specifically configured to: feeding back respective results to the first TA respectively; the first TA is used for determining a result corresponding to the data of the first user according to the result fed back by the plurality of TAs.

In another possible implementation manner, when the plurality of TAs includes the first TA, other TAs of the plurality of TAs except the first TA feed back respective results to the first TA through a TEE channel.

In another possible implementation manner, the system further includes: an operating system for changing the identification IDs of the plurality of TAs.

In another possible implementation, the operating system is further configured to: simultaneously changing the identification IDs of the plurality of TAs for processing the plurality of data pieces of the first user and the identification IDs of the plurality of TAs for processing the plurality of data pieces of the second user.

In another possible implementation manner, the client is further configured to split the data of the first user into the plurality of data fragments.

The extensions, definitions, explanations and effects of the related contents in the above-described first aspect also apply to the same contents in the second aspect.

In a third aspect, a trusted application (TA) is provided, where the TA is one of a plurality of trusted applications in a trusted execution environment (TEE), and the TA includes an acquisition module, a processing module, and a feedback module, wherein:

the acquisition module is used for acquiring one data fragment of a plurality of data fragments of a first user, and the plurality of data fragments form the data of the first user;

the processing module is used for processing the acquired data fragment to obtain a result;

and the feedback module is used for feeding back a result, and the result is used for determining a result corresponding to the data of the first user.

In a possible implementation manner, the obtaining module is specifically configured to: obtaining a ciphertext through an encryption channel;

the processing module is further configured to: and decrypting the ciphertext to obtain the data fragment.

In another possible implementation manner, the feedback module is specifically configured to: and feeding back the encrypted result to the client of the first user through an encryption channel.

In another possible implementation manner, the feedback module is specifically configured to: and feeding back the result to a first TA, wherein the first TA is used for determining a result corresponding to the data of the first user according to the result fed back by the plurality of TAs.

The extensions, definitions, explanations and effects on the related contents in the above-described first aspect also apply to the same contents in the third aspect.

In a fourth aspect, a computing device is provided, comprising: a processor and a memory, the processor executing the instructions in the memory to cause the computing device to perform the method steps as performed in the first aspect or any one of the possible implementations of the first aspect.

In a fifth aspect, a computing device is provided that includes a processor and a memory; the processor executes the instructions in the memory to cause the computing device to deploy the TA in any one of the possible implementations of the third aspect or the third aspect as described above.

In a sixth aspect, a computer-readable storage medium is provided that includes instructions; the instructions are for implementing the method steps as performed in the first aspect or any one of the possible implementations of the first aspect.

In a seventh aspect, a computer-readable storage medium is provided, comprising instructions; the instructions are configured to implement the TA in the third aspect or any one of the possible implementations of the third aspect.

Optionally, as an implementation manner, the storage medium may be specifically a nonvolatile storage medium.

An eighth aspect provides a chip, where the chip acquires an instruction and executes the instruction to implement the method for data processing in any one of the implementations of the first aspect and the first aspect.

Optionally, as an implementation manner, the chip includes a processor and a data interface, and the processor reads instructions stored in a memory through the data interface to execute the method for data processing in any one of the implementations of the first aspect and the first aspect.

Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored on the memory, and when the instructions are executed, the processor is configured to execute the method for data processing in any one of the implementations of the first aspect and the first aspect.

A ninth aspect provides a chip, which obtains an instruction and executes the instruction to implement TA in any one of the possible implementations of the third aspect or the third aspect.

Optionally, as an implementation manner, the chip includes a processor and a data interface, and the processor reads instructions stored in a memory through the data interface to implement the TA in the third aspect or any one of the possible implementation manners of the third aspect.

Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored on the memory, and when the instructions are executed, the processor is configured to implement the third aspect or the TA in any possible implementation manner of the third aspect.

Drawings

Fig. 1 is a schematic diagram of a possible system architecture suitable for use in embodiments of the present application.

Fig. 2 is a schematic architecture diagram of a computing device 200 according to an embodiment of the present application.

Fig. 3 is a schematic flow chart of a method for data processing according to an embodiment of the present application.

Fig. 4 is a schematic flow chart of another data processing method provided in the embodiment of the present application.

Fig. 5 is a schematic block diagram of a data splitting process provided in an embodiment of the present application.

Fig. 6 is a schematic block diagram illustrating that a plurality of TAs respectively report an intermediate calculation result according to an embodiment of the present application.

Fig. 7 is a schematic block diagram of communication among multiple sub-TAs according to an embodiment of the present application.

Fig. 8 is a schematic block diagram of a TA 800 provided by an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In the embodiments of the present application, "first", "second", "third", "fourth", and the like are only intended to refer to different objects, and do not indicate other limitations on the objects referred to.

Since the embodiments of the present application relate to a large number of terms in the art, the following description will first describe terms and concepts related to the embodiments of the present application for easy understanding.

1. Trusted Execution Environment (TEE)

A system deployed on a mobile device includes a rich execution environment (REE) and a trusted execution environment (TEE). A TEE is a separate, isolated, secure operating environment within the mobile device (e.g., smartphone, tablet, smart TV) that is logically isolated from the REE. The TEE provides an environment isolated from the REE for storing the user's sensitive information (e.g., the user's code and data); the TEE can directly acquire information from the REE, while the REE cannot acquire information from the TEE. The TEE thereby provides a more secure space for executing code and data and guarantees the confidentiality and security of the user's sensitive information.

Optionally, the mobile device may be a desktop, a notebook, a mobile phone, a tablet computer, a smart watch, a smart bracelet, or the like, which is not specifically limited in this application.

An application running on a TEE is called a Trusted Application (TA), and the TEE divides an independent trusted zone in a processor (e.g., a Central Processing Unit (CPU)) and a memory, and isolates different TAs from each other in the processor and the memory, thereby preventing the different TAs from freely reading and accessing data from each other. The TA running in the TEE has access to all functions of the device main processor and memory, while the hardware isolation protects these components from user-installed applications running in the main operating system.

2. Software Guard Extensions (SGX)

SGX is an extension of the Intel architecture that adds a new set of instructions and memory access mechanisms to the original architecture, allowing applications to implement a container called an enclave. To prevent different TAs from freely reading and accessing each other's data, an independent protected area can be divided in the application's address space by the Basic Input/Output System (BIOS) and used as the enclave. Data in the enclave is encrypted and cannot be viewed by either the kernel or the hypervisor. SGX can therefore partition and isolate different TAs' data in the memory and encrypt the data inside the enclave. In this way, a developer can place an application into an enclave, a protected executable region of memory, improving security even on a compromised platform. With this application-layer trusted execution environment, developers can enable identity and record privacy, secure browsing, digital rights management (DRM) protection, or any high-security application scenario that requires secrets or data to be stored securely.

The SGX is implemented by software and hardware such as a processor, a memory management unit, a BIOS, a driver, and a runtime environment.

3. Enclave (enclave)

The secure operation of legitimate software is encapsulated in an enclave, protecting the software from attack by malicious software; neither privileged nor unprivileged software can access the enclave. That is, once software and data are located in an enclave, even a virtual machine monitor (VMM) cannot affect the code and data inside it.

4. Secure multi-party computation (SMC)

SMC, a subfield of cryptography, allows multiple mutually distrusting data owners to perform cooperative computations and output the computation results, while ensuring that no one can obtain any information other than the corresponding computation result.

SMC solves the problem of privacy-preserving collaborative computation among a group of mutually distrusting parties: it must guarantee properties such as independence of inputs, correctness of the computation, and decentralization, while revealing none of the input values to the other members participating in the computation. It mainly addresses how to securely compute an agreed function without a trusted third party, while requiring that each participant learn nothing about the other entities' inputs beyond the computation result.

Multiple participants in SMC each hold private data and need to compute a function of that data (e.g., the comparison function in the millionaires' problem) without revealing their respective inputs, so the computation is performed in a distributed manner. Each multi-party computation (MPC) node locally completes data extraction and calculation according to the computation logic and routes its output to a designated node, so that multiple nodes complete the collaborative computation task and output a unique result. Throughout the process, each party's data is computed locally and the raw data is never provided to other nodes; only intermediate computation results are fed back to the overall computation task system, under the guarantee of data privacy, so that all parties obtain the correct final result.
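The local-compute-then-route pattern can be illustrated with a classic secure-sum sketch: each node masks its private value with pairwise random masks that cancel only when all masked values are combined at the designated node. This is a generic MPC example, not the specific protocol of this application:

```python
import secrets

MOD = 2**64

def secure_sum(private_values):
    # masks[i][j] is the random mask party i agrees with party j;
    # masks[i][j] + masks[j][i] == 0 (mod MOD), so all masks cancel
    # in the aggregate while each masked value alone reveals nothing.
    n = len(private_values)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = secrets.randbelow(MOD)
            masks[i][j] = m
            masks[j][i] = MOD - m
    # Each party computes locally and routes only its masked value.
    masked = [(private_values[i] + sum(masks[i])) % MOD for i in range(n)]
    # The designated node combines the routed values into the unique result.
    return sum(masked) % MOD

assert secure_sum([10, 20, 30]) == 60
```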

5. Threshold Secret Sharing (TSS)

The purpose of TSS is to decentralize trust away from a single principal (the secret holder), reducing the risk of secret misuse and leakage while strengthening the fault tolerance of the secret. A t-out-of-n threshold is typically used: a secret is shared among n holders, any t of whom can recover the secret, while fewer than t holders gain no knowledge of it.
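A concrete instance of the t-out-of-n idea is Shamir secret sharing: the secret is the constant term of a random degree-(t-1) polynomial over a prime field, shares are evaluations of that polynomial, and any t shares recover the secret by Lagrange interpolation. The parameters below are demo-sized choices, not values prescribed by this application:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def make_shares(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0; needs exactly t distinct shares.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of 5 suffice
assert recover(shares[2:5]) == 123456789
```

Fewer than t shares are consistent with every possible secret, which is the "no knowledge" property stated above.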

6. Task

In this application, anything a computer needs to carry out may be referred to as a task, for example a process, a thread, a child thread, a client application (CA), a trusted application (TA), a service, and so on.

A system architecture suitable for use in embodiments of the present application is described in detail below with reference to fig. 1.

Fig. 1 is a schematic diagram of a possible system architecture suitable for use in embodiments of the present application.

As shown in fig. 1, at the hardware level the ARM architecture introduces security extensions that divide the hardware resources and software resources of a system on chip into a secure world and a non-secure world.

All operations that need to be kept secret (e.g., fingerprint recognition, password processing, data encryption/decryption, and security authentication) are executed in the secure world; the remaining operations (e.g., the user operating system and various applications) are executed in the non-secure world. Switching between the secure world and the non-secure world is performed by a module called the monitor.

When the CPU runs in the secure world, it can access all hardware resources; when it runs in the normal world, it can access only the normal world's resources.

Alternatively, the non-secure world may also be referred to as a normal world.

In the non-secure world, the normal operating system (normal OS) may include, but is not limited to, Linux and Android; the normal OS may also be referred to as a rich operating system (rich OS). Applications running on the normal OS are called normal applications (normal apps).

In the secure world, a secure operating system (secure OS) may include, but is not limited to, an open-source portable trusted execution environment operating system (OP-TEE OS), and the like. The secure world is a trusted execution environment (TEE) that guarantees computations are not disturbed by the conventional operating system, and is therefore called "trusted". The TEE is an independent execution environment running in parallel with the rich OS and provides security services to the rich OS environment. The TEE accesses hardware and software security resources independently of the rich OS and its applications. An application running on the secure OS is called a TA.

The core technology of secure computing and trusted computing is data isolation, i.e., protecting a user's data from being accessed by other entities (including supervisors such as administrators). Although full-ciphertext computation can also protect data privacy through whole-process, whole-environment encryption, it has the drawback of high implementation cost. Data isolation therefore remains a more comprehensive and practical means of data protection.

In the TEE, in order to prevent different TAs from arbitrarily reading and accessing each other's data, the data of different TAs in the memory needs to be encrypted. On the one hand, encrypting the data of different TAs in the memory requires the cooperation of multiple software and hardware components, which increases implementation complexity and reduces the utilization of the memory and the CPU. On the other hand, even if the memory is encrypted, for example in software guard extensions (SGX), data is restored to plaintext when it is loaded into the CPU to participate in a calculation, so a security risk remains: the data isolation of the TEE is not completely reliable, and plaintext data is leaked once the isolation fails.

In view of this, the present application provides a data processing method, which further prevents a TA from arbitrarily reading and accessing the data of other TAs in the TEE, and can reduce implementation complexity and improve the utilization of the memory and the CPU while ensuring secure data isolation.

For example, in the system architecture shown in fig. 1, the CPU, in cooperation with other control mechanisms (e.g., the secure OS and a hypervisor), provides multi-TA support for a task. That is, multiple TAs (e.g., TA1, TA2, TA3) running on the secure OS compute distributively for one task or one user. Specifically, the subtasks are implemented by the plurality of TAs in the TEE with the necessary support of the CPU, the secure OS, and sometimes the hypervisor. As shown in the architecture of fig. 1, the distributed plaintext computation of each data segment by multiple TAs in the TEE can be regarded as a functional extension of ARM security.

The data processing method provided by the embodiments of the present application can be applied to a computing device, which may also be referred to as a computer system and comprises a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer. The hardware layer includes hardware such as a processing unit, a memory, and a memory control unit; the function and structure of this hardware are described in detail later. The operating system is any one or more computer operating systems that implement business processing through processes (processes), such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer comprises application programs such as a browser, an address book, word processing software, and instant messaging software. The computer system may be a handheld device such as a smartphone, or a terminal device such as a personal computer; the present application is not particularly limited, as long as the method provided in the embodiments of the present application can be used. The execution body of the data processing method in the embodiments of the present application may be the computer system, or a functional module in the computer system capable of calling and executing a program.

A computing device provided by the embodiment of the present application is described in detail below with reference to fig. 2.

Fig. 2 is a schematic architecture diagram of a computing device 200 according to an embodiment of the present application. The computing device 200 may be a server or a computer or other device with computing capabilities. The computing device 200 shown in FIG. 2 includes: at least one processor 110 and memory 120.

The processor 110 executes instructions in the memory 120 to cause the computing device 200 to implement the data processing methods provided herein, such as implementing the steps performed by the TA. Alternatively, the processor 110 executes instructions in the memory 120, so that the computing device 200 implements the TA provided by the present application, for example, implements each functional module included in the TA.

Optionally, the computing device 200 further comprises a system bus, and the processor 110 and the memory 120 are each connected to the system bus. The processor 110 can access the memory 120 through the system bus; for example, the processor 110 can read and write data or execute code in the memory 120 through the system bus. The system bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 2, but this does not mean that there is only one bus or one type of bus.

In one possible implementation, the functions of the processor 110 are mainly to interpret instructions (or codes) of a computer program and to process data in computer software. Wherein the instructions of the computer program and the data in the computer software can be stored in the memory 120 or the cache 116.

Alternatively, the processor 110 may be an integrated circuit chip having signal processing capabilities. By way of example, and not limitation, the processor 110 may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and the general purpose processor may be a microprocessor or the like. For example, the processor 110 is a central processing unit (CPU).

Optionally, each processor 110 includes at least one processing unit 112 and a memory control unit 114.

Alternatively, the processing unit 112, also referred to as a core or kernel, is the most important component of the processor. The processing unit 112 is manufactured from monocrystalline silicon in a certain production process, and all of the processor's calculations, command receiving, command storing, and data processing are executed by the core. The processing units each run program instructions independently, and the parallel computing capability is used to speed up program execution. Each processing unit has a fixed logical structure; for example, a processing unit includes logical units such as a level-one cache, a level-two cache, an execution unit, an instruction-level unit, and a bus interface.

In one implementation, the memory control unit 114 is used to control data interaction between the memory 120 and the processing unit 112. Specifically, the memory control unit 114 receives a memory access request from the processing unit 112 and controls access to the memory based on the memory access request. By way of example, and not limitation, the memory control unit is a Memory Management Unit (MMU) or the like.

In one implementation example, each memory control unit 114 addresses memory 120 via a system bus. And an arbiter (not shown) is configured in the system bus and is responsible for handling and coordinating competing accesses of the plurality of processing units 112.

In an implementation example, the processing unit 112 and the memory control unit 114 are communicatively connected through a connection line inside the chip, such as an address line, so as to implement communication between the processing unit 112 and the memory control unit 114.

Optionally, each processor 110 further includes a cache 116, which is a buffer for data exchange. When the processing unit 112 needs to read data, it first searches the cache for the needed data; if the data is found, it is used directly, and if not, the data is fetched from the memory. Since the cache operates much faster than the memory, the role of the cache is to help the processing unit 112 run faster.

The memory (memory) 120 can provide running space for a process in the computing device 200; for example, the memory 120 stores the computer program (specifically, the code of the program) that generates the process. After the computer program is run by the processor to generate the process, the processor allocates a corresponding memory space for the process in the memory 120. Further, the storage space includes a text segment, an initialized data segment, an uninitialized data segment, a stack segment, and so on. The memory 120 stores data generated during the running of the process, such as intermediate data and process data, in the memory space corresponding to the process.

Alternatively, the memory is also referred to as an internal memory, and functions to temporarily store operation data in the processor 110 and data exchanged with an external memory such as a hard disk. As long as the computer is running, the processor 110 will call the data to be operated into the memory for operation, and the processing unit 112 will send out the result after the operation is completed.

By way of example, and not limitation, the memory 120 is either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. Volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory 120 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of storage.

The above-mentioned structure of the computing device 200 is only an example, and the present application is not limited thereto, and the computing device 200 of the embodiment of the present application includes various hardware in computer systems in the related art, for example, the computing device 200 further includes other storage, such as a disk storage, besides the memory 120. Those skilled in the art will appreciate that the computing device 200 may also include other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the computing device 200 described above may also include hardware components that implement other additional functionality, according to particular needs. Furthermore, those skilled in the art will appreciate that the computing device 200 described above may also include only those elements necessary to implement embodiments of the present application, and need not include all of the elements shown in FIG. 2.

The following describes the data processing method provided in the embodiment of the present application in detail with reference to fig. 3. The method may be performed by the computing device 200 shown in fig. 2 to implement the method of data processing provided herein, for example, to implement the steps performed by the TA and the client.

Fig. 3 is a schematic flow chart of a method for data processing according to an embodiment of the present application. As shown in FIG. 3, the method may include steps 310 to 330, which are described in detail below.

Step 310: a plurality of trusted applications TA in the trusted execution environment TEE respectively obtain a plurality of data fragments of the first user from the client.

The plurality of TAs operating in the TEE may respectively obtain a plurality of data fragments of the first user from the client, and the plurality of data fragments may constitute data of the first user.

As an example, the client may split the data of the first user into a plurality of random data segments, and transmit the data segments to a plurality of TAs running in the TEE, respectively.
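As a sketch of such splitting, the following example uses additive secret sharing, under which any subset of fewer than all fragments is uniformly random and reveals nothing about the original data; the modulus is an illustrative choice and is not mandated by the embodiments.

```python
import secrets

MOD = 2 ** 64  # illustrative modulus for share arithmetic


def split(value: int, n: int) -> list:
    """Split `value` into n additive shares; any n-1 of them look random."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares


def reconstruct(shares: list) -> int:
    """The shares sum back to the original value modulo MOD."""
    return sum(shares) % MOD


data = 123456789
fragments = split(data, 3)  # one random fragment per TA
assert reconstruct(fragments) == data
```

Each fragment would then be sent to a different TA; only the sum of all fragments recovers the user's data.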

In one possible implementation, the multiple TAs may obtain respective ciphertexts from the client through the encryption channels, and decrypt the respective ciphertexts to obtain respective data segments.

Step 320: each TA in the plurality of TAs processes the data segment acquired by the TA to obtain a respective result.

Each TA of the multiple TAs may process the respective data fragment after acquiring the respective data fragment from the client, and obtain a respective processing result.

Alternatively, the process of processing the respective data segment by each TA may also be understood as that each TA performs the respective calculation task and obtains the calculation result respectively.

The calculation tasks performed by each TA of the plurality of TAs may be the same or different, and this is not specifically limited in this application.

The calculation task executed by the TA may be various, and the present application is not particularly limited. Two examples are listed below to illustrate the specific implementation of the TA to perform the computational task.

As an example, assume that the user's data is a key to be used for decrypting data. If the key is leaked, not only is the confidential computation this time compromised, but the data that was and will be encrypted with the same key is also affected. Thus, the key can be divided into random fragments whose sum is the complete key. After each TA takes one random fragment, it performs partial decryption according to that fragment and obtains its own decryption result. Finally, the respective decryption results are collected together to obtain the decrypted plaintext.
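The key example above can be sketched with a toy XOR (one-time-pad) cipher, under which partial decryption distributes over key shares; the cipher, the share count (odd, so the ciphertext terms cancel correctly), and the 16-byte sizes are assumptions for illustration only, not the cipher of the embodiments.

```python
import secrets
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


key = secrets.token_bytes(16)
plaintext = b"sixteen byte msg"
ciphertext = xor_bytes(plaintext, key)  # stand-in cipher: one-time pad

# Split the key into 3 random shares whose XOR is the full key.
k1 = secrets.token_bytes(16)
k2 = secrets.token_bytes(16)
k3 = xor_bytes(xor_bytes(key, k1), k2)

# Each TA partially decrypts with only its own share, independently.
partials = [xor_bytes(ciphertext, k) for k in (k1, k2, k3)]

# Combining an odd number of partials recovers the plaintext:
# (c^k1) ^ (c^k2) ^ (c^k3) = c ^ k1 ^ k2 ^ k3 = c ^ key.
recovered = reduce(xor_bytes, partials)
assert recovered == plaintext
```

No single TA ever holds the full key, yet collecting the partial results yields the decrypted plaintext.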

As another example, in a financial service, a user pays electronic currency as a deposit. The electronic currency cannot be viewed or withdrawn at will, but it needs to be verified as genuine and valid currency. It can be assumed that one piece of the user's data is the electronic currency, which can be split into a plurality of random fragments whose sum is the complete electronic currency. After each TA takes one random fragment, it performs partial verification according to that fragment and obtains its own verification result. Finally, the validity of the electronic currency can be judged by gathering the respective verification results. If necessary, the multiple TAs may also collectively retrieve the electronic currency for payment of fines and the like.

Step 330: the plurality of TAs feed back their respective results.

The plurality of TAs may feed back their respective results after obtaining them. The results fed back by the plurality of TAs may be used to determine the result corresponding to the data of the first user; as an example, the results fed back by the plurality of TAs may be pieced together to obtain the result corresponding to the data of the first user. This is described below with reference to specific examples and is not detailed here.

There are various specific implementations for the TAs to feed back their respective results, and two possible implementations are described in detail below.

In a possible implementation manner, the TAs may respectively feed back the respective encryption results to the client through the encryption channel. The client may decrypt the encrypted result of each of the plurality of TAs to obtain the result of each of the plurality of TAs, and determine the result corresponding to the data of the first user based on the result of each of the plurality of TAs.

In another possible implementation manner, the multiple TAs may respectively feed back respective results to the first TA, and the first TA determines a result corresponding to the data of the first user according to the results fed back by the multiple TAs.

In this implementation, when the plurality of TAs includes the first TA, other TAs of the plurality of TAs except the first TA feed back respective results to the first TA through the TEE channel, and the first TA may locally obtain its own results.

In the above technical scheme, the user randomly divides the task data into a plurality of segments, actively shares the segments with the plurality of TAs respectively, and the plurality of TAs run in the TEE environment. After the distributed calculation, the plurality of TAs feed back their respective results, so that the result corresponding to the user's data can be assembled from those calculation results. Even if an attacker breaks the protection of the TEE and steals the data of one TA in the memory, the attacker acquires only one random segment of the user's task data and cannot recover the task data itself. Therefore, on the one hand, by processing fragmented plaintext data in the TEE, the data in the memory can still be protected after the memory of a certain TA is exposed, which enhances privacy protection while remaining simple and efficient. On the other hand, the distributed calculation performed by the multiple TAs can also improve calculation efficiency.

A specific implementation of the data processing method is described below with reference to the specific example in fig. 4. It should be understood that the example of fig. 4 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios of fig. 4. It will be apparent to those skilled in the art from the examples given that various equivalent modifications or variations can be made, and such modifications and variations also fall within the scope of the embodiments of the application.

Fig. 4 is a schematic flow chart of another data processing method provided in an embodiment of the present application. As shown in FIG. 4, the method may include steps 410 to 430, which are described in detail below.

Step 410: a user divides one piece of data into a plurality of shares through a client and distributes one data fragment to each TA.

Suppose a user needs to compute a complex function f(x), where x is an input of large length and f() is computationally expensive. The user lacks sufficient computing power and needs to rely on external computing power while protecting the private data x.

Alternatively, the secret data of the user can also be understood as data of the user or data of a task.

A user can randomly divide his or her own data into a plurality of data fragments through the client, and transmit the encrypted data fragments through encrypted channels to the user-controlled TAs in the TEE environment.

One implementation is illustrated in fig. 5. The user can split each piece of data into a plurality of random data fragments through the client and transmit the data fragments to a plurality of TAs in the TEE through encrypted channels. For example, in fig. 5, a user may split one piece of data into 3 random data fragments through the client, and transmit the data fragments to TA1, TA2, and TA3 in the TEE through encrypted channels, respectively.

The data of the user can be stored on a hard disk or a cloud server, and the safety of the data of the user cannot be guaranteed before the data of the user is imported into the TEE environment. Therefore, the user needs to encrypt the data before importing it into the TEE environment from the hard disk or the cloud server through the client.

In order to avoid unnecessary communication or user intervention between TAs, the most preferable data splitting scheme is one in which the TAs, after receiving their data fragments, can calculate their respective results without exchanging information with each other.

Meanwhile, for data security, the split data fragments are random, and no single fragment reveals any information about the original data. For example, by selecting a random number x1, the private data x is split such that x = x1 + x2 or x = x1 · x2. As an example, the data splitting may be performed according to a method of secure multi-party computation (SMC); please refer to the description of the SMC above, which is not detailed here.
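A minimal sketch of the two splitting options named above, additive and multiplicative; the prime modulus p is an illustrative assumption (a prime is chosen so that every nonzero x1 has a multiplicative inverse).

```python
import secrets

p = 2 ** 61 - 1  # illustrative prime modulus
x = 987654321    # private data

# Additive split: x = x1 + x2 (mod p), with x1 chosen at random.
x1 = secrets.randbelow(p)
x2 = (x - x1) % p
assert (x1 + x2) % p == x

# Multiplicative split: x = x1 * x2 (mod p), with x1 a random nonzero element.
x1 = secrets.randbelow(p - 1) + 1
x2 = (x * pow(x1, -1, p)) % p  # x2 = x * x1^{-1} mod p
assert (x1 * x2) % p == x
```

In both variants x1 is uniformly random, so neither fragment alone leaks anything about x.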

Step 420: and each TA acquires the data fragments and then completes the calculation task in a distributed mode.

Each TA can obtain an encrypted data fragment from the client; after decrypting the data fragment in the TEE, each TA occupies the CPU in a distributed, time-sharing manner to complete its respective calculation task and obtain an intermediate calculation result. After the data is imported into the TEE environment, the data fragments do not need to be encrypted again, because the TEE environment itself protects the data.

In one implementation, three TAs are taken as an example. In step 410, the user randomly splits the private data x into x1, x2, and x3 through the client, and constructs three corresponding distributed computing functions f1(x1), f2(x2), and f3(x3), which are instantiated as TA1, TA2, and TA3, respectively. That is, TA1, TA2, and TA3 may distributively carry out the calculation tasks f1(x1), f2(x2), and f3(x3) according to the input information x1, x2, and x3.
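For a deliberately simple concrete case, if f is linear then each fi can just apply f to its own share and the combining step is a modular sum; the coefficient A and the modulus are illustrative assumptions, and a general f would require the SMC-style algorithm decomposition mentioned above.

```python
import secrets

MOD = 2 ** 61 - 1
A = 7  # public coefficient of the linear task f(x) = A * x mod MOD (illustrative)


def f_i(share: int) -> int:
    """Distributed sub-task run inside one TA, using only its own share."""
    return (A * share) % MOD


def F(results: list) -> int:
    """Splicing function: combine the intermediate results into f(x)."""
    return sum(results) % MOD


x = 424242
x1 = secrets.randbelow(MOD)
x2 = secrets.randbelow(MOD)
x3 = (x - x1 - x2) % MOD  # x = x1 + x2 + x3 (mod MOD)

intermediates = [f_i(s) for s in (x1, x2, x3)]  # TA1, TA2, TA3 in parallel
assert F(intermediates) == (A * x) % MOD
```

Because f is linear, A·x1 + A·x2 + A·x3 ≡ A·x (mod MOD), so the three TAs never need to exchange information.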

Based on the technical support of multi-party computing algorithm decomposition and data splitting, a computing task has already been split into a plurality of subtasks in software. The subtasks are implemented by TAs in the TEE with the necessary support and coordination of the CPU, the operating system (OS), and sometimes a hypervisor. In this way, the TEE may be relied upon for secure computing, with support for program reliability and data privacy isolation.

Step 430: and obtaining a final calculation result according to the intermediate calculation result of each TA in the TEE.

Each TA in the TEE may obtain intermediate calculation results f1(x1), f2(x2), and f3(x3) by calculation according to the respective obtained data segments, and may obtain a final calculation result according to a plurality of intermediate calculation results. There are various specific implementations, and several possible implementations are described in detail below.

In a possible implementation manner, referring to fig. 6, each TA encrypts its intermediate calculation result and feeds it back to the user's client through an encrypted channel; the user's client decrypts the plurality of intermediate calculation results and then combines them to obtain the final calculation result. As an example, the user's client obtains the final calculation result F(x) through a simple splicing function F(), for example, F(x) = F(f1(x1), f2(x2), f3(x3)).

In another possible implementation manner, if the computing capability or security of the user's client is limited, a TA may be specially set up in the TEE to complete the user's task of combining the plurality of intermediate calculation results into the final calculation result. Specifically, the other TAs (e.g., TA1, TA2, TA3) may transmit their intermediate calculation results f1(x1), f2(x2), and f3(x3) to this TA through the TEE internal channel, and this TA obtains the final calculation result F(x) from the intermediate calculation results. As an example, the TA obtains the final calculation result F(x) through a simple splicing function F(), for example, F(x) = F(f1(x1), f2(x2), f3(x3)).

Optionally, in some embodiments, each TA needs to exchange information with the user or with other TAs during the distributed computation. When communication between the TAs (e.g., the exchange of necessary intermediate calculation results) is frequent, the TAs may often need to suspend and wait for the user (or another controlled information-exchange TA) to collect and forward the intermediate calculation results between them. In this case, to avoid unnecessary suspension and waiting, each TA may be further split into several rounds of sub-TAs on the time axis, with the inter-round information exchange controlled through the user as the center.

As an example, referring to fig. 7, each TA may be split into 3 sub-TAs. For example, TA1 is split into TA1-1, TA1-2, and TA1-3; TA2 into TA2-1, TA2-2, and TA2-3; and TA3 into TA3-1, TA3-2, and TA3-3. The sub-TAs may communicate with each other (e.g., exchange necessary intermediate calculation results) via the internal channel of the TEE. Taking TA2 as an example, suppose TA2 needs to obtain an intermediate calculation result of TA1 and calculate its own intermediate result based on it. If TA1-1 has already produced that intermediate result, TA2 can obtain it directly from TA1-1 instead of waiting for TA1 to finish executing completely, thereby avoiding unnecessary suspension and waiting.
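The early hand-off from TA1-1 to TA2 can be sketched as follows; the queue stands in for the TEE internal channel, and the round functions and their arithmetic are hypothetical placeholders.

```python
import queue
import threading

channel = queue.Queue()  # stand-in for the TEE internal channel


def ta1(x):
    # TA1-1: publish the first-round intermediate result as soon as it is ready.
    channel.put(x + 1)
    # TA1-2, TA1-3: continue with the later rounds afterwards.
    return (x + 1) * 2


def ta2(y):
    # TA2 consumes TA1-1's output directly, without waiting for all of TA1.
    ta1_round1 = channel.get()
    return y + ta1_round1


t = threading.Thread(target=ta1, args=(10,))
t.start()
result = ta2(5)  # proceeds as soon as TA1-1 has published
t.join()
assert result == 5 + 11
```

TA2 blocks only until TA1's first round is published, not until TA1 finishes, which is the point of splitting TAs into rounds.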

Optionally, in some embodiments, if all of a user's TA memory is compromised due to a TEE failure, all of the user's data fragments are revealed. To provide multiple layers of protection and increase security and flexibility, an obfuscation mechanism may be added: the OS in the computing device may scramble the identifications (IDs) of the multiple TAs. Then, even if all of a user's data fragments are revealed, an attacker cannot judge which TAs belong to the same user, and thus cannot use the data fragments to recover the original data.

In a possible implementation manner, the OS may change the identifications (IDs) of the multiple TAs executing the data task of one user, so that after a bystander (e.g., the owner of another TA) acquires the in-memory data of all TAs of one user, the bystander cannot recover the original data according to the changed IDs of the multiple TAs.

In another possible implementation, the OS may change the IDs of the multiple TAs of multiple users at the same time. As an example, the OS may scramble the IDs of the multiple TAs of multiple users, so that after a bystander (e.g., the owner of another TA) acquires the in-memory data of all TAs of one user, the bystander cannot piece together that user's original data according to the changed IDs of the multiple TAs.

Suppose there are two users, user 1 and user 2, each of whom processes the respectively acquired data segments in a distributed manner through four TAs. The TAs corresponding to user 1 are TA1, TA2, TA3, and TA4, with IDs 1, 2, 3, and 4, respectively; the TAs corresponding to user 2 are TA5, TA6, TA7, and TA8, with IDs 5, 6, 7, and 8. The IDs of the eight TAs of user 1 and user 2 are thus, in order: 1, 2, 3, 4, 5, 6, 7, 8. The OS may reset the IDs corresponding to TA1-TA8 and save the correspondence between each TA's changed ID and its original ID. For example, the IDs of the eight TAs of user 1 and user 2 are changed to: 1, 4, 5, 7, 2, 3, 6, 8.

Thus, even if a bystander (e.g., the owner of another TA) has acquired the data segments of all TAs of one user, the bystander cannot recover the original data from them. For example, suppose the bystander has obtained the in-memory data of user 1's TA1, TA2, TA3, and TA4, whose changed IDs are 1, 4, 5, and 7. The bystander cannot determine that the TAs whose IDs are 1, 4, 5, and 7 belong to the same user, and therefore cannot use their data segments to recover the original data.
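An OS-side sketch of the ID obfuscation described above, using a Fisher-Yates shuffle over the two-user example's IDs; the mapping dictionary models the old-to-new correspondence that the OS would retain privately.

```python
import secrets

# TA1..TA4 belong to user 1, TA5..TA8 to user 2.
original_ids = [1, 2, 3, 4, 5, 6, 7, 8]

# Assign a fresh random permutation of IDs (Fisher-Yates shuffle driven
# by a cryptographic source, since this is a security mechanism).
shuffled = original_ids[:]
for i in range(len(shuffled) - 1, 0, -1):
    j = secrets.randbelow(i + 1)
    shuffled[i], shuffled[j] = shuffled[j], shuffled[i]

# The old->new correspondence is saved only by the OS.
mapping = dict(zip(original_ids, shuffled))

# A bystander who dumps TA memory sees only the shuffled IDs and cannot
# tell which four of the eight TAs belong to the same user.
assert sorted(mapping.values()) == original_ids
```

Every permutation keeps the same set of IDs in use, so normal scheduling is unaffected while the grouping of TAs per user is hidden.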

In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.

The method of data processing is described in detail above with reference to fig. 1 to 7, and an embodiment of the apparatus of the present application is described in detail below with reference to fig. 8.

Fig. 8 is a schematic block diagram of a TA 800 provided by an embodiment of the present application.

The TA 800 is capable of executing the steps executed by the TA in the data processing methods shown in fig. 3 to fig. 4; to avoid repetition, the details are not described here again. The TA 800 includes an obtaining module 810, a processing module 820, and a feedback module 830.

an obtaining module 810, configured to obtain one data fragment of multiple data fragments of a first user, where the multiple data fragments constitute data of the first user;

a processing module 820, configured to process the acquired data segment to obtain a result;

a feedback module 830, configured to feed back a result, where the result is used to determine a result corresponding to the data of the first user.

Optionally, the obtaining module 810 is specifically configured to: obtaining a ciphertext through an encryption channel;

the processing module 820 is further configured to: decrypt the ciphertext to obtain the data fragment.

Optionally, the feedback module 830 is specifically configured to: and feeding back the encrypted result to the client through an encryption channel.

Optionally, the feedback module 830 is specifically configured to: and feeding back the result to the first TA, wherein the first TA is used for determining the result corresponding to the data of the first user according to the result fed back by the plurality of TAs.

The TA 800 herein may be embodied in the form of a functional module. The term "module" herein may be implemented in software and/or hardware, and is not particularly limited thereto.

For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.

Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

The embodiment of the present application further provides a chip, where the chip acquires an instruction and executes the instruction to implement the data processing method, or the instruction is used to implement the TA.

Optionally, as an implementation manner, the chip includes a processor and a data interface, and the processor reads instructions stored on the memory through the data interface to execute the data processing method.

Optionally, in an implementation, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored in the memory. When the instructions are executed, the processor is configured to execute the foregoing data processing method.

An embodiment of this application further provides a computer-readable storage medium, where the computer-readable storage medium stores instructions for the data processing method in the foregoing method embodiments, or the instructions are used to implement the foregoing TA.

An embodiment of this application further provides a computer program product containing instructions for implementing the data processing method in the foregoing method embodiments, or for implementing the foregoing TA.

For example, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

For example, the memory may be a volatile memory or a nonvolatile memory, or may include both a volatile memory and a nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).

The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with particular reference to the former and latter text.

In the present application, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.

In the embodiments of this application, the sequence numbers of the foregoing processes do not imply an execution order. The execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.

It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the foregoing systems, apparatuses, and units, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.

When the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computing device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific embodiments of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
