Homomorphic encryption


Note: This technology, homomorphic encryption, was designed by H. Markram, F. Schürmann, K. Hess, and F. Delalondre on 2020-03-12. Its main content includes: Methods, systems, and devices for homomorphic encryption. In one implementation, the method includes: inputting first data into a recurrent artificial neural network; identifying a pattern of activity in the recurrent artificial neural network in response to the input of security data; storing second data representing whether the identified pattern of activity is commensurate with a topological pattern; and statistically analyzing the second data to draw conclusions about the first data.

1. A homomorphic encryption method implemented in hardware, in software, or a combination thereof, the method comprising:

storing binary data, wherein each digit in the binary data represents whether activity in a recurrent artificial neural network is commensurate with a corresponding pattern, wherein the activity is responsive to an input of security data; and

performing statistical analysis on the binary data to draw conclusions about the security data.

2. The method of claim 1, wherein the pattern of activity in the recurrent artificial neural network comprises a simplex pattern of activity in the network.

3. The method of claim 2, wherein the simplex pattern surrounds a cavity.

4. The method of claim 1, further comprising identifying the pattern of activity in the recurrent artificial neural network.

5. The method of claim 4, wherein identifying the pattern of activity comprises:

determining a particular time at which activity has a complexity that is distinguishable from other activity responsive to the input; and

identifying the pattern based on the particular time of the activity having the distinguishable complexity.

6. The method of claim 1, wherein the method further comprises:

receiving data characterizing a customization of the input of the security data into the network; and

customizing an input of plaintext into the network based on the data.

7. The method of claim 6, wherein the data:

characterize the synapses and nodes into which bits of the plaintext are to be injected, or

characterize the order in which bits of the plaintext are to be injected.

8. The method of claim 1, wherein the method further comprises:

customizing a response of the network to the input of the security data by changing one or more attributes of nodes or links in the network.

9. A homomorphic encryption method implemented in hardware, in software, or a combination thereof, the method comprising:

inputting first data into a recurrent artificial neural network;

identifying a pattern of activity in the recurrent artificial neural network in response to the input of security data;

storing second data representing whether the identified pattern of activity is commensurate with a topological pattern; and

performing a statistical analysis on the second data to draw conclusions about the first data.

10. The method of claim 9, wherein the pattern of activity in the recurrent artificial neural network comprises a simplex pattern of activity in the network.

11. The method of claim 10, wherein the simplex pattern surrounds a cavity.

12. The method of claim 9, wherein identifying the pattern of activity comprises:

determining a particular time at which activity has a complexity that is distinguishable from other activity responsive to the input; and

identifying the pattern based on the particular time of the activity having the distinguishable complexity.

13. The method of claim 9, wherein the method further comprises:

receiving data characterizing a customization of the input of the security data into the network; and

customizing an input of plaintext into the network based on the data.

14. The method of claim 13, wherein the data:

characterize the synapses and nodes into which bits of the plaintext are to be injected, or

characterize the order in which bits of the plaintext are to be injected.

15. The method of claim 9, wherein the method further comprises:

customizing a response of the network to the input of the security data by changing one or more attributes of nodes or links in the network.

16. A homomorphic encryption device configured to:

input first data into a recurrent artificial neural network;

identify a pattern of activity in the recurrent artificial neural network in response to the input of security data;

store second data representing whether the identified pattern of activity is commensurate with a topological pattern; and

perform a statistical analysis on the second data to draw conclusions about the first data.

17. The homomorphic encryption device of claim 16, wherein the pattern of activity in the recurrent artificial neural network comprises a simplex pattern of activity in the network, wherein the simplex pattern surrounds a cavity.

18. The homomorphic encryption device of claim 16, wherein the homomorphic encryption device is configured to:

determine a particular time at which activity has a complexity that is distinguishable from other activity responsive to the input; and

identify the pattern based on the particular time of the activity having the distinguishable complexity.

19. The homomorphic encryption device of claim 16, wherein the homomorphic encryption device is configured to:

receive data characterizing a customization of the input of the security data into the network; and

customize an input of plaintext into the network based on the data.

20. The homomorphic encryption device of claim 16, wherein the homomorphic encryption device is configured to:

customize a response of the network to the input of the security data by changing one or more attributes of nodes or links in the network.

Background

Cryptographic encryption provides secure communication between parties even when a third party (often referred to as an "adversary") intercepts the communication. Encrypted communications are encoded such that only authorized recipients can access them. In general, the communication itself is referred to as "plaintext," a term that encompasses both textual and other messages. The algorithm that encrypts the communication is generally called a "cipher," and the encrypted communication is called "ciphertext." Although the ciphertext may be intercepted or otherwise made available to an adversary, decrypting the ciphertext to access the encrypted communication is generally very difficult.

In general, encryption can be classified as "symmetric key" or "public key". In symmetric key encryption, the same key is used to encrypt plaintext and decrypt ciphertext. Since both the sender and the receiver must have access to the same symmetric key, it must be ensured that the symmetric key is exchanged over a secure channel. In public key encryption, an encryption key may be disclosed and used by multiple parties to encrypt plaintext. However, only the intended recipient will have access to the decryption key that enables the ciphertext to be decrypted.

In some instances, a party may use the ciphertext in a computation without having to fully decrypt the ciphertext. In so-called "homomorphic encryption," an operation performed on ciphertext may produce a result that, when decrypted, matches the result of a similar operation performed on the corresponding plaintext. Examples of operations include linear and non-linear statistical analysis, as well as deep learning and other AI-based techniques.
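To make the homomorphic property concrete, the following Python sketch uses textbook, unpadded RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the two plaintexts. This toy example only illustrates the property described above, not the technique disclosed in this document, and the tiny key is for demonstration only.

```python
# Toy demonstration of a homomorphic property using textbook (unpadded) RSA.
# Multiplying ciphertexts yields a ciphertext of the product of the plaintexts.
# Illustrative only: real systems use padded RSA or dedicated HE schemes.

p, q = 61, 53                 # small primes, demonstration only
n = p * q                     # modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (modular inverse), Python 3.8+

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 12, 7
ca, cb = encrypt(a), encrypt(b)

# Operate on the ciphertexts only.
c_product = (ca * cb) % n

assert decrypt(c_product) == (a * b) % n
print("decrypt(E(a) * E(b)) =", decrypt(c_product), "= a * b =", a * b)
```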

Homomorphic encryption is particularly useful in scenarios where the computation is performed by parties that cannot access the plaintext without restriction. For example, a party performing statistical analysis of medical data may not have access to patient identification information. However, the results of the analysis should be as accurate as if the party had access to the complete identification information.

Disclosure of Invention

This document relates to homomorphic encryption and to systems and techniques for performing homomorphic encryption. For example, in one implementation, a homomorphic encryption method includes: storing binary data, wherein each digit in the binary data represents whether activity in a recurrent artificial neural network is commensurate with a corresponding pattern, wherein the activity is responsive to an input of security data; and statistically analyzing the binary data to draw conclusions about the security data. The method may be implemented in hardware, in software, or in a combination thereof.

This and other homomorphic encryption methods may include one or more of the following features. The pattern of activity in the recurrent artificial neural network may comprise a simplex pattern of activity in the network, for example, wherein the simplex pattern is a directed simplex or wherein the simplex pattern surrounds a cavity. The method may include identifying the pattern of activity in the recurrent artificial neural network. Identifying the pattern of activity may include: determining a particular time at which activity has a complexity that is distinguishable from other activity responsive to the input; and identifying the pattern based on the particular time of the activity having the distinguishable complexity. The method may include: receiving data characterizing a customization of the input of the security data into the network; and customizing an input of plaintext into the network in accordance with the data. The data may characterize the synapses and nodes into which bits of the plaintext are to be injected, or the order in which the bits of the plaintext are to be injected. The method may further include customizing the response of the network to the input of the security data, for example, by creating or removing nodes or links in the network or by changing one or more properties of nodes or links in the network.

In another implementation, a homomorphic encryption method includes: inputting first data into a recurrent artificial neural network; identifying a pattern of activity in the recurrent artificial neural network in response to the input of security data; storing second data representing whether the identified pattern of activity is commensurate with a topological pattern; and statistically analyzing the second data to draw conclusions about the first data. The method may be implemented in hardware, in software, or in a combination thereof.

This and other homomorphic encryption methods may include one or more of the following features. The pattern of activity in the recurrent artificial neural network may comprise a simplex pattern of activity in the network, for example, wherein the simplex pattern is a directed simplex or wherein the simplex pattern surrounds a cavity. Identifying the pattern of activity may include: determining a particular time at which activity has a complexity that is distinguishable from other activity responsive to the input; and identifying the pattern based on the particular time of the activity having the distinguishable complexity. The method may include: receiving data characterizing a customization of the input of the security data into the network; and customizing an input of plaintext into the network in accordance with the data. The data may characterize the synapses and nodes into which bits of the plaintext are to be injected, or the order in which the bits of the plaintext are to be injected. The method may include customizing the response of the network to the input of the security data, for example, by creating or removing nodes or links in the network or by changing one or more properties of nodes or links in the network.

In some implementations, a non-transitory computer-readable storage medium may have instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform any of the homomorphic encryption methods described above.

In some implementations, a homomorphic encryption device can be configured to perform any of the homomorphic encryption methods described above.

The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

Drawings

Fig. 1 is a schematic representation of a process for homomorphic encryption of data.

Fig. 2 and 3 are representations of patterns of activity that may be identified and read in a recurrent artificial neural network.

Fig. 4 is a schematic representation of the determination of a specific time of an activity pattern with distinguishable complexity.

Detailed Description

Fig. 1 is a schematic representation of a process 100 for homomorphic encryption of data. Process 100 may be performed autonomously by one or more computing devices or under the operational supervision of a human.

In process 100, a collection of data has been stored in one or more data storage devices 105. As illustrated, the stored data may include, for example, image data, video data, text data, audio data, and/or structured or unstructured database data. In some cases, data storage device 105 may form a data center for a company or other entity. The entity or entities that own the data stored on the data storage device 105 may wish to limit access to the data by others for one or more reasons. For example, the stored data may include business secrets or other data of business importance to the owner. As another example, access to the data may be regulated by law, such as when the stored data is non-anonymous medical data.

The data stored at the storage device 105 may be injected into the artificial recurrent neural network 110. An artificial neural network is a device inspired by structural and functional aspects of biological neuronal networks but implemented in hardware, in software, or in a combination thereof. In particular, artificial neural networks use a system of interconnected constructs called nodes to model the information coding and other processing capabilities of biological neuronal networks. The arrangement and strength of the connections between nodes in an artificial neural network determine the result of information processing or information storage by the artificial neural network.

Neural networks may be trained to produce a desired signal flow in the network and to achieve desired information-processing or information-storage results. Typically, training a neural network changes the arrangement and/or strength of connections between nodes during a learning phase. A neural network may be considered trained when it achieves sufficiently appropriate processing results for a given set of inputs.

Artificial neural networks may be used in a variety of different devices to perform nonlinear data processing and analysis. Nonlinear data processing does not satisfy the superposition principle; that is, the variables to be determined cannot be written as a linear sum of independent components.

In a recurrent artificial neural network, the connections between nodes form a directed graph along a temporal sequence, and the network exhibits temporally dynamic behavior.
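As a rough illustration of this temporally dynamic behavior, the following Python sketch steps a small recurrent network of integrate-and-fire style nodes; the size, weights, and threshold are assumed values chosen for demonstration and are not taken from this disclosure.

```python
import random

random.seed(0)

N = 8                                    # number of nodes (assumed)
THRESHOLD = 1.0                          # firing threshold (assumed)
# Sparse random directed weights: links[i][j] is the weight of link i -> j.
links = [[random.choice([0.0, 0.0, 0.6]) for _ in range(N)] for _ in range(N)]

potential = [0.0] * N                    # accumulated signal per node

def step(external_input):
    """Advance the network one time step and return the nodes that fired."""
    global potential
    new_potential = [potential[i] + external_input.get(i, 0.0) for i in range(N)]
    fired = [i for i in range(N) if new_potential[i] >= THRESHOLD]
    for i in fired:
        new_potential[i] = 0.0           # reset nodes that just fired
    for i in fired:
        for j in range(N):
            new_potential[j] += links[i][j]   # deliver spikes along outgoing links
    potential = new_potential
    return fired

# Inject input only at t = 0; recurrent links keep activity flowing afterwards.
print("t=0 fired:", step({0: 1.2, 3: 1.1}))
for t in range(1, 5):
    print(f"t={t} fired:", step({}))
```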

The data stored at the storage device 105 may be input into the recurrent artificial neural network 110 in a variety of different ways. In general, a user may be able to specify a unique manner in which the data stored at the storage device 105 is injected into a particular network, providing a measure of security against unwanted access. For example, the recurrent artificial neural network 110 need not be constrained to receive input at a well-defined input layer. Rather, in some implementations, the user may specify that data stored at the storage device 105 is to be injected into particular nodes or links distributed throughout the network 110. As another example, the recurrent artificial neural network 110 need not be constrained to receive input in a known, previously defined manner (e.g., always injecting a first bit into a first node, a second bit into a second node, and so on). Instead, the user may specify that certain bits in the data stored at the storage device 105 are to be injected into synapses rather than neurons, that the order of injection need not follow the order in which the bits appear, or a combination of these and other parameters.
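The following Python sketch illustrates one way such a user-defined injection scheme might be expressed, as a mapping from plaintext bit positions to injection targets in a user-chosen order; the node and synapse identifiers and the stimulate_node/stimulate_synapse calls are hypothetical stand-ins, not an actual API.

```python
# Hypothetical injection schedule: which bit of the plaintext goes where,
# and in what order. The targets and order are user-specified rather than
# a fixed, well-defined input layer.
injection_schedule = [
    # (bit index in plaintext, target kind, target id)
    (3, "node",    "n17"),
    (0, "synapse", "s02->s09"),
    (2, "node",    "n04"),
    (1, "synapse", "s11->s17"),
]

def inject(plaintext_bits, schedule, network):
    """Deliver plaintext bits to user-chosen nodes/synapses in a custom order."""
    for bit_index, kind, target in schedule:
        value = plaintext_bits[bit_index]
        if kind == "node":
            network.stimulate_node(target, value)       # assumed network API
        else:
            network.stimulate_synapse(target, value)    # assumed network API

class DemoNetwork:
    """Stand-in network that just records where each bit was injected."""
    def __init__(self):
        self.log = []
    def stimulate_node(self, node_id, value):
        self.log.append(("node", node_id, value))
    def stimulate_synapse(self, synapse_id, value):
        self.log.append(("synapse", synapse_id, value))

net = DemoNetwork()
inject([1, 0, 1, 1], injection_schedule, net)
print(net.log)   # bits arrive out of order, at user-chosen targets
```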

In some implementations, the data stored at the storage device 105 can be input into a recurrent artificial neural network 110 that has been customized using one or more settings 115 that tailor the response of the network 110 to input. These settings may, for example, create or remove nodes or links within network 110 and/or change attributes of individual nodes or links within network 110. For example, the settings may change the strength and/or directionality of links within the network 110. As another example, the settings may change the signal-accumulation or firing threshold of a node that operates according to an integrate-and-fire model. The nature of these changes can be sufficient to tailor the responsiveness of the network 110 to input in a manner that is hidden from other parties, for example, parties who may have access to the network 110 but not to the settings 115. As such, the settings 115 may be considered a "private key" that, together with the unchanged attributes of the network 110, determines how the data stored at the storage device 105 is encoded. For illustrative purposes, the settings 115 are schematically represented in Fig. 1 as a key.
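A minimal sketch of how such "private key" settings might be represented and applied, assuming a simple dictionary-based network model; the attribute names and values are illustrative, not part of the disclosure.

```python
import copy

# Baseline network: link weights and per-node firing thresholds (assumed model).
base_network = {
    "links": {("a", "b"): 0.4, ("b", "c"): 0.7, ("c", "a"): 0.5},
    "thresholds": {"a": 1.0, "b": 1.0, "c": 1.0},
}

# Settings 115 acting as a private key: scale some weights, shift some
# thresholds, remove one link, and add another. Values are illustrative.
settings_115 = {
    "weight_scale": {("a", "b"): 1.5},
    "threshold_shift": {"c": -0.2},
    "remove_links": [("c", "a")],
    "add_links": {("a", "c"): 0.3},
}

def apply_settings(network, settings):
    """Return a customized copy of the network; the original stays unchanged."""
    net = copy.deepcopy(network)
    for link, scale in settings.get("weight_scale", {}).items():
        net["links"][link] *= scale
    for node, shift in settings.get("threshold_shift", {}).items():
        net["thresholds"][node] += shift
    for link in settings.get("remove_links", []):
        net["links"].pop(link, None)
    net["links"].update(settings.get("add_links", {}))
    return net

customized = apply_settings(base_network, settings_115)
print(customized["links"])
print(customized["thresholds"])
```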

In response to the input of data stored at the storage device 105, the recurrent artificial neural network 110 responds with patterns of activity. The topological patterns that occur in this activity can be "read" as a neuro-topological code 120. In more detail, the neuro-topological code 120 may represent topological features that correspond to patterns of activity arising in the neural network when the neural network is presented with a given input. In other words, the neural network may be represented as a graph. A graph is a set of nodes and a set of edges between those nodes. The nodes may correspond to, for example, artificial neurons in the neural network. Edges may correspond to some relationship between the nodes. Examples of relationships include structural connections or activity along a connection. In the context of a neural network, artificial neurons may be related by a structural connection between them or by the transmission of information along a structural connection. An edge may thus characterize relatively brief "activity" that occurs within a defined time frame.

The neuro-topological code 120 may use a series of binary bits to represent the presence or absence of topological features in the activity. The features whose presence or absence is indicated by the bits in the neuro-topological code 120 can be, for example, activity in a node, a set of nodes, a set of sets of nodes, a set of edges, a set of sets of edges, and/or a hierarchically more complex feature (e.g., a set of sets of sets of nodes). The bits in the neuro-topological code 120 generally indicate the presence or absence of features at different hierarchical levels. For example, a first bit may represent the presence or absence of activity at a set of five nodes, while a second bit represents the presence or absence of activity at a set of eight nodes. In some implementations, a bit may represent the presence or absence of a multi-dimensional simplex pattern of activity in a graph that represents the activity.
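One possible way to assemble such a binary code is sketched below in Python, assuming that pattern detection has already produced a set of observed feature labels; the feature catalogue and label format are hypothetical.

```python
# Hypothetical catalogue of topological features, ordered by hierarchy:
# single nodes, then node sets, then edge sets, and so on. The ordering of
# the bits can itself be user-selected.
feature_catalogue = [
    "node:n04",
    "node-set:{n04,n09,n17}",
    "node-set:{n01,n02,n05,n08,n11}",        # a set of five nodes
    "edge-set:simplex-2d:{n04,n09,n17}",
    "edge-set:simplex-3d:{n01,n04,n09,n17}",
]

def encode(observed_features, catalogue):
    """Return one bit per catalogue entry: 1 if that feature's activity
    was observed in the current window, 0 otherwise."""
    observed = set(observed_features)
    return [1 if feature in observed else 0 for feature in catalogue]

observed = {
    "node:n04",
    "edge-set:simplex-2d:{n04,n09,n17}",
}
code = encode(observed, feature_catalogue)
print(code)   # e.g. [1, 0, 0, 1, 0]
```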

In some implementations, the format of the neuro-topological code 120 can be customized by a user. For example, the order of the bits may be selected by the user.

In some implementations, bits in the neuro-topological code 120 may represent information about features in the graph beyond the mere presence or absence of those features. For example, a bit may represent that a feature is not only present but also has a characteristic that exceeds some threshold level. For instance, a bit may indicate not only that there is a simplex pattern of activity in a set of edges, but also that this activity is above or below a threshold level of activity.

In a reductive sense, the data from storage device 105 that is input into neural network 110 is the plaintext, and the neuro-topological code 120 is the homomorphic encryption of that plaintext.

The neuro-topological code 120 may be stored in one or more data storage devices 125. Typically, data storage device 125 is different from data storage device 105. For example, data storage device 125 may be a cloud data store or another database that is accessible to parties other than the entity or entities that own the data stored on data storage device 105. This accessibility is indicated schematically in the figure by the unlocked-lock symbol on data storage device 125.

Entities having access to data storage device 125 can retrieve the neuro-topological code 120 stored there and perform one or more data analyses 130, 135, 140. Data analyses 130, 135, 140 performed on the neuro-topological code 120 can produce results that, when decrypted, match the results of similar operations performed on the corresponding data stored on data storage device 105. Examples of suitable data analyses include linear and non-linear statistical analysis, as well as deep learning and other AI-based techniques. These results are achieved without giving those entities unrestricted access to the data stored on data storage device 105.
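As an illustration of the kind of downstream analysis that operates only on the stored codes, the following Python sketch compares per-bit frequencies between two groups of code vectors; the vectors are fabricated for demonstration and are not real data.

```python
from statistics import mean

# Fabricated neuro-topological code vectors for two groups of inputs.
# In practice these would be read from data storage device 125.
group_a = [[1, 0, 1, 1, 0], [1, 0, 0, 1, 0], [1, 1, 1, 1, 0]]
group_b = [[0, 0, 1, 0, 1], [0, 1, 1, 0, 1], [0, 0, 0, 0, 1]]

def bit_frequencies(codes):
    """Fraction of code vectors in which each bit (topological feature) is set."""
    return [mean(code[i] for code in codes) for i in range(len(codes[0]))]

freq_a = bit_frequencies(group_a)
freq_b = bit_frequencies(group_b)

# Bits whose frequency differs most between the groups point to topological
# features whose presence separates the two groups of underlying inputs.
diffs = [abs(a - b) for a, b in zip(freq_a, freq_b)]
ranked = sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)
print("per-bit frequency, group A:", freq_a)
print("per-bit frequency, group B:", freq_b)
print("bits ranked by group difference:", ranked)
```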

Fig. 2 is a representation of patterns 400 of activity that may be identified and "read" to generate the neuro-topological code 120 from the neural network 110 (Fig. 1).

The patterns 400 are representations of activity in a recurrent artificial neural network. To read the patterns 400, a functional graph is treated as a topological space with the nodes as points. Activity in nodes and links that is commensurate with a pattern 400 can be recognized as ordered regardless of the identity of the particular nodes and/or links that participate in the activity. In the illustrated implementation, the patterns 400 are all directed cliques or directed simplices. In such patterns, the activity originates at a source node that transmits signals to every other node in the pattern. In the patterns 400, such source nodes are designated as point 0, while the other nodes are designated as points 1, 2, … . Further, in a directed clique or simplex, one of the nodes acts as a sink and receives signals transmitted from every other node in the pattern. In the patterns 400, such sink nodes are designated as the highest-numbered point in the pattern. For example, in pattern 405 the sink node is designated as point 2, in pattern 410 the sink node is designated as point 3, in pattern 415 the sink node is designated as point 4, and so on. The activity represented by the patterns 400 is thus ordered in a distinguishable manner.

Each of the patterns 400 has a different number of points and reflects ordered activity in a different number of nodes. For example, pattern 405 is a two-dimensional simplex and reflects activity in three nodes, pattern 410 is a three-dimensional simplex and reflects activity in four nodes, and so on. As the number of points in a pattern increases, so do the degree of ordering and the complexity of the activity. For example, for a large collection of nodes with some degree of random activity within a window, some of that activity may, by chance, be commensurate with pattern 405. However, it becomes progressively less likely that random activity will be commensurate with the respective patterns 410, 415, 420, … . The presence of activity commensurate with pattern 430 therefore indicates a relatively higher degree of ordering and complexity in the activity than the presence of activity commensurate with pattern 405.
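To make the notion of a directed clique or simplex concrete, the following brute-force Python sketch (practical only for very small graphs) counts node subsets whose links can be ordered from a single source to a single sink, as in patterns 405, 410, 415; the adjacency matrix is an assumed example.

```python
from itertools import combinations, permutations

# Adjacency matrix of a small directed graph (1 = link from row node to
# column node). Illustrative values only.
adj = [
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
]
N = len(adj)

def is_directed_simplex(nodes):
    """True if the nodes can be ordered so every link goes from an earlier
    node to a later one: one source (point 0) and one sink (highest point)."""
    for order in permutations(nodes):
        if all(adj[order[i]][order[j]]
               for i in range(len(order))
               for j in range(i + 1, len(order))):
            return True
    return False

# Count directed simplices by dimension (a k-dimensional simplex has k+1 nodes).
for dim in (2, 3, 4):
    count = sum(1 for subset in combinations(range(N), dim + 1)
                if is_directed_simplex(subset))
    print(f"{dim}-dimensional directed simplices: {count}")
```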

Windows of different durations may be defined for different determinations of the complexity of the activity. For example, when activity commensurate with pattern 430 is to be identified, a longer duration window may be used than when activity commensurate with pattern 405 is to be identified.

Fig. 3 is a representation of patterns 500 of activity that may be identified and "read" to generate the neuro-topological code 120 from the neural network 110 (Fig. 1).

The patterns 500 are groups of directed cliques or directed simplices of the same dimension (i.e., having the same number of points) that define patterns involving more points than the individual cliques or simplices and that enclose cavities within the group of directed simplices.

By way of example, pattern 505 includes six different three-point, two-dimensional patterns 405 that together define a homology class of degree two, while pattern 510 includes eight different three-point, two-dimensional patterns 405 that together define a second homology class of degree two. Each of the two-dimensional patterns 405 in patterns 505, 510 can be considered to enclose a respective cavity. The nth Betti number associated with the directed graph provides a count of such homology classes within the topological representation.
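The role of Betti numbers can be illustrated with the Euler-Poincare formula for (undirected) simplicial complexes, chi = S0 - S1 + S2 - ... = b0 - b1 + b2 - ..., where Sk counts k-dimensional simplices and bk is the k-th Betti number. The short worked example below uses the hollow boundary of a tetrahedron, which encloses a single cavity; it is meant only to show what Betti numbers count, not to reproduce the directed analysis described here.

```python
# Euler-Poincare formula: the alternating sum of simplex counts equals the
# alternating sum of Betti numbers. Worked example: the hollow boundary of a
# tetrahedron (4 vertices, 6 edges, 4 triangular faces, no solid interior),
# which encloses a single cavity.

simplex_counts = {0: 4, 1: 6, 2: 4}      # S_k: number of k-dimensional simplices
betti_numbers = {0: 1, 1: 0, 2: 1}       # b_0: components, b_2: enclosed cavities

euler_from_simplices = sum((-1) ** k * s for k, s in simplex_counts.items())
euler_from_betti = sum((-1) ** k * b for k, b in betti_numbers.items())

print("chi from simplex counts:", euler_from_simplices)   # 4 - 6 + 4 = 2
print("chi from Betti numbers: ", euler_from_betti)        # 1 - 0 + 1 = 2
assert euler_from_simplices == euler_from_betti
```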

Activity represented by patterns such as the patterns 500 reflects a relatively high degree of ordering in the network's activity that is unlikely to arise by random chance. The patterns 500 can be used to characterize the complexity of that activity.

In some implementations, only some patterns of activity are identified, and/or some portion of the identified patterns of activity are discarded or otherwise ignored during the identification of decision moments. For example, referring to Fig. 2, activity commensurate with the five-point, four-dimensional simplex pattern 415 inherently includes activity commensurate with the four-point, three-dimensional and three-point, two-dimensional simplex patterns 410, 405. For example, points 0, 2, 3, 4 and points 1, 2, 3, 4 in the four-dimensional simplex pattern 415 of Fig. 2 are both commensurate with the three-dimensional simplex pattern 410. In some implementations, patterns that include fewer points, and therefore have a lower dimensionality, may be discarded or otherwise ignored during the identification of decision moments.

As another example, only some patterns of activity need be identified. For example, in some implementations only patterns with an odd number of points (3, 5, 7, … ) or an even number of dimensions (2, 4, 6, … ) are identified.
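A small Python sketch of such filtering, assuming pattern detection yields (dimension, node set) records; both selection rules below mirror the examples just given, and the records themselves are hypothetical.

```python
# Hypothetical detected patterns: (dimension, frozenset of participating nodes).
detected = [
    (2, frozenset({0, 1, 2})),
    (2, frozenset({0, 2, 3})),
    (3, frozenset({0, 1, 2, 3})),
    (4, frozenset({0, 1, 2, 3, 4})),
]

# Option 1: keep only patterns with an odd number of points (3, 5, 7, ...).
odd_point_patterns = [p for p in detected if (p[0] + 1) % 2 == 1]

# Option 2: discard patterns whose nodes are wholly contained in a
# higher-dimensional pattern, since their activity is implied by it.
maximal_patterns = [
    (dim, nodes) for dim, nodes in detected
    if not any(other_dim > dim and nodes < other_nodes
               for other_dim, other_nodes in detected)
]

print("odd-point patterns:", odd_point_patterns)
print("maximal patterns:  ", maximal_patterns)
```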

Fig. 4 is a schematic illustration of the determination of the timing of activity patterns that have distinguishable complexity. The determination represented in Fig. 4 may be performed as part of the identification or "reading" of patterns of activity to generate the neuro-topological code 120 from the neural network 110 (Fig. 1).

Fig. 4 includes a graph 605 and a graph 610. Graph 605 represents occurrences of patterns as a function of time along the x-axis. In particular, individual occurrences are represented schematically as vertical lines 606, 607, 608, 609. Each row of occurrences can be instances in which activity matches a respective pattern or class of patterns. For example, the top row of occurrences can be instances in which activity matches pattern 405 (Fig. 2), the second row instances in which activity matches pattern 410 (Fig. 2), the third row instances in which activity matches pattern 415 (Fig. 2), and so on.

Graph 605 also includes dashed rectangles 615, 620, 625 that schematically delineate different windows of time in which the activity patterns have distinguishable complexity. As shown, during the windows delineated by the dashed rectangles 615, 620, 625, activity in the recurrent artificial neural network is more likely to match patterns indicative of complexity than it is outside those windows.

Graph 610 represents the complexity associated with these occurrences as a function of time along the x-axis. Graph 610 includes a first peak 630 in complexity that coincides with the window delineated by the dashed rectangle 615, and a second peak 635 in complexity that coincides with the windows delineated by the dashed rectangles 620, 625. As shown, the complexity represented by peaks 630, 635 is distinguishable from what can be considered a baseline level 640 of complexity.

In some implementations, the times at which the output of the recurrent artificial neural network is read coincide with the occurrences of activity patterns that have distinguishable complexity. For example, in the illustrative context of Fig. 4, the output of the recurrent artificial neural network may be read at peaks 630, 635, i.e., during the windows delineated by the dashed rectangles 615, 620, 625.
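A minimal sketch of how such read-out windows might be located from a complexity trace, assuming a per-window complexity score has already been computed; the scores and the baseline-plus-spread rule below are illustrative assumptions.

```python
from statistics import mean, pstdev

# Illustrative complexity score per time window (e.g. a weighted count of
# the patterns matched in that window, with higher weights for higher dimensions).
complexity = [1.1, 1.0, 1.2, 3.8, 4.1, 1.3, 1.0, 0.9, 3.5, 3.9, 3.7, 1.1]

# Treat windows well above the overall baseline as decision moments.
baseline = mean(complexity)
spread = pstdev(complexity)
threshold = baseline + spread

decision_windows = [t for t, c in enumerate(complexity) if c > threshold]
print("baseline:", round(baseline, 2), "threshold:", round(threshold, 2))
print("windows to read the network output:", decision_windows)
```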

In some implementations, not only the timing but also the content of the output of the recurrent artificial neural network is given by the activity patterns that have distinguishable complexity. In particular, the identity and activity of the nodes that participate in activity commensurate with the activity patterns can be considered the output of the recurrent artificial neural network. The identified activity patterns can thus represent the result of processing by the neural network, while the timing of those patterns indicates when that result is to be read.

The content of the decision can be expressed in a variety of different forms. For example, in some implementations, the content of the decision can be expressed as a binary vector of ones and zeros, where each digit indicates, for a predefined group of nodes, whether the corresponding pattern of activity is present or absent. In such implementations, the content of the decision is expressed in binary and is compatible with conventional digital data processing infrastructure.

Embodiments of the operations and subject matter described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, program instructions may be encoded on an artificially generated propagated signal (e.g., a machine-generated electrical, optical, or electromagnetic signal) that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage media may be or be included in a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Further, although the computer storage medium is not a propagated signal, the computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium may also be or be included in one or more separate physical components or media, such as multiple CDs, disks, or other storage devices.

The operations described in this specification may be implemented as operations performed by data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.

The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or a combination of any or all of the foregoing. The apparatus can comprise special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment may implement a variety of different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.

A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with the instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Further, the computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game controller, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a Universal Serial Bus (USB) flash drive, to name a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, the computer may interact with the user by sending and receiving documents to and from the device used by the user; for example, by sending a web page to a web browser on the user's client device in response to a request received from the web browser.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.

A number of implementations have been described. However, various modifications may be made. Accordingly, other implementations are within the scope of the following claims.
