Trusted intermediate realms


Abstract: Memory access circuitry (26) controls access to memory based on ownership information which defines, for a given memory region, an owner realm specified from among two or more realms, each realm corresponding to at least a portion of a software process executed on processing circuitry (8). The owner realm has the right to prevent other realms from accessing data stored within the given memory region. When the security configuration parameters of a given realm specify that the given realm is associated with a trusted intermediate realm identified by those parameters, the trusted intermediate realm may be allowed to perform at least one realm management function for the given realm, for example provisioning keys and/or saving/restoring security configuration parameters. This enables use cases in which multiple instances of the same realm, with shared parameters, need to be established at different times on the same system, or on different systems. (Created by Nicholas Wood, 2019-09-03.)

1. An apparatus, comprising:

processing circuitry to perform data processing in response to one or more software processes;

memory access circuitry to control access to a plurality of memory regions based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of the software processes, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and

a realm management unit to control operation of a given realm based on security configuration parameters associated with the given realm; wherein:

when a security configuration parameter of the given realm specifies that the given realm is associated with a trusted intermediate realm identified by the security configuration parameter, the realm management unit is configured to allow the trusted intermediate realm to perform at least one realm management function of the given realm.

2. The apparatus of claim 1, wherein the realm management function comprises updating at least a portion of the security configuration parameters for the given realm.

3. The apparatus of any of claims 1 and 2, wherein each realm other than a root realm is associated with a respective parent realm which created that realm; and

the realm management unit is configured to support the security configuration parameters defining the trusted intermediate realm as a realm other than the parent realm of the given realm.

4. The apparatus of claim 3, wherein the realm management unit is configured to set, during establishment of the given realm, whether the given realm is associated with the trusted intermediate realm, based on at least one command issued by the parent realm.

5. The apparatus of any preceding claim, wherein the realm management unit is configured to provide, in response to an attestation command identifying a target realm, an attestation attesting to properties of the target realm.

6. The apparatus of claim 5, wherein, when the security configuration parameters of the target realm specify that the target realm is associated with the trusted intermediate realm, the attestation includes information indicating that the target realm is associated with the trusted intermediate realm.

7. The apparatus of any of claims 5 and 6, wherein, when the security configuration parameters of the target realm specify that the target realm is associated with the trusted intermediate realm, the attestation for the target realm specifies intermediate realm attestation information,

the intermediate realm attestation information attesting to properties of the trusted intermediate realm, or providing information enabling a recipient of the attestation to request an attestation of the trusted intermediate realm associated with the target realm.

8. The apparatus of any of claims 5 to 7, wherein the realm management unit is configured to prevent the processing circuitry from processing the given realm until the given realm has been activated; and

the realm management unit is configured to allow the trusted intermediate realm associated with the given realm to trigger generation of an attestation for the given realm before the given realm has been activated.

9. The apparatus of any preceding claim, wherein the realm management unit is configured to allow the trusted intermediate realm to provision at least one provisioned secret for the given realm, the provisioned secret comprising at least one of:

at least one key for protecting data associated with the given realm; and

keying material for deriving the at least one key.

10. The apparatus of claim 9, wherein the realm management unit is configured to prohibit realms other than the trusted intermediate realm from provisioning the at least one provisioned secret for the given realm.

11. The apparatus of any of claims 9 and 10, wherein the realm management unit is configured to prevent the processing circuitry from processing the given realm until the given realm has been activated;

the realm management unit is configured to allow the trusted intermediate realm to provision the at least one provisioned secret for the given realm before the given realm has been activated; and

the realm management unit is configured to prohibit the trusted intermediate realm from provisioning the at least one provisioned secret for the given realm after the given realm has been activated.

12. The apparatus of any of claims 9 to 11, wherein the realm management unit is configured to manage provisioning of the at least one provisioned secret based on key management policy information provided by the trusted intermediate realm.

13. The apparatus of any preceding claim, wherein the realm management unit is configured to allow the trusted intermediate realm to record a security configuration record indicative of at least a subset of the security configuration parameters associated with the given realm.

14. The apparatus of any preceding claim, wherein the realm management unit is configured to allow the trusted intermediate realm to update at least a subset of the security configuration parameters associated with the given realm based on a security configuration record previously recorded by the trusted intermediate realm.

15. The apparatus of any of claims 13 and 14, wherein the realm management unit is configured to manage recording of the security configuration record, or restoration of security configuration parameters from the security configuration record, based on policy information provided by the trusted intermediate realm.

16. The apparatus of any preceding claim, wherein the security configuration parameters of the given realm comprise at least one of:

a realm type;

a protected address range associated with the given realm;

an indication of whether debugging is enabled within the given realm;

an indication of whether data is allowed to be exported from a first memory, access to which is controlled by the memory access circuitry, to a second memory; and

keying material for deriving at least one key for protecting data associated with the given realm.

17. The apparatus of any preceding claim, wherein the owner realm has the right to prevent access to the given memory region by processes executed at a higher privilege level than the owner realm.

18. A method of data processing, comprising:

performing data processing in response to one or more software processes;

enforcing ownership rights for a plurality of memory regions based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of the software processes, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and

controlling operation of a given realm based on security configuration parameters associated with the given realm; wherein:

when a security configuration parameter of the given realm specifies that the given realm is associated with a trusted intermediate realm identified by the security configuration parameter, allowing the trusted intermediate realm to perform at least one realm management function of the given realm.

19. A computer program for controlling a host data processing apparatus to provide an instruction execution environment, comprising:

memory access program logic to control access to a plurality of memory regions of a simulated memory address space based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of a plurality of software processes executed in the instruction execution environment, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and

realm management program logic to control operation of a given realm based on security configuration parameters associated with the given realm; wherein:

when a security configuration parameter of the given realm specifies that the given realm is associated with a trusted intermediate realm identified by the security configuration parameter, the realm management program logic is configured to allow the trusted intermediate realm to perform at least one realm management function of the given realm.

20. A storage medium storing a computer program according to claim 19.

Technical Field

The present technology relates to the field of data processing.

Background

It is known to provide memory access control techniques for enforcing access rights to particular memory regions within a memory address space. Typically, these techniques are based on privilege levels, such that a process executing at a higher privilege level can exclude processes at lower privilege levels from accessing a memory region.

Disclosure of Invention

At least some examples provide an apparatus comprising: processing circuitry to perform data processing in response to one or more software processes; memory access circuitry to control access to a plurality of memory regions based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of the software processes, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and a realm management unit to control operation of a given realm based on security configuration parameters associated with the given realm; wherein: when the security configuration parameters of the given realm specify that the given realm is associated with a trusted intermediate realm identified by the security configuration parameters, the realm management unit is configured to allow the trusted intermediate realm to perform at least one realm management function of the given realm.

At least some examples provide a data processing method comprising: performing data processing in response to one or more software processes; enforcing ownership rights for a plurality of memory regions based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of the software processes, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and controlling operation of a given realm based on security configuration parameters associated with the given realm; wherein: when the security configuration parameters of the given realm specify that the given realm is associated with a trusted intermediate realm identified by the security configuration parameters, the trusted intermediate realm is allowed to perform at least one realm management function of the given realm.

At least some examples provide a computer program for controlling a host data processing apparatus to provide an instruction execution environment, comprising: memory access program logic to control access to a plurality of memory regions of a simulated memory address space based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms, each realm corresponding to at least a portion of at least one of a plurality of software processes executed in the instruction execution environment, the owner realm having the right to prevent other realms from accessing data stored within the given memory region; and realm management program logic to control operation of a given realm based on security configuration parameters associated with the given realm; wherein: when the security configuration parameters of the given realm specify that the given realm is associated with a trusted intermediate realm identified by the security configuration parameters, the realm management program logic is configured to allow the trusted intermediate realm to perform at least one realm management function of the given realm.

A storage medium may store a computer program. The storage medium may be a non-transitory storage medium.

Drawings

Further aspects, features and advantages of the present technology will become apparent from the following description of examples, read in conjunction with the accompanying drawings, in which:

FIG. 1 schematically illustrates a data processing system that includes a plurality of processing components that utilize memory regions stored within a first memory and a second memory;

FIG. 2 schematically illustrates the relationship between processes being performed, the privilege levels associated with the processes, and the realms associated with the processes for controlling which process owns a given memory region and thus has exclusive rights to control access to the given memory region;

FIG. 3 schematically shows memory regions managed by a realm management unit and a memory management unit;

FIG. 4 shows a more detailed example of one of the processing components and of realm management control data stored in memory;

FIG. 5 illustrates an example of a realm hierarchy in which a parent realm may define realm descriptors describing the properties of various child realms;

FIGS. 6 and 7 show two different examples of realm hierarchies;

FIG. 8 illustrates an example of a realm descriptor tree maintained by a parent realm to record the realm descriptors of its child realms;

FIG. 9 shows an example of the contents of a realm descriptor;

FIG. 10 is a table showing different realm lifecycle states;

FIG. 11 is a state machine diagram indicating changes in the lifecycle state of a realm;

FIG. 12 is a table showing the contents of entries in an ownership table for a given memory region;

FIG. 13 is a table showing visibility attributes which may be set for a given memory region to control which realms other than the owner are allowed to access the region;

FIG. 14 illustrates examples of different lifecycle states for memory regions, including states corresponding to RMU-private memory regions reserved for exclusive access by the realm management unit;

FIG. 15 is a state machine diagram showing transitions of the lifecycle state for a given memory region;

FIG. 16 illustrates how ownership of a given memory region may be transferred between a parent realm and its child realm;

FIG. 17A schematically illustrates memory access control provided based on page tables, which define memory control attributes dependent on privilege level, and on realm management unit tables, which provide an orthogonal level of control over memory access based on permissions set by the owner realm;

FIG. 17B illustrates an example of a translation look-aside buffer;

FIG. 18 is a flow chart illustrating a method of controlling access to memory based on a page table and an RMU table;

FIG. 19 illustrates the use of child realms corresponding to particular address ranges within a process associated with a parent realm of those child realms;

FIG. 20 illustrates an example of a parameter signature;

FIG. 21 illustrates a method of restricting realm activation when a parameter signature does not match an expected signature;

FIGS. 22 and 23 show two examples of use cases in which it may be desired to launch one realm with the same keys or security configuration parameters as used by another realm;

FIG. 24 illustrates an example in which a realm can be associated with a trusted intermediate realm;

FIG. 25 illustrates an exemplary method of performing a realm management function for a target realm using a trusted intermediate realm;

FIG. 26 illustrates a method of generating an attestation for a target realm; and

FIG. 27 shows an example of a simulator that can be used.

Detailed Description

In a privilege-based memory access control scheme, a more privileged process can set access permissions defining which regions of memory are accessible to less privileged processes. However, this generally means that any region accessible to a less privileged process is also accessible to more privileged processes. Hence, all applications executing under a given operating system or virtual machine may need to trust the software of that operating system or virtual machine, and all operating systems or virtual machines executing under a given hypervisor may need to trust that hypervisor.

In some use cases, this reliance on a more privileged process may not be desirable. For example, in a datacenter, multiple virtual machines may be provided by multiple different parties, each virtual machine executing under control of a hypervisor provided by a cloud platform provider that manages the datacenter. For example, a provider of a given virtual machine (or an application executing under a given virtual machine) may not wish to expose its data to a hypervisor or other virtual machine. For example, a banking provider may provide a virtual machine for executing banking applications, and may not want sensitive financial information accessible to a hypervisor or other virtual machine sharing the same physical platform.

The apparatus may have processing circuitry for performing data processing in response to one or more software processes, such as applications, operating systems/virtual machines, hypervisors, and so on. Memory access circuitry may be provided to control access to a plurality of memory regions of a memory address space based on ownership information defining, for a given memory region, an owner realm specified from among a plurality of realms. Each realm corresponds to at least a portion of at least one of the software processes. The owner realm of a given memory region has the right to prevent other realms from accessing the data stored within that region. Thus, in contrast to the privilege-based model, in which access permissions merely define which processes are allowed to access (read or write) a given memory region, in the realm-based approach the owner realm has the ability to control which other realms may access its own memory regions. Control of memory access is therefore distributed: different portions of the address space may be assigned different owner realms, each with control over access to its portion of the address space, rather than the typical privilege-based model in which a single process defines top-down rules for access to the address space by less privileged processes. This approach enables a given realm to protect its data from other processes, including processes operating at the same privilege level or at a higher privilege level.

In some examples, this realm-based approach may be applied in parallel with the privilege-based approach, so that there are multiple overlapping sets of access permissions for a given memory region: the privilege-based permissions set by more privileged processes, and the access permissions set by the owner realm of the memory region. A memory access may be allowed if it satisfies both sets of permissions.
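
To make the interaction of the two checks concrete, the following sketch models a serviced access as the conjunction of a privilege-based MMU check and a realm-based ownership check. All names (ownership_entry_t, access_permitted, the visibility bitmap) are invented for illustration; the actual checks are performed in hardware as described later.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;

typedef struct {
    realm_id_t owner;       /* owner realm recorded in the ownership table    */
    uint32_t   visibility;  /* simplified bitmap of realms granted access by  */
                            /* the owner (realm ids < 32 in this toy model)   */
} ownership_entry_t;

/* Privilege-based check: permissions set top-down by more privileged
 * software in the page tables (stubbed here for illustration). */
static bool mmu_permits(int exception_level, bool is_write)
{
    (void)is_write;
    return exception_level >= 0;  /* placeholder policy */
}

/* Realm-based check: only the owner realm, or a realm the owner has made
 * the region visible to, may access the region. */
static bool rmu_permits(const ownership_entry_t *e, realm_id_t requester)
{
    return e->owner == requester || ((e->visibility >> requester) & 1u);
}

/* A memory access is serviced only if BOTH orthogonal checks pass. */
bool access_permitted(const ownership_entry_t *e, realm_id_t requester,
                      int exception_level, bool is_write)
{
    return mmu_permits(exception_level, is_write)
        && rmu_permits(e, requester);
}
```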

A realm management unit may be provided to control operation of a given realm based on security configuration parameters associated with the given realm. For example, the security configuration parameters may define information such as a realm type (which may govern the nature of the realm, or which operations the realm is capable of performing), a protected address range associated with the given realm (which may mark the bounds of the memory regions which can securely be accessed by the given realm), and other information, such as whether debugging is enabled, or whether data may be exported from memory protected by the memory access circuitry to external memory outside the protected bounds. Furthermore, the security configuration parameters may, for example, include keying material for deriving at least one key for protecting data associated with the given realm. It will be appreciated that a wide variety of security configuration parameters could be defined for a realm.
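
As a rough illustration, the security configuration parameters of a realm might be pictured as a record along the following lines. The layout, field names, and widths are assumptions made for this sketch; the text does not prescribe any concrete encoding.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;

typedef struct {
    uint8_t    realm_type;          /* governs what the realm may do        */
    uint64_t   protected_base;      /* protected address range: base ...    */
    uint64_t   protected_limit;     /* ... and limit                        */
    bool       debug_enabled;       /* is debugging allowed within realm?   */
    bool       export_enabled;      /* may data leave protected memory?     */
    uint8_t    keying_material[32]; /* for deriving data-protection keys    */
    bool       has_intermediary;    /* is a trusted intermediate realm      */
    realm_id_t intermediary;        /*   identified for this realm?         */
} realm_security_config_t;
```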

In the techniques discussed below, the security configuration parameters for a given realm may specify that the given realm is associated with a trusted intermediate realm identified by those parameters. The realm management unit may allow the trusted intermediate realm to perform at least one realm management function for the given realm. Providing such a trusted intermediate realm can be useful for use cases in which separate instances of the same realm need to be established, e.g. at different points in time and/or on different physical platforms, where each instance needs access to some shared configuration parameters so that the realm behaves predictably regardless of which particular instance is established. It can be difficult for the given realm itself, or for the hardware platform, to establish such shared configuration parameters in a repeatable manner while still maintaining security and trust. Defining a trusted intermediate realm which is allowed to perform certain realm management functions for a given realm enables use cases such as secure migration of a realm from one platform to another, or provision of a shared key to multiple instances of the same realm executing on the same system or on different physical systems for load balancing or redundancy purposes. Since the trusted intermediate realm is itself a realm, with the ownership protections provided by the memory access circuitry, the security of the trusted intermediate realm itself can be verified and attested, providing trust that a realm managed by the trusted intermediate realm is secure.

The realm management function may include updating at least a portion of the security configuration parameters for the given realm. In some cases, the ability to update security configuration parameters may be limited to certain phases of the realm lifecycle, such as before the given realm is activated; a realm may not be processed by the processing circuitry until it has been activated.

By providing a trusted intermediate realm which can manage the updating of certain security configuration parameters, it becomes simpler to migrate realms between platforms, to save and restore previous versions of a realm, or to launch multiple instances of the same realm with a shared security configuration.

In one example, each realm other than a root realm may be associated with a corresponding parent realm which created it. The realm management unit may support the security configuration parameters defining the trusted intermediate realm as a realm other than the parent realm of the given realm. That is, while the trusted intermediate realm may be the parent realm (if desired), the architecture also allows the trusted intermediate realm to be a realm other than the parent realm.

Thus, while the parent realm initially establishes the realm, the parent itself may not always be trusted with certain security configuration parameters of the given realm, such as the keying material used to derive keys for protecting data used by that realm. Enabling a realm other than the parent to establish certain security configuration parameters of a managed target realm can give greater security when multiple realms need to share the same security configuration. For example, the trusted intermediate realm may be a realm running software provided by a banking provider, healthcare provider, or other party whose managed realms have access to certain sensitive information, while the parent realm may be a hypervisor executing on a cloud platform, which may not be trusted by the banking/healthcare provider, etc.

During establishment of the given realm, the realm management unit may set whether the given realm is associated with a trusted intermediate realm based on at least one command issued by the parent realm. Thus, both the parent realm and the trusted intermediate realm may have the ability to set certain security configuration parameters at certain stages of the given realm's lifecycle. As described below, a parameter signature scheme may be used to verify, before the realm is activated, that the security configuration parameters have been set correctly. This can be used to check whether the parent realm correctly configured the association with the trusted intermediate realm during realm set-up, so that a party which requires the realm to have certain security configuration settings defined by the trusted intermediate realm can check that the correct trusted intermediate realm is involved. This prevents a malicious parent realm from setting up the wrong trusted intermediate realm identification for a given realm.
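
The following sketch shows how a parent realm's set-up flow might look, assuming hypothetical command wrappers (rmu_realm_create and the other helpers are invented names, stubbed here) and the parameter-signature check mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;

/* Hypothetical RMU command wrappers -- placeholder stubs for illustration. */
static realm_id_t rmu_realm_create(realm_id_t p) { (void)p; return 7; }
static bool rmu_realm_set_intermediary(realm_id_t c, realm_id_t i)
    { (void)c; (void)i; return true; }
static bool rmu_realm_activate(realm_id_t c) { (void)c; return true; }
static bool signature_matches_expected(realm_id_t c) { (void)c; return true; }

/* Parent realm's view of establishing a child realm bound to a trusted
 * intermediate realm. */
bool establish_managed_realm(realm_id_t parent, realm_id_t intermediary,
                             realm_id_t *out_child)
{
    realm_id_t child = rmu_realm_create(parent);

    /* The association may only be set during establishment, by command
     * of the parent realm. */
    if (!rmu_realm_set_intermediary(child, intermediary))
        return false;

    /* Before activation, the parameter signature is compared with the
     * expected signature, so a parent which bound the wrong intermediary
     * is detected and the realm is never activated. */
    if (!signature_matches_expected(child))
        return false;

    *out_child = child;
    return rmu_realm_activate(child);
}
```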

The realm management unit may support an attestation function, in which, in response to an attestation command identifying a target realm, the realm management unit provides an attestation attesting to properties of the target realm. For example, the attestation may include information derived from the security configuration parameters of the target realm and/or the contents of memory regions owned by the target realm. The attestation may be signed with a key which attests to its authenticity.

For a target realm whose security configuration parameters specify that it is associated with a trusted intermediate realm, the attestation may include information indicating that the target realm is associated with the trusted intermediate realm. Further, the attestation may include intermediate realm attestation information: either information directly attesting to properties of the trusted intermediate realm, or information enabling the recipient of the attestation to request a further attestation of the trusted intermediate realm associated with the target realm. For example, the intermediate realm attestation information may simply be an identifier of the trusted intermediate realm, and a subsequent attestation command identifying the trusted intermediate realm as the target realm may then be issued to generate a further attestation for the trusted intermediate realm. Thus, in general, when attesting a given realm, the realm management unit may also provide information enabling the verifying entity to attest the associated trusted intermediate realm, so that it can establish trust that the realm was correctly configured by the trusted intermediate realm.
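
An attestation carrying the intermediary association might be pictured as follows. The token layout is an assumption made for illustration; the only requirements from the text are the indication of the association and the intermediate realm attestation information.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;

typedef struct {
    uint8_t    config_digest[32];  /* derived from security config params  */
    uint8_t    content_digest[32]; /* derived from owned memory contents   */
    bool       has_intermediary;   /* target is bound to an intermediary   */
    realm_id_t intermediary;       /* lets the verifier issue a further    */
                                   /* attestation command for it           */
    uint8_t    signature[64];      /* signed with an attestation key       */
} attestation_token_t;
```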

The realm management unit may prevent the given realm from being processed by the processing circuitry until the given realm has been activated. Before the given realm has been activated, the trusted intermediate realm associated with it may be allowed to trigger generation of an attestation of the given realm. This can be used by the trusted intermediate realm to verify whether the security configuration parameters of the given realm have been configured correctly before provisioning keys or other configuration information to the realm, which improves security. Conversely, if an attestation command is issued by a realm other than the trusted intermediate realm associated with the target realm, the attestation command may be rejected when the target realm has not yet been activated.

In one example, the realm management function performed by the trusted intermediate realm may be to provision at least one provisioned secret for the given realm. The provisioned secret may comprise at least one key for protecting data (including data values and/or program code) associated with the given realm, and/or keying material used to derive such a key. By defining a trusted intermediate realm which can be trusted to provision secrets used to protect the contents of another realm's owned memory regions, multiple instances of the same realm can be established securely and in a timely manner, on the same or different physical platforms. The realm management unit may prohibit any realm other than the trusted intermediate realm from provisioning the provisioned secret for the given realm.

Provisioning of the at least one provisioned secret for a given realm may be limited to before the given realm has been activated; after activation, the trusted intermediate realm may be prohibited from provisioning these secrets for the given realm. This improves security, since certain checks can be performed at activation of the given realm, and it ensures that the provisioned secrets can be verified before the realm is activated and can be processed. Management of provisioning the at least one provisioned secret may be based on key management policy information provided by the trusted intermediate realm. For example, the key management policy may specify how many realms may be given a particular secret held by the trusted intermediate realm, a time period within which a given version of the secret may be provisioned to realms managed by the trusted intermediate realm, or other conditions to be verified by the trusted intermediate realm as being satisfied by a managed realm before the secret can be provisioned to it. The policy itself can be attested by attesting the properties of the trusted intermediate realm using the attestation mechanisms discussed above.
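
Here is a sketch of the provisioning rule, combining the lifecycle restriction with an example policy check. The realm_t and key_policy_t types, and the specific policy fields (instance count, deadline), are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;
typedef enum { REALM_NEW, REALM_ACTIVE, REALM_INVALID } realm_state_t;

typedef struct {
    realm_state_t state;
    bool          has_intermediary;
    realm_id_t    intermediary;
    uint8_t       provisioned_secret[32];
    bool          secret_valid;
} realm_t;

typedef struct {
    unsigned max_instances;   /* how many realms may share this secret */
    unsigned instances_used;
    uint64_t not_after;       /* provisioning deadline for this version */
} key_policy_t;

bool rmu_provision_secret(realm_t *r, realm_id_t requester,
                          const uint8_t secret[32], key_policy_t *p,
                          uint64_t now)
{
    /* Only the trusted intermediate realm bound to this realm may
     * provision secrets, and only before the realm is activated. */
    if (!r->has_intermediary || requester != r->intermediary) return false;
    if (r->state != REALM_NEW) return false;

    /* Example policy checks applied on behalf of the intermediary. */
    if (p->instances_used >= p->max_instances || now > p->not_after)
        return false;

    for (int i = 0; i < 32; i++) r->provisioned_secret[i] = secret[i];
    r->secret_valid = true;
    p->instances_used++;
    return true;
}
```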

The at least one provisioned secret need not be the only type of key used by a given realm. Other types of secret may also exist, such as "instance-unique" secrets derived from characteristics of the particular hardware instance on which the realm runs. Such secrets remain the same if a given realm executes on a particular platform and then reboots on the same platform, but differ if the realm reboots on a different platform. Instance-unique secrets can be derived securely without a trusted intermediate realm, for example using hardware of the physical platform while operating within the given realm. Another type of secret may be a "realm-unique" root secret, which may be generated, for example, using a pseudo-random number generator accessed by the software of the given realm itself, and which may differ each time the realm is restarted, even if restarted on the same system.

However, where a realm needs to be migrated to a different physical platform, or where multiple instances of the same realm need to be created and must access a shared secret so that they can all securely access the same data, these instance-unique or realm-unique root secrets are not suitable. Providing a trusted intermediate realm to manage provisioning of at least one provisioned secret, as described above, helps to solve this problem.

In another example, the realm management unit may allow the trusted intermediate realm to record a security configuration record indicating at least a subset of the security configuration parameters associated with the given realm. In general, realms other than the given realm itself may not be allowed to access its security configuration parameters. In some implementations, only the realm management unit may be allowed to read the security configuration parameters of a given realm (even the given realm itself may not be allowed to read them). However, by identifying a specific trusted intermediate realm in the security configuration parameters of a given realm, and allowing that trusted intermediate realm to record indications of certain security configuration parameters, migration and save/restore of realms can be supported.

For example, the realm management unit may allow the trusted intermediate realm to update at least a subset of the security configuration parameters associated with a given realm based on a security configuration record previously recorded by the trusted intermediate realm. Thus, if a realm needs to be migrated from one physical platform to another, a trusted intermediate realm can be established on both the source and destination platforms; the trusted intermediate realm on the source platform can record a security configuration record for the given realm, encrypt it using its own key, and send the encrypted data to the corresponding instance of the trusted intermediate realm on the destination platform, which can then decrypt the record and restore the security configuration parameters from it. This enables secrets or configuration information associated with a given realm to survive the migration process while maintaining security. Another example use is where a realm is to be terminated so that its processing resources can be made available for other purposes, and the same realm is later re-established while still having access to the shared secrets. Similar approaches may support related use cases, such as backing up and later restoring a complete realm, or taking a snapshot or checkpoint which allows the realm to be rolled back to a previously known state. Thus, the trusted intermediate realm enables a number of operations which might otherwise not be secure.
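
The save/restore flow might be sketched as follows, with the intermediary's encryption stubbed out (a real implementation would use authenticated encryption under a key held by the trusted intermediate realm; all names here are invented).

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef struct { uint8_t bytes[64]; } config_record_t;

/* Placeholders for authenticated encryption/decryption under the
 * intermediary's own key (a real implementation would use an AEAD). */
static void intermediary_encrypt(config_record_t *r) { (void)r; }
static bool intermediary_decrypt(config_record_t *r) { (void)r; return true; }

/* Source platform: record the realm's parameters into an encrypted blob. */
void save_config(const uint8_t params[64], config_record_t *out)
{
    memcpy(out->bytes, params, 64);   /* record at least a subset       */
    intermediary_encrypt(out);        /* protect the record for transit */
}

/* Destination platform: restore parameters into the new realm instance,
 * before that instance is activated. */
bool restore_config(config_record_t *in, uint8_t params[64])
{
    if (!intermediary_decrypt(in))    /* reject tampered records */
        return false;
    memcpy(params, in->bytes, 64);
    return true;
}
```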

Management of recording the security configuration record, and of restoring security configuration parameters from the security configuration record, may be based on policy information provided by the trusted intermediate realm. Again, the policy itself can be attested by attesting the trusted intermediate realm.

The security configuration parameters for a given realm may include at least one of: a realm type; a protected address range associated with the given realm; an indication of whether debugging is enabled within the given realm; an indication of whether data is allowed to be exported from a first memory, access to which is controlled by the memory access circuitry, to a second memory; and keying material for deriving at least one key for protecting data associated with the given realm. Further, the security configuration parameters may include the identification of the trusted intermediate realm as discussed above.

The realm management unit may be implemented in different ways. In one example, the realm management unit may be a dedicated hardware unit implementing the security protections provided by the realm scheme. In other examples, the realm management unit may comprise software executing on the processing circuitry, distinct from the software associated with each realm.

The above examples describe an apparatus having the memory access circuitry and realm management unit discussed above. However, in another example, a corresponding computer program may be provided for controlling a host data processing apparatus to provide an instruction execution environment for executing instructions. The computer program may include memory access program logic and realm management program logic, which correspond functionally to the memory access circuitry and realm management unit discussed above. For example, the computer program may be a simulator program, which presents to software executing on the simulator an execution environment similar to that provided by an actual hardware device, even though the host computer running the simulator may not itself have hardware providing the architectural features expected by that software. Instead, the functionality of the expected hardware architecture, including enforcement of ownership rights and realm management based on commands issued by a realm's trusted intermediate realm, may be emulated by providing program logic (such as sets of instructions or data structures) which enables a general purpose host computer to execute code intended for a device with the realm protections described above, in a manner compatible with the results that would be achieved on a device actually having the memory access circuitry and realm management unit described above. The simulator computer program for controlling the host data processing apparatus may be stored on a storage medium, which may be a non-transitory storage medium.

Fig. 1 schematically shows a data processing system 2 comprising a system-on-chip integrated circuit 4 connected to a separate non-volatile memory 6, such as an off-chip flash memory serving as a mass storage device. The system-on-chip integrated circuit 4 includes a plurality of processing components in the form of (in this exemplary embodiment) two general purpose processors (CPUs) 8, 10, and a Graphics Processing Unit (GPU) 12. It will be appreciated that in practice many different forms of processing components may be provided, such as additional general purpose processors, graphics processing units, Direct Memory Access (DMA) units, co-processors, and other processing components used to access memory regions within a memory address space and perform data processing operations on data stored within these memory regions.

The general purpose processors 8, 10 and the graphics processing unit 12 are coupled to interconnect circuitry 14, via which they conduct memory transactions with on-chip memory 16 and external memory 6 (via external memory interface 18). Although memory 16 is on-chip in FIG. 1, in other embodiments the memory 16 may instead be implemented as off-chip memory. The on-chip memory 16 stores data corresponding to a plurality of memory regions within an overall memory address space. These memory regions correspond to memory pages and are subject to management operations which control which memory regions (pages) are present within the on-chip memory 16 at a given time, which processes can access the data stored within those memory regions, and other parameters associated with the memory regions. More specifically, in this exemplary embodiment, each of the processing components 8, 10, 12 includes a realm management unit 20, 22, 24 and a general purpose memory management unit 26, 28, 30. The general purpose memory management units 26, 28, 30 control aspects of the operation of the memory regions, such as address mapping (e.g. mapping between virtual addresses and intermediate physical addresses, or physical addresses), privilege-level constraints on which processes can access a given memory region, storage characteristics of the data within a given memory region (e.g. cacheability, device memory status, etc.), and other characteristics of the memory regions.

The realm management units 20, 22, 24 manage data used to enforce ownership rights over the plurality of memory regions, whereby a given memory region has a given owning process (or owner "realm") specified from among a plurality of processes (the process or realm being, for example, one of a monitor program, a hypervisor program, a guest operating system program, an application program, or the like, or a specific sub-portion of such a program). The given owning process (owner realm) for a given memory region has exclusive rights to control access to the owned data stored within that memory region. In particular, the owner process has the right to prevent access to its owned memory region by processes executing at a higher privilege level than the owner process.

Thus, the plurality of memory regions is divided among a plurality of owner realms. Each realm corresponds to at least a portion of at least one software process and is assigned ownership of a number of memory regions. The owning process/realm has the right to control access to the data stored within the memory regions of its realm, including the right to prevent more privileged processes from accessing the realm's owned regions. Management and control of which memory regions are mapped to each realm is performed by a process other than the owner realm itself. With this arrangement, a process such as a hypervisor can control which memory regions (pages of memory) are contained within realms owned by the respective guest virtual machines (guest operating systems) it manages, and yet the hypervisor itself need not have the right to actually access the data stored within the memory regions it has allocated to a given realm. Thus, for example, a guest operating system can keep the data stored within its own realm (i.e. within the memory regions owned by the guest operating system) private from its managing hypervisor.

The division of the memory address space into realms, and the control of ownership of those realms, is managed by the realm management units 20, 22, 24 associated with each of the processing components 8, 10, 12, and is a control process orthogonal to the more traditional form of control provided by the general purpose memory management units 26, 28, 30. The realm management units 20, 22, 24 thus provide memory access circuitry which enforces ownership rights over the memory regions of the memory address space. In some cases, the memory access circuitry enforcing realm ownership rights may also include parts of the MMUs 26, 28, 30 (e.g. a TLB in an MMU 26, 28, 30 may include some control data for controlling access based on the realm controls provided by the RMUs 20, 22, 24, to avoid the need to access two separate structures). In this exemplary embodiment, each of the processing components 8, 10, 12 has its own realm management unit 20, 22, 24, which is advantageous for performance. More generally, however, the memory access circuitry enforcing ownership rights may comprise a single instance of a realm management unit, a combination of all the realm management units 20, 22, 24 present, or a subset of them. Thus, the memory access circuitry for enforcing ownership rights may be distributed across the system-on-chip integrated circuit 4 in association with the different processing components 8, 10, 12, or collected together in one location, or arranged in some other configuration.

The processing components comprising the general purpose processors 8, 10 are shown as including respective decode and execute circuitry 32, 34, which decode and execute program instructions. These program instructions include commands for controlling the management of memory regions within the different ownership realms of the memory address space (realm management commands or RMU commands). As an example, the executed program instructions may include program instructions designated as realm management unit commands which, when encountered within the program instruction stream, are directed to the associated realm management unit 20, 22, 24 so that they can be executed (acted upon) by it. Examples of realm management unit commands include commands to initialize a new realm or invalidate an existing realm, commands to allocate a memory region to a particular realm or remove a memory region from a particular realm, and commands to export the data contained within a memory region from the first memory 16 to the second memory 6, with the exported data protected within the second memory 6 by encryption and other processes. Further realm management unit commands are provided to import data from the second memory 6 back into the first memory 16, with associated decryption and validation operations performed on the imported data.
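
By way of illustration only, such a command set might be enumerated as below. The mnemonics are invented for this sketch and do not correspond to any published command encoding.

```c
/* Illustrative realm management unit (RMU) command set, loosely mirroring
 * the examples in the text above; names are invented for this sketch. */
typedef enum {
    RMU_CMD_REALM_INIT,       /* initialize a new realm                    */
    RMU_CMD_REALM_INVALIDATE, /* invalidate an existing realm              */
    RMU_CMD_GRANULE_ADD,      /* allocate a memory region to a given realm */
    RMU_CMD_GRANULE_REMOVE,   /* remove a memory region from a given realm */
    RMU_CMD_EXPORT,           /* encrypt region data out to second memory  */
    RMU_CMD_IMPORT            /* decrypt, validate and import data back in */
} rmu_cmd_t;
```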

In the context of such export and import of data from memory regions, it will be appreciated that a first memory, such as the on-chip memory 16, is closely managed by the realm management units 20, 22, 24 within the system-on-chip integrated circuit 4, and so these realm management units 20, 22, 24 are able to enforce ownership rights and restrict access to the data within a given memory region to the owning process, or to those processes which the owning process has granted access. However, when the data within such a memory region is exported, for example to the external non-volatile memory 6 serving as a second memory, the access control provided by the realm management units 20, 22, 24 is no longer effective, and so the data needs to be protected in some other way. This is achieved by encrypting the data within a memory region before it is exported, and then decrypting it with a secret key when it is imported back into the on-chip memory 16.

The export process may be accompanied by the generation of metadata specifying characteristics of the exported data. This metadata may be stored separately within a metadata memory region of the first memory (on-chip memory 16), where it is kept private to the realm management units 20, 22, 24 (i.e. accessible only to the realm management units 20, 22, 24 and not to any of the existing processes). When the data is imported back into the on-chip memory 16, the metadata is read, and the characteristics of the imported data are checked against the characteristics represented in the metadata to ensure the integrity of the imported data (e.g. checksums, data size, signatures, etc.). If private data of the realm management units 20, 22, 24 (including the above metadata characterizing exported regions/pages) itself needs to be exported from the on-chip memory 16 to the off-chip non-volatile memory 6 (e.g. to make room within the on-chip memory 16), the RMU-private metadata itself may be encrypted for protection, and new metadata characterizing the exported metadata may be retained within the on-chip memory 16 (this retained metadata being significantly smaller in size than the exported metadata), so that when the encrypted exported metadata is imported back into the on-chip memory 16 for use, it can be checked and validated.
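
A sketch of the metadata record and the import-time integrity check follows; the field choice (size, checksum, signature) follows the text, while the layout and the toy checksum are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t data_size;      /* expected size of the imported data  */
    uint32_t checksum;       /* expected checksum of the ciphertext */
    uint8_t  signature[64];  /* authenticates the metadata itself   */
} export_metadata_t;

static uint32_t checksum32(const uint8_t *p, size_t n)
{
    uint32_t c = 0;
    while (n--) c = ((c << 1) | (c >> 31)) ^ *p++;   /* toy checksum */
    return c;
}

/* On import, the characteristics recorded in the RMU-private metadata are
 * compared with the incoming data before it is accepted. */
bool import_is_valid(const export_metadata_t *m,
                     const uint8_t *data, size_t size)
{
    return size == m->data_size && checksum32(data, size) == m->checksum;
    /* signature verification of the metadata itself is omitted here */
}
```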

This metadata, describing the characteristics of memory regions and of the data stored within them, may be arranged as part of a hierarchical structure, such as a metadata memory region tree with a branching pattern. The form of this metadata memory region tree may be determined under software control, as different regions of the memory address space are registered to serve as metadata regions owned by the realm management units 20, 22, 24. It will be appreciated that, while the software controlling the registration of such memory regions is able to allocate and deallocate the memory regions used to store metadata, and to control the relationships between them, that software does not itself own the data contained within those regions in the sense of being able to control which processes can access it. In the case of memory regions which are private to the realm management units 20, 22, 24 (i.e. the memory access circuitry), such access rights may be restricted to the realm management units 20, 22, 24 themselves, and this RMU-private data is not shared with any other process.

When given data stored within a given memory region has been exported, the memory region in question is invalidated so that its contents are inaccessible. To reuse the page, the memory region is made valid again using a Clean command, which overwrites the memory region with other data unrelated to the previous contents, so that the previous contents are not made accessible to another process when the given memory region is released for use by that process. For example, the contents of the given memory region may be written entirely with zero values, a fixed value, or random values, thereby overwriting the original contents of the memory region. In other examples, overwriting of the contents of an exported memory region may be triggered by the export command itself rather than by a subsequent Clean command. Either way, the exported owned data may be overwritten with values unrelated to that data before the given memory region is made accessible to processes other than the owning process. When a given memory region owned by a given process is to be exported, the realm management unit 20, 22, 24 executing the realm command performing the export takes ownership of the memory region in question away from the given process (even if the region is RMU-private), locks access to that memory region against all other processes (and all other realm management units), performs the export operation (including encryption, metadata generation, and overwriting), and then unlocks access to the memory region and releases ownership of it. Thus, a memory region which is in the process of being exported or imported may remain private to the realm management unit concerned while the command is being performed.
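
The export sequence just described can be summarized as the following ordered steps; every helper is a hypothetical stand-in for RMU-internal behaviour rather than a real API.

```c
#include <stdbool.h>

typedef struct region region_t;

/* Hypothetical RMU-internal steps (stubs for illustration only). */
static void rmu_take_ownership(region_t *r)        { (void)r; }
static void rmu_lock(region_t *r)                  { (void)r; }
static void rmu_encrypt_and_write_out(region_t *r) { (void)r; }
static void rmu_generate_metadata(region_t *r)     { (void)r; }
static void rmu_overwrite_contents(region_t *r)    { (void)r; }
static void rmu_unlock(region_t *r)                { (void)r; }
static void rmu_release_ownership(region_t *r)     { (void)r; }

void rmu_export_region(region_t *r)
{
    rmu_take_ownership(r);          /* region becomes RMU-private        */
    rmu_lock(r);                    /* exclude all other processes/RMUs  */
    rmu_encrypt_and_write_out(r);   /* protect data in the second memory */
    rmu_generate_metadata(r);       /* checksum/size/signature record    */
    rmu_overwrite_contents(r);      /* scrub the previous contents       */
    rmu_unlock(r);
    rmu_release_ownership(r);       /* region may now be reused          */
}
```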

Figure 2 schematically shows the relationship between a number of processes (programs/threads), a number of exception levels (privilege levels), the secure and non-secure processor domains, and a number of realms representing ownership of given memory regions. As shown, a hierarchy of privilege levels extends from exception level EL0 to exception level EL3 (with exception level EL3 having the highest privilege level). The operating state of the system may be divided between a secure operating state and a non-secure operating state, represented by the secure and non-secure domains of a processor architecture such as the TrustZone® architecture provided by Arm® Limited of Cambridge, UK.

As shown in FIG. 2, memory access circuitry (the realm management units 20, 22, 24 and associated control software (e.g. millicode running on a realm management unit)) manages a plurality of realms within the execution environment. A given memory region (memory page) is owned by a particular realm. A realm may have child realms within it, and grandchild realms within those child realms (see, e.g., realm A (parent), realm B (child), and realm C (grandchild)). Ownership of memory regions given to realm A may then be transferred from realm A to realm B under control of the process owning realm A. Thus, a parent realm can give ownership of regions to its own child realms. Those child realms can in turn transfer ownership of memory regions they received from their parent realm, so that these regions are subsequently owned by their own child realms (e.g. realm C), which are grandchild realms of the original realm (i.e. realm A), as sketched below. Processes within a given realm may execute at the same privilege level or at different privilege levels. The realm to which a process belongs is thus an orthogonal parameter relative to the privilege level of the process, although in many practical cases realms and privilege levels may correspond, since a convenient mechanism for moving between realms may involve the use of exceptions, which themselves move the system between different privilege levels (exception levels).
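
Here is a sketch of the ownership hand-down rule (realm A to realm B to realm C). The table layout is invented, and in the real system the transfer is performed via RMU commands rather than direct table writes.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t realm_id_t;

typedef struct {
    realm_id_t parent_of[16];   /* parent_of[r] = parent realm of realm r  */
                                /* (up to 16 realms in this toy model)     */
} realm_tree_t;

typedef struct {
    realm_id_t owner;           /* owning realm recorded for the region */
} region_entry_t;

/* A transfer is permitted only from the current owner of the region to
 * one of that owner's own child realms. */
bool transfer_ownership(const realm_tree_t *t, region_entry_t *region,
                        realm_id_t from, realm_id_t to)
{
    if (region->owner != from)    return false;  /* only the owner gives  */
    if (t->parent_of[to] != from) return false;  /* only to its own child */
    region->owner = to;
    return true;
}
```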

Fig. 3 schematically shows the realm management unit 20 and the general purpose memory management unit 26, which respectively perform different management operations on a plurality of memory pages (memory regions) stored within the on-chip memory 16. As shown, the realm management unit 20 uses a plurality of realm descriptors 42, each descriptor specifying properties of a realm. The realm management unit 20 may also maintain a realm granule table (or ownership table) comprising entries indexed by physical address, each entry including information for a corresponding memory region, including an indication of which realm that memory region belongs to, i.e. which realm has exclusive rights to control access to the data within that memory region, even though that realm does not itself control whether it owns the memory region. The realm descriptors and realm granule table entries may be stored in memory 16, but may also be cached within the RMU itself. Thus, as shown in FIG. 3, the different memory regions have different owning realms, as indicated by the realm designations RA, RB, RC, RD and RE. Some of the memory regions are also owned by (private to) the realm management unit 20 and are marked RMU-private. Such RMU-private regions may be used to store metadata describing characteristics of other memory regions, to temporarily hold memory regions being exported or imported, or for other purposes of the realm management unit 20 itself. RMU-private regions may still be owned by a corresponding owner realm, but may not be accessible to general read/write accesses issued by the owner realm (instead, any changes to an RMU-private region may only be triggered by RMU commands issued to the RMU 20).

The memory regions may be addressed by virtual addresses, intermediate physical addresses, or physical addresses, depending on the particular system concerned. The realm management unit 20 and the general purpose memory management unit 26 may therefore store translation data enabling received addresses (whether virtual memory addresses or intermediate physical addresses) to be translated into addresses, such as physical addresses, which more directly represent the memory regions within the on-chip memory 16 concerned. Such address translation data may be managed, and distributed within the system-on-chip integrated circuit 4, using translation look-aside buffers and other distributed control mechanisms.

Fig. 4 shows a more detailed example of one of the processing components 8, 10, 12 of FIG. 1, and of the control data stored in the memory 16 for controlling memory accesses. For ease of explanation, FIG. 4 shows CPU 0 as the processing component 8, but it will be appreciated that the processing component could equally be CPU 1 10 or the GPU 12, or any other processing component within the data processing apparatus 2. As shown in FIG. 4, the processing component 8 includes processing circuitry 32 (which may include the decode and execute logic described above); a memory management unit 26, which may include one or more translation look-aside buffers 100 for caching entries of the translation tables (which may also be appended with realm-based control data from the RMU 20, if a shared MMU-RMU TLB structure is used); and a table walk unit 102 for controlling allocation of data to the TLBs 100 and triggering walk accesses to memory to locate the data required for controlling whether a given memory access is allowed to be performed. The processing component 8 may also include a cryptographic unit 104, which can perform cryptographic operations for encrypting or decrypting data, for example for use in the paging (export/import) operations discussed above. The processing component 8 also includes a number of caches 110, which may cache data or instructions read from the memory 16. If an access to memory triggered by the processing circuitry 32 or by the table walk unit 102 misses in the caches, the data can be located from main memory 16.

The processing component 8 also includes the realm management unit 20 discussed above. In some embodiments, the realm management unit (RMU) 20 may be provided as a hardware circuit. However, some of the RMU operations discussed below may be relatively complex to implement purely in hardware, for example where they require multiple accesses to different memory regions. Thus, in some examples, the RMU 20 may be implemented using program code which is stored within the data processing apparatus 2 and executed using the general purpose processing circuitry 32. Unlike general purpose software, which may be written to the memory 16 and be rewritable, the RMU software (millicode) may be installed on the data processing apparatus in a relatively permanent manner, such that it cannot be removed, and may be regarded as part of the platform provided by the processing system. For example, the RMU program code may be stored in read-only memory (ROM). Thus, the RMU may comprise a hardware unit, or may comprise the processing circuitry 32 executing realm management software, triggered for execution by RMU commands included in the general purpose software executed by the processing circuitry 32. In some examples, the RMU 20 may be implemented using a combination of hardware and software: e.g. some simpler functions may be implemented using hardware circuitry for faster processing, while more complex functions are implemented in millicode. Thus, it will be appreciated that subsequent references to the RMU may refer to hardware, software, or a combination of both.

As shown in FIG. 4, the memory 16 may store multiple pieces of control information used by the MMU 26 and RMU 20 to control access to the memory. This includes translation tables (also called page tables) 120 that define memory access attributes for controlling which processes are allowed to access a given memory region, as well as address mapping information for translating virtual addresses to physical addresses. Translation tables 120 may be defined based on the exception levels discussed above with respect to FIG. 2, such that a process executing at a more privileged exception level may set a permission that governs whether a process executing at a less privileged exception level is allowed to access the corresponding memory region.

In addition, a number of domain management tables, or domain control information, 122 are provided for controlling memory accesses in an orthogonal manner relative to the MMU page tables 120, allowing a less privileged process to control whether a more privileged process is allowed access (the domain control is orthogonal to the MMU control in the sense that a memory access request may need to pass both types of access control check before it can be serviced). Using the realm management tables, the owner process (realm) that owns a given memory region has the right to exclude processes executing at more privileged exception levels from accessing that memory region. The domain management data includes domain descriptors 124 that describe the properties of a given domain. Each domain corresponds to at least a portion of at least one software process executed by the processing circuitry 32. Some domains may correspond to two or more processes, while other domains may correspond to only a sub-portion of a given software process. A realm can also be viewed as mapping to a given region of the memory address space (the processing circuitry 32 executes within a given realm while it is executing program instructions located within the corresponding region of the memory address space). Thus, a domain may be considered either as a collection of software processes or a portion of a software process, or as a region of the memory address space. These two views are equivalent. For ease of explanation, the subsequent description refers to a domain as at least part of at least one software process, but the corresponding view of a domain as a collection of memory regions is equally valid (in which case "entry" to and "exit" from the domain may correspond to program execution entering or leaving the portion of the memory address space corresponding to the domain).

The domain management data 122 also includes domain execution context regions 126 that can be used to save and restore the architectural state associated with a given domain upon domain exit or entry. The domain management data also includes a domain group table (or ownership table) 128 that defines, for each region of the memory address space, which domain is the owner domain of that memory region. The owner domain of a given memory region has the right to exclude other domains (including more privileged processes) from accessing data stored within that memory region. The use of this domain management data is discussed in more detail below. In general, the domain management unit 20 and the MMU 26 can be regarded as memory access circuitry that enforces the ownership rights defined by an owner domain for the memory regions that it owns. This may be particularly useful, for example, for a cloud platform in which multiple virtual machines 36 provided by different parties execute under the control of a manager (hypervisor) 38 provided by the cloud server operator. A party providing one of the virtual machines may not want its data and code to be accessible to the manager. By introducing the domain concept, in which a domain executing at a less privileged exception level can exclude more privileged exception levels from accessing its data or instructions, it becomes possible to provide a "blind" manager, which may increase the confidence of code developers in installing their software on a cloud service whose physical hardware may be shared with code provided by others.
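To make the "two orthogonal checks" concrete, the following C sketch shows how a single access decision could combine the privilege-based page-table attributes with the ownership-table visibility attributes. All type and field names here are illustrative assumptions (the actual checks are performed by the MMU/RMU hardware structures described in this document, not by software):

```c
#include <stdbool.h>

struct mmu_attrs {                   /* from the privilege-based page tables */
    bool read_allowed, write_allowed;
};

struct ownership_attrs {             /* from the realm ownership table */
    bool parent_visible;             /* owner's parent realm may access */
    bool global_visible;             /* any realm may access */
};

/* The requester's relationship to the owner realm (owner/descendant,
 * owner's parent) is assumed to be derived elsewhere, e.g. from the
 * global realm identifiers discussed later. */
bool access_permitted(bool is_write,
                      const struct mmu_attrs *pt,
                      const struct ownership_attrs *ot,
                      bool requester_is_owner_or_descendant,
                      bool requester_is_owners_parent)
{
    /* Check 1: conventional MMU permissions, set by more privileged code. */
    if (is_write ? !pt->write_allowed : !pt->read_allowed)
        return false;

    /* Check 2: realm ownership, set by the (possibly less privileged) owner. */
    if (requester_is_owner_or_descendant)
        return true;
    if (ot->global_visible)
        return true;
    if (ot->parent_visible && requester_is_owners_parent)
        return true;
    return false;
}
```

Note that passing either check alone is not sufficient: a more privileged process that passes the page-table check can still be refused by the ownership check, and vice versa.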

As shown in fig. 5, the realms are managed by the RMU 20 according to a realm hierarchy in which each realm other than the root realm 130 is a child realm, which has a corresponding parent realm that initializes the child realm by executing an initialization command. The root realm 130 can be, for example, a realm associated with monitor code or system firmware executing at the most privileged exception level EL 3. For ease of explanation, the example of FIG. 5 and the initial examples discussed below illustrate the case where each child domain executes at a lower privilege level than its parent domain. However, as will be discussed below, it is also possible to build a child domain that executes at the same exception level as its parent.

In general, for the domain management part of the memory access control provided by the MMU 26, a child domain has default access to any memory region owned by its parent domain. Similarly, it is assumed that any descendant of a given domain can access the memory regions owned by that domain. However, because the realm management control is orthogonal to the control provided by the exception-level-based translation tables 120, a process executing at a higher privilege level can still exclude less privileged code from accessing its data by setting the parameters of the translation tables 120 accordingly. Thus, in general, a given child domain has the right to exclude its parent domain from accessing data stored in a given memory region owned by that child domain. Whether a child domain actually excludes its parent from accessing a given memory region may be set based on control attributes in the ownership table 128 (the default may be that the parent does not have access to a region owned by the child, but the child may choose to grant the parent access by setting the visibility attributes accordingly). Where there are multiple sibling domains (different child domains sharing the same parent domain), a given child domain can similarly exclude a sibling domain from accessing data stored in a given memory region owned by that child domain. Again, the visibility attributes set in the ownership table 128 may control the extent to which siblings can access each other's data. Alternatively, sibling access may be controlled based on the parent visibility attribute, such that if a child domain makes a page visible to its parent domain, the same page also becomes visible to that child's sibling domains and to further descendants of those siblings. In some cases, the ownership table 128 may have a global visibility attribute which allows a given owner process to make data within a memory region it owns accessible to any process executing in any domain.

As shown in fig. 5, each domain 140 is associated with one or more domain execution context (REC) memory regions 126 that may be used to store the architectural state of the domain, such as register values, on exit from the domain. The number of RECs 126 provided for a given domain may depend on how many execution threads are operating under that domain. For example, a realm may be established with a single primary REC region 126 when first initialized, but the realm may then configure other memory regions that it owns for use as further RECs as necessary. Each REC memory region is owned by the domain whose execution state it stores.

Each domain is associated with a domain descriptor 124 that is stored in a memory region owned by the parent domain of the domain whose properties the descriptor describes. For flexibility in the number of child domains that can be defined by a given parent domain, the domain descriptors are managed using a tree structure called the domain descriptor tree (RDT), which is discussed in more detail later. A domain descriptor 124 may be used to define domain properties that are checked by the RMU 20 on entry to or exit from the domain to ensure security. The domain descriptor may also track the progress of a domain through various lifecycle states, so that execution of certain RMU commands for the domain can be restricted to particular lifecycle states, ensuring that domains are created and invalidated in a secure manner.

Figs. 6 and 7 show two different examples of possible domain hierarchies. In the example of fig. 6, each of the processes shown in fig. 2 defines its own domain. Thus, the root domain 130 corresponds to the monitor software or firmware operating at exception level EL3. The root realm defines two child realms 142: one corresponding to the secure operating system operating at secure EL1, and the other corresponding to the manager at EL2. The manager defines grandchild domains 144 corresponding to the different guest operating systems at EL1, and each of these guest operating systems defines further great-grandchild domains 146 corresponding to applications executing at the least privileged exception level EL0. Similarly, the secure operating system in domain 142 may define grandchild domains 148 corresponding to different secure applications. A parent domain in the hierarchy may transfer ownership of a page it currently owns to a new child domain (using the granule add command discussed below), or may invalidate one of its pages, map it into the child's virtual address space, and allow the child domain to claim ownership of the page by executing a page ownership (claim) command. A page ownership command may be rejected if the specified page of the memory address space is not already owned by the parent domain issuing the command.

As shown in fig. 7, it is not necessary for the process at each privilege level to have a separate realm, so some of the privilege level boundaries shown in dotted lines in fig. 7 may not correspond to realm boundaries. For example, in fig. 7 the application 150 and its operating system execute within the same domain as the manager domain 142 operating at exception level EL2, so that a single domain spans the EL2 manager code, the operating system operating at EL1, and the application at EL0. On the other hand, a different application 152 under the same manager may define its own separate domain. In this case, the domain boundary lies between EL1 and EL0 and there is no EL2-EL1 domain boundary (the manager and the operating system execute within the same domain). For another operating system, a separate EL1 domain 154 may be defined; that domain may likewise have some applications executing within the same domain as the operating system and other applications with their own dedicated domains. Similarly, on the secure side, the secure OS and the secure applications of fig. 7 execute entirely within the EL3 root domain, so there are no domain boundaries when operating on the secure side. Thus, the precise configuration of domains may be determined at runtime for a given system, according to the requirements of the processes being executed. Software may decide at runtime that it requires only a small, fixed number of child domains (which may be the case for low-level firmware), or many domains or a varying number of domains (as may be the case, for example, for a hypervisor on a cloud platform managing an unknown number of guest virtual machines).

The domain descriptors 124 for the children of a given parent domain are managed according to a domain descriptor tree (an instance of a domain management tree defining the domain management data for a number of child domains of that parent). The tree has a variable number of levels. Fig. 8 shows an example of such a domain descriptor tree 160 managed by a particular parent domain. The tree 160 includes a number of domain descriptor tree granules (RDTGs) 162, each of which includes a number of domain descriptor tree entries (RDTEs) 164. Each RDTE 164 provides a pointer either to the domain descriptor 166 of a given child domain of the parent, or to a further RDTG 162 at the next level of the tree. The RDTG 162 for the first level of the tree may be identified by a domain descriptor tree pointer 168, which may be stored as part of the data associated with the parent domain (e.g., within the parent's own domain descriptor). Thus, when the parent domain issues an RMU command associated with a given child domain, it can trigger the RMU to traverse the domain descriptor tree in order to locate the domain descriptor 166 of the required child (if that domain descriptor is not already cached within the RMU 20). Each RDTG 162 may have a variable number of entries 164.

As shown in the table in fig. 8, a given RDTE 164 that provides a pointer to an RDTG 162 at a subsequent level of the tree may include an order value indicating the maximum number of entries in the RDTG pointed to. For example, the order value may indicate the power of two corresponding to the total number of entries in that RDTG. Other information that may be included in an RDTE 164 includes a state value indicating the state of the RDTE (e.g., whether the RDTE is free for allocation of domain descriptor tree data, and whether the RDTE provides a pointer to a further RDTG 162 or to a child domain descriptor 166). In addition to the pointer, an RDTE may include a reference count tracking the number of non-free RDTEs in the RDTG it points to, which may be used to determine whether further RDTEs can be allocated within that RDTG 162. RMU commands triggered by the parent domain may control the RMU 20 to construct further RDTGs of the tree and/or to edit the contents of RDTEs within existing RDTGs.

It should be noted that the tree shown in fig. 8 shows the child domains of one particular parent domain. Every other parent domain has its own separate domain descriptor tree tracking its own children. The data associated with the tree, including the RDTGs 162 and the child domain descriptors 166, is stored within pages owned by the parent domain, so other domains can be excluded from accessing this data. Thus, only the parent realm has visibility of which particular child realms it has configured, and a process executing at a higher privilege level need have no visibility of which realms have been created below the child realms that it has itself directly created.

As shown in fig. 8, each of the child domains of a given parent domain may have a corresponding domain identifier (RID) 168 which is used by that parent to identify the particular child domain. The RID is a local realm identifier, in that it is specific to a particular parent realm: child domains of different parents may have the same local RID. Although a local RID with an arbitrary value chosen by the parent could be used for a given child domain, in the approach shown in fig. 8 the local RID of a given child domain comprises a variable number of variable-length bit portions, each of which is used by the RMU 20 to index into a given level of the domain descriptor tree 160. For example, the domain descriptor of the child domain with local RID 7 in fig. 8 is accessed via the domain descriptor pointer in entry 7 of the first-level RDTG 162. The domain descriptor of the child domain with local RID 3.3 is accessed via entry 3 at the first level of the tree and then entry 3 at the second level. Similarly, the domain descriptor of the child domain with local RID 1.2 is accessed via entry 1 at the first level and entry 2 at the second level. It should be noted that while fig. 8 shows the local RIDs as decimal values 7, 3.3, and so on for simplicity, in the processing device 2 they would be implemented as concatenations of binary values.

Thus, the RID of a given domain may comprise a concatenation of the indices to be used at the respective levels of the domain descriptor tree to access the domain management data of that domain. While it is not essential that the indices be concatenated in the same sequential order in which they are used to step through the tree, this may be preferred as it makes management of the tree accesses simpler. It does not matter whether the concatenation runs from low-order to high-order bits or vice versa. The concatenated indices may be followed by a predetermined termination pattern which allows the RMU 20 to determine when there are no further levels of the tree to be stepped through.
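As an illustration of the lookup implied by this encoding, the following C sketch walks a domain descriptor tree using an LRID whose per-level index fields are consumed from the low-order bits. The structure layout and field names are assumptions for illustration, not the architectural format; in this sketch the walk simply ends when a leaf entry is reached, standing in for the termination pattern described above:

```c
#include <stddef.h>
#include <stdint.h>

enum rdte_state { RDTE_FREE, RDTE_BRANCH, RDTE_LEAF };

struct rdte {                 /* realm descriptor tree entry (illustrative) */
    enum rdte_state state;    /* free, points to an RDTG, or points to an RD */
    unsigned order;           /* the RDTG pointed to has 1 << order entries */
    unsigned refcount;        /* non-free entries in that RDTG */
    void *ptr;                /* next-level RDTG or child realm descriptor */
};

struct realm_descriptor;      /* opaque here */

/* Walk from the root RDTE (held in the parent realm's own descriptor),
 * consuming one variable-length index field of the LRID per tree level. */
struct realm_descriptor *rdt_lookup(struct rdte e, uint64_t lrid)
{
    while (e.state == RDTE_BRANCH) {
        struct rdte *rdtg = (struct rdte *)e.ptr;     /* array of RDTEs */
        size_t index = lrid & ((1ull << e.order) - 1);
        lrid >>= e.order;
        e = rdtg[index];
    }
    return (e.state == RDTE_LEAF) ? (struct realm_descriptor *)e.ptr : NULL;
}
```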

Some embodiments could apply this RID construction technique to a global domain descriptor tree storing the domain descriptors of all domains in the system within one tree structure (with each RID being a globally unique value). However, software development can be made simpler by defining the child domains of a given parent within one tree, with a separate tree for each other parent domain tracking that parent's own children. Thus, a domain descriptor tree may be a local domain descriptor tree, associated with a given parent domain, for storing the domain management data of the child domains initialized by that parent. The realm identifier can correspondingly be a local realm identifier identifying a particular child realm used by a given parent realm, and child domains initialized by different parents may be allowed to have the same local domain identifier value. In this way, a parent domain can choose which RIDs to use for its children without any knowledge of the domains established by other parents, with the RIDs of its children being determined by the way the parent has configured its own domain descriptor tree.

The local realm identifier can be used by a realm entry instruction or RMU command issued by a software process. However, the hardware architecture may use absolute identification of a given child domain to distinguish domains created by different parents. Thus, in addition to the local domain identifiers shown in FIG. 8, a given domain may also have a global domain identifier (or "internal" domain identifier) that is unique to the given domain. At least one hardware structure may identify a given domain using a global domain identifier (GRID) instead of a local domain identifier (LRID). For example, the domain group table 128 and/or the TLB 100 may use a global domain identifier to identify a domain.

In some instances, any binary value may be assigned as a GRID for a given realm, which may be completely independent of the LRID used by the predecessor realm to reference the descendant realm. Different microarchitectural implementations of the same domain architecture may use different methods to assign GRIDs.

However, in one example, the GRID of a given domain may be constructed based on the LRIDs of the ancestors of that domain. This can be useful because it enables a simpler determination of whether a given domain is a descendant or an ancestor of another domain, which may be used for access control by the MMU 26 and the RMU 20.

It is not essential for all local RIDs to be constructed by concatenating tree indices. In some cases, it may be useful to reserve specific local RID values for referring to certain predetermined realms. RMU commands specifying the current domain, or the parent of the current domain, may be relatively common, so a predetermined RID value may be reserved for referring to the parent of the current domain. For example, an LRID with all bits set to 1 may be reserved for referencing the parent of the current domain. Similarly, a predetermined realm identifier value may be reserved for referring to the current realm itself; for example, an LRID value of 0 may be used to reference the current domain.
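A minimal sketch of this identifier scheme, under the illustrative assumptions that the parent's GRID occupies the low-order bits of a 64-bit child GRID and that the two reserved LRID values are as described above:

```c
#include <stdint.h>

#define LRID_SELF   0x0ull     /* reserved: refers to the current realm */
#define LRID_PARENT (~0x0ull)  /* reserved: refers to the parent realm */
/* The reserved values appear only in commands; they are never embedded
 * in a GRID. */

struct grid {
    uint64_t bits;  /* concatenated LRIDs; oldest generation in the low bits */
    unsigned used;  /* how many bits of the identifier are consumed so far */
};

/* A child's GRID is its parent's GRID with the child's LRID appended,
 * so every realm's GRID contains its ancestors' GRIDs as a prefix. */
struct grid make_child_grid(struct grid parent, uint64_t lrid, unsigned lrid_bits)
{
    struct grid child = {
        .bits = parent.bits | (lrid << parent.used),
        .used = parent.used + lrid_bits,
    };
    return child;
}
```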

The RMU may support certain query commands that can be triggered by a given domain in order to discover the constraints that must be satisfied when that domain constructs its domain descriptor tree. For example, in response to a query command, the RMU 20 (or the processing circuitry 32) may return a constraint value indicating at least one of: the maximum number of levels of domain descriptor tree 160 that the given domain is allowed to define, the maximum number of entries allowed at a given level of the tree structure for that domain, and/or the maximum number of child domains that may be initialized by that domain. For example, the system may include registers indicating properties such as the number of bits available in an LRID or a GRID for the particular hardware implementation. In response to a query command, the RMU or processing circuitry may check the number of bits available for realm identifiers (or the appropriate response may be hardwired for a particular processor implementation), and may also check how many bits of the global realm identifier have already been used up by the ancestors of the current realm, in order to determine how many bits remain available for further descendants of the current realm. The parent domain may use the response to the query command to determine how to construct its RDT.

Fig. 9 shows an example of the contents of the domain descriptor 166 for a given domain. The domain descriptor may define the security configuration parameters of the domain. It will be appreciated that this is just one example, and other implementations may omit some of the listed information or include additional information. In this example, the domain descriptor includes the following (a condensed sketch in C follows the list):

The global RID of the domain. Thus, by traversing the domain descriptor tree based on a local RID, the corresponding global RID can be identified, and this can be used to index hardware structures such as the TLBs, or to check the ownership table or other information that identifies domains by their GRID.

The lifecycle state of a given domain, which may be used by the RMU 20 to determine whether to accept a given command triggered by the given domain.

The type of the given domain. For example, the domain type may indicate whether the domain is a full domain or a sub-domain, as discussed later.

A Boundary Exception Level (BEL) value identifying the boundary exception level of the corresponding domain. The BEL indicates the maximum privilege level at which the domain is allowed to execute. For example, the domain 142 in fig. 7 may have a BEL of EL2, the domain 152 a BEL of EL0, and the domain 154 a BEL of EL1. By providing an explicit parameter identifying the BEL in the domain descriptor, this gives the flexibility for a domain to span multiple exception levels, since the BEL can be used when an exception occurs to determine whether the exception can be taken within the current domain or whether a domain exit to the parent domain is required to handle the exception.

A resource count indicating the total number of memory regions (realm protection granules, or RPGs) owned by the domain and its descendants. This is used to ensure that all memory pages owned by the domain and its descendants are invalidated (and eventually washed) before those memory regions can be allocated to a different domain. For example, the resource count may be used to track how many regions still need to be washed.

The start and end addresses of the protected address range of the domain. The protected address range defines the range of the memory address space within which pages can be owned by the corresponding domain. This helps protect against a malicious parent domain reclaiming ownership of a region previously allocated to a child domain in an attempt to access the child's data, since by comparing the protected address range defined in the domain descriptor with the addresses of subsequent memory accesses, cases can be identified in which a memory region previously owned by the domain is no longer owned by it.

One or more encryption keys used by the cryptographic circuit 104 to encrypt or decrypt data associated with the given domain. In this example, two separate encryption keys are provided: a memory key for encrypting/decrypting the contents of memory owned by the domain, and a paging key for encrypting/decrypting data exported/imported between the memory 16 and the persistent storage 6 as discussed above. However, in other examples the same key may be used for both purposes, or further keys may be provided for other specific purposes.

A domain descriptor tree entry (RDTE) identifying the root of the domain's own domain descriptor tree. The RDTE in the domain descriptor provides the pointer for accessing the root RDTG (and an order value defining how many bits are used as the index into that RDTG).

Pointers to main REC (domain execution context) memory regions for saving or restoring architectural state related to the execution of the domain.

As discussed in more detail below, the domain descriptor may also include other information, such as the identity of the trusted intermediate domain, information defining whether debugging or export is enabled for the domain, and an expected signature against which the parameters of the domain are checked when the domain is activated.
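Gathering the listed fields into a record might look as follows. This is a condensed, illustrative sketch only: the field widths (e.g., 128-bit keys, a 256-bit signature) and the encoding of "no trusted intermediate realm" as zero are assumptions rather than architectural definitions:

```c
#include <stdbool.h>
#include <stdint.h>

enum realm_state { REALM_CLEAN, REALM_NEW, REALM_ACTIVE, REALM_INVALID };
enum realm_type  { REALM_FULL, REALM_SUB };

struct rdte { uint64_t ptr; uint8_t order; uint8_t state; };  /* condensed RDTE */

struct realm_descriptor {
    uint64_t grid;                    /* global realm identifier */
    enum realm_state lifecycle;       /* see figs. 10 and 11 */
    enum realm_type type;             /* full domain or sub-domain */
    uint8_t bel;                      /* boundary exception level (0..3) */
    uint64_t resource_count;          /* RPGs owned by realm + descendants */
    uint64_t protected_base;          /* protected address range: start */
    uint64_t protected_limit;         /* protected address range: end */
    uint8_t memory_key[16];           /* encrypts realm-owned memory */
    uint8_t paging_key[16];           /* encrypts exported/imported pages */
    struct rdte rdt_root;             /* root RDTE of this realm's RDT */
    uint64_t main_rec;                /* address of the primary REC region */
    uint64_t intermediary_grid;       /* trusted intermediate realm; 0 = none */
    bool debug_enabled;
    bool export_enabled;
    uint8_t expected_signature[32];   /* checked when the realm is activated */
};
```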

FIG. 10 shows a set of lifecycle states that may exist for a given domain, including in this example a clean state, a new state, an active state, and an invalid state. Fig. 10 summarizes the properties of each state, indicating for each state: whether a domain in the corresponding state can have the parameters of the domain descriptor 166 of the domain modified by the previous generation domain of the domain, whether an encryption key specified for the domain can be effectively used, whether the domain can own any memory Region (RPG), and whether code associated with the domain is executable. It should be noted that the parameters of the domain descriptor are modifiable in the clean state, but not in any of the other states. This prevents malicious predecessor domains from updating the properties of a given domain after it has become active. In addition, the domain is only executable in the active state.

Fig. 11 is a state machine diagram showing the allowable transitions between the lifecycle states of a domain. Each state transition shown in fig. 11 is triggered by the parent realm issuing, to the RMU 20, a realm management command specifying the local RID of the target child realm (the realm invalidation command 212 may also be issued by the target realm itself). When no realm has previously been defined for that local RID, and a domain descriptor register granule command 200 is executed by the parent domain, this triggers the configuration of a given memory region owned by the parent as the domain descriptor of a child domain having the specified local RID. The global RID of the child domain may be set based on the concatenation of the parent's global RID and the new local RID specified in the domain descriptor register granule command 200. The specified child domain then enters the clean state 202. In the clean state, the parent can set the properties of the child domain by updating the parameters of the child's domain descriptor. These properties can be modified using further RMU commands issued by the parent (such domain descriptor modification commands are rejected if the specified child domain is not in the clean state). Furthermore, for a domain associated with a trusted intermediate domain, a domain descriptor modification command may also be accepted if it is issued by the trusted intermediate domain identified in the domain descriptor of the target domain whose parameters are being modified. When the parent realm has finished setting the parameters of the child's domain descriptor, it executes a realm initialization command 204 specifying the child's LRID; this triggers the transition of the child from the clean state 202 to the new state 206, and from this point the parameters of the domain descriptor can no longer be modified by the parent. The realm initialization command 204 fails if the specified domain is not currently in the clean state.

When a domain is in the new state 206, execution of a realm activation command 208 specifying the local RID of that domain triggers a transition from the new state 206 to the active state 210, in which the domain is now executable; after this point, entry into the corresponding domain no longer triggers a fault. The realm is now fully operational. As discussed below, in some examples activation may depend on checking a parameter signature. A subsequent realm invalidation command 212, triggered by the parent of a child domain in any of the clean state 202, new state 206, or active state 210, results in a transition to the invalid state 214. To leave the invalid state 214 and return to the clean state 202, the parent domain must execute a realm wash command 216. The realm wash command 216 is rejected if the resource count tracking the number of pages owned by the domain has a value other than zero. Thus, for the realm wash command 216 to succeed, the parent realm must first issue an eviction command for each page owned by the invalid realm. The eviction command specifies a target memory page and triggers invalidation of that page to make it inaccessible, and also decrements the resource count of the page's owner domain by one. When the eviction or realm wash command 216 is executed, it is not necessary to actually overwrite the data in the invalidated regions, since the overwriting can be performed when a clean command is subsequently issued to transition a memory page from invalid to valid (see fig. 15 discussed below). In addition, any cached data relating to the invalidated domain may also be invalidated in response to the realm wash command, for example within the TLBs 100 or caches 110 of any of the processing components 8, 10, 12 (not only the processing component executing the RMU command). The global RID may be used to trigger such invalidation of the cached data.

Thus, by providing a managed lifecycle for the domain associated with a given domain identifier, this ensures that the data associated with a previous domain that used the same identifier must be washed from memory and from any caches before the domain can return to the clean state in which its parameters can be modified (and hence before the domain identifier can be recycled for use by a different domain), preventing data associated with an old domain from being leaked to other domains through reuse of the same identifier. While a domain is in the clean state 202, its domain descriptor can also be cancelled by executing a domain descriptor release command 218, which enables the memory region storing the domain descriptor to be allocated for other purposes (no washing is required at this point, since the domain is clean).
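The acceptance rules of figs. 10 and 11 amount to a small state machine. The following sketch encodes them under assumed command names (the descriptor register/release commands that create and retire the descriptor itself are omitted, and in the real lifecycle the invalidation command may also be issued by the target realm itself):

```c
#include <stdbool.h>
#include <stdint.h>

enum realm_state { REALM_CLEAN, REALM_NEW, REALM_ACTIVE, REALM_INVALID };
enum realm_cmd   { CMD_INITIALIZE, CMD_ACTIVATE, CMD_INVALIDATE, CMD_WASH };

/* Returns true and applies the transition if the command is acceptable
 * in the current state; otherwise the command is rejected. */
bool realm_lifecycle(enum realm_state *s, enum realm_cmd cmd,
                     uint64_t resource_count)
{
    switch (cmd) {
    case CMD_INITIALIZE:            /* clean -> new */
        if (*s != REALM_CLEAN) return false;
        *s = REALM_NEW; return true;
    case CMD_ACTIVATE:              /* new -> active */
        if (*s != REALM_NEW) return false;
        *s = REALM_ACTIVE; return true;
    case CMD_INVALIDATE:            /* clean/new/active -> invalid */
        if (*s == REALM_INVALID) return false;
        *s = REALM_INVALID; return true;
    case CMD_WASH:                  /* invalid -> clean, once no pages owned */
        if (*s != REALM_INVALID || resource_count != 0) return false;
        *s = REALM_CLEAN; return true;
    }
    return false;
}
```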

Fig. 12 shows an example of the contents of an entry of the domain group table 128 (or ownership table). Each entry corresponds to a given memory region of the memory address space. The size of a given memory region may be fixed or variable, depending on the implementation. The particular way in which the ownership table 128 is structured may vary significantly according to implementation requirements, and therefore the particular way in which the memory region corresponding to a given entry is identified may also vary (e.g., data identifying the corresponding region may be stored in each entry, or alternatively the corresponding region may be identified at least partly by the position of the ownership entry within the table itself). In addition, fig. 12 shows a particular example of the parameters that may be specified for a given memory region, but other examples may provide more information or omit some of the information types shown.

As shown in fig. 12, each ownership table entry may specify the following for the corresponding memory region (a condensed sketch in C follows the list):

The global RID identifying the owner realm of the memory region. The owner realm is the realm that has the right to set the attributes controlling which other realms are allowed to access the memory region.

The life cycle state of the corresponding memory region used to control which RMU commands are allowed to execute on the memory region.

The mapped address at which the memory region was mapped by the MMU 26 at the point when the region became owned by the owner domain. The mapped address may be a virtual address or an intermediate physical address. By specifying this address in the ownership table, attempts to circumvent the security provided by the domain infrastructure by remapping the address translation tables after the domain has taken ownership of a given memory region can be detected.

Visibility attributes specifying which domains other than the owner can access the memory region. For example, as shown in fig. 13, the visibility attributes may include a parent visibility bit controlling whether the parent of the owner domain is allowed to access the region, and a global visibility bit controlling whether any domain can access the corresponding memory region. In general, the domain protection scheme may assume that descendants of the owner domain are always allowed to access the memory regions owned by that domain (subject to whether the access is permitted by the translation tables 120, which provide protection based on privilege level), but a given domain can control whether its owned memory regions are accessible by its parent or by other domains that are not its own descendants. In some embodiments, both the parent visibility bit and the global visibility bit may be set by the owner domain itself. Alternatively, while the parent visibility bit may be set by the owner domain, the global visibility bit might instead be set by the parent of the owner domain (provided that the parent visibility bit for the memory region has already been set to give the parent visibility of that region). It will be appreciated that this is just one example of how an owner domain can control which other processes can access its data.
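Condensing the listed fields, an ownership table entry might be rendered as below; as before, the layout and the three-state lifecycle are simplifying assumptions (fig. 14 defines a richer set of region states), and the mapped-address check shown is the remapping defence described above:

```c
#include <stdbool.h>
#include <stdint.h>

enum region_state {          /* condensed; fig. 14 defines further states */
    REGION_INVALID, REGION_VALID, REGION_RMU_PRIVATE
};

struct ownership_entry {               /* one entry per memory region */
    uint64_t owner_grid;               /* global RID of the owner realm */
    enum region_state state;           /* region lifecycle state */
    uint64_t mapped_addr;              /* VA/IPA at which ownership was taken */
    bool parent_visible;               /* owner's parent realm may access */
    bool global_visible;               /* any realm may access */
};

/* The pre-translation address of each realm-checked access must still match
 * the address recorded when ownership was taken; this catches a parent that
 * remaps the translation tables after the child claimed the region. */
bool mapping_unchanged(const struct ownership_entry *e, uint64_t pre_xlate_addr)
{
    return e->mapped_addr == pre_xlate_addr;
}
```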

Fig. 14 is a table showing the different lifecycle states that may exist for a given memory region, and fig. 15 is a state machine showing the commands that trigger transitions between those lifecycle states. In a similar way to the domain lifecycle states shown in fig. 11, the transitions between memory region lifecycle states are managed to ensure that a memory region passing from the ownership of one domain to the ownership of another must first undergo an invalidation process in which the data in the region is washed (e.g., set to zero). Thus, to transition a memory region from the invalid state 220 to the active state 222, in which it is accessible to software, a clean command 224 must be executed by the RMU 20, triggered by software executing on the processing component 8. The clean command 224 identifies a particular memory region (page) and controls the RMU to step through the memory addresses of the corresponding region, invalidating/zeroing the data at each location within the region. The clean command is rejected (e.g., a fault is triggered) if the target memory region is in any state other than invalid.

In some systems, it may be sufficient to provide the valid state 222 and the invalid state 220 as the only memory region lifecycle states. However, in the example of fig. 15, a given memory region may also be designated as an "RMU-private" memory region, which is reserved for exclusive access by the RMU 20 itself, such that access to the RMU-private memory region triggered by software executing on the processing circuitry 32 (in addition to any RMU software) will be denied. This is particularly useful for storing domain management data, such as domain descriptors, domain descriptor tree entries, domain execution contexts, and metadata for paging, as discussed above. By providing an attribute for designating a given memory region as an RMU-private memory region reserved for exclusive access by RMUs, software processes (including owner processes of the memory region itself) may be prevented from being able to access domain management data that might otherwise allow the software processes to circumvent the security protection provided by the domain scheme.

Thus, cleaning command 224 may specify, as one of the parameters of the cleaning command, a privacy indication specifying whether this is a normal cleaning command or a private cleaning command. Alternatively, two completely separate commands may be provided for these purposes. This triggers a transition to the active state 222 when the cleaning command is a normal cleaning command, as discussed above. However, when the cleaning command is a private cleaning command 224, this triggers a transition to the RMU cleaning state 226, where the memory region is designated as the RMU-private memory region. In some instances, all types of RMU data may be stored within a single type of RMU-private memory area corresponding to the RMU clean state.

However, robustness can be improved by specifying multiple types of RMU-private memory region, each corresponding to a particular form of domain management data. For example, in figs. 14 and 15 a number of RMU-registered states 228 are defined, each corresponding to an RMU-private region designated for a specific purpose. In this example, the RMU-registered states 228 comprise RMU-registered-RDT (for storing an RDTG of a domain descriptor tree), RMU-registered-RD (for storing a domain descriptor), RMU-registered-REC (for storing domain execution context data), and RMU-registered-MDT (for storing the paging metadata used during export/import operations as discussed above). Different forms of registration command 230 may be executed by the RMU on a memory region in the RMU-clean state to transition the region to the corresponding one of the RMU-registered states 228. A command for storing data to an RMU-private memory region that does not correspond to the specified purpose (RDT, RD, REC, or MDT) may be rejected. Accordingly, in a first lifecycle state of the RMU-registered states, a first type of RMU command for storing a first type of domain management data may be allowed, and in a second lifecycle state a second type of RMU command for storing a second type of domain management data may be allowed, with the first RMU command being rejected when the target memory region is in the second lifecycle state and the second RMU command being rejected when the target memory region is in the first lifecycle state. This provides further security, preventing a malicious parent domain from, for example, attempting to store domain descriptor entries to a domain execution context region, or vice versa, in an attempt to disrupt the operation of its child domains. For each of the RMU-registered states 228, a corresponding form of release command 232 may return the memory region to the invalid state 220. A further clean command can then trigger washing of the data from the previously RMU-private region before the region is reallocated for general data.
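The per-purpose acceptance rule can be sketched as follows; the enumerator names abbreviate the states described above and are illustrative rather than architectural:

```c
#include <stdbool.h>

enum rmu_private_state {            /* condensed from fig. 14 */
    RMU_CLEAN, RMU_REGISTERED_RDT, RMU_REGISTERED_RD,
    RMU_REGISTERED_REC, RMU_REGISTERED_MDT
};

enum rmu_data_kind { DATA_RDT, DATA_RD, DATA_REC, DATA_MDT };

/* An RMU store of management data is accepted only if the target region
 * was registered for exactly that kind of data. */
bool rmu_store_permitted(enum rmu_private_state s, enum rmu_data_kind k)
{
    switch (k) {
    case DATA_RDT: return s == RMU_REGISTERED_RDT;
    case DATA_RD:  return s == RMU_REGISTERED_RD;
    case DATA_REC: return s == RMU_REGISTERED_REC;
    case DATA_MDT: return s == RMU_REGISTERED_MDT;
    }
    return false;
}
```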

Thus, in summary, at least one RMU-private memory region may be defined which is still owned by a given owner realm but has an attribute specified in the ownership table indicating that it is reserved for exclusive access by the RMU. In this example, the attribute controlling the RMU-private status is the lifecycle state specified in the corresponding entry of the ownership table, but the attribute could also be identified in other ways. When a given memory region is designated by at least one state attribute as an RMU-private memory region, the MMU prevents access to that region by one or more software processes. Thus, any software-triggered access that is not triggered by the RMU itself is rejected when it targets an RMU-private memory region. This includes preventing access to the RMU-private memory region by the owner realm itself.

The skilled person might ask why it is useful to define an owner realm for an RMU-private memory region if that owner cannot even access the data in the region. For example, an alternative approach for restricting access to the data to the RMU alone would be to define a special realm for the RMU, and to allocate the pages of the memory address space storing such data to that special RMU owner realm, keeping the data private to it. However, the inventors recognized that when a realm is invalidated there may be a requirement to invalidate all control data related to that realm, and washing the data of the invalidated realm would be complicated if this control data were associated with a special RMU owner realm rather than with the invalidated realm itself.

In contrast, by using the RMU-private attribute, the memory regions storing the control data of a given domain are still owned by that domain even though the owner cannot access them, which makes it simpler to identify which memory regions need to be invalidated when the owner domain is revoked. When a given realm is invalidated, the parent realm can simply perform a sequence of eviction operations (e.g., by executing eviction commands which are then acted upon by the RMU) that trigger each memory region owned by the specified invalidated realm (or by its descendants) to be invalidated, made inaccessible, and returned to the ownership of the parent realm that triggered the eviction command. The eviction operation affects not only the pages accessible by the invalidated realm, but also the RMU-private memory regions owned by it.

Another advantage of storing the control data of a domain in RMU-private memory regions owned by that domain arises when performing export operations. To reduce the memory footprint of a domain to zero, the management structures associated with the domain may be exported, in addition to the normal memory, during the export operation. Requiring these structures to be owned by the domain simplifies the management of the export operation.

In general, any kind of domain management data may be stored in an RMU-private region, but in particular the domain management data may include any of the following: a domain descriptor defining properties of a given domain; a domain descriptor tree entry, or further domain descriptor tree entries, identifying the memory region storing the domain descriptor of a given domain; domain execution context data indicating the architectural state associated with at least one thread executing within a given domain; and temporary working data for use at an intermediate point of a predetermined operation associated with a given domain.

While RMU-private regions may in general be used to store the domain-specific control data associated with a given domain, they may also be used to increase security for certain other operations performed once the domain is active. For example, when performing the paging export or import operations discussed above, in which data is encrypted or decrypted and checks using metadata are performed to verify that the data is still valid when it is imported again, such operations may take many cycles, and long-running operations of this kind are more likely to be interrupted partway through. To avoid having to restart the operation from the beginning, it is desirable to allow the metadata or other temporary working data associated with such a long-running operation to remain in the caches/memory even across an interruption, without making this data accessible to other processes (including the owner realm itself). This temporary working data can be protected by temporarily designating regions of the memory system as RMU-private. Hence, as shown in fig. 14, the page states may also include RMUExporting and RMUImporting states, which may be used while such temporary working data is stored in a memory region; while one of these states is selected, only the RMU can access the data.

Other examples of operations that may benefit from temporarily designating a memory region as RMU-private include: generation or verification of encrypted or decrypted data during transfer of data between at least one memory region owned by a given domain and at least one memory region owned by a different domain; transfer of ownership of a memory region to another domain; and a destructive eviction operation performed to make the data stored in an invalidated memory region inaccessible. For example, an eviction operation washing the entire contents of a given page of the address space may be interrupted partway through, and so, to ensure that other processes cannot access the page until the wash is complete, the page may be temporarily designated as RMU-private. In general, any long-latency operation performed by the RMU may benefit from transitioning the lifecycle state of the memory regions involved to the RMU-private state before beginning the long-running operation, and then transitioning the lifecycle state back once the operation is complete, so that the temporary working data of the long-latency operation is protected throughout.

When a region is designated as RMU-private, it is reserved for access by the RMU 20, which uses it in performing domain management operations. The domain management operations may include at least one of: creating a new domain; updating the properties of an existing domain; invalidating a domain; allocating memory regions for ownership by a given domain; changing the owner domain of a given memory region; changing the state of a given memory region; updating access control information for controlling access to a given memory region in response to a command triggered by the owner domain of that region; managing transitions between domains during processing of one or more software processes; managing the transfer of data associated with a given domain between memory regions owned by that domain and memory regions owned by a different domain; and encryption or decryption of data associated with a given domain. The RMU may be a hardware unit performing at least some of these domain management operations, or may comprise the processing circuitry 32 executing domain management software to perform at least some of them, or may be a combination of the two.

Fig. 15 illustrates the state transitions that may be triggered by a given domain to clean a given page so that it can be validly accessed, or to invalidate the corresponding page. Fig. 16 extends this to show further commands that can be used to transfer ownership of a given page from one domain to another. If a memory region is currently in the invalid state 220 and owned by a parent domain, execution of a granule claim command 230 by the parent enables the corresponding memory region to be passed to a specified child domain. The granule claim command 230 is rejected if the target memory region is owned by any domain other than the parent of the specified child, or if the memory region is valid or in one of the RMU-private lifecycle states 226, 228. This prevents the parent domain from arbitrarily reassigning ownership of pages that it cannot itself access or that are in use by the RMU 20. Once a page has been assigned to a child domain, the child can execute a clean command to transition the region to the active state 222 in the same way as shown in fig. 15. For simplicity, the use of RMU-private regions is not shown in fig. 16, but within any given domain a private clean command may instead transition the memory region to the RMU-clean state 226, as discussed previously.

The granule claim command 230 is used to transfer ownership to an already established child realm. In addition, the parent domain may execute a granule add command 232, which triggers the RMU 20 to assign ownership of a region to a new child domain in the new state, in which the parent can still write data to the region assigned to the child. For example, this can be used to install the program code of a new child domain so that the child can be executed for the first time. Thus, the add command 232 differs from the claim command 230 in the lifecycle state in which the corresponding memory region is assigned to the child domain: the add command 232 may be allowed only while the child domain is in the new state 206 shown in fig. 11. A child domain can release ownership of a given memory region back to its parent by executing a granule release command 234, which triggers the RMU to update the corresponding entry of the ownership table 128 and to update properties such as the resource count in the child's domain descriptor. The granule release command 234 may be rejected if the specified memory region is not owned by the current domain issuing the command, or if the region is in any state other than invalid (ensuring that a destructive clean of the data is required before the region can be returned to the ownership of the parent).
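The acceptance conditions for these three ownership-transfer commands can be sketched as follows (the names and the condensed state enumerations are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

enum region_state { REGION_INVALID, REGION_VALID, REGION_RMU_PRIVATE };
enum realm_state  { REALM_CLEAN, REALM_NEW, REALM_ACTIVE, REALM_INVALID };

struct region { uint64_t owner_grid; enum region_state state; };

/* Claim: the region must be owned by the issuing parent and must be
 * invalid (neither valid nor RMU-private); the child cleans it later. */
bool granule_claim_ok(const struct region *r, uint64_t issuer_grid)
{
    return r->owner_grid == issuer_grid && r->state == REGION_INVALID;
}

/* Add: as claim, but additionally the receiving child must be in the NEW
 * state, since the parent is handing over content it wrote itself
 * (e.g. the child's initial program code). */
bool granule_add_ok(const struct region *r, uint64_t issuer_grid,
                    enum realm_state child_state)
{
    return granule_claim_ok(r, issuer_grid) && child_state == REALM_NEW;
}

/* Release: a child hands an invalid region it owns back to its parent;
 * a destructive clean is forced before the data can be seen again. */
bool granule_release_ok(const struct region *r, uint64_t issuer_grid)
{
    return r->owner_grid == issuer_grid && r->state == REGION_INVALID;
}
```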

One advantage of using the hierarchical domain structure discussed above, in which each child domain is initialized by its parent, is that it greatly simplifies the invalidation of a domain together with its descendants. It is relatively common that, if a given virtual machine realm is to be invalidated, it is also desirable to invalidate the realms of any applications running under that virtual machine. However, there may be a large amount of program code, data, and other control information associated with each of the processes to be invalidated. It may be desirable for such invalidation to occur atomically, so that it is not possible to continue accessing data related to the invalidated domains while only part of the data wash has been carried out. Such atomicity could be difficult to achieve if each domain were established entirely independently of the others, without the domain hierarchy discussed above, since multiple separate commands would then have to be issued to individually invalidate each domain identified by its own domain ID.

In contrast, by providing a domain hierarchy in which the RMU manages domains such that each domain other than the root domain is a child domain initialized in response to a command triggered by its parent, then when a command requesting invalidation of a target domain is received, the RMU 20 can make the target domain, and any descendants of the target domain, inaccessible to the processing circuitry with a more efficient operation.

In particular, in response to invalidation of the target domain, the RMU may update the domain management data (e.g., the domain descriptor) associated with the target domain to indicate that it is invalid, without needing to update any domain management data associated with the descendants of the target domain. The domain management data associated with the descendants can remain entirely unchanged. This is because simply invalidating the target domain effectively makes any descendant domain inaccessible as well, even though its domain management data is unchanged, since access to a given domain is controlled through its parent: if a domain is invalidated, there is no longer any route by which its descendants can be reached. Because each domain is entered using a realm entry instruction (the ERET instruction discussed below) that uses a local RID defined by the parent to identify a particular child of that parent, and this local RID is used to step through domain descriptors stored in memory regions owned by the parent of the given child, no process other than the parent can trigger the RMU to access the child's domain management data. Thus, if an ancestor realm is invalidated, the RMU can no longer reach the realm management data of a given descendant realm, ensuring that the descendant becomes inaccessible.

After a domain has been invalidated, its parent can trigger the RMU to perform eviction operations to reclaim each memory region owned by the invalidated target domain. For example, as shown in fig. 16, an eviction command 236 for a memory region owned by a child domain triggers the return of that region to the invalid state 220 and also passes ownership of the region back to the parent. However, this reclaiming can be carried out in the background while processing of other domains continues, and need not be completed immediately in order to make the descendants of the invalidated domain inaccessible. The single action of changing the domain state of a given domain to invalid, as shown in fig. 11, is sufficient to ensure that all data relating to any descendant of the invalidated domain also remains inaccessible. Since any parent realm can only assign pages that it owns to its own children, and those children can likewise only assign pages they own to their own descendants, it is also relatively straightforward to track which pages need to be invalidated and reclaimed upon invalidation of a given realm: the protected address range defined in the domain descriptor of the invalidated realm (see fig. 9) can be used to identify the pages to be reclaimed, since any further descendants of the invalidated realm will also own pages within that range.

Thus, in summary, the use of the domain hierarchy greatly simplifies the management and invalidation of domains. As well as the invalidation, and eventual overwriting, of data in memory, the invalidation may also trigger invalidation of cached realm management data for the target realm and any of its descendants, held not only in the processing component 8 that triggered the invalidation but also in other processing components, such as another CPU or GPU. The invalidation may therefore be broadcast to the other processing components to ensure that they do not continue to have access to the invalidated realm. When such invalidation is triggered, it is useful for the cached domain management data to be associated with the global domain identifier that uniquely identifies the corresponding domain, the global domain identifier being formed as discussed above so that the global RID of a given child domain shares a common prefix portion with the global RID of its parent. This enables bit masking or similar operations to be used to compare quickly whether a given domain is a descendant of a specified domain ID. Where invalidation of an ancestor domain has rendered a given domain inaccessible, an attempt to enter that domain cannot occur (since there is no longer a parent domain able to execute the ERET instruction for that domain); but even in other implementations using different domain entry mechanisms, domain entry may fail and trigger a fault condition if the domain descriptor of a descendant domain can no longer be located.
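A sketch of the prefix comparison, assuming the illustrative GRID encoding introduced earlier (ancestor LRIDs concatenated in the low-order bits):

```c
#include <stdbool.h>
#include <stdint.h>

struct grid { uint64_t bits; unsigned used; };   /* as sketched earlier */

/* With ancestors' LRIDs forming the low bits of a GRID, "is A the realm B
 * or one of B's descendants?" reduces to a single mask-and-compare. */
bool grid_is_self_or_descendant(struct grid a, struct grid b)
{
    if (b.used > a.used)
        return false;          /* b is deeper in the tree than a */
    uint64_t mask = (b.used >= 64) ? ~0ull : ((1ull << b.used) - 1);
    return (a.bits & mask) == b.bits;
}
```

Under this encoding, invalidating a realm allows every cached entry whose GRID passes this test, on any processing component receiving the broadcast, to be invalidated with one masked comparison per entry.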

Fig. 17A illustrates an example of the checks performed by the MMU 26 and the RMU 20 to determine whether a given memory access is allowed. The MMU 26 supports two stages of address translation: stage 1, which translates a virtual address (VA) to an intermediate physical address (IPA) under control of the stage 1 page tables 120-1 set by a given guest operating system, and stage 2 address translation, which translates the intermediate physical address provided by the stage 1 translation to the physical address (PA) used to access the memory 16, based on the stage 2 page tables 120-2 set by the hypervisor 38. The hypervisor may define multiple sets of stage 2 page tables for different virtual machines, and the virtual machine identifier (VMID) 250 provided with a memory access request identifies which particular stage 2 page tables to use. Similarly, an operating system may define multiple sets of stage 1 page tables for different applications, and an address space identifier (ASID) 252 identifies which stage 1 page tables to use. Together, the VMID 250 and the ASID 252 may be referred to as a translation context identifier 254 identifying the current translation context associated with the memory access request. The memory access request also specifies various attributes 256, such as an attribute indicating whether the transaction is a read (R) or a write (W) request, and the exception level (X) associated with the process issuing the request.

Upon receiving a memory access, the MMU 26 may determine whether the transaction attributes are valid based on information from the stage 1 page table. For example, the stage 1 page table may specify that only read transactions are allowed for certain addresses, or may allow both read and write accesses to a given address (some implementations may also allow regions of the address space to be defined as write-only). Attributes in the stage 1 page table may also restrict access to processes operating at a given exception level or higher. If the transaction attributes are valid and access is allowed by the stage 1 page table, the MMU may return the corresponding Intermediate Physical Address (IPA). The IPA, together with the VMID 250, then indexes the stage 2 page tables, which again validate the attributes of the transaction and, if valid, return the physical address. Note that not all transactions need to undergo both stages of address translation. For example, if the incoming memory transaction is issued at EL3 or EL2, or at EL1 or EL0 in the secure domain, the output of the stage 1 MMU may be treated as a physical address and the stage 2 MMU may be bypassed.
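A condensed sketch of this two-stage flow follows, under assumed helper names; xlat_stage1 and xlat_stage2 are placeholders for the page table walks and permission checks described above, not an architectural API.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t addr; bool valid; } xlat_result_t;

/* Assumed helpers: walk the stage 1 / stage 2 page tables selected by the
 * ASID / VMID and apply the permission checks described above. */
extern xlat_result_t xlat_stage1(uint64_t va, uint16_t asid);
extern xlat_result_t xlat_stage2(uint64_t ipa, uint16_t vmid);

xlat_result_t translate(uint64_t va, uint16_t asid, uint16_t vmid,
                        unsigned exception_level, bool secure)
{
    xlat_result_t s1 = xlat_stage1(va, asid);   /* VA -> IPA */
    if (!s1.valid)
        return s1;                              /* stage 1 fault */

    /* Accesses from EL3 or EL2, or from EL1/EL0 in the secure domain,
     * bypass stage 2: the stage 1 output is treated as a physical address. */
    if (exception_level >= 2 || secure)
        return s1;

    return xlat_stage2(s1.addr, vmid);          /* IPA -> PA */
}
```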

Having obtained the physical address, the physical address may then be looked up in the RMU table 128 (the ownership table) to determine whether the realm protections implemented by the MMU allow the memory access to proceed. The realm checks are discussed in more detail with respect to FIG. 18 below. If the check at stage 3 is successful, the validated physical address is output and the memory access is allowed to proceed. If any of the stage 1 or stage 2 address translation checks, or the RMU-enforced realm protection provided at stage 3, fails, then the memory access is rejected. Hence, the protection provided by the realm management unit can be seen as an additional layer of checks on top of any existing address translation checks based on the page tables 120. The checks shown in FIG. 17A can be relatively slow to perform, since there may be multiple tables in memory which need to be accessed and compared against the parameters of the memory access request, the current translation context, or the realm from which the access is made. While these checks could be made on every memory access, once the checks have been performed successfully for a given memory access request, data can be cached within the TLB 100 so that the next time a similar memory access request is issued, it can be allowed without repeating all the checks again. Hence, it may be desirable to perform these permission checks only when there is a miss in the TLB 100, and not on a hit.

FIG. 17B illustrates an example of the TLB structure 100 for caching data for verified memory accesses. While FIG. 17B shows a single TLB, it will be appreciated that some systems may include multiple levels of TLB in a cache hierarchy, with a level 1 TLB storing a smaller subset of translation entries for faster access, and level 2 or further level TLBs storing a larger set of translation entries which can be accessed on a miss in the level 1 TLB. The TLB 100 (or "translation cache") has a number of entries 260, each specifying address translation data for a corresponding memory region. Each entry 260 includes a virtual address tag 262, corresponding to the virtual address for which the entry provides the corresponding physical address 264. In this example, the TLB is a combined stage 1 and stage 2 TLB, so that a virtual address can be translated directly to a physical address using the TLB, without going via an intermediate physical address (although the corresponding stage 1 and stage 2 translations would be performed on a TLB miss in order to locate the correct physical address, the TLB need not store the intervening IPA, and the VA can be mapped directly to the PA). Other examples may use split stage 1 (S1) and stage 2 (S2) TLBs, in which case the VA-PA pair 262, 264 may be replaced with a VA-IPA pair or an IPA-PA pair. The TLB entries 260 are also tagged with the translation context identifier 254 (formed from the ASID 252 and VMID 250). Although this example provides two separate translation context identifiers, other examples may use a single unified translation context identifier, or in the case of split S1/S2 TLBs, the S1 TLB may use the ASID and the S2 TLB may use the VMID. The translation context identifier allows different operating systems or applications which specify the same virtual address to have their accesses mapped to different entries of the TLB 100 providing different physical addresses.

A hit in the TLB 100 requires not only that the tag 262 matches the corresponding portion of the address 258 specified for the memory access request, but also that the translation context identifier stored in the same entry matches the current translation context from which the memory access was issued. One might expect that the comparison of the tag 262 and the translation context identifier 254 would be sufficient to locate the correct physical address 264 for a given memory access. However, if these were the only comparisons performed in the lookup, there would be a potential security weakness if memory accesses hitting in the TLB were accepted without any further check of the Realm Management Unit table 128. This is because it could be possible to create a new process having the same VMID 250 or ASID 252 as a previously executed process, to trick the MMU into accepting a memory access which is actually from a different realm than the one previously accepted as allowed to access the given memory region.

To address this issue, the TLB 100 may specify, within each TLB entry 260, the global RID 270 of the owner realm which owns the corresponding memory region, as well as the visibility attributes 272 set by the owner realm for controlling which other realms are allowed to access the corresponding memory region. When a lookup of the translation cache 100 is performed in response to a memory access to a given target memory region, issued from the current translation context and the current realm, and there is a miss in the translation cache 100, the TLB control circuitry 280 may trigger the table walk unit 102 to access the associated page tables 120 and the RMU table 128 in order to check whether the access is allowed. If the page tables or RMU table 128 exclude the current combination of translation context, exception level and realm from accessing the corresponding memory region, then no data is allocated to the translation cache in response to that memory access. In particular, when a lookup misses and the current realm is excluded from accessing the target memory region by the owner realm of that region, allocation of address translation data to the translation cache is prevented. Hence, an entry is allocated to the TLB 100 only when the corresponding memory access passes the checks of both the MMU page tables 120 and the RMU table 128.

Subsequently, when the translation cache is looked up to check whether it already includes an entry 260 providing an address translation for a given address, the TLB control circuitry 280 determines whether the memory access matches a given entry of the translation cache 100 based on a first comparison between the translation context identifier 254 specified in the corresponding entry 260 and the translation context identifier 254 of the current translation context received with the memory access request, and a second comparison between the global RID 270 specified by the entry 260 and the current global RID associated with the current realm from which the memory access request was issued. This additional check, that the TLB entry was previously verified as allowing the current realm to access the memory region, ensures that even if a malicious supervising process regenerates another process having the same ASID 252 or VMID 250 as a pre-existing process which was allowed by the owner realm to access the data, the access will not hit against the stale entry. The global RID of the current realm can be trusted to be valid and cannot be "faked" in the way that an ASID or VMID can, because the global realm identifier 270 cannot be reassigned to another process without the realm undergoing the scrubbing command 216 discussed with respect to FIG. 18. Hence, if the global RID of the current realm still satisfies the permissions indicated by the owner GRID 270 and visibility attributes 272, this indicates that the previously performed realm table check is still valid.
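A minimal C sketch of the three-way match described above follows; the entry layout and the realm_permitted helper are assumptions for illustration, not the architectural format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative TLB entry layout; field widths and the visibility encoding
 * are assumed. */
typedef struct {
    uint64_t tag;        /* virtual address tag 262 */
    uint64_t pa;         /* physical address 264 */
    uint16_t asid;       /* translation context identifier 254 ... */
    uint16_t vmid;       /* ... formed from ASID 252 and VMID 250 */
    uint64_t owner_grid; /* global RID 270 of the owner realm */
    uint8_t  visibility; /* visibility attributes 272 set by the owner */
} tlb_entry_t;

/* Assumed helper: does the owner's visibility policy admit this realm? */
extern bool realm_permitted(uint64_t current_grid, uint64_t owner_grid,
                            uint8_t visibility);

/* A hit requires all three comparisons described above to pass. */
static bool tlb_hit(const tlb_entry_t *e, uint64_t va_tag,
                    uint16_t asid, uint16_t vmid, uint64_t current_grid)
{
    return e->tag == va_tag                      /* tag comparison */
        && e->asid == asid && e->vmid == vmid    /* context comparison */
        && realm_permitted(current_grid, e->owner_grid, e->visibility);
}
```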

If the second comparison of the realm identifiers detects a mismatch, then even if the tag comparison and translation context comparison match, the access request is treated as a miss in the TLB, since this indicates that the mapping between translation context ID 254 and realm ID 270 has changed since the entry was allocated. This does not necessarily imply that access will be rejected, since another walk of the page tables and RMU table may be triggered by the table walk unit 102, and if the realm checks are successful, this may result in a different entry 260 being allocated in the TLB 100 and the memory access being serviced based on the information from the newly allocated entry.

FIG. 18 is a flow diagram illustrating a method of determining whether a given memory access is allowed by the MMU 26. At step 300, a memory access request is received and a lookup of the TLB 100 is performed. The memory access request specifies at least a virtual address to be accessed, one or more translation context identifiers indicating the current translation context, and a global realm identifier identifying the current realm. For example, the global RID may be read from a status register of the processing component 8, which may be written with the global RID of the current realm upon entry into the realm.

In response to the memory access request, the TLB control circuitry 280 performs a lookup of the TLB. The lookup accesses at least some entries of the TLB. Some approaches may use a fully associative cache structure, in which case all entries of at least the level 1 TLB may be searched and compared against the parameters of the current request to identify whether there is a hit or a miss. Other approaches may use a set-associative cache allocation policy, in which case only a subset of the entries of a given level of the TLB, indexed using the target address of the memory access, need be looked up. For each entry of the accessed set, the TLB control circuitry 280 performs a number of comparisons (in parallel or sequentially), including:

a tag comparison 302 for comparing whether the address of the memory access request matches the tag 262 stored in the accessed entry;

a first (context) comparison 304 for comparing the translation context identifier stored in the accessed entry with the translation context identifier of the memory access request; and

a second (realm) comparison 306 for comparing the global RID of the memory access request with the owner RID 270 and the visibility attributes 272 of each entry of the accessed set.

At step 308, the control circuitry 280 determines whether there is an entry in the TLB which matches all of the comparisons 302, 304, 306. If so, a hit is identified, and at step 310 the physical address 264 specified in the matching entry is returned and the memory access is allowed to proceed based on that physical address. In the case of a hit, there is no need to perform any lookup of the page tables or RMU table (the ownership table lookup can be omitted for this memory access). The protections provided by the page tables and RMU table are invoked only on a miss.

If no entry matches all three of the comparisons 302, 304, 306, then a miss is detected. If further TLB levels are provided, the corresponding lookup steps 300-308 may be performed in the level 2 or subsequent level TLBs. If the lookup misses in the last-level TLB, a walk of the various page tables and the RMU table is performed. Hence, a stage 1 page table walk is performed at step 311, and at step 312 it is determined whether a stage 1 page table fault has occurred (e.g. because no address mapping was defined for the specified virtual address, or because the current parameters of the access request violate the access permissions specified for the target virtual address). If a stage 1 fault occurs, then at step 314 the memory access is rejected and allocation of address mapping data to the TLB 100 in response to the memory access is prevented.

On the other hand, if the access request passes the stage 1 page table checks, then at step 315 a stage 2 page table walk is triggered to obtain the mapping data for the intermediate physical address returned by the stage 1 walk, and at step 316 it is determined whether a stage 2 page table fault has occurred (again, because the address mapping is undefined or because the access is not allowed by the stage 2 access permissions). If a stage 2 fault occurs, the access request is again rejected at step 314.

If no stage 2 fault occurs, then at step 318 an RMU table lookup is triggered based on the physical address returned by stage 2, and at step 320 it is determined whether a realm fault has been detected. A realm fault may be triggered if any of the following occurs (a sketch of these checks is given after the list):

if the lifecycle state for the corresponding memory region is indicated as invalid in the domain ownership table 128. This ensures that pages of the memory address space that have not been subjected to the clean operation 224 shown in FIG. 15 are not accessible to protect any data previously stored in the memory region by another domain from being accessed by a different domain.

The current realm is not allowed by the owner realm of the corresponding memory region to access that region. There may be a number of reasons why a given realm may not be allowed to access a given memory region. If the owner realm has specified that the memory region is visible only to the owner itself and to the owner's descendants, then another realm may not be allowed to access that region. Also, a memory access may be rejected if the current realm is a parent realm of the owner realm and the owner realm has not defined a parent visibility attribute allowing the parent to access the region. Also, if the memory region is currently set as RMU-private as discussed above, the owner realm itself may be prevented from accessing it. At the RMU checking stage, descendant realms of the owner realm may be allowed to access the memory region (as long as it is not an RMU-private region). Hence, this check enforces the access permissions set by the owner realm.

The mapped address from which the physical address for the current memory access was translated by stages 1 and 2 does not match the mapped address specified in the ownership table 128 for the corresponding memory region as shown in FIG. 12. This protects against the following situation: a malicious parent realm could assign ownership of a given memory region to a child realm, but then change the translation mappings in the page tables 120 so that a subsequent memory access triggered by the child realm, using the same virtual address the child realm previously used to refer to a page it owns, now maps to a different physical address which is not actually owned by the child realm itself. By providing, in the ownership table, a reverse mapping from the physical address of the corresponding memory region back to the mapped address from which that physical address was generated when ownership was claimed, security breaches caused by changes to the address mapping can be detected so that the memory access fails.
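The following sketch condenses the realm checks at step 320, under assumed type and field names; the treatment of RMU-private regions is simplified to the owner's-eye view described above.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { REGION_INVALID, REGION_VALID, REGION_RMU_PRIVATE } region_state_t;

typedef struct {
    region_state_t state;    /* lifecycle state in the ownership table 128 */
    uint64_t owner_grid;     /* owner realm of the region */
    uint8_t  visibility;     /* visibility attributes set by the owner */
    uint64_t mapped_addr;    /* reverse mapping recorded when ownership was claimed */
} ownership_entry_t;

/* Assumed helper: owner/visibility policy check, as in the TLB sketch. */
extern bool realm_permitted(uint64_t current_grid, uint64_t owner_grid,
                            uint8_t visibility);

static bool realm_check(const ownership_entry_t *e, uint64_t current_grid,
                        uint64_t mapped_addr_used)
{
    if (e->state == REGION_INVALID)        /* page never cleaned and claimed */
        return false;
    if (e->state == REGION_RMU_PRIVATE)    /* even the owner may not access */
        return false;
    if (!realm_permitted(current_grid, e->owner_grid, e->visibility))
        return false;                      /* excluded by the owner's policy */
    if (e->mapped_addr != mapped_addr_used)
        return false;                      /* page tables were remapped */
    return true;
}
```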

It will be appreciated that other types of checks could also be performed. If the realm checks are successful, then at step 322 the physical address is returned, the memory access is allowed to proceed using the physical address, and a new entry is allocated to the TLB indicating the physical address obtained from the page tables 120 and the owner realm and visibility attributes obtained from the ownership table 128, corresponding to the requested virtual address and translation context.

Upon entry to or exit from a realm, the processing component 8 and/or the RMU 20 may need to perform a number of operations to ensure that realm entry or exit is handled safely. For example, upon entering a realm, a number of checks may need to be performed to verify that the target realm is in the correct lifecycle state (to avoid the security measures being circumvented by, for example, attempting to enter a non-existent realm or a realm whose owned pages have not yet been scrubbed of data). Also, upon exiting a realm, it may be desirable to mask the architectural state stored in the registers of the processing component, so that state data used by a realm at a lower privilege level is not accessible to higher-privilege-level processes (which would otherwise circumvent the security measures provided by the realm protections). One approach for handling realm entry and exit would be to provide dedicated realm entry and realm exit instructions which trigger the RMU 20 to perform the associated operations for entering or exiting a realm.

Another approach is to reuse the mechanisms already provided for exception entry and return in order to enter and exit realms. This reduces the amount of software modification required to support realm entry and exit, and simplifies the architecture and hardware. This is particularly useful because realm boundaries will often correspond to exception level boundaries anyway, and even if new instructions were provided to control entry and exit, the behaviour for handling exceptions would still be required; so, overall, it can be less expensive to extend the exception mechanisms to also control realm entry and exit.

Thus, an Exception Return (ERET) instruction, which would normally return processing from an exception handled in the current realm to another process also handled in the current realm (where that other process may be processed at the same exception level as, or a less privileged exception level than, the exception), may be reused to trigger realm entry from the current realm to a destination realm. In response to a first variant of the exception return instruction, the processing circuitry may switch processing from a current exception level to a less privileged exception level (without changing realm), while in response to a second variant of the exception return instruction, the processing circuitry may switch processing from the current realm to a destination realm which may operate at the same exception level as, or a less privileged exception level than, the current realm. Using an exception return instruction to trigger realm entry can greatly simplify the architecture, reduce the hardware management burden, and reduce the software modification required to support the use of realms.

Another advantage of using an exception return instruction is that, typically, on return from an exception the processing circuitry performs an atomic set of operations in response to the exception return instruction. The set of operations required on return from an exception is performed atomically, so that these operations cannot be divided part-way through: either the instruction fails and none of the atomic set of operations is performed, or the instruction executes successfully and all of the atomic set of operations are performed. In response to the second variant of the exception return instruction, the processing circuitry may similarly perform a second atomic set of operations, which may be different from the first atomic set of operations. The mechanisms already provided in processors to ensure that an exception return instruction completes atomically can thus be reused for realm entry, to avoid a realm entry being only partially performed, which could lead to security vulnerabilities. For example, the second atomic set of operations could include changing the current realm being executed, making the realm execution context state available, and branching to the program counter address at which processing previously stopped on the last execution of the same realm.

The first and second variants of the exception return instruction may have the same instruction encoding. Hence, no modification of the exception return instruction itself is necessary in order to trigger realm entry, which improves compatibility with legacy code. Whether a given exception return instruction executes as the first variant or the second variant may depend on a control value stored in a status register (e.g. first and second values of the control value may represent the first and second variants of the exception return instruction respectively). Hence, the current architectural state at the time the exception return instruction is executed controls whether the instruction returns the processor to a lower privilege level in the same realm, or triggers entry into a new realm.

This approach enables realm entry to be controlled with little software modification, especially since the value in the status register can be set automatically by hardware in response to certain events implying that a realm switch is likely (in addition to allowing the control value to be set voluntarily in response to software instructions). For example, when an exception condition occurs which triggers an exit from a given realm, the processing circuitry may set the control value to the second value for the given realm, so that a subsequent exception return instruction automatically returns processing to the realm in which the exception occurred, even if the exception handler code used to handle the exception is legacy code written without awareness of realms. Alternatively, in some architectures it may be expected that, on exit from a realm, the control value in the status register will still hold the second value which was set before triggering the entry into that realm, so that explicit setting of the control value in the status register may not be required.

At least one realm identifier register may be provided, and in response to the second variant of the exception return instruction, the processing circuitry may identify the destination realm from a realm identifier stored in the realm identifier register. The realm identifier registers may be banked, so that there are multiple realm identifier registers each associated with one of the exception levels, and in response to the second variant of the exception return instruction, the processing circuitry may identify the destination realm from the realm identifier stored in the realm identifier register associated with the current exception level. By using a realm identifier register to store the target realm identifier, there is no need to include this in the instruction encoding of the ERET instruction, which enables the existing format of the ERET instruction to be used to trigger realm entry, reducing the amount of software modification required. The realm identifier in the realm identifier register may be a local realm identifier used by a parent realm to refer to its child realms; realm entry can therefore be restricted to passing from a parent realm to a child realm, and it is not possible to go from a first realm to another realm which is not a direct child of the first realm. In response to the second variant of the exception return instruction, the processing circuitry may trigger a fault condition when the realm associated with the realm ID identified in the RID register is an invalid realm (a realm for which no realm descriptor has been defined, or whose realm descriptor defines a lifecycle state other than Active).
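The sketch below illustrates how a single ERET encoding can behave as either variant, keyed off a control value in a status register, with banked RID registers per exception level. All register, flag and function names, and the bit position of the control value, are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define STATUS_REALM_ENTRY (1u << 0)   /* assumed position of the control value */

typedef struct {
    uint32_t status_reg;   /* holds the first/second-variant control value */
    unsigned current_el;   /* current exception level */
    uint64_t rid_reg[4];   /* banked realm ID registers, one per EL */
    uint64_t link_reg;     /* return address, or REC pointer for variant 2 */
} cpu_state_t;

extern void return_from_exception(cpu_state_t *cpu, uint64_t return_addr);
extern void enter_realm(cpu_state_t *cpu, uint64_t local_rid, uint64_t rec_ptr);
extern void fault(cpu_state_t *cpu);
extern bool realm_is_active(uint64_t local_rid);

void execute_eret(cpu_state_t *cpu)
{
    if (cpu->status_reg & STATUS_REALM_ENTRY) {
        /* Second variant: realm entry. The destination is the child realm
         * named by the banked RID register for the current EL; the link
         * register is reused as a pointer to the realm execution context. */
        uint64_t rid = cpu->rid_reg[cpu->current_el];
        if (!realm_is_active(rid))
            fault(cpu);                 /* invalid realm descriptor */
        else
            enter_realm(cpu, rid, cpu->link_reg);
    } else {
        /* First variant: ordinary exception return within the same realm. */
        return_from_exception(cpu, cpu->link_reg);
    }
}
```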

In response to the second variant of the exception return instruction, the processing circuitry may restore architectural state associated with the thread to be processed in the destination realm from a realm execution context (REC) memory region specified for the exception return instruction. The state restoration may occur immediately in response to the second variant of the exception return instruction (e.g. as part of the atomic set of operations), or could occur later. For example, state restoration may be performed in a lazy manner, so that the state needed for processing to begin in the destination realm (e.g. the program counter, processing mode information, etc.) may be restored immediately, but other state, such as general purpose registers, may be restored gradually as required, in the background of continued processing in the new realm. Hence, the processing circuitry may begin processing in the destination realm before all of the required architectural state has been restored from the REC memory region.

In response to the first variant of the exception return instruction, the processing circuitry may branch to a program instruction address stored in a link register. In contrast, for the second variant of the exception return instruction, the processing circuitry may branch to a program instruction address specified in the realm execution context (REC) memory region. Since the link register is not needed by the second variant of the exception return instruction to identify the branch target directly, the link register can instead be reused to provide a pointer to the REC memory region from which the architectural state of the new realm is to be restored. This avoids needing to provide a further register for storing the REC pointer.

Thus, before executing an exception return instruction intended to cause a realm entry into a given realm, some additional instructions may be included to set the RID register to the realm identifier of the destination realm, and to set the link register to store a pointer to the REC memory region associated with the destination realm. The REC pointer may be obtained by the parent realm from the realm descriptor of the destination realm.

In response to the second variant of the exception return instruction, a fault condition may be triggered by the processing circuitry when the REC memory region is associated with an owner realm other than the destination realm, or when the REC memory region specified for the exception return instruction is invalid. The first check prevents the parent realm from causing the child realm to execute with a processor state which the child realm did not itself create, since only a memory region owned by the destination realm can store the REC which is accessible on entry into that realm (and, as discussed above, the REC memory region will be set as RMU-private). The second check, of the validity of the REC memory region, can be used to ensure that a REC memory region can be used only once for entering the realm, with subsequent attempts to enter the realm using the same REC data being rejected. For example, each REC may have a lifecycle state which can be either invalid or valid. In response to an exception occurring during processing of a given thread in the current realm, the architectural state of that thread may be saved to a corresponding REC memory region, which then transitions from invalid to valid. The REC memory region may then transition from valid back to invalid in response to successful execution of the second variant of the exception return instruction. This prevents the parent realm from maliciously causing incorrect behaviour in the child realm by specifying a pointer to an out-of-date REC memory region, a REC memory region associated with a different thread, or some other REC which is associated with the destination realm but does not store the correct architectural state saved on the previous exit from that realm.
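A minimal sketch of the REC ownership and validity checks on realm entry, with assumed type and field names:

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { REC_INVALID, REC_VALID } rec_state_t;

typedef struct {
    rec_state_t state;       /* valid only between a realm exit and re-entry */
    uint64_t    owner_grid;  /* realm owning the REC's memory region */
    /* ... saved architectural state ... */
} rec_t;

static bool check_rec_on_entry(rec_t *rec, uint64_t destination_grid)
{
    /* Fault if the REC is not owned by the destination realm, or has
     * already been consumed by a previous entry. */
    if (rec->owner_grid != destination_grid || rec->state != REC_VALID)
        return false;
    rec->state = REC_INVALID;   /* each REC may be used for entry only once */
    return true;
}
```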

In a corresponding way, exit from a realm may reuse the mechanisms provided for exception handling. Hence, in response to an exception condition occurring during processing of a first realm which cannot be handled by that realm, the processing circuitry may trigger a realm exit to the parent realm which initialized the first realm. On an exception occurrence triggering a realm exit, some additional operations may be performed which would not be performed for an exception which can be handled within the same realm. This may include, for example, the masking or scrubbing of architectural state and the triggering of state saving to the REC.

However, in some cases an exception may occur which cannot be handled by the parent realm of the first realm in which the exception occurred. Hence, in this case it may be necessary to switch to a more distant ancestor realm beyond the parent. While it would be possible to provide the ability to switch directly from a given realm to an ancestor realm more than one generation older, this would increase the complexity of the status registers needed to handle exception entry and return, or realm exit and entry.

Instead, a nested realm exit may be performed when the exception condition is to be handled at a target exception level with a greater privilege level than the most privileged exception level at which the parent realm of the first realm is allowed to execute. The nested realm exit may include two or more successive realm exits from child realm to parent realm, until a second realm is reached which is allowed to process at the target exception level of the exception which occurred. Stepping up the realm hierarchy one level at a time in this way can simplify the architecture. At each successive realm exit, operations may be performed to save a subset of the processor state to the REC associated with the corresponding realm.

FIG. 19 illustrates the concept of a sub-realm, which can be initialized by a parent realm. As shown in FIG. 19, a given parent realm 600 operating at a particular exception level may initialize a sub-realm 602 operating at the same exception level as its parent. The full realm 600 corresponds to a given software process (or a collection of two or more processes), while the sub-realm corresponds to a predetermined address range within that software process. Since the full realm is the parent of the sub-realm, then, as discussed above, the sub-realm may have the right to access data stored in memory regions owned by its parent full realm, but the sub-realm may have the right to exclude its parent full realm from accessing data stored in memory regions owned by the sub-realm 602. This can be used to allow certain portions of a given software process to be made more secure than other portions of that process. For example, a portion of code for checking passwords in a mobile banking application, or for processing other sensitive information, could be allocated to a sub-realm in order to prevent other portions of the same application or the operating system from accessing that sensitive information.

Sub-realms can generally be handled in the same way as full realms, with some differences explained below. Entry to and exit from a sub-realm may be handled in the same way as discussed above, using exception return instructions and exception events. Hence, a sub-realm may have a child realm ID constructed in the same way as for a full child realm of the same parent, and may be provided with a realm descriptor within the realm descriptor tree as discussed above. Entry into a sub-realm can be triggered simply by executing an ERET instruction, after placing the appropriate sub-realm RID in the RID register before the ERET instruction is executed. Hence, the same type of ERET instruction (of the second variant) can be used to trigger entry into either a full realm or a sub-realm.

One way in which a sub-realm may differ from a full realm is that sub-realms may not be allowed to initialize their own child realms. Hence, a realm initialization command for initializing a new realm may be rejected if the current realm is a sub-realm. The RMU may use a realm type value in the realm descriptor of the current realm to determine whether the current realm is a full realm or a sub-realm. By disabling realm initialization when in a sub-realm, this simplifies the architecture, as no additional status registers have to be provided for use by the sub-realm in initializing further realms.
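A short sketch of this realm-type check, with assumed names; the RMU consults the type value in the current realm's descriptor before processing the command.

```c
#include <errno.h>

typedef enum { REALM_FULL, REALM_SUB } realm_type_t;

typedef struct {
    realm_type_t type;       /* realm type value in the realm descriptor */
    /* ... other descriptor fields ... */
} realm_descriptor_t;

int rmu_handle_realm_init(const realm_descriptor_t *current_realm)
{
    if (current_realm->type == REALM_SUB)
        return -EPERM;       /* sub-realms may not initialize realms */
    /* ... proceed with normal realm initialization ... */
    return 0;
}
```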

Similarly, execution of a realm entry instruction may be disabled when the current realm is a sub-realm. This simplifies the architecture, because it means that the banked registers used for handling realm entry and exit (and exception entry and return), which are duplicated for the different exception states, do not need to be banked again per sub-realm; that would be difficult to manage, since it may not be known at design time how many sub-realms a given process will create. Similarly, exception return events which trigger a switch to a process operating at a lower privilege level may be disabled when the current realm is a sub-realm rather than a full realm. Although in the examples discussed above a single type of ERET instruction serves as both the realm entry instruction and the exception return instruction, this is not essential for all embodiments; where separate instructions are provided, both the realm entry instruction and the exception return instruction may be disabled when the current realm is a sub-realm.

Similarly, when an exception occurs while in a sub-realm, rather than taking the exception directly from the sub-realm, the processing circuitry may trigger an exit from the sub-realm to the parent full realm which initialized the sub-realm, before handling the exception. The exception therefore triggers a return to the parent full realm. The realm exit to the parent full realm may include the state masking, scrubbing and saving operations to the REC, but by avoiding exceptions going directly from a sub-realm to a realm at a higher exception level, this avoids the need to bank the exception control registers again for sub-realms, simplifying the architecture.

For a sub-realm, the boundary exception level, indicating the maximum privilege level at which processing of that realm is allowed, is equal to the boundary exception level of the parent full realm of that sub-realm. In contrast, for a full child realm, the boundary exception level is a less privileged exception level than the boundary exception level of its parent realm.

When a realm is initialized by a parent realm, the parent realm may select whether the new realm is to be a full child realm or a sub-realm, and may set the appropriate realm type parameter in the realm descriptor accordingly. Once the realm is operational, the parent realm can no longer change the realm type, because modification of the realm descriptor is prohibited by the managed realm lifecycle discussed above with respect to FIG. 11.

In summary, the ability to introduce sub-realms, which are managed similarly to full realms but with exception handling, realm initialization and realm entry functions disabled within the sub-realm, enables a smaller portion of code corresponding to a given address range within a full realm's software process to be isolated from other portions of that software, providing additional security for particular pieces of sensitive code or data.

Parameter signatures

The access control model described above protects a realm from any other agent on the system, including other software at the same privilege level, software at higher privilege levels, and other realms. When a realm is created, it is populated with content (memory pages) representing the initial contents of the realm. This initial content is measured. The realm is also given a security configuration, specifying, for example, whether the realm is started in debug mode, scope parameters for deriving realm secrets, and so on, which are typically used by the realm to protect its own internal boot process and keep its data private. The process of establishing a realm, measuring it, and enforcing its security parameters (including the derivation of realm secrets) is managed by the realm management unit.

Once a realm has been established and is executing, an external user (e.g. a client connecting to a server application executing inside the realm) may connect to the realm and request an attestation report. The attestation report allows the external user to verify that the realm is executing on a trusted system, that the realm was initially populated with the expected software (the realm measurement), and that the realm is configured as expected (e.g. not started with debugging enabled). This is useful for applications where data is only provided, or only becomes available, after successful attestation.

However, providing attestation only after the realm has been established and is executing may not be sufficient for the following example use cases:

where data needs to be stored persistently on the system, so that it remains available after a reboot of the realm, or of the system,

if it is desired to make data available to multiple instances of the "same" domain for load balancing, redundancy, etc.,

if the realm needs access to some boot secret before it can be attested, e.g. to access a boot image stored in a protected file system, including after a subsequent restart of the "same" realm or of the system.

Here, "same" realm means one or more realm instances populated with the same initial content and using the same security configuration.

These problems can be solved by introducing the concept of signed realm parameters. The security parameters of the realm are extended to include the expected measurement. The security configuration of the realm is signed by a parameter signer, and the parameter signer's identity is included with the signature. For example, using asymmetric cryptography, the parameters may be signed using a private key owned by the parameter signer, and a hash of the corresponding public key may be used as the parameter signer identity.

The rules for establishing a realm are extended so that, if signed parameters are used, the signature of the actual security configuration must match the expected signature before the realm can be booted, and the signer ID is included in the key derivation used to derive the realm secrets. These rules are enforced by the realm management unit 20 and cannot be bypassed by the parent realm.

This means that a realm can be started, and can access its secrets, only under the following conditions: (i) its security configuration has been correctly signed; and (ii) the signer is the correct entity. If either of these conditions is not met, either the realm will not boot, or the realm will boot but will be unable to access its secrets.

For example, FIG. 20 shows an example of using such a parameter signature to verify, when launching a given realm, whether the security configuration parameters of the realm have been set as expected by the party which requested that the realm be installed on a given hardware platform.

As shown at the top of FIG. 20, when a realm is to be established, the party wishing to use the realm requests the parent realm to create the target realm and establish it with a certain set of realm security configuration parameters 400. For example, the initial realm parameters may be specified in a command sent to the parent realm by some external party (e.g. a banking provider, a healthcare provider) or another party wishing a secure realm to be installed on the hardware platform for interacting with that provider. The parent realm is expected to add the initial set of realm security configuration parameters 400 to the realm descriptor of the target realm using a realm parameter update command.

For example, the initial realm parameters may include some of the realm descriptor contents described above (such as the realm type 402), as well as other security configuration parameters, such as an indication 406 of whether export of data from the first memory 16 (subject to realm ownership protection) to the second, external memory 6 is enabled, or an indication 408 of whether debugging is enabled for the realm. Other realm security configuration parameters (such as the protected address range), as well as key material for deriving root keys, may not be included in the initial parameter set given to the parent realm, but may instead be generated by the RMU at realm establishment, provided to the realm by a trusted intermediate realm as described below, or generated by the realm itself.

The initial realm parameters 400 also include an expected signature 410, corresponding to a signature of the subset of security configuration parameters expected to be established for the realm. The expected signature 410 is computed by a parameter signer (e.g. the party requesting installation of the realm) based on the expected security configuration parameters, and is given to the parent realm along with the security configuration parameters for establishing the realm. The initial realm parameters also include an indication of a signer ID 412, which identifies the parameter signer. When configuring the target realm, the expected signature 410 and signer ID 412 may be recorded by the parent realm in the realm descriptor of the target realm while the target realm is in the clean state.

For example, the expected signature 410 may be generated by the parameter signer as follows: a hash value is generated by applying a hash function to the expected realm parameters, and the hash value is then encrypted using a private key associated with the parameter signer. The signer ID 412 may be a public key of the parameter signer, corresponding to the private key used to generate the signature 410 in an asymmetric cryptography scheme.

The expected signature 410 may be calculated not only over a subset of the expected realm security configuration parameters themselves, but may also be based on a measurement computed as a function of the expected realm content (data and code) expected to be stored in the memory regions owned by the target realm at the time the realm is launched.

The expected signature 410 need not cover all of the security configuration parameters of the realm. Some security configuration parameters which will be set in the realm descriptor of a given realm may be excluded from the calculation of the expected signature 410. These may include the expected signature 410 itself and the signer ID 412. Also, some realm parameters in the realm descriptor may depend on local properties of the particular physical platform, rather than on the security configuration desired by the external party requesting establishment of the realm. For example, the particular addresses defined for the protected address range may depend on the particular address mapping established for the realm on a given physical instance, or some hardware-instance-unique key may be generated by the RMU 20 of the particular physical instance; such parameters may not be predictable by the parameter signer, and so may be excluded from parameter signing.

Upon launching the target realm to make it available for processing by the processing circuitry, the RMU 20 verifies the actual realm parameters 420 represented in the realm descriptor of the target realm against the expected signature 410 provided by the parameter signer at realm build time. At this point, the RMU determines a parameter signature 422 based on: (i) a subset of the actual security configuration parameters 420 of the realm (again excluding certain parameters as noted above), and (ii) a measurement 421 of the actual realm contents of the memory regions owned by the target realm. For example, a hash function 424 may be applied to the realm security configuration parameters 420 and the measurement 421 to generate the parameter signature 422. The hash function 424 may correspond to the hash function used by the parameter signer to generate the expected signature 410 based on the expected realm parameters and expected realm content.

The RMU 20 also obtains the expected signature 410 and the signer ID 412 from the realm descriptor of the target realm, and verifies that the expected signature 410 and the parameter signature 422 match. For example, if the expected signature 410 was calculated by the parameter signer by encrypting a hash of the expected parameters using a private key, the RMU 20 may decrypt the expected signature 410 using the signer's public key as represented by the signer ID 412, and then compare the decrypted signature with the parameter signature 422 generated from the actual realm parameters. Alternatively, other cryptographic techniques may be used to verify that the parameter signature 422 derived from the actual parameters 420 matches the expected signature.
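The sketch below illustrates both sides of this scheme under abstract crypto primitives; hash, sign and verify are assumed placeholders provided elsewhere, not a real library API, and the digest/signature sizes are arbitrary.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Abstract crypto primitives, assumed to be provided elsewhere. */
extern void hash(const void *data, size_t len, uint8_t out[32]);
extern void sign(const uint8_t digest[32], const void *priv_key, uint8_t sig[256]);
extern bool verify(const uint8_t digest[32], const uint8_t sig[256],
                   const void *pub_key);

/* Parameter signer side: hash the expected parameters (and expected realm
 * content measurement), then sign the digest with the signer's private key. */
void make_expected_signature(const void *expected_params, size_t params_len,
                             const void *priv_key, uint8_t expected_sig[256])
{
    uint8_t digest[32];
    hash(expected_params, params_len, digest);
    sign(digest, priv_key, expected_sig);
}

/* RMU side at realm launch: recompute the digest over the actual parameters
 * and measurement, and check it against the expected signature using the
 * public key identified by the signer ID 412. */
bool rmu_check_parameter_signature(const void *actual_params, size_t params_len,
                                   const uint8_t expected_sig[256],
                                   const void *signer_pub_key)
{
    uint8_t digest[32];
    hash(actual_params, params_len, digest);
    return verify(digest, expected_sig, signer_pub_key);
}
```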

In general, if a match is detected between the actual parameter signature 422 derived from the security configuration parameters 420 and the expected signature 410 provided at realm establishment, the launch of the target realm is allowed to proceed (assuming any other security checks are satisfied). On the other hand, if a mismatch is detected between the parameter signature 422 and the expected signature 410, the launch is restricted in one of the following ways: either a fault is generated so that the realm is not allowed to boot at all, or the realm is allowed to boot but is denied access to the keys used to protect the realm contents, preventing it from operating on the protected data. In any case, using the signature verification, the RMU can enforce that the actual parameters at realm launch match the expected parameters signed by the party requesting installation of the realm, without the parent realm being able to maliciously change the realm parameters from those it was given by that party.

The signer ID 412 is also included in the key material for deriving the realm secrets of the target realm. This means that if a given realm is established and its realm parameters are verified as authentic based on a signature provided by the wrong signer, then, although the realm may be bootable, it will not have the correct keys for accessing data protected by keys associated with the correct signer.

This use of parameter signatures is relatively unusual, because a cryptographic signature would normally be used to verify that the identity of the party providing some information matches some known identity. In the scenario shown in FIG. 20, however, the actual identity of the party requesting the realm establishment is not verified against any known identity. In fact, any party is allowed to request creation of a given realm and to have that realm launched, provided the expected signature given at realm creation matches the actual signature generated from the actual parameters at realm launch. Hence, if an attacker provides a different signature from the intended one, and establishes actual parameters matching that different signature, the realm will be allowed to boot. However, including the signer's public key in the realm's key material prevents a realm configured by the attacker from accessing the data protected by the legitimate parameter signer, so security is still enforced. The purpose of the signature check is not to verify the identity of the party requesting installation of the realm, but to check that the parameters defined at launch (regardless of who requested the installation) match the parameters provided with the signature when the realm installation was requested, preventing the parent realm from improperly modifying the parameters.

As shown in FIG. 20, the realm parameters may also optionally include an epoch indication 430, which may represent the version of the software installed for the given realm. The epoch 430 is covered by the expected signature 410 and the parameter signature 422. Furthermore, the epoch 430 is also included in the key material used to derive the realm secrets. This allows the version of the installed realm software to be attested, and allows control over which versions of the realm software are allowed to derive or use secrets established by earlier or later versions. Hence, if a security vulnerability is identified in a version of the realm software with a given epoch value, a later update fixing the problem can be given a later epoch value. Realm secret derivation may be such that a realm is allowed to derive or use secrets for any epoch earlier than or equal to its own, but cannot derive secrets for any epoch more recent than its own. By including the epoch among the parameters covered by the signature, the parent realm is prevented from changing the epoch between the initial realm parameters provided at realm build time and the launch of the realm.
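The following sketch shows one possible way the signer ID and epoch could be folded into realm secret derivation, with the epoch rule enforced at derivation time; the KDF, key material layout, and sizes are assumptions, not the architectural construction.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Abstract key derivation function, assumed to be provided elsewhere. */
extern void kdf(const void *material, size_t len, const char *context,
                uint8_t out[32]);

/* A realm at epoch E may derive secrets for epochs <= E, never later ones. */
bool derive_realm_secret(const uint8_t root_key[32],
                         const uint8_t signer_id[32],
                         uint32_t realm_epoch, uint32_t requested_epoch,
                         uint8_t secret_out[32])
{
    if (requested_epoch > realm_epoch)
        return false;   /* secrets of newer software versions are off-limits */

    uint8_t material[32 + 32 + sizeof(uint32_t)];
    memcpy(material, root_key, 32);
    memcpy(material + 32, signer_id, 32);           /* wrong signer => wrong keys */
    memcpy(material + 64, &requested_epoch, sizeof(uint32_t));
    kdf(material, sizeof(material), "realm-secret", secret_out);
    return true;
}
```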

FIG. 21 is a flow diagram illustrating a method of verifying realm parameters based on the parameter signature. At step 440, a realm launch command specifying the target realm is issued by the parent realm of the target realm to be launched. At step 442, the RMU 20 checks whether the target realm is in the New state, and if not, a fault is triggered at step 444. If the realm is in the New state, the RMU checks at step 446 whether parameter signing is enabled. In some systems, parameter signing may be enabled or disabled for the system as a whole. In other embodiments, parameter signing may be enabled or disabled for individual realms (e.g. using a security configuration parameter of the realm descriptor which the parent realm is not allowed to update). If parameter signing is disabled, then at step 448 the target realm is launched, regardless of any parameter signature.

However, if parameter signing is enabled, then at step 450 the RMU obtains the expected signature 410 for the realm parameters from the realm descriptor of the target realm. At step 452, the RMU generates the parameter signature 422 based on the subset of the actual security configuration parameters 420 defined in the realm descriptor of the target realm, and on the measurement of the realm content 421. At step 454, the RMU determines whether the parameter signature 422 matches the expected signature 410; if so, the launch is allowed at step 456 and the signer ID 412 and epoch 430 are included in the key material used for deriving the realm secrets. If the parameter signature does not match the expected signature, then a launch restriction is applied at step 458. This may be a fault which prevents the boot from succeeding, or alternatively the boot may be allowed but with configuration settings specified so as to prevent the realm from accessing its realm secrets.
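Condensing the FIG. 21 flow into code, again with assumed names (all helpers are placeholders for the steps described above):

```c
#include <stdbool.h>

typedef enum { LAUNCH_OK, LAUNCH_FAULT, LAUNCH_RESTRICTED } launch_result_t;

typedef struct realm realm_t;
extern bool realm_is_new(const realm_t *r);                 /* step 442 */
extern bool parameter_signing_enabled(const realm_t *r);    /* step 446 */
extern bool parameter_signature_matches(const realm_t *r);  /* steps 450-454 */
extern void include_signer_and_epoch_in_keys(realm_t *r);   /* step 456 */
extern void launch(realm_t *r);

launch_result_t rmu_realm_launch(realm_t *r)
{
    if (!realm_is_new(r))
        return LAUNCH_FAULT;            /* step 444 */

    if (parameter_signing_enabled(r)) {
        if (!parameter_signature_matches(r))
            return LAUNCH_RESTRICTED;   /* step 458: fault, or boot without secrets */
        include_signer_and_epoch_in_keys(r);
    }

    launch(r);                          /* steps 448 / 456 */
    return LAUNCH_OK;
}
```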

Trusted intermediate domains

As shown in FIG. 22, a realm may initially be established on a particular system managed by a particular physical instance. Later, the realm may be terminated and then restarted on the same or a different system, or multiple instances of the same realm may be created on the same or different systems for load balancing and redundancy purposes. In any of these cases, it may be desired to share the same data set, protected by keys which are derivable by all instances of the same realm. Similarly, the security configuration parameters of the realm may need to be consistent across the multiple instances of the same realm. Any secret which needs to survive a realm reboot, or which may need to be re-established on a different system, cannot be managed by a particular realm instance on a particular physical system itself.

A related problem is that service providers may need to migrate realms between different systems of a data centre, or between different data centres, to manage load, redundancy, and so on across all available computing resources, as shown in FIG. 23. Without the realm-based protections described above, migration could be implemented, for example, by pausing a virtual machine, paging out the entire virtual machine, restoring it onto a different machine, and restarting it there. The destination is usually not known at the start of the migration process, but is decided at some later point, so the migrated virtual machine could end up being restored on any system. In some cases the process may even be initiated while the virtual machine is still executing ("online migration"). For systems using realm-based protection as described above, this existing migration process does not work, because it would compromise the basic security guarantees of the realm-based system. For example, the realm may have been launched and attested on a system with known security properties. Because the normal migration process involves untrusted system software in the data centre, the realms or RMUs 20 on a given physical system cannot enforce, for a migration performed through paging, that the target system has the same security properties before the realm is restarted on the new system.

These problems may be solved by defining a trusted intermediate realm that is associated with a given target realm and that is allowed to manage the target realm on behalf of an external party associated with the target realm. The trusted intermediate realm may, for example, be allowed to perform certain realm management functions, including injecting "provisioned" secrets and/or saving and restoring security configuration parameters, so that instances of the realm may be migrated between different physical platforms or terminated and later restored, while having a consistent set of keys and security configuration parameters for each instance of the "same realm".

As shown in FIG. 24, a given realm A may specify, in its realm security configuration parameters 400 (i.e. in its realm descriptor 124), an identifier 500 of the trusted intermediate realm, which is another realm operating on the same physical instance. For example, the Global Realm Identifier (GRID) described above can be used to identify the trusted intermediate realm. In some cases, the security configuration parameters in the realm descriptor of the given realm A may also include a flag indicating whether the realm is associated with a trusted intermediate realm. Alternatively, the flag may be unnecessary in some implementations if it can be derived from the value in the trusted intermediate realm identifier field 500. For example, if the trusted intermediate realm identifier 500 is set to a value which is not an allowed identifier for a real realm, this may implicitly indicate that no trusted intermediate realm is associated with the given realm. A realm may be managed by only one trusted intermediate realm, but one trusted intermediate realm may manage several other realms, each of which specifies that same realm as its trusted intermediate realm.
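A minimal sketch of such a descriptor field, using an assumed reserved GRID value in place of a separate flag; the layout and names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

#define GRID_NONE UINT64_MAX   /* assumed reserved, never a valid realm ID */

typedef struct {
    uint64_t trusted_intermediate_grid;  /* identifier 500, or GRID_NONE */
    /* ... other security configuration parameters 400 ... */
} realm_descriptor_t;

static bool has_trusted_intermediate(const realm_descriptor_t *rd)
{
    return rd->trusted_intermediate_grid != GRID_NONE;
}
```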

The trusted intermediate realm may store, in its own owned memory pages 502, information for managing the associated realm A. For example, the trusted intermediate realm may store a number of provisioned secrets 504, which can be injected into realm A as key material for deriving the keys which protect the data and code of realm A, and key management policies 506, which may specify information about how and when those keys may be injected. Also, the owned pages 502 of the trusted intermediate realm may store a configuration record 508, indicating a set of security configuration parameters which can be injected into the realm descriptor of realm A. Updates of the security configuration parameters of realm A by the trusted intermediate realm may be limited to the period before the realm is launched. Some parameters of the realm descriptor may not be allowed to be set by the trusted intermediate realm (e.g. the identifier of the trusted intermediate realm itself).

In some instances, the security configuration record 508 may have been provided to the trusted intermediate realm when the trusted intermediate realm itself was established (e.g., the security configuration record 508 for the realm A to be managed may be included in the bundle of information provided to the parent realm of the trusted intermediate realm at its establishment).

Alternatively, the security configuration record may be generated as a snapshot of the configuration parameters of realm A, taken after realm A has been launched. For example, the trusted intermediate realm may be allowed to issue a command to the RMU 20 requesting that a snapshot of the security configuration parameters of realm A be returned and stored as a security configuration record in a memory region owned by the trusted intermediate realm. If the realm issuing the command is any realm other than the trusted intermediate realm specified by the identifier 500 within the realm security configuration parameters 400 defined for realm A, the command may be rejected by the RMU. This allows the parameters of a live realm to be backed up so that they can be restored later, for example to restore a previously terminated realm as shown in Fig. 22, to migrate a realm to a different physical platform as shown in Fig. 23, or to roll the configuration of a given realm back to an earlier state. The security configuration record 508 may be associated with a migration policy 510, which may define attributes controlling how, when, and under what conditions a realm is allowed to migrate to a different platform.
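
A sketch of how the RMU might gate such a snapshot command is given below; the command name, the types, and the fixed-size parameter snapshot are all assumptions made for illustration.

    #include <stdint.h>
    #include <string.h>

    typedef enum { RMU_OK, RMU_FAULT } rmu_status_t;

    typedef struct {
        uint64_t grid;
        uint64_t trusted_intermediate; /* identifier field 500 (0 if none) */
        uint8_t  parameters[256];      /* security configuration parameters 400 */
    } realm_descriptor_t;

    /* Accept the command only from the trusted intermediate realm named in
     * field 500 of the target's descriptor; copy the snapshot into a buffer
     * backed by a memory region owned by that intermediate realm. */
    rmu_status_t rmu_save_config_record(uint64_t issuer_grid,
                                        const realm_descriptor_t *target,
                                        uint8_t record_out[256])
    {
        if (target->trusted_intermediate == 0 ||
            issuer_grid != target->trusted_intermediate)
            return RMU_FAULT;
        memcpy(record_out, target->parameters, sizeof target->parameters);
        return RMU_OK;
    }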

A trusted intermediate realm need not support both injecting provisioned secrets and saving/restoring security configuration records. Some intermediate realms (or some embodiments of the realm-based architecture as a whole) may support only one of these functions.

The association of a realm with a particular trusted intermediate realm at realm establishment can be verified by an external party, or by other realms, by requesting that the RMU 20 generate an attestation of the target realm and/or an attestation of the trusted intermediate realm. These attestations may include signatures over the security configuration parameters, or over the realm contents, of the target realm A managed by the trusted intermediate realm. When an attestation is generated for realm A, the fact that realm A is associated with a trusted intermediate realm may be evident from that attestation. When attesting the target realm A, the verifying entity that checks the attestation may also attest the associated trusted intermediate realm, either because a direct attestation of the intermediate realm is included in the attestation of target realm A, or because the attestation of the target realm may specify an identifier of the trusted intermediate realm, so that a separate attestation-generating command can be issued to request a separate attestation of the intermediate realm.

Thus, defining a trusted intermediate realm allows multiple instances of a given realm to be established at different times or on different physical platforms, with each instance sharing access to common keys or security configuration parameters, which would be difficult to manage securely through the RMU alone or through the realm's native code.

FIG. 25 illustrates a method of processing a security configuration parameter update command, used to update the security configuration parameters associated with a target realm. At step 520, a security configuration parameter update command is received specifying the target realm whose parameters are to be updated.

At step 522, the RMU 20 checks whether the target realm identified by the command is currently in the clean state. If not, a fault is generated at step 524, because the security configuration parameters cannot be updated once the realm has transitioned from the clean state to the new state. If the realm is in the clean state, then at step 526 it is determined whether the realm issuing the command is the parent of the target realm. If so, then at step 528 the RMU allows the requested parameter update, provided the update affects only the subset of security configuration parameters that the parent realm is allowed to update. Some of the contents of a given realm's descriptor, such as keys, may not be accessible to the realm's parent. Further, the parent realm may not be allowed to update certain parameters, such as whether the realm is associated with a trusted intermediate realm, the identity of the trusted intermediate realm, the expected signature 410, the signer ID 412, and so on.

If the command was not issued by the parent realm, the RMU 20 checks at step 530 whether the target realm is associated with a trusted intermediate realm and whether the command was issued by that trusted intermediate realm. If the target realm is not associated with any trusted intermediate realm, or the command was not issued by the trusted intermediate realm associated with the target realm, a fault is triggered at step 532. Otherwise, if the command was issued by the trusted intermediate realm associated with the target realm, then at step 534 the update to the realm descriptor parameter is allowed. Again, there may be some parameters that the trusted intermediate realm is not allowed to update, but these may be fewer than the parameters the parent realm is barred from updating. For example, the trusted intermediate realm may not be allowed to change which realm is identified as the trusted intermediate realm. However, unlike the parent realm, the trusted intermediate realm may be allowed to update a provisioned secret, i.e., keying material used to generate keys for protecting the data/code associated with the realm.
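
The decision flow of Fig. 25 might be expressed in C roughly as follows. The realm states, the parameter identifiers, and the two "allowed subset" predicates are illustrative assumptions; only the ordering of the checks comes from the figure.

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { REALM_CLEAN, REALM_NEW, REALM_ACTIVE } realm_state_t;
    typedef enum { RMU_OK, RMU_FAULT } rmu_status_t;

    #define NUM_PARAMS                 1024u
    #define PARAM_TRUSTED_INTERMEDIATE 0x200u /* hypothetical ID of field 500 */

    typedef struct {
        uint64_t      grid;
        uint64_t      parent_grid;
        uint64_t      trusted_intermediate;   /* 0 if none */
        realm_state_t state;
        uint64_t      params[NUM_PARAMS];     /* hypothetical parameter array */
    } realm_t;

    /* Assumed: the parent may not touch keys, field 500, the expected
     * signature 410 or the signer ID 412; modelled by a hypothetical cut-off. */
    static bool parent_may_update(uint32_t id)       { return id < 0x100u; }
    /* Assumed: a wider subset, but never field 500 itself. */
    static bool intermediate_may_update(uint32_t id) { return id != PARAM_TRUSTED_INTERMEDIATE; }

    rmu_status_t rmu_update_parameter(const realm_t *issuer, realm_t *target,
                                      uint32_t id, uint64_t value)
    {
        if (id >= NUM_PARAMS)
            return RMU_FAULT;
        if (target->state != REALM_CLEAN)            /* step 522 */
            return RMU_FAULT;                        /* step 524 */

        if (issuer->grid == target->parent_grid) {   /* step 526 */
            if (!parent_may_update(id))
                return RMU_FAULT;
            target->params[id] = value;              /* step 528 */
            return RMU_OK;
        }

        if (target->trusted_intermediate == 0 ||     /* step 530 */
            issuer->grid != target->trusted_intermediate)
            return RMU_FAULT;                        /* step 532 */

        if (!intermediate_may_update(id))
            return RMU_FAULT;
        target->params[id] = value;                  /* step 534 */
        return RMU_OK;
    }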

The RMU may also support commands that trigger the capture of a security configuration record 508 representing a snapshot of a subset of the target realm's security configuration parameters. Such commands are accepted only if they are issued by the trusted intermediate realm defined in the realm descriptor of the target realm.

FIG. 26 illustrates a method of processing an attestation command, which triggers generation of an attestation of a target realm. At step 550, the RMU 20 receives an attestation command identifying the target realm for which the attestation is to be generated. At step 552, it is determined whether the attestation command is acceptable; if not, a fault is generated at step 554. Various checks may be performed to determine whether the attestation command is acceptable. If the target realm identified by the attestation command is not valid, the attestation command may be rejected and a fault triggered. Further, if the attestation command is issued by a realm other than the trusted intermediate realm associated with the target realm, the attestation command may be accepted only if the target realm is in the active state; if the target realm is in any other state, the attestation command may be rejected. If the attestation command is issued by the trusted intermediate realm, it may be accepted if the target realm is in any of the clean, new, or active states.

If the attestation command is accepted, then at step 556 attestation information is generated based on the security configuration parameters of the target realm, providing information that enables a verifying entity to check whether the target realm meets certain properties. At step 558, the RMU 20 checks whether the target realm is associated with a trusted intermediate realm. If so, then at step 560 the RMU 20 includes in the attestation information an indication that the target realm is associated with a trusted intermediate realm, and also includes intermediate realm attestation information that either identifies the trusted intermediate realm or directly attests to properties of the intermediate realm. If the target realm has no associated trusted intermediate realm, step 560 is omitted. Either way, at step 562 the attestation information is signed with a key to prove its validity, and the attestation is output to the party that requested it.
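
The acceptance rules and the inclusion of intermediate realm information from Fig. 26 might look as follows in C; the attestation layout and the signing primitive are placeholders assumed for the sketch.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef enum { REALM_CLEAN, REALM_NEW, REALM_ACTIVE, REALM_INVALID } realm_state_t;
    typedef enum { RMU_OK, RMU_FAULT } rmu_status_t;

    typedef struct {
        uint64_t      grid;
        uint64_t      trusted_intermediate; /* 0 if none */
        realm_state_t state;
    } realm_t;

    typedef struct {
        uint64_t target_grid;
        bool     has_intermediate;   /* step 560: flagged in the attestation */
        uint64_t intermediate_grid;  /* lets the verifier attest it separately */
        uint8_t  signature[64];      /* step 562 */
    } attestation_t;

    /* Placeholder for signing with an attestation key. */
    static void sign_attestation(attestation_t *a)
    {
        memset(a->signature, 0, sizeof a->signature);
    }

    rmu_status_t rmu_attest(const realm_t *issuer, const realm_t *target,
                            attestation_t *out)
    {
        bool from_intermediate = target->trusted_intermediate != 0 &&
                                 issuer->grid == target->trusted_intermediate;

        /* Step 552: the intermediate may attest a clean, new or active
         * target; any other issuer only an active target. */
        if (target->state == REALM_INVALID)
            return RMU_FAULT;                                       /* step 554 */
        if (!from_intermediate && target->state != REALM_ACTIVE)
            return RMU_FAULT;

        out->target_grid       = target->grid;                      /* step 556 */
        out->has_intermediate  = target->trusted_intermediate != 0; /* step 558 */
        out->intermediate_grid = target->trusted_intermediate;      /* step 560 */
        sign_attestation(out);                                      /* step 562 */
        return RMU_OK;
    }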

Hence, when the target realm is associated with a trusted intermediate realm, the verifier may use the attestation to check whether the trusted intermediate realm has certain properties, either by checking the attestation of the target realm itself, or by using an identifier included in the attestation of the target realm to request a further attestation of the trusted intermediate realm. In this way, a relying party can be assured that the target realm was correctly configured by a properly functioning trusted intermediate realm.

Thus, in summary, a trusted intermediate realm is defined for managing a given realm; it is itself a realm, associated with the same realm administrator (e.g., the same banking provider, healthcare provider, etc.) as the given realm to be managed, but with special properties that allow it to manage other realms on behalf of that party. In the simplest implementation, one instance of the trusted intermediate realm (per realm administrator) may exist on every system on which a realm may exist. Because the trusted intermediate realm is itself a realm, it can be attested by the realm administrator as part of commissioning, ensuring that the trusted intermediate realm can only become valid/commissioned on systems with the required security attributes.

Hence, a realm may be associated with a trusted intermediate realm at realm establishment, such that: only the identified trusted intermediate realm may manage the realm; the fact that the realm is associated with a trusted intermediate realm is apparent from the realm's attestation; and when attesting the realm, the verifying entity may also attest the associated trusted intermediate realm. The trusted intermediate realm may receive the security context of the realm from a given physical instance of the device 2 and restore that security context on the same instance or on a different instance. The realm owner's migration policy is encoded within the trusted intermediate realm associated with that realm owner, and the policy itself can be attested by attesting the trusted intermediate realm. This includes policies on how and when the realm security context may be communicated between different systems. As a secondary use, the same method can support related use cases, e.g., backup/restore of a complete realm, or taking snapshots/checkpoints of a realm that allow it to be rolled back to a previously known state.

The trusted intermediate realm may attest the managed realm before the realm is launched. The trusted intermediate realm is allowed to inject a provisioned root secret during realm establishment, prior to launching the realm. The realm owner's key management policy is encoded within the trusted intermediate realm associated with that realm owner, and the policy itself can be attested by attesting the trusted intermediate realm. This includes providing the same root secret after a realm reboot, providing the root secret to multiple instances of the same realm, or providing the same root secret regardless of which system the realm boots on.
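
As a final sketch, the gating of root secret injection could be modelled as below. The names and the fixed secret size are assumptions; the only behaviour taken from the description is that injection is permitted solely to the trusted intermediate realm, and solely before the realm is launched.

    #include <stdint.h>
    #include <string.h>

    #define SECRET_BYTES 32

    typedef enum { REALM_CLEAN, REALM_NEW, REALM_ACTIVE } realm_state_t;
    typedef enum { RMU_OK, RMU_FAULT } rmu_status_t;

    typedef struct {
        uint64_t      grid;
        uint64_t      trusted_intermediate;
        realm_state_t state;
        uint8_t       root_secret[SECRET_BYTES]; /* keying material for realm keys */
    } realm_t;

    rmu_status_t rmu_inject_root_secret(const realm_t *issuer, realm_t *target,
                                        const uint8_t secret[SECRET_BYTES])
    {
        if (target->state == REALM_ACTIVE)     /* only before the realm launches */
            return RMU_FAULT;
        if (target->trusted_intermediate == 0 ||
            issuer->grid != target->trusted_intermediate)
            return RMU_FAULT;
        memcpy(target->root_secret, secret, SECRET_BYTES);
        return RMU_OK;
    }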

FIG. 27 shows a simulator implementation that may be used. While the earlier described embodiments implement the present invention in terms of apparatus and methods for operating specific processing hardware supporting the technique in question, it is also possible to provide an instruction execution environment in accordance with what is described herein through the use of a computer program. Such computer programs are often referred to as simulators, insofar as they provide a software-based implementation of a hardware architecture. Varieties of simulator computer programs include emulators, virtual machines, models, and binary translators, including dynamic binary translators. Typically, a simulator implementation may run on a host processor 730, optionally running a host operating system 720, supporting the simulator program 710. In some arrangements, there may be multiple layers of simulation between the hardware and the provided instruction execution environment, and/or multiple distinct instruction execution environments provided on the same host processor. Historically, powerful processors have been required to provide simulator implementations that execute at reasonable speed, but such an approach may be justified in certain circumstances, such as when there is a desire to run code native to another processor for compatibility or reuse reasons. For example, a simulator implementation may provide an instruction execution environment with additional functionality not supported by the host processor hardware, or provide an instruction execution environment typically associated with a different hardware architecture. An overview of simulation is given in "Some Efficient Architecture Simulation Techniques", Robert Bedichek, Winter 1990 USENIX Conference, pages 53-63.

To the extent that embodiments have been described above with reference to particular hardware constructs or features, in a simulated embodiment equivalent functionality may be provided by suitable software constructs or features. For example, particular circuitry (such as the MMU 26 and the RMU 20) may be implemented in a simulated embodiment as computer program logic (e.g., memory access program logic and realm management program logic) in the simulator program 710. Similarly, memory hardware, such as registers or caches, may be implemented in a simulated embodiment as software data structures. In arrangements where one or more of the hardware elements referenced in the previously described embodiments are present on the host hardware (e.g., host processor 730), some simulated embodiments may make use of the host hardware, where suitable.

The simulator program 710 may be stored on a computer-readable storage medium (which may be a non-transitory medium), and provides a program interface (instruction execution environment) to the target code 700 (which may include applications, operating systems, and a hypervisor as shown in Fig. 2) that is the same as the application program interface of the hardware architecture being modelled by the simulator program 710. Thus, the program instructions of the target code 700, including the control of memory accesses based on the realm protection functionality described above, may be executed from within the instruction execution environment using the simulator program 710, so that a host computer 730 which does not actually have the hardware features of the apparatus 2 discussed above can emulate these features.

In the present application, the words "configured to..." are used to mean that an element of an apparatus has a configuration able to carry out the defined operation. In this context, a "configuration" means an arrangement or manner of interconnection of hardware or software. For example, the apparatus may have dedicated hardware which provides the defined operation, or a processor or other processing device may be programmed to perform the function. "Configured to" does not imply that the apparatus element needs to be changed in any way in order to provide the defined operation.

Although illustrative embodiments of the present invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.
