Provisioning services (PVS) cloud streaming with read caching

Document No. 1821538 | Publication date: 2021-11-09

Reading note: This technology, "Provisioning services (PVS) cloud streaming with read caching," was designed and created by M. Li and S. Graham on 2021-01-14. Its main content is as follows: a computing system includes a server that streams a base disk image over a communication network, and a client machine. The client machine includes a read cache that stores pre-boot data including a network driver, and a processor coupled to the read cache. The processor initiates a pre-boot of the client machine using the pre-boot data and, after the network driver is running, receives the streamed base disk image from the server via the communication network to continue booting the client machine.

1. A computing system, comprising:

a server configured to stream a base disk image over a communication network; and

a client machine, comprising:

a read cache configured to store pre-boot data including a network driver; and

a processor coupled to the read cache and configured to perform the following:

initiating a pre-boot of the client machine using the pre-boot data; and

after the network driver is running, receiving the streamed base disk image from the server via the communication network to continue booting the client machine.

2. The computing system of claim 1, wherein the base disk image comprises an operating system, and wherein the pre-boot data further comprises a subset of the operating system comprising the network driver.

3. The computing system of claim 2, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

4. The computing system of claim 1, wherein the processor is further configured to switch from using the pre-boot data in the read cache to using the data in the streamed base disk image in response to the network driver running.

5. The computing system of claim 1, wherein the processor is further configured to execute instructions to access the read cache for the pre-boot data at pre-boot time.

6. The computing system of claim 5, wherein the client machine further comprises firmware that provides the instructions to be executed at pre-boot time, and wherein the firmware comprises UEFI (Unified Extensible Firmware Interface) firmware.

7. The computing system of claim 1, wherein the pre-boot data comprises an operating system kernel including a network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

8. A client machine, comprising:

a read cache configured to store pre-boot data including a network driver; and

a processor coupled to the read cache and configured to perform the following:

initiating a pre-boot of the client machine using the pre-boot data; and

after the network driver is running, receiving the streamed base disk image from the server via the communication network to continue booting the client machine.

9. The client machine of claim 8, wherein the base disk image comprises an operating system, and wherein the pre-boot data further comprises a subset of the operating system comprising the network driver.

10. The client machine of claim 9, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

11. The client machine of claim 8, wherein the processor is further configured to switch from using the pre-boot data in the read cache to using the data in the streamed base disk image in response to the network driver running.

12. The client machine of claim 8, wherein the processor is further configured to execute instructions to access the read cache for the pre-boot data at pre-boot time.

13. The client machine of claim 12, further comprising firmware that provides the instructions to be executed at pre-boot time, and wherein the firmware comprises UEFI (Unified Extensible Firmware Interface) firmware.

14. The client machine of claim 8, wherein the pre-boot data comprises an operating system kernel including the network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

15. A method, comprising:

the client machine storing pre-boot data including a network driver in a read cache within the client machine;

the client machine initiating a pre-boot of the client machine using the pre-boot data; and

after the network driver is running, the client machine receiving the streamed base disk image from the server via the communication network to continue booting the client machine.

16. The method of claim 15, wherein the base disk image comprises an operating system, and wherein the pre-boot data further comprises a subset of the operating system comprising the network driver.

17. The method of claim 16, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

18. The method of claim 15, wherein the client machine is further operative to switch from using the pre-boot data in the read cache to using the data in the streamed base disk image in response to the network driver running.

19. The method of claim 15, further comprising the client machine executing instructions to access the read cache for pre-boot data at pre-boot time.

20. The method of claim 15, wherein the pre-boot data comprises an operating system kernel including the network driver, and the network driver is connected to the communication network as the operating system kernel begins running.

Technical Field

The present disclosure relates to desktop virtualization, and more particularly to the provisioning of client machines.

Background

Many organizations are now using desktop virtualization to provide more flexible options to address the changing needs of their users. In desktop virtualization, a user's computing environment may be separated from the user's physical computing device.

In an environment for centralized management of desktops, multiple client machines may receive access to, or execute a computing environment based on, a copy of a single "golden master" desktop disk image. This golden image is a shared virtual machine template and includes the operating system and applications. The golden image may also be referred to as a base disk image.

One approach for machine deployment of the base disk image is based on image cloning. Image cloning copies the base disk image to a virtual disk in each client machine using, for example, an xcopy operation. Once the base disk image is deployed to each client machine, the system follows a distributed computing model. This approach allows the client machines to perform offline computations after image deployment.

Another approach for machine deployment of base disk images is based on provisioning services (PVS). The provisioning service streams the base disk image to the client machine on demand over the communication network using network boot technology. This approach requires the client machine to maintain a constant network connection.

Disclosure of Invention

A computing system includes a server that streams a base disk image over a communication network, and a client machine. The client machine includes a read cache configured to store pre-boot data including a network driver, and a processor coupled to the read cache. The processor is configured to initiate a pre-boot of the client machine using the pre-boot data and, after the network driver is running, receive the streamed base disk image from the server via the communication network to continue booting the client machine.

The base disk image may include an operating system, and the pre-boot data may also include a subset of the operating system that includes the network driver.

The operating system in the base disk image may include an operating system network driver, and the network driver is configured the same as the operating system network driver. The processor may be further configured to switch from using the pre-boot data in the read cache to using the data in the streamed base disk image in response to the network driver running.

The processor may be further configured to execute instructions to access the read cache for the pre-boot data at pre-boot time. More particularly, the client machine may further include firmware to provide the instructions to be executed at pre-boot time. The firmware may be, for example, UEFI (Unified Extensible Firmware Interface) firmware.

The pre-boot data may include an operating system kernel that includes the network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

Another aspect relates to a client machine that includes a read cache configured to store pre-boot data including a network driver, and a processor coupled to the read cache. The processor is configured to initiate a pre-boot of the client machine using the pre-boot data and, after the network driver is running, receive the streamed base disk image from the server via the communication network to continue booting the client machine.

Yet another aspect relates to a method for operating a client machine as described above. The method includes storing pre-boot data including a network driver in a read cache within the client machine. The method also includes initiating a pre-boot of the client machine using the pre-boot data, and after the network driver is running, receiving the streamed base disk image from the server via the communication network to continue booting the client machine.
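The method steps above can be sketched as a short Python model. This is a minimal illustration only; the class and attribute names (StreamingServer, ClientMachine, and so on) are assumptions made for the sketch and do not appear in the disclosure.

```python
# Illustrative model of the claimed method; all names are hypothetical.

class StreamingServer:
    """Stands in for the provisioning server that streams the base disk image."""
    def __init__(self, base_disk_image):
        self.base_disk_image = base_disk_image

    def stream_base_disk_image(self):
        # A real server would stream blocks on demand over the network.
        return self.base_disk_image


class ClientMachine:
    def __init__(self, server):
        self.server = server
        self.read_cache = {}             # local read cache within the client machine
        self.network_driver_running = False
        self.boot_source = None

    def store_preboot_data(self, preboot_data):
        # Step 1: store pre-boot data, including the network driver,
        # in the read cache within the client machine.
        self.read_cache["preboot"] = preboot_data

    def pre_boot(self):
        # Step 2: initiate a pre-boot using the cached pre-boot data;
        # no network connection exists yet at this point.
        preboot = self.read_cache["preboot"]
        self.network_driver_running = "network_driver" in preboot
        return self.network_driver_running

    def continue_boot(self):
        # Step 3: once the network driver is running, receive the
        # streamed base disk image to continue booting.
        if not self.network_driver_running:
            raise RuntimeError("network driver is not running")
        self.boot_source = self.server.stream_base_disk_image()
        return self.boot_source
```

In this sketch the only data available before the network driver runs is what was placed in the read cache, mirroring the claimed ordering of the steps.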

Drawings

FIG. 1 is a schematic block diagram of a network environment of computing devices in which aspects of the present disclosure may be implemented.

FIG. 2 is a schematic block diagram of a computing device that may be used to practice embodiments of the client machine or remote machine shown in FIG. 1.

FIG. 3 is a schematic block diagram of a cloud computing environment in which aspects of the present disclosure may be implemented.

FIG. 4 is a schematic block diagram of a desktop, mobile, and web-based device operating a workspace app in which aspects of the present disclosure may be implemented.

FIG. 5 is a schematic block diagram of a workspace network environment for computing devices in which aspects of the present disclosure may be implemented.

FIG. 6 is a schematic block diagram of a computing system having a client machine including a read cache in which aspects of the present disclosure may be implemented.

FIG. 7 is a more detailed schematic block diagram of the computing system shown in FIG. 6.

FIG. 8 is a flow chart of a method for operating the client machine shown in FIG. 6.

FIG. 9 is a more detailed flow chart of a method for operating the client machine shown in FIG. 6.

Detailed Description

The present description makes reference to the accompanying drawings, in which exemplary embodiments are shown. However, many different embodiments may be used and therefore the description should not be construed as limited to the particular embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. Like numbers refer to like elements throughout, and prime notation is used to indicate similar elements in different embodiments.

In desktop virtualization, cloud service providers provide client machines that operate based on copies of a shared "golden master" desktop disk image. As described above, the golden image may also be referred to as a base disk image, and includes an operating system and applications. There are two different ways to provide the base disk image to the client machine, which may also be referred to as a virtual machine.

Some cloud service providers stream the base disk image from the provisioning server to each client machine, such as, for example, Citrix Provisioning Services (PVS) from Citrix Systems, Inc. This is possible based on the firmware in each client machine, which provides a network driver during pre-boot. The network driver allows the client machine to boot from an operating system within the base disk image streamed to the client machine. Streaming the base disk image has the advantage of making patches, updates, and other configuration changes to the base disk image once. Then, when each client machine boots up, it will start using the updated base disk image.

Other cloud service providers, such as Microsoft Corporation with Azure Cloud and Google Inc. with Google Cloud Platform, replicate the base disk image to a virtual disk within each client machine. These client machines do not have a network driver in their firmware because they boot directly from the operating system in the base disk image copied to their respective virtual disks. A disadvantage of image cloning is that when changes are made to the base disk image, each client machine needs to receive a copy of the updated base image before starting up again. The number of times the base disk image is to be replicated depends on the number of client machines supported by the cloud service provider, which may involve hundreds or thousands of client machines, for example.

The techniques and teachings of this disclosure give a cloud service provider that is unable to stream a base disk image to a client machine, because the firmware within the client machine lacks a network driver, the ability to do so. As will be explained in detail below, this is accomplished with a client machine having a read cache that provides a network driver for use during pre-boot.
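The resulting switch from cached pre-boot reads to streamed reads (as recited in claims 4, 11, and 18) can be pictured with a minimal sketch. The function name and the dictionary-backed storage are assumptions made for illustration, not the disclosed implementation.

```python
def read_sector(sector, read_cache, streamed_image, network_driver_running):
    """Serve disk reads from the local read cache during pre-boot; once the
    network driver is running, switch to the base disk image streamed from
    the provisioning server."""
    if network_driver_running:
        # Network path is up: read from the streamed base disk image.
        return streamed_image[sector]
    # Pre-boot: only the locally cached pre-boot data is available.
    if sector not in read_cache:
        raise LookupError(f"sector {sector} is not in the pre-boot read cache")
    return read_cache[sector]
```

Before the switch, a read of data outside the cached pre-boot subset fails, which is why the pre-boot data must contain everything needed to bring up the network driver.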

Referring initially to FIG. 1, a non-limiting network environment 10 in which various aspects of the present disclosure may be implemented includes one or more client machines 12A-12N, one or more remote machines 16A-16N, one or more networks 14, 14', and one or more appliances 18 installed in the computing environment 10. The client machines 12A-12N communicate with the remote machines 16A-16N via the networks 14, 14'.

In some embodiments, the client machines 12A-12N communicate with the remote machines 16A-16N via an intermediary appliance 18. The appliance 18 is shown as being located between the networks 14, 14' and may also be referred to as a network interface or gateway. In some embodiments, the appliance 18 may operate as an Application Delivery Controller (ADC) to provide clients with access to business applications and other data deployed in a datacenter, a cloud, or delivered as software as a service (SaaS) across a range of client devices, and/or to provide other functionality such as load balancing, and the like. In some embodiments, multiple appliances 18 may be used, and the appliance(s) 18 may be deployed as part of the network 14 and/or 14'.

The client machines 12A-12N may be generally referred to as client machines 12, local machines 12, clients 12, client nodes 12, client computers 12, client devices 12, computing devices 12, endpoints 12, or endpoint nodes 12. The remote machines 16A-16N may generally be referred to as servers 16 or server farms 16. In some embodiments, a client device 12 may have the capability to function both as a client node seeking access to resources provided by the server 16 and as a server 16 providing access to hosted resources for other client devices 12A-12N. The networks 14, 14' may be generally referred to as networks 14. The network 14 may be configured in any combination of wired and wireless networks.

The server 16 may be any server type, such as, for example: a file server; an application server; a web server; a proxy server; an appliance; a network appliance; a gateway; an application gateway; a gateway server; a virtualization server; a deployment server; a Secure Sockets Layer Virtual Private Network (SSL VPN) server; a firewall; a web server; a server executing an active directory; a cloud server; or a server executing an application acceleration program that provides firewall functionality, application functionality, or load balancing functionality.

The server 16 may execute, operate or otherwise provide an application that may be any one of: software; a program; executable instructions; a virtual machine; a hypervisor; a web browser; a web-based client; a client-server application; a thin-client computing client; an ActiveX control; a Java applet; software related to voice over internet protocol (VoIP) communications, such as a soft IP telephone; an application for streaming video and/or audio; an application for facilitating real-time data communication; an HTTP client; an FTP client; an Oscar client; a Telnet client; or any other set of executable instructions.

In some embodiments, the server 16 may execute a remote presentation service program or other program that uses a thin client or remote display protocol to capture display output generated by an application executing on the server 16 and transmit the application display output to the client device 12.

In other embodiments, the server 16 may execute a virtual machine that provides a user of the client device 12 with access to the computing environment. The client device 12 may be a virtual machine. The virtual machines may be managed by, for example, a hypervisor, a Virtual Machine Manager (VMM), or any other hardware virtualization technique within server 16.

In some embodiments, the network 14 may be: a Local Area Network (LAN); a Metropolitan Area Network (MAN); a Wide Area Network (WAN); a primary public network 14; or a primary private network 14. Additional embodiments may include a network 14 of mobile telephone networks that communicate between mobile devices using various protocols. For short-range communication within a Wireless Local Area Network (WLAN), the protocols may include 802.11, Bluetooth, and Near Field Communication (NFC).

Fig. 2 depicts a block diagram of a computing device 20 that may be used to practice embodiments of client device 12, appliance 18, and/or server 16. Computing device 20 includes one or more processors 22, volatile memory 24 (e.g., Random Access Memory (RAM)), non-volatile memory 30, a User Interface (UI) 38, one or more communication interfaces 26, and a communication bus 48.

The nonvolatile memory 30 may include: one or more Hard Disk Drives (HDDs) or other magnetic or optical storage media; one or more Solid State Drives (SSDs), such as flash drives or other solid state storage media; one or more hybrid magnetic and solid state drives; and/or one or more virtual storage volumes, such as cloud storage, or a combination of such physical storage volumes and virtual storage volumes or arrays thereof.

The user interface 38 may include a Graphical User Interface (GUI) 40 (e.g., a touch screen, a display, etc.) and one or more input/output (I/O) devices 42 (e.g., a mouse, a keyboard, a microphone, one or more speakers, one or more cameras, one or more biometric scanners, one or more environmental sensors, and one or more accelerometers, etc.).

The non-volatile memory 30 stores an operating system 32, one or more applications 34, and data 36 such that computer instructions of the operating system 32 and/or the applications 34 are executed from the volatile memory 24, for example, by the processor(s) 22. In some embodiments, volatile memory 24 may include one or more types of RAM and/or cache memory, which may provide faster response times than main memory. Data may be input using input devices of the GUI 40 or received from I/O device(s) 42. The various elements of the computer 20 may communicate via a communication bus 48.

The illustrated computing device 20 is shown only as an example client device or server and may be implemented by any computing or processing environment having any type of machine or collection of machines, which may have suitable hardware and/or software capable of operating as described herein.

Processor(s) 22 may be implemented by one or more programmable processors to execute one or more executable instructions, such as computer programs, to perform the functions of the system. As used herein, the term "processor" describes a circuit that performs a function, an operation, or a sequence of operations. The functions, acts or sequences of acts may be hard coded into the circuitry or soft coded by way of instructions held in the memory device and executed by the circuitry. A processor may perform a function, an operation, or a sequence of operations using digital values and/or using analog signals.

In some embodiments, the processor may be embodied as follows: one or more Application Specific Integrated Circuits (ASICs), microprocessors, Digital Signal Processors (DSPs), Graphics Processing Units (GPUs), microcontrollers, Field Programmable Gate Arrays (FPGAs), Programmable Logic Arrays (PLAs), multi-core processors, or general purpose computers with associated memory.

The processor 22 may be an analog, digital or mixed signal. In some embodiments, the processor 22 may be one or more physical processors, or one or more virtual (e.g., remotely located or cloud) processors. A processor comprising multiple processor cores and/or multiple processors may provide functionality for executing instructions simultaneously in parallel or for executing one instruction simultaneously in parallel with respect to more than one piece of data.

Communication interface 26 may include one or more interfaces to enable computing device 20 to access a computer network, such as a Local Area Network (LAN), Wide Area Network (WAN), Personal Area Network (PAN), or the internet, through various wired and/or wireless connections, including cellular connections.

In the described embodiments, computing device 20 may execute applications on behalf of a user of a client device. For example, computing device 20 may execute one or more virtual machines managed by a hypervisor. Each virtual machine may provide an execution session within which an application executes on behalf of a user or client device, such as a hosted desktop session. Computing device 20 may also execute terminal service sessions to provide a hosted desktop environment. The computing device 20 may provide access to a remote computing environment that includes one or more applications, one or more desktop applications, and one or more desktop sessions in which the one or more applications may execute.

The example virtualization server 16 may be implemented using Citrix Hypervisor, available from Citrix Systems, Inc. of Fort Lauderdale, Florida. Virtual app and desktop sessions may further be provided by Citrix Virtual Apps and Desktops (CVAD), also from Citrix Systems. Citrix Virtual Apps and Desktops is an application virtualization solution that enhances productivity with universal access to virtual sessions, including virtual app, desktop, and data sessions, from any device, plus the option of implementing a scalable VDI solution. Virtual sessions may also include, for example, software as a service (SaaS) and desktop as a service (DaaS) sessions.

Referring to fig. 3, a cloud computing environment 50 is depicted, which may also be referred to as a cloud environment, cloud computing, or cloud network. The cloud computing environment 50 may provide delivery of shared computing services and/or resources to multiple users or tenants. For example, shared resources and services may include, but are not limited to, networks, network bandwidth, servers, processes, memory, storage, applications, virtual machines, databases, software, hardware, analytics, and intelligence.

In cloud computing environment 50, one or more clients 52A-52C (such as those described above) communicate with cloud network 54. The cloud network 54 may include a backend platform such as a server, storage, server farm, or data center. Users or clients 52A-52C may correspond to a single organization/tenant or multiple organizations/tenants. More particularly, in one example implementation, cloud computing environment 50 may provide a private cloud (e.g., an enterprise cloud) that serves a single organization. In another example, cloud computing environment 50 may provide a community or public cloud that serves multiple organizations/tenants. In other embodiments, the cloud computing environment 50 may provide a hybrid cloud that is a combination of a public cloud and a private cloud. The public cloud may include public servers maintained by third parties to clients 52A-52C or enterprises/tenants. The server may be located off-site at a remote geographic location or other location.

The cloud computing environment 50 may provide resource pooling to serve multiple users via clients 52A-52C through a multi-tenant environment or multi-tenant model, where different physical and virtual resources are dynamically allocated and reallocated in response to different needs in the respective environment. A multi-tenant environment may include a system or architecture that may provide a single instance of software, an application, or a software application to serve multiple users. In some embodiments, cloud computing environment 50 may provide on-demand self-service to unilaterally provision computing capacity (e.g., server time, network storage) across a network for multiple clients 52A-52C. The cloud computing environment 50 may provide elasticity to dynamically scale out or scale in in response to different demands from one or more clients 52. In some embodiments, the computing environment 50 may include or provide a monitoring service to monitor, control, and/or generate reports corresponding to the shared services and resources provided.

In some embodiments, the cloud computing environment 50 may provide cloud-based delivery of different types of cloud computing services, such as, for example, software as a service (SaaS) 56, platform as a service (PaaS) 58, infrastructure as a service (IaaS) 60, and desktop as a service (DaaS) 62. IaaS may refer to a user renting the use of infrastructure resources that are needed during a specified time period. IaaS providers may offer storage, networking, servers, or virtualized resources from large pools, allowing users to quickly scale up by accessing more resources as needed. Examples of IaaS include AMAZON WEB SERVICES provided by Amazon.com, Inc. of Seattle, Washington; RACKSPACE CLOUD provided by Rackspace US, Inc. of San Antonio, Texas; GOOGLE COMPUTE ENGINE provided by Google Inc. of Mountain View, California; or RIGHTSCALE provided by RightScale, Inc. of Santa Barbara, California.

PaaS providers may offer the functionality provided by IaaS, including, for example, storage, networking, servers, or virtualization, as well as additional resources such as, for example, operating systems, middleware, or runtime resources. Examples of PaaS include WINDOWS AZURE provided by Microsoft Corporation of Redmond, Washington; GOOGLE APP ENGINE provided by Google Inc.; and HEROKU provided by Heroku, Inc. of San Francisco, California.

SaaS providers may offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating systems, middleware, or runtime resources. In some embodiments, SaaS providers may offer additional resources, including, for example, data and application resources. Examples of SaaS include GOOGLE APPS provided by Google Inc.; SALESFORCE provided by Salesforce.com Inc. of San Francisco, California; or OFFICE 365 provided by Microsoft Corporation. Examples of SaaS may also include data storage providers, such as DROPBOX provided by Dropbox, Inc. of San Francisco, California; Microsoft ONEDRIVE provided by Microsoft Corporation; Google Drive provided by Google Inc.; or Apple ICLOUD provided by Apple Inc. of Cupertino, California.

Like SaaS, DaaS (which is also known as hosted desktop services) is a form of Virtual Desktop Infrastructure (VDI) in which virtual desktop sessions are typically delivered as a cloud service along with the apps used on the virtual desktop. Citrix Cloud is an example of a DaaS delivery platform. The DaaS delivery platform may be hosted on a public cloud computing infrastructure, such as AZURE CLOUD from Microsoft Corporation of Redmond, Washington (herein "Azure"), or AMAZON WEB SERVICES provided by Amazon.com, Inc. of Seattle, Washington (herein "AWS"), for example. In the case of Citrix Cloud, the Citrix Workspace app may be used as a single entry point for bringing together apps, files, and desktops, whether on-premises or in the cloud, to deliver a unified experience.

The unified experience provided by the Citrix Workspace app will now be discussed in more detail with reference to FIG. 4. The Citrix Workspace app is generally referred to herein as the workspace app 70. The workspace app 70 is how users access their workspace resources, one category of which is applications. These applications may be SaaS apps, web apps, or virtual apps. The workspace app 70 also gives users access to their desktops, which may be local desktops or virtual desktops. In addition, the workspace app 70 gives users access to their files and data, which may be stored in numerous repositories. Files and data may be hosted on Citrix ShareFile, hosted on an on-premises network file server, or hosted in some other cloud storage provider, such as, for example, Microsoft OneDrive, Google Drive, or Box.

To provide a unified experience, all resources needed by the user can be located and accessed from the workspace app 70. The workspace app 70 is provided in different versions. One version of the workspace app 70 is an installed application for desktops 72, which may be based on the Windows, Mac, or Linux platforms. A second version of the workspace app 70 is an installed application for mobile devices 74, which may be based on the iOS or Android platforms. A third version of the workspace app 70 uses a Hypertext Markup Language (HTML) browser to provide users with access to their workspace environment. The web version of the workspace app 70 is used when a user does not want to install the workspace app or does not have the rights to install the workspace app, such as when operating a public kiosk 76.

Each of these different versions of workspace app 70 may advantageously provide the same user experience. This advantageously allows a user to move from client device 72 to client device 74 to client device 76 in different platforms and still get the same user experience for their workspace. Client devices 72, 74, and 76 are referred to as endpoints.

As described above, workspace app 70 supports Windows, Mac, Linux, iOS and Android platforms as well as platforms with HTML browsers (HTML 5). The workspace app 70 incorporates multiple engines 80-90, allowing a user to access multiple types of apps and data resources. Each engine 80-90 optimizes the user experience for a particular resource. Each engine 80-90 also provides insight to an organization or enterprise into user activity and potential security threats.

The embedded browser engine 80 keeps SaaS and web apps contained within the workspace app 70 rather than launching them on a locally installed and unmanaged browser. With the embedded browser, the workspace app 70 can intercept user-selected hyperlinks in SaaS and web apps and request a risk analysis before granting, denying, or quarantining access.

The high definition experience (HDX) engine 82 establishes connections to virtual browsers, virtual apps, and desktop sessions running on Windows or Linux operating systems. Using the HDX engine 82, Windows and Linux resources can run remotely from the endpoint, while the display remains local. To provide the best possible user experience, the HDX engine 82 utilizes different virtual channels to accommodate changing network conditions and application requirements. To overcome high-latency or high-packet-loss networks, the HDX engine 82 automatically implements optimized transport protocols and greater compression algorithms. Each algorithm is optimized for a certain type of display, such as video, images, or text. The HDX engine 82 identifies these types of resources in an application and applies the most appropriate algorithm to that portion of the screen.

For many users, the workspace is data-centric. The content collaboration engine 84 allows a user to integrate all data into a workspace, whether the data resides on-premises or in the cloud. The content collaboration engine 84 allows administrators and users to create a collection of connectors to company and user-specific data storage locations. This may include OneDrive, Dropbox, and on-premises network file shares, for example. The user may maintain files in multiple repositories and allow workspace app 70 to merge them into a single personalized library.

Networking engine 86 identifies whether an endpoint or an app on an endpoint requires network connectivity to a secure backend resource. Networking engine 86 may automatically establish a full VPN tunnel for the entire endpoint device, or it may create an app-specific micro-VPN (μ-VPN) connection. The μ-VPN defines what back-end resources the application and endpoint device can access, thus protecting the back-end infrastructure. In many cases, certain user activities benefit from unique network-based optimizations. If a user requests a copy of a file, workspace app 70 may automatically utilize multiple network connections simultaneously to complete the activity more quickly. If the user initiates a VoIP call, workspace app 70 improves its quality by replicating the call across multiple network connections. Networking engine 86 uses only the first arriving packet.
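The "replicate the packet across connections, keep the first arrival" behavior can be illustrated with a minimal sketch. The latency/payload pairs below are simulated; this is not the product's networking code.

```python
# Sketch of first-arrival selection: the same voice packet is sent over
# several connections, and only the copy that arrives first is kept.

def first_arrival(duplicates):
    """Given (latency_ms, payload) pairs for duplicate copies of one packet,
    return the payload of the copy that arrived first."""
    return min(duplicates, key=lambda d: d[0])[1]

# The same voice frame sent over two links with different latency:
copies = [(42.0, b"voice-frame-17"), (18.5, b"voice-frame-17")]
assert first_arrival(copies) == b"voice-frame-17"
```

In a real transport, later duplicates would also be deduplicated by sequence number; the sketch keeps only the selection step.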

Analysis engine 88 reports the user's device, location, and behavior, where the cloud-based service identifies any potential anomalies that may be the result of a stolen device, a hacked identity, or a user who is about to leave the company. The information collected by the analysis engine 88 protects company assets by automatically implementing countermeasures.

Management engine 90 keeps workspace app 70 up to date. This not only provides the user with up-to-date capabilities, but also includes additional security enhancements. The workspace app 70 includes an automatic update service that routinely checks for and automatically deploys updates based on customizable policies.

Referring now to FIG. 5, a workspace network environment 100 that provides a unified experience to users based on workspace app 70 will be discussed. Desktop, mobile, and web versions of workspace app 70 communicate with workspace experience service 102 running in Citrix Cloud 104. The workspace experience service 102 then pulls in all the different resource sources 106 via the resource source microservice 108. That is, all the different resources from other services running in Citrix Cloud 104 are pulled in by resource source microservice 108. The different services may include a virtual app and desktop service 110, a secure browser service 112, an endpoint management service 114, a content collaboration service 116, and an access control service 118. Any services subscribed to by an organization or enterprise are automatically pulled into workspace experience service 102 and delivered to the user's workspace app 70.

In addition to cloud source 120, resource source microservice 108 may pull in on-premises source 122. The cloud connector 124 is used to provide virtual app and desktop deployments running in an on-premises datacenter. For example, desktop virtualization may be provided by Citrix Virtual Apps and Desktops 126, Microsoft RDS 128, or VMware Horizon 130. In addition to the cloud source 120 and the on-premises source 122, a device source 132, for example from an internet of things (IoT) device 134, may be pulled in by the resource source microservice 108. Site aggregation is used to bind the different resources into the user's overall workspace experience.

Each of cloud source 120, on-premises source 122, and device source 132 provides the user's workspace experience with a different and unique type of application. The workspace experience may support local apps, SaaS apps, virtual apps and desktops, browser apps, and storage apps. As sources continue to increase and expand, the workspace experience can include additional resources in the user's overall workspace. This means that the user will be able to get to every application they need to access.

Still referring to the workspace network environment 100, a series of events will be described regarding how a unified experience is provided to the user. The unified experience begins with the user connecting to the workspace experience service 102 running in Citrix Cloud 104 using workspace app 70 and presenting their identity (event 1). The identity includes, for example, a username and password.

The workspace experience service 102 forwards the user's identity to the identity microservice 140 in Citrix Cloud 104 (event 2). The identity microservice 140 authenticates the user to the correct identity provider 142 based on the organization's workspace configuration (event 3). Authentication may be based on the on-premise active directory 144 requiring deployment of the cloud connector 146. Authentication may also be based on Azure Active Directory 148 or even a third party identity provider 150 such as, for example, Citrix ADC or Okta.

Once authorized, the workspace experience service 102 requests a list of authorized resources from the resource source microservice 108 (event 4). For each configured resource source 106, the resource source microservice 108 requests an identity token from the single sign-on microservice 152 (event 5).

The resource-source-specific identity token is passed to each resource's point of authentication (event 6). On-premises resources 122 are contacted through the Citrix Cloud Connector 124. Each resource source 106 replies with a list of resources authorized for the corresponding identity (event 7).

The resource source microservice 108 aggregates all items from different resource sources 106 and forwards (event 8) to the workspace experience service 102. The user selects a resource from the workspace experience service 102 (event 9).

The workspace experience service 102 forwards the request to the resource source microservice 108 (event 10). The resource source microservice 108 requests an identity token from the single sign-on microservice 152 (event 11). The user's identity token is sent to the workspace experience service 102 (event 12), where a launch ticket is generated and sent to the user.

The user initiates a secure session to the gateway service 160 and presents the launch ticket (event 13). The gateway service 160 initiates a secure session to the appropriate resource source 106 and presents the identity token to seamlessly authenticate the user (event 14). Once the session is initialized, the user is able to utilize the resources (event 15). Delivering the entire workspace through a single access point or application advantageously improves productivity and streamlines the user's common workflow.

Referring now to FIG. 6, when the firmware in client machine 210 does not provide a network driver during pre-boot, computing system 200 provides client machine 210 with the ability to receive base disk image 232 streamed from server 230. The firmware is used to perform hardware initialization during the pre-boot process. The pre-boot process is part of a preboot execution environment (PXE) that allows client machines 210 that have not been loaded with an operating system to be configured and pre-booted. As will be discussed in detail below, the client machine 210 includes a memory having a read cache 212, the read cache 212 including preboot data 214. Preboot data 214 is accessed by the firmware and includes a network driver for connecting to communication network 240 to receive the streamed base disk image 232.

The illustrated computing system 200 includes a server 230, the server 230 configured to stream a base disk image 232 over a communication network 240. The client machine 210 includes a read cache 212 configured to store preboot data 214 including a network driver. The processor 216 is coupled to the read cache 212. Processor 216 is configured to initiate a pre-boot of client machine 210 using pre-boot data 214. After the network driver is running, processor 216 can receive the streamed base disk image 232 from server 230 via communication network 240.

As described above, base disk image 232 includes operating system 234 and applications 236. The operating system 234 includes an operating system network driver. The network driver in preboot data 214 is the same as the operating system network driver. The operating system 234 may be, for example, Microsoft Windows from Microsoft corp. The firmware in the client machine 210 is pre-installed to perform hardware initialization during the pre-boot process and is the first software to run when the client machine 210 is powered on.

The firmware may be, for example, UEFI (unified extensible firmware interface) firmware. As those skilled in the art will readily appreciate, UEFI firmware is intended to replace BIOS (basic input/output system) firmware.

When a client machine is intended to operate using a base disk image stored on a local disk within the client machine, no network driver is required in its firmware. This is because the client machine will boot from the base disk image stored on the local disk. Thus, the manufacturer of the client machine 210 has removed the network driver from the firmware because the client machine 210 does not need it to boot.

In order for cloud service providers such as Azure and Google Cloud Platform to change from using a base disk image stored on a local disk within client machine 210 to using a base disk image 232 streamed from server 230, processor 216 in client machine 210 needs to access communication network 240 during pre-boot. If the client machine 210 cannot access the communication network 240, the client machine 210 cannot become operational. The read cache 212 includes a subset of the operating system 234, which includes the operating system network driver. Read cache 212 allows client machine 210 to pre-boot without communicating with server 230 via communication network 240. Read cache 212 advantageously bridges the gap during firmware pre-boot when there is no network driver.

Processor 216 within client machine 210 is configured to access read cache 212 for the pre-boot data 214 included therein. More particularly, processor 216 executes an operating system loader within client machine 210. The processor 216 retrieves the required parameters from a local configuration file during pre-boot without connecting to the server 230 via the communication network 240. The initial settings of the client machine 210 are configured using the local configuration file. The required parameters contain the data to be passed to the operating system loader.

After the network driver in read cache 212 has been loaded and started running, communication with server 230 via communication network 240 is initiated. In response to the network driver running, processor 216 switches from using preboot data 214 in read cache 212 to using data in streamed base disk image 232. Communication with server 230 allows client machine 210 to receive the remainder of base disk image 232 via communication network 240 (e.g., via streaming).

The firmware searches for the data needed to boot the operating system of the client machine 210. Because the preboot data 214 in the read cache 212 supports only a limited portion of the operating system, the processor 216 is instructed to continue loading the remainder of the operating system 234 needed by the client machine 210 using the network driver in the read cache 212. Once the network driver is running, client machine 210 transitions from the pre-boot environment to an environment that receives the data in the streamed base disk image 232.
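The transition just described — serving disk blocks from the local read cache until the network driver runs, then from the streamed image — can be sketched as a simple block-read dispatcher. The dict-backed "disks" below stand in for the VHDX read cache and the PVS stream; this is an illustrative sketch under those assumptions, not the actual driver code.

```python
# Minimal sketch of the read-path switch-over: before the network is up,
# block reads come from the read cache (pre-boot subset of the OS); once
# the network driver is running, reads come from the streamed base disk image.

class BlockReader:
    def __init__(self, read_cache, stream):
        self.read_cache = read_cache   # block number -> bytes (subset of OS)
        self.stream = stream           # block number -> bytes (full image)
        self.network_up = False        # flipped once the network driver runs

    def read_block(self, n):
        if self.network_up:
            return self.stream[n]      # streamed base disk image
        return self.read_cache[n]      # pre-boot data only
```

Usage mirrors the boot sequence: early reads hit the cache, then `network_up` is set and later reads are satisfied over the network.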

In one embodiment, the preboot data 214 in the read cache 212 is the same as the operating system in the base disk image 232 that the server 230 is to stream. This approach ensures that preboot data 214 contains the disk blocks needed by the operating system loader, kernel, and boot-time drivers to boot. The kernel is at the core of the operating system and facilitates interaction between hardware and software components.

Making preboot data 214 the same as the operating system in base disk image 232 carries a trade-off: the size of the read cache 212 is larger than what pre-booting client machine 210 actually needs. In this approach, for example, the size of the read cache 212 may be about 10 GB.

To reduce the size of the read cache 212, the preboot data 214 in the read cache 212 may instead be a subset of the operating system. This approach is based on a determination of what files the operating system loader will load in the pre-boot environment. For example, a starting minimum list of files used in the boot process is generated, and files needed in the pre-boot process are selectively added. Since base disk image 232 differs between different cloud service providers, this determination may be an iterative process to identify which files are needed in read cache 212 to pre-boot client machine 210.
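The iterative determination described above might look like the following sketch. It assumes a test harness `try_preboot(files)` that reports whether a trial pre-boot succeeded and which files the loader asked for but could not find; both the harness and the file names in the test are hypothetical.

```python
# Sketch of iterative read-cache trimming: start from a minimal file list,
# attempt a pre-boot, and add whatever files the loader was missing, until
# the pre-boot succeeds or an iteration cap is reached.

def build_cache_file_list(minimum_files, try_preboot, max_rounds=10):
    """try_preboot(files) is assumed to return (succeeded, missing_files)."""
    files = set(minimum_files)
    for _ in range(max_rounds):
        ok, missing = try_preboot(files)
        if ok:
            return files
        files |= set(missing)  # add what the loader actually asked for
    raise RuntimeError("pre-boot still failing after max_rounds iterations")
```

Because the needed file set differs per base disk image, the harness would be re-run per cloud provider, as the text notes.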

Referring now to FIG. 7, the computing system 200 will be discussed in more detail. Server 230 includes a base disk image 232 for servicing any number of virtual or physical provisioned machines of any number of client machines 210 over a communication network 240. Base disk image 232 may also be referred to as a base virtual hard disk (VHDX). Base disk image 232 includes an operating system 234 and applications 236 for the provisioned virtual machine of client machine 210. The base disk image 232 may be stored on an NTFS (New Technology File System) file system of a physical disk within the server 230. The NTFS file system is a file system used by the Windows operating system to store and retrieve files on disk partitions of the physical disk.

More particularly, base disk image 232 includes any software, hardware, or combination of software and hardware, programs, executables, functions, instructions, data, or library functionality. Base disk image 232 may include files, programs, instructions, applications, or processes needed or used to operate any application or service. Base disk image 232 may include any functionality to enable the operation of a provisioning machine executing on client machine 210.

NTFS is a file system used by operating systems to store and retrieve files on virtual or physical disks. NTFS may be a hierarchical file system or a non-hierarchical file system and may be configured to operate on any of the operating systems referenced herein. The NTFS may be or include the functionality of a File Allocation Table (FAT) file system.

The client machine 210 includes a physical disk 228 that stores a read cache file 212, the read cache file 212 residing on an NTFS file system 227 of the physical disk 228. The NTFS file system 227 may present a mount point as the D: drive for the physical disk 228. A mount point is a drive that represents data on the physical disk 228 managed by the NTFS file system. Client machine 210 further includes virtual disk 226. The NTFS file system 224 may present a mount point as the C: drive for the virtual disk 226. The C: drive provides an environment for processor 216 in client machine 210 to execute the operating system 234, applications 236, and services provided by base disk image 232 streamed from server 230.

The bootstrap function 220 is used to control or manage the pre-boot and boot processes used to start client machine 210. The bootstrap function 220 is software-based and may include programs such as UEFI applications.

The virtual disk controller 218 is used to control or manage the virtual disks 226 of the client machines 210. The virtual disk controller 218 may include any software, hardware, or combination of software and hardware, program, function, executable, instructions, data, or library. The virtual disk controller 218 may launch the virtual disk 226 in response to instructions from the bootstrap function 220 during startup or pre-boot of the client machine 210.

Similarly, the physical disk controller 222 is used to control or manage the physical disk 228 of the client machine 210. The physical disk controller 222 may launch the physical disk 228 during startup or pre-boot of the client machine 210.

During pre-boot, firmware in client machine 210 is launched from a designated firmware loader located on physical disk 228. This is performed via the physical disk controller 222. The firmware loader is on a FAT (file allocation table) partition within the physical disk 228. The firmware loader may be an EFI (extensible firmware interface) file that includes a boot loader executable that contains data on how to perform the boot process. The EFI file is associated with UEFI firmware.

The firmware loader operates in a network-less pre-boot environment. A network-less pre-boot environment means that the client machine 210 cannot connect to the communication network 240 during pre-boot. The firmware loader retrieves the required parameters using a local configuration file, which may be an INI file. An INI file is an initialization file format used by the processor 216. Since preboot data 214 is available in read cache 212, the firmware does not need to communicate with server 230 during early pre-boot.
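Retrieving pre-boot parameters from a local INI file rather than over the network can be sketched with Python's standard `configparser`. The section and key names below are illustrative assumptions, not the product's actual file format.

```python
# Sketch of parsing a local pre-boot configuration file (INI format).
# In the network-less pre-boot environment, these parameters replace
# what would otherwise be fetched from the provisioning server.

import configparser
import io

INI_TEXT = """
[preboot]
server = 192.0.2.10
port = 6910
cache_file = readcache.vhdx
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(INI_TEXT))  # a real loader would read a file path
params = dict(config["preboot"])         # parameters passed to the OS loader
```

`configparser` returns all values as strings, so numeric parameters such as the port would be converted by the consumer.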

At this point, the firmware loader is looking to read blocks for the operating system loader from the C: drive on virtual disk 226. However, the firmware loader uses an embedded NTFS file system driver, via the virtual disk controller 218 and the bootstrap function 220, to find the read cache file 212 located on the physical disk 228. Read cache file 212 is deployed during the creation of client machine 210 and is a subset of the operating system on base disk image 232.

In some examples, read cache file 212 may be a virtual hard disk (VHDX) file. Once the read cache file 212 is found, the firmware loader mounts it as a virtual disk. For example, the virtual disk may be a Microsoft Hyper-V virtual hard disk. Virtual disk 226 is then presented to the firmware as a newly added disk. The firmware loader begins loading the operating system loader from virtual disk 226. As the operating system 234 boots, the operating system 234 will present its mount point as the C: drive.

During this pre-boot process, the operating system kernel, including the network driver, is loaded from the read cache 212. Control is then passed to the operating system kernel, which selects a Network Interface Controller (NIC) for connecting the network driver to the communication network 240. At this point, operating system 234 transitions to streaming data from base disk image 232 using a network driver, rather than using preboot data in read cache 212.

Referring now to FIG. 8, a general flow chart 300 illustrating a method for operating the client machine 210 will be discussed. From the outset (block 302), the method includes storing pre-boot data 214 including a network drive in a read cache 212 within a client machine 210 at block 304. The method also includes operating a processor 216 coupled to the read cache 212 to initiate a preboot of the client machine 210 using the preboot data 214 at block 306. After the network driver is running, a streamed base disk image is received from server 230 via communication network 240 at block 308. The method ends at block 310.

Referring now to FIG. 9, a more detailed flow chart 400 illustrating a method for operating the client machine 210 will be discussed. From the beginning (block 402), the method includes starting a firmware loader at block 404. The firmware loader is used to perform hardware initialization during the pre-boot process. At block 406, the firmware loader accesses the read cache 212 with the preboot data including the network driver.

Since the read cache 212 is a subset of the operating system 234, the operating system 234 begins running at block 408. A determination is made at block 410 as to whether the network driver is running. If the network driver is not running, the method loops back to block 408 until the network driver is running. Until the network driver is running, the client machine 210 cannot connect to the communication network 240.

When it is determined that the network driver is running, processor 216 within client machine 210 switches, at block 412, from using the pre-boot data in read cache 212 to using the data received in the streamed base disk image 232 to satisfy required block reads. Client machine 210 continues the boot process using the data in the received streamed base disk image 232. The method ends at block 414.
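The loop in flow chart 400 can be condensed into a small sketch: run from the read cache, poll whether the network driver is up, and switch sources once it is. Here `driver_running()` is a stand-in probe supplied by the caller, and the poll count is capped so the sketch terminates; neither detail comes from the source.

```python
# Sketch of the control flow in blocks 408-412: stay on the read cache
# until the network driver is detected as running, then switch to the
# streamed base disk image.

def boot(driver_running, max_polls=100):
    source = "read_cache"
    for _ in range(max_polls):
        if driver_running():
            source = "streamed_base_disk_image"  # switch-over (block 412)
            break
    return source
```

If the driver never comes up within the poll budget, the sketch simply reports that the machine is still on the read cache, mirroring the text's observation that the client cannot connect until the driver runs.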

The following examples are additional example embodiments, and other arrangements and configurations therefrom will be apparent to those skilled in the art.

Example 1 is a computing system comprising a server configured to stream a base disk image over a communication network, and a client machine. The client machine includes a read cache configured to store preboot data including a network driver, and a processor coupled to the read cache. The processor is configured to initiate a pre-boot of the client machine using the pre-boot data and, after the network driver is running, receive the streamed base disk image from the server via the communication network to continue booting the client machine.

Example 2 includes the subject matter of example 1, wherein the base disk image includes an operating system, and wherein the preboot data further includes a subset of the operating system that includes the network driver.

Example 3 includes the subject matter of any of examples 1-2, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

Example 4 includes the subject matter of any of examples 1-3, wherein the processor is further configured to switch from using the pre-boot data in the read cache to using data in the streamed base disk image to continue booting the client machine in response to the network driver running.

Example 5 includes the subject matter of any one of examples 1-4, wherein the processor is further configured to execute the instructions to access the read cache for pre-boot data at pre-boot time.

Example 6 includes the subject matter of any of examples 1-5, wherein the client machine further comprises firmware that provides instructions to be executed at pre-boot time, and wherein the firmware comprises UEFI (unified extensible firmware interface) firmware.

Example 7 includes the subject matter of any of examples 1-6, wherein the pre-boot data includes an operating system kernel that includes a network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

Example 8 is a client machine comprising a read cache configured to store preboot data including a network driver, and a processor coupled to the read cache. The processor is configured to initiate a pre-boot of the client machine using the pre-boot data and, after the network driver is running, receive a streamed base disk image from a server via a communication network to continue booting the client machine.

Example 9 includes the subject matter of example 8, wherein the base disk image includes an operating system, and wherein the preboot data further includes a subset of the operating system that includes the network driver.

Example 10 includes the subject matter of any one of examples 8-9, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

Example 11 includes the subject matter of any one of examples 8-10, wherein the processor is further configured to switch from using the pre-boot data in the read cache to using data in the streamed base disk image to continue booting the client machine in response to the network driver running.

Example 12 includes the subject matter of any one of examples 8-11, wherein the processor is further configured to execute the instructions to access the read cache for pre-boot data at pre-boot time.

Example 13 includes the subject matter of any one of examples 8-12, wherein the client machine further comprises firmware to provide instructions to be executed at pre-boot time, and wherein the firmware comprises UEFI (unified extensible firmware interface) firmware.

Example 14 includes the subject matter of any one of examples 8-13, wherein the pre-boot data includes an operating system kernel including a network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

Example 15 is a method comprising storing, by a client machine, pre-boot data including a network driver in a read cache within the client machine, and initiating, by the client machine, a pre-boot of the client machine using the pre-boot data. After the network driver is running, the client machine receives a streamed base disk image from a server via a communication network to continue booting the client machine.

Example 16 includes the subject matter of example 15, wherein the base disk image includes an operating system, and wherein the preboot data further includes a subset of the operating system that includes the network driver.

Example 17 includes the subject matter of any one of examples 15-16, wherein the operating system comprises an operating system network driver, and wherein the network driver is configured to be the same as the operating system network driver.

Example 18 includes the subject matter of any of examples 15-17, wherein the client machine is further operative to switch from using the pre-boot data in the read cache to using the data in the streamed base disk image in response to the network driver running.

Example 19 includes the subject matter of any of examples 15-18, further comprising executing, by the client machine at pre-boot time, the instructions to access the read cache for pre-boot data.

Example 20 includes the subject matter of any one of examples 15-19, wherein the pre-boot data includes an operating system kernel that includes a network driver, and the network driver is connected to the communication network as the operating system kernel begins to run.

As will be understood by one of skill in the art upon reading the above disclosure, various aspects described herein may be embodied as an apparatus, method, or computer program product (e.g., a non-transitory computer-readable medium having computer-executable instructions for performing the operations or steps). Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.

Furthermore, such aspects may take the form of a computer program product, stored by one or more computer-readable storage media having computer-readable program code or instructions embodied in or on the storage media. Any suitable computer readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof.

Many modifications and other embodiments will come to mind to one skilled in the art to which this invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the foregoing is not to be limited to the example embodiments and that modifications and other embodiments are intended to be included within the scope of the appended claims.
