Hyper-converged system, IO request issuing method thereof, and physical server

Document No.: 1888623 · Published: 2021-11-26

Reading note: this technique, "Hyper-converged system, IO request issuing method thereof, and physical server", was designed and created by 马怀旭 and 颜秉珩 on 2021-07-30. Its main content is as follows: the application discloses a hyper-converged system for distributed block storage. The system uses virtio paravirtualization and, through shared huge-page memory and RDMA, achieves end-to-end memory zero-copy from virtual machine IO to the replicas of the distributed block storage: the virtual machine accesses the back-end distributed block storage resources on the physical machine directly through the huge-page memory, so VM IO reaches the resources of the storage server without traversing the network. This shortens the IO path from the VM to the distributed block storage back end, accelerates IO access through polling, improves the IO performance of distributed block storage in virtualized scenarios, and thus improves hyper-converged performance. In addition, the application provides an IO issuing method for the hyper-converged system, a physical server, and a readable storage medium, whose technical effects correspond to those of the system.

1. A hyper-converged system, comprising:

a virtio front-end driver configured to request huge-page memory and create a request queue, and, when a virtual machine issues an IO request, to store the IO request in the request queue;

a virtio back-end driver configured to poll the request queue and, when an IO request is detected, to invoke a local server of the distributed block storage to process the IO request;

the local server, configured to determine whether the IO request is a local read request and, if so, to write the requested content of the IO request into the huge-page memory; otherwise, to send the IO request and the address of the huge-page memory via RDMA to a remote server of the distributed block storage;

and the remote server, configured to perform the corresponding read/write operation on the huge-page memory via RDMA (Remote Direct Memory Access) according to the address of the huge-page memory and the IO request.

2. The system of claim 1, wherein the virtio front-end driver is configured to create a lock-free queue as the request queue.

3. The system of claim 1, wherein the local server is configured to obtain the request address of the IO request from the huge-page memory and to determine from that address whether the IO request is a local read request.

4. The system of claim 3, wherein the local server is configured to obtain LUN information and/or offset information of the IO request from the huge-page memory and to determine from the LUN information and/or offset information whether the IO request is a local read request.

5. The system of claim 1, wherein the remote server is configured to send a notification of IO completion to the virtio back-end driver after the read/write operation completes, and the virtio back-end driver then forwards the notification to the virtio front-end driver.

6. The system of any one of claims 1 to 5, wherein the local server is configured, when the IO request is not a local read request, to register the huge-page memory with a smart NIC and to send the IO request and the address of the huge-page memory via RDMA to the remote server of the distributed block storage using the smart NIC.

7. An IO request issuing method of a hyper-converged system, applied to a host machine, comprising:

requesting huge-page memory and creating a request queue with a virtio front-end driver; when a virtual machine issues an IO request, storing the IO request in the request queue with the virtio front-end driver;

polling the request queue with a virtio back-end driver and, when an IO request is detected, sending the IO request to a local server of the distributed block storage;

determining with the local server whether the IO request is a local read request and, if so, writing the requested content of the IO request into the huge-page memory; otherwise, sending the IO request and the address of the huge-page memory via RDMA to a remote host, so that a remote server of the distributed block storage on the remote host performs the corresponding read/write operation on the huge-page memory via RDMA according to the address of the huge-page memory and the IO request.

8. An IO request issuing method of a hyper-converged system, applied to a remote host, comprising:

receiving, via RDMA, an IO request and a huge-page memory address sent by a local server of the distributed block storage on a host machine;

performing the corresponding read/write operation on the huge-page memory on the host machine via RDMA according to the address of the huge-page memory and the IO request, wherein the huge-page memory was allocated by a virtio front-end driver on the host machine;

wherein the process by which the local server sends the IO request and the huge-page memory address to the remote host includes:

receiving a call request sent by a virtio back-end driver, wherein the virtio back-end driver polls a request queue created on the host by the virtio front-end driver and, when an IO request is detected, sends the call request for processing the IO request to the local server; determining whether the IO request is a local read request and, if so, writing the requested content of the IO request into the huge-page memory; otherwise, sending the IO request and the huge-page memory address to the remote host via RDMA.

9. A physical server of a hyper-converged system, comprising:

a memory configured to store a computer program; and

a processor configured to execute the computer program to implement the IO request issuing method of the hyper-converged system according to claim 7 or 8.

10. A readable storage medium storing a computer program which, when executed by a processor, implements the IO request issuing method of the hyper-converged system according to claim 7 or 8.

Technical Field

The application relates to the field of computer technology, and in particular to a hyper-converged system, an IO request issuing method thereof, a physical server, and a readable storage medium.

Background

In an era of explosive information growth, data volumes keep increasing while traditional storage is costly and inefficient and cannot keep pace with the growth of user data; efficient, intelligent distributed storage addresses this pain point. Distributed storage is characterized by high performance, high reliability, high scalability, transparency, and autonomy. To store data, a distributed system first shards it, then computes each shard's placement with a placement algorithm. Because the user's data is split into many blocks, losing any one block renders the data unusable, so distributed storage must adopt a sound redundancy model that keeps multiple replicas of each data block, guaranteeing data safety and reliability.

Distributed storage is offered in three forms: object storage, file storage, and block storage. Object storage mainly holds immutable objects, file storage holds files, and block storage exposes block devices. Block storage typically provides block devices to qemu for creating virtual machines, backing databases, storing files, and so on. It is normally consumed in one of two ways: the distributed storage exports an iSCSI device mapped to the host, or the host connects directly via a proprietary protocol. Both approaches suffer from a long IO path.

In summary, how to overcome the long IO path of distributed block storage is a problem to be solved by those skilled in the art.

Disclosure of Invention

The application aims to provide a hyper-converged system, an IO request issuing method thereof, a physical server, and a readable storage medium that solve the problem of the long IO path in distributed block storage. The specific scheme is as follows:

In a first aspect, the present application provides a hyper-converged system, comprising:

a virtio front-end driver configured to request huge-page memory and create a request queue, and, when a virtual machine issues an IO request, to store the IO request in the request queue;

a virtio back-end driver configured to poll the request queue and, when an IO request is detected, to invoke a local server of the distributed block storage to process the IO request;

the local server, configured to determine whether the IO request is a local read request and, if so, to write the requested content of the IO request into the huge-page memory; otherwise, to send the IO request and the address of the huge-page memory via RDMA to a remote server of the distributed block storage;

and the remote server, configured to perform the corresponding read/write operation on the huge-page memory via RDMA according to the address of the huge-page memory and the IO request.

Optionally, the virtio front-end driver is configured to create a lock-free queue as the request queue.

Optionally, the local server is configured to obtain the request address of the IO request from the huge-page memory and to determine from that address whether the IO request is a local read request.

Optionally, the local server is configured to obtain LUN information and/or offset information of the IO request from the huge-page memory and to determine from the LUN information and/or offset information whether the IO request is a local read request.

Optionally, the remote server is configured to send a notification of IO completion to the virtio back-end driver after the read/write operation completes, and the virtio back-end driver then forwards the notification to the virtio front-end driver.

Optionally, the local server is configured, when the IO request is not a local read request, to register the huge-page memory with a smart NIC and to send the IO request and the address of the huge-page memory via RDMA to the remote server of the distributed block storage using the smart NIC.

In a second aspect, the present application provides an IO request issuing method for a hyper-converged system, applied to a host machine, comprising:

requesting huge-page memory and creating a request queue with a virtio front-end driver; when a virtual machine issues an IO request, storing the IO request in the request queue with the virtio front-end driver;

polling the request queue with a virtio back-end driver and, when an IO request is detected, sending the IO request to a local server of the distributed block storage;

determining with the local server whether the IO request is a local read request and, if so, writing the requested content of the IO request into the huge-page memory; otherwise, sending the IO request and the address of the huge-page memory via RDMA to a remote host, so that a remote server of the distributed block storage on the remote host performs the corresponding read/write operation on the huge-page memory via RDMA according to the address of the huge-page memory and the IO request.

In a third aspect, the present application provides an IO request issuing method for a hyper-converged system, applied to a remote host, comprising:

receiving, via RDMA, an IO request and a huge-page memory address sent by a local server of the distributed block storage on a host machine;

performing the corresponding read/write operation on the huge-page memory on the host machine via RDMA according to the address of the huge-page memory and the IO request, wherein the huge-page memory was allocated by a virtio front-end driver on the host machine;

wherein the process by which the local server sends the IO request and the huge-page memory address to the remote host includes:

receiving a call request sent by a virtio back-end driver, wherein the virtio back-end driver polls a request queue created on the host by the virtio front-end driver and, when an IO request is detected, sends the call request for processing the IO request to the local server; determining whether the IO request is a local read request and, if so, writing the requested content of the IO request into the huge-page memory; otherwise, sending the IO request and the huge-page memory address to the remote host via RDMA.

In a fourth aspect, the present application provides a physical server of a hyper-converged system, comprising:

a memory, for storing a computer program; and

a processor, for executing the computer program to implement the IO request issuing method of the hyper-converged system described above.

In a fifth aspect, the present application provides a readable storage medium storing a computer program which, when executed by a processor, implements the IO request issuing method of the hyper-converged system described above.

The hyper-converged system of the present application comprises a virtio front-end driver, a virtio back-end driver, a local server of the distributed block storage, and a remote server of the distributed block storage. The virtio front-end driver requests huge-page memory, creates a request queue, and stores each IO request issued by the virtual machine in that queue; the virtio back-end driver polls the request queue and, when an IO request is detected, invokes the local server to process it; the local server determines whether the IO request is a local read and, if so, writes the requested content into the huge-page memory, otherwise sends the IO request and the huge-page memory address to the remote server via RDMA; and the remote server performs the corresponding read/write operation on the huge-page memory via RDMA according to that address and the IO request.

It can be seen that, in a hyper-converged scenario, the system applies virtio paravirtualization to distributed block storage and, through shared huge-page memory and RDMA, achieves end-to-end memory zero-copy from virtual machine IO to the replicas of the distributed block storage: the virtual machine directly accesses the back-end distributed block storage resources on the physical machine through the huge-page memory, so VM IO reaches the resources of the storage server without traversing the network. This shortens the IO path from the VM to the storage back end, accelerates IO access through polling, improves the IO performance of distributed block storage in virtualized scenarios, and thus improves hyper-converged performance.

In addition, the application also provides an IO issuing method for the hyper-converged system, a physical server, and a readable storage medium, whose technical effects correspond to those of the system and are not repeated here.

Drawings

For a clearer explanation of the embodiments of the present application or of the prior art, the drawings needed for their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a first embodiment of the hyper-converged system provided in the present application;

FIG. 2 is a schematic diagram of a second embodiment of the hyper-converged system provided in the present application;

FIG. 3 is another schematic diagram of the second embodiment of the hyper-converged system provided in the present application;

FIG. 4 is a flowchart of an embodiment of the IO request issuing method of the hyper-converged system applied to the host machine according to the present application;

FIG. 5 is a flowchart of an embodiment of the IO request issuing method of the hyper-converged system applied to the remote host according to the present application.

Detailed Description

The core of the application is to provide a hyper-converged system, an IO issuing method thereof, a physical server, and a readable storage medium that shorten the IO path from the virtual machine to the distributed block storage back end, improve the IO performance of distributed block storage in virtualized scenarios, and improve hyper-converged performance.

In order that those skilled in the art may better understand the disclosure, it is described in detail below with reference to the accompanying drawings. The embodiments described are plainly only some, not all, of the embodiments of the present application. All other embodiments obtainable by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.

Referring to fig. 1, a first embodiment of the hyper-converged system provided in the present application comprises:

a virtio front-end driver, which requests huge-page memory and creates a request queue, and stores each IO request issued by the virtual machine in the request queue;

a virtio back-end driver, which polls the request queue and, when an IO request is detected, invokes the local server of the distributed block storage to process it;

the local server, which determines whether the IO request is a local read request and, if so, writes the requested content into the huge-page memory; otherwise it sends the IO request and the huge-page memory address via RDMA (Remote Direct Memory Access) to the remote server of the distributed block storage;

and the remote server, which performs the corresponding read/write operation on the huge-page memory via RDMA according to the huge-page memory address and the IO request.

This embodiment applies to Hyper-Converged Infrastructure (HCI) scenarios. It uses huge memory pages, whose page size can be as large as 1 GB. At system startup, a portion of memory is reserved as huge pages for the virtio front-end and back-end drivers; this memory cannot be occupied by other programs. The two drivers share the huge-page memory, so data inside the node is accessed by direct memory address through sharing, without memory copies. Moreover, huge-page memory never incurs page-fault interrupts, enabling full-speed memory access.
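The huge-page reservation described above can be sketched at the syscall level. The snippet below is an illustrative sketch only, not the patent's implementation: the `map_shared_region` helper and the 2 MiB region size are assumptions. It tries to map an anonymous shared huge-page region with `MAP_HUGETLB` and falls back to ordinary pages when no huge pages have been reserved, so the sketch still runs on an unconfigured machine.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (2u * 1024 * 1024)  /* one 2 MiB huge page (assumed size) */

/* Map a shared region backed by huge pages; fall back to normal pages
 * when no huge pages are configured, so the sketch degrades gracefully. */
static void *map_shared_region(size_t len) {
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)  /* no reserved huge pages: use ordinary pages */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
```

A real deployment would instead reserve the pages at boot (e.g. via the `nr_hugepages` sysctl or a hugetlbfs mount) so that the fallback path is never taken.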

As a preferred embodiment, the virtio front-end driver creates a lock-free queue as the request queue. A lock-free queue can be used directly by multiple producers and multiple consumers, avoiding lock contention among IO threads inside the virtual machine and achieving high concurrency.
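A minimal sketch of such a queue using C11 atomics follows. For brevity this is a single-producer/single-consumer ring rather than the multi-producer/multi-consumer queue the text describes (an MPMC version additionally needs per-slot sequence numbers, as in DPDK's `rte_ring`); the `spsc_ring` name and the 64-slot capacity are illustrative assumptions.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 64u  /* capacity, must be a power of two */

typedef struct {
    _Atomic uint32_t head;       /* next slot to consume */
    _Atomic uint32_t tail;       /* next slot to produce */
    uint64_t slot[RING_SLOTS];   /* request handles (e.g. huge-page offsets) */
} spsc_ring;

/* Producer side: the front-end driver posts an IO request handle. */
static bool ring_push(spsc_ring *r, uint64_t req) {
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t - h == RING_SLOTS) return false;          /* ring is full */
    r->slot[t % RING_SLOTS] = req;
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return true;
}

/* Consumer side: the polling back-end driver drains the ring. */
static bool ring_pop(spsc_ring *r, uint64_t *req) {
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h == t) return false;                       /* ring is empty */
    *req = r->slot[h % RING_SLOTS];
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return true;
}
```

The acquire/release pairing is what replaces the lock: the consumer only reads a slot after it observes the producer's `tail` update, so no mutex is ever taken on the IO path.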

Specifically, the local server obtains the request address of the IO request from the huge-page memory and determines from it whether the IO request is a local read; the request address may concretely be LUN (Logical Unit Number) information and/or offset information.
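The locality check might look like the following sketch. The placement function is a loud assumption: the patent does not specify how LUN and offset map to a node, so a simple stripe-then-hash policy stands in for whatever placement algorithm the distributed block storage actually uses.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STRIPE_SIZE (4ull * 1024 * 1024)  /* assumed placement granularity */

/* Assumed placement policy: stripe each LUN's address space and hash
 * (lun, stripe index) onto the cluster's nodes. */
static uint32_t primary_node(uint32_t lun, uint64_t offset, uint32_t nodes) {
    return (uint32_t)((lun * 31u + offset / STRIPE_SIZE) % nodes);
}

/* A "local read": the request is a read AND the primary copy lives here. */
static bool is_local_read(uint32_t lun, uint64_t offset, bool is_read,
                          uint32_t self_node, uint32_t nodes) {
    return is_read && primary_node(lun, offset, nodes) == self_node;
}
```

Only reads can be served locally; writes always involve the remote replicas, which is why the dispatch logic below routes them through RDMA.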

After writing the requested content into the huge-page memory, the local server notifies the virtio back-end driver that the IO request has been processed, and the back-end driver in turn notifies the virtio front-end driver. Similarly, after completing its read/write operation on the huge-page memory, the remote server sends an IO-completion notification to the virtio back-end driver, which forwards it to the virtio front-end driver.

As a preferred embodiment, data transfer between the local server and the remote server is carried out through a smart NIC. Specifically, when the IO request is not a local read, the local server registers the huge-page memory with the smart NIC and uses it to send the IO request and the huge-page memory address via RDMA to the remote server of the distributed block storage.

The hyper-converged system of this embodiment applies virtio paravirtualization to distributed block storage in a hyper-converged scenario. Through the shared huge-page memory, the lock-free queue, and RDMA, it achieves end-to-end memory zero-copy from virtual machine IO to the replicas of the distributed block storage: the virtual machine accesses the back-end distributed block storage resources on the physical machine directly through the huge-page memory, so VM IO reaches the storage server without traversing the network, shortening the IO path from the VM to the storage back end. IO access is further accelerated through polling and the lock-free queue, and by offloading the RoCE (RDMA over Converged Ethernet) protocol to the smart NIC, the hardware performance of the distributed block storage is fully exploited and IO latency is reduced, improving the IO performance of distributed block storage in virtualized scenarios and thus overall hyper-converged performance.

A second embodiment of the hyper-converged system provided by the present application is described in detail below; its architecture is shown in fig. 2 and fig. 3.

In the second embodiment, the distributed block storage provides the virtio back-end driver, which links to the local server of the distributed block storage and issues the distributed block storage's IO requests directly. When the virtual machine starts, it is linked to the distributed block storage through the virtio front-end driver, the lock-free queue, the huge-page memory, and the virtio back-end driver.

The virtual machine issues IO requests through the virtio front-end driver. Specifically, the VM calls the virtio front-end driver it recognizes internally, requests a huge-page memory address to hold the content to be issued, and inserts the IO request into the lock-free queue. The virtio back-end driver polls the lock-free queue; when it detects that the VM has issued an IO request together with its huge-page memory address, it invokes the IO issuing flow of the distributed block storage. That is, the IO request is handed directly to the local server of the distributed block storage, which can access the shared memory directly, so memory is allocated only once and never copied.
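The back-end polling step can be illustrated with a deliberately tiny sketch: instead of a full virtqueue, a one-slot "mailbox" is probed for a bounded number of iterations. The `mailbox` type and the probe budget are assumptions for illustration; a real poller would spin on the lock-free queue's tail index from a dedicated core, as described later in the advantages.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    _Atomic int ready;   /* set by the front end when a request is posted */
    uint64_t    req;     /* request handle (e.g. a huge-page offset) */
} mailbox;

/* Probe the mailbox up to `budget` times; true when a request was claimed.
 * atomic_exchange both tests and clears the flag in one step, so the
 * request cannot be consumed twice. */
static bool poll_mailbox(mailbox *m, uint64_t *out, unsigned budget) {
    for (unsigned i = 0; i < budget; i++) {
        if (atomic_exchange(&m->ready, 0)) {
            *out = m->req;
            return true;
        }
    }
    return false;  /* nothing arrived; the poller keeps spinning on its core */
}
```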

The distributed block storage side then uses the LUN, offset, and similar information carried by the issued IO to determine the address the VM is asking to read or write. If it is a local read request, i.e., the request is a read and the local machine holds the requested content, the IO is served directly: the requested content is written straight into the huge-page address and the virtio front-end driver is notified that the IO is complete.

If the IO request is a write or a remote read, it is handled by a remote server of the distributed block storage. Specifically, the local server registers the huge pages with the smart NIC and notifies the remote server of the huge-page address and the IO request via RDMA; the remote server then performs the corresponding IO operation on the huge pages via RDMA according to the request and the address, with no memory copy anywhere in the process. On completion, the remote server notifies the virtio back-end driver that the IO is done, and the back-end driver in turn notifies the virtio front-end driver.

Therefore, the hyper-converged system provided by this embodiment has at least the following advantages:

1. Huge-page memory sharing gives direct memory-address access to data inside the node without memory copies. Moreover, huge-page memory never incurs page faults, enabling full-speed memory access.

2. Lock-free queues provide high concurrency. A lock-free queue can be used directly by multiple producers and multiple consumers, avoiding lock contention among IO threads in the virtual machine and raising service concurrency.

3. Polling accelerates event detection. By dedicating one CPU exclusively to the polling loop, event handling is unaffected by CPU thread switching, improving the timeliness and efficiency of event processing.

4. RDMA traffic is offloaded to the smart NIC. Distributed block storage keeps multiple replicas on multiple nodes for fault redundancy: a write must be sent to remote distributed block storage servers so that the data lands on several of them, while a read only needs to locate one replica; either way, network access is required for distributed data placement. The RDMA network is carried over the RoCE (RDMA over Converged Ethernet) protocol, and the smart NIC offloads this traffic automatically by protocol type, reducing CPU load. The memory used by the RDMA network is the huge-page memory, whose physical address is pinned; the whole huge-page region is registered with the NIC, so the NIC can access the physical addresses directly via RDMA without copying memory for transmission. The copy happens inside the NIC, no resources are copied during network transmission, the transfer is fully offloaded, and the physical machine's CPU pressure drops. At the same time, the memory that the VM IO flow would otherwise have to allocate is saved, as is the memory used by the distributed block storage.

The IO request issuing method of the hyper-converged system applied to the host machine is introduced below; the method is implemented on the basis of the hyper-converged system above.

As shown in fig. 4, the IO request issuing method of the hyper-converged system applied to the host comprises the following steps:

S41, request huge-page memory and create a request queue with the virtio front-end driver; when the virtual machine issues an IO request, store it in the request queue with the virtio front-end driver;

S42, poll the request queue with the virtio back-end driver and, when an IO request is detected, send it to the local server of the distributed block storage;

s43, judging whether the IO request is a local read request by the local server, if so, writing the request content of the IO request into the large-page memory, otherwise, sending the IO request and the address of the large-page memory to the remote host in an RDMA mode, so that the remote server of the distributed block storage on the remote host can perform corresponding read-write operation on the large-page memory in the RDMA mode according to the address of the large-page memory and the IO request.

The IO request issuing method of the hyper-converged system applied to the remote host is described below; it is likewise implemented on the basis of the hyper-converged system above.

As shown in fig. 5, the IO request issuing method of the hyper-converged system applied to the remote host comprises the following steps:

S51, receive, via RDMA, an IO request and a huge-page memory address sent by the local server of the distributed block storage on the host machine;

S52, perform the corresponding read/write operation on the huge-page memory on the host machine via RDMA according to the address and the IO request, the huge-page memory having been allocated by the virtio front-end driver on the host machine.

The process by which the local server sends the IO request and the huge-page memory address to the remote host includes: receiving a call request sent by the virtio back-end driver, which polls the request queue created on the host by the virtio front-end driver and, when an IO request is detected, sends the call request for processing it to the local server; determining whether the IO request is a local read request; if so, writing the requested content into the huge-page memory; otherwise, sending the IO request and the huge-page memory address to the remote host via RDMA.

In addition, the present application also provides a physical server of a hyper-converged system, comprising:

a memory, for storing a computer program; and

a processor, for executing the computer program to implement the IO request issuing method of the hyper-converged system applied to the host machine, or the method applied to the remote host, described above.

Finally, the present application provides a readable storage medium storing a computer program which, when executed by a processor, implements the IO request issuing method of the hyper-converged system applied to the host machine or to the remote host.

The embodiments are described progressively; each focuses on its differences from the others, and the same or similar parts are cross-referenced among them. Since the disclosed device corresponds to the disclosed method, its description is brief; refer to the method description for the relevant details.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random-access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

The solutions provided in the present application have been described in detail above. Specific examples are used herein to explain its principles and implementations, and the descriptions of the embodiments are only meant to help understand the method and core idea of the application. A person skilled in the art may, following the idea of the application, vary the specific implementation and the application scope; in summary, the content of this specification should not be construed as limiting the application.
