Linear text layout method in three-dimensional space, electronic device and storage medium

Document No.: 1846548    Publication date: 2021-11-16

Reading note: This technology, "Linear text layout method in three-dimensional space, electronic device and storage medium", was designed and created by 徐敏, 范渊 and 黄进 on 2021-08-13. Its main content is as follows: The application relates to a linear text layout method in a three-dimensional space, an electronic device and a storage medium. The linear text layout method in the three-dimensional space comprises the following steps: acquiring text data of a linear text to be displayed, and generating coordinate parameters of the linear text in a preset graphic library according to the text data; calculating the coordinate parameters by using a GPU, determining the rotation and translation parameters of the linear text in a first shader of the GPU, and acquiring, by the GPU, first projection depth information of the projection of the text center in the coordinate parameters in a screen space for displaying the linear text; processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in the screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data; and determining the display position of the linear text in the first shader according to the third coordinate data, and rendering and displaying the linear text based on the display position.

1. A method for linear text layout in three-dimensional space, comprising:

acquiring text data of a linear text to be displayed, and generating coordinate parameters of the linear text in a preset graphic library according to the text data, wherein the coordinate parameters comprise first coordinate data and a preset text center;

calculating the coordinate parameters by using a GPU (Graphics Processing Unit), determining the rotation and translation parameters of the linear text in a first shader of the GPU, and acquiring, by the GPU, projection transformation information of the projection of the text center in a screen space for displaying the linear text, wherein the projection transformation information comprises first projection depth information;

processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in the screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data;

and according to the third coordinate data, determining the display position of the linear text in the first shader, and rendering and displaying the linear text based on the display position.

2. The method of claim 1, wherein the processing the first coordinate data using the first projection depth information, and generating second coordinate data of the linear text in the screen space comprises:

reading a first word of the text data, and reading first initial coordinate data corresponding to the first word from the first coordinate data, wherein the text data comprises a plurality of first words;

and performing coordinate conversion on the first initial coordinate data according to the first projection depth information and a preset display size corresponding to the first character to generate temporary coordinate data corresponding to the first character in the screen space, wherein the second coordinate data comprises a plurality of temporary coordinate data, and the preset display size is determined according to the first initial coordinate data and preset second projection depth information.

3. The method of claim 2, wherein the processing the second coordinate data using the rotational-translational parameters to generate third coordinate data comprises:

reading the rotational-translational parameters and the second coordinate data;

and performing telescopic operation on the second coordinate data based on the rotation and translation parameters to obtain third coordinate data, wherein the telescopic operation comprises translation operation and/or rotation operation.

4. The method of claim 1, wherein determining a display position of the linear text in the first shader based on the third coordinate data comprises:

reading the third coordinate data, and converting the third coordinate data into fourth coordinate data in an object space coordinate system corresponding to the linear text;

and performing rendering operation on the fourth coordinate data through the first shader to obtain display position coordinate data of the text data in the first shader of the GPU, wherein the rendering operation comprises model transformation, view transformation and projection transformation.

5. The method of claim 1, wherein the coordinate parameters further comprise a first text direction vector and center coordinate data, and wherein calculating the coordinate parameters by using the GPU and determining the rotation and translation parameters of the linear text in the first shader of the GPU comprises:

performing matrix operation on the first text direction vector through the GPU to obtain a second text direction vector, wherein the matrix operation at least comprises model transformation and view transformation, and the second text direction vector is a direction vector of the first text direction vector in a preset projection world coordinate system;

detecting an interval value of the second text direction vector in the preset projection world coordinate system, and determining a space operation parameter of the text data according to the interval value, wherein the space operation parameter comprises a translation parameter and/or a rotation angle parameter of the text data;

performing a position correction operation based on the spatial operation parameter, the first coordinate data and the central coordinate data to obtain the rotation and translation parameter corresponding to the linear text, wherein the position correction operation at least includes one of the following operations: translation operation and rotation operation.

6. The method of claim 5, wherein detecting an interval value of the second text direction vector in the preset projected world coordinate system, and determining a spatial operation parameter of the text data according to the interval value comprises:

determining the interval value of the second text direction vector in the preset projection world coordinate system;

and inquiring the space operation parameter corresponding to the interval value in a first space operation parameter table, wherein the first space operation parameter table comprises the mapping relation between the interval value and the space operation parameter.

7. The method of claim 5, wherein performing a position correction operation based on the spatial operation parameter, the first coordinate data, and the center coordinate data comprises:

reading a second word of the text data, and respectively reading second initial coordinate data and second center coordinate data corresponding to the second word from the first coordinate data and the center coordinate data, wherein the text data comprises a plurality of second words;

calculating a difference value between the second initial coordinate data and the second central coordinate data, performing translation calculation on the difference value by using the translation parameter in the space operation parameter to obtain position coordinate data after the second character is translated, and determining that the rotation translation parameter comprises the position coordinate data after the second character is translated; and/or,

and performing rotation operation on the second initial coordinate data by using the rotation angle parameter in the space operation parameter to obtain position coordinate data of the second character after rotation, and determining that the rotation and translation parameter comprises the position coordinate data of the second character after rotation.

8. The method according to claim 5, wherein the first coordinate data includes third initial coordinate data corresponding to the text data, and performing the position correction operation based on the spatial operation parameter, the first coordinate data, and the center coordinate data includes:

reading the first text direction vector, and carrying out normalization processing on the first text direction vector to obtain a unit vector of the first text direction vector;

and generating a rotation matrix corresponding to the first text direction vector based on the unit vector, performing matrix operation on the third initial coordinate data according to the rotation matrix to obtain position coordinate data of the text data after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the text data after rotation.

9. The method of claim 1, wherein the obtaining, by the GPU, projection transformation information of the projection of the text center in a screen space displaying the linear text comprises:

reading model coordinate data corresponding to the text center, and sequentially performing model transformation operation, world transformation operation and projection transformation operation on the model coordinate data through the GPU to generate fifth coordinate data of the text center in a projection space of the preset graphic library;

mapping and converting the fifth coordinate data to a coordinate system corresponding to the screen space, and generating sixth coordinate data of the text center in the screen space, wherein the sixth coordinate data is a homogeneous coordinate corresponding to a two-dimensional coordinate of the text center in the screen space;

extracting a first projection depth parameter from the sixth coordinate data, wherein the first projection depth parameter is used for representing a homogeneous component of a homogeneous coordinate corresponding to the sixth coordinate data, and the first projection depth information corresponding to the linear text comprises the first projection depth parameter.

10. The method of claim 1, wherein generating the coordinate parameter of the linear text in a preset graphic library according to the text data comprises:

extracting a third word of the text data and generating a third text direction vector, wherein the third text direction vector is at least used for describing the orientation of the third word in a world coordinate system of the preset graphic library;

generating a rectangular surface corresponding to each third character in the world coordinate system of the preset graphic library, and respectively determining corresponding coordinate data of the vertex and the central point of the rectangular surface in the world coordinate system of the preset graphic library;

and determining that the coordinate parameters of the text data in a preset graphic library at least comprise the third text direction vector, and the corresponding coordinate data of the vertex and the central point of the rectangular surface in a world coordinate system of the preset graphic library.

11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of linear text layout in three-dimensional space according to any of claims 1 to 9.

12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for linear text layout in a three-dimensional space according to any one of claims 1 to 9.

Technical Field

The present disclosure relates to the field of computer technologies, and in particular, to a linear text layout method in a three-dimensional space, an electronic device, and a storage medium.

Background

Image rendering techniques are widely used in terminals such as computers and mobile phones. Text and images are displayed in the display space of a terminal through image rendering technology. As applications running on terminals increasingly pursue vivid and pleasing visual effects, the fluency of text and image rendering and the readability of rendered text become particularly important.

Existing rendering methods are designed for displaying text and images in conventional terminal scenes, where the display of text and images meets the rendering and display requirements of the corresponding scenes. In complex 3D scenes, interactive scenes, real-time scenes and the like, however, existing text-image rendering methods cannot satisfy both good rendering fluency and good text readability. When text is laid out in such scenes, the viewing angle changes as the scene rotates and translates, and the text size and text orientation change accordingly, so the readability and display effect of linear text laid out in three-dimensional space are poor.

At present, no effective solution has been proposed in the related art for the problem of poor readability and poor display effect after the layout of linear text in three-dimensional space is completed.

Disclosure of Invention

The embodiment of the application provides a linear text layout method in a three-dimensional space, an electronic device and a storage medium, so as to at least solve the problem that the readability and the display effect are poor after the layout of a linear text in the three-dimensional space is finished in the related art.

In a first aspect, an embodiment of the present application provides a method for linear text layout in a three-dimensional space, including: acquiring text data of a linear text to be displayed, and generating coordinate parameters of the linear text in a preset graphic library according to the text data, wherein the coordinate parameters comprise first coordinate data and a preset text center; calculating the coordinate parameters by using a GPU (graphics processing Unit), determining the rotation and translation parameters of the linear text in a first shader of the GPU, and acquiring projection transformation information of the center projection of the text in a screen space for displaying the linear text by using the GPU, wherein the projection transformation information comprises first projection depth information; processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in the screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data; and according to the third coordinate data, determining the display position of the linear text in the first shader, and rendering and displaying the linear text based on the display position.

In some embodiments, processing the first coordinate data using the first projected depth information, and generating second coordinate data of the linear text in the screen space comprises: reading a first word of the text data, and reading first initial coordinate data corresponding to the first word from the first coordinate data, wherein the text data comprises a plurality of first words; and performing coordinate conversion on the first initial coordinate data according to the first projection depth information and a preset display size corresponding to the first character to generate temporary coordinate data corresponding to the first character in the screen space, wherein the second coordinate data comprises a plurality of temporary coordinate data, and the preset display size is determined according to the first initial coordinate data and preset second projection depth information.

In some of these embodiments, processing the second coordinate data using the rotational-translational parameters, generating third coordinate data includes: reading the rotational-translational parameters and the second coordinate data; and performing telescopic operation on the second coordinate data based on the rotation and translation parameters to obtain third coordinate data, wherein the telescopic operation comprises translation operation and/or rotation operation.

In some embodiments, determining the display position of the linear text in the first shader based on the third coordinate data comprises: reading the third coordinate data, and converting the third coordinate data into fourth coordinate data in an object space coordinate system corresponding to the linear text; and performing rendering operation on the fourth coordinate data through the first shader to obtain display position coordinate data of the text data in the first shader of the GPU, wherein the rendering operation comprises model transformation, view transformation and projection transformation.

In some embodiments, the coordinate parameters further include a first text direction vector and center coordinate data, and operating on the coordinate parameters with a GPU determines the roto-translation parameters of the linear text in a first shader of the GPU includes: performing matrix operation on the first text direction vector through the GPU to obtain a second text direction vector, wherein the matrix operation at least comprises model transformation and view transformation, and the second text direction vector is a direction vector of the first text direction vector in a preset projection world coordinate system; detecting an interval value of the second text direction vector in the preset projection world coordinate system, and determining a space operation parameter of the text data according to the interval value, wherein the space operation parameter comprises a translation parameter and/or a rotation angle parameter of the text data; performing a position correction operation based on the spatial operation parameter, the first coordinate data and the central coordinate data to obtain the rotation and translation parameter corresponding to the linear text, wherein the position correction operation at least includes one of the following operations: translation operation and rotation operation.

In some embodiments, detecting an interval value of the second text direction vector in the preset projected world coordinate system, and determining the spatial operation parameter of the text data according to the interval value includes: determining the interval value of the second text direction vector in the preset projection world coordinate system; and inquiring the space operation parameter corresponding to the interval value in a first space operation parameter table, wherein the first space operation parameter table comprises the mapping relation between the interval value and the space operation parameter.

In some of these embodiments, performing a position correction operation based on the spatial operation parameter, the first coordinate data, and the center coordinate data comprises: reading a second word of the text data, and respectively reading second initial coordinate data and second center coordinate data corresponding to the second word from the first coordinate data and the center coordinate data, wherein the text data comprises a plurality of second words; calculating a difference value between the second initial coordinate data and the second central coordinate data, performing translation calculation on the difference value by using the translation parameter in the space operation parameter to obtain position coordinate data after the second character is translated, and determining that the rotation translation parameter comprises the position coordinate data after the second character is translated; and/or performing rotation operation on the second initial coordinate data by using the rotation angle parameter in the space operation parameter to obtain position coordinate data of the second characters after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the second characters after rotation.

In some embodiments, the first coordinate data includes third initial coordinate data corresponding to the text data, and performing a position correction operation based on the spatial operation parameter, the first coordinate data, and the center coordinate data includes: reading the first text direction vector, and carrying out normalization processing on the first text direction vector to obtain a unit vector of the first text direction vector; and generating a rotation matrix corresponding to the first text direction vector based on the unit vector, performing matrix operation on the third initial coordinate data according to the rotation matrix to obtain position coordinate data of the text data after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the text data after rotation.

In some embodiments, obtaining, by the GPU, projective transformation information of the center projection of text in a screen space in which the linear text is displayed comprises: reading model coordinate data corresponding to the text center, and sequentially performing model transformation operation, world transformation operation and projection transformation operation on the model coordinate data through the GPU to generate fifth coordinate data of the text center in a projection space of the preset graphic library; mapping and converting the fifth coordinate data to a coordinate system corresponding to the screen space, and generating sixth coordinate data of the text center in the screen space, wherein the sixth coordinate data is a homogeneous coordinate corresponding to a two-dimensional coordinate of the text center in the screen space; extracting a first projection depth parameter from the sixth coordinate data, wherein the first projection depth parameter is used for representing a homogeneous component of a homogeneous coordinate corresponding to the sixth coordinate data, and the first projection depth information corresponding to the linear text comprises the first projection depth parameter.

In some embodiments, generating the coordinate parameter of the linear text in a preset graphic library according to the text data includes: extracting a third word of the text data and generating a third text direction vector, wherein the third text direction vector is at least used for describing the orientation of the third word in a world coordinate system of the preset graphic library; generating a rectangular surface corresponding to each third character in the world coordinate system of the preset graphic library, and respectively determining corresponding coordinate data of the vertex and the central point of the rectangular surface in the world coordinate system of the preset graphic library; and determining that the coordinate parameters of the text data in a preset graphic library at least comprise the third text direction vector, and the corresponding coordinate data of the vertex and the central point of the rectangular surface in a world coordinate system of the preset graphic library.

In a third aspect, an embodiment of the present application provides an electronic apparatus, which includes a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for linear text layout in a three-dimensional space according to the first aspect.

In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the linear text layout method in three-dimensional space according to the first aspect.

Compared with the related art, the linear text layout method in the three-dimensional space, the electronic device and the storage medium provided by the embodiment of the application generate the coordinate parameters of the linear text in the preset graphic library according to the text data by acquiring the text data of the linear text to be displayed, wherein the coordinate parameters comprise the first coordinate data and the preset text center; calculating coordinate parameters by using a GPU (graphics processing Unit), determining rotation translation parameters of a linear text in a first shader of the GPU, and acquiring projection transformation information of a text center projected in a screen space for displaying the linear text by using the GPU, wherein the projection transformation information comprises first projection depth information; processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in a screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data; according to the third coordinate data, the display position of the linear text in the first shader is determined, and the linear text is rendered and displayed based on the display position, so that the problem that the readability and the display effect are poor after the three-dimensional space linear text is laid out in the related technology is solved, the linear text is displayed towards the observation visual angle, and the readability and the display effect of the linear text are improved.

The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

fig. 1 is a block diagram of a hardware configuration of a terminal of a linear text layout method in a three-dimensional space according to an embodiment of the present invention;

FIG. 2 is a flow diagram of a method of linear text layout in three-dimensional space according to an embodiment of the present application;

FIG. 3 is a flow diagram of a method for linear text layout in three-dimensional space in accordance with a preferred embodiment of the present application;

fig. 4 is a block diagram of a GPU-based text real-time layout apparatus according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.

It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.

Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.

Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application is not to be construed as limiting in number, and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof used in this application are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.

Various techniques described herein may be used for text rendering and layout display in 3D scenes and interactive scenes.

Before describing and explaining embodiments of the present application, a description will be given of the related art used in the present application as follows:

Graphics Library (GL): a library for rendering computer graphics on a display. It generally provides a set of optimized functions for performing common rendering tasks, which can be carried out entirely in software and computed by the CPU (common in embedded systems), or hardware-accelerated by a GPU (common in home computers).

A Graphics Processing Unit (GPU), also called a display core, visual processor, or display chip, is a microprocessor dedicated to image- and graphics-related operations on computers, workstations, game consoles, and mobile devices (e.g., tablet computers, smart phones, etc.). Using a GPU reduces the graphics card's dependence on the CPU and allows it to take over part of the work originally done by the CPU, particularly 3D graphics processing. The core technologies adopted by the GPU include hardware T&L (geometric transformation and lighting), cubic environment mapping and vertex blending, texture compression and bump mapping, a dual-texture four-pixel 256-bit rendering engine, and the like.

Shaders, which are used to implement image rendering, are editable programs that replace the fixed rendering pipeline. Shaders are well known in the art and typically include vertex shaders and fragment shaders, wherein:

a vertex shader is used for processing the calculation of each vertex of the triangles constituting the text (two triangles constitute one rectangular plane) and determining the position at which each triangle vertex is displayed on the screen, wherein the calculation process may be described as: projection matrix (projectMatrix) × view matrix (viewMatrix) × model matrix (modelMatrix) × vertex coordinates (position);

and a fragment shader is used for pixelating the image, that is, for determining the pixel values of the image data displayed on the screen.
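By way of illustration only, the following C++ sketch (using the GLM mathematics library, which is not part of this application; the function name is illustrative) expresses the vertex-shader computation described above, in which a vertex of one of the two triangles forming a character's rectangular plane is transformed into clip space:

    #include <glm/glm.hpp>

    // Illustrative sketch only: transform one vertex of a character quad from model space
    // to clip space, mirroring projectMatrix * viewMatrix * modelMatrix * position.
    glm::vec4 toClipSpace(const glm::mat4& projectMatrix,
                          const glm::mat4& viewMatrix,
                          const glm::mat4& modelMatrix,
                          const glm::vec3& position) {
        return projectMatrix * viewMatrix * modelMatrix * glm::vec4(position, 1.0f);
    }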

Model space, also called object space or local space: each model has its own independent coordinate space, and when the model moves or rotates, the model space moves and rotates with it. Related concepts are the model transformation and the model matrix, wherein the model transformation refers to transforming an object from the object coordinate system to the world coordinate system and is realized by multiplying by the model matrix; the model matrix contains translation, rotation, and scaling information, and may also be referred to as the basic global transformation matrix.

World space: the largest, outermost coordinate space; it has a fixed origin and fixed coordinate axes.

Camera space: the camera is at the origin, with +x pointing to the right, +y pointing up, and +z pointing toward the back of the camera; it determines the viewing angle used for rendering. Unlike model space and world space, camera space in U3D uses a right-handed coordinate system, which conforms to the OpenGL convention, so the -z direction is directly in front of the camera.

View transformation and view matrix: the view transformation is realized by multiplying the coordinates in the world coordinate system by the view matrix; the view matrix transforms coordinates from the world coordinate system into the camera coordinate system.

Clip space, also called homogeneous clip space: the required matrix is the clip matrix, also called the projection matrix. Primitives are clipped using the view frustum, which is enclosed by six planes called clipping planes. There are two types of frustum, orthographic and perspective; only perspective projection is recorded here.

Projection transformation and projection matrix: the projection transformation transforms an object from the camera coordinate system to the clip coordinate system, and is realized by multiplying by the projection matrix. There are two projection modes, orthographic projection and perspective projection, wherein:

orthogonal projection: the figures with the same size projected on the near plane have the same size, and the feeling of the real world such as the near size and the far size cannot be caused.

Homogeneous coordinates: N+1 dimensions are used to represent N-dimensional coordinates; that is, an additional component w can be appended to a 2D Cartesian coordinate to form a 2D homogeneous coordinate, so that a Cartesian point (X, Y) becomes (x, y, w) in homogeneous coordinates, with

X = x / w

Y = y / w

For example, the point (1, 2) in the Cartesian coordinate system can be expressed in homogeneous coordinates as (1, 2, 1). If the point (1, 2) moves to an infinite distance, it becomes (∞, ∞) in the Cartesian coordinate system, and its homogeneous coordinate can then be expressed as (1, 2, 0), because (1/0, 2/0) = (∞, ∞); since a point at infinity cannot be written directly with "∞", homogeneous coordinates make it expressible.

Perspective projection: for objects of the same size, near objects project large and far objects project small, which produces the same near/far feeling as the real world. After multiplication by the projection matrix, the x, y and z components of the homogeneous coordinate [x, y, z, w] of any point will lie within the range [-w, w]. Screen space is a two-dimensional space; a vertex is projected from clip space into screen space to obtain its corresponding two-dimensional coordinate.
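The following C++ sketch (GLM; the viewport parameters, structure and function names are illustrative assumptions) shows the perspective division that turns a clip-space homogeneous coordinate [x, y, z, w] into a two-dimensional screen coordinate, and how the homogeneous component w can be kept as projection depth information:

    #include <glm/glm.hpp>

    // Illustrative sketch only.
    struct ScreenPoint {
        glm::vec2 pixel;   // two-dimensional coordinate in screen space
        float     w;       // homogeneous component, usable as projection depth information
    };

    // Map a clip-space position to screen space for a viewport of the given size.
    ScreenPoint clipToScreen(const glm::vec4& clipPos, float viewportWidth, float viewportHeight) {
        glm::vec3 ndc = glm::vec3(clipPos) / clipPos.w;            // perspective division -> [-1, 1]
        glm::vec2 pixel((ndc.x * 0.5f + 0.5f) * viewportWidth,     // NDC -> pixel coordinates
                        (ndc.y * 0.5f + 0.5f) * viewportHeight);
        return { pixel, clipPos.w };
    }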

The method provided by the embodiment can be executed in a terminal, a computer or a similar operation device. Taking the operation on the terminal as an example, fig. 1 is a block diagram of a hardware structure of the terminal operated by the linear text layout method in the three-dimensional space according to the embodiment of the present invention. As shown in fig. 1, the terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.

The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the linear text layout method in the three-dimensional space in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.

The present embodiment provides a method for laying out a linear text in a three-dimensional space, where the method for laying out the linear text in the three-dimensional space is implemented based on a GPU, and fig. 2 is a flowchart of the method for laying out the linear text in the three-dimensional space according to the embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:

step S201, obtaining text data of a linear text to be displayed, and generating a coordinate parameter of the linear text in a preset graphic library according to the text data, where the coordinate parameter includes first coordinate data and a preset text center.

In this embodiment, the received text data of the linear text to be displayed includes text of the linear text, position data and image data, the text is used to describe characters to be displayed, the position data refers to position coordinates of the linear text, the image data includes a set of raster or pixelized images composed of multiple pixel points, and the rendering position of the linear text to be displayed in the screen space in the interactive real-time scene is determined by performing data processing on the position data, that is, by executing the layout method of this embodiment.

It should be noted that, in the embodiment of the present application, one image data may represent one character, and each character is composed of one rectangular surface (two triangles in the graphic library) and corresponding image data.

In this embodiment, after the corresponding text data is acquired, model transformation and view transformation are sequentially performed on the position data (corresponding to coordinate data in an object coordinate system) in the text data, so that the position data is transformed in turn into coordinate data in a world coordinate system and then into coordinate data in the world coordinate system corresponding to the preset graphic library. In the process of generating the coordinate parameters, coordinate data describing the orientation of the linear text in the world coordinate system of the preset graphic library, as well as the marked text center of the entire linear text, are also generated. In this embodiment, the first coordinate data in the coordinate parameters obtained from the text data processing mainly includes the coordinate data of the vertices of the rectangular surfaces forming the text in the preset graphic library.
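A minimal sketch of how the coordinate parameters of step S201 might be assembled is given below (C++ with GLM; the structure names, the fixed character size and the left-to-right layout along the text direction vector are illustrative assumptions rather than the claimed method itself):

    #include <glm/glm.hpp>
    #include <vector>
    #include <string>

    // Illustrative sketch only.
    struct CharQuad {
        glm::vec3 vertices[4];   // corners of the rectangular surface for one character
        glm::vec3 center;        // center point of the rectangular surface
    };

    struct CoordinateParameters {
        std::vector<CharQuad> quads;      // first coordinate data (vertices) and center coordinate data
        glm::vec3             textCenter; // preset text center of the whole linear text
        glm::vec3             lineVec;    // first text direction vector (orientation of the text)
    };

    // Lay characters out along 'lineVec' starting at 'origin', one quad of size 'charSize' each.
    CoordinateParameters buildCoordinateParameters(const std::u32string& text,
                                                   const glm::vec3& origin,
                                                   const glm::vec3& lineVec,
                                                   float charSize) {
        CoordinateParameters params{ {}, glm::vec3(0.0f), glm::normalize(lineVec) };
        glm::vec3 up(0.0f, 1.0f, 0.0f);                       // assumed "up" direction of the quads
        glm::vec3 right = params.lineVec * charSize;
        for (std::size_t i = 0; i < text.size(); ++i) {
            glm::vec3 base = origin + right * static_cast<float>(i);
            CharQuad q;
            q.vertices[0] = base;
            q.vertices[1] = base + right;
            q.vertices[2] = base + right + up * charSize;
            q.vertices[3] = base + up * charSize;
            q.center = base + 0.5f * right + 0.5f * up * charSize;
            params.textCenter += q.center;
            params.quads.push_back(q);
        }
        if (!params.quads.empty())
            params.textCenter /= static_cast<float>(params.quads.size());
        return params;
    }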

Step S202, a GPU is used for calculating coordinate parameters, rotational translation parameters of the linear text in a first shader of the GPU are determined, and projection transformation information of the text center projected in a screen space for displaying the linear text is obtained through the GPU, wherein the projection transformation information comprises first projection depth information.

In this embodiment, the GPU is used to calculate the coordinate parameters, that is, to calculate the translation and the rotation angle required to convert the coordinate parameters into the position coordinates corresponding to the screen space in the interactive, real-time scene. For example, in the original object space the text is displayed from left to right and oriented from west to east; when the text is converted into the screen space of the interactive real-time scene, a translation from right to left, a rotation of each single character around its central point, and a rotation of the whole text from the north-south direction to the west-east direction may be required. The coordinate data of the text translation and the rotation angle data of the single characters and of the whole text are the rotation and translation parameters in this embodiment.

In this embodiment, the GPU is used to obtain projection transformation information of the text center projected in the screen space for displaying the linear text; that is, the text center is treated as one vertex and subjected to spatial conversion processing and screen coordinate conversion to obtain the coordinate position at which the text center is projected in screen space, and the projection depth information of the linear text projected in screen space is determined from the projection depth attribute of that coordinate position. The first coordinate data is then processed with this projection depth information, so that the displayed text size of the linear text remains unchanged regardless of scene rotation and translation, and the text stays oriented toward the viewing angle (i.e., facing the camera).

In this embodiment, the projection transformation information includes model matrix information, view matrix information, projection matrix information, and conversion information of a screen coordinate system conversion process corresponding to coordinate data corresponding to a text center, and further includes a projection distance determined according to coordinates after the text center is converted into the coordinates in a screen space.
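The following sketch illustrates the treatment of the text center as a single vertex in step S202 (C++ with GLM; a perspective camera and the viewport mapping are assumed, and all names are illustrative):

    #include <glm/glm.hpp>

    // Illustrative sketch only: project the text center into screen space and keep its
    // homogeneous component w as the "first projection depth information".
    struct CenterProjection {
        glm::vec2 screenXY;   // 2D position of the text center on screen
        float     depthW;     // first projection depth parameter (homogeneous component)
    };

    CenterProjection projectTextCenter(const glm::mat4& projectMatrix,
                                       const glm::mat4& viewMatrix,
                                       const glm::mat4& modelMatrix,
                                       const glm::vec3& textCenter,
                                       float viewportWidth, float viewportHeight) {
        glm::vec4 clip = projectMatrix * viewMatrix * modelMatrix * glm::vec4(textCenter, 1.0f);
        glm::vec2 ndc  = glm::vec2(clip) / clip.w;                         // perspective division
        glm::vec2 xy((ndc.x * 0.5f + 0.5f) * viewportWidth,
                     (ndc.y * 0.5f + 0.5f) * viewportHeight);
        return { xy, clip.w };
    }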

Step S203, after the first coordinate data is processed by using the first projection depth information to generate second coordinate data of the linear text in the screen space, the second coordinate data is processed by using the rotation and translation parameter to generate third coordinate data.

In this embodiment, the first coordinate data is three-dimensional coordinates corresponding to a preset graphic library coordinate system; meanwhile, after the three-dimensional coordinates are subjected to projection transformation (such as perspective projection), the depth of each point in the three-dimensional coordinates in a projection scene can be generated, and the display size of characters corresponding to the linear text can be determined based on the three-dimensional coordinates and the depth; in this embodiment, after the first projection depth information is acquired, the first coordinate data of the linear text in the preset graphics library is processed, that is, the first projection depth information is utilized to perform processing of removing a projection depth component on the first coordinate data under the condition that the display size of the characters of the linear text is kept unchanged, and then the conversion from the coordinates of which the projection depth is removed to the coordinates in the screen space is performed, so as to generate the two-dimensional second coordinate data.

In this embodiment, after generating the two-dimensional second coordinate data, the translational coordinate data and the rotational angle data are extracted from the acquired rotational/translational parameters, and then the second coordinate data is subjected to scaling operation to obtain vertex coordinate data of the corresponding character.
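A hedged sketch of step S203 is given below (C++ with GLM). It assumes one possible composition of the operations described above: each character vertex is expressed as a screen-space offset from the projected text center, rescaled with the first and second projection depths so that the on-screen size stays constant, and then mirrored or rotated according to the rotation and translation parameters (order, charRotation); the exact scaling convention is an assumption of this sketch.

    #include <glm/glm.hpp>
    #include <cmath>

    // Illustrative sketch only: build second coordinate data (screen-space offset of a character
    // vertex from the text center), then apply rotation and translation parameters -> third coordinates.
    glm::vec2 layoutVertexOnScreen(const glm::vec2& vertexScreenXY,   // vertex projected to screen space
                                   const glm::vec2& centerScreenXY,   // projected text center
                                   float presetDepthW,                // second (reference) projection depth
                                   float centerDepthW,                // first projection depth information
                                   int   order,                       // +1: keep order, -1: mirror along the line
                                   float charRotationDeg) {           // per-character rotation angle
        // Second coordinate data: offset from the text center, rescaled so the displayed
        // character size stays at the preset display size regardless of scene depth.
        glm::vec2 offset = (vertexScreenXY - centerScreenXY) * (centerDepthW / presetDepthW);

        // Third coordinate data: mirror (translation parameter) and rotate (rotation angle parameter).
        offset.x *= static_cast<float>(order);
        float a = glm::radians(charRotationDeg);
        glm::vec2 rotated(offset.x * std::cos(a) - offset.y * std::sin(a),
                          offset.x * std::sin(a) + offset.y * std::cos(a));
        return centerScreenXY + rotated;
    }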

Step S204, according to the third coordinate data, determining the display position of the linear text in the first shader, and rendering and displaying the linear text based on the display position.

In this embodiment, after the third coordinate data is obtained, the GPU needs to restore the two-dimensional third coordinate data to coordinate data of a three-dimensional space (an object space corresponding to the linear text), and then the GPU performs spatial conversion processing based on the restored coordinate data, and the coordinate data generated by the conversion processing is used as a coordinate position where the linear text is rendered and displayed in the first shader.

It should be noted that, in this embodiment, the first coordinate data, the second coordinate data, and the third coordinate data are all represented by homogeneous coordinates, for example: when the coordinate of a certain point corresponding to the coordinate data is a three-dimensional coordinate, the corresponding homogeneous coordinate is a four-component coordinate generated by adding a homogeneous component. Meanwhile, in this embodiment, the three-dimensional coordinates and the two-dimensional coordinates can be represented in a unified form of the maximum dimension (four-dimensional), and when the maximum dimension is used to represent lower-dimensional coordinates, the component of the corresponding missing dimension is zero; for example, the component of the Z-direction dimension corresponding to a two-dimensional coordinate is 0.

It is further noted that, in the embodiments of the present application, the first shader includes, but is not limited to, a vertex shader. In the embodiment of the application, the character layout is realized by processing the relevant parameters of the text data based on the calculation process of the vertex shader.
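One possible way to express the final placement of step S204 is sketched below (C++ with GLM, mirroring what the first shader would output); reusing the text center's clip-space z and w so that the whole text lies at a single projection depth is an assumption of this sketch, not a statement of the claimed method:

    #include <glm/glm.hpp>

    // Illustrative sketch only: convert the third (screen-space) coordinate data back into a
    // clip-space position that the first shader can output as the display position.
    glm::vec4 screenToClip(const glm::vec2& thirdCoord,      // third coordinate data (pixels)
                           const glm::vec4& centerClipPos,   // clip-space position of the text center
                           float viewportWidth, float viewportHeight) {
        glm::vec2 ndc(thirdCoord.x / viewportWidth  * 2.0f - 1.0f,   // pixels -> NDC in [-1, 1]
                      thirdCoord.y / viewportHeight * 2.0f - 1.0f);
        // Undo the perspective division using the text center's homogeneous component,
        // keeping the center's depth so the whole text lies at one projection depth.
        return glm::vec4(ndc * centerClipPos.w, centerClipPos.z, centerClipPos.w);
    }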

Through the steps S201 to S204, obtaining text data of a linear text to be displayed, and generating a coordinate parameter of the linear text in a preset graphic library according to the text data, wherein the coordinate parameter includes first coordinate data and a preset text center; calculating coordinate parameters by using a GPU (graphics processing Unit), determining rotation translation parameters of a linear text in a first shader of the GPU, and acquiring projection transformation information of a text center projected in a screen space for displaying the linear text by using the GPU, wherein the projection transformation information comprises first projection depth information; processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in a screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data; according to the third coordinate data, the display position of the linear text in the first shader is determined, and the linear text is rendered and displayed based on the display position, so that the problem that the readability and the display effect are poor after the three-dimensional space linear text is laid out in the related technology is solved, the linear text is displayed towards the observation visual angle, and the readability and the display effect of the linear text are improved.

In some embodiments, the processing of the first coordinate data with the first projection depth information and the generating of the second coordinate data of the linear text in the screen space are performed by:

step 1, reading first words of text data, and reading first initial coordinate data corresponding to the first words from the first coordinate data, wherein the text data comprises a plurality of first words.

In the present embodiment, the first words are all the words constituting the linear text; the first coordinate data includes at least a set of first initial coordinate data of the plurality of first words.

And 2, performing coordinate conversion on the first initial coordinate data according to the first projection depth information and a preset display size corresponding to the first character to generate temporary coordinate data corresponding to the first character in a screen space, wherein the second coordinate data comprises a plurality of temporary coordinate data, and the preset display size is determined according to the first initial coordinate data and the preset second projection depth information.

In this embodiment, the first initial coordinate data is three-dimensional coordinates corresponding to a preset graphic library coordinate system; meanwhile, after the first initial coordinate data is subjected to projection transformation, the depth of each point in the first initial coordinate data in a projection scene can be generated, and the display size corresponding to the first character, namely the preset display size, can be determined based on the first initial coordinate data and the corresponding depth; in this embodiment, after the first projection depth information is acquired, the first initial coordinate data is processed, that is, the first projection depth information is utilized to perform projection depth component removal processing on the first coordinate data under the condition that the display size of the first text is kept unchanged, and then the conversion from the coordinates with the projection depth removed to the coordinates in the screen space is performed, so as to generate two-dimensional temporary coordinate data; in the present embodiment, the provisional coordinate data is used as coordinate data for performing the scaling operation based on the rotational/translational parameters.

Through the above steps, the first words of the text data are read, and the first initial coordinate data corresponding to the first words are read from the first coordinate data; coordinate conversion is then performed on the first initial coordinate data according to the first projection depth information and the preset display size corresponding to each first word, generating temporary coordinate data corresponding to the first words in screen space. This realizes conversion of the coordinate data based on the projection depth information, so that the coordinates do not deform after being projected from three-dimensional space to the two-dimensional screen space, the linear text displayed in screen space keeps the size corresponding to the preset display size, and the viewing angle (perspective camera) is maintained.
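As a brief illustration of how the preset display size could be derived from the first initial coordinate data and the preset second projection depth (C++ with GLM; the projectionScale factor, which bundles the projection-matrix and viewport terms, is an assumption of this sketch):

    #include <glm/glm.hpp>

    // Illustrative sketch only: preset display size of a character, determined from the extent
    // of its rectangular surface and a preset (second) projection depth w_ref.
    float presetDisplaySize(const glm::vec3& vertexA,     // two opposite corners of the
                            const glm::vec3& vertexB,     // character's rectangular surface
                            float presetDepthW,           // preset second projection depth
                            float projectionScale) {      // assumed projection/viewport factor
        float charWorldSize = glm::length(vertexB - vertexA);
        return charWorldSize * projectionScale / presetDepthW;   // perspective: size ~ worldSize / w
    }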

In some embodiments, the coordinate parameters further include a first text direction vector and center coordinate data, and the coordinate parameters are calculated by the GPU, and determining the rotation-translation parameters of the linear text in the first shader of the GPU is implemented by:

step 1, performing matrix operation on the first text direction vector through a GPU to obtain a second text direction vector, wherein the matrix operation at least comprises model transformation and view transformation, and the second text direction vector is the direction vector of the first text direction vector in a preset projection world coordinate system.

In this embodiment, the operation performed on the first text direction vector is performed by the first shader of the GPU. The first text direction vector is used to indicate the orientation of the text data in the world coordinate system of the original object space, for example a vertical orientation, and is a fixed constant in the generated coordinate parameters. The generated second text direction vector is the direction vector obtained after matrix transformation of the first text direction vector and indicates the orientation of the text data in the preset projected world coordinate system; the second text direction vector is likewise a fixed constant.

It should be noted that the first text direction vector is the original orientation of the text data, while the second text direction vector is its orientation in the camera's projected world, which depends on the position from which the camera projects. For example, let the original object space of the first text direction vector be the ground and the orientation of the text data indicated by the first text direction vector be upward, and let the preset projection world be a camera projection world with the camera placed at a certain position on the ground; the orientation of the text data relative to the camera is then downward, and this downward orientation is the second text direction vector.
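A one-function sketch of step 1 above follows (C++ with GLM); treating the text direction as a pure direction with a zero homogeneous component is standard practice and an assumption of this sketch:

    #include <glm/glm.hpp>

    // Illustrative sketch only: transform the first text direction vector into the projected
    // world (camera) coordinate system via the model and view matrices, yielding the second
    // text direction vector.
    glm::vec3 toSecondTextDirection(const glm::mat4& viewMatrix,
                                    const glm::mat4& modelMatrix,
                                    const glm::vec3& firstTextDir) {
        glm::vec4 v = viewMatrix * modelMatrix * glm::vec4(firstTextDir, 0.0f); // w = 0: a direction, not a point
        return glm::normalize(glm::vec3(v));
    }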

And 2, detecting an interval value of the second text direction vector in a preset projection world coordinate system, and determining a space operation parameter of the text data according to the interval value, wherein the space operation parameter comprises a translation parameter and/or a rotation angle parameter of the text data.

In this embodiment, the interval values in the preset projected world coordinate system include a first interval, a second interval, a third interval, and a fourth interval; the interval values in other world coordinate systems may likewise be set to include the above-mentioned intervals.

In some optional embodiments, the relationship between the interval value corresponding to the second text direction vector and the spatial operation parameter is as follows:

lineWorldVec is located in the first interval:  order = 1,  charRotation = 0°
lineWorldVec is located in the second interval: order = -1, charRotation = 90°
lineWorldVec is located in the third interval:  order = -1, charRotation = 0°
lineWorldVec is located in the fourth interval: order = 1,  charRotation = 90°

Wherein lineWorldVec represents a second text direction vector;

order represents the translation parameter; its value does not indicate the magnitude of the translation, but whether the text data is translated, where 1 represents no translation and -1 represents translation;

the charRotation indicates a rotation angle parameter, and a parameter value of the charRotation indicates whether or not the text data is rotated and also indicates a rotation angle of the text data, where charRotation of 0 ° indicates no rotation, and charRotation of 90 ° indicates rotation by 90 °.

It should be noted that the relationship between the interval value and the spatial operation parameter includes, but is not limited to, the correspondence in the above table; for example, the interval/quadrant of the preset projected world coordinate system in which the second text direction vector lies may also be used to determine whether the translation is positive or negative.
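The above table can be encoded as a small lookup, as sketched below (C++ with GLM). Which quadrant of the preset projected world coordinate system corresponds to which interval is not fixed by the text, so the sign-based interval test here is an illustrative assumption; only the (order, charRotation) pairs follow the table:

    #include <glm/glm.hpp>

    // Illustrative sketch only.
    struct SpatialOperationParameter {
        int   order;          //  1: no translation (keep order), -1: translate (mirror)
        float charRotation;   //  per-character rotation angle in degrees
    };

    // Map the second text direction vector to a spatial operation parameter by the interval
    // (here: the quadrant of its x/y components) into which it falls, following the table above.
    SpatialOperationParameter lookupSpatialParameter(const glm::vec3& lineWorldVec) {
        static const SpatialOperationParameter table[4] = {
            {  1,  0.0f },   // first interval
            { -1, 90.0f },   // second interval
            { -1,  0.0f },   // third interval
            {  1, 90.0f },   // fourth interval
        };
        int interval;
        if (lineWorldVec.x >= 0.0f)
            interval = (lineWorldVec.y >= 0.0f) ? 0 : 3;
        else
            interval = (lineWorldVec.y >= 0.0f) ? 1 : 2;
        return table[interval];
    }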

And 3, performing a position correction operation based on the spatial operation parameter, the first coordinate data and the center coordinate data to obtain the rotation and translation parameter of the text data, wherein the position correction operation comprises at least one of a translation operation and a rotation operation.

In this embodiment, the first coordinate data includes initial coordinate data corresponding to text data of a linear text, that is, a set of initial coordinate data of all characters in the text data.

In this embodiment, the rotation-translation parameter may be a single translation parameter, a single rotation angle parameter, or a translation parameter and a rotation angle parameter.

Through the above steps, a matrix operation is performed on the first text direction vector by the GPU to obtain the second text direction vector; the interval value of the second text direction vector in the preset projected world coordinate system is detected, and the spatial operation parameters of the text data are determined according to the interval value; and a position correction operation is performed based on the spatial operation parameters, the first coordinate data and the center coordinate data to obtain the rotation and translation parameters of the text data. The parallel computing capability of the GPU thus accelerates the acquisition of the rotation and translation parameters in the text layout operation, improves the efficiency of the text layout, and reduces the CPU load of the terminal.

In some embodiments, detecting an interval value of the second text direction vector in a preset projected world coordinate system, and determining a spatial operation parameter of the text data according to the interval value is implemented by:

step 1, determining an interval value of a second text direction vector in a preset projection world coordinate system.

And 2, querying the spatial operation parameter corresponding to the interval value in a first spatial operation parameter table, wherein the first spatial operation parameter table comprises a mapping relation between interval values and spatial operation parameters.

In this embodiment, the spatial operation parameters are obtained by looking up the table, so that the efficiency of text layout operation is accelerated, and excessive consumption of the computing resources of the terminal is avoided. The first spatial parameter table in this embodiment includes, but is not limited to, the above-mentioned relationship table between the interval value corresponding to the second text direction vector and the spatial operation parameter.
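One possible in-code form of such a first spatial operation parameter table is a plain lookup keyed by the interval value, mirroring the order/charRotation pairs listed above; it is a sketch, not the disclosure's own data structure.

```typescript
// Sketch: first spatial operation parameter table, mapping an interval value to
// the translation parameter (order) and the rotation angle parameter (charRotation).
interface SpatialOpParams {
  order: 1 | -1;        // translation parameter
  charRotation: 0 | 90; // rotation angle parameter, in degrees
}

const firstSpatialOpParamTable: Record<number, SpatialOpParams> = {
  1: { order: 1, charRotation: 0 },
  2: { order: -1, charRotation: 90 },
  3: { order: -1, charRotation: 0 },
  4: { order: 1, charRotation: 90 },
};

function lookupSpatialOpParams(interval: number): SpatialOpParams {
  return firstSpatialOpParamTable[interval];
}
```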

In some embodiments, the position correction operation based on the spatial operation parameter, the first coordinate data and the center coordinate data is performed through the following steps:

step 1, reading second characters of the text data, and respectively reading second initial coordinate data and second center coordinate data corresponding to the second characters from the first coordinate data and the center coordinate data, wherein the text data comprises a plurality of second characters.

In the present embodiment, the second words are all the words constituting the text.

And 2, calculating a difference value between the second initial coordinate data and the second central coordinate data, performing translation calculation on the difference value by using a translation parameter in the spatial operation parameter to obtain position coordinate data after the second character is translated, and determining that the rotation translation parameter comprises the position coordinate data after the second character is translated.

In this embodiment, the position coordinate data after the second character translation is calculated as follows:

translatePosition_i = order * (position_i - charCenter_i)

wherein translatePosition_i represents the translated coordinates of the i-th second character, order represents the translation parameter, position_i represents the initial coordinate data of the i-th second character, and charCenter_i represents the center coordinate data of the i-th second character.

It should be noted that, in some embodiments, the text layout only needs to translate a given second character; in that case the rotation and translation parameter corresponding to that second character contains only its translated position coordinate data, that is, the second character is only translated.

Through the above steps, the second characters of the text data are read, and the second initial coordinate data and second center coordinate data corresponding to each second character are read from the first coordinate data and the center coordinate data respectively; the difference between the second initial coordinate data and the second center coordinate data is calculated, the difference is translated using the translation parameter in the spatial operation parameters to obtain the translated position coordinate data of the second character, and the rotation and translation parameters are determined to include these translated position coordinate data. The rotation and translation parameters of the characters that need to be translated are thus computed, and calculating the translated positions of many characters with the GPU avoids the terminal stalling that results when a large data volume occupies CPU computing resources.
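The translation step is a simple per-character vector operation; the following sketch restates translatePosition_i = order * (position_i - charCenter_i) in code, assuming two-dimensional screen-space coordinates.

```typescript
// Sketch: translate one character's screen-space coordinates relative to its center.
type Vec2 = [number, number];

function translateChar(position: Vec2, charCenter: Vec2, order: 1 | -1): Vec2 {
  return [
    order * (position[0] - charCenter[0]),
    order * (position[1] - charCenter[1]),
  ];
}
```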

In some alternative embodiments, for some of the second characters of the text data, the corresponding rotation and translation operation only rotates those characters.

For these second characters, the position correction operation based on the spatial operation parameter, the first coordinate data and the center coordinate data includes the following steps:

step 1, reading second initial coordinate data and second center coordinate data corresponding to second characters of the text data, wherein the text data comprises a plurality of second characters.

And 2, performing rotation operation on the second initial coordinate data by using the rotation angle parameter in the space operation parameter to obtain position coordinate data of the second characters after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the second characters after rotation.

Through the above steps, the second initial coordinate data and second center coordinate data corresponding to each second character of the text data are read; the second initial coordinate data are rotated using the rotation angle parameter in the spatial operation parameters to obtain the rotated position coordinate data of the second character, and the rotation and translation parameters are determined to include these rotated position coordinate data. The rotation and translation parameters of the characters that need to be rotated are thus computed, and calculating the rotation of many characters with the GPU avoids the terminal stalling that results when a large data volume occupies CPU computing resources.

In some embodiments, the rotation angle parameter used to rotate the second character may be a rotation angle, or a rotation matrix built from that rotation angle.

The rotation matrix based on the rotation angle can be calculated as the two-dimensional rotation matrix:

charRotationMatrix = [ cos(charRotation)  -sin(charRotation) ; sin(charRotation)  cos(charRotation) ]

wherein charRotation is the rotation angle, and charRotationMatrix is the rotation matrix of that rotation angle.
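A sketch of the per-character rotation, assuming the two-dimensional rotation matrix given above and a column-vector convention; both choices are made only for this example.

```typescript
// Sketch: build a 2x2 rotation matrix from charRotation (degrees) and rotate a
// character's screen-space coordinates with it (row-major [m00, m01, m10, m11]).
type Vec2 = [number, number];
type Mat2 = [number, number, number, number];

function charRotationMatrix(charRotationDeg: number): Mat2 {
  const r = (charRotationDeg * Math.PI) / 180;
  return [Math.cos(r), -Math.sin(r), Math.sin(r), Math.cos(r)];
}

function rotateChar(p: Vec2, m: Mat2): Vec2 {
  // Column-vector convention: result = M * p.
  return [m[0] * p[0] + m[1] * p[1], m[2] * p[0] + m[3] * p[1]];
}
```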

In some embodiments, the first coordinate data includes third initial coordinate data corresponding to the text data, and the position correction operation based on the spatial operation parameter, the first coordinate data, and the center coordinate data is performed by:

step 1, reading a first text direction vector, and carrying out standardization processing on the first text direction vector to obtain a unit vector of the first text direction vector.

And 2, generating a rotation matrix corresponding to the first text vector based on the unit vector, performing matrix operation on the third initial coordinate data according to the rotation matrix to obtain position coordinate data of the text data after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the text data after rotation.

In the present embodiment, the rotation angle parameter of the whole linear text is calculated, that is, the rotation-translation parameter includes the position coordinate data after the text data is rotated for the whole linear text.

Through the above steps, the first text direction vector is read and normalized to obtain its unit vector; a rotation matrix corresponding to the first text direction vector is generated from the unit vector, the third initial coordinate data are multiplied by this rotation matrix to obtain the rotated position coordinate data of the text data, and the rotation and translation parameters are determined to include these rotated position coordinate data. The rotation angle parameter of the whole text is thus computed, and performing the operation with the GPU accelerates the calculation of the rotation and translation parameters and improves the efficiency of the text layout.

In some optional embodiments, the first text direction vector is normalized by using the following formula, and the unit vector and the rotation matrix of the first text direction vector are calculated:

lineUnitVec=normalize(lineVec)

where lineVec denotes the first text direction vector, lineUnitVec denotes the unit vector corresponding to lineVec, normalize denotes the normalization function, and lineRotationMatrix denotes the rotation matrix built from the unit vector lineUnitVec.

In this embodiment, after the lineRotationMatrix is calculated, the text may be rotated based on the rotation matrix.
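As an illustration, the normalization and the construction of lineRotationMatrix can be sketched as follows; deriving the matrix from the unit vector's components (cosine from the x component, sine from the y component) is an assumption made for this example, since the disclosure's exact construction is not reproduced here.

```typescript
// Sketch: normalize the text direction vector and build a 2x2 rotation matrix
// from the resulting unit vector (row-major [m00, m01, m10, m11]).
type Vec2 = [number, number];
type Mat2 = [number, number, number, number];

function normalize(v: Vec2): Vec2 {
  const len = Math.hypot(v[0], v[1]) || 1;
  return [v[0] / len, v[1] / len];
}

function lineRotationMatrix(lineVec: Vec2): Mat2 {
  const [ux, uy] = normalize(lineVec); // lineUnitVec
  return [ux, -uy, uy, ux];
}
```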

In an embodiment, for text rotation, each individual character is rotated first, that is, rotated based on the calculated charRotationMatrix; after the rotation of the individual characters is completed, the entire text is further rotated based on the calculated lineRotationMatrix.

In some optional embodiments, a single character of the text, namely the second character described above, needs to be both translated and rotated, so the rotation and translation parameters corresponding to that second character include both its translated position coordinate data and its rotated position coordinate data; at the same time, the effect that rotating the whole text has on the position coordinate data of the second character also needs to be taken into account.

In some embodiments, the second coordinate data is processed by using the rotational-translational parameters, and the third coordinate data is generated by:

step 1, reading the rotation and translation parameters and second coordinate data.

And 2, performing telescopic operation on the second coordinate data based on the rotation and translation parameters to obtain third coordinate data, wherein the telescopic operation comprises translation operation and/or rotation operation.

Through the above steps, the rotation and translation parameters and the second coordinate data are read, and a telescopic operation is performed on the second coordinate data based on the rotation and translation parameters to obtain the third coordinate data, thereby applying the translation and/or rotation to the second coordinate data that have been converted into two-dimensional coordinate data.

In a specific embodiment, the third coordinate data of the translated and rotated characters of the linear text may be calculated by the following formula:

rotationPosition_i = (translatePosition_i * charRotationMatrix + charCenter_i * order) * lineRotationMatrix

wherein rotationPosition_i represents the position coordinate data of the i-th character in the text after translation and rotation, translatePosition_i represents the position coordinate data of the i-th character after translation, charRotationMatrix is the rotation matrix of the rotation angle, charCenter_i represents the center coordinate data of the i-th character, order represents the translation parameter of the i-th character, and lineRotationMatrix represents the rotation matrix built from the unit vector lineUnitVec of the first text direction vector.
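The combined formula can be restated in code as below; the row-vector-times-matrix convention, which matches the left-to-right order of the formula, is an illustrative assumption.

```typescript
// Sketch of rotationPosition_i =
//   (translatePosition_i * charRotationMatrix + charCenter_i * order) * lineRotationMatrix
type Vec2 = [number, number];
type Mat2 = [number, number, number, number]; // row-major [m00, m01, m10, m11]

function mulVecMat(v: Vec2, m: Mat2): Vec2 {
  // Row vector times 2x2 matrix.
  return [v[0] * m[0] + v[1] * m[2], v[0] * m[1] + v[1] * m[3]];
}

function rotationPosition(
  translatePosition: Vec2,
  charCenter: Vec2,
  order: 1 | -1,
  charRotationMatrix: Mat2,
  lineRotationMatrix: Mat2,
): Vec2 {
  const rotated = mulVecMat(translatePosition, charRotationMatrix);
  const shifted: Vec2 = [
    rotated[0] + charCenter[0] * order,
    rotated[1] + charCenter[1] * order,
  ];
  return mulVecMat(shifted, lineRotationMatrix);
}
```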

In some implementations, determining the display position of the linear text in the first shader based on the third coordinate data is performed by:

step 1, reading third coordinate data, and converting the third coordinate data into fourth coordinate data in an object space coordinate system corresponding to the linear text.

In this embodiment, the third coordinate data is two-dimensional coordinate data, that is, coordinate data corresponding to the screen space, and before rendering, the third coordinate data is restored to corresponding three-dimensional coordinate data (corresponding vertex coordinate data), and then a shader is used to perform rendering operation, so as to complete the layout of the linear text.

And 2, performing rendering operation on the fourth coordinate data through the first shader to obtain display position coordinate data of the text data in the first shader of the GPU, wherein the rendering operation comprises model transformation, view transformation and projection transformation.

Through the above steps, the third coordinate data are read and converted into the fourth coordinate data in the object space coordinate system corresponding to the linear text, and the fourth coordinate data are rendered by the first shader to obtain the display position coordinate data of the text data in the first shader of the GPU. This realizes the coordinate conversion from screen space back to world or view space, and the coordinate restoration ensures that the vertices are not distorted when the rendering system maps them to screen space coordinates.

In this embodiment, the calculation of the display position may be calculated as follows:

gl_Position=projectionMatrix*viewMatrix*modelMatrix*rotationPosition

the glposition represents a display Position of the text, and the glposition is a default variable of the first shader, the projectMatrix represents a projection matrix, the viewMatrix represents a view matrix, the modelMatrix represents a model matrix, and the rotationPosition represents fourth coordinate data corresponding to the text.

In some embodiments, obtaining, by the GPU, projective transformation information of a center projection of text in a screen space displaying linear text is achieved by:

step 1, reading model coordinate data corresponding to a text center, and sequentially performing model transformation operation, world transformation operation and projection transformation operation on the model coordinate data through a GPU (graphics processing Unit) to generate fifth coordinate data of the text center in a projection space of a preset graphic library.

In this embodiment, the calculation of the fifth coordinate data may be calculated as follows:

gl_Position=projectionMatrix*viewMatrix*modelMatrix*vec4(position,1.0)

where gl_Position represents the fifth coordinate data, projectionMatrix represents the projection matrix, viewMatrix represents the view matrix, modelMatrix represents the model matrix, and vec4(position, 1.0) represents the model coordinate data of the text center in object space.

And 2, mapping and converting the fifth coordinate data to a coordinate system corresponding to a screen space, and generating sixth coordinate data of the text center in the screen space, wherein the sixth coordinate data is a homogeneous coordinate corresponding to a two-dimensional coordinate of the text center in the screen space.

In this embodiment, the calculation of the sixth coordinate data may be calculated as follows:

projToScreenTextCenter = ((gl_Position.x + 1.) / 2. * u_resolution.x, (gl_Position.y + 1.) / 2. * u_resolution.y)

the proj to screen textcenter represents sixth coordinate data of the text center in the screen space, gl _ position.x represents a component in the X direction in the fifth coordinate data corresponding to the text center, gl _ position.y represents a component in the Y direction in the fifth coordinate data corresponding to the text center, u _ resolution.x represents the screen lateral resolution, and u _ resolution.y represents the screen longitudinal resolution; meanwhile, after calculating the proj toscreentextcenter, the proj toscreentextcenter is converted into a homogeneous coordinate, that is, the homogeneous component w is added, and has the following conversion relationship:

X_i = x_i / w

Y_i = y_i / w

wherein (X_i, Y_i) represents the coordinates corresponding to projToScreenTextCenter, X_i being the component of projToScreenTextCenter in the X direction and Y_i its component in the Y direction, and (x_i, y_i, w) is the 2D homogeneous coordinate of (X_i, Y_i).

And 3, extracting a first projection depth parameter from the sixth coordinate data, wherein the first projection depth parameter is used for representing the homogeneous component of the homogeneous coordinate corresponding to the sixth coordinate data, and the first projection depth information corresponding to the linear text comprises the first projection depth parameter.

Through the above steps, the model coordinate data corresponding to the text center are read and sequentially subjected to the model transformation, world transformation and projection transformation operations by the GPU to generate the fifth coordinate data of the text center in the projection space of the preset graphic library; the fifth coordinate data are mapped into the coordinate system corresponding to the screen space to generate the sixth coordinate data of the text center in the screen space; and the first projection depth parameter is extracted from the sixth coordinate data. The homogeneous component corresponding to the first projection depth information is thus obtained, and coordinates converted with this homogeneous component are not distorted when projected from three-dimensional space into the two-dimensional screen space.
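A sketch of the text-center projection that keeps the homogeneous component w as the first projection depth parameter, following the formulas above; the matrix layout and helper names are assumptions for this example (mulMat4Vec4 is the same kind of helper as in the earlier display-position sketch).

```typescript
// Sketch: project the text center to clip space, map it to screen space, and
// keep the homogeneous component w as the first projection depth parameter.
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, column-major

function mulMat4Vec4(m: Mat4, v: Vec4): Vec4 {
  return [
    m[0] * v[0] + m[4] * v[1] + m[8] * v[2] + m[12] * v[3],
    m[1] * v[0] + m[5] * v[1] + m[9] * v[2] + m[13] * v[3],
    m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14] * v[3],
    m[3] * v[0] + m[7] * v[1] + m[11] * v[2] + m[15] * v[3],
  ];
}

function projectTextCenter(
  projectionMatrix: Mat4,
  viewMatrix: Mat4,
  modelMatrix: Mat4,
  textCenter: [number, number, number], // model coordinates of the text center
  resolution: [number, number],         // u_resolution.x, u_resolution.y
): { screen: [number, number]; w: number } {
  const pos: Vec4 = [textCenter[0], textCenter[1], textCenter[2], 1.0];
  const clip = mulMat4Vec4(projectionMatrix, mulMat4Vec4(viewMatrix, mulMat4Vec4(modelMatrix, pos)));
  // Screen-space mapping, following the projToScreenTextCenter formula above.
  const screenX = ((clip[0] + 1) / 2) * resolution[0];
  const screenY = ((clip[1] + 1) / 2) * resolution[1];
  // The homogeneous component w is kept as the first projection depth parameter.
  return { screen: [screenX, screenY], w: clip[3] };
}
```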

In some embodiments, generating the coordinate parameter of the linear text in the preset graphic library according to the text data is implemented by the following steps:

step 1, extracting a third character of the text data, and generating a third text direction vector, wherein the third text direction vector is at least used for describing the orientation of the third character in a world coordinate system of a preset graphic library.

In this embodiment, the third character, like the first character and the second character, is a single character of the text data; "first", "second" and "third" are used only to distinguish references in different embodiments and do not mean that the text data contains different types of characters.

In this embodiment, the third text direction vector and the first text direction vector in the above are both vectors of the orientation of the text data in the original object space.

And 2, generating a rectangular surface corresponding to each third character in the world coordinate system of the preset graphic library, and respectively determining corresponding coordinate data of the vertex and the central point of the rectangular surface in the world coordinate system of the preset graphic library.

And 3, determining that the coordinate parameters of the text data in the preset graphic library at least comprise the third text direction vector and the coordinate data of the vertices and the center point of the rectangular surface in the world coordinate system of the preset graphic library.

In this embodiment, the preset graphic library includes, but is not limited to, one of the following: OpenGL graphics library, WebGL graphics library.
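As an illustrative sketch (not the disclosure's own procedure), per-character rectangular faces can be generated with a fixed character width and height; the uniform advance and the axis-aligned placement are simplifying assumptions.

```typescript
// Sketch: build one rectangular face per character in the world coordinate
// system of the preset graphic library, recording its four vertices and center.
interface CharQuad {
  vertices: [number, number, number][]; // four corners, world coordinates
  center: [number, number, number];
}

function buildCharQuads(
  text: string,
  origin: [number, number, number],
  charWidth: number,
  charHeight: number,
): CharQuad[] {
  const quads: CharQuad[] = [];
  for (let i = 0; i < text.length; i++) {
    const x0 = origin[0] + i * charWidth;
    const y0 = origin[1];
    const z = origin[2];
    quads.push({
      vertices: [
        [x0, y0, z],
        [x0 + charWidth, y0, z],
        [x0 + charWidth, y0 + charHeight, z],
        [x0, y0 + charHeight, z],
      ],
      center: [x0 + charWidth / 2, y0 + charHeight / 2, z],
    });
  }
  return quads;
}
```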

The embodiments of the present application are described and illustrated below by means of preferred embodiments.

Fig. 3 is a flow chart of a method of linear text layout in three-dimensional space according to a preferred embodiment of the present application. As shown in fig. 3, the method for linear text layout in three-dimensional space includes the following steps:

step S301, text data to be displayed is acquired.

Step S302 generates initial coordinate data and center coordinate data of each text data.

Step S303 calculates a second text direction vector of the text data.

In step S304, it is determined that the second text direction vector is located in the second section of the preset projected world coordinate system.

In step S305, a rotational-translational parameter is calculated.

Step S306, pre-calculating the coordinates of the text center in the three-dimensional space projected to the two-dimensional screen space and the corresponding homogeneous component.

And step S307, based on the homogeneous component, performing coordinate conversion on the vertex of each character, and performing telescopic operation on the converted coordinates based on the rotation and translation parameters.

And step S308, restoring the vertex coordinates after the telescopic operation into vertex world coordinates, and displaying the text layout according to the vertex world coordinates.

It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.

The present embodiment further provides a device for linear text layout in a three-dimensional space, where the device is used to implement the foregoing embodiments and preferred embodiments, and the description of the device that has been already made is omitted. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.

Fig. 4 is a block diagram of a linear text layout apparatus in a three-dimensional space according to an embodiment of the present application, and as shown in fig. 4, the apparatus includes:

the obtaining module 41 is configured to obtain text data of a linear text to be displayed, and generate a coordinate parameter of the linear text in a preset graphic library according to the text data, where the coordinate parameter includes first coordinate data and a preset text center.

And the preprocessing module 42 is coupled to the obtaining module 41, and configured to perform an operation on the coordinate parameters by using the GPU, determine a rotation and translation parameter of the linear text in a first shader of the GPU, and obtain, by using the GPU, projection transformation information of a center of the text projected in a screen space where the linear text is displayed, where the projection transformation information includes first projection depth information.

And the operation module 43 is coupled to the preprocessing module 42 and configured to process the first coordinate data by using the first projection depth information to generate the second coordinate data of the linear text in the screen space, and then process the second coordinate data by using the rotation and translation parameters to generate the third coordinate data.

And the processing module 44 is coupled to the operation module 43, and configured to determine a display position of the linear text in the first shader according to the third coordinate data, and render and display the linear text based on the display position.

In some embodiments, the preprocessing module 42 is configured to read a first word of the text data, and read first initial coordinate data corresponding to the first word from the first coordinate data, where the text data includes a plurality of first words; and performing coordinate conversion on the first initial coordinate data according to the first projection depth information and a preset display size corresponding to the first character to generate temporary coordinate data corresponding to the first character in a screen space, wherein the second coordinate data comprises a plurality of temporary coordinate data, and the preset display size is determined according to the first initial coordinate data and the preset second projection depth information.

In some of these embodiments, the preprocessing module 42 is configured to read the rotational-translational parameters and the second coordinate data; and performing telescopic operation on the second coordinate data based on the rotation translation parameters to obtain third coordinate data, wherein the telescopic operation comprises translation operation and/or rotation operation.

In some embodiments, the processing module 44 is configured to read the third coordinate data, and convert the third coordinate data into fourth coordinate data in an object space coordinate system corresponding to the linear text; and performing rendering operation on the fourth coordinate data through the first shader to obtain display position coordinate data of the text data in the first shader of the GPU, wherein the rendering operation comprises model transformation, view transformation and projection transformation.

In some embodiments, the coordinate parameters further include a first text direction vector and center coordinate data, and the preprocessing module 42 is configured to perform matrix operation on the first text direction vector through the GPU to obtain a second text direction vector, where the matrix operation at least includes model transformation and view transformation, and the second text direction vector is a direction vector of the first text direction vector in a preset projection world coordinate system; detecting an interval value of the second text direction vector in a preset projection world coordinate system, and determining a space operation parameter of the text data according to the interval value, wherein the space operation parameter comprises a translation parameter and/or a rotation angle parameter of the text data; performing position correction operation based on the spatial operation parameter, the first coordinate data and the central coordinate data to obtain a rotation and translation parameter corresponding to the linear text, wherein the position correction operation at least comprises one of the following steps: translation operation and rotation operation.

In some embodiments, the preprocessing module 42 is configured to determine an interval value of the second text direction vector in a preset projected world coordinate system; and inquiring the space operation parameter corresponding to the interval value in a first space operation parameter table, wherein the first space parameter table comprises the mapping relation between the interval value and the space operation parameter.

In some embodiments, the preprocessing module 42 is configured to read a second word of the text data, and read a second initial coordinate data and a second central coordinate data corresponding to the second word from the first coordinate data and the central coordinate data, respectively, where the text data includes a plurality of second words; calculating a difference value between the second initial coordinate data and the second central coordinate data, performing translation calculation on the difference value by using a translation parameter in the space operation parameter to obtain position coordinate data after the second character is translated, and determining that the rotation translation parameter comprises the position coordinate data after the second character is translated; and/or performing rotation operation on the second initial coordinate data by using the rotation angle parameter in the space operation parameter to obtain position coordinate data of the second characters after rotation, and determining that the rotation translation parameter comprises the position coordinate data of the second characters after rotation.

In some embodiments, the first coordinate data includes third initial coordinate data corresponding to the text data, and the preprocessing module 42 is configured to read a first text direction vector, and perform normalization processing on the first text direction vector to obtain a unit vector of the first text direction vector; and generating a rotation matrix corresponding to the first text vector based on the unit vector, performing matrix operation on the third initial coordinate data according to the rotation matrix to obtain position coordinate data of the text data after rotation, and determining that the rotation and translation parameters comprise the position coordinate data of the text data after rotation.

In some embodiments, the preprocessing module 42 is configured to read model coordinate data corresponding to a text center, and sequentially perform model transformation operation, world transformation operation, and projection transformation operation on the model coordinate data through the GPU to generate fifth coordinate data of the text center in a projection space of a preset graphics library; mapping and converting the fifth coordinate data to a coordinate system corresponding to a screen space, and generating sixth coordinate data of the text center in the screen space, wherein the sixth coordinate data is a homogeneous coordinate corresponding to a two-dimensional coordinate of the text center in the screen space; and extracting a first projection depth parameter from the sixth coordinate data, wherein the first projection depth parameter is used for representing a homogeneous component of a homogeneous coordinate corresponding to the sixth coordinate data, and first projection depth information corresponding to the linear text comprises the first projection depth parameter.

In some embodiments, the obtaining module 41 is configured to extract a third word of the text data, and generate a third text direction vector, where the third text direction vector is at least used to describe an orientation of the third word in a world coordinate system of the preset graphic library; generating a rectangular surface corresponding to each third character in the world coordinate system of the preset graphic library, and respectively determining corresponding coordinate data of a vertex and a central point of the rectangular surface in the world coordinate system of the preset graphic library; and determining that the coordinate parameters of the text data in the preset graphic library at least comprise the third text direction vector and the corresponding coordinate data of the vertex of the rectangular plane and the central point in the world coordinate system of the preset graphic library.

The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.

The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.

Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.

Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:

and S1, acquiring text data of the linear text to be displayed, and generating coordinate parameters of the linear text in a preset graphic library according to the text data, wherein the coordinate parameters comprise first coordinate data and a preset text center.

And S2, operating the coordinate parameters by using the GPU, determining the rotation and translation parameters of the linear text in a first shader of the GPU, and acquiring projection transformation information of the center of the text projected in a screen space for displaying the linear text by using the GPU, wherein the projection transformation information comprises first projection depth information.

And S3, processing the first coordinate data by using the first projection depth information to generate second coordinate data of the linear text in the screen space, and then processing the second coordinate data by using the rotation and translation parameters to generate third coordinate data.

And S4, determining the display position of the linear text in the first shader according to the third coordinate data, and rendering and displaying the linear text based on the display position.

It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.

In addition, in combination with the linear text layout method in the three-dimensional space in the foregoing embodiments, the embodiments of the present application may provide a storage medium to implement. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements a linear text layout method in a three-dimensional space according to any of the above embodiments.

It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
