Data preprocessing method, scenario display method, device, medium and equipment

Document No.: 199585 Publication date: 2021-11-05

Reading note: This technology, "Data preprocessing method, scenario display method, device, medium and equipment", was designed and created by Wang Da on 2021-08-03. Its main content: an embodiment of the invention discloses a data preprocessing method, a scenario display method, an apparatus, a medium and a device. The data preprocessing method includes: acquiring task state parameters during task execution, and determining at least one piece of result scenario data based on the task state parameters; for any result scenario data, calling a shadow model of each object in the result scenario data, and acquiring the sequence frame formed while each object's shadow model performs the result scenario data; and storing the tag of the result scenario data in association with the corresponding sequence frame, the stored sequence frame being used for matching and calling based on the task execution result. This technical scheme shortens the response time of the result scenario and improves its response efficiency. Displaying the result scenario as sequence frames also reduces memory usage and reduces stuttering during operation.

1. A method of pre-processing data, comprising:

acquiring task state parameters in a task execution process, and determining at least one result scenario data based on the task state parameters;

for any result scenario data, calling a shadow model of each object in the result scenario data, and acquiring a sequence frame formed in the process that the shadow model of each object executes the result scenario data;

and performing associated storage on the label of the result scenario data and the corresponding sequence frame, wherein the stored sequence frame is used for matching and calling based on the task execution result.

2. The method of claim 1, wherein said determining at least one resulting storyline data based on said task state parameter comprises:

matching based on the task state parameters and the state parameters corresponding to the task completion levels, determining at least one successfully matched task completion level, and obtaining result scenario data corresponding to the at least one successfully matched task completion level; or,

processing the state parameters of each object based on the object type and the state parameters of each object in the task state parameters and the weight of each object type to determine a task execution index; and generating corresponding result plot data based on the task execution index.

3. The method of claim 1, wherein the resulting storyline data comprises object types and numbers of objects of each type;

the calling the shadow model of each object in the result scenario data comprises:

and calling the shadow model corresponding to the number of the objects based on any object type.

4. The method of claim 1, wherein the resulting storyline data comprises an action of each object;

the obtaining of the sequence frame formed in the process that the shadow model of each object executes the result scenario data comprises:

and controlling the shadow model of each object to execute corresponding actions in the result scenario data, simultaneously controlling a virtual camera to collect image frames of each result scenario data process, and forming sequence frames based on each image frame.

5. The method of claim 4, wherein each shadow model is configured with a virtual camera for capturing image frames corresponding to the shadow model, the image frames of each shadow model forming a sequence of frames.

6. The method of claim 4, further comprising:

and for any object, recording the displacement and the corresponding timestamp of the object in the result scenario data, and performing associated storage on the displacement, the corresponding timestamp and the formed sequence frame.

7. A plot display method, comprising:

acquiring task state parameters in a task execution process, generating a plot processing request based on the task state parameters, and sending the plot processing request to an edge server so that the edge server responds to the plot processing request to generate a plurality of sequence frames corresponding to result plot data;

acquiring a task execution result, and requesting a corresponding sequence frame from the edge server based on the task execution result;

and rendering the sequence frame.

8. The method according to claim 7, wherein the obtaining task state parameters during task execution comprises:

and determining the task execution progress, and acquiring the current task state parameter when the task execution progress meets the scenario trigger condition.

9. The method of claim 7, wherein the sequence frames comprise sequence frames corresponding to respective objects in the resulting scenario data;

the rendering the sequence of frames comprises:

and respectively creating display billboards of all objects in the result plot data, and respectively rendering the sequence frames of the corresponding objects in the display billboards.

10. The method of claim 9, wherein rendering the sequential frames of the corresponding objects in each of the display billboards comprises:

obtaining the displacement of the object and a corresponding timestamp in the result plot data;

and controlling the corresponding display billboard to move based on the displacement corresponding to the timestamp, and rendering the image frame corresponding to the timestamp in the sequence frame in the display billboard.

11. A data preprocessing apparatus, comprising:

the result scenario data determining module is used for acquiring task state parameters in a task execution process and determining at least one result scenario data based on the task state parameters;

the sequence frame acquisition module is used for calling a shadow model of each object in the result scenario data for any result scenario data and acquiring a sequence frame formed in the process that the shadow model of each object executes the result scenario data;

and the sequence frame storage module is used for storing the label of the result plot data and the corresponding sequence frame in a correlation manner, wherein the stored sequence frame is used for matching and calling based on the task execution result.

12. A plot display apparatus, comprising:

the scenario processing request sending module is used for acquiring task state parameters in a task execution process, generating a scenario processing request based on the task state parameters, and sending the scenario processing request to an edge server, so that the edge server responds to the scenario processing request and generates a plurality of sequence frames corresponding to result scenario data;

the sequence frame request module is used for acquiring a task execution result and requesting a corresponding sequence frame from the edge server based on the task execution result;

and the sequence frame rendering module is used for rendering the sequence frames.

13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a data pre-processing method as claimed in any one of claims 1 to 6 or a scenario display method as claimed in any one of claims 7 to 10 when executing the program.

14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a data preprocessing method as claimed in any one of claims 1 to 6 or a scenario display method as claimed in any one of claims 7 to 10.

Technical Field

The embodiments of the invention relate to the field of computer technology, and in particular to a data preprocessing method, a scenario display method, an apparatus, a medium and a device.

Background

With the continuous development of computer technology, online games have become widely popular, and users' expectations of online games keep rising.

During the running of an online game there is a need to display scenarios. However, in the course of implementing the present invention, the inventors found at least the following technical problem in the prior art: as tasks develop differently, different scenarios must be displayed; accordingly, generating a scenario requires loading and instantiating multiple scenario character models according to the scenario data, which consumes a large amount of memory and causes the online game to stutter while running.

Disclosure of Invention

The embodiments of the invention provide a data preprocessing method, a scenario display method, an apparatus, a medium and a device, which reduce stuttering during scenario display.

In a first aspect, an embodiment of the present invention provides a data preprocessing method, including:

acquiring task state parameters in a task execution process, and determining at least one result scenario data based on the task state parameters;

for any result scenario data, calling a shadow model of each object in the result scenario data, and acquiring a sequence frame formed in the process that the shadow model of each object executes the result scenario data;

and performing associated storage on the label of the result scenario data and the corresponding sequence frame, wherein the stored sequence frame is used for matching and calling based on the task execution result.

In a second aspect, an embodiment of the present invention further provides a scenario display method, including:

acquiring task state parameters in a task execution process, generating a plot processing request based on the task state parameters, and sending the plot processing request to an edge server so that the edge server responds to the plot processing request to generate a plurality of sequence frames corresponding to result plot data;

acquiring a task execution result, and requesting a corresponding sequence frame from the edge server based on the task execution result;

and rendering the sequence frame.

In a third aspect, an embodiment of the present invention further provides a data preprocessing apparatus, including:

the result scenario data determining module is used for acquiring task state parameters in a task execution process and determining at least one result scenario data based on the task state parameters;

the sequence frame acquisition module is used for calling a shadow model of each object in the result scenario data for any result scenario data and acquiring a sequence frame formed in the process that the shadow model of each object executes the result scenario data;

and the sequence frame storage module is used for storing the label of the result plot data and the corresponding sequence frame in a correlation manner, wherein the stored sequence frame is used for matching and calling based on the task execution result.

In a fourth aspect, an embodiment of the present invention further provides a scenario display apparatus, including:

the scenario processing request sending module is used for acquiring task state parameters in a task execution process, generating a scenario processing request based on the task state parameters, and sending the scenario processing request to an edge server, so that the edge server responds to the scenario processing request and generates a plurality of sequence frames corresponding to result scenario data;

the sequence frame request module is used for acquiring a task execution result and requesting a corresponding sequence frame from the edge server based on the task execution result;

and the sequence frame rendering module is used for rendering the sequence frames.

In a fifth aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the data preprocessing method or the scenario display method according to any embodiment of the present invention.

In a sixth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the data preprocessing method or the scenario display method according to any one of the embodiments of the present invention.

According to the technical scheme provided by this embodiment, task state data are acquired during task execution, and at least one feasible piece of result scenario data is obtained based on the task state parameters. Sequence frames corresponding to the result scenario data are generated, and the tag of each piece of result scenario data is stored in association with its sequence frames, so that they can be called quickly once the task completes; this shortens the response time of the result scenario and improves its response efficiency. Displaying the result scenario as sequence frames also reduces memory usage and reduces stuttering during operation.

Drawings

Fig. 1 is a schematic flow chart illustrating a data preprocessing method according to an embodiment of the present invention;

fig. 2 is a flowchart of a scenario display method according to a second embodiment of the present invention;

fig. 3 is a schematic structural diagram of a data preprocessing apparatus according to a third embodiment of the present invention;

fig. 4 is a schematic structural diagram of a scenario display apparatus according to a fourth embodiment of the present invention;

fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.

Example one

Fig. 1 is a schematic flow chart of a data preprocessing method according to Embodiment 1 of the present invention. This embodiment is applicable to preprocessing scenarios predicted during task execution. The method may be executed by the data preprocessing apparatus provided by an embodiment of the present invention; the apparatus may be implemented in software and/or hardware and may be configured on an edge server. The method specifically includes the following steps:

s110, acquiring task state parameters in a task execution process, and determining at least one result scenario data based on the task state parameters.

And S120, for any result scenario data, calling the shadow model of each object in the result scenario data, and acquiring a sequence frame formed in the process that the shadow model of each object executes the result scenario data.

And S130, performing associated storage on the tags of the result scenario data and corresponding sequence frames, wherein the stored sequence frames are used for matching and calling based on the task execution result.

A scenario is displayed after the task finishes executing: the result scenario depends on the task execution result, and different execution results trigger different result scenarios. In the prior art, the triggered result scenario is determined from the task execution result only after the task completes, and the scenario is generated in real time for display; this generation step delays the response and affects the running efficiency of the online game. It also consumes a large amount of memory, which easily causes the game to stutter while running. To address these problems, this embodiment triggers the generation of at least one piece of result scenario data while the task is still executing, and preprocesses the result scenarios based on that data. When the task completes, the sequence frames of the preprocessed result scenario are called directly, skipping the post-completion generation step and improving the efficiency of displaying the result scenario.
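The preprocess-then-lookup idea above can be sketched in a few lines of Python. All class and tag names here are hypothetical, not from the patent: sequence frames are stored under the tag of their result scenario during preprocessing, and the finished task's result is matched to a tag for a direct lookup with no real-time model loading.

```python
# Minimal sketch (hypothetical names) of the preprocess-then-lookup flow:
# sequence frames are stored under the tag of their result scenario while
# the task is still running, then looked up directly once it completes.

class SequenceFrameStore:
    """Stores pre-rendered sequence frames keyed by result-scenario tag."""

    def __init__(self):
        self._frames_by_tag = {}

    def store(self, tag, frames):
        self._frames_by_tag[tag] = frames

    def lookup(self, tag):
        # Direct lookup: no model loading or real-time generation needed.
        return self._frames_by_tag.get(tag)

store = SequenceFrameStore()
# Preprocessing phase: render frames for every predicted result scenario.
for tag in ("level_2", "level_3"):
    store.store(tag, [f"{tag}_frame_{i}" for i in range(3)])

# After the task completes, the actual result is matched to a stored tag.
frames = store.lookup("level_3")
```

If the actual result matches no stored tag, `lookup` returns `None` and the caller would have to fall back to ordinary generation; the sketch leaves that path out.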

During task execution, the edge server receives a scenario preprocessing request sent by the terminal. The request may include the task state parameters, which are used to generate at least one sequence frame. The request may be triggered at a preset task execution progress; for example, when the progress reaches 70% or 80%, the terminal obtains the task state parameters at the current moment and generates the scenario preprocessing request based on them.
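As a rough illustration of this trigger condition, the following sketch checks the progress threshold and builds a request; the threshold value, field names and request shape are assumptions, not taken from the patent.

```python
# Hedged sketch of the trigger check: once task progress reaches a preset
# threshold, snapshot the current task state parameters and build a
# scenario-preprocessing request to send to the edge server.

TRIGGER_PROGRESS = 0.7  # e.g. 70% task execution progress (assumed value)

def maybe_build_preprocess_request(progress, state_params):
    """Return a request dict once the trigger condition is met, else None."""
    if progress < TRIGGER_PROGRESS:
        return None
    return {"type": "scenario_preprocess", "task_state": dict(state_params)}

req = maybe_build_preprocess_request(0.75, {"kills": 7, "vitality": 0.6})
```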

The task state parameters are the parameter values used during task execution to estimate the task execution result; they may include, for example and without limitation, the type, level, vitality, revival count, task output quantity, task completion degree and elapsed time of each task execution object. It should be noted that different tasks are evaluated by different parameters: the task state parameter types are determined from the type of the currently executed task, and the corresponding task state parameters are obtained based on those types. By way of example, task types include, but are not limited to, dungeon (instance) tasks, competitive tasks and trading tasks.

Because the task is still executing, multiple task execution results are possible given the task state parameters, and different execution results correspond to different result scenarios. In this embodiment, multiple pieces of result scenario data are determined during task execution from the possible execution results, so that a corresponding sequence frame can be generated for each piece of result scenario data.

In some embodiments, determining at least one piece of result scenario data based on the task state parameters includes: matching the task state parameters against the state parameters corresponding to each task completion level, determining at least one successfully matched task completion level, and obtaining the result scenario data corresponding to each matched level. In this embodiment, the task execution result may include a plurality of task completion levels, each corresponding to different task state parameters; the parameters corresponding to a completion level are the task state parameters at the moment task execution completes. The task state parameters at completion are matched against those of each completion level to determine the successfully matched level. Illustratively, the task completion levels may be level one, level two and level three, or grade S, grade A and grade B; the number of levels and their presentation are not limited and can be set according to user needs.

Task state parameters are acquired during task execution, and the task completion levels still possible at completion are predicted from them. Optionally, impossible completion levels are eliminated according to the development trend of each task state parameter, leaving at least one possible completion level. Illustratively, the elapsed time tends to increase, the revival count tends to increase, the task output tends to increase, and the remaining vitality tends to decrease.

Illustratively, take the kill count as the task state parameter. The kill-count ranges for the three task completion levels are: 0-5 for level one, 5-20 for level two, and greater than 20 for level three. During task execution, the kill count in the acquired task state parameters is 7. Since the kill count trends upward, the 0-5 range can no longer be satisfied; when execution completes, the kill count may fall in the 5-20 range or exceed 20, so the possible task completion levels are level two and level three. Similarly, take the remaining vitality as the task state parameter. The remaining-vitality ranges for the three levels are: 0-50% for level one, 50-75% for level two, and 75-100% for level three. During task execution, the acquired remaining vitality is 80%. Since the remaining vitality trends downward, at completion it may satisfy any of the 0-50%, 50-75% or 75-100% ranges, so the possible task completion levels are level one, level two and level three.
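The trend-based elimination in these two examples can be expressed as a small helper. The level names and ranges below come from the examples above; the function itself is only an illustrative sketch.

```python
def feasible_levels(current, increasing, levels):
    """Keep only the completion levels whose parameter range is still
    reachable, given the parameter's monotonic trend.

    `levels` maps a level name to a (low, high) range; `high` may be
    None for an open-ended range such as "greater than 20".
    """
    out = []
    for name, (low, high) in levels.items():
        if increasing:
            # The value only grows, so ranges it has already passed are out.
            if high is None or current <= high:
                out.append(name)
        else:
            # The value only shrinks, so ranges entirely above it are out.
            if current >= low:
                out.append(name)
    return out

# Kill count is 7 and rising: the 0-5 range of level one is ruled out.
kill_levels = {"level_1": (0, 5), "level_2": (5, 20), "level_3": (20, None)}
possible_kill = feasible_levels(7, True, kill_levels)

# Remaining vitality is 80% and falling: every range is still reachable.
vitality_levels = {"level_1": (0, 50), "level_2": (50, 75), "level_3": (75, 100)}
possible_vitality = feasible_levels(80, False, vitality_levels)
```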

It should be noted that, in some embodiments, multiple task state parameters are used for matching; this is not limited here.

In some embodiments, determining at least one piece of result scenario data based on the task state parameters includes: processing the state parameters of each object based on the object types and state parameters in the task state parameters and the weight of each object type to determine a task execution index; and generating the corresponding result scenario data based on the task execution index. In this embodiment, a weight is preset for each task state parameter, and the task execution index is obtained by a weighted calculation over the acquired parameters. In some embodiments the weights include positive and negative weights: parameters such as the kill count and remaining vitality carry positive weights (positive numbers), while parameters such as the elapsed time and revival count carry negative weights (negative numbers).
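A minimal sketch of the weighted calculation follows. The parameter names and weight values are assumptions for illustration; only the positive/negative split mirrors the text.

```python
# Illustrative weighted task-execution index. Good-performance parameters
# (kill count, remaining vitality) carry positive weights; costly ones
# (elapsed time, revival count) carry negative weights. All names and
# weight values here are assumptions, not taken from the patent.

WEIGHTS = {
    "kills": 2.0,               # positive weight
    "remaining_vitality": 1.5,  # positive weight
    "duration": -0.5,           # negative weight
    "revivals": -5.0,           # negative weight
}

def task_execution_index(state_params):
    """Weighted sum of the acquired task state parameters."""
    return sum(WEIGHTS[name] * value
               for name, value in state_params.items() if name in WEIGHTS)

idx = task_execution_index(
    {"kills": 7, "remaining_vitality": 60, "duration": 120, "revivals": 1}
)  # 2.0*7 + 1.5*60 - 0.5*120 - 5.0*1 = 39.0
```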

The task state parameters may be of multiple types, and different types may have different development trends. Optionally, processing the state parameters of each object based on the object types, the state parameters and the weight of each object type to determine a task execution index includes: for any type of task state parameter, determining, from its development trend, the feasible ranges that the parameter may occupy when task execution completes. Permuting and combining the feasible ranges of all parameter types yields the possible task execution results, and a weighted calculation over the task state parameters and corresponding weights in each result yields that result's task execution index. Illustratively, the remaining vitality trends downward and is 60% during execution, so its feasible ranges at completion may be 0-50% and 50-75%; the kill count trends upward and is 7 during execution, so its feasible ranges at completion may be 5-20 and greater than 20. Combining these yields four task execution results: vitality 0-50% with 5-20 kills; vitality 50-75% with 5-20 kills; vitality 0-50% with more than 20 kills; and vitality 50-75% with more than 20 kills. A corresponding task execution index is then determined for each task execution result. For the feasible range of each task state parameter, the midpoint of the range may be used as the calculation value.
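The permutation-and-combination step, together with the midpoint rule, might look like this. The parameter names and the cap placed on the open-ended kill range are assumptions for the sketch.

```python
import itertools

def candidate_results(feasible_ranges):
    """feasible_ranges maps a parameter name to its list of (low, high)
    feasible ranges. One candidate task execution result is produced per
    combination, using each range's midpoint as the calculation value."""
    names = list(feasible_ranges)
    combos = itertools.product(*(feasible_ranges[n] for n in names))
    return [
        {n: (low + high) / 2 for n, (low, high) in zip(names, combo)}
        for combo in combos
    ]

ranges = {
    "remaining_vitality": [(0, 50), (50, 75)],  # feasible at 60%, decreasing
    "kills": [(5, 20), (20, 40)],               # ">20" capped at 40 for the sketch
}
results = candidate_results(ranges)  # 2 x 2 = 4 candidate results
```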

In some embodiments, result scenario data are preset for different ranges of the task execution index, and the computed index is used to call the corresponding result scenario data. In some embodiments, each type of data within the result scenario data has a preset relationship with the task execution index, and the relationship may differ between data types, for example a positive or a negative correlation. Optionally, a calculation mode based on the task execution index is preset for each data type; the preset calculation for each type is called with the obtained task execution index as input, yielding each type of data and thus the result scenario data.

The resulting scenario data is the basis for generating the sequence frames. Optionally, the result scenario data includes object types and the number of objects of each type; correspondingly, the invoking a shadow model of each object in the resulting storyline data includes: and calling the shadow model corresponding to the number of the objects based on any object type.

In this embodiment, the objects in the result scenario data may be one or more of a player character, a pet, a mount, and a non-player character associated with the task; that is, the objects include fixed-appearance objects and non-fixed-appearance objects. For example, a non-player character has a fixed appearance, while player characters, pets and mounts do not. Optionally, calling the shadow model of each object in the result scenario data includes: calling the corresponding shadow model according to the type of each fixed-appearance object; and, for each non-fixed-appearance object, calling the corresponding initial shadow model according to its type, acquiring the object's configuration information, and updating the initial shadow model based on the configuration information to obtain the shadow model used to generate the sequence frames.

An initial shadow model can be understood as a base model configured with the basic information common to one object type; the basic information may include, but is not limited to, the identification, facial features or body type of a target object. Different object types (for example, different character classes) correspond to different initial shadow models, and objects of the same type operated by different users correspond to the same initial shadow model. In some embodiments, several initial shadow models of each type may be preset, so that when the result scenario data contains multiple objects of the same type, multiple initial shadow models, that is, shadow models matching the number of objects, can be called at once. In other embodiments, one initial shadow model may be preset per type, and when multiple objects of the same type exist in the result scenario data, the preset initial shadow model is copied to obtain shadow models matching the number of objects.

In this embodiment, the configuration information of non-fixed-appearance objects can be read during task execution. The configuration information may describe any object the subject can configure; configurable objects include, but are not limited to, props, equipment, outfits, hair accessories and skins. In some embodiments, the information of a configurable object includes, but is not limited to, its shape, color and special effects; for example, the configuration information may be the shape, color or special effect of an outfit. In other embodiments, the configuration information may be the identifier of each configurable object, the identifier being unique to that object, and the configuration information is determined by recognizing the identifiers of the target object's configured objects. By acquiring the configuration information and generating the sequence frames based on it, each object's configuration in the sequence frames matches its actual configuration, improving the fidelity and accuracy of the sequence-frame display.

Specifically, for any object, the called initial shadow model is updated based on the object's configuration information to form the shadow model used to generate sequence frames, so that the initial shadow model and the object have synchronized configurations. Illustratively, if the object's handheld weapon during task execution is an axe, the initial shadow model is updated with the axe's shape, color and other attribute information so that the weapon in the model's hands is the same axe; if the object is wearing a Spring Festival limited outfit during task execution, the initial shadow model is updated with the outfit's special-effect attributes so that the model is also configured with the outfit's special effect.
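A toy version of this synchronization step, using the axe and Spring Festival outfit examples above; all field names and values are illustrative, not from the patent.

```python
# Toy synchronization step: merge the object's observed configuration
# info into the initial shadow model's configuration so the two match.

def update_shadow_model(model_config, object_config):
    """Return the model configuration updated with the object's info."""
    updated = dict(model_config)   # leave the initial config untouched
    updated.update(object_config)
    return updated

initial = {"weapon": None, "outfit": "default", "special_effect": None}
observed = {
    "weapon": {"kind": "axe", "color": "grey"},
    "outfit": "spring_festival_limited",
    "special_effect": "spring_festival_glow",
}
synced = update_shadow_model(initial, observed)
```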

The initial shadow models may be mounted on preset empty nodes. In some embodiments, each initial shadow model may be configured with its own virtual camera, which shoots the corresponding initial shadow model, or the current shadow model updated based on the configuration information, to obtain image frames.

In some embodiments, the initial shadow models and the virtual cameras may be set up independently: when an initial shadow model is called, a virtual camera is called correspondingly and controlled to shoot that initial shadow model (or the shadow model updated based on the configuration information) to obtain image frames. In other embodiments, one or more initial shadow models may jointly call a single virtual camera, which correspondingly captures image frames of the one or more shadow models after they have been updated based on the configuration information.

Optionally, a virtual camera is configured for each shadow model and collects that model's image frames; the image frames of each shadow model form a separate sequence of frames, and the sequences of all shadow models are packed into a sequence-frame data packet corresponding to the result scenario data, so that each result scenario data corresponds to one sequence-frame data packet. Optionally, after all shadow models required by the result scenario data have been called, a single virtual camera is called, and image frames of the multiple shadow models are acquired simultaneously by that camera to form one sequence of frames, so that each result scenario data corresponds to one sequence frame.

On the basis of the above embodiment, the result scenario data includes the actions of the respective objects. Correspondingly, obtaining the sequence frames formed while the shadow models execute the result scenario data includes: controlling the shadow model of each object to execute its corresponding actions in the result scenario data while controlling a virtual camera to collect image frames of the process, and forming the sequence frames from the collected image frames.

While the called shadow models execute their corresponding actions, the virtual camera is controlled to acquire image frames periodically. For example, image frames of the current shadow models are acquired at a preset time interval, such as 0.1 second or 0.5 second, which is not limited here. The collected image frames are then combined according to their capture timestamps to obtain the corresponding sequence frames.
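The periodic capture described above can be sketched as follows. This is an illustrative Python sketch only; `capture_fn` stands in for whatever engine call actually renders the shadow models through the virtual camera, and the interval values are the examples from the text.

```python
def capture_sequence_frames(total_duration: float, interval: float, capture_fn):
    """Sample image frames at a fixed interval, pairing each with its timestamp.

    Returns a list of (timestamp, frame) tuples ordered by capture time.
    """
    frames = []
    t = 0.0
    while t <= total_duration + 1e-9:  # tolerance for floating-point drift
        ts = round(t, 3)
        frames.append((ts, capture_fn(ts)))
        t += interval
    return frames

# Capture a one-second action at the 0.5 s interval mentioned in the text.
seq = capture_sequence_frames(1.0, 0.5, lambda t: f"frame@{t}")
```

The returned list is already ordered by timestamp, which is exactly the combination-by-timestamp step the text describes.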

It should be noted that, when the shadow models jointly call one virtual camera, the shooting angle of that virtual camera is controlled according to the display angle of the sequence frames, so as to obtain sequence frames satisfying the display angle, where the display angle may be determined based on the display requirements of the game player object. When each shadow model calls its own virtual camera, the shooting angles of the virtual cameras are the same and are likewise determined based on the display requirements of the game player object. The shooting angle of each virtual camera may be a back, front, or side angle of the game player object, which is not limited here.

On the basis of the above embodiment, the method further includes: for any object, recording the displacement of the object in the result scenario data together with the corresponding timestamp, and storing the displacement, the corresponding timestamp, and the formed sequence frames in association.

In this embodiment, the result scenario data further includes the displacement of each object and the corresponding timestamps. Correspondingly, each image frame in the obtained sequence is tagged with a timestamp, and the displacements are stored in association with the image frames through these timestamps, forming a displacement track that changes over time alongside the frames displayed over time. Because the timestamps and displacements are recorded, the shadow models do not need to be moved while the image frames are collected, which simplifies the acquisition of the sequence frames. Correspondingly, during display, the movement of each object can be controlled through the timestamps and displacements, improving the flexibility and authenticity of each object.
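The associated storage of displacements and frames can be sketched as a timestamp join. This is a hedged illustration; the record layout (`t`, `image`, `displacement`) is an assumption, not the patent's actual storage format (which the text says may be a binary file).

```python
def merge_displacement(frames, displacements):
    """Join captured frames with recorded displacements by timestamp.

    frames: list of (timestamp, image) tuples from the capture step.
    displacements: dict mapping timestamp -> (dx, dy); missing timestamps
    default to no movement.
    """
    return [
        {"t": t, "image": img, "displacement": displacements.get(t, (0.0, 0.0))}
        for t, img in frames
    ]

records = merge_displacement(
    [(0.0, "f0"), (0.5, "f1")],
    {0.0: (0.0, 0.0), 0.5: (1.2, 0.4)},
)
```

Serializing `records` (e.g. with `pickle` or a custom binary codec) would give the associated binary file the text mentions.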

The sequence frames (or sequence frame data packets) generated based on each of the result scenario data are stored in association with the tags corresponding to the result scenario data. Wherein the label of the result scenario data may be determined based on a task execution result corresponding to the result scenario data. Alternatively, the displacement data, the sequence frames, or the sequence frame data packets may be stored as a binary file.

On the basis of the above embodiment, a task execution result sent by the terminal is obtained and matched against the tags of the result scenario data to determine the tag of the corresponding result scenario data; the sequence frame or sequence-frame data packet corresponding to that tag is then sent to the terminal, so that the terminal renders each image frame in the sequence frame or sequence-frame data packet, thereby displaying the result scenario.
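The tag-matching lookup on the edge server can be sketched as a simple keyed store. The tag key `"outcome"` and the store layout are illustrative assumptions; the patent only specifies that tags are derived from task execution results.

```python
def fetch_sequence_frames(task_result: dict, frame_store: dict):
    """Match a terminal's task execution result against stored scenario tags.

    frame_store maps a result-scenario tag to its pre-generated sequence
    frames (or packed data packet). Returns None if no scenario matches.
    """
    tag = task_result.get("outcome")  # hypothetical field carrying the result
    return frame_store.get(tag)

store = {"victory": ["f0", "f1"], "defeat": ["g0"]}
frames = fetch_sequence_frames({"outcome": "victory"}, store)
```

Because the frames were pre-rendered during task execution, this lookup is the only work left at response time, which is the source of the claimed latency reduction.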

According to the technical scheme of this embodiment, the task state parameters are acquired during task execution, and at least one feasible result scenario data is obtained based on them. Sequence frames corresponding to each result scenario data are generated, and the tags of the result scenario data are stored in association with the corresponding sequence frames, which makes quick invocation after the task is executed convenient, shortens the response time of the result scenario, and improves its response efficiency. Meanwhile, displaying the result scenario as sequence frames reduces the occupation of memory resources and the occurrence of stuttering during operation.

Example two

Fig. 2 is a flowchart of a scenario display method according to a second embodiment of the present invention. The method is applicable to quickly displaying a result scenario after a task is executed. It may be executed by a scenario display apparatus, which may be implemented by software and/or hardware and configured on an electronic computing device such as a mobile phone or a tablet computer.

as shown in fig. 2, the method of the embodiment of the present invention specifically includes the following steps:

S210, acquiring task state parameters in a task execution process, generating a scenario processing request based on the task state parameters, and sending the scenario processing request to an edge server, so that the edge server responds to the scenario processing request and generates a plurality of sequence frames corresponding to result scenario data.

S220, acquiring a task execution result, and requesting a corresponding sequence frame from the edge server based on the task execution result.

And S230, rendering the sequence frame.

The sequence frame in this embodiment is obtained by processing, by the edge server, the data preprocessing method provided in any of the above embodiments.

During task execution, the scenario processing request is sent to the edge server so that the edge server performs the scenario preprocessing. This saves the time and resources the terminal would otherwise consume generating the scenario, improves the response efficiency of the result scenario, and ensures the normal operation of the terminal.

In this embodiment, the scenario processing request is generated at an appropriate trigger time: triggering too early leaves many possible result scenarios and therefore a large amount of computation, while triggering too late leads to a long waiting time. Optionally, obtaining the task state parameters during task execution includes: determining the task execution progress, and acquiring the current task state parameters when the progress satisfies the scenario trigger condition. The task execution progress may range from 0 to 100%, and is determined differently for different task types. For example, it may be determined from the task duration, that is, the ratio of the elapsed task time to the total task duration; or it may be determined from the consumption of the task target's vitality, that is, the total vitality of the task target is taken as 100%, and the proportion consumed is the task execution progress. Different task scenes correspond to different ways of determining the task execution progress, which is not limited here.
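The two progress formulas and the trigger check can be written down directly. A minimal sketch follows; the 70% threshold is the example value from the text below, and the function names are illustrative.

```python
def progress_by_duration(elapsed: float, total: float) -> float:
    """Progress as the ratio of elapsed task time to total task duration."""
    return min(elapsed / total, 1.0)

def progress_by_vitality(consumed_hp: float, total_hp: float) -> float:
    """Progress as the proportion of the task target's vitality consumed."""
    return min(consumed_hp / total_hp, 1.0)

def should_trigger(progress: float, threshold: float = 0.7) -> bool:
    """Scenario trigger condition: progress reaches a preset value."""
    return progress >= threshold

progress = progress_by_duration(35.0, 50.0)  # 35 s elapsed of a 50 s task
```

Different task types would simply plug a different progress function into the same `should_trigger` check.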

And monitoring the task execution progress, and acquiring the current task state parameter to generate a plot processing request when the task execution progress meets the plot triggering condition. The scenario trigger condition may be a preset task execution progress value, and may be 70% or 80%, for example. The scenario trigger conditions of different types of tasks may be different, and are not limited thereto.

In some embodiments, the task execution result corresponds to a single sequence frame; accordingly, rendering the sequence frame includes: creating a display billboard, and rendering the matched sequence frame in the display billboard.

In some embodiments, the sequence frames include a sequence frame corresponding to each object in the result scenario data. Correspondingly, rendering the sequence frames comprises: creating a display billboard for each object in the result scenario data, and rendering the sequence frame of the corresponding object in each display billboard.

The display billboard is a picture display technique in which the displayed picture always faces the virtual camera: when the virtual camera moves in the scene, the display billboard rotates with it, so that the vector from the billboard to the camera remains perpendicular to the billboard's surface. Illustratively, the shadow-model image corresponding to a character in the game (that is, an image frame of the shadow model) is a two-dimensional image; playing it on a display billboard produces the effect of a three-dimensional image.
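The camera-facing rotation can be illustrated with a yaw-only (Y-axis) billboard, the common case for character sprites. This is a sketch under that assumption; real engines also offer spherical billboarding, and coordinate conventions (here: x right, z forward) are an assumption of the example.

```python
import math

def billboard_yaw(board_pos, camera_pos) -> float:
    """Yaw angle (radians) rotating the billboard so its normal points
    at the camera, keeping the board upright (rotation about Y only).

    Positions are (x, y, z) tuples; the board's rest normal faces +z.
    """
    dx = camera_pos[0] - board_pos[0]
    dz = camera_pos[2] - board_pos[2]
    return math.atan2(dx, dz)

# Camera straight ahead of the board: no rotation needed.
yaw_front = billboard_yaw((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
# Camera directly to the board's right: quarter turn.
yaw_side = billboard_yaw((0.0, 0.0, 0.0), (5.0, 0.0, 0.0))
```

Applied every frame, this is exactly the "rotates with the camera" behavior the text describes, and it keeps the board-to-camera vector perpendicular to the board surface in the horizontal plane.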

Optionally, each display billboard may be configured with an identifier, the identifier may be an object identifier, and correspondingly, the sequence frame corresponding to each object also carries the object identifier, so as to implement association between the display billboard and the sequence frame, and improve accuracy of the sequence frame displayed by each display billboard.

Since the objects in the result scenario data are not stationary, with motion such as walking and running, there are displacement changes in the display interface. Correspondingly, rendering the sequence frames of the corresponding objects in the respective display billboards includes: obtaining the displacement of each object and the corresponding timestamps from the result scenario data; and controlling the corresponding display billboard to move according to the displacement at each timestamp while rendering the image frame of the sequence that corresponds to that timestamp in the display billboard.

For any object in the result scenario data, the displacement of the object and the image frames of the sequence are associated through timestamps. At any moment, the corresponding displacement and image frame are determined from the timestamp, the display billboard is moved according to the displacement, and the associated image frame is displayed in the billboard, achieving synchronization between displacement and sequence-frame display.
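The timestamp-synchronized playback described above can be sketched as follows. The record layout matches the hypothetical `{"t", "image", "displacement"}` shape used earlier in this discussion; `move_billboard` and `draw_frame` stand in for the engine's actual billboard-transform and texture-blit calls.

```python
def play_sequence(records, move_billboard, draw_frame):
    """Play one object's sequence on its billboard, in timestamp order.

    For every timestamped record, the billboard is first moved by the
    recorded displacement, then the associated image frame is drawn,
    keeping movement and frame display synchronized.
    """
    for rec in sorted(records, key=lambda r: r["t"]):
        move_billboard(rec["displacement"])
        draw_frame(rec["image"])

log = []
play_sequence(
    [
        {"t": 0.5, "image": "f1", "displacement": (1.0, 0.0)},
        {"t": 0.0, "image": "f0", "displacement": (0.0, 0.0)},
    ],
    move_billboard=lambda d: log.append(("move", d)),
    draw_frame=lambda img: log.append(("draw", img)),
)
```

A real player would additionally wait out the interval between timestamps (e.g. with the engine's frame clock) rather than drawing back to back.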

According to the technical scheme provided by this embodiment, the task state parameters are obtained during task execution, a scenario processing request is generated, and the request is sent to the edge server, so that the edge server responds to it and generates a plurality of sequence frames corresponding to result scenario data. When the task finishes, the corresponding sequence frames are requested from the edge server based on the task execution result, and the result scenario is displayed by rendering them. This shortens the response time of the result scenario and improves its response efficiency. Meanwhile, displaying the result scenario as sequence frames reduces the occupation of memory resources and the occurrence of stuttering during operation.

EXAMPLE III

Fig. 3 is a schematic structural diagram of a data preprocessing apparatus according to a third embodiment of the present invention, where the apparatus includes:

a result scenario data determining module 310, configured to obtain a task state parameter during a task execution process, and determine at least one result scenario data based on the task state parameter;

a sequence frame acquiring module 320, configured to, for any result scenario data, invoke a shadow model of each object in the result scenario data, and acquire a sequence frame formed in a process in which the shadow model of each object executes the result scenario data;

and the sequence frame storage module 330 is configured to store the tags of the result scenario data in association with corresponding sequence frames, where the stored sequence frames are used for matching and invoking based on the task execution result.

In the above embodiment, optionally, the result scenario data determining module 310 is configured to:

matching the task state parameters against the state parameters corresponding to the task completion levels, determining at least one successfully matched task completion level, and obtaining result scenario data corresponding to the at least one successfully matched task completion level; or,

processing the state parameters of each object based on the object type and the state parameters of each object in the task state parameters and the weight of each object type to determine a task execution index; and generating corresponding result plot data based on the task execution index.
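The second alternative, the weighted task execution index, can be sketched as a weighted sum. This is an illustrative assumption about the aggregation; the patent does not fix a formula, only that state parameters are processed using per-type weights.

```python
def task_execution_index(objects, type_weights) -> float:
    """Weighted aggregate of object state parameters.

    objects: list of {"type": str, "state": float} records from the
    task state parameters.
    type_weights: weight assigned to each object type; unknown types
    contribute nothing.
    """
    return sum(
        type_weights.get(obj["type"], 0.0) * obj["state"] for obj in objects
    )

index = task_execution_index(
    [{"type": "boss", "state": 0.8}, {"type": "minion", "state": 0.5}],
    {"boss": 2.0, "minion": 1.0},
)
```

The resulting index would then be bucketed (e.g. by thresholds) to select which result scenario data to generate.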

In the above embodiment, optionally, the result scenario data includes object types and the number of objects of each type;

the sequence frame storage module 330 is configured to: for any object type, call shadow models corresponding to the number of objects of that type.

In the above embodiment, optionally, the result scenario data includes actions of each object;

the sequence frame storage module 330 is configured to:

and controlling the shadow model of each object to execute corresponding actions in the result scenario data, simultaneously controlling a virtual camera to collect image frames of each result scenario data process, and forming sequence frames based on each image frame.

In the above embodiment, optionally, a virtual camera is respectively configured for any one of the shadow models, and is used to collect image frames corresponding to the shadow model, where the image frames of each shadow model respectively form a sequence of frames.

On the basis of the above embodiment, optionally, the apparatus further includes:

and the displacement recording module is used for recording the displacement and the corresponding timestamp of the object in the result scenario data for any object, and performing associated storage on the displacement, the corresponding timestamp and the formed sequence frame.

The data preprocessing device provided by the embodiment of the invention can execute the data preprocessing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.

Example four

Fig. 4 is a schematic structural diagram of a scenario display apparatus according to a fourth embodiment of the present invention, where the apparatus includes:

a scenario processing request sending module 410, configured to obtain task state parameters in a task execution process, generate a scenario processing request based on the task state parameters, and send the scenario processing request to an edge server, so that the edge server responds to the scenario processing request and generates a plurality of sequence frames corresponding to result scenario data;

a sequence frame request module 420, configured to obtain a task execution result, and request a corresponding sequence frame from the edge server based on the task execution result;

a sequence frame rendering module 430, configured to render the sequence frame.

In the above embodiment, optionally, the scenario processing request sending module 410 is configured to:

and determining the task execution progress, and acquiring the current task state parameter when the task execution progress meets the scenario trigger condition.

In the above embodiment, optionally, the sequence frames include sequence frames corresponding to each object in the result scenario data;

the sequence frame rendering module 430 is configured to:

and respectively creating display billboards of all objects in the result plot data, and respectively rendering the sequence frames of the corresponding objects in the display billboards.

In the above embodiment, optionally, the sequential frame rendering module 430 is configured to:

obtaining the displacement of the object and a corresponding timestamp in the result plot data;

and controlling the corresponding display billboard to move based on the displacement corresponding to the timestamp, and rendering the image frame corresponding to the timestamp in the sequence frame in the display billboard.

The plot display device provided by the embodiment of the invention can execute the plot display method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.

EXAMPLE five

Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention. The device 12 is typically an electronic device that undertakes image classification functions.

As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors 16, a memory device 28, and a bus 18 that connects the various system components (including the memory device 28 and the processors 16).

Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.

Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.

Storage 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk-Read Only Memory (CD-ROM), a Digital Video disk (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Storage 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.

A program 36 having a set (at least one) of program modules 26 may be stored, for example, in storage 28, such program modules 26 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. Program modules 26 generally perform the functions and/or methodologies of the described embodiments of the invention.

Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, camera, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), etc.) and/or a public network, such as the Internet, via network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, disk array (RAID) systems, tape drives, and data backup storage systems, to name a few.

The processor 16 executes various functional applications and data processing, such as a data preprocessing method or a scenario display method provided by the above-described embodiments of the present invention, by running a program stored in the storage device 28.

EXAMPLE six

A sixth embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a data preprocessing method or a scenario display method according to an embodiment of the present invention.

Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform the data preprocessing method or the scenario display method provided by any of the embodiments of the present invention.

Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
