Display control method of three-dimensional model, electronic device and readable storage medium

Document No.: 1964879    Publication date: 2021-12-14

Reading note: This application, "Display control method of three-dimensional model, electronic device and readable storage medium" (三维模型的显示控制方法、电子设备及可读存储介质), was created by 李太和, 龚金思, 黄有志 and 魏仁杰 on 2021-08-13. Its main content is as follows: The invention discloses a display control method of a three-dimensional model, comprising the steps of: identifying a target region in the three-dimensional model; acquiring attribute information of the target region; determining target view angles of the three-dimensional model according to the attribute information; acquiring the initial preset weight corresponding to each target view angle; and displaying the view angle images corresponding to the target view angles according to the initial preset weights. The invention also discloses an electronic device and a readable storage medium. The application determines the target view angles from the attribute information of the target region of the three-dimensional model, and displays the corresponding view angle images to the user in order of the initial preset weights of those view angles. Through the view angle images the user can quickly observe and evaluate the three-dimensional model, without having to manually and repeatedly adjust it to obtain view angle images that meet expectations.

1. A method for controlling display of a three-dimensional model, the method comprising:

identifying a target area in a three-dimensional model, wherein the three-dimensional model is determined by three-dimensional modeling of an image through a preset neural network algorithm;

acquiring attribute information of the target area;

determining a target view angle of the three-dimensional model according to the attribute information;

acquiring initial preset weights corresponding to the target view angles;

and displaying the view angle image corresponding to the target view angle according to the initial preset weight.

2. The display control method of a three-dimensional model according to claim 1, wherein the attribute information includes at least one of major-minor diameter information of the target region, volume information of the target region, projection area information of the target region, position information of the target region, association information of the target region with adjacent regions, property information of the target region, and knowledge graph relation information corresponding to the target region.

3. The method for controlling display of a three-dimensional model according to claim 1, wherein the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight comprises:

comparing the initial preset weights corresponding to the target view angles;

and displaying the view angle images corresponding to the target view angles in a preset display mode, in descending order of the initial preset weights, and outputting prompt information, wherein the preset display mode comprises displaying the view angle images and the three-dimensional model in an overlapping manner.

4. The method of controlling display of a three-dimensional model according to claim 3, wherein the step of outputting the prompt information includes:

determining prompt information according to the view angle image, wherein the prompt information comprises operation information for the view angle image;

and outputting the prompt information in a preset mode, wherein the preset mode comprises voice and marking.

5. The method for controlling display of a three-dimensional model according to claim 3, wherein the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further comprises:

determining a view angle parameter according to the three-dimensional model and a target view angle, wherein the view angle parameter comprises at least one of aperture, focal length and depth of field;

and displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.

6. The method for controlling display of a three-dimensional model according to claim 3, wherein the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further comprises:

recording and storing the current browsing duration corresponding to each view angle image and acquiring the historical browsing duration corresponding to each view angle image;

adjusting the weight corresponding to the target view angle according to the current browsing duration and/or the historical browsing duration;

and determining the next preset weight corresponding to the target view angle according to the adjusted weight, so that when the view angle images corresponding to the target view angle are displayed for the next user, they are displayed in descending order of the next preset weight.

7. The method for controlling display of a three-dimensional model according to claim 5, wherein the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further comprises:

recording and storing the current adjustment operation corresponding to each view angle image and acquiring the historical adjustment operation corresponding to each view angle image;

adjusting the view angle parameters of each view angle image according to the current adjustment operation and/or the historical adjustment operation;

and determining a next view angle parameter according to the adjusted view angle parameter, so that when a view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.

8. The method of controlling display of a three-dimensional model according to claim 1, wherein the step of identifying a target region in the three-dimensional model comprises:

and identifying a target area in the three-dimensional model according to a preset neural network machine learning algorithm.

9. An electronic device, characterized in that the electronic device comprises a memory, a processor, and a display control program of a three-dimensional model stored on the memory and executable on the processor, the display control program of the three-dimensional model realizing the steps of the display control method of the three-dimensional model according to any one of claims 1 to 8 when executed by the processor.

10. A readable storage medium, characterized in that the readable storage medium has stored thereon a display control program of a three-dimensional model, which when executed by a processor implements the steps of the display control method of a three-dimensional model according to any one of claims 1 to 8.

Technical Field

The present invention relates to the field of image processing, and in particular, to a display control method for a three-dimensional model, an electronic device, and a readable storage medium.

Background

In existing medical technology, the examined part of a patient can be reconstructed in a three-dimensional visualization through CT, MR, and ultrasound (US), so that medical staff can intuitively evaluate the patient's condition. In actual operation, however, the medical staff must repeatedly adjust the examination viewing angle, which makes the examination process cumbersome, and the connection between a lesion in the examined part and an important organ or blood vessel may be missed, leading to misdiagnosis or missed diagnosis of the patient's disease.

Disclosure of Invention

The invention mainly aims to provide a display control method of a three-dimensional model, an electronic device and a readable storage medium, so as to solve the problems of overlong observation time and cumbersome steps caused by repeatedly adjusting the viewing angle when observing a three-dimensional model.

In order to achieve the above object, the present invention provides a display control method for a three-dimensional model, including:

identifying a target area in a three-dimensional model, wherein the three-dimensional model is determined by three-dimensional modeling of an image through a preset neural network algorithm;

acquiring attribute information of the target area;

determining a target view angle of the three-dimensional model according to the attribute information;

acquiring initial preset weights corresponding to the target view angles;

and displaying the view angle image corresponding to the target view angle according to the initial preset weight.

Optionally, the attribute information includes at least one of major-minor diameter information of the target region, volume information of the target region, projection area information of the target region, position information of the target region, association information of the target region with adjacent regions, property information of the target region, and knowledge graph relation information corresponding to the target region.

Optionally, the step of displaying the view image corresponding to the target view according to the initial preset weight includes:

comparing the initial preset weights corresponding to the target view angles;

and displaying the view angle images corresponding to the target view angles in a preset display mode, in descending order of the initial preset weights, and outputting prompt information, wherein the preset display mode comprises displaying the view angle images and the three-dimensional model in an overlapping manner.

Optionally, the step of outputting the prompt message includes:

determining prompt information according to the view angle image, wherein the prompt information comprises operation information aiming at the view angle image;

and outputting the prompt message in a preset mode, wherein the preset mode includes, but is not limited to, voice and marking.

Optionally, the step of displaying the view image corresponding to the target view according to the initial preset weight further includes:

determining a view angle parameter according to the three-dimensional model and a target view angle, wherein the view angle parameter comprises at least one of aperture, focal length and depth of field;

and displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.

Optionally, the step of displaying the view image corresponding to the target view according to the initial preset weight further includes:

recording and storing the current browsing duration corresponding to each view angle image and acquiring the historical browsing duration corresponding to each view angle image;

adjusting the weight corresponding to the target view angle according to the current browsing duration and/or the historical browsing duration;

and determining the next preset weight corresponding to the target view angle according to the adjusted weight, so that when the view angle images corresponding to the target view angle are displayed for the next user, they are displayed in descending order of the next preset weight.

Optionally, the step of displaying the view image corresponding to the target view according to the initial preset weight further includes:

recording and storing the current adjustment operation corresponding to each view angle image and acquiring the historical adjustment operation corresponding to each view angle image;

adjusting the view angle parameters of each view angle image according to the current adjustment operation and/or the historical adjustment operation;

and determining a next view angle parameter according to the adjusted view angle parameter, so that when a view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.

Optionally, the step of identifying a target region in the three-dimensional model comprises:

and identifying a target area in the three-dimensional model according to a preset neural network algorithm.

In addition, to achieve the above object, the present invention further provides an electronic device including a memory, a processor, and a display control program of a three-dimensional model stored on the memory and executable on the processor, wherein the display control program of the three-dimensional model, when executed by the processor, implements the steps of the display control method of the three-dimensional model as described above.

Further, to achieve the above object, the present invention provides a readable storage medium having stored thereon a display control program of a three-dimensional model, which when executed by a processor, realizes the steps of the display control method of a three-dimensional model as described above.

According to the display control method, the electronic device and the readable storage medium for a three-dimensional model, the target region of the three-dimensional model and its attribute information are identified, at least one target view angle for observing the three-dimensional model is determined according to the attribute information, and the view angle image corresponding to each target view angle is displayed to the user according to the initial preset weight of that target view angle. The user can thus observe the three-dimensional model quickly, accurately and with high quality, and evaluate the scanned object it represents, reducing or eliminating the need to manually adjust the three-dimensional model to obtain view angle images that meet expectations.

Drawings

FIG. 1 is a schematic diagram of an electronic device in a hardware operating environment according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart illustrating a display control method for a three-dimensional model according to a first embodiment of the present invention;

FIG. 3 is a flowchart illustrating a step S50 of a display control method for three-dimensional models according to a second embodiment of the present invention;

FIG. 4 is a flowchart illustrating a step S50 of a display control method for three-dimensional models according to a third embodiment of the present invention;

FIG. 5 is a flowchart illustrating a display control method for three-dimensional models according to a fourth embodiment of the present invention;

FIG. 6 is a flowchart illustrating a display control method for a three-dimensional model according to a fifth embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

The main solution of the embodiment of the invention is as follows: identifying a target region in the three-dimensional model; acquiring attribute information of the target region; determining a target view angle of the three-dimensional model according to the attribute information; and adjusting the display angle of the three-dimensional model according to the target view angle.

As shown in fig. 1, fig. 1 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present invention.

An embodiment of the present invention provides an electronic device. The electronic device may be a terminal, and as shown in fig. 1, the terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a touch layer covering the display screen, keys, a trackball or a touch pad arranged on the casing of the device, or an external keyboard, touch pad or mouse; optionally, the user interface 1003 may also include a standard wired interface or a standard wireless interface. The network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.

Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors may include light sensors, motion sensors, and others. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display screen according to the ambient light, and a proximity sensor, which turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when the terminal is stationary; it can be used in applications that recognize the terminal's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.

Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a display control program of a three-dimensional model.

In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call up a display control program of the three-dimensional model stored in the memory 1005, and perform the following operations:

identifying a target area in a three-dimensional model, wherein the three-dimensional model is determined by three-dimensional modeling of an image through a preset neural network algorithm;

acquiring attribute information of the target area;

determining a target view angle of the three-dimensional model according to the attribute information;

acquiring an initial preset weight corresponding to the target visual angle;

and displaying the view angle image corresponding to the target view angle according to the initial preset weight.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

the attribute information comprises at least one of major-minor diameter information of the target region, volume information of the target region, projection area information of the target region, position information of the target region, association information of the target region with adjacent regions, property information of the target region, and knowledge graph relation information corresponding to the target region.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

comparing the initial preset weight corresponding to each target visual angle;

and displaying the view angle images corresponding to the target view angles in a preset display mode based on the sequence of the initial preset weights from large to small, and outputting prompt information, wherein the preset display mode comprises the step of displaying the view angle images and the three-dimensional model in an overlapping mode.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

determining prompt information according to the view angle image, wherein the prompt information comprises operation information aiming at the view angle image;

and outputting the prompt message in a preset mode, wherein the preset mode includes, but is not limited to, voice and marking.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

determining a view angle parameter according to the three-dimensional model and a target view angle, wherein the view angle parameter comprises at least one of aperture, focal length and depth of field;

and displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

recording and storing the current browsing duration corresponding to each visual angle image and acquiring the historical browsing duration corresponding to each visual angle image;

adjusting the weight corresponding to the target view angle according to the current browsing duration and/or the historical browsing duration;

and determining the next preset weight corresponding to the target view angle according to the adjusted weight, so that when the view angle images corresponding to the target view angle are displayed for the next user, they are displayed in descending order of the next preset weight.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

recording and storing the current adjustment operation corresponding to each visual angle image and acquiring the historical adjustment operation corresponding to each visual angle image;

adjusting the view angle parameters of each view angle image according to the current adjustment operation and/or the historical adjustment operation;

and determining a next view angle parameter according to the adjusted view angle parameter, so that when a view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.

Further, the processor 1001 may call up the display control program of the three-dimensional model stored in the memory 1005, and also perform the following operations:

and identifying a target area in the three-dimensional model according to a preset neural network algorithm.

In order to acquire a three-dimensional model of a scanned object, the object must first be scanned with an imaging device. The imaging device scans the object to obtain scanning data and generates an image sequence from it, where the image sequence consists of images of each cross section of the scanned object along the scanning direction. A three-dimensional model of the internal structure of the scanned object is then generated from the image sequence. The imaging device may be: an X-ray imaging device (Xray), CT (plain CT, spiral CT), Positron Emission Tomography (PET), magnetic resonance imaging (MR), an infrared scanning device, an endoscope, ultrasound (US), a combination of multiple scanning devices, and the like. Once the three-dimensional model is acquired, in order to evaluate the scanned object the user needs to repeatedly adjust the display angle of the three-dimensional model, which makes the inspection steps cumbersome and makes it easy to overlook important regions of the scanned object, resulting in evaluation errors.
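The slice-stacking step described above can be sketched as follows. This is a minimal illustration using NumPy; the function name and the toy `slices` data are hypothetical stand-ins for a real scanner's output, not part of the disclosed method:

```python
import numpy as np

def build_volume(slices):
    """Stack a sequence of 2D cross-section images (all the same shape,
    ordered along the scanning direction) into a 3D volume array."""
    if not slices:
        raise ValueError("empty image sequence")
    first = slices[0].shape
    if any(s.shape != first for s in slices):
        raise ValueError("all slices must share the same shape")
    return np.stack(slices, axis=0)  # shape: (num_slices, H, W)

# Hypothetical 4-slice sequence of 8x8 grayscale cross sections.
slices = [np.full((8, 8), i, dtype=np.float32) for i in range(4)]
volume = build_volume(slices)
print(volume.shape)  # (4, 8, 8)
```

A real pipeline would additionally account for slice spacing and in-plane pixel size before any geometric measurement on the volume.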

It is understood that the scanned object may be any kind of physical object, including but not limited to a patient and an industrial device. The embodiments of the present invention take a patient as an example: when the three-dimensional model is that of a patient, an image of the patient is first acquired by the imaging device, and the three-dimensional model is then obtained from that image.

Referring to fig. 2, a first embodiment of the present invention provides a display control method for a three-dimensional model, including:

step S10, identifying a target area in a three-dimensional model, wherein the three-dimensional model is determined by three-dimensional modeling of an image through a preset neural network algorithm;

step S20, acquiring the attribute information of the target area;

step S30, determining the target view angle of the three-dimensional model according to the attribute information;

step S40, acquiring initial preset weights corresponding to the target visual angles;

and step S50, displaying the view angle image corresponding to the target view angle according to the initial preset weight.

In this embodiment, before the target region of the three-dimensional model is identified, the image may be formed by scanning an object with an imaging device such as an X-ray imaging device (Xray), CT (plain CT, spiral CT), Positron Emission Tomography (PET), magnetic resonance imaging (MR), an infrared scanning device, an endoscope, ultrasound (US), or a combination of multiple scanning devices; the three-dimensional model of the scanned object is then obtained from the image through a preset neural network algorithm. The scanned object may be an organ, a tissue or a cell cluster of a patient that needs close observation, or a component of an industrial device that needs close observation. It can be understood that the imaging device may obtain three-dimensional models of different objects by using preset scanning parameters.

Optionally, the three-dimensional model is obtained based on a volume rendering approach.

Optionally, the three-dimensional model includes a target region and a non-target region of the scanned object, and the target region is an object that is mainly observed by a user.

Optionally, the target region of the three-dimensional model may be identified by the user: the user determines the target region by observing the three-dimensional model, a user input is received, the target region in the three-dimensional model is determined according to that input, and the target region is then displayed in a preset manner. The preset manner may be tracing the contour of the target region, highlighting it, or framing and displaying it. For example, when a doctor observes a three-dimensional model of a patient, the target region may be a lesion region; after the doctor identifies a lesion region in the three-dimensional model and inputs it, the lesion region is displayed in the preset manner according to the doctor's input.

Optionally, identifying the target region of the three-dimensional model may also be based on a neural network algorithm, and the step S10 includes:

and identifying a target area in the three-dimensional model according to a preset neural network algorithm.

Specifically, the three-dimensional model is input into a preset neural network obtained by training on an image training set, and the target region is obtained through the extraction and matching of features or variables. The preset neural network may be trained on a large training set of images; the training images may be two-dimensional or three-dimensional images acquired by any imaging device, and the network is trained based on machine learning by learning the features or variables of the images.
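The inference side of this step can be sketched as follows. The `toy_model` below is only an intensity-based stand-in for a trained segmentation network (the source does not specify a network architecture); what the sketch shows is the shape of the pipeline, a model mapping a volume to per-voxel probabilities, followed by thresholding into a target-region mask:

```python
import numpy as np

def identify_target_region(volume, model, threshold=0.5):
    """Run a segmentation model over the volume and return a boolean
    mask of the target region. `model` maps a volume to per-voxel
    probabilities in [0, 1]."""
    probs = model(volume)
    return probs >= threshold

# Stand-in "model": normalizes intensities so that the brightest
# voxels score near 1.0. A real system would use a trained network.
def toy_model(volume):
    v = volume.astype(np.float32)
    return (v - v.min()) / (np.ptp(v) + 1e-8)

volume = np.zeros((4, 4, 4), dtype=np.float32)
volume[1:3, 1:3, 1:3] = 100.0           # bright "lesion" cube
mask = identify_target_region(volume, toy_model)
print(int(mask.sum()))  # 8 voxels in the 2x2x2 bright region
```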

Optionally, the attribute information includes at least one of major-minor diameter information of the target region, volume information of the target region, projection area information of the target region, position information of the target region, association information of the target region with adjacent regions, property information of the target region, and knowledge graph relation information corresponding to the target region.

Optionally, a target view angle of the three-dimensional model is determined according to the attribute information. The target view angle is a preferred viewing angle from which a user can quickly examine the scanned object; for example, when a doctor views the three-dimensional model of a patient, the target view angle may be the line of sight corresponding to the maximum projected area of the lesion region, or the line of sight toward vascular tissue associated with the lesion region. It can be understood that target regions with different attributes have different corresponding target view angles.

Optionally, the target view angle may be obtained by determining the maximum projected area based on the projection area information of the target region, determining the normal direction of that maximum projection, and taking the normal direction as a preset view angle of the target region. For example, when a doctor consults a three-dimensional model of a patient, in order to observe the lesion region more clearly, the view showing the maximum projected area of the lesion region is presented to the user, so that the doctor sees the lesion region quickly.
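A minimal sketch of the maximum-projection idea, restricted to the three axis-aligned viewing directions to keep it short (a real implementation would sample many candidate directions; the function name and toy mask are illustrative assumptions):

```python
import numpy as np

def max_projection_view(mask):
    """Among the three axis-aligned viewing directions, return the axis
    whose projection (silhouette) of the target region has the largest
    area, together with that area in voxel units."""
    # Projecting along `axis` collapses it; any() marks occupied columns.
    areas = {axis: int(mask.any(axis=axis).sum()) for axis in range(3)}
    best = max(areas, key=areas.get)
    return best, areas[best]

# Flat 1x4x4 slab: its largest silhouette is seen looking along axis 0.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[0, :, :] = True
axis, area = max_projection_view(mask)
print(axis, area)  # 0 16
```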

Optionally, the three-dimensional model further includes a region adjacent to the target region, and the target view angle may also be obtained by determining the shortest distance between the target region and the adjacent region based on the position information of the target region, and then taking the direction corresponding to that shortest distance as the target view angle of the target region. For example, when a doctor consults a three-dimensional model of a patient, the doctor needs to judge whether an important organ or blood vessel outside the target region is close to the lesion region, so as to avoid affecting that organ or vessel during subsequent treatment of the lesion region.
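The shortest-distance computation between the target region and an adjacent region can be sketched as a brute-force search over voxel coordinates (adequate for an illustration; the function name and the single-voxel "lesion"/"vessel" masks are hypothetical):

```python
import numpy as np

def closest_approach(mask_a, mask_b):
    """Return the shortest voxel-to-voxel distance between two regions
    and the unit vector pointing from region A toward region B at the
    closest point."""
    pts_a = np.argwhere(mask_a).astype(float)
    pts_b = np.argwhere(mask_b).astype(float)
    diffs = pts_b[None, :, :] - pts_a[:, None, :]   # (na, nb, 3)
    d2 = (diffs ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmin(d2), d2.shape)
    dist = float(np.sqrt(d2[i, j]))
    direction = diffs[i, j] / (dist or 1.0)          # avoid 0-division
    return dist, direction

a = np.zeros((8, 8, 8), dtype=bool); a[1, 1, 1] = True   # "lesion"
b = np.zeros((8, 8, 8), dtype=bool); b[1, 1, 5] = True   # "vessel"
dist, direction = closest_approach(a, b)
print(dist, direction)  # 4.0 [0. 0. 1.]
```

The returned direction could then serve as the viewing direction that shows the lesion and the nearby structure at their point of closest approach.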

It can be understood that the manner of acquiring the target view angle is not limited to the above two methods. In actual operation, the terminal stores the mapping relationship between the attribute information of the target area and the target view angles; after the user inputs the three-dimensional model, the terminal automatically matches a plurality of different target view angles for the user according to the attribute information of the target area of the three-dimensional model and the mapping relationship, and displays the view angle images corresponding to the target view angles to the user.
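The stored mapping relationship can be pictured as a simple lookup table. All attribute names, view identifiers and weights below are invented placeholders for illustration, not values from the embodiment.

```python
# Hypothetical mapping from target-region attribute types to target view
# angles and their initial preset weights.
VIEW_MAPPING = {
    "lesion": [
        ("max_projection_area", 100),
        ("adjacent_region_relation", 50),
    ],
    "vessel": [
        ("centerline_follow", 80),
    ],
}

def match_target_views(attribute_type):
    """Return the target view angles for a region attribute, ordered by
    initial preset weight (highest first)."""
    views = VIEW_MAPPING.get(attribute_type, [])
    return sorted(views, key=lambda v: v[1], reverse=True)
```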

Optionally, after the target view angles of the three-dimensional model are obtained, the initial preset weights of the target view angles are determined. Optionally, the initial preset weights corresponding to the target viewing angles may be different or the same.

Optionally, the mapping relationship between the attribute information of the target region and the target view angles further includes an initial preset weight for each target view angle. It can be understood that the initial preset weights corresponding to the target view angles may differ in size. The initial preset weight represents how useful a target view angle is: the higher the weight, the more useful and important the view angle; conversely, the lower the weight, the less useful and important it is. In actual operation, while a target view angle corresponding to a target area in the three-dimensional model is generated, a corresponding initial preset weight is matched for it according to the mapping relationship. For example, when a doctor examines a three-dimensional model of a patient in order to evaluate a lesion and determine a treatment scheme, the normal direction of the maximum projection area of the lesion region is a preferred observation angle, so the initial preset weight of the corresponding target view angle is set higher; when the lesion region is not close to a blood vessel or an important organ, a target view angle determined from the association information between the lesion region and an adjacent region is less important to the user, and its initial preset weight is set lower.

In specific operation, the attribute information of the target area in the three-dimensional model is acquired, preferred target view angles are automatically matched for the three-dimensional model, the initial preset weight of each target view angle is automatically matched according to the attribute information of the current target area, and the target view angles are then displayed to the user in order of the initial preset weights.

Optionally, after the target view angles of the three-dimensional model are obtained, the computer renders a view angle image corresponding to each target view angle, and then displays the view angle images to the user in order of the initial preset weights. For example, when a doctor observes the three-dimensional model of a patient: if target view angle A is the sight line corresponding to the maximum projection area of the lesion region, the view angle image of that maximum projection area is generated; if target view angle B is the sight line corresponding to the association information between the lesion region and an adjacent region, the view angle image corresponding to that association information is generated. The computer then determines that the initial preset weight of target view angle A is 100 and that of target view angle B is 50, and places the view angle image of A before that of B, so that the doctor can quickly evaluate the lesion region from the view angle images.
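The ordering step above reduces to a sort by initial preset weight; the image names and the weights 100 and 50 follow the example in the preceding paragraph and are otherwise arbitrary.

```python
def order_view_images(view_images):
    """view_images: list of (image_name, initial_preset_weight) pairs.
    Returns the names ordered so higher-weight images are shown first."""
    return [name for name, w in sorted(view_images, key=lambda x: -x[1])]
```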

Alternatively, the perspective image may be acquired based on a surface rendering manner.

In the embodiment of the application, the target view angles are obtained from the attribute information of the target area in the three-dimensional model, an initial preset weight is assigned to each target view angle, and the view angle images corresponding to the target view angles are then displayed to the user in order of those weights, so that the user can conveniently examine the three-dimensional model. This solves the problem that a user must manually adjust the display angle of the three-dimensional model many times when observing it, and improves the efficiency with which the user evaluates the scanned object corresponding to the three-dimensional model.

Optionally, the three-dimensional model includes a plurality of different view angles. To enable the user to view the three-dimensional model better and avoid overlooking important parts, which could lead to misjudgment of the scanned object, referring to fig. 3, step S50 in the second embodiment of the present application includes:

step S51, comparing the initial preset weight corresponding to each target visual angle;

and step S52, displaying the view angle images corresponding to the target view angles in a preset display mode based on the sequence of the initial preset weight from large to small, and outputting prompt information, wherein the preset display mode comprises the step of displaying the view angle images and the three-dimensional model in an overlapping mode.

In this embodiment, after the initial preset weight of each target view is obtained, view images corresponding to each target view may be displayed on the display interface of the terminal in descending order based on the size of the initial preset weight, and a user may determine which target views are useful views and which may be unimportant views based on the distribution sequence of the target views in the display interface.

Optionally, the preset display mode includes displaying the perspective image and the three-dimensional model in an overlapping manner. Specifically, the perspective image is superimposed on the three-dimensional model with the three-dimensional model as a reference.

Optionally, in a further embodiment, after the three-dimensional model is obtained, the three-dimensional model is separately presented in a display interface for a user to observe the three-dimensional model.

Optionally, in yet another embodiment, after a target region of the three-dimensional model is obtained, the cross-sectional view and the three-dimensional model are displayed in the display interface in an overlapping manner according to the cross-sectional view corresponding to the target region, so that a user can quickly diagnose the target region according to the cross-sectional view.

It is understood that the preset display modes include, but are not limited to, the three modes.

Optionally, the method further includes displaying the perspective image to a user and outputting prompt information to the user, so that the user can know the perspective image based on the prompt information while browsing the perspective image.

Optionally, the step S52 includes:

determining prompt information according to the view angle image, wherein the prompt information comprises operation information aiming at the view angle image;

and outputting the prompt message in a preset mode, wherein the preset mode comprises but is not limited to voice and marks.

When a doctor actually browses the view angle images, a surgical treatment route for the three-dimensional model needs to be planned according to the view angle images. In planning the route, the doctor must identify the important organs and/or blood vessels to be avoided during the actual operation, and must also note which positions are inconvenient to operate on. This takes the doctor a long time to observe and think through. Based on this, the embodiment of the invention provides a method for outputting prompt information, so that the doctor can determine a surgical treatment route more quickly according to the prompt information.

Optionally, the prompt information includes but is not limited to the important organs and/or blood vessels to be avoided and the operation positions. For example, when a doctor observes a three-dimensional model of a patient in which the target region is a lung, blood vessels are distributed around the lung and must be avoided during subsequent surgical treatment to prevent the patient from suffering heavy bleeding. In addition, the doctor needs to determine which positions are inconvenient to operate on when treating the target region. Prompt information is automatically output to the user based on these blood vessels and positions so that the user can adjust the surgical treatment route accordingly.

Optionally, after determining the prompt information, the terminal may mark the prompt information, where a specific marking manner may be a circle or a dot, and is not specifically limited herein. In addition, the terminal can also play the prompt message in voice, and the user can acquire the prompt message after receiving the voice. Or, the terminal can also mark the prompt message and play the voice at the same time.

It is understood that the preset manner includes, but is not limited to, the above three manners.

In this embodiment, the target view angles are displayed in descending order of the initial preset weights: target view angles with low initial preset weights are placed later in the display interface and those with high initial preset weights earlier, so that the user browses the view angle images from the front of the display interface. At the same time, prompt information is output to the user so that the user can better evaluate the three-dimensional model according to it.

Optionally, referring to fig. 4, based on the second embodiment, step S50 in the third embodiment of the present application further includes:

step S53, determining a viewing angle parameter according to the three-dimensional model and the target viewing angle, wherein the viewing angle parameter includes at least one of aperture, focal length and depth of field.

Step S54, displaying the perspective image corresponding to the target perspective according to the perspective parameter and the initial preset weight.

In this embodiment of the application, the aperture is used to control the amount of light entering. The terminal acquires the image information of the target area and determines the image brightness of the display position corresponding to the target area. When the image brightness does not meet a preset brightness threshold, the image details of that display position are difficult to see, so the aperture is increased or decreased until the brightness meets the threshold, allowing the user to clearly observe the image details of the target area in the three-dimensional model under the adjusted view angle parameter.
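The brightness-driven aperture adjustment can be sketched as a single-step decision, assuming a simple model in which a wider aperture admits more light; the threshold interval, step size and function name are illustrative assumptions.

```python
def adjust_aperture(brightness, threshold, aperture, step=0.5):
    """If the target region's image brightness falls outside the preset
    threshold interval (low, high), widen or narrow the aperture by one
    step accordingly; otherwise leave it unchanged."""
    low, high = threshold
    if brightness < low:
        return aperture + step   # too dark: admit more light
    if brightness > high:
        return aperture - step   # too bright: admit less light
    return aperture
```

In practice the terminal would re-measure brightness after each step and repeat until the threshold is met, as the paragraph above describes.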

The focal length is used to control the apparent distance between the three-dimensional model and the user. After the terminal acquires the image information of the target area, it judges whether the distance between the target area and the user meets a preset distance threshold; when it does not, the focal length is increased or decreased so that the user can clearly observe the image details of the target area in the three-dimensional model under the adjusted focal length.

The depth of field is used to control the range before and after the focus of the three-dimensional model within which the image remains sharp; after the terminal acquires the image information of the target area, the depth-of-field parameter is adjusted automatically according to the image information.

It is understood that the viewing angle parameters include, but are not limited to, aperture, focal length, and depth of field.

In the actual operation process, after a user uploads the three-dimensional model to the terminal, the terminal acquires the target area of the three-dimensional model and its attribute information, automatically matches target view angles for the user according to the attribute information, and automatically obtains the view angle parameters according to the three-dimensional model and the target view angles. The view angle images corresponding to the target view angles are then displayed according to the view angle parameters and the initial preset weights, so that the user can observe the three-dimensional model at a suitable distance, with suitable brightness and from a suitable view angle.

Optionally, based on the first embodiment, referring to fig. 5, after the step of displaying the view image corresponding to the target view according to the initial preset weight, the method further includes:

step S60, recording and storing the current browsing duration corresponding to each view angle image and acquiring the historical browsing duration corresponding to each view angle image;

step S70, adjusting the weight corresponding to the target view angle according to the current browsing time length and/or the historical browsing time length;

step S80, determining a next preset weight corresponding to the target view according to the adjusted weight, so that when a view image corresponding to the target view is displayed next time, the view images corresponding to the target view are displayed in descending order according to the next preset weight.

In the embodiment of the application, the terminal is provided with a timing device. The user browses the view angle images according to his or her own needs, and the browsing duration of each view angle image browsed by the user is recorded and saved; the browsing durations of the view angle images may be different or the same.

In a general situation, when a user actually browses the view angle images, the user views the images that match his or her own needs. If the browsing duration of a view angle image is long, its importance to the user is high; if the browsing duration is short, its importance is low.

Optionally, after the current browsing duration of the view angle image corresponding to each target view angle is acquired, the weight of the target view angle is adjusted according to the current browsing duration: the weight of the target view angle whose view angle image has the longest current browsing duration is adjusted to be the highest, and the weight of the target view angle whose view angle image has the shortest current browsing duration is adjusted to be the lowest. For example, the current browsing duration of view angle image A is 10 min, that of view angle image B is 9 min, and that of view angle image C is 3 min; after the current browsing durations are determined, the weight of the target view angle corresponding to A is adjusted to 100, that corresponding to B to 90, and that corresponding to C to 80.
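The duration-based reweighting in the example above (10 min → 100, 9 min → 90, 3 min → 80) amounts to a rank-based assignment. The top weight and step size below are assumed from that example; the function name is illustrative.

```python
def reweight_by_duration(durations, top_weight=100, step=10):
    """durations: {view_id: current browsing duration in seconds}.
    The longest-viewed image gets the highest weight, and each
    subsequent rank gets `step` less."""
    ranked = sorted(durations, key=durations.get, reverse=True)
    return {vid: top_weight - step * i for i, vid in enumerate(ranked)}
```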

Optionally, after the current browsing duration of the view angle image corresponding to each target view angle is obtained, the historical browsing duration of each view angle image is obtained at the same time, where the historical browsing duration is the browsing time accumulated before the current session. Based on this, the weight of a target view angle may also be adjusted by superimposing the current browsing duration, recorded and stored at the terminal, onto the historical browsing duration, and adjusting the weight of the view angle image based on the superimposed total. For example, the historical browsing duration of view angle image A is 1200 s and that of view angle image B is 2000 s; the current browsing duration of A is 60 s and that of B is 100 s, so the superimposed browsing duration of A is 1260 s and that of B is 2100 s. Since the superimposed browsing duration of B is greater than that of A, the importance of B is judged to be higher, the weight of the target view angle corresponding to B is increased, and the weight of the target view angle corresponding to A is decreased, so that the next preset weight of B is greater than that of A.
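The superimposition of current and historical browsing durations can be sketched as follows, using the durations from the example above; the base weight and step size are illustrative assumptions, since the embodiment only requires that higher totals yield higher next preset weights.

```python
def superimposed_weights(history, current, base=50, step=10):
    """Add the current browsing time of each view to its history and
    rank views by the superimposed total; views with larger totals
    receive larger next preset weights."""
    total = {v: history.get(v, 0) + current.get(v, 0) for v in history}
    ranked = sorted(total, key=total.get)  # ascending by total duration
    weights = {v: base + step * i for i, v in enumerate(ranked)}
    return total, weights
```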

It can be understood that, in the embodiment of the present application, the weights corresponding to the target view angles are readjusted based on the browsing duration, and the weights may be the same as or different from the initial preset weights.

Optionally, a next preset weight of each target view angle is determined according to the adjusted weight; the next preset weight replaces the initial preset weight the next time the user observes the three-dimensional model, so that the terminal displays the view angle images corresponding to the target view angles in descending order of the next preset weights.

Optionally, in another implementation, when the user browses the view angle images, while the current browsing duration of each view angle image is recorded and stored, the voice information input by the user while browsing is acquired through a microphone, the voice information is converted into corresponding text information, and the corresponding view angle image is marked according to the text information. It can be understood that a marked view angle image is more important than an unmarked one, and the more text information, the higher the importance of the view angle image. Based on this, while the current browsing duration of each view angle image is recorded and stored, the view angle image corresponding to the voice information is marked, and the weight of the target view angle corresponding to that view angle image is then adjusted according to the mark and the corresponding current browsing duration.

Optionally, in another embodiment, when the user browses the view angle images, the current browsing duration of each view angle image is recorded and stored, the view angle image corresponding to the voice information input by the user is marked, the historical browsing duration of each view angle image is also acquired, and the weight of the target view angle corresponding to the view angle image is adjusted according to the current browsing duration, the mark and the historical browsing duration.

It is understood that the adjusting the weight corresponding to the target view includes, but is not limited to, the above-mentioned several ways.

In the embodiment of the application, the weight corresponding to the target view angle is readjusted according to the browsing duration when the user actually browses the view angle image, so that when the user observes the three-dimensional model next time, the terminal displays the view angle image corresponding to the target view angle to the user according to the adjusted weight.

Optionally, based on the first embodiment, referring to fig. 6, after the step of displaying the view image corresponding to the target view according to the initial preset weight, the method further includes:

step S90, recording and storing the current adjustment operation corresponding to each view angle image and acquiring the historical adjustment operation corresponding to each view angle image;

step S100, adjusting the view angle parameters of each view angle image according to the current adjustment operation and/or the historical adjustment operation;

and step S110, determining a next view angle parameter according to the adjusted view angle parameter, so that when a view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.

In the actual browsing process of the user, the user adjusts the view angle image according to the own requirement, wherein the adjustment mode includes but is not limited to enlarging/reducing the view angle image and increasing or decreasing the image brightness of the view angle image.

Optionally, after a current adjustment operation of the user is obtained, the current adjustment operation is saved and the view angle parameter is adjusted accordingly. For example, when the user enlarges the view angle image, the focal length in the view angle parameters may be reduced, and when the user increases the image brightness of the view angle image, the aperture in the view angle parameters may be enlarged.
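The mapping from saved adjustment operations to view angle parameters can be sketched as below, following the rule above that enlarging the image reduces the focal length and brightening it widens the aperture. The operation names, parameter keys and scaling factors are hypothetical.

```python
def apply_adjustment(params, operation):
    """params: {"focal_length": ..., "aperture": ...}. Applies one saved
    user adjustment operation to the stored view angle parameters so the
    same presentation can be reproduced next time."""
    p = dict(params)
    if operation == "zoom_in":       # user enlarged the view angle image
        p["focal_length"] *= 0.8     # bring the model closer
    elif operation == "zoom_out":
        p["focal_length"] *= 1.25
    elif operation == "brighten":    # user increased image brightness
        p["aperture"] += 0.5
    elif operation == "darken":
        p["aperture"] -= 0.5
    return p
```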

Optionally, in another embodiment, when the current adjustment operations corresponding to the view angle images are recorded and saved, the historical adjustment operations corresponding to the view angle images are also obtained, where the historical adjustment operations are the adjustment operations applied to the view angle images before the current one; the view angle parameters of the view angle images are then adjusted according to the current and historical adjustment operations.

Optionally, a next view angle parameter is determined according to the adjusted view angle parameter; the next view angle parameter is applied the next time the user observes the three-dimensional model, so that the terminal displays the view angle image corresponding to the target view angle according to the next view angle parameter.

In the embodiment of the application, the adjustment operation of the user is recorded and saved, the visual angle parameter is adjusted according to the adjustment operation, and the next visual angle parameter is determined according to the adjusted visual angle parameter, so that when the visual angle image is displayed next time, the visual angle image corresponding to the target visual angle is automatically displayed according to the next visual angle parameter without manually adjusting the visual angle parameter again by the user, and the efficiency of browsing the visual angle image by the user is improved.

Furthermore, an embodiment of the present invention further provides a readable storage medium, where a display control program of a three-dimensional model is stored, and the display control program of the three-dimensional model is executed by a processor to perform the steps of the display control method of the three-dimensional model in any one of the above embodiments.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.

The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
