Three-dimensional target selection method based on object attribute progressive expression

Document No.: 1476989 · Published: 2020-02-25

Note: This technology, 一种基于物体属性递进式表达的三维目标选择方法 (Three-dimensional target selection method based on object attribute progressive expression), was created by 万华根 (Wan Huagen), 李嘉栋 (Li Jiadong) and 韩晓霞 (Han Xiaoxia) on 2019-10-24. Abstract: The invention discloses a three-dimensional target selection method based on progressive expression of object attributes, in the technical field of human-computer interaction, comprising the following steps: (1) extract the shape, position and size attributes of all objects in the scene; (2) acquire the user's gesture motion and extract the shape, position and size attributes of the gesture motion track; (3) perform selection intention judgment based on progressive expression: first match the gesture outline against the object outlines by shape, and output the matched object if the match is unique, otherwise match by position; output the matched object if that match is unique, otherwise match by size and output the closest object. With the invention, the user expresses a selection intention to the system step by step and progressively, which better fits human abilities and habits and strikes a more desirable balance between task difficulty and human ability.

1. A three-dimensional target selection method based on object attribute progressive expression is characterized by comprising the following steps:

(1) extracting the shape, position and size attributes of all objects in the scene;

(2) acquiring gesture actions of a user, and extracting shape, position and size attributes of a gesture motion track;

(3) performing selection intention judgment based on progressive expression, the specific process being as follows:

(3-1) shape matching and determination: calculating the similarity between the shape of every object in the scene and the shape of the gesture motion track; sorting all objects in the scene by shape similarity, and using the tolerance p to cut off the set of objects whose shapes are most similar to the user's gesture motion track; if only one object is in the set, feeding that object back to the user as the selection result and finishing the selection; if several objects are in the set, letting these candidate objects enter the position matching stage;

(3-2) position matching and determination: using a distance matching algorithm to match the position of the gesture motion track against the candidate object set from the shape matching stage; if no position ambiguity exists, directly feeding the object nearest in position back to the user as the recommendation; otherwise, letting the set of candidate objects nearest to the gesture motion track enter the size matching stage;

(3-3) size matching and determination: using a size matching algorithm to compare the size of the gesture motion track with the sizes of the candidate objects from the position matching stage, and feeding the object with the closest size back to the user as the recommendation; the user judges whether the recommendation matches the selection intention and, if so, performs a confirmation operation, finishing the selection.

2. The method for selecting the three-dimensional target based on the object property progressive expression according to claim 1, wherein the specific process of the step (1) is as follows: adding an off-screen rendering stage to the fixed-function rendering pipeline, disabling the depth test while this rendering pass executes, rendering the complete shape of each object in the rendering pipeline into its own independent off-screen texture, and extracting the shape, position and size attribute features from the image in each off-screen texture.

3. The method for selecting a three-dimensional object based on object property progressive expression according to claim 1, wherein in the step (1), the shape property of the object is defined as the peripheral outline of the two-dimensional image of the three-dimensional object on the projection plane.

4. The method for selecting a three-dimensional object based on object property progressive expression according to claim 1, wherein in the step (1), the position property of the object is defined as the center point of the minimum axis-aligned bounding rectangle of the two-dimensional image of the three-dimensional object on the projection plane.

5. The method for selecting a three-dimensional object based on object property progressive expression according to claim 1, wherein in the step (1), the size property of the object is defined as the area of the two-dimensional image of the three-dimensional object on the projection plane.

6. The method for selecting a three-dimensional target based on object property progressive expression according to claim 1, wherein in the step (2), the shape property of the gesture motion track is expressed by drawing the peripheral outline of the target object in mid-air in one continuous stroke.

7. The method for selecting a three-dimensional object based on object property progressive expression according to claim 1, wherein in the step (2), the position property of the gesture motion trajectory is expressed by the center point of the minimum axis-aligned bounding rectangle of the gesture outline.

8. The method for selecting a three-dimensional target based on object property progressive expression of claim 1, wherein in the step (2), the size property of the gesture motion trajectory is expressed by the area of the gesture outline.

9. The method for selecting the three-dimensional target based on the object attribute progressive expression according to claim 1, wherein in the step (3-1), the specific process of calculating the similarity between the shapes of all the objects in the scene and the shape of the gesture motion track is as follows:

(3-1-1) calculating the three features of aspect ratio, roundness and convexity for both the gesture motion track shape and the object shape, and dividing the smaller value by the larger value to obtain three normalized feature similarities with the value range (0.0, 1.0), wherein the calculation formulas of the aspect ratio, the roundness and the convexity are as follows:

(The formulas for aspect ratio, roundness and convexity are given only as images in the source document.)

(3-1-2) normalizing the approximate polygons of the gesture motion track shape and the object shape into 128×128 pixel matrices, wherein every pixel on or inside a polygon is marked as 1 and every pixel outside it as 0, so as to obtain two-dimensional matrices of the pixel distributions of the two shapes; performing an exclusive-or operation on the two matrices to obtain a difference matrix of the pixel distributions, and dividing the number of zero-valued elements of the difference matrix by the total number of matrix elements to obtain the normalized pixel-distribution similarity of the gesture motion track shape and the object shape, with the value range (0.0, 1.0);

(3-1-3) calculating the similarity between the shape of the object and the shape of the gesture motion track, wherein the calculation formula is as follows:

shape similarity = aspect ratio similarity × roundness similarity × convexity similarity × pixel distribution similarity.

10. The method for selecting a three-dimensional object based on object property progressive expression according to claim 1, wherein in the step (3-1), the tolerance p is set to 10%, an object falling within the tolerance when: 1 − shape similarity ≤ p.

Technical Field

The invention belongs to the technical field of human-computer interaction, and particularly relates to a three-dimensional target selection method based on object attribute progressive expression.

Background

Object selection is one of the core technologies in human-computer interaction: an object must be selected before it can be operated on. With the rapid development of virtual reality and augmented reality technology, the selection of three-dimensional objects in virtual environments has become a research hotspot in three-dimensional human-computer interaction owing to its wide application value.

As the first step of three-dimensional interaction, three-dimensional selection is a key research topic in the field. The mainstream three-dimensional selection techniques of today, based on the pointing metaphor and ray casting, suffer greatly reduced accuracy when selecting small objects, distant objects and occluded objects, and even lose effectiveness entirely when the target object is occluded, which is a major challenge for long-distance three-dimensional selection. Subsequent optimizations of the selection technique build on standard ray casting and compensate for its shortcomings with progressive refinement or with coarse-screening-then-confirmation schemes such as volume casting; these optimization strategies pay for the accuracy gain with reduced efficiency. The root cause of this inherent deficiency is that humans lack the muscular control to point precisely at a target object: the natural tremor of the human hand and the Heisenberg effect make accurate pointing operations difficult.

Mainstream three-dimensional selection techniques thus fail to make reasonable use of human abilities and therefore carry inherent drawbacks. The study of three-dimensional selection techniques should in fact focus more on human abilities and think about strategy from the human perspective. By starting from what people can fully and comfortably do, a harmonious state of human-machine cooperation is reached between the user and the interactive system: the user expresses a selection intention to the interactive system more naturally, and the interactive system comprehensively infers the selection intention from the user's description, posing new expression requirements to the user when necessary so as to understand the user's interaction intention more accurately.

Disclosure of Invention

In order to solve the problems in the prior art, the invention provides a three-dimensional target selection method based on progressive expression of object attributes, with which a user can express a selection intention to the system step by step and progressively, so that the system can infer the user's selection intention more accurately.

A three-dimensional target selection method based on object attribute progressive expression is characterized by comprising the following steps:

(1) extracting the shape, position and size attributes of all objects in the scene;

(2) acquiring gesture actions of a user, and extracting shape, position and size attributes of a gesture motion track;

(3) performing selection intention judgment based on progressive expression, the specific process being as follows:

(3-1) shape matching and determination: calculating the similarity between the shape of every object in the scene and the shape of the gesture motion track; sorting all objects in the scene by shape similarity, and using the tolerance p to cut off the set of objects whose shapes are most similar to the user's gesture motion track; if only one object is in the set, feeding that object back to the user as the selection result and finishing the selection; if several objects are in the set, letting these candidate objects enter the position matching stage;

(3-2) position matching and determination: using a distance matching algorithm to match the position of the gesture motion track against the candidate object set from the shape matching stage; if no position ambiguity exists, directly feeding the object nearest in position back to the user as the recommendation; otherwise, letting the set of candidate objects nearest to the gesture motion track enter the size matching stage;

(3-3) size matching and determination: using a size matching algorithm to compare the size of the gesture motion track with the sizes of the candidate objects from the position matching stage, and feeding the object with the closest size back to the user as the recommendation; the user judges whether the recommendation matches the selection intention and, if so, performs a confirmation operation, finishing the selection.

The specific process of the step (1) is as follows: adding an off-screen rendering stage to the fixed-function rendering pipeline, disabling the depth test while this rendering pass executes, rendering the complete shape of each object in the rendering pipeline into its own independent off-screen texture, and extracting the shape, position and size attribute features from the image in each off-screen texture.

In the step (1), the shape attribute of the object is defined as a peripheral contour of a two-dimensional image of the three-dimensional object on the projection plane.

The position attribute of the object is defined as the center point of the minimum axis-aligned bounding rectangle of the two-dimensional image of the three-dimensional object on the projection plane.

The dimensional attribute of an object is defined as the area of a two-dimensional image that the three-dimensional object takes on a projection plane.

In the step (2), the shape attribute of the gesture motion track is expressed by drawing the peripheral outline of the target object in mid-air in one continuous stroke.

The position attribute of the gesture motion track is expressed by the center point of the minimum axis-aligned bounding rectangle of the gesture outline.

The size attribute of the gesture motion trajectory is expressed by the area of the gesture outline.

In the step (3-1), the specific process of calculating the similarity between the shapes of all objects in the scene and the shapes of the gesture motion tracks is as follows:

(3-1-1) calculating the three features of aspect ratio, roundness and convexity for both the gesture motion track shape and the object shape, and dividing the smaller value by the larger value to obtain three normalized feature similarities with the value range (0.0, 1.0), wherein the calculation formulas of the aspect ratio, the roundness and the convexity are as follows:

(The formulas for aspect ratio, roundness and convexity are given only as images in the source document.)

(3-1-2) normalizing the approximate polygons of the gesture motion track shape and the object shape into 128×128 pixel matrices, wherein every pixel on or inside a polygon is marked as 1 and every pixel outside it as 0, so as to obtain two-dimensional matrices of the pixel distributions of the two shapes; performing an exclusive-or operation on the two matrices to obtain a difference matrix of the pixel distributions, and dividing the number of zero-valued elements of the difference matrix by the total number of matrix elements to obtain the normalized pixel-distribution similarity of the gesture motion track shape and the object shape, with the value range (0.0, 1.0), as sketched in the code after this list;

(3-1-3) calculating the similarity between the shape of the object and the shape of the gesture motion track, wherein the calculation formula is as follows:

shape similarity = aspect ratio similarity × roundness similarity × convexity similarity × pixel distribution similarity.
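
As an illustration of step (3-1-2) (the code referenced above), here is a minimal Python sketch, assuming NumPy and OpenCV are available; all function and variable names are illustrative, not taken from the patent. It rasterizes two approximate polygons into 128×128 binary matrices, XORs them, and derives the normalized pixel-distribution similarity:

```python
import numpy as np
import cv2

def rasterize(polygon, size=128):
    """Rasterize a polygon (N x 2 array of points) into a size x size
    binary matrix: 1 on or inside the polygon, 0 outside."""
    pts = np.asarray(polygon, dtype=np.float64)
    mins = pts.min(axis=0)
    extent = (pts.max(axis=0) - mins).max()
    # Uniform scaling is assumed here; the patent does not say whether the
    # normalization preserves the aspect ratio.
    pts = ((pts - mins) / extent * (size - 1)).astype(np.int32)
    grid = np.zeros((size, size), dtype=np.uint8)
    cv2.fillPoly(grid, [pts], 1)
    return grid

def pixel_distribution_similarity(poly_a, poly_b, size=128):
    diff = np.logical_xor(rasterize(poly_a, size), rasterize(poly_b, size))
    zeros = diff.size - np.count_nonzero(diff)   # elements equal in both grids
    return zeros / diff.size                     # normalized to (0.0, 1.0)
```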

In the step (3-1), the tolerance p is set to 10%; an object falls within the tolerance when:

1 − shape similarity ≤ p.

Compared with the prior art, the invention has the following beneficial effects:

The invention provides a brand-new three-dimensional selection method based on progressive expression of object attributes, conceived from the perspective of reasonably using human abilities and balancing human ability against task difficulty, and it can guarantee the accuracy of the final matching result even when the matching algorithm of some dimension is not accurate enough. This both balances human abilities and task difficulty and relaxes the accuracy requirements on the matching algorithms: the algorithm for each dimension need not be very precise; as long as the general trend can be discerned, the cascading reinforcement between the three attribute features is sufficient to infer the user's selection intention accurately.

Drawings

FIG. 1 is a schematic diagram illustrating the shape, position, and size attribute definition of an object according to the present invention;

FIG. 2 is a diagram illustrating the additional step of extracting attribute features of an object during an off-screen rendering stage according to the present invention;

FIG. 3 is a flow diagram of a framework for performing a selection intent determination based on a progressive expression in accordance with the present invention;

FIG. 4 is a schematic diagram of a two-dimensional matrix of triangles in an embodiment of the invention;

FIG. 5 is a schematic diagram of a difference matrix of pixel distribution obtained by XOR operation between a triangle and a circle according to an embodiment of the present invention;

FIG. 6 is a diagram illustrating the effect of the tolerance p on the error rate of shape matching and the redundancy rate of matching in the next stage according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of an actual scene of progressive attribute expression.

Detailed Description

The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.

A three-dimensional target selection method based on object attribute progressive expression comprises the following three steps:

(1) extracting shape, position and size attributes of all objects in a scene

The shape attribute of an object is the main attribute the user needs to express, and is defined as the peripheral outline of the two-dimensional image that the three-dimensional object projects onto the projection plane. When no objects of the same or similar shape are present in the scene, the shape attribute alone suffices to uniquely identify the target object. When shape ambiguity exists in the scene, the position attribute of the object, the attribute conventionally used by pointing-metaphor and ray-casting selection techniques, is introduced, and the selection intention is disambiguated by expressing position. The position attribute is defined as the center point of the minimum axis-aligned bounding rectangle of the object's two-dimensional image on the projection plane. Except in the extreme case of "concentric isomorphism" (several geometric objects with the same shape at the same position), the shape and position attributes suffice to uniquely identify the target object. To ensure that this extreme case is also handled, a size attribute is included, defined as the area of the object's two-dimensional image on the projection plane. FIG. 1 illustrates the definitions of the shape, position and size attributes of an object.

As shown in FIG. 2, the interactive system adds an off-screen rendering stage to the fixed-function rendering pipeline so that each object is simultaneously rendered into an off-screen texture, from whose image the shape, position and size attribute features are then extracted. To keep the proposed selection technique robust to occlusion, every object in the rendering pipeline is rendered into its own separate off-screen texture, and the depth test is disabled while this rendering pass executes, so that the complete shape of the object reaches the off-screen texture. In addition, lighting, texturing and similar functions can be disabled during the fragment shading of this pass to speed up rendering, and the resolution and channel depth of the off-screen textures can be reduced to further cut the algorithm's resource consumption.
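
As a concrete illustration, the following sketch extracts the three attributes from one object's off-screen image. It is an assumption-laden example, not the patent's implementation: it presumes the off-screen texture has been read back into CPU memory as a binary mask and uses OpenCV for the geometry; all names are illustrative:

```python
import cv2
import numpy as np

def extract_attributes(mask):
    """mask: 2D uint8 array read back from one object's off-screen texture,
    nonzero wherever the object was rendered."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)      # peripheral outline
    x, y, w, h = cv2.boundingRect(contour)            # min axis-aligned rect
    poly = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    return {
        "shape": poly.reshape(-1, 2),                 # approximate polygon
        "position": (x + w / 2.0, y + h / 2.0),       # rectangle center point
        "size": cv2.contourArea(contour),             # projected area
    }
```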

(2) Acquiring the user's gesture motion and extracting the shape, position and size attributes of the gesture motion track

The gesture input device needs at least three degrees of freedom of movement. If the virtual three-dimensional space is projected onto a monocular display plane, the user needs only two degrees of freedom of movement of the input device in the vertical plane to perform a selection task in the monocular three-dimensional space. If the three-dimensional space is projected onto a binocular display and the user must perform selection tasks in a binocular stereoscopic three-dimensional environment, the gesture input device is required to have three degrees of freedom of movement.

The system records the current spatial position of the gesture input device in real time to obtain the motion track of a gesture. The shape attribute of the gesture motion track, i.e., the gesture outline, is expressed by drawing the peripheral outline of the target object in mid-air in one continuous stroke; the position attribute is expressed by the center point of the minimum axis-aligned bounding rectangle of the gesture outline; and the size attribute is expressed by the area of the gesture outline.
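
The gesture track reduces to the same three attributes. A minimal sketch under the same illustrative assumptions (the sampled track is treated as an implicitly closed polygon, and the shoelace formula stands in for whatever area computation the system actually uses):

```python
import numpy as np

def gesture_attributes(track):
    """track: sequence of (x, y) cursor samples forming the drawn outline."""
    pts = np.asarray(track, dtype=np.float64)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    center = tuple((mins + maxs) / 2.0)      # min axis-aligned rect center
    x, y = pts[:, 0], pts[:, 1]
    # Shoelace formula for the area of the implicitly closed polygon.
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return {"shape": pts, "position": center, "size": area}
```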

(3) Selection intent determination based on progressive expression

The shape, position and size attributes of the objects are matched against those of the gesture motion track. When the attribute of a single dimension is enough for the interactive system to infer the user's selection intention, the user can complete the selection task without redundantly expressing the attributes of the other dimensions. When the system cannot infer a selection intention from a single attribute (e.g., the user draws a square but there are several squares in the scene), the interactive system progressively asks the user to additionally express the attributes of other dimensions so that the selection intention can be inferred.

A corresponding selection intention judgment algorithm based on progressive expression is proposed following the progressive expression order of shape, position and size, as shown in FIG. 3. First, shape matching is performed between the object outlines and the gesture outline, and a shape matching result is obtained according to the tolerance p. When there is exactly one match, that matched object is output as the selection target. Otherwise position attribute matching is performed, and the position matching result determines whether further size attribute matching is needed: if the position match is unique, the matched object is output directly; otherwise size attribute matching is performed and the matched object is finally obtained from the size match.
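
The flow can be condensed into a short cascade. The sketch below is only a schematic rendering of FIG. 3; the three stage matchers (shape_candidates, position_match, size_match) are hypothetical names, sketched in the subsections that follow:

```python
def progressive_select(objects, gesture, p=0.10):
    """objects: attribute dicts of scene objects; gesture: attribute dict
    of the drawn outline. Returns the inferred target object."""
    cands = shape_candidates(objects, gesture["shape"], p)   # stage 1: shape
    if len(cands) == 1:
        return cands[0]                                      # unique match
    cands = position_match(cands, gesture["position"])       # stage 2: position
    if len(cands) == 1:
        return cands[0]
    return size_match(cands, gesture["size"])                # stage 3: size
```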

1) Shape matching

First, shape matching of the outlines is performed: an outline matching algorithm runs between the gesture outline and each object outline and computes the degree of similarity between the two sets of outline features. Each outline is approximated by a polygon, from which four features are extracted to describe how closely the gesture outline approximates an object outline: aspect ratio, roundness, convexity and pixel distribution. The aspect ratio, roundness and convexity of the gesture outline and of the object outline are computed according to formulas 1, 2 and 3 below, and the smaller value is divided by the larger value to obtain three normalized feature similarities with the value range (0.0, 1.0). The pixel distribution is expressed as a 128×128 two-dimensional matrix: the approximate polygons of the gesture outline and the object outline are normalized into 128×128 pixel matrices, every pixel on or inside a polygon is marked as 1 and every pixel outside it as 0, giving the two-dimensional pixel-distribution matrices of the two outlines (FIG. 4 shows the two-dimensional matrix of a triangle). The two matrices are XORed to obtain a difference matrix, from which the pixel-distribution similarity is computed (FIG. 5 shows the difference matrix of the pixel distributions of a triangle and a circle). Finally, the four normalized similarities are multiplied to obtain the shape similarity of the two outlines (formula 4), with the value range (0.0, 1.0).

(Formulas 1-3 for aspect ratio, roundness and convexity are given only as images in the source document.)

shape similarity = aspect ratio similarity × roundness similarity × convexity similarity × pixel distribution similarity    (formula 4)
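
Since the exact forms of formulas 1-3 survive only as images, the sketch below substitutes the standard shape descriptors for them: bounding-rectangle aspect ratio, roundness as 4πA/P², and convexity as the hull-to-contour perimeter ratio. These substitutions are assumptions, and `pixel_distribution_similarity` is the function sketched in the Disclosure section above:

```python
import cv2
import numpy as np

def _ratio(a, b):
    """Smaller divided by larger: normalized similarity in (0.0, 1.0)."""
    return min(a, b) / max(a, b)

def scalar_features(polygon):
    pts = np.asarray(polygon, dtype=np.float32).reshape(-1, 1, 2)
    area = cv2.contourArea(pts)
    perimeter = cv2.arcLength(pts, True)
    hull = cv2.convexHull(pts)
    _, _, w, h = cv2.boundingRect(pts)
    return {
        "aspect_ratio": w / float(h),                         # assumed formula 1
        "roundness": 4.0 * np.pi * area / perimeter ** 2,     # assumed formula 2
        "convexity": cv2.arcLength(hull, True) / perimeter,   # assumed formula 3
    }

def shape_similarity(poly_obj, poly_gesture):
    fo, fg = scalar_features(poly_obj), scalar_features(poly_gesture)
    sim = 1.0
    for key in fo:                      # multiply the three feature ratios
        sim *= _ratio(fo[key], fg[key])
    return sim * pixel_distribution_similarity(poly_obj, poly_gesture)
```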

2) Shape matching result determination

Because the user's expression is inexact, the ranking by shape similarity is not fully trustworthy, so a threshold describing the difference in shape similarity is needed. When the shape-matching similarities of several objects differ by less than this threshold, the algorithm considers that no decision can be made through shape matching alone and asks the user to express other attributes to further disambiguate the selection intention. The threshold is defined as the tolerance p: all candidates whose similarity is within p of the candidate with the highest shape similarity are considered by the algorithm to have the same shape; they cannot be distinguished by the shape attribute alone and must enter the matching stage for the next attribute. The setting of p affects both how often the target object passes the shape matching stage smoothly and how often redundant objects enter the next attribute matching stage. If p is too small, the target object fails the shape matching stage more often, owing to limited algorithm accuracy, the roughness of human expression and similar causes; if p is too large, the target object is more likely to pass the shape matching stage, but more redundant candidates enter the next matching stage and expressing the next attribute becomes harder for the user.

FIG. 6 shows, from the matching results obtained in the experiment, how the value of the tolerance p affects the shape matching error rate (the frequency with which the target object fails the shape matching stage) and the next-stage matching redundancy rate (the frequency with which redundant objects enter the next attribute matching stage). According to the data in the figure, when the tolerance p is set to 10%, the probability that the target object fails the shape matching stage is below 1%, while redundant candidates enter the next attribute matching stage in only about 8% of cases.

The shape similarities of all objects in the scene are sorted in descending order, and the tolerance p is used to cut off the set of objects most similar in shape to the outline drawn by the user. If there is only one object in the set, i.e., no group of objects has very close shape similarities, the intention judgment algorithm terminates and that object is fed back to the user as the selection result. If several objects are in the set, i.e., shape ambiguity exists, the candidate objects enter the position matching stage, and the system semi-highlights the edges of the candidates to prompt the user that the position matching stage has begun.
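
A minimal sketch of this truncation step, under the same illustrative naming as the earlier code:

```python
def shape_candidates(objects, gesture_poly, p=0.10):
    """Keep every object whose shape similarity is within tolerance p
    of the best-matching object."""
    scored = sorted(((shape_similarity(o["shape"], gesture_poly), o)
                     for o in objects), key=lambda t: t[0], reverse=True)
    best = scored[0][0]
    return [o for s, o in scored if best - s <= p]
```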

3) Location matching and determination

The distance matching algorithm matches the gesture outline against the candidate object set from the shape matching stage by distance. If no position ambiguity exists, the algorithm terminates and the nearest object is fed back to the user as the recommendation with its edge fully highlighted; otherwise, the set of candidates closest to the hand-drawn outline enters the next stage, size matching.

4) Size matching and determination

The size matching algorithm compares the size of the gesture outline with the sizes of the candidates from the position matching stage, and the object with the closest size is fed back to the user as the recommendation with its edge fully highlighted. The user judges whether the recommendation matches the selection intention and, if so, performs the confirmation operation, finishing the selection.
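
Sketches of the two remaining stage matchers, with an illustrative epsilon standing in for the patent's unstated position-ambiguity criterion:

```python
import math

def position_match(cands, gesture_center, eps=1e-6):
    """Return the candidates nearest the gesture outline's center; more
    than one survivor signals position ambiguity (e.g. concentric objects)."""
    def dist(obj):
        return math.hypot(obj["position"][0] - gesture_center[0],
                          obj["position"][1] - gesture_center[1])
    nearest = min(dist(o) for o in cands)
    return [o for o in cands if dist(o) - nearest <= eps]

def size_match(cands, gesture_area):
    """Recommend the candidate whose projected area is closest in size."""
    return min(cands, key=lambda o: abs(o["size"] - gesture_area))
```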

Both position matching and size matching run in real time after the user finishes expressing the shape attribute. Although position matching executes before size matching in the intention judgment algorithm, the user is not strictly required to express position before size and may decide how to express the position and size attributes according to personal preference and the actual situation.

In the proposed intention judgment algorithm, the similarities of the three attribute features are not combined by weighted summation to compute the selection result; instead, matching proceeds in three stages, and the top tier of candidates matched at each stage enters the matching algorithm of the next stage on equal footing. Weight assignment depends heavily on the characteristics of the algorithms and of the scene; when different algorithms are substituted or the weights are applied to different scenes, previously trained weights may no longer be applicable. The judgment algorithm provided by the invention also guarantees the accuracy of the final matching result when the matching algorithm of some dimension is not accurate enough (whether the inaccuracy comes from the algorithm's performance or from inexact human expression). This both balances human abilities and task difficulty and relaxes the accuracy requirements on the matching algorithms: the algorithm for each dimension need not be very precise; as long as the general trend can be discerned, the cascading reinforcement between the three attribute features is sufficient to infer the user's selection intention accurately.

In the specific implementation of this embodiment, an EPSON CB-X04 projector with a resolution of 1024×768 serves as the display device, providing an effective display area of 110 cm × 110 cm. An HTC Vive Lighthouse serves as the three-dimensional tracking device and an HTC Vive Controller handle as the gesture input device; all candidate objects lie on a virtual plane 2 meters from the rendering camera. The user stands 2 m from the display screen to perform the experiment. Before the experiment, the sizes and positions of the virtual and real environments were adjusted and unified, and the fields of view of the virtual camera and the real display area were strictly calibrated to ensure that the visible size of a virtual object on the display screen equals its virtual size.

(1) Preparation work

A blue cursor on the screen indicates the current spatial position of the handle. The user is asked to hold the arm in its most comfortable hover position; the cursor at that position is calibrated to the center of the screen, and the mapping sensitivity between handle movement and cursor movement is set so that the user can drive the cursor over the whole screen area through arm movement alone. A blue diamond at the center of the screen is used for gesture position calibration before starting: the user moves the cursor into the diamond and clicks the application button on the handle; the diamond disappears, the preparation is finished, and object selection can begin.

(2) Object selection

The user draws the outline continuously while holding the application button on the handle, and a track recording program running in the background samples the cursor position at 100 Hz and records it into the track sequence. While drawing, the user sees the drawn track appear in real time in the area marked by the cursor. When the application button is released, the system judges that outline drawing is finished; the outline can then be translated by moving the handle to express the outline position attribute. During this movement, the edges of all candidates that passed the shape attribute matching stage are semi-highlighted, and the edge of the candidate whose position is most similar to the outline's is fully highlighted. The semi-highlight feedback shows the user the candidate set that can still be narrowed by adjusting the outline position; the full-highlight feedback shows which candidate would be selected if position expression ended now, and the user clicks the application button to finish the selection once the target object's edge is fully highlighted.

FIG. 7 shows actual scenes of progressive attribute expression. On the left, (a), the user selects the cone in the scene in one pass by expressing the shape attribute, and the selected cone is fully highlighted. In the middle, (b), two squares exist in the scene, so after expressing the shape attribute the user must move the outline to express the position attribute and pick out the intended square: shape attribute matching yields two candidate squares, and position attribute matching then fully highlights the square close to the gesture outline while the square far from the gesture outline is semi-highlighted. On the right, (c), a pair of concentric circles exists in the scene; to select the smaller red circle, the user must continue with the size attribute after expressing the shape attribute so that the system can infer the selection intention accurately.

The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.
