Information processing apparatus, information processing system, and non-transitory computer-readable medium

Document No.: 1782706; Publication date: 2019-12-06

Reading note: This technology, "information processing apparatus, information processing system, and non-transitory computer-readable medium", was created by 得地贤吾 on 2018-12-18. Abstract: The present disclosure relates to an information processing apparatus, an information processing system, and a non-transitory computer-readable medium. Specifically, the information processing apparatus includes: a detection unit that detects a movement of a detection object with respect to an image formed in the air; and a control unit that controls the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to the display area of the image.

1. An information processing apparatus, the information processing apparatus comprising:

a detection unit that detects a movement of a detection object with respect to an image formed in the air; and

a control unit that controls the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to a display area of the image.

2. The information processing apparatus according to claim 1, wherein

the combination is one of a plurality of combinations prepared in advance.

3. The information processing apparatus according to claim 2, wherein

the start position of the movement is one or more of a position not overlapping the image, a position overlapping the image, and a position in contact with the image.

4. The information processing apparatus according to claim 1, wherein

the control content corresponding to the combination is decided according to an application program for displaying the image.

5. The information processing apparatus according to claim 1, wherein

the control content corresponding to the combination is decided according to an attribute of an object to be operated.

6. The information processing apparatus according to claim 1, wherein

the start position of the movement is a position at which the detection object is regarded as stationary for a predetermined time or more.

7. The information processing apparatus according to claim 1, wherein

the start position of the movement is a position at which passage of the detection object is detected within a predetermined detection area.

8. The information processing apparatus according to claim 1, wherein

the control unit enlarges a maximum display area of the image in a case where positions of both hands at the start of the movement do not overlap with the image and the hands are moved away from each other.

9. The information processing apparatus according to claim 1, wherein

in a case where the positions of both hands at the start of the movement overlap with the image and the hands are moved away from each other, the control unit locally enlarges a portion of the image sandwiched between the hands within a maximum display area.

10. The information processing apparatus according to claim 1, wherein

in a case where the positions of both hands at the start of the movement do not overlap with the image and the hands are moved toward each other, the control unit reduces the maximum display area of the image as a result of the operation.

11. The information processing apparatus according to claim 1, wherein

in a case where the positions of both hands at the start of the movement overlap with the image and the hands are moved toward each other, the control unit locally reduces a portion of the image sandwiched between the hands within a maximum display area.

12. The information processing apparatus according to claim 1, wherein

in a case where the position of a single hand at the start of the movement does not overlap with the image and the single hand is rotated or moved in one direction, the control unit rotates the image or moves a space where the image is formed in the direction of the movement.

13. The information processing apparatus according to claim 1, wherein

in a case where the position of a single hand at the start of the movement overlaps with the image and the single hand is moved in one direction, the control unit deletes a portion through which the single hand passes.

14. The information processing apparatus according to claim 1, wherein

the control unit enlarges a maximum display area of the image as a result of the operation in a case where a position of a single hand at the start of the movement does not overlap with the image and fingers of the single hand are moved away from each other.

15. The information processing apparatus according to claim 1, wherein

in a case where the position of a single hand overlaps the image at the start of the movement and fingers of the single hand are moved away from each other, the control unit locally enlarges a portion sandwiched between the fingers within a maximum display area.

16. The information processing apparatus according to claim 1, wherein

in a case where the position of a single hand does not overlap with the image at the start of the movement and fingers of the single hand are moved toward each other, the control unit reduces the maximum display area of the image as a result of the operation.

17. The information processing apparatus according to claim 1, wherein

in a case where the position of a single hand overlaps the image at the start of the movement and fingers of the single hand are moved toward each other, the control unit locally reduces a portion sandwiched between the fingers within a maximum display area.

18. The information processing apparatus according to claim 1, wherein

the control unit controls the content of the operation according to the plurality of combinations.

19. The information processing apparatus according to claim 18, wherein

in a case where the image is a stereoscopic image, the control unit controls an outer-layer image or an internal image according to a difference in the start position.

20. The information processing apparatus according to claim 1, wherein

a stimulus corresponding to the content of the operation is given to a part of the body of the user.

21. An information processing system, the information processing system comprising:

an image forming unit that forms an image in the air;

a detection unit that detects a movement of a detection object with respect to the image; and

a control unit that controls the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to a display area of the image.

22. A non-transitory computer-readable medium storing a program that causes a computer to execute a process, the process comprising:

detecting a movement of a detection object with respect to an image formed in the air; and

controlling the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to a display area of the image.

Technical Field

The present disclosure relates to an information processing apparatus, an information processing system, and a non-transitory computer readable medium.

Background

There is a known technique of recognizing that a user has performed a touch operation in a case where the difference between the distance to an aerial image displayed in the air and the distance to the hand of the user, both measured by a distance sensor, is within a predetermined range.

Japanese Unexamined Patent Application Publication No. 2017-62709 is an example of the background art.

Disclosure of Invention

In the background art, only whether or not a touch operation has been performed is detected.

It is an object of the present disclosure to provide a technique that distinguishes the same type of gesture performed on an image displayed in the air according to the position at which the start of the gesture is detected.

According to a first aspect of the present disclosure, there is provided an information processing apparatus including: a detection unit that detects a movement of a detection object with respect to an image formed in the air; and a control unit that controls the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to the display area of the image.

An information processing apparatus according to a second aspect is the information processing apparatus according to the first aspect, wherein the combination is one of a plurality of combinations prepared in advance.

An information processing apparatus according to a third aspect is the information processing apparatus according to the second aspect, wherein the start position of the movement is one or more of a position not overlapping with the image, a position overlapping with the image, and a position in contact with the image.

An information processing apparatus according to a fourth aspect is the information processing apparatus according to the first aspect, wherein the control content corresponding to the combination is decided in accordance with an application program for displaying the image.

An information processing apparatus according to a fifth aspect is the information processing apparatus according to the first aspect, wherein the control content corresponding to the combination is decided in accordance with an attribute of the object to be operated.

An information processing apparatus according to a sixth aspect is the information processing apparatus according to the first aspect, wherein the start position of the movement is a position at which the detection object is regarded as stationary for a predetermined time or more.

An information processing apparatus according to a seventh aspect is the information processing apparatus according to the first aspect, wherein the start position of the movement is a position at which passage of the detection object is detected within a predetermined detection area.

An information processing apparatus according to an eighth aspect is the information processing apparatus according to the first aspect, wherein the control unit enlarges the maximum display area of the image in a case where the positions of both hands at the start of the movement do not overlap with the image and the hands are moved away from each other.

An information processing apparatus according to a ninth aspect is the information processing apparatus according to the first aspect, wherein the control unit locally enlarges a portion of the image sandwiched between the hands within the maximum display area in a case where positions of both hands overlap the image at a start of the movement and the hands are moved away from each other.

An information processing apparatus according to a tenth aspect is the information processing apparatus according to the first aspect, wherein the control unit reduces the maximum display area of the image as a result of the operation in a case where the positions of both hands do not overlap with the image at the start of the movement and the hands are moved toward each other.

An information processing apparatus according to an eleventh aspect is the information processing apparatus according to the first aspect, wherein the control unit locally reduces a portion of the image sandwiched between the hands within the maximum display area in a case where positions of both hands overlap the image at a start of the movement and the hands are moved toward each other.

An information processing apparatus according to a twelfth aspect is the information processing apparatus according to the first aspect, wherein the control unit rotates the image or moves a space where the image is formed in the direction of the movement in a case where a position of a single hand at a start of the movement does not overlap the image and the single hand is rotated or moved in one direction.

An information processing apparatus according to a thirteenth aspect is the information processing apparatus according to the first aspect, wherein the control unit deletes a portion through which the single hand passes in a case where a position of a single hand overlaps with the image at a start of the movement and the single hand is moved in one direction.

An information processing apparatus according to a fourteenth aspect is the information processing apparatus according to the first aspect, wherein the control unit enlarges a maximum display area of the image as a result of the operation in a case where a position of a single hand does not overlap with the image at a start of the movement and fingers of the hand are moved away from each other.

An information processing apparatus according to a fifteenth aspect is the information processing apparatus according to the first aspect, wherein the control unit locally enlarges a portion sandwiched between fingers within the maximum display area in a case where a position of a single hand overlaps with the image at a start of the movement and the fingers are moved away from each other.

An information processing apparatus according to a sixteenth aspect is the information processing apparatus according to the first aspect, wherein the control unit reduces the maximum display area of the image as a result of the operation in a case where the position of a single hand does not overlap with the image at the start of the movement and fingers of the hand are moved toward each other.

An information processing apparatus according to a seventeenth aspect is the information processing apparatus according to the first aspect, wherein the control unit locally reduces a portion sandwiched between the fingers within the maximum display area in a case where a position of a single hand overlaps with the image at a start of the movement and the fingers are moved toward each other.

An information processing apparatus according to an eighteenth aspect is the information processing apparatus according to the first aspect, wherein the control unit controls the content of the operation according to the plurality of combinations.

An information processing apparatus according to a nineteenth aspect is the information processing apparatus according to the eighteenth aspect, wherein the control unit controls an outer-layer image or an internal image in accordance with a difference in the start position in a case where the image is a stereoscopic image.

An information processing apparatus according to a twentieth aspect is the information processing apparatus according to the first aspect, wherein a stimulus corresponding to the content of the operation is given to a part of the body of the user.

According to a twenty-first aspect of the present disclosure, there is provided an information processing system including: an image forming unit that forms an image in the air; a detection unit that detects a movement of a detection object with respect to the image; and a control unit that controls the content of an operation on the image according to a combination of a start position of the movement and a direction of the movement with respect to the display area of the image.

According to a twenty-second aspect of the present disclosure, there is provided a non-transitory computer-readable medium storing a program that causes a computer to execute a process including: detecting a movement of a detection object with respect to an image formed in the air; and controlling the content of an operation on the image according to a combination of the start position of the movement and the direction of the movement with respect to the display area of the image.

According to the first aspect of the present disclosure, the same type of gesture performed on an image displayed in the air can be distinguished according to the position at which the start of the gesture is detected.

According to the second aspect of the present disclosure, small differences in position and direction may be ignored.

According to the third aspect of the present disclosure, predictability of operation may be improved.

According to the fourth aspect of the present disclosure, the meaning of the same gesture can be changed according to the application.

According to the fifth aspect of the present disclosure, the meaning of the same gesture can be changed according to the object.

According to the sixth aspect of the present disclosure, it is possible to distinguish between a gesture for operation and a gesture not for operation.

According to the seventh aspect of the present disclosure, it is possible to distinguish between a gesture for operation and a gesture not for operation.

According to the eighth aspect of the present disclosure, the maximum display area may be enlarged by a gesture.

According to the ninth aspect of the present disclosure, a portion of an image may be enlarged locally by a gesture.

According to the tenth aspect of the present disclosure, the maximum display area may be reduced by a gesture.

According to the eleventh aspect of the present disclosure, a portion of an image may be locally reduced by a gesture.

According to the twelfth aspect of the present disclosure, the display portion of the image may be moved or rotated by a gesture.

According to the thirteenth aspect of the present disclosure, a portion of an image may be deleted by a gesture.

According to the fourteenth aspect of the present disclosure, the maximum display area may be enlarged by a gesture.

According to the fifteenth aspect of the present disclosure, a portion of an image may be locally enlarged by a gesture.

According to the sixteenth aspect of the present disclosure, the maximum display area may be reduced by a gesture.

According to the seventeenth aspect of the present disclosure, a portion of an image may be locally reduced by a gesture.

According to the eighteenth aspect of the present disclosure, various operations can be realized.

According to the nineteenth aspect of the present disclosure, an operation specific to a stereoscopic image can be realized.

According to the twentieth aspect of the present disclosure, the user can actually feel the reception of the operation using the gesture.

According to the twenty-first aspect of the present disclosure, the same type of gesture performed on an image displayed in the air can be given different meanings according to the position at which the start of the gesture is detected.

According to the twenty-second aspect of the present disclosure, the same type of gesture performed on an image displayed in the air can be given different meanings according to the position at which the start of the gesture is detected.

Drawings

Exemplary embodiments of the present disclosure will be described in detail below based on the following drawings, in which:

Fig. 1 is a view for explaining a general configuration of an aerial image forming system according to a first exemplary embodiment;

Fig. 2A and 2B illustrate the principle of an aerial image forming apparatus that forms an aerial image by passing light output from a display device through a dedicated optical plate, and fig. 2A illustrates the positional relationship between each component and the aerial image and fig. 2B illustrates a part of the cross-sectional structure of the optical plate;

Fig. 3 illustrates the principle of an aerial image forming apparatus that forms a three-dimensional image as an aerial image;

Fig. 4A and 4B illustrate the principle of an aerial image forming apparatus that forms an aerial image by using a micromirror array having the following structure: the minute rectangular holes constituting the dihedral corner reflectors are arranged at equal intervals in a plane, wherein fig. 4A illustrates a positional relationship between each component and an aerial image and fig. 4B illustrates an enlarged view of a part of the micromirror array;

Fig. 5 illustrates the principle of an aerial image forming apparatus using a beam splitter and retroreflective sheeting;

Fig. 6 illustrates the principle of an aerial image forming apparatus that forms an aerial image as a collection of plasma emitters;

Fig. 7 is a view for explaining an example of a hardware configuration of an operation reception apparatus according to the first exemplary embodiment;

Fig. 8 is a view for explaining a functional configuration of an operation reception apparatus according to the first exemplary embodiment;

Fig. 9A to 9C are views for explaining the positions detected by the start position detection unit, and fig. 9A illustrates a case where the right and left hands are located outside the aerial image (a case where the right and left hands do not overlap with the aerial image), fig. 9B illustrates a case where the right and left hands are in contact with the outer edge of the aerial image, and fig. 9C illustrates a case where the right and left hands are located inside the aerial image (a case where the right and left hands overlap with the aerial image);

Fig. 10 is a table for explaining an example of rules for specifying the contents of operations in the case where the application that outputs the aerial image is drawing software;

Fig. 11 is a table for explaining an example of rules for specifying the contents of an operation in the case where the application that outputs an aerial image is document creation software;

Fig. 12 is an example of a flowchart for explaining the contents of processing by the operation reception apparatus according to the first exemplary embodiment;

Fig. 13A and 13B are views for explaining a specific example 1 of an operation using a gesture, and fig. 13A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 13B illustrates an aerial image displayed after receiving the operation;

Fig. 14A and 14B are views for explaining a specific example 2 of an operation using a gesture, and fig. 14A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 14B illustrates an aerial image displayed after receiving the operation;

Fig. 15A and 15B are views for explaining a specific example 3 of an operation using a gesture, and fig. 15A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 15B illustrates an aerial image displayed after receiving the operation;

Fig. 16A and 16B are views for explaining a specific example 4 of an operation using a gesture, and fig. 16A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 16B illustrates an aerial image displayed after receiving the operation;

Fig. 17A and 17B are views for explaining a specific example 5 of an operation using a gesture, and fig. 17A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 17B illustrates an aerial image displayed after receiving the operation;

Fig. 18A and 18B are views for explaining a specific example 6 of an operation using a gesture, and fig. 18A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 18B illustrates an aerial image displayed after receiving the operation;

Fig. 19A and 19B are views for explaining a specific example 7 of an operation using a gesture, and fig. 19A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 19B illustrates an aerial image displayed after receiving the operation;

Fig. 20A and 20B are views for explaining a specific example 8 of an operation using a gesture, and fig. 20A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 20B illustrates an aerial image displayed after receiving the operation;

Fig. 21A and 21B are views for explaining a specific example 9 of an operation using a gesture, and fig. 21A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 21B illustrates an aerial image displayed after receiving the operation;

Fig. 22A and 22B are views for explaining a specific example 10 of an operation using a gesture, and fig. 22A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 22B illustrates an aerial image displayed after receiving the operation;

Fig. 23A and 23B are views for explaining a specific example 11 of an operation using a gesture, and fig. 23A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 23B illustrates an aerial image displayed after receiving the operation;

Fig. 24A and 24B are views for explaining a specific example 12 of an operation using a gesture, and fig. 24A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 24B illustrates an aerial image displayed after receiving the operation;

Fig. 25A and 25B are views for explaining a specific example 13 of an operation using a gesture, and fig. 25A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 25B illustrates an aerial image displayed after receiving the operation;

Fig. 26A and 26B are views for explaining a specific example 14 of an operation using a gesture, and fig. 26A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 26B illustrates an aerial image displayed after receiving the operation;

Fig. 27A and 27B are views for explaining a specific example 15 of an operation using a gesture, and fig. 27A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 27B illustrates an aerial image displayed after receiving the operation;

Fig. 28A and 28B are views for explaining a specific example 16 of an operation using a gesture, and fig. 28A illustrates a positional relationship between operating hands (right hand and left hand) and fig. 28B illustrates an aerial image displayed after receiving the operation;

Fig. 29A and 29B are views for explaining a specific example 17 of an operation using a gesture, and fig. 29A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 29B illustrates an aerial image displayed after receiving the operation;

Fig. 30A and 30B are views for explaining a specific example 18 of an operation using a gesture, and fig. 30A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 30B illustrates an aerial image displayed after receiving the operation;

Fig. 31A and 31B are views for explaining a specific example 19 of an operation using a gesture, and fig. 31A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 31B illustrates an aerial image displayed after receiving the operation;

Fig. 32A and 32B are views for explaining a specific example 20 of an operation using a gesture, and fig. 32A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 32B illustrates an aerial image displayed after receiving the operation;

Fig. 33A and 33B are views for explaining a specific example 21 of an operation using a gesture, and fig. 33A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 33B illustrates an aerial image displayed after receiving the operation;

Fig. 34A and 34B are views for explaining a specific example 22 of an operation using a gesture, and fig. 34A illustrates a positional relationship between an operating hand (left hand) and an aerial image and fig. 34B illustrates an aerial image displayed after receiving the operation;

Fig. 35A and 35B are views for explaining a specific example 23 of an operation using a gesture, and fig. 35A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 35B illustrates an aerial image displayed after receiving the operation;

Fig. 36A and 36B are views for explaining a specific example 24 of an operation using a gesture, and fig. 36A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 36B illustrates an aerial image displayed after receiving the operation;

Fig. 37A and 37B are views for explaining a specific example 25 of an operation using a gesture, and fig. 37A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 37B illustrates an aerial image displayed after receiving the operation;

Fig. 38A to 38C are views for explaining a specific example 26 of an operation using a gesture, and fig. 38A illustrates a positional relationship between an operating hand (right hand) and an aerial image, fig. 38B illustrates an operation on an aerial image, and fig. 38C illustrates an aerial image displayed after receiving a two-stage operation;

Fig. 39A to 39C are views for explaining a specific example 27 of an operation using a gesture, and fig. 39A illustrates a positional relationship between an operating hand (right hand) and an aerial image, fig. 39B illustrates an operation on an aerial image, and fig. 39C illustrates an aerial image displayed after receiving a two-stage operation;

Fig. 40A and 40B are views for explaining a specific example 28 of an operation using a gesture, and fig. 40A illustrates a positional relationship between an operating hand (right hand) and an aerial image and fig. 40B illustrates an aerial image displayed after receiving the operation; and

Fig. 41A to 41C are views for explaining a specific example 29 of an operation using a gesture, and fig. 41A illustrates a positional relationship between an operating hand (right hand) and an aerial image, fig. 41B illustrates an operation on an aerial image, and fig. 41C illustrates an aerial image displayed after further receiving the operation.

Detailed Description

Exemplary embodiments of the present disclosure are described with reference to the accompanying drawings.

First exemplary embodiment

General configuration of the aerial image forming system

Fig. 1 is a view for explaining a general configuration of an aerial image forming system 1 according to a first exemplary embodiment.

In the present exemplary embodiment, the aerial image 10 is an image formed in the air, and is formed by, for example, reproducing a state of light equivalent to light reflected by an object in the air.

The aerial image 10 is an image floating in the air so that a person can pass through the aerial image 10.

The aerial image forming system 1 shown in fig. 1 includes: an aerial image forming apparatus 11 that forms the aerial image 10 in the air, an operation reception apparatus 12 that detects a direction in which a person approaches the aerial image 10 and receives an operation performed on the aerial image 10, a camera 13 that captures an image of a gesture performed on the aerial image 10 by the user 3, and an aerial haptic device 14 that gives a stimulus to a part of the body (for example, the right hand 3R or the left hand 3L) according to the content of the received operation.

The aerial image forming system 1 according to the present exemplary embodiment is an example of an information processing system, and the operation reception apparatus 12 is an example of an information processing apparatus.

The aerial image forming apparatus 11 is an example of an image forming unit.

The aerial image 10 is a form of display area and is used to display various types of information. For example, the aerial image 10 is used to display a still image such as a document, a painting, a picture, or a map, a moving image such as a video, or a composite image combining a still image and a moving image. For example, the aerial image 10 is used for guidance, advertising, operation, development, learning, and the like through such display.

In fig. 1, the outer edge (i.e., the maximum display area) of the aerial image 10 is defined by a spherical shape, but the shape defining the outer edge is not limited. For example, the outer edge of aerial image 10 may be defined by the outer edge of an object displayed as aerial image 10.

In the present exemplary embodiment, an object is something to be displayed or processed, and is delimited by an outer edge serving as a boundary with the outside space.

For example, the object may be defined by the outline of an operation button image, the outline of a human image, the outline of an animal image, the outline of a product image, the outline of a fruit image, or the like.

The outer edge of the aerial image 10 may be a plane or may be a solid figure such as a curved surface or a rectangular parallelepiped. In the case where the aerial image 10 has a stereoscopic shape, the aerial image 10 may be hollow or may have an internal structure.

In fig. 1, the user is in front of the aerial image 10 (on the negative side in the X direction with respect to the aerial image 10), and makes a gesture of touching the outer peripheral surface of the spherical aerial image 10 with the right hand 3R and the left hand 3L. The right hand 3R and the left hand 3L are examples of the detection object.

Since the aerial image 10 is an image optically formed in the air (because there is no physical projection screen or display device), the user 3 can see the back side of the aerial image 10 and the background behind the aerial image 10 through the aerial image 10.

In the present exemplary embodiment, the camera 13 is positioned so that it can detect not only the gesture of the user 3 but also the relationship between the spatial position at which the gesture starts and the aerial image 10.

For example, the camera 13 is disposed above (in the positive direction of the Z axis) or below (in the negative direction of the Z axis) the aerial image 10 in the vertical direction. A plurality of cameras 13 may be provided to enclose the aerial image 10.

Instead of the camera 13 or in combination with the camera 13, a technique for measuring the distance to an object in space or a sensor technique for detecting an object crossing an optical detection surface may be used.

As a technique of measuring a distance, for example, the following methods may be used alone or in combination: a Time Of Flight (ToF) method in which a distance from an object is measured by measuring, for each pixel, a Time required for light emitted from a semiconductor laser or a Light Emitting Diode (LED) to return after being reflected by the object; a Structured Light (SL) time-series pattern projection method in which a distance to an object is measured based on a luminance change occurring in a pixel of an image of the object on which a vertical stripe pattern that changes in time series is projected; a method of measuring a distance to an object using ultrasonic waves or millimeter waves; and a method of measuring a distance to an object using laser light or infrared light. Examples of technologies combined with such technologies include a technology of recognizing a gesture by processing a captured image.
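
As a rough illustration of the time-of-flight relation above, the distance to the object is half of the distance that light travels during the measured round-trip time. A minimal sketch of that arithmetic in Python (the function name and sample numbers are illustrative, not part of the disclosure):

    # Minimal time-of-flight sketch: distance from a measured round-trip time.
    # Real ToF sensors measure this per pixel; this shows only the core relation.
    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def tof_distance(round_trip_seconds: float) -> float:
        """Distance to the reflecting object, in meters."""
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A round trip of about 6.67 ns corresponds to an object roughly 1 m away.
    print(tof_distance(6.67e-9))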

Examples of the aerial image forming apparatus

The principle of forming aerial image 10 is described below with reference to fig. 2 to 6. Note that various principles described below are known.

Fig. 2A and 2B illustrate the principle of the aerial image forming apparatus 11A that forms the aerial image 10 by passing light output from the display apparatus 21 through the dedicated optical plate 22. Fig. 2A illustrates a positional relationship between the respective components and the aerial image 10, and fig. 2B illustrates a part of the cross-sectional structure of the optical plate 22.

The optical plate 22 has the following structure: a plate in which glass strips 22A whose wall surfaces serve as mirrors are arranged and a plate in which glass strips 22B are arranged in a direction orthogonal to the glass strips 22A are stacked on each other vertically.

The optical plate 22 reproduces an image displayed on the display device 21 in the air by reflecting light output from the display device 21 twice by the glass strips 22A and 22B, thereby forming an image in the air. The distance between the display device 21 and the optical plate 22 is the same as the distance between the optical plate 22 and the aerial image 10. The size of the image displayed on the display device 21 is the same as the size of the aerial image 10.
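
Because the display-to-plate distance equals the plate-to-image distance and the size is preserved, the position of the aerial image 10 can be treated as the mirror image of the display device 21 across the plane of the optical plate 22. A toy calculation in Python under that assumption (the coordinate convention is illustrative):

    def aerial_image_point(display_point, plate_z=0.0):
        """Mirror a display-side point across the optical plate plane z = plate_z.

        Illustrative stand-in for the distance equality described above;
        assumes the plate lies in a plane of constant z.
        """
        x, y, z = display_point
        return (x, y, 2.0 * plate_z - z)

    # A point 0.3 m behind the plate images 0.3 m in front of it, same x and y.
    print(aerial_image_point((0.05, 0.02, -0.3)))  # (0.05, 0.02, 0.3)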

Fig. 3 illustrates the principle of an aerial image forming apparatus 11B that forms a three-dimensional image as the aerial image 10. The aerial image forming apparatus 11B reproduces a three-dimensional image in the air by passing light reflected by the surface of the real object 23 through the two annular optical plates 22. Note that the optical plates 22 are not necessarily disposed in series.

Fig. 4A and 4B illustrate the principle of an aerial image forming apparatus 11C that forms an aerial image 10 by using a micromirror array 24 having the following structure: the minute rectangular holes 24A constituting the dihedral corner reflector are arranged at equal intervals in a plane. Fig. 4A illustrates a positional relationship between the respective components and the aerial image 10, and fig. 4B illustrates an enlarged view of a part of the micromirror array 24. For example, each of the holes 24A has a square shape, and each side has a length of 100 μm.

Fig. 5 illustrates the principle of the aerial image forming apparatus 11D using the beam splitter 26 and the retroreflective sheet 27. The beam splitter 26 is arranged at 45 degrees with respect to the display surface of the display device 25. The retroreflective sheet 27 is disposed at 90 degrees with respect to the display surface of the display device 25 in the reflection direction of the display image by the beam splitter 26.

In the case of the aerial image forming apparatus 11D, light output from the display device 25 is reflected by the beam splitter 26 toward the retroreflective sheet 27, is subsequently retroreflected by the retroreflective sheet 27, passes through the beam splitter 26, and then forms an image in the air. An aerial image 10 is formed at the location where the light forms the image.

Fig. 6 illustrates the principle of an aerial image forming apparatus 11E that forms an aerial image as a collection of plasma emitters.

In the case of the aerial image forming apparatus 11E, an infrared pulse laser 28 outputs pulsed laser light, and an XYZ scanner 29 focuses the pulsed laser light in the air. In this process, the gas near the focal point is instantaneously converted into plasma and emits light.

In this case, the pulse frequency is, for example, 100 Hz or less, and the pulse emission time is, for example, on the order of nanoseconds.

The method of generating aerial images is not limited to the methods described in fig. 2 through 6.

For example, holographic methods may be used to generate aerial images.

Alternatively, a method may be used that makes the user feel as if an image is floating in the air by synthesizing light from a scene and light from a displayed image using a transparent prism (e.g., a holographic optical element) disposed in front of the user's eyes.

Alternatively, a method of making a user wearing a head mounted display feel as if an image floats in front of the user may be used.

Configuration of the operation reception apparatus 12

Fig. 7 is a view for explaining an example of a hardware configuration of the operation reception apparatus 12 according to the first exemplary embodiment.

The operation reception apparatus 12 includes: a Central Processing Unit (CPU) 31 that provides various functions by executing firmware or application programs, a Read Only Memory (ROM) 32 as a storage area that stores firmware and a Basic Input Output System (BIOS), and a Random Access Memory (RAM) 33 as a program execution area. The CPU 31, ROM 32, and RAM 33 constitute a computer.

The operation reception apparatus 12 includes a storage device 34 that stores information displayed as the aerial image 10. The storage device 34 is, for example, a rewritable nonvolatile storage medium.

The operation reception apparatus 12 changes the content of the image displayed as the aerial image 10 according to the content of the operation by controlling the aerial image forming apparatus 11 using the communication interface (communication IF) 35.

The operation reception apparatus 12 is connected to: the camera 13 that takes images of the user's gestures, and the aerial haptic device 14 that gives a stimulus to a part of the body according to the content of the operation.

For example, the aerial haptic device 14 according to the present exemplary embodiment is constituted by an ultrasonic oscillator array in which a plurality of ultrasonic oscillators are arranged in a grid. This type of aerial haptic device 14 can generate a focal point of ultrasonic waves at any position in the air. The tactile sensation perceived by the user is changed by adjusting the distribution of focal points or the intensity of vibration.
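
One common way to form a focal point at an arbitrary position with such a grid of oscillators is to delay each oscillator so that all wavefronts arrive at the focus simultaneously. A minimal sketch assuming that phased-array approach (the array geometry, pitch, and function name are illustrative and not taken from the disclosure):

    import math

    SOUND_SPEED = 343.0  # m/s, speed of sound in air at room temperature

    def focus_delays(emitters, focus):
        """Per-oscillator firing delays (s) so wavefronts coincide at `focus`.

        emitters: list of (x, y, z) oscillator positions in meters.
        focus: (x, y, z) target focal point in meters.
        """
        dists = [math.dist(e, focus) for e in emitters]
        farthest = max(dists)
        # The farthest oscillator fires first (zero delay); nearer ones wait.
        return [(farthest - d) / SOUND_SPEED for d in dists]

    # A 4 x 4 grid with 10 mm pitch, focusing 0.2 m above the array center.
    grid = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
    print(focus_delays(grid, (0.015, 0.015, 0.2)))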

The CPU 31 and the respective units are connected by a bus 37.

Fig. 8 is a view for explaining an example of the functional configuration of the operation reception apparatus 12 (see fig. 1) according to the first exemplary embodiment.

The functional configuration shown in fig. 8 is realized by the CPU 31 executing a program.

The CPU 31 functions as: a start position detection unit 41 that detects the start position of a gesture performed on the aerial image 10 (see fig. 1), a movement direction detection unit 42 that detects the direction in which the part of the body performing the gesture moves, an operation content decision unit 43 that decides the content of the operation based on the start position of the gesture and the direction of the movement, and a screen update unit 44 that updates the aerial image 10 (see fig. 1) based on the decided content of the operation.

The start position detection unit 41 and the movement direction detection unit 42 are examples of detection units, and the operation content decision unit 43 is an example of a control unit.

In the present exemplary embodiment, an aerial gesture is received as an operation on the aerial image 10. The operation on the aerial image 10 is specified by the starting position of the movement and the direction of the detected movement. The start position detection unit 41 and the moving direction detection unit 42 are used to detect these pieces of information.

In the present exemplary embodiment, the start position detection unit 41 detects a position where a part of the body (for example, a hand or a leg) for operation remains stationary for a predetermined time or more as the start position of the movement. A part of the body such as a hand, a finger, or a leg is an example of the detection object. A part of the body serving as a detection object is set in the start position detection unit 41 in advance.

The state considered stationary need not be a completely stationary state. The state considered as stationary is defined by the program processing the image.

The reason why the part of the body needs to be kept stationary for a predetermined time or more is that it is necessary to distinguish between a motion of moving the part of the body to a starting point and a motion serving as an operation. In the present exemplary embodiment, for example, the part of the body needs to be kept still for two seconds.
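
A start position can therefore be detected by checking that the tracked part stays within a small tolerance for the required time. A minimal sketch, assuming hand positions sampled at a fixed frame rate and the two-second threshold mentioned above (the tolerance and frame rate are illustrative):

    import math

    def find_start_position(samples, fps=30, still_seconds=2.0, tol=0.02):
        """Return the first position held still for `still_seconds`, else None.

        samples: list of (x, y, z) hand positions in meters, one per frame.
        tol: maximum drift (m) still regarded as stationary.
        """
        window = int(fps * still_seconds)
        for i in range(len(samples) - window + 1):
            anchor = samples[i]
            if all(math.dist(anchor, p) <= tol for p in samples[i:i + window]):
                return anchor  # held still long enough: this is the start position
        return None  # still moving toward the starting point, or no gesture yet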

The start position detection unit 41 detects the position of a part of the body (for example, a hand) at the start of the operation as a relative relationship with the aerial image 10 (see fig. 1) by processing the image captured by the camera 13.

Fig. 9A to 9C are views for explaining the positions detected by the start position detecting unit 41. Fig. 9A illustrates a case where the right hand 3R and the left hand 3L are located outside the aerial image 10 (not overlapping the aerial image 10), fig. 9B illustrates a case where the right hand 3R and the left hand 3L are in contact with the outer edge of the aerial image 10, and fig. 9C illustrates a case where the right hand 3R and the left hand 3L are located inside the aerial image 10 (overlapping the aerial image 10).

Contact with the outer edge does not mean strict contact.

In the present exemplary embodiment, a case where the part of the body for operation exists within a predetermined range from the outer edge of the aerial image 10 is regarded as a state where the part of the body for operation is in contact with the outer edge. The range includes not only the outer side of the outer edge but also the inner side of the outer edge. The range on the outer side of the outer edge and the range on the inner side of the outer edge may be set to different values. The value of the range may be set according to the content of the aerial image 10.
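
For the spherical aerial image 10 of fig. 1, the three cases of figs. 9A to 9C reduce to comparing the hand's distance from the center of the sphere against the radius, with a tolerance band around the outer edge standing in for contact. A rough sketch under that assumption (the radius and the two tolerances, which may differ as noted above, are illustrative):

    import math

    def classify_start(position, center, radius, outer_tol=0.03, inner_tol=0.02):
        """Classify a start position against a spherical aerial image.

        Returns "contact" within the tolerance band around the outer edge
        (the outward and inward tolerances may be set to different values),
        "inside" when the position overlaps the image, "outside" otherwise.
        All values are in meters and purely illustrative.
        """
        d = math.dist(position, center)
        if radius - inner_tol <= d <= radius + outer_tol:
            return "contact"
        return "inside" if d < radius else "outside"

    print(classify_start((0.0, 0.0, 0.21), (0.0, 0.0, 0.0), 0.2))  # contact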

Although figs. 9A to 9C illustrate a case where the positions of the right hand 3R and the left hand 3L are the same, the positions of the right hand 3R and the left hand 3L may be different. For example, the right hand 3R may be located inside the aerial image 10, and the left hand 3L may be located outside the aerial image 10.

Although fig. 9A to 9C illustrate the case where the part of the body for operation is both hands, the part of the body for operation may be one hand, may be fingers, or may be legs. In the present exemplary embodiment, an object such as a stick operated by a user is also regarded as equivalent to a part of a body for operation. An object such as a stick operated by a user is also an example of the detection object. Examples of objects may include gloves and shoes.

The number of fingers used for operation may be one, or may be more than one. A particular finger may be designated as the part used for operation. By specifying the part used for operation, false detections occur less frequently.

The movement direction detection unit 42 detects a motion toward the aerial image 10, a motion away from the aerial image 10, a motion along the outer edge, and the like as the operation-related motion. For a motion along the outer edge, the direction in which the part of the body moves may also be detected. In the case of performing an operation using a plurality of parts (e.g., both hands or a plurality of fingers), whether the interval between the parts narrows or widens may also be detected.
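
These categories can be computed from the start and end positions of the tracked parts: comparing each part's distance to the image separates "toward" from "away" (a roughly constant distance reads as movement along the outer edge), and comparing the gap between two parts separates narrowing from widening. A rough sketch (the thresholds are illustrative):

    import math

    def radial_motion(start, end, center, tol=0.01):
        """Classify one part's motion relative to the image center."""
        d0, d1 = math.dist(start, center), math.dist(end, center)
        if d1 < d0 - tol:
            return "toward"
        if d1 > d0 + tol:
            return "away"
        return "along"  # distance roughly constant: motion along the outer edge

    def gap_change(a0, b0, a1, b1, tol=0.01):
        """Classify the interval between two parts (e.g., both hands)."""
        g0, g1 = math.dist(a0, b0), math.dist(a1, b1)
        if g1 < g0 - tol:
            return "narrowing"  # e.g., hands or fingers moved toward each other
        if g1 > g0 + tol:
            return "widening"   # e.g., hands or fingers moved away from each other
        return "steady"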

Fig. 10 and 11 are tables illustrating examples of rules used by the operation content decision unit 43 to specify the content of an operation.

Fig. 10 is a table for explaining an example of rules for specifying the contents of operations in the case where the application that outputs the aerial image 10 is drawing software.

Fig. 11 is a table for explaining an example of rules for specifying the contents of operations in the case where the application that outputs the aerial image 10 is document creation software.

Each table shown in fig. 10 and 11 is composed of a column R1 showing the number given to the combination, a column R2 showing the application of outputting the aerial image 10, a column R3 showing the content displayed as the aerial image 10, a column R4 showing the portion used for the operation, a column R5 showing the start position, a column R6 showing the movement direction, and a column R7 showing the content of the operation.
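
In effect, each table is a lookup keyed on the application, the part used for the operation, the start position, and the movement direction. A condensed sketch holding a few rows from fig. 10 and fig. 11 (the key encoding is illustrative):

    # (application, part, start position, direction) -> content of operation.
    # A few rows condensed from the rules of fig. 10 and fig. 11.
    RULES = {
        ("drawing", "both hands", "outside", "apart"): "enlarge maximum display area equally",
        ("drawing", "both hands", "inside", "apart"): "locally enlarge portion sandwiched between hands",
        ("drawing", "single hand", "inside", "slide"): "change attribute of the object",
        ("document", "single hand", "inside", "slide"): "delete the object slid over",
    }

    def decide_operation(application, part, start, direction):
        """Operation content decision: None means the gesture is not an operation."""
        return RULES.get((application, part, start, direction))

    # The same gesture is received as different operations in different applications:
    print(decide_operation("drawing", "single hand", "inside", "slide"))
    print(decide_operation("document", "single hand", "inside", "slide"))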

First, a case where the application is drawing software is described (see fig. 10).

In the case of fig. 10, the drawing software is three-dimensional computer graphics (3DCG) software or two-dimensional computer graphics (2DCG) software.

Combination 1

The aerial image 10 is a stereoscopic image or a planar image.

Combination 1 corresponds to a case where both hands located outside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved toward each other.

In this case, the operation of the user is received as an operation of equally reducing the maximum display area of the aerial image 10.

Combination 2

The aerial image 10 is a stereoscopic image or a planar image.

Combination 2 corresponds to a case where both hands located outside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of equally enlarging the maximum display area of the aerial image 10.

Combination 3

The aerial image 10 is a stereoscopic image or a planar image.

Combination 3 corresponds to a case where both hands located inside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved toward each other.

In this case, the operation of the user is received as an operation of partially reducing the partial image of the aerial image 10 sandwiched between both hands (the partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). Meanwhile, a portion of the aerial image 10 that is not sandwiched by both hands is deformed so as to be enlarged in accordance with the reduction of the region.

Combination 4

The aerial image 10 is a stereoscopic image or a planar image.

Combination 4 corresponds to a case where both hands located inside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of locally enlarging a partial image of the aerial image 10 sandwiched between both hands (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched by both hands is deformed so as to be reduced in accordance with the enlargement of the region.
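
This local enlargement with compensating reduction can be pictured in one dimension as a piecewise-linear remap of a display axis: the interval sandwiched between the hands is stretched, and the remainder is squeezed so that the maximum display area stays unchanged. A toy sketch under that simplification (not the actual rendering performed by the drawing software):

    def remap_axis(t, lo, hi, scale):
        """Piecewise-linear remap of coordinate t in [0, 1].

        The interval [lo, hi] (the portion between the hands) is stretched
        about its center by `scale`; the rest is compressed so that [0, 1]
        as a whole, i.e., the maximum display area, is preserved.
        """
        mid = (lo + hi) / 2.0
        new_half = (hi - lo) * scale / 2.0
        new_lo, new_hi = mid - new_half, mid + new_half
        if t < lo:   # left remainder, compressed toward 0
            return t * (new_lo / lo) if lo > 0 else new_lo
        if t > hi:   # right remainder, compressed toward 1
            return new_hi + (t - hi) * (1.0 - new_hi) / (1.0 - hi)
        return new_lo + (t - lo) * scale  # sandwiched portion, stretched

    # Stretch the middle fifth by 1.5x; the endpoints stay pinned at 0 and 1.
    print([round(remap_axis(x / 10, 0.4, 0.6, 1.5), 3) for x in range(11)])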

Combination 5

The aerial image 10 is a stereoscopic image or a planar image.

Combination 5 corresponds to a case where a single hand located outside the aerial image 10 (at a position not considered to be in contact with the aerial image 10) is moved toward the aerial image 10.

In this case, the operation of the user is received as an operation of moving the entire aerial image 10 in the direction of moving the hand.

Combination 6

The aerial image 10 is a stereoscopic image or a planar image.

Combination 6 corresponds to a situation where multiple fingers (e.g., thumb and forefinger) of a single hand located outside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved toward one another.

In this case, the operation of the user is received as an operation of equally reducing the maximum display area of the aerial image 10.

Combination 7

The aerial image 10 is a stereoscopic image or a planar image.

Combination 7 corresponds to a situation where multiple fingers (e.g., thumb and forefinger) of a single hand located outside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of equally enlarging the maximum display area of the aerial image 10.

Combination 8

The aerial image 10 is a stereoscopic image or a planar image.

Combination 8 corresponds to a case where a single hand located inside the aerial image 10 (at a position not regarded as being in contact with the aerial image 10) is moved to slide on a specific object.

In this case, an operation by the user is received as an operation of changing the attribute of a specific object constituting a part of the aerial image 10.

Examples of attributes include the color and attack power of the object.

Combination 9

The aerial image 10 is a stereoscopic image or a planar image.

Combination 9 corresponds to a situation where multiple fingers (e.g., thumb and forefinger) of a single hand located inside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved away from each other.

In this case, the user's operation is received as an operation of partially enlarging a partial image of the aerial image 10 sandwiched between fingers (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched between the fingers is deformed so as to be reduced in accordance with the enlargement of the area.

Combination 10

The aerial image 10 is a stereoscopic image or a planar image.

Combination 10 corresponds to a situation where multiple fingers (e.g., thumb and forefinger) of a single hand located inside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved toward each other.

In this case, the user's operation is received as an operation of partially reducing a partial image of the aerial image 10 sandwiched between fingers (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched between the fingers is deformed so as to be enlarged in accordance with the reduction of the area.

Combination 11

In the case of the present combination, the aerial image 10 is a stereoscopic image.

Combination 11 corresponds to a case where a single hand located on the outer edge of the aerial image 10 (at a position deemed to be in contact with the aerial image 10) is moved along the outer edge.

In this case, the operation of the user is received as an operation of rotating the entire aerial image 10 in the direction of moving the hand.

As described above, different combinations of the part of the body used for the operation, the start position of the operation, and the direction of the movement are received as different operations.

In the present exemplary embodiment, a gesture that does not correspond to any one of combinations 1 to 11 is not regarded as an operation.

In the case where the start position of the operation and the direction of the movement are classified according to a predetermined rule as shown in fig. 10, a small difference in position and direction can be ignored.

Further, by deciding on a combination of operations in advance, it is possible to provide the user with predictability of the content of the operations performed by the user's gestures.

Next, a case where the application is document creation software is described (see fig. 11).

Combination 1

The aerial image 10 is a document. It is assumed here that the document is displayed page by page.

Combination 1 corresponds to a case where both hands located outside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved toward each other.

In this case, the operation of the user is received as an operation of equally reducing the maximum display area of the aerial image 10.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 2

The aerial image 10 is a document.

Combination 2 corresponds to a case where both hands located outside the aerial image 10 (at positions that are not considered to be in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of equally enlarging the maximum display area of the aerial image 10.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 3

The aerial image 10 is a document.

Combination 3 corresponds to a case where both hands located inside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved toward each other.

In this case, the operation of the user is received as an operation of partially reducing the partial image of the aerial image 10 sandwiched between both hands (the partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched by both hands is deformed so as to be enlarged in accordance with the reduction of the region.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 4

The aerial image 10 is a document.

Combination 4 corresponds to a case where both hands located inside the aerial image 10 (at positions that are not regarded as being in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of locally enlarging a partial image of the aerial image 10 sandwiched between both hands (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched by both hands is deformed so as to be reduced in accordance with the enlargement of the region.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 5

The aerial image 10 is a document.

Combination 5 corresponds to a case where a single hand located outside the aerial image 10 (at a position not considered to be in contact with the aerial image 10) is moved toward the aerial image 10.

In this case, the operation of the user is received as an operation of moving the entire aerial image 10 in the direction of moving the hand.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 6

The aerial image 10 is a document.

Combination 6 corresponds to a case where multiple fingers (e.g., thumb and forefinger) of a single hand located outside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved toward one another.

In this case, the operation of the user is received as an operation of equally reducing the maximum display area of the aerial image 10.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 7

The aerial image 10 is a document.

Combination 7 corresponds to a case where multiple fingers (e.g., thumb and forefinger) of a single hand located outside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved away from each other.

In this case, the operation of the user is received as an operation of equally enlarging the maximum display area of the aerial image 10.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 8

The aerial image 10 is a document.

Combination 8 corresponds to a case where a single hand located inside the aerial image 10 (at a location not considered to be in contact with the aerial image 10) is moved to slide over the aerial image 10.

In this case, the operation of the user is received as an operation of deleting the object in the portion where the slide motion has been performed.

Although the same gesture is performed, the content of the operation is different from that in the case where the software is drawing software (see fig. 10).

Combination 9

The aerial image 10 is a document made up of a plurality of pages.

Combination 9 corresponds to a case where a single hand located inside the aerial image 10 (at a position not considered to be in contact with the aerial image 10) is moved to slide over the aerial image 10.

In this case, the operation of the user is received as an operation of scrolling through the page displayed as the aerial image 10 in the direction of the slide.

The content of the operation is different from that in the case where the software is drawing software (see fig. 10).

Combination 10

The aerial image 10 is a document made up of a plurality of pages.

Combination 10 corresponds to a case where multiple fingers (e.g., thumb and forefinger) of a single hand located inside the aerial image 10 (at locations not considered to be in contact with the aerial image 10) are moved away from each other.

In this case, the user's operation is received as an operation of partially enlarging a partial image of the aerial image 10 sandwiched between fingers (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched between the fingers is deformed so as to be reduced in accordance with the enlargement of the area.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

Combination 11

The aerial image 10 is a document made up of a plurality of pages.

Combination 11 corresponds to a case where a plurality of fingers (e.g., thumb and forefinger) of a single hand located inside the aerial image 10 (at positions not regarded as being in contact with the aerial image 10) are moved toward each other.

In this case, the user's operation is received as an operation of partially reducing a partial image of the aerial image 10 sandwiched between fingers (a partial image located on the user side in the case where the aerial image 10 is a stereoscopic image). At the same time, the portion of the aerial image 10 not sandwiched between the fingers is deformed so as to be enlarged in accordance with the reduction of the area.

The content of the operation is the same as that in the case where the software is drawing software (see fig. 10).

As described above, when the software that outputs the aerial image 10 differs, even the same combination is received as a different operation.

Needless to say, in some cases the same combination is received as the same operation even when the software that outputs the aerial image 10 differs.

Even in the case where the software that outputs the aerial image 10 is drawing software, the movement of sliding a single hand on the inner side of the aerial image 10 can be received as an operation of deleting an object.

In contrast, even in the case where the software that outputs the aerial image 10 is document creation software, the movement of sliding a single hand on the inner side of the aerial image 10 can be received as an operation of changing an attribute (e.g., font or color) of the object in the portion over which the sliding movement has been performed.

Which operation is to be received may be switched depending on the direction of the sliding.
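As a sketch of this application-dependent interpretation, the decision could additionally be keyed by the running software and, for a slide, by its direction. The function below is a hypothetical illustration for combination 8 only; the names and the direction rule are assumptions.

    def decide_content_for_slide(application, slide_direction=None):
        # Combination 8: a single hand sliding on the inner side of the aerial image.
        if application == "drawing":
            return "change object color"
        if application == "document":
            # The received operation may be switched by the direction of the slide.
            if slide_direction == "horizontal":
                return "delete object"
            return "change object attribute"   # e.g., font or color
        return None                            # unmatched cases are not operations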

Operation reception processing

Next, a process of receiving an operation on the aerial image 10 (see fig. 1) by the operation reception device 12 (see fig. 1) is described.

Fig. 12 is an example of a flowchart for explaining the contents of processing by the operation reception device 12 according to the first exemplary embodiment. The contents of this processing are realized by using the functional units described with reference to fig. 8. The progress of the processing is controlled by executing a program.

In fig. 12, the respective steps constituting this process are denoted by the symbol "S".

First, the start position detection unit 41 (see fig. 8) determines whether or not the state in which the part of the body remains stationary continues for a predetermined time or more (step 1).

The period in which a negative result is obtained in step 1 is a period in which the user is moving the part of the body to the start position of the operation.

In the case where a positive result is obtained in step 1, the start position detection unit 41 (see fig. 8) detects the start position of the motion relating to the aerial image 10 (step 2). For example, it is detected whether the part of the body is located outside the aerial image 10 (the part of the body does not overlap with the aerial image 10), whether the part of the body is located inside the aerial image 10 (the part of the body overlaps with the aerial image 10), and whether the part of the body is in contact with the outer edge of the aerial image 10.
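A minimal geometric sketch of this classification is shown below, treating the display area of the aerial image 10 as an axis-aligned box; the region representation and the contact tolerance are assumptions.

    from dataclasses import dataclass

    @dataclass
    class DisplayRegion:
        cx: float; cy: float; cz: float   # center of the aerial display area
        w: float; h: float; d: float      # extents along each axis

    def classify_start_position(p, r, edge_tol=10.0):
        """Classify a detected position p = (x, y, z) as 'outside', 'inside', or 'edge'."""
        # Largest per-axis excess over the box half-extents:
        # negative if p overlaps the region on every axis, positive otherwise.
        m = max(abs(p[0] - r.cx) - r.w / 2,
                abs(p[1] - r.cy) - r.h / 2,
                abs(p[2] - r.cz) - r.d / 2)
        if abs(m) <= edge_tol:
            return "edge"                 # regarded as contact with the outer edge
        return "inside" if m < 0 else "outside"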

Next, the movement direction detection unit 42 (see fig. 8) detects the direction of the movement of the part of the body for operation (step 3).

Upon detecting the start position and the direction of movement relating to the operation, the operation content decision unit 43 (see fig. 8) decides the content of the operation based on the type of software that outputs the aerial image 10, the content displayed as the aerial image 10, and the like (step 4).

When the content of the operation is determined, processing according to the content of the operation is performed (step 5).

For example, the operation content decision unit 43 gives a stimulus indicating that the operation is received or a stimulus according to the content of the operation by controlling the aerial tactile device 14.

For example, the screen updating unit 44 performs processing according to the content of the operation on the aerial image 10. For example, the screen updating unit 44 changes the display position of the aerial image 10 in the air. For example, the screen updating unit 44 enlarges or reduces the maximum display area of the aerial image 10. For example, the screen updating unit 44 locally enlarges or reduces a partial image (including an object) constituting the aerial image 10. Alternatively, the screen updating unit 44 deletes a partial image (including an object) or changes an attribute.
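Putting steps 1 to 5 together, the reception flow of fig. 12 might be sketched as follows. The `tracker` object, its methods, and `decide_operation_content` (standing in for the operation content decision unit 43) are hypothetical stand-ins, and the dwell time and stillness threshold are assumed values.

    import time

    def wait_for_dwell(tracker, dwell_s=0.5, still_eps=5.0):
        # Step 1: block until the body part stays within still_eps for dwell_s seconds.
        anchor, t0 = tracker.position(), time.time()
        while True:
            p = tracker.position()
            if max(abs(a - b) for a, b in zip(p, anchor)) > still_eps:
                anchor, t0 = p, time.time()   # still moving: restart the dwell timer
            elif time.time() - t0 >= dwell_s:
                return anchor                 # regarded as stationary
            time.sleep(0.01)

    def receive_operation(tracker, haptics, screen):
        start = wait_for_dwell(tracker)                                  # step 1
        start_pos = classify_start_position(start, tracker.region)       # step 2
        direction = tracker.track_direction(start)                       # step 3
        op = decide_operation_content(tracker.application, start_pos, direction)  # step 4
        if op is not None:                                               # step 5
            haptics.stimulate(op)   # stimulus indicating reception or matching the content
            screen.update(op)       # move, scale, delete, change attribute, and so on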

Specific examples of operations

A specific example of an operation performed based on the start position of the movement related to the operation, the direction of the movement, and the like is described below.

Specific example 1

Fig. 13A and 13B are views for explaining a specific example 1 of an operation using a gesture. Fig. 13A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L) and fig. 13B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 13A and 13B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 13A and 13B, the right hand 3R and the left hand 3L of the user are moved toward each other from a state in which the right hand 3R and the left hand 3L are located outside the aerial image 10.

In this case, the maximum display area of the aerial image 10 is reduced equally. Needless to say, the image of the earth is equally reduced according to the reduction of the maximum display area.

This specific example corresponds to combination 1 of fig. 10.

Specific example 2

Fig. 14A and 14B are views for explaining specific example 2 of an operation using a gesture. Fig. 14A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L) and fig. 14B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 14A and 14B, the image of the earth is displayed as an aerial image 10.

In the case of fig. 14A and 14B, the right hand 3R and the left hand 3L of the user are moved away from each other from a state in which the right hand 3R and the left hand 3L are located outside the aerial image 10.

In this case, the maximum display area of the aerial image 10 is equally enlarged. Needless to say, the image of the earth is equally enlarged according to the enlargement of the maximum display area.

This specific example corresponds to combination 2 of fig. 10.

Specific example 3

Fig. 15A and 15B are views for explaining specific example 3 of an operation using a gesture. Fig. 15A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L), and fig. 15B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 15A and 15B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 15A and 15B, the right hand 3R and the left hand 3L of the user are moved toward each other from a state in which the right hand 3R and the left hand 3L are located inside the aerial image 10. Since the aerial image 10 is an image optically formed in the air as described above, the right hand 3R and the left hand 3L can be inserted into the aerial image 10.

In fig. 15A and 15B, the North American continent is sandwiched between the right hand 3R and the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the image of the North American continent is locally reduced. The image of the portion around the North American continent is deformed so as to be enlarged in accordance with the reduction of the North American continent. For example, portions closer to the North American continent (which is deformed so as to be reduced) are deformed more greatly so as to be enlarged.

This specific example corresponds to combination 3 of fig. 10.
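The compensating deformation described in this example can be sketched as a one-dimensional coordinate remap: the sandwiched interval is scaled about its center, and points outside it are displaced by an amount that decays with their distance from the interval. The exponential falloff below is an illustrative assumption, not a formula from the embodiment.

    import math

    def remap(x, a, b, s, falloff=50.0):
        """Remap coordinate x so the interval [a, b] is scaled by factor s about
        its center, while points outside deform less the farther they are."""
        c = (a + b) / 2.0
        if a <= x <= b:
            return c + (x - c) * s            # uniform scaling of the pinched part
        edge = a if x < a else b
        shift = (edge - c) * (s - 1.0)        # displacement of the nearer boundary
        decay = math.exp(-abs(x - edge) / falloff)
        return x + shift * decay              # nearer points are deformed more

With s < 1 this reproduces the local reduction together with the enlargement of the surroundings; with s > 1 the roles are reversed, as in specific example 4.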

Specific example 4

Fig. 16A and 16B are views for explaining specific example 4 of an operation using a gesture. Fig. 16A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L) and fig. 16B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 16A and 16B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 16A and 16B, the right hand 3R and the left hand 3L of the user are moved away from each other from a state in which the right hand 3R and the left hand 3L are located inside the aerial image 10.

In fig. 16A and 16B, the North American continent is sandwiched between the right hand 3R and the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the image of the North American continent is locally enlarged. The image of the portion around the North American continent is deformed so as to be reduced in accordance with the enlargement of the North American continent. For example, portions closer to the North American continent (which is deformed so as to be enlarged) are deformed more greatly so as to be reduced.

This specific example corresponds to combination 4 of fig. 10.

Specific example 5

Fig. 17A and 17B are views for explaining a specific example 5 of an operation using a gesture. Fig. 17A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 17B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 17A and 17B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 17A and 17B, the right hand 3R of the user is moved toward the aerial image 10 from a state in which the right hand 3R is located outside the aerial image 10. In other words, in fig. 17A and 17B, the right hand 3R is moved rightward from the left side of the aerial image 10.

In this case, the size of the maximum display area of the aerial image 10 is not changed, but the position of the aerial image 10 is moved in the direction of moving the right hand 3R.

This specific example corresponds to combination 5 of fig. 10.

Specific example 6

Fig. 18A and 18B are views for explaining specific example 6 of an operation using a gesture. Fig. 18A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 18B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 18A and 18B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 18A and 18B, the right hand 3R of the user is located inside the aerial image 10, and the right hand 3R is moved to slide on the North American continent.

In this case, the size of the maximum display area of the aerial image 10 and the position of the aerial image 10 are not changed, but the color of the displayed North American continent is changed.

This specific example corresponds to combination 8 of fig. 10.

When the object displayed as the aerial image 10 is a character in a video game, the attack power of the character can be increased by the same operation. When the attack power is increased, the character may evolve or the character's equipment may be enhanced.

Specific example 7

Fig. 19A and 19B are views for explaining a specific example 7 of an operation using a gesture. Fig. 19A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 19B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 19A and 19B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 19A and 19B, the right hand 3R of the user is located on the outer edge (circumferential surface) of the aerial image 10, and the right hand 3R is moved along the outer edge.

In this case, the size of the maximum display area of the aerial image 10 and the position of the aerial image 10 are not changed, but the displayed earth is rotated in the direction in which the right hand 3R is moved. In the case of fig. 19A and 19B, the continents viewed from the user's perspective become the African continent and the European continent as a result of this operation.

This specific example corresponds to combination 11 of fig. 10.

When the object displayed as the aerial image 10 is a character in a video game, the display of the character may be switched from a front view to a rear view.

Specific example 8

Fig. 20A and 20B are views for explaining a specific example 8 of an operation using a gesture. Fig. 20A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 20B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 20A and 20B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 20A and 20B, the right hand 3R of the user is located inside the aerial image 10, and the right hand 3R is moved to slide on the aerial image 10 in this state.

In this case, the North American continent, over which the right hand 3R has been slid, is deleted.

This specific example corresponds to combination 8 of fig. 11.

Specific example 9

Fig. 21A and 21B are views for explaining a specific example 9 of an operation using a gesture. Fig. 21A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 21B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 21A and 21B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 21A and 21B, the left hand 3L of the user is located outside the aerial image 10, and the thumb and the index finger are moved toward each other in this state.

In this case, the maximum display area of the aerial image 10 is reduced equally. Needless to say, the image of the earth is equally reduced according to the reduction of the maximum display area.

This specific example corresponds to combination 6 of fig. 10.

Specific example 10

Fig. 22A and 22B are views for explaining a specific example 10 of an operation using a gesture. Fig. 22A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 22B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 22A and 22B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 22A and 22B, the left hand 3L of the user is located outside the aerial image 10, and the thumb and the index finger are moved away from each other in this state.

In this case, the maximum display area of the aerial image 10 is equally enlarged. Needless to say, the image of the earth is equally enlarged according to the enlargement of the maximum display area.

This specific example corresponds to combination 7 of fig. 10.

Specific example 11

Fig. 23A and 23B are views for explaining a specific example 11 of an operation using a gesture. Fig. 23A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 23B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 23A and 23B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 23A and 23B, the left hand 3L of the user is located inside the aerial image 10, and the thumb and the index finger are moved toward each other in this state.

In the case of fig. 23A and 23B, the North American continent is sandwiched between the thumb and the forefinger of the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the image of the North American continent is locally reduced. The image of the portion around the North American continent is deformed so as to be enlarged in accordance with the reduction of the North American continent. For example, portions closer to the North American continent (which is deformed so as to be reduced) are deformed more greatly so as to be enlarged.

This specific example corresponds to combination 10 of fig. 10.

Specific example 12

Fig. 24A and 24B are views for explaining a specific example 12 of an operation using a gesture. Fig. 24A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 24B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 24A and 24B, the image of the earth is displayed as the aerial image 10.

In the case of fig. 24A and 24B, the left hand 3L of the user is located inside the aerial image 10, and the thumb and the index finger are moved away from each other in this state.

In the case of fig. 24A and 24B, a part of the North American continent is sandwiched between the thumb and the index finger of the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the image of the North American continent is locally enlarged. The image of the portion around the North American continent is deformed so as to be reduced in accordance with the enlargement of the North American continent. For example, portions closer to the North American continent (which is deformed so as to be enlarged) are deformed more greatly so as to be reduced.

This specific example corresponds to combination 9 of fig. 10.

Specific example 13

Fig. 25A and 25B are views for explaining a specific example 13 of an operation using a gesture. Fig. 25A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L), and fig. 25B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 25A and 25B, the image of a document "AAAAA/AAAAA/AAAAA/AAAAA" is displayed as the aerial image 10. Each slash (/) indicates a line feed.

In the case of fig. 25A and 25B, the right hand 3R and the left hand 3L of the user are moved toward each other from a state in which the right hand 3R and the left hand 3L are located outside the aerial image 10.

In this case, the maximum display area of the aerial image 10 is reduced equally. Needless to say, the image of the displayed document is also reduced equally according to the reduction of the maximum display area. Specifically, the font size is reduced. In the case where a document includes an illustration and a figure, the illustration and the figure are also reduced equally.

This specific example corresponds to combination 1 of fig. 11.

Specific example 14

Fig. 26A and 26B are views for explaining a specific example 14 of an operation using a gesture. Fig. 26A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L), and fig. 26B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 26A and 26B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 26A and 26B, the right hand 3R and the left hand 3L of the user are moved away from each other from a state in which the right hand 3R and the left hand 3L are located outside the aerial image 10.

In this case, the maximum display area of the aerial image 10 is equally enlarged. Needless to say, the image of the displayed document is equally enlarged according to the enlargement of the maximum display area. Specifically, the size of the font is enlarged. In the case where the document includes illustrations and graphics, the illustrations and graphics are also enlarged equally.

This specific example corresponds to combination 2 of fig. 11.

Specific example 15

Fig. 27A and 27B are views for explaining a specific example 15 of an operation using a gesture. Fig. 27A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L), and fig. 27B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 27A and 27B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 27A and 27B, the right hand 3R and the left hand 3L of the user are moved toward each other from a state in which the right hand 3R and the left hand 3L are located inside the aerial image 10.

In the case of fig. 27A and 27B, the middle three characters (AAA) of the top character string (AAAAA) are sandwiched between the right hand 3R and the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the middle three characters of the top character string are locally reduced. In the case of fig. 27A and 27B, the other characters of the top character string and the subsequent character strings are not changed.

Alternatively, the image of the portion around the middle three characters of the top character string may be deformed so as to be enlarged in accordance with the reduction of the character string. In this case, an image closer to the character string (which is deformed so as to be reduced) may be deformed more greatly so as to be enlarged.

This specific example corresponds to combination 3 of fig. 11.

Specific example 16

Fig. 28A and 28B are views for explaining a specific example 16 of an operation using a gesture. Fig. 28A illustrates the positional relationship between the operating hands (right hand 3R and left hand 3L), and fig. 28B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 28A and 28B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 28A and 28B, the right hand 3R and the left hand 3L of the user are moved away from each other from a state in which the right hand 3R and the left hand 3L are located inside the aerial image 10.

In the case of fig. 28A and 28B, the middle three characters (AAA) of the top character string (AAAAA) are sandwiched between the right hand 3R and the left hand 3L.

In this case, the maximum display area of the aerial image 10 is not changed, but the middle three characters of the top character string are locally enlarged. In the case of fig. 28A and 28B, the other characters of the top character string and the subsequent character strings are not changed.

Alternatively, the image of the portion around the middle three characters of the top character string may be deformed so as to be reduced in accordance with the enlargement of the character string. In this case, an image closer to the character string (which is deformed so as to be enlarged) may be deformed more greatly so as to be reduced.

This specific example corresponds to combination 4 of fig. 11.

Specific example 17

Fig. 29A and 29B are views for explaining a specific example 17 of an operation using a gesture. Fig. 29A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 29B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 29A and 29B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 29A and 29B, the right hand 3R of the user is moved toward the aerial image 10 from a state in which the right hand 3R is located outside the aerial image 10. In other words, in fig. 29A and 29B, the right hand 3R is moved from the right side to the left side of the aerial image 10.

In this case, the size of the maximum display area of the aerial image 10 is not changed, but the position of the aerial image 10 is moved in the direction of moving the right hand 3R.

This specific example corresponds to combination 5 of fig. 11.

Specific example 18

Fig. 30A and 30B are views for explaining a specific example 18 of an operation using a gesture. Fig. 30A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 30B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 30A and fig. 30B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 30A and 30B, the right hand 3R of the user is positioned inside the aerial image 10, and the right hand 3R is moved to slide on the top character string (AAAAA).

In this case, the size of the maximum display area of the aerial image 10 is not changed, but the character string displayed at the position over which the right hand 3R has been moved is deleted.

This specific example corresponds to combination 8 of fig. 11.

Specific example 19

Fig. 31A and 31B are views for explaining a specific example 19 of an operation using a gesture. Fig. 31A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 31B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 31A and 31B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 31A and 31B, the right hand 3R of the user is located inside the aerial image 10, and the thumb and the index finger are moved toward each other in this state.

In the case of fig. 31A and 31B, the three characters (AAA) on the right side of the top character string (AAAAA) are sandwiched between the thumb and the index finger of the right hand 3R.

In this case, the maximum display area of the aerial image 10 is not changed, but the three characters on the right side of the top character string are locally reduced. In the case of fig. 31A and 31B, the other characters of the top character string and the second and subsequent character strings are not changed.

Alternatively, the image of the portion around the characters may be deformed so as to be enlarged in accordance with the reduction of the character string. In this case, an image closer to the character string (which is deformed so as to be reduced) may be deformed more greatly so as to be enlarged.

This specific example corresponds to combination 11 of fig. 11.

Specific example 20

Fig. 32A and 32B are views for explaining a specific example 20 of an operation using a gesture. Fig. 32A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 32B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 32A and 32B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 32A and 32B, the left hand 3L of the user is located outside the aerial image 10, and the thumb and the index finger are moved toward each other in this state.

In this case, the maximum display area of the aerial image 10 is reduced equally. Needless to say, the image of the displayed document is also reduced equally according to the reduction of the maximum display area. Specifically, the font size is reduced. In the case where a document includes an illustration and a figure, the illustration and the figure are also reduced equally.

This specific example corresponds to combination 6 of fig. 11.

Specific example 21

Fig. 33A and 33B are views for explaining a specific example 21 of an operation using a gesture. Fig. 33A illustrates the positional relationship between the operating hand (left hand 3L) and the aerial image 10, and fig. 33B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 33A and 33B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 33A and 33B, the left hand 3L of the user is located outside the aerial image 10, and the thumb and the index finger are moved away from each other in this state.

In this case, the maximum display area of the aerial image 10 is equally enlarged. Needless to say, the image of the displayed document is equally enlarged according to the enlargement of the maximum display area. Specifically, the size of the font is enlarged. In the case where the document includes illustrations and graphics, the illustrations and graphics are also enlarged equally.

This specific example corresponds to combination 7 of fig. 11.

Specific example 22

Fig. 34A and 34B are views for explaining a specific example 22 of an operation using a gesture. Fig. 34A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 34B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 34A and 34B, the document (AAAAA/AAAAA) is displayed as the aerial image 10.

In the case of fig. 34A and 34B, the right hand 3R of the user is located inside the aerial image 10, and the thumb and the index finger are moved away from each other in this state.

In the case of fig. 34A and 34B, the three characters (AAA) on the right side of the top character string (AAAAA) are sandwiched between the thumb and the index finger of the right hand 3R.

In this case, the maximum display area of the aerial image 10 is not changed, but the three characters on the right side of the top character string are locally enlarged. In the case of fig. 34A and 34B, the other characters of the top character string and the second and subsequent character strings are not changed.

Alternatively, the image of the portion around the character may be deformed so as to be reduced in accordance with the enlargement of the character string. In this case, an image closer to the character string (which is deformed so as to be enlarged) may be deformed more largely so as to be reduced.

This specific example corresponds to combination 10 of fig. 11.

Specific example 23

Fig. 35A and 35B are views for explaining a specific example 23 of an operation using a gesture. Fig. 35A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 35B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 35A and 35B, images of a plurality of pages are displayed as the aerial image 10. Specifically, the images of pages 1, 2, and 3 are displayed from the left side to the right side.

In the case of fig. 35A and 35B, the right hand 3R of the user is located inside the aerial image 10, and the right hand 3R is moved to slide on the image.

In this case, the maximum display area of the aerial image 10 is not changed, but the pages displayed as the aerial image 10 are changed. Specifically, the images of pages 2, 3, and 4 are displayed from the left side to the right side. This operation corresponds to turning the displayed pages forward.

This specific example corresponds to combination 9 of fig. 11.

Specific example 24

Fig. 36A and 36B are views for explaining a specific example 24 of an operation using a gesture. Fig. 36A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 36B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 36A and 36B, the aerial image 10 including the game characters 10A and 10B is displayed.

Specific example 24 is a case where the aerial image 10 is output from drawing software, as in specific examples 1 to 12.

In the case of fig. 36A and 36B, the right hand 3R of the user is located inside the aerial image 10, and the right hand 3R is moved to slide on the character 10A located on the right side of fig. 36A and 36B.

In this case, the maximum display area of the aerial image 10 is not changed, but the color of the character 10A is changed.

This specific example corresponds to combination 8 of fig. 10.

Specific example 25

Fig. 37A and 37B are views for explaining a specific example 25 of an operation using a gesture. Fig. 37A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 37B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 37A and 37B, the aerial image 10 including the game characters 10A and 10B is displayed.

In the case of fig. 37A and 37B, the right hand 3R of the user is located inside the aerial image 10, and the right hand 3R is moved to slide on the character 10A located on the right side of fig. 37A and 37B.

However, unlike in specific example 24, this operation is received as an operation of moving the display position of the character 10A in the aerial image 10.

In the case of fig. 37A and 37B, the character 10A attacks the character 10B at the position to which the character 10A has been moved. The expression of the attacked character 10B changes to a pained expression.

Specific example 26

Fig. 38A to 38C are views for explaining a specific example 26 of an operation using a gesture. Fig. 38A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, fig. 38B illustrates the operation on the aerial image 10, and fig. 38C illustrates the aerial image 10 displayed after receiving the two-stage operation.

In the case of fig. 38A to 38C, the aerial image 10 including the game character 10A equipped with the sword 10C is displayed.

The example of fig. 38A to 38C corresponds to an example of switching the content of an operation by combining a plurality of kinds of movements.

In the case of fig. 38A to 38C, the right hand 3R is located outside the aerial image 10. In this state, the right hand 3R is moved toward the aerial image 10. This movement by itself is not recognized as an operation on the aerial image 10 and therefore does not change the aerial image 10.

Then, the right hand 3R of the user is moved to the inside of the aerial image 10, and the right hand 3R is moved so as to slide on the character 10A. Unlike in specific example 23 (see fig. 35A and 35B) and specific example 24 (see fig. 36A and 36B), this operation is received as an operation of moving the aerial image 10.

Specific example 27

Fig. 39A to 39C are views for explaining a specific example 27 of an operation using a gesture. Fig. 39A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, fig. 39B illustrates the operation on the aerial image 10, and fig. 39C illustrates the aerial image 10 displayed after receiving the two-stage operation.

In the case of fig. 39A to 39C, an aerial image 10 including a game character 10A equipped with a sword 10C is displayed.

The example of fig. 39A to 39C corresponds to an example of switching the content of an operation by combining a plurality of kinds of movements.

In the case of fig. 39A to 39C, the right hand 3R is placed in contact with the sword 10C or overlapping the sword 10C. In this state, the right hand 3R is moved closer to the aerial image 10 or inserted into the aerial image 10. In the case of fig. 39A to 39C, this movement is not recognized as an operation on the aerial image 10, and therefore the aerial image 10 is not changed at this stage.

Then, the right hand 3R of the user is moved to the inside of the aerial image 10, and the right hand 3R is moved so as to slide on the character 10A. This operation is received as an operation of changing the attribute of the sword 10C.

In the case of fig. 39A to 39C, the sword 10C is changed to a sword 10C with increased attack power.

In the case of fig. 39A to 39C, the equipment of the character 10A is the sword 10C, but a shield, clothes, shoes, a hat, or the like may instead be changed each time an operation is received.

Specific example 28

Fig. 40A and 40B are views for explaining a specific example 28 of an operation using a gesture. Fig. 40A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, and fig. 40B illustrates the aerial image 10 displayed after receiving the operation.

In the case of fig. 40A and 40B, an aerial image 10 of an egg having three-dimensional data (internal structure data) is displayed.

In the case of fig. 40A and 40B, the right hand 3R of the user is moved toward the aerial image 10 from a state in which the right hand 3R is located outside the aerial image 10. In other words, in fig. 40A and 40B, the right hand 3R is moved from the right side to the left side of the aerial image 10.

In this case, the size of the maximum display area of the aerial image 10 is not changed, but the position of the aerial image 10 is moved in the direction of moving the right hand 3R.

This specific example is an example of an operation on an outer layer image.

Specific example 29

Fig. 41A to 41C are views for explaining a specific example 29 of an operation using a gesture. Fig. 41A illustrates the positional relationship between the operating hand (right hand 3R) and the aerial image 10, fig. 41B illustrates the aerial image 10 displayed after receiving the operation, and fig. 41C illustrates the aerial image 10 after further receiving the operation.

In the case of fig. 41A to 41C, an aerial image 10 of an egg having an internal structure is displayed. Specifically, the egg has an eggshell as the topmost layer, egg white as the middle layer, and egg yolk as the lowermost layer. These three layers may be displayed in parallel, or only a user-specified layer may be displayed.

In the case of fig. 41A to 41C, the right hand 3R of the user is in contact with the aerial image 10 or is located inside the aerial image 10.

Fig. 41A to 41C illustrate changes of the aerial image 10 generated in a case where the right hand 3R is moved to slide on the aerial image 10 in this state.

In the case of fig. 41A to 41C, the aerial image 10 changes to the egg white with the first gesture and to the egg yolk with the second gesture.

This specific example is an example of an operation on an internal image.
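A minimal sketch of this layer switching for an image with internal structure data follows; the layer list and the rendering hook are assumptions for illustration.

    class LayeredAerialImage:
        """Advance through the layers of a 3D model on each slide gesture (sketch)."""
        def __init__(self, layers):
            self.layers = layers       # ordered from topmost to innermost layer
            self.index = 0             # start by displaying the topmost layer

        def on_slide(self):
            if self.index < len(self.layers) - 1:
                self.index += 1        # first slide -> middle layer, second -> innermost
            return self.layers[self.index]

    egg = LayeredAerialImage(["eggshell", "egg white", "egg yolk"])
    print(egg.on_slide())   # egg white
    print(egg.on_slide())   # egg yolk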

Other exemplary embodiments

Although the exemplary embodiments of the present disclosure have been described above, the technical scope of the present disclosure is not limited to the scope described in the exemplary embodiments. Various changes or modifications of the above-described exemplary embodiments are also encompassed within the technical scope of the present disclosure, as apparent from the description of the claims.

The foregoing description of the exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Obviously, many modifications and variations will occur to those skilled in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical application, to thereby enable others skilled in the art to understand the disclosure for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.
