Shooting control method and device, gimbal, and shooting system

Document No.: 1967130  Publication date: 2021-12-14

Note: This technology, "Shooting control method and device, gimbal, and shooting system," was created by 林明鹏 on 2020-06-28. Its main content is as follows: A shooting control method and device, a gimbal, and a shooting system are provided. The method comprises: when the gimbal is in a preset shooting mode, controlling the gimbal to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence; when the gimbal is in each preset attitude, acquiring a shooting instruction input by a user; and controlling a shooting device on the gimbal to shoot according to the shooting instruction. When multi-angle images are acquired through the gimbal, shooting at each preset attitude of the gimbal is triggered by the user, so that the user can confirm the composition before shooting. In this way, wasted shots can be effectively reduced, and the target image obtained by subsequently stitching the multi-angle images can more easily achieve the expected effect.

1. A shooting control method, characterized by comprising:

when a gimbal is in a preset shooting mode, controlling the gimbal to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence;

when the gimbal is in each of the preset attitudes, acquiring a shooting instruction input by a user; and

controlling a shooting device on the gimbal to shoot according to the shooting instruction.

2. The method of claim 1, wherein the preset attitude is a default attitude; or,

the preset attitude is preset by a user.

3. The method of claim 2, wherein the size, direction and/or number of the preset attitudes are preset by the user.

4. The method of claim 1, wherein the preset attitude comprises at least one of a roll angle, a yaw angle and a pitch angle.

5. The method of claim 4, wherein the preset shooting mode is a first mode, and the preset attitude is a yaw angle; or,

the preset shooting mode is a second mode, and the preset attitude is a pitch angle; or,

the preset shooting mode is a third mode, and the preset attitude is a roll angle; or,

the preset shooting mode is a fourth mode, and the preset attitude comprises a yaw angle and a pitch angle; or,

the preset shooting mode is a fifth mode, and the preset attitude comprises a yaw angle and a roll angle.

6. The method of claim 1, wherein acquiring the shooting instruction input by the user when the gimbal is in each of the preset attitudes comprises:

when the gimbal is in each of the preset attitudes, waiting to acquire the shooting instruction input by the user.

7. The method of claim 6, further comprising:

if the shooting instruction input by the user is not acquired within a preset time, controlling the shooting device to shoot at the current preset attitude and then controlling the gimbal to move to the next preset attitude.

8. The method of claim 6, further comprising:

if the shooting instruction input by the user is still not acquired after the preset time has elapsed, automatically exiting the preset shooting mode.

9. The method of claim 6, further comprising:

controlling the gimbal to remain at the preset attitude while waiting to acquire the shooting instruction input by the user.

10. The method of claim 1, wherein an overlap ratio between images captured by the shooting device at adjacent preset attitudes is greater than a preset overlap-ratio threshold.

11. The method of claim 10, wherein the overlap ratio is a default value or is preset by a user.

12. The method of claim 1, wherein the zoom factors used by the shooting device when shooting at the plurality of preset attitudes are equal.

13. The method of claim 1, wherein the photographed target object in the images captured by the shooting device at the plurality of preset attitudes is the same object; or,

the photographed target objects in the images captured by the shooting device at the plurality of preset attitudes are at least partially different.

14. The method of claim 1 or 13, wherein the position of the photographed target object in the image captured by the shooting device at each of the preset attitudes is a default position or is preset by a user.

15. The method of claim 1 or 13, wherein the distance between the photographed target object in the image captured by the shooting device at each of the preset attitudes and the shooting device is constant; or,

the distances between the photographed target objects in the images captured by the shooting device at the plurality of preset attitudes and the shooting device gradually increase and/or gradually decrease.

16. The method of claim 1, wherein the preset shooting mode comprises a plurality of types, and the shooting instructions of different preset shooting modes are triggered in different manners.

17. The method of claim 1 or 16, wherein the preset shooting mode comprises a first shooting mode, and the shooting instruction is remotely input by the user.

18. The method of claim 17, wherein the remotely input information comprises at least one of a posture action and voice information of the photographed target object in the image captured by the shooting device; and

the shooting instruction comprises at least one of a preset posture action and preset voice information.

19. The method of claim 18, wherein the preset posture action comprises a preset gesture action.

20. The method of claim 17, wherein after acquiring the shooting instruction input by the user when the gimbal is in each of the preset attitudes, and before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the method further comprises:

counting down for a preset duration.

21. The method of claim 20, further comprising:

outputting first prompt information during the countdown of the preset duration;

wherein the first prompt information is used to indicate the remaining time of the countdown.

22. The method of claim 1 or 16, wherein the preset shooting mode comprises a second shooting mode, the shooting instruction is generated when the user triggers a control element on at least one of the shooting device, a handheld part of the gimbal and a control accessory of the gimbal, and the party triggering the shooting instruction is different from the photographed target object in the image captured by the shooting device.

23. The method of claim 1 or 16, wherein the preset shooting mode comprises a plurality of types, and the shooting instructions of different preset shooting modes have different trigger objects.

24. The method of claim 23, wherein the preset shooting mode comprises a self-shooting mode and an other-shooting mode; in the self-shooting mode, the trigger object of the shooting instruction is the photographed target object in the image captured by the shooting device, and in the other-shooting mode, the trigger object of the shooting instruction is not the photographed target object in the image captured by the shooting device.

25. The method of claim 24, wherein the preset shooting mode comprises a first shooting mode and a second shooting mode, the shooting instruction in the first shooting mode is remotely input by the user, and the shooting instruction in the second shooting mode is generated when the user triggers a control element; and

the first shooting mode comprises the self-shooting mode, and the second shooting mode comprises the other-shooting mode.

26. The method of claim 25, wherein the images captured by the shooting device at the plurality of preset attitudes comprise images captured while the gimbal is in the first shooting mode and/or the second shooting mode.

27. The method of claim 1, wherein before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the method further comprises:

outputting second prompt information;

wherein the second prompt information is used to indicate the current position of the photographed target object in the real-time image captured by the shooting device, so as to prompt the first movement information expected of the photographed target object.

28. The method of claim 27, wherein the first movement information is determined based on a deviation between the current position of the photographed target object in the real-time image captured by the shooting device and a preset position in the real-time image.

29. The method of claim 28, wherein the first movement information is movement information along a first direction, the first direction is substantially perpendicular to a second direction, and the second direction is the direction of a line connecting the photographed target object and the shooting device.

30. The method of claim 27, wherein before outputting the second prompt information, the method further comprises:

outputting third prompt information;

wherein the third prompt information is used to indicate that the photographed target object has currently entered the real-time image captured by the shooting device.

31. The method of claim 1, wherein before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the method further comprises:

outputting fourth prompt information;

wherein the fourth prompt information is used to indicate the current size of the photographed target object in the real-time image captured by the shooting device, so as to prompt the second movement information expected of the photographed target object.

32. The method of claim 31, wherein the second movement information is determined based on a deviation between the current size of the photographed target object in the real-time image captured by the shooting device and a preset size in the real-time image.

33. The method of claim 32, wherein the second movement information is movement information along a second direction, and the second direction is the direction of a line connecting the photographed target object and the shooting device.

34. The method of claim 1, wherein after controlling the shooting device on the gimbal to shoot according to the shooting instruction, the method further comprises:

stitching the images captured by the shooting device at the plurality of preset attitudes to obtain a target image.

35. A shooting control apparatus, characterized by comprising:

a storage device for storing program instructions; and

one or more processors that invoke the program instructions stored in the storage device, the one or more processors being individually or collectively configured to, when the program instructions are executed, perform operations comprising:

when a gimbal is in a preset shooting mode, controlling the gimbal to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence;

when the gimbal is in each of the preset attitudes, acquiring a shooting instruction input by a user; and

controlling a shooting device on the gimbal to shoot according to the shooting instruction.

36. The shooting control apparatus of claim 35, wherein the preset attitude is a default attitude; or,

the preset attitude is preset by a user.

37. The shooting control apparatus of claim 36, wherein the size, direction and/or number of the preset attitudes are preset by the user.

38. The shooting control apparatus of claim 35, wherein the preset attitude comprises at least one of a roll angle, a yaw angle and a pitch angle.

39. The shooting control apparatus of claim 38, wherein the preset shooting mode is a first mode, and the preset attitude is a yaw angle; or,

the preset shooting mode is a second mode, and the preset attitude is a pitch angle; or,

the preset shooting mode is a third mode, and the preset attitude is a roll angle; or,

the preset shooting mode is a fourth mode, and the preset attitude comprises a yaw angle and a pitch angle; or,

the preset shooting mode is a fifth mode, and the preset attitude comprises a yaw angle and a roll angle.

40. The shooting control apparatus of claim 35, wherein when acquiring the shooting instruction input by the user while the gimbal is in each of the preset attitudes, the one or more processors are further configured, individually or collectively, to perform:

when the gimbal is in each of the preset attitudes, waiting to acquire the shooting instruction input by the user.

41. The shooting control apparatus of claim 40, wherein the one or more processors are further configured, individually or collectively, to perform:

if the shooting instruction input by the user is not acquired within a preset time, controlling the shooting device to shoot at the current preset attitude and then controlling the gimbal to move to the next preset attitude.

42. The shooting control apparatus of claim 40, wherein the one or more processors are further configured, individually or collectively, to perform:

if the shooting instruction input by the user is still not acquired after the preset time has elapsed, automatically exiting the preset shooting mode.

43. The shooting control apparatus of claim 40, wherein the one or more processors are further configured, individually or collectively, to perform:

controlling the gimbal to remain at the preset attitude while waiting to acquire the shooting instruction input by the user.

44. The shooting control apparatus of claim 35, wherein an overlap ratio between images captured by the shooting device at adjacent preset attitudes is greater than a preset overlap-ratio threshold.

45. The shooting control apparatus of claim 44, wherein the overlap ratio is a default value or is preset by a user.

46. The shooting control apparatus of claim 35, wherein the zoom factors used by the shooting device when shooting at the plurality of preset attitudes are equal.

47. The shooting control apparatus of claim 35, wherein the photographed target object in the images captured by the shooting device at the plurality of preset attitudes is the same object; or,

the photographed target objects in the images captured by the shooting device at the plurality of preset attitudes are at least partially different.

48. The shooting control apparatus of claim 35 or 47, wherein the position of the photographed target object in the image captured by the shooting device at each of the preset attitudes is a default position or is preset by a user.

49. The shooting control apparatus of claim 35 or 47, wherein the distance between the photographed target object in the image captured by the shooting device at each of the preset attitudes and the shooting device is constant; or,

the distances between the photographed target objects in the images captured by the shooting device at the plurality of preset attitudes and the shooting device gradually increase and/or gradually decrease.

50. The shooting control apparatus of claim 35, wherein the preset shooting mode comprises a plurality of types, and the shooting instructions of different preset shooting modes are triggered in different manners.

51. The shooting control apparatus of claim 35 or 50, wherein the preset shooting mode comprises a first shooting mode, and the shooting instruction is remotely input by the user.

52. The shooting control apparatus of claim 51, wherein the remotely input information comprises at least one of a posture action and voice information of the photographed target object in the image captured by the shooting device; and

the shooting instruction comprises at least one of a preset posture action and preset voice information.

53. The shooting control apparatus of claim 52, wherein the preset posture action comprises a preset gesture action.

54. The shooting control apparatus of claim 51, wherein after acquiring the shooting instruction input by the user when the gimbal is in each of the preset attitudes, and before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the one or more processors are further configured, individually or collectively, to perform:

counting down for a preset duration.

55. The shooting control apparatus of claim 54, wherein the one or more processors are further configured, individually or collectively, to perform:

outputting first prompt information during the countdown of the preset duration;

wherein the first prompt information is used to indicate the remaining time of the countdown.

56. The shooting control apparatus of claim 35 or 50, wherein the preset shooting mode comprises a second shooting mode, the shooting instruction is generated when the user triggers a control element on at least one of the shooting device, a handheld part of the gimbal and a control accessory of the gimbal, and the party triggering the shooting instruction is different from the photographed target object in the image captured by the shooting device.

57. The shooting control apparatus of claim 35 or 50, wherein the preset shooting mode comprises a plurality of types, and the shooting instructions of different preset shooting modes have different trigger objects.

58. The shooting control apparatus of claim 57, wherein the preset shooting mode comprises a self-shooting mode and an other-shooting mode; in the self-shooting mode, the trigger object of the shooting instruction is the photographed target object in the image captured by the shooting device, and in the other-shooting mode, the trigger object of the shooting instruction is not the photographed target object in the image captured by the shooting device.

59. The shooting control apparatus of claim 58, wherein the preset shooting mode comprises a first shooting mode and a second shooting mode, the shooting instruction in the first shooting mode is remotely input by the user, and the shooting instruction in the second shooting mode is generated when the user triggers a control element; and

the first shooting mode comprises the self-shooting mode, and the second shooting mode comprises the other-shooting mode.

60. The shooting control apparatus of claim 59, wherein the images captured by the shooting device at the plurality of preset attitudes comprise images captured while the gimbal is in the first shooting mode and/or the second shooting mode.

61. The shooting control apparatus of claim 35, wherein before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the one or more processors are further configured, individually or collectively, to perform:

outputting second prompt information;

wherein the second prompt information is used to indicate the current position of the photographed target object in the real-time image captured by the shooting device, so as to prompt the first movement information expected of the photographed target object.

62. The shooting control apparatus of claim 61, wherein the first movement information is determined based on a deviation between the current position of the photographed target object in the real-time image captured by the shooting device and a preset position in the real-time image.

63. The shooting control apparatus of claim 62, wherein the first movement information is movement information along a first direction, the first direction is substantially perpendicular to a second direction, and the second direction is the direction of a line connecting the photographed target object and the shooting device.

64. The shooting control apparatus of claim 61, wherein before outputting the second prompt information, the one or more processors are further configured, individually or collectively, to perform:

outputting third prompt information;

wherein the third prompt information is used to indicate that the photographed target object has currently entered the real-time image captured by the shooting device.

65. The shooting control apparatus of claim 35, wherein before controlling the shooting device on the gimbal to shoot according to the shooting instruction, the one or more processors are further configured, individually or collectively, to perform:

outputting fourth prompt information;

wherein the fourth prompt information is used to indicate the current size of the photographed target object in the real-time image captured by the shooting device, so as to prompt the second movement information expected of the photographed target object.

66. The shooting control apparatus of claim 65, wherein the second movement information is determined based on a deviation between the current size of the photographed target object in the real-time image captured by the shooting device and a preset size in the real-time image.

67. The shooting control apparatus of claim 66, wherein the second movement information is movement information along a second direction, and the second direction is the direction of a line connecting the photographed target object and the shooting device.

68. The shooting control apparatus of claim 35, wherein after controlling the shooting device on the gimbal to shoot according to the shooting instruction, the one or more processors are further configured, individually or collectively, to perform:

stitching the images captured by the shooting device at the plurality of preset attitudes to obtain a target image.

69. A gimbal, characterized by comprising:

a carrying part for carrying a shooting device; and

the shooting control apparatus of any one of claims 35 to 68, the shooting control apparatus being capable of communicating with the shooting device.

70. A shooting system, characterized by comprising:

the gimbal of claim 69; and

a shooting device mounted on the gimbal.

Technical Field

The present application relates to the field of photography, and in particular to a shooting control method and device, a gimbal, and a shooting system.

Background

At present, some gimbals provide a one-touch multi-angle shooting function, in which the multi-angle images obtained are subsequently stitched into a single image, for example a 180-degree synthesized panoramic image. When using this function, the user first fixes the gimbal on a tripod or another object and triggers shooting with a single tap; the gimbal then automatically rotates through a set of specific angles and captures a set of images, which can subsequently be synthesized. This shooting manner is suitable for photographing static scenery. When the photographed target object is dynamic and the composition needs to be arranged at each angle, a problem arises: because the existing multi-angle shooting function is triggered only once before shooting, after the gimbal rotates it usually triggers the shooting device mounted on it automatically as soon as it reaches each position, so the photographed target object has to estimate the real-time angle of the gimbal or the moment at which the gimbal reaches a preset angle.

Disclosure of Invention

The present application provides a shooting control method and device, a gimbal, and a shooting system.

In a first aspect, an embodiment of the present application provides a shooting control method, where the method includes:

when a gimbal is in a preset shooting mode, controlling the gimbal to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence;

when the gimbal is in each of the preset attitudes, acquiring a shooting instruction input by a user; and

controlling a shooting device on the gimbal to shoot according to the shooting instruction.

In a second aspect, an embodiment of the present application provides a shooting control apparatus, including:

a storage device for storing program instructions; and

one or more processors that invoke the program instructions stored in the storage device, the one or more processors being individually or collectively configured to, when the program instructions are executed, perform operations comprising:

when a gimbal is in a preset shooting mode, controlling the gimbal to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence;

when the gimbal is in each of the preset attitudes, acquiring a shooting instruction input by a user; and

controlling a shooting device on the gimbal to shoot according to the shooting instruction.

In a third aspect, an embodiment of the present application provides a gimbal, including:

a carrying part for carrying a shooting device; and

the shooting control apparatus according to the second aspect, the shooting control apparatus being capable of communicating with the shooting device.

In a fourth aspect, an embodiment of the present application provides a shooting system, including:

the gimbal according to the third aspect; and

a shooting device mounted on the gimbal.

According to the technical solutions provided by the embodiments of the present application, when multi-angle images are acquired through the gimbal, shooting at each preset attitude of the gimbal is triggered by the user, so that the user can confirm the composition before shooting. In this way, wasted shots can be effectively reduced, and the target image obtained by subsequently stitching the multi-angle images can more easily achieve the expected effect.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.

FIG. 1 is a schematic flowchart of a shooting control method according to an embodiment of the present application;

FIG. 2A is a schematic diagram of a shooting process according to an embodiment of the present application;

FIG. 2B is a schematic diagram of a shooting process according to another embodiment of the present application;

FIG. 2C is a schematic diagram of a shooting process according to another embodiment of the present application;

FIG. 3A is a schematic diagram of a target image obtained after stitching the plurality of images captured in the embodiment shown in FIG. 2A;

FIG. 3B is a schematic diagram of a target image obtained after stitching the plurality of images captured in the embodiment shown in FIG. 2B;

FIG. 3C is a schematic diagram of a target image obtained after stitching the plurality of images captured in the embodiment shown in FIG. 2C;

FIG. 4 is a schematic flowchart of an image acquisition method according to an embodiment of the present application;

FIG. 5 is a structural block diagram of a shooting control apparatus according to an embodiment of the present application;

FIG. 6 is a structural block diagram of a shooting system according to an embodiment of the present application.

Detailed Description

At present, when a dynamic target object is photographed through the multi-angle shooting function of a gimbal and composition is needed at each angle of the gimbal (i.e., at each angle in the set of specific angles), a difficulty arises. Because the existing multi-angle shooting function is triggered only once before shooting, after the gimbal rotates it usually triggers the shooting device mounted on it automatically as soon as it reaches each position, so the photographed target object has to estimate the real-time angle of the gimbal or the moment at which the gimbal reaches a preset angle. This is difficult, the estimation accuracy is low, and the rotation angle of the gimbal hardly corresponds to the composition expected by the user, so the image obtained by subsequent stitching can hardly achieve the expected effect.

For example, if a user needs to use the gimbal for self-shooting and wants to strike a different pose at each angle so that images of the user at different positions and in different poses can be synthesized, the user has to mark position points in advance, estimate the real-time angle of the gimbal or the moment at which it reaches a preset angle, and quickly move to the corresponding position point and strike the pose for that angle before the gimbal rotates to the angle and shoots. This is very troublesome and difficult, the estimation accuracy is low, the rotation angle of the gimbal hardly corresponds to the composition expected by the user, and the subsequently stitched image can hardly achieve the effect required by the user.

To this end, in the present application, when multi-angle images are acquired through the gimbal, shooting at each preset attitude of the gimbal is triggered by the user, so that the user can confirm the composition before shooting. In this way, wasted shots can be effectively reduced, and the target image obtained by subsequently stitching the multi-angle images can more easily achieve the expected effect.

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that, in the following examples and embodiments, features may be combined with each other without conflict.

The gimbal in the embodiments of the present application may be a handheld gimbal or an onboard gimbal. An onboard gimbal is a gimbal mounted on a movable platform, for example a gimbal on an unmanned aerial vehicle or on a manned aerial vehicle. In addition, the gimbal may be a single-axis gimbal, a two-axis gimbal, a three-axis gimbal, or another multi-axis gimbal.

FIG. 1 is a schematic flowchart of a shooting control method according to an embodiment of the present application. The execution subject of the shooting control method in the embodiments of the present application may be the gimbal; for example, the execution subject may be the controller of the gimbal or another controller provided in the gimbal. Referring to FIG. 1, the shooting control method of the embodiments of the present application may include S101 to S103.

In S101, when the gimbal is in a preset shooting mode, the gimbal is controlled to be in a plurality of preset attitudes indicated by the preset shooting mode in sequence;

in S102, when the gimbal is in each of the preset attitudes, a shooting instruction input by a user is acquired;

and in S103, a shooting device on the gimbal is controlled to shoot according to the shooting instruction.

In the embodiments of the present application, the shooting at each preset attitude is triggered by the user, and the shots at the plurality of preset attitudes are triggered asynchronously. For example, in the preset shooting mode, after shooting starts, the gimbal turns to the first preset attitude and waits for the user to trigger the shooting of the first image; after the user triggers the shooting device for the first time, the gimbal automatically rotates to the second preset attitude and waits for the user to trigger the shooting of the second image; after the user triggers the shooting device for the second time, the gimbal automatically rotates to the third preset attitude, and so on, until the images of all the preset attitudes have been captured.
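The asynchronous, per-attitude triggering described above can be pictured as a simple control loop. The following is a minimal sketch in Python, not part of the original disclosure: the gimbal and camera objects and the wait_for_user_trigger callback are hypothetical placeholders rather than any real gimbal SDK.

```python
def run_preset_shooting_mode(gimbal, camera, preset_attitudes,
                             wait_for_user_trigger, timeout_s=None,
                             shoot_on_timeout=True):
    """Rotate the gimbal through the preset attitudes in sequence and shoot
    only after the user triggers a shooting instruction at each attitude."""
    images = []
    for attitude in preset_attitudes:
        gimbal.move_to(attitude)                      # e.g. {"yaw": 30.0, "pitch": 0.0, "roll": 0.0}
        gimbal.hold()                                 # keep the attitude while waiting (cf. claim 9)
        triggered = wait_for_user_trigger(timeout_s)  # gesture, voice, button, remote control, ...
        if triggered or shoot_on_timeout:             # claim 7: shoot anyway after a timeout, then move on
            images.append(camera.shoot())
        else:
            break                                     # claim 8: exit the preset shooting mode instead
    return images
```

The loop only advances to the next preset attitude after a shot has been taken at the current one, which is the key difference from the one-touch mode described in the background.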

The gimbal may include one or more preset shooting modes. For example, the preset shooting modes may include a first shooting mode and/or a second shooting mode; for example, the gimbal may include one or more of a first mode, a second mode, a third mode, a fourth mode and a fifth mode; of course, the preset shooting modes may also include other shooting modes. It should be noted that when there are multiple preset shooting modes, the gimbal shoots in one preset shooting mode at a time.

The preset attitude in the embodiments of the present application may include at least one of a yaw angle, a pitch angle and a roll angle, so that each image achieves a different shooting effect, and the target image obtained by stitching a plurality of images with different shooting effects can therefore achieve different overall effects. For example, in some embodiments the preset shooting mode is the first mode and the preset attitude is a yaw angle; that is, when the gimbal is in the first mode, only the yaw angle of the gimbal needs to be adjusted while the roll angle and the pitch angle remain unchanged, so that a plurality of images whose backgrounds gradually change along the horizontal direction can be captured, and the background of the target image obtained by stitching the plurality of images along the horizontal direction exhibits a gradual change in the horizontal direction. For example, referring to FIG. 2A, when the gimbal is in the first mode, the gimbal is controlled to be sequentially at yaw angle 11 corresponding to attitude 11, yaw angle 12 corresponding to attitude 12, yaw angle 13 corresponding to attitude 13, yaw angle 14 corresponding to attitude 14 and yaw angle 15 corresponding to attitude 15, while the roll angle and the pitch angle of the gimbal remain unchanged throughout; a plurality of images whose backgrounds gradually change along the horizontal direction are thus captured, and the target image obtained by stitching the plurality of images along the horizontal direction is shown in FIG. 3A.

In some other embodiments, the preset shooting mode is the second mode and the preset attitude is a pitch angle; that is, when the gimbal is in the second mode, only the pitch angle of the gimbal needs to be adjusted while the roll angle and the yaw angle remain unchanged, so that a plurality of images whose backgrounds gradually change along the vertical direction can be captured, for example a plurality of images whose backgrounds gradually change from the ground to the sky, and the background of the target image obtained by stitching the plurality of images along the vertical direction exhibits a gradual change in the vertical direction. For example, referring to FIG. 2B, when the gimbal is in the second mode, the gimbal is controlled to be sequentially at pitch angle 21 corresponding to attitude 21, pitch angle 22 corresponding to attitude 22 and pitch angle 23 corresponding to attitude 23, while the roll angle and the yaw angle of the gimbal remain unchanged throughout; a plurality of images whose backgrounds gradually change along the vertical direction are thus captured, and the target image obtained by stitching the plurality of images along the vertical direction is shown in FIG. 3B.

In some other embodiments, the preset shooting mode is the third mode and the preset attitude is a roll angle; that is, when the gimbal is in the third mode, only the roll angle of the gimbal needs to be adjusted while the yaw angle and the pitch angle remain unchanged, so that images whose background gradually changes along the roll direction can be captured, and the background of the target image obtained by stitching the plurality of images along the roll direction exhibits a gradual change along the roll direction.

In some other embodiments, the preset shooting mode is the fourth mode and the preset attitude includes a yaw angle and a pitch angle; that is, when the gimbal is in the fourth mode, the yaw angle and the pitch angle of the gimbal are adjusted while the roll angle remains unchanged. Consider, for example, a case in which the photographed target object includes a plurality of target objects, the posed positions of the plurality of target objects at the same moment are arranged one behind another along the shooting direction of the shooting device (e.g., the optical axis direction of the lens), and the posed positions at different moments are arranged along a horizontal arc, i.e., the posing trajectory of the photographed target objects is wave-shaped. If only the yaw angle of the gimbal were adjusted while the roll angle and the pitch angle were kept unchanged, then among the track points arranged along the shooting direction only the target object at the track point closest to the gimbal could be photographed, and the target objects at the other track points in that row would all be blocked by it. In the fourth mode, therefore, the yaw angle and the pitch angle of the gimbal are adjusted while the roll angle is kept constant or only finely adjusted (the fine adjustment is used for stabilization), so that an image of the target object at every track point arranged along the shooting direction can be captured without occlusion. For example, as shown in FIG. 2C, when the gimbal is in the fourth mode, the gimbal is controlled to be sequentially at yaw angle A and pitch angle D corresponding to attitude 31, yaw angle A and pitch angle E corresponding to attitude 32, yaw angle A and pitch angle F corresponding to attitude 33, yaw angle B and pitch angle D corresponding to attitude 34, yaw angle B and pitch angle E corresponding to attitude 35, yaw angle B and pitch angle F corresponding to attitude 36, yaw angle C and pitch angle D corresponding to attitude 37, yaw angle C and pitch angle E corresponding to attitude 38, and yaw angle C and pitch angle F corresponding to attitude 39, while the roll angle of the gimbal remains constant or changes only slightly for stabilization. In this way an image of the photographed target object is captured at each of the track points arranged along the shooting direction, the target object in each captured image is free from occlusion, and the target image obtained after stitching the images, for example in the reverse of the shooting order, is shown in FIG. 3C. Of course, the fourth mode is also suitable for acquiring a nine-grid image, but is not limited to the nine-grid image.
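As a rough illustration of the fourth mode, the grid of attitudes in FIG. 2C (yaw angles A/B/C combined with pitch angles D/E/F, roll held fixed) could be generated as in the sketch below; this is a hypothetical helper, and the concrete angle values are illustrative only.

```python
from itertools import product

def fourth_mode_attitudes(yaw_angles, pitch_angles, roll=0.0):
    """Every yaw/pitch combination with the roll angle held fixed
    (attitudes 31 to 39 in the FIG. 2C example)."""
    return [{"yaw": y, "pitch": p, "roll": roll}
            for y, p in product(yaw_angles, pitch_angles)]

# Illustrative values standing in for yaw angles A/B/C and pitch angles D/E/F.
attitudes = fourth_mode_attitudes(yaw_angles=[-30.0, 0.0, 30.0],
                                  pitch_angles=[-15.0, 0.0, 15.0])
```

The product order (all pitch angles for yaw A, then for yaw B, then for yaw C) matches the attitude sequence 31 to 39 described for FIG. 2C.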

In some other embodiments, the preset shooting mode is the fifth mode and the preset attitude includes a yaw angle and a roll angle; that is, when the gimbal is in the fifth mode, the yaw angle and the roll angle of the gimbal are adjusted while the pitch angle remains unchanged. When the posing trajectory of the photographed target object is arc-shaped, if only the yaw angle of the gimbal were adjusted while the roll angle and the pitch angle were kept unchanged, the target object at some track points could not be photographed because of occlusion; by adjusting the yaw angle and the roll angle of the gimbal while keeping the pitch angle unchanged, an image of the photographed target object can be captured at every track point, and the target object in each captured image is free from occlusion.

In addition, in some preset shooting modes the preset attitude may include a roll angle and a pitch angle, i.e., the roll angle and the pitch angle of the gimbal are adjusted while the yaw angle remains unchanged, so as to achieve the expected shooting effect; this is not described in detail in the embodiments of the present application.

It can be understood that the first mode, the second mode, the third mode, the fourth mode and the fifth mode are all applicable to both self-shooting and other-shooting scenarios; that is, in the embodiments of the present application these modes are related to the posing trajectory of the photographed target object, not to whether the photographed target object is shooting itself or being shot by someone else.

In the embodiments of the present application, the size of the preset attitude (i.e., the size of at least one of the roll angle, the yaw angle and the pitch angle included in the preset attitude) may be related to the requirements for synthesizing the image. If a 180° panoramic image in the yaw direction needs to be synthesized, the yaw angles of the plurality of preset attitudes need to cover the range of 0° to 180°; similarly, when a 120° image in the yaw direction is synthesized (i.e., the difference between the yaw angle of the preset attitude corresponding to the first image and that of the preset attitude corresponding to the last image is 120°), the yaw angles need to cover the angular range of 0° to 120°, and so on.

The preset attitude in the embodiments of the present application may be a default attitude or may be preset by the user. For example, in some embodiments the preset attitude is a default attitude; for example, the preset attitude is a yaw angle, and when a 180° panoramic image in the yaw direction is synthesized the yaw angles may be 0°, 30°, 60°, 90°, 120°, 150° and 180°. Of course, when a 180° panoramic image in the yaw direction is synthesized, the yaw angles may take other default values, as long as the default angles cover the range of 0° to 180°.

In other embodiments, the preset attitude is preset by the user, and the user needs to set the size of the preset attitude before the shooting device is triggered for the first time. In these embodiments, the size, direction and/or number of the preset attitudes are preset by the user to meet different shooting requirements: for example, the size, direction and number of the preset attitudes are all preset by the user; or the number of preset attitudes is fixed and their size and/or direction are preset by the user; or the direction of the preset attitudes is fixed and their size and/or number are preset by the user; and so on. For example, when a 180° panoramic image in the yaw direction is synthesized, the user sets the yaw angles to 0°, 30°, 60°, 90°, 120°, 150° and 180° before triggering the first shot; of course, the user may set other yaw angles, as long as the yaw angles cover the range of 0° to 180°.
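For the default yaw angles mentioned above (0° to 180° in 30° steps), the list of preset yaw angles could be generated as in the following sketch; the helper name and the step size are illustrative assumptions, not part of the disclosure.

```python
def evenly_spaced_yaw_angles(total_angle_deg=180.0, step_deg=30.0):
    """Yaw angles 0, step, 2*step, ..., total_angle_deg (inclusive)."""
    count = int(total_angle_deg // step_deg) + 1
    return [i * step_deg for i in range(count)]

# evenly_spaced_yaw_angles(180.0, 30.0) -> [0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0]
# evenly_spaced_yaw_angles(120.0, 30.0) -> [0.0, 30.0, 60.0, 90.0, 120.0]
```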

For example, in some embodiments the preset shooting mode may include a first shooting mode, and the shooting instruction of the first shooting mode is remotely input by the user, i.e., shooting is remotely triggered by the user. Optionally, remote triggering by the user is suited to self-shooting, i.e., the trigger object of the shooting instruction is the photographed target object in the image. The remotely input information may include at least one of a posture action and voice information of the photographed target object in the image, and correspondingly the shooting instruction may include at least one of a preset posture action and preset voice information, so that the shooting device is triggered to shoot by at least one of the posture action and the voice information of the photographed target object. For example, the remotely input information is a posture action of the photographed target object and the shooting instruction is a preset posture action, so that the shooting device is remotely triggered by the preset posture action; or the remotely input information is voice information of the photographed target object and the shooting instruction is preset voice information, so that the shooting device is remotely triggered by the preset voice information; or the remotely input information includes both a posture action and voice information of the photographed target object and the shooting instruction includes both a preset posture action and preset voice information, so that the shooting device is remotely triggered by the preset posture action and the preset voice information together.

In the following, the manner of acquiring the shooting instruction is described with the gimbal as the execution subject.

The gimbal may acquire the shooting instruction in various manners. For example, the gimbal first acquires the remotely input information and then judges, according to the remotely input information, whether the shooting instruction has been acquired; or the shooting instruction is sent to the gimbal by the shooting device or by an external device, in which case the shooting device or the external device first acquires the remotely input information, judges according to it whether the shooting instruction has been acquired, and, if so, sends the shooting instruction to the gimbal.

For example, in some embodiments the gimbal judges, according to the remotely input information, whether the shooting instruction has been acquired. The remotely input information includes the posture action of the photographed target object in the image: if the posture action of the photographed target object matches a preset posture action, it is judged that the shooting instruction has been acquired; if it does not match, the remotely input information continues to be acquired until it is judged that the shooting instruction has been acquired. The gimbal may store the preset posture actions in advance, for example in a database or in another manner. In addition, there may be a plurality of preset posture actions: when the posture action of the photographed target object is the same as any one preset posture action, it is judged that the shooting instruction has been acquired; when it differs from all the preset posture actions, the remotely input information continues to be acquired until it is judged that the shooting instruction has been acquired.

In some embodiments, the remotely input information includes voice information of the photographed target object in the image: if the voice information matches preset voice information, it is judged that the shooting instruction has been acquired; if it does not match, the remotely input information continues to be acquired until it is judged that the shooting instruction has been acquired. The gimbal may store the preset voice information in advance, for example in a database or in another manner. In addition, there may be a plurality of pieces of preset voice information: when the voice information of the photographed target object is the same as any one piece of preset voice information, it is judged that the shooting instruction has been acquired; when it differs from all the preset voice information, the remotely input information continues to be acquired until it is judged that the shooting instruction has been acquired.

In some embodiments, the remotely input information includes both the posture action and the voice information of the photographed target object in the image: if the posture action matches a preset posture action and the voice information matches preset voice information, it is judged that the shooting instruction has been acquired; if the posture action does not match the preset posture action, or the voice information does not match the preset voice information, the remotely input information continues to be acquired until it is judged that the shooting instruction has been acquired.
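The matching step described in the three paragraphs above can be sketched as follows. This assumes the posture-action and speech recognition themselves are handled elsewhere by existing algorithms, as the text notes; the function and parameter names are hypothetical.

```python
def is_shooting_instruction(recognized_gesture, recognized_speech,
                            preset_gestures, preset_phrases,
                            require_both=False):
    """Judge whether remotely input information matches a preset trigger.

    recognized_gesture / recognized_speech are the outputs of the (separate)
    posture-action and speech recognition steps, or None if nothing was found.
    """
    gesture_ok = recognized_gesture is not None and recognized_gesture in preset_gestures
    speech_ok = recognized_speech is not None and recognized_speech in preset_phrases
    if require_both:      # variant where both a posture action and a phrase are required
        return gesture_ok and speech_ok
    return gesture_ok or speech_ok
```

Whether this check runs on the gimbal, on the shooting device or on an external device only changes where the result is produced; the judgement itself is the same in all three arrangements described above.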

The process by which the shooting device or the external device acquires the remotely input information and judges whether the shooting instruction has been acquired is similar to the process performed by the gimbal, and reference may be made to the description of the corresponding parts of the above embodiments.

Where the remotely input information includes the posture action of the photographed target object in the image, image recognition may be performed on the real-time image captured by the shooting device to obtain the posture action of the photographed target object; an existing posture-action recognition algorithm may be used for this image recognition, which is not specifically limited in the present application.

Where the remotely input information includes the voice information of the photographed target object in the image, the voice information may be collected by a voice collection device on the gimbal and/or on the shooting device, and the content of the voice information may then be recognized based on a semantic recognition algorithm; the algorithm may be an existing one, and the present application is not limited in this respect. The manner in which the gimbal acquires the posture action of the photographed target object in the image may include, but is not limited to, the following manners:

first, the gimbal acquires the real-time image captured by the shooting device and performs image recognition on it to obtain the posture action of the photographed target object in the image;

second, the shooting device and/or an external device acquires the real-time image captured by the shooting device, performs image recognition on it to obtain the posture action of the photographed target object in the image, and sends the recognized posture action to the gimbal. In this embodiment, the external device may include an image processing device.

The manner in which the gimbal acquires the voice information of the photographed target object in the image may include, but is not limited to, the following manners:

first, the gimbal acquires the voice information of the photographed target object collected by the voice collection device on the gimbal and/or on the shooting device, and then recognizes the content of the voice information based on a semantic recognition algorithm;

second, the shooting device and/or an external device acquires the voice information of the photographed target object collected by the voice collection device, recognizes the content of the voice information based on a semantic recognition algorithm, and sends the recognized content to the gimbal. In this embodiment, the external device may include a voice collection device.

In the following, remote triggering of the shooting device is described taking the preset gesture action and the preset voice information as examples.

The preset posture action may be set as needed. For example, the preset posture action includes a preset gesture action. Optionally, the preset gesture actions for different preset attitudes are the same gesture action, i.e., at each preset attitude the photographed target object triggers the shooting device with the same gesture; for example, at each preset attitude the photographed target object triggers the shooting device with a thumbs-up gesture.

Optionally, the preset gesture actions for different preset attitudes are different, i.e., at each preset attitude the photographed target object triggers the shooting device with the gesture corresponding to that attitude. For example, at preset attitude 1 the photographed target object triggers shooting with gesture 1; at preset attitude 2, with gesture 2; and at preset attitude 3, with gesture 3, where gesture 1, gesture 2 and gesture 3 are different, for example gesture 1 is "thumb up", gesture 2 is "thumb and index finger of the same hand up", and gesture 3 is "index and middle finger of the same hand up"; of course, gesture 1, gesture 2 and gesture 3 may be other gestures.
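A minimal sketch of the "different gesture per preset attitude" variant just described; the gesture labels below are illustrative stand-ins for gestures 1 to 3 and not fixed by the method.

```python
# Illustrative mapping from preset-attitude index to the gesture that triggers
# shooting at that attitude (the "different gesture per attitude" variant).
TRIGGER_GESTURE_BY_ATTITUDE = {
    1: "thumb_up",               # gesture 1
    2: "thumb_and_index_up",     # gesture 2
    3: "index_and_middle_up",    # gesture 3
}

def is_trigger_for_attitude(attitude_index, recognized_gesture):
    """True if the recognized gesture is the one assigned to this attitude."""
    return recognized_gesture == TRIGGER_GESTURE_BY_ATTITUDE.get(attitude_index)
```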

Optionally, the preset gesture actions for some of the different preset attitudes are different. For example, at preset attitude 1 the photographed target object triggers the shooting device with gesture 1; at preset attitude 2, with gesture 2; and at preset attitude 3, again with gesture 1. Gesture 1 may be "thumb up" and gesture 2 may be "thumb and index finger of the same hand up"; of course, gesture 1 and gesture 2 may be other gestures as well.

It should be understood that the preset posture action is not limited to a preset gesture action and may be something else, such as a preset limb action.

The preset voice information may also be set as needed. For example, the preset voice information includes "identification information of the gimbal and/or the shooting device + shoot", and the shooting device is controlled to shoot when sound content of "identification information of the gimbal and/or the shooting device + shoot" is recognized, where the identification information may be an identity document (ID) or other information capable of uniquely identifying the gimbal and/or the shooting device. As another example, the preset voice information includes "eggplant", and the shooting device is controlled to shoot when the sound content "eggplant" is recognized. It should be understood that the preset voice information may also include other sound content and is not limited to the content listed above.
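A small sketch of the voice-trigger check described above; the "<identification information> + shoot" phrase and "eggplant" follow the examples in the text, while the function name, the phrase construction and the normalization are assumptions made for illustration.

```python
def matches_preset_voice(recognized_text, device_id, extra_phrases=("eggplant",)):
    """Check whether recognized speech content is one of the preset trigger phrases."""
    text = recognized_text.strip().lower()
    if text == f"{device_id} shoot".lower():   # "<identification information> + shoot"
        return True
    return text in {p.lower() for p in extra_phrases}
```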

In addition, the remotely input information may also include other information. For example, the photographed target object triggers the shooting device to shoot by operating a remote controller; in this case, the remotely input information includes a shooting instruction generated when the user operates the remote controller, and the remote controller is in remote communication connection with the pan/tilt head or the shooting device.

Further, in some embodiments, in the first shooting mode, after the shooting instruction input by the user is acquired while the pan/tilt head is in each preset posture, a countdown of a preset duration is performed before the shooting device on the pan/tilt head is controlled to shoot according to the shooting instruction. That is, the shooting of each image is triggered only after the countdown of the preset duration that follows the acquisition of the shooting instruction. The countdown leaves time for the photographed target object to pose. Illustratively, the photographed target object is a person taking a selfie who remotely triggers the shooting device through gesture 1; after gesture 1 is recognized, the countdown of the preset duration starts, and the shooting device is triggered to shoot when the countdown ends. Within the preset duration, the person can switch from gesture 1 to another action, such as jumping, so as to obtain the expected shooting effect. The preset duration can be set as needed, for example 10 seconds or another duration.

Further, in some embodiments, the shooting control method may further include: outputting first prompt information during the countdown of the preset duration, so as to prompt the photographed target object to pose the required action before the countdown ends. In this embodiment, the pan/tilt head may output the first prompt information through a display screen on the pan/tilt head or the shooting device and/or an indicator light on the pan/tilt head or the shooting device, or in another manner, such as by voice. The first prompt information may be used to indicate the remaining time of the countdown, or other information about the countdown, such as its start time and end time. Illustratively, the first prompt information indicates the remaining time of the countdown, so as to prompt the photographed target object to pose the required action within the remaining time; for example, with a preset duration of 10 seconds, during the countdown the pan/tilt head sequentially outputs the numbers 9->8->...->1->0 through the display screen on the pan/tilt head or the shooting device, and/or the indicator light on the pan/tilt head or the shooting device blinks once per second to indicate the remaining time.
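A minimal Python sketch of the countdown with first prompt information is given below; display() and take_photo() are hypothetical stand-ins for the display screen or indicator light and for the shooting device, and the short duration in the usage example is chosen only to keep the example quick.

import time

# Illustrative sketch only: a countdown of a preset duration is started after the
# shooting instruction is acquired; the remaining time is emitted once per second
# as the first prompt information, and shooting is triggered when it ends.

def countdown_and_shoot(display, take_photo, preset_duration: int = 10) -> None:
    for remaining in range(preset_duration - 1, -1, -1):
        time.sleep(1)          # one second per step of the countdown
        display(remaining)     # first prompt information: remaining time
    take_photo()               # trigger shooting when the countdown ends

# Example usage with trivial stand-ins (3-second countdown for brevity):
countdown_and_shoot(display=lambda r: print("remaining:", r),
                    take_photo=lambda: print("shoot"),
                    preset_duration=3)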

For example, referring to fig. 2A, the pan/tilt head shoots in the first shooting mode, the preset postures include yaw angle 1, yaw angle 2, yaw angle 3, yaw angle 4 and yaw angle 5, and the shooting effect to be achieved is: the posture of the photographed target object is posture 11 in the first image, posture 12 in the second image, posture 13 in the third image, posture 14 in the fourth image and posture 15 in the fifth image. The shooting process may include: when the pan/tilt head rotates to yaw angle 1, the photographed target object makes the first preset gesture action; after recognizing the first preset gesture action, the pan/tilt head counts down for 10 seconds and remains stationary during the 10-second countdown, within which the photographed target object assumes posture 11; when the 10-second countdown ends, the pan/tilt head triggers the shooting device to shoot, and the first image is obtained. After the first image is obtained, the pan/tilt head rotates to yaw angle 2 and the same process is repeated with the second preset gesture action and posture 12 to obtain the second image; likewise, the third image is obtained at yaw angle 3 with the third preset gesture action and posture 13, the fourth image is obtained at yaw angle 4 with the fourth preset gesture action and posture 14, and the fifth image is obtained at yaw angle 5 with the fifth preset gesture action and posture 15, after which the shooting ends.
The first preset gesture action, the second preset gesture action, the third preset gesture action, the fourth preset gesture action and the fifth preset gesture action may be the same or at least partially different.
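The overall flow described for fig. 2A can be summarized in the following sketch; all interfaces (gimbal, camera, wait_for_gesture, countdown) and the yaw values are hypothetical placeholders used only to show the sequence of operations, under the assumption of one user-triggered shot per preset posture.

# Illustrative sketch only of the first-shooting-mode flow: rotate to each preset
# yaw angle, wait for the gesture-based shooting instruction, count down, then shoot.

PRESET_YAW_ANGLES = [0, 30, 60, 90, 120]   # yaw angle 1 .. yaw angle 5 (assumed values)
COUNTDOWN_SECONDS = 10

def run_first_shooting_mode(gimbal, camera, wait_for_gesture, countdown):
    images = []
    for yaw in PRESET_YAW_ANGLES:
        gimbal.rotate_to(yaw=yaw)          # control the pan/tilt head to the preset posture
        gimbal.hold()                      # keep the posture stable while waiting
        wait_for_gesture()                 # shooting instruction remotely input by the subject
        countdown(COUNTDOWN_SECONDS)       # leave time for the subject to pose
        images.append(camera.shoot())      # capture one image per preset posture
    return images                          # the images are later stitched into the target image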

For example, in some embodiments, the preset shooting mode includes a second shooting mode, and the shooting instruction in the second shooting mode is generated when the user triggers a control component, that is, the shooting is manually triggered by the user; the control component is away from the photographed target object and is connected to the pan/tilt head and/or the shooting device directly or through a connecting component. Optionally, the trigger of the shooting instruction in the second shooting mode may be different from the photographed target object in the image, and this triggering manner is suitable for having someone else take the shot. The control component may be disposed on at least one of the shooting device, a handheld part of the pan/tilt head and a control accessory of the pan/tilt head, and may include, for example, at least one of a shutter of the shooting device, a control part on the handheld part (which may be a physical control part disposed on the handheld part or a virtual control part displayed on a display screen of the handheld part), and a control part on the control accessory (which may be a physical control part disposed on the control accessory or a virtual control part displayed on a display screen of the control accessory). The handheld part may include a handle, a wristband or another handheld part, and the control accessory may include a remote controller of the pan/tilt head, a control terminal capable of controlling the pan/tilt head such as a mobile phone or a bracelet, or an accessory such as a follow-focus wheel with multi-function settings.

When the preset shooting mode includes a plurality of types, the trigger modes and/or trigger objects of the shooting instructions of different preset shooting modes may be different. For example, in some embodiments, the triggering modes of the shooting instructions of different preset shooting modes are different, such as the first shooting mode and the second shooting mode listed in the above embodiments, and the triggering modes of the first shooting mode and the second shooting mode are different.

In some embodiments, the trigger objects of the shooting instructions of different preset shooting modes are different. Illustratively, the preset shooting modes may include a self-timer shooting mode, in which the trigger object of the shooting instruction is the photographed target object in the image, and an other-party shooting mode, in which the trigger object of the shooting instruction is not the photographed target object in the image. The user can trigger the pan/tilt head to enter one of the preset shooting modes as needed, so as to meet different shooting requirements. Illustratively, when no one else is available to help and the user needs to shoot himself or herself, the pan/tilt head can be triggered to enter the self-timer shooting mode; when another person is available to operate the device, the pan/tilt head can be triggered to enter the other-party shooting mode.

Illustratively, the preset shooting mode includes a first shooting mode and a second shooting mode; the shooting instruction in the first shooting mode is remotely input by the user, while the shooting instruction in the second shooting mode is generated when the user triggers the control component. The first shooting mode includes a self-timer shooting mode, and the second shooting mode includes a non-self-timer shooting mode, such as the other-party shooting mode.

Further optionally, the plurality of images include images captured by the shooting device while the pan/tilt head is in the first shooting mode and/or the second shooting mode. For example, when each of the plurality of images is captured while the pan/tilt head is in the first shooting mode, the shooting mode of the pan/tilt head may be set to the first shooting mode by the user in advance, before the first image is captured; likewise, when each of the plurality of images is captured while the pan/tilt head is in the second shooting mode, the shooting mode may be set to the second shooting mode by the user in advance, before the first image is captured. Illustratively, part of the images are captured while the pan/tilt head is in the first shooting mode, and another part are captured while the pan/tilt head is in the second shooting mode. When the plurality of images include images captured in both the first shooting mode and the second shooting mode, the shooting mode of the pan/tilt head may be switched during shooting; for example, the shooting mode may be set by the user to the first shooting mode or the second shooting mode before each image is captured. For another example, before the first image is captured, the shooting mode corresponding to each preset posture may be set by the user in advance to the first shooting mode or the second shooting mode; during shooting, the pan/tilt head automatically switches to the shooting mode corresponding to each preset posture, and optionally, after the pan/tilt head rotates to each preset posture and before the shooting instruction is acquired, the pan/tilt head is controlled to be in the corresponding shooting mode.

For example, referring to fig. 2A, fig. 2B and fig. 2C, the pan/tilt head is a handheld pan/tilt head, and the shooting device may include a front camera, a rear camera and a display screen, the front camera being located on the same side as the display screen. In the first shooting mode, shooting is performed through the front camera, so the user can take a selfie while viewing the display screen; in the second shooting mode, shooting is performed through the rear camera, which is suitable for being photographed by another person.

In addition, in the embodiment of the present application, the overlap rate between the images captured by the shooting device at adjacent preset postures is greater than a preset overlap-rate threshold, so that after stitching, the image information at the seam between two adjacent images in the stitched target image is more continuous. This prevents the visual effect of the target image from being degraded by abrupt image information at the seam between two adjacent images, which would be caused by a small overlap rate between images captured at adjacent preset postures. The preset overlap-rate threshold may be set as needed, for example 30% or another value.

It should be understood that, in the embodiment of the present application, the overlap rate between the images captured by the shooting device at adjacent preset postures is less than 100%, and the larger the overlap rate, the higher the degree of coincidence between images captured at adjacent preset postures, and the stronger the continuity of the image information at the seam between two adjacent images in the stitched target image. Considering the continuity of the stitched image information, the angular coverage required for shooting, the number of images to be captured and the like, the overlap rate between images captured at adjacent preset postures may be set to be greater than 30% and less than 60%.

The size of the overlapping ratio between the images captured by the photographing devices at the adjacent preset postures may be a default numerical size or may be preset by a user. Illustratively, the magnitude of the overlap ratio between images captured by the camera at adjacent preset poses is a default numerical magnitude, such as 45%; illustratively, the size of the overlapping rate between the images captured by the camera at the adjacent preset postures is preset by the user, for example, the overlapping rate between the images captured by the camera at the adjacent preset postures is set to 35% before the camera is triggered to capture for the first time.
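The relation between the chosen overlap rate and the angular spacing of adjacent preset postures can be sketched as follows; this is a flat-scene approximation assuming a known horizontal field of view of the shooting device, which is an illustrative assumption rather than a parameter defined by the present application.

# Illustrative sketch only: estimate the yaw step between adjacent preset postures
# for a given horizontal field of view and a desired overlap rate.

def yaw_step(horizontal_fov_deg: float, overlap_rate: float) -> float:
    """Angular spacing between adjacent preset postures for a given overlap rate."""
    if not 0.0 <= overlap_rate < 1.0:
        raise ValueError("overlap rate must be in [0, 1)")
    return horizontal_fov_deg * (1.0 - overlap_rate)

print(yaw_step(66.0, 0.30))  # ~46.2 degrees with a 30% overlap (66-degree FOV assumed)
print(yaw_step(66.0, 0.45))  # ~36.3 degrees with the default 45% overlap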

The zoom factors used by the shooting device when capturing the plurality of images are equal, so that the sizes of the backgrounds in the images are consistent and the background of the target image obtained by stitching the images is not abrupt. In the embodiment of the present application, equal zoom factors when the shooting device captures the plurality of images means that the zoom factors may be considered equal within an allowable error range.

In the embodiment of the present application, the number of the photographed target objects may be one or more. In addition, the photographed target object may include a human, an animal, or other living objects.

When the number of the photographed target objects is a plurality of objects, in the first photographing mode, a photographing instruction can be remotely input from any one of the photographed target objects. Further, in the first photographing mode, photographing instructions for a plurality of images may be remotely input from the same subject, or photographing instructions for at least a part of the plurality of images may be remotely input from different subjects.

The subject objects in the multiple images may be the same object, or the subject objects in the multiple images may be at least partially different. For example, in some embodiments, when the number of the photographed target objects is plural, the photographed target objects in the plural images being the same object means: the subject to be photographed in the plurality of images is unchanged, and illustratively, the subject to be photographed includes an object a and an object B, and then the subject to be photographed in each image includes the object a and the object B. By the shooting control method, the user can easily shoot an image with the same object appearing at different positions of the same picture.

In some other embodiments, the subject of the multiple images is at least partially different, illustratively, the subject of the first image includes a subject a, the subject of the second image includes a subject a and a subject B, and the third image includes a subject B.

In addition, the position of the photographed target object in each image is a default position or is preset by the user. Illustratively, the position of the photographed target object in each image is a default position, for example the center of the image; optionally, the center of the photographed target object in each image coincides with the center of the image. It should be understood that the default position may also be another position of the image, for example the center of the photographed target object in each image may be located at 1/3 of the image in the height direction. It should be noted that, when the photographed target object includes a plurality of objects, the center of the photographed target object may include, but is not limited to, the geometric center of the plurality of objects; that is, when the photographed target object includes a plurality of objects, the determination of the position of the photographed target object in the image need not be based only on the geometric center of the plurality of objects.

Illustratively, the position of the photographed target object in each image is preset by the user, and the positions of the photographed target object in the plurality of images may be set to different positions or to the same position, so as to meet different shooting requirements. Illustratively, the center of the photographed target object in the first image coincides with the center of the first image, and the center of the photographed target object in the second image is located at 1/3 of the second image in the height direction; illustratively, the center of the photographed target object in the first image coincides with the center of the first image, and the center of the photographed target object in the second image coincides with the center of the second image. The position of the photographed target object in each image may be set by the user before the first image is captured, or may be set by the user before each image is captured.

The distance between the photographed target object in each image and the shooting device may be set according to the shooting requirements; this distance refers to the distance, in the real world, between the photographed target object and the shooting device before each image is captured. The distance may be measured between the photographed target object and the lens of the shooting device, or between the photographed target object and another position of the shooting device.

For example, in some embodiments, the distance between the target object in each image and the camera is not changed, so that the size of the target object in the images is substantially the same, and thus the size of the target object in different positions in the target images obtained by stitching the images is also the same. In this embodiment, during shooting, the positions of the target objects in each image in the real world are all located on the same circular arc or circle, and the circular arc or circle takes the shooting device as the center of the circle.

In some other embodiments, the distances between the photographed target object and the shooting device are at least partially different across the plurality of images, that is, these distances vary, so that the size of the photographed target object in the images also varies. For example, the distance between the photographed target object and the shooting device changes gradually across the plurality of images, such as gradually increasing and/or gradually decreasing. Illustratively, the distance gradually increases, so that the target objects at different positions in the stitched target image show a gradually increasing size effect; illustratively, the distance gradually decreases, so that the target objects at different positions in the stitched target image show a gradually decreasing size effect; illustratively, the distance first gradually increases and then gradually decreases, so that the target objects at different positions in the stitched target image show a size effect that first increases and then decreases; illustratively, the distance first gradually decreases and then gradually increases, so that the target objects at different positions in the stitched target image show a size effect that first decreases and then increases. For example, referring to fig. 2A, during shooting, the trajectory formed by the real-world positions of the photographed target object for the plurality of images may be part of an ellipse; of course, the trajectory may also be another shape, as long as the distances between the photographed target object and the shooting device vary. For example, as shown in fig. 2B, the real-world positions of the photographed target object for the plurality of images form a straight-line trajectory during shooting.

In the embodiment of the present application, the shooting instruction may not be acquired immediately when the pan/tilt head is in each preset posture. Therefore, when the pan/tilt head is in each preset posture, the implementation process of acquiring the shooting instruction input by the user may include: waiting to acquire the shooting instruction input by the user while the pan/tilt head is in each preset posture. It can be understood that, in some cases, the shooting instruction is acquired as soon as the pan/tilt head reaches each preset posture, and in such cases there is no need to wait for the shooting instruction input by the user.

Furthermore, the cradle head is controlled to be kept at the preset posture within the time of waiting for acquiring the shooting instruction input by the user, so that the cradle head can be stably kept at the preset posture, and after the shooting instruction is acquired, the shooting device is controlled to shoot according to the shooting instruction.

In some cases, the acquisition of the shooting instruction may time out, and a corresponding timeout strategy can be selected as needed. For example, in some embodiments, if the shooting instruction input by the user is not acquired within the preset time, the shooting device is controlled to shoot while the pan/tilt head is in the current preset posture, and the pan/tilt head is then controlled to move to the next preset posture; that is, even if the acquisition of the shooting instruction times out, the image corresponding to the current preset posture is still captured, thereby ensuring the completeness of the image set for subsequent stitching, or meeting the requirement of subsequent stitching on the number of images. In some other embodiments, if the shooting instruction input by the user is not acquired within the preset time, the preset shooting mode may be exited automatically; in addition, when the preset shooting mode is exited automatically, prompt information may be output to inform the user of the reason for the automatic exit, namely that the shooting instruction input by the user was not acquired within the preset time.

It can be understood that the preset time can be set as needed, for example 10 seconds or another duration.
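The two timeout strategies described above can be sketched as follows; poll_instruction(), shoot(), next_posture(), exit_mode() and prompt() are hypothetical placeholders for the real pan/tilt head and shooting-device interfaces, and the sketch only illustrates the decision logic.

import time

# Illustrative sketch only: wait for a user shooting instruction with a timeout,
# then apply either the "shoot anyway" strategy or the "exit the preset mode" strategy.

def wait_for_instruction(poll_instruction, preset_time: float = 10.0,
                         poll_interval: float = 0.1) -> bool:
    """Wait up to preset_time seconds; return True if an instruction arrived."""
    deadline = time.monotonic() + preset_time
    while time.monotonic() < deadline:
        if poll_instruction():
            return True
        time.sleep(poll_interval)
    return False

def handle_posture(poll_instruction, shoot, next_posture, exit_mode, prompt,
                   strategy: str = "shoot_anyway") -> None:
    if wait_for_instruction(poll_instruction):
        shoot()
        next_posture()
    elif strategy == "shoot_anyway":
        shoot()               # keep the image set complete for later stitching
        next_posture()
    else:
        prompt("no shooting instruction within the preset time")
        exit_mode()           # automatically exit the preset shooting mode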

The position of the photographed target object in the real world can be marked in advance or not. Whether the position of the photographed target object in the real world is marked in advance or not, the photographed target object can be prompted to be subjected to position adjustment before each image captured by the shooting device on the cloud deck is acquired, so that each image can achieve the expected shooting effect.

For example, in some embodiments, before the shooting device on the cradle head is controlled to shoot according to the shooting instruction, the second prompt information is output, for example, the second prompt information may be output in a text/graphic manner through a display screen of the cradle head and/or the shooting device, may be output through an indicator lamp of the cradle head and/or the shooting device, or may be output in another manner.

The second prompt information is used for indicating the current position of the photographed target object in the real-time image captured by the shooting device, so as to prompt the photographed target object with the expected first movement information. After obtaining the first movement information, the photographed target object can adjust its position in the real world, so that its position in each finally captured image coincides as much as possible with the expected composition position.

The first movement information may be determined based on the position of the photographed target object in the real-time image captured by the shooting device; optionally, it is determined based on the deviation between that position and a preset position in the real-time image. The preset position may be a default position, a position preset by the user before the first image is captured, or a position preset by the user before each image is captured. Optionally, the first movement information is movement information along a first direction, the first direction being substantially perpendicular to a second direction, and the second direction being the direction of the line connecting the photographed target object and the shooting device, for example the line connecting the center of the photographed target object and the center of the lens of the shooting device. It can be understood that the first movement information may also be movement information in another direction; for example, if the trajectory formed by the real-world positions of the photographed target object for the plurality of images is an arc, the first movement information is movement along that arc.

For convenience of description, herein, the subject is taken as a reference, the first direction is referred to as a left-right direction of the subject, the second direction is referred to as a front-back direction of the subject, and a deviation between a position of the subject currently captured in a live image captured by the photographing device and a preset position of the live image is referred to as a first deviation.

The first deviation may include a deviation magnitude and/or a deviation direction between the current position of the photographed target object in the real-time image captured by the shooting device and the preset position in the real-time image. Optionally, in some embodiments, the first deviation includes a deviation magnitude. Illustratively, after obtaining the deviation magnitude of the first deviation, the photographed target object may choose to move leftward by a distance equal to that magnitude; at this time, if the pan/tilt head still outputs the second prompt information, the photographed target object needs to move rightward by a distance twice the deviation magnitude, so that there is no longer a position deviation between its current position in the real-time image and the preset position in the real-time image. If the pan/tilt head stops outputting the second prompt information, or outputs prompt information indicating that there is no position deviation between the current position of the photographed target object in the real-time image and the preset position in the real-time image, the photographed target object stops moving left and right.

In some embodiments, the first deviation includes a deviation direction. After obtaining the deviation direction of the first deviation, the photographed target object moves in that direction until the pan/tilt head stops outputting the second prompt information, or outputs prompt information indicating that there is no position deviation between the current position of the photographed target object in the real-time image and the preset position in the real-time image, at which point the photographed target object stops moving left and right.

In some embodiments, the first deviation includes a deviation magnitude and a deviation direction. After obtaining both, the photographed target object moves in the deviation direction by a distance equal to the deviation magnitude, so that there is no position deviation between its current position in the real-time image and the preset position in the real-time image; the position of the photographed target object in the left-right direction is thus adjusted quickly.

In addition, optionally, when there is no position deviation between the position of the photographed target object currently in the real-time image captured by the photographing device and the preset position of the real-time image, the pan-tilt stops outputting the second prompt message; optionally, when there is no position deviation between the position of the photographed target object currently in the real-time image captured by the photographing device and the preset position of the real-time image, the cradle head outputs prompt information indicating that there is no position deviation between the position of the photographed target object currently in the real-time image captured by the photographing device and the preset position of the real-time image.
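A minimal sketch of computing the first deviation and turning it into the second prompt information is given below; the pixel coordinates, tolerance and the mapping from image-space deviation to a real-world movement direction are assumptions for illustration, since that mapping depends on the camera and display orientation.

# Illustrative sketch only: compute the first deviation between the subject's
# current position in the live image and the preset position, and derive a prompt.

def first_deviation(subject_x: float, preset_x: float, tolerance: float = 5.0):
    """Return (magnitude, direction) of the left-right deviation, or None if within tolerance."""
    delta = subject_x - preset_x          # positive: subject appears right of the preset position
    if abs(delta) <= tolerance:
        return None                       # no position deviation: stop the second prompt
    # The real-world direction corresponding to an image-space deviation is assumed here.
    direction = "move left" if delta > 0 else "move right"
    return abs(delta), direction

# Example: subject centre at x = 700 px, preset position at the image centre x = 640 px.
print(first_deviation(700.0, 640.0))   # (60.0, 'move left')
print(first_deviation(642.0, 640.0))   # None -> stop outputting the second prompt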

Further, in some embodiments, before outputting the second prompt message, the third prompt message is output, for example, the third prompt message may be output in a text/graphic manner through a display screen of the pan/tilt and/or the photographing apparatus, the third prompt message may be output through an indicator light of the pan/tilt and/or the photographing apparatus, or the third prompt message may be output in another manner.

The third prompt information is used for indicating that the photographed target object in the image has currently entered the real-time image of the shooting device, so that the photographed target object is prompted when it enters the frame.

In some embodiments, the fourth prompt information is output before the shooting device on the pan/tilt head is controlled to shoot according to the shooting instruction, for example, the fourth prompt information may be output in a text/graphic manner through a display screen of the pan/tilt head and/or the shooting device, may be output through an indicator lamp of the pan/tilt head and/or the shooting device, or may be output in another manner.

The fourth prompt information is used for indicating the current size of the photographed target object in the real-time image of the shooting device, so as to prompt the photographed target object with the expected second movement information. After obtaining the second movement information, the photographed target object can adjust its position in the real world, so that its size in each image is as consistent as possible with the expected composition size.

For example, the second movement information is determined based on the current size of the photographed target object in the real-time image captured by the shooting device; optionally, it is determined based on the deviation between that size and a preset size in the real-time image, that is, the expected composition size is the preset size. The expected composition sizes of the plurality of images may be the same, or the expected composition sizes of at least some of the images may be different. The preset size may be a default size, a size preset by the user before the first image is captured, or a size preset by the user before each image is captured. Optionally, the second movement information is movement information along the second direction, the second direction being the direction of the line connecting the photographed target object and the shooting device, for example the line connecting the center of the photographed target object and the center of the lens of the shooting device.

For convenience of description, the deviation between the size of the photographed target object currently in the real-time image captured by the photographing device and the preset size in the real-time image is referred to as a second deviation herein.

The second deviation may include a deviation magnitude and/or a deviation direction between the current size of the photographed target object in the real-time image captured by the shooting device and the preset size in the real-time image. Optionally, in some embodiments, the second deviation includes a deviation magnitude. Illustratively, after obtaining the deviation magnitude of the second deviation, the photographed target object may choose to move forward by a distance corresponding to that magnitude; at this time, if the pan/tilt head still outputs the fourth prompt information, the photographed target object needs to move backward by a distance twice as large, so that there is no longer a size deviation between its current size in the real-time image and the preset size in the real-time image. If the pan/tilt head stops outputting the fourth prompt information, or outputs prompt information indicating that there is no size deviation between the current size of the photographed target object in the real-time image and the preset size in the real-time image, the photographed target object stops moving back and forth.

In some embodiments, the second deviation includes a deviation direction. After obtaining the deviation direction of the second deviation, the photographed target object moves in that direction until the pan/tilt head stops outputting the fourth prompt information, or outputs prompt information indicating that there is no size deviation between the current size of the photographed target object in the real-time image and the preset size in the real-time image, at which point the photographed target object stops moving back and forth.

In some embodiments, the second deviation includes a deviation magnitude and a deviation direction. After obtaining both, the photographed target object moves in the deviation direction by a distance corresponding to the deviation magnitude, so that there is no size deviation between its current size in the real-time image and the preset size in the real-time image; the position of the photographed target object in the front-back direction is thus adjusted quickly.

In addition, optionally, when there is no size deviation between the size of the photographed target object currently in the real-time image captured by the photographing device and the preset size in the real-time image, the pan-tilt stops outputting the fourth prompt message; optionally, when there is no size deviation between the size of the photographed target object currently in the real-time image captured by the shooting device and the preset size in the real-time image, the cradle head outputs prompt information indicating that there is no size deviation between the size of the photographed target object currently in the real-time image captured by the shooting device and the preset size in the real-time image.
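A parallel sketch for the second deviation (size) and the fourth prompt information follows; the pixel heights, tolerance and the direction convention are illustrative assumptions.

# Illustrative sketch only: compute the second deviation between the subject's
# current size in the live image and the preset size, and derive a prompt.

def second_deviation(subject_height: float, preset_height: float,
                     tolerance: float = 10.0):
    """Return (magnitude, direction) of the size deviation, or None if within tolerance."""
    delta = subject_height - preset_height
    if abs(delta) <= tolerance:
        return None                         # no size deviation: stop the fourth prompt
    # A subject that appears too large should move away from the shooting device,
    # and one that appears too small should move closer (convention assumed here).
    direction = "move backward" if delta > 0 else "move forward"
    return abs(delta), direction

print(second_deviation(520.0, 480.0))   # (40.0, 'move backward')
print(second_deviation(470.0, 480.0))   # None -> stop outputting the fourth prompt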

It is understood that the above embodiments of prompting the photographed target object to perform position adjustment may be combined.

It should be noted that, when the pan/tilt head is a handheld pan/tilt head and the pan/tilt head is in a self-photographing mode, the pan/tilt head needs to be fixed on a tripod or other objects before the first image is taken, so as to prevent the pan/tilt head from shaking.

In addition, in some embodiments, after the shooting device on the holder is controlled to shoot according to the shooting instruction, images shot by the shooting device at a plurality of preset postures are spliced to obtain the target image. FIG. 3A is a target image obtained by stitching the first image, the second image, the third image, the fourth image and the fifth image obtained in the embodiment shown in FIG. 2A; FIG. 3B is a target image obtained by stitching the three images obtained in the embodiment shown in FIG. 2B; fig. 3C is a target image obtained by stitching the nine images obtained in the embodiment shown in fig. 2C. It can be understood that the existing image stitching algorithm can be adopted to stitch a plurality of images, and the stitching algorithm is not specifically limited in the present application.
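Since the present application does not limit the stitching algorithm, any existing implementation may be used; as one possible sketch, OpenCV's high-level stitcher could be applied to the captured images as follows. The file names are hypothetical, and this is only an example of an existing library rather than the method claimed here.

import cv2  # OpenCV, used here only as one example of an existing stitching library

# Illustrative sketch only: stitch the images captured at the plurality of preset
# postures into the target image.

def stitch_images(paths):
    images = [cv2.imread(p) for p in paths]
    if any(img is None for img in images):
        raise FileNotFoundError("failed to read one of the captured images")
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, target_image = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return target_image

# Example usage (hypothetical file names):
# target = stitch_images(["img_1.jpg", "img_2.jpg", "img_3.jpg", "img_4.jpg", "img_5.jpg"])
# cv2.imwrite("target_image.jpg", target)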

It should be noted that the stitching process may also be performed by the shooting device or by another device other than the pan/tilt head.

For example, for a cradle head integrated with a shooting device, the cradle head may communicate with other devices, and after acquiring images of a plurality of preset postures, the cradle head may send the images of the plurality of preset postures to the other devices, and the images of the plurality of preset postures are spliced by the other devices to obtain a target image.

For a pan-tilt without an integrated shooting device, if the shooting device is a mobile phone or a camera, after the pan-tilt acquires images of a plurality of preset postures, the images of the plurality of preset postures can be sent to the mobile phone or the camera, and the images of the plurality of preset postures are spliced by the mobile phone or the camera to acquire a target image.

It will be appreciated that it is possible for the pan/tilt head to communicate with other devices, whether or not the pan/tilt head is integrated with a camera. For example, the pan/tilt head is integrated with a shooting device, and the pan/tilt head can be connected with an electronic device such as a mobile phone wirelessly or by wire so as to perform image splicing, image display or control on the shooting device and the pan/tilt head through the electronic device. For another example, the pan/tilt head is not integrated with a shooting device, the shooting device is a camera, and the pan/tilt head can be connected with an electronic device such as a mobile phone through a wired or wireless connection, so as to perform image splicing, image display or control on the camera and the pan/tilt head through the electronic device.

Further, in some embodiments, the target image is output for display. The target image may be displayed on a display screen of the pan/tilt head; for a pan/tilt head that is not itself provided with a display screen, the target image may be displayed through an external display device, which may be the shooting device mounted on the pan/tilt head, or another device that communicates with the pan/tilt head and has a display function.

Further, an embodiment of the present application may also provide an image acquisition method. The execution body of the image acquisition method of the embodiment of the present application may be a pan/tilt head, for example a controller of the pan/tilt head or another controller provided in the pan/tilt head; of course, the execution body is not limited to the pan/tilt head, and may also be a shooting device mounted on the pan/tilt head, or another electronic device independent of the pan/tilt head or the shooting device, such as a control device of the pan/tilt head or the shooting device. Referring to fig. 4, the image acquisition method of the embodiment of the present application may include steps S401 to S402:

in S401, when the pan/tilt head is in a preset shooting mode, a plurality of images captured by a shooting device on the pan/tilt head are acquired, wherein the shooting of each image is triggered by a shooting instruction input by a user, the preset shooting mode is used for indicating a plurality of preset postures of the pan/tilt head, and the shooting instruction is acquired at each preset posture;

in S402, the plurality of images are stitched to obtain a target image.
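Taken together, S401 and S402 can be summarized in the following sketch; acquire_image() and stitch() are hypothetical placeholders for the interfaces already described above.

# Illustrative sketch only, combining S401 and S402: acquire one user-triggered
# image per preset posture while the pan/tilt head is in the preset shooting
# mode, then stitch the plurality of images into the target image.

def acquire_target_image(preset_postures, acquire_image, stitch):
    # S401: one image per preset posture, each triggered by a user shooting instruction
    images = [acquire_image(posture) for posture in preset_postures]
    # S402: stitch the plurality of images to obtain the target image
    return stitch(images)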

For a specific description of the embodiment shown in fig. 4, reference may be made to the embodiments shown in fig. 1 to 3A, which are not described in detail again here.

Corresponding to the shooting control method of the above embodiment, an embodiment of the present application provides a shooting control apparatus, please refer to fig. 5, the shooting control apparatus of the embodiment of the present application may include a storage device and a processor, and the processor may include one or more processors.

Wherein the storage device is used for storing program instructions. The storage device stores a computer program of executable instructions of the photographing control method, and the storage device may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the photographing control apparatus may cooperate with a network storage apparatus that performs a storage function of the memory through a network connection. The memory may be an internal storage unit of the photographing control apparatus, such as a hard disk or a memory of the photographing control apparatus. The memory may also be an external storage device of the photographing control apparatus, such as a plug-in hard disk provided on the photographing control apparatus, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory may also include both an internal storage unit of the photographing control apparatus and an external storage device. The memory is used for storing computer programs and other programs and data required by the device. The memory may also be used to temporarily store data that has been output or is to be output.

The one or more processors invoke the program instructions stored in the storage device, and when the program instructions are executed, the one or more processors are individually or collectively configured to perform the following operations: when the pan/tilt head is in a preset shooting mode, acquiring a plurality of images captured by a shooting device on the pan/tilt head, wherein the shooting of each image is triggered by a shooting instruction input by a user, the preset shooting mode is used for indicating a plurality of preset postures of the pan/tilt head, and the shooting instruction is acquired at each preset posture; and stitching the plurality of images to obtain a target image.

The processor of this embodiment can implement the shooting control method according to the embodiment shown in fig. 1 of this application, and the shooting control apparatus of this embodiment will be described with reference to the shooting control method of the above embodiment.

The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

Corresponding to the image acquisition method in the above embodiments, an embodiment of the present application provides an image acquisition apparatus, and the image acquisition apparatus in the embodiment of the present application may include a storage device and a processor, and the processor may include one or more processors. The one or more processors are configured to implement the image acquisition methods of the above embodiments.

The description of the storage device and the processor in the image acquisition device may refer to the corresponding description of the storage device and the processor in the photographing control device, and will not be repeated herein.

Further, corresponding to the shooting control apparatus of the above embodiment, an embodiment of the present application further provides a pan/tilt head. Referring to fig. 2A and fig. 5, the pan/tilt head 100 of the embodiment of the present application may include a bearing portion 110 and the shooting control apparatus of the above embodiment. The bearing portion 110 is used to carry the shooting device 200, and the shooting control apparatus can communicate with the shooting device.

The cradle head 100 may be a handheld cradle head, or an airborne cradle head.

The photographing device 200 may be an electronic device having a photographing function, such as a mobile phone or a camera, or may be an image sensor integrated on a cradle head.

The embodiment of the present application further provides a cradle head corresponding to the image acquisition apparatus of the above embodiment, and the cradle head of the embodiment of the present application may include a bearing portion and the image acquisition apparatus of the above embodiment. The bearing part is used for carrying the shooting device, and the image acquisition device can communicate with the shooting device.

The pan/tilt head may be a handheld pan/tilt head, or an airborne pan/tilt head.

The shooting device can be an electronic device with a shooting function, such as a mobile phone or a camera, and can also be an image sensor integrated on a holder.

Further, corresponding to the cradle head 100 including the shooting control device in the above embodiment, the embodiment of the present application further provides a shooting system, please refer to fig. 6 again, and the shooting system in the embodiment of the present application may include the cradle head 100 in the above embodiment and the shooting device 200 mounted on the cradle head.

Corresponding to the pan/tilt head including the image acquisition apparatus of the above embodiment, an embodiment of the present application further provides a shooting system, and the shooting system of the embodiment of the present application may include the pan/tilt head of the above embodiment and a shooting device mounted on the pan/tilt head.

The embodiment of the present application further provides a shooting system corresponding to the image acquisition apparatus of the above embodiment, and the shooting system of the embodiment of the present application may include a pan-tilt and the image acquisition apparatus of the above embodiment. The holder is used for carrying an image acquisition device, and the image acquisition device has a shooting function.

Further, corresponding to the photographing control method of the above-described embodiment, an embodiment of the present application also provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the photographing control method of the above-described embodiment.

Corresponding to the image acquisition method of the above embodiment, an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the image acquisition method of the above embodiment.

The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of the cradle head according to any of the foregoing embodiments. The computer readable storage medium may also be an external storage device of the cradle head, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), and the like, provided on the device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the pan/tilt head. The computer-readable storage medium is used for storing the computer program and other programs and data required by the head, and may also be used for temporarily storing data that has been output or is to be output.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

The above disclosure describes only some embodiments of the present application and certainly should not be construed as limiting the scope of the present application; equivalent variations made according to the claims of the present application therefore still fall within the scope of the present application.
