Multi-camera module focusing method and device

Document No.: 38409    Publication date: 2021-09-24

Note: This technology, "Multi-camera module focusing method and device", was designed and created by 吴亮, 王永益, 祝清瑞, 王军 and 王妙锋 on 2020-03-23. Its main content is as follows: The application discloses a method and a device for focusing multiple camera modules, which help improve the user experience when multiple camera modules focus. The method is applied to a processor in a terminal device. The terminal device further comprises a first camera module and a second camera module; the first camera module comprises a first motor, a first photoreceptor and a first lens, and the second camera module comprises a second motor, a second photoreceptor and a second lens. The method comprises: acquiring a first quasi-focus motor code, where the first quasi-focus motor code is the motor code used by the first motor to adjust the distance between the first lens and the first photoreceptor to complete focusing; predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code; and applying the first predicted quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor.

1. A multi-camera module focusing method, characterized in that the method is applied to a processor in terminal equipment, wherein the terminal equipment further comprises a first camera module and a second camera module, the first camera module comprises a first motor, a first photoreceptor and a first lens, and the second camera module comprises a second motor, a second photoreceptor and a second lens; the method comprises:

acquiring a first quasi-focus motor code; wherein the first quasi-focus motor code is the motor code used when the first motor adjusts the distance between the first lens and the first photoreceptor to complete focusing;

predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code;

and applying the first predicted quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor.

2. The method of claim 1, further comprising:

the second camera module performs auto focus starting from a second distance; wherein the second distance is the distance between the second lens and the second photoreceptor after the first predicted quasi-focus motor code is applied to the second motor to adjust the distance between the second lens and the second photoreceptor.

3. The method of claim 1 or 2, wherein the obtaining the first focus motor code comprises:

acquiring the first quasi-focus motor code based on at least one of phase information, depth information or contrast information in an image of a first object to be shot acquired by the first camera module.

4. The method according to any one of claims 1-3, wherein predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code comprises:

acquiring a quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance, and the object distance is the distance between an object to be shot and the terminal equipment;

and taking the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

5. The method according to any one of claims 1-3, wherein predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code comprises:

acquiring a quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance and in a preset state, and the preset state comprises at least one of a preset temperature or a preset coordinate system; the object distance is the distance between an object to be shot and the terminal equipment;

when the current state of the second camera module is the same as the preset state of the second camera module, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code;

when the current state of the second camera module is different from the preset state of the second camera module, the obtained quasi-focus motor code of the second camera module is corrected into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as the first predicted quasi-focus motor code.

6. The method according to any one of claims 1-3, wherein predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code comprises:

when the current state of the first camera module is different from the preset state of the first camera module, correcting the first quasi-focus motor code into a second quasi-focus motor code according to the current state of the first camera module; the second quasi-focus motor code is a quasi-focus motor code of the first camera module in the preset state; the preset state comprises at least one of a preset temperature or a preset coordinate system;

when the current state of the first camera module is the same as the preset state of the first camera module, taking the first quasi-focus motor code as the second quasi-focus motor code;

acquiring a quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance and in the preset state; the object distance is the distance between an object to be shot and the terminal equipment;

and taking the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

7. The method of claim 6, wherein the using the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code comprises:

when the current state of the second camera module is the same as the preset state of the second camera module, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code;

the method further comprises the following steps:

when the current state of the second camera module is different from the preset state of the second camera module, the obtained quasi-focus motor code of the second camera module is corrected into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as the first predicted quasi-focus motor code.

8. The method according to claim 5 or 7, wherein correcting the obtained quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state comprises:

acquiring a temperature difference between the current temperature of the second camera module and a preset temperature of the second camera module, and marking the acquired temperature difference as a target temperature difference;

acquiring a quasi-focus motor code difference of the second camera module corresponding to the target temperature difference according to the corresponding relation between the temperature differences of the second camera module and the motor code differences of the second camera module;

and correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference.

9. The method according to claim 5 or 7, wherein correcting the obtained quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state comprises:

acquiring a deviation between a current coordinate system of the second camera module and a preset coordinate system of the second camera module, and marking the acquired deviation as a target deviation;

acquiring a focus motor code difference of the second camera module corresponding to the target deviation according to the corresponding relation between the coordinate system deviations of the second camera module and the motor code differences of the second camera module;

and correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference of the second camera module.

10. The method according to any one of claims 4-9, further comprising:

acquiring a third quasi-focus motor code and a third quasi-focus distance; wherein the third quasi-focus motor code and the third quasi-focus distance are, respectively, the quasi-focus motor code of the first camera module and the difference between the image distance of the first camera module and the focal length of the first camera module when the distance between an object to be shot and the terminal equipment is a first object distance and the first camera module completes focusing;

acquiring the first object distance based on the third quasi-focus distance;

acquiring a second quasi-focus distance based on the first object distance; wherein the second quasi-focus distance is the difference between the image distance of the second camera module and the focal length of the second camera module when the distance between the object to be shot and the terminal equipment is the first object distance and the second camera module completes focusing;

according to the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of quasi-focus distances, acquiring the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance, and taking the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance as a fourth quasi-focus motor code; each of the plurality of quasi-focal distances is a difference between an image distance of the second camera module and a focal length of the second camera module;

establishing a corresponding relation between the third quasi-focus motor code and the fourth quasi-focus motor code; wherein the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module comprises the corresponding relation between the third quasi-focus motor code and the fourth quasi-focus motor code.

11. A focusing device, characterized in that the device is applied to terminal equipment, wherein the terminal equipment further comprises a first camera module and a second camera module, the first camera module comprises a first motor, a first photoreceptor and a first lens, and the second camera module comprises a second motor, a second photoreceptor and a second lens; the device comprises:

an acquisition unit, configured to acquire a first quasi-focus motor code; wherein the first quasi-focus motor code is the motor code used when the first motor adjusts the distance between the first lens and the first photoreceptor to complete focusing;

a prediction unit, configured to predict the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code;

and an adjusting unit, configured to apply the first predicted quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor.

12. The apparatus of claim 11, further comprising:

a focusing unit, configured to cause the second camera module to perform auto focus starting from a second distance; wherein the second distance is the distance between the second lens and the second photoreceptor after the first predicted quasi-focus motor code is applied to the second motor to adjust the distance between the second lens and the second photoreceptor.

13. The apparatus according to claim 11 or 12, wherein the obtaining unit is specifically configured to:

and acquiring the first focusing motor code based on at least one of phase information, depth information or contrast information in the image of the first object to be shot acquired by the first camera module.

14. The apparatus according to any of claims 11-13, wherein the obtaining unit is further configured to:

acquiring a quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance, and the object distance is the distance between an object to be shot and the terminal equipment;

the prediction unit is specifically configured to use the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

15. The apparatus according to any one of claims 11-13, wherein the obtaining unit is further configured to:

acquiring a quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance and in a preset state, and the preset state comprises at least one of a preset temperature or a preset coordinate system; the object distance is the distance between an object to be shot and the terminal equipment;

the apparatus further comprises a correction unit for:

when the current state of the second camera module is the same as the preset state of the second camera module, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code;

when the current state of the second camera module is different from the preset state of the second camera module, the obtained quasi-focus motor code of the second camera module is corrected into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as the first predicted quasi-focus motor code.

16. The apparatus according to any of claims 11-13, further comprising a correction unit configured to:

when the current state of the first camera module is different from the preset state of the first camera module, correcting the first quasi-focus motor code into a second quasi-focus motor code according to the current state of the first camera module; the second quasi-focus motor code is a quasi-focus motor code of the first camera module in the preset state; the preset state comprises at least one of a preset temperature or a preset coordinate system;

when the current state of the first camera module is the same as the preset state of the first camera module, taking the first quasi-focus motor code as the second quasi-focus motor code;

the acquisition unit is further configured to:

acquiring a quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on a corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and a corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances; the quasi-focus motor codes with the corresponding relation are quasi-focus motor codes of the corresponding camera modules at the same object distance and in the preset state; the object distance is the distance between an object to be shot and the terminal equipment;

the prediction unit is specifically configured to take the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

17. The apparatus of claim 16, wherein the prediction unit is further configured to:

when the current state of the second camera module is the same as the preset state of the second camera module, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code;

the correction unit is specifically configured to:

when the current state of the second camera module is different from the preset state of the second camera module, the obtained quasi-focus motor code of the second camera module is corrected into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as the first predicted quasi-focus motor code.

18. The apparatus according to claim 15 or 17, wherein the obtaining unit is further configured to:

acquiring a temperature difference between the current temperature of the second camera module and a preset temperature of the second camera module, and marking the acquired temperature difference as a target temperature difference;

acquiring a motor code difference of the second camera module corresponding to the target temperature difference according to the corresponding relation between the temperature differences of the second camera module and the motor code differences of the second camera module;

the correction unit is specifically configured to correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference.

19. The apparatus according to claim 15 or 17, wherein the obtaining unit is further configured to:

acquiring a deviation between a current coordinate system of the second camera module and a preset coordinate system of the second camera module, and marking the acquired deviation as a target deviation;

acquiring a focus motor code difference of the second camera module corresponding to the target deviation according to the corresponding relation between the coordinate system deviations of the second camera module and the motor code differences of the second camera module;

the correction unit is specifically configured to correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference.

20. The apparatus according to any of claims 14-19, wherein the obtaining unit is further configured to:

acquiring a third quasi-focus motor code and a third quasi-focus distance; wherein the third quasi-focus motor code and the third quasi-focus distance are, respectively, the quasi-focus motor code of the first camera module and the difference between the image distance of the first camera module and the focal length of the first camera module when the distance between an object to be shot and the terminal equipment is a first object distance and the first camera module completes focusing;

acquiring the first object distance based on the third quasi-focus distance;

acquiring a second quasi-focus distance based on the first object distance; wherein the second quasi-focus distance is the difference between the image distance of the second camera module and the focal length of the second camera module when the distance between the object to be shot and the terminal equipment is the first object distance and the second camera module completes focusing;

according to the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of quasi-focus distances, acquiring the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance, and taking the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance as a fourth quasi-focus motor code; each of the plurality of quasi-focal distances is a difference between an image distance of the second camera module and a focal length of the second camera module;

the device further comprises an establishing unit, configured to establish a corresponding relation between the third quasi-focus motor code and the fourth quasi-focus motor code; wherein the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module comprises the corresponding relation between the third quasi-focus motor code and the fourth quasi-focus motor code.

21. A multi-camera module focusing device, characterized by comprising: a memory, configured to store a computer program; and a processor, configured to execute the computer program to perform the method of any one of claims 1-10.

22. A computer-readable storage medium, having stored thereon a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1-10.

Technical Field

The application relates to the technical field of terminals, in particular to a multi-camera module focusing method and device.

Background

Currently, when a terminal device starts a plurality of camera modules (e.g., auto focus (AF) camera modules), each camera module needs to achieve auto focus on its own. A camera module includes a lens, a photoreceptor, a motor, and the like.

The specifications of different camera modules differ. In general, a camera module with a high phase-detection pixel density is considered to have a higher specification than one with a low phase-detection pixel density. A camera module with a poor specification generally needs to focus using contrast detection auto focus (CDAF). As shown in fig. 1, the ordinate represents the image contrast at different distances between the lens and the photoreceptor, and the abscissa represents the motor code. Image contrasts 1-7 in fig. 1 represent the contrast of the image of the object to be photographed obtained by the camera module after the motor in the camera module uses different motor codes to adjust the distance between the lens and the photoreceptor. After the processor in the terminal device in fig. 1 acquires image contrast 6, it adjusts the distance between the lens and the photoreceptor back toward the distance corresponding to image contrast 5; this back-and-forth adjustment of the distance between the lens and the photoreceptor is called pulling the bellows (focus hunting).
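For illustration, the following Python sketch shows a minimal contrast-detection search of the kind described above; measure_contrast() is a hypothetical callback returning the image contrast obtained at a given motor code, and the step sizes are assumed values. The coarse sweep overshoots the contrast peak and must step back, which is the back-and-forth "bellows" behaviour.

```python
# Minimal contrast-detection auto focus (CDAF) sketch.
# measure_contrast(motor_code) is a hypothetical driver callback returning the
# image contrast obtained after the motor moves the lens to that code.
def cdaf_search(measure_contrast, code_min=0, code_max=1023,
                coarse_step=64, fine_step=8):
    best_code = code_min
    best_contrast = measure_contrast(code_min)
    code = code_min
    # Coarse sweep: keep moving while contrast rises; the first drop means the
    # peak was overshot (e.g. contrast 6 measured after contrast 5).
    while code + coarse_step <= code_max:
        code += coarse_step
        contrast = measure_contrast(code)
        if contrast < best_contrast:
            break
        best_code, best_contrast = code, contrast
    # Step back and fine-sweep around the best coarse position ("bellows").
    for fine_code in range(max(code_min, best_code - coarse_step),
                           min(code_max, best_code + coarse_step) + 1, fine_step):
        contrast = measure_contrast(fine_code)
        if contrast > best_contrast:
            best_code, best_contrast = fine_code, contrast
    return best_code
```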

Consequently, each camera module in the terminal device needs to achieve auto focus on its own. A camera module with a poor specification focuses more slowly than a camera module with a high specification, its probability of failing to focus (defocus) is larger, and the bellows problem described above can also occur, so the auto-focus experience of the multi-camera module setup is poor.

Disclosure of Invention

The embodiment of the application provides a method and a device for focusing a plurality of camera modules, which are beneficial to improving the user experience when the plurality of camera modules are focused.

In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:

In a first aspect, a multi-camera module focusing method is provided, which is applied to a processor in a terminal device. The terminal device further includes a first camera module and a second camera module; the first camera module includes a first motor, a first photoreceptor and a first lens, and the second camera module includes a second motor, a second photoreceptor and a second lens. The method includes: acquiring a first quasi-focus motor code, where the first quasi-focus motor code is the motor code used by the first motor to adjust the distance between the first lens and the first photoreceptor to complete focusing; predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code; and applying the first predicted quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor. In this way, when one adjustment of the distance between the second lens and the second photoreceptor by the second motor already completes focusing (for example, when the first predicted quasi-focus motor code is the quasi-focus motor code of the second camera module in its current state), the focusing efficiency of the second camera module is greatly improved. When one adjustment of the distance between the second lens and the second photoreceptor by the second motor cannot complete focusing, the second camera module starts focusing from the adjusted distance between the second lens and the second photoreceptor, which narrows the range over which the clear-imaging distance between the second lens and the second photoreceptor must be searched during focusing, thereby also improving the focusing efficiency of the second camera module. Optionally, the processor uses the camera module with the higher specification among the two camera modules included in the terminal device as the first camera module. A camera module with a higher specification has higher focusing efficiency. Therefore, by using the camera module with the higher specification as the first camera module, the processor can use the quasi-focus motor code of the higher-specification camera module to predict the quasi-focus motor code of the lower-specification camera module. On the one hand, this improves the focusing efficiency of the lower-specification camera module; on the other hand, when the lower-specification camera module cannot complete focusing with its own hardware, it can still complete focusing with the help of the quasi-focus motor code of the higher-specification camera module. This improves the user experience when multiple camera modules focus.
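As a minimal sketch of this first-aspect flow (the camera-module methods get_in_focus_code(), apply_motor_code(), is_in_focus() and auto_focus(), as well as the predict_second_code() mapping, are hypothetical placeholders, not names from the disclosure):

```python
# Sketch of the first-aspect flow; the module methods and the
# predict_second_code() mapping are hypothetical placeholders.
def focus_second_module(first_module, second_module, predict_second_code):
    # Step 1: quasi-focus motor code of the first camera module.
    first_code = first_module.get_in_focus_code()
    # Step 2: predict the second module's quasi-focus motor code from it.
    predicted_code = predict_second_code(first_code)
    # Step 3: apply the predicted code to the second motor.
    second_module.apply_motor_code(predicted_code)
    # If one adjustment did not complete focusing, fall back to the second
    # module's own auto focus, starting from the already-adjusted position.
    if not second_module.is_in_focus():
        second_module.auto_focus(start_code=predicted_code)
```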

According to the first aspect, in a first possible implementation manner of the first aspect, the method further includes: the second camera module starts auto focus from a second distance. The second distance is the distance between the second lens and the second photoreceptor after the first predicted quasi-focus motor code has been applied to the second motor to adjust the distance between the second lens and the second photoreceptor. In this way, even if the second camera module does not complete focusing after one adjustment of the distance between the second lens and the second photoreceptor, starting from the second distance narrows the range over which the second camera module must search for the clear-imaging distance between the second lens and the second photoreceptor, thereby improving the focusing efficiency of the second camera module.
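A minimal sketch of this fall-back auto focus starting from the second distance, assuming the same hypothetical measure_contrast() callback as above and an assumed search_window tuning parameter; the point is that the search is confined to a window around the predicted code rather than the full motor-code range:

```python
# Reduced-range auto focus starting from the predicted code ("second distance").
# measure_contrast() is the hypothetical callback used above; search_window is
# an assumed tuning parameter, far smaller than the full 0-1023 code range.
def auto_focus_from_prediction(measure_contrast, predicted_code,
                               search_window=64, step=8):
    best_code = predicted_code
    best_contrast = measure_contrast(predicted_code)
    for code in range(max(0, predicted_code - search_window),
                      min(1023, predicted_code + search_window) + 1, step):
        contrast = measure_contrast(code)
        if contrast > best_contrast:
            best_code, best_contrast = code, contrast
    return best_code
```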

According to the first aspect to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, acquiring the first quasi-focus motor code includes: acquiring the first quasi-focus motor code based on at least one of phase information, depth information or contrast information in an image of a first object to be photographed acquired by the first camera module.

According to the first aspect to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain the first predicted quasi-focus motor code includes: acquiring the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of quasi-focus motor codes of the second camera module, or based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and a correspondence between a plurality of quasi-focus motor codes of the second camera module and a plurality of object distances. Quasi-focus motor codes that are in correspondence are the quasi-focus motor codes of the respective camera modules at the same object distance, and the object distance is the distance between the object to be photographed and the terminal device. The acquired quasi-focus motor code of the second camera module is taken as the first predicted quasi-focus motor code.
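A minimal sketch of the two lookup routes described above, using made-up calibration tables (the motor-code and object-distance values are illustrative only):

```python
# Illustrative calibration tables (values are made up).
CODE1_TO_CODE2 = {300: 280, 420: 390, 560: 530}        # module 1 code -> module 2 code
CODE1_TO_DISTANCE = {300: 5.0, 420: 0.35, 560: 0.15}   # module 1 code -> object distance (m)
DISTANCE_TO_CODE2 = {5.0: 280, 0.35: 390, 0.15: 530}   # object distance (m) -> module 2 code

def predict_code_direct(first_code):
    # Route A: code-to-code correspondence, nearest calibrated entry.
    nearest = min(CODE1_TO_CODE2, key=lambda c: abs(c - first_code))
    return CODE1_TO_CODE2[nearest]

def predict_code_via_object_distance(first_code):
    # Route B: module 1 code -> object distance -> module 2 code.
    nearest = min(CODE1_TO_DISTANCE, key=lambda c: abs(c - first_code))
    object_distance = CODE1_TO_DISTANCE[nearest]
    return DISTANCE_TO_CODE2[object_distance]
```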

According to the first aspect to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain the first predicted quasi-focus motor code includes: acquiring the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of quasi-focus motor codes of the second camera module, or based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and a correspondence between a plurality of quasi-focus motor codes of the second camera module and a plurality of object distances. Quasi-focus motor codes that are in correspondence are the quasi-focus motor codes of the respective camera modules at the same object distance and in a preset state, where the preset state includes at least one of a preset temperature or a preset coordinate system, and the object distance is the distance between the object to be photographed and the terminal device. When the current state of the second camera module is the same as the preset state of the second camera module, the acquired quasi-focus motor code of the second camera module is taken as the first predicted quasi-focus motor code. When the current state of the second camera module differs from the preset state of the second camera module, the acquired quasi-focus motor code of the second camera module is corrected into the quasi-focus motor code of the second camera module in its current state, and the corrected quasi-focus motor code is taken as the first predicted quasi-focus motor code. In this way, the processor corrects the predicted quasi-focus motor code of the second camera module in the preset state into the quasi-focus motor code of the second camera module in its current state according to the current state of the second camera module, which compensates for the prediction deviation caused by the difference between the current state and the preset state of the second camera module, so that the first predicted quasi-focus motor code is closer to the true quasi-focus motor code of the second camera module in its current state.

According to the first aspect to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, predicting the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain the first predicted quasi-focus motor code includes: when the current state of the first camera module differs from the preset state of the first camera module, correcting the first quasi-focus motor code into a second quasi-focus motor code according to the current state of the first camera module, where the second quasi-focus motor code is the quasi-focus motor code of the first camera module in the preset state, and the preset state includes at least one of a preset temperature or a preset coordinate system. When the current state of the first camera module is the same as the preset state of the first camera module, the first quasi-focus motor code is taken as the second quasi-focus motor code. The quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code is then acquired based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of quasi-focus motor codes of the second camera module, or based on a correspondence between a plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and a correspondence between a plurality of quasi-focus motor codes of the second camera module and a plurality of object distances. Quasi-focus motor codes that are in correspondence are the quasi-focus motor codes of the respective camera modules at the same object distance and in the preset state; the preset state includes at least one of a preset temperature or a preset coordinate system, and the object distance is the distance between the object to be photographed and the terminal device. The acquired quasi-focus motor code of the second camera module is taken as the first predicted quasi-focus motor code. In this way, before predicting the quasi-focus motor code of the second camera module in its current state, the processor corrects the first quasi-focus motor code into the quasi-focus motor code of the first camera module in the preset state, which compensates for the prediction error caused by the deviation between the current state and the preset state of the first camera module, so that the first predicted quasi-focus motor code is closer to the true quasi-focus motor code of the second camera module in its current state.
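A minimal sketch of normalising the first quasi-focus motor code to the preset state before the lookup, assuming the preset state is a preset temperature and that delta_code_for_temp() is a hypothetical per-module calibration returning the motor-code shift caused by a temperature difference:

```python
# Normalise the first module's quasi-focus motor code to its preset state
# before the correspondence lookup. delta_code_for_temp() is a hypothetical
# per-module calibration returning the code shift caused by a temperature
# difference relative to the preset temperature.
def normalise_first_code(first_code, current_temp, preset_temp, delta_code_for_temp):
    if current_temp == preset_temp:
        # Current state equals the preset state: use the code as-is.
        return first_code
    # Remove the temperature-induced shift to obtain the second quasi-focus
    # motor code, i.e. the code of the first module in its preset state.
    return first_code - delta_code_for_temp(current_temp - preset_temp)
```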

According to the first aspect to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code includes: when the current state of the second camera module is the same as the preset state of the second camera module, taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code. The method further includes: when the current state of the second camera module differs from the preset state of the second camera module, correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in its current state, and taking the corrected quasi-focus motor code as the first predicted quasi-focus motor code. In this way, the processor compensates both for the deviation between the quasi-focus motor codes of the first camera module in its preset state and in its current state, and for the deviation between the quasi-focus motor codes of the second camera module in its preset state and in its current state, so that the obtained first predicted quasi-focus motor code is closer to the true quasi-focus motor code of the second camera module in its current state than in the foregoing implementation manners. This improves the user experience when the second camera module focuses.

According to the first aspect to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state includes: acquiring the temperature difference between the current temperature of the second camera module and the preset temperature of the second camera module, and recording the acquired temperature difference as a target temperature difference. Acquiring, according to the correspondence between a plurality of temperature differences of the second camera module and a plurality of motor code differences of the second camera module, the quasi-focus motor code difference of the second camera module corresponding to the target temperature difference. Correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference.
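A minimal sketch of this temperature correction, assuming an illustrative calibration table that maps the temperature difference of the second camera module (in degrees Celsius) to its quasi-focus motor code difference:

```python
# Illustrative table: temperature difference (degrees C) of the second camera
# module -> quasi-focus motor code difference of the second camera module.
TEMP_DIFF_TO_CODE_DIFF = {-20: -18, -10: -9, 0: 0, 10: 9, 20: 18}

def correct_for_temperature(preset_state_code, current_temp, preset_temp):
    target_temp_diff = current_temp - preset_temp
    # Nearest calibrated temperature difference.
    nearest = min(TEMP_DIFF_TO_CODE_DIFF, key=lambda t: abs(t - target_temp_diff))
    code_diff = TEMP_DIFF_TO_CODE_DIFF[nearest]
    # Shift the preset-state code to the current-state code.
    return preset_state_code + code_diff
```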

According to the first aspect to the seventh possible implementation manner of the first aspect, in an eighth possible implementation manner of the first aspect, correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state includes: acquiring the deviation between the current coordinate system of the second camera module and the preset coordinate system of the second camera module, and recording the acquired deviation as a target deviation. Acquiring, according to the correspondence between a plurality of coordinate system deviations of the second camera module and a plurality of motor code differences of the second camera module, the quasi-focus motor code difference of the second camera module corresponding to the target deviation. Correcting the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired quasi-focus motor code difference.
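A minimal sketch of the coordinate-system correction, under the assumption (for illustration only, not stated in the disclosure) that the deviation is expressed as an angle between the current and preset orientations and that a calibration table maps that deviation to a motor-code difference of the second camera module:

```python
# Illustrative table: coordinate-system deviation of the second camera module
# (assumed here to be an orientation angle in degrees) -> quasi-focus motor
# code difference of the second camera module.
DEVIATION_TO_CODE_DIFF = {0: 0, 45: 4, 90: 8, 135: 12, 180: 16}

def correct_for_coordinate_deviation(preset_state_code, deviation_deg):
    nearest = min(DEVIATION_TO_CODE_DIFF, key=lambda d: abs(d - deviation_deg))
    return preset_state_code + DEVIATION_TO_CODE_DIFF[nearest]
```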

According to the first aspect to the eighth possible implementation manner of the first aspect, in a ninth possible implementation manner of the first aspect, the method further includes: acquiring a third quasi-focus motor code and a third quasi-focus distance, where the third quasi-focus motor code and the third quasi-focus distance are, respectively, the quasi-focus motor code of the first camera module and the difference between the image distance of the first camera module and the focal length of the first camera module when the distance between the object to be photographed and the terminal device is a first object distance and the first camera module has completed focusing. The first object distance is acquired based on the third quasi-focus distance. A second quasi-focus distance is acquired based on the first object distance, where the second quasi-focus distance is the difference between the image distance of the second camera module and the focal length of the second camera module when the distance between the object to be photographed and the terminal device is the first object distance and the second camera module has completed focusing. According to a correspondence between a plurality of quasi-focus motor codes of the second camera module and a plurality of quasi-focus distances, the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance is acquired and taken as a fourth quasi-focus motor code, where each of the plurality of quasi-focus distances is a difference between the image distance of the second camera module and the focal length of the second camera module. A correspondence between the third quasi-focus motor code and the fourth quasi-focus motor code is then established, and the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module includes the correspondence between the third quasi-focus motor code and the fourth quasi-focus motor code.
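A minimal sketch of how one entry of that correspondence could be built from the thin-lens relation 1/f = 1/u + 1/v and lens shift = v - f (described in the terminology section below); f1 and f2 denote the focal lengths of the two modules, and shift2_to_code2() is an assumed calibration mapping the second camera module's quasi-focus distance to its quasi-focus motor code:

```python
# Build one (third, fourth) quasi-focus motor code pair. f1 and f2 are the
# focal lengths of the two modules (same length unit as the lens shifts);
# shift2_to_code2() is an assumed calibration mapping the second module's
# quasi-focus distance to its quasi-focus motor code.
def build_correspondence_entry(third_code, third_shift, f1, f2, shift2_to_code2):
    v1 = f1 + third_shift                 # image distance of the first module
    u = 1.0 / (1.0 / f1 - 1.0 / v1)       # first object distance from 1/f = 1/u + 1/v
    v2 = 1.0 / (1.0 / f2 - 1.0 / u)       # image distance of the second module at that object distance
    second_shift = v2 - f2                # second quasi-focus distance (lens shift)
    fourth_code = shift2_to_code2(second_shift)
    return third_code, fourth_code
```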

In a second aspect, a multi-camera module focusing apparatus is provided, which can be used to perform any one of the methods provided in any one of the possible implementations of the first aspect to the first aspect. By way of example, the apparatus may be a terminal device or chip or the like.

According to the second aspect, in a first possible implementation manner of the second aspect, the device may be divided into functional modules according to any one of the methods provided by the first aspect. For example, each functional unit may be divided for each function, or two or more functions may be integrated into one processing unit.

In a second possible implementation form of the second aspect, the apparatus may include a processor configured to perform any one of the methods provided by the first aspect.

In a third aspect, a computer-readable storage medium, such as a non-transitory computer-readable storage medium, is provided, having stored thereon a computer program (or instructions) which, when run on a computer, causes the computer to perform any one of the methods provided by the first aspect or any one of the possible implementation manners of the first aspect.

In a fourth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform any one of the methods provided by the first aspect or any one of the possible implementation manners of the first aspect.

In a fifth aspect, a chip is provided, which includes a processor configured to call, from a memory, and run a computer program stored in the memory, to perform any one of the methods provided by the first aspect or any one of the possible implementation manners of the first aspect.

It can be understood that any one of the multi-camera module focusing devices, computer-readable storage media, computer program products or chips provided above is used to perform the corresponding method provided above. Therefore, for the beneficial effects that it can achieve, reference may be made to the beneficial effects of the corresponding method, and details are not described herein again.

Drawings

Fig. 1 is a schematic diagram of a correspondence between motor codes and the image contrast of an image of an object to be photographed, acquired by a camera module to which the technical solution provided by the embodiments of the present application is applied;

fig. 2 is a schematic structural diagram of a terminal device to which the technical solution provided in the embodiment of the present application is applied;

fig. 3 is a diagram illustrating a correspondence between quasi-focus motor codes of a camera module and quasi-focus distances of the camera module;

fig. 4 is a schematic view illustrating a correspondence relationship between a plurality of temperatures of the camera module and a plurality of motor codes of the camera module according to an embodiment of the present disclosure;

fig. 5 is a schematic view of a coordinate system of a camera module according to an embodiment of the present disclosure;

fig. 6 is a schematic flow chart illustrating a process of establishing a code correspondence between focus motor codes of two camera modules according to an embodiment of the present application;

fig. 7 is a schematic flowchart of a focusing method for a multi-camera module according to an embodiment of the present disclosure;

fig. 8 is a schematic structural diagram of a focusing device with multiple camera modules according to an embodiment of the present disclosure.

Detailed Description

As shown in fig. 2, a schematic structural diagram of a terminal device to which the technical solution provided in the embodiment of the present application is applied is shown. The terminal device can be a mobile phone, a camera, a tablet computer, a palm computer or a notebook computer and the like. The terminal device 20 in fig. 2 includes but is not limited to: a camera 200, a processor 201, a memory 202, a display 203, an input unit 204, an interface unit 205, a power supply 206, and the like.

The camera 200 is configured to acquire an image of an object to be photographed and send the image to the processor 201. The camera 200 is also called a camera module.

The processor 201 is a control center of the terminal device, connects various parts of the whole terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 202 and calling data stored in the memory 202, thereby performing overall monitoring of the terminal device. Processor 201 may include one or more processing units; optionally, the processor 201 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 201.

The memory 202 may be used to store software programs as well as various data. The memory 202 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one functional unit, and the like. Further, the memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Alternatively, the memory 202 may be a non-transitory computer-readable storage medium, for example, a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

The display 203 is used to display information input by the user or information provided to the user. The display 203 may include a display panel, which may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.

The input unit 204 may include a Graphics Processing Unit (GPU) that processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display 203. The image frames processed by the graphics processor may be stored in the memory 202 (or other storage medium).

The interface unit 205 is an interface for connecting an external device to the terminal apparatus 20. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 205 may be used to receive input (e.g., data information, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 20 or may be used to transmit data between the terminal apparatus 20 and an external device.

A power source 206 (such as a battery) may be used to supply power to each component, and optionally, the power source 206 may be logically connected to the processor 201 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.

Optionally, the computer instructions in the embodiments of the present application may also be referred to as application program code or system, which is not specifically limited in the embodiments of the present application.

It should be noted that the terminal device shown in fig. 2 is only an example, and does not limit the terminal device to which the embodiment of the present application is applicable. In actual implementation, the terminal device may include more or fewer devices or components than those shown in fig. 2. For example, the terminal device may include more than 2 cameras in actual implementation.

In the following, some of the terms and techniques referred to in the examples of the present application are briefly described:

1) object distance, image distance, focal length

The object distance is the distance from the object to be shot to the optical center of the lens in the camera module. Since the distance from the object to be photographed to the lens in the camera module is much larger than the size of the terminal device, the object distance can also be approximately expressed as the distance between the object to be photographed and the terminal device.

The image distance is the distance from an image formed by the object to be shot to a lens in the camera module.

The focal length is the distance from the focal point to the lens.

The relation among the object distance, the image distance and the focal length is: 1/f = 1/u + 1/v, where f is the focal length, u is the object distance, and v is the image distance.

lens shift = v - f, where the lens shift represents the difference between the image distance of the camera module and the focal length of the camera module when the object distance is u and the camera module completes focusing.

Therefore, when the camera module completes focusing, the object distance can be calculated from the distance between the lens in the camera module and the photoreceptor. In fig. 3, 7 cm, 15 cm, 35 cm and 5 m represent object distances.
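A short worked example of these relations, with illustrative numbers only (a focal length of 5 mm and an in-focus lens shift of 0.16 mm):

```python
# Illustrative numbers: focal length f = 5 mm, in-focus lens shift = 0.16 mm.
f = 5.0                          # focal length (mm)
lens_shift = 0.16                # v - f (mm)
v = f + lens_shift               # image distance: 5.16 mm
u = 1.0 / (1.0 / f - 1.0 / v)    # object distance from 1/f = 1/u + 1/v
print(round(u, 1), "mm")         # ~161.3 mm, i.e. an object distance of roughly 16 cm
```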

2) Camera module, quasi-focus motor code, quasi-focus distance and camera module specification

The camera module comprises a lens, a photoreceptor, a motor and the like. The lens is used for collecting an image of an object to be shot.

The photoreceptor converts a light image on the photoreceptor into an electrical signal proportional to the light image using a photoelectric conversion function.

The motor may change the distance between the lens and the photoreceptor. The motor can be any one of a voice coil motor, an ultrasonic motor, a stepping motor, a memory alloy motor or other devices capable of changing the image distance of the camera module.

The drive circuit in the motor is a circuit with a control algorithm. It can convert the motor code into corresponding output electrical signals, one motor code for each output electrical signal. A drive circuit in the motor uses the output electrical signal to adjust the distance between the lens and the photoreceptor. The motor code may be a binary number. For example, if a quasi-focus motor code is described by 10 bits (bit), the quasi-focus motor code may have a value ranging from 0 to 1023.
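A minimal sketch of how a 10-bit motor code could be mapped to a drive signal, assuming for illustration a linear voice-coil driver with a 100 mA full-scale current; the real transfer curve is module-specific:

```python
# Map a 10-bit motor code to a drive current, assuming (for illustration) a
# linear driver with a 100 mA full-scale current.
def code_to_current_ma(motor_code, bits=10, full_scale_ma=100.0):
    max_code = (1 << bits) - 1                 # 1023 for a 10-bit code
    code = max(0, min(motor_code, max_code))   # clamp to the valid range
    return full_scale_ma * code / max_code
```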

The quasi-focus motor code of a camera module is: the motor code used by the motor in the camera module to adjust the distance between the lens in the camera module and the photoreceptor when the object distance is fixed and the camera module completes focusing.

The quasi-focus distance of a camera module is: the difference between the image distance of the camera module and the focal length of the camera module when the object distance is fixed and the camera module completes focusing.

The specifications of a camera module include: the pixel density used for phase detection, whether the camera module can utilize a depth device (such as a time-of-flight (TOF) device or a laser device) in the terminal device, the focal length of the camera module, the type of motor in the camera module, and the like. A camera module that can utilize a depth device in the terminal device has a higher specification than one that cannot. A camera module for which the depth information acquired by the depth device is closer to the real depth information has a higher specification than one for which the acquired depth information deviates more from the real depth information. A camera module whose motor is a closed-loop motor has a higher specification than one whose motor is an open-loop motor. A camera module whose quasi-focus motor code changes slowly with the temperature of the camera module has a higher specification than one whose quasi-focus motor code changes quickly with temperature (for example, when the lens of the camera module is a telephoto lens, its quasi-focus motor code changes quickly with temperature).

Fig. 3 is a schematic diagram showing a correspondence between the quasi-focus motor code of a camera module and the quasi-focus distance of the camera module. The ordinate in fig. 3 represents the quasi-focus distance in millimeters and the abscissa represents the quasi-focus motor code.

The correspondence shown in fig. 3 between the quasi-focus motor code of a camera module and the quasi-focus distance of that camera module can be obtained by fitting the quasi-focus motor codes and quasi-focus distances calibrated by the camera module manufacturer at a plurality of object distances. Generally, within a certain object-distance range, the quasi-focus motor code of the camera module and the quasi-focus distance of the camera module have a linear correspondence. Fig. 3 takes as an example the quasi-focus distances corresponding to the quasi-focus motor codes calibrated at four object distances (7 cm, 15 cm, 35 cm and 5 m), and fits the correspondence between the quasi-focus motor code and the quasi-focus distance of the camera module for the object-distance ranges 7 cm-15 cm, 15 cm-35 cm and 35 cm-5 m respectively. The quasi-focus motor code and the quasi-focus distance at the 7 cm point are those of the camera module when the object distance equals 7 cm. The 7 cm point is the closest focusing object distance (also called the macro distance) of the camera module; when the object distance is less than 7 cm, the distance between the lens of the camera module and the photoreceptor cannot be adjusted further. The 7 cm point can also be the minimum object distance to be calibrated for the camera module. Likewise, the quasi-focus motor code and the quasi-focus distance at the 5 m point are those of the camera module when the object distance equals 5 m; 5 m represents the calibrated infinite object distance. It should be noted that the closest and farthest calibrated focusing object distances differ between camera modules, and the above values are only an example and do not limit the present application.

3) Fitting

The fitting method is not limited in the embodiments of the present application. For example, any one of a least squares method, an interpolation method, a table look-up method, or another fitting function may be used.
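As an illustration of such a fit, the following Python sketch builds a piecewise-linear mapping between calibrated quasi-focus motor codes and quasi-focus distances, in the spirit of fig. 3. The calibration values, units and names are hypothetical and only illustrate the idea; the embodiment does not prescribe any particular implementation.

```python
# Hypothetical calibration pairs (quasi-focus motor code, quasi-focus distance in mm),
# one pair per calibrated object distance, e.g. 7 cm, 15 cm, 35 cm and 5 m.
CALIBRATION = [(500, 0.50), (380, 0.30), (290, 0.18), (210, 0.05)]

def motor_code_to_focus_distance(code):
    """Piecewise-linear interpolation of the quasi-focus distance for a motor code."""
    pts = sorted(CALIBRATION)                     # sort by motor code
    if code <= pts[0][0]:
        return pts[0][1]                          # clamp below the calibrated range
    if code >= pts[-1][0]:
        return pts[-1][1]                         # clamp above the calibrated range
    for (c0, d0), (c1, d1) in zip(pts, pts[1:]):  # find the enclosing segment
        if c0 <= code <= c1:
            t = (code - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)
```

The inverse mapping, from a quasi-focus distance to a quasi-focus motor code, can be fitted in the same way.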

4) The corresponding relation between the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module

The embodiment of the application does not limit the form of the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module. For example, it may be represented in a table, a function, or the like.

In a first possible form, the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module may be as shown in table 1:

TABLE 1

Quasi-focus motor code of the first camera module | Quasi-focus motor code of the second camera module
Quasi-focus motor code 1 of the first camera module | Quasi-focus motor code 1 of the second camera module
Quasi-focus motor code 2 of the first camera module | Quasi-focus motor code 2 of the second camera module
Quasi-focus motor code 3 of the first camera module | Quasi-focus motor code 3 of the second camera module
Quasi-focus motor code 4 of the first camera module | Quasi-focus motor code 4 of the second camera module

When the quasi-focus motor code of the first camera module in table 1 is quasi-focus motor code 1 of the first camera module, the corresponding quasi-focus motor code of the second camera module is quasi-focus motor code 1 of the second camera module.

In a second possible form, the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module may be a function. The function representing the correspondence may be obtained by fitting each of the plurality of quasi-focus motor codes of the first camera module with the quasi-focus motor code of the second camera module corresponding to that quasi-focus motor code.

Optionally, the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module having the corresponding relationship are obtained at the same object distance. The quasi-focus motor codes of the first camera module and the quasi-focus motor codes of the second camera module which have different corresponding relations are obtained under different object distances.

5) The corresponding relation between a plurality of focus motor codes of the first camera module and a plurality of object distances and the corresponding relation between a plurality of focus motor codes of the second camera module and a plurality of object distances

The embodiment of the application does not limit the form of the corresponding relation between the plurality of focusing motor codes of the first camera module and the plurality of object distances. For example, it may be represented in a table, a function, or the like.

In a first possible form, the correspondence between the plurality of focus motor codes and the plurality of object distances of the first camera module may be as shown in table 2:

TABLE 2

Object distance | Quasi-focus motor code of the first camera module
Object distance 1 | Quasi-focus motor code 1
Object distance 2 | Quasi-focus motor code 2
Object distance 3 | Quasi-focus motor code 3

When the object distance in table 2 is object distance 1, the corresponding quasi-focus motor code of the first camera module is quasi-focus motor code 1.

In a second possible form, the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances may be a function. The function representing the correspondence may be obtained by fitting each of the plurality of quasi-focus motor codes of the first camera module with the object distance corresponding to that quasi-focus motor code.

The embodiment of the application does not limit the form of the corresponding relation between the plurality of focusing motor codes of the second camera module and the plurality of object distances. For example, it may be represented in a table, a function, or the like.

Specifically, the form of the correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances may refer to the form of the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances, and details are not repeated.

6) Corresponding relation between a plurality of temperature differences of camera module and a plurality of motor code differences of camera module

The embodiment of the application does not limit the form of the corresponding relation between a plurality of temperature differences of the camera module and a plurality of motor code differences of the camera module. For example, it may be represented in a table, a function, or the like.

The correspondence between the temperatures of the camera module and the motor codes of the camera module may be as shown in fig. 4. In fig. 4, the x-axis is temperature and the y-axis is motor code. Suppose the preset temperature of the camera module is 30 ℃. If the absolute value of the difference between the current temperature of the camera module and the preset temperature is within 5 ℃ (for example, the current temperature of the camera module is any temperature between 25 ℃ and 35 ℃), the motor code of the camera module is not changed. If the current temperature of the camera module is more than 5 ℃ below the preset temperature, the correspondence between the temperature difference and the motor code difference of the camera module is a = 10b, where a is the motor code difference and b is the temperature difference. If the current temperature of the camera module is more than 5 ℃ above the preset temperature, the correspondence between the temperature difference and the motor code difference of the camera module is a = 20b/3, where a is the motor code difference and b is the temperature difference.
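A minimal Python sketch of the piecewise relation described above, assuming the example values read from fig. 4 (a preset temperature of 30 ℃, a ±5 ℃ band in which the motor code is unchanged, a = 10b below the band and a = 20b/3 above it) and assuming a sign convention in which b is the current temperature minus the preset temperature:

```python
PRESET_TEMP_C = 30.0   # preset temperature of the camera module (fig. 4 example)
DEAD_ZONE_C = 5.0      # within this band the motor code is not adjusted

def motor_code_difference(current_temp_c):
    """Motor code difference a as a function of the temperature difference b."""
    b = current_temp_c - PRESET_TEMP_C
    if abs(b) <= DEAD_ZONE_C:
        return 0.0
    if b < 0:                       # current temperature below the preset temperature
        return 10.0 * b
    return 20.0 * b / 3.0           # current temperature above the preset temperature
```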

The correspondence between the temperatures of the camera module and the motor codes of the camera module in fig. 4 can be obtained as follows: the calibration personnel select a chart template, fix the terminal device including the camera module to be calibrated on a tripod, and place the chart template, the terminal device and the tripod in an incubator. The distance between the terminal device and the chart template is set to a particular object distance, such as 5 m. The calibration personnel control the temperature of the incubator to change from -20 ℃ to 60 ℃, read the quasi-focus motor code once every 10 ℃, and record the read quasi-focus motor code and the temperature.

It should be noted that, when actually calibrating the correspondence between the temperatures of the camera module and the motor codes of the camera module, multiple camera modules of the same specification may be calibrated, the variation of the quasi-focus motor code with temperature may be calibrated at multiple object distances, and the temperature step may be set freely, so as to obtain a correspondence between the temperature difference and the motor code difference that is applicable to camera modules of that specification.

The obtained correspondence between the temperatures of the second camera module and the motor codes of the second camera module may be fitted to obtain a correspondence between the temperature differences of the second camera module and the motor code differences of the second camera module, for example y = kx + b, where k represents the change in motor code for every 1 ℃ change in temperature, b is a constant, x is the temperature difference, and y is the motor code difference.

The result obtained by the fitting may be the linear function or another function, and the present application does not limit the result.

7) The coordinate system of the camera module, the preset coordinate system of the camera module, and the correspondence between a plurality of coordinate system deviations of the camera module and a plurality of motor code differences of the camera module

The coordinate system of the camera module may be the coordinate system shown in fig. 5. The origin of the coordinate system is the central axis point of the lens, the X axis and Y axis of the coordinate system lie in a plane perpendicular to the optical axis of the lens in the camera module, and the Z axis is the optical axis. It should be noted that the origin, X axis, Y axis and Z axis of the coordinate system of the camera module can be defined as required, and the application is not limited to the example of fig. 5.

Based on the coordinate system shown in fig. 5, the preset coordinate system of the camera module may be the coordinate system of the camera module when the plane of the X axis and Y axis of the coordinate system of the camera module is perpendicular to the horizontal plane.

Based on the above examples of the coordinate system of the camera module and the preset coordinate system of the camera module, when the camera module shoots towards the sky, i.e. the Z axis is opposite to the direction of gravity, the rotation angle between the Z axis of the current coordinate system of the camera module and the Z axis of the preset coordinate system of the camera module is plus 90 degrees. When the camera module shoots towards the ground, i.e. the Z axis is the same as the direction of gravity, the rotation angle between the Z axis of the current coordinate system of the camera module and the Z axis of the preset coordinate system of the camera module is minus 90 degrees.

When the plane of the X axis and Y axis of the preset coordinate system of the camera module is perpendicular to the horizontal plane, the correspondence between the deviation of the preset coordinate system of the camera module from the current coordinate system of the camera module and the X-axis deviation, Y-axis deviation and Z-axis deviation is:

wherein γ is a rotation angle between the Z axis of the preset coordinate system of the camera module and the Z axis of the current coordinate system of the camera module. X is the X-axis offset, Y is the Y-axis offset, and Z is the Z-axis offset.

Based on the above example, the correspondence between the deviations of the coordinate systems of the camera module and the differences of the motor codes of the camera module can be represented by the following formula:

where offset is the motor code difference and code_default is a constant; for example, code_default may be set to 64. X is the X-axis offset, Y is the Y-axis offset, and Z is the Z-axis offset.

Specifically, the value of code _ default may be determined as follows:

the calibration personnel use a tripod to fix the terminal device including the camera module, and obtain the quasi-focus motor code of the camera module in a plurality of different coordinate systems at the same object distance (such as 5 m). Illustratively, when the plane of the X axis and Y axis of the coordinate system of the camera module is perpendicular to the horizontal plane, the quasi-focus motor code of the camera module is 264; when the plane of the X axis and Y axis is parallel to the horizontal plane and the Z axis is opposite to the direction of gravity, the quasi-focus motor code of the camera module is 328; when the plane of the X axis and Y axis is parallel to the horizontal plane and the Z axis is the same as the direction of gravity, the quasi-focus motor code of the camera module is 200.

From the above, when the plane of the X axis and Y axis of the coordinate system of the camera module is unchanged and the rotation angle of the Z axis is plus 90 degrees (i.e. the Z axis is opposite to the direction of gravity), the quasi-focus motor code of the camera module is 328; when the plane of the X axis and Y axis is unchanged and the rotation angle of the Z axis is minus 90 degrees (i.e. the Z axis is the same as the direction of gravity), the quasi-focus motor code of the camera module is 200. It can be seen that the motor code deviates by 64 for every 90 degrees of rotation angle difference. Thus, code_default may be set to 64.
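The arithmetic above can be sketched in Python as follows. The full formula relating the X-axis, Y-axis and Z-axis deviations to the motor code difference is not reproduced here; the proportional model in the last function is only an assumption for illustration, and the variable names are hypothetical.

```python
CODE_UPWARD = 328    # X/Y plane parallel to the horizontal plane, Z axis opposite to gravity
CODE_LEVEL = 264     # X/Y plane perpendicular to the horizontal plane (preset coordinate system)
CODE_DOWNWARD = 200  # X/Y plane parallel to the horizontal plane, Z axis along gravity

# 64 motor codes per 90 degrees of Z-axis rotation in the calibrated example
CODE_DEFAULT = ((CODE_UPWARD - CODE_LEVEL) + (CODE_LEVEL - CODE_DOWNWARD)) // 2  # = 64

def orientation_offset(rotation_deg):
    """Assumed proportional model: motor code offset for a Z-axis rotation angle in degrees."""
    return CODE_DEFAULT * rotation_deg / 90.0
```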

It should be noted that the quasi-focus motor code differences of the camera module corresponding to the different coordinate system deviations of the camera module are obtained at the same object distance. Whether the correspondence between the plurality of coordinate system deviations of the camera module and the plurality of motor code differences of the camera module remains consistent can be verified by the quasi-focus motor code differences corresponding to different coordinate system deviations of the camera module at different object distances; if it is inconsistent, the correspondence between the plurality of coordinate system deviations of the camera module and the plurality of motor code differences of the camera module can be obtained at a plurality of object distances.

8) Other terms

In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.

In the embodiments of the present application, "at least one" means one or more. "plurality" means two or more.

In the embodiment of the present application, "and/or" is only one kind of association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.

In an embodiment of the application, a combination comprises one or more objects.

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Fig. 6 is a schematic flow chart illustrating a process of establishing a code correspondence between focus motor codes of two camera modules according to an embodiment of the present application. The embodiment can be applied to the processor in the terminal device shown in fig. 2. The terminal device shown in fig. 2 further includes a first camera module and a second camera module. The first camera module comprises a first motor, a first photoreceptor and a first lens, and the second camera module comprises a second motor, a second photoreceptor and a second lens. The method shown in fig. 6 may include the steps of:

s100: the processor obtains a third in-focus motor code and a third in-focus distance. Wherein, the third quasi-focus motor code and the third quasi-focus distance are respectively: when the distance between the object to be shot and the terminal equipment is a first object distance and the first camera module finishes focusing, the quasi-focusing motor code of the first camera module and the difference between the image distance of the first camera module and the focal length of the first camera module are obtained.

Specifically, when the object distance is constant, the processor acquiring the third in-focus motor code may include the following steps:

the method comprises the following steps: the first camera module acquires a first image of a first object to be photographed. The first image is an image of a first object to be shot, which is acquired by the first camera module when the first lens and the first photoreceptor are at the current position. The first object to be photographed may be any object to be photographed in any scene to be photographed in reality, such as a plant, a person, an animal, an article, a building, or the like.

Step two: the first camera module sends the first image to the processor.

Step three: the processor acquires a third in-focus motor code based on at least one of phase information, depth information, or contrast information in the first image. Wherein, the third quasi-focus motor code is: when the object distance is obtained, the processor obtains a motor code used when the first motor adjusts the distance between the first lens and the first photoreceptor to finish focusing. Specifically, the method comprises the following steps:

when the accuracy of the phase difference of the pixel points in the first image is greater than or equal to the second threshold, the processor obtains a third focusing motor code by using a Phase Detection Auto Focus (PDAF) algorithm according to the phase difference of the pixel points in the first image.

And when the accuracy of the phase difference of the pixel points in the first image is smaller than the second threshold and is larger than or equal to the third threshold, the processor acquires a third focusing motor code by using a hybrid (hybrid) focusing algorithm based on the phase difference of the pixel points in the first image and the contrast information of the pixel points in the first image. Wherein the second threshold is greater than the third threshold.

When the accuracy of the phase difference of the pixel points in the first image is smaller than the third threshold, the processor first moves the first lens to a position determined from the depth information of the pixel points in the first image, and then obtains the third quasi-focus motor code by using a contrast detection auto focus (CDAF) algorithm based on the contrast information of the pixel points in the first image.
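The three branches above can be summarized as a selection rule keyed to the confidence of the phase difference. The Python sketch below only illustrates that rule; the threshold values and the pdaf, hybrid_af and depth_assisted_cdaf callables are hypothetical placeholders for the procedures described above, not interfaces defined by this application.

```python
SECOND_THRESHOLD = 0.8   # hypothetical confidence thresholds
THIRD_THRESHOLD = 0.4    # the second threshold is greater than the third threshold

def select_focus_motor_code(phase_diff, phase_confidence, depth, contrast,
                            pdaf, hybrid_af, depth_assisted_cdaf):
    """Pick the focusing strategy for the first camera module based on phase confidence."""
    if phase_confidence >= SECOND_THRESHOLD:
        # Phase information alone is reliable: phase detection auto focus.
        return pdaf(phase_diff)
    if phase_confidence >= THIRD_THRESHOLD:
        # Phase information is usable but noisy: combine phase and contrast (hybrid focus).
        return hybrid_af(phase_diff, contrast)
    # Phase information is unreliable: move the lens using depth, then refine with CDAF.
    return depth_assisted_cdaf(depth, contrast)
```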

After the processor acquires the third quasi-focus motor code, the quasi-focus distance of the camera module corresponding to the third quasi-focus motor code is acquired according to the corresponding relation between the plurality of quasi-focus motor codes of the camera module and the plurality of quasi-focus distances of the camera module, and the acquired quasi-focus distance of the camera module is used as the third quasi-focus distance.

Optionally, the third quasi-focus motor code and the third quasi-focus distance are respectively: when the distance between the object to be shot and the terminal equipment is a first object distance and the first camera module finishes focusing in a preset state, the focusing motor code of the first camera module and the distance between the first lens and the first photoreceptor are determined.

S101: the processor obtains a first object distance according to the third in-focus distance. Specifically, the method comprises the following steps:

firstly, the processor acquires the image distance of the first camera module according to the third focusing distance. Wherein, the focal length of the first camera module is known. The third quasi-focal distance is equal to the difference between the image distance of the first camera module and the focal distance of the first camera module.

Then, the processor obtains a first object distance corresponding to the image distance of the first camera module through the corresponding relation among the first object distance, the image distance of the first camera module and the focal length of the first camera module.

S102: the processor obtains a second in-focus distance from the first object distance. The second quasi-focal distance is: when the distance between the object to be shot and the terminal equipment is the first object distance and the second camera module finishes focusing, the difference between the image distance of the second camera module and the focal length of the second camera module is obtained. Specifically, the method comprises the following steps:

firstly, the processor calculates the image distance of the second camera module corresponding to the first object distance according to the corresponding relation of the first object distance, the image distance of the second camera module and the focal length of the second camera module and the first object distance. Wherein the focal length of the second camera module is known.

And then, the processor obtains a second focusing distance according to the image distance of the second camera module corresponding to the first object distance. For example, the second quasi-focal distance is equal to a difference between an image distance of the second camera module and a focal distance of the second camera module.
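S101 and S102 can be sketched with the thin-lens relation 1/f = 1/u + 1/v, where u is the object distance and v is the image distance, and with the quasi-focus distance defined as v minus the focal length, consistent with the definitions above. The numerical values in the example are hypothetical.

```python
def object_distance_from_focus_distance(focus_distance, focal_length):
    """S101: quasi-focus distance d = v - f; thin lens 1/f = 1/u + 1/v  =>  u = f*v/(v - f)."""
    v = focal_length + focus_distance          # image distance of the first camera module
    return focal_length * v / (v - focal_length)

def focus_distance_from_object_distance(object_distance, focal_length):
    """S102: v = f*u/(u - f); quasi-focus distance d = v - f."""
    v = focal_length * object_distance / (object_distance - focal_length)
    return v - focal_length

# Example: a third quasi-focus distance of 0.3 mm on a first module with f1 = 5 mm gives
# the first object distance; that object distance then yields the second quasi-focus
# distance on a second module with f2 = 8 mm (all values hypothetical, in millimeters).
u = object_distance_from_focus_distance(0.3, 5.0)
d2 = focus_distance_from_object_distance(u, 8.0)
```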

S103: the processor takes the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance as a fourth quasi-focus motor code according to the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of quasi-focus distances. Wherein each of the plurality of quasi-focal distances is: when the object distance is fixed and the second camera module finishes focusing, the difference between the image distance of the second camera module and the focal length of the second camera module is obtained.

The correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of quasi-focus distances can be obtained by fitting, at a plurality of object distances, the quasi-focus motor code of the second camera module and the quasi-focus distance corresponding to that quasi-focus motor code.

S104: the processor establishes a corresponding relationship between the third focus motor code and the fourth focus motor code.

The processor can obtain the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module by executing the steps from S100 to S104 under different object distances.
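A minimal sketch of the S100-S104 loop, assuming hypothetical callables for focusing the first camera module and for the code-distance correspondences described above; the object to be shot is understood to be placed at each calibrated object distance in turn.

```python
def build_code_correspondence(object_distances, focus_first, code_to_distance_first,
                              distance_to_code_second, f1, f2):
    """Sketch of S100-S104: build (third, fourth) quasi-focus motor code pairs."""
    def u_from_d(d, f):      # thin lens: image distance v = f + d, object distance u = f*v/(v - f)
        v = f + d
        return f * v / (v - f)

    def d_from_u(u, f):      # thin lens: v = f*u/(u - f), quasi-focus distance d = v - f
        return f * u / (u - f) - f

    table = []
    for _ in object_distances:
        code_1 = focus_first()                      # S100: third quasi-focus motor code
        dist_1 = code_to_distance_first(code_1)     # S100: third quasi-focus distance
        u = u_from_d(dist_1, f1)                    # S101: first object distance
        dist_2 = d_from_u(u, f2)                    # S102: second quasi-focus distance
        code_2 = distance_to_code_second(dist_2)    # S103: fourth quasi-focus motor code
        table.append((code_1, code_2))              # S104: establish the correspondence
    return table
```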

It should be noted that, as shown in fig. 3, the quasi-focus motor code of a camera module and the quasi-focus distance of the camera module have a correspondence within a certain object distance range. Therefore, the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module have an accurate correspondence only when the object distance is within the range in which both camera modules can complete focusing. For example, if the object distance range in which the first camera module can complete focusing is 10 cm to 5 m and the object distance range in which the second camera module can complete focusing is 2.5 cm to 3 m, then the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module have an accurate correspondence when the object distance is in the range 10 cm to 3 m. When the object distance is outside the object distance range in which the first camera module can complete focusing, or outside the object distance range in which the second camera module can complete focusing, the quasi-focus motor code of the second camera module obtained by the processor from the quasi-focus motor code of the first camera module will not be the true quasi-focus motor code of the second camera module; nevertheless, the processor can still apply the obtained quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor, and the second camera module can start focusing from the adjusted distance between the second lens and the second photoreceptor.

The following describes a method for establishing a correspondence between an object distance and a quasi-focus motor code of a camera module according to an embodiment of the present application.

The processor can establish the correspondence between the second object distance and the quasi-focus motor code of the camera module 1 at the second object distance in the following way:

the method comprises the following steps: the processor obtains a fifth quasi-focus motor code. The fifth quasi-focus motor code is: the quasi-focus motor code of the camera module 1 when the distance between the object to be shot and the terminal device is the second object distance and the camera module 1 finishes focusing.

Optionally, the fifth focus motor code is: when the distance between the object to be shot and the terminal equipment is the second object distance and the camera module 1 finishes focusing in a preset state, the camera module 1 is provided with a focus-aligning motor code.

Step two: the processor acquires the quasi-focus distance corresponding to the fifth quasi-focus motor code according to the corresponding relation between the plurality of quasi-focus motor codes and the plurality of quasi-focus distances of the camera module 1, and takes the quasi-focus distance corresponding to the acquired fifth quasi-focus motor code as the fourth quasi-focus distance.

The embodiment of the application does not limit how the correspondence between the plurality of quasi-focus motor codes and the plurality of quasi-focus distances of the camera module 1 is acquired. Illustratively, the processor can fit the correspondence between the plurality of quasi-focus motor codes and the plurality of quasi-focus distances of the camera module 1 according to the quasi-focus motor codes and the corresponding quasi-focus distances calibrated in advance for the camera module 1 at a plurality of object distances.

Step three: and the processor acquires the image distance of the camera module 1 according to the fourth focusing distance. Wherein the focal length of the camera module 1 is known. The fourth quasi-focal distance is equal to the difference between the image distance of the camera module 1 and the focal length of the camera module 1.

Step four: and the processor obtains a second object distance corresponding to the image distance of the camera module 1 according to the corresponding relation between the second object distance, the image distance of the camera module 1 and the focal distance of the camera module 1.

Step five: the processor establishes a corresponding relationship between the fifth quasi-focus motor code and the second object distance.

The processor can obtain the corresponding relation between the plurality of quasi-focus motor codes of the camera module 1 and the plurality of object distances by executing the steps from one to five under different object distances.
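A corresponding sketch of steps one to five, again with hypothetical callables standing in for the procedures described above; the object to be shot is understood to be placed at each calibrated object distance in turn.

```python
def build_code_to_object_distance(object_distances, focus_module1, code_to_distance_module1, f1):
    """Sketch of steps one to five: pair each fifth quasi-focus motor code with the
    second object distance derived from it."""
    table = []
    for _ in object_distances:
        code = focus_module1()                   # step one: fifth quasi-focus motor code
        d = code_to_distance_module1(code)       # step two: fourth quasi-focus distance
        v = f1 + d                               # step three: image distance of camera module 1
        u = f1 * v / (v - f1)                    # step four: second object distance (thin lens)
        table.append((code, u))                  # step five: establish the correspondence
    return table
```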

Fig. 7 is a schematic flow chart of a focusing method for a multi-camera module according to an embodiment of the present disclosure. The embodiment can be applied to the processor in the terminal device shown in fig. 2. The terminal device shown in fig. 2 further includes a first camera module and a second camera module. The first camera module comprises a first motor, a first photoreceptor and a first lens, and the second camera module comprises a second motor, a second photoreceptor and a second lens. The method shown in fig. 7 may include the steps of:

s200, the processor acquires a first focusing motor code. The first focusing motor code is a motor code used by the first motor to adjust the distance between the first lens and the first photoreceptor to complete focusing.

Specifically, refer to step one to step three in the manner in which the processor acquires the third focus motor code in S100 in the above embodiment. And will not be described in detail.

S201, the processor predicts the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain a first predicted quasi-focus motor code.

Mode 1:

step 1: the processor acquires the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances.

The accurate focus motor code of the first camera module and the accurate focus motor code of the second camera module which have corresponding relations are the accurate focus motor codes of the corresponding camera modules at the same object distance. For example, based on table 1, the first quasi-focus motor code 1 of the first camera module is the quasi-focus motor code of the first camera module at the target object distance, and the second quasi-focus motor code 1 of the second camera module corresponding to the quasi-focus motor code of the first camera module is the quasi-focus motor code of the second camera module at the target object distance.

Step 2: and the processor takes the acquired quasi-focus motor code of the second camera module as a first predicted quasi-focus motor code.

The processor acquiring the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, and taking the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code, may include the following methods:

the method a: when the corresponding relation of the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module is expressed in a table mode:

under one condition, if the first quasi-focus motor code is equal to one quasi-focus motor code in the plurality of quasi-focus motor codes of the first camera module, the processor acquires the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code, and takes the acquired quasi-focus motor code of the second camera module as a first predicted quasi-focus motor code. For example, if the first quasi-focus motor code is equal to the quasi-focus motor code 1 of the first camera module in table 1, the processor takes the quasi-focus motor code 1 of the second camera module as the first predicted quasi-focus motor code.

In another case, if the first quasi-focus motor code is not equal to any of the plurality of quasi-focus motor codes of the first camera module, the processor can obtain the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code from the quasi-focus motor codes of the first camera module that are greater than the first quasi-focus motor code, the quasi-focus motor codes of the first camera module that are smaller than the first quasi-focus motor code, and the quasi-focus motor codes of the second camera module corresponding to these quasi-focus motor codes of the first camera module. In one example, the absolute value of the difference between each of the selected quasi-focus motor codes of the first camera module and the first quasi-focus motor code is less than a threshold. In another example, the selected quasi-focus motor codes of the first camera module are the quasi-focus motor codes closest to the first quasi-focus motor code.

For example: suppose the first quasi-focus motor code is greater than quasi-focus motor code 2 of the first camera module in table 1 and smaller than quasi-focus motor code 3 of the first camera module, where quasi-focus motor code 2 of the first camera module is, among the quasi-focus motor codes of the first camera module in table 1 that are smaller than the first quasi-focus motor code, the one whose difference from the first quasi-focus motor code has the smallest absolute value, and quasi-focus motor code 3 of the first camera module is, among the quasi-focus motor codes of the first camera module in table 1 that are larger than the first quasi-focus motor code, the one whose difference from the first quasi-focus motor code has the smallest absolute value. Then, the processor acquires quasi-focus motor code 2 of the second camera module corresponding to quasi-focus motor code 2 of the first camera module, and acquires quasi-focus motor code 3 of the second camera module corresponding to quasi-focus motor code 3 of the first camera module. The processor takes the average of quasi-focus motor code 2 of the second camera module and quasi-focus motor code 3 of the second camera module as the first predicted quasi-focus motor code.

For another example: based on table 1, the processor uses quasi-focus motor codes 1, 2, 3 and 4 of the first camera module and quasi-focus motor codes 1, 2, 3 and 4 of the second camera module to fit a functional relationship between the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module, for example: ax = by + c, where a, b and c are constants, x represents the quasi-focus motor code of the first camera module, and y represents the quasi-focus motor code of the second camera module. Here, quasi-focus motor codes 1 and 2 of the first camera module are the quasi-focus motor codes of the first camera module that are smaller than the first quasi-focus motor code and closest to it, and quasi-focus motor codes 3 and 4 of the first camera module are the quasi-focus motor codes of the first camera module that are larger than the first quasi-focus motor code and closest to it. The processor substitutes the first quasi-focus motor code into x in the function and solves for y. The processor takes the obtained quasi-focus motor code y of the second camera module as the first predicted quasi-focus motor code.
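A minimal sketch of this table-based lookup, assuming the correspondence of table 1 is available as a sorted list of code pairs; the clamping behaviour outside the calibrated range and the choice of averaging the two nearest entries rather than using a local linear fit are illustrative assumptions.

```python
def predict_second_code(first_code, table):
    """Table 1 lookup with interpolation between the nearest entries.
    `table` is a list of (first_module_code, second_module_code) pairs sorted by the first code."""
    for c1, c2 in table:
        if c1 == first_code:                 # exact match in the table
            return c2
    below = [(c1, c2) for c1, c2 in table if c1 < first_code]
    above = [(c1, c2) for c1, c2 in table if c1 > first_code]
    if not below or not above:               # outside the calibrated range: clamp
        return table[0][1] if not below else table[-1][1]
    c1_lo, c2_lo = max(below)                # nearest entry below the first quasi-focus motor code
    c1_hi, c2_hi = min(above)                # nearest entry above the first quasi-focus motor code
    # Average the two nearest second-module codes, as in the example above; a local
    # linear fit over the nearest entries could be used instead.
    return (c2_lo + c2_hi) / 2.0
```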

When the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module is expressed in a functional mode: the processor substitutes the first quasi-focus motor code into a function representing the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module to obtain the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code, and uses the obtained quasi-focus motor code of the second camera module as a first predicted quasi-focus motor code.

Specifically, the processor obtaining the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and the correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances, and taking the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code, includes the following steps:

step 1: the processor acquires the object distance corresponding to the first quasi-focus motor code according to the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances, and takes the object distance corresponding to the acquired first quasi-focus motor code as a third object distance.

Step 2: the processor acquires the quasi-focus motor codes of the second camera module corresponding to the third object distance according to the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances, and takes the acquired quasi-focus motor codes of the second camera module corresponding to the third object distance as the first prediction quasi-focus motor codes.

Optionally, after the processor obtains the quasi-focus motor code of the second camera module through the mode 1, the obtained quasi-focus motor code of the second camera module is reduced, and the reduced quasi-focus motor code of the second camera module is used as the first predicted quasi-focus motor code. The amount of reduction may be set empirically. Therefore, the risk that the predicted in-focus motor code of the second camera module is too large is reduced.
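A minimal sketch of this object-distance-based prediction, including the optional reduction just mentioned; the two mapping callables stand for the correspondences described above (whether backed by a table or a fitted function is immaterial here), and the reduction margin is an assumption.

```python
def predict_second_code_via_object_distance(first_code, code_to_object_distance_first,
                                            object_distance_to_code_second, margin=0):
    """Steps 1 and 2: map the first quasi-focus motor code to the third object distance,
    then to the quasi-focus motor code of the second camera module."""
    third_object_distance = code_to_object_distance_first(first_code)   # step 1
    predicted = object_distance_to_code_second(third_object_distance)   # step 2
    return predicted - margin   # optional empirical reduction to avoid overshooting
```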

In mode 1, the processor may predict the quasi-focus motor code of the second camera module based on the first quasi-focus motor code to obtain the first predicted quasi-focus motor code.

In the following several implementation manners (e.g., any one of manner 2 to manner 4), the quasi-focus motor code of the first camera module and the quasi-focus motor code of the second camera module having the corresponding relationship are quasi-focus motor codes of the corresponding camera modules at the same object distance and in a preset state. The object distance is the distance between the object to be shot and the terminal equipment. The preset state includes at least one of a preset temperature or a preset coordinate system.

For example, when, at the same object distance, the quasi-focus motor code of a camera module does not vary with the coordinate system of the camera module but does vary with the temperature of the camera module, the state of the camera module includes the temperature. For example, the state of a camera module using a closed-loop motor includes the temperature.

For another example, when, at the same object distance, the quasi-focus motor code of a camera module varies with the coordinate system of the camera module and also varies with the temperature of the camera module, the state of the camera module includes the temperature and the coordinate system. For example, the state of a camera module using an open-loop motor and a telephoto lens includes the temperature and the coordinate system.

For another example, when, at the same object distance, the quasi-focus motor code of a camera module varies with the coordinate system of the camera module but does not vary with the temperature of the camera module, the state of the camera module includes the coordinate system.

It should be noted that, at the same time, the temperature of the first camera module and the temperature of the second camera module in the terminal device may be the same or different. Usually, at the same time, the coordinate system of the first camera module in the terminal device is the same as the coordinate system of the second camera module; they may also be different. The following description takes the case where the coordinate system of the first camera module is the same as that of the second camera module as an example.

Mode 2:

step 1: refer to step 1 in mode 1 above. And will not be described in detail.

Step 2: when the current state of the second camera module is the same as the preset state of the second camera module, the processor takes the acquired quasi-focus motor code of the second camera module as a first predicted quasi-focus motor code. When the current state of the second camera module is different from the preset state of the second camera module, the processor corrects the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as a first predicted quasi-focus motor code.

For example, the preset state of the second camera module comprises a preset temperature. The preset temperature of the second camera module is 30 ℃, and the current temperature of the second camera module is 40 ℃. The processor obtains the temperature difference between the preset temperature of the second camera module and the current temperature of the second camera module, and takes the obtained temperature difference as a target temperature difference. The processor acquires a quasi-focus motor code difference corresponding to the target temperature difference according to the corresponding relation between the temperature differences of the second camera module and the motor code differences of the second camera module, and corrects the acquired quasi-focus motor code of the second camera module by using the acquired motor code difference to serve as a first predicted quasi-focus motor code. The embodiment of the application does not limit the form of the corresponding relation between a plurality of temperature differences of the camera module and a plurality of motor code differences of the camera module. For example, it may be represented in a table, a function, or the like.
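A minimal sketch of the temperature correction in this example; the sign convention and the mapping from temperature difference to motor code difference (for instance, the fig. 4 style relation sketched earlier) are assumptions for illustration.

```python
def correct_for_temperature(predicted_code, current_temp_c, preset_temp_c, temp_diff_to_code_diff):
    """Mode 2 example: correct the predicted quasi-focus motor code of the second camera
    module when its current temperature differs from its preset temperature."""
    target_temp_diff = current_temp_c - preset_temp_c        # e.g. 40 - 30 = 10 degrees
    code_diff = temp_diff_to_code_diff(target_temp_diff)     # from the calibrated correspondence
    return predicted_code + code_diff
```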

For another example, the preset state of the second camera module includes a preset coordinate system. The processor obtains the deviation (such as the X-axis deviation, Y-axis deviation and Z-axis deviation) between the current coordinate system of the second camera module and the preset coordinate system of the second camera module, and takes the obtained deviation as a target deviation. The embodiment of the application does not limit the manner in which the processor obtains the deviation. For example, an acceleration sensor or another device capable of capturing the deviation between the X axis, Y axis and Z axis of the current coordinate system of the second camera module and the X axis, Y axis and Z axis of the preset coordinate system of the second camera module may obtain the deviation in the three directions, determine the posture of the mobile phone, and send it to the processor.

The processor acquires a quasi-focus motor code difference of the second camera module corresponding to the target deviation according to a corresponding relation between the coordinate system deviations of the second camera module and the motor code differences of the second camera module, and corrects the acquired quasi-focus motor code of the second camera module by using the acquired quasi-focus motor code difference to serve as a first predicted quasi-focus motor code. The embodiment of the application does not limit the form of the corresponding relation between the deviations of the coordinate systems of the camera module and the differences of the motor codes of the camera module. For example, it may be represented in a table, a function, or the like.

It should be noted that, if the correspondence between the plurality of coordinate system deviations of the camera module and the plurality of motor code differences of the camera module exists at a plurality of object distances, the processor may obtain the quasi-focus motor code difference of the second camera module corresponding to the target deviation according to the correspondence between the plurality of coordinate system deviations and the plurality of motor code differences at the object distance closest to the current object distance. The quasi-focus motor code difference of the second camera module corresponding to the target deviation may also be obtained in other manners, which is not limited in this application.

For another example, the preset state of the second camera module includes a preset temperature and a preset coordinate system. The processor obtains the state difference between the current state of the second camera module and the preset state of the second camera module, and marks the obtained state difference as a target state difference. The processor obtains the focus motor code difference of the second camera module corresponding to the target state difference based on the corresponding relation between the state differences and the motor code differences of the second camera module. And the processor corrects the acquired quasi-focus motor code of the second camera module by using the acquired quasi-focus motor code difference of the second camera module to be used as a first predicted quasi-focus motor code. The embodiment of the present application does not limit the form of the correspondence relationship between the plurality of state differences and the plurality of motor code differences of the second camera module. For example, it may be represented in a table, a function, or the like.

Therefore, the processor corrects the quasi-focus motor code in the preset state of the second camera module, which is obtained through prediction, into the quasi-focus motor code in the current state of the second camera module according to the current state of the second camera module, and can make up the prediction deviation caused by the difference between the current state of the second camera module and the preset state of the second camera module, so that the predicted first predicted quasi-focus motor code is closer to the real quasi-focus motor code in the current state of the second camera module.

Mode 3: the processor obtains a second quasi-focus motor code based on the first quasi-focus motor code, and obtains a first predicted quasi-focus motor code according to the second quasi-focus motor code.

Step 1, when the current state of the first camera module is different from the preset state of the first camera module, the processor corrects the first quasi-focus motor code into a second quasi-focus motor code according to the current state of the first camera module. The second quasi-focus motor code is a quasi-focus motor code of the first camera module in a preset state. The modification method refers to the modification method in the above mode 2, and is not described again.

When the current state of the first camera module is the same as the preset state of the first camera module, the processor takes the first quasi-focus motor code as a second quasi-focus motor code.

Step 2: the processor obtains the quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances, and takes the obtained quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

Therefore, before predicting the quasi-focus motor code of the second camera module in the current state, the processor corrects the first quasi-focus motor code into the quasi-focus motor code of the first camera module in the preset state, so that the prediction error caused by the deviation between the current state of the first camera module and the preset state of the first camera module is compensated, and the predicted first predicted quasi-focus motor code is closer to the real quasi-focus motor code of the second camera module in the current state.

Mode 4:

step 1: the processor acquires the second focus motor code using the method of step 1 in mode 3.

Step 2: the processor acquires the quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the corresponding relation between the plurality of quasi-focus motor codes of the first camera module and the plurality of object distances and the corresponding relation between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances.

And step 3: when the current state of the second camera module is the same as the preset state of the second camera module, the processor takes the acquired quasi-focus motor code of the second camera module as a first predicted quasi-focus motor code. When the current state of the second camera module is different from the preset state of the second camera module, the processor corrects the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state, and the corrected quasi-focus motor code is used as a first predicted quasi-focus motor code. The modification method refers to the modification method in the above mode 2, and is not described again.

Therefore, the processor compensates for the deviation between the quasi-focus motor code in the preset state of the first camera module and the quasi-focus motor code in the current state of the first camera module, and also compensates for the deviation between the quasi-focus motor code in the preset state of the second camera module and the quasi-focus motor code in the current state of the second camera module, so that the obtained first predicted quasi-focus motor code is closer to the real quasi-focus motor code in the current state of the second camera module than in the foregoing three modes. The focusing efficiency of the second camera module is thereby improved.

S202, the processor applies the first predicted in-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor.

It should be noted that, when the precision requirement of the processor on the second camera module is less than or equal to the fifth threshold, after the processor applies the first predicted in-focus motor code to the second motor, the distance between the second lens and the second photoreceptor is the second distance. The processor considers that the second camera module completes focusing.

The embodiment of the application does not limit the specification of the first camera module and the specification of the second camera module. Optionally, the processor takes the camera module with high specification in the two camera modules included in the terminal device as the first camera module. The camera module with higher specification has higher focusing efficiency. Therefore, the processor takes the camera module with higher specification as the first camera module, and can use the quasi-focus motor code of the first camera module to predict the quasi-focus motor code of the camera module with lower specification, thereby improving the focusing efficiency of the camera module with lower specification.

In one example, the first camera module is a camera module having a phase detection density greater than a phase detection density of the second camera module. Thus, first, the processor may acquire a first focus motor code using dual-core pixel phase detection based on a first image acquired by the first camera module. Then, the processor predicts the focus motor code of the second camera module based on the first focus motor code. The processor acts the predicted quasi-focus motor code of the second camera module on the second motor, and adjusts the distance between the second lens and the second photoreceptor to complete the focusing of the second camera module, so that the focusing efficiency of the second camera module is improved.

In another example, the terminal device shown in fig. 2 further includes a depth device that the first camera module can utilize and the second camera module cannot utilize, or the depth information of the first image acquired by the first camera module using the depth device is more accurate than the depth information of the image acquired by the second camera module using the depth device. In this way, in a dim-light, weak-texture or multi-depth-of-field scene, the processor may first determine a clear imaging distance between the first lens and the first photoreceptor according to the depth information in the first image acquired by the first camera module and the contrast information of the first image. Then, the processor predicts the quasi-focus motor code of the second camera module based on the first quasi-focus motor code. The second motor only needs to adjust the distance between the second lens and the second photoreceptor once to complete the focusing of the second camera module, which improves the focusing efficiency of the second camera module and improves the user experience.

Optionally, S203: and the second camera module starts to perform automatic focusing from the second distance.

Specifically, when the distance between the second lens and the second photoreceptor is the second distance, the second camera module acquires an image of the first object to be photographed and sends the image to the processor. The processor may determine that the second camera module is in focus based on at least one of phase information, depth information, or contrast information in the image. Or the processor acquires the focus motor code of the current state of the second camera module based on at least one of the phase information, the depth information or the contrast information in the image until the second camera module finishes focusing.
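S203 can be illustrated as a small contrast search that starts from the motor code corresponding to the second distance instead of scanning the whole travel range. The step size, iteration limit and contrast_score callable are hypothetical; the sketch is not the focusing algorithm prescribed by this application.

```python
def refine_from_predicted(start_code, contrast_score, step=8, max_steps=12):
    """Hypothetical contrast-based refinement (CDAF-style hill climb) starting from the
    predicted quasi-focus motor code instead of the end of the motor's travel range."""
    best_code, best_score = start_code, contrast_score(start_code)
    for direction in (+1, -1):
        code = start_code
        for _ in range(max_steps):
            code += direction * step
            score = contrast_score(code)
            if score <= best_score:          # contrast stopped improving in this direction
                break
            best_code, best_score = code, score
    return best_code
```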

It should be noted that, in the embodiment of the present application, the number of the camera modules included in the terminal device is not limited. The above embodiments all take two camera modules included in the terminal device as an example for explanation. In practical implementation, the terminal device may include more than two camera modules. Exemplarily, if the terminal equipment comprises three camera modules, the processor can determine that the camera module with the highest specification in the terminal equipment is the first camera module, and any one of the other two camera modules is used as the second camera module. After the first camera module finishes focusing, the second camera module focuses based on the quasi-focusing motor code of the first camera module. After the two camera modules finish focusing, the rest camera module can focus based on any one of two focusing motor codes of the two camera modules finishing focusing.

In this way, the camera module with the better specification assists the camera module with the poorer specification to focus, achieving a better focusing effect. For the second camera module, when the second motor completes focusing by adjusting the distance between the second lens and the second photoreceptor only once, the focusing efficiency of the second camera module is greatly improved. When focusing cannot be completed after the second motor adjusts the distance between the second lens and the second photoreceptor once, the second camera module starts focusing from the second distance, which narrows the range over which the clear imaging distance between the second lens and the second photoreceptor must be searched during focusing, thereby improving the focusing efficiency of the second camera module. Therefore, the quasi-focus motor code of the camera module with the better specification assists the camera module with the poorer specification to complete focusing, improving the focusing efficiency of the camera module with the poorer specification and achieving a better focusing effect, thereby improving the user experience during automatic focusing of the multiple camera modules.

The solution provided by the embodiments of the present application has been described above mainly from the perspective of the method. To implement the above functions, the terminal device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the exemplary method steps described in connection with the embodiments disclosed herein can be implemented by hardware, or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments of the present application, the multi-camera module focusing device may be divided into functional modules according to the above method examples. For example, each functional module may be divided to correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the embodiments of the present application is schematic and is only one kind of logical function division; there may be other division manners in actual implementation.

Fig. 8 is a schematic structural diagram of a multi-camera module focusing device according to an embodiment of the present application. The multi-camera module focusing device 80 can be used to perform the functions performed by the processor in any of the above embodiments (such as the embodiments shown in fig. 6-7). The multi-camera module focusing device 80 is applied to a terminal device, and the terminal device further includes a first camera module and a second camera module; the first camera module includes a first motor, a first photoreceptor and a first lens, and the second camera module includes a second motor, a second photoreceptor and a second lens. The multi-camera module focusing device 80 includes: an acquisition unit 801, a prediction unit 802, and an adjustment unit 803. The acquisition unit 801 is configured to acquire the first quasi-focus motor code, that is, the motor code used when the first motor adjusts the distance between the first lens and the first photoreceptor to complete focusing. The prediction unit 802 is configured to predict the quasi-focus motor code of the second camera module based on the first quasi-focus motor code, so as to obtain a first predicted quasi-focus motor code. The adjustment unit 803 is configured to apply the first predicted quasi-focus motor code to the second motor to adjust the distance between the second lens and the second photoreceptor. For example, in conjunction with fig. 6, the acquisition unit 801 is configured to perform S100 to S103. In conjunction with fig. 7, the acquisition unit 801 is configured to perform S200, the prediction unit 802 is configured to perform S201, and the adjustment unit 803 is configured to perform S202.
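Purely as an illustration of this unit division (not the actual implementation of device 80), the split into acquisition, prediction and adjustment units might be mirrored in software as in the following sketch; the constructor arguments are hypothetical callables.

```python
class MultiCameraFocusDevice:
    """Sketch of the acquisition / prediction / adjustment split of device 80."""

    def __init__(self, read_first_code, predict_second_code, drive_second_motor):
        self._acquire = read_first_code      # acquisition unit 801
        self._predict = predict_second_code  # prediction unit 802
        self._adjust = drive_second_motor    # adjustment unit 803

    def focus_second_module(self) -> int:
        first_code = self._acquire()          # first quasi-focus motor code
        predicted = self._predict(first_code) # first predicted quasi-focus motor code
        self._adjust(predicted)               # adjust second lens / photoreceptor distance
        return predicted
```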

Optionally, the multi-camera module focusing device 80 further includes a focusing unit 804, which is used for the second camera module to perform automatic focusing from the second distance. The second distance is the distance between the second lens and the second photoreceptor after the first predicted quasi-focus motor code is applied to the second motor to adjust the distance between the second lens and the second photoreceptor. In conjunction with fig. 7, the focusing unit 804 is configured to perform S203.

Optionally, the acquisition unit 801 is specifically configured to acquire the first quasi-focus motor code based on at least one of phase information, depth information or contrast information in the image of the first object to be photographed acquired by the first camera module.

Optionally, the acquisition unit 801 is further configured to: acquire the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code, based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and the correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances. The quasi-focus motor codes having a correspondence are the quasi-focus motor codes of the corresponding camera modules at the same object distance, and the object distance is the distance between the object to be photographed and the terminal device. The prediction unit 802 is specifically configured to use the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.
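The two forms of correspondence can be illustrated with a simple lookup sketch. The calibration tables, their monotonicity, and the use of linear interpolation between calibrated entries are all assumptions made for illustration only; real tables would come from per-module calibration.

```python
import bisect

def lookup_with_interpolation(keys, values, key):
    """Linearly interpolate in a calibrated (key -> value) table.
    Assumes the keys are sorted in ascending order."""
    if key <= keys[0]:
        return float(values[0])
    if key >= keys[-1]:
        return float(values[-1])
    i = bisect.bisect_left(keys, key)
    k0, k1, v0, v1 = keys[i - 1], keys[i], values[i - 1], values[i]
    return v0 + (v1 - v0) * (key - k0) / (k1 - k0)

# Variant 1: direct correspondence between quasi-focus motor codes of the
# first and second camera modules (all numbers are illustrative only).
first_codes  = [120, 180, 260, 340, 420]
second_codes = [ 90, 150, 235, 330, 415]
predicted = lookup_with_interpolation(first_codes, second_codes, key=300)

# Variant 2: each module is calibrated against the object distance (in mm);
# the first code is mapped to an object distance, then to a second code.
# Here it is assumed the codes decrease monotonically with object distance.
object_dists   = [100, 200, 500, 1000, 5000]
first_by_dist  = [420, 300, 200, 150, 120]
second_by_dist = [415, 290, 180, 130,  90]
dist = lookup_with_interpolation(list(reversed(first_by_dist)),
                                 list(reversed(object_dists)), key=300)
predicted_2 = lookup_with_interpolation(object_dists, second_by_dist, key=dist)
```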

Optionally, the acquisition unit 801 is further configured to: acquire the quasi-focus motor code of the second camera module corresponding to the first quasi-focus motor code, based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and the correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances. The quasi-focus motor codes having a correspondence are the quasi-focus motor codes of the corresponding camera modules at the same object distance and in a preset state, where the preset state includes at least one of a preset temperature or a preset coordinate system. The object distance is the distance between the object to be photographed and the terminal device.

Optionally, the multi-camera module focusing device 80 further includes a correction unit 805, configured to: when the current state of the second camera module is the same as the preset state of the second camera module, use the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code; and when the current state of the second camera module is different from the preset state of the second camera module, correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state, and use the corrected quasi-focus motor code as the first predicted quasi-focus motor code.

Optionally, the correction unit 805 is further configured to: when the current state of the first camera module is different from the preset state of the first camera module, correct the first quasi-focus motor code into a second quasi-focus motor code according to the current state of the first camera module. The second quasi-focus motor code is the quasi-focus motor code of the first camera module in the preset state, where the preset state includes at least one of a preset temperature or a preset coordinate system. When the current state of the first camera module is the same as the preset state of the first camera module, the first quasi-focus motor code is used as the second quasi-focus motor code. The acquisition unit 801 is further configured to: acquire the quasi-focus motor code of the second camera module corresponding to the second quasi-focus motor code, based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module, or based on the correspondence between the plurality of quasi-focus motor codes of the first camera module and a plurality of object distances and the correspondence between the plurality of quasi-focus motor codes of the second camera module and the plurality of object distances. The quasi-focus motor codes having a correspondence are the quasi-focus motor codes of the corresponding camera modules at the same object distance and in the preset state, where the preset state includes at least one of a preset temperature or a preset coordinate system, and the object distance is the distance between the object to be photographed and the terminal device. The prediction unit 802 is specifically configured to use the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code.

Optionally, the prediction unit 802 is further configured to: when the current state of the second camera module is the same as the preset state of the second camera module, use the acquired quasi-focus motor code of the second camera module as the first predicted quasi-focus motor code. The correction unit 805 is specifically configured to: when the current state of the second camera module is different from the preset state of the second camera module, correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state, and use the corrected quasi-focus motor code as the first predicted quasi-focus motor code.

Optionally, the acquisition unit 801 is further configured to: acquire the temperature difference between the current temperature of the second camera module and the preset temperature of the second camera module, and record the acquired temperature difference as a target temperature difference; and acquire the motor code difference of the second camera module corresponding to the target temperature difference according to the correspondence between a plurality of temperature differences of the second camera module and a plurality of motor code differences of the second camera module. The correction unit 805 is specifically configured to correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired motor code difference.
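As a minimal sketch of this temperature correction, the behaviour could look like the following; the calibration table and the nearest-entry lookup are invented for illustration and are not real calibration data.

```python
# Calibrated motor-code differences of the second camera module for a set of
# temperature differences (current temperature minus preset temperature, in °C).
TEMP_DIFF_TO_CODE_DIFF = {-20.0: -14, -10.0: -7, 0.0: 0, 10.0: 6, 20.0: 13}

def correct_for_temperature(predicted_code: int, target_temp_diff: float) -> int:
    """Shift the quasi-focus motor code of the second camera module to its
    current temperature, picking the calibrated entry nearest to the target
    temperature difference."""
    nearest = min(TEMP_DIFF_TO_CODE_DIFF, key=lambda t: abs(t - target_temp_diff))
    return predicted_code + TEMP_DIFF_TO_CODE_DIFF[nearest]

corrected = correct_for_temperature(283, target_temp_diff=12.5)   # -> 283 + 6
```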

Optionally, the acquisition unit 801 is further configured to: acquire the deviation between the current coordinate system of the second camera module and the preset coordinate system of the second camera module, and record the acquired deviation as a target deviation; and acquire the motor code difference of the second camera module corresponding to the target deviation according to the correspondence between a plurality of deviations of the coordinate system of the second camera module and a plurality of motor code differences of the second camera module. The correction unit 805 is specifically configured to correct the acquired quasi-focus motor code of the second camera module into the quasi-focus motor code of the second camera module in the current state by using the acquired motor code difference.
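The coordinate-system correction follows the same pattern as the temperature correction above; in the sketch below the deviation between the current and preset coordinate systems is reduced, purely as an assumption for illustration, to a single pitch angle in degrees, and the table entries are invented values.

```python
# Calibrated motor-code differences of the second camera module for deviations
# between the current and preset coordinate systems (assumed here to be a pitch angle).
DEVIATION_TO_CODE_DIFF = {-90.0: -9, -45.0: -5, 0.0: 0, 45.0: 5, 90.0: 9}

def correct_for_orientation(predicted_code: int, target_deviation_deg: float) -> int:
    """Shift the quasi-focus motor code of the second camera module to its
    current posture, picking the calibrated entry nearest to the target deviation."""
    nearest = min(DEVIATION_TO_CODE_DIFF, key=lambda d: abs(d - target_deviation_deg))
    return predicted_code + DEVIATION_TO_CODE_DIFF[nearest]
```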

Optionally, the acquisition unit 801 is further configured to: acquire a third quasi-focus motor code and a third quasi-focus distance. The third quasi-focus motor code and the third quasi-focus distance are, respectively, the quasi-focus motor code of the first camera module and the difference between the image distance of the first camera module and the focal length of the first camera module when the distance between the object to be photographed and the terminal device is a first object distance and the first camera module has finished focusing. The acquisition unit 801 acquires the first object distance based on the third quasi-focus distance, and acquires a second quasi-focus distance based on the first object distance. The second quasi-focus distance is the difference between the image distance of the second camera module and the focal length of the second camera module when the distance between the object to be photographed and the terminal device is the first object distance and the second camera module has finished focusing. According to the correspondence between the plurality of quasi-focus motor codes of the second camera module and a plurality of quasi-focus distances, the quasi-focus motor code of the second camera module corresponding to the second quasi-focus distance is acquired and used as a fourth quasi-focus motor code. Each of the plurality of quasi-focus distances is the difference between the image distance of the second camera module and the focal length of the second camera module. The multi-camera module focusing device 80 further includes an establishing unit 806, configured to establish a correspondence between the third quasi-focus motor code and the fourth quasi-focus motor code. The correspondence between the plurality of quasi-focus motor codes of the first camera module and the plurality of quasi-focus motor codes of the second camera module includes the correspondence between the third quasi-focus motor code and the fourth quasi-focus motor code.
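The step from the third quasi-focus distance to the first object distance, and from that object distance to the second quasi-focus distance, can be illustrated with the thin-lens equation 1/f = 1/u + 1/v. The sketch below assumes the modules are well modelled by a thin lens; the focal lengths and numbers are illustrative only.

```python
def object_distance(focus_dist_mm: float, focal_length_mm: float) -> float:
    """Invert the thin-lens equation: with v = f + d (d = quasi-focus distance),
    1/u = 1/f - 1/v."""
    v = focal_length_mm + focus_dist_mm
    return 1.0 / (1.0 / focal_length_mm - 1.0 / v)


def focus_distance(object_dist_mm: float, focal_length_mm: float) -> float:
    """Forward thin-lens equation: d = v - f with 1/v = 1/f - 1/u."""
    v = 1.0 / (1.0 / focal_length_mm - 1.0 / object_dist_mm)
    return v - focal_length_mm


# Building one entry of the correspondence (all numbers are illustrative only):
f1, f2 = 6.0, 4.0                          # focal lengths of the two modules, in mm
third_code, third_focus_dist = 300, 0.074  # third quasi-focus motor code / distance
u = object_distance(third_focus_dist, f1)          # first object distance
second_focus_dist = focus_distance(u, f2)          # second quasi-focus distance
# The fourth quasi-focus motor code would then be read from the second module's
# calibrated (quasi-focus distance -> quasi-focus motor code) table, and the pair
# (third code, fourth code) stored as one entry of the correspondence.
```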

In one example, referring to fig. 2, each of the above units may be implemented by the processor 201 in fig. 2 calling a computer program stored in the memory 202.

For the detailed description of the above optional manners, reference is made to the foregoing method embodiments, which are not described herein again. In addition, for the explanation of any of the above multi-camera module focusing devices 80 and the description of their beneficial effects, reference may be made to the corresponding method embodiments above, which are not repeated herein.

It should be noted that the actions performed by the above modules are only specific examples; for the actions actually performed by the units, refer to the actions or steps mentioned in the description of the embodiments based on fig. 6 to fig. 7.

The embodiment of the present application further provides an apparatus (e.g., a terminal device or a chip), including: a memory and a processor; the memory is for storing a computer program, and the processor is for invoking the computer program to perform the actions or steps mentioned in any of the embodiments provided above.

Embodiments of the present application also provide a computer-readable storage medium, which stores a computer program, and when the computer program runs on a computer, the computer program causes the computer to execute the actions or steps mentioned in any of the embodiments provided above.

The embodiments of the present application also provide a chip. The chip integrates a circuit and one or more interfaces for realizing the functions of the above multi-camera module focusing device. Optionally, the functions supported by the chip may include the processing actions in the embodiments described based on fig. 6 to fig. 7, which are not described herein again. Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be completed by a program instructing the associated hardware. The program may be stored in a computer-readable storage medium. The above-mentioned storage medium may be a read-only memory, a random access memory, or the like. The processing unit or processor may be a central processing unit, a general-purpose processor, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.

The embodiments of the present application also provide a computer program product containing instructions, which, when executed on a computer, cause the computer to execute any one of the methods in the above embodiments. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part when the computer program instructions are loaded and executed on the computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or in a wireless manner (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

It should be noted that the above devices for storing computer instructions or computer programs provided in the embodiments of the present application, such as, but not limited to, the above memories, computer readable storage media, communication chips, and the like, are all nonvolatile (non-volatile).

Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Although the present application has been described in conjunction with specific features and embodiments thereof, various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application.
