Design method of AR-HUD head-up display interface for enhancing driving feeling


Reading note: this technique, "一种增强驾驶感的AR-HUD抬头显示界面的设计方法" (Design method of AR-HUD head-up display interface for enhancing driving feeling), was created on 2021-06-30 by 马向东, 郭柏淇, 洪智聪, 彭鑫, 闫勉, 黄俊鸿, 陈世帆, 何晶晶 and 贾梦婷. Its main content is as follows: The invention relates to a design method of an AR-HUD head-up display interface for enhancing driving feeling, which belongs to the field of virtual reality and comprises: acquiring image information of the driving environment through a camera; adjusting camera parameters to complete camera calibration; preprocessing the image; virtual imaging, namely calibrating the relevant parameters of the AR-HUD system, realizing virtual-real registration of objects, and realizing matching, alignment and prompting of navigation virtual markers against real road targets; and designing the AR-HUD interface, namely constructing a virtual driving scene in a game engine, connecting physical vehicle equipment through the relevant ports, building a virtual test platform in combination with a head-mounted VR display system, and selecting the optimal AR-HUD display system. The invention effectively avoids the long test period, high cost and high risk of real-vehicle testing, and at the same time remedies the prior art's shortcomings in matching the interface design architecture to the user's vision.

1. A design method of an AR-HUD head-up display interface for enhancing driving feeling is characterized by comprising the following steps:

s1, collecting image information of the driving environment through a camera;

s2, adjusting an internal parameter matrix, a distortion coefficient and an external parameter matrix of the camera to finish camera calibration;

s3, preprocessing the image, including graying, filtering and edge enhancement;

s4, virtually imaging, calibrating parameters related to the AR-HUD system, realizing virtual and real registration of objects, and realizing matching, alignment and prompting of navigation virtual marks and road real targets;

s5, designing an AR-HUD interface, constructing a virtual driving scene in a game engine, connecting entity vehicle equipment through a related port, constructing a virtual testing platform by combining a head-mounted VR display system, and selecting an optimal AR-HUD display system.

2. The design method according to claim 1, wherein the camera in step S1 is mounted at the middle of the front windshield of the vehicle, the optical axis of the camera is parallel to the longitudinal central axis of the vehicle, and the camera has a depression angle of 0-5 degrees with respect to the ground.

3. The design method according to claim 1, wherein the filtering process in step S3 is performed using a two-dimensional discrete Gaussian function, the expression of which is as follows:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

where σ is the standard deviation and (x, y) are the coordinates of the two-dimensional discrete Gaussian function.

4. The design method according to claim 1, wherein the formulas for the edge detection in step S3 are as follows:

Gx = [ −1 0 +1 ; −2 0 +2 ; −1 0 +1 ] * A
Gy = [ −1 −2 −1 ; 0 0 0 ; +1 +2 +1 ] * A

and the magnitude of the image gradient is calculated by the formula:

G = √(Gx² + Gy²)

where A represents the original image, * denotes two-dimensional convolution, Gx represents the transversely edge-detected image, and Gy represents the longitudinally edge-detected image.

5. The design method according to claim 1, wherein the AR-HUD equivalent virtual image plane model is used to complete the virtual-real registration of the object in step S4: assuming that the pupil coordinates of the human eye are known and recorded as E(xE, yE, zE), the vertices of the AR-HUD equivalent virtual image plane ABCD are respectively A(xA, yA, zA), B(xB, yB, zB), C(xC, yC, zC) and D(xD, yD, zD), and the object point to be visually enhanced is N(xN, yN, zN), the coordinates of the intersection point F of the straight line EN with the plane ABCD are:

F = E + t·(N − E)

wherein:

t = (n · (A − E)) / (n · (N − E)), with the plane normal n = (B − A) × (D − A);

the coordinates of the intersection point F are calculated to complete the virtual-real registration.

6. The design method according to claim 5, wherein in step S4 target calibration is performed on the coordinates of the human-eye pupil, the spatial coordinates of the virtual-real registered object, and the coordinates of the AR-HUD equivalent virtual image plane.

7. The design method according to claim 6, wherein, when performing the target calibration, an auxiliary camera CA is added for assisting calibration and a calibration board B2 is customized, the checkerboard printed on the calibration board B2 for camera calibration being visible from both sides of the board.

8. The design method according to claim 1, wherein step S5 includes:

s51, constructing a driving environment, designing various driving emergencies, and completing registration of the driving environment, wherein driving objects around the driving environment comprise pedestrians, vehicles and urban buildings;

s52, navigating by adopting a corner point algorithm, selecting a route direction, optimizing and presenting a driving route according to different conditions;

s53, after the driving route is optimized, adopting a third-order Bézier curve to construct a guide curve;

s54, using head-mounted equipment as the information carrier in the application, providing a more vivid and refined driving experience.

Technical Field

The invention relates to the technical field of driving display, in particular to a design method of an AR-HUD (Augmented Reality Head-Up Display) interface for enhancing driving feeling.

Background

With the improvement of people's living standards, the number of motor vehicles has increased sharply. Meanwhile, with the development of computer technology and graphics processing and the improvement of hardware performance, virtual driving and automatic driving technologies have matured day by day; in recent years, AR-HUD head-up display technology capable of enhancing driving feeling has appeared, bringing augmented reality head-up displays to the front windshield.

The existing AR-HUD head-up display technology has no complete and unified interface design scheme, which hinders the research and development of automobile head-up display layouts and limits the progress of automobile driving systems toward humanization and intelligence. Invention patent CN111222444A, published in June 2020, discloses an augmented reality head-up display method and system that considers the driver's emotion, acquiring a target image of the scene in RGB format through a TOF module. Invention patent CN111896024A, published in November 2020, discloses a control method and device for navigation display and an AR-HUD display system, which acquires the current position of the vehicle to obtain its current driving direction. Neither of the above patent documents conducts tests or studies on driving safety based on the AR-HUD as a whole.

In the future, AR-HUD applications will become an important technical direction in the cockpit and a focus of attention for vehicle manufacturers. As driving assistance systems develop and converge, the warning modes they use tend to diversify, and drivers' stress response capabilities differ across those modes; yet in enhanced head-up displays there remains a huge gap in matching the interface design framework to the user's vision. Therefore, research on a novel AR-HUD comprehensive assisted-driving system, and testing of its safety and effectiveness, are of great significance for improving road traffic safety; a design and test method for enhanced head-up display based on driving safety is needed to address the above problems.

Disclosure of Invention

In order to solve the problems in the prior art, the invention provides a design method of an AR-HUD head-up display interface for enhancing driving feeling, which selects an optimal AR-HUD display system by collecting images, processing the images, virtually presenting the images and designing the AR-HUD interface.

The invention is realized by adopting the following technical scheme: a design method of an AR-HUD head-up display interface for enhancing driving feeling comprises the following steps:

s1, collecting image information of the driving environment through a camera;

s2, adjusting an internal parameter matrix, a distortion coefficient and an external parameter matrix of the camera to finish camera calibration;

s3, preprocessing the image, including graying, filtering and edge enhancement;

s4, virtually imaging, calibrating parameters related to the AR-HUD system, realizing virtual and real registration of objects, and realizing matching, alignment and prompting of navigation virtual marks and road real targets;

s5, designing an AR-HUD interface, constructing a virtual driving scene in a game engine, connecting entity vehicle equipment through a related port, constructing a virtual testing platform by combining a head-mounted VR display system, and selecting an optimal AR-HUD display system.

The invention aims to fill the gap in visual matching between the interface design framework and the user in enhanced head-up displays. A user selects the required information modules and display forms on a VR test platform, and the platform is modularized for test optimization, so that a safe and comfortable assisted-driving system is customized for the user. Compared with the prior art, the invention has the following technical effects:

1. By researching, developing and testing the AR-HUD driving system in VR, the invention effectively avoids the long test period, high cost and high risk of real-vehicle testing, and at the same time remedies the prior art's shortcomings in matching the interface design framework to the user's vision.

2. The virtual driving environment can be generated and constructed modularly by users, making driving tests easy to tailor to user requirements; the AR-HUD display interface is designed by combining human-computer interaction technology, engineering psychology, computer vision, market and customer requirements, safety and other aspects, so the model is robust and has innovation and practical value.

3. Through head-mounted VR equipment such as the VIVE, the test platform satisfies the requirements of multi-perception, immersion, interactivity and imagination, providing a more vivid and refined driving experience while avoiding the drawbacks of real-vehicle testing.

4. The invention uses psychological indexes, physiological indexes and behavioral characteristics as analysis indexes, and compares and evaluates the display interfaces of different AR-HUDs, so that the AR-HUD assisted-driving interface best suited to the customer's needs is provided automatically, safely and reliably.

Drawings

FIG. 1 is a flowchart of a method for designing an AR-HUD head-up display interface for enhancing driving feeling according to an embodiment of the present invention;

FIG. 2 is a diagram of a third-order Bézier curve;

FIG. 3 is a block diagram of a test system according to an embodiment of the present invention.

Detailed Description

The invention improves the safety and effectiveness of the novel AR-HUD comprehensive assisted-driving system, gives due attention to the interface design framework and its visual matching with the user, and, through VR testing, avoids the long test period, high cost, high risk and non-repeatability of real-vehicle testing, which is of great significance for improving road traffic safety.

The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.

Examples

As shown in fig. 1, in this embodiment, the method for designing an AR-HUD head-up display interface for enhancing driving feeling includes the following steps:

S1, acquiring image information of the driving environment through the camera.

The camera is installed in the middle of a front windshield of a vehicle, the optical axis of the camera is parallel to the longitudinal central axis of the vehicle, and the camera and the ground form a 0-5-degree depression angle.

S2, adjusting the internal parameter matrix, distortion coefficients and external parameter matrix of the camera to complete camera calibration.

In this embodiment, a two-dimensional planar pattern calibration method is adopted, using the monocular camera calibration in Matlab. The calibration process is as follows: print a checkerboard pattern, attach it to a flat surface, and measure the width of its squares; aim the camera at the checkerboard and move the board or the camera several times so that the camera captures checkerboard pictures from different angles; calibrate with the captured checkerboard pictures using the Matlab Camera Calibrator App, and then export the calibration parameters.
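For reference, the same checkerboard procedure can be scripted. Below is a minimal sketch using OpenCV in place of the Matlab Camera Calibrator App; the 9×6 pattern size, 25 mm square width and file-name pattern are illustrative assumptions, not values fixed by the method:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row and column (assumed)
SQUARE_MM = 25.0      # measured square width (assumed)

# 3D positions of the board corners in the board's own plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("checkerboard_*.png"):   # shots from different angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix, distortion coefficients and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```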

And S3, preprocessing the image, accelerating the detection speed and improving the real-time performance and precision of the detection. The preprocessing includes graying processing, filtering processing, edge enhancement processing, and the like.

S31, graying: the color image is converted into a grayscale image. In image processing, to reduce algorithm complexity, the three primary color components R, G and B are made equal by a conversion operation, so any one of them can then be taken to complete the processing of the image; one component can be stored in a single byte, reducing storage space and improving processing efficiency. Three common conversion methods meet the requirements: the weighted average method, the average method and the maximum method.

This embodiment uses the weighted average method: according to the preference of different image processing systems for specific colors, the three primary color components of the image are combined with different weights so as to emphasize or attenuate a given primary color. The weighted average method uses the following formula:

F(i,j)=αR(i,j)+βG(i,j)+γB(i,j)

where F(i, j) is the gray value of pixel (i, j); R(i, j), G(i, j) and B(i, j) are the red, green and blue components of pixel (i, j); and α, β and γ are the weights of the red, green and blue components respectively.
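A minimal sketch of this weighted-average graying step follows; the ITU-R BT.601 weights used for α, β and γ are a common choice rather than values prescribed by the method, and the input file name is hypothetical:

```python
import cv2
import numpy as np

def to_gray(bgr: np.ndarray, alpha=0.299, beta=0.587, gamma=0.114) -> np.ndarray:
    # F(i,j) = alpha*R(i,j) + beta*G(i,j) + gamma*B(i,j); OpenCV stores BGR.
    r = bgr[..., 2].astype(np.float32)
    g = bgr[..., 1].astype(np.float32)
    b = bgr[..., 0].astype(np.float32)
    return np.clip(alpha * r + beta * g + gamma * b, 0, 255).astype(np.uint8)

frame = cv2.imread("driving_frame.png")   # hypothetical captured frame
gray = to_gray(frame)
```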

S32, filtering: frequencies in specific bands are filtered out to remove noise from the image and extract the required visual features more accurately.

There are five common filtering methods: mean filtering, box filtering, median filtering, Gaussian filtering and bilateral filtering, each with different trade-offs between time consumption and practicality. Gaussian filtering has an advantage in time consumption, so this embodiment adopts the Gaussian filtering algorithm, which effectively removes noise conforming to a normal distribution.

The one-dimensional zero-mean Gaussian function is:

G(x) = (1 / (√(2π)·σ)) · e^(−x² / (2σ²))

where σ is the standard deviation, characterizing the width of the bell shape, and 1/(√(2π)·σ) corresponds to the height of the peak of the Gaussian curve, whose center is at 0; x is the variable of the one-dimensional zero-mean Gaussian function, characterizing the distance from the center (i.e., the origin).

In this embodiment, a two-dimensional discrete Gaussian function is adopted for filtering; on the grayed image, Gaussian filtering is realized by one convolution with the two-dimensional discrete Gaussian kernel. The expression of the two-dimensional discrete Gaussian function is as follows:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

where σ is the standard deviation and (x, y) are the coordinates of the two-dimensional discrete Gaussian function. The width of the Gaussian filter determines its degree of smoothing and is characterized by the parameter σ.
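A minimal sketch of this filtering step: the kernel is sampled from the two-dimensional discrete Gaussian above, normalized, and applied in a single convolution pass; the 5×5 size, σ = 1.0 and the file name are illustrative assumptions:

```python
import cv2
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Sample G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()    # normalize so overall brightness is preserved

gray = cv2.imread("gray_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
smoothed = cv2.filter2D(gray, -1, gaussian_kernel())
# cv2.GaussianBlur(gray, (5, 5), 1.0) is the equivalent library one-liner.
```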

S33, edge enhancement: the image is detected and its edges are extracted.

The image edge is one of the most basic features of an image. The images acquired in this embodiment mainly contain traffic markings such as zebra crossings and lane boundaries, and this information resides in the image contours, so the image must be detected and its edges extracted.

Common enhancement operators include the Laplacian operator, the Roberts operator and the Sobel operator. Each operator has advantages and disadvantages; this embodiment adopts the Sobel operator. The Sobel operator is one of the operators in image processing, mainly used for edge detection; technically, it is a discrete difference operator that computes an approximation of the gradient of the image brightness function. At any point of the image, the Sobel operator produces the corresponding gradient vector or its normal vector. The Sobel operator comprises two 3×3 kernels, one horizontal and one vertical, each convolved with the image in the plane to obtain the horizontal and vertical brightness difference approximations respectively. If A represents the original image, and Gx and Gy represent the horizontally and vertically edge-detected images respectively, the formulas for edge detection are as follows:

Gx = [ −1 0 +1 ; −2 0 +2 ; −1 0 +1 ] * A
Gy = [ −1 −2 −1 ; 0 0 0 ; +1 +2 +1 ] * A

where * denotes two-dimensional convolution. The lateral and longitudinal gradient approximations of each pixel in the image may be combined to calculate the magnitude of the image gradient:

G = √(Gx² + Gy²)
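A minimal sketch of this Sobel step with the two 3×3 kernels above; the display threshold and the file name are illustrative assumptions:

```python
import cv2
import numpy as np

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)  # transverse
Ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], np.float32)  # longitudinal

a = cv2.imread("smoothed_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
gx = cv2.filter2D(a, -1, Kx)              # transverse edge response Gx
gy = cv2.filter2D(a, -1, Ky)              # longitudinal edge response Gy
g = np.sqrt(gx**2 + gy**2)                # G = sqrt(Gx^2 + Gy^2)
edges = (g > 100).astype(np.uint8) * 255  # crude threshold for visualization
```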
s4, virtual rendering: and calibrating related parameters of the AR-HUD system, realizing virtual and real registration of objects, and realizing matching, alignment and prompting of the navigation virtual marker and a road real target.

S41, virtual-real registration: for an AR system to enhance vision, it must be able to add virtual marker information to reality in real time, and this information must be mapped to the correct position; this is virtual-real registration in AR. This embodiment uses the AR-HUD equivalent virtual image plane model to complete the virtual-real registration.

Assume that the coordinates of the human-eye pupil are known and denoted E(xE, yE, zE), the vertices of the AR-HUD equivalent virtual image plane ABCD are respectively A(xA, yA, zA), B(xB, yB, zB), C(xC, yC, zC) and D(xD, yD, zD), and the object point to be visually enhanced is N(xN, yN, zN). Only the intersection point F of the straight line EN with the plane ABCD needs to be calculated to complete the virtual-real registration of the object. The coordinates of the intersection point F are:

F = E + t·(N − E)

wherein:

t = (n · (A − E)) / (n · (N − E)), with the plane normal n = (B − A) × (D − A).
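A minimal sketch of this registration computation in NumPy, following the line-plane intersection above; the coordinates are illustrative values in an assumed common vehicle frame:

```python
import numpy as np

def register(E, A, B, D, N):
    """Intersect line EN with the plane through A, B, D; return F or None."""
    E, A, B, D, N = map(np.asarray, (E, A, B, D, N))
    n = np.cross(B - A, D - A)      # plane normal from two edges of ABCD
    denom = n @ (N - E)
    if abs(denom) < 1e-9:           # sight line parallel to the image plane
        return None
    t = (n @ (A - E)) / denom
    return E + t * (N - E)          # F = E + t * (N - E)

# Illustrative coordinates (meters): eye, three plane vertices, object point.
F = register(E=(0.0, 1.2, 0.0), A=(-0.4, 1.4, 2.5), B=(0.4, 1.4, 2.5),
             D=(-0.4, 1.0, 2.5), N=(1.5, 1.0, 20.0))
```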
s42, realizing target calibration: and carrying out target calibration on the coordinates of the pupils of the human eyes, the space coordinates of the virtual and real registered objects and the coordinates of the AR-HUD equivalent virtual image plane.

In this embodiment, an auxiliary camera CA is added to assist calibration, and a calibration board B2 is customized; the checkerboard printed on calibration board B2 for camera calibration is visible from both sides of the board.

The auxiliary camera CA must satisfy the following two conditions simultaneously: the auxiliary camera CA and the front camera CF must both be able to completely photograph the calibration board B1 in front of the camera at the same time; and the auxiliary camera CA and one of the two pupil cameras must be able to completely photograph the calibration board B2.

This embodiment adopts the Zhang Zhengyou calibration method, chaining the extrinsics through the two calibration boards:

T(CE, V) = T(CE, B2) · T(CA, B2)⁻¹ · T(CA, B1) · T(CF, B1)⁻¹ · T(CF, V)

where T(CE, V) is the extrinsic matrix of the pupil camera CE relative to the vehicle coordinate system, T(CE, B2) is the extrinsic matrix of the pupil camera CE relative to calibration board B2, T(CA, B2) is the extrinsic matrix of the auxiliary camera CA relative to calibration board B2, T(CA, B1) is the extrinsic matrix of the auxiliary camera CA relative to calibration board B1, T(CF, B1) is the extrinsic matrix of the front camera CF relative to calibration board B1, and T(CF, V) is the extrinsic matrix of the front camera CF relative to the vehicle coordinate system.
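A minimal sketch of chaining these homogeneous extrinsics, under two stated assumptions: each T maps points of its second frame into its first frame, and the chain order is the one reconstructed above; the identity matrices stand in for calibrated values:

```python
import numpy as np

def inv_rt(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid transform [R | t; 0 | 1] using R^T."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

T_CE_B2 = np.eye(4)  # pupil camera CE w.r.t. board B2      (placeholder)
T_CA_B2 = np.eye(4)  # auxiliary camera CA w.r.t. board B2  (placeholder)
T_CA_B1 = np.eye(4)  # auxiliary camera CA w.r.t. board B1  (placeholder)
T_CF_B1 = np.eye(4)  # front camera CF w.r.t. board B1      (placeholder)
T_CF_V = np.eye(4)   # front camera CF w.r.t. vehicle frame (placeholder)

# Pupil camera w.r.t. the vehicle frame: walk CE -> B2 -> CA -> B1 -> CF -> V.
T_CE_V = T_CE_B2 @ inv_rt(T_CA_B2) @ T_CA_B1 @ inv_rt(T_CF_B1) @ T_CF_V
```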

S5, designing the AR-HUD interface: a virtual driving scene is constructed in a game engine, physical vehicle equipment such as a physical steering wheel and brake is connected through the relevant ports, and a virtual test platform is built in combination with a head-mounted VR display system such as the VIVE. The head-mounted VR device is not limited to a VIVE head-mounted display.

The AR-HUD interface design generally follows the IDMF theory and mainly comprises: market research and design research, user research, business model and concept design, information architecture and design implementation, design evaluation, and user testing. Vehicle driving safety icons are defined and designed according to human-computer interaction principles, engineering psychology and human factors engineering, and the AR-HUD interface is laid out with factors such as mental load taken into account. The interface information comprises basic vehicle information, navigation information, driving safety information, entertainment information and an auxiliary image system. In the interface presentation, the main considerations are a concise and clear display form, the placement of information, brightness (which must adapt to different environments), color (appropriate colors are used to enhance recognizability) and opacity (an appropriate opacity keeps information easy to recognize without affecting driving safety). Information placement mainly considers: Hick's law, the three-second interaction principle, the rules of the human visual field, the principles of information definition and symbol design, and drivers' traffic characteristics. The method specifically comprises the following steps:

s51, constructing a driving environment and designing various driving emergencies, wherein the driving emergencies mainly comprise: and constructing a module by using a multi-detail level technology, a baking technology, a light detector group, mask elimination and a layer blanking distance technology, and then completing registration of a driving environment, wherein driving objects around the driving environment comprise pedestrians, vehicles, urban buildings and the like. In the aspect of illumination implementation, the illumination is carried out on the surface of a baked static object to reduce the performance overhead, the brightness of a key object is calculated in real time by using a real-time illumination technology to ensure the visual reality, and refraction and reflection of the illumination between the objects are realized by using a Global Illumination (GI) system such as UNITY and the like, so that the illumination reality is greatly improved. In terms of simulating real world physical collisions, the collision system of UNITY is used to simulate the effects of gravity and collisions on real world objects by adding collision devices and rigid body components to vehicles, pedestrians and buildings. It should be noted that the collision system is not limited to the UNITY system.

To better simulate the effect of the AR-HUD in a real environment, Unity is used to realize the virtual image effect: a special-effect shader renders the AR-HUD elements as semi-transparent sprites displayed in the virtual environment. In the design of safety prompts such as the AR pedestrian detection prompt box, AR vehicle body collision warning and AR vehicle warning elements, a conspicuous red is adopted to attract the driver's attention and improve the response speed to emergencies. In the design of assisted-driving elements such as the AR navigation arrow, AR vehicle speed display, fuel level display, navigation mini-map, event prompts, residential area prompts and remaining distance prompts, soft colors such as cyan or blue are adopted to relieve the driver's visual fatigue.

S52, the driver's traffic characteristics are taken as the main standard of the AR-HUD interface design. According to the driver's brain information processing flow and human-computer interaction principles, the number of items displayed on a single AR-HUD interface is set to 7-9, a single warning message is displayed for 3 seconds, and emergency danger warnings span 10-15 seconds. The AR-HUD interface is displayed within a 65-degree range of the visual field when the vehicle speed is below 75 km/h, and within a 40-degree range when the vehicle speed is above 75 km/h. The display form is clear and concise text and icons, placed according to priority.
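A minimal sketch of the speed-dependent placement rule just described; the function name and the threshold handling at exactly 75 km/h are illustrative assumptions:

```python
def display_field_deg(speed_kmh: float) -> int:
    """Visual-field range (degrees) available for AR-HUD interface placement."""
    return 65 if speed_kmh < 75.0 else 40

assert display_field_deg(60.0) == 65   # below 75 km/h: wider 65-degree zone
assert display_field_deg(100.0) == 40  # above 75 km/h: tighter 40-degree zone
```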

S53, navigation adopts a corner point algorithm to select the route direction, and the driving route is optimized and presented according to different conditions.

S54, after the driving route is optimized, a third-order Bézier curve is adopted to construct the guide curve:

B(t) = P0·(1−t)³ + 3·P1·t·(1−t)² + 3·P2·t²·(1−t) + P3·t³,  t ∈ [0, 1]

where the four points P0, P1, P2 and P3 define a cubic Bézier curve in the plane or in three-dimensional space. As shown in fig. 2, the curve starts at P0 heading toward P1, and arrives at P3 coming from the direction of P2; it generally does not pass through P1 or P2, which only provide direction information. The spacing between P0 and P1 determines how far the curve travels in the direction of P2 before turning toward P3.
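A minimal sketch of sampling the guide curve B(t) above; the control points are illustrative lane-guidance coordinates:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, samples: int = 50) -> np.ndarray:
    # B(t) = P0(1-t)^3 + 3*P1*t(1-t)^2 + 3*P2*t^2(1-t) + P3*t^3, t in [0, 1].
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return ((1 - t)**3 * p0 + 3 * t * (1 - t)**2 * p1
            + 3 * t**2 * (1 - t) * p2 + t**3 * p3)

guide = cubic_bezier(np.array([0.0, 0.0]), np.array([0.0, 10.0]),
                     np.array([5.0, 15.0]), np.array([10.0, 15.0]))
```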

s55, the head-mounted equipment is used as an information carrier on the application, and more vivid and exquisite driving experience is provided.

At the software level, the environment is built and access is developed on the game engine, and user operation is realized through the simulated cockpit. At the hardware level, the system consists of multiple hard rear-projection walls; in accordance with ergonomic requirements, stereo projection, a three-dimensional tracker, stereo glasses and the simulated cockpit cooperate to create a four-sided immersive virtual space (front, left, right and ground) that captures all the spatial and logical information contained in the environment, so that the user obtains a more realistic sense of distance, floating UHD information visuals and a finer driving experience.

As shown in fig. 3, the virtual test system centers on the VIVE VR headset, whose port is connected to a high-performance PC; other physical devices such as the foot pedals and steering wheel are likewise connected to the computer through USB interfaces. The PC processes the acquired data and feeds it back to Unity, and the headset displays the Unity image. Meanwhile, the eye tracker in the VIVE, the pedal sensor and the steering-wheel angle sensor continuously monitor and record data and synchronously import it into a data repository according to a written data-import algorithm.

S6, the system test optimization module measures index data such as vehicle speed, steering-wheel angle, heart rate and eye movement, and analyzes them with the analytic hierarchy process (AHP), integrating expert scoring indexes into the AHP's hierarchical calculation so that subjective and objective factors are combined to select the optimal AR-HUD display system (a minimal AHP weighting sketch follows the steps below). The method specifically comprises the following steps:

S61, heart rate and blood pressure are tested with an intelligent system. The influence of different design concept systems on driver behavior is compared by analyzing changes in the heartbeat and its rhythmicity under dangerous stress conditions. Corresponding tests are performed according to the modules selected by the user, with different driving scenes and driving events for different assisted-driving modules; a steering-wheel angle sensor tests steering control behavior, reflecting the characteristic indexes of the vehicle's lateral motion. At the same time, an eye-movement test is performed to analyze the influence of the AR-HUD assisted-driving system on the distribution of driving cognitive resources: three minutes of driving data are recorded in a driving scene and environment fixed for each driver, after which data such as the horizontal and vertical extent of saccades, the fixation duration in each area and pupil changes are compared, checked and used for automatic quantitative decisions, giving the user the optimal AR-HUD interface partitioning module.

S62, qualitative and quantitative analyses are performed with relevant mathematical methods; under the condition that the authenticity of the data remains essentially unchanged, discrete data are analyzed continuously for ease of observation and analysis, and a high-quality assisted-driving model is automatically screened out from the driving behavior and eye-movement data for the user to select.

S63, deep analysis of the data is carried out by means of a BP neural network, so as to explore the intrinsic connections between inputs and outputs and make short-term predictions.
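For the AHP weighting mentioned in step S6, below is a minimal sketch of deriving criterion weights from a pairwise comparison matrix via its principal eigenvector; the comparison values are illustrative expert judgments, not values from the method:

```python
import numpy as np

# Criteria: vehicle speed, steering-wheel angle, heart rate, eye movement.
pairwise = np.array([[1.0, 2.0, 3.0, 2.0],
                     [0.5, 1.0, 2.0, 1.0],
                     [1 / 3, 0.5, 1.0, 0.5],
                     [0.5, 1.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()   # normalized criterion weights

# Consistency ratio; RI = 0.90 is Saaty's random index for n = 4.
n = pairwise.shape[0]
ci = (np.max(np.real(eigvals)) - n) / (n - 1)
print("weights:", weights.round(3), "CR:", round(ci / 0.90, 3))
```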

When the BP neural network is used for deep analysis, the learning process consists of two phases: forward propagation of signals and backward propagation of errors. In forward propagation, an input sample is fed in at the input layer, processed layer by layer through the hidden layers, and passed to the output layer. If the actual output of the output layer does not match the expected output, the process enters the error back-propagation phase: the output error is passed back through the hidden layers to the input layer in a certain form and apportioned to all the units of each layer, yielding an error signal for each unit that serves as the basis for correcting its weights. This cycle of forward signal propagation and backward error propagation, with layer-by-layer weight adjustment, constitutes the learning and training process of the network, and it continues until the error in the network output falls to an acceptable level or a predetermined number of learning cycles is reached.
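A minimal sketch of this BP learning loop with one hidden layer: forward propagation, error back-propagation, and weight correction from the per-layer error signals; the layer sizes, learning rate, toy data and fixed cycle count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 6))    # e.g. driving-behavior and eye-movement features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy expected output

W1, b1 = rng.normal(scale=0.5, size=(6, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for cycle in range(500):        # stop after a predetermined number of cycles
    # Forward propagation: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward propagation: output error apportioned back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Weight correction from each layer's error signal (learning rate 0.5).
    W2 -= 0.5 * h.T @ d_out / len(X)
    b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * X.T @ d_h / len(X)
    b1 -= 0.5 * d_h.mean(axis=0)
```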

The principle of the deep analysis is to present the information the user needs most in the form of the virtual AR-HUD interface. The user should be able to accept the content intuitively, without negative reactions such as confusion or rejection, while maintaining high trust in the driving system and a relaxed driving experience; the information layout of the AR-HUD interface must earn the driver's trust.

The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
