Method and device for customizing a contextual model, electronic device, and storage medium


Abstract: This technique, "Method and device for customizing a contextual model, electronic device, and storage medium", was created by 赵保军, 岳婧, 夏楠, and 张弛 on 2021-06-10. The invention relates to a method and a device for customizing a contextual model, an electronic device, and a storage medium. The method comprises acquiring a touch instruction, where the touch instruction includes a contextual model to be defined; determining custom state information of each control module according to the touch instruction; and then matching and storing the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model. Based on the embodiments of this application, the custom state information of each control module can be determined according to the contextual model to be defined, a voice control instruction, and a state information acquisition instruction sent by the human-computer interaction system, and matched with the contextual model to be defined to obtain the custom contextual model, so that the vehicle's contextual models can be updated in real time, bringing the user a brand-new experience.

1. A method for customizing a contextual model, characterized by comprising the following steps:

acquiring a touch instruction, wherein the touch instruction comprises a contextual model to be defined;

determining custom state information of each control module according to the touch instruction; and

matching and storing the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

2. The method of claim 1, wherein the touch instruction comprises a voice control instruction; and

determining the custom state information of each control module according to the touch instruction comprises:

determining the custom state information of each control module according to the voice control instruction.

3. The method of claim 1, wherein the touch instruction comprises a state information acquisition instruction; and

determining the custom state information of each control module according to the touch instruction comprises:

acquiring current state information of each control module; and

determining the current state information of each control module as the custom state information of each control module.

4. The method of claim 1, wherein the touch instruction comprises a voice control instruction and a state information acquisition instruction; and

determining the custom state information of each control module according to the touch instruction comprises:

adjusting each control module from a historical state to a current state according to the voice control instruction;

acquiring current state information of each control module; and

determining the current state information of each control module as the custom state information of each control module.

5. The method of claim 1, wherein the touch instruction comprises a voice control instruction and a state information acquisition instruction; and

determining the custom state information of each control module according to the touch instruction comprises:

acquiring current state information of each control module; and

determining the custom state information of each control module according to the voice control instruction and the current state information of each control module.

6. The method of claim 1, wherein the contextual models to be defined comprise a wake-up mode, a smoking mode, a parent-child mode, a joy mode, and a new-creation mode.

7. The method of claim 1, further comprising, before the acquiring of the touch instruction:

determining a current contextual model of the vehicle;

determining the custom state information of each control module corresponding to the current contextual model; and

controlling each control module to be in a corresponding working state according to the custom state information of each control module.

8. A device for customizing a contextual model, characterized by comprising:

a touch instruction acquisition module, configured to acquire a touch instruction, wherein the touch instruction comprises a contextual model to be defined;

a custom state information determining module, configured to determine custom state information of each control module according to the touch instruction; and

a custom contextual model determining module, configured to match and store the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

9. An electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for customizing a contextual model of any one of claims 1-7.

10. A computer-readable storage medium, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for customizing a contextual model of any one of claims 1-7.

Technical Field

The invention relates to the technical field of intelligent automobile design, and in particular to a method and a device for customizing a contextual model, an electronic device, and a storage medium.

Background

With the popularization of automobiles and the increasing frequency with which people use them, intelligent vehicle control has become an important consideration. For example, an automobile may be designed with a plurality of contextual models so that the vehicle is automatically put into the corresponding contextual model according to the real-time environment. For instance, a smoking mode can be preset for users who smoke in the vehicle: when the user triggers the smoking mode on the human-computer interaction interface, the windows are automatically opened to a corresponding degree.

Although existing vehicles provide a number of contextual models, these models are pre-designed and defined by the original equipment manufacturer and cannot be updated in real time according to user needs. The variety of pre-defined contextual models is limited, and adding a new contextual model requires a corresponding update of the vehicle control system, so customization of contextual models is difficult to achieve.

Disclosure of Invention

Embodiments of the invention provide a method and a device for customizing a contextual model, an electronic device, and a storage medium, which can update the contextual models of a vehicle in real time and bring a brand-new experience to the user.

An embodiment of the invention provides a method for customizing a contextual model, comprising the following steps:

acquiring a touch instruction, wherein the touch instruction comprises a contextual model to be defined;

determining custom state information of each control module according to the touch instruction; and

matching and storing the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

Further, the touch instruction comprises a voice control instruction; and

determining custom state information of each control module according to the touch instruction comprises:

determining the custom state information of each control module according to the voice control instruction.

Further, the touch instruction comprises a state information acquisition instruction; and

determining custom state information of each control module according to the touch instruction comprises:

acquiring current state information of each control module; and

determining the current state information of each control module as the custom state information of each control module.

Further, the touch instruction comprises a voice control instruction and a state information acquisition instruction; and

determining custom state information of each control module according to the touch instruction comprises:

adjusting each control module from a historical state to a current state according to the voice control instruction;

acquiring current state information of each control module; and

determining the current state information of each control module as the custom state information of each control module.

Further, the touch instruction comprises a voice control instruction and a state information acquisition instruction; and

determining custom state information of each control module according to the touch instruction comprises:

acquiring current state information of each control module; and

determining the custom state information of each control module according to the voice control instruction and the current state information of each control module.

Further, the contextual models to be defined include a wake-up mode, a smoking mode, a parent-child mode, a joy mode, and a new-creation mode.

Further, before the touch instruction is acquired, the method further includes:

determining the current contextual model of the vehicle;

determining the custom state information of each control module corresponding to the current contextual model; and

controlling each control module to be in a corresponding working state according to the custom state information of each control module.

Correspondingly, an embodiment of the present application further provides a device for customizing a contextual model, the device comprising:

a touch instruction acquisition module, configured to acquire a touch instruction, wherein the touch instruction comprises a contextual model to be defined;

a custom state information determining module, configured to determine custom state information of each control module according to the touch instruction; and

a custom contextual model determining module, configured to match and store the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

Correspondingly, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above method for customizing a contextual model.

Correspondingly, an embodiment of the present invention further provides a computer-readable storage medium, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the above method for customizing a contextual model.

The embodiments of the present application have the following beneficial effects:

The embodiments of the present application provide a method and a device for customizing a contextual model, an electronic device, and a storage medium. The method comprises acquiring a touch instruction, where the touch instruction includes a contextual model to be defined; determining custom state information of each control module according to the touch instruction; and then matching and storing the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model. Based on these embodiments, the custom state information of each control module can be determined according to the contextual model to be defined, the voice control instruction, and the state information acquisition instruction sent by the human-computer interaction system, and matched with the contextual model to be defined to obtain the custom contextual model; the vehicle's contextual models can thus be updated in real time, bringing the user a brand-new experience.

Drawings

In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the invention;

FIG. 2 is a schematic flowchart of a method for customizing a contextual model according to an embodiment of the invention;

FIG. 3 is a display diagram of modes to be defined provided by an embodiment of the present application;

FIG. 4 is a flowchart of a method for customizing a contextual model according to an embodiment of the present application;

FIG. 5 is a flowchart of a method for customizing a contextual model according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a device for customizing a contextual model according to an embodiment of the invention.

Detailed Description

In order to make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.

An "embodiment" as referred to herein relates to a particular feature, structure, or characteristic that may be included in at least one implementation of the invention. In the description of embodiments of the present invention, it should be understood that the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, apparatus, article, or apparatus.

Referring to FIG. 1, a schematic diagram of an application environment provided by an embodiment of the present invention is shown. The environment includes a vehicle 101 on which a vehicle-mounted server 1011 is installed. The server 1011 can acquire a touch instruction, where the touch instruction includes a contextual model to be defined, determine the custom state information of each control module according to the touch instruction, and then match and store the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

A specific embodiment of a method for customizing a contextual model according to the present invention is described below. FIG. 2 is a schematic flowchart of the method provided by an embodiment of the present invention. This specification presents the operation steps of the embodiment or flowchart, but more or fewer steps may be included based on conventional or non-creative labor. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only one; in actual execution, the steps may be performed sequentially or in parallel (for example, on parallel processors or in a multi-threaded environment). Specifically, as shown in FIG. 2, the method includes:

s201: and acquiring a touch instruction, wherein the touch instruction comprises a scene mode to be defined.

In this embodiment of the application, the server may acquire a touch instruction sent by the human-computer interaction system, where the touch instruction may include a contextual model to be defined. That is, when the user taps the corresponding contextual model to be defined on the vehicle-mounted multimedia screen, the screen sends a touch signal to the server.

FIG. 3 is a display diagram of modes to be defined according to an embodiment of the present application. In an alternative embodiment, the contextual models to be defined may include a wake-up mode, a smoking mode, a parent-child mode, a joy mode, and a new-creation mode.

In this embodiment of the application, before the server acquires the touch instruction, it may determine the current contextual model of the vehicle, determine the custom state information of each control module corresponding to the current contextual model, and control each control module into the corresponding working state according to that custom state information.

In an optional implementation, when the current contextual model of the vehicle is the wake-up mode, the central control module may control the vehicle-mounted multimedia to play dynamic music; the ambience lamp control module may adjust the color, flashing frequency, and brightness of the ambience lamp; the air-conditioning control module may put the air conditioner into the external circulation mode at a preset wake-up temperature; the vehicle-mounted purification control module may control the vehicle-mounted purifier to enable negative ions; the internal/external circulation control module with the fragrance function may start the fragrance system; and an outdoor temperature sensor may be used to detect the outdoor temperature. If the outdoor temperature is suitable, the seat control module may start seat ventilation; if there is no outdoor temperature sensor, the seat control module may start seat ventilation by default.
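As an illustrative sketch only (the module names, keys, and values below are hypothetical and not part of the disclosure), the correspondence between the wake-up mode and the working states of the control modules described above could be represented as a simple mapping:

    # Hypothetical mapping from the wake-up mode to per-module state settings.
    # The disclosure does not fix a data format; this is one possible layout.
    WAKE_UP_MODE_STATES = {
        "central_control": {"multimedia": "dynamic_music"},
        "ambience_lamp": {"color": "warm", "flash_frequency_hz": 2, "brightness": 80},
        "air_conditioner": {"circulation": "external", "temperature": "preset_wake_up"},
        "purifier": {"negative_ions": True},
        "fragrance": {"fragrance_system": True},
        # Seat ventilation is gated on a suitable outdoor temperature when an
        # outdoor temperature sensor exists, and is enabled by default otherwise.
        "seat": {"ventilation": True},
    }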

In another optional embodiment, when the current contextual model of the vehicle is the smoking mode, the window control module may open the window to a preset opening degree, for example 20%. When the current opening degree of the window is greater than a preset opening threshold, the opening degree need not be adjusted further; for example, if the preset opening threshold is 20% and the current opening degree is 25%, the window control module does not adjust the window. In addition, the air-conditioning control module may turn on the air conditioner.
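A minimal sketch of the window rule just described, assuming opening degrees are expressed as fractions of fully open (the function name and signature are illustrative):

    # Smoking-mode window rule: open the window to the preset opening degree,
    # but do not adjust it if it already exceeds the preset opening threshold.
    def smoking_mode_window_target(current_opening: float,
                                   preset_opening: float = 0.20,
                                   opening_threshold: float = 0.20) -> float:
        if current_opening > opening_threshold:
            return current_opening  # e.g. a 25% opening stays at 25%
        return preset_opening       # otherwise open to the preset 20%

For example, smoking_mode_window_target(0.25) returns 0.25 and leaves the window untouched, while smoking_mode_window_target(0.10) returns the preset 0.20.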

In another optional implementation, when the current contextual model of the vehicle is the parent-child mode, the central control module may control the vehicle-mounted multimedia to play children's songs, the air-conditioning control module may adjust the air-conditioning temperature to 23 °C, and the internal/external circulation control module with the fragrance function may turn on the fragrance system.

In another optional implementation, when the current contextual model of the vehicle is the joy mode, the central control module may control the vehicle-mounted multimedia to play music with heavy bass, the ambience lamp control module may adjust the color of the ambience lamp to red, the window control module may close the sunroof and the windows, and the driving mode control module may switch the vehicle's driving mode to the sport mode.

S203: determining custom state information of each control module according to the touch instruction.

In this embodiment of the application, the server can determine the custom state information of each control module according to the touch instruction sent by the human-computer interaction system. The touch instruction may include a voice control instruction and/or a state information acquisition instruction; that is, the server can determine the custom state information of each control module from the user's voice and actual operations.
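For illustration, the optional components of the touch instruction could be modeled as follows; this is a sketch under the assumption of a simple message structure, and all names are hypothetical:

    # Hypothetical structure of a touch instruction sent by the human-computer
    # interaction system: the contextual model to be defined, plus an optional
    # voice control instruction and/or a state information acquisition flag.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TouchInstruction:
        mode_to_define: str                  # e.g. "parent-child"
        voice_command: Optional[str] = None  # voice control instruction, if any
        acquire_state: bool = False          # state information acquisition instruction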

In an optional implementation, the touch instruction may include a voice control instruction, and the custom state information of each control module may be determined according to the voice control instruction. For example, to prevent a child from extending a hand out of the window, the user can touch the parent-child mode icon on the human-computer interaction display system to put the vehicle into the parent-child mode and issue a voice command to close the windows. The server acquires the touch instruction sent by the human-computer interaction system, i.e., controls the vehicle into the parent-child mode and closes the windows. It can then determine that the custom state information of the central control module is that the vehicle-mounted multimedia plays children's songs, the custom state information of the air-conditioning control module is that the temperature is adjusted to 23 °C, the custom state information of the internal/external circulation control module with the fragrance function is that the fragrance system is turned on, and the custom state information of the window control module is that the windows are closed.

In another optional implementation, the touch instruction may include a state information acquisition instruction; the current state information of each control module can be acquired and determined as the custom state information of each control module. For example, to prevent a child from extending a hand out of the window, the user can touch the parent-child mode icon on the human-computer interaction display system to put the vehicle into the parent-child mode and close the windows. The server acquires the touch instruction sent by the human-computer interaction system, i.e., controls the vehicle into the parent-child mode, and can then acquire the current state information of the central control module (the vehicle-mounted multimedia is playing children's songs), of the air-conditioning control module (the temperature is adjusted to 23 °C), of the internal/external circulation control module with the fragrance function (the fragrance system is on), and of the window control module (the windows are closed), and determine this current state information as the custom state information corresponding to each control module.

In another optional implementation, the touch instruction may include a voice control instruction and a state information acquisition instruction. Each control module may be adjusted from its historical state to the current state according to the voice control instruction; the current state information of each control module may then be acquired and determined as the custom state information of each control module.

In another optional implementation, the touch instruction may include a voice control instruction and a state information acquisition instruction. The current state information of each control module may be acquired, and the custom state information of each control module may be determined according to both the voice control instruction and the current state information of each control module.
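The four variants of step S203 above could be consolidated into a single dispatch routine, sketched below; parse_voice_command and the module interface (current_state, set_state) are hypothetical helpers, not part of the disclosure:

    # Sketch of S203: derive each control module's custom state information
    # from the voice command, the modules' current states, or both.
    def determine_custom_states(instr, modules, parse_voice_command):
        if instr.voice_command and not instr.acquire_state:
            # Variant 1: custom states come directly from the voice command.
            return parse_voice_command(instr.voice_command)
        if instr.acquire_state and not instr.voice_command:
            # Variant 2: snapshot the current state of every control module.
            return {name: m.current_state() for name, m in modules.items()}
        if instr.voice_command and instr.acquire_state:
            # Variant 3: actuate the voice command first, then snapshot.
            # (Variant 4 would instead merge the snapshot with the parsed
            # command without actuating the modules first.)
            for name, target in parse_voice_command(instr.voice_command).items():
                modules[name].set_state(target)
            return {name: m.current_state() for name, m in modules.items()}
        return {}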

S205: matching and storing the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

In this embodiment of the application, the server can match and store the contextual model to be defined with the custom state information of each control module to obtain the custom contextual model. Following the example above, once it is determined that the custom state information of the central control module is that the vehicle-mounted multimedia plays children's songs, that of the air-conditioning control module is that the temperature is adjusted to 23 °C, that of the internal/external circulation control module with the fragrance function is that the fragrance system is turned on, and that of the window control module is that the windows are closed, the parent-child mode can be matched with the custom state information of these four control modules and stored. Subsequently, whenever the parent-child mode is touched on the human-computer interaction display screen, children's songs are automatically played, the air-conditioning temperature is adjusted to 23 °C, the fragrance system is turned on, and the windows are closed.
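A minimal sketch of step S205, assuming the mode-to-states pairs are persisted as JSON (the file layout and function names are assumptions for illustration, not the disclosed storage scheme):

    import json

    # S205 sketch: store the (mode name -> custom state information) pair...
    def store_custom_mode(mode_name, states, path="custom_modes.json"):
        try:
            with open(path) as f:
                modes = json.load(f)
        except FileNotFoundError:
            modes = {}
        modes[mode_name] = states
        with open(path, "w") as f:
            json.dump(modes, f, ensure_ascii=False, indent=2)

    # ...so that touching the stored mode later re-applies every module state.
    def apply_custom_mode(mode_name, modules, path="custom_modes.json"):
        with open(path) as f:
            modes = json.load(f)
        for name, state in modes[mode_name].items():
            modules[name].set_state(state)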

By adopting the contextual model customization method disclosed in this embodiment of the application, the custom state information of each control module can be determined according to the contextual model to be defined, the voice control instruction, and the state information acquisition instruction sent by the human-computer interaction system, and matched with the contextual model to be defined to obtain the custom contextual model; the vehicle's contextual models can thus be updated in real time, bringing the user a brand-new experience.

Based on the above method, specific embodiments of two contextual model customization flows are described below.

In one specific implementation, FIG. 4 is a flowchart of a method for customizing a contextual model provided in an embodiment of the present application. The user taps one of the contextual model icons displayed on the human-computer interaction display interface; the contextual model corresponding to the tapped icon is the contextual model to be defined, and each control module corresponding to that contextual model is switched to its working state. The interface then displays the text prompt "Does the contextual model need to be adjusted?" together with "Yes" and "No" buttons. When the user taps the "Yes" button or says "yes", the interface displays "You can tell me what to adjust, or perform the adjustment manually, and I will remember it; you can customize the contextual model as needed." If the server then receives "enter customization" spoken by the user, or detects that "enter customization" on the interface is triggered, it performs memory learning on the user's subsequent voice commands, such as "close the windows", "turn on the air conditioner", or "play music", or on manual operations such as closing the windows, turning off the air conditioner, or starting vehicle-mounted multimedia music playback. When "self-learning completed" is received by voice, or "self-learning completed" on the interface is triggered, the customization of the contextual model is determined to be complete.

In another specific implementation, FIG. 5 is a flowchart of a method for customizing a contextual model provided in an embodiment of the present application. If the user taps the new-creation mode icon displayed on the human-computer interaction display interface, the interface displays "You can tell me which functions to add, or perform them manually, and I will remember them; you can customize the contextual model as needed." If "enter customization" is received from the user's voice within 30 s, or "enter customization" on the interface is detected as triggered, memory learning is performed on the user's voice commands, such as "close the windows", "turn on the air conditioner", or "play music", or on manual operations such as closing the windows, turning off the air conditioner, or starting vehicle-mounted multimedia music playback. If "enter customization" is not received by voice within 30 s and is not triggered on the interface, the customization mode is exited. If "self-learning completed" is received by voice within 60 s, or triggered on the interface, the contextual model is determined to be complete; otherwise, the customization mode is exited.
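The dialogue of FIG. 5, with its 30 s and 60 s windows, could be sketched as a simple timeout loop. Here wait_for_event and record_action are hypothetical callbacks: the former blocks for the next voice or touch event (returning None on timeout), and the latter remembers one learned action:

    import time

    # Sketch of the FIG. 5 self-learning flow with its 30 s / 60 s windows.
    def run_self_learning(wait_for_event, record_action):
        # Wait up to 30 s for the user to enter customization.
        if wait_for_event(timeout=30) != "enter_customization":
            return None                      # exit the customization mode
        deadline = time.monotonic() + 60     # 60 s memory-learning window
        learned = []
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return None                  # timed out: exit customization
            event = wait_for_event(timeout=remaining)
            if event == "self_learning_completed":
                return learned               # customization is complete
            if event is not None:
                learned.append(record_action(event))  # voice or manual action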

FIG. 6 is a schematic structural diagram of a device for customizing a contextual model according to an embodiment of the present invention. As shown in FIG. 6, the device includes:

a touch instruction acquisition module 601, configured to acquire a touch instruction, where the touch instruction includes a contextual model to be defined;

a custom state information determining module 603, configured to determine custom state information of each control module according to the touch instruction; and

a custom contextual model determining module 605, configured to match and store the contextual model to be defined with the custom state information of each control module to obtain a custom contextual model.

The device embodiments and the method embodiments of the invention are based on the same inventive concept.

An embodiment of the present invention further provides an electronic device. The electronic device may be disposed in the server; its memory stores at least one instruction, at least one program, a code set, or an instruction set related to the method for customizing a contextual model in the method embodiments, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded from the memory and executed by the processor to implement the above method for customizing a contextual model.

An embodiment of the present invention further provides a storage medium. The storage medium may be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set related to the method for customizing a contextual model in the method embodiments, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the above method for customizing a contextual model.

Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network servers in a computer network. Optionally, the storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a removable hard disk, a magnetic disk, or an optical disc.

As can be seen from the above embodiments of the method and device for customizing a contextual model, the electronic device, and the storage medium provided in this application, the application acquires a touch instruction that includes a contextual model to be defined, determines the custom state information of each control module according to the touch instruction, and then matches and stores the contextual model to be defined with the custom state information of each control module to obtain the custom contextual model. Based on these embodiments, the custom state information of each control module can be determined according to the contextual model to be defined, the voice control instruction, and the state information acquisition instruction sent by the human-computer interaction system, and matched with the contextual model to be defined to obtain the custom contextual model; the vehicle's contextual models can thus be updated in real time, bringing the user a brand-new experience.

In the present invention, unless otherwise expressly stated or limited, the terms "connected" and "coupled" are to be construed broadly: for example, as a fixed connection, a detachable connection, or an integral connection; as a mechanical or an electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.

It should be noted that the foregoing descriptions of the embodiments of the present invention are provided for illustration only and are not intended to limit the invention as defined by the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.

All the embodiments in this specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiment is described briefly because it is similar to the method embodiment; for relevant details, refer to the description of the method embodiment.

While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
