Audio mixing method, apparatus, medium, and electronic device

Document No.: 617688    Publication date: 2021-05-07

Note: This technology, "Audio mixing method, apparatus, medium and electronic device", was designed and created by 李炎 (Li Yan) on 2021-01-27. Its main content is as follows: The present disclosure provides an audio mixing method, an audio mixing apparatus, a medium, and an electronic device. The method includes: performing navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, where the navigation guidance audio includes at least two turn announcement audios; performing trip duration calculation on the estimated arrival time to obtain a predicted trip duration, and performing interval duration calculation on the length of the interval road section between the at least two turn announcement audios to obtain an interval trip duration; performing object extraction processing in an audio object set according to the predicted trip duration to obtain candidate audio, and obtaining the audio playback duration of the candidate audio; and performing audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio. The disclosure ensures the accuracy of the audio mixing processing while balancing the listening experience of the candidate audio and the accuracy of the user's navigation information.

1. An audio mixing method, characterized in that the method comprises:

performing navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, wherein the navigation guidance audio comprises at least two turn announcement audios;

performing trip duration calculation on the estimated arrival time to obtain a predicted trip duration, and performing interval duration calculation on the length of the interval road section between the at least two turn announcement audios to obtain an interval trip duration;

performing object extraction processing in an audio object set according to the predicted trip duration to obtain candidate audio, and obtaining an audio playback duration of the candidate audio;

and performing audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio.

2. The audio mixing method according to claim 1, wherein performing audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain the mixed playback audio comprises:

comparing the interval trip duration with the audio playback duration to obtain a duration comparison result;

and performing audio mixing processing on the candidate audio and the at least two turn announcement audios based on the duration comparison result to obtain the mixed playback audio.

3. The audio mixing method according to claim 2, wherein performing audio mixing processing on the candidate audio and the at least two turn announcement audios to obtain the mixed playback audio comprises:

obtaining an audio identifier of the candidate audio, and obtaining turn start point coordinates corresponding to the at least two turn announcement audios;

and performing identifier adding processing on the audio identifier according to the turn start point coordinates to obtain the mixed playback audio.

4. The audio mixing method according to claim 1, wherein the navigation guidance audio further comprises at least two road condition prompt audios, and

the method further comprises:

performing audio merging processing on the at least two road condition prompt audios to obtain merged road condition audio;

and performing audio adjustment processing on the merged road condition audio and the mixed playback audio to obtain road condition navigation audio.

5. The audio mixing method according to claim 1, wherein performing navigation trip processing on the trip start point and the trip end point to obtain the estimated arrival time and the navigation guidance audio comprises:

performing coordinate snapping processing on the trip start point and the trip end point respectively to obtain trip start point coordinates and trip end point coordinates;

performing route planning processing on the trip start point coordinates and the trip end point coordinates to obtain a target navigation route and an estimated arrival time of the target navigation route;

and performing guidance voice processing on the target navigation route to obtain the navigation guidance audio.

6. The audio mixing method according to claim 5, wherein performing route planning processing on the trip start point coordinates and the trip end point coordinates to obtain the target navigation route and the estimated arrival time of the target navigation route comprises:

performing route planning processing on the trip start point coordinates and the trip end point coordinates to obtain at least two trip navigation routes;

performing time prediction processing on the at least two trip navigation routes respectively to obtain at least two estimated arrival times of the at least two trip navigation routes;

and performing route sorting processing on the at least two trip navigation routes to obtain the target navigation route, and determining the estimated arrival time of the target navigation route among the at least two estimated arrival times.

7. The audio mixing method according to claim 1, wherein performing interval duration calculation on the length of the interval road section between the at least two turn announcement audios to obtain the interval trip duration comprises:

obtaining an interval route length between the at least two turn announcement audios, and obtaining a current driving speed;

and performing interval duration calculation on the interval route length and the current driving speed to obtain the interval trip duration.

8. An audio mixing apparatus, characterized in that the apparatus comprises:

a trip processing module configured to perform navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, wherein the navigation guidance audio comprises at least two turn announcement audios;

a duration calculation module configured to perform trip duration calculation on the estimated arrival time to obtain a predicted trip duration, and to perform interval duration calculation on the interval road section between the at least two turn announcement audios to obtain an interval trip duration;

an audio extraction module configured to perform object extraction processing in an audio object set according to the predicted trip duration to obtain candidate audio, and to obtain an audio playback duration of the candidate audio;

and an audio mixing module configured to perform audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio.

9. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the audio mixing method according to any one of claims 1 to 7.

10. An electronic device, comprising:

a processor; and

a memory for storing executable instructions of the processor;

wherein the processor is configured to perform the audio mixing method of any one of claims 1 to 7 via execution of the executable instructions.

Technical Field

The present disclosure relates to the field of audio processing technologies, and in particular, to an audio mixing method, an audio mixing apparatus, a computer-readable medium, and an electronic device.

Background

Navigating while listening to other audio, such as music or audiobooks, is a high-frequency usage scenario for users while driving. Playing the navigation audio and the other audio simultaneously can, in principle, satisfy both the need to hear navigation guidance announcements and the need to hear the other audio.

However, with this approach the navigation guidance announcements interrupt frequently, which degrades the listening quality of the other audio, and the navigation guidance announcements are in turn interfered with by the other audio, which can cause the user to deviate from the route and take a detour.

In view of the above, there is a need in the art to develop a new audio mixing method and apparatus.

It should be noted that the information disclosed in the above background section is only for enhancement of understanding of the technical background of the present application, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.

Disclosure of Invention

The present disclosure provides an audio mixing method, an audio mixing apparatus, a computer-readable medium, and an electronic device, so as to overcome, at least to a certain extent, the technical problems of interference with navigation guidance announcements and degradation of the playback quality of other audio.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.

According to an aspect of an embodiment of the present disclosure, there is provided an audio mixing method, including: performing navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, wherein the navigation guidance audio comprises at least two turn announcement audios;

performing trip duration calculation on the estimated arrival time to obtain a predicted trip duration, and performing interval duration calculation on the length of the interval road section between the at least two turn announcement audios to obtain an interval trip duration;

performing object extraction processing in an audio object set according to the predicted trip duration to obtain candidate audio, and obtaining an audio playback duration of the candidate audio;

and performing audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio.

According to an aspect of an embodiment of the present disclosure, there is provided an audio mixing apparatus, including: a trip processing module configured to perform navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, wherein the navigation guidance audio comprises at least two turn announcement audios;

a duration calculation module configured to perform trip duration calculation on the estimated arrival time to obtain a predicted trip duration, and to perform interval duration calculation on the interval road section between the at least two turn announcement audios to obtain an interval trip duration;

an audio extraction module configured to perform object extraction processing in an audio object set according to the predicted trip duration to obtain candidate audio, and to obtain an audio playback duration of the candidate audio;

and an audio mixing module configured to perform audio mixing processing on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio.

In some embodiments of the present disclosure, based on the above technical solutions, the audio mixing module includes: a duration comparison submodule configured to compare the interval trip duration with the audio playback duration to obtain a duration comparison result;

and a comparison result submodule configured to perform audio mixing processing on the candidate audio and the at least two turn announcement audios based on the duration comparison result to obtain the mixed playback audio.

In some embodiments of the disclosure, based on the above technical solutions, the comparison result submodule includes: an identifier acquisition unit configured to obtain an audio identifier of the candidate audio and obtain turn start point coordinates corresponding to the at least two turn announcement audios;

and an identifier adding unit configured to perform identifier adding processing on the audio identifier according to the turn start point coordinates to obtain the mixed playback audio.

In some embodiments of the present disclosure, based on the above technical solutions, the audio mixing apparatus further includes: an audio merging module configured to perform audio merging processing on at least two road condition prompt audios to obtain merged road condition audio;

and an audio adjustment module configured to perform audio adjustment processing on the merged road condition audio and the mixed playback audio to obtain road condition navigation audio.

In some embodiments of the present disclosure, based on the above technical solutions, the trip processing module includes: a snapping processing submodule configured to perform coordinate snapping processing on the trip start point and the trip end point respectively to obtain trip start point coordinates and trip end point coordinates;

a route planning submodule configured to perform route planning processing on the trip start point coordinates and the trip end point coordinates to obtain a target navigation route and an estimated arrival time of the target navigation route;

and a guidance voice submodule configured to perform guidance voice processing on the target navigation route to obtain the navigation guidance audio.

In some embodiments of the present disclosure, based on the above technical solutions, the route planning submodule includes: a trip navigation unit configured to perform route planning processing on the trip start point coordinates and the trip end point coordinates to obtain at least two trip navigation routes;

a time prediction unit configured to perform time prediction processing on the at least two trip navigation routes respectively to obtain at least two estimated arrival times of the at least two trip navigation routes;

and a route sorting unit configured to perform route sorting processing on the at least two trip navigation routes to obtain the target navigation route, and to determine the estimated arrival time of the target navigation route among the at least two estimated arrival times.

In some embodiments of the present disclosure, based on the above technical solutions, the duration calculation module includes: a speed acquisition submodule configured to obtain an interval route length between the at least two turn announcement audios and obtain a current driving speed;

and an interval duration submodule configured to perform interval duration calculation on the interval route length and the current driving speed to obtain the interval trip duration.

According to an aspect of the embodiments of the present disclosure, there is provided a computer-readable medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the audio mixing method described in the above technical solutions.

According to an aspect of an embodiment of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the audio mixing method described in the above technical solutions via execution of the executable instructions.

In the technical solutions provided by the embodiments of the present disclosure, on one hand, the interval trip duration and the audio playback duration are used as the judgment conditions for audio mixing processing, which provides a pre-judgment logic for the audio mixing processing and ensures its accuracy; on the other hand, the candidate audio is mixed with the turn announcement audio, which improves the listening experience of the candidate audio, avoids interference with key guidance announcements such as the turn announcement audio, and balances the listening experience of the candidate audio with the accuracy of the user's navigation information.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:

fig. 1 schematically illustrates an architecture diagram of an exemplary system to which the disclosed solution applies;

fig. 2 schematically illustrates a flow chart of steps of a method of audio mixing in some embodiments of the present disclosure;

FIG. 3 schematically illustrates a flow chart of steps of a method of navigation trip processing in some embodiments of the present disclosure;

FIG. 4 schematically illustrates a flow chart of steps of a method of route planning processing in some embodiments of the present disclosure;

FIG. 5 schematically illustrates a flow chart of steps of a method of interval duration calculation in some embodiments of the present disclosure;

FIG. 6 schematically illustrates a flow chart of steps of a method of audio mixing processing in some embodiments of the present disclosure;

fig. 7 schematically illustrates a flow chart of steps of a method of further audio mixing processing in some embodiments of the present disclosure;

FIG. 8 schematically illustrates a flow chart of steps of a method of audio adjustment processing in some embodiments of the present disclosure;

fig. 9 schematically illustrates a system architecture diagram of an audio mixing processing method in an application scenario in some embodiments of the present disclosure;

FIG. 10 schematically illustrates an interface diagram of an authorization interface in an application scenario in some embodiments of the present disclosure;

fig. 11 schematically illustrates an interface diagram for playing mixed-play audio in an application scenario in some embodiments of the present disclosure;

fig. 12 schematically illustrates another interface diagram for playing mixed-play audio in an application scenario in some embodiments of the present disclosure;

fig. 13 schematically illustrates a block diagram of an audio mixing apparatus in some embodiments of the present disclosure;

FIG. 14 schematically illustrates a structural schematic diagram of a computer system suitable for use with an electronic device embodying embodiments of the present disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.

The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

In the related art, navigating while listening to other audio is a high-frequency usage scenario for users while driving; for example, a user may listen to music or to an audiobook during navigation. What navigation and the other audio have in common is that both occupy the user's auditory channel. Navigation is a tool-like activity, whereas listening to songs or audiobooks is entertainment, and together they form an important, high-engagement scenario while the user is driving. However, their competing occupation of the audio channel creates a conflict.

At present, the navigation audio and the other audio are generally mixed by the operating system, so that the navigation guidance announcements and other audio such as songs occupy the audio channel simultaneously, thereby meeting the combined need for navigation guidance announcements and for other audio such as songs.

However, with this mixing scheme the navigation guidance announcements interrupt frequently, which degrades the playback quality and the listening experience of other audio such as music; on the other hand, important navigation guidance announcements may be missed by the user because of interference from other audio such as music, causing the user to deviate from the route and take a detour, which harms the user's navigation experience.

Based on the problems in the above schemes, the present disclosure provides an audio mixing method, an audio mixing apparatus, a computer-readable medium, and an electronic device based on cloud technology.

Cloud technology refers to a hosting technology that unifies resources such as hardware, software, and networks within a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.

Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, image websites, and many web portals, require a large amount of computing and storage resources. With the development of the internet industry, each object may have its own identifier that needs to be transmitted to a background system for logical processing, data at different levels is processed separately, and industrial data of all kinds requires strong backend system support, which can only be realized through cloud computing.

Cloud computing refers to a delivery and usage model of IT infrastructure in which the required resources are obtained over the network in an on-demand, easily scalable manner; in a broader sense, cloud computing refers to a delivery and usage model of services in which the required services are obtained over the network in an on-demand, easily scalable manner. Such services may be IT and software services, internet-related services, or other services. Cloud computing is a product of the development and convergence of traditional computing and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing.

With the diversification of the internet, real-time data streams, and connected devices, and driven by demands such as search services, social networks, mobile commerce, and open collaboration, cloud computing has developed rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing conceptually drives revolutionary changes in the internet model and in enterprise management models as a whole.

The cloud-technology-based audio mixing method provides a pre-judgment logic for audio mixing processing, ensures the accuracy of the audio mixing processing, improves the playback experience of the candidate audio, avoids interference with key guidance announcements such as the turn announcement audio, and balances the listening experience of the candidate audio with the accuracy of the user's navigation information.

Fig. 1 shows an exemplary system architecture diagram to which the disclosed solution is applied.

As shown in fig. 1, the system architecture 100 may include a terminal 110, a network 120, and a server side 130. Wherein the terminal 110 and the server 130 are connected through the network 120.

The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. Network 120 may be any type of communications medium capable of providing a communications link between terminal 110 and server 130, such as a wired communications link, a wireless communications link, or a fiber optic cable, and the like, without limitation. The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like.

Specifically, the server 130 may perform navigation trip processing on a trip start point and a trip end point to obtain an estimated arrival time and navigation guidance audio, where the navigation guidance audio includes at least two turn announcement audios. Then, trip duration calculation is performed on the estimated arrival time to obtain a predicted trip duration, and interval duration calculation is performed on the interval road section between the at least two turn announcement audios to obtain an interval trip duration. Further, object extraction processing is performed in the audio object set according to the predicted trip duration to obtain candidate audio, and the audio playback duration of the candidate audio is obtained. Finally, audio mixing processing is performed on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain mixed playback audio.

In addition, the audio mixing method in the embodiment of the present disclosure may be applied to a terminal, and may also be applied to a server, which is not particularly limited in this disclosure.

The embodiment of the present disclosure is mainly illustrated by applying the audio mixing method to the server 130.

The following describes an audio mixing method, an audio mixing apparatus, a computer-readable medium, and an electronic device provided by the present disclosure in detail with reference to specific embodiments.

Fig. 2 schematically illustrates a flowchart of steps of an audio mixing method in some embodiments of the present disclosure, and as shown in fig. 2, the audio mixing method may mainly include the following steps:

s210, performing navigation stroke processing on a stroke starting point and a stroke ending point to obtain predicted arrival time and navigation induction audio; wherein, the navigation guidance audio comprises at least two steering broadcast audio.

S220, calculating the travel time length of the estimated arrival time to obtain the predicted travel time length, and calculating the interval time length of the interval section between at least two turn-to-broadcast audios to obtain the interval travel time length.

And S230, performing object extraction processing in the audio object set according to the predicted travel time length to obtain candidate audio, and acquiring the audio playing time length of the candidate audio.

And S240, carrying out audio mixing processing on the candidate audio and the at least two steering broadcast audios according to the interval travel time length and the audio playing time length to obtain mixed playing audio.

In the exemplary embodiment of the present disclosure, on one hand, the interval travel time length and the audio playing time length are used as the judgment conditions for audio mixing processing, a pre-judgment logic for audio mixing processing is provided, and the accuracy of audio mixing processing is ensured; on the other hand, audio mixing processing is carried out on the candidate audio and the steering broadcast audio, the playing experience of the candidate audio is improved, the condition that the steering broadcast audio is interfered in a key induction broadcast mode is avoided, and meanwhile the broadcasting experience of the candidate audio and the accuracy of user navigation information are considered.

The respective steps of the audio mixing method will be described in detail below.

In step S210, navigation trip processing is performed on the trip start point and the trip end point to obtain an estimated arrival time and navigation guidance audio, where the navigation guidance audio includes at least two turn announcement audios.

In an exemplary embodiment of the present disclosure, the trip start point and the trip end point are the geographical positions of the place where the user starts the trip and of the destination, respectively.

The trip start point may be obtained by positioning the terminal: the terminal may be provided with a positioning module, and the current position located by the positioning module is used as the trip start point. Alternatively, the trip start point may be entered by the user on the route planning interface, which includes a start point input box in which the user can type the trip start point. The route planning interface also includes an electronic map, and the user can select a position in the electronic map as the trip start point.

The trip end point may be input by the user. The route planning interface also includes an end point input box into which the user can enter the trip end point, and because the interface includes an electronic map, the user can also select another position in the electronic map as the trip end point. In addition, the user may speak the destination, and the trip end point is then determined from the voice input.

After determining the trip start point and the trip end point, the trip start point and the trip end point may be subjected to navigation trip processing to obtain an estimated arrival time and navigation guidance audio.

In an alternative embodiment, fig. 3 shows a flow chart of the steps of a navigation trip processing method. As shown in fig. 3, the method includes at least the following steps: in step S310, coordinate snapping processing is performed on the trip start point and the trip end point respectively to obtain trip start point coordinates and trip end point coordinates.

During navigation trip processing, the target navigation route can be determined using the road network topology. The road network topology can be described as a node-and-link structure: nodes represent intersections in geographic space and each node corresponds to a unique node identifier, while links represent roads in geographic space and each link likewise corresponds to a unique link identifier.

The coordinate snapping processing of the trip start point and the trip end point may consist of mapping the two points onto the road network topology, so that the two corresponding links are determined from their link identifiers. The coordinate positions of these two links in the road network topology are then used as the trip start point coordinates and the trip end point coordinates, respectively. In other words, coordinate snapping maps the trip start point and the trip end point to the actual trip start point coordinates and trip end point coordinates on the road network where the user is located. The trip start point and trip end point coordinates may be expressed as latitude and longitude.
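As a non-limiting illustration of the coordinate snapping described above, the following Python sketch snaps a longitude-latitude point onto the nearest link of a simplified road network. It assumes each link is a polyline of shape points and uses a planar distance approximation; the function names and data layout are editorial assumptions, not part of the present disclosure.

from math import hypot
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (longitude, latitude)

def _project_onto_segment(p: Point, a: Point, b: Point) -> Tuple[Point, float]:
    """Project p onto segment a-b; return the projected point and its distance to p."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return a, hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    proj = (ax + t * dx, ay + t * dy)
    return proj, hypot(px - proj[0], py - proj[1])

def snap_to_road_network(point: Point, links: Dict[str, List[Point]]) -> Tuple[str, Point]:
    """Return the identifier of the nearest link and the snapped coordinate on it.

    `links` maps a link identifier to its shape-point polyline (an assumed layout)."""
    best_link, best_coord, best_dist = None, None, float("inf")
    for link_id, shape_points in links.items():
        for a, b in zip(shape_points, shape_points[1:]):
            coord, dist = _project_onto_segment(point, a, b)
            if dist < best_dist:
                best_link, best_coord, best_dist = link_id, coord, dist
    return best_link, best_coord

The snapped coordinates returned by such a helper would then serve as the trip start point coordinates and trip end point coordinates used in the route planning step below.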

In step S320, route planning processing is performed on the trip start point coordinates and the trip end point coordinates to obtain the target navigation route and the estimated arrival time of the target navigation route.

In an alternative embodiment, fig. 4 shows a flow chart of the steps of a route planning processing method. As shown in fig. 4, the method includes at least the following steps: in step S410, route planning processing is performed on the trip start point coordinates and the trip end point coordinates to obtain at least two trip navigation routes.

Since the road network topology records one or more routes between nodes and links, several trip navigation routes between the trip start point coordinates and the trip end point coordinates can be queried from the road network topology.

A trip navigation route may be represented as a set of node identifiers and/or link identifiers from the trip start point coordinates to the trip end point coordinates; each trip navigation route can also be described using only the link identifiers between the trip start point coordinates and the trip end point coordinates. In addition, the geometry of each link may be represented as a sequence of shape points, where each shape point is a location in geographic space expressed as a longitude-latitude coordinate. When a link contains a large number of shape points, the sequence of shape points may be thinned so that the shape points of the link become sparser.
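The patent does not specify how the shape-point sequence is thinned; the following is a minimal sketch assuming a standard Douglas-Peucker style criterion, where intermediate points closer to the chord than a tolerance are dropped. The tolerance value and function names are illustrative.

from math import hypot

def _point_segment_distance(p, a, b):
    """Perpendicular distance from point p to segment a-b (planar approximation)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def thin_shape_points(points, tolerance):
    """Thin a link's shape-point sequence while keeping its overall geometry."""
    if len(points) <= 2:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    farthest_index, farthest_distance = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_segment_distance(points[i], points[0], points[-1])
        if d > farthest_distance:
            farthest_index, farthest_distance = i, d
    if farthest_distance <= tolerance:
        return [points[0], points[-1]]          # all intermediate points are dropped
    left = thin_shape_points(points[:farthest_index + 1], tolerance)
    right = thin_shape_points(points[farthest_index:], tolerance)
    return left[:-1] + right                     # avoid duplicating the split point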

In step S420, time prediction processing is performed on the at least two trip navigation routes respectively to obtain at least two estimated arrival times of the at least two trip navigation routes.

Time prediction processing for the at least two trip navigation routes may be implemented with an ETA (Estimated Time of Arrival) model built as a regression model. The ETA model may take into account features including the physical attributes of the route, such as its length, width, and road grade, as well as speeds mined from historical data and the real-time speed on the route. Other machine learning algorithms may also be used, and this exemplary embodiment is not particularly limited in this respect. After time prediction processing is performed on each of the at least two trip navigation routes, the estimated arrival time corresponding to each trip navigation route is obtained.
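A minimal sketch of the kind of regression-based ETA model described above, using scikit-learn's GradientBoostingRegressor on per-route features; the feature set, training data, and names are illustrative assumptions, not from the source.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative per-route features: total length (km), mean road width (m),
# road grade (encoded), speed mined from history (km/h), real-time speed (km/h).
X_train = np.array([
    [12.3, 7.5, 2, 38.0, 31.0],
    [ 8.9, 6.0, 3, 27.0, 22.5],
    [21.4, 9.0, 1, 55.0, 49.0],
])
y_train = np.array([23.0, 24.5, 26.0])  # observed travel times in minutes (toy values)

eta_model = GradientBoostingRegressor().fit(X_train, y_train)

def estimate_travel_minutes(route_features):
    """Predict the travel time in minutes for one candidate route's feature vector."""
    return float(eta_model.predict(np.asarray(route_features).reshape(1, -1))[0])

In practice the model would be trained on a large corpus of historical trips; the estimated arrival time is then the current time plus the predicted travel time.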

In step S430, route sorting processing is performed on the at least two trip navigation routes to obtain the target navigation route, and the estimated arrival time of the target navigation route is determined among the at least two estimated arrival times.

To select the target navigation route best suited to the user from the at least two trip navigation routes, route sorting processing may be performed on the trip navigation routes. Specifically, the routes may be sorted by route length, by degree of congestion, by road grade, or by other sorting criteria, which is not particularly limited in this exemplary embodiment.

After the target navigation route is obtained, the estimated arrival time corresponding to the target navigation route can be selected from the at least two estimated arrival times.
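As one possible reading of the route sorting step, the sketch below scores candidate routes by a weighted combination of length and congestion and picks the best one together with its estimated arrival time; the weights, field names, and congestion scale are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RouteCandidate:
    route_id: str
    length_km: float
    congestion: float       # assumed scale: 0.0 (clear) .. 1.0 (jammed)
    eta_minutes: float      # travel time predicted by the ETA model

def pick_target_route(candidates, length_weight=1.0, congestion_weight=30.0):
    """Sort candidate routes by a weighted score and return the best route and its ETA."""
    def score(route):
        return length_weight * route.length_km + congestion_weight * route.congestion
    target = min(candidates, key=score)
    return target, target.eta_minutes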

In this exemplary embodiment, route planning processing is performed on the trip start point coordinates and the trip end point coordinates to obtain the target navigation route and the corresponding estimated arrival time, providing a data basis for the subsequent audio mixing process.

In step S330, guidance voice processing is performed on the target navigation route to obtain the navigation guidance audio.

After the target navigation route is obtained, the navigation guidance audio corresponding to the target navigation route may be generated.

Guidance refers to information that tells the user how to travel along the planned route; in navigation software based on GPS (Global Positioning System), it is generated programmatically, in spoken and graphical form, from the GPS data and the electronic map data.

It should be noted that the navigation guidance audio includes at least two turn announcement audios and at least two road condition prompt audios. The turn announcement audio is prompt audio that reminds the user of an upcoming turn, while the road condition prompt audio includes traffic condition announcements and audio for other advisory information.

In this exemplary embodiment, navigation trip processing is performed on the trip start point and the trip end point to obtain the estimated arrival time and the navigation guidance audio, so that the route is accurately determined for the user.

In step S220, trip duration calculation is performed on the estimated arrival time to obtain the predicted trip duration, and interval duration calculation is performed on the interval road section length between the at least two turn announcement audios to obtain the interval trip duration.

In an exemplary embodiment of the present disclosure, to perform trip duration calculation on the estimated arrival time, the current time may be acquired, and the predicted trip duration is then obtained by subtracting the current time from the estimated arrival time.

Further, interval duration calculation is performed on the length of the interval road section between the at least two turn announcement audios to obtain the interval trip duration.

In an alternative embodiment, fig. 5 shows a flow chart of the steps of the interval duration calculation method. As shown in fig. 5, the method includes at least the following steps: in step S510, the interval route length between the at least two turn announcement audios is acquired, and the current driving speed is acquired.

The current driving speed may be obtained from the vehicle, or may be estimated from the user's travel habits; this exemplary embodiment is not particularly limited in this respect.

In step S520, interval duration calculation is performed on the interval route length and the current driving speed to obtain the interval trip duration.

Specifically, the interval duration calculation may be a division of the interval route length by the current driving speed to obtain the interval trip duration.
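A minimal sketch of the two duration calculations described above, assuming the estimated arrival time is a datetime, lengths are in meters, and speed is in meters per second; the function names are illustrative.

from datetime import datetime
from typing import Optional

def predicted_trip_duration_s(estimated_arrival: datetime, now: Optional[datetime] = None) -> float:
    """Predicted trip duration in seconds: estimated arrival time minus the current time."""
    now = now or datetime.now()
    return max(0.0, (estimated_arrival - now).total_seconds())

def interval_trip_duration_s(interval_length_m: float, current_speed_mps: float) -> float:
    """Interval trip duration in seconds: interval road-section length divided by current speed."""
    if current_speed_mps <= 0:
        raise ValueError("current driving speed must be positive")
    return interval_length_m / current_speed_mps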

In this exemplary embodiment, the interval trip duration between the at least two turn announcement audios can be determined through interval duration calculation, providing the duration parameters needed for accurate audio mixing and guaranteeing the mixing effect.

In step S230, object extraction processing is performed in the audio object set according to the predicted trip duration to obtain candidate audio, and the audio playback duration of the candidate audio is obtained.

In an exemplary embodiment of the present disclosure, the audio object set may be a set of audio to be played, generated from the user's audio playlist and related audio recommendations. The audio to be played may be songs, audiobooks, or other audio, which is not particularly limited in this exemplary embodiment.

After the audio object set is obtained, object extraction processing may be performed in the audio object set according to the predicted trip duration; that is, candidate audios are selected from the audio object set such that their accumulated playback duration does not exceed the predicted trip duration.

Further, the audio playback duration of the candidate audio is obtained; the audio playback duration characterizes the playback duration of each candidate audio.
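A minimal sketch of the object extraction step: candidate audios are taken from the audio object set in order until their accumulated playback duration would exceed the predicted trip duration. The dataclass and function names are illustrative.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class AudioObject:
    audio_id: str
    duration_s: float   # playback duration of this track in seconds

def extract_candidates(audio_objects: Iterable[AudioObject],
                       predicted_trip_duration_s: float) -> List[AudioObject]:
    """Select candidate audios whose accumulated playback duration stays within the
    predicted trip duration, preserving the playlist order."""
    candidates, accumulated = [], 0.0
    for audio in audio_objects:
        if accumulated + audio.duration_s > predicted_trip_duration_s:
            break
        candidates.append(audio)
        accumulated += audio.duration_s
    return candidates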

In step S240, audio mixing processing is performed on the candidate audio and the at least two turn announcement audios according to the interval trip duration and the audio playback duration to obtain the mixed playback audio.

In an exemplary embodiment of the present disclosure, after the interval trip duration and the audio playback duration are obtained, audio mixing processing may be further performed on the candidate audio and the at least two turn announcement audios.

In an alternative embodiment, fig. 6 shows a flowchart of the steps of an audio mixing processing method. As shown in fig. 6, the method includes at least the following steps: in step S610, the interval trip duration and the audio playback duration are compared to obtain a duration comparison result.

Specifically, the interval trip duration may be compared with the audio playback duration to determine whether the audio playback duration is less than or equal to the interval trip duration.

In step S620, based on the duration comparison result, audio mixing processing is performed on the candidate audio and the at least two turn announcement audios to obtain the mixed playback audio.

When the audio playback duration is less than or equal to the interval trip duration, the candidate audio corresponding to that playback duration can be mixed with the at least two turn announcement audios.

In an alternative embodiment, fig. 7 shows a flow chart of the steps of a further audio mixing processing method. As shown in fig. 7, the method includes at least the following steps: in step S710, the audio identifier of the candidate audio is acquired, and the turn start point coordinates corresponding to the at least two turn announcement audios are acquired.

The audio identifier may be identification information that uniquely identifies the candidate audio, such as the ID of the candidate audio.

The turn start point coordinates corresponding to the at least two turn announcement audios may be determined by taking the earlier of the two turn announcement audios as the turn start point, and using that turn announcement audio's coordinate on the target navigation route as the turn start point coordinates.

In step S720, identifier adding processing is performed on the audio identifier according to the turn start point coordinates to obtain the mixed playback audio.

After the turn start point coordinates and the audio identifier are determined, the audio identifier can be inserted at the turn start point coordinates, so that after the turn announcement audio is played, the candidate audio referenced by the audio identifier is played.
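A minimal sketch of the identifier adding step: a candidate audio identifier is anchored right after a turn announcement only when its playback duration fits within that announcement's interval trip duration. Candidates are passed as (audio_id, playback_duration_s) pairs; all names and the playlist representation are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TurnAnnouncement:
    text: str
    start_coord: Tuple[float, float]   # (longitude, latitude) of the turn start point
    interval_duration_s: float         # interval trip duration until the next turn announcement

def build_mixed_playback(turns: List[TurnAnnouncement],
                         candidates: List[Tuple[str, float]]) -> List[Tuple[str, str]]:
    """Build the mixed playback sequence as a list of (kind, payload) entries."""
    playlist: List[Tuple[str, str]] = []
    pending = list(candidates)
    for turn in turns:
        playlist.append(("turn_announcement", turn.text))
        # Duration comparison: insert the next candidate only if it fits in this interval.
        if pending and pending[0][1] <= turn.interval_duration_s:
            audio_id, _ = pending.pop(0)
            playlist.append(("candidate_audio", audio_id))  # identifier anchored at the turn start point
    return playlist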

In this exemplary embodiment, audio mixing processing is performed on the candidate audio and the at least two turn announcement audios to obtain the mixed playback audio, balancing the listening experience of the candidate audio with the accuracy of the user's navigation information.

This audio mixing process completes the mixing effect of inserting candidate audio between the key guidance announcements. However, in an actual application scenario the navigation guidance audio includes not only the turn announcement audio but also traffic condition announcements and audio for other advisory information, so further audio processing can be applied to these audios to obtain output that better matches the actual situation.

In an alternative embodiment, the navigation guidance audio further includes at least two road condition prompt audios. Fig. 8 is a flowchart illustrating the steps of the audio adjustment processing method. As shown in fig. 8, the method includes at least the following steps: in step S810, audio merging processing is performed on the at least two road condition prompt audios to obtain merged road condition audio.

The road condition prompt audio includes traffic condition announcements and audio for other advisory information. A traffic condition announcement might be, for example, "traffic jam ahead", while other advisory audio might be "beware of falling rocks ahead" or "camera 100 meters ahead".

To avoid playing road condition prompt audio while the candidate audio is playing, the road condition prompt audio can be moved to the playback position of the turn announcement audio.

Specifically, at least two road condition prompt audios within a preset range are acquired; the preset range may be 50 meters or some other distance. The at least two road condition prompt audios may be, for example, "camera 10 meters ahead" and "camera 50 meters ahead". These audios are then merged to obtain the merged road condition audio, for example "2 cameras within the next 50 meters".

The preset range may be the interval length between the at least two turn announcement audios, or may be set or calculated according to actual requirements, which is not particularly limited in this exemplary embodiment.

In step S820, audio adjustment processing is performed on the merged road condition audio and the mixed playback audio to obtain the road condition navigation audio.

After the merged road condition audio and the mixed playback audio are obtained, the playback position of the merged road condition audio can be adjusted to follow the audio identifier of the candidate audio in the mixed playback audio, so that the candidate audio is played after the turn announcement audio finishes, and the merged road condition audio is played after the candidate audio finishes.
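A minimal sketch of the merging and adjustment steps, assuming the same (kind, payload) playlist representation as the earlier sketch; the merging rule (counting "camera" prompts within the preset range) and all names are illustrative assumptions.

from typing import List, Tuple

def merge_road_condition_prompts(prompts: List[str], preset_range_m: int = 50) -> str:
    """Merge several road condition prompts within one preset range into a single announcement."""
    cameras = [p for p in prompts if "camera" in p.lower()]
    others = [p for p in prompts if "camera" not in p.lower()]
    parts = []
    if cameras:
        parts.append(f"{len(cameras)} cameras within the next {preset_range_m} meters")
    parts.extend(others)
    return "; ".join(parts)

def insert_after_candidate(playlist: List[Tuple[str, str]], merged_prompt: str) -> List[Tuple[str, str]]:
    """Audio adjustment: place the merged road condition audio right after each
    candidate audio identifier in the mixed playback sequence."""
    adjusted: List[Tuple[str, str]] = []
    for kind, payload in playlist:
        adjusted.append((kind, payload))
        if kind == "candidate_audio":
            adjusted.append(("road_condition_audio", merged_prompt))
    return adjusted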

In this exemplary embodiment, audio merging processing and audio adjustment processing are performed on the at least two road condition prompt audios, so that the road condition prompts can be announced together at a single time, ensuring the effect of the navigation guidance announcements.

The following describes an audio mixing method provided in the embodiment of the present disclosure in detail with reference to a specific application scenario.

Fig. 9 is a schematic diagram of the system architecture of the audio mixing method in an application scenario. As shown in fig. 9, the system architecture includes a client, a music development application programming interface, a guidance engine, a navigation access layer, and a guidance cloud broadcasting service.

In step S910, a mobile phone map is opened.

Specifically, the user opens the client of the map application to enter the navigation mode.

In step S920, music authorization is requested.

The user clicks an authorization button on the user interface to request the music development Application Programming Interface (API) and obtain user authentication. The music development application may be any type of music player, including but not limited to factory-installed or after-market music playing software on a mobile phone or in a vehicle.

Fig. 10 is an interface diagram of an authorization interface in an application scenario. As shown in fig. 10, an authorization interface of the music playing application is displayed on the client interface of the map application, and the user can click the "authorize" control to request authorization from the music development application programming interface.

In step S930, an authentication handle is returned.

When the user requests the music development application programming interface with the client identifier or the terminal identifier and the authentication passes, the interface returns the authenticated identification information as authorization information, which is later used to obtain data such as the user's favorites list and related recommended content.

In step S940, navigation is started.

The user clicks the start navigation button in the map application to enter navigation. The start point of the navigation route may be obtained by positioning the terminal: a positioning module in the terminal locates the current position, which is used as the trip start point. Alternatively, the trip start point may be entered by the user on the route planning interface, which includes a start point input box in which the user can type the trip start point. The route planning interface also includes an electronic map, and the user can select a position in the electronic map as the trip start point.

The trip end point for navigation may be input by the user. The route planning interface also includes an end point input box into which the user can enter the trip end point, and because the interface includes an electronic map, the user can also select another position in the electronic map as the trip end point. In addition, the user may speak the destination, and the trip end point is then determined from the voice input.

In step S950, the guidance engine is initialized.

When the client initializes the guidance engine, initialization steps such as allocating memory for the map application can be performed.

In step S960, a navigation guidance package is requested.

The client initiates a navigation request to the server of the map application, carrying the authorization information in the navigation request. The server of the map application then calls, through the navigation access layer, a coordinate snapping service, a route planning service, an ETA service, a route sorting service, and a guidance voice service in sequence, to obtain the estimated arrival time and the navigation guidance audio.

Specifically, in the navigation route processing process, the target navigation route may be determined by using the road network topology. The road network topology can be described in a dotted line structure of nodes and connections. The nodes are used for representing intersections in the geographic space, and the nodes correspond to unique node identifiers. The connection is used for representing a road in the geographic space, and the connection is also corresponding to a unique connection identifier.

The coordinate adsorption processing on the stroke starting point and the stroke ending point may be mapping the stroke starting point and the stroke ending point into a road network topological structure, so as to determine two corresponding connections according to the corresponding connection identifiers. Furthermore, the coordinate positions of the two links connected in the road network topological structure are respectively used as a travel starting point coordinate and a travel end point coordinate. That is, the coordinate adsorption process may character the trip start point and the trip end point to the real trip start point coordinates and the trip end point coordinates where the user is located. The travel start point coordinates and the travel end point coordinates may be expressed in the form of latitude and longitude.

Since one or more routes between the nodes and the connections are recorded in the road network topology structure, a plurality of travel navigation routes between the travel start point coordinates and the travel end point coordinates can be inquired in the road network topology structure.

The travel navigation route may be represented as a set including node identifiers and/or connection identifiers from the travel start coordinate to the travel end coordinate. Each route navigation route can be described by only using the connection identifier between the coordinates of the starting point of the route and the coordinates of the ending point of the route. In addition, the morphological information of each connection may be represented in a sequence of shape points representing a location in geographic space represented by a latitude-once coordinate. In addition, when the number of shape points included in a connection is large, the sequence of shape points included in a connection may be thinned, so that the shape points included in the connection are sparse.

When the time prediction processing is performed on at least two travel navigation routes, the time prediction processing can be implemented by using an ETA model obtained by modeling of a regression model, and the ETA model can consider the physical attributes of the routes, such as the length, the width, the road grade and the like of the routes, and the characteristics of historical mining speed of the routes, real-time speed of the routes and the like. In addition, other machine learning algorithms may be used, and the exemplary embodiment is not particularly limited in this respect. After the time prediction processing is performed on at least two travel navigation routes respectively, the predicted arrival times corresponding to the two travel navigation routes respectively can be obtained.

In order to select a target navigation route more suitable for the user from the at least two travel navigation routes, route sorting processing may be performed on the travel navigation routes. Specifically, the route sorting processing manner may be sorting the route lengths of the route guidance routes, sorting the congestion degrees of the route guidance routes, sorting the route levels in the route guidance routes, or sorting other sorting criteria, which is not particularly limited in this exemplary embodiment.

After the target navigation route is obtained, an estimated arrival time corresponding to the target navigation route can be selected from at least two estimated arrival times.

After the target navigation route is obtained, navigation guidance audio corresponding to the target navigation route may be generated. The navigation guidance audio comprises at least two steering broadcast audios and at least two road condition prompt audios. The steering broadcast audio is prompt audio used to remind the user of a turn, and the road condition prompt audio includes road condition broadcast audio and audio carrying other prompting information.

In step S970, a guidance package is requested.

The guidance cloud broadcasting service receives the guidance package request from the navigation access layer.

The guidance package is a data package transmitted by the background service to the mobile terminal over the internet to support the navigation guidance function, and is a core data unit of the navigation function.

In step S980, a user playlist is acquired.

The guidance cloud broadcasting service requests the user's playlist from the music development platform according to the authorization information.

Further, to perform the travel time length calculation on the estimated arrival time, the current time may be obtained. The travel time length calculation includes a subtraction operation that subtracts the current time from the estimated arrival time to obtain the predicted travel time length.
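
A one-line illustration of the subtraction described above; the clamp at zero is an added assumption for routes whose estimated arrival time has already passed.

    from datetime import datetime

    def predicted_trip_seconds(estimated_arrival_time, now=None):
        """Predicted travel time length = estimated arrival time minus the current time."""
        now = now or datetime.now()
        return max(0.0, (estimated_arrival_time - now).total_seconds())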

Then, object extraction processing is performed in the audio object set according to the predicted travel time length to obtain the candidate audio. The audio object set may be a set of audio to be played generated from the user's audio playlist and associated audio recommendations. The audio to be played may be a song or audio such as an audiobook, which is not particularly limited in this exemplary embodiment. That is, the audio mixing processing for the candidate audio and the at least two steering broadcast audios is widely applicable to scenes where navigation guidance broadcasting coexists with other applications that have audio output, such as music playing applications or audiobook applications, which is not particularly limited in this exemplary embodiment.

After the audio object set is obtained, object extraction processing may be performed in the audio object set according to the predicted travel time length, that is, candidate audio whose playing time length does not exceed the predicted travel time length is selected from the audio object set.
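
A sketch of this object extraction step under the assumption that each audio object carries an identifier and a playing duration; the field names are hypothetical.

    def extract_candidate_audio(audio_objects, predicted_trip_seconds):
        """Keep the audio objects whose playing time length does not exceed the
        predicted travel time length."""
        return [audio for audio in audio_objects
                if audio["duration_s"] <= predicted_trip_seconds]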

The interval time length calculation is performed on the interval road section between the at least two steering broadcast audios according to the protocol packet CarRouteRsp (a protocol packet name) of the navigation guidance audio to obtain the interval travel time length. Specifically, the interval route length between the at least two steering broadcast audios is obtained, and the current running speed is obtained. The interval duration calculation is then performed on the interval route length and the current running speed to obtain the interval travel time length.
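
The interval duration calculation reduces to dividing length by speed; the zero-speed guard below is an added assumption rather than something stated in the disclosure.

    def interval_trip_seconds(interval_length_m, current_speed_mps):
        """Interval travel time length = interval route length / current running speed."""
        if current_speed_mps <= 0:
            return float("inf")   # vehicle stopped: the interval cannot be timed yet
        return interval_length_m / current_speed_mps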

Further, the audio playing time length of the candidate audio is obtained. The audio playing time length may characterize the playing duration of each candidate audio. The interval travel time length may then be compared in magnitude with the audio playing time length to determine whether the audio playing time length is less than or equal to the interval travel time length. When the audio playing time length is less than or equal to the interval travel time length, it indicates that the candidate audio corresponding to the audio playing time length and the at least two steering broadcast audios can be subjected to audio mixing processing.

The audio identifier of the candidate audio is acquired, the steering start point coordinates corresponding to the at least two steering broadcast audios are acquired, and identifier adding processing is performed on the audio identifier according to the steering start point coordinates to obtain the mixed playing audio. After the steering start point coordinate and the audio identifier are determined, the audio identifier can be inserted at the steering start point coordinate, so that the candidate audio represented by the audio identifier is played after the steering broadcast audio is played.
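
A sketch of the identifier adding step, assuming the guidance broadcast sequence is a list of coordinate-tagged entries; the entry structure and field names are hypothetical.

    def add_audio_identifier(guidance_sequence, turn_start_coord, audio_id):
        """Insert a candidate-audio identifier into the guidance broadcast sequence
        right after the entry whose coordinate matches the steering start point,
        so the candidate audio plays once that steering broadcast has finished."""
        mixed = []
        for entry in guidance_sequence:
            mixed.append(entry)
            if entry.get("coord") == turn_start_coord:
                mixed.append({"coord": turn_start_coord,
                              "payload": ("audio_id", audio_id)})
        return mixed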

In addition, the at least two road condition prompt audios in the navigation guidance audio can be acquired. Audio merging processing is performed on the at least two road condition prompt audios to obtain road condition merged audio, and audio adjustment processing is performed on the road condition merged audio and the mixed playing audio to obtain road condition navigation audio.

In step S990, the mixed broadcast navigation packet is returned to the client.

The mixed broadcast navigation packet stores the mixed playing audio or the road condition navigation audio. The client parses the mixed broadcast navigation packet and, when encountering the audio identifier of a candidate audio, can play the corresponding candidate audio. Since avoidance processing is applied around the steering broadcast audio, normal playing of the steering broadcast audio is not interfered with.

Fig. 11 is a schematic diagram of an interface for playing the mixed playing audio in an application scenario; as shown in fig. 11, the interface may be displayed in a vertical (portrait) screen orientation. The user interface is divided into an interface displaying the map application and an interface playing the music application, with the music application interface arranged below the map application interface.

Fig. 12 is a schematic diagram of another interface for playing the mixed playing audio in an application scenario; as shown in fig. 12, the interface may be displayed in a horizontal (landscape) screen orientation. The user interface is divided into an interface displaying the map application and an interface playing the music application, with the music application interface arranged to the left of the map application interface.

It should be noted that the interfaces in fig. 11 and fig. 12 may also be used for playing the road condition navigation audio, and the playing may be set according to the actual situation, which is not limited in this exemplary embodiment.

Based on the above application scenarios, on the one hand, the audio mixing method provided by the embodiments of the present disclosure takes the interval travel time length and the audio playing time length as the judgment conditions of the audio mixing processing, provides pre-judgment logic for the audio mixing processing, and guarantees the accuracy of the audio mixing processing; on the other hand, the audio mixing processing performed on the candidate audio and the steering broadcast audio improves the playing experience of the candidate audio, avoids interference with the steering broadcast audio as key guidance broadcasts, and takes into account both the broadcasting experience of the candidate audio and the accuracy of the user's navigation information.

It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.

The following describes an embodiment of an apparatus of the present disclosure, which may be used to perform an audio mixing method in the above-described embodiment of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the audio mixing method described above in the present disclosure.

Fig. 13 schematically illustrates a block diagram of an audio mixing apparatus in some embodiments of the present disclosure, and as shown in fig. 13, an audio mixing apparatus 1300 may mainly include: a journey processing module 1310, a duration calculation module 1320, an audio extraction module 1330 and an audio mixing module 1340.

A journey processing module 1310 configured to perform navigation journey processing on a journey starting point and a journey ending point to obtain a predicted arrival time and navigation induction audio; the navigation guidance audio comprises at least two steering broadcast audio; a duration calculation module 1320, configured to calculate a travel duration for the estimated arrival time to obtain a predicted travel duration, and calculate an interval duration for an interval road segment length between at least two turn-to-broadcast audios to obtain an interval travel duration; an audio extraction module 1330 configured to perform object extraction processing on the audio object set according to the predicted travel time length to obtain candidate audio, and obtain an audio playing time length of the candidate audio; and the audio mixing module 1340 is configured to perform audio mixing processing on the candidate audio and the at least two turning broadcast audios according to the interval travel time length and the audio playing time length to obtain mixed playing audios.

In some embodiments of the present disclosure, the audio mixing module includes: the time length comparison submodule is configured to compare the interval travel time length with the audio playing time length to obtain a time length comparison result;

and the comparison result submodule is configured to perform audio mixing processing on the candidate audio and the at least two steering broadcast audios based on the time length comparison result to obtain the mixed playing audio.

In some embodiments of the disclosure, the comparison result sub-module comprises: the identification acquisition unit is configured to acquire audio identifications of the candidate audios and acquire steering starting point coordinates corresponding to at least two steering broadcast audios;

and the identification adding unit is configured to perform identification adding processing on the audio identification according to the steering starting point coordinate to obtain the mixed playing audio.

In some embodiments of the present disclosure, the audio mixing apparatus further includes: the audio merging module is configured to perform audio merging processing on the at least two road condition prompting audios to obtain road condition merging audios;

and the audio adjusting module is configured to perform audio adjustment processing on the road condition merged audio and the mixed sound playing audio to obtain road condition navigation audio.

In some embodiments of the present disclosure, the trip processing module comprises: the adsorption processing submodule is configured to perform coordinate adsorption processing on the travel starting point and the travel ending point respectively to obtain a travel start point coordinate and a travel end point coordinate;

the route planning submodule is configured to perform route planning processing on the travel starting point coordinate and the travel end point coordinate to obtain a target navigation route and predicted arrival time of the target navigation route;

and the induction voice submodule is configured to perform induction voice processing on the target navigation route to obtain navigation induction audio.

In some embodiments of the present disclosure, the route planning submodule comprises: the route planning unit is configured to perform route planning processing on the travel start point coordinate and the travel end point coordinate to obtain at least two travel navigation routes;

the time prediction unit is configured to respectively perform time prediction processing on the at least two travel navigation routes to obtain at least two predicted arrival times of the at least two travel navigation routes;

and the route sequencing unit is configured to perform route sequencing processing on the at least two travel navigation routes to obtain a target navigation route, and determine the predicted arrival time of the target navigation route in the at least two predicted arrival times.

In some embodiments of the present disclosure, the duration calculation module includes: the speed obtaining submodule is configured to obtain the length of an interval route between at least two steering broadcast audios and obtain the current running speed;

and the section duration submodule is configured to calculate the section duration of the section route length and the current running speed to obtain the section travel duration.

The specific details of the audio mixing apparatus provided in the embodiments of the present disclosure have been described in detail in the corresponding method embodiments, and therefore are not described herein again.

FIG. 14 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.

It should be noted that the computer system 1400 of the electronic device shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.

As shown in fig. 14, a computer system 1400 includes a Central Processing Unit (CPU) 1401 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1402 or a program loaded from a storage portion 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data necessary for system operation are also stored. The CPU 1401, the ROM 1402, and the RAM 1403 are connected to each other via a bus 1404. An Input/Output (I/O) interface 1405 is also connected to the bus 1404.

The following components are connected to the I/O interface 1405: an input portion 1406 including a keyboard, a mouse, and the like; an output portion 1407 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage portion 1408 including a hard disk and the like; and a communication portion 1409 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication portion 1409 performs communication processing via a network such as the internet. A drive 1410 is also connected to the I/O interface 1405 as necessary. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as necessary, so that a computer program read out therefrom is installed into the storage portion 1408 as necessary.

In particular, the processes described in the various method flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1409 and/or installed from the removable medium 1411. When the computer program is executed by the Central Processing Unit (CPU) 1401, various functions defined in the system of the present application are executed.

It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present disclosure.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
