Method, device and equipment for processing dubbing information of video commentary

Document No.: 1820104  Publication date: 2021-11-09  Views: 28  Language: Chinese

Reading note: This technology, "Method, device and equipment for processing dubbing information of video commentary" (一种视频解说配音信息的处理方法、装置及设备), was created by 刘养亭 and 佘志强 on 2021-08-09. The invention discloses a method, a device and equipment for processing dubbing information of video commentary, wherein the method comprises the following steps: acquiring video data and user information of a first target audience; segmenting the video data to obtain video segments; determining, from the video segments, a first target video segment matched with the user information; and synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment. In this way, the invention satisfies users' personalized needs and greatly improves their sense of participation.

1. A method for processing dubbing information of video commentary, the method comprising:

acquiring video data and user information of a first target audience;

segmenting the video data to obtain a video segment;

determining a first target video segment matched with the user information from the video segments;

and synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment.

2. The method for processing dubbing information of video commentary according to claim 1, wherein segmenting the video data to obtain video segments comprises:

segmenting the video data according to at least one preset time interval to obtain a video segment; or

automatically segmenting the video data according to key image information and/or key audio information of the video data to obtain a video segment.

3. The method for processing the video commentary dubbing information according to claim 1, wherein determining a first target video segment matching the user information from the video segments comprises:

obtaining key image information and/or key audio information of each of the video segments;

and matching the key image information with target image information in the user information and/or matching the key audio information with target audio information in the user information, and determining a successfully matched video segment as a first target video segment matched with the user information.

4. The method according to claim 1, wherein synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment comprises:

receiving commentary dubbing information input by the first target audience for the first target video segment;

and synthesizing the commentary dubbing information and the image frame of the first target video segment to obtain a second target video segment.

5. The method for processing dubbing information on video commentary according to claim 1, further comprising, after obtaining the second target video segment:

obtaining a social relationship list of the first target audience, wherein the social relationship list comprises at least one second target audience, and the second target audience and the first target audience are in a friend relationship;

pushing, to the first target audience, a third target video segment for which a second target audience in the social relationship list has completed commentary dubbing; and synthesizing the third target video segment and the second target video segment to obtain a playback video stream.

6. The method for processing dubbing information on video commentary according to claim 1, further comprising, after obtaining the second target video segment:

acquiring a social relationship list of the first target audience;

if the social relationship list is empty, or no third target video segment for which a second target audience in the social relationship list has completed commentary dubbing exists, pushing to the first target audience a fourth target video segment dubbed with a preset commentator's commentary, and synthesizing the fourth target video segment and the second target video segment to obtain a playback video stream.

7. The method for processing dubbing information on video commentary according to claim 1, further comprising, after obtaining the second target video segment:

acquiring evaluation information of the second target video segment;

and generating an optimal commentator list according to the evaluation information, and outputting the optimal commentator list to playing equipment for playing the video data.

8. An apparatus for processing dubbing information for video commentary, the apparatus comprising:

the acquisition module is used for acquiring the video data and the user information of the first target audience;

the first processing module is used for segmenting the video data to obtain a video segment;

the determining module is used for determining a first target video segment matched with the user information from the video segments;

and the second processing module is used for synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment.

9. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute operations corresponding to the method for processing video commentary dubbing information according to any one of claims 1-7.

10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for processing video commentary dubbing information according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of mobile terminals, in particular to a method, a device and equipment for processing dubbing information of video commentary.

Background

Existing sports event broadcasts generally provide only one fixed commentator, and many commentators treat the commentary as routine work and show little passion. As a result, a user can only passively receive the information delivered by a single commentator and has no other choice, so the user's personalized needs and selections cannot be met and the user's sense of participation is low.

Disclosure of Invention

In view of the above problems, embodiments of the present invention are proposed to provide a method, an apparatus, and a device for processing video commentary dubbing information, which overcome the above problems or at least partially solve the above problems.

According to an aspect of the embodiments of the present invention, there is provided a method for processing dubbing information of a video commentary, including:

acquiring video data and user information of a first target audience;

segmenting the video data to obtain a video segment;

determining a first target video segment matched with the user information from the video segments;

and synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment.

According to another aspect of the embodiments of the present invention, there is provided a processing apparatus for dubbing information in video commentary, including:

the acquisition module is used for acquiring the video data and the user information of the first target audience;

the first processing module is used for segmenting the video data to obtain a video segment;

the determining module is used for determining a first target video segment matched with the user information from the video segments;

and the second processing module is used for synthesizing the first target audience's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment.

According to still another aspect of an embodiment of the present invention, there is provided a computing device including: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute operations corresponding to the above method for processing video commentary dubbing information.

According to a further aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the above method for processing video commentary dubbing information.

According to the solution provided by the embodiments of the present invention, video data and user information of a first target audience are acquired; the video data is segmented to obtain video segments; a first target video segment matching the user information is determined from the video segments; and the first target audience's commentary dubbing information for the first target video segment is synthesized with the first target video segment to obtain a second target video segment. In this way, audiences can obtain video segments matched to their own information and add commentary dubbing to those segments, which satisfies users' personalized needs and improves their sense of participation.

The foregoing is only an overview of the technical solutions of the embodiments of the present invention. To make the technical means of the embodiments more clearly understood and implementable according to this description, and to make the above and other objects, features and advantages of the embodiments more apparent, detailed embodiments of the present invention are described below.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

fig. 1 is a flowchart illustrating a processing method of dubbing information in a video commentary according to an embodiment of the present invention;

fig. 2 is a flowchart illustrating a processing method of dubbing information in a video commentary according to another embodiment of the present invention;

fig. 3 shows a video interface of an event, displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 4 shows an "add commentary" interface displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 5 shows a "start commentary" interface displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 6 shows an interface for auditioning other people's commentary, displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 7 shows an interface for viewing other commentary members' information, displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 8 shows an interface indicating how many commentaries each video segment has, displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 9 shows an interface for selecting and "liking" different commentators, displayed on a terminal device by the processing apparatus for video commentary dubbing information provided by an embodiment of the present invention;

fig. 10 is a schematic structural diagram illustrating a processing apparatus for dubbing information in a video commentary according to an embodiment of the present invention;

fig. 11 shows a schematic structural diagram of a computing device provided by an embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

Fig. 1 shows a flowchart of a method for processing dubbing information in a video commentary according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:

step 11, acquiring video data and user information of a first target audience;

Specifically, basic data are first collected, including but not limited to viewer behavior data, content preference data and transaction data, such as browsing volume, access duration, preference settings and return-visit rate.

Secondly, the collected basic data are analyzed and processed, key elements are refined, and a user model is constructed. Behavior modeling is performed on the collected data to abstract the user's tags. The information represented by a user's tags may differ by domain: for example, the e-commerce domain tags a user's basic attributes, behavior characteristics, interests, psychological characteristics and social network, while the financial risk-control domain tags a user's basic information, risk information and financial information.

Then, big-data frameworks such as Hive and HBase are used to implement the tagging process, process the basic data and manage the tags. Meanwhile, to improve data timeliness, real-time computing technologies such as Flink and Kafka are used to compute the tag results in real time.

Finally, the user information of the first target audience is obtained from the computation results; the user information may be a user portrait (profile).
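The pipeline above (collect basic data, model behavior, compute tags, emit a portrait) can be sketched in miniature. This is an illustrative in-memory reduction, not the Hive/HBase/Flink/Kafka implementation the text names, and the tag names and weights below are hypothetical:

```python
from collections import Counter

def build_user_portrait(behavior_events, top_n=3):
    """Aggregate raw behavior events into a simple tag-based user portrait.

    `behavior_events` is a list of (tag, weight) pairs derived from signals
    such as browsing volume and access duration; the portrait keeps the
    top-N tags, mirroring the "user tag" abstraction described above.
    """
    scores = Counter()
    for tag, weight in behavior_events:
        scores[tag] += weight
    return [tag for tag, _ in scores.most_common(top_n)]

# Hypothetical events for one viewer.
events = [("basketball", 5.0), ("table-tennis", 1.0),
          ("basketball", 3.0), ("swimming", 2.5)]
portrait = build_user_portrait(events)
```

In a production system the same reduction would run as an offline batch job plus a streaming update path; here it is a single fold over a list.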

Step 12, segmenting the video data to obtain a video segment;

step 13, determining a first target video segment matched with the user information from the video segments;

and 14, synthesizing the commentary dubbing information of the first target video segment and the first target video segment by the first target audience to obtain a second target video segment.

In the method for processing video commentary dubbing information described above, video data and user information of a first target audience are acquired; the video data is segmented to obtain video segments; a first target video segment matching the user information is determined from the video segments; and the first target audience's commentary dubbing information for the first target video segment is synthesized with the first target video segment to obtain a second target video segment. In this way, audiences can obtain video segments matched to their user information and add commentary dubbing to them, satisfying users' personalized needs and improving their sense of participation.

In yet another alternative embodiment of the present invention, step 12 may comprise:

step 121, segmenting the video data according to at least one preset time interval to obtain a video segment;

specifically, the size of the preset time interval may be set according to an actual situation, for example, 60 minutes of video data may be segmented according to at least one preset time interval, and the video data is segmented at a first preset time interval (for example, 10 minutes) from the start time of the video data to obtain a first video segment; and then continue to segment the video data at a second preset time interval (e.g., 20 minutes) to obtain a second video segment, and so on.

Alternatively,

and step 122, automatically segmenting the video data according to key image information and/or key audio information of the video data to obtain a video segment, wherein the video data comprises a plurality of key image information and/or key audio information.

Specifically, key image information and/or key audio information of the video data is extracted from the video track or the audio track of the video data. For example, if the key image information obtained from the video data is an image of a player the viewer likes, the portion of video featuring that player is taken as a video segment; as another example, if the key audio information "the next athlete to appear is XXX" is obtained from the video data, the portion featuring athlete XXX is taken as a video segment.
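Step 121's fixed-interval scheme can be sketched as a simple timeline cut. This is a minimal sketch on minute counts only, assuming (as in the text's example) that the last preset interval is reused once the list of intervals is exhausted:

```python
def segment_by_intervals(total_minutes, intervals):
    """Cut a timeline of `total_minutes` into (start, end) segments using
    the preset intervals in order, reusing the last interval thereafter."""
    segments, start, i = [], 0, 0
    while start < total_minutes:
        step = intervals[min(i, len(intervals) - 1)]
        end = min(start + step, total_minutes)
        segments.append((start, end))
        start, i = end, i + 1
    return segments

# 60 minutes of video data, first interval 10 minutes, second 20 minutes,
# "and so on" -- matching the worked example for step 121.
segments = segment_by_intervals(60, [10, 20])
```

A real implementation would cut on keyframe boundaries rather than exact minute marks, but the bookkeeping is the same.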

In yet another alternative embodiment of the present invention, step 13 may comprise:

step 131, obtaining key image information and/or key audio information of each video segment of the video segments;

step 132, matching the key image information with the target image information in the user information and/or matching the key audio information with the target audio information in the user information, and determining the successfully matched video segment as the first target video segment matched with the user information.

In this embodiment, the key image information and/or key audio information is matched against the corresponding target information in the user information, and a successfully matched video segment is determined as the first target video segment matched with the user information, so that the optimal video segment can be recommended to the user. If no user information exists, the user may independently select the video segment to be commentated.

In yet another alternative embodiment of the present invention, step 14 may comprise:

step 141, receiving commentary dubbing information for the first target video segment input by the first target audience;

specifically, the commentary dubbing information of the first target video segment, which is input by the first target viewer through a commentary input module of the playing interface of the video data, is received, where the commentary input module may be a commentary button of the playing interface.

And 142, synthesizing the commentary dubbing information and the image frame of the first target video segment to obtain a second target video segment.

In this embodiment, the commentary dubbing information input by the first target audience can be received through a sound-pickup device (e.g., a microphone), and the second target video segment is obtained by synthesizing this audio information with the video segment.
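The synthesis step of claim 4 pairs the received commentary audio with the image frames of the segment. The toy model below represents frames and audio as plain lists; real muxing would of course be done by a media library such as FFmpeg, and the frame labels and sample values are hypothetical:

```python
def synthesize_segment(image_frames, dubbing_audio):
    """Pair each image frame with its slice of commentary audio samples,
    modeling the step-142 synthesis on plain Python lists."""
    if not image_frames or len(dubbing_audio) % len(image_frames) != 0:
        raise ValueError("audio length must align with frame count")
    samples_per_frame = len(dubbing_audio) // len(image_frames)
    return [
        (frame, dubbing_audio[i * samples_per_frame:(i + 1) * samples_per_frame])
        for i, frame in enumerate(image_frames)
    ]

frames = ["f0", "f1", "f2"]                     # hypothetical frame labels
audio = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]          # hypothetical PCM samples
second_target_segment = synthesize_segment(frames, audio)
```

The essential point the sketch captures is that synthesis is per-segment: each second target video segment carries its own aligned commentary track.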

In yet another optional embodiment of the present invention, step 14 may further include:

step 15, obtaining evaluation information of the second target video segment;

and step 16, generating an optimal commentator list according to the evaluation information, and outputting the optimal commentator list to playing equipment for playing the video data.

Specifically, while watching the second target video segment, a user can rate the commentary of that segment's commentator; when the audience switches commentators, "likes" one, or submits a score, the player feeds this information back to the server, and the server records the user's behavior. The system can derive each commentator's popularity from evaluation information including, but not limited to, the number of times selected, likes, gifts, follows and favorites. When a new audience member requests the event, the server aggregates all previous user feedback to generate a new optimal-commentator list and sends it to the player.

In this embodiment, when a viewing user watches a segment other than the second target video segment, the server may select the optimal commentator of each segment as the default commentator according to the commentators' current comprehensive scores, form an optimal-commentator list for the whole video, and send the list to the player.
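Generating the optimal-commentator list from evaluation information reduces to a per-segment arg-max over scores. A minimal sketch, assuming a flat list of (segment, commentator, score) evaluation records with hypothetical names:

```python
def best_commentator_list(evaluations):
    """Given (segment_id, commentator, comprehensive_score) records,
    pick the top-scored commentator per segment -- the list that the
    server pushes to the player as the default commentary."""
    best = {}
    for seg_id, commentator, score in evaluations:
        if seg_id not in best or score > best[seg_id][1]:
            best[seg_id] = (commentator, score)
    return {seg: who for seg, (who, _) in best.items()}

evals = [
    ("seg-a", "alice", 4.2),
    ("seg-a", "bob", 4.8),
    ("seg-b", "carol", 3.9),
]
optimal = best_commentator_list(evals)
```

In practice the comprehensive score would itself be an aggregate of selections, likes, gifts and follows, recomputed as new feedback arrives.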

Fig. 2 is a flowchart illustrating a method for processing dubbing information in a video commentary according to another embodiment of the present invention. As shown in fig. 2, the method may further include the following steps based on the above steps 11 to 14:

step 21, obtaining a social relationship list of the first target audience, where the social relationship list includes at least one second target audience, and the second target audience is in a friend relationship with the first target audience;

specifically, a social relationship list of the first target audience is obtained, and a friend circle of the first target audience is created and maintained. The friend circle at least comprises a second target audience which is friend to the first target audience. The friend circles include self-built friend circles, WeChat friend circles and QQ friend circles but are not limited to the friend circles.

Step 22, pushing a third target video segment of which the second target audience has finished the commentary dubbing in the social relationship list to the first target audience;

specifically, after the second target audience completes dubbing, the third target video explained by the second target audience can be stored and downloaded, so that subsequent sharing is facilitated. And after the third target video is synthesized, the third target video is automatically pushed to the first target audience according to the friend circle of the second target audience.

Step 23, synthesizing the third target video segment and the second target video segment to obtain a playback video stream; further, the playback video stream may be output to a playing device that plays the video data.

Specifically, after the second target audience completes dubbing, commentary segments from the second target audience's friend circle are preferentially selected for synthesis according to that social relationship. The commentary audio track and the video track of the second target audience's friends are preferentially selected to synthesize the playback video stream in real time, and the stream is sent to the audience's player through a streaming-media protocol. Finally, when a user switches commentators, the player interacts with the server: the server switches the commentator's audio track, synthesizes a new video stream and sends it to the audience's player, thereby realizing commentator switching.

In still another alternative embodiment of the present invention, step 14 may be further followed by:

step 17, obtaining a social relationship list of the first target audience;

step 18, if the social relationship list is empty or there is no third target video segment of the commentary dubbing completed by the second target audience in the social relationship list, pushing a fourth target video segment of the commentary dubbing of a preset commentator segment to the first target audience, and synthesizing the fourth target video segment and the second target video segment to obtain a played video stream.

Specifically, the preset commentator's commentary dubbing includes, but is not limited to, the commentator segment with the highest system score.

In this embodiment, the social relationship list of the first target audience is first obtained; if no usable commentary segment exists in the friend circle, a preset commentator segment is used for filling. (For example, event A is divided into three segments a, b and c, and user Wang commentates segment b. When Wang downloads event A with his own commentary, the system first checks whether any of his friends commentated segments a and c; if so, the friends' commentary segments are preferentially selected for synthesis; if not, those segments are filled with the commentator segments with the highest system scores.) Finally, when the user switches commentators, the player interacts with the server: the server switches the commentator's audio track, synthesizes a new video stream, and sends it to the audience's players, thereby realizing commentator switching.
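The fallback order described above (own commentary, then friends' commentary, then the highest-scored preset commentator) can be sketched as a small selection function. The segment and commentator names are hypothetical, matching the a/b/c example:

```python
def assemble_stream(all_segments, own_dubbed, friend_dubbed, default_dubbed):
    """For each segment choose, in priority order: the user's own dubbing,
    a friend's dubbing, then the highest-scored preset commentator."""
    plan = {}
    for seg in all_segments:
        if seg in own_dubbed:
            plan[seg] = "self"
        elif seg in friend_dubbed:
            plan[seg] = friend_dubbed[seg]
        else:
            plan[seg] = default_dubbed[seg]
    return plan

# Event A split into segments a, b, c; the user dubbed segment b themselves,
# a friend dubbed segment a, and segment c falls back to the preset default.
plan = assemble_stream(
    ["a", "b", "c"],
    own_dubbed={"b"},
    friend_dubbed={"a": "friend-Li"},
    default_dubbed={"a": "top-rated", "c": "top-rated"},
)
```

The empty-friend-circle case of claim 6 is the same function with `friend_dubbed` empty, so every undubbed segment falls through to the preset commentator.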

In this embodiment, audio synthesis merges all commentator audio for the second target video segment and the third target video segment into the video file at one time, after which a mapping from commentator to audio track number is issued to the player. The player selects and plays an audio track according to this mapping, so when the user switches commentators, only the corresponding audio track needs to be switched. Image synthesis is performed by the server issuing a real-time synthesized streaming-media file: the server selects the video track and the audio track of the best commentator, synthesizes the playback video stream in real time, and sends it to the audience's players through a streaming-media protocol. When a user switches commentators, the player interacts with the server, which switches the commentator's audio track, synthesizes a new video stream, and sends it to the audience's players.
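The track-mapping scheme just described can be modeled with a toy player object: all commentator audio is assumed pre-muxed, and switching commentators changes only the active track number. Commentator names and track numbers are hypothetical:

```python
class MultiTrackPlayer:
    """Toy model of the audio-track mapping issued to the player:
    switching commentators switches tracks, with no re-synthesis."""

    def __init__(self, track_map):
        self.track_map = track_map  # commentator name -> audio track number
        self.active = None

    def switch_to(self, commentator):
        """Activate the audio track mapped to `commentator`."""
        if commentator not in self.track_map:
            raise KeyError(f"no audio track for {commentator}")
        self.active = self.track_map[commentator]
        return self.active

player = MultiTrackPlayer({"alice": 1, "bob": 2})
track = player.switch_to("bob")
```

This is the client-side half of the design; the server-side half (re-synthesizing a live stream with the chosen track) is not modeled here.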

The processing method of video commentary dubbing information provided by the above embodiments of the present invention can be applied to a terminal device equipped with a touch panel. For convenience of description, the following takes such a device as an example, but the method is not limited thereto.

As shown in fig. 3, the user first enters the event video interface and clicks the "add commentary" button; the server then automatically matches the optimal commentary segment according to the user information and the commentators' tags.

As shown in fig. 4, the user next enters the video interface to be commentated: the top left shows the user information of the current commentary participant, the right side shows all commentators of the current segment, and the bottom shows the progress bar of the current segment and a "start" button.

As shown in fig. 5, when the user clicks the "start" button, the button label changes to "complete", the user can begin commentating along with the video content, and clicks "complete" when finished speaking. For a recorded (on-demand) scene, the user can drag the progress bar at the bottom to commentate again, so multiple takes are supported; for a live scene, the user can only commentate along with the real-time progress.

As shown in fig. 6 and fig. 7, if the user wants to use other people's commentary, the user enters the commentary segment and clicks "50 people commentated in total" to view all commentary-member information (commentators with a friend relationship are shown first), then clicks a user's avatar, whereupon the current video plays that user's commentary; while auditioning other people's commentary, the user can "like" the commentary content. Clicking the avatar again returns the user to his own commentary: in a recorded scene the user can return to any position, while in a live scene the current progress is obtained in real time. The server obtains the audio of the user's commentary in real time and filters sensitive information.

Finally, after completing the dubbing, the user can store and download the event video with his own commentary to facilitate subsequent sharing. When the complete video is synthesized, commentary segments from the user's friend circle are preferentially selected according to that friend circle; if no usable commentary segment exists in the friend circle, the commentator segment with the highest system score is used for filling.

As shown in fig. 8 and fig. 9, in addition, when a user watching the event enters the event video interface, the interface shows how many people have commentated each video segment, and the highly rated commentator is recommended by default. The viewing user can select a different commentator for each video segment and can "like" a commentator, raising that commentator's popularity. After the user finishes watching the whole event, a unique commentary video is formed. Each user's selection of the current commentator is transmitted to the background server; when the next user enters the event, the server recalculates the intelligently recommended commentator for each segment based on big data.

In the above embodiments of the present invention, video data and the user information of a first target audience watching the video data are acquired; the video data is segmented to obtain video segments; a first target video segment matching the user information is determined from the video segments; and the first target audience's commentary dubbing information for the first target video segment is synthesized with the first target video segment to obtain a second target video segment. This lets users choose their commentator, solving the problem that a user can only passively receive information from a single commentator with no other choice, satisfying users' personalized needs and improving their sense of participation. Moreover, when a user enters a very popular video, the server recalculates the intelligently recommended commentator for each segment based on big data, achieving the best recommendation and commentary effect, producing event commentary content personalized for each viewer, satisfying users' personalized commentary needs, and arousing the audience's interest and participation.

Fig. 10 is a schematic structural diagram of a processing apparatus 100 for video commentary dubbing information according to an embodiment of the present invention. As shown in fig. 10, the apparatus includes:

an obtaining module 101, configured to obtain video data and user information of a first target viewer;

the first processing module 102 is configured to segment the video data to obtain a video segment;

a determining module 103, configured to determine, from the video segments, a first target video segment matching the user information;

the second processing module 104 is configured to synthesize the first target viewer's commentary dubbing information for the first target video segment with the first target video segment to obtain a second target video segment.

Optionally, the first processing module 102 is configured to segment the video data according to at least one preset time interval to obtain a video segment; or

automatically segment the video data according to the key image information and/or the key audio information of the video data to obtain a video segment.
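The fixed-interval strategy can be sketched as simple arithmetic over the video duration; the key-image/key-audio variant would replace this arithmetic with scene or event detection. The function below is an illustrative assumption, not the embodiment's concrete implementation:

```python
def segment_by_interval(total_duration, interval):
    """Split a video of `total_duration` seconds into consecutive
    (start, end) segments of at most `interval` seconds each."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    segments = []
    start = 0
    while start < total_duration:
        end = min(start + interval, total_duration)
        segments.append((start, end))
        start = end
    return segments
```

A 100-second video split at a 30-second preset interval yields three full segments and one 10-second remainder.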

Optionally, the determining module 103 is configured to obtain key image information and/or key audio information of each of the video segments; match the key image information with target image information in the user information and/or match the key audio information with target audio information in the user information; and determine a successfully matched video segment as the first target video segment matching the user information.
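Assuming the key information has already been reduced to tags (for example by image or audio recognition, which the module leaves unspecified), the matching step can be sketched as a set intersection. All names and data shapes below are illustrative:

```python
def match_segments(segment_tags, user_tags):
    """Return the ids of video segments whose key-information tags
    intersect the user's preference tags.

    segment_tags: dict mapping segment_id -> iterable of tags extracted
    from the segment's key image/audio information.
    user_tags: iterable of tags from the target viewer's user information."""
    preferences = set(user_tags)
    return [
        segment_id
        for segment_id, tags in segment_tags.items()
        if preferences & set(tags)
    ]
```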

Optionally, the second processing module 104 is further configured to receive commentary dubbing information on the first target video segment input by the first target viewer;

and synthesize the commentary dubbing information with the image frames of the first target video segment to obtain the second target video segment.
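The embodiment does not tie the synthesis of dubbing audio with the segment's image frames to any particular tool; one plausible realization is muxing with ffmpeg, sketched here as command construction only. The paths and the choice of ffmpeg itself are illustrative assumptions:

```python
def build_mux_command(segment_path, dubbing_path, output_path):
    """Build an ffmpeg command line that keeps the video stream of the
    first target video segment and replaces its audio with the viewer's
    commentary dubbing, yielding the second target video segment."""
    return [
        "ffmpeg", "-y",
        "-i", segment_path,                # image frames (video stream)
        "-i", dubbing_path,                # commentary dubbing (audio stream)
        "-map", "0:v:0", "-map", "1:a:0",  # video from input 0, audio from input 1
        "-c:v", "copy",                    # do not re-encode the image frames
        "-shortest",                       # stop at the shorter of the two inputs
        output_path,
    ]
```

The resulting list would be passed to a process runner such as `subprocess.run`; it is shown unexecuted here since the media files are hypothetical.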

Optionally, the second processing module 104 is further configured to obtain a social relationship list of the first target audience, where the social relationship list includes at least one second target audience, and the second target audience is in a friend relationship with the first target audience;

push, to the first target viewer, a third target video segment for which the second target viewer in the social relationship list has completed commentary dubbing; and synthesize the third target video segment with the second target video segment to obtain a video stream for playback.

Optionally, the second processing module 104 is further configured to obtain a social relationship list of the first target viewer;

if the social relationship list is empty, or no third target video segment exists for which a second target viewer in the social relationship list has completed commentary dubbing, push to the first target viewer a fourth target video segment for which a preset commentator has completed commentary dubbing, and synthesize the fourth target video segment with the second target video segment to obtain a video stream for playback.
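The friend-first selection with fallback to a preset commentator, described in the two paragraphs above, can be sketched as follows (the data shapes are illustrative assumptions):

```python
def select_segment_to_push(friend_dubbed_segments, preset_commentator_segment):
    """Push a friend's dubbed third target video segment when the social
    relationship list yields one; otherwise fall back to the preset
    commentator's fourth target video segment."""
    if friend_dubbed_segments:
        return friend_dubbed_segments[0]
    return preset_commentator_segment
```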

Optionally, the second processing module 104 is further configured to obtain evaluation information of the second target video segment; and generate an optimal commentator list according to the evaluation information and output the optimal commentator list to a playback device that plays the video data.
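Generating the optimal commentator list can be sketched as ranking commentators by their aggregated evaluation scores; the scoring model and names below are illustrative assumptions:

```python
def best_commentator_list(evaluations, top_n=3):
    """Aggregate (commentator_id, score) evaluation records and return
    the top_n commentators by total score, highest first."""
    totals = {}
    for commentator_id, score in evaluations:
        totals[commentator_id] = totals.get(commentator_id, 0) + score
    return sorted(totals, key=totals.get, reverse=True)[:top_n]
```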

It should be noted that this embodiment is an apparatus embodiment corresponding to the above method embodiment, and all the implementations in the above method embodiment are applicable to this apparatus embodiment, and the same technical effects can be achieved.

The embodiment of the invention provides a nonvolatile computer storage medium, wherein the computer storage medium stores at least one executable instruction, and the executable instruction causes a computer to execute the method for processing video commentary dubbing information in any of the above method embodiments.

Fig. 11 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.

As shown in fig. 11, the computing device may include: a processor, a communications interface, a memory, and a communications bus.

Wherein: the processor, the communication interface, and the memory communicate with each other via a communication bus. A communication interface for communicating with network elements of other devices, such as clients or other servers. The processor is used for executing the program, and particularly can execute relevant steps in the processing method embodiment of the video commentary dubbing information for the computing device.

In particular, the program may include program code comprising computer operating instructions.

The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.

And the memory is used for storing programs. The memory may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.

The program may be specifically configured to cause the processor to execute the method for processing video commentary dubbing information in any of the above-described method embodiments. For specific implementation of each step in the program, reference may be made to the corresponding steps and the descriptions of the units in the foregoing embodiment of the method for processing video commentary dubbing information, which are not repeated here. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not repeated here.

The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best modes of embodiments of the invention.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. Embodiments of the invention may also be implemented as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing embodiments of the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.
