In-memory content classification and control

Document No. 96211 | Publication date: 2021-10-12

Reading note: This technology, "In-memory content classification and control", was designed and created by P. Kale and R. R. N. Bielby on 2021-04-01. Its main content is as follows. The application relates to in-memory content classification and control: systems, methods, and apparatus for classifying and/or controlling content passing through a memory device. For example, a portion of a media stream received from a content source may be buffered in a memory device for a predetermined time prior to presentation. An artificial neural network (ANN) in the memory device may identify regions in the buffered portion and analyze the regions to determine a classification of the content in the regions. Within the memory device, the content in the regions may be transformed according to preferences specified for the classification. For example, unwanted or objectionable content may be hidden, distorted, skipped, replaced, and/or filtered. A modified version of the portion is generated as output for presentation by transforming the content in the regions.

1. A method, comprising:

buffering a portion of a media stream from a content source in a memory device at a predetermined time before outputting the portion for presentation;

identifying a region in the portion using an Artificial Neural Network (ANN) in the memory device;

analyzing the region using the Artificial Neural Network (ANN) in the memory device to determine a classification of content in the region;

transforming the content in the region in the memory device according to preferences specified for the classification; and

generating, based on the transformation of the content in the region, a modified version of the portion as an output provided according to the predetermined time.

2. The method of claim 1, wherein the memory device is configured on a communication path from the content source to a display device.

3. The method of claim 2, wherein the Artificial Neural Network (ANN) includes a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), or a Spiking Neural Network (SNN), or any combination thereof.

4. The method of claim 3, wherein the portion of the media stream comprises a video frame; and the region is within the video frame.

5. The method of claim 4, wherein the transforming comprises transforming the region within the video frame without changing the video frame outside the region.

6. The method of claim 4, wherein the transforming is performed iteratively until the Artificial Neural Network (ANN) no longer recognizes the transformed content in the region as having the classification.

7. The method of claim 6, wherein the transforming includes adding random noise in the region.

8. The method of claim 3, further comprising:

identifying an audience of the modified version; and

configuring the preferences according to classifications of people in the audience.

9. The method of claim 8, further comprising:

capturing images using one or more cameras configured on a vehicle, wherein the audience and the classifications of the people in the audience are determined based on the images from the one or more cameras.

10. The method of claim 9, wherein the audience is determined based at least in part on an operating state of the vehicle.

11. The method of claim 3, further comprising:

receiving an input from a user of the memory device, the input identifying a region in content processed via the memory device and a classification of the content within the region; and

training the Artificial Neural Network (ANN) using the input identifying the region and the classification.

12. The method of claim 11, further comprising:

storing first content of a region when the classification of the content has a confidence level within a predetermined range;

presenting the first content during an inspection mode of the memory device;

receiving a user-specified classification of the first content during the inspection mode of the memory device; and

training the Artificial Neural Network (ANN) using at least the user-specified classification.

13. The method of claim 11, further comprising:

receiving an indication from a user that an output of the memory device contains undesired content;

storing content data used to produce the output;

presenting first content in accordance with the content data during an inspection mode of the memory device;

receiving, during the inspection mode of the memory device, a user-identified region in the first content and a user-specified classification of content within the user-identified region; and

training the Artificial Neural Network (ANN) using at least the user-identified region and the user-specified classification.

14. A data storage device, comprising:

one or more memory components configured to store data;

an interface configured to receive a media stream from a content source, wherein a portion of the media stream is buffered in the one or more memory components for a predetermined duration before generating output for presentation in accordance with the portion;

an inference engine configured to identify a classification of content within the portion using an Artificial Neural Network (ANN) in the data storage device; and

a controller configured to modify the content according to the classification to produce an output provided according to the predetermined duration.

15. The data storage device of claim 14, wherein the artificial neural network comprises a convolutional neural network, a deep neural network, or a spiking neural network, or any combination thereof.

16. The data storage device of claim 15, wherein the controller is configured to select content based on output from the ANN, store the selected content in the one or more memory components, present the selected content in an inspection mode, receive a classification of the selected content during the inspection mode, and instruct the inference engine to train the ANN using the classification received during the inspection mode.

17. The data storage device of claim 15, wherein the inference engine comprises a neural network accelerator configured to perform matrix arithmetic calculations more efficiently than the controller.

18. A vehicle, comprising:

one or more cameras;

a computer system configured to:

receive images from the one or more cameras;

identify a person shown in the images;

determine a classification of the person;

buffer a portion of a media stream from a content source at a predetermined time before outputting the portion for presentation;

identify undesired content in the portion using an Artificial Neural Network (ANN) based on preferences associated with the classification of the person; and

transform the undesired content in the portion when generating an output signal; and

an entertainment system configured to present content based on the output signal.

19. The vehicle of claim 18, further comprising:

an Advanced Driving Assistance System (ADAS) configured to provide driving assistance based at least in part on images from the one or more cameras.

20. The vehicle of claim 19, wherein the computer system is further configured to, in determining the preference, select the person based on an operating state of the vehicle.

Technical Field

At least some embodiments disclosed herein relate generally to content processing, and more particularly, but not by way of limitation, to detection and filtering of unwanted media content streamed in a vehicle.

Background

Recent developments in the field of autonomous driving technology allow computing systems to operate control elements of motor vehicles under at least some conditions without assistance from a human operator of the vehicle.

For example, sensors (e.g., cameras and radar) may be mounted on a motor vehicle to detect the conditions of the surroundings of the vehicle traveling on a roadway. With or without any input from the vehicle's human operator, a computing system mounted on the vehicle analyzes the sensor inputs to identify the conditions and generate control signals or commands for autonomous adjustments to the direction and/or speed of the vehicle.

In some arrangements, rather than having the computing system autonomously drive the vehicle, the computing system alerts a human operator of the vehicle and requests that the human operator take over control of the vehicle and manually drive when the computing system recognizes a situation in which the computing system may not be able to continue operating the vehicle in a safe manner.

Autonomous driving and/or Advanced Driving Assistance Systems (ADAS) may use an Artificial Neural Network (ANN) to identify events and/or objects captured in sensor inputs. Examples of sensor inputs include images from digital cameras, lidars, radars, ultrasonic sonars, and the like.

Generally, an Artificial Neural Network (ANN) uses a network of neurons to process inputs and generate outputs.

For example, each neuron in the network receives a set of inputs. Some of the inputs to a neuron may be the outputs of other neurons in the network; and some of the inputs to a neuron may be inputs provided to the network as a whole. The input/output relationships among the neurons represent the connectivity of the network.

For example, each neuron may have a bias, an activation function, and a set of synaptic weights, one for each of its inputs. The activation function may be in the form of a step function, a linear function, a log-sigmoid function, and the like. Different neurons in the network may have different activation functions.

For example, each neuron may generate a weighted sum of its inputs and its bias, and then produce an output that is a function of the weighted sum, computed using the neuron's activation function.
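As an illustration only (not part of the disclosure), the computation of a single neuron can be sketched in a few lines of Python; the log-sigmoid activation and the sample values below are arbitrary choices:

```python
import math

def neuron_output(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, passed through activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # log-sigmoid activation

# A neuron with three inputs:
print(neuron_output([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.2))
```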

The relationship between the inputs and outputs of an ANN is generally defined by an ANN model that contains data representing the connectivity of neurons in the network, as well as the bias, activation function, and synaptic weight of each neuron. Using a given ANN model, the computing device computes the output of the network from a given set of inputs to the network.

For example, inputs to the ANN may be generated based on camera input; and the output of the ANN may be the identification of an event or object, for example.

A Spiking Neural Network (SNN) is a type of ANN that closely mimics a biological neural network. An SNN neuron generates a pulse as output when its activation level is sufficiently high. The activation level of an SNN neuron mimics the membrane potential of a biological neuron. The output/pulse of an SNN neuron may change the activation level of other neurons that receive the output. The current activation level of an SNN neuron as a function of time is typically modeled using a differential equation and is considered the state of the neuron. Incoming pulses from other neurons may push the activation level higher until it reaches the threshold at which a pulse is generated. Once a neuron generates a pulse, its activation level is reset. Before the pulse is generated, the activation level of the SNN neuron may decay over time, as governed by the differential equation. The temporal element in the behavior of SNN neurons makes SNNs well suited for processing spatiotemporal data. The connectivity of an SNN is typically sparse, which helps reduce the computational workload.
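For illustration, a leaky integrate-and-fire neuron, one common SNN neuron model consistent with the decay behavior described above, can be simulated with simple Euler integration; the threshold, decay rate, and inputs below are arbitrary, not taken from the disclosure:

```python
def simulate_lif_neuron(input_current, threshold=1.0, decay=0.1, dt=1.0):
    """Leaky integrate-and-fire: activation decays over time, resets on spike."""
    potential = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step of d(potential)/dt = -decay * potential + input
        potential += dt * (-decay * potential + i_in)
        if potential >= threshold:
            spike_times.append(t)  # emit a pulse
            potential = 0.0        # reset the activation level after spiking
    return spike_times

print(simulate_lif_neuron([0.3, 0.3, 0.3, 0.0, 0.5, 0.5, 0.5]))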

In general, an ANN may be trained using a supervised approach, where the parameters of the ANN are adjusted to minimize or reduce the error between the known outputs associated with respective inputs and the outputs computed by applying those inputs to the ANN. Examples of supervised learning/training methods include reinforcement learning and learning with error correction.

Alternatively or in combination, the ANN may be trained using an unsupervised approach, where the exact outputs resulting from a given set of inputs are not known before the training is completed. The ANN may be trained to classify events into multiple categories, or to classify data points into clusters.

Multiple training algorithms may be employed for complex machine learning/training paradigms.

Disclosure of Invention

An aspect of the present disclosure provides a method, wherein the method comprises: buffering a portion of a media stream from a content source in a memory device at a predetermined time before outputting the portion for presentation; identifying a region in the portion using an Artificial Neural Network (ANN) in the memory device; analyzing the region using the Artificial Neural Network (ANN) in the memory device to determine a classification of content in the region; transforming the content in the region in the memory device according to preferences specified for the classification; and generating, based on the transformation of the content in the region, a modified version of the portion as an output provided according to the predetermined time.

Another aspect of the present disclosure provides a data storage device, wherein the data storage device includes: one or more memory components configured to store data; an interface configured to receive a media stream from a content source, wherein a portion of the media stream is buffered in the one or more memory components for a predetermined duration before generating output for presentation in accordance with the portion; an inference engine configured to identify a classification of content within the portion using an Artificial Neural Network (ANN) in the data storage device; and a controller configured to modify the content according to the classification to produce an output provided according to the predetermined duration.

Another aspect of the present disclosure provides a vehicle, wherein the vehicle includes: one or more cameras; a computer system configured to: receive images from the one or more cameras; identify a person shown in the images; determine a classification of the person; buffer a portion of a media stream from a content source at a predetermined time before outputting the portion for presentation; identify undesired content in the portion using an Artificial Neural Network (ANN) based on a preference associated with the classification of the person; and transform the undesired content in the portion while generating an output signal; and an entertainment system configured to present content based on the output signal.

Drawings

Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.

FIG. 1 illustrates an intelligent device configured to process a video data stream in generating a video signal for a display device, according to one embodiment.

FIG. 2 illustrates a method for classifying and controlling content in a media stream according to one embodiment.

FIG. 3 illustrates a method for training an artificial neural network to identify content regions for content classification, according to one embodiment.

FIG. 4 illustrates a system having a vehicle configured to collect and process sensor data, according to some embodiments.

FIG. 5 illustrates an autonomous vehicle according to one embodiment.

FIGS. 6-8 illustrate training of an artificial neural network for prediction, in accordance with some embodiments.

FIG. 9 illustrates a data storage device having a neural network accelerator, according to one embodiment.

FIG. 10 illustrates a memory component for expediting neural network computations, according to one embodiment.

FIG. 11 illustrates a memory capacity configured to support neural network computations, according to one embodiment.

FIG. 12 illustrates a configuration of a memory area for an Artificial Neural Network (ANN) model, according to one embodiment.

FIG. 13 illustrates a configuration of a memory region for input to an artificial neuron, according to one embodiment.

FIG. 14 illustrates a configuration of a memory region for output of an artificial neuron, according to one embodiment.

FIG. 15 illustrates communication between an autonomous vehicle and a data storage device, according to one embodiment.

FIG. 16 illustrates communication within a data storage device according to one embodiment.

Detailed Description

At least some embodiments disclosed herein provide systems, methods, and apparatus for identifying and/or filtering unwanted media content (e.g., video content streamed to a vehicle from a service provider or another source) before presentation via an infotainment system, media player, display device, or the like.

Streaming media content (e.g., video and/or audio) is typically configured to be moved from a source through a memory device prior to presentation. For example, the memory device may be configured to buffer the content stream to avoid interruptions or pauses during playback when the transmission of data from the source may be temporarily delayed or interrupted. Optionally, the entire video clip or song may be downloaded from the source to the memory device so that it may be rendered from the memory device even when the data connection from the memory device to the source is broken during rendering.

Analysis capability may be configured in the memory device to intelligently identify and filter unwanted images/audio in real time so that children and/or other passengers in the vehicle, or other users of the media player, are not exposed to the unwanted images/audio.

For example, an Artificial Neural Network (ANN), such as a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), and/or a Spiking Neural Network (SNN), may be configured in a memory device to process media content to be rendered in real-time. An Artificial Neural Network (ANN) may be trained to recognize undesired content and/or determine a classification of the content, where the classification may be selectively configured as undesired.

For example, when media content from the internet or a radio broadcast station is streamed into the front or rear entertainment system of a vehicle, the content passing through the memory device is processed by an Artificial Neural Network (ANN) to discern or detect objectionable and/or unwanted images, gestures, words, phrases, and the like. The unwanted content may be filtered out of the stream, skipped or hidden from presentation in the entertainment system.

For example, parental control may be implemented based on content inspection, discrimination, and/or classification performed by an Artificial Neural Network (ANN) in a memory device.

For example, transportation services may use content control techniques to avoid offending certain passengers.

The buffering operation of the memory device gives an Artificial Neural Network (ANN) a short period of time (e.g., one or more seconds, or minutes or more) to process content in real time before the content is presented in the entertainment system. When unwanted content is detected during the buffering period, the content can be erased, modified, hidden, altered, transformed, etc., to produce a sanitized version of the media stream in real time.

Further, the cleansing operations performed on the media stream may be customized to the current audience in the vehicle. For example, parental controls may be dynamically implemented or suspended depending on whether any children are currently in or near the vehicle. For example, when anyone in a particular group (e.g., one or more friends) is present in or near the vehicle, content unwanted by that group may be filtered.

FIG. 1 illustrates an intelligent device (e.g., 101) configured to process a video data stream 103 in generating a video signal 104 for a display device, according to one embodiment.

For example, video data stream 103 may be transmitted from a server of a content provider via a communication connection such as the internet, a cellular communication network, a local area network, and/or a wireless local area network (e.g., Wi-Fi). The media content in video data stream 103 may be encoded/compressed using Discrete Cosine Transform (DCT) techniques or modified versions of DCT techniques, such as those used in various coding formats, e.g., MP3 (MPEG-1 Audio Layer III or MPEG-2 Audio Layer III), Advanced Audio Coding (AAC), Advanced Video Coding (AVC), MPEG-4, Advanced Systems Format (ASF), and so forth.

For example, the video signal 104 may be in a standard format ready for an audio/video device to present content, such as a format for the Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), and so forth.

In some embodiments, video data stream 103 may also be in a standard format for display devices. In other embodiments, video data stream 103 and video signal 104 may be in the same format.

In some cases, video data stream 103 is from a receiver of a television station. Optionally, an audio data stream without images may be processed in a similar manner.

The video processing device 101 of FIG. 1 includes a buffer memory 102 and one or more processors 110.

In FIG. 1, portions of the content of video data stream 103 are reconstructed in digital format for analysis by an Artificial Neural Network (ANN) 125. For example, an input video frame 105 in video data stream 103 may be reconstructed in the buffer memory 102. Similarly, an audio segment of a predefined time period may be reconstructed in the buffer memory 102.

The Artificial Neural Network (ANN) 125 is configured to recognize objects in the region 108 in the input video frames 105 and determine a classification 107 of the objects depicted in the region 108.

The video processing device 101 has configurable preferences 109 that identify unwanted content based on classification (e.g., 107). Based on the classification 107 and the preferences 109, the video processing device 101 may adjust the output video frames 106 generated by the processor 110 to prevent the presentation of unwanted content.

For example, the output video signal 104 of the video processing device 101 may be generated to present output video frames 106 that are selectively altered from the input video frames 105, based on the classification 107 and the preferences 109, when the input frames would otherwise present unwanted content.

For example, the preferences 109 may instruct the processor 110 to skip input video frames having identified regions 108 that show images of objects of certain classifications 107 identified in the preferences 109 as undesirable or offensive. Thus, the output video may appear to fast-forward over the undesired video frames.

For example, the preferences 109 may instruct the processor 110 to hide or blur identified regions 108 showing images of objects of certain classifications 107 that are identified in the preferences 109 as undesirable or objectionable. For example, when an identified region 108 is detected as having an object with a classification 107 that is undesirable or offensive according to the preferences 109, the processor 110 is configured to modify or transform the region so that its classification is no longer undesirable or offensive, and to use the modified/transformed region to generate the corresponding output video frame 106.

For example, an undesired region of a video frame may be combined with a predefined pattern to obscure the content within the undesired region without changing the content in the remainder of the video frame. For example, noise may be added to the undesired region to obscure the content within it. Optionally, noise may be added to the entire video frame at varying densities and intensities, where the noise density and/or intensity may be higher in regions containing unwanted content than elsewhere. For example, the transformation may be applied iteratively until the Artificial Neural Network (ANN) 125 no longer recognizes the region 108 as having content in a classification 107 that is unwanted or objectionable.
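A minimal sketch of this iterative noise-adding loop is shown below; the classify_region callable stands in for the ANN 125, and the region layout, noise scale, and iteration cap are assumptions for illustration, not details from the source:

```python
import numpy as np

def sanitize_region(frame, region, classify_region, blocked_classes,
                    noise_scale=8.0, max_iters=20):
    """Add random noise to a region until the classifier no longer flags it."""
    y0, y1, x0, x1 = region
    for _ in range(max_iters):
        label, confidence = classify_region(frame[y0:y1, x0:x1])
        if label not in blocked_classes:
            return frame  # region is no longer classified as unwanted
        noise = np.random.normal(0.0, noise_scale, frame[y0:y1, x0:x1].shape)
        # Transform only the region; pixels outside it are left unchanged
        frame[y0:y1, x0:x1] = np.clip(frame[y0:y1, x0:x1] + noise, 0, 255)
    return frame
```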

Optionally, the Artificial Neural Network (ANN) 125 is configured to generate an overall classification for a series of frames of video data stream 103. When the overall classification 107 is one the preferences identify as undesired or objectionable, the video processing device 101 may generate an alert, select an alternative video data stream 103 from the same content source, and/or select an alternative content source for obtaining another video data stream 103.

Similarly, the Artificial Neural Network (ANN) 125 may also be used to recognize audio content of the data stream 103 buffered in the buffer memory 102. The Artificial Neural Network (ANN) 125 may generate classifications of the recognized audio content, such as undesired words, phrases, speech, comments, noise, and/or sound patterns, among others.

For example, the processor 110 may be configured to turn down the volume of recognized audio content, such as undesired words, phrases, speech, comments, noise, and/or sound patterns. Alternatively or in combination, the processor 110 may superimpose a predefined tone, audio clip, or noise over the unwanted words, phrases, speech, comments, noise, and/or sound patterns. The processor 110 may transform the undesired portions of the audio content iteratively until they are no longer classified as undesired. Alternatively, when the audio content is presented without images, the undesired portions of the audio content may be skipped.

The Artificial Neural Network (ANN) 125 may first be trained on a general user population and then further trained to be customized for the user of the video processing device 101.

For example, the video processing device 101 may store image data showing the identified regions 108 with undesirable or objectionable content classifications. Subsequently, a user of the video processing device may review the stored image data and confirm, adjust, and/or correct the classifications made by the Artificial Neural Network (ANN) 125 to generate training data. The training data may be used to further train the Artificial Neural Network (ANN) 125 to produce classifications closer to those the user would produce. Thus, from the perspective of the user of the video processing device, the training may improve the classification accuracy of the Artificial Neural Network (ANN) 125.

In some implementations, the Artificial Neural Network (ANN) 125 may also be further trained to improve the accuracy of the regions 108 identified for classification. For example, the video processing device 101 may store input video frames that may contain regions of interest for classification. A user of the video processing device may view the stored video frames to identify the regions that should be recognized by the Artificial Neural Network (ANN) 125. The user-identified regions may be included in the training data to improve the accuracy of the Artificial Neural Network (ANN) 125 in identifying the regions of interest 108 for classification 107.

In some embodiments, the processor 110 is configured to alter the input video frame 105 to generate the output video frame 106 based on a threshold confidence level for the classification 107 in an undesired or offensive category. When the confidence level of the classification 107 is below the threshold confidence level, the processor 110 may allow the identified region 108 to be presented. However, when the confidence level is in a range just below the threshold, the video processing device 101 may automatically store the image data for inspection. Accordingly, subsequent user review may be used to generate training data that improves the accuracy of the Artificial Neural Network (ANN) 125 in identifying regions of interest 108 and/or in producing classifications with high confidence levels.
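The confidence-threshold behavior described above might be sketched as follows; the threshold values and the review_queue structure are illustrative assumptions, not values from the source:

```python
def handle_region(region_data, classification, confidence, review_queue,
                  block_threshold=0.8, review_floor=0.4):
    """Gate filtering on confidence; store borderline detections for review."""
    if confidence >= block_threshold:
        return "transform"  # high confidence: hide/distort before output
    if confidence >= review_floor:
        # In a range below the threshold: present, but save for user inspection
        review_queue.append((region_data, classification, confidence))
    return "present"
```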

In some cases, the Artificial Neural Network (ANN) 125 may fail to discern content that is unwanted or objectionable. Thus, unwanted or objectionable content may appear in the video signal 104. In response, the user of the video processing device 101 may provide an indication of the undesired/objectionable content discerned by the user.

For example, input buttons of the video processing device 101, or a user interface connected to the video processing device 101, may be used to indicate that undesirable content is present in the video signal 104. Alternatively, a voice command or comment may be provided to indicate that the video signal 104 contains undesired/objectionable content. In response, the video processing device 101 may store the image data for subsequent review and/or for use as part of the training data for further training of the Artificial Neural Network (ANN) 125.

In some embodiments, the filtering operations in the preferences 109 are selectively implemented based on the people currently in the vehicle who consume the media content corresponding to the video signal 104.

For example, when a child is detected in the vehicle, the video processing device 101 at least implements parental controls to restrict content that the child's parents consider unwanted or objectionable.

For example, when a guest is detected in the vehicle, the video processing device 101 may implement a filter to mask, filter, or reduce content that the vehicle owner has identified as unwanted or objectionable for guests.

For example, when a friend is detected in the vehicle, the video processing device 101 may implement personalized and/or customized filters to remove content that is unwanted or objectionable to the friend.

For example, a vehicle may be configured with one or more cameras to capture images of the driver and/or passengers in the vehicle. The cameras may be part of an Advanced Driving Assistance System (ADAS) of the vehicle. From the images captured by the cameras, the vehicle may classify people in and/or near the vehicle into a number of categories, such as owner, driver, passenger, adult, friend, child, general user, and so forth. The same person may fall into multiple categories. When the preferences 109 indicate that an item in a discerned region 108 has a classification 107 that is unwanted or objectionable to anyone currently in or near the vehicle, as recognized via the cameras, the video processing device 101 applies a filtering operation to the discerned region 108. Thus, a cleansed version of the video data stream 103 may be presented via the video signal 104 to accommodate the people in and/or around the vehicle.
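One plausible way to combine per-category preferences for everyone detected in or near the vehicle is to take the union of their blocked classifications; the categories and content classes below are illustrative assumptions, not from the source:

```python
# Hypothetical preference tables mapping audience categories to unwanted classes.
PREFERENCES_BY_CATEGORY = {
    "child":  {"violence", "profanity", "gore"},
    "guest":  {"profanity"},
    "friend": {"gore"},
}

def active_blocked_classes(audience_categories):
    """Union of unwanted content classes for everyone currently in the audience."""
    blocked = set()
    for category in audience_categories:
        blocked |= PREFERENCES_BY_CATEGORY.get(category, set())
    return blocked

# A child and a guest detected by the cabin cameras:
print(active_blocked_classes({"child", "guest"}))
```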

Further, the vehicle can dynamically adjust the space considered in identifying the audience for the video signal 104 based on the operating state of the vehicle. For example, when the vehicle is traveling on a road, the audience for the video signal 104 is limited to the people within the vehicle. When the vehicle is parked and/or its doors are open, the audience for the video signal 104 may include not only viewers inside the vehicle but also people approaching and/or located near the vehicle.

FIG. 2 illustrates a method for classifying and controlling content in a media stream according to one embodiment. For example, the method of FIG. 2 may be implemented in the device 101 of FIG. 1.

At block 251, a memory device (e.g., buffer memory 102) buffers a portion of a media stream from a content source. The buffering is performed a predetermined time before outputting the portion for presentation. The memory device is configured on a communication path from a content source to a display device (e.g., a display device of a vehicle, an entertainment system, a media player, an infotainment system, etc.).

For example, the media stream may be a video data stream 103 transmitted from a server; and a video signal 104 may be generated for a display device based on buffering a portion of the stream a predetermined amount of time before the portion is scheduled for presentation.
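A minimal sketch of such a delay buffer follows; the fixed two-second delay and the (timestamp, portion) queue layout are assumptions for illustration:

```python
import collections
import time

class DelayBuffer:
    """Hold stream portions for a fixed delay so an ANN can inspect them first."""

    def __init__(self, delay_seconds=2.0):
        self.delay = delay_seconds
        self.queue = collections.deque()  # entries of (arrival_time, portion)

    def push(self, portion):
        self.queue.append((time.monotonic(), portion))

    def pop_ready(self):
        """Release the portions whose buffering delay has elapsed."""
        ready = []
        now = time.monotonic()
        while self.queue and now - self.queue[0][0] >= self.delay:
            ready.append(self.queue.popleft()[1])
        return ready
```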

At block 253, an Artificial Neural Network (ANN) 125 in a memory device (e.g., 102) identifies the region 108 in the portion. The Artificial Neural Network (ANN) may include a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), or a Spiking Neural Network (SNN), or any combination thereof.

For example, the portion of the media stream may include one or more input video frames 105; and the region may be within a video frame, showing a recognized object, term, symbol, feature, etc.

For example, the portion of the media stream may be audio content within a predetermined time interval; and the region may be a segment of the audio content containing a word, phrase, or sentence.

At block 255, the Artificial Neural Network (ANN) 125 in the memory device (e.g., 102) analyzes the region 108 to determine a classification 107 of the content in the region.

At block 257, the content in region 108 is transformed by a memory device (e.g., 102) and/or processor 110 according to the preferences 109 specified for the classification 107.

For example, a transformation may be applied within the identified region 108 such that the identified region 108 within the video frame is modified while the remainder of the video frame outside of the region 108 is unaltered.

For example, the transformation may be applied iteratively until an Artificial Neural Network (ANN) no longer recognizes the transformed content within region 108 as having classification 107.

For example, the transformation may be applied by adding random noise in region 108. Alternatively, the transformation may be applied by blending in a predetermined pattern at a chosen intensity and/or by applying a predetermined transformation function.

At block 259, a modified version of the portion (e.g., 104) is generated as an output provided according to a predetermined time based on the transformation of the content in region 108.

Optionally, an audience of the modified version is identified to dynamically configure the preferences according to the classifications of people in the audience.

For example, one or more cameras configured on a vehicle may be used to capture images; and the people in the audience can be determined from the images. The people may be classified into predetermined categories such as children, friends, guests, drivers, car owners, adults, strangers, acquaintances, and the like.

Optionally, the audience may be determined based at least in part on the operating state of the vehicle, such as whether the vehicle is stationary or moving, whether the windows are open or closed, whether the vehicle is parked, and so forth.

Optionally, the memory device may be configured to generate training data to train an Artificial Neural Network (ANN) according to preferences of its user, owner, and/or operator.

For example, the memory device may receive input from a user identifying a region in content processed via the memory device and a classification of the content within the region. The Artificial Neural Network (ANN) may be trained using the input to identify regions and classifications in a manner similar to the user's.

For example, when the Artificial Neural Network (ANN) 125 determines the classification 107 of the content in an identified region 108 with a confidence level within a predetermined range, the memory device may store the content of the identified region 108. During an inspection mode of the memory device, the stored content is presented for inspection by a user. After receiving the classification from the user during the inspection mode, the Artificial Neural Network (ANN) may be trained to classify the content in a manner similar to the user's.

For example, after the video signal 104 is provided for presentation, the buffered content may be retained in the buffer memory 102 for at least a predetermined period of time. When content not desired by the user appears in the video signal 104, the user may provide an indication that the output of the memory device contains undesired content. In response to receiving the indication from the user, the memory device may store the buffered content for review.

In the inspection mode, the stored content may be presented via a user interface so that the user can identify a region and/or specify a classification for the content within the identified region. Subsequently, the Artificial Neural Network (ANN) may be trained, in the memory device or elsewhere, using at least the user-identified region and the user-specified classification of the stored content.

FIG. 3 illustrates a method for training an artificial neural network to identify content regions for content classification, according to one embodiment.

At block 261, the data storage device (e.g., 101) selects a portion of the media stream that passes through the data storage device (e.g., 101) prior to presentation of the media stream (e.g., 103).

At block 263, the data storage device (e.g., 101) stores the selected portion after presenting the portion in the play mode of the media stream.

For example, in response to an indication from the user that playback of the portion contains undesired content, the portion may be selected for subsequent review.

For example, in response to determining (e.g., using the Artificial Neural Network (ANN) 125) that the portion has a likelihood of containing undesired content above a threshold, the portion may be selected for subsequent examination.

At block 265, the data storage device (e.g., 101) provides the selected portion for presentation in an inspection mode of the selectively stored content.

During the inspection mode, the data storage device (e.g., 101) receives a user-specified region in the selected portion at block 267 and receives a user-specified classification of content in the user-specified region at block 269.

At block 271, the artificial neural network 125 is trained using the user-specified region and the user-specified classification of the selected portion. After training, the artificial neural network 125 may identify similar regions and make user-specified classifications for content in the similar regions when a portion of the media stream has content that is the same as or similar to the content in the selected portion.

The training may be performed in the data storage device (e.g., 101). Alternatively, the training data may be downloaded from the device and/or uploaded to a server at a maintenance facility for training the artificial neural network. The trained artificial neural network 125 may then be installed in the data storage device (e.g., 101) via an update to improve the device's ability to discern content for filtering.
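As a sketch only, inspection-mode annotations could be accumulated into training examples along these lines; the ReviewExample structure and field names are hypothetical, not from the source:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReviewExample:
    content: bytes                      # stored portion that produced the output
    region: Tuple[int, int, int, int]   # user-identified region (y0, y1, x0, x1)
    label: str                          # user-specified classification

def record_review(content, region, label, training_set: List[ReviewExample]):
    """Turn one inspection-mode annotation into a training example."""
    training_set.append(ReviewExample(content, region, label))
    return training_set
```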

The techniques of FIGS. 1-3 may be implemented in the system shown in FIG. 4 and/or using the data storage devices discussed below.

For example, the data storage device may include: one or more memory components configured to store data; an interface configured to receive a media stream from a content source, wherein a portion of the media stream is buffered in the one or more memory components for a predetermined duration before generating output for presentation in accordance with the portion; an inference engine configured to identify a classification of content within the portion using an Artificial Neural Network (ANN) in the data storage device; and a controller configured to modify the content according to the classification to produce an output provided according to the predetermined duration.

For example, the inference engine may include a neural network accelerator configured to perform matrix arithmetic calculations more efficiently than the controller.

Optionally, the controller may be configured to select content based on output from the ANN, store the selected content in the one or more memory components, present the selected content in an inspection mode, receive a classification of the selected content during the inspection mode, and instruct the inference engine to train the ANN using the classification received during the inspection mode.

In some embodiments, the techniques for distinguishing content and/or filtering unwanted content may be implemented in a computer system of a vehicle.

For example, a vehicle may have one or more cameras, a computer system, and an entertainment system. The computer system is configured to: receive images from the one or more cameras; identify a person shown in the images; determine a classification of the person; buffer a portion of a media stream from a content source at a predetermined time before outputting the portion for presentation; identify undesired content in the portion using an Artificial Neural Network (ANN) based on a preference associated with the classification of the person; and transform the unwanted content in the portion while generating an output signal for presentation on the entertainment system.

For example, based at least in part on images from the one or more cameras, an Advanced Driving Assistance System (ADAS) of the vehicle may provide driving assistance, such as autonomous driving, lane keeping, adaptive cruise control, and/or collision avoidance, among others.

For example, the computer system of the vehicle may be configured to select people based on the operating state of the vehicle when determining the preferences. For example, when the vehicle is in certain states, the people selected for content filtering are the people within the vehicle; and when the vehicle is in other states, others in the vicinity of the vehicle may also be selected to identify the preferences for content filtering.

FIG. 4 illustrates a system having a vehicle 111 configured to collect and process sensor data, according to some embodiments.

The vehicle 111 in FIG. 4 has a data storage device 112, sensors 122, an ANN 125, and an ADAS 128 configured to process sensor data (including input from the sensors 122) to generate control signals for the vehicle 111.

In general, one or more sensors (e.g., 122) may be configured on the vehicle 111 to generate sensor data that is input to the ADAS 128 and/or the data storage device 112. The data storage device 112 and/or the ADAS 128 may be configured to generate inference results using the ANN 125. The inference results may include control signals for operating or driving the vehicle 111, recommendations for maintenance services for the vehicle 111, and the like.

In some embodiments, at least a portion of the data generated by the sensors (e.g., 122) is used both for driving assistance by the ADAS 128 and for maintenance prediction by the ANN 125. Optionally, the output of the ANN 125 is available to both the data storage device 112 and the ADAS 128. The ANN 125 may be part of the ADAS 128.

The sensors 122 may include a digital camera, a radar, an ultrasonic sonar, and the like. Other types of sensors may also be used, such as brake sensors, speed sensors, acceleration sensors, airbag sensors, Global Positioning System (GPS) receivers, audio sensors/microphones, vibration sensors, force/stress sensors, deformation sensors, motion sensors, temperature sensors, and so forth. Some of the sensors 122 may be configured primarily to monitor the environment of the vehicle 111; others may be configured primarily to monitor the operating conditions of one or more components of the vehicle 111, such as an internal combustion engine, an exhaust system, an electric motor, brakes, tires, a battery, and so forth.

The ANN 125 of the vehicle 111 is configured to process sensor input data from the sensors 122 to control the vehicle 111 and/or the data storage device 112.

Generally, the outputs of the sensors 122 as a function of time are provided as a stream of sensor data to the ADAS 128 and/or the ANN 125 to provide driving assistance (e.g., autonomous driving) and maintenance prediction.

At least a portion of the sensor data stream may be provided to the data storage device 112 for storage and/or processing. For example, a portion of the ANN 125 may be implemented in the data storage device 112. The inference engine of the data storage device 112 may process the sensor data stream to produce inference results for further processing by the ADAS 128. Thus, the input data stream of the data storage device 112 may include at least a portion of the sensor data stream from the sensors 122; and the output data stream from the data storage device 112 may include inference results generated, using the ANN 125 in the data storage device 112, for the ADAS 128 of the vehicle 111. The operating condition of the vehicle 111, and thus the workload of the data storage device 112, may be determined from patterns in the input/output data streams.

The ANN 125 of the vehicle 111 and/or in the data storage device 112 may include an SNN configured to classify time-based changes in the sensor data and/or detect deviations from known patterns of the sensor data of the vehicle 111. When the ANN 125 detects a deviation from a known pattern, the sensor data corresponding to the deviation may be stored in the data storage device 112 for further analysis and/or further training of the ANN 125.

The data storage device 112 of the vehicle 111 may be configured to record sensor data over a period of time. The recorded sensor data may be used by the ANN 125 for predictive maintenance and/or for further training of the ANN 125. A maintenance service facility (e.g., 127) may download the sensor data 121 from the data storage device 112 and provide the sensor data 121 and corresponding inference result data 123 to the server 119 to facilitate training of the ANN 125.

Optionally or in combination, the data storage device 112 is configured with a machine learning module to customize and/or train the ANN 125 installed in the vehicle 111 and/or the data storage device 112.

The vehicle 111 may have a wireless communication device for communicating with a remote server 119 via wireless signals 113 and a communication network 117. The remote server 119 is typically deployed at a location away from the roadway 114 on which the vehicle 111 travels. For example, the vehicle 111 may provide some sensor data 121 to the server 119 and receive updates of the ANN 125 from the server 119.

The communication network 117 may be a mobile telephone network having one or more base stations (e.g., 115) for receiving wireless signals (e.g., 113). Alternatively or in combination, the communication network 117 may include the internet, where wireless local area network signals (e.g., 113) transmitted by the vehicle 111 are received by an access point (e.g., 115) for further transmission to the server 119. In some embodiments, the vehicle 111 communicates with the server 119 using a communication link 116 to a satellite 118 or a communication balloon.

Server 119 may also communicate with one or more maintenance services (e.g., 127) to receive sensor data 121 and/or desired inference result data 123 for a vehicle (e.g., 111).

For example, the desired inference result data 123 may be generated by a human operator reviewing the sensor data 121 (e.g., images from the sensors 122) and/or the relevant conditions of the vehicle 111. For example, the desired inference result data 123 may include inspection records and/or maintenance records for components of a vehicle (e.g., 111). For example, the inspection records and/or service records may indicate the degree of wear of components inspected during service at the maintenance facility (e.g., 127), the identification of failed or malfunctioning components, and so forth. The sensor data 121 of a vehicle (e.g., 111) obtained over a period of time and associated with the desired inference result data 123 may be used to train the ANN 125 at the server 119 to improve its inference capability.

The updated ANN 125 may be installed in the vehicle 111 at the maintenance service facility 127. Alternatively, the updated ANN 125 may be transmitted to the vehicle 111 wirelessly as an over-the-air update.

FIG. 5 illustrates an autonomous vehicle 111 according to one embodiment. For example, the vehicle 111 in the system of FIG. 4 may be implemented using the autonomous vehicle 111 of FIG. 5.

In general, the vehicle 111 may include an infotainment system 149, a communication device 139, one or more sensors 122, and a computer system 131 connected to some controls of the vehicle 111, such as a steering control 141 for the direction of the vehicle 111, a braking control 143 for stopping the vehicle 111, an acceleration control 145 for the speed of the vehicle 111, and so forth. In some embodiments, the vehicle 111 in the system of FIG. 4 has a similar configuration and/or similar components.

The vehicle 111 of FIG. 5 is equipped with an Advanced Driving Assistance System (ADAS) 128. The ADAS 128 of the vehicle 111 may have an Artificial Neural Network (ANN) 125 for object detection, recognition, identification, and/or classification based on images generated by the sensors 122. A portion of the ANN 125 may be implemented in the data storage device 112.

The computer system 131 of the vehicle 111 may include one or more processors 133, a data storage device 112, and a memory 135 that stores firmware (or software) 147, including computer instructions and data models for the ADAS 128.

The sensors 122 of the vehicle 111 may include a visible light camera, an infrared camera, a lidar, radar, or sonar system, a peripheral sensor, a Global Positioning System (GPS) receiver, a satellite positioning system receiver, a brake sensor, and/or an airbag sensor. Further, the sensors 122 of the vehicle 111 may include audio sensors (e.g., microphones) configured to monitor noise from various components and locations in the vehicle 111; vibration sensors, pressure sensors, force sensors, stress sensors, and/or deformation sensors configured to measure loads on components of the vehicle 111; accelerometers and/or gyroscope sensors that measure the motion of some components of the vehicle 111; and so forth. Such sensors may be used to monitor the operating state and/or health of the components for predictive maintenance.

The sensors 122 may provide a real-time sensor data stream to the computer system 131. The sensor data generated by the sensors 122 of the vehicle 111 may include images of objects captured using a camera that images with light visible to the human eye, a camera that images using infrared light, or a sonar, radar, or lidar system. Preferably, the images are processed by the inference engine of the data storage device 112 to produce inference results as the output data stream of the data storage device 112, thus reducing the computational workload of the host computer system 131.

For example, a camera may be used to obtain information on the lane in which the vehicle 111 travels, which may be processed by the ANN 125 to generate control signals for the vehicle 111. For example, a camera may be used to monitor the operating status/health of components of the vehicle 111, and the resulting images may be processed by the ANN 125 to predict or schedule maintenance services.

The infotainment system 149 of the vehicle 111 may be used to present data and/or inferencing results from the sensors 122. For example, a compressed image having a reduced resolution and refresh frequency may be generated in the sensor 122 and transmitted to the infotainment system 149 for presentation to occupants of the vehicle 111. Optionally, for the presentation, the communication device 139 may establish a connection with a mobile device of an occupant of the vehicle 111.

When the vehicle 111 is configured with the ADAS 128, the output of the ADAS 128 may be used to control (e.g., 141, 143, 145) the acceleration of the vehicle 111, the speed of the vehicle 111, and/or the direction of the vehicle 111 during autonomous driving.

FIGS. 6-8 illustrate training of the artificial neural network 125 for prediction, according to some embodiments.

In FIG. 6, the artificial neural network 125 is trained using a supervised machine learning module 171 to minimize the difference between the predictions 129 produced from the sensor data 121 and the desired inference result data 123.

For example, sensor data 121 may contain an image showing an object; and the desired/expected inference result data 123 can identify image areas occupied by objects, characteristics of objects, classifications of objects, identities of objects, and so forth.

For example, sensor data 121 may include an image of the environment surrounding vehicle 111; and the desired/expected inference result data 123 can include preferred control inputs for steering control 141, braking control 143, and acceleration control 145.

The desired/expected inference result data 123 may be generated by a human operator. For example, sensor data 121 may be used to construct a virtual reality presentation of the situation encountered by vehicle 111, including images from sensor 122 showing the environment of vehicle 111; and the desired/expected inference result data 123 may include responses generated by a human operator in response to a virtual reality presentation of the situation.

The supervised machine learning module 171 may adjust the artificial neural network 125 to reduce/minimize the difference between the predictions 129 generated based on the sensor data 121 and the desired/expected inference result data 123 generated by the human operator.

The supervised learning 171 of FIG. 6 may be applied at the server 119, based on the sensor data and corresponding desired/expected inference result data 123 of a group of vehicles, to generate a generic ANN for the group of vehicles.

The supervised learning 171 of FIG. 6 may also be applied in the vehicle 111, based on the vehicle's own sensor data and inference result data 123, to produce a customized/personalized ANN 125. For example, a generic ANN 125 may initially be used in the vehicle 111; and the vehicle's ANN 125 may be further trained using sensor data specific to the vehicle 111 and the desired/expected inference result data 123 of the vehicle 111, producing a customized/personalized ANN 125 in the vehicle 111.

In FIG. 7, the artificial neural network 125 is trained or refined using an unsupervised machine learning module 175 to facilitate anomaly detection 173. The unsupervised machine learning module 175 is configured to adjust the ANN (e.g., an SNN) to generate normal classifications, clusters, or recognized patterns in the sensor data 121, such that the degree of deviation from these normal classifications, clusters, or recognized patterns can be used to signal anomaly detection 173.

For example, anomaly detection 173 can be used to retain the sensor data 121 associated with an anomaly for further analysis. In response to anomaly detection 173 in the vehicle 111, the computer system 131 may issue a read command to the sensors 122 to retrieve the image data associated with the anomaly and store it in the data storage device 112. The image data associated with the anomaly may be temporarily retained in the memory device of the sensors 122 and loaded into the data storage device 112 over a period of time, using the communication bandwidth available between the sensors 122 and the data storage device 112, without affecting the normal operation of the ADAS 128.

While the vehicle 111 is at the maintenance service facility 127, image data (and other sensor data) associated with the anomaly may be retrieved from the data storage device 112 to produce the desired/expected inference result data 123 to further train the ANN125 using supervised learning 171 of FIG. 6.

Optionally, the ANN 125 may be trained using supervised machine learning 171, as shown in FIG. 8. The supervised learning 171 may be used to minimize the difference between the classifications 179 predicted by the ANN 125 from the sensor data 121 and the expected classifications 177.

For example, a "normal" classification may be assumed without an accident, an event proximate to an accident, or user input indicating an abnormal situation. An accident, event proximate to an accident, or user input may be used to identify an expected "anomaly" classification of sensor data that caused the accident, event, or user input. Supervised machine learning 171 can be used to train artificial neural network 125 to reduce/minimize the difference of class 179 from expected class 177.

Optionally, the inference engine of the data storage device 112 may be configured to accelerate the computation of the portion of the Artificial Neural Network (ANN) 125 implemented in the data storage device 112.

For example, the inference engine may include a neural network accelerator 159 (e.g., as shown in FIG. 9) dedicated to performing at least a portion of the computations involving the Artificial Neural Network (ANN) 125 (e.g., dot products of vectors and tensors, multiply-accumulate operations, etc.).

FIG. 9 illustrates a data storage device 112 having a neural network accelerator 159, according to one embodiment. For example, the data storage device 112 of FIG. 9 can be used with the vehicle 111 illustrated in FIG. 4 or FIG. 5.

In FIG. 9, data storage device 112 has a host interface 157 configured to communicate with a host processor (e.g., 133 in FIG. 5). For example, communication between a host processor (e.g., 133) and the host interface 157 can be based at least in part on a communication protocol for a peripheral component interconnect express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Universal Serial Bus (USB) bus, and/or a Storage Area Network (SAN).

For example, host computer system 131 may communicate with host interface 157 to retrieve inferences generated by data storage device 112 from input data stream 103 containing sensor data generated by sensors 122 of vehicle 111.

For example, the host interface 157 may be used to receive the sensor data 121 of the vehicle 111 from the sensors 122; and the sensor data 121 may optionally be stored in the data storage device 112 for subsequent analysis of accidents or near-accident events.

In FIG. 9, each of memory components 161 through 163 may be a memory integrated circuit configured to store data.

The neural network accelerator 159 and the controller 151 may be implemented via logic circuits formed on one or more integrated circuit dies stacked on the integrated circuit dies of the memory components 161 through 163. Through-silicon vias between the integrated circuit die(s) of the neural network accelerator 159 and the controller 151 and the integrated circuit dies of the memory components 161 through 163 can be used to provide high communication bandwidth for processing the data stored in the memory components 161 through 163 to produce inference results. The inference results can be stored in the local memory 153 of the controller 151 and/or in some of the memory components 161 through 163 for retrieval by a host system, such as the computer system 131 of the vehicle 111. For example, different memory components 161 through 163, or different portions of a memory component (e.g., 161 or 163), can use different through-silicon vias to facilitate parallel access by different portions of the neural network accelerator 159 and the controller 151.

Generally, some memory integrated circuits are volatile, requiring power to maintain stored data; and some memory integrated circuits are non-volatile and can retain stored data even when power is not supplied. Memory components 161-163 can include volatile memory and/or nonvolatile memory. Memory components 161-163 can implement different types of memory or the same type of memory.

Examples of non-volatile memory include flash memory, memory cells formed based on NAND logic gates or NOR logic gates, Phase Change Memory (PCM), magnetic memory (MRAM), resistive random-access memory, and cross point storage and memory devices. A cross point memory device may use transistor-less memory elements, each of which has a memory cell and a selector stacked together as a column. Columns of memory elements are connected via two layers of conductive lines running in perpendicular directions, where the lines of one layer run in one direction in a layer above the columns of memory elements, and the lines of the other layer run in the other direction in a layer below the columns of memory elements. Each memory element can be individually selected at the intersection of one conductive line on each of the two layers. Cross point memory devices are fast and non-volatile, and can be used as a unified memory pool for processing and storage. Further examples of non-volatile memory include Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), and Electronically Erasable Programmable Read-Only Memory (EEPROM). Examples of volatile memory include Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM).

The data storage device 112 may have a controller 151 that includes volatile local memory 153 and at least one processing device 155.

The local memory 153 of the controller 151 may be an embedded memory configured to store instructions for performing the various processes, operations, logic flows, and routines that control operation of the processing device 155, including handling communications between the data storage device 112 and the processor(s) (e.g., 133) of the vehicle 111, and other functions described herein. Optionally, the local memory 153 of the controller 151 may include Read-Only Memory (ROM) for storing microcode and/or memory registers storing, for example, memory pointers and fetched data, and/or volatile memory such as Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM).

In fig. 9, data storage device 112 includes a neural network accelerator 159 coupled to controller 151 and/or memory components 161 through 163.

For example, the neural network accelerator 159 may be configured to perform matrix arithmetic computations more efficiently than the processing device 155 of the controller 151. The computations involving the ANN 125 include matrix multiply-accumulate operations, which can be computationally intensive for a general-purpose processor (e.g., 133, 155). Performing the matrix arithmetic computations using the neural network accelerator 159 can reduce the data to be transmitted to the processor(s) 133 of the vehicle 111 and reduce the computation workload of the processors 133 and 155.
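
For concreteness, the multiply-accumulate kernel that such an accelerator parallelizes can be sketched as follows (plain NumPy; this illustrates the arithmetic, not the accelerator's actual circuit design):

```python
import numpy as np

def layer_forward(weights: np.ndarray, inputs: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One ANN layer as explicit multiply-accumulate steps: out = W @ x + b.
    An accelerator runs many such MAC lanes in parallel next to the memory
    that holds W, instead of shipping W to a general-purpose CPU."""
    acc = bias.astype(float).copy()
    for j, x_j in enumerate(inputs):
        acc += weights[:, j] * x_j      # multiply-accumulate over one input
    return acc                          # equivalent to np.dot(weights, inputs) + bias
```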

When the ANN 125 includes a Spiking Neural Network (SNN), simulating the differential equations used to control the activation levels of SNN neurons can be computationally intensive for a general-purpose processor (e.g., 133, 155). Optionally, the neural network accelerator 159 can use specialized hardware to simulate the differential equations and thus improve the computational efficiency of implementing the SNN.
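
As one hedged example of such a differential equation, a leaky integrate-and-fire neuron model (an assumption for illustration; the disclosure does not name a specific neuron model) can be advanced with a forward-Euler step:

```python
import numpy as np

def lif_step(v: np.ndarray, i_in: np.ndarray, dt: float = 1e-3, tau: float = 0.02,
             v_thresh: float = 1.0, v_reset: float = 0.0):
    """Forward-Euler step of dv/dt = (-v + i_in) / tau for a vector of neurons.
    Returns the updated membrane potentials and a boolean spike mask."""
    v = v + dt * (-v + i_in) / tau      # integrate the differential equation
    spikes = v >= v_thresh              # threshold crossing produces a spike
    v = np.where(spikes, v_reset, v)    # reset the neurons that fired
    return v, spikes
```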

In some implementations, the neural network accelerator 159 is an integrated circuit device separate from the controller 151 and/or the memory components 161 through 163. Alternatively or in combination, the neural network accelerator 159 is integrated with the controller 151 on an integrated circuit die. Alternatively or in combination, a portion of the neural network accelerator 159 can be integrated on the integrated circuit die(s) of at least one of the memory components 161 through 163, as illustrated in FIG. 10.

FIG. 10 illustrates a memory component 160 for expediting neural network computations, according to one embodiment. For example, each or some of memory components 161-163 of FIG. 9 may be implemented using memory component 160 of FIG. 10.

In FIG. 10, the memory component 160 can be formed on an integrated circuit die. An input/output (I/O) interface 169 of the memory component 160 is configured to process input/output signals for the memory component 160. For example, the input/output signals can include address signals specifying locations in the media unit 165, and data signals representing data to be written into, or retrieved from, the locations in the media unit 165 specified via the address signals.

In FIG. 10, the neural network accelerator 159 is coupled with the control logic 167 and/or the media unit 165 to perform computations used in evaluating the output of a portion of the ANN 125 and/or in training the ANN 125.

For example, the input/output interface 169 can receive an address that identifies a matrix stored in the media unit 165 and to be operated on via the neural network accelerator 159. In response to the address, the memory component 160 can provide the computation result of the neural network accelerator 159 as output data, store the output data in a buffer for further operations, or store the output data into the media unit 165 at a location specified via the address signals. Thus, the computations performed by the neural network accelerator 159 can stay within the memory component 160, close to the media unit 165 in which the matrix data is stored.

For example, the state data of SNN neurons can be stored in the media unit 165 according to a predetermined pattern. The neural network accelerator 159 can automatically update the states of the SNN neurons according to the differential equations for controlling the activation levels of the SNN neurons over time. Optionally, the neural network accelerator 159 is configured to process spike generation by the neurons in the neural network. Alternatively, the neural network accelerator 159 and/or the processor 133 can be configured to process the spike generation of the neurons and/or the accumulation of inputs to the SNN.

For example, the sensor 122 generates sensor data (e.g., images) at a predetermined frequency. Each image is stored into the memory components 161 through 163 in a cyclic manner, where the newest image overwrites the oldest one. The memory components 161 through 163 further store the portion of the ANN 125 of the vehicle 111 responsible for processing the images from the sensor 122. The controller 151 processes the images in the memory components 161 through 163 according to that portion of the ANN 125 to produce inference results. The inference results are stored in the memory components 161 through 163 and/or in the local memory 153 of the controller 151 for reading by a host system, such as the computer system 131 of the vehicle 111.
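
The cyclic storage of images can be sketched as a fixed number of slots in which the newest image overwrites the oldest (hypothetical structure; the actual device manages this in its flash translation logic):

```python
class CyclicImageStore:
    """Fixed number of slots; the newest image overwrites the oldest one."""
    def __init__(self, num_slots: int):
        self.slots = [None] * num_slots
        self.next_slot = 0

    def write(self, image: bytes) -> None:
        self.slots[self.next_slot] = image                      # overwrite oldest
        self.next_slot = (self.next_slot + 1) % len(self.slots)

    def latest(self, k: int) -> list:
        """Return the most recent k images, newest first."""
        n = len(self.slots)
        return [self.slots[(self.next_slot - 1 - i) % n] for i in range(k)]
```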

For example, the data storage device 112 receives a sensor data stream from at least one sensor 122 configured on the vehicle 111 and generates inference results based on the sensor data stream according to the portion of the ANN 125 stored in the memory components 161 through 163. The neural network accelerator 159 configured within the data storage device 112 performs at least a portion of the computations involving the artificial neural network 125 and the sensor data stream.

Optionally, neural network accelerator 159 may be configured on an integrated circuit die separate from controller 151 and/or separate from memory components 161-163.

Optionally, neural network accelerator 159 may be configured on an integrated circuit die that includes controller 151 of data storage device 112 or memory components 160, 161, or 163 of data storage device 112.

The neural network accelerator 159 may be configured to perform calculations, such as matrix arithmetic calculations for the ANN and/or differential equation simulations for the SNN, using data stored in the data storage device 112.

An example of the matrix arithmetic computations is a matrix multiply-accumulate operation. After a computation is performed using data stored in the data storage device 112 to produce a matrix arithmetic result, the neural network accelerator 159 can provide the result as output of the data storage device 112 in a data retrieval operation (e.g., in response to a read command). Alternatively or in combination, the result of the matrix arithmetic computation can be buffered in the data storage device 112 as an operand of a next matrix computation, performed in combination with a data matrix retrieved from the non-volatile memory via a read command received at the host interface 157.

When the Artificial Neural Network (ANN) 125 includes a Spiking Neural Network (SNN), the neural network accelerator 159 can be configured to simulate the differential equations controlling the activation levels of the neurons in the Spiking Neural Network (SNN). Optionally, the memory component 160 is configured to store the states of the neurons in the spiking neural network according to a predetermined pattern; and the neural network accelerator 159 is configured to automatically update the states of the neurons over time according to the differential equations. For example, the neural network accelerator 159 can be configured to detect anomalies via a spiking neural network (SNN) trained using unsupervised machine learning.

The computations performed by the neural network accelerator 159 according to the Artificial Neural Network (ANN) 125 involve different types of data that have different usage patterns in the data storage device 112.

For example, making a prediction using the Artificial Neural Network (ANN) 125 involves the data specifying the model of the Artificial Neural Network (ANN) 125, the input data provided to the artificial neurons, and the output data generated by the artificial neurons.

The memory capacity of the data storage device 112 can be partitioned into different portions for the different types of ANN-related data. The different portions can be separately configured to optimize the access to, and the storage of, the corresponding data according to the usage patterns of that data by the neural network accelerator 159 and/or the processor(s) 133 of the computer system 131 in which the data storage device 112 is configured.

The model of an Artificial Neural Network (ANN) 125 can include parameters specifying the static attributes of the individual artificial neurons in the ANN 125 and the neuron connectivity in the ANN 125. The model data of the ANN 125 is static and does not change during the prediction computations performed using the ANN 125. Thus, the usage pattern of the model data is mostly read. However, the model data of the ANN 125 can change when an updated ANN 125 is installed. For example, the vehicle 111 can download an updated ANN 125 from the server 119 into the data storage device 112 of the vehicle 111 to update its prediction capability. The model data of the ANN 125 can also change during or after training of the ANN 125 using a machine learning technique (e.g., 171 or 175). It is preferred to configure a separate memory partition or region of the data storage device 112 to store the model data, where the partition or region operates according to configuration parameters that optimize the memory cells for the specific usage pattern of the model data (e.g., mostly read, infrequently updated). For example, when the memory cells are implemented using flash memory based on NAND logic gates, the memory cells in the ANN model partition/region can be configured to operate in a Multi-Level Cell (MLC) mode, a Triple-Level Cell (TLC) mode, or a Quad-Level Cell (QLC) mode, where each memory cell stores two, three, or four bits, respectively, for increased storage capacity.

The input data provided to the artificial neurons in the ANN 125 can include external inputs and internal inputs. The external inputs are typically generated by the sensors (e.g., 122) of the vehicle 111, rather than by artificial neurons in the ANN 125. The external inputs can be saved in a cyclic fashion, so that the input data for the most recent period of a predetermined driving length can be found in the data storage device 112. Thus, it is preferred to configure a separate memory partition or region of the data storage device 112 to store the external input data, where the partition or region operates according to configuration parameters that optimize the memory cells for the storage pattern of the external input data (e.g., enhanced endurance, cyclic overwrite). For example, when the memory cells are implemented using flash memory based on NAND logic gates, the memory cells in the ANN input partition/region can be configured to operate in a Single-Level Cell (SLC) mode, where each memory cell stores one bit of data, to improve the endurance under cyclic overwrite operations.
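
The pairing of data usage pattern to cell mode described in the last two paragraphs can be summarized in a small configuration sketch (hypothetical constants; the actual mode programming is device-specific):

```python
from dataclasses import dataclass

@dataclass
class RegionConfig:
    name: str
    cell_mode: str   # SLC = 1 bit/cell (endurance); MLC/TLC/QLC = 2/3/4 bits (capacity)
    usage: str

# Mostly-read model data favors density; cyclically overwritten inputs favor endurance.
ANN_MODEL_REGION = RegionConfig("ann_model", "TLC", "mostly read, infrequently updated")
NEURON_INPUT_REGION = RegionConfig("neuron_input", "SLC", "cyclic overwrite of input stream")
# Output data sits in between (assumed MLC here): overwritten more often than the
# model but less often than the inputs, with random read/write access.
NEURON_OUTPUT_REGION = RegionConfig("neuron_output", "MLC", "periodic overwrite, random access")
```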

In some embodiments, the artificial neuron may have a state variable that changes over time in response to input during the prediction computation. For example, the activation level of a spiking neuron may change over time and be considered as a dynamic state variable of the spiking neuron. In some embodiments, such state variable data of the artificial neuron has a similar memory usage pattern as external input data; and thus, the state variable data may be stored in a partition or region configured for external input data. In other embodiments, the state variable data of the artificial neuron is held in a buffer and stored less frequently than the external input; and thus, another partition/region may be configured to store dynamic state variable data for the artificial neuron.

The output data generated by the artificial neurons in the ANN 125 can be buffered for further access by the neural network accelerator 159 and/or the processor(s) 133 of the computer system 131. The output data can include external outputs and internal outputs. The external outputs are generated by artificial neurons as outputs of the ANN 125, such as the results of classifications or predictions made by the ANN 125. The outputs of the ANN 125 are typically further processed by the processor(s) 133 of the computer system 131. The external outputs can be saved periodically (e.g., in a manner similar to the storing of the state variable data). The internal outputs and/or some of the external outputs can be internal inputs to artificial neurons in the ANN 125. In general, it may be unnecessary to store the internal outputs from the buffer of the data storage device into the memory components. In some embodiments, when the buffer capacity of the data storage device 112 is insufficient to hold the entire state variable data and/or the internal outputs, the data storage device 112 can use a swap partition/region to extend the capacity of the buffer. The swap partition/region can be configured for optimized random access and for improved endurance.

The external outputs and/or the dynamic states of neurons can be saved in a cyclic way in a separate output partition or region, so that the external output data and/or the dynamic states can be stored periodically, and the most recent sets can be found in the data storage device 112. The external outputs and/or the dynamic states of neurons can be stored selectively, since some of such data can be regenerated by the ANN from the external inputs stored in the input partition or region. Preferably, the output partition or region is configured to store one or more sets of external outputs and/or dynamic states that cannot be regenerated from the external inputs stored in the input partition or region. When data is stored in a cyclic way in an input/output partition or region, the oldest stored data sets are erased to make room for the most recent data sets. The ANN input/output partition/region can be configured for an optimized sequential write stream, for copying data from the buffer of the data storage device into the memory cells in the memory components of the data storage device.

FIG. 11 illustrates a memory capacity 181 configured to support neural network computations, according to one embodiment. For example, memory capacity 181 of memory components 161 through 163 of data storage device 112 of FIG. 9 may be configured in accordance with FIG. 11 to support neural network computations.

Memory capacity 181 of FIG. 11 may be implemented using a set of memory components (e.g., 161 through 163) of data storage device 112.

A set of regions 183, 185, 187, ... can be created on the memory capacity 181 of the data storage device 112. Each of the regions (e.g., 183, 185, or 187) corresponds to a named portion of the memory capacity 181. Logical addresses are defined within each region. An address map 191 is configured to map between the logical addresses defined in the regions 183, 185, 187, ... and the physical addresses of memory cells in the memory components (e.g., 161 through 163 illustrated in FIG. 9).
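
A minimal sketch of such a named-region address map, assuming a flat physical layout for simplicity (a real device would map through a flash translation layer):

```python
class AddressMap:
    """Map (named region, logical address) pairs onto physical cell addresses."""
    def __init__(self):
        self.regions = {}   # region name -> (physical_base, size)

    def create_region(self, name: str, physical_base: int, size: int) -> None:
        self.regions[name] = (physical_base, size)

    def to_physical(self, region: str, logical_addr: int) -> int:
        base, size = self.regions[region]
        if not 0 <= logical_addr < size:
            raise ValueError("logical address outside region")
        return base + logical_addr

amap = AddressMap()
amap.create_region("ann_model", 0x000000, 0x040000)     # region 183
amap.create_region("neuron_input", 0x040000, 0x080000)  # region 185
amap.create_region("neuron_output", 0x0C0000, 0x020000) # region 187
```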

Address map 191 may contain region optimization settings 192 for regions 183, 185, and 187.

For example, the ANN model region 183 can be a memory/storage partition configured for the model data of the Artificial Neural Network (ANN) 125. The region optimization settings 192 optimize the memory operations in the ANN model region 183 according to the data usage pattern of the ANN model (e.g., mostly read, infrequently updated).

For example, the neuron input region 185 can be a memory/storage partition configured for the external input data to the Artificial Neural Network (ANN) 125. The region optimization settings 192 optimize the memory operations in the neuron input region 185 according to the data usage pattern of the external input data (e.g., enhanced endurance supporting cyclic overwrite of a continuous input data stream written sequentially).

For example, the neuron output region 187 may be a memory/storage partition configured for external output data provided from an Artificial Neural Network (ANN) 125. The region optimization settings 192 optimize memory operations in the neuron output regions 187 according to a data usage pattern of the external output data (e.g., improved endurance for periodically overwriting data with random read/write access).

The data storage device 112 includes a buffer 152 configured to store temporary/intermediate data of the Artificial Neural Network (ANN) 125, such as the internal inputs/outputs of the artificial neurons in the ANN 125.

Optionally, a swap area may be configured in memory capacity 181 to expand the capacity of buffer 152.

Optionally, the address map 191 includes mappings between the logical memory addresses received in the host interface 157 for accessing data of artificial neurons and the identities of those artificial neurons. Thus, a read or write command to access one type of data of an artificial neuron in one region can cause the controller 151 to access another type of data of the artificial neuron in another region.

For example, in response to a request to write external input data for a neuron into the memory capacity 181 of the data storage device 112, the address map 191 can be used to compute the addresses of the model parameters of the neuron in the ANN model region 183 and read the model parameters into the buffer 152, to allow the neural network accelerator 159 to perform the computation of the output of the neuron. The output of the neuron can be saved in the buffer 152 as an internal input to other neurons (e.g., to reduce write amplification). Further, the identities of the other neurons connected to the neuron can also be retrieved from the ANN model region 183 into the buffer 152, which allows the neural network accelerator 159 and/or the processor to further process the propagation of the output in the ANN 125. The retrieval of the model data from the ANN model region 183 can be performed in parallel with the storing of the external input data into the neuron input region 185. Thus, the processors 133 of the computer system 131 of the vehicle 111 do not have to explicitly send read commands to retrieve the model data from the ANN model region 183.

Similarly, in response to a request to read output data of a neuron, the address map 191 can be used to compute the addresses of the model parameters of the neuron stored in the ANN model region 183 and read the model parameters into the buffer 152, to allow the neural network accelerator 159 to apply the internal inputs held in the buffer 152 in performing the computation of the output of the neuron. The computed output can be provided as the response to the request to read the output data of the neuron, without the data storage device 112 having to store the output data in the memory components (e.g., 161 through 163). Thus, the processors 133 and/or the neural network accelerator 159 can control the computations of the neurons via writing inputs to the neurons and/or reading outputs from the neurons.
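
The write-triggers-compute and read-on-demand behavior of the last two paragraphs can be illustrated with a toy stand-in for the device (hypothetical class and a dot-product "neuron"; not the disclosed controller logic):

```python
class InMemoryNeuronDevice:
    """Toy stand-in for the device: a write of input data triggers a model fetch
    and an in-buffer computation; a read returns the buffered output on demand,
    without persisting it to the memory components."""
    def __init__(self, models: dict):
        self.models = models   # neuron_id -> weight vector (model region 183)
        self.inputs = {}       # neuron_id -> latest external input (input region 185)
        self.buffer = {}       # neuron_id -> computed neuron output (buffer 152)

    def write_input(self, neuron_id: int, x: list) -> None:
        self.inputs[neuron_id] = x                      # store the external input
        w = self.models[neuron_id]                      # model fetched in parallel
        self.buffer[neuron_id] = sum(wi * xi for wi, xi in zip(w, x))

    def read_output(self, neuron_id: int) -> float:
        return self.buffer[neuron_id]                   # served from the buffer

dev = InMemoryNeuronDevice({7: [0.5, -1.0, 2.0]})
dev.write_input(7, [1.0, 2.0, 3.0])   # host write; device computes 0.5 - 2.0 + 6.0
assert dev.read_output(7) == 4.5      # host read; no explicit model read command
```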

In general, the incoming external input data to the ANN 125 can be raw sensor data 121 generated directly by the sensors (e.g., 122), without processing by the processors 133 and/or the neural network accelerator 159. Alternatively, indirect sensor data 121 that has been processed by the processors 133 for the ANN 125, according to the signals from the sensors 122, can be provided as the external input data. The incoming external input data can be accepted at the host interface 157, written into the neuron input region 185 in a cyclic way, and automatically buffered in the buffer 152 for the neural network accelerator 159 to generate neuron outputs using the model stored in the ANN model region 183. The outputs generated by the neural network accelerator 159 can be further buffered as internal inputs for further application of the model in the ANN model region 183. When the external outputs become available, the data storage device 112 can report the completion of the write requests with an indication of the availability of the external outputs. Optionally, the controller 151 and/or the neural network accelerator 159 can generate internal read commands to propagate signals in the ANN 125 while generating the external outputs. Alternatively, the host processors 133 can control the propagation of signals in the ANN 125 by selectively reading the outputs of neurons; and the data storage device 112 can actively buffer the data that may be needed in the buffer 152 to accelerate the ANN computation.

FIG. 12 illustrates the configuration of a memory region 183 for an Artificial Neural Network (ANN) model according to one embodiment. For example, the configuration of FIG. 12 can be implemented in the data storage device 112 of FIG. 9 having the logical memory capacity 181 of FIG. 11. For example, the settings 193 of FIG. 12 can be part of the region optimization settings 192 of FIG. 11.

The configuration of FIG. 12 maps the ANN model region 183 to at least one memory component A 161. Preferably, the at least one memory component A 161 can be used by the controller 151 in parallel with the memory components (e.g., 163) hosting the other regions (e.g., 185 and 187) of ANN data. For example, the memory component A 161 can be in an integrated circuit package separate from the integrated circuit packages for the other regions (e.g., 185 and 187). Alternatively, the memory components 161 through 163 are formed on separate integrated circuit dies embedded in a same integrated circuit package. Alternatively, the memory components 161 through 163 can be formed on separate regions of an integrated circuit die, where the separate regions can operate substantially in parallel (e.g., for read, for erase, and/or for write).

In FIG. 12, the settings 193 are optimized for a mostly-read, infrequently-updated usage pattern.

FIG. 13 illustrates the configuration of a region 185 for the inputs to artificial neurons according to one embodiment. For example, the configuration of FIG. 13 can be implemented in the data storage device 112 illustrated in FIGS. 9 and/or 11. For example, the settings 195 of FIG. 13 can be part of the region optimization settings 192 of FIG. 11.

The configuration of FIG. 13 maps the neuron input region 185 to at least one memory component B 163. Preferably, the at least one memory component B 163 can be used by the controller 151 in parallel with the memory components (e.g., 161) hosting the other regions (e.g., 183 and 187) of ANN data. For example, the memory component B 163 can be in an integrated circuit package separate from the integrated circuit packages for the other regions (e.g., 183 and 187). Alternatively, the memory components 161 through 163 are formed on separate integrated circuit dies embedded in a same integrated circuit package. Alternatively, the memory components 161 through 163 can be formed on separate regions of an integrated circuit die, where the separate regions can operate substantially in parallel (e.g., for read, for erase, and/or for write).

In FIG. 13, the settings 195 are optimized for a usage pattern of enhanced endurance under cyclic sequential overwrite when recording a continuous input data stream sampled at fixed time intervals.

FIG. 14 illustrates the configuration of a region 187 for the outputs from artificial neurons according to one embodiment. For example, the configuration of FIG. 14 can be implemented in the data storage device 112 illustrated in FIGS. 9 and/or 11. For example, the settings 197 of FIG. 14 can be part of the region optimization settings 192 of FIG. 11.

The configuration of FIG. 14 maps the neuron output region 187 to at least one memory component C 162. Preferably, the at least one memory component C 162 can be used by the controller 151 in parallel with the memory components (e.g., 161 and 163) hosting the other regions (e.g., 183 and 185) of ANN data. For example, the memory component C 162 can be in an integrated circuit package separate from the integrated circuit packages for the other regions (e.g., 183 and 185). Alternatively, the memory components 161 through 163 are formed on separate integrated circuit dies embedded in a same integrated circuit package. Alternatively, the memory components 161 through 163 can be formed on separate regions of an integrated circuit die, where the separate regions can operate substantially in parallel (e.g., for read, for erase, and/or for write).

In FIG. 14, the settings 197 are optimized for a usage pattern of buffered data with periodic overwrite and random access. For example, via the optimization settings 193 through 197, the memory cells are configured to be updated/overwritten in the neuron output region 187 at a frequency higher than in the ANN model region 183, but lower than in the neuron input region 185.

The communication protocol/interface may be configured to allow the data storage device to dynamically perform neural network acceleration with reduced data traffic to the host system.

For example, a host processor (e.g., 133) of the vehicle 111 may provide a write command to the data storage device 112 to store a model of the artificial neural network in a model partition (e.g., 183).

To use the ANN model in classifications and/or predictions, the host processor (e.g., 133) of the vehicle 111 can optionally stream input data for the ANN 125 into the neuron input partition (e.g., 185). The neural network accelerator 159 of the data storage device 112 can automatically apply the images from the sensors 122 and, if present, the input data from the host processor 133 to the model stored in the ANN model partition (e.g., 183), in accordance with the address map 191. The data storage device 112 makes the computed outputs available for propagation in the ANN 125. Preferably, the computed outputs are made available to the neural network accelerator 159 through the buffer 152, without the need to store the intermediate outputs into the memory components (e.g., 161 through 163). Thus, the data communication between the host processor (e.g., 133) and the data storage device 112 for transporting the outputs of neurons can be reduced. When the outputs have propagated to the output neurons in the ANN 125, the data storage device 112 can provide a response to a request from the host processor (e.g., 133). The response indicates that external outputs from the neurons in the ANN 125 are available. In response, the host processor (e.g., 133) of the vehicle 111 can optionally issue read commands to retrieve the external outputs for further processing.
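
From the host's perspective, the exchange reduces to three commands; the sketch below assumes a hypothetical driver handle `dev` whose write command returns a status dictionary (an assumption for illustration, not the disclosed protocol):

```python
from typing import Iterable, Iterator

def run_inference_cycle(dev, model_blob: bytes, frames: Iterable[bytes]) -> Iterator[bytes]:
    """Three-command exchange: install the model, stream inputs, read outputs."""
    dev.write("ann_model", model_blob)                 # 1. model partition (183)
    for frame in frames:
        status = dev.write("neuron_input", frame)      # 2. input partition (185)
        if status.get("external_output_available"):    #    device signals readiness
            yield dev.read("neuron_output")            # 3. optional retrieval (187)
```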

FIG. 15 illustrates communications between an autonomous vehicle 111 and a data storage device 112, according to one embodiment. For example, the communications as illustrated in FIG. 15 can be implemented in the vehicle 111 of FIG. 4 or FIG. 5, having the data storage device 112 illustrated in FIG. 9.

In FIG. 15, the processor 133 of the host system can be configured with a simplified instruction set 201 to perform neural network computations, since some of the computations involving the ANN 125 are performed by the neural network accelerator 159 within the data storage device 112. It is not necessary to transport the model data back to the processor 133 during the predictions and/or classifications made using the ANN 125.

The sensor 122 may generate a continuous input stream 205 as part of the sensor data 121 of the vehicle 111. The sensor data 121 in the input stream 205 may be generated at fixed predetermined time intervals (e.g., during operation of the vehicle 111).

The input stream 205 is applied to the input neurons in the ANN 125. Input neurons in the ANN 125 are configured to accept external inputs to the ANN 125; and output neurons are configured to provide external outputs from the ANN 125.

The processor 133 may execute instructions 201 that process the output data 207 from the data storage device 112 and some of the sensor data 121.

For example, the processor 133 can write the sensor data 121 as part of the input stream 205 into the neuron input region 185 and read the output data 207 from the neuron output region 187 that was generated by the neural network accelerator 159 using the ANN data in the model region 183.

The data storage device 112 stores the input stream 205 into the neuron input region 185 in a cyclic way, where the oldest input set (corresponding to the earliest time instance of data sampling among the data sets currently stored in the neuron input region 185) is erased to store the newest input set.

For each input data set, the neural network accelerator 159 applies the model of the ANN 125 stored in the ANN model region 183. The neural network accelerator 159 (or the processor 133) can control the propagation of signals within the neural network. When the output neurons of the ANN 125 generate their outputs in response to the input data set, the data storage device 112 can provide an indication to the processor 133 that the neuron outputs are ready for retrieval. The indication can be configured in a response to the request from the processor 133 to write the input data set into the neuron input region 185. The processor 133 can optionally retrieve the output data 207 (e.g., in accordance with conditions and/or criteria programmed in the instructions 201).

In some embodiments, a trigger parameter is configured in the data storage device 112. When an output parameter in the external output 217 meets the requirement specified by the trigger parameter, the data storage device 112 provides the response to the request from the processor 133 to write the input data set into the neuron input region 185.
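
A minimal sketch of such trigger-parameter gating (hypothetical threshold semantics; the disclosure does not specify the comparison used):

```python
def input_write_response(trigger: dict, external_outputs: dict) -> dict:
    """Respond to the host's input-write request; flag output availability only
    when every configured trigger parameter is met by the external outputs."""
    met = all(external_outputs.get(name, 0.0) >= threshold
              for name, threshold in trigger.items())
    return {"status": "ok", "external_output_available": met}

# Example: report availability only when the 'anomaly_score' output reaches 0.9.
response = input_write_response({"anomaly_score": 0.9}, {"anomaly_score": 0.95})
assert response["external_output_available"]
```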

FIG. 16 illustrates communication within data storage device 112 according to one embodiment. For example, the communications of FIG. 16 may be implemented in the data storage device 112 shown in FIG. 9.

In FIG. 16, the model region 183 stores the model 213 of the ANN 125. In response to receiving, in the buffer 152, a set of external inputs 215 for a time instance from the input stream 205, the data storage device 112 can write the external inputs 215 into the input region 185, while in parallel retrieving the neuron model 212 that contains the portion of the ANN model 213 corresponding to the parameters of the input neurons and/or the identities of the neurons connected to the input neurons. The buffer 152 allows the neural network accelerator 159 to combine the neuron model 212 and the external inputs 215 to generate the outputs 227 of the input neurons.

In general, the neuron outputs 227 can be included as part of the internal outputs 216 for further propagation within the ANN 125 and/or as part of the external outputs 217 for the processor 133.

In a way similar to the generation of the neuron outputs 227 from the external inputs 215, the internal outputs 216 are stored in the buffer 152 as internal inputs for further propagation in the ANN 125. For example, a portion of the internal inputs can cause the controller 151 and/or the neural network accelerator 159 to retrieve the corresponding neuron models 212 relevant to those internal inputs, such that the internal inputs are applied in the neural network accelerator 159 to the corresponding neuron models 212 to generate their neuron outputs 227.

When the entire set of external outputs 217 is available in buffer 152, external outputs 217 may be stored in output region 187.

Optionally, the data storage device 112 does not store each set of external outputs 217 corresponding to a set of stored external inputs 215 sampled at a time instance. For example, the data storage device 112 can be configured to store one set of external outputs 217 for every predetermined number of sets of inputs (e.g., 215). Alternatively or in combination, the processor 133 can determine whether or not to store the external outputs 217. For example, the data storage device 112 can be configured to store the external outputs 217 for further processing in response to the processor 133 retrieving the external outputs 217. For example, the data storage device 112 can be configured to store the external outputs 217 in response to a write command from the processor 133 after the external outputs 217 are processed in the processor 133.
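
The store-every-Nth-output policy mentioned above can be sketched as a small counter (assumed policy shape for illustration):

```python
class OutputStorePolicy:
    """Persist one set of external outputs for every N input sets."""
    def __init__(self, every_n: int):
        self.every_n = every_n
        self.count = 0

    def should_store(self) -> bool:
        self.count += 1
        return self.count % self.every_n == 0

policy = OutputStorePolicy(every_n=4)
stored = [policy.should_store() for _ in range(8)]
assert stored == [False, False, False, True, False, False, False, True]
```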

Server 119, computer system 131, and/or data storage device 112 may each be implemented as one or more data processing systems.

The present disclosure includes methods and apparatus to perform the methods described above, including data processing systems to perform these methods, and computer readable media containing instructions that when executed on data processing systems cause the systems to perform these methods.

A typical data processing system may include interconnects (e.g., buses and system core logic) that interconnect a microprocessor and a memory. The microprocessor is typically coupled to a cache memory.

An interconnect interconnects the microprocessor and the memory together and to an input/output (I/O) device via an I/O controller. The I/O devices may include display devices and/or peripheral devices such as mice, keyboards, modems, network interfaces, printers, scanners, cameras, and other devices known in the art. In one embodiment, when the data processing system is a server system, some I/O devices, such as printers, scanners, mice, and/or keyboards, are optional.

The interconnect may include one or more buses connected to each other through various bridges, controllers, and/or adapters. In one embodiment, the I/O controller includes a Universal Serial Bus (USB) adapter for controlling USB peripheral devices, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripheral devices.

The memory may include one or more of the following: read-only memory (ROM), volatile Random Access Memory (RAM), and non-volatile memory such as hard drives, flash memory, and the like.

Volatile RAM is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or another type of memory system that maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory.

The non-volatile memory may be a local device coupled directly to the rest of the components in the data processing system. Non-volatile storage remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, may also be used.

In this disclosure, some functions and operations are described as being performed by or caused by software code for simplicity of description. However, such expressions are also used to specify the function resulting from execution of the code/instructions by a processor (e.g., a microprocessor).

Alternatively or in combination, the functions and operations described herein can be implemented using special-purpose circuitry, with or without software instructions, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

While an embodiment may be implemented in a fully functional computer and computer system, the various embodiments are capable of being distributed as a computing product in a variety of forms, and are capable of being applied regardless of the particular type of machine or computer-readable medium used to actually carry out the distribution.

At least some aspects of the disclosure may be embodied, at least in part, in software. That is, the techniques may be performed in a computer system or other data processing system in response to a processor (e.g., microprocessor) of the computer system or other data processing system executing sequences of instructions contained in a memory (e.g., ROM, volatile RAM, non-volatile memory, cache, or remote storage).

The routines executed to implement the embodiments can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions referred to as a "computer program". The computer programs typically include one or more instructions stored at various times in various memory and storage devices in a computer which, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute elements involving the various aspects.

A machine-readable medium can be used to store software and data which, when executed by a data processing system (e.g., 131 and/or 119), causes the system to perform the various methods discussed above (e.g., the method of FIG. 2 and/or the method of FIG. 3). The executable software and data can be stored in various places including, for example, ROM, volatile RAM, non-volatile memory, and/or cache (e.g., 112, 135, and/or 152) as discussed above. Portions of this software and/or data can be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times, in different communication sessions, or in a same session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time.

Examples of computer readable media include, but are not limited to, non-transitory, recordable and non-recordable type media such as volatile and non-volatile memory devices, Read Only Memory (ROM), Random Access Memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., compact disk read only memory (CD ROM), Digital Versatile Disks (DVD), etc.), among others. A computer-readable medium may store instructions.

The instructions may also be embodied in digital and analog communications links using electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, and the like. However, a propagated signal, such as a carrier wave, an infrared signal, a digital signal, etc., is not a tangible, machine-readable medium nor is it configured to store instructions.

In general, a machine-readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).

In various embodiments, hard-wired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are not limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

The foregoing description and drawings are illustrative and are not to be construed as restrictive. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to "one embodiment" or "an embodiment" in the present disclosure are not necessarily references to the same embodiment; and such references mean at least one.

In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
