Memory controller and operating method thereof

Document No.: 1888613 | Publication date: 2021-11-26

Note: This technology, "Memory controller and operating method thereof" (存储器控制器及其操作方法), was designed and created by 金到训 on 2020-09-15. Abstract: The invention discloses an electronic device. According to the present invention, a memory controller having an improved operation speed may include: a main memory; a processor configured to generate a command for accessing data stored in the main memory; a scheduler configured to store commands and output the commands according to preset criteria; a cache memory configured to cache and store data accessed by the processor among data stored in the main memory; and a hazard filter configured to: store information on an address of the main memory corresponding to a write command among the commands; upon receiving a write command, provide a pre-completion response for the write command to the scheduler; and provide the write command to the main memory.

1. A memory controller, comprising:

a main memory;

a processor that generates a command for accessing data stored in the main memory;

a scheduler which stores the command and outputs the command according to preset criteria;

a cache memory that caches and stores data accessed by the processor among the data stored in the main memory; and

a hazard filter storing information about an address of the main memory corresponding to a write command among the commands, providing a pre-completion response for the write command to the scheduler upon receipt of the write command, and providing the write command to the main memory.

2. The memory controller according to claim 1, wherein when a first command and a second command corresponding to the same address among addresses of the main memory are sequentially input, the scheduler holds the second command without outputting the second command to the cache memory until a pre-completion response for the first command is received from the hazard filter.

3. The memory controller of claim 1, wherein the scheduler outputs commands for different addresses among addresses of the main memory according to an order of the commands received from the processor.

4. The memory controller according to claim 1, wherein when a read command for an address of the main memory corresponding to the write command is input, the hazard filter provides the read command to the main memory after receiving a write completion response for the write command from the main memory.

5. The memory controller of claim 1, wherein when data corresponding to a read command among the commands is stored in a cache line corresponding to an address of the read command, the cache memory provides the data stored in the cache line to the scheduler.

6. The memory controller of claim 1, wherein the cache memory passes a read command among the commands to the hazard filter when data corresponding to the read command is not present in a cache line corresponding to an address of the read command.

7. The memory controller of claim 1, wherein the hazard filter comprises a lookup table that stores information about the address of the main memory corresponding to the write command.

8. The memory controller of claim 7, wherein the hazard filter removes information about the address of the main memory corresponding to the write command from the lookup table when a write completion response is received from the main memory for the write command.

9. The memory controller of claim 1, wherein multiple addresses of the main memory are mapped with one address of the cache memory when caching data from the main memory to the cache memory.

10. The memory controller of claim 1, wherein the main memory is a dynamic random access memory.

11. The memory controller of claim 1, wherein the cache memory is a static random access memory.

12. A memory controller, comprising:

a main memory including main data stored in an area corresponding to a plurality of main memory addresses;

a cache memory that caches and stores a portion of the main data in cache lines corresponding to the plurality of main memory addresses;

a processor that generates a command for accessing the main data;

a scheduler to provide the commands to the cache memory according to an order in which the commands are generated; and

a hazard filter, responsive to a write command among the commands, to provide a pre-write completion response to the scheduler and to provide the write command to the main memory to perform an operation corresponding to the write command.

13. The memory controller of claim 12, wherein the hazard filter comprises a lookup table that stores a main memory address corresponding to the write command among the plurality of main memory addresses.

14. The memory controller of claim 13, wherein in response to a read command among the commands, the hazard filter provides the read command to the main memory according to whether the main memory address corresponding to the read command is stored in the lookup table.

15. The memory controller of claim 12, wherein any empty cache line among the cache lines caches data from the area corresponding to the plurality of main memory addresses, an empty cache line being a cache line in which no data is stored.

16. The memory controller of claim 12, wherein when data corresponding to a read command among the commands is stored in a cache line corresponding to a main memory address of the read command, the cache memory provides the data stored in the cache line to the scheduler.

17. The memory controller of claim 12, wherein the cache memory passes a read command among the commands to the hazard filter when data corresponding to the read command is not present in a cache line corresponding to a main memory address of the read command.

18. The memory controller of claim 12, wherein the main memory is a dynamic random access memory.

19. The memory controller of claim 12, wherein the cache memory is a static random access memory.

Technical Field

The present disclosure relates to an electronic device, and more particularly, to a memory controller and an operating method thereof.

Background

A storage device is a device that stores data under the control of a host device such as a computer or a smartphone. The storage device may include a memory device that stores data and a memory controller that controls the memory device. Memory devices may be classified into volatile memory devices and non-volatile memory devices.

A volatile memory device may be a device that stores data only when power is supplied thereto, and loses the stored data when the power supply is interrupted. Volatile memory devices may include Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), and the like.

A nonvolatile memory device is a device that does not lose stored data even when the power supply is interrupted. Non-volatile memory devices can include Read Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), flash memory, and the like.

Disclosure of Invention

Embodiments of the present disclosure provide a memory controller having an improved operation speed and an operating method thereof.

A memory controller according to an embodiment of the present disclosure may include: a main memory; a processor configured to generate a command for accessing data stored in the main memory; a scheduler configured to store commands and output the commands according to preset criteria; a cache memory configured to cache and store data accessed by the processor among data stored in the main memory; and a hazard filter configured to store information about an address of the main memory corresponding to a write command among the commands, to provide, upon receipt of the write command, a pre-completion response for the write command to the scheduler, and to provide the write command to the main memory.

A memory controller according to an embodiment of the present disclosure may include: a main memory including main data stored in an area corresponding to a plurality of main memory addresses; a cache memory configured to cache and store a portion of the main data in cache lines corresponding to the plurality of main memory addresses; a processor configured to generate a command for accessing the main data; a scheduler configured to provide commands to the cache memory according to an order in which the commands are generated; and a hazard filter configured to provide a pre-write completion response to the scheduler in response to a write command among the commands, and provide the write command to the main memory to perform an operation corresponding to the write command.
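The hazard filter's behavior described above can be pictured with a small sketch. This is a toy model, not the patented implementation: the class and method names, the callback-based write completion, and the use of a Python set as the "lookup table" are all illustrative assumptions.

```python
class MainMemory:
    """Toy main memory with explicit write-completion callbacks (illustrative)."""

    def __init__(self):
        self.cells = {}
        self._in_flight = []  # (addr, data, on_done) writes not yet completed

    def start_write(self, addr, data, on_done):
        self._in_flight.append((addr, data, on_done))

    def complete_writes(self, addr):
        # Finish every in-flight write to `addr` and issue its
        # write completion response (the on_done callback).
        remaining = []
        for a, d, cb in self._in_flight:
            if a == addr:
                self.cells[a] = d
                cb()
            else:
                remaining.append((a, d, cb))
        self._in_flight = remaining

    def read(self, addr):
        return self.cells.get(addr)


class HazardFilter:
    """Records in-flight write addresses in a lookup table, answers writes
    with an immediate pre-completion response, and delays a read only while
    a write to that same address is still in flight."""

    def __init__(self, main_memory):
        self.mm = main_memory
        self.pending = set()  # lookup table of in-flight write addresses

    def write(self, addr, data):
        self.pending.add(addr)
        self.mm.start_write(addr, data,
                            on_done=lambda a=addr: self.pending.discard(a))
        return "pre-completion"  # scheduler may issue the next command now

    def read(self, addr):
        if addr in self.pending:
            # Read-after-write hazard: wait for the write completion
            # response before forwarding the read to main memory.
            self.mm.complete_writes(addr)
        return self.mm.read(addr)
```

In this sketch, a write returns its pre-completion response immediately, so the scheduler never stalls on writes; only a read that targets a still-pending write address waits for main memory.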

Drawings

Fig. 1 is a diagram for describing a memory device according to an embodiment of the present disclosure.

Fig. 2 is a diagram for describing the memory device of fig. 1.

Fig. 3 is a diagram for describing the configuration of any one of the memory blocks of fig. 2.

Fig. 4 is a diagram for describing a read-modify-write operation on L2P mapping data stored in the main memory described with reference to fig. 1.

Fig. 5 is a diagram for describing a read-modify-write operation on a valid page table (VPT) of physical addresses.

Fig. 6 is a diagram illustrating a structure of a memory controller according to an embodiment of the present disclosure.

Fig. 7 is a flowchart illustrating an operation of the memory controller described with reference to fig. 6.

Fig. 8 is a diagram for describing a structure of a memory controller according to another embodiment of the present disclosure.

Fig. 9 and 10 are flowcharts for describing the operation of the memory controller described with reference to fig. 8.

Fig. 11 is a diagram illustrating an embodiment of the memory controller of fig. 1.

Fig. 12 is a block diagram showing a memory card system to which a storage device according to an embodiment of the present disclosure is applied.

Fig. 13 is a block diagram illustrating a Solid State Drive (SSD) system to which a storage device according to an embodiment of the present disclosure is applied.

Fig. 14 is a block diagram showing a user system to which a storage device according to an embodiment of the present disclosure is applied.

Detailed Description

Specific structural or functional descriptions of embodiments according to the concepts disclosed in the present specification or application are shown only to describe embodiments according to the concepts of the present disclosure. Embodiments according to the concepts of the present disclosure may be embodied in various forms and the description is not limited to the embodiments described in the specification or the application.

Fig. 1 is a diagram for describing a storage device 50 according to an embodiment of the present disclosure.

Referring to fig. 1, the storage device 50 may include a memory device 100 and a memory controller 200 controlling an operation of the memory device 100. The storage device 50 may be a device that stores data under the control of a host 500 such as: cellular phones, smart phones, MP3 players, laptop computers, desktop computers, game consoles, TVs, tablet PCs, in-vehicle infotainment systems, and the like.

The storage device 50 may be any one of various types of storage devices according to a host interface as a communication method with the host 500. For example, the storage device 50 may include one of the following: SSD, multimedia cards in the form of MMC, eMMC, RS-MMC and micro MMC, secure digital cards in the form of SD, mini SD or micro SD, Universal Serial Bus (USB) storage devices, Universal Flash Storage (UFS) devices, Personal Computer Memory Card International Association (PCMCIA) card type storage devices, Peripheral Component Interconnect (PCI) card type storage devices, PCI express (PCI-E) card type storage devices, Compact Flash (CF) cards, smart media cards, memory sticks, and the like.

The storage device 50 may be manufactured as one of various types of packages. For example, the storage device 50 may be manufactured as one of the following: a Package On Package (POP), a System In Package (SIP), a System On Chip (SOC), a Multi-Chip Package (MCP), a Chip On Board (COB), a Wafer-level Fabricated Package (WFP), a Wafer-level Stack Package (WSP), and the like.

The memory device 100 may store data. The memory device 100 may operate under the control of the memory controller 200. The memory device 100 may include a memory cell array (not shown) including a plurality of memory cells storing data.

Each of the memory cells may be configured as a single-level cell (SLC) storing one bit of data, a multi-level cell (MLC) storing two bits of data, a triple-level cell (TLC) storing three bits of data, or a quad-level cell (QLC) storing four bits of data.

The memory cell array (not shown) may include a plurality of memory blocks. One memory block may include a plurality of pages. In an embodiment, a page may be the unit for storing data in the memory device 100 or reading stored data from the memory device 100. A memory block may be the unit for erasing data.

In an embodiment, the memory device 100 may be: double data rate synchronous dynamic random access memory (DDR SDRAM), low power double data rate fourth generation (LPDDR4) SDRAM, Graphics Double Data Rate (GDDR) SDRAM, low power DDR (LPDDR), Rambus Dynamic Random Access Memory (RDRAM), NAND flash memory, vertical NAND flash memory, NOR flash memory devices, Resistive Random Access Memory (RRAM), phase change memory (PRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), spin transfer torque random access memory (STT-RAM), and the like. In this specification, for convenience of description, it is assumed that the memory device 100 is a NAND flash memory.

The memory device 100 is configured to receive a command CMD and an address ADDR from the memory controller 200 and access an area of the memory cell array selected by the address ADDR. The memory device 100 may perform the operation indicated by the command CMD on the area selected by the address ADDR. For example, the memory device 100 may perform a write operation (or program operation), a read operation, and an erase operation in response to the command CMD. During a program operation, the memory device 100 may program data to an area selected by the address ADDR. During a read operation, the memory device 100 may read data from an area selected by the address ADDR. During an erase operation, the memory device 100 may erase data stored in the area selected by the address ADDR.

The memory controller 200 may control the overall operation of the memory device 50.

When power is supplied to the storage device 50, the memory controller 200 may run Firmware (FW). When the memory device 100 is a flash memory device, the Firmware (FW) may include a Host Interface Layer (HIL) controlling communication with the host 500, a Flash Translation Layer (FTL) translating logical addresses provided by the host 500 into physical addresses of the memory device 100, and a Flash Interface Layer (FIL) controlling communication with the memory device 100.

In an embodiment, the memory controller 200 may receive data and a Logical Block Address (LBA) from the host 500 and convert the LBA into a Physical Block Address (PBA) indicating the address of memory cells in the memory device 100 in which the received data is to be stored. In this specification, LBA and "logical address" may be used interchangeably. Likewise, PBA and "physical address" may be used interchangeably.
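The LBA-to-PBA conversion above amounts to a table lookup. A minimal sketch follows; the table contents and function name are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of LBA -> PBA translation (addresses are illustrative).
l2p_map = {0: 100, 1: 205, 2: 317}  # logical block address -> physical block address

def translate(lba):
    """Return the physical block address (PBA) mapped to a logical block address."""
    return l2p_map[lba]
```

In a real controller this table is the L2P mapping data held in the main memory 300 and updated as described later.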

The memory controller 200 may control the memory device 100 to perform a program operation, a read operation, or an erase operation according to a request of the host 500. During a programming operation, the memory controller 200 may provide a write command, PBA, write data, and the like to the memory device 100. During a read operation, the memory controller 200 may provide a read command and PBA to the memory device 100. During an erase operation, the memory controller 200 may provide an erase command and PBA to the memory device 100.

In an embodiment, the memory controller 200 may generate and transmit commands, addresses, and data to the memory device 100 regardless of whether there is a request from the host 500. For example, the memory controller 200 may provide commands, addresses, and data to the memory device 100 for performing read operations and program operations along with performing wear leveling, read reclamation, garbage collection, and the like.

In an embodiment, the memory controller 200 may control two or more memory devices 100. In this case, the memory controller 200 may control two or more memory devices 100 according to the interleaving method to improve operation performance. The interleaving method may be a method of controlling operations on two or more memory devices 100 to overlap each other.

The storage device 50 may further include a main memory 300. The main memory 300 may temporarily store data provided from the host 500 or may temporarily store data read from the memory device 100. In an embodiment, the main memory 300 may be a volatile memory device. For example, the main memory 300 may include Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), or both.

In an embodiment, the main memory 300 may read metadata stored in the memory device 100 and store the read metadata therein.

The metadata may be data including various information required to control the storage device 50. For example, the metadata may include bad block information, which is information on bad blocks among the plurality of memory blocks included in the memory device 100, and firmware to be executed by the processor 210 of the memory controller 200.

In an embodiment, the metadata may include mapping data indicating a correspondence between logical addresses provided by the host 500 and physical addresses of memory cells included in the memory device 100, and a valid page table indicating whether data stored in a page included in the memory device 100 is valid data. In an embodiment, the metadata may include a plurality of valid page tables. The valid page table may include data in the form of a bitmap indicating whether the data stored in each 4 KB page is valid.

Alternatively, in various embodiments, the metadata may include read count data indicating the number of read operations performed on a memory block included in the memory device 100; erase cycle data indicating the number of times a memory block included in the memory device 100 has been erased; hot/cold data indicating whether data stored in a page included in the memory device 100 is hot data or cold data; and log data indicating modified contents of the mapping data.

In an embodiment, the metadata stored in the main memory 300 may include data blocks whose data structures differ according to the type of metadata. For example, each type of metadata may have a different data size. Accordingly, the sizes of the metadata stored in the main memory 300 may differ from one metadata type to another.
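The bitmap form of the valid page table described above can be sketched as follows. The table size and function names are illustrative assumptions; one bit records the validity of one 4 KB page.

```python
# Valid page table (VPT) sketch: one validity bit per 4 KB page.
vpt = bytearray(2)  # validity bits for 16 pages (illustrative size)

def set_valid(page, valid=True):
    """Mark one page's data as valid or invalid in the bitmap."""
    byte, bit = divmod(page, 8)
    if valid:
        vpt[byte] |= 1 << bit
    else:
        vpt[byte] &= ~(1 << bit) & 0xFF

def is_valid(page):
    """Return True if the page currently holds valid data."""
    byte, bit = divmod(page, 8)
    return bool((vpt[byte] >> bit) & 1)
```

When a new physical address is allocated for a rewritten logical address, the controller would clear the old page's bit and set the new page's bit in this table.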

In an embodiment of the present disclosure, memory controller 200 may include a processor 210 and a cache memory 220.

The processor 210 may control the overall operation of the memory controller 200. The processor 210 may run Firmware (FW). The processor 210 may perform operations required to access the memory device 100. For example, the processor 210 may provide a command to the memory device 100 and control the memory device 100 and the main memory 300 to perform an operation corresponding to the command.

For example, when a write request is received from host 500, processor 210 may translate a logical address corresponding to the write request to a physical address. The processor 210 may store mapping data indicating a correspondence between logical addresses and physical addresses in the main memory 300.

To store the mapping data, the processor 210 may read, from the main memory 300, the map segment including mapping information of the logical address provided by the host 500. Thereafter, the processor 210 may record a physical address corresponding to the logical address in the map segment. The processor 210 may store the map segment in which the physical address is recorded in the main memory 300 again. When a physical address is allocated, the data of the valid page table corresponding to the allocated physical address may also be updated.
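The read-modify-write sequence above can be sketched with an illustrative segment layout; the segment size, names, and dictionary-backed "main memory" are assumptions for the sketch, not the patented structure.

```python
ENTRIES_PER_SEGMENT = 4  # illustrative; real map segments hold many more entries

# Main memory holding L2P map segments: segment index -> list of PBAs.
main_memory = {0: [None] * ENTRIES_PER_SEGMENT}

def update_mapping(lba, pba):
    """Record `pba` for `lba` via read-modify-write of the whole map segment."""
    seg_idx, entry_idx = divmod(lba, ENTRIES_PER_SEGMENT)
    segment = list(main_memory[seg_idx])  # 1) read the map segment
    segment[entry_idx] = pba              # 2) record the physical address
    main_memory[seg_idx] = segment        # 3) store the segment back again
```

The point of the sketch is that even a one-entry change requires reading and writing back the entire segment, which is why back-to-back accesses to the same segment raise the hazards the filter handles.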

In an embodiment, the mapping data stored in the main memory 300 may be updated. For example, when a write request of new data is input for a logical address for which writing was previously requested, previously stored data may become invalid data, and a physical address corresponding to the corresponding logical address may be changed. Alternatively, mapping data corresponding to the location of the data may be updated as the location where the data is stored changes due to various background operations such as garbage collection, read reclamation, and wear leveling.

The cache memory 220 may store data to be accessed by the processor 210, which is read from the main memory 300. The storage capacity of cache memory 220 may be less than the storage capacity of main memory 300. In an embodiment, cache memory 220 may be a volatile memory device. For example, cache memory 220 may include Dynamic Random Access Memory (DRAM) or Static Random Access Memory (SRAM), or both. The cache memory 220 may be a memory having an operating speed faster than that of the main memory 300.

Since the storage capacity of the cache memory 220 is smaller than that of the main memory 300, the cache memory 220 may store only the metadata accessed by the processor 210 among the metadata stored in the main memory 300. Storing, in the cache memory 220, data stored at a specific address among the data stored in the main memory 300 is referred to as caching.

When the cache memory 220 stores data read from the main memory 300 to be accessed by the processor 210, the cache memory 220 may provide the corresponding data to the processor 210. Because the operating speed of the cache memory 220 is faster than that of the main memory 300, when data to be accessed by the processor 210 is stored in the cache memory 220, the processor 210 can obtain the data faster than it could from the main memory 300. A case where data to be accessed by the processor 210 is stored in the cache memory 220 is referred to as a cache hit, and a case where such data is not stored in the cache memory 220 is referred to as a cache miss. As the number of cache hits increases, the processing speed of the processor 210 may increase.

The method of operation of the cache memory 220 may be classified as a direct-mapped cache, a set-associative cache, or a fully-associative cache.

The direct-mapped cache may be a many-to-one (n:1) method in which multiple addresses of the main memory 300 correspond to one address of the cache memory 220. That is, in a direct-mapped cache, data stored at a particular address of the main memory 300 may be cached only at a pre-mapped address of the cache memory 220.

A fully associative cache may be an operating method in which the addresses of the cache memory 220 are not pre-mapped to specific addresses of the main memory 300; thus, any empty portion of the cache memory 220 may cache data stored at an arbitrary address of the main memory 300. When determining whether there is a cache hit, a fully associative cache must search all addresses of the cache memory 220.

The set-associative cache is an intermediate form between the direct-mapped cache and the fully associative cache, and manages the cache memory 220 by dividing it into a plurality of cache sets. In addition, each cache set may be divided into cache ways or cache lines.
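As a concrete illustration of the n:1 direct-mapped method, the sketch below pre-maps each main-memory address to one cache line by taking the address modulo the line count. The line count and helper names are illustrative assumptions.

```python
NUM_LINES = 8  # illustrative cache size

cache_tags = [None] * NUM_LINES  # which main-memory address each line holds
cache_data = [None] * NUM_LINES

def line_of(addr):
    # Each main-memory address is pre-mapped to exactly one cache line,
    # so addresses 3, 11, 19, ... all compete for line 3 (the n:1 mapping).
    return addr % NUM_LINES

def lookup(addr):
    """Return (hit, data): hit is True only if `addr` is cached in its line."""
    line = line_of(addr)
    if cache_tags[line] == addr:
        return True, cache_data[line]
    return False, None

def fill(addr, data):
    """Cache `data` for `addr`, evicting whatever shared its line."""
    line = line_of(addr)
    cache_tags[line] = addr
    cache_data[line] = data
```

A set-associative cache would instead keep several ways per set, so two addresses sharing a line index need not evict each other; a fully associative lookup would compare `addr` against every line.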

Host 500 may communicate with storage device 50 using at least one of a variety of communication methods such as: Universal Serial Bus (USB), Serial AT Attachment (SATA), Serial Attached SCSI (SAS), High Speed Inter-Chip (HSIC), Small Computer System Interface (SCSI), Peripheral Component Interconnect (PCI), PCI express (PCIe), Non-Volatile Memory express (NVMe), Universal Flash Storage (UFS), Secure Digital (SD), MultiMedia Card (MMC), embedded MMC (eMMC), Dual In-line Memory Module (DIMM), Registered DIMM (RDIMM), and Load-Reduced DIMM (LRDIMM).

Fig. 2 is a diagram for describing the memory device 100 of fig. 1.

Referring to fig. 2, the memory device 100 may include a memory cell array 110, a voltage generator 120, an address decoder 130, an input/output (I/O) circuit 140, and a control logic 150.

Memory cell array 110 includes a plurality of memory blocks BLK1 through BLKi, i being a positive integer greater than 1. A plurality of memory blocks BLK1 through BLKi are connected to address decoder 130 through row lines RL. A plurality of memory blocks BLK1 through BLKi may be connected to the input/output circuit 140 through column lines CL. In an embodiment, the row lines RL may include word lines, source select lines, and drain select lines. In an embodiment, the column line CL may include a bit line.

Each of the plurality of memory blocks BLK1 through BLKi includes a plurality of memory cells. In an embodiment, the plurality of memory cells may be non-volatile memory cells. Memory cells connected to the same word line among the plurality of memory cells may be defined as one physical page. That is, the memory cell array 110 may include a plurality of physical pages. Each of the memory cells of the memory device 100 may be configured as a single-level cell (SLC) storing one bit of data, a multi-level cell (MLC) storing two bits of data, a triple-level cell (TLC) storing three bits of data, or a quad-level cell (QLC) capable of storing four bits of data.

In an embodiment, the voltage generator 120, the address decoder 130, and the input/output circuit 140 may be collectively referred to as a peripheral circuit. The peripheral circuits drive the memory cell array 110 under the control of control logic 150. The peripheral circuits may drive the memory cell array 110 to perform a program operation, a read operation, and an erase operation.

The voltage generator 120 is configured to generate a plurality of operating voltages using an external power supply voltage supplied to the memory device 100. The voltage generator 120 may operate under the control of the control logic 150.

In an embodiment, the voltage generator 120 may generate the internal supply voltage by adjusting the external supply voltage. The internal power supply voltage generated by the voltage generator 120 is used as an operation voltage of the memory device 100.

In an embodiment, the voltage generator 120 may generate the plurality of operating voltages using an external power supply voltage or an internal power supply voltage. The voltage generator 120 may be configured to generate various voltages required in the memory device 100. For example, the voltage generator 120 may generate a plurality of erase voltages, a plurality of program voltages, a plurality of pass voltages, a plurality of select read voltages, and a plurality of unselected read voltages.

The voltage generator 120 may include a plurality of pump capacitors that receive the internal power supply voltage, and may generate a plurality of operating voltages by selectively enabling the plurality of pump capacitors under the control of the control logic 150.

The plurality of operating voltages generated by the voltage generator 120 may be supplied to the memory cell array 110 through the address decoder 130.

Address decoder 130 is connected to memory cell array 110 by row lines RL. Address decoder 130 is configured to operate under the control of control logic 150. Address decoder 130 may receive address ADDR from control logic 150. The address decoder 130 may decode a block address among the received addresses ADDR. The address decoder 130 may select at least one memory block among the memory blocks BLK1 through BLKi according to the decoded block address. The address decoder 130 may decode a row address among the received addresses ADDR. The address decoder 130 may select at least one word line among the word lines of the selected memory block according to the decoded row address. In an embodiment, the address decoder 130 may decode a column address among the received addresses ADDR. The address decoder 130 may connect the input/output circuit 140 and the memory cell array 110 to each other according to the decoded column address.

According to an embodiment of the present disclosure, during a read operation, the address decoder 130 may apply a read voltage to a selected word line and apply a read pass voltage, which has a higher voltage level than the read voltage, to unselected word lines.

For example, the address decoder 130 may include components such as a row decoder, a column decoder, and an address buffer.

The input/output circuit 140 may include a plurality of page buffers. The plurality of page buffers may be connected to the memory cell array 110 through the bit lines. During a program operation, write data may be stored in the selected memory cells based on the data stored in the plurality of page buffers, which corresponds to the input data DATA provided by an external device.

During a read operation, the data stored in the selected memory cells may be sensed through the bit lines, and the sensed data may be stored in the page buffers. Thereafter, the data stored in the page buffers is output to the external device as the output data DATA.

Control logic 150 may control address decoder 130, voltage generator 120, and input/output circuitry 140. The control logic 150 may operate in response to a command CMD output from an external device. The control logic 150 may generate various signals to control the peripheral circuits in response to the command CMD and the address ADDR.

Fig. 3 is a diagram for describing the configuration of any one of the memory blocks of fig. 2.

For example, fig. 3 shows a memory block BLKi.

Referring to fig. 3, a plurality of word lines arranged in parallel with each other may be connected between a first selection line and a second selection line. Here, the first selection line may be a source selection line SSL, and the second selection line may be a drain selection line DSL. More specifically, the memory block BLKi may include a plurality of strings ST connected between the bit lines BL1 to BLn and the source lines SL. The bit lines BL1 to BLn may be respectively connected to the strings ST, and the source lines SL may be commonly connected to the strings ST. Since the strings ST may be configured to be identical to each other, the string ST connected to the first bit line BL1 will be specifically described as an example.

The string ST may include a source select transistor SST, a plurality of memory cells MC1 to MC16, and a drain select transistor DST connected in series between a source line SL and a first bit line BL1. In an embodiment, one string ST may include at least one source select transistor SST and drain select transistor DST, and may include memory cells MC1 through MC16, but the embodiment is not limited thereto. In another embodiment, the number of memory cells included in a string may be greater than 16.

A source of the source selection transistor SST may be connected to a source line SL, and a drain of the drain selection transistor DST may be connected to a first bit line BL1. The memory cells MC1 through MC16 may be connected in series between the source select transistor SST and the drain select transistor DST. The gates of the source select transistors SST included in different strings ST may be commonly connected to a source select line SSL, the gates of the drain select transistors DST in different strings ST may be commonly connected to a drain select line DSL, and the gates of the memory cells MC1 to MC16 in different strings ST may be commonly connected to a plurality of word lines WL1 to WL16. A group of memory cells connected to the same word line among the memory cells included in different strings ST may be referred to as a physical page PG. Accordingly, the memory block BLKi may include as many physical pages PG as the number of word lines WL1 to WL16.

One memory cell can store one bit of data. This is commonly referred to as a single-level cell (SLC). In this case, one physical page PG may store data corresponding to one Logical Page (LPG). Data corresponding to one Logical Page (LPG) may include as many data bits as the number of cells included in one physical page PG.

In other embodiments, one memory cell may store two or more bits of data. In this case, one physical page PG may store data corresponding to two or more logical pages.
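For illustration, the relationship between bits per cell, logical pages, and logical page size described above may be sketched in a few lines of Python (the function names and the 16-cell page size are illustrative assumptions, not part of the disclosure):

```python
def logical_pages_per_physical_page(bits_per_cell: int) -> int:
    # Each cell contributes one bit to each of its logical pages, so a
    # physical page of n-bit cells stores data for n logical pages.
    return bits_per_cell

def logical_page_size_bits(cells_per_physical_page: int) -> int:
    # A logical page has as many data bits as the physical page has cells.
    return cells_per_physical_page

# SLC (1 bit per cell): a 16-cell physical page holds one 16-bit logical page.
slc_pages = logical_pages_per_physical_page(1)
# A cell storing 3 bits: the same physical page holds three logical pages.
tlc_pages = logical_pages_per_physical_page(3)
page_bits = logical_page_size_bits(16)
```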

Fig. 4 is a flowchart for describing a read-modify-write operation on the logical-to-physical (L2P) mapping data stored in the main memory 300 described with reference to fig. 1.

Referring to fig. 1 and 4, the L2P mapping data stored in main memory 300 may be updated.

For example, when a write request is input from the host 500, the processor 210 may allocate a physical address to the logical address input from the host 500 according to the write request and update valid page table (VPT) information corresponding to the physical address. Thereafter, when a write request for writing new data is input for a logical address for which writing was previously requested, the previously stored data may become invalid data, and a new physical address may be allocated to that logical address. That is, the physical address allocated to the logical address is changed. Meanwhile, the L2P mapping data may also be updated when the location of stored data is changed by various background operations such as garbage collection, read reclaim, and wear leveling.

The L2P mapping data may include a plurality of mapping segments. Each of the map segments may include a plurality of map entries. The mapping entry may include information on correspondence between the logical address and the physical address.
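The segment-and-entry layout described above might be modeled as follows (a minimal sketch under our own assumptions; the class name and the four-entry segment size are illustrative, not part of the disclosure):

```python
ENTRIES_PER_SEGMENT = 4  # illustrative segment size; real sizes vary

class L2PMap:
    """L2P mapping data: a set of map segments, each holding mapping
    entries that record a logical-to-physical address correspondence."""

    def __init__(self):
        self.segments = {}  # segment index -> {lba: pba}

    def segment_index(self, lba: int) -> int:
        # The segment that contains the mapping entry for this LBA.
        return lba // ENTRIES_PER_SEGMENT

    def lookup(self, lba: int):
        # Return the PBA mapped to the LBA, or None if unmapped.
        return self.segments.get(self.segment_index(lba), {}).get(lba)

    def allocate(self, lba: int, pba: int) -> None:
        # Record (or replace) the mapping entry for the LBA.
        self.segments.setdefault(self.segment_index(lba), {})[lba] = pba
```

Re-allocating a new PBA to an already-mapped LBA simply overwrites the entry, which corresponds to the physical address change described above.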

Here, it is assumed that a write request for data corresponding to the first logical block address LBA1 is input from the host 500. The processor 210 may read map segment 0, which includes the mapping entry for the first logical block address LBA1, from the L2P mapping data stored in the main memory 300 (1).

The processor 210 may allocate the first physical block address PBA1 as the physical address corresponding to the first logical block address LBA1 (2).

The processor 210 may store map segment 0, now including the mapping entry in which the first physical block address PBA1 is allocated to the first logical block address LBA1, in the main memory 300 (3). Accordingly, the L2P mapping data stored in the main memory 300 is updated.
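Steps (1) to (3) amount to a read-modify-write on a single map segment, which might be sketched as follows (the function name, dictionary model of main memory, and four-entry segment size are illustrative assumptions):

```python
ENTRIES_PER_SEGMENT = 4  # illustrative

def rmw_update_mapping(main_memory: dict, lba: int, new_pba: int) -> None:
    """Read-modify-write one map segment. main_memory models the L2P
    area of main memory as {segment_index: {lba: pba}}."""
    seg_idx = lba // ENTRIES_PER_SEGMENT
    segment = dict(main_memory.get(seg_idx, {}))  # (1) read the map segment
    segment[lba] = new_pba                        # (2) allocate the new PBA
    main_memory[seg_idx] = segment                # (3) store the segment back

mem = {0: {0: 100}}            # LBA0 already mapped to PBA 100
rmw_update_mapping(mem, 1, 1)  # write request for LBA1: allocate PBA1
```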

FIG. 5 is a diagram for describing a read-modify-write operation on the valid page table (VPT) of physical addresses.

The VPT may include bitmap-form data indicating whether data stored in a page included in the memory device 100 is valid data. The VPT may include a plurality of bits corresponding to a plurality of pages, respectively. A bit of the set state may indicate that data stored in the corresponding page is valid data, and a bit of the clear state may indicate that data stored in the corresponding page is invalid data.
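A VPT in this bitmap form can be manipulated with simple bit operations, as sketched below (a minimal illustration in which bit "1" denotes the set state, one of the two conventions the disclosure allows):

```python
def set_valid(vpt: int, page: int) -> int:
    # Set state: mark the page's data as valid.
    return vpt | (1 << page)

def clear_valid(vpt: int, page: int) -> int:
    # Clear state: mark the page's data as invalid.
    return vpt & ~(1 << page)

def is_valid(vpt: int, page: int) -> bool:
    return bool(vpt & (1 << page))

vpt = 0                  # all pages start in the clear state (free block)
vpt = set_valid(vpt, 1)  # after storing data, page 1 holds valid data
```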

With reference to fig. 1, 4 and 5, a VPT comprising a zeroth physical block address PBA0 and a first physical block address PBA1 will be described.

In general, when the memory controller 200 stores data in the memory device 100, the memory controller 200 secures a free block, which is a blank memory block where no data is stored, and then sequentially stores the data in pages included in the free block. After the data is stored in the page, the bits of the VPT corresponding to the page are changed to a "set" state. Thus, prior to storing the data, all bits of the VPT corresponding to the physical block address to be allocated may be in a "clear" state.

Assume that map segment 0 described with reference to fig. 4 is in a state in which the zeroth physical block address PBA0 has been allocated as the physical address corresponding to the zeroth logical block address LBA0.

When it is assumed that the bit corresponding to the page of the zeroth physical block address PBA0 is the first bit1, the processor 210 may read the VPT including the zeroth physical block address PBA0 (501) and modify the "clear" state of the first bit1 to the "set" state. In an embodiment, a bit "1" may indicate the "set" state and a bit "0" may indicate the "clear" state. Alternatively, a bit "0" may indicate the "set" state, and a bit "1" may indicate the "clear" state. The processor 210 may store the VPT, in which the "clear" state of the first bit1 has been modified to the "set" state, in the main memory 300 (503).

Thereafter, the processor 210 may again read the VPT comprising the first physical block address PBA1 (505) because the first physical block address PBA1 is newly allocated as described with reference to fig. 4.

When it is assumed that the bit corresponding to the page of the first physical block address PBA1 is the second bit2, the processor 210 may modify the "clear" state of the second bit2 to the "set" state.

The processor 210 may store the VPT, in which the "clear" state of the second bit2 has been modified to the "set" state, in the main memory 300 (507).

In the embodiments described with reference to fig. 4 and 5, the main memory 300 may be accessed according to the data access pattern of the firmware (FW), and the cache memory 220 may be used accordingly.

For example, when write requests are sequentially input from the host 500, the data access pattern of the main memory 300 handled by the processor 210 may be sequential. That is, the L2P mapping data and the VPT may be accessed consecutively to allocate a physical block address for storing data and to mark the pages of the allocated physical block address as valid data pages. Thus, accesses to the L2P mapping data and the VPTs may have high locality.

In contrast, when write requests are randomly input from the host 500, the data access pattern of the main memory 300 handled by the processor 210 may be a mix of sequential and random accesses. For example, accesses to the L2P mapping data may be random, while accesses to the VPTs may be sequential.

Fig. 6 is a diagram illustrating a structure of a memory controller 400 according to an embodiment of the present disclosure.

Referring to fig. 6, the memory controller 400 may include a processor 410, a cache controller 420, and a main memory 430.

The processor 410 and the main memory 430 may be configured and may operate in the same manner as the processor 210 and the main memory 300, respectively, described with reference to fig. 1.

Cache controller 420 may include a scheduler 421 and a cache memory 422.

Scheduler 421 may store an access request input from processor 410 and an address corresponding to the access request. The scheduler 421 may provide an access request to the cache memory 422 or receive a completion response to the provided access request.

Scheduler 421 may receive access requests and addresses to be accessed from processor 410. When the access request received from the processor 410 is a write request, the scheduler 421 may receive a write request, a write address, and write data. The scheduler 421 may transfer the write request, the write address, and the write data to the cache memory 422. The write data may be stored in an area of the main memory 430 corresponding to the write address through the cache memory 422. The main memory 430 may store the write data in the area corresponding to the write address and then provide a write completion response to the cache controller 420 indicating that the write request has been performed. The write completion response may be communicated to the processor 410 through the cache memory 422 and the scheduler 421.

When the access request received from processor 410 is a read request, scheduler 421 may receive the read request and a read address. Scheduler 421 may communicate the read request and read address to cache memory 422. When data corresponding to the read request is cached in a cache line corresponding to the read address (cache hit), the cache memory 422 may provide the cached data to the scheduler 421. The scheduler 421 may transfer the received data to the processor 410. When data corresponding to the read request is not cached in a cache line corresponding to the read address (cache miss), cache memory 422 may provide the read request and the read address to main memory 430. The main memory 430 may provide read data stored in an area corresponding to the read address to the cache controller 420. The read data may be stored in a cache line (cache) in the cache memory 422 corresponding to the read address. The read data may be transferred to the processor 410 through the scheduler 421.
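The read path just described (a hit serves cached data; a miss fills the cache line from main memory) can be condensed as follows (a sketch in which plain dictionaries stand in for the cache memory and the main memory):

```python
def cached_read(cache: dict, main_memory: dict, read_addr):
    """Cache-controller read path: return cached data on a cache hit;
    on a cache miss, fetch from main memory and fill the cache line."""
    if read_addr in cache:
        return cache[read_addr]       # cache hit: serve cached data
    data = main_memory[read_addr]     # cache miss: read the main memory
    cache[read_addr] = data           # store (cache) the read data
    return data

cache, main_mem = {}, {0x10: "A"}
first = cached_read(cache, main_mem, 0x10)   # miss: fetched and cached
second = cached_read(cache, main_mem, 0x10)  # hit: served from the cache
```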

When a read request for an address corresponding to the same cache line as a write request is input after the write request but before the write request is completed, the data stored in the cache memory 422 may be different from the data stored in the main memory 430. In this case, when data corresponding to the read request has been cached in the cache memory 422, a cache hit may occur, and thus data different from the most recently written data may be supplied to the processor 410 (a hazard occurs).

To prevent the hazard from occurring, when the cache lines respectively corresponding to the addresses of input access requests collide, that is, when a first access request and a second access request for addresses corresponding to the same cache line are input, the scheduler 421 may hold the second access request, which is input after the first access request, without transferring it to the cache memory 422 until the first access request is processed.
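This hold-on-conflict policy might be sketched as follows (the modulo line mapping, class name, and request shape are our own simplifications, not part of the disclosure):

```python
class Scheduler:
    """Holds a later access request whose address maps to the same cache
    line as an earlier, not-yet-completed request."""

    def __init__(self, num_lines: int = 8):
        self.num_lines = num_lines
        self.inflight = set()  # cache lines with an uncompleted request
        self.held = []         # requests held back by a line conflict

    def line(self, addr: int) -> int:
        # Illustrative direct-mapped address-to-line mapping.
        return addr % self.num_lines

    def submit(self, addr: int) -> bool:
        # Returns True if forwarded to the cache memory, False if held.
        if self.line(addr) in self.inflight:
            self.held.append(addr)  # conflict: hold, do not forward
            return False
        self.inflight.add(self.line(addr))
        return True

    def complete(self, addr: int) -> None:
        # A completion response releases the line; requests held for it
        # would then be re-submitted.
        self.inflight.discard(self.line(addr))
```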

However, considering the data access pattern of the main memory 430, many read requests may frequently be held or blocked (pending) inside the scheduler 421 by previous write requests.

Accordingly, the read latency incurred in the cache memory 422 may become longer, and thus the processing speed of the processor 410 may become slower.

Fig. 7 is a flowchart illustrating an operation of the memory controller 400 described with reference to fig. 6.

Referring to fig. 6 and 7, in step S601, the processor 410 may provide a read request for the address ADDR0 to the scheduler 421.

In step S603, the scheduler 421 may store the read request for the address ADDR0, and since there is no previous read request or write request for the address ADDR0, the scheduler 421 may provide the read request for the address ADDR0 to the cache memory 422.

In step S605, the cache memory 422 may check whether data corresponding to the address ADDR0 has been cached in the cache memory 422. A cache miss may occur when the data corresponding to address ADDR0 is not present in cache memory 422.

When a cache miss occurs, in step S607, the cache memory 422 may provide a read request for the address ADDR0 to the main memory 430.

In step S609, the main memory 430 may read out DATA corresponding to the address ADDR0, i.e., ADDR0 DATA, and provide the read DATA ADDR0 DATA to the cache memory 422.

In step S611, the cache memory 422 may store (cache) the read DATA ADDR0 DATA in the cache memory 422.

In step S613, the cache memory 422 may provide the read DATA ADDR0 DATA to the scheduler 421. In step S615, the scheduler 421 may provide the read DATA ADDR0 DATA to the processor 410.

In step S617, the processor 410 may provide the write request for the address ADDR0 to the scheduler 421.

In step S619, the scheduler 421 may provide a write request for the address ADDR0 to the cache memory 422.

In step S621, the cache memory 422 may store the write data in the cache memory 422. Alternatively, the write data may not be stored in the cache memory 422, and an indication that the cached data in the cache line corresponding to the address ADDR0 is dirty may be stored in the cache memory 422.

In step S623, cache memory 422 may provide a write request to main memory 430 for address ADDR 0.

While the write request is being executed in the main memory 430, the processor 410 may provide another read request for the address ADDR0 to the scheduler 421 in step S625. In this case, since the scheduler 421 has not yet received the write COMPLETION response WRITE ADDR0 COMPLETION for the address ADDR0, which is the same as the address of the other read request, the other read request is not output to the cache memory 422 and is held or blocked in the scheduler 421.

In step S627, the main memory 430 may execute a write request for the address ADDR0, i.e., store write data in an area corresponding to the address ADDR0, and provide a write COMPLETION response WRITE ADDR0 COMPLETION to the scheduler 421.

In step S629, the scheduler 421 may provide a write complete response WRITE ADDR0 COMPLETION to the processor 410.

In step S631, the scheduler 421 may provide other read requests for the address ADDR0 to the cache memory 422.

In step S633, the cache memory 422 may check whether the newly written data corresponding to the address ADDR0 has been cached in the cache memory 422. A cache miss may occur because the newly written data corresponding to address ADDR0 is not already cached in cache memory 422.

In step S635, the cache memory 422 may provide other read requests to the main memory 430 for the address ADDR 0.

In step S637, the main memory 430 may read out newly written DATA corresponding to the address ADDR0, i.e., ADDR0 DATA, and supply the read DATA ADDR0 DATA to the cache memory 422.

In step S639, the cache memory 422 may store (cache) the read DATA ADDR0 DATA in the cache memory 422.

In step S641, the cache memory 422 may provide the read DATA ADDR0 DATA to the scheduler 421. In step S643, the scheduler 421 may provide the read DATA ADDR0 DATA to the processor 410.

According to the embodiment described with reference to fig. 7, when there is a conflict among the cache lines respectively corresponding to the addresses of input access requests, for example, when a first access request and a second access request for addresses corresponding to the same cache line are sequentially input, the scheduler 421 may hold the second access request without transferring it to the cache memory 422 until the first access request is processed. Therefore, considering the data access pattern of the main memory 430 handled by the processor 410, many read requests may frequently be held or blocked inside the scheduler 421 by previous write requests. Accordingly, the read latency incurred in the cache memory 422 may become longer, and thus the processing speed of the processor 410 may become slower.

Fig. 8 is a diagram for describing a structure of a memory controller 700 according to an embodiment of the present disclosure.

Referring to fig. 8, a memory controller 700 may include a processor 710, a cache controller 720, and a main memory 730.

The processor 710 and the main memory 730 may be configured and operate in the same manner as the processors 210 and 410 and the main memories 300 and 430, respectively, described with reference to fig. 1 and 6.

Cache controller 720 may include a scheduler 721, a cache memory 722, and a hazard filter 723.

The scheduler 721 may store an access request input from the processor 710 and an address corresponding to the access request. The scheduler 721 may provide the input access request to the cache memory 722 or receive a completion response to the provided access request.

Scheduler 721 may receive at least access requests and addresses to be accessed from processor 710. When the access request received from the processor 710 is a write request, the scheduler 721 may receive a write request, a write address, and write data. The scheduler 721 may transfer the write request, the write address, and the write data to the cache memory 722. The write data may be provided to hazard filter 723 through cache memory 722.

When the access request received from the processor 710 is a read request, the scheduler 721 may receive the read request and the read address. The scheduler 721 may transfer the read request and the read address to the cache memory 722. When data corresponding to the read address has been cached in a cache line corresponding to the read address (cache hit), the cache memory 722 may provide the cached data to the scheduler 721. Scheduler 721 may transfer the received data to processor 710. When data corresponding to the read address has not been cached in a cache line corresponding to the read address (a cache miss), cache memory 722 may provide the read request and the read address to main memory 730. The main memory 730 may provide the read data stored in the area corresponding to the read address to the cache controller 720. The read data may be stored in a cache line (cache) in the cache memory 722 corresponding to the read address. The read data may be transferred to the processor 710 through the scheduler 721.

When a read request for an address corresponding to the same cache line as a write request is input before the write request for that address is completed, the data in the cache memory 722 may be previous data that differs from the write data most recently stored in the main memory 730 in response to the write request. In this case, when data corresponding to the read request has been cached in the cache memory 722, a cache hit may occur, and thus the previous data stored in the cache memory 722, which differs from the most recently written data, may be provided to the processor 710 (a hazard occurs).

To prevent the hazard from occurring, when the cache lines respectively corresponding to the addresses of input access requests collide, that is, when access requests for addresses corresponding to the same cache line are sequentially input, the scheduler 721 may hold the later-input access request, without transferring it to the cache memory 722, until the first-input access request is processed.

For example, assume that the first-input access request is a write request, the later-input access request is a read request, and both are for addresses corresponding to the same cache line. In this case, the scheduler 721 may hold the read request, without transferring it to the cache memory 722, until the write request is completed in the main memory 730.

The hazard filter 723 may receive write requests, write addresses, and write data that have passed through the scheduler 721 and the cache memory 722, and store the write requests and/or write addresses in an internal lookup table LUT. Thereafter, the hazard filter 723 may provide the write request, write address, and write data to the main memory 730. In an embodiment, when a write request is received from cache memory 722 or provided to main memory 730, hazard filter 723 may provide a pre-write completion response to scheduler 721 prior to receiving the write completion response from main memory 730.

After receiving the pre-write complete response from the hazard filter 723, the scheduler 721 may provide the read request and the read address held or blocked by the scheduler 721 to the cache memory 722. The hazard filter 723 may receive a read request when a cache miss occurs in the cache memory 722 for the read request. The hazard filter 723 may check whether a write request for the same address as the read address is included in the internal lookup table LUT.

When a write request for the same address as the read address is stored in the internal lookup table LUT, the hazard filter 723 may hold the read request until a write completion response is received from the main memory 730. When a write request for the same address as the read address is not stored in the internal lookup table LUT, the hazard filter 723 may provide the read request to the main memory 730.

That is, the hazard filter 723 may issue a pre-write completion response for the write request to the scheduler 721 before receiving the write completion response from the main memory 730, and may handle any hazard conditions that may occur later. Therefore, the read latency can be reduced.
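The hazard filter's behavior described above may be sketched as follows (a minimal model under our own assumptions: the class names, the `MainMemory` stand-in, and the string pre-completion token are illustrative, not the disclosed implementation):

```python
class MainMemory:
    """Minimal stand-in: a write takes effect immediately, but its
    completion response is signaled separately by the caller."""
    def __init__(self):
        self.data = {}
    def begin_write(self, addr, value):
        self.data[addr] = value
    def read(self, addr):
        return self.data.get(addr)

class HazardFilter:
    """Answer each write with a pre-write completion response at once,
    record its address in a lookup table (LUT), and hold a later read
    to a LUT address until the real write completion response arrives."""
    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.lut = set()        # addresses of in-flight writes
        self.held_reads = []

    def write(self, addr, value) -> str:
        self.lut.add(addr)                   # remember the in-flight write
        self.main_memory.begin_write(addr, value)
        return "PRE_COMPLETION"              # sent to the scheduler at once

    def read(self, addr):
        if addr in self.lut:                 # write to same address pending
            self.held_reads.append(addr)
            return None                      # held until write completion
        return self.main_memory.read(addr)   # no conflict: pass through

    def on_write_completion(self, addr):
        # Drop the LUT entry and serve any reads held for this address.
        self.lut.discard(addr)
        released = [a for a in self.held_reads if a == addr]
        self.held_reads = [a for a in self.held_reads if a != addr]
        return [self.main_memory.read(a) for a in released]
```

A read for an address not present in the LUT is forwarded to main memory immediately, which is the case that shortens the read latency.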

Fig. 9 and 10 are flowcharts for describing the operation of the memory controller 700 of fig. 8.

Referring to fig. 9 and 10, in step S901, the processor 710 may provide a read request for the address ADDR0 to the scheduler 721.

In step S903, the scheduler 721 may store the read request for the address ADDR 0. When there is no prior read or write request to address ADDR0, scheduler 721 may provide a read request to cache 722 to address ADDR 0.

In step S905, the cache memory 722 may check whether data corresponding to the address ADDR0 has been cached in the cache memory 722. A cache miss may occur when the data corresponding to address ADDR0 is not already cached in cache memory 722.

When a cache miss occurs, in step S907, the cache memory 722 may provide a read request for the address ADDR0 to the hazard filter 723.

In step S909, the hazard filter 723 may transmit a read request for the address ADDR0 to the main memory 730.

In step S911, the main memory 730 may read out DATA corresponding to the address ADDR0, i.e., ADDR0 DATA, and supply the read DATA ADDR0 DATA to the cache memory 722.

In step S913, the cache memory 722 may store the read DATA ADDR0 DATA in the cache memory 722 (cache).

In step S915, the cache memory 722 may provide the read DATA ADDR0 DATA to the scheduler 721. In step S917, the scheduler 721 may provide the read DATA ADDR0 DATA to the processor 710.

In step S919, the processor 710 may provide a write request for the address ADDR0 to the scheduler 721.

In step S921, the scheduler 721 may provide the write request for the address ADDR0 to the cache memory 722.

In step S923, the cache memory 722 may store the write data in the cache memory 722. In another embodiment, the write data may not be stored in the cache memory 722, and an indication that the cached data in the cache line corresponding to the address ADDR0 is dirty may be stored in the cache memory 722.

In step S925, the cache memory 722 may provide the write request for the address ADDR0 to the hazard filter 723.

In step S927, the hazard filter 723 may provide a pre-write complete response to the scheduler 721. Additionally, write address ADDR0 may be stored in an internal lookup table of hazard filter 723.

In step S929, the hazard filter 723 may provide the write request to the main memory 730.

While the write request is being executed in the main memory 730, the processor 710 may provide another read request for the address ADDR0 to the scheduler 721 in step S931.

In step S933, because the scheduler 721 has already received the pre-write completion response for the address ADDR0 from the hazard filter 723, which is the same address as that of the other read request, the scheduler 721 may provide the other read request for the address ADDR0 to the cache memory 722.

In step S935, the cache memory 722 may check whether data corresponding to the address ADDR0 has been cached in the cache memory 722. A cache miss may occur when the data corresponding to address ADDR0 is not already cached in cache memory 722.

When a cache miss occurs, in step S937, the cache memory 722 may provide other read requests for the address ADDR0 to the hazard filter 723.

In step S939, the hazard filter 723 may determine whether a write request for the same address as the other read requests is stored in the internal lookup table LUT. As a result of the determination, when a write request for the same address as the other read requests is stored in the internal lookup table LUT, and a write completion response for the write request has not been received, the other read requests for the address ADDR0 may be held or blocked in the hazard filter 723.

In step S941, the main memory 730 may provide the write completion response to the hazard filter 723. Although not shown, when a write completion response is received from the main memory 730, the hazard filter 723 may remove information about the write request, such as the write request or an address corresponding to the write request, from the lookup table LUT.

In step S943, the hazard filter 723 may provide the other read request for address ADDR0 to the main memory 730.

In step S945, the main memory 730 may read out the read DATA corresponding to the address ADDR0, i.e., ADDR0 DATA, and provide the read DATA ADDR0 DATA to the cache memory 722.

In step S947, the cache memory 722 may store the read DATA ADDR0 DATA in the cache memory 722 (cache).

In step S949, the cache memory 722 may provide the read DATA ADDR0 DATA to the scheduler 721. In step S951, the scheduler 721 may provide the read DATA ADDR0 DATA to the processor 710.

In an embodiment, when the processor 710 provides a read request following a write request to the cache controller 720 and the write request is not for the same address as the read request, no write request for the same address as the read request is stored in the internal lookup table LUT. In this case, the hazard filter 723 may provide the read request to the main memory 730 without waiting for the write completion response.

According to the above-described operation of the cache controller 720, the read latency can be reduced, and thus the processing speed of the processor 710 can be faster.

Fig. 11 is a diagram illustrating the memory controller 200 of fig. 1 according to an embodiment.

Referring to fig. 1 and 11, the memory controller 200 may include a processor 210, a RAM 220, an error correction circuit 230, a ROM 260, a host interface 270, and a flash interface 280.

The processor 210 may control the overall operation of the memory controller 200. The RAM 220 may be used as a buffer memory, a cache memory, and an operation memory of the memory controller 200. For example, the cache memory 220 described with reference to FIG. 1 may be a RAM 220. In an embodiment, RAM 220 may be an SRAM.

The ROM 260 may store various information required for the operation of the memory controller 200 in the form of firmware.

The memory controller 200 may communicate with external devices (e.g., the host 500, application processors, etc.) through the host interface 270.

The memory controller 200 may communicate with the memory device 100 through the flash interface 280. The memory controller 200 may transmit a command CMD, an address ADDR, and a control signal CTRL to the memory device 100 through the flash interface 280, and receive DATA read from the memory device 100. For example, the flash interface 280 may include a NAND interface.

Fig. 12 is a block diagram showing a memory card system 2000 to which a storage device according to an embodiment of the present disclosure is applied.

Referring to fig. 12, the memory card system 2000 includes a memory controller 2100, a memory device 2200, and a connector 2300.

The memory controller 2100 is connected to the memory device 2200. The memory controller 2100 is configured to access the memory device 2200. For example, the memory controller 2100 may be configured to control read operations, write operations, erase operations, and background operations of the memory device 2200. The memory controller 2100 is configured to provide an interface between the memory device 2200 and a host (not shown). The memory controller 2100 is configured to drive firmware for controlling the memory device 2200. The memory controller 2100 may be configured with the memory controller 200 described with reference to fig. 1.

For example, memory controller 2100 may include components such as Random Access Memory (RAM), a processor, a host interface, a memory interface, an error corrector, and so forth.

The memory controller 2100 may communicate with an external device such as a host through the connector 2300. The memory controller 2100 may communicate with external devices according to a particular communication standard. For example, the memory controller 2100 is configured to communicate with external devices in accordance with at least one of various communication standards such as: Universal Serial Bus (USB), MultiMedia Card (MMC), embedded MMC (eMMC), Peripheral Component Interconnect (PCI), PCI Express (PCI-e or PCIe), Advanced Technology Attachment (ATA), Serial ATA, Parallel ATA, Small Computer System Interface (SCSI), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), FireWire, Universal Flash Storage (UFS), Wi-Fi, Bluetooth, NVMe, and the like. For example, the connector 2300 may be defined by at least one of the various communication standards described above.

For example, the memory device 2200 may be configured as any of various non-volatile memory elements such as: Electrically Erasable Programmable ROM (EEPROM), NAND flash memory, NOR flash memory, Phase-change RAM (PRAM), Resistive RAM (ReRAM), Ferroelectric RAM (FRAM), Spin-Transfer Torque Magnetic RAM (STT-MRAM), and the like.

The memory controller 2100 and the memory device 2200 may be integrated into one semiconductor device to configure a memory card such as the following: PC card (Personal Computer Memory Card International Association (PCMCIA)), CompactFlash card (CF), SmartMedia card (SM or SMC), memory stick, multimedia card (MMC, RS-MMC, micro MMC, or eMMC), SD card (SD, mini SD, micro SD, or SDHC), Universal Flash Storage (UFS), and the like.

Fig. 13 is a block diagram illustrating a Solid State Drive (SSD) system 3000 to which a storage device according to an embodiment of the present disclosure is applied.

Referring to fig. 13, the SSD system 3000 includes a host 3100 and an SSD 3200. The SSD 3200 exchanges signals SIG with the host 3100 through the signal connector 3001, and receives power PWR through the power connector 3002. The SSD 3200 includes an SSD controller 3210, a plurality of flash memories 3221 to 322n, an auxiliary power supply device 3230, and a buffer memory 3240.

According to an embodiment of the present disclosure, the SSD controller 3210 may perform the functions of the memory controller 200 described with reference to fig. 1.

The SSD controller 3210 may control the plurality of flash memories 3221 to 322n in response to a signal SIG received from the host 3100. For example, the signal SIG may be a signal based on an interface between the host 3100 and the SSD 3200. For example, the signal SIG may be a signal defined by at least one of various communication standards such as: Universal Serial Bus (USB), MultiMedia Card (MMC), embedded MMC (eMMC), Peripheral Component Interconnect (PCI), PCI Express (PCI-E), Advanced Technology Attachment (ATA), Serial ATA, Parallel ATA, Small Computer System Interface (SCSI), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), FireWire, Universal Flash Storage (UFS), Wi-Fi, Bluetooth, NVMe, and the like.

The auxiliary power supply device 3230 is connected to the host 3100 through the power supply connector 3002. The auxiliary power supply device 3230 may receive power PWR from the host 3100 and may be charged by the power PWR. When the power supply from the host 3100 is not smooth, the auxiliary power supply device 3230 may supply auxiliary power to the SSD 3200. For example, the auxiliary power supply device 3230 may be located inside the SSD 3200 or outside the SSD 3200. For example, the auxiliary power supply device 3230 may be located on a main board and may supply auxiliary power to the SSD 3200.

The buffer memory 3240 operates as a buffer memory of the SSD 3200. For example, the buffer memory 3240 may temporarily store data received from the host 3100 or data received from the plurality of flash memories 3221 to 322n, or may temporarily store metadata (e.g., a mapping table) of the flash memories 3221 to 322 n. The buffer memory 3240 may include volatile memory such as DRAM, SDRAM, DDR SDRAM, LPDDR SDRAM, GRAM, etc., or non-volatile memory such as FRAM, ReRAM, STT-MRAM, PRAM, etc.

Fig. 14 is a block diagram illustrating a user system 4000 to which a storage device according to an embodiment of the present disclosure is applied.

Referring to fig. 14, the user system 4000 includes an application processor 4100, a memory module 4200, a network module 4300, a storage module 4400, and a user interface 4500.

The application processor 4100 may drive components, an Operating System (OS), user programs, and the like included in the user system 4000. For example, the application processor 4100 may include a controller, an interface, a graphic engine, and the like that control components included in the user system 4000. The application processor 4100 may be provided as a system on chip (SoC).

The memory module 4200 may operate as a main memory, an operating memory, a buffer memory, or a cache memory of the user system 4000. The memory module 4200 may include volatile random access memory such as DRAM, SDRAM, DDR2 SDRAM, DDR3 SDRAM, LPDDR SDRAM, LPDDR2 SDRAM, LPDDR3 SDRAM, or the like, or non-volatile random access memory such as PRAM, ReRAM, MRAM, FRAM, or the like. For example, the application processor 4100 and the memory module 4200 may be packaged together in a package-on-package (PoP) and provided as a single semiconductor device.

The network module 4300 may communicate with external devices. For example, the network module 4300 may support wireless communications such as: Code Division Multiple Access (CDMA), Global System for Mobile communications (GSM), Wideband CDMA (WCDMA), CDMA-2000, Time Division Multiple Access (TDMA), Long Term Evolution (LTE), WiMAX, WLAN, UWB, Bluetooth, Wi-Fi, and the like. For example, the network module 4300 may be included in the application processor 4100.

The storage module 4400 may store data. For example, the storage module 4400 may store data received from the application processor 4100. Alternatively, the storage module 4400 may transmit data stored in the storage module 4400 to the application processor 4100. For example, the storage module 4400 may be implemented with a nonvolatile semiconductor memory device such as a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), a NAND flash memory, a NOR flash memory, or a three-dimensional NAND flash memory. For example, the storage module 4400 may be provided as a removable storage device (removable drive), such as a memory card or an external drive, of the user system 4000.

For example, the storage module 4400 may include a plurality of non-volatile memory devices, and the plurality of non-volatile memory devices may operate in the same manner as the memory device 100 described with reference to fig. 1. The storage module 4400 may operate in the same manner as the storage device 50 described with reference to fig. 1.

The user interface 4500 may include interfaces for inputting data or instructions to the application processor 4100 or for outputting data to an external device. For example, the user interface 4500 may include user input interfaces such as a keyboard, a keypad, a button, a touch panel, a touch screen, a touch pad, a touch ball, a camera, a microphone, a gyroscope sensor, a vibration sensor, a piezoelectric element, and the like. The user interface 4500 may include user output interfaces such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display device, an Active Matrix OLED (AMOLED) display device, an LED, a speaker, a monitor, and the like.

While various embodiments have been described above, it will be understood by those skilled in the art that the described embodiments are by way of example only. Thus, the systems and apparatus described herein should not be limited based on the described embodiments.
