
Determine a load balancing mechanism for allocation of shared resources in a storage system using a machine learning module based on number of I/O operations

Patent Number
US11175958B2
Publication Date
2021-11-16
Assignee
INTERNATIONAL BUSINESS MACHINES CORPORATION (Armonk, NY, US)
Inventors
Lokesh M. Gupta; Matthew R. Craig; Beth Ann Peterson; Kevin John Ash
IPC Classification
G06F9/50; G06N3/08; G06N20/00
Technical Field Keywords
TCBs, machine learning, storage, host, module, adapter, controller, resources
Region: Armonk, NY, US

Abstract

A plurality of interfaces that share a plurality of resources in a storage controller are maintained. In response to an occurrence of a predetermined number of operations associated with an interface of the plurality of interfaces, an input is provided on a plurality of attributes of the storage controller to a machine learning module. In response to receiving the input, the machine learning module generates an output value corresponding to a number of resources of the plurality of resources to allocate to the interface in the storage controller.
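The abstract can be pictured, very roughly, as a small model that is consulted once an interface has accumulated a predetermined number of operations and that maps storage controller attributes to a resource count. The sketch below is illustrative only and is not the patented design; the attribute names, the OPS_THRESHOLD and MAX_RESOURCES constants, and the untrained two-layer network are all assumptions.

```python
# Minimal sketch (not the patented implementation): a tiny feed-forward
# "machine learning module" that maps storage-controller attributes to a
# suggested number of resources (e.g., task control blocks) for an interface.
# All attribute names, layer sizes, and constants below are assumptions.
import numpy as np

MAX_RESOURCES = 256      # assumed cap on resources allocatable to one interface
OPS_THRESHOLD = 1000     # assumed "predetermined number of operations"

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # 4 controller attributes -> hidden layer of 8 units
W2 = rng.normal(size=(8, 1))   # hidden layer -> single output value

def suggest_allocation(attributes):
    """attributes: e.g. [io_rate, queue_depth, cache_free_fraction, adapter_utilization]."""
    h = np.tanh(np.asarray(attributes, dtype=float) @ W1)   # hidden activation
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))                   # sigmoid squashes to (0, 1)
    return int(round(float(out[0]) * MAX_RESOURCES))

class Interface:
    def __init__(self, name):
        self.name = name
        self.ops_since_last_eval = 0
        self.allocated = MAX_RESOURCES // 4   # assumed initial share

    def record_operation(self, controller_attributes):
        self.ops_since_last_eval += 1
        # Re-evaluate only after the predetermined number of operations has occurred.
        if self.ops_since_last_eval >= OPS_THRESHOLD:
            self.allocated = suggest_allocation(controller_attributes)
            self.ops_since_last_eval = 0

# Example use with made-up attribute values:
hba = Interface("host_bus_adapter_0")
for _ in range(OPS_THRESHOLD):
    hba.record_operation([0.7, 0.3, 0.5, 0.6])
print(hba.allocated)
```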

Description

Data written from a host may be stored in the cache of the storage controller, and at an opportune time the data stored in the cache may be destaged (i.e., moved or copied) to a storage device. Data may also be staged (i.e., moved or copied) from a storage device to the cache of the storage controller. The storage controller may respond to a read I/O request from the host from the cache if the data for the read I/O request is available in the cache; otherwise the data may be staged from a storage device to the cache for responding to the read I/O request. A write I/O request from the host causes the data corresponding to the write to be written to the cache, and at an opportune time the written data may be destaged from the cache to a storage device.

Since the storage capacity of the cache is relatively small in comparison to the storage capacity of the storage devices, data may be periodically destaged from the cache to create empty storage space in the cache. Data may be written to and read from the cache much faster than from a storage device. In computing, cache replacement policies are used to determine which items to discard (i.e., demote) from the cache to make room for new items.

Host bus adapters operate as interfaces between the storage controller and host computational devices, and storage adapters operate as interfaces between the storage controller and storage devices.
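As an illustration of the staging, destaging, and demotion terminology used above, the following sketch shows a toy LRU-style cache sitting in front of a dictionary that stands in for a storage device. The capacity, data structures, and policy details are assumptions for illustration and are not drawn from the patent.

```python
# Illustrative sketch only: an LRU-style cache in front of a slower "storage
# device" dict, showing staging (device -> cache on a read miss), write caching,
# destaging (dirty cache entry -> device), and demotion of the least recently
# used entry when the cache is full.
from collections import OrderedDict

class CachedStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.cache = OrderedDict()   # key -> (value, dirty flag); order = recency
        self.device = {}             # stand-in for the backing storage device

    def read(self, key):
        if key in self.cache:                      # cache hit: serve from cache
            self.cache.move_to_end(key)
            return self.cache[key][0]
        value = self.device[key]                   # miss: stage from the device
        self._insert(key, value, dirty=False)
        return value

    def write(self, key, value):
        self._insert(key, value, dirty=True)       # writes land in the cache first

    def _insert(self, key, value, dirty):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.capacity:
            old_key, (old_value, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:                          # destage before demoting
                self.device[old_key] = old_value
        self.cache[key] = (value, dirty)

# Example: with capacity 2, writing a third item demotes (and destages) the oldest.
store = CachedStore(capacity=2)
store.write("a", 1)
store.write("b", 2)
store.write("c", 3)
print(store.device)     # {'a': 1}
print(store.read("b"))  # cache hit -> 2
```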

Artificial neural networks (also referred to as neural networks) are computing systems inspired by the biological neural networks that constitute animal brains. Neural networks may be configured to use a feedback mechanism to learn to perform certain computational tasks. Neural networks are a type of machine learning mechanism.
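The feedback mechanism mentioned above can be illustrated with the simplest possible error-driven learner: a single linear neuron whose weight and bias are repeatedly adjusted against the prediction error. This is a hypothetical toy example, not the module described in this patent.

```python
# Minimal sketch of error-driven learning: the prediction error is fed back
# to adjust the parameters by gradient descent on the squared error.
import numpy as np

w, b = 0.0, 0.0
lr = 0.1
xs = np.array([0.0, 0.5, 1.0])
ys = np.array([0.0, 1.0, 2.0])        # target relationship: y = 2x

for _ in range(2000):
    pred = w * xs + b
    err = pred - ys                    # feedback: error signal
    w -= lr * np.mean(err * xs)        # adjust weight against the gradient
    b -= lr * np.mean(err)             # adjust bias against the gradient

# After training, w is close to 2 and b is close to 0.
print(f"w={w:.2f} b={b:.2f}")
```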

SUMMARY OF THE PREFERRED EMBODIMENTS

Claims

1