Column              Type               Min / shortest    Max / longest
Unnamed: 0          int64              0                 274k
ApplicationNumber   int64              9.75M             96.1M
ArtUnit             int64              1.6k              3.99k
Abstract            string (lengths)   7                 8.37k
Claims              string (lengths)   3                 292k
abstract-claims     string (lengths)   75                293k
TechCenter          int64              1.6k              3.9k
Unnamed: 0: 274,000 | ApplicationNumber: 15,955,639 | ArtUnit: 2,131
A unified backup workflow process for different hypervisor configurations of virtual machines on different storage of a cluster leverages RCT-based backup functionality so that backup operations can be performed by a single host of the cluster. The process enables backing up together virtual machines that are local as well as those on CSV or SMB storage, using virtual machine-level snapshots as checkpoints rather than the volume-level snapshots that were traditionally used. Backup data is sent to a backup server as a data stream rather than as a file, which avoids having to maintain chains or structures on the server that identify parent-child disks.
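The abstract describes an orchestration order rather than an API, so the following is only a minimal Python sketch of that flow under assumed, hypothetical helper names (`query_vm_targets`, `create_checkpoint`, `stream_disks`, `create_reference_point`); it is not the patented implementation.

```python
# Hypothetical sketch of the unified backup workflow; all object methods and
# the load-balancing heuristic are illustrative assumptions.
from collections import defaultdict


def unified_backup(cluster_hosts, backup_server):
    # 1. Determine targets for all VMs spanning the cluster hosts
    #    (local, CSV-based, and SMB-based storage alike).
    targets = [vm for host in cluster_hosts for vm in host.query_vm_targets()]

    # 2. Create an individual, VM-level checkpoint (snapshot) per target,
    #    rather than a volume-level snapshot.
    checkpoints = {vm.id: vm.create_checkpoint() for vm in targets}

    # 3. Identify hosts for the backup rollover and balance the data load;
    #    here a simple largest-first round-robin stands in for the balancer.
    assignments = defaultdict(list)
    for i, vm in enumerate(sorted(targets, key=lambda v: v.size, reverse=True)):
        assignments[cluster_hosts[i % len(cluster_hosts)]].append(vm)

    # 4. Back up each VM's underlying disks as a data stream (not a file).
    for host, vms in assignments.items():
        for vm in vms:
            host.stream_disks(vm, checkpoints[vm.id], backup_server)

    # 5. Create a reference point per VM to track changes after this backup,
    #    enabling the next incremental pass.
    return {vm.id: vm.create_reference_point(checkpoints[vm.id]) for vm in targets}
```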
1. A method of unified backup for different hypervisor configurations of virtual machines on different storage of a cluster having a plurality of hosts, the method being performed by a single host, comprising: determining information for targets of all of said virtual machines that span over one or more of said hosts; accessing said targets of virtual machines that span over one or more of said hosts and creating individual virtual machine-level checkpoints for said targets; identifying cluster hosts for a backup rollover of the targets of said individual virtual machines, and balancing the backup data load of said virtual machine targets among said identified hosts; backing up the data of said virtual machine targets using said identified hosts; and creating a reference point for said backed up data to track data changes following said backing up. 2. The method of claim 1 further comprising validating said data backup and creating a snap-view of metadata of the backup process for performing recovery or a next incremental backup of the virtual machine target. 3. The method of claim 1, wherein said accessing comprises querying said hosts to gather virtual machine data points for the targets of virtual machines that span said hosts. 4. The method of claim 3, wherein said accessing comprises accessing together virtual machines based on cluster shared volume storage and on server message block storage. 5. The method of claim 1, wherein said creating individual machine-level checkpoints comprises creating a machine-level snapshot of each target of a virtual machine. 6. The method of claim 5, wherein said creating snapshots comprises creating snapshots of individual target virtual machines instead of creating snapshots of storage volumes. 7. The method of claim 1, wherein said backing up comprises sending a copy of underlying disks for a full backup as a data stream to a backup server. 8. The method of claim 7, wherein said sending comprises streaming copies of said underlying disks in parallel to the backup server. 9. The method of claim 1, wherein said backing up comprises sending, for an incremental backup, metadata identifying extent locations, lengths and offsets on a backup server of chunks of data that changed since the last backup, and streaming to the backup server the changes to said chunks of data for merging with the previously stored data. 10. The method of claim 1, wherein creating a reference point comprises using a checkpoint created for a virtual machine and a resilient change tracking identifier that tracks changes to a target subsequent to backup. 11. 
A non-transitory computer readable medium for storing executable instructions for controlling the operations of a processor to perform a method of unified backup of different hypervisor configurations of virtual machines on different storage of a cluster having a plurality of hosts, comprising: determining information for targets of all of said virtual machines that span over one or more of said hosts; accessing said targets of virtual machines that span over one or more of said hosts and creating individual virtual machine-level checkpoints for said targets; identifying cluster hosts for a backup rollover of the targets of said individual virtual machines, and balancing the backup data load of said virtual machine targets among said identified hosts; backing up the data of said virtual machine targets using said identified hosts; and creating a reference point for said backed up data to track data changes following said backing up. 12. The non-transitory computer readable medium of claim 11 further comprising validating said data backup and creating a snap-view of metadata of the backup process for performing recovery or a next incremental backup of the virtual machine target. 13. The non-transitory computer readable medium of claim 11, wherein said accessing comprises querying said hosts to gather virtual machine data points for the targets of virtual machines that span said hosts. 14. The non-transitory computer readable medium of claim 13, wherein said accessing comprises accessing together virtual machines based on cluster shared volume storage and on server message block storage. 15. The non-transitory computer readable medium of claim 11, wherein said creating individual machine-level checkpoints comprises creating a machine-level snapshot of each target of a virtual machine. 16. The non-transitory computer readable medium of claim 15, wherein said creating snapshots comprises creating snapshots of individual target virtual machines instead of creating snapshots of storage volumes. 17. The non-transitory computer readable medium of claim 11, wherein said backing up comprises sending a copy of underlying disks for a full backup as a data stream to a backup server. 18. The non-transitory computer readable medium of claim 17, wherein said sending comprises streaming copies of said underlying disks in parallel to the backup server. 19. The non-transitory computer readable medium of claim 11, wherein said backing up comprises sending, for an incremental backup, metadata identifying extent locations, lengths and offsets on a backup server of chunks of data that changed since the last backup, and streaming to the backup server the changes to said chunks of data for merging with the previously stored data. 20. The non-transitory computer readable medium of claim 11, wherein creating a reference point comprises using a checkpoint created for a virtual machine and a resilient change tracking identifier that tracks changes to a target subsequent to backup.
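Claims 9 and 19 describe the incremental path: change tracking reports which extents changed since the last reference point, that extent metadata (locations, lengths, offsets) is sent to the backup server, and only the changed chunks are streamed for server-side merging. A hedged sketch, with `query_changed_extents` and `read_extent` as assumed stand-ins for an RCT-style change-tracking interface:

```python
# Illustrative only: the VM and server methods below are assumptions.
def incremental_backup(vm, reference_point, backup_server):
    # Metadata for every extent changed since the last reference point.
    extents = vm.query_changed_extents(reference_point)  # [(offset, length), ...]
    backup_server.send_metadata(vm.id, extents)

    # Stream only the changed chunks; the server merges them into the
    # previously stored copy, so no parent-child disk chain is maintained.
    for offset, length in extents:
        chunk = vm.read_extent(offset, length)
        backup_server.send_chunk(vm.id, offset, chunk)

    # New reference point for the next incremental pass.
    return vm.create_reference_point()
```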
TechCenter: 2,100
Unnamed: 0: 274,001 | ApplicationNumber: 15,955,432 | ArtUnit: 2,131
Aspects of the disclosure provide for reducing a temperature of one or more non-volatile memory (NVM) dies of a solid state drive (SSD). The methods and apparatus detect a temperature of one or more NVM dies of a plurality of NVM dies of the SSD, the plurality of NVM dies including at least one parity NVM die, and determine that the one or more NVM dies is overheated when the detected temperature is at or above a threshold temperature. If the detected temperature is at or above the threshold temperature, the methods and apparatus redirect parity data designated for the at least one parity NVM die to the one or more overheated NVM dies. By repurposing the one or more overheated NVM dies to store the parity data, the repurposed dies will experience less activity, and therefore, generate less heat without throttling or reducing the workload capability of the dies.
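The core decision is simple: when a die's temperature reaches the threshold, subsequent parity writes are directed to that die instead of the designated parity die. A minimal Python sketch under an assumed, hypothetical controller/die interface:

```python
# Hypothetical controller-side sketch; die objects and their methods are
# assumptions used only to show the redirection decision.
def select_parity_die(dies, parity_die, threshold_c):
    """Return the die that should receive the next parity write."""
    overheated = [d for d in dies if d.temperature() >= threshold_c]
    if not overheated:
        return parity_die  # normal placement on the designated parity die
    # Redirect parity to the hottest overheated die: parity is written once
    # per stripe and rarely read, so the repurposed die sees less activity
    # and cools without throttling the drive's workload.
    return max(overheated, key=lambda d: d.temperature())
```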
1. A method of reducing a temperature of one or more non-volatile memory (NVM) dies of a solid state drive (SSD), the method comprising: detecting a temperature of one or more NVM dies of a plurality of NVM dies of the SSD, the plurality of NVM dies including at least one parity NVM die designated to store parity data; determining that the one or more NVM dies is overheated when the detected temperature is at or above a threshold temperature; and redirecting the parity data designated for the at least one parity NVM die to the one or more overheated NVM dies to increase a concentration of parity data in the one or more overheated NVM dies. 2. The method of claim 1, wherein the redirecting the parity data includes selecting parity data for future write operations to be written in the one or more overheated NVM dies instead of the at least one parity NVM die. 3. The method of claim 1, wherein the redirecting the parity data includes: migrating user data stored in the one or more overheated NVM dies away from the one or more overheated NVM dies; and writing the parity data into the one or more overheated NVM dies. 4. The method of claim 3, wherein the redirecting the parity data further includes activating a garbage collection operation to facilitate migration of the user data away from the one or more overheated NVM dies. 5. The method of claim 1, further comprising: detecting an adjusted temperature of the one or more overheated NVM dies after redirecting the parity data to the one or more overheated NVM dies; and throttling a performance of at least one of the SSD, the one or more overheated NVM dies, or a controller controlling the one or more overheated NVM dies when the adjusted temperature is at or above the threshold temperature. 6. The method of claim 1, wherein the threshold temperature is equivalent to a temperature of one or more NVM dies neighboring the one or more overheated NVM dies plus a preselected temperature amount. 7. The method of claim 1, wherein the threshold is at least one of: a memory-specific temperature; a product-specific temperature; an application-specific temperature; or a customer-specific temperature. 8. A solid state drive (SSD), comprising: a plurality of non-volatile memory (NVM) dies; and a controller communicatively coupled to a host device and the plurality of NVM dies, wherein the controller is configured to: detect a temperature of one or more NVM dies of the plurality of NVM dies of the SSD, the plurality of NVM dies including at least one parity NVM die designated to store parity data, determine that the one or more NVM dies is overheated when the detected temperature is at or above a threshold temperature, and redirect the parity data designated for the at least one parity NVM die to the one or more overheated NVM dies to increase a concentration of parity data in the one or more overheated NVM dies. 9. The solid state drive of claim 8, wherein the controller configured to redirect the parity data is further configured to store the parity data in the one or more overheated NVM dies instead of the at least one parity NVM die. 10. The solid state drive of claim 8, wherein the controller configured to redirect the parity data is further configured to store the parity data in a location alternative to the at least one parity NVM die. 11. 
The solid state drive of claim 8, wherein the controller configured to redirect the parity data is further configured to: migrate user data stored in the one or more overheated NVM dies away from the one or more overheated NVM dies; and write the parity data into the one or more overheated NVM dies. 12. The solid state drive of claim 11, wherein the controller configured to migrate the user data is further configured to: move the user data out of the one or more overheated NVM dies; and move the user data into at least one NVM die having a temperature below the threshold temperature. 13. The solid state drive of claim 8, wherein the threshold temperature is equivalent to a temperature of one or more NVM dies that is a nearest distance to the one or more overheated NVM dies plus a preselected temperature amount. 14. The solid state drive of claim 8, wherein the controller configured to detect the temperature of the one or more NVM dies is further configured to detect the temperature via one or more temperature sensors located adjacent to the one or more NVM dies. 15. A non-volatile memory (NVM) device including an apparatus for reducing a temperature of one or more NVM dies of a solid state drive (SSD), the apparatus comprising: means for detecting a temperature of one or more NVM dies of a plurality of NVM dies of the SSD, the plurality of NVM dies including at least one parity NVM die designated to store parity data; means for determining that the one or more NVM dies is overheated when the detected temperature is at or above a threshold temperature; and means for redirecting the parity data designated for the at least one parity NVM die to the one or more overheated NVM dies to increase a concentration of parity data in the one or more overheated dies. 16. The apparatus of claim 15, wherein the means for redirecting the parity data is configured to select parity data for future write operations to be written in the one or more overheated NVM dies instead of the at least one parity NVM die. 17. The apparatus of claim 15, wherein the means for redirecting the parity data is configured to: move user data out of the one or more overheated NVM dies; move the user data into at least one NVM die having a temperature below the threshold temperature; and write the parity data into the one or more overheated NVM dies. 18. The apparatus of claim 15, wherein the threshold temperature is equivalent to a temperature of one or more NVM dies that is a nearest distance to the one or more overheated NVM dies plus a preselected temperature amount. 19. The apparatus of claim 15, further comprising: means for detecting an adjusted temperature of the one or more overheated NVM dies after redirecting the parity data to the one or more overheated NVM dies; and means for throttling a performance of at least one of the SSD, the one or more overheated NVM dies, or a controller controlling the one or more overheated NVM dies when the adjusted temperature is at or above the threshold temperature. 20. The apparatus of claim 15, wherein the means for detecting the temperature of the one or more overheated NVM dies is configured to detect the temperature via one or more temperature sensors located adjacent to the one or more overheated NVM dies.
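The dependent claims above add three refinements: the threshold may be a neighboring (or nearest) die's temperature plus a preselected margin, user data can be migrated off the overheated die (for example via a garbage collection pass) before parity is written there, and conventional throttling remains a fallback if the die stays hot after redirection. A hedged sketch under the same assumed interface as the previous block:

```python
# Same hypothetical interface as above; the margin and policies are illustrative.
def relative_threshold(neighbors, margin_c=5.0):
    # Threshold = temperature of the neighboring/nearest dies plus a margin.
    return max(n.temperature() for n in neighbors) + margin_c


def repurpose_for_parity(controller, die, cool_dies):
    # Migrate user data off the overheated die (a garbage collection pass
    # performs the moves into cooler dies), then direct parity writes to it.
    controller.garbage_collect(source=die, destinations=cool_dies)
    controller.set_parity_target(die)


def recheck_and_throttle(controller, die, threshold_c):
    # If the die is still at or above the threshold after redirection,
    # fall back to conventional thermal throttling.
    if die.temperature() >= threshold_c:
        controller.throttle(die)
```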
TechCenter: 2,100
Unnamed: 0: 274,002 | ApplicationNumber: 15,955,534 | ArtUnit: 2,131
Methods and systems for handling requests for data corresponding to a volume of data are disclosed. A method involves receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data, establishing a short condition register for the chunk of data as a function of the offset and the size, establishing a long condition register for the chunk of data as a function of the offset and the size, and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register.
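The first claim below defines the two registers precisely in terms of the request's offset and size relative to the aligned data block; a small Python sketch of that test follows (4 kB block size assumed, per dependent claim 3):

```python
BLOCK_SIZE = 4096  # claim 3: the block size of the volume is 4 kB


def condition_registers(offset, size, block_size=BLOCK_SIZE):
    """Compute the short and long condition registers for one request."""
    aligned_start = (offset // block_size) * block_size
    aligned_end = aligned_start + block_size
    # Short: the request starts after the aligned block boundary, or ends
    # before the end of the aligned block (i.e., it covers a partial block).
    short = offset > aligned_start or offset + size < aligned_end
    # Long: the request runs past the end of the aligned block, so it
    # spans more than one block.
    long = offset + size > aligned_end
    return short, long
```

Under these definitions a 4096-byte request at offset 0 sets neither register, a 200-byte request at offset 100 sets only the short register, an 8192-byte request at offset 0 sets only the long register, and an 8192-byte request at offset 100 sets both.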
1. A method for handling requests for data corresponding to a volume of data, the method comprising: receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data; establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the offset corresponding to the request is larger than an offset for the data block that would be aligned with a block boundary of the volume of data or if a total of the offset and the size is less than an end of an aligned data block of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the offset corresponding to the request and the size is more than the end of an aligned data block of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 2. The method of claim 1, wherein block boundaries of the volume of data are determined as a function of a block size of the volume of data. 3. The method of claim 2, wherein the block size of the volume is 4 kilobytes (kB) of data. 4. The method of claim 1, wherein the volume of data is a virtual volume of data. 5. The method of claim 1, wherein the volume of data is a physical volume of data. 6. The method of claim 1, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 7. 
The method of claim 1, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 8. The method of claim 1, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request; and when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at 
least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 9. The method of claim 1, wherein the receiving, the establishing a short condition register, and the establishing a long condition register are executed in a containerized storage application. 10. The method of claim 1, wherein the application is a containerized application and wherein receiving, establishing a short condition register, and establishing a long condition register are executed in a containerized storage application. 11. A non-transitory computer readable medium that stores computer executable code, which when executed by one or more processors, implements a method for handling requests for data corresponding to a volume of data, the method comprising: establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the offset corresponding to the request is larger than an offset for the data block that would be aligned with a block boundary of the volume of data or if a total of the offset and the size is less than an end of an aligned data block of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the offset corresponding to the request and the size is more than the end of an aligned data block of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 12. The non-transitory computer readable medium of claim 11, wherein block boundaries of the volume of data are determined as a function of a block size of the volume of data. 13. The non-transitory computer readable medium of claim 11, wherein the volume of data is a virtual volume of data. 14. The non-transitory computer readable medium of claim 11, wherein the volume of data is a physical volume of data. 15. The non-transitory computer readable medium of claim 11, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 16. 
The non-transitory computer readable medium of claim 11, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 17. The non-transitory computer readable medium of claim 11, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request; and when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple 
in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 18. A method for handling requests for data corresponding to a volume of data, the method comprising: receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data; establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the chunk of data does not cross a block boundary of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the chunk of data does cross a block boundary of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 19. The method of claim 18, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 20. The method of claim 18, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data.
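The four (short, long) combinations drive the dispatch in claims 6-8, 15-17, 19 and 20: an aligned single-block request passes straight through, a partial block triggers a read-modify-write on persist, and a spanning request works over multiple blocks. A hedged sketch of the persist path, reusing `condition_registers` and `BLOCK_SIZE` from the sketch above and treating `volume` as a hypothetical block-device wrapper:

```python
def persist(volume, offset, data, block_size=BLOCK_SIZE):
    short, long = condition_registers(offset, len(data), block_size)

    if not short and not long:
        # Exactly one aligned block: write it straight through.
        volume.write_block(offset // block_size, data)
        return

    # Otherwise read the covered blocks into memory, amend the affected
    # bytes with the chunk, and write the modified blocks back whole.
    first = offset // block_size
    last = (offset + len(data) - 1) // block_size
    buf = bytearray()
    for b in range(first, last + 1):
        buf += volume.read_block(b)
    start = offset - first * block_size
    buf[start:start + len(data)] = data
    for i, b in enumerate(range(first, last + 1)):
        volume.write_block(b, bytes(buf[i * block_size:(i + 1) * block_size]))
```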
A methods and systems for handling requests for data corresponding to a volume of data are disclosed. A method involves receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data, establishing a short condition register for the chunk of data as a function of the offset and the size, establishing a long condition register for the chunk of data as a function of the offset and the size, and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register.1. A method for handling requests for data corresponding to a volume of data, the method comprising: receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data; establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the offset corresponding to the request is larger than an offset for the data block that would be aligned with a block boundary of the volume of data or if a total of the offset and the size is less than an end of an aligned data block of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the offset corresponding to the request and the size is more than the end of an aligned data block of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 2. The method of claim 1, wherein block boundaries of the volume of data are determined as a function of a block size of the volume of data. 3. The method of claim 2, wherein the block size of the volume is 4 kilobytes (kB) of data. 4. The method of claim 1, wherein the volume of data is a virtual volume of data. 5. The method of claim 1, wherein the volume of data is a physical volume of data. 6. The method of claim 1, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 7. 
The method of claim 1, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amended a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amended a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 8. The method of claim 1, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request; and when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amended a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amended a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at 
least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 9. The method of claim 1, wherein the receiving, the establishing a short condition register, and the establishing a long condition register are executed in a containerized storage application. 10. The method of claim 1, wherein the application is a containerized application and wherein receiving, establishing a short condition register, and establishing a long condition register are executed in a containerized storage application. 11. A non-transitory computer readable medium that stores computer executable code, which when executed by one or more processors, implements a method for handling requests for data corresponding to a volume of data, the method comprising: establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the offset corresponding to the request is larger than an offset for the data block that would be aligned with a block boundary of the volume of data or if a total of the offset and the size is less than an end of an aligned data block of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the offset corresponding to the request and the size is more than the end of an aligned data block of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 12. The non-transitory computer readable medium of claim 11, wherein block boundaries of the volume of data are determined as a function of a block size of the volume of data. 13. The non-transitory computer readable medium of claim 11, wherein the volume of data is a virtual volume of data. 14. The non-transitory computer readable medium of claim 11, wherein the volume of data is a physical volume of data. 15. The non-transitory computer readable medium of claim 11, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 16. 
The non-transitory computer readable medium of claim 11, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amended a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amended a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 17. The non-transitory computer readable medium of claim 11, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request; and when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amended a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amended a subset of the multiple 
in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data. 18. A method for handling requests for data corresponding to a volume of data, the method comprising: receiving a request from an application, the request related to retrieving a chunk of data from a volume of data or persisting a chunk of data to the volume of data, the request comprising an offset and a size of the chunk of data; establishing a short condition register for the chunk of data as a function of the offset and the size, wherein the short condition register is set if the chunk of data does not cross a block boundary of the volume of data; establishing a long condition register for the chunk of data as a function of the offset and the size, wherein the long condition register is set if the chunk of data does cross a block boundary of the volume of data; and performing a retrieve operation from the volume of data or a persist operation to the volume of data as a function of the short condition register and the long condition register. 19. The method of claim 18, wherein when performing a retrieve operation from the volume of data: when the short condition register is not set and the long condition register is not set, then retrieve a data block that corresponds to the request; when the short condition register is set and the long condition register is not set, then retrieve a data block that corresponds to the request from the volume of data but return only a subset of the data block to the application to satisfy the request; when the short condition register is not set and the long condition register is set, then retrieve multiple data blocks from the volume of data and return data from the multiple data blocks to satisfy the request; when the short condition register is set and the long condition register is set, then retrieve multiple data blocks and return data from the multiple data blocks, including only a subset of at least one of the data blocks, to satisfy the request. 20. The method of claim 18, wherein when performing a persist operation to the volume of data: when the short condition register is not set and the long condition register is not set, then persist the chunk of data to the corresponding data block of the volume of data; when the short condition register is set and the long condition register is not set, then retrieve the entire data block corresponding to the offset into an in-memory data block, amend a subset of the in-memory data block with the chunk of data, and write the entire modified in-memory data block to the volume of data; when the short condition register is not set and the long condition register is set, then retrieve multiple entire physical data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, and write the multiple modified in-memory data blocks to the volume of data; when the short condition register is set and the long condition register is set, then retrieve multiple entire data blocks corresponding to the offset and size into multiple in-memory data blocks, amend a subset of the multiple in-memory data blocks with the chunk of data, including only a subset of at least one of the data blocks, and write the multiple modified in-memory data blocks to the volume of data.
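Taken together, claims 16, 17, and 20 reduce to one rule: an aligned request is served directly, while any request that starts or ends off a block boundary is served through whole aligned blocks, with unaligned writes done as read-modify-write. The following is a minimal Python sketch of that dispatch; the block size, the bytearray volume, and the function names are assumptions for illustration, not the claimed implementation.

```python
BLOCK_SIZE = 4096  # assumed block size of the volume

def condition_registers(offset, size, block_size=BLOCK_SIZE):
    """Return (short, long) flags for a request of `size` bytes at `offset`."""
    block_start = (offset // block_size) * block_size
    block_end = block_start + block_size
    short = offset > block_start or (offset + size) < block_end   # only part of a block touched
    long_ = (offset + size) > block_end                            # request crosses a block boundary
    return short, long_

def retrieve(volume, offset, size):
    """Read `size` bytes at `offset`, always fetching whole aligned blocks underneath."""
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    blocks = volume[first * BLOCK_SIZE:(last + 1) * BLOCK_SIZE]    # one or more whole blocks
    start = offset - first * BLOCK_SIZE
    return bytes(blocks[start:start + size])                       # only the requested subset returned

def persist(volume, offset, size, chunk):
    """Write `chunk` at `offset`; unaligned writes are read-modify-write on whole blocks."""
    short, long_ = condition_registers(offset, size)
    if not short and not long_:
        volume[offset:offset + size] = chunk                       # aligned: persist directly
        return
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    in_memory = bytearray(volume[first * BLOCK_SIZE:(last + 1) * BLOCK_SIZE])
    start = offset - first * BLOCK_SIZE
    in_memory[start:start + size] = chunk                          # amend only the affected subset
    volume[first * BLOCK_SIZE:(last + 1) * BLOCK_SIZE] = in_memory # write whole modified blocks back

vol = bytearray(4 * BLOCK_SIZE)
persist(vol, 100, 5000, b"x" * 5000)             # short and long both set: off-aligned, spans two blocks
print(retrieve(vol, 100, 5000) == b"x" * 5000)   # True
```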
2,100
274,003
15,954,831
2,131
The present disclosure provides a storage management method, a device and a computer-readable medium. The method comprises: receiving a request for creating a storage space, the request at least comprising a storage capacity and a RAID configuration of the storage space, the RAID configuration at least indicating a RAID type; allocating an extent based on the storage capacity; creating a RAID group for the extent based on the RAID type; and storing metadata of the RAID group in the extent, the metadata indicating a configuration of the RAID group and a configuration of a user data region in the extent.
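To make the described layout concrete, here is a rough, hypothetical Python sketch of an extent whose RAID-group metadata sits in a mirror stripe at the start, whose rebuild metadata sits at the end, and whose user data region lies between them; the field names, RAID-unit count, and 4 KiB header/footer sizes are assumptions rather than the disclosed format.

```python
from dataclasses import dataclass, field

@dataclass
class Extent:
    capacity: int                                        # bytes allocated from the requested storage capacity
    mirror_stripe: dict = field(default_factory=dict)    # first metadata, stored at the start of the extent
    rebuild_marks: list = field(default_factory=list)    # second metadata, stored at the end of the extent
    # the region between the mirror stripe and the rebuild marks is the user data region

def create_storage_space(capacity, raid_type):
    """Allocate an extent, create a RAID group of `raid_type`, and store its metadata in the extent."""
    extent = Extent(capacity=capacity)
    raid_group = {"type": raid_type, "units": ["unit-0", "unit-1"]}   # assumed two RAID units
    extent.mirror_stripe = {
        "raid_group": raid_group,                        # configuration of the RAID group
        "user_data_offset": 4096,                        # assumed 4 KiB header region
        "user_data_length": capacity - 8192,             # assumed 4 KiB footer reserved for rebuild marks
    }
    return extent

space = create_storage_space(capacity=1 << 30, raid_type="RAID-1")
print(space.mirror_stripe["raid_group"]["type"])         # RAID-1
```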
1. A method of storage management, comprising: receiving a request for creating a storage space, the request at least including a storage capacity and a RAID configuration of the storage space, the RAID configuration at least indicating a RAID type; allocating an extent based on the storage capacity; creating a RAID group for the extent based on the RAID type; and storing metadata of the RAID group into the extent, the metadata indicating a configuration of the RAID group and a configuration of a user data region in the extent. 2. The method of claim 1, wherein the metadata includes: first metadata for recording configuration information of a RAID unit in the RAID group, the RAID group including a plurality of RAID units, and second metadata for recording an extent to be rebuilt amongst extents mapped by the RAID unit. 3. The method of claim 2, wherein storing metadata of the RAID group into the extent comprises: storing the first metadata in a mirror stripe created for the extent, the mirror stripe being stored at a start of the extent; and storing the second metadata at an end of the extent, a region between the start and the end of the extent being the user data region. 4. The method of claim 1, further comprising: maintaining a dynamic mapping, the dynamic mapping including at least one of the following: a first multi-tuple including identification information of the storage space, a storage capacity of the storage space, and a RAID configuration and reference information of the storage space, the reference information indicating a logic block address (LBA) corresponding to the storage space, a second multi-tuple including a mapping relationship between a logic unit number (LUN) of the storage space and an address of the RAID group, the second multi-tuple including at least one sub-multi-tuple of a same size, and a third multi-tuple including a mapping relationship between the RAID group of the storage space and the extent. 5. The method of claim 4, further comprising: detecting whether an idle storage unit is present in the second multi-tuple for storing the address of the RAID group; in response to absence of the idle storage unit in the second multi-tuple, allocating a sub-multi-tuple including a plurality of idle storage units; and arranging the sub-multi-tuple at an end of the second multi-tuple. 6. The method of claim 1, wherein the received request for creating a storage space is a request for expanding an existing storage space. 7. The method of claim 4, further comprising: receiving a reducing request for a storage space to be reduced, the reducing request including indication information for indicating a predetermined RAID group to be reduced; erasing metadata on an extent corresponding to the predetermined RAID group according to the indication information; and distributing to an extent pool the extent having the metadata erased, the extent pool including a plurality of extents. 8. The method of claim 7, further comprising: marking the third multi-tuple corresponding to the predetermined RAID group in the dynamic mapping as invalid so as to invalidate the predetermined RAID group. 9. The method of claim 7, further comprising: marking a predetermined storage unit that was used to store the address of the predetermined RAID group in the second multi-tuple of the dynamic mapping, as idle. 10. 
The method of claim 9, further comprising: in response to all storage units in a predetermined sub-multi-tuple where the predetermined storage unit is located being marked as idle, releasing a corresponding relationship of the predetermined sub-multi-tuple and the second multi-tuple so that a storage space in a memory occupied by the predetermined sub-multi-tuple can be used to store other data. 11. A device, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform acts including: receiving a request for creating a storage space, the request at least including a storage capacity and a RAID configuration of the storage space, the RAID configuration at least indicating a RAID type, allocating an extent based on the storage capacity, creating a RAID group for the extent based on the RAID type, and storing metadata of the RAID group into the extent, the metadata indicating a configuration of the RAID group and a configuration of a user data region in the extent. 12. The device of claim 11, wherein the metadata includes: first metadata for recording configuration information of a RAID unit in the RAID group, the RAID group including a plurality of RAID units, and second metadata for recording an extent to be rebuilt amongst extents mapped by the RAID unit. 13. The device of claim 12, wherein storing metadata of the RAID group in the extent comprises: storing the first metadata in a mirror stripe created for the extent, the mirror stripe being stored at a start of the extent; and storing the second metadata at an end of the extent, a region between the start and the end of the extent being the user data region. 14. The device of claim 11, wherein the acts further include: maintaining a dynamic mapping, the dynamic mapping including at least one of the following: a first multi-tuple including identification information of the storage space, a storage capacity of the storage space, and a RAID configuration and reference information of the storage space, the reference information indicating a logic block address (LBA) corresponding to the storage space, a second multi-tuple including a mapping relationship between a logic unit number (LUN) of the storage space and an address of the RAID group, the second multi-tuple including at least one sub-multi-tuple of a same size, and a third multi-tuple including a mapping relationship between the RAID group of the storage space and the extent. 15. The device of claim 14, wherein the acts further include: detecting whether an idle storage unit is present in the second multi-tuple for storing the address of the RAID group; in response to absence of the idle storage unit in the second multi-tuple, allocating a sub-multi-tuple including a plurality of idle storage units; and arranging the sub-multi-tuple at an end of the second multi-tuple. 16. The device of claim 11, wherein the received request for creating a storage space is a request for expanding an existing storage space. 17. 
The device of claim 14, wherein the acts further include: receiving a reducing request for a storage space to be reduced, the reducing request including indication information for indicating a predetermined RAID group to be reduced; erasing metadata on an extent corresponding to the predetermined RAID group according to the indication information; and distributing to an extent pool the extent having the metadata erased, the extent pool including a plurality of extents. 18. The device of claim 17, wherein the acts further include: marking the third multi-tuple corresponding to the predetermined RAID group in the dynamic mapping as invalid so as to invalidate the predetermined RAID group. 19. The device of claim 17, wherein the acts further include: marking a predetermined storage unit that was used to store the address of the predetermined RAID group in the second multi-tuple of the dynamic mapping, as idle. 20. The device of claim 19, wherein the acts further include: in response to all storage units in a predetermined sub-multi-tuple where the predetermined storage unit is located being marked as idle, releasing a corresponding relationship of the predetermined sub-multi-tuple and the second multi-tuple so that the storage space in a memory occupied by the predetermined sub-multi-tuple can be used to store other data. 21. (canceled)
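Claims 4 through 10 (and 14 through 20) describe a LUN-to-RAID-group table that grows by fixed-size sub-multi-tuples and releases a sub-multi-tuple once every storage unit in it is idle. A small, hypothetical Python sketch of that bookkeeping follows; the class name, slot count, and RAID-group identifiers are assumptions, not the disclosed structures.

```python
SUB_TUPLE_SIZE = 4  # assumed number of storage units per sub-multi-tuple

class SecondMultiTuple:
    """LUN -> RAID-group-address table that grows and shrinks in fixed-size sub-tuples."""

    def __init__(self):
        self.sub_tuples = []                      # each sub-tuple is a list of slots (None == idle)

    def store_address(self, raid_group_address):
        for sub in self.sub_tuples:
            for i, slot in enumerate(sub):
                if slot is None:                  # idle storage unit found
                    sub[i] = raid_group_address
                    return
        new_sub = [None] * SUB_TUPLE_SIZE         # no idle unit: allocate a new sub-multi-tuple
        new_sub[0] = raid_group_address
        self.sub_tuples.append(new_sub)           # arranged at the end of the second multi-tuple

    def mark_idle(self, raid_group_address):
        for sub in list(self.sub_tuples):
            for i, slot in enumerate(sub):
                if slot == raid_group_address:
                    sub[i] = None                 # storage unit marked as idle
            if all(slot is None for slot in sub):
                self.sub_tuples.remove(sub)       # release the fully idle sub-tuple's memory

mapping = SecondMultiTuple()
for group in ("rg-0", "rg-1", "rg-2", "rg-3", "rg-4"):
    mapping.store_address(group)                  # a second sub-tuple is allocated for rg-4
mapping.mark_idle("rg-4")                         # its sub-tuple is now fully idle and released
print(len(mapping.sub_tuples))                    # 1
```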
2,100
274,004
15,954,812
2,131
Embodiments of the present disclosure provide a method and an apparatus for storage management. For example, there is provided a method comprising: creating a plurality of profiles for address mapping, each profile comprising a part of the mapping relations in the total mapping table, and creating an index for a part of the plurality of profiles to accelerate lookups. A corresponding device and computer program product are also disclosed.
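As a concrete reading of the claims that follow, the sketch below models a cache of profiles kept in logical-address order with an index built over only a chosen part of them; the field names, the dictionary index, and the fallback scan are assumptions for illustration, not the disclosed data structures.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    logical_start: int      # logical starting address of the initial logical block
    physical_start: int     # physical starting address mapped to logical_start
    block_count: int        # number of logical blocks covered by this profile
    dirty: bool = False     # modification flag: changed with respect to the mapping table

class ProfileCache:
    def __init__(self):
        self.profiles = []  # kept sorted by logical starting address
        self.index = {}     # logical_start -> Profile, built for only a part of the profiles

    def add(self, profile, indexed=False):
        self.profiles.append(profile)
        self.profiles.sort(key=lambda p: p.logical_start)   # order of logical starting addresses
        if indexed:                                          # e.g. a frequently accessed profile
            self.index[profile.logical_start] = profile

    def lookup(self, logical_block):
        hit = self.index.get(logical_block)                  # fast path via the partial index
        if hit is not None:
            return hit
        for p in self.profiles:                              # fall back to the ordered cache
            if p.logical_start <= logical_block < p.logical_start + p.block_count:
                return p
        return None                                          # absent: rebuild from the mapping table

cache = ProfileCache()
cache.add(Profile(logical_start=0, physical_start=1024, block_count=8), indexed=True)
cache.add(Profile(logical_start=8, physical_start=4096, block_count=8))
print(cache.lookup(3).physical_start)   # 1024
```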
1. A method of storage management, comprising: creating a plurality of profiles for address mapping, a profile containing a part of an address mapping table stored on a non-volatile storage device, and the profile indicating: a logical starting address of an initial logical block of a plurality of logical blocks, a physical starting address corresponding to the logical starting address, the number of the plurality of logical blocks, and a modification flag bit indicating whether the profile is changed with respect to the address mapping table; storing the plurality of profiles in a cache in an order of the corresponding logical starting addresses; and creating an index for a part of the plurality of profiles in the cache. 2. The method of claim 1, wherein creating an index for a part of the plurality of profiles in the cache comprises: selecting the part of profiles randomly to create the index. 3. The method of claim 1, wherein creating an index for a part of the plurality of profiles in the cache comprises: selecting, based on an access frequency, the part of profiles to create the index. 4. The method of claim 1, further comprising: creating a first record for a first physical extent, the first record indicating a physical starting address of the first physical extent and a first number of physical blocks contained therein, at least a part of the plurality of logical blocks indicated by a first profile of the plurality of profiles being mapped to the first physical extent; creating a second record for a second physical extent, the second record indicating a physical starting address of the second physical extent and a second number of physical blocks contained therein, at least a part of the plurality of logical blocks indicated by a second profile of the plurality of profiles being mapped to the second physical extent; in response to a physical end address of the first physical extent and a physical starting address of the second physical extent being continuous, merging the first record and the second record to generate a third record, the third record indicating the physical starting address of the first physical extent and a sum of the first number and the second number; and storing the third record in the cache. 5. The method of claim 4, further comprising: in response to the physical end address of the first physical extent and the physical starting address of the second physical extent being non-continuous, storing the first record and the second record in the cache. 6. The method of claim 1, further comprising: receiving a request for a target profile, the request indicating an index associated with the target profile; and searching the target profile in the cache based on the index. 7. The method of claim 6, further comprising: in response to the target profile being absent in the cache, creating the target profile based on the address mapping table. 8. The method of claim 6, wherein the request comprises a request to modify a part of the address mapping table contained in the target profile, the method further comprising: in response to the target profile being present in the cache, updating the part of the address mapping table contained in the target profile. 9. The method of claim 1, further comprising: in response to the number of idle profiles in the cache being lower than a first threshold, triggering reclaiming of the plurality of profiles; and in response to the number of idle profiles in the cache being greater than a second threshold, ceasing the reclaiming. 10. 
The method of claim 9, wherein triggering reclaiming of the plurality of profiles comprises: reclaiming, among the plurality of profiles, profiles not having been changed with respect to the address mapping table. 11. The method of claim 9, wherein triggering reclaiming of the plurality of profiles comprises: reclaiming, among the plurality of profiles, profiles having been changed with respect to the address mapping table, comprising: storing, in the non-volatile storage device, changed address mapping in the changed profiles. 12. The method of claim 9, further comprising: removing an index of a reclaimed profile. 13. An electronic device, comprising: a processor; and a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform acts comprising: creating a plurality of profiles for address mapping, one profile containing a part of an address mapping table stored on a non-volatile storage device and the profile indicating: a logical starting address of an initial logical block of a plurality of logical blocks, a physical starting address corresponding to the logical starting address, the number of the plurality of logical blocks, and a modification flag bit indicating whether the profile is changed with respect to the address mapping table; storing the plurality of profiles in a cache in an order of the corresponding logical starting addresses; and creating an index for a part of the plurality of profiles in the cache. 14. The device of claim 13, wherein creating an index for a part of the plurality of profiles in the cache comprises: selecting the part of profiles randomly to create the index. 15. The device of claim 13, wherein creating an index for a part of the plurality of profiles in the cache comprises: selecting, based on an access frequency, the part of profiles to create the index. 16. The device of claim 13, wherein the acts further comprise: creating a first record for a first physical extent, the first record indicating a physical starting address of the first physical extent and a first number of physical blocks contained therein, at least a part of the plurality of logical blocks indicated by a first profile of the plurality of profiles being mapped to the first physical extent; creating a second record for a second physical extent, the second record indicating a physical starting address of the second physical extent and a second number of physical blocks contained therein, at least a part of the plurality of logical blocks indicated by a second profile of the plurality of profiles being mapped to the second physical extent; in response to a physical end address of the first physical extent and a physical starting address of the second physical extent being continuous, merging the first record and the second record to generate a third record, the third record indicating the physical starting address of the first physical extent and a sum of the first number and the second number; and storing the third record in the cache. 17. The device of claim 16, wherein the acts further comprise: in response to the physical end address of the first physical extent and the physical starting address of the second physical extent being non-continuous, storing the first record and the second record in the cache. 18. 
The device of claim 13, wherein the acts further comprise: receiving a request for a target profile, the request indicating an index associated with the target profile; searching the target profile in the cache based on the index. 19. The device of claim 18, wherein the acts further comprise: in response to the target profile being absent in the cache, creating the target profile based on the address mapping table. 20. The device of claim 18, wherein the request comprises a request to modify a part of the address mapping table contained in the target profile, the acts further comprising: in response to the target profile being present in the cache, updating the part of the address mapping table contained in the target profile. 21-25. (canceled)
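Claims 4 and 16 merge two physical-extent records whenever the end address of the first extent meets the starting address of the second. A short, hypothetical Python illustration of that rule (the record shape, block size, and addresses are assumed):

```python
def merge_records(first, second, block_size=512):
    """Merge two (physical_start, block_count) records if they are physically contiguous."""
    first_end = first["physical_start"] + first["block_count"] * block_size
    if first_end == second["physical_start"]:          # continuous: generate one merged record
        return [{"physical_start": first["physical_start"],
                 "block_count": first["block_count"] + second["block_count"]}]
    return [first, second]                             # non-continuous: keep both records in the cache

print(merge_records({"physical_start": 0,    "block_count": 16},
                    {"physical_start": 8192, "block_count": 8}))
# -> a single record covering 24 blocks starting at physical address 0
```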
2,100
274,005
15,954,661
2,131
A storage system includes a storage device that is configured to execute garbage collection and includes a first processor, and a control device that is configured to control the storage device and includes a memory and a second processor coupled to the memory, wherein the second processor is configured to receive a command for the storage device, store the received command into the memory, determine whether the number of commands stored in the memory is equal to or less than a first value, and transmit, to the storage device, a first instruction to start the garbage collection when the number of commands stored in the memory is equal to or less than the first value, and wherein the first processor is configured to start the garbage collection based on the first instruction.
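Read as a control loop, the abstract amounts to: queue incoming commands in memory, and send the storage device a start instruction only once the number of queued commands has dropped to a first value or below. The sketch below is a hypothetical Python model of the control-device side; the threshold, class names, and the choice to check the count after each completed command are assumptions, not the claimed design.

```python
from collections import deque

class GarbageCollectingDevice:
    def __init__(self):
        self.collecting = False
    def start_gc(self):                     # the first processor starts GC on the first instruction
        self.collecting = True

class ControlDevice:
    FIRST_VALUE = 2                         # assumed threshold for "few enough queued commands"

    def __init__(self, device):
        self.device = device
        self.queue = deque()                # memory holding received commands

    def receive(self, command):
        self.queue.append(command)          # store the received command into the memory

    def complete_one(self):
        if self.queue:
            self.queue.popleft()            # command executed against the storage device
        if len(self.queue) <= self.FIRST_VALUE and not self.device.collecting:
            self.device.start_gc()          # transmit the first instruction to start GC

device = GarbageCollectingDevice()
ctrl = ControlDevice(device)
for cmd in ("read", "write", "write", "read"):
    ctrl.receive(cmd)                       # four commands queued
ctrl.complete_one()                         # three remain: still above the assumed threshold
print(device.collecting)                    # False
ctrl.complete_one()                         # two remain: at the threshold, start instruction sent
print(device.collecting)                    # True
```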
1. A storage system comprising: a storage device that is configured to execute garbage collection and includes a first processor; and a control device that is configured to control the storage device and includes a memory and a second processor coupled to the memory, wherein the second processor is configured to receive a command for the storage device, store the received command into the memory, determine whether the number of commands stored in the memory is equal to or less than a first value, and transmit, to the storage device, a first instruction to start the garbage collection when the number of commands stored in the memory is equal to or less than the first value, and wherein the first processor is configured to start the garbage collection based on the first instruction. 2. The storage system according to claim 1, wherein the second processor is configured to transmit, to the storage device, a second instruction to stop the garbage collection when the number of commands stored in the memory becomes equal to or greater than a second value after the transmission of the first instruction, and wherein the first processor is configured to stop the garbage collection based on the second instruction. 3. The storage system according to claim 1, wherein the second processor is configured to transmit, to the storage device, the first instruction when an available capacity of the storage device is equal to or less than a first capacity value and the number of commands stored in the memory is equal to or less than the first value. 4. The storage system according to claim 2, wherein the second processor is configured to transmit, to the storage device, the second instruction when an available capacity of the storage device is equal to or greater than a second capacity value or when the number of commands stored in the memory is equal to or greater than the second value. 5. The storage system according to claim 1, wherein the second processor is configured to transmit, to the storage device, the first instruction when the number of commands stored in the memory is continuously equal to or less than the first value during a first time period. 6. The storage system according to claim 2, wherein the second processor is configured to transmit, to the storage device, the second instruction when the number of commands stored in the memory is continuously equal to or greater than the second value during a second time period. 7. The storage system according to claim 1, wherein the second processor is configured to: execute access to the storage device by executing the command stored in the memory, and transmit, based on the access, a notification indicating the completion of processes executed based on the command. 8. The storage system according to claim 2, wherein the first processor is configured to: start the garbage collection regardless of the first instruction when the available capacity is equal to or less than a third capacity value, and stop the garbage collection regardless of the second instruction when the available capacity becomes equal to or greater than a fourth capacity value greater than the third capacity value after the start of the garbage collection. 9. The storage system according to claim 8, wherein the first capacity value is greater than the third capacity value and less than the fourth capacity value. 10. 
The storage system according to claim 1, further comprising: a power supply device, wherein the second processor is configured to control the storage device so as to increase the amount of power to be supplied from the power supply device to the storage device when the second processor transmits the first instruction. 11. A control device for a storage device, the storage device being configured to execute garbage collection, the control device comprising: a memory; and a processor coupled to the memory and configured to: receive a command for the storage device, store the received command into the memory, determine whether the number of commands stored in the memory is equal to or less than a first value, and transmit, to the storage device, a first instruction to start the garbage collection when the number of commands stored in the memory is equal to or less than the first value, and wherein the storage device is configured to start the garbage collection based on the first instruction. 12. The control device according to claim 11, wherein the processor is configured to transmit, to the storage device, a second instruction to stop the garbage collection when the number of commands stored in the memory becomes equal to or greater than a second value after the transmission of the first instruction, and wherein the storage device is configured to stop the garbage collection based on the second instruction. 13. The control device according to claim 11, wherein the processor is configured to transmit, to the storage device, the first instruction when an available capacity of the storage device is equal to or less than a first capacity value and the number of commands stored in the memory is equal to or less than the first value. 14. The control device according to claim 12, wherein the processor is configured to transmit, to the storage device, the second instruction when an available capacity of the storage device is equal to or greater than a second capacity value or when the number of commands stored in the memory is equal to or greater than the second value. 15. The control device according to claim 11, wherein the processor is configured to transmit, to the storage device, the first instruction when the number of commands stored in the memory is continuously equal to or less than the first value during a first time period. 16. The control device according to claim 12, wherein the processor is configured to transmit, to the storage device, the second instruction when the number of commands stored in the memory is continuously equal to or greater than the second value during a second time period. 17. The control device according to claim 11, wherein the processor is configured to: execute access to the storage device by executing the command stored in the memory, and transmit, based on the access, a notification indicating the completion of processes executed based on the command. 18. The control device according to claim 12, wherein the storage device is configured to: start the garbage collection regardless of the first instruction when the available capacity is equal to or less than a third capacity value, and stop the garbage collection regardless of the second instruction when the available capacity becomes equal to or greater than a fourth capacity value greater than the third capacity value after the start of the garbage collection. 19. 
The control device according to claim 18, wherein the first capacity value is greater than the third capacity value and less than the fourth capacity value. 20. A method of controlling garbage collection executed by a storage device, the method comprising: receiving a command for the storage device; storing the received command into a memory; determining whether the number of commands stored in the memory is equal to or less than a first value; transmitting, to the storage device, a first instruction to start the garbage collection when the number of commands stored in the memory is equal to or less than the first value; and starting, by the storage device, the garbage collection based on the first instruction.
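Claims 8 and 18 add a device-side override: the storage device starts garbage collection on its own when its available capacity falls to a third capacity value and stops once the capacity recovers to a fourth, higher value, regardless of the control device's instructions. A hypothetical Python sketch of that hysteresis (the capacity values and units are assumed):

```python
class StorageDevice:
    THIRD_CAPACITY = 10      # assumed low-water mark (e.g. GiB of available capacity)
    FOURTH_CAPACITY = 30     # assumed high-water mark, greater than THIRD_CAPACITY

    def __init__(self, available_capacity):
        self.available_capacity = available_capacity
        self.collecting = False

    def on_capacity_change(self, available_capacity):
        self.available_capacity = available_capacity
        if not self.collecting and available_capacity <= self.THIRD_CAPACITY:
            self.collecting = True      # start GC regardless of the first instruction
        elif self.collecting and available_capacity >= self.FOURTH_CAPACITY:
            self.collecting = False     # stop GC regardless of the second instruction

dev = StorageDevice(available_capacity=50)
dev.on_capacity_change(8)    # below the low-water mark: GC starts autonomously
dev.on_capacity_change(20)   # between the marks: GC keeps running (hysteresis)
dev.on_capacity_change(35)   # above the high-water mark: GC stops
print(dev.collecting)        # False
```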
2,100
274,006
15,954,797
2,131
Embodiments of the present invention provide a method, system, and computer program product for allocating storage extents. Extent input/output information pertaining to an extent on a storage device is received by a computer, where the extent input/output information includes an access rate of data stored on the extent. The computer determines one or more periods of time where the input/output information exceeds a preconfigured threshold. The computer generates one or more of a first policy and a second policy based on the determined one or more periods where the first policy includes allocating the extent to a high performance disk within a tier storage system when data is stored during the determined periods and the second policy includes reallocating the extent from a low performance disk within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods.
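In effect, the method derives the time windows in which an extent's I/O exceeds the threshold and then picks between the two policies according to whether the system is thin provisioned. The following rough Python sketch illustrates that decision; the threshold value, hourly granularity, tier names, and policy strings are assumptions, not the claimed allocation or extent switch engines.

```python
THRESHOLD = 100   # assumed I/O-rate threshold (accesses per hour)

def hot_periods(hourly_access_rates):
    """Return the hours of the day whose access rate exceeds the threshold."""
    return [hour for hour, rate in enumerate(hourly_access_rates) if rate > THRESHOLD]

def choose_policy(thin_provisioned, current_hour, periods):
    if current_hour not in periods:
        return "keep extent on low-performance tier"
    if thin_provisioned:
        # first policy: allocate new extents directly on the high-performance tier
        return "allocate extent on high-performance tier"
    # second policy: move an already-allocated extent up for the hot period
    return "reallocate extent from low- to high-performance tier"

rates = [10] * 8 + [250] * 4 + [20] * 12   # synthetic 24-hour access profile
periods = hot_periods(rates)               # -> hours 8 through 11
print(choose_policy(thin_provisioned=True,  current_hour=9,  periods=periods))
print(choose_policy(thin_provisioned=False, current_hour=9,  periods=periods))
print(choose_policy(thin_provisioned=False, current_hour=15, periods=periods))
```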
1. A method for allocating storage extents, the method comprising: determining, by the computer, one or more periods of time where input/output information exceeds a preconfigured threshold; and generating, by the computer, one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 2. The method of claim 1, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 3. The method of claim 1, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 4. 
The method of claim 1, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 5. The method of claim 1, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 6. The method of claim 1, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent. 7. The method of claim 2, wherein executing the first policy and executing the second policy occurs prior to the one or more determined periods. 8. 
A computer system for allocating storage extents, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: determining, by the computer, one or more periods of time where input/output information exceeds a preconfigured threshold; and generating, by the computer, one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 9. The computer system of claim 8, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 10. The computer system of claim 8, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 11. 
The computer system of claim 8, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 12. The computer system of claim 8, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 13. The computer system of claim 8, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent. 14. The computer system of claim 9, wherein executing the first policy and executing the second policy occurs prior to the one or more determined periods. 15. 
A computer program product for allocating storage extents, the computer program product comprising: one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor that is capable of performing a method, the method comprising: determining, by the computer, one or more periods of time where the input/output information exceeds a preconfigured threshold; and generating one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 16. The computer program product of claim 15, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 17. The computer program product of claim 15, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 18. 
The computer program product of claim 15, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 19. The computer program product of claim 15, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 20. The computer program product of claim 15, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent.
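The arithmetic spread across claims 3, 4 and 6 above (policy hit ratios, maximum resource usage percentages, and the weighted I/O impact of a time interval) can be read as the following Python sketch; the function names and the exponential weighting are assumptions, since the claims only say the weight is based on how soon stored data is first accessed.

```python
# Sketch of the arithmetic in claims 3, 4 and 6 above. Names and the exponential
# decay are assumptions; the division structure follows the claim language.
import math


def hit_ratio(search_hits: int, total_accesses: int) -> float:
    """Data search hits divided by total data accesses for the devices under a policy."""
    return search_hits / total_accesses if total_accesses else 0.0


def max_resource_usage(first_hits: int, first_total: int,
                       second_hits: int, second_total: int) -> tuple:
    """Each policy's maximum resource usage percentage is its hit ratio divided by the
    sum of the two hit ratios (claims 3 and 4)."""
    r1, r2 = hit_ratio(first_hits, first_total), hit_ratio(second_hits, second_total)
    total = r1 + r2
    if total == 0:
        return 0.5, 0.5  # no history yet: even split (assumption, not in the claims)
    return r1 / total, r2 / total


def io_impact(total_io_in_interval: int, store_time: float,
              first_access_time: float, decay: float = 0.1) -> float:
    """Claim 6: impact = total I/O occurrences in the interval times an I/O weight,
    where the weight reflects how soon after being stored the data was first accessed."""
    weight = math.exp(-decay * max(first_access_time - store_time, 0.0))
    return total_io_in_interval * weight


if __name__ == "__main__":
    print(max_resource_usage(800, 1000, 300, 1000))  # roughly (0.727, 0.273)
    print(io_impact(5000, store_time=0.0, first_access_time=2.0))
```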
Embodiments of the present invention provide a method, system, and computer program product for allocating storage extents. Extent input/output information pertaining to an extent on a storage device is received, by a computer, where the extent input/output information includes an access rate of data stored on the extent. The computer determines one or more periods of time where the input/output information exceeds a preconfigured threshold. The computer generates one or more of a first policy and a second policy based on the determined one or more periods where the first policy includes allocating the extent to a high performance disk within a tier storage system when data is stored during the determined periods and the second policy includes reallocating the extent from a low performance disk within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods.1. A method for allocating storage extents, the method comprising: determining, by the computer, one or more periods of time where input/output information exceeds a preconfigured threshold; and generating, by the computer, one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 2. The method of claim 1, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 3. 
The method of claim 1, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 4. The method of claim 1, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 5. The method of claim 1, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 6. The method of claim 1, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent. 7. The method of claim 2, wherein executing the first policy and executing the second policy occurs prior to the one or more determined periods. 8. 
A computer system for allocating storage extents, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: determining, by the computer, one or more periods of time where input/output information exceeds a preconfigured threshold; and generating, by the computer, one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 9. The computer system of claim 8, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 10. The computer system of claim 8, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 11. 
The computer system of claim 8, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 12. The computer system of claim 8, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 13. The computer system of claim 8, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent. 14. The computer system of claim 9, wherein executing the first policy and executing the second policy occurs prior to the one or more determined periods. 15. 
A computer program product for allocating storage extents, the computer program product comprising: one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor that is capable of performing a method, the method comprising: determining, by the computer, one or more periods of time where the input/output information exceeds a preconfigured threshold; and generating one or more of a first policy and a second policy based on the determined one or more periods wherein the first policy includes allocating the extent to a high performance storage device within a tier storage system when data is stored during the one or more determined periods and the second policy includes reallocating the extent from a low performance storage device within the tier storage system to a high performance storage device within the tier storage system during the one or more determined periods, and wherein an allocation engine is used to generate one or more of the first policy and the second policy for a thin provisioned storage system, and wherein an extent switch engine is used to generate one or more of the first policy and second policy for a non-thin provisioned storage system, and wherein the allocation engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, a plurality of extent creation times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold, and wherein the extent switch engine compiles a plurality of monthly historical maximums to determine a relationship between two or more dates, one or more extent dirty times, a plurality of input/output densities, and whether a plurality of data written during a cycle is hot or cold. 16. The computer program product of claim 15, further comprising: based on determining the storage device is thin provisioned, executing the first policy; and based on determining the storage device is not thin provisioned, executing the second policy. 17. The computer program product of claim 15, wherein generating the first policy includes a user assigning a first maximum resource usage percentage associated with one or more storage devices on the computer wherein the first maximum resource usage percentage equals a first policy hit ratio divided by a sum of the first policy hit ratio and a second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 18. 
The computer program product of claim 15, wherein generating the second policy includes a user assigning a second maximum resource usage percentage associated with the one or more storage devices on the computer wherein the second maximum resource usage percentage equals the second policy hit ratio divided by the sum of the first policy hit ratio and the second policy hit ratio, and wherein the first policy hit ratio equals a first data search hit total of the one or more storage devices associated with the first policy divided by a first total data accesses of the one or more storage devices associated with the first policy, and wherein the second policy hit ratio equals a second data search hit total of the one or more storage devices associated with the second policy divided by a second total data accesses of the one or more storage devices associated with the second policy. 19. The computer program product of claim 15, wherein the input/output information further includes a read rate for the extent, a write rate for the extent, a sequential rate for the extent, a random rate for the extent, and total input/output accesses for the extent during a time period. 20. The computer program product of claim 15, wherein determining one or more periods of time further comprises determining an input/output impact on the extent during a time interval within the one or more determined time periods, and wherein the input/output impact is determined by multiplying total input/output occurrences during the time interval by an input/output weight, and wherein the input/output weight is based on a proximity in time between a first data stored on the extent and a first access of the data stored on the extent.
2,100
274,007
15,955,055
2,131
Embodiments of the present disclosure relate to a method, device and computer readable medium for managing storage. The method comprises: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area. The method further comprises: in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit. In addition, the method further comprises: in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights.
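A hedged sketch of the control flow described in the abstract above: a first storage processor checks its current-accessibility and access-right indicators and, lacking the right, queues the request and asks the peer associated with the mirror storage unit. The class and method names are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch, assuming invented names: models a storage processor that either
# executes, queues, or queues-and-requests-the-right depending on its indicators.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class StorageProcessor:
    name: str
    accessible: bool = True          # current accessibility indicator
    has_access_right: bool = False   # access right indicator
    waiting: deque = field(default_factory=deque)
    peer: "StorageProcessor | None" = None

    def handle(self, request: str) -> str:
        if self.accessible:
            return f"{self.name}: executing {request}"
        if self.has_access_right:
            self.waiting.append(request)
            return f"{self.name}: queued {request} (right already held)"
        self.waiting.append(request)
        if self.peer is not None:
            self.peer.grant(self)    # request the access right from the peer
        return f"{self.name}: queued {request}, requested right from peer"

    def grant(self, requester: "StorageProcessor") -> None:
        # Write access rights are exclusive: handing the right over means losing it here,
        # and the requester's unit becomes accessible again.
        self.has_access_right = False
        requester.has_access_right = True
        requester.accessible = True


if __name__ == "__main__":
    spa = StorageProcessor("SP-A", accessible=False)
    spb = StorageProcessor("SP-B", has_access_right=True)
    spa.peer, spb.peer = spb, spa
    print(spa.handle("write LUN0 block 7"))
    print(spa.has_access_right, spb.has_access_right)
```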
1. A method of managing storage, comprising: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area; in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit; and in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights. 2. The method according to claim 1, further comprising: adding the access request into a waiting queue to wait to be executed. 3. The method according to claim 1, further comprising: in response to the first storage processor having the access right, adding the access request into a waiting queue to wait to be executed. 4. The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and determining whether the storage unit is currently accessible comprises: updating the current accessibility indicator based on the access request; and determining whether the storage unit is currently accessible. 5. The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, the method further comprising: in response to receiving, from the second storage processor, a response indicating that the access right is granted, updating the current accessibility indicator and the access right indicator based on the response; and obtaining from a waiting queue a request to be executed. 6. The method according to claim 1, wherein the access request is a write access request, and requesting a second storage processor for the access right in response to the first storage processor having no access right comprises: in response to the first storage processor having no write access right, writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor; requesting the second storage processor for the write access right; and sending the data to be written to the second storage processor. 7. The method according to claim 1, wherein the access request is a write access request, the method further comprising: in response to the storage unit being currently write-accessible, writing data in a temporary storage unit associated with the storage unit into the storage unit, the temporary storage unit including at least one temporary storage area. 8. 
The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, the method further comprising: in response to the storage unit being currently accessible, accessing the storage unit; in response to the accessing of the storage unit being completed, determining whether the access request is a write access request; in response to the access request being the write access request, updating the current accessibility indicator based on the write access request; and obtaining from a waiting queue a request to be executed. 9. The method according to claim 8, further comprising: in response to the access request not being the write access request, determining whether there is an existing read access request for the storage unit; in response to absence of the read access request for the storage unit, updating the current accessibility indicator based on the access right indicator; and obtaining from the waiting queue the request to be executed. 10. A method of managing storage, comprising: in response to obtaining, at a first storage processor, a request for an access right to a storage unit of the first storage processor from a second storage processor, determining whether to permit the access right requested by the second storage processor to be granted or not currently, the storage unit including at least one storage area, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights; in response to permitting the requested access right to be granted, updating an access right indicator based on the request, the access right indicator indicating an access right of the first storage processor to the storage unit; and sending, to the second storage processor, a response indicating that the requested access right is granted. 11. The method according to claim 10, further comprising: in response to preventing the requested access right from being granted, adding the request into a waiting queue to wait to be executed. 12. The method according to claim 10, wherein the request is a request for a write access right, the method further comprising: writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor. 13. The method according to claim 12, wherein updating an access right indicator based on the request in response to permitting the requested access right to be granted comprises: in response to permitting the requested write access right to be granted, writing the data to be written from the temporary storage unit into the storage unit; and updating the access right indicator based on the requested write access right. 14. 
A device for managing storage, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform acts including: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area; in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit; and in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights. 15. The device according to claim 14, wherein the acts further include: adding the access request into a waiting queue to wait to be executed. 16. The device according to claim 14, wherein the acts further include: in response to the first storage processor having the access right, adding the access request into a waiting queue to wait to be executed. 17. The device according to claim 14, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and determining whether the storage unit is currently accessible comprises: updating the current accessibility indicator based on the access request; and determining whether the storage unit is currently accessible. 18. The device according to claim 14, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, and the acts further include: in response to receiving, from the second storage processor, a response indicating that the access right is granted, updating the current accessibility indicator and the access right indicator based on the response; and obtaining from a waiting queue a request to be executed. 19. The device according to claim 14, wherein the access request is a write access request, and requesting a second storage processor for the access right in response to the first storage processor having no access right comprises: in response to the first storage processor having no write access right, writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor; requesting the second storage processor for the write access right; and sending the data to be written to the second storage processor. 20. The device according to claim 14, wherein the access request is a write access request, and the acts further include: in response to the storage unit being currently write-accessible, writing data in a temporary storage unit associated with the storage unit into the storage unit, the temporary storage unit including at least one temporary storage area. 21-28. (canceled)
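Claims 6, 12 and 13 above describe staging writes in a temporary storage unit until the exclusive write access right is obtained. The sketch below models that hand-off under assumed names (MirroredUnit, flush_temp, receive_forwarded); the claims do not prescribe these structures.

```python
# Hedged sketch of the temporary-storage-unit handling: stage writes locally,
# obtain the exclusive right from the peer, flush staged writes, and forward the
# data so the mirror copy stays consistent. Names and structures are assumptions.
class MirroredUnit:
    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}    # the storage unit proper
        self.temp: dict[int, bytes] = {}      # temporary storage unit (staged writes)
        self.write_right = False

    def write(self, lba: int, data: bytes, peer: "MirroredUnit") -> None:
        if self.write_right:
            self.blocks[lba] = data
            return
        # No write access right: stage locally, ask the peer, and forward the data.
        self.temp[lba] = data
        peer.grant_write_right(self)
        peer.receive_forwarded(lba, data)

    def grant_write_right(self, requester: "MirroredUnit") -> None:
        self.write_right = False              # rights are exclusive
        requester.write_right = True
        requester.flush_temp()

    def flush_temp(self) -> None:
        # Once the right is granted, move staged writes into the storage unit.
        self.blocks.update(self.temp)
        self.temp.clear()

    def receive_forwarded(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data               # keep the mirror copy consistent


if __name__ == "__main__":
    a, b = MirroredUnit(), MirroredUnit()
    b.write_right = True
    a.write(3, b"hello", peer=b)
    print(a.blocks, b.blocks, a.write_right, b.write_right)
```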
Embodiments of the present disclosure relate to a method, device and computer readable medium for managing storage. The method comprises: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area. The method further comprises: in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit. In addition, the method further comprises: in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights.1. A method of managing storage, comprising: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area; in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit; and in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights. 2. The method according to claim 1, further comprising: adding the access request into a waiting queue to wait to be executed. 3. The method according to claim 1, further comprising: in response to the first storage processor having the access right, adding the access request into a waiting queue to wait to be executed. 4. The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and determining whether the storage unit is currently accessible comprises: updating the current accessibility indicator based on the access request; and determining whether the storage unit is currently accessible. 5. The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, the method further comprising: in response to receiving, from the second storage processor, a response indicating that the access right is granted, updating the current accessibility indicator and the access right indicator based on the response; and obtaining from a waiting queue a request to be executed. 6. The method according to claim 1, wherein the access request is a write access request, and requesting a second storage processor for the access right in response to the first storage processor having no access right comprises: in response to the first storage processor having no write access right, writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor; requesting the second storage processor for the write access right; and sending the data to be written to the second storage processor. 7. 
The method according to claim 1, wherein the access request is a write access request, the method further comprising: in response to the storage unit being currently write-accessible, writing data in a temporary storage unit associated with the storage unit into the storage unit, the temporary storage unit including at least one temporary storage area. 8. The method according to claim 1, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, the method further comprising: in response to the storage unit being currently accessible, accessing the storage unit; in response to the accessing of the storage unit being completed, determining whether the access request is a write access request; in response to the access request being the write access request, updating the current accessibility indicator based on the write access request; and obtaining from a waiting queue a request to be executed. 9. The method according to claim 8, further comprising: in response to the access request not being the write access request, determining whether there is an existing read access request for the storage unit; in response to absence of the read access request for the storage unit, updating the current accessibility indicator based on the access right indicator; and obtaining from the waiting queue the request to be executed. 10. A method of managing storage, comprising: in response to obtaining, at a first storage processor, a request for an access right to a storage unit of the first storage processor from a second storage processor, determining whether to permit the access right requested by the second storage processor to be granted or not currently, the storage unit including at least one storage area, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights; in response to permitting the requested access right to be granted, updating an access right indicator based on the request, the access right indicator indicating an access right of the first storage processor to the storage unit; and sending, to the second storage processor, a response indicating that the requested access right is granted. 11. The method according to claim 10, further comprising: in response to preventing the requested access right from being granted, adding the request into a waiting queue to wait to be executed. 12. The method according to claim 10, wherein the request is a request for a write access right, the method further comprising: writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor. 13. The method according to claim 12, wherein updating an access right indicator based on the request in response to permitting the requested access right to be granted comprises: in response to permitting the requested write access right to be granted, writing the data to be written from the temporary storage unit into the storage unit; and updating the access right indicator based on the requested write access right. 14. 
A device for managing storage, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform acts including: in response to obtaining, at a first storage processor, an access request for a storage unit, determining whether the storage unit is currently accessible, the storage unit including at least one storage area; in response to the storage unit being currently inaccessible, determining whether the first storage processor has an access right to the storage unit; and in response to the first storage processor having no access right, requesting a second storage processor for the access right, the second storage processor being associated with a mirror storage unit of the storage unit, and the first and second storage processors having exclusive write access rights. 15. The device according to claim 14, wherein the acts further include: adding the access request into a waiting queue to wait to be executed. 16. The device according to claim 14, wherein the acts further include: in response to the first storage processor having the access right, adding the access request into a waiting queue to wait to be executed. 17. The device according to claim 14, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and determining whether the storage unit is currently accessible comprises: updating the current accessibility indicator based on the access request; and determining whether the storage unit is currently accessible. 18. The device according to claim 14, wherein a current accessibility indicator indicates whether the storage unit is currently accessible, and an access right indicator indicates the access right, and the acts further include: in response to receiving, from the second storage processor, a response indicating that the access right is granted, updating the current accessibility indicator and the access right indicator based on the response; and obtaining from a waiting queue a request to be executed. 19. The device according to claim 14, wherein the access request is a write access request, and requesting a second storage processor for the access right in response to the first storage processor having no access right comprises: in response to the first storage processor having no write access right, writing data to be written into a temporary storage unit associated with the storage unit, the temporary storage unit including at least one temporary storage area, and the data to be written being obtained at the first storage processor; requesting the second storage processor for the write access right; and sending the data to be written to the second storage processor. 20. The device according to claim 14, wherein the access request is a write access request, and the acts further include: in response to the storage unit being currently write-accessible, writing data in a temporary storage unit associated with the storage unit into the storage unit, the temporary storage unit including at least one temporary storage area. 21-28. (canceled)
2,100
274,008
15,955,028
2,131
Embodiments of the present disclosure relate to a method and device and computer readable medium for storage management. The method comprises determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block. The method further includes determining a load condition of the cache based on the queuing condition of the I/O requests. Moreover, the method further includes in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system.
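As a rough illustration of the abstract above, the sketch below classifies a cache as busy or idle from its I/O request queue; the queue-length threshold and idle timeout are assumed values, not taken from the disclosure.

```python
# Sketch only: the queue-length threshold and idle timeout are assumptions used to
# illustrate the busy/idle classification of a per-file-system flash cache.
import time


class FlashCache:
    def __init__(self, blocks: int, busy_queue_len: int = 64, idle_seconds: float = 5.0):
        self.blocks = blocks                 # flash blocks currently owned by this cache
        self.queued_requests = 0             # I/O requests waiting on this cache
        self.busy_queue_len = busy_queue_len
        self.idle_seconds = idle_seconds
        self.last_request_at = time.monotonic()

    def on_request_queued(self) -> None:
        self.queued_requests += 1
        self.last_request_at = time.monotonic()

    def on_request_done(self) -> None:
        self.queued_requests = max(0, self.queued_requests - 1)

    def is_busy(self) -> bool:
        # Busy when the number of queued I/O requests reaches the threshold number.
        return self.queued_requests >= self.busy_queue_len

    def is_idle(self) -> bool:
        # Idle when no requests have been queued for the configured period of time.
        return (self.queued_requests == 0
                and time.monotonic() - self.last_request_at >= self.idle_seconds)


if __name__ == "__main__":
    cache = FlashCache(blocks=8, busy_queue_len=2)
    cache.on_request_queued()
    cache.on_request_queued()
    print(cache.is_busy())   # True: queue length reached the threshold
```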
1. A method of storage management, comprising: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 2. The method according to claim 1, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 3. The method according to claim 1, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 4. The method according to claim 1, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N. 5. The method according to claim 1, further comprising: in response to determining, based on the queuing condition of the I/O requests, that the cache is in an idle status, determining whether the cache includes unused flash blocks; and in response to the cache including the unused flash blocks, removing at least one of the unused flash blocks from the cache. 6. The method according to claim 5, wherein determining the load condition of the cache comprises: in response to absence of I/O requests queued for the cache for a second period of time, determining that the cache is in the idle status. 7. The method according to claim 5, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests completed for the cache for a third period of time failing to reach a third threshold number, determining that the cache is in the idle status. 8. The method according to claim 5, wherein the at least one flash block includes a plurality of flash blocks, the method further comprising: in response to determining that the number of unused flash blocks in the plurality of flash blocks exceeds a predetermined number, removing the predetermined number of unused flash blocks from the cache. 9. A device for storage management, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform acts including: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 10. 
The device according to claim 9, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 11. The device according to claim 9, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 12. The device according to claim 9, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N. 13. The device according to claim 9, wherein the acts further include: in response to determining, based on the queuing condition of the I/O requests, that the cache is in an idle status, determining whether the cache includes unused flash blocks; and in response to the cache including the unused flash blocks, removing at least one of the unused flash blocks from the cache. 14. The device according to claim 13, wherein determining the load condition of the cache comprises: in response to absence of I/O requests queued for the cache for a second period of time, determining that the cache is in the idle status. 15. The device according to claim 13, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests completed for the cache for a third period of time failing to reach a third threshold number, determining that the cache is in the idle status. 16. The device according to claim 13, wherein the at least one flash block includes a plurality of flash blocks, the acts further including: in response to determining that the number of unused flash blocks of the plurality of flash blocks exceeds a predetermined number, removing the predetermined number of unused flash blocks from the cache. 17. A computer readable storage medium having computer readable program instructions stored thereon, the computer readable program instructions, when executed by a processing unit, causing the processing unit to perform the steps of: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 18. The computer readable storage medium of claim 17, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 19. The computer readable storage medium according to claim 17, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 20. 
The computer readable storage medium according to claim 17, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N.
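Claims 4, 5 and 8 above size the adjustment: when busy, M additional flash blocks are borrowed from the second file system with M a multiple of N; when idle, surplus unused blocks beyond a predetermined number are returned. A small sketch under those assumptions (the growth factor is invented):

```python
# Hedged sketch of the block (de)allocation sizing; the growth factor and the
# predetermined number are illustrative parameters, not values from the claims.

def blocks_to_allocate(current_blocks: int, growth_factor: int = 1) -> int:
    """M additional flash blocks, with M a multiple of N (here M = growth_factor * N)."""
    return growth_factor * current_blocks


def blocks_to_remove(unused_blocks: int, predetermined_number: int) -> int:
    """Remove the predetermined number of unused blocks once the unused count exceeds it."""
    return predetermined_number if unused_blocks > predetermined_number else 0


if __name__ == "__main__":
    n = 8
    print("allocate:", blocks_to_allocate(n))  # 8 extra blocks (M = N)
    print("remove:", blocks_to_remove(unused_blocks=12, predetermined_number=10))
```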
Embodiments of the present disclosure relate to a method and device and computer readable medium for storage management. The method comprises determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block. The method further includes determining a load condition of the cache based on the queuing condition of the I/O requests. Moreover, the method further includes in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system.1. A method of storage management, comprising: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 2. The method according to claim 1, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 3. The method according to claim 1, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 4. The method according to claim 1, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N. 5. The method according to claim 1, further comprising: in response to determining, based on the queuing condition of the I/O requests, that the cache is in an idle status, determining whether the cache includes unused flash blocks; and in response to the cache including the unused flash blocks, removing at least one of the unused flash blocks from the cache. 6. The method according to claim 5, wherein determining the load condition of the cache comprises: in response to absence of I/O requests queued for the cache for a second period of time, determining that the cache is in the idle status. 7. The method according to claim 5, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests completed for the cache for a third period of time failing to reach a third threshold number, determining that the cache is in the idle status. 8. The method according to claim 5, wherein the at least one flash block includes a plurality of flash blocks, the method further comprising: in response to determining that the number of unused flash blocks in the plurality of flash blocks exceeds a predetermined number, removing the predetermined number of unused flash blocks from the cache. 9. 
A device for storage management, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform acts including: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 10. The device according to claim 9, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 11. The device according to claim 9, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 12. The device according to claim 9, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N. 13. The device according to claim 9, wherein the acts further include: in response to determining, based on the queuing condition of the I/O requests, that the cache is in an idle status, determining whether the cache includes unused flash blocks; and in response to the cache including the unused flash blocks, removing at least one of the unused flash blocks from the cache. 14. The device according to claim 13, wherein determining the load condition of the cache comprises: in response to absence of I/O requests queued for the cache for a second period of time, determining that the cache is in the idle status. 15. The device according to claim 13, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests completed for the cache for a third period of time failing to reach a third threshold number, determining that the cache is in the idle status. 16. The device according to claim 13, wherein the at least one flash block includes a plurality of flash blocks, the acts further including: in response to determining that the number of unused flash blocks of the plurality of flash blocks exceeds a predetermined number, removing the predetermined number of unused flash blocks from the cache. 17. 
A computer readable storage medium having computer readable program instructions stored thereon, the computer readable program instructions, when executed by a processing unit, causing the processing unit to perform the steps of: determining a queuing condition of I/O requests of a cache of a first file system in a storage, the cache including at least one flash block; determining a load condition of the cache based on the queuing condition of the I/O requests; and in response to determining that the cache is in a busy status, allocating to the cache at least one additional flash block from a second file system in the storage, the second file system being different from the first file system. 18. The computer readable storage medium of claim 17, wherein determining the load condition of the cache comprises: in response to the number of the I/O requests queued for the cache reaching a first threshold number, determining that the cache is in the busy status. 19. The computer readable storage medium according to claim 17, wherein determining the load condition of the cache comprises: in response to the number of I/O requests queued for the cache reaching a second threshold number for a first period of time that exceeds a predetermined length, determining that the cache is in the busy status. 20. The computer readable storage medium according to claim 17, wherein the at least one flash block includes N flash blocks, and allocating the at least one additional flash block to the cache comprises: allocating M additional flash blocks to the cache, M and N being natural numbers and M being a multiple of N.
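The busy/idle detection and block-allocation flow described in the claims above can be illustrated with a short sketch. This is a minimal model, not the claimed implementation: the threshold values, period lengths, and the `FlashCache`/donor-pool structures are assumptions chosen for illustration.

```python
import time
from collections import deque
from typing import Deque, List, Optional

# Assumed tuning values; the claims only speak of "threshold numbers" and "periods of time".
FIRST_THRESHOLD = 64    # queue depth that immediately marks the cache busy
SECOND_THRESHOLD = 16   # lower depth that marks it busy only when sustained
FIRST_PERIOD = 5.0      # seconds the lower depth must persist
IDLE_PERIOD = 30.0      # seconds without queued I/O before the cache counts as idle


class FlashCache:
    """Toy model of a first-file-system cache built from flash blocks."""

    def __init__(self, blocks: List[int]) -> None:
        self.blocks = blocks                      # flash blocks currently assigned to the cache
        self.queued = 0                           # I/O requests currently queued for the cache
        self.above_since: Optional[float] = None  # when the queue first crossed SECOND_THRESHOLD
        self.last_io_time = time.monotonic()      # last time any request was queued or completed

    def is_busy(self, now: float) -> bool:
        if self.queued >= FIRST_THRESHOLD:
            return True
        if self.queued >= SECOND_THRESHOLD:
            if self.above_since is None:
                self.above_since = now
            return now - self.above_since >= FIRST_PERIOD
        self.above_since = None
        return False

    def is_idle(self, now: float) -> bool:
        return self.queued == 0 and now - self.last_io_time >= IDLE_PERIOD


def expand(cache: FlashCache, donor_free_blocks: Deque[int], multiple: int = 1) -> None:
    """Borrow M additional flash blocks from the second file system, M being a multiple of N."""
    m = min(multiple * len(cache.blocks), len(donor_free_blocks))
    cache.blocks.extend(donor_free_blocks.popleft() for _ in range(m))


def shrink(cache: FlashCache, unused: List[int], limit: int) -> List[int]:
    """Return up to `limit` unused blocks to the second file system when the cache is idle."""
    released = [b for b in unused if b in cache.blocks][:limit]
    for b in released:
        cache.blocks.remove(b)
    return released
```

A caller would update `queued` and `last_io_time` from its I/O path, then invoke `expand` when `is_busy` returns True and `shrink` when `is_idle` does.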
2,100
274,009
15,955,004
2,131
A method and a manager for managing a storage system including a manager and a storage device. The storage device includes a data region and a metadata region. The data region is divided into data blocks. The metadata region stores metadata describing zeroing states of the data blocks. The method comprises allocating a metadata cache in a memory of the manager. The metadata cache includes respective zeroing indication bits indicative of the zeroing states of the corresponding data blocks. The allocating procedure comprises allocating a user data cache for reading or writing user data and allocating a background zeroing cache for a background zeroing operation of the storage device. The method further comprises, in response to receiving an I/O request for the storage system, processing the I/O request with the metadata cache.
1. A method of managing a storage system, the storage system including a manager and a storage device, the storage device including a data region being divided into data blocks and a metadata region storing metadata describing zeroing states of the data blocks, the method comprising: allocating a metadata cache in a memory of the manager, the metadata cache including respective zeroing indication bits indicative of the zeroing states of the corresponding data blocks, the allocating comprising: allocating a user data cache for reading or writing user data, and allocating a background zeroing cache for a background zeroing operation of the storage device; and in response to receiving an I/O request for the storage system, processing the I/O request with the metadata cache. 2. The method of claim 1, wherein processing the I/O request with the metadata cache comprises: in response to the I/O request being a read request, determining whether zeroing indication bits in the metadata cache associated with data blocks corresponding to the read request are all set as a predetermined value; and in response to the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the read request being all set as the predetermined value, sending the read request to the storage device. 3. The method of claim 2, further comprising: in response to at least one of the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the read request being not set as the predetermined value, reading metadata associated with the data blocks corresponding to the read request; and updating the metadata cache with the read metadata. 4. The method of claim 1, wherein updating the metadata cache with the read metadata comprises: in response to the I/O request being a write request, determining whether zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request are all set as a predetermined value; and in response to the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request being all set as the predetermined value, performing a write operation to the storage device. 5. The method of claim 4, further comprising: in response to at least one of the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request being not set as the predetermined value, reading metadata associated with the data blocks corresponding to the write request; and updating, with the read metadata, the metadata cache associated with the data blocks corresponding to the write request. 6. The method of claim 5, wherein updating, with the read metadata, the metadata cache associated with the data blocks corresponding to the write request comprises: determining whether the data blocks corresponding to the write request have been zeroed; and in response to the data blocks corresponding to the write request having not been zeroed, zeroing the data blocks corresponding to the write request; performing a write operation to the storage device; updating the metadata associated with the data blocks corresponding to the write request; and updating the metadata cache associated with the data blocks corresponding to the write request. 7. 
The method of claim 1, wherein processing the I/O request with the metadata cache comprises: obtaining a range of data blocks corresponding to the I/O request; determining whether the range of data blocks corresponding to the I/O request overlaps with a range of data blocks corresponding to the background zeroing cache; in response to the range of data blocks corresponding to the I/O request overlapping with the range of data blocks corresponding to the background zeroing cache, invalidating the overlapping portion in the background zeroing cache; and in response to the range of data blocks corresponding to the I/O request not overlapping with the range of data blocks corresponding to the background zeroing cache, determining whether the range of data blocks corresponding to the I/O request overlaps with the user data cache. 8. The method of claim 7, further comprising: in response to the range of data blocks corresponding to the I/O request overlapping with the range of data blocks corresponding to the user data cache, updating the user data cache; and in response to the range of data blocks corresponding to the I/O request not overlapping with the range of data blocks corresponding to the user data cache, obtaining a free user data cache. 9. The method of claim 1, wherein the background zeroing operation comprises: looking up a data block to be zeroed with a checkpoint, the checkpoint indicating an index of the data block to be zeroed; determining whether zeroing indication bits in the background zeroing cache corresponding to the data block to be zeroed are all set as a predetermined value; and in response to the zeroing indication bits in the background zeroing cache associated with the data block to be zeroed being all set as the predetermined value, updating the checkpoint to a next data block. 10. The method of claim 9, further comprising: in response to at least one of the zeroing indication bits in the background zeroing cache associated with the data block to be zeroed being not set as the predetermined value, reading metadata corresponding to the data block to be zeroed; and updating the metadata cache with the read metadata. 11. A manager for managing a storage system, the storage system including the manager and a storage device, the storage device including a data region and a metadata region, the data region being divided into data blocks, the metadata region storing metadata describing zeroing states of the data blocks, the manager including a processor and a memory coupled to the processor and having instructions stored thereon, the instructions, when executed by the processor, causing the manager to perform acts including: allocating a metadata cache in the memory, the metadata cache including respective zeroing indication bits indicative of the zeroing states of the corresponding data blocks, the allocating comprising: allocating a user data cache for reading or writing user data, and allocating a background zeroing cache for a background zeroing operation of the storage device; and in response to receiving an I/O request for the storage system, processing the I/O request with the metadata cache. 12. 
The manager of claim 11, wherein processing the I/O request with the metadata cache comprises: in response to the I/O request being a read request, determining whether zeroing indication bits in the metadata cache associated with the data blocks corresponding to the read request are all set as a predetermined value; and in response to the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the read request being all set as the predetermined value, sending the read request to the storage device. 13. The manager of claim 12, wherein the acts further include: in response to at least one of the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the read request being not set as the predetermined value, reading metadata associated with the data blocks corresponding to the read request; and updating the metadata cache with the read metadata. 14. The manager of claim 11, wherein updating the metadata cache with the read metadata comprises: in response to the I/O request being a write request, determining whether zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request are all set as a predetermined value; and in response to the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request being all set as the predetermined value, performing a write operation to the storage device. 15. The manager of claim 14, the acts further comprising: in response to at least one of the zeroing indication bits in the metadata cache associated with the data blocks corresponding to the write request being not set as the predetermined value, reading metadata associated with the data blocks corresponding to the write request; and updating, with the read metadata, the metadata cache associated with the data blocks corresponding to the write request. 16. The manager of claim 15, wherein updating, with the read metadata, the metadata cache associated with the data blocks corresponding to the write request comprises: determining whether the data blocks corresponding to the write request have been zeroed; and in response to the data blocks corresponding to the write request having not been zeroed, zeroing the data blocks corresponding to the write request; performing a write operation to the storage device; updating the metadata associated with the data blocks corresponding to the write request; and updating the metadata cache associated with the data blocks corresponding to the write request. 17. The manager of claim 11, wherein processing the I/O request with the metadata cache comprises: obtaining a range of data blocks corresponding to the I/O request; determining whether the range of data blocks corresponding to the I/O request overlaps with a range of data blocks corresponding to the background zeroing cache; in response to the range of the data blocks corresponding to the I/O request overlapping with the range of the data blocks corresponding to the background zeroing cache, invalidating the overlapping portion in the background zeroing cache; and in response to the range of the data blocks corresponding to the I/O request not overlapping with the range of the data blocks corresponding to the background zeroing cache, determining whether the range of the data blocks corresponding to the I/O request overlaps with the user data cache. 18. 
The manager of claim 17, further comprising: in response to the range of the data blocks corresponding to the I/O request overlapping with the range of the data blocks corresponding to the user data cache, updating the user data cache; and in response to the range of the data blocks corresponding to the I/O request not overlapping with the range of the data blocks corresponding to the user data cache, obtaining a free user data cache. 19. The manager of claim 11, wherein the background zeroing operation comprises: looking up a data block to be zeroed with a checkpoint, the checkpoint indicating an index of the data block to be zeroed; determining whether zeroing indication bits in the background zeroing cache corresponding to the data block to be zeroed are all set as a predetermined value; and in response to the zeroing indication bits in the background zeroing cache associated with the data block to be zeroed being all set as the predetermined value, updating the checkpoint to a next data block. 20. The manager of claim 19, wherein the background zeroing operation comprises: in response to at least one of the zeroing indication bits in the background zeroing cache associated with the data block to be zeroed being not set as the predetermined value, reading the metadata corresponding to the data block to be zeroed; and updating the metadata cache with the read metadata. 21. (canceled)
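To make the write path and the checkpoint-driven background zeroing concrete, here is a hedged Python sketch. The `device` object and its `read_metadata`/`write_metadata`/`zero_block`/`write` methods are hypothetical placeholders, and the per-block dictionaries stand in for the user data cache and background zeroing cache described above.

```python
from typing import Dict, Iterable

ZEROED = 1  # the "predetermined value" meaning a data block is known to be zeroed


class ZeroingManager:
    """Illustrative manager keeping zeroing indication bits in memory; all names are assumed."""

    def __init__(self, device, total_blocks: int) -> None:
        self.device = device                   # hypothetical storage-device interface
        self.user_bits: Dict[int, int] = {}    # user data cache: block index -> zeroing bit
        self.bg_bits: Dict[int, int] = {}      # background zeroing cache: block index -> zeroing bit
        self.checkpoint = 0                    # index of the next block the background pass inspects
        self.total_blocks = total_blocks

    def write(self, blocks: Iterable[int], data: bytes) -> None:
        blocks = list(blocks)
        # Fast path: every touched block is already known to be zeroed.
        if all(self.user_bits.get(b) == ZEROED for b in blocks):
            self.device.write(blocks, data)
            return
        # Slow path: consult the on-disk metadata region and zero blocks on first use.
        meta = self.device.read_metadata(blocks)          # block index -> zeroing state
        for b in blocks:
            if meta[b] != ZEROED:
                self.device.zero_block(b)
                meta[b] = ZEROED
        self.device.write(blocks, data)
        self.device.write_metadata(blocks, meta)
        self.user_bits.update((b, ZEROED) for b in blocks)

    def background_zero_step(self) -> None:
        """One checkpoint step: skip blocks already marked zeroed, otherwise refresh the
        background zeroing cache from metadata (zeroing the block is an assumed follow-on)."""
        b = self.checkpoint
        if b >= self.total_blocks:
            return
        if self.bg_bits.get(b) != ZEROED:
            meta = self.device.read_metadata([b])
            self.bg_bits[b] = meta[b]
            if meta[b] != ZEROED:
                self.device.zero_block(b)                 # assumption about the next action
                self.device.write_metadata([b], {b: ZEROED})
                self.bg_bits[b] = ZEROED
        self.checkpoint = b + 1
```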
2,100
274,010
15,953,680
2,131
In a data processing system, a store request is provided having corresponding store data and a corresponding access address, and a memory coherency required attribute corresponding to the access address of the store request is provided. When the store request results in a write-through store due to a cache hit or results in a cache miss, the corresponding access address and store data are stored in a selected entry of a store buffer and a merge allowed indicator is stored in the selected entry which indicates whether or not the selected entry is a candidate for merging. The merge allowed indicator is determined based on the memory coherency required attribute from the MMU and a store buffer coherency enable control bit of the cache. Entries of the store buffer which include an asserted merge allowed indicator and share a memory line in the memory are merged.
1. A data processing system, comprising: a memory; a central processing unit (CPU) configured to provide a store request having corresponding store data and a corresponding access address which indicates a memory location in the memory for storing the store data; a memory management unit (MMU) coupled to the CPU and configured to receive the access address of the store request and provide a memory coherency required attribute corresponding to the access address of the store request; and a cache coupled to the CPU, MMU, and memory, the cache having a cache array, store buffer, and a control register configured to store a store buffer coherency enable control bit, the cache configured to receive the store request and the memory coherency required attribute, and configured to, when the store request results in a write-through cache store or a cache miss, store the store request in a selected entry of the store buffer and store a merge allowed indicator in the selected entry of the store buffer corresponding to the store request which indicates whether or not the selected entry of the store buffer is a candidate for merging, wherein the merge allowed indicator is determined based on the memory coherency required attribute from the MMU and the store buffer coherency enable control bit of the control register. 2. The data processing system of claim 1, wherein the cache is further configured to, when the store request results in a write-through cache store, also store the store request into the cache array. 3. The data processing system of claim 1, wherein the cache is further configured to, when the store request results in a cache miss, not store the store request in the cache array in response to the cache miss. 4. The data processing system of claim 1, wherein the store buffer coherency enable control bit indicates whether or not merging of an entry in the store buffer is allowed whose corresponding access address falls within a memory coherency region of the memory. 5. The data processing system of claim 1, wherein the CPU provides the corresponding access address as a virtual address, and the MMU is configured to translate the virtual address into a physical address and provides the physical address with the memory coherency required attribute. 6. The data processing system of claim 5, wherein the cache receives the corresponding access address as the physical address from the MMU and uses the physical address to determine a hit or miss in the cache array. 7. The data processing system of claim 6, wherein the store buffer storing the store request into the selected entry of the store buffer comprises storing the physical address and the store data corresponding to the store request into the selected entry. 8. The data processing system of claim 1, wherein the store buffer is configured as a first-in first-out (FIFO) storage circuit, and wherein the cache further comprises store buffer write control circuitry configured to select the selected entry based on FIFO operation. 9. The data processing system of claim 1, wherein the cache further comprises store buffer merge circuitry configured to merge entries of the store buffer which include an asserted merge allowed indicator and share a memory line in the memory. 10. 
The data processing system of claim 9, wherein storing the store request into the selected entry comprises storing the access address and store data corresponding to the store request into the selected entry, wherein the store buffer merge circuitry is configured to merge entries of the store buffer by combining store data of entries being merged into a single merged entry. 11. The data processing system of claim 1, wherein the store buffer coherency enable control bit, when asserted, indicates that merging of the selected entry is allowed when the corresponding access address is in a memory coherency region and, when negated, indicates that merging of the selected entry is not allowed when the corresponding access address is in a memory coherency region. 12. The data processing system of claim 1, wherein an asserted merge allowed indicator for the selected entry corresponds to the memory coherency required attribute indicating that the corresponding access address is not in a memory coherency region of the memory or that both the memory coherency required attribute indicates that the corresponding access address is in a memory coherency region and the store buffer coherency enable control bit is asserted. 13. In a data processing system having a central processing unit (CPU), a memory, a memory management unit (MMU), and a cache, a method comprising: providing, by the CPU, a store request having corresponding store data and a corresponding access address which indicates a memory location in the memory for storing the store data; providing, by the MMU, a memory coherency required attribute corresponding to the access address of the store request; determining whether the access address hits or misses in a cache array of the cache; when the store request results in a write-through store due to a cache hit in the cache array or results in a cache miss in the cache array, storing the corresponding access address and store data in a selected entry of the store buffer and storing a merge allowed indicator in the selected entry of the store buffer which indicates whether or not the selected entry of the store buffer is a candidate for merging, wherein the merge allowed indicator is determined based on the memory coherency required attribute from the MMU and a store buffer coherency enable control bit of the cache; and merging entries of the store buffer which include an asserted merge allowed indicator and share a memory line in the memory by combining store data of entries being merged into a single merged entry. 14. The method of claim 13, wherein when the store request results in a write-through store due to a cache hit, the method further comprises: storing the store request into a hit entry of the cache array. 15. The method of claim 13, wherein the store buffer coherency enable control bit indicates whether or not merging of an entry in the store buffer is allowed whose corresponding access address falls within a memory coherency region of the memory. 16. The method of claim 13, wherein the CPU provides the corresponding access address as a virtual address, and the MMU translates the virtual address into a physical address and provides the physical address with the memory coherency required attribute. 17. The method of claim 16, wherein storing the corresponding access address in the selected entry of the store buffer comprises storing the physical address in the selected entry. 18. 
The method of claim 13, wherein the store buffer coherency enable control bit, when asserted, indicates that merging of the selected entry is allowed when the corresponding access address is in a memory coherency region and, when negated, indicates that merging of the selected entry is not allowed when the corresponding access address is in a memory coherency region. 19. The method of claim 13, wherein storing the merge allowed indicator in the selected entry of the store buffer comprises: storing an asserted merge allowed indicator when the memory coherency required attribute indicates that the corresponding access address is not in a memory coherency region of the memory or when both the memory coherency required attribute indicates that the corresponding access address is in a memory coherency region and the store buffer coherency enable control bit is asserted. 20. A data processing system, comprising: a memory; a central processing unit (CPU) configured to provide a store request having corresponding store data and a corresponding access address which indicates a memory location in the memory for storing the store data; a memory management unit (MMU) coupled to the CPU and configured to translate the access address of the store request into a physical address and provide the physical address and a memory coherency required attribute corresponding to the physical address of the store request; a cache coupled to the CPU, MMU, and memory, the cache having a cache array, store buffer, and a control register configured to store a store buffer coherency enable control bit, the cache configured to receive the physical address and the memory coherency required attribute from the MMU, and configured to, when the physical address results in a write-through store due to a cache hit or results in a cache miss, store the physical address and store data corresponding to the store request in a selected entry of the store buffer and store a merge allowed indicator in the selected entry of the store buffer corresponding to the store request which indicates whether or not the selected entry of the store buffer is a candidate for merging, wherein the merge allowed indicator is determined based on the memory coherency required attribute from the MMU and the store buffer coherency enable control bit of the control register; and store buffer merge circuitry configured to merge entries of the store buffer which include an asserted merge allowed indicator and share a memory line in the memory, wherein the store buffer merge circuitry is configured to merge entries of the store buffer by combining store data of entries being merged into a single merged entry.
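A compact sketch of the merge-allowed decision and the line-based merging follows. The 64-byte line size, the class and field names, and the byte-concatenation merge are simplifications chosen for illustration (real hardware would overlay store data at byte offsets within the memory line), not the claimed circuitry.

```python
from collections import deque
from dataclasses import dataclass

LINE_BYTES = 64  # assumed memory-line size


def merge_allowed(coherency_required: bool, enable_bit: bool) -> bool:
    """Allowed when the address is outside any coherency region, or inside one while the
    store buffer coherency enable control bit is asserted."""
    return (not coherency_required) or enable_bit


@dataclass
class Entry:
    phys_addr: int
    data: bytes
    allowed: bool


class StoreBuffer:
    """FIFO store buffer holding write-through stores and cache-miss stores."""

    def __init__(self) -> None:
        self.entries: deque = deque()

    def push(self, phys_addr: int, data: bytes,
             coherency_required: bool, enable_bit: bool) -> None:
        self.entries.append(Entry(phys_addr, data,
                                  merge_allowed(coherency_required, enable_bit)))

    def merge_adjacent(self) -> None:
        """Combine neighbouring entries that both allow merging and target the same memory line."""
        merged = []
        for e in self.entries:
            if merged:
                last = merged[-1]
                same_line = (last.phys_addr // LINE_BYTES) == (e.phys_addr // LINE_BYTES)
                if last.allowed and e.allowed and same_line:
                    merged[-1] = Entry(last.phys_addr, last.data + e.data, True)
                    continue
            merged.append(e)
        self.entries = deque(merged)
```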
2,100
274,011
15,954,307
2,131
A storage device includes a flash memory array and a controller. The flash memory array stores a plurality of user data. After the controller finishes initialization, the controller accesses the user data stored in the flash memory array according to a plurality of host commands and an H2F mapping table, and records a plurality of address information about the user data in a powered-ON access table.
1. A storage device, comprising: a flash memory array, storing a plurality of user data; and a controller, wherein after the controller finishes initialization, the controller accesses the user data in the flash memory array according to a plurality of host commands and an H2F mapping table and records a plurality of address information about the user data in a powered-ON access table. 2. The storage device of claim 1, wherein the size of the powered-ON access table is less than the size of the H2F mapping table. 3. The storage device of claim 1, wherein the address information is a plurality of logic addresses of the user data. 4. The storage device of claim 1, wherein the address information further comprises a plurality of physical addresses of the user data. 5. The storage device of claim 1, wherein the controller records, according to an access order of the user data, the address information in the powered-ON access table. 6. A storage device, comprising: a flash memory array, storing a plurality of user data; and a controller, after the controller finishes initialization, the controller determines whether a powered-ON access table exists in the flash memory array, wherein when the powered-ON access table exists in the flash memory array, the controller prefetches the user data corresponding to the powered-ON access table to a data register. 7. The storage device of claim 6, wherein the controller further prefetches the user data corresponding to the powered-ON access table to the data register according to an H2F mapping table. 8. The storage device of claim 6, wherein when the user data corresponding to the powered-ON access table exceeds capacity of the data register, the controller only prefetches a part of the user data corresponding to the powered-ON access table to the data register. 9. The storage device of claim 6, wherein the controller sequentially prefetches the user data corresponding to the powered-ON access table to the data register. 10. The storage device of claim 6, wherein after the controller finishes the initialization and before the controller receives a host command, the controller prefetches the user data corresponding to the powered-ON access table to the data register. 11. A method for utilizing a powered-ON access table of a storage device, comprising: executing an initialization; and determining whether the powered-ON access table exists, wherein when the powered-ON access table exists, prefetching user data corresponding to the powered-ON access table from a flash memory array to a data register. 12. The method of claim 11, further comprising: accessing the storage device according to a plurality of host commands, the powered-ON access table, and an H2F mapping table. 13. The method of claim 11, further comprising: when the powered-ON access table does not exist, accessing the storage device according to a plurality of host commands and an H2F mapping table.
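The power-on flow in the claims above can be sketched as follows. Every controller method used here (`initialize`, `load_powered_on_access_table`, `h2f_lookup`, `prefetch_to_data_register`, and so on) is a hypothetical placeholder standing in for firmware internals the claims do not name.

```python
def power_on(controller) -> None:
    """Sketch of the power-on flow; every controller method here is a hypothetical placeholder."""
    controller.initialize()
    table = controller.load_powered_on_access_table()   # returns None if no table was recorded
    if table is None:
        return                                           # serve host commands with the H2F table only
    free_bytes = controller.data_register_capacity()
    for logical_addr in table:                           # sequential prefetch, in recorded order
        physical_addr = controller.h2f_lookup(logical_addr)
        page_bytes = controller.page_size()
        if page_bytes > free_bytes:                      # stop once the data register is full
            break
        controller.prefetch_to_data_register(physical_addr)
        free_bytes -= page_bytes


def on_host_access(controller, logical_addr) -> None:
    """Record accessed addresses in order so the table can guide the next power-on."""
    controller.powered_on_access_table.append(logical_addr)
```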
2,100
274,012
15,954,198
2,131
A system and method are disclosed for managing data in a non-volatile memory. The system may include a non-volatile memory having multiple non-volatile memory sub-drives. A controller of the memory system is configured to route incoming host data to a desired sub-drive, keep data within the same sub-drive as its source during a garbage collection operation, and re-map data between sub-drives, separate from any garbage collection operation, when a sub-drive overflows its designated amount of logical address space. The method may include initial data sorting of host writes into sub-drives based on any number of hot/cold sorting functions. In one implementation, the initial host write data sorting may be based on a host list of recently written blocks for each sub-drive and a second write to a logical address encompassed by the list may trigger routing the host write to a hotter sub-drive than the current sub-drive.
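The claims below detail the initial sorting; first, here is a hedged sketch of the overflow-driven logical remapping the abstract mentions: when routing causes a sub-drive to exceed its share of logical address space, the coldest superblock is reassigned to a colder sub-drive without any physical copy. The per-sub-drive quota expressed in superblocks and the hottest-to-coldest ordering are assumptions made for illustration.

```python
from typing import List


class SubDrive:
    """Illustrative sub-drive record; the quota and ordering are assumptions for this sketch."""

    def __init__(self, quota_superblocks: int) -> None:
        self.quota = quota_superblocks       # share of logical address space, in superblocks
        self.superblocks: List[int] = []     # superblock ids, kept ordered hottest -> coldest


def rebalance(sub_drives: List[SubDrive]) -> None:
    """Logical remapping only: a sub-drive that exceeds its quota hands its coldest superblock
    to the next colder sub-drive; no data is physically copied."""
    for i, drive in enumerate(sub_drives[:-1]):          # the coldest sub-drive has nowhere to spill
        while len(drive.superblocks) > drive.quota:
            coldest = drive.superblocks.pop()            # tail entry = coldest superblock here
            sub_drives[i + 1].superblocks.insert(0, coldest)  # arrives as the hottest of the colder drive
```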
1. A non-volatile memory system comprising: a non-volatile memory having a plurality of sub-drives, each of the plurality of sub-drives associated with superblocks of data within a respective data temperature range; a controller in communication with the plurality of sub-drives, the controller configured to: sort data as it is received in a host write command into one of the plurality of sub-drives based on a determined data temperature of data in the host write command; for each respective sub-drive of the plurality of sub-drives, other than for a sub-drive associated with a hottest data temperature range, maintain a list of superblocks containing most recently written data for that respective sub-drive; and when a logical address of data in a received host write command is present in the list for that respective sub-drive, automatically route the data in the received host write command to a different one of the plurality of sub-drives associated with a hotter data temperature range than a data temperature range of that respective sub-drive. 2. The non-volatile memory system of claim 1, wherein the controller is further configured to, when the logical address of data in the received host write command is absent from the list for that respective sub-drive, route the data in the received host write command to a same sub-drive as currently contains data associated with the logical address of the data in the received host write command. 3. The non-volatile memory system of claim 2, wherein each list for each respective sub-drive comprises a list of superblocks containing logical addresses of data in most recent host writes to the respective sub-drive. 4. The non-volatile memory system of claim 3, wherein the controller is further configured to only relocate valid data during a garbage collection operation within a same sub-drive. 5. The non-volatile memory system of claim 4, wherein each list includes both most recent host writes and most recent garbage collection writes to the respective sub-drive. 6. The non-volatile memory system of claim 3, wherein each list comprises a first in first out (FIFO) list having a fixed length of entries. 7. The non-volatile memory of claim 6, wherein each list comprises a same fixed length. 8. The non-volatile memory of claim 6, wherein each list comprises a different fixed length. 9. The non-volatile memory of claim 8, wherein the controller is further configured to: when the logical address of data in the received host write command is absent from the list, and when the data for the received host write is routed to a respective sub-drive and the list for the respective sub-drive is filled to the fixed length for the respective sub-drive, push an oldest entry off of an end of the list and insert a new entry at a beginning of the list. 10. The non-volatile memory of claim 6, wherein the controller is further configured to: after initially routing data from a host write command to a sub-drive, only physically copy data from a superblock in a respective sub-drive to another superblock in the respective sub-drive during a garbage collection operation; and only logically remap data already written to the respective sub-drive to another of the plurality of sub-drives when a logical capacity assigned to the respective sub-drive is exceeded. 11. 
A method for initially sorting data in a non-volatile memory system, wherein the non-volatile memory system has a plurality of sub-drives each associated with a different data temperature range, comprising a controller of the non-volatile memory system: maintaining a list of most recent host writes for one of the plurality of sub-drives; comparing logical addresses of data in a received host write command to logical addresses of data in the list of most recent host writes for the one of the plurality of sub-drives; and when a logical address of data in the received host write command is present in the list, automatically routing the data in the received host write command to a different one of the plurality of sub-drives associated with a hotter data temperature range than the one of the plurality of sub-drives. 12. The method of claim 11, further comprising, when the logical address of the data in the received host write command is not present in the list, automatically routing the data in the received host write command to a same sub-drive as currently contains data associated with the logical address. 13. The method of claim 11, wherein the list comprises a list of superblocks containing the logical address of data in the most recent host data writes. 14. The method of claim 13, wherein: the list comprises a first in first out list having a fixed length of write entries; and when the list is filled to the fixed length, pushing an oldest write entry off of an end of the list and inserting a new write entry at a beginning of the list when the logical address of data in the received host write command is absent from the list. 15. The method of claim 13, wherein the list includes both recent host data writes and recent garbage collection writes. 16. The method of claim 13, wherein maintaining the list of most recent host writes comprises maintaining lists of most recent host writes for each of the plurality of sub-drives other than a hottest data temperature range sub-drive of the plurality of sub-drives. 17. The method of claim 11, further comprising: logically remapping a coldest superblock from the different one of the plurality of sub-drives, to the one of the plurality of sub-drives, when routing the data in the received host write command to the different one of the plurality of sub-drives results in the different one of the plurality of sub-drives exceeding a predetermined logical address space. 18. The method of claim 15, further comprising: only physically copying data from one superblock in a respective sub-drive to another superblock in the respective sub-drive during a garbage collection operation; and only moving data from one of the plurality of sub-drives to another of the plurality of sub-drives via logical remapping. 19. 
A non-volatile memory system comprising: a non-volatile memory having a plurality of sub-drives, each of the plurality of sub-drives associated with superblocks of data within a respective data temperature range; means for maintaining, for a portion of the plurality of sub-drives, separate lists of most recently written data from a host; means for initially routing data associated with an incoming host write command to a particular one of the plurality of sub-drives based on the separate lists of most recently written data from the host; means for only physically moving data already stored in any of the plurality of sub-drives between superblocks in a same sub-drive; and means for only moving data already stored in a sub-drive to another sub-drive by logical remapping. 20. The non-volatile memory system of claim 19, wherein: the plurality of sub-drives comprises a first sub-drive associated with superblocks of data in a hottest data temperature range and multiple sub-drives each associated with superblocks of data in respective different temperature range lower than the hottest data temperature range; and the portion of the plurality of sub-drives comprises all sub-drives other than the first sub-drive.
2,100
274,013
15,954,171
2,131
A system and method is disclosed for managing data in a non-volatile memory. The system may include a non-volatile memory having multiple non-volatile memory sub-drives. A controller of the memory system is configured to route incoming host data to a desired sub-drive, keep data within the same sub-drive as its source during a garbage collection operation, and re-map data between sub-drives, separate from any garbage collection operation, when a sub-drive overflows its designated amount of logical address space. The method may include initial data sorting of host writes into sub-drives based on any number of hot/cold sorting functions. In one implementation, the initial host write data sorting may be based on a host list of recently written blocks for each sub-drive, and a second write to a logical address encompassed by the list may trigger routing the host write to a hotter sub-drive than the current sub-drive.
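This related application emphasizes the second sorting stage: garbage collection relocates valid data only within its own sub-drive, and data moves between sub-drives solely by logical remapping when a sub-drive's valid data exceeds its share of the logical address space. The Python sketch below models that overflow-triggered remap; the data structures, byte counts, and per-superblock temperature scores are illustrative assumptions.

```python
# Sketch of overflow-triggered logical remapping, separate from garbage collection:
# a sub-drive whose valid data exceeds its logical-address share hands its coldest
# superblock to the next colder sub-drive by changing its association only, with
# no physical copy of the data.

sub_drives = [
    # superblocks: name -> (valid_bytes, temperature_score); logical_share in bytes
    {"superblocks": {"sb0": (40, 0.9), "sb1": (10, 0.2)}, "logical_share": 45},  # hotter
    {"superblocks": {"sb2": (30, 0.1)}, "logical_share": 60},                    # colder
]

def valid_bytes(drive):
    return sum(v for v, _ in drive["superblocks"].values())

def rebalance():
    """Logically remap the coldest superblock of any overflowing sub-drive downward."""
    for i, drive in enumerate(sub_drives[:-1]):        # the coldest sub-drive has nowhere colder to go
        if valid_bytes(drive) > drive["logical_share"]:
            coldest = min(drive["superblocks"], key=lambda sb: drive["superblocks"][sb][1])
            sub_drives[i + 1]["superblocks"][coldest] = drive["superblocks"].pop(coldest)
            # no data is rewritten; only the superblock's sub-drive association changes

rebalance()
print(sub_drives[1]["superblocks"])   # sb1 (coldest superblock of the overflowing sub-drive) now belongs to the colder sub-drive
```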
1. A method for managing data in a memory system having a controller in communication with a non-volatile memory having a plurality of sub-drives, the method comprising the controller: receiving a host data write at the memory system; directing the host data write to one of the plurality of sub-drives based on a first sorting technique; initiating a garbage collection operation in a particular sub-drive in response to a detected garbage collection trigger; moving valid data during the garbage collection operation from a source superblock in the particular sub-drive only to a relocation superblock in the particular sub-drive such that the valid data remains in the particular sub-drive; determining whether a proportion of a total logical address space of the non-volatile memory currently associated with one of the plurality of sub-drives exceeds a predetermined threshold; and when the proportion exceeds the predetermined threshold for the one of the plurality of sub-drives, re-mapping a superblock from the one of the plurality of sub-drives to another of the plurality of sub-drives, based on a current data temperature of the superblock, independently of any garbage collection operation. 2. The method of claim 1, wherein re-mapping the superblock comprises selecting a coldest superblock of a sub-drive containing more valid data than the total logical address space of the non-volatile memory currently associated with the sub-drive and re-mapping the coldest superblock to a next colder sub-drive of the non-volatile memory. 3. The method of claim 1, wherein the detected garbage collection trigger comprises a number of shared free blocks for the plurality of sub-drives falling below a predetermined threshold. 4. The method of claim 3, wherein moving valid data during the garbage collection operation from the source superblock further comprises: first selecting a sub-drive from which to select the source superblock based on an analysis of write amplification for an entirety of the non-volatile memory for a current workload. 5. The method of claim 4, wherein selecting the sub-drive from which to select the source superblock comprises selecting the sub-drive based on a calculated target overprovisioning of each of the plurality of sub-drives. 6. The method of claim 4, further comprising selecting the source superblock for the selected sub-drive based on an amount of valid data in the source superblock. 7. The method of claim 1, further comprising selecting, as the particular sub-drive for initiating the garbage collection operation, a sub-drive having a greatest amount of overprovisioning over a target overprovisioning level, wherein the target overprovisioning level differs for each of the plurality of sub-drives and is based on a respective portion of the total logical address space of the non-volatile memory currently associated with each of the plurality of sub-drives and a total host write workload attributed to each of the plurality of sub-drives. 8. 
A non-volatile memory system comprising: a non-volatile memory having a plurality of sub-drives; a controller in communication with the plurality of sub-drives, the controller configured to: sort data associated with a host write command, as the data associated with the host write command is received from a host, into one of the plurality of sub-drives based on a determined data temperature of the data associated with the host write command; and sort data already stored in a first sub-drive of the plurality of sub-drives into a different sub-drive of the plurality of sub-drives by logically remapping a portion of the data already stored in the first sub-drive, without rewriting the portion of the data into a different physical location, independently of any garbage collection operation in the first sub-drive. 9. The non-volatile memory system of claim 8, wherein the controller is further configured to only relocate valid data during a garbage collection operation within a same sub-drive. 10. The non-volatile memory system of claim 8, wherein the controller is further configured to: select one of the plurality of sub-drives as the first sub-drive from which to logically remap the portion of the data in response to an amount of valid data in the one of the plurality of sub-drives exceeding a predetermined amount of logical address space assigned to the one of the plurality of sub-drives. 11. The non-volatile memory system of claim 8, wherein: the portion of the data already stored in the first sub-drive comprises a superblock of the first sub-drive; the different sub-drive comprises a next colder sub-drive of the plurality of sub-drives, wherein the next colder sub-drive comprises a sub-drive associated with data having a data temperature less than a data temperature of data associated with the first sub-drive; and to remap the portion of the data already stored in the first sub-drive, the controller is further configured to select a coldest superblock of the first sub-drive and logically remap the coldest superblock to the different sub-drive. 12. The non-volatile memory system of claim 8, wherein: the portion of the data already stored in the first sub-drive comprises a superblock of the first sub-drive; the different sub-drive comprises a next hotter sub-drive of the plurality of sub-drives, wherein the next hotter sub-drive comprises a sub-drive associated with data having a data temperature greater than a data temperature of data associated with the first sub-drive; and to remap the portion of the data already stored in the first sub-drive, the controller is further configured to select a hottest superblock of the first sub-drive and logically remap the hottest superblock to the different sub-drive. 13. The non-volatile memory system of claim 8, further comprising: a free block pool, the free block pool comprising a plurality of superblocks in the non-volatile memory assignable to any of the plurality of sub-drives for data storage; and wherein the controller is further configured to: initiate a garbage collection operation in one of the plurality of sub-drives in response to detecting that an amount of superblocks in the free block pool has fallen below a predetermined minimum threshold; and select a source superblock for the garbage collection operation from a sub-drive having an amount of overprovisioning that is greater than a target overprovisioning for the sub-drive. 14. 
A method for managing data in a memory system having a controller in communication with a non-volatile memory having a plurality of sub-drives, the method comprising the controller: receiving a host data write at the memory system; directing the host data write to one of the plurality of sub-drives based on a first sorting technique; determining whether an amount of valid data in one of the plurality of sub-drives exceeds a predetermined amount of a logical address space for the one of the plurality of sub-drives; and when the amount of valid data in the one of the plurality of sub-drives exceeds the predetermined amount, re-mapping a superblock from the one of the plurality of sub-drives to another of the plurality of sub-drives, without rewriting any data from the superblock to another physical location, based on a current data temperature of the superblock. 15. The method of claim 14, wherein re-mapping the superblock comprises selecting a coldest superblock of a sub-drive containing more valid data than a total logical address space of the non-volatile memory currently associated with the sub-drive and re-mapping the coldest superblock to a next colder sub-drive of the plurality of sub-drives in the non-volatile memory. 16. The method of claim 14, wherein re-mapping the superblock from the one of the plurality of sub-drives to another of the plurality of sub-drives comprises: remapping the superblock to a next colder sub-drive of the plurality of sub-drives, wherein the next colder sub-drive comprises a sub-drive associated with data having a data temperature less than a data temperature of data associated with the one of the plurality of sub-drives. 17. The method of claim 16, further comprising selecting a coldest superblock of the one of the plurality of sub-drives and logically remapping the coldest superblock to the next colder sub-drive. 18. The method of claim 14, wherein re-mapping the superblock from the one of the plurality of sub-drives to another of the plurality of sub-drives comprises: remapping the superblock to a next hotter sub-drive, the next hotter sub-drive comprising a sub-drive associated with data having a data temperature greater than a data temperature of data associated with the one of the plurality of sub-drives. 19. The method of claim 18, further comprising selecting a hottest superblock of the one of the plurality of sub-drives and logically remapping the hottest superblock to the next hotter sub-drive. 20. The method of claim 14, wherein directing the host data write to one of the plurality of sub-drives based on a first sorting technique comprises: writing the host data, the host data associated with logical block addresses, to a next hotter sub-drive relative to the last sub-drive that previous data associated with the logical block addresses was written to.
2,100
274,014
15,954,223
2,131
An optimized operating method for a non-volatile memory. A microcontroller allocates the non-volatile memory to store write data issued by a host. The microcontroller dynamically adjusts a first-writing-mode threshold. The first-writing-mode threshold value is provided for the microcontroller to determine whether to use a first writing mode to allocate the non-volatile memory to store the write data issued by the host. In comparison with the first writing mode, more bits of data are stored in one storage cell in a second writing mode.
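A minimal Python sketch of the dynamically adjusted first-writing-mode threshold described above: space released by foreground operations raises the threshold, space released by background operations lowers it, and the current spare space relative to the threshold selects single-bit (SLC) or multi-bit writing. The unit adjustment of one block and the example block counts are assumptions, since the abstract does not specify magnitudes.

```python
# Sketch (assumed step size and units) of the dynamically adjusted threshold that
# decides whether host writes go to single-bit-per-cell or multi-bit-per-cell storage.

class WriteModePolicy:
    def __init__(self, threshold_blocks=10):
        self.threshold = threshold_blocks   # the "first-writing-mode threshold"

    def on_block_released(self, foreground):
        """Adjust the threshold whenever a space of the memory is released."""
        if foreground:                      # e.g. released while responding to the host
            self.threshold += 1
        else:                               # e.g. released by background garbage collection
            self.threshold = max(0, self.threshold - 1)

    def choose_mode(self, spare_blocks):
        """Use the first writing mode (one bit per cell) while spare space is sufficient."""
        return "SLC" if spare_blocks >= self.threshold else "MLC"

policy = WriteModePolicy()
policy.on_block_released(foreground=True)    # foreground release: threshold rises to 11
print(policy.choose_mode(spare_blocks=12))   # "SLC": spare space deemed sufficient
print(policy.choose_mode(spare_blocks=5))    # "MLC": spare space deemed insufficient
```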
1. A data storage device, comprising: a non-volatile memory; and a microcontroller, allocating the non-volatile memory to store write data issued by a host, wherein: the microcontroller dynamically adjusts a first-writing-mode threshold; the first-writing-mode threshold value is provided for the microcontroller to determine whether to use a first writing mode to allocate the non-volatile memory to store the write data issued by the host; and in comparison with the first writing mode, more bits of data are stored in one storage cell in a second writing mode. 2. The data storage device as claimed in claim 1, wherein: the microcontroller dynamically adjusts the first-writing-mode threshold when releasing a space of the non-volatile memory. 3. The data storage device as claimed in claim 1, wherein: when releasing a space of the non-volatile memory by foreground operations, the microcontroller increases the first-writing-mode threshold; and the microcontroller responds to the host by the foreground operations. 4. The data storage device as claimed in claim 1, wherein: when releasing a space of the non-volatile memory by background operations, the microcontroller decreases the first-writing-mode threshold; and the microcontroller performs the background operations without being requested to by the host. 5. The data storage device as claimed in claim 1, wherein: the non-volatile memory is a flash memory managed in blocks; in the first-writing mode, data is stored in single-level cells with each storage cell storing one bit of data; and in the second-writing mode, each storage cell stores more than one bit. 6. The data storage device as claimed in claim 5, wherein: when using the second-writing mode to copy data stored in the first-writing mode and thereby a block is released to store subsequent write data issued by the host, the microcontroller increases the first-writing-mode threshold. 7. The data storage device as claimed in claim 5, wherein: when updating data and thereby a block is released, the microcontroller increases the first-writing-mode threshold. 8. The data storage device as claimed in claim 5, wherein: when releasing a block by garbage collection performed by background operations, the microcontroller decreases the first-writing-mode threshold; and the microcontroller performs the background operations without being requested to by the host. 9. The data storage device as claimed in claim 2, wherein: the microcontroller uses the first writing mode to allocate the non-volatile memory to store the write data issued by the host when a spare space of the non-volatile memory is determined based on the first-writing-mode threshold as sufficient. 10. The data storage device as claimed in claim 9, wherein: the microcontroller uses the second writing mode to allocate the non-volatile memory to store the write data issued by the host when the spare space of the non-volatile memory is determined based on the first-writing-mode threshold as insufficient. 11. A method for operating a non-volatile memory, comprising: allocating a non-volatile memory to store write data issued by a host; dynamically adjusting a first-writing-mode threshold; and considering the first-writing-mode threshold value, determining whether to use a first writing mode to allocate the non-volatile memory to store the write data issued by the host, wherein in comparison with the first writing mode, more bits of data are stored in one storage cell in a second writing mode. 12. 
The method as claimed in claim 11, further comprising: dynamically adjusting the first-writing-mode threshold when releasing a space of the non-volatile memory. 13. The method as claimed in claim 11, further comprising: increasing the first-writing-mode threshold when releasing a space of the non-volatile memory by foreground operations, wherein the foreground operations are performed to respond to the host. 14. The method as claimed in claim 11, further comprising: decreasing the first-writing-mode threshold when releasing a space of the non-volatile memory by background operations, wherein the background operations are performed without being requested to by the host. 15. The method as claimed in claim 11, wherein: the non-volatile memory is a flash memory managed in blocks; in the first-writing mode, data is stored in single-level cells with each storage cell storing one bit of data; and in the second-writing mode, each storage cell stores more than one bit. 16. The method as claimed in claim 15, wherein: when using the second-writing mode to copy data stored in the first-writing mode and thereby a block is released to store subsequent write data issued by the host, the first-writing-mode threshold is increased. 17. The method as claimed in claim 15, wherein: when updating data and thereby a block is released, the first-writing-mode threshold is increased. 18. The method as claimed in claim 15, further comprising: decreasing the first-writing-mode threshold when releasing a block by garbage collection performed by background operations, wherein the background operations are performed without being requested to by the host. 19. The method as claimed in claim 12, wherein: the first writing mode is used to allocate the non-volatile memory to store the write data issued by the host when a spare space of the non-volatile memory is determined based on the first-writing-mode threshold as sufficient. 20. The method as claimed in claim 19, wherein: the second writing mode is used to allocate the non-volatile memory to store the write data issued by the host when the spare space of the non-volatile memory is determined based on the first-writing-mode threshold as insufficient.
2,100
274,015
15,954,539
2,131
In one embodiment, efficient content-addressable memory entry integrity checking is performed that protects the accuracy of lookup operations. Single-bit position lookup operations are performed resulting in match vectors that include a match result for each of the content-addressable memory entries at the single-bit position. An error detection value is determined for the match vector, and compared to a predetermined detection code for the single-bit position to identify whether an error is detected in at least one of the content-addressable memory entries. In one embodiment, a particular cumulative entry error detection vector storing entry error detection information for each of the content-addressable memory entries is updated based on the match vector. The particular cumulative entry error detection vector is compared to a predetermined entry error detection vector to determine which, if any, of the content-addressable memory entries has an identifiable error, which is then corrected.
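The following Python model (software only, not hardware) illustrates the single-bit-position integrity check described above: for each bit position and lookup value, a masked lookup yields a match vector over all entries, and its parity is compared with a detection code precomputed when the entries were programmed. Modeling ternary entries as (data, care-mask) integer pairs and using parity as the detection code follow one described embodiment; the helper names and example entries are assumptions.

```python
# Software model of the single-bit-position integrity check for ternary CAM entries.

def single_bit_match_vector(entries, bit, value):
    """entries: list of (data_bits, care_mask) ints modeling TCAM rows."""
    vector = []
    for data, care in entries:
        cared = (care >> bit) & 1
        stored = (data >> bit) & 1
        # an entry matches if the bit is don't-care or equals the lookup value
        vector.append(1 if (not cared or stored == value) else 0)
    return vector

def parity(bits):
    return sum(bits) % 2

entries = [(0b1010, 0b1111), (0b0110, 0b1011), (0b0000, 0b0000)]
# predetermined detection codes, computed at programming time for each (bit, value)
expected = {(b, v): parity(single_bit_match_vector(entries, b, v))
            for b in range(4) for v in (0, 1)}

# later, a background scrub repeats the lookups and flags any parity mismatch
entries[1] = (entries[1][0] ^ 0b0010, entries[1][1])      # inject a single-bit data error
for (b, v), code in expected.items():
    if parity(single_bit_match_vector(entries, b, v)) != code:
        print(f"error detected at bit position {b} (lookup value {v})")
```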
1. A method, comprising: performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a particular bit value at a single-bit position within a lookup word resulting in a match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; determining an error detection value for the match vector; and comparing the error detection value to a predetermined detection code for the single-bit position to identify whether an error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries. 2. The method of claim 1, comprising: based on the match vector, a lookup unit updating a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; and the lookup unit comparing the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error. 3. The method of claim 2, wherein said comparing the particular cumulative entry error detection vector to the predetermined entry error detection vector identifies a specific entry of the plurality of content-addressable memory entries as having an error; and the method includes responsive to said identifying the specific entry as having an error, writing the specific entry with a correct programming vector. 4. The method of claim 1, wherein responsive to said comparing the error detection value to a predetermined detection code identifying at least one error in the single-bit position, correcting said at least one error in the single-bit position of at least one of the plurality of content-addressable memory entries. 5. The method of claim 1, wherein each of the error detection value for the match vector and the predetermined detection code is a parity bit. 6. The method of claim 5, wherein each of the plurality of content-addressable memory entries is a ternary content-addressable memory entry. 7. The method of claim 1, wherein said performing the lookup operation includes using a global mask value to mask all bit positions except the single-bit position. 8. The method of claim 1, comprising: performing a specific single-bit position lookup operation in the plurality of content-addressable memory entries against a bit value opposite the particular bit value at the single-bit position within a specific lookup word resulting in a specific match vector that includes a specific entry match result at the single-bit position for each of the plurality of content-addressable memory entries; determining a specific error detection value for the specific match vector; and comparing the specific error detection value to a specific predetermined detection code at the single-bit position for the bit value opposite the particular bit to identify whether a specific error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries; 9. The method of claim 8, wherein each of said comparing the error detection value operation and comparing the specific error detection operation identified an error in the single-bit position of at least one of the plurality of content-addressable memory entries. 10. 
The method of claim 8, wherein exactly one of said comparing the error detection value operation and comparing the specific error detection operation identified an error in the single-bit position of at least one of the plurality of content-addressable memory entries. 11. The method of claim 8, comprising: based on the match vector, a lookup unit updating a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; the lookup unit comparing the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error; based on the specific match vector, a lookup unit updating a specific cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; and comparing the specific cumulative entry error detection vector to a predetermined specific entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error. 12. The method of claim 8, wherein a predetermined detection code vector at the single-bit position includes the predetermined detection code; and wherein the method includes: updating a particular entry of the plurality of content-addressable memory entries with a new programmed vector; and updating the predetermined detection code vector based on pre-update values, prior to said updating the particular entry, of both of the predetermined detection code vector and the particular entry, and based on the new programmed vector. 13. A method, comprising: for each particular single-bit position of a lookup word, a lookup unit performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a first particular bit value at said particular single-bit position within the lookup word resulting in a particular match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; and the lookup unit updating a first cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries based on the particular match vector; and performing first error entry position processing based on the first cumulative entry error detection vector to detect and then correct which, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 14. The method of claim 13, wherein said performing first error entry position processing includes comparing the first cumulative entry error detection vector with a predetermined first entry error detection vector to determine a first said entry having an identifiable error, and a first errored single-bit position determined based on a corresponding said particular match vector; and wherein the method includes the lookup unit correcting the first errored single-bit position of the first said entry. 15. 
The method of claim 14, wherein each of the plurality of content-addressable memory entries is a ternary content-addressable memory entry; and wherein said correcting, by the lookup unit, of the first errored single-bit position of the first said entry includes storing a correct mask bit value and a data bit value of the first errored single-bit position of the first said entry. 16. The method of claim 13, comprising: for each specific single-bit position of a lookup word, the lookup unit performing a single-bit position lookup operation in the plurality of content-addressable memory entries against a second particular bit value, opposite the first particular bit value, at said particular single-bit position within the lookup word resulting in a specific match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; and the lookup unit updating a second cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries based on the specific match vector; and performing second error entry position processing based on the second cumulative entry error detection vector to detect and then correct which specific, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 17. The method of claim 16, wherein said performing second error entry position processing includes comparing the second cumulative entry error detection vector with a predetermined second entry error detection vector in said detecting which specific, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 18. The method of claim 17, wherein responsive to said performing first error entry position processing determining a first said entry to have an identifiable error, a first errored single-bit position determined based on a corresponding said particular match vector, and to said performing second error entry position processing determining the first said entry to have an identifiable error, the first errored single-bit position determined based on a corresponding said specific match vector, the lookup unit correcting the first errored single-bit position of the first said entry. 19. An apparatus, comprising: a content-addressable memory performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a particular bit value at a single-bit position within a lookup word resulting in a match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; bit-operation hardware determining an error detection value for the match vector; and comparison hardware that compares the error detection value to a predetermined detection code for the single-bit position to identify whether an error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries. 20. The apparatus of claim 19, wherein the apparatus, based on the match vector, updates a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries, and compares the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error.
In one embodiment, efficient content-addressable memory entry integrity checking is performed that protects the accuracy of lookup operations. Single-bit position lookup operations are performed resulting in match vectors that include a match result for each of the content-addressable memory entries at the single-bit position. An error detection value is determined for the match vector, and compared to a predetermined detection code for the single-bit position to identify whether an error is detected in at least one of the content-addressable memory entries. In one embodiment, a particular cumulative entry error detection vector storing entry error detection information for each of the content-addressable memory entries is updated based on the match vector. The particular cumulative entry error detection vector is compared to a predetermined entry error detection vector to determine which, if any, of the content-addressable memory entries has an identifiable error, which is then corrected.1. A method, comprising: performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a particular bit value at a single-bit position within a lookup word resulting in a match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; determining an error detection value for the match vector; and comparing the error detection value to a predetermined detection code for the single-bit position to identify whether an error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries. 2. The method of claim 1, comprising: based on the match vector, a lookup unit updating a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; and the lookup unit comparing the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error. 3. The method of claim 2, wherein said comparing the particular cumulative entry error detection vector to the predetermined entry error detection vector identifies a specific entry of the plurality of content-addressable memory entries as having an error; and the method includes responsive to said identifying the specific entry as having an error, writing the specific entry with a correct programming vector. 4. The method of claim 1, wherein responsive to said comparing the error detection value to a predetermined detection code identifying at least one error in the single-bit position, correcting said at least one error in the single-bit position of at least one of the plurality of content-addressable memory entries. 5. The method of claim 1, wherein each of the error detection value for the match vector and the predetermined detection code is a parity bit. 6. The method of claim 5, wherein each of the plurality of content-addressable memory entries is a ternary content-addressable memory entry. 7. The method of claim 1, wherein said performing the lookup operation includes using a global mask value to mask all bit positions except the single-bit position. 8. 
The method of claim 1, comprising: performing a specific single-bit position lookup operation in the plurality of content-addressable memory entries against a bit value opposite the particular bit value at the single-bit position within a specific lookup word resulting in a specific match vector that includes a specific entry match result at the single-bit position for each of the plurality of content-addressable memory entries; determining a specific error detection value for the specific match vector; and comparing the specific error detection value to a specific predetermined detection code at the single-bit position for the bit value opposite the particular bit value to identify whether a specific error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries. 9. The method of claim 8, wherein each of said comparing the error detection value operation and comparing the specific error detection operation identified an error in the single-bit position of at least one of the plurality of content-addressable memory entries. 10. The method of claim 8, wherein exactly one of said comparing the error detection value operation and comparing the specific error detection operation identified an error in the single-bit position of at least one of the plurality of content-addressable memory entries. 11. The method of claim 8, comprising: based on the match vector, a lookup unit updating a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; the lookup unit comparing the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error; based on the specific match vector, the lookup unit updating a specific cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries; and comparing the specific cumulative entry error detection vector to a predetermined specific entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error. 12. The method of claim 8, wherein a predetermined detection code vector at the single-bit position includes the predetermined detection code; and wherein the method includes: updating a particular entry of the plurality of content-addressable memory entries with a new programmed vector; and updating the predetermined detection code vector based on pre-update values, prior to said updating the particular entry, of both of the predetermined detection code vector and the particular entry, and based on the new programmed vector. 13. 
A method, comprising: for each particular single-bit position of a lookup word, a lookup unit performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a first particular bit value at said particular single-bit position within the lookup word resulting in a particular match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; and the lookup unit updating a first cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries based on the particular match vector; and performing first error entry position processing based on the first cumulative entry error detection vector to detect and then correct which, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 14. The method of claim 13, wherein said performing first error entry position processing includes comparing the first cumulative entry error detection vector with a predetermined first entry error detection vector to determine a first said entry having an identifiable error, and a first errored single-bit position determined based on a corresponding said particular match vector; and wherein the method includes the lookup unit correcting the first errored single-bit position of the first said entry. 15. The method of claim 14, wherein each of the plurality of content-addressable memory entries is a ternary content-addressable memory entry; and wherein said correcting the first errored single-bit position of the first said entry by the lookup unit includes storing a correct mask bit value and a data bit value of the first errored single-bit position of the first said entry. 16. The method of claim 13, comprising: for each specific single-bit position of a lookup word, the lookup unit performing a single-bit position lookup operation in the plurality of content-addressable memory entries against a second particular bit value, opposite the first particular bit value, at said particular single-bit position within the lookup word resulting in a specific match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; and the lookup unit updating a second cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries based on the specific match vector; and performing second error entry position processing based on the second cumulative entry error detection vector to detect and then correct which specific, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 17. The method of claim 16, wherein said performing second error entry position processing includes comparing the second cumulative entry error detection vector with a predetermined second entry error detection vector in said detecting which specific, if any, of said entries of the plurality of content-addressable memory entries have an identifiable error. 18. 
The method of claim 17, wherein responsive to said performing first error entry position processing determining a first said entry to have an identifiable error, a first errored single-bit position determined based on a corresponding said particular match vector, and to said performing second error entry position processing determining the first said entry to have an identifiable error, the first errored single-bit position determined based on a corresponding said specific match vector, the lookup unit correcting the first errored single-bit position of the first said entry. 19. An apparatus, comprising: a content-addressable memory performing a single-bit position lookup operation in a plurality of content-addressable memory entries against a particular bit value at a single-bit position within a lookup word resulting in a match vector that includes an entry match result at the single-bit position for each of the plurality of content-addressable memory entries; bit-operation hardware determining an error detection value for the match vector; and comparison hardware that compares the error detection value to a predetermined detection code for the single-bit position to identify whether an error is detected in the single-bit position of at least one of the plurality of content-addressable memory entries. 20. The apparatus of claim 19, wherein the apparatus, based on the match vector, updates a particular cumulative entry error detection vector storing entry error detection information for each of the plurality of content-addressable memory entries, and compares the particular cumulative entry error detection vector to a predetermined entry error detection vector to determine which, if any, of the plurality of content-addressable memory entries has an identifiable error.
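The abstract and claims above describe checking content-addressable memory integrity by running a lookup restricted to a single bit position, collecting the resulting match vector, and comparing its error detection value (a parity bit in claims 5-6) against a predetermined detection code stored for that bit position. The following Python sketch models that check in software; it is an illustration under stated assumptions, and the ternary-entry representation and function names are not taken from the application.

# Minimal software model of the single-bit-position integrity check described above.
# Each ternary CAM entry is modeled as (data, mask); a mask bit of 0 means "don't care",
# so that bit position matches either lookup value. Names are illustrative assumptions.

def single_bit_lookup(entries, bit_pos, bit_val):
    """Lookup against one bit position only (all other positions globally masked).
    Returns the match vector: one match result per entry."""
    match_vector = []
    for data, mask in entries:
        cares = (mask >> bit_pos) & 1
        entry_bit = (data >> bit_pos) & 1
        match_vector.append(1 if cares == 0 or entry_bit == bit_val else 0)
    return match_vector

def parity(bits):
    """Single-bit error detection value computed over a match vector."""
    p = 0
    for b in bits:
        p ^= b
    return p

def check_bit_position(entries, bit_pos, bit_val, predetermined_code):
    """Compare the match-vector parity to the predetermined detection code
    stored for this bit position and lookup value."""
    return parity(single_bit_lookup(entries, bit_pos, bit_val)) == predetermined_code

# Example: three 4-bit entries; the code for column 2 is captured at programming time.
entries = [(0b1010, 0b1111), (0b0110, 0b1011), (0b1100, 0b1111)]
code = parity(single_bit_lookup(entries, 2, 1))
assert check_bit_position(entries, 2, 1, code)   # no error detected in this column

Repeating the check with the opposite bit value (claims 8-12) and accumulating per-entry results over every bit position (claims 13-18) is what lets the method narrow a detected error down to a specific entry and bit, which can then be rewritten.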
2,100
274,016
15,953,770
2,131
The embodiments of the present disclosure provide a computer-implemented method. The method includes caching data from a persistent storage device into a cache. The method also includes caching a physical address and a logical address of the data in the persistent storage device into the cache. The method further includes in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. The embodiments of the present disclosure also provide an electronic apparatus and a computer program product.
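The abstract describes keeping both the physical address and the logical address of cached data alongside the data itself, so an access request can be served through either address; claims 2-3 realize this as a two-dimensional hash table. A minimal sketch of such a dual-keyed structure follows; the class and method names are assumptions made for illustration, not the application's interfaces.

# Illustrative dual-indexed cache, loosely following claims 2-3: one hash table keyed
# by physical address (PA) and one keyed by logical address (LA). Names are assumptions.

class DualIndexedCache:
    def __init__(self):
        self.by_pa = {}   # PA -> {"data": bytes, "las": set of LAs mapped to this PA}
        self.by_la = {}   # LA -> PA (each LA maps to at most one PA)

    def insert(self, pa, la, data):
        entry = self.by_pa.setdefault(pa, {"data": data, "las": set()})
        entry["data"] = data
        entry["las"].add(la)      # one PA may back several LAs (e.g. after deduplication)
        self.by_la[la] = pa

    def lookup(self, pa=None, la=None):
        """Access the cached data by physical address, logical address, or both."""
        if pa is None and la is not None:
            pa = self.by_la.get(la)
        entry = self.by_pa.get(pa) if pa is not None else None
        return entry["data"] if entry else None

# The same cached block is reachable through either address.
cache = DualIndexedCache()
cache.insert(pa=0x1000, la=42, data=b"block")
assert cache.lookup(la=42) == cache.lookup(pa=0x1000) == b"block"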
1. A computer-implemented method, comprising: caching data from a persistent storage device into a cache; caching a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. 2. The method of claim 1, wherein caching the physical address and the logical address into the cache comprises: caching the physical address and the logical address using a two-dimensional hash table. 3. The method of claim 2, wherein the two-dimensional hash table includes: a first dimensional hash table for mapping the physical address to the logical address and the data by using the physical address as a key, and a second dimensional hash table for mapping the logical address to the physical address by using the logical address as a key. 4. The method of claim 1, wherein: the logical address corresponds to one physical address or is prevented from corresponding to any physical addresses; and the physical address corresponds to at least one logical address or is prevented from corresponding to any logical addresses. 5. The method of claim 1, further comprising: caching an indicator into the cache; and setting the indicator to a positive state or a negative state to indicate whether the data is directly rewritable in the cache. 6. The method of claim 5, wherein setting the indicator comprises: if the physical address corresponds to the logical address only, setting the indicator to the positive state; and if the physical address corresponds to a plurality of logical addresses, or it is undetermined whether the physical address corresponds to the logical address only, setting the indicator to the negative state. 7. The method of claim 5, wherein setting the indicator further comprises: in response to performing at least one of a snapshot operation and a deduplication operation on the data in the storage device, setting the indicator to the negative state. 8. The method of claim 5, wherein caching the data from the storage device into the cache comprises: in response to a request for a read operation on the data, determining whether the data is cached in the cache; in response to determining that the data is absent from the cache, duplicating the data from the storage device into the cache; and setting the indicator to the negative state. 9. The method of claim 5, wherein accessing the data cached in the cache comprises: in response to the access request being a rewrite request, determining whether the indicator is in the positive state or in the negative state; in response to determining that the indicator is in the positive state, directly performing a rewrite operation on the data in the cache; and in response to determining that the indicator is in the negative state, caching data for rewriting in a further position in the cache, and setting an indicator indicating whether the data for rewriting is directly rewritable to the positive state. 10. 
An electronic apparatus, comprising: at least one processor; and at least one memory including computer instructions, the at least one memory and the computer instructions being configured, with the processor, to cause the electronic apparatus to: cache data from a persistent storage device into a cache; cache a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, access the data cached in the cache using at least one of the physical address and the logical address. 11. The electronic apparatus of claim 10, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: cache the physical address and the logical address using a two-dimensional hash table. 12. The electronic apparatus of claim 11, wherein the two-dimensional hash table includes: a first dimensional hash table for mapping the physical address to the logical address and the data using the physical address as a key, and a second dimensional hash table for mapping the logical address to the physical address by using the logical address as a key. 13. The electronic apparatus of claim 10, wherein: the logical address corresponds to one physical address or is prevented from corresponding to any physical addresses; and the physical address corresponds to at least one logical address or is prevented from corresponding to any logical addresses. 14. The electronic apparatus of claim 10, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: cache an indicator into the cache; and set the indicator to a positive state or a negative state to indicate whether the data is directly rewritable in the cache. 15. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: if the physical address corresponds to the logical address only, set the indicator to the positive state; and if the physical address corresponds to a plurality of logical addresses, or it is undetermined whether the physical address corresponds to the logical address only, set the indicator to the negative state. 16. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to performing at least one of a snapshot operation and a deduplication operation on the data in the storage device, set the indicator to the negative state. 17. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to a request for a read operation on the data, determine whether the data is cached in the cache; in response to determining that the data is absent from the cache, duplicate the data from the storage device into the cache; and set the indicator to the negative state. 18. 
The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to the access request being a rewrite request, determine whether the indicator is in the positive state or in the negative state; in response to determining that the indicator is in the positive state, directly perform a rewrite operation on the data in the cache; and in response to determining that the indicator is in the negative state, cache data for rewriting in a further position in the cache; and set an indicator indicating whether the data for rewriting is directly rewritable to the positive state. 19. A computer program product being tangibly stored on a non-volatile computer-readable medium and including machine-executable instructions, the machine-executable instructions, when executed, causing a machine to perform a step of: caching data from a persistent storage device into a cache; caching a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. 20. The computer program product of claim 19, wherein caching the physical address and the logical address into the cache comprises: caching the physical address and the logical address using a two-dimensional hash table.
The embodiments of the present disclosure provide a computer-implemented method. The method includes caching data from a persistent storage device into a cache. The method also includes caching a physical address and a logical address of the data in the persistent storage device into the cache. The method further includes in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. The embodiments of the present disclosure also provide an electronic apparatus and a computer program product.1. A computer-implemented method, comprising: caching data from a persistent storage device into a cache; caching a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. 2. The method of claim 1, wherein caching the physical address and the logical address into the cache comprises: caching the physical address and the logical address using a two-dimensional hash table. 3. The method of claim 2, wherein the two-dimensional hash table includes: a first dimensional hash table for mapping the physical address to the logical address and the data by using the physical address as a key, and a second dimensional hash table for mapping the logical address to the physical address by using the logical address as a key. 4. The method of claim 1, wherein: the logical address corresponds to one physical address or is prevented from corresponding to any physical addresses; and the physical address corresponds to at least one logical address or is prevented from corresponding to any logical addresses. 5. The method of claim 1, further comprising: caching an indicator into the cache; and setting the indicator to a positive state or a negative state to indicate whether the data is directly rewritable in the cache. 6. The method of claim 5, wherein setting the indicator comprises: if the physical address corresponds to the logical address only, setting the indicator to the positive state; and if the physical address corresponds to a plurality of logical addresses, or it is undetermined whether the physical address corresponds to the logical address only, setting the indicator to the negative state. 7. The method of claim 5, wherein setting the indicator further comprises: in response to performing at least one of a snapshot operation and a deduplication operation on the data in the storage device, setting the indicator to the negative state. 8. The method of claim 5, wherein caching the data from the storage device into the cache comprises: in response to a request for a read operation on the data, determining whether the data is cached in the cache; in response to determining that the data is absent from the cache, duplicating the data from the storage device into the cache; and setting the indicator to the negative state. 9. 
The method of claim 5, wherein accessing the data cached in the cache comprises: in response to the access request being a rewrite request, determining whether the indicator is in the positive state or in the negative state; in response to determining that the indicator is in the positive state, directly performing a rewrite operation on the data in the cache; and in response to determining that the indicator is in the negative state, caching data for rewriting in a further position in the cache, and setting an indicator indicating whether the data for rewriting is directly rewritable to the positive state. 10. An electronic apparatus, comprising: at least one processor; and at least one memory including computer instructions, the at least one memory and the computer instructions being configured, with the processor, to cause the electronic apparatus to: cache data from a persistent storage device into a cache; cache a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, access the data cached in the cache using at least one of the physical address and the logical address. 11. The electronic apparatus of claim 10, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: cache the physical address and the logical address using a two-dimensional hash table. 12. The electronic apparatus of claim 11, wherein the two-dimensional hash table includes: a first dimensional hash table for mapping the physical address to the logical address and the data using the physical address as a key, and a second dimensional hash table for mapping the logical address to the physical address by using the logical address as a key. 13. The electronic apparatus of claim 10, wherein: the logical address corresponds to one physical address or is prevented from corresponding to any physical addresses; and the physical address corresponds to at least one logical address or is prevented from corresponding to any logical addresses. 14. The electronic apparatus of claim 10, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: cache an indicator into the cache; and set the indicator to a positive state or a negative state to indicate whether the data is directly rewritable in the cache. 15. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: if the physical address corresponds to the logical address only, set the indicator to the positive state; and if the physical address corresponds to a plurality of logical addresses, or it is undetermined whether the physical address corresponds to the logical address only, set the indicator to the negative state. 16. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to performing at least one of a snapshot operation and a deduplication operation on the data in the storage device, set the indicator to the negative state. 17. 
The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to a request for a read operation on the data, determine whether the data is cached in the cache; in response to determining that the data is absent from the cache, duplicate the data from the storage device into the cache; and set the indicator to the negative state. 18. The electronic apparatus of claim 14, wherein the at least one memory and the computer instructions are further configured, with the processor, to cause the electronic apparatus to: in response to the access request being a rewrite request, determine whether the indicator is in the positive state or in the negative state; in response to determining that the indicator is in the positive state, directly perform a rewrite operation on the data in the cache; and in response to determining that the indicator is in the negative state, cache data for rewriting in a further position in the cache; and set an indicator indicating whether the data for rewriting is directly rewritable to the positive state. 19. A computer program product being tangibly stored on a non-volatile computer-readable medium and including machine-executable instructions, the machine-executable instructions, when executed, causing a machine to perform a step of: caching data from a persistent storage device into a cache; caching a physical address and a logical address of the data in the persistent storage device into the cache; and in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. 20. The computer program product of claim 19, wherein caching the physical address and the logical address into the cache comprises: caching the physical address and the logical address using a two-dimensional hash table.
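Claims 5-9 (and the mirrored apparatus claims 14-18) add a per-entry indicator recording whether the cached data may be rewritten in place: positive when the physical address is known to back exactly one logical address, negative when it backs several or when that is unknown, for example after a snapshot or deduplication. The sketch below illustrates the rewrite decision of claim 9 under those assumptions; the data layout and helper names are hypothetical.

# Sketch of the rewrite path from claim 9. POSITIVE means the data is directly
# rewritable in the cache; NEGATIVE means the rewrite is redirected to a new cache
# position. The dictionary layout and helper names are illustrative assumptions.

POSITIVE, NEGATIVE = True, False

def set_indicator(entry):
    # Claim 6: positive only when the PA is known to back exactly one LA.
    entry["indicator"] = POSITIVE if len(entry["las"]) == 1 else NEGATIVE

def rewrite(pa_index, pa, la, new_data, alloc_new_position):
    entry = pa_index[pa]
    if entry["indicator"] is POSITIVE:
        entry["data"] = new_data                  # rewrite directly in place
        return pa
    # Negative: cache the rewritten data at a further position and mark it rewritable.
    new_pa = alloc_new_position()
    pa_index[new_pa] = {"data": new_data, "las": {la}, "indicator": POSITIVE}
    return new_pa

# Usage with a bare dict standing in for the cache's PA index:
pa_index = {0x1000: {"data": b"old", "las": {7, 8}, "indicator": NEGATIVE}}
new_pa = rewrite(pa_index, 0x1000, 7, b"new", alloc_new_position=lambda: 0x2000)
assert new_pa == 0x2000 and pa_index[0x2000]["data"] == b"new"
set_indicator(pa_index[0x1000])   # stays NEGATIVE: the old PA still backs two LAs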
2,100
274,017
15,954,029
2,131
A storage system includes a controller and a solid state disk. The controller creates multiple segments in advance, selects a first die from multiple dies of the solid state disk, selects a first segment from the multiple segments, determines an available offset of the first segment, generates a write request, where the write request includes a write address, target data, and a data length of the target data, and the write address includes an identifier of a channel coupled to the first die, an identifier of the first die, an identifier of the first segment, and the available offset, and sends the write request to the solid state disk. The solid state disk receives the write request, and stores the target data according to the write address and the data length.
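The controller-side flow in the abstract amounts to picking a die, a segment, and the segment's next available offset, then packaging those coordinates into the write address of the request. The sketch below shows one way to compose such a request; the field layout and names are assumptions made for illustration, not the application's format.

# Illustrative composition of the write request described in the abstract above.
# The dataclass layout and names are assumptions, not the application's format.

from dataclasses import dataclass

@dataclass
class WriteRequest:
    channel_id: int    # identifier of the channel coupled to the selected die
    die_id: int        # identifier of the first die chosen by the controller
    segment_id: int    # identifier of the first segment chosen by the controller
    offset: int        # available offset inside that segment
    data: bytes
    length: int

def build_write_request(channel_id, die_id, segment_id, available_offset, data):
    return WriteRequest(channel_id, die_id, segment_id, available_offset, data, len(data))

# The controller would also record the host LBA -> (channel, die, segment, offset)
# mapping in its system mapping table (claim 8) before sending the request.
req = build_write_request(channel_id=2, die_id=5, segment_id=17,
                          available_offset=8192, data=b"target data")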
1. A storage system, comprising: a solid state disk comprising a plurality of channels, each one of the channels being coupled to a plurality of dies; and a controller capable of communicating with the solid state disk and configured to: create a plurality of segments; select a first die from the dies; select a first segment from the segments; determine an available offset of the first segment; generate a write request comprising a write address, target data, and a data length of the target data, the write address comprising an identifier of a channel coupled to the first die, an identifier of the first die, an identifier of the first segment, and the available offset; and send the write request to the solid state disk, and the solid state disk being configured to: receive the write request; and store the target data according to the write address and the data length of the target data. 2. The storage system according to claim 1, wherein the controller is further configured to: record states of the dies; and select a stateless die from the dies as the first die. 3. The storage system according to claim 1, wherein the controller is further configured to: determine an access frequency of the target data based on a host logical block address of the target data; and select a die in which an amount of stored data, whose access frequency is greater than an access frequency threshold, is less than a first threshold as the first die when the access frequency of the target data is greater than the access frequency threshold. 4. The storage system according to claim 1, wherein the controller is further configured to: record an amount of valid data stored in each of the dies; and select a die in which an amount of valid data is less than a second threshold as the first die. 5. The storage system according to claim 1, wherein the controller is further configured to: record a wear degree of each of the dies; and select a die whose wear degree is less than a wear degree threshold as the first die. 6. The storage system according to claim 1, wherein the controller is further configured to: record a quantity of read requests to be processed in each of the dies; and select a die in which a quantity of read requests to be processed is less than a third threshold as the first die. 7. The storage system according to claim 1, wherein the controller is further configured to: select a certain segment as the first segment when the certain segment is already allocated to the first die and has available storage space; or select a blank segment from the segments as the first segment. 8. The storage system according to claim 1, wherein the controller is further configured to: generate a mapping relationship configured to record a mapping between a host logical block address of the target data and the channel coupled to the first die, the first die, the first segment, and the available offset; and store the mapping relationship in a system mapping table. 9. 
The storage system according to claim 1, wherein the solid state disk is further configured to: query a local mapping table according to the identifier of the first segment and the available offset comprised in the write address, the local mapping table being configured to store a mapping relationship between a segment and a physical block address of the solid state disk; determine a page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 10. The storage system according to claim 9, wherein the solid state disk is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table. 11. A solid state disk, comprising: a processor; a memory coupled to the processor; a communications interface coupled to the processor; and a plurality of channels, the processor and the memory being respectively coupled to a plurality of dies via each of the channels, each of the dies comprising a plurality of blocks, the processor, the memory, and the communications interface being capable of communicating with each other, the communications interface being configured to receive a write request comprising a write address, target data, and a data length of the target data, and the write address comprising an identifier of a first die, an identifier of a channel coupled to the first die, an identifier of a first segment, and an available offset, the memory being configured to store a local mapping table configured to record a mapping relationship between a segment and a physical block address of the solid state disk, and the processor being configured to: query the local mapping table according to the identifier of the first segment and the available offset comprised in the write address; and determine a page identifier according to the available offset, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 12. 
The solid state disk according to claim 11, wherein the processor is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table. 13. A solid state disk, comprising: a processor; a memory coupled to the processor; a communications interface coupled to the processor; and a plurality of channels, the processor and the memory being respectively coupled to a plurality of dies via each of the channels, each of the dies comprising a plurality of blocks, the processor, the memory, and the communications interface being capable of communicating with each other, the communications interface being configured to receive a write request comprising a write address, target data, and a data length of the target data, the write address comprising an identifier of a first segment and an available offset, and the identifier of the first segment carries an identifier of a first die and an identifier of a channel coupled to the first die, the memory being configured to store a local mapping table configured to record a mapping relationship between a segment and a physical block address of the solid state disk, and the processor being configured to: query the local mapping table according to the identifier of the first segment and the available offset comprised in the write address; determine a page identifier according to the available offset, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and parse the identifier of the first segment to obtain the identifier of the first die and the identifier of the channel coupled to the first die, select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 14. The solid state disk according to claim 13, wherein the processor is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table.
A storage system includes a controller and a solid state disk. The controller creates multiple segments in advance, selects a first die from multiple dies of the solid state disk, selects a first segment from the multiple segments, determines an available offset of the first segment, generates a write request, where the write request includes a write address, target data, and a data length of the target data, and the write address includes an identifier of a channel coupled to the first die, an identifier of the first die, an identifier of the first segment, and the available offset, and sends the write request to the solid state disk. The solid state disk receives the write request, and stores the target data according to the write address and the data length.1. A storage system, comprising: a solid state disk comprising a plurality of channels, each one of the channels being coupled to a plurality of dies; and a controller capable of communicating with the solid state disk and configured to: create a plurality of segments; select a first die from the dies; select a first segment from the segments; determine an available offset of the first segment; generate a write request comprising a write address, target data, and a data length of the target data, the write address comprising an identifier of a channel coupled to the first die, an identifier of the first die, an identifier of the first segment, and the available offset; and send the write request to the solid state disk, and the solid state disk being configured to: receive the write request; and store the target data according to the write address and the data length of the target data. 2. The storage system according to claim 1, wherein the controller is further configured to: record states of the dies; and select a stateless die from the dies as the first die. 3. The storage system according to claim 1, wherein the controller is further configured to: determine an access frequency of the target data based on a host logical block address of the target data; and select a die in which an amount of stored data, whose access frequency is greater than an access frequency threshold, is less than a first threshold as the first die when the access frequency of the target data is greater than the access frequency threshold. 4. The storage system according to claim 1, wherein the controller is further configured to: record an amount of valid data stored in each of the dies; and select a die in which an amount of valid data is less than a second threshold as the first die. 5. The storage system according to claim 1, wherein the controller is further configured to: record a wear degree of each of the dies; and select a die whose wear degree is less than a wear degree threshold as the first die. 6. The storage system according to claim 1, wherein the controller is further configured to: record a quantity of read requests to be processed in each of the dies; and select a die in which a quantity of read requests to be processed is less than a third threshold as the first die. 7. The storage system according to claim 1, wherein the controller is further configured to: select a certain segment as the first segment when the certain segment is already allocated to the first die and has available storage space; or select a blank segment from the segments as the first segment. 8. 
The storage system according to claim 1, wherein the controller is further configured to: generate a mapping relationship configured to record a mapping between a host logical block address of the target data and the channel coupled to the first die, the first die, the first segment, and the available offset; and store the mapping relationship in a system mapping table. 9. The storage system according to claim 1, wherein the solid state disk is further configured to: query a local mapping table according to the identifier of the first segment and the available offset comprised in the write address, the local mapping table being configured to store a mapping relationship between a segment and a physical block address of the solid state disk; determine a page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 10. The storage system according to claim 9, wherein the solid state disk is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table. 11. 
A solid state disk, comprising: a processor; a memory coupled to the processor; a communications interface coupled to the processor; and a plurality of channels, the processor and the memory being respectively coupled to a plurality of dies via each of the channels, each of the dies comprising a plurality of blocks, the processor, the memory, and the communications interface being capable of communicating with each other, the communications interface being configured to receive a write request comprising a write address, target data, and a data length of the target data, and the write address comprising an identifier of a first die, an identifier of a channel coupled to the first die, an identifier of a first segment, and an available offset, the memory being configured to store a local mapping table configured to record a mapping relationship between a segment and a physical block address of the solid state disk, and the processor being configured to: query the local mapping table according to the identifier of the first segment and the available offset comprised in the write address; and determine a page identifier according to the available offset, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 12. The solid state disk according to claim 11, wherein the processor is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table. 13. 
A solid state disk, comprising: a processor; a memory coupled to the processor; a communications interface coupled to the processor; and a plurality of channels, the processor and the memory being respectively coupled to a plurality of dies via each of the channels, each of the dies comprising a plurality of blocks, the processor, the memory, and the communications interface being capable of communicating with each other, the communications interface being configured to receive a write request comprising a write address, target data, and a data length of the target data, the write address comprising an identifier of a first segment and an available offset, and the identifier of the first segment carries an identifier of a first die and an identifier of a channel coupled to the first die, the memory being configured to store a local mapping table configured to record a mapping relationship between a segment and a physical block address of the solid state disk, and the processor being configured to: query the local mapping table according to the identifier of the first segment and the available offset comprised in the write address; determine a page identifier according to the available offset, and write, based on the data length of the target data and starting from a page corresponding to the page identifier, the target data into a block corresponding to the first segment when the block corresponding to the first segment is recorded in the local mapping table; and parse the identifier of the first segment to obtain the identifier of the first die and the identifier of the channel coupled to the first die, select a blank block from a plurality of blocks of the first die based on the identifier of the channel coupled to the first die and the identifier of the first die, determine the page identifier according to the available offset comprised in the write address, and write, based on the data length of the target data and starting from the page corresponding to the page identifier, the target data into the blank block when the block corresponding to the first segment is not recorded in the local mapping table. 14. The solid state disk according to claim 13, wherein the processor is further configured to: generate a new mapping relationship configured to record a mapping between the first segment and the blank block; and store the new mapping relationship in the local mapping table.
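On the solid state disk side, claims 9-14 describe a local mapping table from segment identifier to physical block: if the segment is already backed by a block, the page is derived from the offset and the data is written into that block; otherwise a blank block is selected from the addressed die and a new segment-to-block mapping is recorded. The sketch below follows that decision under assumed details: the page size, free-block bookkeeping, and names are illustrative, and the NAND program operation is a placeholder.

# Rough sketch of the SSD-side write handling from claims 9-14. Page size,
# free-block bookkeeping, and all names are illustrative assumptions.

PAGE_SIZE = 4096

def program_pages(block_id, start_page, data):
    pass   # placeholder for the actual NAND program operation

def handle_write(local_map, free_blocks, channel_id, die_id, segment_id, offset, data):
    """local_map: segment_id -> block_id; free_blocks: (channel_id, die_id) -> blank block ids."""
    page_id = offset // PAGE_SIZE                 # page identifier derived from the offset
    block_id = local_map.get(segment_id)
    if block_id is None:
        # Segment not yet backed: take a blank block from the addressed die and
        # record the new segment-to-block mapping (claims 10, 12, and 14).
        block_id = free_blocks[(channel_id, die_id)].pop()
        local_map[segment_id] = block_id
    program_pages(block_id, page_id, data)        # write starting from the derived page
    return block_id, page_id

# Segment 17 is not yet mapped, so a blank block of die (2, 5) is used.
local_map, free_blocks = {}, {(2, 5): [40, 41]}
assert handle_write(local_map, free_blocks, 2, 5, 17, 8192, b"target data") == (41, 2)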
2,100
274,018
15,954,127
2,131
A characteristic data pre-processing system includes a data acquisition device that collects characteristic data including first cell distribution data defined according to first default read levels, and second cell distribution data defined according to second default read levels, a data pre-processing apparatus that merges the first cell distribution data and the second cell distribution data according to crop ranges to generate training data, wherein the crop ranges are defined according to the first default read levels and the second default read levels, and a database that stores the training data communicated from the data pre-processing apparatus.
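Two steps do the real work in this abstract: equalizing the resolutions of the two cell distributions (claims 3-5 below call the component an average unspooling module, which uniformly divides each coarse bin count over finer bins), and cutting both distributions to crop ranges anchored at the default read levels before merging them into training data. The sketch below illustrates both steps; bin sizes, range widths, and names are assumptions made only for the example, not the application's parameters.

# Illustrative pre-processing: (1) uniformly split coarse bin counts into finer bins so
# both distributions share a common resolution; (2) crop each distribution around its
# default read levels and merge the pieces, one training sample per crop range.
# All parameters and names here are assumptions for the sketch.

def equalize_resolution(cell_counts, factor):
    """Uniformly divide each coarse bin count over `factor` finer bins."""
    fine = []
    for count in cell_counts:
        fine.extend([count / factor] * factor)
    return fine

def crop(distribution, center_bin, half_width):
    """Take the bins inside one crop range centered on a default read level."""
    lo = max(0, center_bin - half_width)
    return distribution[lo:center_bin + half_width + 1]

def merge_training_data(dist_a, dist_b, levels_a, levels_b, half_width):
    """One merged training vector per crop range (the same count m for both devices)."""
    return [crop(dist_a, a, half_width) + crop(dist_b, b, half_width)
            for a, b in zip(levels_a, levels_b)]

# A 16-bin distribution at half the target resolution is refined to 32 bins first;
# the total cell count is preserved by the uniform split.
coarse = [10, 12, 9, 7, 5, 3, 2, 1, 1, 2, 4, 6, 8, 9, 7, 4]
fine = equalize_resolution(coarse, factor=2)
assert len(fine) == 32 and abs(sum(fine) - sum(coarse)) < 1e-9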
1. A characteristic data pre-processing system, comprising: a data acquisition device that collects characteristic data including first cell distribution data defined according to first default read levels, and second cell distribution data defined according to second default read levels; a data pre-processing apparatus that merges the first cell distribution data and the second cell distribution data according to crop ranges to generate training data, wherein the crop ranges are defined according to the first default read levels and the second default read levels; and a database that stores the training data communicated from the data pre-processing apparatus. 2. The characteristic data pre-processing system of claim 1, wherein a number of first default read levels, a number of second default read levels, a number of first cell distribution data, and a number of second cell distribution data are respectively equal to an integer value of ‘m’. 3. The characteristic data pre-processing system of claim 1, wherein the first cell distribution data is defined using a first resolution and the second cell distribution data is defined using a second resolution different from the first resolution, and the data pre-processing apparatus further comprises: an average unspooling module that equalizes the first resolution of the first cell distribution data and the second resolution of the second cell distribution data. 4. The characteristic data pre-processing system of claim 3, wherein the average unspooling module equalizes the first resolution of the first cell distribution data and the second resolution of the second cell distribution data into a third resolution, and the third resolution is a common divisor of the first and second resolutions. 5. The characteristic data pre-processing system of claim 4, wherein the average unspooling module uniformly divides a cell count corresponding to the first resolution of the first cell distribution data into a cell count corresponding to the third resolution. 6. The characteristic data pre-processing system of claim 1, wherein the data pre-processing apparatus further comprises a data cleaning module that removes meaningless data from the training data before communicating the training data to the database. 7. The characteristic data pre-processing system of claim 2, where ‘m’ is equal to 2^N−1, and ‘N’ is a number of read data bits of a memory device providing the characteristic data. 8. The characteristic data pre-processing system of claim 7, wherein the memory device is a NAND flash memory device, N=3, and each of the first default read levels and second default read levels respectively includes first to seventh sequentially increasing default read levels. 9. The characteristic data pre-processing system of claim 8, wherein the m crop ranges include first to seventh crop ranges, the first, second and third crop ranges are based on the first, second and third read levels, respectively, the fifth, sixth and seventh crop ranges are based on the fifth, sixth and seventh read levels, respectively, and the fourth, fifth and sixth crop ranges are based on the same read level. 10. The characteristic data pre-processing system of claim 9, wherein the fourth and sixth crop ranges are based on the sixth read level. 11. 
A characteristic data pre-processing apparatus, comprising: a data pre-processing system that receives m first cell distribution data derived from a first NAND flash memory using a first resolution and m second cell distribution data derived from a second NAND flash memory using a second resolution different from the first resolution, where ‘m’ is an integer equal to a number of read bits for each of the first NAND flash memory and the second NAND flash memory, wherein the data pre-processing apparatus comprises: an average unspooling module that equalizes the first and second resolutions; and a data merging module that merges the first and second cell distribution data according to m crop ranges to generate corresponding training data. 12. The characteristic data pre-processing apparatus of claim 11, wherein the average unspooling module equalizes the first resolution and the second resolution using a third resolution smaller than either one of the first and second resolutions. 13. The characteristic data pre-processing apparatus of claim 11, further comprising: a data cleaning module that removes meaningless data from the training data. 14. The characteristic data pre-processing apparatus of claim 11, wherein the m crop ranges are defined according to m default read levels for the first and second cell distribution data. 15. The characteristic data pre-processing apparatus of claim 14, wherein the m crop ranges include 1st to m-th crop ranges, and sizes of the k-th crop range are equal to each other between the first and second cell distribution data, where ‘k’ is an integer. 16. A memory control system, comprising: a data acquisition device collecting a plurality of characteristic data including first and second cell distribution data; a data pre-processing apparatus merging the first and second cell distribution data according to predetermined crop ranges to generate training data; a database including the training data; and a machine learning model learning the training data to derive a control coefficient. 17. The memory control system of claim 16, wherein the machine learning model includes: an algorithm selection module selecting an appropriate algorithm by analyzing the plurality of characteristic data; an attribute selection module selecting a core attribute of the characteristic data; and a learning model constructing a prediction model using the algorithm and the core attribute. 18. The memory control system of claim 17, wherein the machine learning model clusters the plurality of characteristic data and classifies the clustered characteristic data into a plurality of classes. 19. The memory control system of claim 17, wherein the attribute selection module selects the core attribute through correlation analysis of attributes of the characteristic data and an optimum read level. 20. The memory control system of claim 19, wherein the prediction model derives an independent control coefficient for each of the crop ranges. 21.-30. (canceled)
A characteristic data pre-processing system includes a data acquisition device that collects characteristic data including first cell distribution data defined according to first default read levels, and second cell distribution data defined according to second default read levels, a data pre-processing apparatus that merges the first cell distribution data and the second cell distribution data according to crop ranges to generate training data, wherein the crop ranges are defined according to the first default read levels and the second default read levels, and a database that stores the training data communicated from the data pre-processing apparatus.1. A characteristic data pre-processing system, comprising: a data acquisition device that collects characteristic data including first cell distribution data defined according to first default read levels, and second cell distribution data defined according to second default read levels; a data pre-processing apparatus that merges the first cell distribution data and the second cell distribution data according to crop ranges to generate training data, wherein the crop ranges are defined according to the first default read levels and the second default read levels; and a database that stores the training data communicated from the data pre-processing apparatus. 2. The characteristic data pre-processing system of claim 1, wherein a number of first default read levels, a number of second default read levels, a number of first cell distribution data, and a number of second cell distribution data are respectively equal to an integer value of ‘m’. 3. The characteristic data pre-processing system of claim 1, wherein the first cell distribution data is defined using a first resolution and the second cell distribution data is defined using a second resolution different from the first resolution, and the data pre-processing apparatus further comprises: an average unspooling module that equalizes the first resolution of the first cell distribution data and the second resolution of the second cell distribution data. 4. The characteristic data pre-processing system of claim 3, wherein the average unspooling module equalizes the first resolution of the first cell distribution data and the second resolution of the second cell distribution data into a third resolution, and the third resolution is a common divisor of the first and second resolutions. 5. The characteristic data pre-processing system of claim 4, wherein the average unspooling module uniformly divides a cell count corresponding to the first resolution of the first cell distribution data into a cell count corresponding to the third resolution. 6. The characteristic data pre-processing system of claim 1, wherein the data pre-processing apparatus further comprises a data cleaning module that removes meaningless data from the training data before communicating the training data to the database. 7. The characteristic data pre-processing system of claim 2, where ‘m’ is equal to 2^N−1, and ‘N’ is a number of read data bits of a memory device providing the characteristic data. 8. The characteristic data pre-processing system of claim 7, wherein the memory device is a NAND flash memory device, N=3, and each of the first default read levels and second default read levels respectively includes first to seventh sequentially increasing default read levels. 9. 
The characteristic data pre-processing system of claim 8, wherein the m crop ranges include first to seventh crop ranges, the first, second and third crop ranges are based on the first, second and third read levels, respectively, the fifth, sixth and seventh crop ranges are based on the fifth, sixth and seventh read levels, respectively, and the fourth, fifth and sixth crop ranges are based on the same read level. 10. The characteristic data pre-processing system of claim 9, wherein the fourth and sixth crop ranges are based on the sixth read level. 11. A characteristic data pre-processing apparatus, comprising: a data pre-processing system that receives m first cell distribution data derived from a first NAND flash memory using a first resolution and m second cell distribution data derived from a second NAND flash memory using a second resolution different from the first resolution, where ‘m’ is an integer equal to a number of read bits for each of the first NAND flash memory and the second NAND flash memory, wherein the data pre-processing apparatus comprises: an average unspooling module that equalizes the first and second resolutions; and a data merging module that merges the first and second cell distribution data according to m crop ranges to generate corresponding training data. 12. The characteristic data pre-processing apparatus of claim 11, wherein the average unspooling module equalizes the first resolution and the second resolution using a third resolution smaller than either one of the first and second resolutions. 13. The characteristic data pre-processing apparatus of claim 11, further comprising: a data cleaning module that removes meaningless data from the training data. 14. The characteristic data pre-processing apparatus of claim 11, wherein the m crop ranges are defined according to m default read levels for the first and second cell distribution data. 15. The characteristic data pre-processing apparatus of claim 14, wherein the m crop ranges include 1st to m-th crop ranges, and sizes of the k-th crop range are equal to each other between the first and second cell distribution data, where ‘k’ is an integer. 16. A memory control system, comprising: a data acquisition device collecting a plurality of characteristic data including first and second cell distribution data; a data pre-processing apparatus merging the first and second cell distribution data according to predetermined crop ranges to generate training data; a database including the training data; and a machine learning model learning the training data to derive a control coefficient. 17. The memory control system of claim 16, wherein the machine learning model includes: an algorithm selection module selecting an appropriate algorithm by analyzing the plurality of characteristic data; an attribute selection module selecting a core attribute of the characteristic data; and a learning model constructing a prediction model using the algorithm and the core attribute. 18. The memory control system of claim 17, wherein the machine learning module clusters the plurality of characteristic data and classifies the clustered characteristic data into a plurality of classes. 19. The memory control system of claim 17, wherein the attribute selection module selects the core attribute through correlation analysis of attributes of the characteristic data and an optimum read level. 20. The memory control system of claim 19, wherein the prediction module derives an independent control coefficient for each of the crop ranges. 
21.-30. (canceled)
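Claims 16-20 above outline a memory control system in which a machine learning model selects a core attribute by correlating attributes of the characteristic data with an optimum read level and then derives an independent control coefficient for each crop range. A very small numpy-only sketch of that idea, using plain correlation for attribute selection and a least-squares line per crop range as the prediction model; the attribute names, data shapes, and the linear form are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical characteristic data: one row per sample, one column per attribute
# (program/erase cycles, retention time, temperature), plus a measured optimum
# read level for each of the m crop ranges.
attrs = {"pe_cycles": rng.uniform(0, 3000, 200),
         "retention_h": rng.uniform(0, 500, 200),
         "temperature": rng.uniform(20, 85, 200)}
X = np.column_stack(list(attrs.values()))
m = 7
optimum_levels = 0.01 * X[:, :1] + rng.normal(0, 1, (200, m))   # toy targets

def select_core_attribute(X, y, names):
    """Pick the attribute with the largest |correlation| to the optimum level."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    best = int(np.argmax(corrs))
    return best, names[best]

def control_coefficients(X, Y, core_idx):
    """Fit one independent (slope, intercept) pair per crop range."""
    return [np.polyfit(X[:, core_idx], Y[:, k], deg=1) for k in range(Y.shape[1])]

idx, name = select_core_attribute(X, optimum_levels[:, 0], list(attrs))
coeffs = control_coefficients(X, optimum_levels, idx)
print(f"core attribute: {name}; coefficients for crop range 0: {coeffs[0]}")
```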
2,100
274,019
15,768,557
2,131
According to an example, cache operations may be managed by detecting that a cacheline in a cache is being dirtied, determining a current epoch number, in which the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint, and inserting an association of the cacheline to the current epoch number into a field of the cacheline that is being dirtied.
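The abstract above describes tagging a cacheline with the current epoch number when it is dirtied, incrementing the epoch each time a thread completes a flush-barrier checkpoint, and flushing by epoch. A small behavioral model in Python, purely illustrative; the class and method names are invented here, and write-back is modeled as copying data into a dict standing in for persistent memory.

```python
class EpochCache:
    """Toy model of epoch-tagged cachelines with an epoch-specific flush."""

    def __init__(self):
        self.current_epoch = 0
        self.lines = {}      # address -> {"data": ..., "dirty": bool, "epoch": int}
        self.memory = {}     # address -> data (stands in for persistent memory)

    def store(self, addr, data):
        # Dirtying a line associates it with the current epoch.
        self.lines[addr] = {"data": data, "dirty": True, "epoch": self.current_epoch}

    def flush_epoch(self, epoch):
        """Write back every dirty line whose epoch matches or is prior to `epoch`."""
        for addr, line in self.lines.items():
            if line["dirty"] and line["epoch"] <= epoch:
                self.memory[addr] = line["data"]
                line["dirty"] = False

    def flush_barrier_checkpoint(self):
        """Complete a checkpoint: flush the current epoch, then increment it."""
        self.flush_epoch(self.current_epoch)
        self.current_epoch += 1

cache = EpochCache()
cache.store(0x10, "a")          # tagged with epoch 0
cache.flush_barrier_checkpoint()
cache.store(0x20, "b")          # tagged with epoch 1
cache.flush_epoch(0)            # only epoch-0 lines are persistent so far
print(cache.current_epoch, cache.memory)   # 1 {16: 'a'}
```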
1. A method for managing cache operations, said method comprising: detecting that a cacheline in a cache is being dirtied; determining a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; and inserting, by a cache management logic, an association of the cacheline to the current epoch number into a field of the cacheline that is being dirtied. 2. The method according to claim 1, further comprising: executing an epoch-specific flush instruction on a processor, wherein the epoch-specific flush instruction includes an identification of a particular epoch number, and wherein execution of the epoch-specific flush instruction causes each cacheline in the cache having an associated epoch number that matches or is prior to the particular epoch number to be written back to a memory. 3. The method according to claim 2, wherein execution of the epoch-specific flush instruction is completed when all of the write-backs of the cachelines associated with epoch numbers that match or are prior to the particular epoch number have been committed to the memory. 4. The method according to claim 2, further comprising: sending a snoop or a probe message to another cache across a protocol layer, wherein the snoop or probe message is to cause a cache management logic in the another cache to perform the epoch-specific flush behavior on cachelines in the another cache having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory. 5. The method according to claim 2, further comprising: sending a message identifying the particular epoch number to a memory controller of the memory following execution of the epoch-specific flush instruction, wherein the memory controller is to return a completion message responsive to receipt of the message following a final writeback of the cachelines having an associated epoch number that matches or is prior to the particular epoch number to the memory. 6. The method according to claim 1, further comprising: identifying the current epoch number associated with a coherency domain; and incrementing the current epoch number following completion of a flush-barrier checkpoint by a thread of execution in that coherency domain. 7. The method according to claim 1, further comprising: determining that the dirtied cacheline is modified prior to being written back to the memory and that the current epoch number has been incremented; maintaining the association with the current epoch number following the modification such that the cacheline remains associated with the current epoch number prior to the current epoch number being incremented. 8. The method according to claim 1, further comprising: transferring the cacheline to another cache in a coherency domain while maintaining the association of the current epoch number to the cacheline in the field of the cacheline. 9. The method according to claim 1, wherein inserting the association of the current epoch number to the cacheline into the field of the cacheline further comprises one of: inserting the current epoch number into the field of the cacheline; and inserting a link field in the field of the cacheline, wherein the link field includes a pointer to a next cacheline associated with the current epoch number, such that the cacheline is part of a linked list of cachelines associated with the current epoch number. 10. 
A cache comprising: a cache array on which is stored a plurality of cachelines; and cache management logic that is to control management of the plurality of cachelines, wherein the cache management logic is to: detect that a cacheline in the cache array is being dirtied; determine a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; insert an association of the cacheline that is being dirtied to the current epoch number into a field of the cacheline. 11. The cache according to claim 10, wherein the cache management logic is further to perform an epoch-specific flush behavior, wherein the epoch-specific flush behavior includes an identification of a particular epoch number, and wherein performance of the epoch-specific flush behavior causes each cacheline in the cache array having an associated epoch number that matches or is prior to the particular epoch number to be written back to a memory. 12. The cache according to claim 11, wherein the cache management logic is further to send a snoop or a probe message to another cache across a protocol layer, wherein the snoop or probe message is to cause a cache management logic in the another cache to perform the epoch-specific flush behavior on cachelines in the another cache having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory. 13. The cache according to claim 11, wherein the cache management logic is further to send a message identifying the particular epoch number to a memory controller following execution of an epoch-specific flush instruction, wherein the memory controller is to return a completion message responsive to receipt of the message following a final writeback of the cachelines having an associated epoch number that matches or is prior to the particular epoch number to the memory. 14. The cache according to claim 11, wherein, to insert the association of the current epoch number to the cacheline that is being dirtied into the field of the cacheline, the cache management logic is further to one of: insert the current epoch number into the field of the cacheline that is being dirtied; and insert a link field in the field of the cacheline that is being dirtied, wherein the link field includes a pointer to a next cacheline associated with the current epoch number, such that the cacheline that is being dirtied is part of a linked list of cachelines associated with the current epoch number. 15. A method for managing cache operations using epochs, said method comprising: determining a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; inserting an association of the current epoch number to a cacheline into a field of a cacheline that is being dirtied; executing an epoch-specific flush instruction, wherein execution of the epoch-specific flush instruction causes each cacheline having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory; sending a message identifying the particular epoch number to a memory controller following execution of the epoch-specific flush instruction; and incrementing the current epoch number in response to receipt of response message from the memory controller to the external message.
According to an example, cache operations may be managed by detecting that a cacheline in a cache is being dirtied, determining a current epoch number, in which the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint, and inserting an association of the cacheline to the current epoch number into a field of the cacheline that is being dirtied.1. A method for managing cache operations, said method comprising: detecting that a cacheline in a cache is being dirtied; determining a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; and inserting, by a cache management logic, an association of the cacheline to the current epoch number into a field of the cacheline that is being dirtied. 2. The method according to claim 1, further comprising: executing an epoch-specific flush instruction on a processor, wherein the epoch-specific flush instruction includes an identification of a particular epoch number, and wherein execution of the epoch-specific flush instruction causes each cacheline in the cache having an associated epoch number that matches or is prior to the particular epoch number to be written back to a memory. 3. The method according to claim 2, wherein execution of the epoch-specific flush instruction is completed when all of the write-backs of the cachelines associated with epoch numbers that match or are prior to the particular epoch number have been committed to the memory. 4. The method according to claim 2, further comprising: sending a snoop or a probe message to another cache across a protocol layer, wherein the snoop or probe message is to cause a cache management logic in the another cache to perform the epoch-specific flush behavior on cachelines in the another cache having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory. 5. The method according to claim 2, further comprising: sending a message identifying the particular epoch number to a memory controller of the memory following execution of the epoch-specific flush instruction, wherein the memory controller is to return a completion message responsive to receipt of the message following a final writeback of the cachelines having an associated epoch number that matches or is prior to the particular epoch number to the memory. 6. The method according to claim 1, further comprising: identifying the current epoch number associated with a coherency domain; and incrementing the current epoch number following completion of a flush-barrier checkpoint by a thread of execution in that coherency domain. 7. The method according to claim 1, further comprising: determining that the dirtied cacheline is modified prior to being written back to the memory and that the current epoch number has been incremented; maintaining the association with the current epoch number following the modification such that the cacheline remains associated with the current epoch number prior to the current epoch number being incremented. 8. The method according to claim 1, further comprising: transferring the cacheline to another cache in a coherency domain while maintaining the association of the current epoch number to the cacheline in the field of the cacheline. 9. 
The method according to claim 1, wherein inserting the association of the current epoch number to the cacheline into the field of the cacheline further comprises one of: inserting the current epoch number into the field of the cacheline; and inserting a link field in the field of the cacheline, wherein the link field includes a pointer to a next cacheline associated with the current epoch number, such that the cacheline is part of a linked list of cachelines associated with the current epoch number. 10. A cache comprising: a cache array on which is stored a plurality of cachelines; and cache management logic that is to control management of the plurality of cachelines, wherein the cache management logic is to: detect that a cacheline in the cache array is being dirtied; determine a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; insert an association of the cacheline that is being dirtied to the current epoch number into a field of the cacheline. 11. The cache according to claim 10, wherein the cache management logic is further to perform an epoch-specific flush behavior, wherein the epoch-specific flush behavior includes an identification of a particular epoch number, and wherein performance of the epoch-specific flush behavior causes each cacheline in the cache array having an associated epoch number that matches or is prior to the particular epoch number to be written back to a memory. 12. The cache according to claim 11, wherein the cache management logic is further to send a snoop or a probe message to another cache across a protocol layer, wherein the snoop or probe message is to cause a cache management logic in the another cache to perform the epoch-specific flush behavior on cachelines in the another cache having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory. 13. The cache according to claim 11, wherein the cache management logic is further to send a message identifying the particular epoch number to a memory controller following execution of an epoch-specific flush instruction, wherein the memory controller is to return a completion message responsive to receipt of the message following a final writeback of the cachelines having an associated epoch number that matches or is prior to the particular epoch number to the memory. 14. The cache according to claim 11, wherein, to insert the association of the current epoch number to the cacheline that is being dirtied into the field of the cacheline, the cache management logic is further to one of: insert the current epoch number into the field of the cacheline that is being dirtied; and insert a link field in the field of the cacheline that is being dirtied, wherein the link field includes a pointer to a next cacheline associated with the current epoch number, such that the cacheline that is being dirtied is part of a linked list of cachelines associated with the current epoch number. 15. 
A method for managing cache operations using epochs, said method comprising: determining a current epoch number, wherein the current epoch number is associated with a store operation and wherein the epoch number is incremented each time a thread of execution completes a flush-barrier checkpoint; inserting an association of the current epoch number to a cacheline into a field of a cacheline that is being dirtied; executing an epoch-specific flush instruction, wherein execution of the epoch-specific flush instruction causes each cacheline having an associated epoch number that matches or falls below the particular epoch number to be written back to a memory; sending a message identifying the particular epoch number to a memory controller following execution of the epoch-specific flush instruction; and incrementing the current epoch number in response to receipt of response message from the memory controller to the external message.
2,100
274,020
15,952,292
2,131
A storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, includes a memory, and a processor coupled to the memory and configured to store, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium, write the data additionally and collectively to the storage medium, and when the data is updated, maintain storing a reference logical address associated with the data before updated and the data before updated on the storage medium.
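The abstract above describes a controller that appends data to the storage medium, keeps a logical-to-physical address conversion table in memory, and on update leaves the pre-update data together with its reference logical address on the medium so that garbage collection can later distinguish valid from stale blocks. A minimal Python sketch of that bookkeeping, with a list standing in for the medium and a dict for the conversion table; the invalid-block test loosely follows the rule in claim 2, and all names and structures here are illustrative assumptions.

```python
class AppendOnlyStore:
    """Toy log-structured store with a logical->physical conversion table."""

    def __init__(self):
        self.medium = []   # each entry: (reference_logical_address, data)
        self.table = {}    # logical address -> physical address (index in medium)

    def write(self, logical, data):
        """Append data with its reference logical address; old copies stay put."""
        physical = len(self.medium)
        self.medium.append((logical, data))
        self.table[logical] = physical          # only the table moves forward
        return physical

    def is_garbage(self, physical):
        """A block is invalid when the table no longer points its reference
        logical address at this physical address (claim-2 style check)."""
        logical, _ = self.medium[physical]
        return self.table.get(logical) != physical

    def garbage_collect(self):
        return [p for p in range(len(self.medium)) if self.is_garbage(p)]

store = AppendOnlyStore()
p0 = store.write("LBA7", "v1")
p1 = store.write("LBA7", "v2")      # update: v1 remains on the medium
print(store.is_garbage(p0), store.is_garbage(p1))   # True False
print(store.garbage_collect())                       # [0]
```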
1. A storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, comprising: a memory; and a processor coupled to the memory and configured to: store, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium, write the data additionally and collectively to the storage medium, and when the data is updated, maintain storing a reference logical address associated with the data before updated and the data before updated on the storage medium. 2. The storage control apparatus according to claim 1, wherein when a physical address associated with the reference logical address by the address conversion information does not match a physical address of the data, the processor determines that the data is invalid data and a target of garbage collection. 3. The storage control apparatus according to claim 2, wherein the processor records a physical address indicating a position where the data is appended and bulk-written to the storage medium in the address conversion information in association with the logical address as a meta-address, appends and bulk-writes the address conversion information to the storage medium, and when a physical address indicating a position where the address conversion information is appended and bulk-written to the storage medium does not match the meta-address associated with the logical address included in the address conversion information, the processor determines that the address conversion information is invalid and a target of garbage collection. 4. The storage control apparatus according to claim 3, wherein when a meta-address associated with the logical address included in the address conversion information does not exist, the processor determines that the address conversion information is invalid and a target of garbage collection. 5. The storage control apparatus according to claim 2, wherein the processor determines whether or not the data is data managed by the storage control apparatus itself, when the data is not data managed by the storage control apparatus itself, the processor requests a storage control apparatus in charge of the data to determine whether or not the data is valid, and when a response indicating that the data is not valid is received, the processor determines that the data is a target of garbage collection. 6. A storage control method for a storage control apparatus including a memory and a processor coupled to the memory, the storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, comprising: storing, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium; writing the data additionally and collectively to the storage medium; and when the data is updated, maintaining storing a reference logical address associated with the data before updated and the data before updated on the storage medium.
A storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, includes a memory, and a processor coupled to the memory and configured to store, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium, write the data additionally and collectively to the storage medium, and when the data is updated, maintain storing a reference logical address associated with the data before updated and the data before updated on the storage medium.1. A storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, comprising: a memory; and a processor coupled to the memory and configured to: store, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium, write the data additionally and collectively to the storage medium, and when the data is updated, maintain storing a reference logical address associated with the data before updated and the data before updated on the storage medium. 2. The storage control apparatus according to claim 1, wherein when a physical address associated with the reference logical address by the address conversion information does not match a physical address of the data, the processor determines that the data is invalid data and a target of garbage collection. 3. The storage control apparatus according to claim 2, wherein the processor records a physical address indicating a position where the data is appended and bulk-written to the storage medium in the address conversion information in association with the logical address as a meta-address, appends and bulk-writes the address conversion information to the storage medium, and when a physical address indicating a position where the address conversion information is appended and bulk-written to the storage medium does not match the meta-address associated with the logical address included in the address conversion information, the processor determines that the address conversion information is invalid and a target of garbage collection. 4. The storage control apparatus according to claim 3, wherein when a meta-address associated with the logical address included in the address conversion information does not exist, the processor determines that the address conversion information is invalid and a target of garbage collection. 5. The storage control apparatus according to claim 2, wherein the processor determines whether or not the data is data managed by the storage control apparatus itself, when the data is not data managed by the storage control apparatus itself, the processor requests a storage control apparatus in charge of the data to determine whether or not the data is valid, and when a response indicating that the data is not valid is received, the processor determines that the data is a target of garbage collection. 6. 
A storage control method for a storage control apparatus including a memory and a processor coupled to the memory, the storage control apparatus configured to control a storage device including a storage medium having a limited number of writes, comprising: storing, in the memory, address conversion information associating logical addresses used for data identification by an information processing apparatus accessing to the storage device, and physical addresses indicating positions where the data is stored on the storage medium; writing the data additionally and collectively to the storage medium; and when the data is updated, maintaining storing a reference logical address associated with the data before updated and the data before updated on the storage medium.
2,100
274,021
15,952,637
2,131
A backup control method includes receiving a plurality of pieces of data transmitted from a plurality of data storage devices, classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources, generating first compressed data by compressing one or more pieces of data classified into a first data group, and transmitting the first compressed data to a backup device storing backups.
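The abstract above describes grouping received data by its source storage device, compressing each group, and sending the compressed result to a backup device. A short Python sketch using only the standard library (zlib and pickle); the in-memory dict standing in for the backup device and the threshold rule for choosing which group to compress first are illustrative assumptions loosely following claims 4 and 5.

```python
import pickle
import zlib

backup_device = {}   # group id -> compressed bytes (stands in for the backup store)

def classify(pieces):
    """Group (source_id, payload) pairs by the data storage device they came from."""
    groups = {}
    for source, payload in pieces:
        groups.setdefault(source, []).append(payload)
    return groups

def compress_and_send(groups, min_pieces=2):
    """Compress the largest group whose size reaches the threshold, then send it."""
    eligible = {s: g for s, g in groups.items() if len(g) >= min_pieces}
    if not eligible:
        return None
    source = max(eligible, key=lambda s: len(eligible[s]))
    backup_device[source] = zlib.compress(pickle.dumps(groups.pop(source)))
    return source

def restore(source):
    """Fetch, decompress, and return the pieces for one source device."""
    return pickle.loads(zlib.decompress(backup_device[source]))

received = [("dev-A", b"block1"), ("dev-B", b"block2"),
            ("dev-A", b"block3"), ("dev-A", b"block4")]
groups = classify(received)
sent = compress_and_send(groups)
print(sent, restore(sent))   # dev-A [b'block1', b'block3', b'block4']
```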
1. A backup control method executed by a computer, the method comprising: receiving a plurality of pieces of data transmitted from a plurality of data storage devices; classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources; generating first compressed data by compressing one or more pieces of data classified into a first data group; and transmitting the first compressed data to a backup device storing backups. 2. The backup control method according to claim 1, further comprising: receiving a restoration request from a first data storage device relating to the first data group; obtaining, from the backup device, the first compressed data associated with the first data group from among the respective data groups; generating the one or more pieces of data by decompressing the first compressed data; and transmitting the one or more pieces of data to the first data storage device. 3. The backup control method according to claim 1, further comprising: after the transmitting the first compressed data, generating a second compressed data by compressing one or more pieces of data classified into a second data group, and transmitting the second compressed data to the backup device. 4. The backup control method according to claim 1, further comprising: among the plurality of data groups generated by the classifying, when presence of a group having a number of pieces of data no less than a threshold value is detected, determining a data group having the largest number of pieces of data among the plurality of data groups to be a target of compression processing. 5. The backup control method according to claim 1, further comprising: among the plurality of data groups generated by the classifying, when presence of a group having an amount of data no less than a threshold value is detected, determining a data group having the largest amount of data among the plurality of data groups to be a target of compression processing. 6. The backup control method according to claim 1, further comprising: when a restoration request is received from a first data storage device, obtaining compressed data related to the restoration request from the backup device, generating another one or more pieces of data by decompressing the obtained compressed data; and transmitting the other one or more pieces of data to the first data storage device; and when a predetermined time period has passed from the transmitting of the other one or more pieces of data, deleting the other one or more pieces of data stored in the computer. 7. A backup control device comprising: a memory; and a processor coupled to the memory and the processor configured to: receive a plurality of pieces of data transmitted from a plurality of data storage devices, perform classification of the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources, generate first compressed data by compressing one or more pieces of data classified into a first data group, and perform transmission of the first compressed data to a backup device storing backups. 8. 
The backup control device according to claim 7, the processor further configured to: receive a restoration request from a first data storage device relating to the first data group, obtain, from the backup device, the first compressed data associated with the first data group from among the respective data groups, generate the one or more pieces of data by decompressing the first compressed data, and transmit the one or more pieces of data to the first data storage device. 9. The backup control device according to claim 7, the processor further configured to: after the transmission of the first compressed data, generate a second compressed data by compressing one or more pieces of data classified into a second data group, and transmit the second compressed data to the backup device. 10. The backup control device according to claim 7, the processor further configured to: among the plurality of data groups generated by the classification, when presence of a group having a number of pieces of data no less than a threshold value is detected, determine a data group having the largest number of pieces of data among the plurality of data groups to be a target of compression processing. 11. The backup control device according to claim 7, the processor further configured to: among the plurality of data groups generated by the classification, when presence of a group having an amount of data no less than a threshold value is detected, determine a data group having the largest amount of data among the plurality of data groups to be a target of compression processing. 12. The backup control device according to claim 7, the processor further configured to: when a restoration request is received from a first data storage device, obtain compressed data related to the restoration request from the backup device, generate another one or more pieces of data by decompressing the obtained compressed data; and transmit the other one or more pieces of data to the first data storage device; and when a predetermined time period has passed from the transmitting of the other one or more pieces of data, delete the other one or more pieces of data stored in the computer. 13. A non-transitory computer-readable medium storing a backup control program that causes a computer to execute a process comprising: receiving a plurality of pieces of data transmitted from a plurality of data storage devices; classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources; generating first compressed data by compressing one or more pieces of data classified into a first data group; and transmitting the first compressed data to a backup device storing backups.
A backup control method includes receiving a plurality of pieces of data transmitted from a plurality of data storage devices, classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources, generating first compressed data by compressing one or more pieces of data classified into a first data group, and transmitting the first compressed data to a backup device storing backups.1. A backup control method executed by a computer, the method comprising: receiving a plurality of pieces of data transmitted from a plurality of data storage devices; classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources; generating first compressed data by compressing one or more pieces of data classified into a first data group; and transmitting the first compressed data to a backup device storing backups. 2. The backup control method according to claim 1, further comprising: receiving a restoration request from a first data storage device relating to the first data group; obtaining, from the backup device, the first compressed data associated with the first data group from among the respective data groups; generating the one or more pieces of data by decompressing the first compressed data; and transmitting the one or more pieces of data to the first data storage device. 3. The backup control method according to claim 1, further comprising: after the transmitting the first compressed data, generating a second compressed data by compressing one or more pieces of data classified into a second data group, and transmitting the second compressed data to the backup device. 4. The backup control method according to claim 1, further comprising: among the plurality of data groups generated by the classifying, when presence of a group having a number of pieces of data no less than a threshold value is detected, determining a data group having the largest number of pieces of data among the plurality of data groups to be a target of compression processing. 5. The backup control method according to claim 1, further comprising: among the plurality of data groups generated by the classifying, when presence of a group having an amount of data no less than a threshold value is detected, determining a data group having the largest amount of data among the plurality of data groups to be a target of compression processing. 6. The backup control method according to claim 1, further comprising: when a restoration request is received from a first data storage device, obtaining compressed data related to the restoration request from the backup device, generating another one or more pieces of data by decompressing the obtained compressed data; and transmitting the other one or more pieces of data to the first data storage device; and when a predetermined time period has passed from the transmitting of the other one or more pieces of data, deleting the other one or more pieces of data stored in the computer. 7. 
A backup control device comprising: a memory; and a processor coupled to the memory and the processor configured to: receive a plurality of pieces of data transmitted from a plurality of data storage devices, perform classification of the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources, generate first compressed data by compressing one or more pieces of data classified into a first data group, and perform transmission of the first compressed data to a backup device storing backups. 8. The backup control device according to claim 7, the processor further configured to: receive a restoration request from a first data storage device relating to the first data group, obtain, from the backup device, the first compressed data associated with the first data group from among the respective data groups, generate the one or more pieces of data by decompressing the first compressed data, and transmit the one or more pieces of data to the first data storage device. 9. The backup control device according to claim 7, the processor further configured to: after the transmission of the first compressed data, generate a second compressed data by compressing one or more pieces of data classified into a second data group, and transmit the second compressed data to the backup device. 10. The backup control device according to claim 7, the processor further configured to: among the plurality of data groups generated by the classification, when presence of a group having a number of pieces of data no less than a threshold value is detected, determine a data group having the largest number of pieces of data among the plurality of data groups to be a target of compression processing. 11. The backup control device according to claim 7, the processor further configured to: among the plurality of data groups generated by the classification, when presence of a group having an amount of data no less than a threshold value is detected, determine a data group having the largest amount of data among the plurality of data groups to be a target of compression processing. 12. The backup control device according to claim 7, the processor further configured to: when a restoration request is received from a first data storage device, obtain compressed data related to the restoration request from the backup device, generate another one or more pieces of data by decompressing the obtained compressed data; and transmit the other one or more pieces of data to the first data storage device; and when a predetermined time period has passed from the transmitting of the other one or more pieces of data, delete the other one or more pieces of data stored in the computer. 13. A non-transitory computer-readable medium storing a backup control program that causes a computer to execute a process comprising: receiving a plurality of pieces of data transmitted from a plurality of data storage devices; classifying the plurality of pieces of data into respective data groups in accordance with the plurality of data storage devices of transmission sources; generating first compressed data by compressing one or more pieces of data classified into a first data group; and transmitting the first compressed data to a backup device storing backups.
2,100
274,022
15,951,896
2,131
Apparatuses and methods related to command selection policy for electronic memory or storage are described. Commands to a memory controller may be prioritized based on a type of command, a timing of when one command was received relative to another command, a timing of when one command is ready to be issued to a memory device, or some combination of such factors. For instance, a memory controller may employ a first-ready, first-come, first-served (FRFCFS) policy in which certain types of commands (e.g., read commands) are prioritized over other types of commands (e.g., write commands). The policy may employ exceptions to such an FRFCFS policy based on dependencies or relationships among or between commands. An example can include inserting a command into a priority queue based on a category corresponding to respective commands, and iterating through a plurality of priority queues in order of priority to select a command to issue.
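The abstract above sketches a first-ready, first-come, first-served policy that prioritizes reads over writes but makes an exception for writes that a pending read depends on (read-after-write). A compact Python model of one selection step; the command representation, the "ready" flag, and the elevation rule used here (a write to the same bank and row as a queued read is treated as elevated and selected ahead of reads) are simplifying assumptions, not the claimed controller logic verbatim.

```python
from dataclasses import dataclass, field
from itertools import count

_seq = count()

@dataclass
class Command:
    kind: str                      # "read" or "write"
    bank: int
    row: int
    ready: bool = True             # bank/row timing constraints satisfied
    order: int = field(default_factory=lambda: next(_seq))  # arrival order

def is_elevated_write(cmd, queue):
    """A write is elevated when a queued read targets the same bank and row,
    i.e. the read has a read-after-write dependence on it."""
    return cmd.kind == "write" and any(
        c.kind == "read" and c.bank == cmd.bank and c.row == cmd.row for c in queue)

def select(queue):
    """FRFCFS with a read-priority exception for RAW-dependent (elevated) writes:
    elevated writes, then reads, then other writes; first-ready, then first-come."""
    def priority(cmd):
        if is_elevated_write(cmd, queue):
            rank = 0
        elif cmd.kind == "read":
            rank = 1
        else:
            rank = 2
        return (not cmd.ready, rank, cmd.order)
    return min(queue, key=priority)

queue = [Command("write", bank=0, row=5),   # elevated: the read below depends on it
         Command("write", bank=1, row=9),
         Command("read",  bank=0, row=5)]
chosen = select(queue)
print(chosen.kind, chosen.bank)             # write 0
```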
1. A method for command selection, comprising: receiving a read command to a memory controller, wherein the read command comprises an address for a bank and a channel of a memory device; inserting the read command into a queue of the memory controller; blocking a first number of write commands to the bank; issuing, to the memory device, an activation command associated with the read command; blocking a second number of write commands to the channel; and issuing the read command to the memory device. 2. The method of claim 1, further comprising blocking the second number of write commands after a predetermined duration of time. 3. The method of claim 2, wherein the predetermined duration of time comprises a difference between a row-to-column delay and a write-to-read delay. 4. The method of claim 2, wherein the predetermined duration of time is selected based on a read-after-write dependence associated with the read command. 5. The method of claim 4, further comprising selecting the predetermined duration of time to include a difference between a row-to-column delay and a column-to-column delay based on a determination that the read-after-write dependence is associated with the read command. 6. The method of claim 1, further comprising issuing a row pre-charge command responsive to blocking the first number of write commands to the bank. 7. The method of claim 6, further comprising responsive to determining that a prior write command to a row does not block the read command, issuing the row pre-charge command. 8. The method of claim 6, wherein issuing the activation command further comprises issuing the activation command after issuing the row pre-charge command. 9. The method of claim 8, wherein the activation command is a row activation command. 10. A controller, comprising: a queue; and logic configured to: insert a read command, for a bank and a channel of a memory device, into the queue; block a first number of write commands to the bank; issue an activation command associated with the read command; block a second number of write commands to the channel; and select the read command for issuance based on a first-ready, first-come, first-served (FRFCFS) policy in which the read command is prioritized over the first number of write commands and the second number of write commands. 11. The controller of claim 10, wherein the logic is further configured to select a write command for issuance to the memory device only if the write command is associated with the read command having a read-after-write dependence. 12. The controller of claim 10, wherein the logic is further configured to select commands from the first number of write commands and the second number of write commands for issuance before the first number of write commands are blocked. 13. The controller of claim 10, wherein the logic is further configured to select commands from the second number of write commands for issuance before the second number of write commands are blocked. 14. The controller of claim 10, wherein the logic is further configured to select commands from a third number of write commands until the read command is selected for issuance. 15. The controller of claim 14, wherein the third number of write commands have a read-after write dependence to the read command and wherein the third number of write commands includes the write command. 16. 
The controller of claim 15, wherein the third number of write commands are assigned the read-after-write dependent to the read commands responsive to the third number of write commands accessing a same row as the read command. 17. The controller of claim 15, wherein the third number of write commands are assigned an elevated priority over the first number of write commands and the second number of write commands based on the read-after write dependence. 18. A method for command selection, comprising: receiving, at a memory controller, a read command for a bank and a channel of a memory device; generating metadata associated with the read command; determining whether there is a write command with a read-after-write dependence to the read command based on the metadata; inserting the read command into a prioritized queue; blocking a first number of write commands to the bank; issuing an activation command associated with the read command; blocking a second number of write commands to the channel; and selecting the write command and the read command for issuance based on a first-ready, first-come, first-served (FRFCFS) policy in response to determining that the write command has the read-after-write dependence to the read command. 19. The method of claim 18, further comprising responsive to determining that the read command does not have the read-after-write dependence, select the read command, from the prioritized queue, for issuance based on the FRFCFS policy. 20. The method of claim 18, wherein processing the read command further comprises determining whether the bank is one of a set of banks with outstanding read commands or elevated write commands. 21. The method of claim 20, further comprising: inserting a bank identifier (ID) to the set of banks responsive to determining that the bank is not a bank of the set of banks; marking the write command as having the read-after-write dependence responsive to determining that the write command and the read command access a same row; and wherein the metadata includes the set of banks and the same row. 22. An apparatus, comprising: a memory device; and a memory controller coupled to the memory device and configured to: iterate through a plurality of commands in a queue and at each iteration: responsive to determining that a command from the plurality of commands is not a read command, not an elevated write command, and that a first bank identifier (ID) associated with the command is in a list of banks with outstanding reads, execute a bank block and mark the command as not issuable; responsive to determining that the command is the read command and that the command has a read-after-write dependence, execute the bank block and mark the command as not issuable; responsive to determining that the command is the read command and that the command does not have the read-after-write dependence, execute a channel block and mark the plurality of commands, excepting the command, as not issuable; and responsive to determining that the command is an elevated write command, execute the channel block and mark the plurality of commands, excepting the command, as not issuable. 23. The apparatus of claim 22, wherein the memory controller is configured to iterate through the plurality of commands repeatedly over a predetermined interval of time. 24. 
The apparatus of claim 22, wherein the memory controller is further configured to, at each iteration: responsive to determining that the command is the read command, that the command does not have the read-after-write dependence, and that there exists a different read command with a different read-after-write dependence through a bank and to an open row, execute a pre-charge block of the bank, execute a bank block, and mark the command as not issuable; and responsive to determining that the command is the read command, that the command does not have the different read-after-write dependence, and that the different read command with the read-after-write dependence through the bank and to the open row does not exist, execute the channel block and mark the plurality of commands, excepting the command, as not issuable responsive to determining that a predetermined duration of time has expired since the row was activated. 25. The apparatus of claim 22, wherein the memory controller is further configured to, at each iteration: responsive to determining that the command is an elevated write command and that a predetermined duration of time has expired since the row was activated, execute the channel block and mark the plurality of commands, excepting the command, as not issuable. 26. The apparatus of claim 22, wherein the memory controller is further configured to select a highest priority command with a highest priority, from the queue, using a first-ready, first-come, first-served (FRFCFS) priority. 27. The apparatus of claim 26, wherein the memory controller is further configured to: insert a current time into a list of activation bank of times responsive to determining that the highest priority command has been activated; remove a bank activation time from the list of activation bank times responsive to determining that a bank associated with the highest priority command has been pre-charged; and remove a second bank ID from the list of banks, responsive to determining that the highest priority command is an elevated write command. 28. A method for command selection, comprising: receiving a read command to a memory controller, wherein the read command comprises an address of a partition and a channel of a memory device; inserting the read command into a queue of the memory controller; deprioritizing a first number of write commands to the partition; deprioritizing a second number of write commands to the channel; and selecting the read command to the memory device for issuance from the queue. 29. The method of claim 27, further comprising deprioritizing non-elevated write commands to the partition responsive to deprioritizing the second number of write commands to the channel. 30. The method of claim 28, wherein the queue comprises a plurality of registers of the memory controller.
Apparatuses and methods related to command selection policy for electronic memory or storage are described. Commands to a memory controller may be prioritized based on a type of command, a timing of when one command was received relative to another command, a timing of when one command is ready to be issued to a memory device, or some combination of such factors. For instance, a memory controller may employ a first-ready, first-come, first-served (FRFCFS) policy in which certain types of commands (e.g., read commands) are prioritized over other types of commands (e.g., write commands). The policy may employ exceptions to such an FRFCFS policy based on dependencies or relationships among or between commands. An example can include inserting a command into a priority queue based on a category corresponding to respective commands, and iterating through a plurality of priority queues in order of priority to select a command to issue.1. A method for command selection, comprising: receiving a read command to a memory controller, wherein the read command comprises an address for a bank and a channel of a memory device; inserting the read command into a queue of the memory controller; blocking a first number of write commands to the bank; issuing, to the memory device, an activation command associated with the read command; blocking a second number of write commands to the channel; and issuing the read command to the memory device. 2. The method of claim 1, further comprising blocking the second number of write commands after a predetermined duration of time. 3. The method of claim 2, wherein the predetermined duration of time comprises a difference between a row-to-column delay and a write-to-read delay. 4. The method of claim 2, wherein the predetermined duration of time is selected based on a read-after-write dependence associated with the read command. 5. The method of claim 4, further comprising selecting the predetermined duration of time to include a difference between a row-to-column delay and a column-to-column delay based on a determination that the read-after-write dependence is associated with the read command. 6. The method of claim 1, further comprising issuing a row pre-charge command responsive to blocking the first number of write commands to the bank. 7. The method of claim 6, further comprising responsive to determining that a prior write command to a row does not block the read command, issuing the row pre-charge command. 8. The method of claim 6, wherein issuing the activation command further comprises issuing the activation command after issuing the row pre-charge command. 9. The method of claim 8, wherein the activation command is a row activation command. 10. A controller, comprising: a queue; and logic configured to: insert a read command, for a bank and a channel of a memory device, into the queue; block a first number of write commands to the bank; issue an activation command associated with the read command; block a second number of write commands to the channel; and select the read command for issuance based on a first-ready, first-come, first-served (FRFCFS) policy in which the read command is prioritized over the first number of write commands and the second number of write commands. 11. The controller of claim 10, wherein the logic is further configured to select a write command for issuance to the memory device only if the write command is associated with the read command having a read-after-write dependence. 12. 
The controller of claim 10, wherein the logic is further configured to select commands from the first number of write commands and the second number of write commands for issuance before the first number of write commands are blocked. 13. The controller of claim 10, wherein the logic is further configured to select commands from the second number of write commands for issuance before the second number of write commands are blocked. 14. The controller of claim 10, wherein the logic is further configured to select commands from a third number of write commands until the read command is selected for issuance. 15. The controller of claim 14, wherein the third number of write commands have a read-after write dependence to the read command and wherein the third number of write commands includes the write command. 16. The controller of claim 15, wherein the third number of write commands are assigned the read-after-write dependent to the read commands responsive to the third number of write commands accessing a same row as the read command. 17. The controller of claim 15, wherein the third number of write commands are assigned an elevated priority over the first number of write commands and the second number of write commands based on the read-after write dependence. 18. A method for command selection, comprising: receiving, at a memory controller, a read command for a bank and a channel of a memory device; generating metadata associated with the read command; determining whether there is a write command with a read-after-write dependence to the read command based on the metadata; inserting the read command into a prioritized queue; blocking a first number of write commands to the bank; issuing an activation command associated with the read command; blocking a second number of write commands to the channel; and selecting the write command and the read command for issuance based on a first-ready, first-come, first-served (FRFCFS) policy in response to determining that the write command has the read-after-write dependence to the read command. 19. The method of claim 18, further comprising responsive to determining that the read command does not have the read-after-write dependence, select the read command, from the prioritized queue, for issuance based on the FRFCFS policy. 20. The method of claim 18, wherein processing the read command further comprises determining whether the bank is one of a set of banks with outstanding read commands or elevated write commands. 21. The method of claim 20, further comprising: inserting a bank identifier (ID) to the set of banks responsive to determining that the bank is not a bank of the set of banks; marking the write command as having the read-after-write dependence responsive to determining that the write command and the read command access a same row; and wherein the metadata includes the set of banks and the same row. 22. 
An apparatus, comprising: a memory device; and a memory controller coupled to the memory device and configured to: iterate through a plurality of commands in a queue and at each iteration: responsive to determining that a command from the plurality of commands is not a read command, not an elevated write command, and that a first bank identifier (ID) associated with the command is in a list of banks with outstanding reads, execute a bank block and mark the command as not issuable; responsive to determining that the command is the read command and that the command has a read-after-write dependence, execute the bank block and mark the command as not issuable; responsive to determining that the command is the read command and that the command does not have the read-after-write dependence, execute a channel block and mark the plurality of commands, excepting the command, as not issuable; and responsive to determining that the command is an elevated write command, execute the channel block and mark the plurality of commands, excepting the command, as not issuable. 23. The apparatus of claim 22, wherein the memory controller is configured to iterate through the plurality of commands repeatedly over a predetermined interval of time. 24. The apparatus of claim 22, wherein the memory controller is further configured to, at each iteration: responsive to determining that the command is the read command, that the command does not have the read-after-write dependence, and that there exists a different read command with a different read-after-write dependence through a bank and to an open row, execute a pre-charge block of the bank, execute a bank block, and mark the command as not issuable; and responsive to determining that the command is the read command, that the command does not have the different read-after-write dependence, and that the different read command with the read-after-write dependence through the bank and to the open row does not exist, execute the channel block and mark the plurality of commands, excepting the command, as not issuable responsive to determining that a predetermined duration of time has expired since the row was activated. 25. The apparatus of claim 22, wherein the memory controller is further configured to, at each iteration: responsive to determining that the command is an elevated write command and that a predetermined duration of time has expired since the row was activated, execute the channel block and mark the plurality of commands, excepting the command, as not issuable. 26. The apparatus of claim 22, wherein the memory controller is further configured to select a highest priority command with a highest priority, from the queue, using a first-ready, first-come, first-served (FRFCFS) priority. 27. The apparatus of claim 26, wherein the memory controller is further configured to: insert a current time into a list of activation bank of times responsive to determining that the highest priority command has been activated; remove a bank activation time from the list of activation bank times responsive to determining that a bank associated with the highest priority command has been pre-charged; and remove a second bank ID from the list of banks, responsive to determining that the highest priority command is an elevated write command. 28. 
A method for command selection, comprising: receiving a read command to a memory controller, wherein the read command comprises an address of a partition and a channel of a memory device; inserting the read command into a queue of the memory controller; deprioritizing a first number of write commands to the partition; deprioritizing a second number of write commands to the channel; and selecting the read command to the memory device for issuance from the queue. 29. The method of claim 28, further comprising deprioritizing non-elevated write commands to the partition responsive to deprioritizing the second number of write commands to the channel. 30. The method of claim 28, wherein the queue comprises a plurality of registers of the memory controller.
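For readers skimming these claims, the following is a minimal, hypothetical Python sketch of the general idea rather than the claimed implementation: an incoming read elevates a queued write to the same row because of the read-after-write dependence, deprioritizes other writes to the same bank and channel, and a simple oldest-first pick among unblocked commands stands in for the FRFCFS policy. All class and function names are invented.

```python
# Hypothetical sketch only. An incoming read elevates queued writes to the same
# row (read-after-write dependence), deprioritizes other writes to its bank and
# channel, and selection prefers elevated writes, then reads, oldest first
# (a crude stand-in for an FRFCFS scheduler).
from dataclasses import dataclass, field
from itertools import count

_seq = count()

@dataclass
class Cmd:
    kind: str               # "read" or "write"
    bank: int
    channel: int
    row: int
    blocked: bool = False
    elevated: bool = False
    arrival: int = field(default_factory=lambda: next(_seq))

def admit_read(read: Cmd, queue: list) -> None:
    """Apply RAW elevation plus bank/channel deprioritization for one read."""
    for cmd in queue:
        if cmd.kind != "write":
            continue
        if cmd.bank == read.bank and cmd.row == read.row:
            cmd.elevated = True       # must issue before the dependent read
        elif cmd.bank == read.bank or cmd.channel == read.channel:
            cmd.blocked = True        # deprioritized until the read is serviced
    queue.append(read)

def select_next(queue: list) -> Cmd:
    """Pick the next command; assumes at least one unblocked command exists."""
    issuable = [c for c in queue if not c.blocked]
    issuable.sort(key=lambda c: (not c.elevated, c.kind != "read", c.arrival))
    chosen = issuable[0]
    queue.remove(chosen)
    return chosen
```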
2,100
274,023
15,582,335
2,131
A backup system comprises a tape backup storage storing a set of tape backup data, a snapshot backup storage storing a nearest snapshot, and a processor. The processor is configured to determine the nearest snapshot, wherein a snapshot time of the nearest snapshot is nearest in time to a backup time, and determine the set of tape backup data, wherein the set of tape backup data and the nearest snapshot enable recovery of backup data.
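As a toy illustration of the abstract above, and not code from the described system, selecting the nearest snapshot can be sketched as a minimum over absolute time differences; the (time, id) tuple format is an assumption of this sketch.

```python
# Illustrative only: choose the snapshot whose time is nearest to the requested
# backup time. Snapshots are assumed to be (snapshot_time, snapshot_id) tuples.
def nearest_snapshot(snapshots, backup_time):
    return min(snapshots, key=lambda snap: abs(snap[0] - backup_time))

snaps = [(100, "snap-a"), (250, "snap-b"), (400, "snap-c")]
print(nearest_snapshot(snaps, 260))   # -> (250, 'snap-b')
```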
1. A backup system, comprising: a tape backup storage storing a set of tape backup data that includes a set of one or more incremental backups; a snapshot backup storage storing a nearest snapshot; and a processor configured to: determine that the nearest snapshot is after a backup time; determine, for an incremental backup that occurs before the nearest snapshot, one or more changed blocks and version information associated with the one or more changed blocks, wherein the one or more changed blocks are added to a set of changed blocks; and use the determined one or more changed blocks and the determined nearest snapshot to recover the backup data to the version of the changed block that occurs immediately before the backup time. 2. The backup system of claim 1, wherein the processor is configured to receive a request to recover backup data associated with a backup time. 3. The backup system of claim 1, wherein the snapshot backup storage stores online backups. 4. The backup system of claim 1, wherein the snapshot backup storage comprises a backup system with fast access. 5. The backup system of claim 1, wherein the snapshot backup storage comprises a disk based backup storage system. 6. The backup system of claim 1, wherein the snapshot backup storage comprises a random access memory based backup storage system. 7. The backup system of claim 1, wherein the snapshot backup storage comprises a deduplicated backup storage system. 8. The backup system of claim 1, further comprising an input interface configured to receive a request to recover the backup data associated with the backup time. 9. The backup system of claim 1, further comprising an output interface configured to provide the backup data. 10. The backup system of claim 1, wherein the processor is further configured to determine the backup data. 11. The backup system of claim 10, wherein the backup data is determined using the set of tape backup data and the nearest snapshot. 12. The backup system of claim 11, wherein a new snapshot corresponding to the backup time is determined. 13. The backup system of claim 1, wherein the backup data is determined using incremental backups previous to the nearest snapshot and determining changed blocks to recover the backup data. 14. The backup system of claim 1, wherein the processor is configured to determine that the nearest snapshot is before the backup time, in which event the backup data is determined using incremental backups subsequent to the nearest snapshot and determining changed blocks to recover the backup data. 15. The system of claim 1, wherein the backup system uses the tape backup storage to store the backup data more frequently than the backup system uses the snapshot backup storage to store the backup data. 16. A method for backup, comprising: determining, using a processor, that a nearest snapshot is after a backup time, wherein the nearest snapshot is stored on a snapshot backup storage; determining, for an incremental backup that occurs before the nearest snapshot, one or more changed blocks and version information associated with the one or more changed blocks, wherein the one or more changed blocks are added to a set of changed blocks; and using the determined one or more changed blocks and the determined nearest snapshot to recover the backup data to the version of the changed block that occurs immediately before the backup time. 17. 
The method of claim 16, further comprising receiving a request to recover backup data associated with a backup time. 18. The method of claim 16, wherein the snapshot backup storage stores online backups. 19. The method of claim 16, wherein the snapshot backup storage comprises a deduplicated backup storage system. 20. A computer program product for backup, the computer program product being embodied in a non-transitory computer readable storage medium and comprising computer instructions for: determining that a nearest snapshot is after a backup time, wherein the nearest snapshot is stored on a snapshot backup storage; determining, for an incremental backup that occurs before the nearest snapshot, one or more changed blocks and version information associated with the one or more changed blocks, wherein the one or more changed blocks are added to a set of changed blocks; and using the determined one or more changed blocks and the determined nearest snapshot to recover the backup data to the version of the changed block that occurs immediately before the backup time.
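The recovery path of claims 1 and 16 can be sketched roughly as follows, under the assumption that each incremental backup is a (time, {block_id: data}) mapping; blocks that changed between the backup time and the later nearest snapshot are rolled back to their newest version at or before the backup time. This is an interpretation for illustration only, with invented data shapes.

```python
# Rough interpretation of claims 1/16 with invented data shapes: incrementals is
# a list of (time, {block_id: data}); the nearest snapshot was taken after the
# backup time, so blocks changed in between are rolled back to their newest
# version at or before the backup time. Blocks with no earlier version on tape
# are left as they appear in the snapshot (a simplification of this sketch).
def recover(snapshot_blocks, incrementals, snapshot_time, backup_time):
    recovered = dict(snapshot_blocks)
    changed_after_backup = set()
    for t, blocks in incrementals:
        if backup_time < t <= snapshot_time:
            changed_after_backup.update(blocks)      # snapshot is "too new" here
    for t, blocks in sorted(incrementals, key=lambda inc: inc[0]):
        if t > backup_time:
            break
        for block in changed_after_backup:
            if block in blocks:
                recovered[block] = blocks[block]     # keep newest version <= backup_time
    return recovered
```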
2,100
274,024
15,497,547
2,131
Systems and methods for determining locality of an incoming command relative to previously identified write or read streams are disclosed. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into multiple submission queues. The memory device fetches the commands from the multiple submission queues, which results in the incoming commands being interspersed. In order to determine whether the incoming commands should be assigned to previously identified read or write streams, the locality of the incoming commands relative to the previously identified read or write streams is analyzed. One example of locality is proximity in address space. In response to determining locality, the incoming commands are assigned to the various streams.
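A toy rendering of "locality as proximity in address space" from the abstract above, with invented names and not the disclosed algorithm: an interspersed incoming command joins an existing stream when its address range overlaps the stream's range extended by a gap, and otherwise seeds a new stream.

```python
# Toy illustration of locality as address-space proximity (not the disclosed
# algorithm): an incoming command joins a stream if its range overlaps the
# stream's range extended by a gap; otherwise it seeds a new stream.
from dataclasses import dataclass

@dataclass
class Stream:
    low: int
    high: int        # address range covered by commands already in the stream

def assign(streams, addr, length, gap=64):
    for stream in streams:
        if addr < stream.high + gap and addr + length > stream.low - gap:
            stream.low = min(stream.low, addr)
            stream.high = max(stream.high, addr + length)
            return stream
    new = Stream(addr, addr + length)
    streams.append(new)
    return new
```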
1. A method comprising: accessing an identified read stream, the identified read stream comprising one or more read commands and an address range, the address range determined based on addresses in the one or more read commands received in a memory device; accessing an incoming command; reviewing an address of the incoming command, wherein the address of the incoming command is not contiguous with the address range of the identified read stream; analyzing proximity of the address of the incoming command with part or all of the address range of the identified read stream; determining, based on the analysis of proximity, whether the incoming command is associated with the identified read stream; and performing at least one access to non-volatile memory in the memory device based on the identified read stream. 2. The method of claim 1, wherein the incoming command comprises the address and a size of the incoming command; wherein an address range for the incoming command is defined by the address and the size of the incoming command; and wherein analyzing proximity of the incoming command with part or all of the address range in the identified read stream comprises analyzing proximity of the address range for the incoming command with part or all of the address range in the identified read stream. 3. The method of claim 2, wherein the proximity is based on an amount of data to read in at least one of the commands in the identified read stream. 4. The method of claim 3, wherein the identified read stream comprises a plurality of commands; wherein a last command comprises the command last associated with the identified read stream; and wherein the last command includes the amount of data to read. 5. The method of claim 2, wherein the proximity comprises a predetermined minimum proximity or a predetermined maximum proximity. 6. The method of claim 2, wherein the address range of the identified read stream is defined by a lower address and an upper address; further comprising determining an address gap; wherein analyzing proximity comprises: determining an extended address range of the identified address stream, the extended address range being determined by extending the lower address by the address gap and by extending the upper address by the address gap; and determining whether the address range of the incoming command at least partly overlaps with the extended address range of the identified address stream, wherein determining, based on the analysis of proximity, whether the incoming command is associated with the identified read stream comprises: in response to determining that the address range of the incoming command at least partly overlaps with the extended address range of the identified address stream, determining the incoming command is associated with the identified read stream; and in response to determining that the address range of the incoming command does not at least partly overlap with the extended address range of the identified address stream, determining the incoming command is not associated with the identified read stream. 7. The method of claim 6, wherein the address gap is determined based on a predetermined minimum address gap, a predetermined maximum address gap, and a length of a command most recently associated with the identified read stream. 8. 
The method of claim 1, wherein performing at least one access to non-volatile memory in the memory device based on the identified read stream comprises performing a read look ahead in the non-volatile memory based on the identified read stream. 9. The method of claim 8, further comprising determining a direction of the identified read stream; and wherein the read look ahead is performed using the direction of the identified read stream. 10. The method of claim 9, wherein the address range of the identified read stream is defined by a lower address and an upper address; wherein the direction of the identified read stream comprises a lower address direction and an upper address direction, the lower address direction including addresses less than the lower address, the upper address direction including addresses greater than the upper address; and wherein determining the direction of the identified read stream comprises determining whether additional commands not yet associated with the identified read stream include addresses in the lower address direction or addresses in the upper address direction. 11. A non-volatile memory device comprising: a non-volatile memory having a plurality of memory cells; a communication interface configured to communicate with a host device; and a controller in communication with the non-volatile memory and the communication interface, the controller configured to: identify a write command stream, the write command stream including an address range; receive a non-write command via the communication interface; determine whether the non-write command is associated with the write command stream; and in response to determining that the non-write command is associated with the write command stream, perform a speculative access of the non-volatile memory using part or all of the address range of the write command stream. 12. The non-volatile memory device of claim 11, wherein the non-write command comprises a read command. 13. The non-volatile memory device of claim 12, wherein the read command comprises a read address; and wherein the controller is configured to determine whether the read command is associated with the write command by determining whether the read address is within the address range of the write command stream. 14. The non-volatile memory device of claim 13, wherein the speculative access of the non-volatile memory comprises a read look ahead operation. 15. The non-volatile memory device of claim 14, wherein the controller is configured to perform the read look ahead operation in response to analyzing a single read command. 16. 
A non-volatile memory device comprising: a non-volatile memory having a plurality of memory cells; a communication interface configured to communicate with a host device; and a controller in communication with the non-volatile memory and the communication interface, the controller configured to: access an identified read stream, the identified read stream comprising an address range and one or more read commands from the host device, the address range defined by a lower address and an upper address for the one or more read commands received in a memory device; determine whether additional commands not yet associated with the identified read stream include addresses in a lower address direction or addresses in an upper address direction, the lower address direction including addresses less than the lower address, the upper address direction including addresses greater than the upper address; and performing a read look ahead of the non-volatile memory based, at least in part, on whether the identified read stream has addresses in the lower address direction or the upper address direction. 17. The non-volatile memory device of claim 16, wherein the identified read stream comprises a plurality of commands; wherein a first command comprises the command first associated with the identified read stream; wherein a last command comprises the command last associated with the identified read stream; and wherein determining whether additional commands not yet associated with the identified read stream include addresses in the lower address direction or addresses in the upper address direction comprises comparing an address associated with the last command with an address associated with the first command. 18. A method comprising: accessing an identified read stream, the identified read stream comprising one or more read commands and an address range, the address range determined based on addresses in the one or more read commands received in a memory device; accessing an incoming command; comparing an address of the incoming command with the address range of the identified read stream; determining, based on the comparison, whether the incoming command is associated with the identified read stream; and in response to determining that the incoming command is associated with the identified read stream, executing the incoming command by performing at least one operation on a volatile memory in the memory device without performing the at least one operation on a non-volatile memory in the memory device. 19. The method of claim 18, wherein the command comprises a write command; and wherein the at least one operation comprises saving data associated with the write command in the volatile memory, with saving of the data to the non-volatile memory deferred in an expectation that the data will be written again. 20. The method of claim 18, wherein the command comprises a read command; and wherein the at least one operation comprises reading data associated with the read command from the volatile memory without reading the data from the non-volatile memory. 21. 
A non-volatile memory device comprising: a non-volatile memory having a plurality of memory cells; a communication interface configured to communicate with a host device; and means for accessing an identified read stream, the identified read stream comprising one or more read commands and an address range, the address range determined based on addresses in the one or more read commands received in a memory device; means for accessing an incoming command received via the communication interface; means for reviewing an address of the incoming command, wherein the address of the incoming command is not contiguous with the address range of the identified read stream; means for analyzing proximity of the address of the incoming command with part or all of the address range of the identified read stream; means for determining, based on the analysis of proximity, whether the incoming command is associated with the identified read stream; and means for performing at least one access to the non-volatile memory based on the identified read stream. 22. The non-volatile memory device of claim 21, wherein the incoming command comprises the address and a size of the incoming command; wherein an address range for the incoming command is defined by the address and the size of the incoming command; and wherein the means for analyzing proximity of the incoming command with part or all of the address range in the identified read stream comprises means for analyzing proximity of the address range for the incoming command with part or all of the address range in the identified read stream. 23. The non-volatile memory device of claim 22, wherein the proximity is based on an amount of data to read in at least one of the commands in the identified read stream.
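Claims 9, 10, and 17 describe using the stream's direction for a read look ahead; a hypothetical sketch of that heuristic (the function names and the window sizing are assumptions of this sketch) compares the first and last command addresses and prefetches adjacent to the appropriate end of the stream's range.

```python
# Hypothetical sketch of the direction heuristic in claims 9, 10 and 17: compare
# the first and last command addresses to decide which side of the stream a read
# look ahead should prefetch; sizing the window by the last command length is an
# assumption of this sketch.
def stream_direction(first_addr, last_addr):
    return "upper" if last_addr >= first_addr else "lower"

def read_look_ahead_range(low, high, last_len, direction):
    if direction == "upper":
        return (high, high + last_len)      # prefetch just above the upper address
    return (low - last_len, low)            # prefetch just below the lower address
```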
2,100
274,025
15,497,258
2,131
A method is provided for operating a storage device including at least one nonvolatile storage and a storage controller configured to control the nonvolatile storage. A first type of request, original data, and first request information associated with the original data are received in the storage controller from an external host device. In response to the first type of request, a compression operation is performed in the storage controller to compress the original data and generate compressed data, and a write operation is performed in the storage controller to write the compressed data in a data storage area of the nonvolatile storage. The data storage area of the nonvolatile storage may store the first request information associated with the original data. The external host device may manage mapping information in the form of a mapping table associated with compression/decompression at the storage device.
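A hypothetical host-side sketch of the write-after-compression flow in the abstract: the host supplies the request information from its own mapping table and records the compressed size returned by the storage device. The device object, its submit method, and the mapping-table helpers are stand-ins invented for this sketch, not a real driver API; only the WAC_CMD name mirrors the text.

```python
# Hypothetical host-side flow; `device.submit` and the mapping-table helpers are
# invented stand-ins. The host picks a start LBA from its own mapping table,
# sends the original data with the request information, and records the
# compressed size the device reports back.
def write_after_compression(device, mapping_table, logical_addr, original_data):
    request_info = {
        "size": len(original_data),
        "start_lba": mapping_table.allocate(logical_addr, len(original_data)),
    }
    status, compressed_size = device.submit("WAC_CMD", original_data, request_info)
    if status == "OK":
        mapping_table.record(logical_addr,
                             start_lba=request_info["start_lba"],
                             compressed_size=compressed_size)
    return status
```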
1. A method of operating a storage device, comprising: receiving from an external host device, a first type of request, original data and a first request information associated with the original data based on a mapping information managed by the external host device; performing, by a storage controller, a compression operation on the original data to generate compressed data, in response to receiving the first type of request; and writing the compressed data in a data storage area of a nonvolatile storage controlled by the storage controller; wherein the first request information associated with the original data is stored in the data storage area of the nonvolatile storage. 2. The method of claim 1, further comprising: transmitting, by the storage controller, a return information associated with the writing of the compressed data to the external host device after the writing of the compressed data in the nonvolatile storage is completed. 3. The method of claim 2, wherein the first request information includes a size of the original data, and wherein the return information includes a size information of the compressed data in the data storage area and a status signal indicating a status of the writing of the compressed data. 4. The method of claim 3, wherein the first request information further includes a starting logical block address of a first stripe of the nonvolatile storage in which the compressed data is stored based on the mapping information in the external host device, and wherein the return information further includes the compressed data. 5. The method of claim 3, wherein the first request information further includes a first stripe identifier that identifies a first stripe of a plurality of stripes of the nonvolatile storage, and an engine identifier that identifies a selected compression engine of a plurality of compression engines, wherein the selected compression engine performs the compression operation on the original data to generate the compressed data, and wherein the return information further includes a starting logical block address of a first stripe of the nonvolatile storage in which the compressed data is stored. 6. The method of claim 5, wherein in response to the storage controller determining that available sectors of the first stripe of the nonvolatile storage are insufficient to store all of the compressed data, the storage controller stores the compressed data in a second stripe of the nonvolatile storage different from the first stripe of the nonvolatile storage, and the storage controller transmits to the external host device, a second stripe identifier of the second stripe and information of the available sectors of the first stripe as the return information. 7. The method of claim 1, wherein the storage controller includes at least one compression/decompression engine, and the compression operation on the original data to generate the compressed data is performed by at least one compression engine of the at least one compression/decompression engine. 8. The method of claim 1, wherein the storage controller includes a compression/decompression engine including a plurality of compression engines, and the compression operation on the original data to generate the compressed data is performed by a compression engine of the plurality of compression engines, and wherein the compression engine is selected according to a compression engine identifier that identifies one of the plurality of compression engines. 9. 
The method of claim 1, wherein the storage controller includes at least one compression/decompression engine including a plurality of compression engines, the compression operation on the original data to generate the compressed data is performed in each of a plurality of compression engines in the at least one compression/decompression engine, and the storage controller transmits the compressed data output from one compression engine of the plurality of compression engines to the nonvolatile storage, wherein the one compression engine is selected based on a compression ratio of each of the plurality of compression engines. 10. The method of claim 1, wherein after the writing of the compressed data is completed, the method further comprises: receiving, by the storage controller, a second type of request, and a second request information associated with the compressed data from the external host device; reading, by the storage controller, the compressed data from the data storage area in response to the second type of request; decompressing, by the storage controller, the compressed data to recover the original data; and transmitting, by the storage controller, the recovered original data to the external host device. 11. The method of claim 1, further comprising: monitoring, by the external host device, an amount of a data stream transmitted between the external host device and the storage device; determining, by the external host device, whether the monitored amount of the data stream is greater than a threshold value; and adjusting, by the external host device, a number of issuances of the first type of request adaptively based on the determining whether the monitored amount of the data stream is greater than a threshold value. 12. The method of claim 11, wherein the number of issuances of the first type of request is decreased in response to determining the monitored amount of the data stream is greater than the threshold value. 13. The method of claim 11, wherein the number of issuances of the first type of request is increased in response to determining the monitored amount of the data stream is less than the threshold value. 14. The method of claim 10, wherein the first type of request corresponds to a write after compression command (WAC_CMD) to direct that the original data is compressed into the compressed data and then the compressed data is written, and wherein the second type of request corresponds to a decompression after read command (DAC_CMD) to direct that the compressed data is read and then the read compressed data is decompressed. 15. A data storage system comprising: a storage device including a processor and at least one nonvolatile storage, wherein the storage device is configured to: perform a compression operation on original data to generate compressed data, in response to a first type of request and a starting logical block address, perform a write operation to write the compressed data in a data storage area of the nonvolatile storage, wherein the data storage area of the nonvolatile storage corresponds to the starting logical block address; and a host interface configured to interface the storage device to a host device. 16. The data storage system of claim 15, wherein the host device is configured to control the storage device, and the host device is configured to send the first type of request, the original data and the starting logical block address to the storage device via the host interface. 17. 
The data storage system of claim 16, wherein the host device includes a mapping table configured to map a logical address of the data storage area to a page offset, a sector offset and a sector number, and wherein the page offset indicates a number of a physical page of the data storage area associated with the logical address, the sector offset indicates a number of a first sector of the physical page in which the compressed data is initially stored, and the sector number indicates a number of at least one sector of the physical page in which the compressed data is stored. 18. The data storage system of claim 17, wherein the storage device includes a bus, and further comprises a storage controller configured to generate the compressed data and transmit the compressed data to the nonvolatile storage in response to the first type of request, wherein the storage controller comprises: a processor, coupled to the bus, configured to control an overall operation of the storage controller; a compression/decompression engine, coupled to the bus, configured to receive the original data, and configured to perform a compression operation on the original data to generate the compressed data; and a nonvolatile interface, coupled to the processor and the compression/decompression engine via the bus, configured to provide the compressed data to the nonvolatile storage. 19. A data storage device comprising: a storage controller including a processor and at least one nonvolatile storage; a buffer connected to the storage controller; and a host interface configured to interface with a host device, wherein the storage controller is configured to perform compression and decompression operations, and respectively write at least one of compressed data or decompressed data in the at least one nonvolatile storage according to mapping information provided via the host interface. 20. The data storage device of claim 19, wherein the mapping information is received from the host device via the host interface.
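The host-side mapping table of claim 17 can be pictured as a simple dictionary keyed by logical address; the field names and values below are illustrative only.

```python
# Illustrative mapping-table shape per claim 17 (field names invented): logical
# address -> physical page, first sector holding the compressed data, and the
# number of sectors it occupies.
mapping_table = {
    0x1000: {"page_offset": 7, "sector_offset": 2, "sector_count": 3},
    0x2000: {"page_offset": 9, "sector_offset": 0, "sector_count": 1},
}

def locate(logical_addr):
    entry = mapping_table[logical_addr]
    return entry["page_offset"], entry["sector_offset"], entry["sector_count"]
```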
2,100
274,026
15,497,162
2,131
Improvements to traditional schemes for storing data for processing tasks and for executing those processing tasks are disclosed. A set of data for which processing tasks are to be executed is processed through a hierarchy to distribute the data through various elements of a computer system. Levels of the hierarchy represent different types of memory or storage elements. Higher levels represent coarser portions of memory or storage elements and lower levels represent finer portions of memory or storage elements. Data proceeds through the hierarchy as “tasks” at different levels. Tasks at non-leaf nodes comprise tasks to subdivide data for storage in the finer granularity memories or storage units associated with a lower hierarchy level. Tasks at leaf nodes comprise processing work, such as a portion of a calculation. Two techniques for organizing the tasks in the hierarchy presented herein include a queue-based technique and a graph-based technique.
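As a rough, invented sketch of the hierarchy described above, and not the disclosed scheme: a non-leaf node divides its data into sub-tasks for its children (the finer memories or storage units), while a leaf node runs the payload processing on its slice. The Node class and process function are hypothetical.

```python
# Invented sketch of the hierarchy: non-leaf nodes split their data among child
# nodes (finer memories/storage), leaf nodes run the payload work on their slice.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)

    @property
    def is_leaf(self):
        return not self.children

def process(node, data, payload_fn):
    if node.is_leaf:
        return [payload_fn(data)]            # leaf task: a portion of the calculation
    results = []
    n = len(node.children)
    chunk = -(-len(data) // n)               # ceil division so no element is dropped
    for i, child in enumerate(node.children):
        results.extend(process(child, data[i * chunk:(i + 1) * chunk], payload_fn))
    return results

# e.g. two leaves under a root, summing halves of a list
root = Node(children=[Node(), Node()])
print(process(root, list(range(10)), sum))   # -> [10, 35]
```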
1. A method for distributing processing data according to a memory hierarchy and performing payload processing on the processing data, the method comprising: detecting that a first task, associated with first data, is available for processing at a first node at a first hierarchy level of the memory hierarchy; determining that the first node is a non-leaf node; determining that sufficient capacity exists for processing and storage of data associated with the first task at a second node that comprises a leaf node in a second hierarchy level of the memory hierarchy, the second hierarchy level being lower than the first hierarchy level; responsive to determining that the first node is a non-leaf node and that sufficient capacity exists for the data associated with the first task, processing the first data by dividing the first data to generate a first plurality of sub-tasks and storing the first plurality of sub-tasks in a second memory or storage unit associated with the second node; and processing the data at a processing unit associated with the leaf node, wherein the first hierarchy level and the second hierarchy level comprise hierarchy levels of one of a queue-based hierarchy or a directed acyclic graph-based hierarchy, and wherein tasks at leaf nodes of the memory hierarchy comprise portions of the payload processing that are performed by processing units associated with the leaf nodes, and tasks at non-leaf nodes of the memory hierarchy comprise tasks for dividing and transmitting the processing data to nodes at lower levels of the memory hierarchy. 2. The method of claim 1, wherein: the first node and the second node comprise nodes of a queue-based hierarchy, the first node including a first queue storing a first “ready” queue entry for the first task. 3. The method of claim 2, wherein processing the first data comprises: converting the first “ready” queue entry to a first “wait” queue entry that indicates that the first task is waiting for the first plurality of sub-tasks to complete, generating a first plurality of “ready” queue entries for the first plurality of sub-tasks, and storing the first plurality of “ready” queue entries in a second queue associated with the second node. 4. The method of claim 2, further comprising performing a load balancing operation by: transferring one or more tasks from the first node to a third node that is a sister of the first node. 5. The method of claim 1, wherein: the first task and the plurality of sub-tasks comprise vertices of a directed acyclic graph-based hierarchy, the first task including directed edges pointing to the sub-tasks of the plurality of sub-tasks. 6. The method of claim 5, wherein processing the first data comprises: generating the plurality of sub-tasks and generating the directed edges pointing to the sub-tasks of the plurality of sub-tasks. 7. The method of claim 5, further comprising performing a load balancing operation by: determining that a number of tasks assigned to the first node is greater than a number of tasks assigned to a third node that is a sister of the first node; and in response, transferring one or more tasks from the first node to the third node. 8. The method of claim 1, further comprising: responsive to determining that all sub-tasks of the first plurality of sub-tasks are complete, determining that the first task is complete. 9. The method of claim 1, wherein: the sub-tasks of the first plurality of sub-tasks comprise payload processing tasks and not data splitting tasks. 10. 
A computer system comprising: a processor; a set of one or more memories; a set of one or more storage units; and a set of one or more processing units, wherein the processor is configured to execute a hierarchy controller to distribute processing data according to a memory hierarchy and cause payload processing to occur on that processing data, by: detecting that a first task, associated with first data, is available for processing at a first node at a first hierarchy level of the memory hierarchy; determining that the first node is a non-leaf node; determining that sufficient capacity exists for processing and storage of data associated with the first task at a second node in a second hierarchy level of the memory hierarchy, the second hierarchy level being lower than the first hierarchy level; responsive to determining that the first node is a non-leaf node and that sufficient capacity exists for the data associated with the first task, processing the first data by dividing the first data to generate a first plurality of sub-tasks and storing the first plurality of sub-tasks in a second memory of the set of memories or storage unit of the set of storage units associated with the second node; and processing the data at a processing unit, of the set of processing units, associated with the leaf node; wherein the first hierarchy level and the second hierarchy level comprise hierarchy levels of one of a queue-based hierarchy or a directed acyclic graph-based hierarchy, and wherein tasks at leaf nodes of the memory hierarchy comprise portions of the payload processing that are performed by processing units, of the set of processing units, and tasks at non-leaf nodes of the memory hierarchy comprise tasks for dividing and transmitting the processing data to nodes at lower levels of the memory hierarchy. 11. The computer system of claim 10, wherein: the first node and the second node comprise nodes of a queue-based hierarchy, the first node including a first queue storing a first “ready” queue entry for the first task. 12. The computer system of claim 11, wherein the processor is configured to process the first data by: converting the first “ready” queue entry to a first “wait” queue entry that indicates that the first task is waiting for the first plurality of sub-tasks to complete, generating a first plurality of “ready” queue entries for the first plurality of sub-tasks, and storing the first plurality of “ready” queue entries in a second queue associated with the second node. 13. The computer system of claim 11, wherein the processor is further configured to perform a load balancing operation by: transfer one or more tasks from the first node to a third node that is a sister of the first node. 14. The computer system of claim 10, wherein: the first task and the plurality of sub-tasks comprise vertices of a directed acyclic graph-based hierarchy, the first task including directed edges pointing to the sub-tasks of the plurality of sub-tasks. 15. The computer system of claim 14, wherein the processor is configured to process the first data by: generating the plurality of sub-tasks and generating the directed edges pointing to the sub-tasks of the plurality of sub-tasks. 16. 
The computer system of claim 14, wherein the processor is further configured to perform a load balancing operation by: determining that a number of tasks assigned to the first node is greater than a number of tasks assigned to a third node that is a sister of the first node; and in response, transferring one or more tasks from the first node to the third node. 17. The computer system of claim 10, wherein the processor is further configured to: responsive to determining that all sub-tasks of the first plurality of sub-tasks are complete, determine that the first task is complete. 18. The computer system of claim 10, wherein: the sub-tasks of the first plurality of sub-tasks comprise payload processing tasks and not data splitting tasks. 19. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to distribute processing data according to a memory hierarchy and perform payload processing on the processing data by: detecting that a first task, associated with first data, is available for processing at a first node at a first hierarchy level of the memory hierarchy; determining that the first node is a non-leaf node; determining that sufficient capacity exists for processing and storage of data associated with the first task at a second node that comprises a leaf node in a second hierarchy level of the memory hierarchy, the second hierarchy level being lower than the first hierarchy level; responsive to determining that the first node is a non-leaf node and that sufficient capacity exists for the data associated with the first task, processing the first data by dividing the first data to generate a first plurality of sub-tasks and storing the first plurality of sub-tasks in a second memory or storage unit associated with the second node; and processing the data at a processing unit associated with the leaf node, wherein the first hierarchy level and the second hierarchy level comprise hierarchy levels of one of a queue-based hierarchy or a directed acyclic graph-based hierarchy, and wherein tasks at leaf nodes of the memory hierarchy comprise portions of the payload processing that are performed by processing units associated with the leaf nodes, and tasks at non-leaf nodes of the memory hierarchy comprise tasks for dividing and transmitting the processing data to nodes at lower levels of the memory hierarchy. 20. The non-transitory computer-readable medium of claim 19, wherein: the sub-tasks of the first plurality of sub-tasks comprise payload processing tasks and not data splitting tasks.
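The directed-acyclic-graph organization and the sister-node load balancing recited above (claims 5-7 and 14-16) can be pictured with a similar hedged sketch. DagTask, split, and balance are invented names; the balancing rule (equalize counts to within one task) is only one plausible reading of "transferring one or more tasks" and is not taken from the application.

    class DagTask:
        """A vertex in the assumed DAG: edges point to the sub-tasks it spawned."""
        def __init__(self, data, is_split_task):
            self.data = data
            self.is_split_task = is_split_task  # non-leaf work divides, leaf work computes
            self.children = []                  # directed edges to sub-tasks
            self.done = False

    def split(task, chunk):
        """Non-leaf processing: generate sub-task vertices and the edges to them."""
        for i in range(0, len(task.data), chunk):
            task.children.append(DagTask(task.data[i:i + chunk], is_split_task=False))
        return task.children

    def balance(node_tasks_a, node_tasks_b):
        """Move tasks from the busier node to its sister until counts differ by at most
        one (one way to realize the 'transfer one or more tasks to a sister node' step)."""
        while len(node_tasks_a) > len(node_tasks_b) + 1:
            node_tasks_b.append(node_tasks_a.pop())
        while len(node_tasks_b) > len(node_tasks_a) + 1:
            node_tasks_a.append(node_tasks_b.pop())

    def mark_complete(task):
        """A split task completes once all of its sub-tasks have completed."""
        task.done = all(child.done for child in task.children)

    if __name__ == "__main__":
        root = DagTask(list(range(12)), is_split_task=True)
        subs = split(root, chunk=3)              # 4 leaf sub-tasks of 3 items each

        sister_a, sister_b = subs[:3], subs[3:]  # deliberately unbalanced: 3 vs 1
        balance(sister_a, sister_b)
        print(len(sister_a), len(sister_b))      # now 2 and 2

        for leaf in sister_a + sister_b:
            leaf.result = sum(leaf.data)         # payload processing at the leaves
            leaf.done = True
        mark_complete(root)
        print(root.done)                         # True once every sub-task is done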
2,100
274,027
15,496,033
2,131
Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: examining information of first through Nth storage volumes and based on the examining providing for each storage volume of the first through Nth storage volumes a predicted storage space savings value, the predicted storage space savings value indicating a predicted terabyte volume of storage space savings producible by performance of data compression of data stored on the storage volume; predicting a per terabyte compression cost savings associated with compressing one or more storage volumes of the first through Nth storage volumes, and providing a ranking of storage volumes of the first through Nth storage volumes based on the examining and the predicting; and scheduling a compression of storage volumes of the first through Nth storage volumes based on the ranking of storage volumes of the first through Nth storage volumes.
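A minimal sketch of the examine/predict/rank/schedule flow summarized above, under invented assumptions: Volume, compressibility, and cost_per_tb are illustrative stand-ins for whatever factors the actual examining and predicting steps use, and the scheduling policy (fill fixed-size maintenance windows in rank order) is an assumption, not the application's method.

    from dataclasses import dataclass

    @dataclass
    class Volume:
        name: str
        size_tb: float
        compressibility: float    # assumed fraction of data removable by compression
        cost_per_tb: float        # assumed monthly cost of a terabyte on this tier

    def predicted_savings_tb(vol):
        """'Examining' step: predicted terabyte savings from compressing the volume."""
        return vol.size_tb * vol.compressibility

    def predicted_cost_saving(vol):
        """'Predicting' step: per-terabyte cost saving applied to the predicted savings."""
        return predicted_savings_tb(vol) * vol.cost_per_tb

    def rank_and_schedule(volumes, slots_per_window=2):
        """Rank by predicted cost saving and assign compression jobs to windows."""
        ranked = sorted(volumes, key=predicted_cost_saving, reverse=True)
        schedule = {}
        for i, vol in enumerate(ranked):
            schedule.setdefault(i // slots_per_window, []).append(vol.name)
        return ranked, schedule

    if __name__ == "__main__":
        vols = [
            Volume("vol1", size_tb=10, compressibility=0.5, cost_per_tb=20.0),
            Volume("vol2", size_tb=40, compressibility=0.1, cost_per_tb=20.0),
            Volume("vol3", size_tb=8,  compressibility=0.7, cost_per_tb=35.0),
        ]
        ranked, schedule = rank_and_schedule(vols)
        for vol in ranked:
            print(vol.name, round(predicted_savings_tb(vol), 1), "TB",
                  round(predicted_cost_saving(vol), 1), "$/month")
        print(schedule)   # {0: ['vol3', 'vol1'], 1: ['vol2']}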
1-12. (canceled) 13. A computer program product comprising: a computer readable storage medium readable by one or more processing circuit and storing instructions for execution by one or more processor for performing a method comprising: examining information of first through Nth storage volumes and based on the examining providing for each storage volume of the first through Nth storage volumes a predicted storage space savings value, the predicted storage space savings value indicating a predicted terabyte volume of storage space savings producible by performance of data compression of data stored on the storage volume; predicting a per terabyte compression cost savings associated with compressing one or more storage volume of the first through Nth storage volumes, and providing a ranking of storage volumes of the first through Nth storage volumes based on the examining and the predicting; and scheduling a compression of storage volumes of the first through Nth storage volumes based on the ranking of storage volumes of the first through Nth storage volumes, wherein the examining is based on one or more of the following selected from the group consisting of a storage volume size factor, a storage volume I/O pattern factor, a storage volume host attachment pattern factor, a storage volume data characteristic factor, and storage volume data compressibility factor. 14. The computer program product of claim 13, wherein the examining is based on each of a storage volume size factor, a storage volume I/O pattern factor, a storage volume host attachment pattern factor, a storage volume data characteristic factor, and storage volume data compressibility factor. 15. The computer program product of claim 13, wherein the predicting is based on each of a storage volume tier factor, a storage volume model factor, and a licensing provision factor. 16. The computer program product of claim 13, wherein the method includes examining for at least one of the first through Nth storage volumes an exclusion criteria, wherein the exclusion criteria includes each of a negative cost savings criteria, a licensing provision criteria, and a design requirement criteria. 17. A system comprising: a memory; at least one processor in communication with memory; and program instructions executable by one or more processor via the memory to perform a method comprising: a memory; examining information of first through Nth storage volumes and based on the examining providing for each storage volume of the first through Nth storage volumes a predicted storage space savings value, the predicted storage space savings value indicating a predicted terabyte volume of storage space savings producible by performance of data compression of data stored on the storage volume; predicting a per terabyte compression cost savings associated with compressing one or more storage volume of the first through Nth storage volumes, and providing a ranking of storage volumes of the first through Nth storage volumes based on the examining and the predicting; and scheduling a compression of storage volumes of the first through Nth storage volumes based on the ranking of storage volumes of the first through Nth storage volumes, wherein the predicting is based on one or more of the following selected from the group consisting of a storage volume tier factor, a storage volume model factor, and a licensing provision factor. 18. 
The system of claim 17, wherein the examining is based on one or more of the following selected from the group consisting of a storage volume size factor, a storage volume I/O pattern factor, a storage volume host attachment pattern factor, a storage volume data characteristic factor, and a storage volume data compressibility factor. 19. The system of claim 17, wherein the predicting is based on each of a storage volume tier factor, a storage volume model factor, and a licensing provision factor. 20. The system of claim 17, wherein the method includes examining for at least one of the first through Nth storage volumes an exclusion criteria, wherein the exclusion criteria includes one or more of the following selected from the group consisting of: a negative cost savings criteria, a licensing provision criteria, and a design requirement criteria. 21. The system of claim 17, wherein the predicting is based on a storage volume tier factor. 22. The system of claim 17, wherein the predicting is based on a storage volume model factor. 23. The system of claim 17, wherein the predicting is based on a licensing provision factor. 24. The computer program product of claim 13, wherein the examining is based on a storage volume size factor. 25. The computer program product of claim 13, wherein the examining is based on a storage volume I/O pattern factor. 26. The computer program product of claim 13, wherein the examining is based on a storage volume host attachment pattern factor. 27. The computer program product of claim 13, wherein the examining is based on a storage volume data characteristic factor. 28. The computer program product of claim 13, wherein the examining is based on a storage volume data compressibility factor. 29. A computer program product comprising: a computer readable storage medium readable by one or more processing circuits and storing instructions for execution by one or more processors for performing a method comprising: examining information of first through Nth storage volumes and based on the examining providing for each storage volume of the first through Nth storage volumes a predicted storage space savings value, the predicted storage space savings value indicating a predicted terabyte volume of storage space savings producible by performance of data compression of data stored on the storage volume; predicting a per terabyte compression cost savings associated with compressing one or more storage volumes of the first through Nth storage volumes, and providing a ranking of storage volumes of the first through Nth storage volumes based on the examining and the predicting; and scheduling a compression of storage volumes of the first through Nth storage volumes based on the ranking of storage volumes of the first through Nth storage volumes, wherein the method includes examining for at least one of the first through Nth storage volumes an exclusion criteria, wherein the exclusion criteria includes one or more of the following selected from the group consisting of: a negative cost savings criteria, a licensing provision criteria, and a design requirement criteria. 30. The computer program product of claim 29, wherein the exclusion criteria includes a negative cost savings criteria. 31. The computer program product of claim 29, wherein the exclusion criteria includes a licensing provision criteria. 32. The computer program product of claim 29, wherein the exclusion criteria includes a design requirement criteria.
2,100
274,028
15,495,994
2,131
A page aligning method for a data storage device is provided. The data storage device includes a non-volatile memory and the page aligning method includes steps of: executing a system initialization on the non-volatile memory to obtain a remaining storage capacity; selecting a number from a lookup table as an initial storage capacity according to the remaining storage capacity; and treating the initial storage capacity as a fixed capacity in the data storage device and writing the initial storage capacity into the non-volatile memory. A lookup table generating method and a data storage device are also provided.
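The selection step described above can be illustrated with a short, assumption-laden sketch: ALIGNED_CAPACITIES stands in for a pre-built lookup table, and "writing the initial storage capacity into the non-volatile memory" is reduced to returning a dict. None of the names or numbers come from the application.

    # Hypothetical lookup table: each entry is a pre-computed initial capacity (in sectors)
    # known to format to a page-aligned valid capacity on this device family.
    ALIGNED_CAPACITIES = [1_048_576, 2_097_152, 4_194_304, 8_388_608]

    def select_initial_capacity(remaining_capacity, table=ALIGNED_CAPACITIES):
        """Pick the largest table entry that does not exceed the remaining capacity,
        mirroring the 'select a number from a lookup table' step in the abstract."""
        candidates = [c for c in table if c <= remaining_capacity]
        if not candidates:
            raise ValueError("no aligned capacity fits the remaining space")
        return max(candidates)

    def initialize_device(remaining_capacity):
        """Treat the selected value as the device's fixed capacity and record it
        (a returned dict stands in for writing an information block)."""
        fixed = select_initial_capacity(remaining_capacity)
        return {"fixed_capacity": fixed, "hidden": remaining_capacity - fixed}

    if __name__ == "__main__":
        print(initialize_device(remaining_capacity=2_500_000))
        # {'fixed_capacity': 2097152, 'hidden': 402848}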
1. A page aligning method for a data storage device, wherein the data storage device comprises a non-volatile memory, and the page aligning method comprises steps of: executing a system initialization on the non-volatile memory to obtain a remaining storage capacity; selecting a number from a lookup table as an initial storage capacity according to the remaining storage capacity; and referring the initial storage capacity as a fixed capacity in the data storage device and writing the initial storage capacity into the non-volatile memory. 2. The page aligning method according to claim 1, wherein an information block is generated when the system initialization is executed on the non-volatile memory. 3. The page aligning method according to claim 1, wherein the initial storage capacity is smaller than the remaining storage capacity. 4. The page aligning method according to claim 1, wherein the initial storage capacity is smaller than an unallocated space determined by an operating system. 5. The page aligning method according to claim 1, wherein the number is a simulative initial storage capacity. 6. The page aligning method according to claim 1, wherein the initial storage capacity is written into an information block in the non-volatile memory. 7. The page aligning method according to claim 1, wherein the initial storage capacity was set during a lookup table generating method. 8. A lookup table generating method for alignment of pages in a data storage device, wherein the data storage device comprises a non-volatile memory and the lookup table generating method comprises steps of: setting a simulative initial storage capacity; setting a hidden space; simulating a formatting process on the simulative initial storage capacity without altering the hidden space to generate a valid storage capacity; and storing a current simulative initial storage capacity into a lookup table when the current valid storage capacity satisfies a predetermined condition and the hidden space is smaller than a default value. 9. The lookup table generating method according to claim 8, wherein the steps of storing the current simulative initial storage capacity into the lookup table when the current valid storage capacity satisfies the predetermined condition and the hidden space is smaller than the default value comprises steps of: determining whether the valid storage capacity satisfies the predetermined condition; if yes, determining whether the hidden space is smaller than the default value; and if no, storing the current simulative initial storage capacity into the lookup table. 10. The lookup table generating method according to claim 9, wherein the step of determining whether the valid storage capacity satisfies the predetermined condition comprises steps of: determining whether the valid storage capacity satisfies the predetermined condition; and if no, increasing the hidden space. 11. The lookup table generating method according to claim 9, wherein the step of determining whether the hidden space is smaller than the default value comprises steps of: determining whether the hidden space is smaller than the default value; and if no, increasing the simulative initial storage capacity. 12. The lookup table generating method according to claim 8, wherein the predetermined condition is that the valid storage capacity is N times of a capacity of a page in the non-volatile memory, wherein N is a positive integer. 13. 
The lookup table generating method according to claim 8, wherein the hidden space is located between a system area and a user area. 14. The lookup table generating method according to claim 8, wherein the hidden space is located adjacent to a master boot record (MBR). 15. A data storage device, comprising: a non-volatile memory for storing data, wherein the non-volatile memory comprises a plurality of data blocks and each of the data blocks comprises a plurality of data pages; and a memory controller for controlling operations of the non-volatile memory and determining a valid storage capacity of the data storage device according to a fixed capacity stored in an information block. 16. The data storage device according to claim 15, wherein the valid storage capacity is smaller than or equal to a remaining storage capacity. 17. The data storage device according to claim 15, wherein the fixed capacity is determined based on a lookup table and a remaining storage capacity. 18. A data storage device, comprising: a plurality of data blocks, wherein each of the data blocks comprises a plurality of data pages for storing data; and a memory controller, logically defining the data blocks into an information block and a plurality of remaining blocks and determining a valid storage capacity of the remaining storage capacity of the remaining blocks according to a fixed capacity. 19. The data storage device according to claim 18, wherein the fixed capacity is determined based on a lookup table and the remaining storage capacity. 20. The data storage device according to claim 18, wherein the information block also stores an in-system programming firmware and product information.
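The lookup table generating method recited in claims 8-12 above is essentially a simulation loop: try a simulative initial capacity, grow the hidden space until the simulated valid capacity is a whole number of pages, and record capacities whose hidden space stays under a ceiling. The sketch below is a hedged illustration of that loop; PAGE_SIZE, MAX_HIDDEN, the fixed OVERHEAD, and the 512-byte step are invented parameters rather than values from the application.

    PAGE_SIZE = 4096            # assumed page size in bytes
    MAX_HIDDEN = 64 * 1024      # assumed default ceiling for the hidden space
    OVERHEAD = 32 * 512         # invented fixed formatting overhead (MBR, system area)

    def simulate_format(capacity, hidden):
        """Stand-in for simulating a formatting process: valid capacity is what remains
        after the hidden space and a fixed file-system overhead are set aside."""
        return capacity - hidden - OVERHEAD

    def build_lookup_table(capacities):
        """For each simulative initial capacity, grow the hidden space until the valid
        capacity is a whole number of pages; keep capacities whose hidden space stays
        under the ceiling (the loop of claims 8-12 in the method above)."""
        table = []
        for capacity in capacities:
            hidden = 0
            while hidden < MAX_HIDDEN:
                if simulate_format(capacity, hidden) % PAGE_SIZE == 0:
                    table.append(capacity)      # page-aligned: record this capacity
                    break
                hidden += 512                   # enlarge the hidden space and retry
        return table

    if __name__ == "__main__":
        candidates = range(1_048_576, 1_150_976, 10_240)   # multiples of 512 bytes
        print(build_lookup_table(candidates))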
2,100
274,029
15,496,637
2,131
Systems, apparatuses, and methods for tracking page reuse and migrating pages are disclosed. In one embodiment, a system includes one or more processors, a memory access monitor, and multiple memory regions. The memory access monitor tracks accesses to memory pages in a system memory during a programmable interval. If the number of accesses to a given page is greater than a programmable threshold during the programmable interval, then the memory access monitor generates an interrupt for software to migrate the given page from the system memory to a local memory. If the number of accesses to the given page is less than or equal to the programmable threshold during the programmable interval, then the given page remains in the system memory. After the programmable interval, the memory access monitor starts tracking the number of accesses to a new page in a subsequent interval.
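A toy model of the interval-based monitoring described in the abstract above: one tracked page per interval, a programmable access threshold, and a callback standing in for the interrupt that asks software to migrate the page. AccessMonitor and its fields are invented names, and counting observed accesses (rather than wall-clock time) to bound the interval is a simplifying assumption.

    class AccessMonitor:
        """Counts touches to one tracked page per interval and asks software to
        migrate it if the count passes a threshold (names are illustrative)."""

        def __init__(self, threshold, interval_accesses, migrate_callback):
            self.threshold = threshold
            self.interval_accesses = interval_accesses  # interval length, in observed accesses
            self.migrate = migrate_callback
            self.tracked_page = None
            self.count = 0
            self.seen = 0

        def observe(self, page):
            if self.tracked_page is None:
                self.tracked_page = page            # start tracking a new page
                self.count = 0
                self.seen = 0
            self.seen += 1
            if page == self.tracked_page:
                self.count += 1
                if self.count > self.threshold:
                    self.migrate(page)              # hot page: move to local memory
                    self.tracked_page = None
                    return
            if self.seen >= self.interval_accesses:
                self.tracked_page = None            # interval over: track a new page next

    if __name__ == "__main__":
        migrated = []
        monitor = AccessMonitor(threshold=3, interval_accesses=8,
                                migrate_callback=migrated.append)
        for page in [7, 7, 2, 7, 7, 5, 5, 9, 2, 9]:
            monitor.observe(page)
        print(migrated)   # [7]: only page 7 crossed the threshold during its interval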
1. A system comprising: a first memory; a second memory; and a plurality of control units; wherein a first control unit is configured to: count, in a first stage filter, X valid requests traversing a first memory channel, wherein X is a positive integer; send a subsequent request to a second stage filter after counting X valid requests, wherein the second stage filter comprises a table for tracking a number of accesses to pages; increment a counter if a first entry already exists in the table for a first page targeted by the subsequent request; and cause the first page to be migrated from the first memory to the second memory responsive to the counter reaching a threshold. 2. The system as recited in claim 1, wherein the first control unit is further configured to: generate a first interrupt responsive to detecting that the counter in the first entry has exceeded the threshold; and convey the first interrupt to a second control unit; wherein the second control unit is configured to convey the first interrupt to an interrupt handler. 3. The system as recited in claim 2, wherein the system comprises one or more processors, wherein program instructions executed by the one or more processors are configured to migrate the given page from the first memory to the second memory responsive to the interrupt handler receiving the first interrupt. 4. The system as recited in claim 1, wherein the first interrupt includes an identifier of the first memory channel on which the first page was accessed. 5. The system as recited in claim 1, wherein the first control unit is configured to monitor accesses to a plurality of pages during a programmable interval. 6. The system as recited in claim 5, wherein responsive to determining a given programmable interval for a given page has expired, the first control unit is configured to evict an entry for the given page from the table. 7. The system as recited in claim 6, wherein a number of accesses to each page of the plurality of pages is monitored during a separate programmable interval for each page. 8. A method comprising: counting, in a first stage filter of a first control unit, X valid requests traversing a first memory channel, wherein X is a positive integer; sending a subsequent request to a second stage filter after counting X valid requests, wherein the second stage filter comprises a table for tracking a number of accesses to pages; incrementing a counter if a first entry already exists in the table for a first page targeted by the subsequent request; and causing the first page to be migrated from a first memory to a second memory responsive to the counter reaching a threshold. 9. The method as recited in claim 8, further comprising: generating a first interrupt responsive to detecting that the counter in the first entry has exceeded the threshold; conveying the first interrupt to a second control unit; and conveying the first interrupt from the second control unit to an interrupt handler. 10. The method as recited in claim 9, further comprising migrating the given page from the first memory to the second memory responsive to the interrupt handler receiving the first interrupt. 11. The method as recited in claim 8, wherein the first interrupt includes an identifier of the first memory channel on which the first page was accessed. 12. The method as recited in claim 8, further comprising monitoring accesses to a plurality of pages during a programmable interval. 13. 
The method as recited in claim 12, further comprising evicting an entry for the given page from the table responsive to determining a given programmable interval for a given page has expired. 14. The method as recited in claim 13, wherein each page of the plurality of pages has a separate programmable interval. 15. An apparatus comprising: a first memory; a second memory; one or more processors; and a plurality of control units; wherein a first control unit is configured to: count, in a first stage filter, X valid requests traversing a first memory channel, wherein X is a positive integer; send a subsequent request to a second stage filter after counting X valid requests, wherein the second stage filter comprises a table for tracking a number of accesses to pages; increment a counter if a first entry already exists in the table for a first page targeted by the subsequent request; and cause the first page to be migrated from the first memory to the second memory responsive to the counter reaching a threshold. 16. The apparatus as recited in claim 15, wherein the first control unit is further configured to: generate a first interrupt responsive to detecting that the counter in the first entry has exceeded the threshold; and convey the first interrupt to a second control unit; wherein the second control unit is configured to convey the first interrupt to an interrupt handler. 17. The apparatus as recited in claim 16, wherein program instructions executed by the one or more processors are configured to migrate the given page from the first memory to the second memory responsive to the interrupt handler receiving the first interrupt. 18. The apparatus as recited in claim 15, wherein the first interrupt includes an identifier of the first memory channel on which the first page was accessed. 19. The apparatus as recited in claim 15, wherein the first control unit is configured to monitor accesses to a plurality of pages during a programmable interval. 20. The apparatus as recited in claim 19, wherein responsive to determining a given programmable interval for a given page has expired, the first control unit is configured to evict an entry for the given page from the table.
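The two-stage filter recited in the claims above can be sketched as a sampler in front of a per-page counter table: the first stage counts X valid requests and forwards the next one, and the second stage increments that page's entry and flags the page for migration once the counter reaches a threshold. TwoStageFilter and the callback are illustrative assumptions; a real implementation would sit on a memory channel, not consume a Python list of page numbers.

    from collections import defaultdict

    class TwoStageFilter:
        """Toy two-stage filter: stage one samples one request out of every X valid
        requests, stage two keeps a per-page counter table (names are illustrative)."""

        def __init__(self, x, threshold, migrate_callback):
            self.x = x                        # first-stage sampling interval
            self.threshold = threshold
            self.migrate = migrate_callback
            self.valid_seen = 0               # first-stage counter
            self.table = defaultdict(int)     # second-stage page -> access count

        def on_request(self, page):
            self.valid_seen += 1
            if self.valid_seen <= self.x:
                return                        # first stage: just count X valid requests
            self.valid_seen = 0               # the subsequent request goes to stage two
            self.table[page] += 1             # increment (or create) the page's entry
            if self.table[page] >= self.threshold:
                self.migrate(page)            # e.g. raise the interrupt that triggers migration
                del self.table[page]

    if __name__ == "__main__":
        hot = []
        filt = TwoStageFilter(x=2, threshold=2, migrate_callback=hot.append)
        for page in [10, 11, 10, 12, 13, 10, 14, 15, 16]:
            filt.on_request(page)
        print(hot)   # [10]: page 10 is sampled often enough to be flagged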
2,100
274,030
15,496,761
2,131
Apparatus and method for managing metadata in a data storage device. In some embodiments, a metadata object has entries that describe data sets stored in a non-volatile write cache. During an archival (persistence) operation, the metadata object is divided into portions, and the portions are copied in turn to a non-volatile memory at a rate that maintains a measured latency within a predetermined threshold. A journal is formed of time-ordered entries that describe changes to the metadata object after the copying of the associated portions to the non-volatile memory. The journal is subsequently stored to the non-volatile memory, and may be subsequently combined with the previously stored portions to recreate the metadata object in a local memory. The measured performance latency may be related to a specified customer command completion time (CCT) for host commands.
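A hedged sketch of the persistence scheme in the abstract above: the metadata object is split into per-LBA-range portions, portions are copied out one at a time, changes arriving after a portion has been copied are journaled in time order, and the journal is replayed over the stored portions to rebuild the object. Persister, split_portions, and the dict standing in for non-volatile memory are assumptions; latency throttling and command blocking are deliberately omitted.

    def split_portions(metadata, ranges):
        """Divide the metadata object (lba -> cache location) into per-LBA-range portions."""
        return [{lba: loc for lba, loc in metadata.items() if lo <= lba < hi}
                for lo, hi in ranges]

    class Persister:
        """Toy persistence pass: portions are copied out one at a time, and any update
        that lands after its portion was copied is appended to a time-ordered journal."""

        def __init__(self, metadata, ranges):
            self.metadata = metadata
            self.ranges = ranges
            self.flushed = []              # portions already written to "NV memory"
            self.journal = []              # time-ordered (lba, new_location) entries

        def flush_next_portion(self):
            idx = len(self.flushed)
            portion = split_portions(self.metadata, self.ranges)[idx]
            self.flushed.append(dict(portion))   # copy this portion to NV memory

        def update(self, lba, location):
            self.metadata[lba] = location
            for (lo, hi), _ in zip(self.ranges[:len(self.flushed)], self.flushed):
                if lo <= lba < hi:
                    self.journal.append((lba, location))  # portion already on NV: journal it

        def rebuild(self):
            """Recreate the metadata object from the stored portions plus the journal."""
            rebuilt = {}
            for portion in self.flushed:
                rebuilt.update(portion)
            for lba, location in self.journal:            # replay changes in time order
                rebuilt[lba] = location
            return rebuilt

    if __name__ == "__main__":
        meta = {0: "c0", 5: "c1", 12: "c2"}
        p = Persister(meta, ranges=[(0, 8), (8, 16)])
        p.flush_next_portion()       # portion for LBAs 0-7 is now persisted
        p.update(5, "c9")            # change to an already-persisted entry -> journaled
        p.flush_next_portion()       # portion for LBAs 8-15 persisted
        print(p.rebuild() == p.metadata)   # True: portions + journal give the current object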
1. A method comprising: maintaining a metadata object as a data structure in a local memory, the metadata object having a plurality of entries that describe data sets stored in a write cache comprising a non-volatile memory; dividing the metadata object into a plurality of portions, each portion describing an associated range of logical addresses and having a size selected responsive to a latency associated with transfers between the write cache and a host device; copying each portion in turn to a non-volatile memory to maintain the latency at a desired level with respect to a predetermined threshold; generating a journal as a data structure in the local memory having a plurality of time-ordered entries that describe changes to the metadata object during and after the copying of the associated portions to the non-volatile memory; and storing the journal to the non-volatile memory after all of the portions of the media cache memory table have been stored in the non-volatile memory. 2. The method of claim 1, further comprising subsequent steps of transferring the journal and the portions of the metadata object from the non-volatile memory to the local memory responsive to a re-initialization operation, merging the time-ordered entries of the journal with the portions of the metadata object to provide a current version metadata object, and updating the current version metadata object responsive to subsequent transfers of additional sets of user data into and out of the write cache. 3. The method of claim 1, wherein the metadata object is a master table comprising a plurality of table entries, each table entry associating a logical address of a user data block to a corresponding physical address of the user data block in the write cache. 4. The method of claim 3, wherein the master table comprises at least a selected one of a B+ tree, a linear tree or a two level tree. 5. The method of claim 1, wherein each of the portions of the metadata object is nominally the same size. 6. The method of claim 1, wherein the write cache comprises a media cache, the media cache comprises a first portion of a rotatable data recording medium, and the non-volatile memory comprises a different, second portion of the rotatable data recording medium. 7. The method of claim 1, wherein the write cache comprises a semiconductor memory. 8. The method of claim 1, further comprising storing a first portion of the plurality of portions of the metadata object to the non-volatile memory, receiving a host command from the host device associated with a second portion of the metadata object prior to writing the second portion of the metadata object to the non-volatile memory, and servicing the received host command and updating the second portion prior to writing the second portion to the non-volatile memory. 9. The method of claim 1, wherein the latency comprises a data transfer rate associated with a transfer of data with the host device. 10. The method of claim 9, wherein the predetermined threshold is a threshold data transfer rate selected in relation to a specified command completion time comprising an elapsed time from receipt of a host command to completion of the host command. 11. The method of claim 1, further comprising temporarily applying a block to the metadata object in the local memory during the transfer of the respective portions thereof to the non-volatile memory so that no updates are applied to the metadata object during the application of said block. 12. 
The method of claim 10, further comprising receiving a host command for a selected portion not yet transferred to the non-volatile memory, temporarily unblocking the selected portion while maintaining the remaining portions of the media cache master table in a blocked condition to facilitate execution of the received host command and updating of the selected portion, reblocking the updated selected portion and writing the updated selected portion to the non-volatile memory. 13. The method of claim 1, wherein the portions stored in the non-volatile memory form an incoherent media cache master table, and the method further comprises replacing selected entries in the incoherent media cache master table with the entries in the update table to form a coherent media cache master table in the local memory. 14. The method of claim 1, wherein for each portion in turn, the method comprises copying the portion to the non-volatile main memory responsive to an absence of a pending host access command for the associated range of logical addresses and blocking execution of any subsequently received host access commands for the associated range of logical addresses during said copying, else delaying said copying and blocking steps for the portion responsive to a presence of a pending host access command for the associated range of logical addresses until completion of execution of said pending host access command. 15. A data storage device, comprising: a non-volatile memory configured to store user data from a host device; a write cache comprising a non-volatile cache memory configured to temporarily store user data prior to transfer to the non-volatile memory; a local memory which stores a metadata object as a data structure having a plurality of entries that describe the user data stored in the write cache; and a cache manager circuit configured to divide the metadata object into a plurality of portions each associated with a different range of logical addresses for the user data stored in the write cache, to copy each portion in turn to the non-volatile memory to maintain a measured latency associated with data transfers between the data storage device and the host device within a predetermined threshold, to generate a journal as a data structure in the local memory having a plurality of entries that describe changes to the metadata object after the copying of the associated portions to the non-volatile memory, and to store the journal to the non-volatile memory after all of the portions of the media cache memory table have been stored in the non-volatile memory. 16. The data storage device of claim 15, wherein the cache manager circuit is further configured to subsequently direct a loading of the journal and the portions of the metadata object to the local memory responsive to a re-initialization sequence for the data storage device, to merge the entries in the journal with the portions of the metadata object to provide a current version metadata object, and to update the current version metadata object responsive to transfers of user data to and from the write cache. 17. The data storage device of claim 15, wherein the metadata object comprises a plurality of entries, each entry associating a logical address of a user data block to a corresponding physical address of the user data block in the write cache. 18. 
The data storage device of claim 15, wherein for each selected portion in turn, the cache manager circuit operates to copy the selected portion to the non-volatile memory responsive to an absence of a pending host access command for the associated range of logical addresses and block execution of any subsequently received host access commands for the associated range of logical addresses during said copying, and wherein, responsive to a presence of a pending host access command for the associated range of logical addresses for the selected portion, delaying the copying of the selected portion until the pending host access command is executed. 19. The data storage device of claim 15, wherein the measured latency comprises a data transfer rate between the data storage device and the host device. 20. The data storage device of claim 15, wherein the write cache is characterized as a media cache comprising a first portion of a rotatable data recording medium, and the non-volatile memory comprises a different, second portion of the rotatable data recording medium.
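Claims 2 and 16 above cover the recovery side: on re-initialization the stored portions and the journal are loaded back, and the time-ordered journal entries are replayed over the portions to produce a current metadata object. A hedged one-function sketch of that merge, with an invented in-memory layout rather than any on-media format, might look like this:

```python
def rebuild_metadata(portions, journal):
    """Merge persisted metadata portions with a time-ordered journal.

    `portions` maps portion index -> {lba: cache_location} as written during
    the piecewise flush; `journal` is a list of (lba, cache_location) updates
    recorded after their portion was flushed. Replaying the journal in order
    yields the current metadata object. The layout is an illustrative
    assumption, not an on-disk format.
    """
    metadata = {}
    for snapshot in portions.values():
        metadata.update(snapshot)
    for lba, location in journal:          # later entries win
        metadata[lba] = location
    return metadata


# Example: portion 0 was flushed before LBA 2 was remapped, so the change
# shows up only in the journal and must override the stale snapshot.
portions = {0: {0: "mc:10", 1: "mc:11", 2: "mc:12"}, 1: {8: "mc:20"}}
journal = [(2, "mc:30"), (9, "mc:31")]
assert rebuild_metadata(portions, journal)[2] == "mc:30"
```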
2,100
274,031
15,496,525
2,131
A computer implemented method for avoiding false activation of hang avoidance mechanisms of a system is provided. The computer implemented method includes receiving, by a nest of the system, rejects from a processor core of the system. The rejects are issued based on a cache line being locked by the processor core. The computer implemented method includes accumulating the rejects by the nest. The computer implemented method includes determining, by the nest, when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold. The computer implemented method also includes triggering, by the nest, a global reset to counters of the hang avoidance mechanisms of a system in response to the amount meeting or exceeding the programmable threshold.
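The scheme in this abstract reduces to a counter in the nest (the shared cache/fabric logic outside the cores): every reject returned because a core is intentionally holding a cache line is accumulated, and once the count meets the programmable threshold the nest broadcasts a reset to the hang-avoidance counters so a deliberate stall is not misread as a hang. Below is a minimal sketch of that accumulate-and-reset policy with invented class and method names; real hardware would implement this in logic, not software.

```python
class HangAvoidanceCounter:
    """Stand-in for one hang-avoidance mechanism's progress counter."""
    def __init__(self):
        self.value = 0

    def tick(self):
        self.value += 1

    def reset(self):
        self.value = 0


class Nest:
    """Sketch of the nest-side logic in the abstract above.

    Rejects received because a core is deliberately holding a cache line
    are accumulated; once the programmable threshold is met or exceeded,
    the nest triggers a global reset of every hang-avoidance counter so
    that the stall is not mistaken for a hang.
    """
    def __init__(self, threshold, hang_counters):
        self.threshold = threshold
        self.hang_counters = hang_counters
        self.rejects = 0

    def on_reject(self, line_locked):
        if not line_locked:
            return
        # Model: the stalled requests also advance the hang-avoidance counters.
        for counter in self.hang_counters:
            counter.tick()
        self.rejects += 1
        if self.rejects >= self.threshold:
            self.global_reset()

    def global_reset(self):
        """Broadcast a reset, as the instruction message in claim 4 would."""
        for counter in self.hang_counters:
            counter.reset()
        self.rejects = 0


counters = [HangAvoidanceCounter() for _ in range(4)]
nest = Nest(threshold=8, hang_counters=counters)
for _ in range(20):                 # the core keeps rejecting while it holds the line
    nest.on_reject(line_locked=True)
assert all(c.value < nest.threshold for c in counters)  # periodically reset by the nest
```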
1. A computer implemented method for avoiding false activation of hang avoidance mechanisms of a system, comprising: receiving, by a nest of the system, rejects from a processor core of the system, wherein the rejects are issued based on a cache line being locked by the processor core; accumulating, by the nest, the rejects; determining, by the nest, when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold; and triggering, by the nest, a global reset to counters of the hang avoidance mechanisms of a system in response to the amount meeting or exceeding the programmable threshold. 2. The computer implemented method of claim 1, wherein the rejects are issued by the processor core in response to the nest attempting to release control of the cache line. 3. The computer implemented method of claim 1, wherein the reject comprises an indication that the cache line is locked. 4. The computer implemented method of claim 1, wherein the global reset comprises an instruction message that causes the hang avoidance mechanisms throughout the system to reset their corresponding counters. 5. The computer implemented method of claim 1, comprising executing, by the processor core, a next instruction access intent lock instruction to purposefully hold onto the cache line for an extended period of time. 6. The computer implemented method of claim 1, comprising executing, by a second processor core of the system, an access attempt of the cache line. 7. The computer implemented method of claim 6, comprising sending, by the nest, a message on behalf of the second processor core to the processor core to access the cache line in response to the access attempt. 8. A computer program product for avoiding false activation of hang avoidance mechanisms of a system, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by the system to cause a nest of the system to: receive rejects from a processor core of the system, wherein the rejects are issued based on a cache line being locked by the processor core; accumulate the rejects; determine when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold; and trigger a global reset to counters of the hang avoidance mechanisms of a system in response to the amount meeting or exceeding the programmable threshold. 9. The computer program product of claim 8, wherein the rejects are issued by the processor core in response to the nest attempting to release control of the cache line. 10. The computer program product of claim 8, wherein the reject comprises an indication that the cache line is locked. 11. The computer program product of claim 8, wherein the global reset comprises an instruction message that causes the hang avoidance mechanisms throughout the system to reset their corresponding counters. 12. The computer program product of claim 8, wherein the program instructions are further executable by the system to cause the processor core to execute a next instruction access intent lock instruction to purposefully hold onto the cache line for an extended period of time. 13. The computer program product of claim 8, wherein the program instructions are further executable by the system to cause a second processor core of the system to execute an access attempt of the cache line. 14. 
The computer program product of claim 13, wherein the program instructions are further executable by the system to cause the nest to send a message on behalf of the second processor core to the processor core to access the cache line in response to the access attempt. 15. A system, comprising a nest, a memory, and a processor core, the memory storing thereon program instructions for avoiding false activation of hang avoidance mechanisms of the system, the program instructions executable by the system to cause the nest to: receive rejects from a processor core of the system, wherein the rejects are issued based on a cache line being locked by the processor core; accumulate the rejects; determine when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold; and trigger a global reset to counters of the hang avoidance mechanisms of a system in response to the amount meeting or exceeding the programmable threshold. 16. The system of claim 15, wherein the rejects are issued by the processor core in response to the nest attempting to release control of the cache line. 17. The system of claim 15, wherein the reject comprises an indication that the cache line is locked. 18. The system of claim 15, wherein the global reset comprises an instruction message that causes the hang avoidance mechanisms throughout the system to reset their corresponding counters. 19. The system of claim 15, wherein the program instructions are further executable by the system to cause the processor core to execute a next instruction access intent lock instruction to purposefully hold onto the cache line for an extended period of time. 20. The system of claim 15, wherein the program instructions are further executable by the system to cause a second processor core of the system to execute an access attempt of the cache line.
2,100
274,032
15,495,120
2,131
A storage system includes a first storage apparatus, a second storage apparatus having a higher response speed than the first storage apparatus, and a control apparatus. The first storage apparatus is configured to execute, when first data stored in a first physical address of the first storage apparatus corresponding to a first logical address is identical with second data stored in a second physical address of the first storage apparatus corresponding to a second logical address, a first redundancy removal processing for erasing the second data and correlating both of the first logical address and the second logical address with the first physical address. The control apparatus is configured to specify a first read frequency for the first logical address, specify a second read frequency for the second logical address, and execute, when a total value of the first read frequency and the second read frequency is greater than a first value, a transmission of the first data from the first storage apparatus to the second storage apparatus.
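Per the claims that follow, deduplicated data on the slower first apparatus is promoted to the faster second apparatus when the combined read frequency of all logical addresses mapped to one physical copy exceeds a threshold, and cold deduplicated data on the fast tier is demoted back. The sketch below shows just that decision, using a dictionary as a stand-in for the address-mapping tables; the tier labels, names, and thresholds are illustrative assumptions.

```python
def plan_migrations(mapping, read_freq, promote_above, demote_below):
    """Decide tier moves for deduplicated blocks.

    `mapping` maps a physical location ("hdd:N" or "ssd:N", assumed labels)
    to the list of logical addresses that redundancy removal collapsed onto
    it. `read_freq` maps each logical address to its read frequency. A block
    moves to the fast tier when the summed frequency of all logical
    addresses sharing it exceeds `promote_above`, and back to the slow tier
    when it falls below `demote_below`.
    """
    promote, demote = [], []
    for phys, logicals in mapping.items():
        total = sum(read_freq.get(l, 0) for l in logicals)
        if phys.startswith("hdd:") and total > promote_above:
            promote.append(phys)
        elif phys.startswith("ssd:") and total < demote_below:
            demote.append(phys)
    return promote, demote


mapping = {"hdd:7": [0x10, 0x22], "ssd:3": [0x40, 0x41]}
read_freq = {0x10: 30, 0x22: 25, 0x40: 1, 0x41: 0}
print(plan_migrations(mapping, read_freq, promote_above=50, demote_below=5))
# (['hdd:7'], ['ssd:3'])
```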
1. A storage system comprising: a first storage apparatus configured to execute, when first data stored in a first physical address of the first storage apparatus corresponding to a first logical address is identical with second data stored in a second physical address of the first storage apparatus corresponding to a second logical address, a first redundancy removal processing for erasing the second data and correlating both of the first logical address and the second logical address with the first physical address; a second storage apparatus having a second response speed higher than a first response speed of the first storage apparatus; and a control apparatus including a memory and a processor coupled to the memory, the processor being configured to: specify a first read frequency for the first logical address, specify a second read frequency for the second logical address, and execute, when a total value of the first read frequency and the second read frequency is greater than a first value, a transmission of the first data from the first storage apparatus to the second storage apparatus. 2. The storage system according to claim 1, wherein the processor is further configured to: correlate a first hash value of the first data with the first read frequency for the first logical address, correlate a second hash value of the second data with the second read frequency for the second logical address, and determine, when the first hash value is identical with the second hash value, that the first data is identical with the second data. 3. The storage system according to claim 1, wherein the second storage apparatus is configured to execute, when third data stored in a third physical address of the second storage apparatus corresponding to a third logical address is identical with fourth data stored in a fourth physical address of the second storage apparatus corresponding to a fourth logical address, a second redundancy removal processing for erasing the fourth data and correlating both of the third logical address and the fourth logical address with the third physical address. 4. The storage system according to claim 3, wherein the processor is further configured to: specify a third read frequency for the third logical address, specify a fourth read frequency for the fourth logical address, and execute, when a total value of the third read frequency and the fourth read frequency is less than a second value, a transmission of the third data from the second storage apparatus to the first storage apparatus. 5. The storage system according to claim 1, wherein the processor is configured to: in the transmission, store the first data in a third physical address of the second storage apparatus, and correlate the first logical address and the second logical address with the third physical address. 6. The storage system according to claim 1, wherein the first storage apparatus is a hard disk drive (HDD), and the second storage apparatus is a solid state drive (SSD). 7. The storage system according to claim 2, wherein the processor is configured to: specify the first logical address based on the first hash value, read the first data based on the first logical address, and in the transmission, transmit the first data to the second storage apparatus. 8. 
The storage system according to claim 3, wherein the processor is further configured to: when a request for writing of fifth data into a fifth logical address of which an access frequency is higher than a third value, determine a sixth address of the second storage apparatus as a write destination of the fifth data, when a write frequency regarding the sixth address is lower than a fourth value, execute the second redundancy removal processing for the fifth data and store the fifth data for which the second redundancy processing is executed in the second storage apparatus, and when the write frequency regarding the sixth address is higher than the fourth value and the sixth data is already stored in a seventh address of the second storage apparatus, store the fifth data in an eighth address of the second storage apparatus. 9. A control apparatus for a first storage apparatus and a second storage apparatus, the first storage apparatus being configured to execute, when first data stored in a first physical address of the first storage apparatus corresponding to a first logical address is identical with second data stored in a second physical address of the first storage apparatus corresponding to a second logical address, a first redundancy removal processing for erasing the second data and correlating both of the first logical address and the second logical address with the first physical address, the second storage apparatus having a second response speed higher than a first response speed of the first storage apparatus, the control apparatus comprising: a memory; and a processor coupled to the memory and configured to: specify a first read frequency for the first logical address, specify a second read frequency for the second logical address, and execute, when a total value of the first read frequency and the second read frequency is greater than a first value, a transmission of the first data from the first storage apparatus to the second storage apparatus. 10. The control apparatus according to claim 9, wherein the processor is further configured to: correlate a first hash value of the first data with the first read frequency for the first logical address, correlate a second hash value of the second data with the second read frequency for the second logical address, and determine, when the first hash value is identical with the second hash value, that the first data is identical with the second data. 11. The control apparatus according to claim 9, wherein the second storage apparatus is configured to execute, when third data stored in a third physical address of the second storage apparatus corresponding to a third logical address is identical with fourth data stored in a fourth physical address of the second storage apparatus corresponding to a fourth logical address, a second redundancy removal processing for erasing the fourth data and correlating both of the third logical address and the fourth logical address with the third physical address. 12. The control apparatus according to claim 11, wherein the processor is further configured to: specify a third read frequency for the third logical address, specify a fourth read frequency for the fourth logical address, and execute, when a total value of the third read frequency and the fourth read frequency is less than a second value, a transmission of the third data from the second storage apparatus to the first storage apparatus. 13. 
The control apparatus according to claim 9, wherein the processor is configured to: in the transmission, store the first data in a third physical address of the second storage apparatus, and correlate the first logical address and the second logical address with the third physical address. 14. The control apparatus according to claim 9, wherein the first storage apparatus is a hard disk drive (HDD), and the second storage apparatus is a solid state drive (SSD). 15. The control apparatus according to claim 10, wherein the processor is configured to: specify the first logical address based on the first hash value, read the first data based on the first logical address, and in the transmission, transmit the first data to the second storage apparatus. 16. The control apparatus according to claim 11, wherein the processor is further configured to: when a request for writing of fifth data into a fifth logical address of which an access frequency is higher than a third value, determine a sixth address of the second storage apparatus as a write destination of the fifth data, when a write frequency regarding the sixth address is lower than a fourth value, execute the second redundancy removal processing for the fifth data and store the fifth data for which the second redundancy processing is executed in the second storage apparatus, and when the write frequency regarding the sixth address is higher than the fourth value and the sixth data is already stored in a seventh address of the second storage apparatus, store the fifth data in an eighth address of the second storage apparatus. 17. A method of transmitting data using a first storage apparatus and a second storage apparatus having a second response speed higher than a first response speed of the first storage apparatus, the method comprising: executing, by a first storage apparatus, when first data stored in a first physical address of the first storage apparatus corresponding to a first logical address is identical with second data stored in a second physical address of the first storage apparatus corresponding to a second logical address, a first redundancy removal processing for erasing the second data and correlating both of the first logical address and the second logical address with the first physical address; specifying a first read frequency for the first logical address; specifying a second read frequency for the second logical address; and executing, when a total value of the first read frequency and the second read frequency is greater than a first value, a transmission of the first data from the first storage apparatus to the second storage apparatus. 18. The method according to claim 17, further comprising: correlating a first hash value of the first data with the first read frequency for the first logical address; correlating a second hash value of the second data with the second read frequency for the second logical address; and determining, when the first hash value is identical with the second hash value, that the first data is identical with the second data. 19. 
The method according to claim 17, wherein the second storage apparatus is configured to execute, when third data stored in a third physical address of the second storage apparatus corresponding to a third logical address is identical with fourth data stored in a fourth physical address of the second storage apparatus corresponding to a fourth logical address, a second redundancy removal processing for erasing the fourth data and correlating both of the third logical address and the fourth logical address with the third physical address. 20. The method according to claim 19, further comprising: specifying a third read frequency for the third logical address; specifying a fourth read frequency for the fourth logical address; and executing, when a total value of the third read frequency and the fourth read frequency is less than a second value, a transmission of the third data from the second storage apparatus to the first storage apparatus.
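Claims 2, 10 and 18 above describe how identity is detected: hashes of the data at two logical addresses are compared, and on a match the second copy is erased and both logical addresses are pointed at the one remaining physical address. Below is a hedged, in-memory sketch of that mapping step; it trusts SHA-256 for identity and ignores the on-media layout, neither of which is claimed here.

```python
import hashlib

class DedupStore:
    """Toy model of redundancy removal on one storage apparatus.

    Each write hashes the data; if an identical block already exists, the
    new logical address is simply mapped to the existing physical address
    and no second copy is kept. A real device would verify the data behind
    the hash; this sketch trusts the digest for brevity.
    """
    def __init__(self):
        self.logical_to_physical = {}   # logical addr -> physical addr
        self.by_hash = {}               # digest -> physical addr
        self.blocks = {}                # physical addr -> bytes
        self._next_phys = 0

    def write(self, logical, data: bytes):
        digest = hashlib.sha256(data).digest()
        phys = self.by_hash.get(digest)
        if phys is None:                # first copy: actually store it
            phys = self._next_phys
            self._next_phys += 1
            self.blocks[phys] = data
            self.by_hash[digest] = phys
        self.logical_to_physical[logical] = phys
        return phys

    def read(self, logical):
        return self.blocks[self.logical_to_physical[logical]]


s = DedupStore()
assert s.write(0x100, b"hello") == s.write(0x200, b"hello")  # one physical copy
assert s.read(0x200) == b"hello" and len(s.blocks) == 1
```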
2,100
274,033
15,495,145
2,131
Methods and systems for memory-side shared caching include determining whether a requested memory access is directed to a shared portion of memory by referencing a lock address list in a memory controller. If the requested memory access is for the shared portion of memory, it is determined whether an associated data object is present in a memory-side cache. If the associated data object is present in the memory-side cache, the memory-side cache is accessed. If the associated data object is not present in the memory-side cache, an external memory is accessed.
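The flow in this abstract is a two-level check inside the memory controller: the lock address list decides whether an access targets shared memory at all, and only then is the memory-side cache consulted, falling back to external memory on a miss. The Python sketch below mirrors that control flow; the range-based lock list and the fill-on-miss policy are assumptions made for illustration, not the claimed design.

```python
class MemorySideCache:
    """Sketch of memory-side shared caching gated by a lock address list.

    `lock_ranges` lists (start, end) address ranges registered as shared.
    Accesses outside those ranges bypass the memory-side cache entirely;
    accesses inside them hit the cache when the object is present and
    otherwise go to external memory (and, here, fill the cache).
    """
    def __init__(self, lock_ranges, external_memory):
        self.lock_ranges = lock_ranges
        self.external = external_memory    # addr -> data (stand-in)
        self.cache = {}                     # addr -> data

    def _is_shared(self, addr):
        return any(lo <= addr < hi for lo, hi in self.lock_ranges)

    def load(self, addr):
        if not self._is_shared(addr):       # not in the lock address list
            return self.external[addr], "bypass"
        if addr in self.cache:              # memory-side cache hit
            return self.cache[addr], "cache"
        data = self.external[addr]          # miss: go to external memory
        self.cache[addr] = data             # simple fill-on-miss assumption
        return data, "external"


mem = {0x00: "private", 0x80: "lock-word"}
msc = MemorySideCache(lock_ranges=[(0x80, 0x100)], external_memory=mem)
print(msc.load(0x00))   # ('private', 'bypass')
print(msc.load(0x80))   # ('lock-word', 'external'), filling the cache
print(msc.load(0x80))   # ('lock-word', 'cache')
```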
1. A method for memory-side shared caching, comprising: determining whether a requested memory access is directed to a shared portion of memory by referencing a lock address list in a memory controller; if the requested memory access is for the shared portion of memory, determining whether an associated data object is present in a memory-side cache; if the associated data object is present in the memory-side cache, accessing the memory-side cache; and if the associated data object is not present in the memory-side cache, accessing an external memory. 2. The method of claim 1, further comprising performing a cache replacement with the associated data object if the associated data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 3. The method of claim 2, further comprising recording an address for the requested data object in a memory-side cache tag entry. 4. The method of claim 3, further comprising setting a “valid” control bit in a memory-side cache tag to indicate that an entry in the memory-side cache stores a valid data object. 5. The method of claim 3, further comprising setting a “consume” control bit in a memory-side cache tag for the associated data object to indicate whether a paired access to the associated data object has completed. 6. The method of claim 1, further comprising accessing the associated data object from a memory outside the memory controller if the requested memory access is not directed to a shared portion of memory. 7. A non-transitory computer readable storage medium comprising a computer readable program for memory-side caching, wherein the computer readable program when executed on a computer causes the computer to perform the steps of claim 1. 8. A memory controller, comprising: an input/output (I/O) interface configured to communicate with one or more external processing elements and an external memory; a memory-side cache configured to store shared data objects; a lock address list; and a cache operation module configured to determine whether a requested memory access is directed to a shared portion of memory by referencing the lock address list, to determine whether an associated data object is present in the memory-side cache if the requested memory access is for the shared portion of memory, to access the memory-side cache if the associated data object is present in the memory-side cache, and to access the external memory if the associated data object is not present in the memory-side cache. 9. The memory controller of claim 8, wherein the cache operation module is further configured to perform a cache replacement with the associated data object if the associated data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 10. The memory controller of claim 9, wherein the cache operation module is further configured to record an address for the requested data object in a memory-side cache tag entry. 11. The memory controller of claim 10, wherein the cache operation module is further configured to set a “valid” control bit in a memory-side cache tag to indicate that an entry in the memory-side cache stores a valid data object. 12. The memory controller of claim 8, wherein the cache operation module is further configured to set a “consume” control bit in a memory-side cache tag for the associated data object to indicate whether a paired access to the associated data object has completed. 13. 
The memory controller of claim 8, wherein the cache operation module is further configured to access the associated data object from the memory outside the memory controller if the requested memory access is not directed to a shared portion of memory. 14. A processing system, comprising: one or more processing elements; a main memory; and a memory controller in communication with the one or more processing elements and the main memory, the memory controller comprising: a memory-side cache configured to store shared data objects; a lock address list; and a cache operation module configured to determine whether a requested memory access is directed to a shared portion of memory by referencing the lock address list, to determine whether an associated data object is present in the memory-side cache if the requested memory access is for the shared portion of memory, to access the memory-side cache if the associated data object is present in the memory-side cache, and to access the main memory if the associated data object is not present in the memory-side cache. 15. The processing system of claim 14, wherein the cache operation module is further configured to perform a cache replacement with the requested data object if the requested data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 16. The processing system of claim 15, wherein the cache operation module is further configured to set a “valid” control bit to indicate that an entry in the memory-side cache stores a valid data object. 17. The processing system of claim 14, wherein the cache operation module is further configured to set a “consume” control bit for the requested data object to indicate whether a paired access to the requested data object has completed. 18. The processing system of claim 14, wherein the cache operation module is further configured to access the associated data object from main memory if the requested memory access is not directed to a shared portion of memory. 19. The processing system of claim 14, wherein the one or more processing elements each comprise a processor-side cache configured to store non-shared data objects.
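The “valid” and “consume” control bits recited in claims 4-5, 11-12 and 16-17 annotate each memory-side cache tag: one marks an entry as holding a live shared object, the other records whether the paired access to that object has completed. A minimal tag-entry sketch under those assumptions (the field layout and helper names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class CacheTag:
    """Per-entry tag for the memory-side cache (illustrative layout)."""
    address: int = 0
    valid: bool = False     # entry holds a valid shared data object
    consume: bool = False   # paired access to the object has completed

def install(tag: CacheTag, address: int) -> None:
    """Record a newly cached shared object, per the tag-entry claims."""
    tag.address = address
    tag.valid = True
    tag.consume = False     # only the first half of the paired access has happened

def complete_pair(tag: CacheTag) -> None:
    """Mark the paired access complete, making the entry reclaimable."""
    tag.consume = True

t = CacheTag()
install(t, 0x80)
complete_pair(t)
assert t.valid and t.consume and t.address == 0x80
```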
Methods and systems for memory-side shared caching include determining whether a requested memory access is directed to shared portion of memory by referencing a lock address list in a memory controller. If the requested memory access is for the shared portion of memory, it is determined whether an associated data object is present in a memory-side cache. If the associated data object is present in the memory-side cache, the memory-side cache is accessed. If the associated data object is not present in the memory-side cache, an external memory is accessed.1. A method for memory-side shared caching, comprising: determining whether a requested memory access is directed to shared portion of memory by referencing a lock address list in a memory controller; if the requested memory access is for the shared portion of memory, determining whether an associated data object is present in a memory-side cache; if the associated data object is present in the memory-side cache, accessing the memory-side cache; and if the associated data object is not present in the memory-side cache, accessing an external memory. 2. The method of claim 1, further comprising performing a cache replacement with the associated data object if the associated data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 3. The method of claim 2, further comprising recording an address for the requested data object in a memory-side cache tag entry. 4. The method of claim 3, further comprising setting a “valid” control bit in a memory-side cache tag to indicate that an entry in the memory-side cache stores a valid data object. 5. The method of claim 3, further comprising setting a “consume” control bit in a memory-side cache tag for the associated data object to indicate whether a paired access to the associated data object has completed. 6. The method of claim 1, further comprising accessing the associated data object from a memory outside the memory controller if the requested memory access is not directed to shared portion of memory. 7. A non-transitory computer readable storage medium comprising a computer readable program for memory-side caching, wherein the computer readable program when executed on a computer causes the computer to perform the steps of claim 1. 8. A memory controller, comprising: an input/output (I/O) interface configured to communicate with one or more external processing elements and an external memory; a memory-side cache configured to store shared data objects; a lock address list; and a cache operation module configured to determine whether a requested memory access is directed to a shared portion of memory by referencing the lock address list, to determine whether an associated data object is present in the memory-side cache if the requested memory access is for the shared portion of memory, to access the memory-side cache if the associated data object is present in the memory-side cache, and to access the external memory if the associated data object is not present in the memory-side cache. 9. The memory controller of claim 8, wherein the cache operation module is further configured to perform a cache replacement with the associated data object if the associated data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 10. 
The memory controller of claim 9, wherein the cache operation module is further configured to record an address for the requested data object in a memory-side cache tag entry. 11. The memory controller of claim 10, wherein the cache operation module is further configured to set a “valid” control bit in a memory-side cache tag to indicate that an entry in the memory-side cache stores a valid data object. 12. The memory controller of claim 8, wherein the cache operation module is further configured to set a “consume” control bit in a memory-side cache tag for the associated data object to indicate whether a paired access to the associated data object has completed. 13. The memory controller of claim 8, wherein the cache operation module is further configured to access the associated data object from the memory outside the memory controller if the requested memory access is not directed to shared portion of memory. 14. A processing system, comprising: one or more processing elements; a main memory; and a memory controller in communication with the one or more processing elements and the main memory, the memory controller comprising: a memory-side cache configured to store shared data objects; a lock address list; and a cache operation module configured to determine whether a requested memory access is directed to a shared portion of memory by referencing the lock address list, to determine whether an associated data object is present in the memory-side cache if the requested memory access is for the shared portion of memory, to access the memory-side cache if the associated data object is present in the memory-side cache, and to access the main memory if the associated data object is not present in the memory-side cache. 15. The processing system of claim 14, wherein the cache operation module is further configured to perform a cache replacement with the requested data object if the requested data object is not present in the memory-side cache to enter the requested data object in the memory-side cache. 16. The processing system of claim 15, wherein the cache operation module is further configured to set a “valid” control bit to indicate that an entry in the memory-side cache stores a valid data object. 17. The processing system of claim 14, wherein the cache operation module is further configured to set a “consume” control bit for the requested data object to indicate whether a paired access to the requested data object has completed. 18. The processing system of claim 14, wherein the cache operation module is further configured to access the associated data object from main memory if the requested memory access is not directed to a shared portion of memory. 19. The processing system of claim 14, wherein the one or more processing elements each comprise a processor-side cache configured to store non-shared data objects.
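To make the lookup flow concrete, the following is a minimal C sketch of the memory-side shared caching scheme described above. The direct-mapped cache geometry, the fixed lock address list, and helpers such as read_external_memory are illustrative assumptions for this example, not the claimed design.

```c
/* Minimal sketch of the memory-side shared caching flow described above.
   Structures, sizes, and helper names are illustrative assumptions only. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define CACHE_LINES   8
#define LOCK_LIST_LEN 4

/* One memory-side cache entry: address tag plus "valid" and "consume" control bits. */
typedef struct {
    uint64_t tag;
    bool     valid;    /* entry holds a valid data object */
    bool     consume;  /* paired access to the object has completed */
    uint64_t data;     /* stand-in for the cached data object */
} cache_entry_t;

static cache_entry_t msc[CACHE_LINES];   /* memory-side cache inside the controller */
static uint64_t lock_list[LOCK_LIST_LEN] = {0x1000, 0x2000, 0x3000, 0x4000};

/* Is the requested access directed to a shared portion of memory? */
static bool in_lock_list(uint64_t addr) {
    for (int i = 0; i < LOCK_LIST_LEN; i++)
        if (lock_list[i] == addr) return true;
    return false;
}

static uint64_t read_external_memory(uint64_t addr) {
    return addr ^ 0xABCDULL;   /* placeholder for a real external-memory access */
}

uint64_t memory_side_access(uint64_t addr) {
    if (!in_lock_list(addr)) {
        /* Not shared: bypass the memory-side cache entirely. */
        return read_external_memory(addr);
    }
    cache_entry_t *e = &msc[addr % CACHE_LINES];   /* direct-mapped lookup */
    if (e->valid && e->tag == addr) {
        e->consume = true;                          /* paired access completed */
        return e->data;                             /* hit in the memory-side cache */
    }
    /* Miss: access external memory, then perform a cache replacement. */
    uint64_t value = read_external_memory(addr);
    e->tag     = addr;
    e->valid   = true;
    e->consume = false;
    e->data    = value;
    return value;
}

int main(void) {
    printf("shared miss : %llx\n", (unsigned long long)memory_side_access(0x1000));
    printf("shared hit  : %llx\n", (unsigned long long)memory_side_access(0x1000));
    printf("non-shared  : %llx\n", (unsigned long long)memory_side_access(0x5000));
    return 0;
}
```

The second call to the same shared address hits the memory-side cache, while the non-shared address skips the lock-list path and goes straight to external memory.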
2,100
274,034
15,495,296
2,131
Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation to the first page is allowed. If the translation request instead corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.
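As an illustration of the translation-request handling just described, here is a hedged C sketch. The PTE field layout, the ACCESS_OK/SILENT_RETRY result codes, and the invalidate_cached_translations stub are assumptions made for this example; they are not taken from the disclosure.

```c
/* Sketch of the migration-pending flow: reads of the old copy proceed,
   writes are deferred via a silent retry until the copy finishes. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned long pfn;       /* physical frame number */
    bool read_ok;
    bool write_ok;
} pte_t;

typedef enum { ACCESS_OK, SILENT_RETRY } mmu_result_t;

/* Encode "migration pending" by disabling both read and write permissions. */
static void mark_migration_pending(pte_t *pte) {
    pte->read_ok  = false;
    pte->write_ok = false;
}

static bool migration_pending(const pte_t *pte) {
    return !pte->read_ok && !pte->write_ok;
}

/* Translation-request handling; ordinary permission checks for the
   non-pending case are omitted to keep the example short. */
mmu_result_t translate(pte_t *pte, bool is_write) {
    if (migration_pending(pte)) {
        if (!is_write)
            return ACCESS_OK;      /* reads of the first page are still allowed */
        return SILENT_RETRY;       /* writes are blocked; the client retries later */
    }
    return ACCESS_OK;
}

static void invalidate_cached_translations(const pte_t *pte) {
    (void)pte;   /* stand-in for invalidating cached translations of this PTE */
}

/* After the copy completes: point at the new frame, clear the indication. */
void finish_migration(pte_t *pte, unsigned long new_pfn) {
    pte->pfn      = new_pfn;
    pte->read_ok  = true;
    pte->write_ok = true;
    invalidate_cached_translations(pte);
}

int main(void) {
    pte_t pte = { .pfn = 100, .read_ok = true, .write_ok = true };
    mark_migration_pending(&pte);
    printf("read  during migration: %s\n", translate(&pte, false) == ACCESS_OK ? "allowed" : "retry");
    printf("write during migration: %s\n", translate(&pte, true) == SILENT_RETRY ? "silent retry" : "allowed");
    finish_migration(&pte, 200);
    printf("write after migration : %s\n", translate(&pte, true) == ACCESS_OK ? "allowed" : "retry");
    return 0;
}
```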
1. A system comprising: a memory subsystem; and a processor coupled to the memory subsystem; wherein the system is configured to: detect that a first page will be migrated from a first memory location to a second memory location in the memory subsystem; locate a first page table entry (PTE) corresponding to the first page; and store a migration pending indication in the first PTE. 2. The system as recited in claim 1, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the system is configured to: if the translation request corresponds to a read request targeting the first page, allow a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, prevent a write operation from being performed to the page and generate a silent retry request. 3. The system as recited in claim 2, wherein the system is configured to convey the silent retry request to a requesting client. 4. The system as recited in claim 3, wherein the requesting client is configured to retry the write request at a later point in time. 5. The system as recited in claim 1, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 6. The system as recited in claim 1, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the system is configured to: clear the migration pending indication; and generate an invalidation request for any cached translations corresponding to the first PTE. 7. The system as recited in claim 1, wherein: the memory subsystem comprises a first memory and a second memory; the first memory location is in the first memory; and the second memory location is in the second memory. 8. A method comprising: detecting by a computing system that a first page will be migrated from a first memory location to a second memory location; locating a first page table entry (PTE) corresponding to the first page; and storing a migration pending indication in the first PTE. 9. The method as recited in claim 8, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the method further comprises: if the translation request corresponds to a read request targeting the first page, allowing a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, preventing a write operation from being performed to the page and generating a silent retry request. 10. The method as recited in claim 9, wherein responsive to detecting the migration pending indication in the PTE, the method further comprising conveying the silent retry request to a requesting client. 11. The method as recited in claim 10, further comprising the requesting client retrying the write request at a later point in time. 12. The method as recited in claim 8, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 13. 
The method as recited in claim 8, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the method further comprising: clearing the migration pending indication; and generating an invalidation request for any cached translations corresponding to the first PTE. 14. The method as recited in claim 8, wherein the first memory location is in a first memory, and wherein the second memory location is in a second memory. 15. An apparatus comprising: a memory subsystem; and a memory management unit (MMU); wherein the MMU is configured to: detect that a first page will be migrated from a first memory location to a second memory location in the memory subsystem; locate a first page table entry (PTE) corresponding to the first page; and store a migration pending indication in the first PTE. 16. The apparatus as recited in claim 15, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the MMU is configured to: if the translation request corresponds to a read request targeting the first page, allow a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, prevent a write operation from being performed to the page and generate a silent retry request. 17. The apparatus as recited in claim 16, wherein responsive to detecting the migration pending indication in the PTE, the apparatus is configured to convey the silent retry request to a requesting client. 18. The apparatus as recited in claim 17, wherein the requesting client is configured to retry the write request at a later point in time. 19. The apparatus as recited in claim 15, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 20. The apparatus as recited in claim 15, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the apparatus is configured to: clear the migration pending indication; and generate an invalidation request for any cached translations corresponding to the first PTE.
Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the MMU and the translation request corresponds to a read request, a read operation is allowed to the first page. Otherwise, if the translation request corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.1. A system comprising: a memory subsystem; and a processor coupled to the memory subsystem; wherein the system is configured to: detect that a first page will be migrated from a first memory location to a second memory location in the memory subsystem; locate a first page table entry (PTE) corresponding to the first page; and store a migration pending indication in the first PTE. 2. The system as recited in claim 1, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the system is configured to: if the translation request corresponds to a read request targeting the first page, allow a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, prevent a write operation from being performed to the page and generate a silent retry request. 3. The system as recited in claim 2, wherein the system is configured to convey the silent retry request to a requesting client. 4. The system as recited in claim 3, wherein the requesting client is configured to retry the write request at a later point in time. 5. The system as recited in claim 1, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 6. The system as recited in claim 1, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the system is configured to: clear the migration pending indication; and generate an invalidation request for any cached translations corresponding to the first PTE. 7. The system as recited in claim 1, wherein: the memory subsystem comprises a first memory and a second memory; the first memory location is in the first memory; and the second memory location is in the second memory. 8. A method comprising: detecting by a computing system that a first page will be migrated from a first memory location to a second memory location; locating a first page table entry (PTE) corresponding to the first page; and storing a migration pending indication in the first PTE. 9. The method as recited in claim 8, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the method further comprises: if the translation request corresponds to a read request targeting the first page, allowing a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, preventing a write operation from being performed to the page and generating a silent retry request. 10. 
The method as recited in claim 9, wherein responsive to detecting the migration pending indication in the PTE, the method further comprising conveying the silent retry request to a requesting client. 11. The method as recited in claim 10, further comprising the requesting client retrying the write request at a later point in time. 12. The method as recited in claim 8, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 13. The method as recited in claim 8, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the method further comprising: clearing the migration pending indication; and generating an invalidation request for any cached translations corresponding to the first PTE. 14. The method as recited in claim 8, wherein the first memory location is in a first memory, and wherein the second memory location is in a second memory. 15. An apparatus comprising: a memory subsystem; and a memory management unit (MMU); wherein the MMU is configured to: detect that a first page will be migrated from a first memory location to a second memory location in the memory subsystem; locate a first page table entry (PTE) corresponding to the first page; and store a migration pending indication in the first PTE. 16. The apparatus as recited in claim 15, wherein responsive to detecting a translation request which targets the first PTE and detecting the migration pending indication in the first PTE, the MMU is configured to: if the translation request corresponds to a read request targeting the first page, allow a read operation to be performed to the first page; and if the translation request corresponds to a write request targeting the first page, prevent a write operation from being performed to the page and generate a silent retry request. 17. The apparatus as recited in claim 16, wherein responsive to detecting the migration pending indication in the PTE, the apparatus is configured to convey the silent retry request to a requesting client. 18. The apparatus as recited in claim 17, wherein the requesting client is configured to retry the write request at a later point in time. 19. The apparatus as recited in claim 15, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions of the first PTE. 20. The apparatus as recited in claim 15, wherein responsive to the migration of the first page from the first memory location to the second memory location being completed, the apparatus is configured to: clear the migration pending indication; and generate an invalidation request for any cached translations corresponding to the first PTE.
2,100
274,035
15,495,305
2,131
Methods and systems for processing Physical Region Page (PRP)/Scatter Gather List (SGL) entries include splitting a command to be processed into a plurality of sub-commands and storing the sub-commands in a first set of buffers among a plurality of buffers. The sub-commands in the first set of buffers are processed, and while they are being processed, at least one sub-command that remains after the first set of buffers is filled is stored in a second set of buffers. After the sub-commands from the first set of buffers have been processed, the at least one sub-command in the second set of buffers is processed.
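The double-buffered flow in this abstract can be sketched in C as follows. The two-set ping-pong layout, the SET_SIZE of four sub-commands per set, and the valid indicator flag are illustrative assumptions; real PRP/SGL processing would stage the second set concurrently with processing the first rather than sequentially as shown.

```c
/* Illustrative sketch of double-buffered sub-command processing. */
#include <stdbool.h>
#include <stdio.h>

#define SET_SIZE 4   /* sub-commands held by each buffer set (assumed) */

typedef struct {
    int  subcmds[SET_SIZE];
    int  count;
    bool valid;      /* buffer index indicator: set holds valid data for processing */
} buffer_set_t;

static void process_set(buffer_set_t *set) {
    for (int i = 0; i < set->count; i++)
        printf("processing sub-command %d\n", set->subcmds[i]);
    set->count = 0;
    set->valid = false;            /* cleared once the set has been consumed */
}

/* Split a command of n sub-commands across two buffer sets and process them in order. */
void handle_command(int n) {
    buffer_set_t first = {0}, second = {0};

    /* Store the first subset and set its indicator. */
    for (int i = 0; i < n && i < SET_SIZE; i++)
        first.subcmds[first.count++] = i;
    first.valid = true;

    /* While the first set is (conceptually) being processed, stage any
       remaining sub-commands in the second set; shown sequentially here. */
    for (int i = SET_SIZE; i < n && i < 2 * SET_SIZE; i++)
        second.subcmds[second.count++] = i;
    second.valid = (second.count > 0);

    if (first.valid)  process_set(&first);
    if (second.valid) process_set(&second);
}

int main(void) {
    handle_command(6);   /* 4 sub-commands land in the first set, 2 in the second */
    return 0;
}
```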
1. A method for processing Physical Region Page (PRP)/Scatter Gather List (SGL) entries comprising: splitting a command to be processed into a plurality of sub-commands; storing a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; processing a portion of the first subset of the plurality of sub-commands from the first set of buffers; storing a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of the plurality of sub-commands in the first set of buffers is being processed; and processing the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 2. The method of claim 1, further comprising: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, setting a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 3. The method of claim 2, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 4. The method of claim 3, further comprising: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clearing a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 5. The method of claim 1, further comprising: storing a configuration element in the first set of buffers that indicates a current status of the processing of the portion of the first subset of the plurality of sub-commands; and updating the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 6. The method of claim 1, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and further comprising: receiving a second command to be processed; splitting the second command to be processed into a second plurality of sub-commands; storing a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and processing a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the first plurality of sub-commands from the second set of buffers. 7. The method of claim 1, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers comprises storing data associated with the portion of the first subset of the plurality of sub-commands in at least one third buffer of the plurality of buffers that is different from the first set of buffers. 8. 
A system for processing Physical Region Page (PRP)/Scatter Gather List (SGL) entries comprising: a processor; a non-transitory memory comprising instructions, the instructions configured to cause the processor to: split a command to be processed into a plurality of sub-commands; store a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; process a portion of the first subset of the plurality of sub-commands from the first set of buffers; store a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of plurality of sub-commands in the first set of buffers is being processed; and process the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 9. The system of claim 8, wherein the instructions are further configured to cause the processor to: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, set a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 10. The system of claim 9, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 11. The system of claim 10, wherein the instructions are further configured to cause the processor to: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clear a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 12. The system of claim 8, further wherein the instructions are further configured to cause the processor to: store a configuration element in the first set of buffers that indicates a current status of the processing of the first subset of the plurality of sub-commands; and update the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 13. The system of claim 8, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and wherein the instructions are further configured to cause the processor to: receive a second command to be processed; split the second command to be processed into a second plurality of sub-commands; store a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and process a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the plurality of sub-commands from the second set of buffers. 14. The system of claim 8, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers comprises storing data associated with the portion of the first subset of the plurality of sub-commands in at least one third buffer of the plurality of buffers that is different from the first set of buffers. 15. 
A computer program product comprising: a tangible non-transitory computer readable storage medium comprising computer readable program code embodied in the medium that when executed by at least one processor causes the at least one processor to perform operations comprising: splitting a command to be processed into a plurality of sub-commands; storing a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; processing a portion of the first subset of the plurality of sub-commands from the first set of buffers; storing a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of the plurality of sub-commands in the first set of buffers is being processed; and processing the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 16. The computer program product of claim 15, wherein the operations further comprise: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, setting a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 17. The computer program product of claim 16, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 18. The computer program product of claim 17, wherein the operations further comprise: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clearing a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 19. The computer program product of claim 15, wherein the operations further comprise: storing a configuration element in the first set of buffers that indicates a current status of the processing of the first subset of the plurality of sub-commands; and updating the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 20. The computer program product of claim 15, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and further comprising: receiving a second command to be processed; splitting the second command to be processed into a second plurality of sub-commands; storing a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and processing a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the first plurality of sub-commands from the second set of buffers.
Methods and systems for processing Physical Region Pages (PRP)/Scatter Gather Lists (SGL) entries include splitting a command to be processed into a plurality of sub-commands, storing said plurality of sub-commands in a first set of buffers among a plurality of buffers, processing said plurality of sub-commands from said first set of buffers, storing at least one sub-command that remains after storing the first set of buffers in a second set of buffers, while said plurality of sub-commands in the first set of buffers is being processed and processing said at least one sub-command from said second set of buffers, after processing sub-commands from said first set of buffers.1. A method for processing Physical Region Page (PRP)/Scatter Gather List (SGL) entries comprising: splitting a command to be processed into a plurality of sub-commands; storing a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; processing a portion of the first subset of the plurality of sub-commands from the first set of buffers; storing a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of the plurality of sub-commands in the first set of buffers is being processed; and processing the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 2. The method of claim 1, further comprising: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, setting a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 3. The method of claim 2, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 4. The method of claim 3, further comprising: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clearing a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 5. The method of claim 1, further comprising: storing a configuration element in the first set of buffers that indicates a current status of the processing of the portion of the first subset of the plurality of sub-commands; and updating the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 6. 
The method of claim 1, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and further comprising: receiving a second command to be processed; splitting the second command to be processed into a second plurality of sub-commands; storing a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and processing a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the first plurality of sub-commands from the second set of buffers. 7. The method of claim 1, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers comprises storing data associated with the portion of the first subset of the plurality of sub-commands in at least one third buffer of the plurality of buffers that is different from the first set of buffers. 8. A system for processing Physical Region Page (PRP)/Scatter Gather List (SGL) entries comprising: a processor; a non-transitory memory comprising instructions, the instructions configured to cause the processor to: split a command to be processed into a plurality of sub-commands; store a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; process a portion of the first subset of the plurality of sub-commands from the first set of buffers; store a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of plurality of sub-commands in the first set of buffers is being processed; and process the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 9. The system of claim 8, wherein the instructions are further configured to cause the processor to: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, set a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 10. The system of claim 9, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 11. The system of claim 10, wherein the instructions are further configured to cause the processor to: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clear a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 12. The system of claim 8, further wherein the instructions are further configured to cause the processor to: store a configuration element in the first set of buffers that indicates a current status of the processing of the first subset of the plurality of sub-commands; and update the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 13. 
The system of claim 8, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and wherein the instructions are further configured to cause the processor to: receive a second command to be processed; split the second command to be processed into a second plurality of sub-commands; store a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and process a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the plurality of sub-commands from the second set of buffers. 14. The system of claim 8, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers comprises storing data associated with the portion of the first subset of the plurality of sub-commands in at least one third buffer of the plurality of buffers that is different from the first set of buffers. 15. A computer program product comprising: a tangible non-transitory computer readable storage medium comprising computer readable program code embodied in the medium that when executed by at least one processor causes the at least one processor to perform operations comprising: splitting a command to be processed into a plurality of sub-commands; storing a first subset of the plurality of sub-commands in a first set of buffers among a plurality of buffers; processing a portion of the first subset of the plurality of sub-commands from the first set of buffers; storing a second subset of the plurality of sub-commands in a second set of buffers among the plurality of buffers, while the first subset of the plurality of sub-commands in the first set of buffers is being processed; and processing the second subset of the plurality of sub-commands from the second set of buffers, after processing the portion of the first subset of the plurality of sub-commands from the first set of buffers. 16. The computer program product of claim 15, wherein the operations further comprise: responsive to storing the first subset of the plurality of sub-commands in the first set of buffers among the plurality of buffers, setting a first buffer index indicator that indicates that the first set of buffers among the plurality of buffers includes valid data for processing. 17. The computer program product of claim 16, wherein processing the portion of the first subset of the plurality of sub-commands from the first set of buffers is performed responsive to detection that the first buffer index indicator corresponding to the first set of buffers has been set. 18. The computer program product of claim 17, wherein the operations further comprise: responsive to processing the second subset of the plurality of sub-commands from the second set of buffers, clearing a second buffer index indicator that indicates that the second set of buffers among the plurality of buffers no longer includes valid data for processing. 19. 
The computer program product of claim 15, wherein the operations further comprise: storing a configuration element in the first set of buffers that indicates a current status of the processing of the first subset of the plurality of sub-commands; and updating the configuration element in the first set of buffers after processing the portion of the first subset of the plurality of sub-commands. 20. The computer program product of claim 15, wherein the command to be processed is a first command, wherein the plurality of sub-commands is a first plurality of sub-commands, and further comprising: receiving a second command to be processed; splitting the second command to be processed into a second plurality of sub-commands; storing a third subset of the second plurality of sub-commands in a third set of buffers among the plurality of buffers; and processing a portion of the third subset of the second plurality of sub-commands from the third set of buffers after processing the portion of the first subset of the first plurality of sub-commands from the first set of buffers and before processing the second subset of the first plurality of sub-commands from the second set of buffers.
2,100
274,036
15,494,618
2,131
A computer-implemented method includes pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload. A page fault occurring during translation of a virtual memory address of data required by the workload is detected. Responsive to the page fault, the DAT structure is traversed. The DAT structure includes one or more DAT tables, and each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use. Traversing the DAT structure includes pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred. The data in the first page frame is processed responsive to the page fault.
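A simplified C sketch of the pseudo-invalidation idea follows. It collapses the DAT hierarchy to a single table and invents an in_use bit layout and a copy_to_closer_frame helper purely for illustration; the iterative re-validation of higher-level entries in a multi-level DAT structure is more involved than shown.

```c
/* Simplified sketch: pseudo-invalidation marks entries invalid but keeps an
   in-use bit, so only pages the moved workload actually touches take a fault
   and get relocated/revalidated. */
#include <stdbool.h>
#include <stdio.h>

#define ENTRIES 4

typedef struct {
    unsigned long frame;   /* page frame referenced by this entry */
    bool valid;            /* architectural valid bit */
    bool in_use;           /* in-use bit; survives pseudo-invalidation */
} dat_entry_t;

/* Pseudo-invalidate: mark every entry invalid without touching the in-use bits. */
void pseudo_invalidate_table(dat_entry_t *table, int n) {
    for (int i = 0; i < n; i++)
        table[i].valid = false;
}

static unsigned long copy_to_closer_frame(unsigned long old_frame) {
    return old_frame + 1000;   /* placeholder: allocate a nearby frame and copy data */
}

/* Page-fault path: an in-use but invalid entry signals pseudo-invalidation
   rather than a genuinely missing mapping. */
void handle_fault(dat_entry_t *table, int index, bool workload_moved) {
    dat_entry_t *e = &table[index];
    if (!e->in_use) {
        printf("entry %d: real fault, no mapping\n", index);
        return;
    }
    if (workload_moved)
        e->frame = copy_to_closer_frame(e->frame);
    e->valid = true;   /* re-validate only the entry the workload actually touched */
    printf("entry %d: revalidated, frame %lu\n", index, e->frame);
}

int main(void) {
    dat_entry_t table[ENTRIES] = {
        { .frame = 10, .valid = true,  .in_use = true  },
        { .frame = 20, .valid = true,  .in_use = true  },
        { .frame = 0,  .valid = false, .in_use = false },
        { .frame = 40, .valid = true,  .in_use = true  },
    };
    pseudo_invalidate_table(table, ENTRIES);
    handle_fault(table, 1, true);   /* touched after the workload moved clusters */
    handle_fault(table, 2, true);   /* never mapped: genuine fault */
    return 0;
}
```

Only the page faulted on is copied and revalidated; untouched entries stay pseudo-invalid at no further cost.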
1. A computer-implemented method comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating, by a computer processor, each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 2. The computer-implemented method of claim 1, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 3. The computer-implemented method of claim 2, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 4. The computer-implemented method of claim 2, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 5. The computer-implemented method of claim 2, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 6. The computer-implemented method of claim 5, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry. 7. The computer-implemented method of claim 1, wherein the pseudo-invalidating each of the one or more DAT entries that are involved in translating the virtual memory address for which the page fault occurred comprises marking each of the one or more DAT entries as invalid. 8. 
A system comprising: a memory having computer-readable instructions; and one or more processors for executing the computer-readable instructions, the computer-readable instructions comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 9. The system of claim 8, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 10. The system of claim 9, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 11. The system of claim 9, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 12. The system of claim 9, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 13. The system of claim 12, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry. 14. The system of claim 8, wherein the pseudo-invalidating each of the one or more DAT entries that are involved in translating the virtual memory address for which the page fault occurred comprises marking each of the one or more DAT entries as invalid. 15. 
A computer-program product for selectively processing data associated with a workload, the computer-program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 16. The computer-program product of claim 15, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 17. The computer-program product of claim 16, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 18. The computer-program product of claim 16, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 19. The computer-program product of claim 16, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 20. The computer-program product of claim 19, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry.
A computer-implemented method includes pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload. A page fault occurring during translation of a virtual memory address of data required by the workload is detected. Responsive to the page fault, the DAT structure is traversed. The DAT structure includes one or more DAT tables, and each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use. Traversing the DAT structure includes pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred. The data in the first page frame is processed responsive to the page fault.1. A computer-implemented method comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating, by a computer processor, each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 2. The computer-implemented method of claim 1, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 3. The computer-implemented method of claim 2, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 4. The computer-implemented method of claim 2, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 5. The computer-implemented method of claim 2, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 6. 
The computer-implemented method of claim 5, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry. 7. The computer-implemented method of claim 1, wherein the pseudo-invalidating each of the one or more DAT entries that are involved in translating the virtual memory address for which the page fault occurred comprises marking each of the one or more DAT entries as invalid. 8. A system comprising: a memory having computer-readable instructions; and one or more processors for executing the computer-readable instructions, the computer-readable instructions comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 9. The system of claim 8, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 10. The system of claim 9, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 11. The system of claim 9, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 12. The system of claim 9, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 13. 
The system of claim 12, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry. 14. The system of claim 8, wherein the pseudo-invalidating each of the one or more DAT entries that are involved in translating the virtual memory address for which the page fault occurred comprises marking each of the one or more DAT entries as invalid. 15. A computer-program product for selectively processing data associated with a workload, the computer-program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: pseudo-invalidating a first Dynamic Address Translation (DAT) table of a DAT structure associated with a workload, wherein the pseudo-invalidating the first DAT table comprises marking each DAT entry in the first DAT table as invalid; detecting a page fault occurring during translation of a virtual memory address of data required by the workload; traversing the DAT structure, responsive to the page fault, wherein the DAT structure is configured to translate virtual memory addresses to physical memory addresses, wherein the DAT structure comprises one or more DAT tables, and wherein each DAT entry in each of the one or more DAT tables is associated with an in-use bit indicating whether the DAT entry is in use; wherein the traversing comprises: pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred; and identifying a first page frame referenced by the virtual memory address for which the page fault occurred; and processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload. 16. The computer-program product of claim 15, wherein the pseudo-invalidating the first DAT table of the DAT structure is responsive to detecting that the workload has been moved from a first cluster of processor cores to a second cluster of processor cores. 17. The computer-program product of claim 16, wherein the processing the data in the first page frame responsive to the page fault occurring during translation of the virtual memory address of the data required by the workload comprises: identifying a second page frame located closer than the first page frame to the workload that has been moved; and copying data from the first page frame to the second page frame, responsive to the workload having been moved and responsive to the page fault. 18. The computer-program product of claim 16, wherein a first DAT entry of the first DAT table is associated with a valid in-use bit when pseudo-invalidated. 19. 
The computer-program product of claim 16, wherein the pseudo-invalidating each of one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred further comprises iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred. 20. The computer-program product of claim 19, wherein the iteratively pseudo-invalidating each DAT entry referenced by a higher-level DAT entry involved in translating the virtual memory address for which the page fault occurred comprises, for a first DAT entry of the one or more DAT entries in the DAT structure that are involved in translating the virtual memory address for which the page fault occurred: identifying a lower-level DAT entry involved in translating the virtual memory address for which the page fault occurred; pseudo-invalidating the lower-level DAT entry; and validating the first DAT entry.
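The dynamic address translation record above hinges on two mechanisms: pseudo-invalidation (entries are marked invalid while their in-use bits stay set) and a fault-driven traversal that pseudo-invalidates each entry on the translation path before identifying the page frame. The following is a minimal Python sketch of that idea, assuming a simplified two-level table with 512-entry levels; the names DatEntry, pseudo_invalidate_table, and handle_page_fault, and the constants, are illustrative and not taken from the claims.

```python
PAGE_SHIFT = 12          # 4 KiB pages (illustrative)
BITS_PER_LEVEL = 9       # 512 entries per DAT table (illustrative)

class DatEntry:
    """One DAT entry: 'valid' gates translation, 'in_use' records residency."""
    def __init__(self, target):
        self.target = target          # a lower-level table (dict) or a page frame number
        self.valid = True
        self.in_use = True

def pseudo_invalidate_table(table):
    """Mark every entry of a DAT table invalid while leaving its in-use bit untouched."""
    for entry in table.values():
        entry.valid = False

def level_index(vaddr, level, num_levels):
    """Index into the DAT table at `level` for virtual address `vaddr`."""
    shift = PAGE_SHIFT + BITS_PER_LEVEL * (num_levels - 1 - level)
    return (vaddr >> shift) & ((1 << BITS_PER_LEVEL) - 1)

def handle_page_fault(top_table, vaddr, num_levels=2):
    """Traverse the DAT structure for the faulting address: pseudo-invalidate each
    entry on the path, re-validate the higher-level entry once its child has been
    visited, and return the page frame the faulting address references."""
    table, prev = top_table, None
    for level in range(num_levels):
        entry = table[level_index(vaddr, level, num_levels)]
        entry.valid = False               # pseudo-invalidate the entry on the path
        if prev is not None:
            prev.valid = True             # its parent has now been fully walked
        prev = entry
        if level < num_levels - 1:
            table = entry.target          # descend to the lower-level table
    return prev.target                    # the leaf entry references the page frame

# Build a two-level structure mapping one virtual page to frame 42, then fault on it.
vaddr = 0x1234_5000
leaf = {level_index(vaddr, 1, 2): DatEntry(target=42)}
top = {level_index(vaddr, 0, 2): DatEntry(target=leaf)}
pseudo_invalidate_table(top)              # e.g. after the workload migrates
print(handle_page_fault(top, vaddr))      # prints 42
```

One plausible reason for leaving the in-use bits set during pseudo-invalidation is that a later pass can still tell which translations were genuinely resident; the claims themselves only state that each entry carries such a bit.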
2,100
274,037
15,494,601
2,131
Systems and methods (including hardware and software) are disclosed where all common RAID storage levels are implemented for multi-queue hardware by isolating RAID stripes to a single central processing unit (CPU) core affinity. Fixed CPU affinity is used for any piece of data that may be modified. Instead of blocking CPUs that must access or modify a piece of data, the request is efficiently moved to the CPU that owns that data. In this manner the system is completely asynchronous, efficient, and scalable.
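Since this abstract pins each RAID stripe to a fixed CPU core and forwards requests instead of locking shared state, the routing reduces to two pure functions of the data address. Below is a minimal Python sketch of that routing under assumed parameters (STRIPE_SIZE, NUM_CORES), with plain per-core deques standing in for the single-consumer queues a pinned thread would drain; none of these names come from the claims.

```python
from collections import deque

STRIPE_SIZE = 1 << 20          # 1 MiB stripes (illustrative)
NUM_CORES = 8                  # cores participating in stripe ownership (illustrative)

# One work queue per core; in a real system each would be drained only by a
# thread pinned to that core, so no locking of stripe state is needed.
core_queues = [deque() for _ in range(NUM_CORES)]

def stripe_of(address: int) -> int:
    """Stripe number is a pure function of the data address."""
    return address // STRIPE_SIZE

def owner_core(stripe: int) -> int:
    """CPU core number is a pure function of the stripe number (fixed affinity)."""
    return stripe % NUM_CORES

def submit_io(first_core: int, address: int, payload: bytes):
    """Route the request from the receiving core to the core that owns the stripe,
    instead of taking a lock on shared stripe state."""
    stripe = stripe_of(address)
    target = owner_core(stripe)
    core_queues[target].append(
        {"stripe": stripe, "address": address, "payload": payload,
         "reply_to": first_core}   # the owning core routes the completion back
    )

# Example: an IO arriving on core 0 for address 5 MiB lands on the queue of core 5.
submit_io(first_core=0, address=5 * (1 << 20), payload=b"data")
print(core_queues[owner_core(stripe_of(5 * (1 << 20)))])
```

In this arrangement no core ever takes a lock on a stripe it does not own; completions travel back to the submitting core through the same queueing mechanism (the reply_to field in the sketch).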
1. A method for lock-free RAID implementation, comprising: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number. 2. The method of claim 1, further comprising updating local data structure for the stripe in the computer CPU cache on the second core. 3. The method of claim 2, further comprising checking and updating data cache in a local memory controller on the second core. 4. The method of claim 3, further comprising updating data on drives consistently for the stripe. 5. The method of claim 4, further comprising routing the request back to the first core. 6. The method of claim 5, further comprising completing the IO request. 7. The method of claim 1, wherein neither the first core nor the second core have a lock on the stripe. 8. A storage appliance, comprising: a plurality of central processing unit (CPU) sockets, each socket including a plurality of cores; wherein each core operates independently without locks. 9. The storage appliance of claim 8, wherein when acted upon by a processor, is adapted for performing the following steps: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number. 10. The storage appliance of claim 9, wherein the steps further comprise updating local data structure for the stripe in the computer CPU cache on the second core. 11. The storage appliance of claim 10, wherein the steps further comprise checking and updating data cache in a local memory controller on the second core. 12. The storage appliance of claim 11, wherein the steps further comprise updating data on drives consistently for the stripe. 13. The storage appliance of claim 12, wherein the steps further comprise routing the request back to the first core. 14. The storage appliance of claim 13, wherein the steps further comprise completing the IO request. 15. The storage appliance of claim 9, wherein none of the plurality of cores has a lock on the stripe. 16. A storage appliance, comprising: a plurality of central processing unit (CPU) sockets, each socket including a plurality of cores; wherein each core operates independently without locks; wherein when acted upon by a processor, is adapted for performing the following steps: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number; wherein none of the plurality of cores has a lock on the stripe. 17. The storage appliance of claim 16, wherein the steps further comprise updating local data structure for the stripe in the computer CPU cache on the second core. 18. The storage appliance of claim 17, wherein the steps further comprise checking and updating data cache in a local memory controller on the second core. 19. The storage appliance of claim 18, wherein the steps further comprise updating data on drives consistently for the stripe. 20. 
The storage appliance of claim 19, wherein the steps further comprise routing the request back to the first core.
Systems and methods (including hardware and software) are disclosed where all common RAID storage levels are implemented for multi-queue hardware by isolating RAID stripes to a single central processing unit (CPU) core affinity. Fixed CPU affinity is used for any piece of data that may be modified. Instead of blocking CPUs that must access or modify a piece of data, the request is efficiently moved to the CPU that owns that data. In this manner the system is completely asynchronous, efficient, and scalable.1. A method for lock-free RAID implementation, comprising: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number. 2. The method of claim 1, further comprising updating local data structure for the stripe in the computer CPU cache on the second core. 3. The method of claim 2, further comprising checking and updating data cache in a local memory controller on the second core. 4. The method of claim 3, further comprising updating data on drives consistently for the stripe. 5. The method of claim 4, further comprising routing the request back to the first core. 6. The method of claim 5, further comprising completing the IO request. 7. The method of claim 1, wherein neither the first core nor the second core have a lock on the stripe. 8. A storage appliance, comprising: a plurality of central processing unit (CPU) sockets, each socket including a plurality of cores; wherein each core operates independently without locks. 9. The storage appliance of claim 8, wherein when acted upon by a processor, is adapted for performing the following steps: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number. 10. The storage appliance of claim 9, wherein the steps further comprise updating local data structure for the stripe in the computer CPU cache on the second core. 11. The storage appliance of claim 10, wherein the steps further comprise checking and updating data cache in a local memory controller on the second core. 12. The storage appliance of claim 11, wherein the steps further comprise updating data on drives consistently for the stripe. 13. The storage appliance of claim 12, wherein the steps further comprise routing the request back to the first core. 14. The storage appliance of claim 13, wherein the steps further comprise completing the IO request. 15. The storage appliance of claim 9, wherein none of the plurality of cores has a lock on the stripe. 16. A storage appliance, comprising: a plurality of central processing unit (CPU) sockets, each socket including a plurality of cores; wherein each core operates independently without locks; wherein when acted upon by a processor, is adapted for performing the following steps: receiving at a first core a client input/output (IO) request having a data address; computing a stripe number as a function of the data address; computing a central processing unit (CPU) core number as a function of the stripe number; routing the request to a second core having the computed CPU core number; wherein none of the plurality of cores has a lock on the stripe. 17. 
The storage appliance of claim 16, wherein the steps further comprise updating local data structure for the stripe in the computer CPU cache on the second core. 18. The storage appliance of claim 17, wherein the steps further comprise checking and updating data cache in a local memory controller on the second core. 19. The storage appliance of claim 18, wherein the steps further comprise updating data on drives consistently for the stripe. 20. The storage appliance of claim 19, wherein the steps further comprise routing the request back to the first core.
2,100
274,038
15,495,902
2,131
Example implementations relate to command source verification. An example device can include instructions executable to send a command via a predefined path to a predefined location within a memory resource storing instructions executable to verify a source of the command using a predefined protocol and execute the command in response to the source verification.
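The device described here verifies a command's source from where and how the command arrived rather than from its contents: it must land in a predefined buffer inside a privileged memory region, and a flag must carry an expected value. The sketch below models that protocol in Python with made-up offsets and helper names (SharedRegion, send_command, verify_and_execute); it is an illustration of the predefined-path idea under stated assumptions, not the claimed implementation.

```python
from dataclasses import dataclass

# Layout of the privileged shared memory region (illustrative offsets).
REGION_BASE, REGION_SIZE = 0x1000, 0x1000
CMD_BUF_OFFSET, CMD_BUF_SIZE = 0x100, 0x80
FLAG_OFFSET = 0x0

@dataclass
class SharedRegion:
    memory: bytearray

    def write(self, offset: int, data: bytes):
        self.memory[offset:offset + len(data)] = data

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.memory[offset:offset + length])

def send_command(region: SharedRegion, command_id: int, body: bytes):
    """Privileged sender: fill the predefined command buffer, then set the flag
    to a value that matches the command's identification."""
    region.write(CMD_BUF_OFFSET, body.ljust(CMD_BUF_SIZE, b"\x00"))
    region.write(FLAG_OFFSET, command_id.to_bytes(4, "little"))

def verify_and_execute(region: SharedRegion, buf_addr: int, expected_id: int) -> bool:
    """Receiver: execute only if the command buffer lies inside the privileged
    region and the flag carries the expected (non-zero) value."""
    inside = REGION_BASE <= buf_addr and buf_addr + CMD_BUF_SIZE <= REGION_BASE + REGION_SIZE
    if not inside:
        return False                      # buffer not in the protected region: reject
    flag = int.from_bytes(region.read(FLAG_OFFSET, 4), "little")
    if flag == 0 or flag != expected_id:
        return False                      # flag missing or not matching: reject
    execute(region.read(CMD_BUF_OFFSET, CMD_BUF_SIZE))
    return True

def execute(command: bytes):
    print("executing verified command:", command.rstrip(b"\x00"))

region = SharedRegion(bytearray(REGION_SIZE))
send_command(region, command_id=7, body=b"update-firmware")
verify_and_execute(region, buf_addr=REGION_BASE + CMD_BUF_OFFSET, expected_id=7)
```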
1. A device, comprising: a first memory resource storing executable instructions; a first processing resource to execute the instructions stored on the first memory resource to: send a command to a second processing resource via a predefined path to a predefined location within the first memory resource, wherein a second memory resource stores instructions executable by the second processing resource to: verify a source of the command is the first processing resource using a predefined protocol; and execute the command in response to the source verification, wherein the first processing resource and the second processing resource run in parallel to one another. 2. The device of claim 1, wherein the first memory resource is associated with a system management mode (SMM). 3. The device of claim 1, wherein the second processing resource is a secure processor. 4. The device of claim 1, wherein the instructions executable to verify a source of the command is the first processing resource include instructions executable to verify a security privilege level of the first memory resource. 5. The device of claim 1, wherein the first memory resource is associated with a highest privilege mode available on the device. 6. A system, comprising: a shared memory resource storing executable instructions; a first device, comprising a first processing resource to execute the instructions stored on the shared memory resource to send a command to a second device via a predefined path by: filling a predefined command buffer in the shared memory resource; and setting a flag located in the shared memory resource to a particular value; and the second device communicatively coupled to the first device and comprising a second processing resource to execute the instructions stored on the shared memory resource to verify a source of the command by: determining whether the command buffer overlaps with the shared memory resource; determining whether the flag has the non-zero value in response to a determination the command buffer overlaps with the shared memory resource; and executing the command in response to a non-zero value determination. 7. The system of claim 6, wherein the shared memory resource is a memory resource associated with highly privileged code. 8. The system of claim 6, wherein the instructions executable to verify the source of the command further comprise instructions executable to verify the source of the command by exiting processing of the command in response to a determination that the command buffer overlaps with the shared memory resource, but is not a specified command buffer. 9. The system of claim 6, wherein the instructions executable to verify the source of the command further comprises instruction executable to verify the source of the command by exiting processing of the command in response to a determination that the flag does not have a particular value. 10. The system of claim 6, further comprising a basic input/output system (BIOS) comprising instructions stored on the shared memory resource and executable by the first processing resource to pass an address in the shared memory resource of the predefined command buffer to the second prior to untrusted code being able to execute. 11. The system of claim 6, further comprising a BIOS including instructions stored on the shared memory resource and executable by the first processing resource to pass an address in the shared memory resource of the flag to the second device prior to untrusted code being able to execute. 12. 
The system of claim 6, wherein the particular value comprises a value that matches an identification of the command. 13. A method, comprising: receiving a corrupt command from an unprivileged device; receiving, via a predefined path, a privileged command at a predefined location within a first memory resource; in response to receipt of the corrupt command and the privileged command, verifying a source of the corrupt command and a source of the privileged command; exiting processing of the corrupt command in response to a failure to verify the source of the corrupt command; and executing the privileged command in response to verification of the source of the privileged command based on the predefined path and predefined location. 14. The method of claim 13, wherein exiting processing includes refraining from reading and writing instructions to a buffer associated with the first and the second memory resources. 15. The method of claim 13, wherein receiving the corrupt command from the unprivileged device comprises receiving the corrupt command from a device associated with a lower security privilege mode than the source of the privileged command.
Example implementations relate to command source verification. An example device can include instructions executable to send a command via a predefined path to a predefined location within a memory resource storing instructions executable to verify a source of the command using a predefined protocol and execute the command in response to the source verification.1. A device, comprising: a first memory resource storing executable instructions; a first processing resource to execute the instructions stored on the first memory resource to: send a command to a second processing resource via a predefined path to a predefined location within the first memory resource, wherein a second memory resource stores instructions executable by the second processing resource to: verify a source of the command is the first processing resource using a predefined protocol; and execute the command in response to the source verification, wherein the first processing resource and the second processing resource run in parallel to one another. 2. The device of claim 1, wherein the first memory resource is associated with a system management mode (SMM). 3. The device of claim 1, wherein the second processing resource is a secure processor. 4. The device of claim 1, wherein the instructions executable to verify a source of the command is the first processing resource include instructions executable to verify a security privilege level of the first memory resource. 5. The device of claim 1, wherein the first memory resource is associated with a highest privilege mode available on the device. 6. A system, comprising: a shared memory resource storing executable instructions; a first device, comprising a first processing resource to execute the instructions stored on the shared memory resource to send a command to a second device via a predefined path by: filling a predefined command buffer in the shared memory resource; and setting a flag located in the shared memory resource to a particular value; and the second device communicatively coupled to the first device and comprising a second processing resource to execute the instructions stored on the shared memory resource to verify a source of the command by: determining whether the command buffer overlaps with the shared memory resource; determining whether the flag has the non-zero value in response to a determination the command buffer overlaps with the shared memory resource; and executing the command in response to a non-zero value determination. 7. The system of claim 6, wherein the shared memory resource is a memory resource associated with highly privileged code. 8. The system of claim 6, wherein the instructions executable to verify the source of the command further comprise instructions executable to verify the source of the command by exiting processing of the command in response to a determination that the command buffer overlaps with the shared memory resource, but is not a specified command buffer. 9. The system of claim 6, wherein the instructions executable to verify the source of the command further comprises instruction executable to verify the source of the command by exiting processing of the command in response to a determination that the flag does not have a particular value. 10. 
The system of claim 6, further comprising a basic input/output system (BIOS) comprising instructions stored on the shared memory resource and executable by the first processing resource to pass an address in the shared memory resource of the predefined command buffer to the second prior to untrusted code being able to execute. 11. The system of claim 6, further comprising a BIOS including instructions stored on the shared memory resource and executable by the first processing resource to pass an address in the shared memory resource of the flag to the second device prior to untrusted code being able to execute. 12. The system of claim 6, wherein the particular value comprises a value that matches an identification of the command. 13. A method, comprising: receiving a corrupt command from an unprivileged device; receiving, via a predefined path, a privileged command at a predefined location within a first memory resource; in response to receipt of the corrupt command and the privileged command, verifying a source of the corrupt command and a source of the privileged command; exiting processing of the corrupt command in response to a failure to verify the source of the corrupt command; and executing the privileged command in response to verification of the source of the privileged command based on the predefined path and predefined location. 14. The method of claim 13, wherein exiting processing includes refraining from reading and writing instructions to a buffer associated with the first and the second memory resources. 15. The method of claim 13, wherein receiving the corrupt command from the unprivileged device comprises receiving the corrupt command from a device associated with a lower security privilege mode than the source of the privileged command.
2,100
274,039
15,495,711
2,131
An apparatus, method and computer program product are disclosed. The apparatus includes a strategy module that determines restore information, writes the restore information into a restore information file, and writes the restore information file to a master volume containing target data; a snapshot module that creates a snapshot backup of the master volume; and a restoration module that restores the target data and restore information file, and restores application consistency of the target data. The method includes determining restore information, writing restore information to a file, writing the file to a volume containing data, backing up data by a snapshot backup of the volume, restoring data and the file, and restoring application consistency of the data. The computer program product comprises a computer readable storage medium that stores code to perform determining a backup strategy, backing up data, and restoring data.
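The key move in this record is that the restore information rides inside the snapshot itself: the strategy step writes a restore-information file onto the volume before the snapshot is taken, so whatever restores the snapshot also receives the instructions for regaining application consistency. A small Python sketch of that flow follows, using directories as stand-in volumes and a JSON file named restore_info.json; the file name, the recovery step names, and the cleanup points are assumptions, not taken from the claims.

```python
import json
import shutil
import tempfile
from pathlib import Path

def write_restore_info(master: Path, backup_type: str, system_info: dict):
    """Strategy step: derive restore information from the backup type and write it
    as a file on the volume that is about to be snapshotted."""
    info = {"backup_type": backup_type, "system": system_info,
            "recovery_steps": ["remount", "replay_logs", "notify_application"]}
    (master / "restore_info.json").write_text(json.dumps(info))

def snapshot(master: Path, snapshot_dir: Path):
    """Snapshot step: the point-in-time copy carries the target data and the
    restore information file together."""
    shutil.copytree(master, snapshot_dir)
    (master / "restore_info.json").unlink()      # cleanup on the master volume

def restore(snapshot_dir: Path, destination: Path):
    """Restore step: bring back the data plus restore_info.json, then replay the
    recovery steps to regain application consistency."""
    shutil.copytree(snapshot_dir, destination)
    info = json.loads((destination / "restore_info.json").read_text())
    for step in info["recovery_steps"]:
        print("running recovery step:", step)    # stand-in for real recovery actions
    (destination / "restore_info.json").unlink() # cleanup on the destination volume

# Usage with temporary directories standing in for volumes.
base = Path(tempfile.mkdtemp())
master, snap, dest = base / "master", base / "snap", base / "dest"
master.mkdir()
(master / "data.db").write_text("target data")
write_restore_info(master, "snapshot-full", {"app": "db", "os": "linux"})
snapshot(master, snap)
restore(snap, dest)
```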
1. An apparatus comprising: a strategy module that determines restore information based upon a backup type, writes the restore information into a restore information file, and writes the restore information file to a master volume containing target data; a snapshot module that creates a snapshot backup of the master volume; and a restoration module that restores the target data and the restore information file from the snapshot backup to a destination volume, and restores application consistency of the target data based on the restore information; wherein at least a portion of the information module, the backup module and the restoration module comprise one or more of hardware and executable code, the executable code stored on one or more computer readable storage media. 2. The apparatus of claim 1, further comprising an information module that queries a system for system information; wherein the strategy module determines the backup type based on the system information. 3. The apparatus of claim 2, wherein the system information comprises information identifying a type of application used, information identifying a type of operating system used, and information identifying a type of hypervisor software used. 4. The apparatus of claim 1, wherein the restore information comprises information about the system, the backup type and the target data needed to restore application consistency of the target data. 5. The apparatus of claim 1, wherein the restore information comprises recovery instructions. 6. The apparatus of claim 5, wherein the restore information file comprises a self-executing script that executes the recovery instructions. 7. The apparatus of claim 6, wherein the restoration module restores application consistency of the target data based on the restore information by running the self-executing script. 8. The apparatus of claim 1, further comprising a cleanup module that deletes the restore information file from the master volume after creating the snapshot backup of the master volume. 9. The apparatus of claim 1, further comprising a cleanup module that deletes the restore information file from the destination volume after restoring application consistency of the target data. 10. A method for backup and restoration of data, comprising: determining restore information based on a backup type, writing the restore information into a restore information file, writing the restore information file to a master volume containing target data; backing up data by creating a snapshot backup of the master volume; restoring the target data and the restore information file from the snapshot backup to a destination volume, and restoring application consistency of the target data based on the restore information. 11. The method of claim 10, further comprising querying a system for system information; and determining a backup type based on the system information. 12. The method of claim 11, wherein the system information comprises information identifying a type of application used, information identifying a type of operating system used, and information identifying a type of hypervisor software used. 13. The method of claim 10, wherein the restore information comprises information about the system, the backup type and the target data needed to restore application consistency of the target data. 14. The method of claim 10, wherein the restore information comprises recovery instructions. 15. 
The method of claim 14, wherein the restore information file comprises a self-executing script that executes the recovery instructions. 16. The method of claim 15, wherein restoring application consistency of the target data based on the restore information comprises running the self-executing script. 17. The method of claim 10, wherein backing up data further comprises deleting the restore information file from the master volume after creating the snapshot backup of the master volume. 18. The method of claim 10, wherein restoring data further comprises deleting the restore information file from the destination volume after restoring application consistency of the target data. 19. A computer program product comprising a computer readable storage medium that stores code executable by a processor, the executable code comprising code to perform determining restore information based on a backup type; writing the restore information into a restore information file on a master volume containing target data; backing up data by creating a snapshot backup of the master volume, wherein the snapshot backup comprises the target data and the restore information file; restoring the target data and the restore information file from the snapshot backup; and restoring application consistency of the target data based on the restore information. 20. The computer program product of claim 19, wherein the restore information comprises recovery instructions, the restore information file comprises a self-executing script; and restoring application consistency of the target data based on the restore information comprises running the self-executing script.
An apparatus, method and computer program product are disclosed. The apparatus includes a strategy module that determines restore information, writes the restore information into a restore information file, and writes the restore information file to a master volume containing target data; a snapshot module that creates a snapshot backup of the master volume; and a restoration module that restores the target data and restore information file, and restores application consistency of the target data. The method includes determining restore information, writing restore information to a file, writing the file to a volume containing data, backing up data by a snapshot backup of the volume, restoring data and the file, and restoring application consistency of the data. The computer program product comprises a computer readable storage medium that stores code to perform determining a backup strategy, backing up data, and restoring data.1. An apparatus comprising: a strategy module that determines restore information based upon a backup type, writes the restore information into a restore information file, and writes the restore information file to a master volume containing target data; a snapshot module that creates a snapshot backup of the master volume; and a restoration module that restores the target data and the restore information file from the snapshot backup to a destination volume, and restores application consistency of the target data based on the restore information; wherein at least a portion of the information module, the backup module and the restoration module comprise one or more of hardware and executable code, the executable code stored on one or more computer readable storage media. 2. The apparatus of claim 1, further comprising an information module that queries a system for system information; wherein the strategy module determines the backup type based on the system information. 3. The apparatus of claim 2, wherein the system information comprises information identifying a type of application used, information identifying a type of operating system used, and information identifying a type of hypervisor software used. 4. The apparatus of claim 1, wherein the restore information comprises information about the system, the backup type and the target data needed to restore application consistency of the target data. 5. The apparatus of claim 1, wherein the restore information comprises recovery instructions. 6. The apparatus of claim 5, wherein the restore information file comprises a self-executing script that executes the recovery instructions. 7. The apparatus of claim 6, wherein the restoration module restores application consistency of the target data based on the restore information by running the self-executing script. 8. The apparatus of claim 1, further comprising a cleanup module that deletes the restore information file from the master volume after creating the snapshot backup of the master volume. 9. The apparatus of claim 1, further comprising a cleanup module that deletes the restore information file from the destination volume after restoring application consistency of the target data. 10. 
A method for backup and restoration of data, comprising: determining restore information based on a backup type, writing the restore information into a restore information file, writing the restore information file to a master volume containing target data; backing up data by creating a snapshot backup of the master volume; restoring the target data and the restore information file from the snapshot backup to a destination volume, and restoring application consistency of the target data based on the restore information. 11. The method of claim 10, further comprising querying a system for system information; and determining a backup type based on the system information. 12. The method of claim 11, wherein the system information comprises information identifying a type of application used, information identifying a type of operating system used, and information identifying a type of hypervisor software used. 13. The method of claim 10, wherein the restore information comprises information about the system, the backup type and the target data needed to restore application consistency of the target data. 14. The method of claim 10, wherein the restore information comprises recovery instructions. 15. The method of claim 14, wherein the restore information file comprises a self-executing script that executes the recovery instructions. 16. The method of claim 15, wherein restoring application consistency of the target data based on the restore information comprises running the self-executing script. 17. The method of claim 10, wherein backing up data further comprises deleting the restore information file from the master volume after creating the snapshot backup of the master volume. 18. The method of claim 10, wherein restoring data further comprises deleting the restore information file from the destination volume after restoring application consistency of the target data. 19. A computer program product comprising a computer readable storage medium that stores code executable by a processor, the executable code comprising code to perform determining restore information based on a backup type; writing the restore information into a restore information file on a master volume containing target data; backing up data by creating a snapshot backup of the master volume, wherein the snapshot backup comprises the target data and the restore information file; restoring the target data and the restore information file from the snapshot backup; and restoring application consistency of the target data based on the restore information. 20. The computer program product of claim 19, wherein the restore information comprises recovery instructions, the restore information file comprises a self-executing script; and restoring application consistency of the target data based on the restore information comprises running the self-executing script.
2,100
274,040
15,495,707
2,131
Systems, apparatuses, and methods for implementing a virtualized translation lookaside buffer (TLB) are disclosed herein. In one embodiment, a system includes at least an execution unit and a first TLB. The system supports the execution of a plurality of virtual machines in a virtualization environment. The system detects a translation request generated by a first virtual machine with a first virtual memory identifier (VMID). The translation request is conveyed from the execution unit to the first TLB. The first TLB performs a lookup of its cache using at least a portion of a first virtual address and the first VMID. If the lookup misses in the cache, the first TLB allocates an entry which is addressable by the first virtual address and the first VMID, and the first TLB sends the translation request with the first VMID to a second TLB.
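What makes this TLB "virtualized" is that the VMID is part of the lookup key, so two guests can cache translations for the same virtual address side by side, and a miss both allocates an entry under that key and forwards the VMID-tagged request to the next TLB level. Here is a minimal Python model of that behaviour; the class name Tlb, the two-level arrangement, and the pending-entry convention (None until the fill returns) are illustrative assumptions.

```python
class Tlb:
    """A TLB level whose entries are addressable by (virtual page tag, VMID)."""
    def __init__(self, name, next_level=None):
        self.name = name
        self.cache = {}          # (vpn, vmid) -> physical frame, or None while pending
        self.next_level = next_level

    def translate(self, vaddr: int, vmid: int, page_shift: int = 12):
        vpn = vaddr >> page_shift            # portion of the virtual address used for lookup
        key = (vpn, vmid)
        if key in self.cache and self.cache[key] is not None:
            frame = self.cache[key]
            return (frame << page_shift) | (vaddr & ((1 << page_shift) - 1))
        # Miss: allocate an entry addressable by (vpn, vmid) and forward the
        # request, tagged with the VMID, to the next TLB level.
        self.cache[key] = None
        if self.next_level is None:
            raise KeyError(f"{self.name}: no translation for VMID {vmid}")
        paddr = self.next_level.translate(vaddr, vmid, page_shift)
        self.cache[key] = paddr >> page_shift    # fill the allocated entry on return
        return paddr

# Two virtual machines use the same virtual page but map to different frames,
# distinguished only by the VMID in the lookup key.
l2 = Tlb("L2")
l2.cache[(0x400, 1)] = 0x10        # VMID 1
l2.cache[(0x400, 2)] = 0x20        # VMID 2
l1 = Tlb("L1", next_level=l2)
print(hex(l1.translate(0x400123, vmid=1)))   # 0x10123
print(hex(l1.translate(0x400123, vmid=2)))   # 0x20123
```

A virtual function identifier (VFID), as in the dependent claims, could be folded into the same key tuple without changing the structure of the lookup.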
1. A system comprising: an execution unit; and a first translation lookaside buffer (TLB), wherein the first TLB comprises a cache of entries storing virtual-to-physical address translations; wherein the system is configured to: execute a plurality of virtual machines; detect a virtual-to-physical address translation request generated by a first virtual machine with a first virtual memory identifier (VMID); convey a translation request from the execution unit to the first TLB, wherein the translation request comprises a first virtual address and the first VMID; and perform a lookup of the cache with a portion of the first virtual address and the first VMID. 2. The system as recited in claim 1, wherein the translation request further comprises a first virtual function identifier (VFID). 3. The system as recited in claim 2, wherein the system is further configured to perform the lookup of the cache with at least the first VFID. 4. The system as recited in claim 1, wherein the system is further configured to retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 5. The system as recited in claim 4, wherein the system further comprises a second TLB, and wherein the first TLB is further configured to convey the first virtual address and the first VMID to the second TLB responsive to determining the lookup missed in the cache. 6. The system as recited in claim 5, wherein responsive to determining the lookup missed in the cache, the first TLB is configured to allocate, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID. 7. The system as recited in claim 1, wherein the system further comprises a table walker, wherein the table walker is configured to identify a particular page table register based on the first VMID. 8. A method comprising: executing a plurality of virtual machines; detecting a virtual-to-physical address translation request generated by a first virtual machine with a first virtual memory identifier (VMID); conveying a translation request from an execution unit to a first TLB, wherein the translation request comprises a first virtual address and the first VMID; and performing a lookup of the cache with a portion of the first virtual address and the first VMID. 9. The method as recited in claim 8, wherein the translation request further comprises a first virtual function identifier (VFID). 10. The method as recited in claim 9, further comprising performing the lookup of the cache with at least the first VFID. 11. The method as recited in claim 8, further comprising retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 12. The method as recited in claim 11, further comprising conveying the first virtual address and the first VMID to a second TLB responsive to determining the lookup missed in the cache. 13. The method as recited in claim 11, wherein responsive to determining the lookup missed in the cache, the method further comprising allocating, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID. 14. 
The method as recited in claim 8, further comprising identifying, by a table walker, a particular page table register based on the first VMID. 15. A translation lookaside buffer (TLB) comprising: a cache; and control logic; wherein the TLB is configured to: receive a translation request, wherein the translation request comprises a first virtual address and a first virtual memory identifier (VMID); and perform a lookup of the cache with a portion of the first virtual address and the first VMID. 16. The TLB as recited in claim 15, wherein the translation request further comprises a first virtual function identifier (VFID). 17. The TLB as recited in claim 16, wherein the TLB is configured to perform the lookup of the cache with at least the first VFID. 18. The TLB as recited in claim 15, wherein responsive to determining that the first entry includes the first indication, the TLB is configured to retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 19. The TLB as recited in claim 18, wherein the TLB is further configured to convey the first virtual address and the first VMID to the second TLB responsive to determining the lookup missed in the cache. 20. The TLB as recited in claim 19, wherein responsive to determining the lookup missed in the cache, the TLB is configured to allocate, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID.
Systems, apparatuses, and methods for implementing a virtualized translation lookaside buffer (TLB) are disclosed herein. In one embodiment, a system includes at least an execution unit and a first TLB. The system supports the execution of a plurality of virtual machines in a virtualization environment. The system detects a translation request generated by a first virtual machine with a first virtual memory identifier (VMID). The translation request is conveyed from the execution unit to the first TLB. The first TLB performs a lookup of its cache using at least a portion of a first virtual address and the first VMID. If the lookup misses in the cache, the first TLB allocates an entry which is addressable by the first virtual address and the first VMID, and the first TLB sends the translation request with the first VMID to a second TLB.1. A system comprising: an execution unit; and a first translation lookaside buffer (TLB), wherein the first TLB comprises a cache of entries storing virtual-to-physical address translations; wherein the system is configured to: execute a plurality of virtual machines; detect a virtual-to-physical address translation request generated by a first virtual machine with a first virtual memory identifier (VMID); convey a translation request from the execution unit to the first TLB, wherein the translation request comprises a first virtual address and the first VMID; and perform a lookup of the cache with a portion of the first virtual address and the first VMID. 2. The system as recited in claim 1, wherein the translation request further comprises a first virtual function identifier (VFID). 3. The system as recited in claim 2, wherein the system is further configured to perform the lookup of the cache with at least the first VFID. 4. The system as recited in claim 1, wherein the system is further configured to retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 5. The system as recited in claim 4, wherein the system further comprises a second TLB, and wherein the first TLB is further configured to convey the first virtual address and the first VMID to the second TLB responsive to determining the lookup missed in the cache. 6. The system as recited in claim 5, wherein responsive to determining the lookup missed in the cache, the first TLB is configured to allocate, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID. 7. The system as recited in claim 1, wherein the system further comprises a table walker, wherein the table walker is configured to identify a particular page table register based on the first VMID. 8. A method comprising: executing a plurality of virtual machines; detecting a virtual-to-physical address translation request generated by a first virtual machine with a first virtual memory identifier (VMID); conveying a translation request from an execution unit to a first TLB, wherein the translation request comprises a first virtual address and the first VMID; and performing a lookup of the cache with a portion of the first virtual address and the first VMID. 9. The method as recited in claim 8, wherein the translation request further comprises a first virtual function identifier (VFID). 10. The method as recited in claim 9, further comprising performing the lookup of the cache with at least the first VFID. 11. 
The method as recited in claim 8, further comprising retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 12. The method as recited in claim 11, further comprising conveying the first virtual address and the first VMID to a second TLB responsive to determining the lookup missed in the cache. 13. The method as recited in claim 11, wherein responsive to determining the lookup missed in the cache, the method further comprising allocating, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID. 14. The method as recited in claim 8, further comprising identifying, by a table walker, a particular page table register based on the first VMID. 15. A translation lookaside buffer (TLB) comprising: a cache; and control logic; wherein the TLB is configured to: receive a translation request, wherein the translation request comprises a first virtual address and a first virtual memory identifier (VMID); and perform a lookup of the cache with a portion of the first virtual address and the first VMID. 16. The TLB as recited in claim 15, wherein the translation request further comprises a first virtual function identifier (VFID). 17. The TLB as recited in claim 16, wherein the TLB is configured to perform the lookup of the cache with at least the first VFID. 18. The TLB as recited in claim 15, wherein responsive to determining that the first entry includes the first indication, the TLB is configured to retrieve a first physical address from a first entry responsive to determining the lookup matches on the first entry based on the portion of the first virtual address and the first VMID. 19. The TLB as recited in claim 18, wherein the TLB is further configured to convey the first virtual address and the first VMID to the second TLB responsive to determining the lookup missed in the cache. 20. The TLB as recited in claim 19, wherein responsive to determining the lookup missed in the cache, the TLB is configured to allocate, in the cache, a second entry for the translation request, wherein the second entry is addressable by the portion of the first virtual address and the first VMID.
2,100
274,041
15,493,292
2,131
An operation method of a semiconductor memory device including a memory cell array and an internal processor configured to perform an internal processing operation includes receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode, receiving at the memory device processing information for the memory device, when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory cell region of the memory cell array, using the stored processing information to perform internal processing by the internal processor, and storing a result of the internal processing in the memory cell array.
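The mode indicator is what lets one write path serve two purposes: with the processor-mode indicator set, the incoming payload is treated as processing information and parked in a redundant cell region (even at the same address a normal write would use); otherwise it is ordinary data for the normal region. The Python sketch below models that split and a read-triggered internal-processing step; the class name, the "add_one" operation, and the policy that a read fires the internal processor are illustrative assumptions rather than the claimed design.

```python
PROCESSOR_MODE, NORMAL_MODE = "processor", "normal"

class PimMemoryDevice:
    """Memory device with a normal cell region, a redundant cell region repurposed
    for internal-processing information, and a simple internal processor."""
    def __init__(self, size: int = 16):
        self.normal_cells = [0] * size        # normal memory cell region
        self.redundant_cells = [None] * size  # redundant cell region (processing info)
        self.results = {}                     # results written back to the cell array

    def write(self, mode_indicator: str, address: int, payload):
        # The same address can select either region; the mode indicator decides which.
        if mode_indicator == PROCESSOR_MODE:
            self.redundant_cells[address] = payload     # processing information
        else:
            self.normal_cells[address] = payload        # ordinary data

    def read(self, address: int):
        # A read in the presence of stored processing information triggers
        # internal processing (illustrative policy, cf. the read-command claim).
        info = self.redundant_cells[address]
        if info is not None:
            return self._internal_process(address, info)
        return self.normal_cells[address]

    def _internal_process(self, address: int, info):
        op, operand_addr = info                         # e.g. ("add_one", 3)
        value = self.normal_cells[operand_addr]
        result = value + 1 if op == "add_one" else value
        self.results[address] = result                  # store the result in the array
        return result

dev = PimMemoryDevice()
dev.write(NORMAL_MODE, 3, 41)                 # ordinary data at address 3
dev.write(PROCESSOR_MODE, 3, ("add_one", 3))  # processing info at the same address
print(dev.read(3))                            # internal processor produces 42
```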
1. A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode receiving at the memory device processing information for the memory device; when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory cell region of the memory cell array; using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 2. The method of claim 1, wherein storing the result includes storing the result in a second cell memory region of the memory cell array. 3. The method of claim 1, wherein the first memory cell region is a redundant memory cell region. 4. The method of claim 1, further comprising: receiving at the memory device a second mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device a data signal including data; and when the second mode indicator indicates that the memory device should operate in the normal mode, storing the data from the data signal in a second memory cell region of the memory cell array. 5. The method of claim 4, wherein the first memory cell region is a redundant memory cell region and the second memory cell region is a normal memory cell region. 6. The method of claim 5, further comprising: receiving at the memory device a first address, and using the first address to store the processing information in the first memory cell region; and receiving at the memory device a second address, and using the second address to store the data in the second memory cell region. 7. The method of claim 6, wherein: the first address is the same as the second address. 8. The method of claim 1, wherein the first mode indicator is one of: a command, an address bit, an MRS code, and a signal on a dedicated pin. 9. The method of claim 1, wherein using the stored processing information to perform internal processing by the internal processor further includes: receiving, by the memory device, a read command from a host; and based on the read command, accessing the stored processing information in order to perform the internal processing by the internal processor. 10. The method of claim 9, wherein using the stored processing information to perform internal processing by the internal processor further includes: transmitting the stored processing information from the first memory cell region to the internal processor to control internal processing by the internal processor. 11. The method of claim 1, further comprising: when the first mode indicator indicates that the memory device should operate in the processor mode, sending by the internal processor a signal for the memory cell array, which signal selects the first memory cell region. 12. 
A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device processing information for the memory device; when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory region of the memory cell array, the first memory region being a redundant memory cell region; receiving at the memory device a second mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device a data signal that includes data; and when the second mode indicator indicates that the memory device should operate in the normal mode, storing the data from the data signal in a second memory region of the memory cell array, the second memory region being a normal memory cell region. 13. The method of claim 12, further comprising: receiving at the memory device a first address, and using the first address to store the processing information in the first memory region; and receiving at the memory device a second address, and using the second address to store the data in the second memory region. 14. The method of claim 13, wherein: the first address is the same as the second address. 15. The method of claim 12, further comprising: using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 16. The method of claim 12, wherein the first mode indicator is one of: a command, an address bit, an MRS code, and a signal on a dedicated pin. 17. The method of claim 12, wherein storing the processing information in the first memory region of the memory cell array is performed in response to a write command. 18. A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving and storing at a first region of the memory device processing information for the memory device, the first region being a memory cell region and the processing information received from a separate, second region of the memory device; using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 19. The method of claim 18, wherein the separate region of the memory device is one of: a storage circuit; a register; and a fuse circuit. 20. The method of claim 18, wherein receiving and storing at the first region of the memory device processing information for the memory device is performed upon powering on the memory device.
An operation method of a semiconductor memory device including a memory cell array and an internal processor configured to perform an internal processing operation includes receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode, receiving at the memory device processing information for the memory device, when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory cell region of the memory cell array, using the stored processing information to perform internal processing by the internal processor, and storing a result of the internal processing in the memory cell array.1. A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode receiving at the memory device processing information for the memory device; when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory cell region of the memory cell array; using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 2. The method of claim 1, wherein storing the result includes storing the result in a second cell memory region of the memory cell array. 3. The method of claim 1, wherein the first memory cell region is a redundant memory cell region. 4. The method of claim 1, further comprising: receiving at the memory device a second mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device a data signal including data; and when the second mode indicator indicates that the memory device should operate in the normal mode, storing the data from the data signal in a second memory cell region of the memory cell array. 5. The method of claim 4, wherein the first memory cell region is a redundant memory cell region and the second memory cell region is a normal memory cell region. 6. The method of claim 5, further comprising: receiving at the memory device a first address, and using the first address to store the processing information in the first memory cell region; and receiving at the memory device a second address, and using the second address to store the data in the second memory cell region. 7. The method of claim 6, wherein: the first address is the same as the second address. 8. The method of claim 1, wherein the first mode indicator is one of: a command, an address bit, an MRS code, and a signal on a dedicated pin. 9. The method of claim 1, wherein using the stored processing information to perform internal processing by the internal processor further includes: receiving, by the memory device, a read command from a host; and based on the read command, accessing the stored processing information in order to perform the internal processing by the internal processor. 10. The method of claim 9, wherein using the stored processing information to perform internal processing by the internal processor further includes: transmitting the stored processing information from the first memory cell region to the internal processor to control internal processing by the internal processor. 11. 
The method of claim 1, further comprising: when the first mode indicator indicates that the memory device should operate in the processor mode, sending by the internal processor a signal for the memory cell array, which signal selects the first memory cell region. 12. A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device processing information for the memory device; when the first mode indicator indicates that the memory device should operate in the processor mode, storing the processing information in a first memory region of the memory cell array, the first memory region being a redundant memory cell region; receiving at the memory device a second mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving at the memory device a data signal that includes data; and when the second mode indicator indicates that the memory device should operate in the normal mode, storing the data from the data signal in a second memory region of the memory cell array, the second memory region being a normal memory cell region. 13. The method of claim 12, further comprising: receiving at the memory device a first address, and using the first address to store the processing information in the first memory region; and receiving at the memory device a second address, and using the second address to store the data in the second memory region. 14. The method of claim 13, wherein: the first address is the same as the second address. 15. The method of claim 12, further comprising: using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 16. The method of claim 12, wherein the first mode indicator is one of: a command, an address bit, an MRS code, and a signal on a dedicated pin. 17. The method of claim 12, wherein storing the processing information in the first memory region of the memory cell array is performed in response to a write command. 18. A method for a memory device including a memory cell array and an internal processor, the method including: receiving at the memory device a first mode indicator that indicates whether the memory device should operate in a processor mode or in a normal mode; receiving and storing at a first region of the memory device processing information for the memory device, the first region being a memory cell region and the processing information received from a separate, second region of the memory device; using the stored processing information to perform internal processing by the internal processor; and storing a result of the internal processing in the memory cell array. 19. The method of claim 18, wherein the separate region of the memory device is one of: a storage circuit; a register; and a fuse circuit. 20. The method of claim 18, wherein receiving and storing at the first region of the memory device processing information for the memory device is performed upon powering on the memory device.
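The mode-indicator write path recited in these claims (processor mode routes processing information to a redundant memory cell region, normal mode routes ordinary data to the normal region, possibly under the same address) can be pictured with a small behavioral model. The sketch below is only an illustration in Python of how such routing might behave, not the claimed hardware; the class name, region sizes, and the stand-in internal operation are all assumptions introduced for the example.

```python
# Behavioral sketch of the claimed mode-indicator write path (names are illustrative).
PROCESSOR_MODE = "processor"
NORMAL_MODE = "normal"

class MemoryDeviceModel:
    def __init__(self, size):
        self.normal_region = [0] * size      # normal memory cell region
        self.redundant_region = [0] * size   # redundant memory cell region
        self.result_region = {}              # where internal-processing results land

    def write(self, mode_indicator, address, value):
        """Route a write based on the mode indicator (as in claims 1, 4, and 12)."""
        if mode_indicator == PROCESSOR_MODE:
            # Processing information goes to the redundant region, even if the
            # address equals one used in normal mode (compare claims 7 and 14).
            self.redundant_region[address] = value
        else:
            self.normal_region[address] = value

    def run_internal_processing(self, address):
        """Use stored processing information to drive the internal processor and
        store the result back in the cell array (claims 1 and 15)."""
        op = self.redundant_region[address]
        result = op * 2          # stand-in for whatever the internal processor computes
        self.result_region[address] = result
        return result

dev = MemoryDeviceModel(size=16)
dev.write(PROCESSOR_MODE, 3, 21)   # processor mode: lands in the redundant region
dev.write(NORMAL_MODE, 3, 99)      # same address, normal mode: lands in the normal region
print(dev.run_internal_processing(3), dev.normal_region[3])  # 42 99
```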
2,100
274,042
15,493,277
2,131
A system and method for implementing optimized data replication across cloud storage nodes, the system comprising a cluster of computer system devices. The system comprises one or more memory devices and a plurality of processors. The one or more memory devices store a set of program modules. A processor among the plurality of processors executes the set of program modules. The set of program modules comprises an input module and a data transfer module. The input module receives a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device. The data transfer module copies data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number.
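The replication rule in this abstract (copy existing data to a newly added node only while the cluster is still smaller than a predefined number) can be sketched as a short decision routine. This is a hedged illustration under assumed names such as Cluster, Node, and REPLICA_TARGET; the application does not publish any API, so none of these identifiers are taken from it.

```python
# Sketch of the add-node replication decision described in the abstract.
# REPLICA_TARGET stands in for the "predefined number" of the claims.
REPLICA_TARGET = 3

class Node:
    def __init__(self, name):
        self.name = name
        self.storage = {}        # the node's "memory device"

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        """Input module: receive the instruction to add a node.
        Data transfer module: replicate existing data only while the cluster
        is still smaller than the predefined number."""
        if self.nodes and len(self.nodes) < REPLICA_TARGET:
            source = self.nodes[0]
            node.storage.update(source.storage)   # copy existing data to the new node
        self.nodes.append(node)

cluster = Cluster()
a = Node("a"); a.storage["obj1"] = b"payload"
cluster.add_node(a)
b_node = Node("b")
cluster.add_node(b_node)          # cluster size 1 < 3, so data is copied over
print("obj1" in b_node.storage)   # True
```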
1. A system for implementing optimized data replication across cloud storage nodes, the system comprising: a cluster of computer system devices; one or more memory devices, comprised in one or more computer system devices of the cluster of computer system devices, wherein each memory device among the one or more memory devices stores: a set of program modules; a plurality of processors, a processor among the plurality of processors being comprised in a computer system device of the cluster of computer system devices, wherein at least one processor executes the set of program modules, the set of program modules comprising: an input module, executed by the at least one processor, configured to: receive a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device; and a data transfer module, executed by the processor, configured to copy data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number. 2. The system of claim 1, wherein the input module receives the first instruction from at least one of a user and at least one computer system device in the cluster. 3. The system of claim 1, wherein data in the at least one memory device is at least one of images, videos, documents, computer instructions, and databases. 4. The system of claim 1, wherein each computer system device in the cluster of computer system devices is at least one of a laptop, a server, a network hardware device, a personal computer, and a smart phone, or any combination thereof. 5. The system of claim 1, wherein each computer system device in the cluster of computer system devices is connected to each other via a network. 6. The system of claim 1, wherein the network is at least one of Bluetooth, WI-FI, mobile networks, and a WiMax network. 7. A method of implementing optimized data replication across cloud storage nodes, the method comprising: receiving by at least one processor via an input module, a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device; and copying by the at least one processor via a data transfer module, data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number. 8. The method of claim 7, wherein the input module receives the first instruction from at least one of a user and at least one computer system device in the cluster. 9. The method of claim 7, wherein data in the at least one memory device is at least one of images, videos, documents, computer instructions, and databases. 10. The method of claim 7, wherein each computer system device in the cluster of computer system devices is at least one of a laptop, a server, a network hardware device, a personal computer, and a smart phone, or any combination thereof. 11. The method of claim 7, wherein each computer system device in the cluster of computer system devices is connected to each other via a network. 12. The method of claim 7, wherein the network is at least one of Bluetooth, WI-FI, mobile networks, and a WiMax network.
A system and method for implementing optimized data replication across cloud storage nodes, the system comprising a cluster of computer system devices. The system comprises one or more memory devices and a plurality of processors. The one or more memory devices store a set of program modules. A processor among the plurality of processors executes the set of program modules. The set of program modules comprises an input module and a data transfer module. The input module receives a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device. The data transfer module copies data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number.1. A system for implementing optimized data replication across cloud storage nodes, the system comprising: a cluster of computer system devices; one or more memory devices, comprised in one or more computer system devices of the cluster of computer system devices, wherein each memory device among the one or more memory devices stores: a set of program modules; a plurality of processors, a processor among the plurality of processors being comprised in a computer system device of the cluster of computer system devices, wherein at least one processor executes the set of program modules, the set of program modules comprising: an input module, executed by the at least one processor, configured to: receive a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device; and a data transfer module, executed by the processor, configured to copy data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number. 2. The system of claim 1, wherein the input module receives the first instruction from at least one of a user and at least one computer system device in the cluster. 3. The system of claim 1, wherein data in the at least one memory device is at least one of images, videos, documents, computer instructions, and databases. 4. The system of claim 1, wherein each computer system device in the cluster of computer system devices is at least one of a laptop, a server, a network hardware device, a personal computer, and a smart phone, or any combination thereof. 5. The system of claim 1, wherein each computer system device in the cluster of computer system devices is connected to each other via a network. 6. The system of claim 1, wherein the network is at least one of Bluetooth, WI-FI, mobile networks, and a WiMax network. 7. A method of implementing optimized data replication across cloud storage nodes, the method comprising: receiving by at least one processor via an input module, a first instruction to add a first computer system device to the cluster, wherein the first computer system device comprises a first memory device; and copying by the at least one processor via a data transfer module, data in at least one memory device in the cluster of computer system devices, to the first memory device, based on the number of computer system devices in the cluster being less than a predefined number. 8. 
The method of claim 7, wherein the input module receives the first instruction from at least one of a user and at least one computer system device in the cluster. 9. The method of claim 7, wherein data in the at least one memory device is at least one of images, videos, documents, computer instructions, and databases. 10. The method of claim 7, wherein each computer system device in the cluster of computer system devices is at least one of a laptop, a server, a network hardware device, a personal computer, and a smart phone, or any combination thereof. 11. The method of claim 7, wherein each computer system device in the cluster of computer system devices is connected to each other via a network. 12. The method of claim 7, wherein the network is at least one of Bluetooth, WI-FI, mobile networks, and a WiMax network.
2,100
274,043
15,493,505
2,131
Examples described herein include systems and methods which include an apparatus comprising a memory array including a plurality of memory cells and a memory controller coupled to the memory array. The memory controller comprises a memory mapper configured to configure a memory map on the basis of a memory command associated with a memory access operation. The memory map comprises a specific sequence of memory access instructions to access at least one memory cell of the memory array. For example, the specific sequence of memory access instructions for a diagonal memory command comprises a sequence of memory access instructions that each access a memory cell along a diagonal of the memory array.
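As a concrete illustration of the memory mapper described in this abstract, the sketch below generates a specific access sequence for a few command types, including the diagonal command the abstract calls out. The command names and the (row, column) address tuples are assumptions chosen for the example, not the device's actual instruction format.

```python
# Illustrative memory mapper: build a specific sequence of (row, col) accesses
# for a given memory command over an n x n memory array.
def build_memory_map(command, n):
    if command == "row":            # access one row left-to-right
        return [(0, col) for col in range(n)]
    if command == "column":         # access one column top-to-bottom
        return [(row, 0) for row in range(n)]
    if command == "diagonal":       # one cell along the main diagonal per step
        return [(i, i) for i in range(n)]
    raise ValueError(f"unsupported memory command: {command}")

print(build_memory_map("diagonal", 4))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```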
1. An apparatus comprising: a memory array comprising a plurality of memory cells; and a memory controller coupled to the memory array, the memory controller comprising a memory mapper configured to configure a memory map based on a memory command associated with a memory access operation, wherein the memory map comprises a specific sequence of memory access instructions to access at least one memory cell of the memory array. 2. The apparatus of claim 1, wherein the memory access instructions are specific to a type of memory command. 3. The apparatus of claim 2, wherein the type of memory command comprises a row memory command, a column memory command, a diagonal memory command, a determinant memory command, or any matrix memory command. 4. The apparatus of claim 1, wherein each memory access instruction of the specific sequence of memory access instructions comprises an instruction for a respective address of a memory cell of the plurality of memory cells. 5. The apparatus of claim 1, wherein the memory controller is implemented in a processor. 6. The apparatus of claim 1, wherein the memory controller is configured to receive the memory command via a bus coupled to a network interface configured to communicate with a cloud computing network. 7. The apparatus of claim 1, further comprising: a memory interface coupled to the memory controller and configured to communicate with the memory array. 8. The apparatus of claim 7, wherein the memory mapper is configured to provide the memory map to the memory array via the memory interface. 9. The apparatus of claim 7, wherein the memory interface comprises a plurality of terminals, wherein at least one port of the plurality of terminals is configured to receive at least one of a memory command signal, an address signal, a clock signal, or a data signal. 10. The apparatus of claim 1, wherein the memory controller further comprises an address translator configured to translate the memory map based on the memory access operation and another memory access operation. 11. The apparatus of claim 10, wherein the other memory access operation is provided a different memory map than the memory map. 12. The apparatus of claim 1, further comprising: another memory array comprising another plurality of memory cells, wherein the memory controller is coupled to the other memory array, wherein the specific sequence of memory access instructions comprises instructions to access at least one memory cell of the memory array and at least one memory cell of the other memory array. 13. The apparatus of claim 1, wherein the memory command comprises a diagonal memory command, and wherein the specific sequence of memory access instructions to access at least one memory cell of the memory array comprises a sequence of memory access instructions that each access a memory cell along a diagonal of the memory array. 14. The apparatus of claim 1, wherein the specific sequence of memory access instructions to access at least one memory cell of the memory array comprises a sequence of memory access instructions defined by an operation order of the memory command. 15. A method comprising: obtaining a memory command associated with a memory access operation; retrieving a memory map for the memory access operation based at least on the memory command; and performing the memory access operation based on the memory map. 16. The method of claim 15, wherein the memory map comprises a specific sequence of memory access instructions to access a plurality of memory cells. 17. 
The method of claim 16, wherein performing the memory access operation based on the memory map comprises accessing respective addresses of the plurality of memory cells based on the memory map. 18. The method of claim 15, wherein the memory map is based on an operation order of the memory command. 19. A method comprising: obtaining a memory command associated with a memory access operation; determining that the memory access operation is associated with a memory map different than another memory map utilized in another memory operation; translating the memory map based at least on the memory map and the another memory map; and providing a translated memory map to perform the memory access operation. 20. The method of claim 19, wherein the memory map is based on an operation order of the memory command, and wherein the other memory map is based on another operation order of a previous memory command that was provided to perform a previous memory access operation. 21. The method of claim 19, wherein translating the memory map based at least on the memory map and the other memory map comprises: identifying a plurality of memory addresses of the other memory map; and allocating the plurality of memory addresses into the memory map based on an operation order of the memory command. 22. The method of claim 21, further comprising: accessing a plurality of memory cells based on the memory map.
Examples described herein include systems and methods which include an apparatus comprising a memory array including a plurality of memory cells and a memory controller coupled to the memory array. The memory controller comprises a memory mapper configured to configure a memory map on the basis of a memory command associated with a memory access operation. The memory map comprises a specific sequence of memory access instructions to access at least one memory cell of the memory array. For example, the specific sequence of memory access instructions for a diagonal memory command comprises a sequence of memory access instructions that each access a memory cell along a diagonal of the memory array.1. An apparatus comprising: a memory array comprising a plurality of memory cells; and a memory controller coupled to the memory array, the memory controller comprising a memory mapper configured to configure a memory map based on a memory command associated with a memory access operation, wherein the memory map comprises a specific sequence of memory access instructions to access at least one memory cell of the memory array. 2. The apparatus of claim 1, wherein the memory access instructions are specific to a type of memory command. 3. The apparatus of claim 2, wherein the type of memory command comprises a row memory command, a column memory command, a diagonal memory command, a determinant memory command, or any matrix memory command. 4. The apparatus of claim 1, wherein each memory access instruction of the specific sequence of memory access instructions comprises an instruction for a respective address of a memory cell of the plurality of memory cells. 5. The apparatus of claim 1, wherein the memory controller is implemented in a processor. 6. The apparatus of claim 1, wherein the memory controller is configured to receive the memory command via a bus coupled to a network interface configured to communicate with a cloud computing network. 7. The apparatus of claim 1, further comprising: a memory interface coupled to the memory controller and configured to communicate with the memory array. 8. The apparatus of claim 7, wherein the memory mapper is configured to provide the memory map to the memory array via the memory interface. 9. The apparatus of claim 7, wherein the memory interface comprises a plurality of terminals, wherein at least one port of the plurality of terminals is configured to receive at least one of a memory command signal, an address signal, a clock signal, or a data signal. 10. The apparatus of claim 1, wherein the memory controller further comprises an address translator configured to translate the memory map based on the memory access operation and another memory access operation. 11. The apparatus of claim 10, wherein the other memory access operation is provided a different memory map than the memory map. 12. The apparatus of claim 1, further comprising: another memory array comprising another plurality of memory cells, wherein the memory controller is coupled to the other memory array, wherein the specific sequence of memory access instructions comprises instructions to access at least one memory cell of the memory array and at least one memory cell of the other memory array. 13. 
The apparatus of claim 1, wherein the memory command comprises a diagonal memory command, and wherein the specific sequence of memory access instructions to access at least one memory cell of the memory array comprises a sequence of memory access instructions that each access a memory cell along a diagonal of the memory array. 14. The apparatus of claim 1, wherein the specific sequence of memory access instructions to access at least one memory cell of the memory array comprises a sequence of memory access instructions defined by an operation order of the memory command. 15. A method comprising: obtaining a memory command associated with a memory access operation; retrieving a memory map for the memory access operation based at least on the memory command; and performing the memory access operation based on the memory map. 16. The method of claim 15, wherein the memory map comprises a specific sequence of memory access instructions to access a plurality of memory cells. 17. The method of claim 16, wherein performing the memory access operation based on the memory map comprises accessing respective addresses of the plurality of memory cells based on the memory map. 18. The method of claim 15, wherein the memory map is based on an operation order of the memory command. 19. A method comprising: obtaining a memory command associated with a memory access operation; determining that the memory access operation is associated with a memory map different than another memory map utilized in another memory operation; translating the memory map based at least on the memory map and the another memory map; and providing a translated memory map to perform the memory access operation. 20. The method of claim 19, wherein the memory map is based on an operation order of the memory command, and wherein the other memory map is based on another operation order of a previous memory command that was provided to perform a previous memory access operation. 21. The method of claim 19, wherein translating the memory map based at least on the memory map and the other memory map comprises: identifying a plurality of memory addresses of the other memory map; and allocating the plurality of memory addresses into the memory map based on an operation order of the memory command. 22. The method of claim 21, further comprising: accessing a plurality of memory cells based on the memory map.
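Claims 19 through 22 describe translating a new memory map against the map used by a previous operation, re-allocating the previous map's addresses in the operation order of the new command. A minimal way to picture that translation, assuming a memory map is simply an ordered list of addresses, is shown below; the reordering rule and variable names are assumptions introduced to make the example concrete.

```python
# Sketch of the translation step of claim 21: take the addresses of the previous
# (other) memory map and re-allocate them in the operation order of the new command.
def translate_memory_map(other_map, operation_order):
    """other_map: addresses as laid out by the previous memory command.
    operation_order: index order in which the new command consumes them."""
    return [other_map[i] for i in operation_order]

# A previous command stored a 2x2 tile row-major; the new command reads it diagonal-first.
row_major = [(0, 0), (0, 1), (1, 0), (1, 1)]
diagonal_first_order = [0, 3, 1, 2]
translated = translate_memory_map(row_major, diagonal_first_order)
print(translated)  # [(0, 0), (1, 1), (0, 1), (1, 0)]
```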
2,100
274,044
15,493,403
2,131
Storage virtualization techniques allow files to be stored remotely, for example, by a cloud storage provider, but in a manner that appears to a user or application running on a local computing device as if the files are stored locally—even though the data of those files may not be resident on the local computing device. That is, the contents of files that may exist in the cloud look and behave as if they were stored locally on a computing device.
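The placeholder mechanism in this application keeps a sparse local data stream plus a per-extent record of what is resident versus remote. The sketch below models a read through such a placeholder using a bitmap of resident extents, in the spirit of claims 6 and 7; the class, the provider callback, and the extent size are illustrative assumptions rather than the actual file-system interfaces.

```python
# Sketch of reading through a placeholder: extents flagged 0 in the bitmap are
# fetched from the storage virtualization provider and written back locally.
EXTENT_SIZE = 4  # bytes per extent, chosen only for the example

class Placeholder:
    def __init__(self, file_name, total_extents, provider):
        self.file_name = file_name
        self.resident = [0] * total_extents   # bitmap: 1 = extent held locally
        self.sparse_stream = {}               # extent index -> bytes held locally
        self.provider = provider              # callable(file_name, extent) -> bytes

    def read_extent(self, extent):
        if not self.resident[extent]:
            data = self.provider(self.file_name, extent)  # request the missing extent
            self.sparse_stream[extent] = data             # hydrate the sparse stream
            self.resident[extent] = 1
        return self.sparse_stream[extent]

def fake_provider(name, extent):
    return f"{name}:{extent}".encode()[:EXTENT_SIZE]

ph = Placeholder("report.docx", total_extents=3, provider=fake_provider)
print(ph.read_extent(1))   # fetched remotely, then cached in the sparse stream
print(ph.resident)         # [0, 1, 0]
```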
1. In a computing device comprising a processor, memory, and secondary storage, the memory storing computer-executable instructions that, when executed by the processor, implement a file system for managing the storage of files on the secondary storage, a method for storage virtualization of files, comprising: storing, on the secondary storage, a placeholder for a file, the file comprising data at least some of which is stored on a network remotely from the secondary storage, the placeholder containing metadata associated with the file, a sparse data stream containing none or some data of the file that is not stored remotely, and information which enables any remotely stored data of the file to be retrieved from the network; receiving, from an application executing on the processor of the computing device, a request to read at least a portion of the data of the file; determining from the information contained in the placeholder whether any of the requested data is stored remotely from the secondary storage; and for any data determined from the placeholder to be stored remotely from the secondary storage, formulating one or more requests to a storage virtualization provider to retrieve the remotely stored data; and transmitting the one or more requests to the storage virtualization provider. 2. The method recited in claim 1, further comprising: receiving, from the storage virtualization provider in response to the one or more requests, the requested data; and issuing a request to the file system of the computing device to write the received data to the sparse data stream of the placeholder on the secondary storage. 3. The method recited in claim 2, further comprising updating the metadata contained in the placeholder to indicate that the placeholder has been modified. 4. The method recited in claim 1, further comprising: receiving, from the storage virtualization provider in response to the one or more requests, the requested data; and providing the requested data directly to the application in response to request to read the portion of the data of the file, without issuing a request to the file system to write the received data to the sparse data stream of the placeholder on the secondary storage. 5. The method recited in claim 1, the information in the placeholder which enables any remotely stored data of the file to be retrieved from the network comprising an identifier associated with the storage virtualization provider and the file name for the file. 6. The method recited in claim 1, the information in the placeholder which enables any remotely stored data of the file to be retrieved from the network comprising a data structure that identifies which extents of data of the file, if any, are stored within the sparse data stream of the placeholder on the secondary storage and which extents of the data of the file are stored remotely from the secondary storage, the determining whether any of the requested data is stored remotely from the secondary storage being performed using the data structure. 7. The method recited in claim 6, the data structure comprises a bitmap having a sequence of bits, each bit representing a different extent of the data of the file. 8. 
The method recited in claim 1, the formulating one or more requests to the storage virtualization provider further comprising: determining whether any portion of the requested data stored remotely from the secondary storage has previously been requested from the storage virtualization provider but not yet received; and trimming the one or more requests to the storage virtualization provider so that the one or more requests do not overlap with any such previously requested but not yet received data. 9. The method recited in claim 1, further comprising: setting a timeout period for the one or more requests transmitted to the storage virtualization provider; and indicating that the one or more requests failed if a response to the one or more requests is not received from the storage virtualization provider before the expiration of the timeout period. 10. In a computing device comprising a processor, memory, and secondary storage, the memory storing computer-executable instructions that, when executed by the processor, implement a file system for managing the storage of files on the secondary storage, a method for storage virtualization of files, comprising: receiving, from a storage virtualization provider, a request to create a placeholder for a file, at least some of the data of the file to be stored on a network remotely from the secondary storage, the request comprising a file name for the file; creating, in response to the request, a zero length file representing the placeholder for the file, the placeholder comprising metadata associated with the file and a sparse data stream containing none or some data of the file; adding to the placeholder information which enables any remotely stored data for the file to be retrieved; and storing the placeholder on the secondary storage of the computing device, the placeholder appearing to an application executing on the processor of the computing device as a regular file managed by the file system. 11. The method recited in claim 10, the information which enables any remotely stored data for the file to be retrieved comprising an identifier associated with the storage virtualization provider and the file name for the file. 12. The method recited in claim 11, the information which enables any remotely stored data for the file to be retrieved further comprising a tag that associates the placeholder with a filter of the file system that is configured to create and manage placeholders for files on the secondary storage. 13. The method recited in claim 10, further comprising marking the placeholder as a sparse file such that the file system will not allocate space on the secondary storage for all of the data of the file. 14. The method recited in claim 10, the information which enables any remotely stored data for the file to be retrieved further comprising a data structure that identifies which extents of the file, if any, are stored within the sparse data stream of the placeholder on the secondary storage and which extents of the file are stored remotely from the secondary storage. 15. The method recited in claim 14, the data structure comprising a bitmap having a sequence of bits, each bit representing a different extent of the file. 16. 
The method recited in claim 10, further comprising: prior to performing the creating, adding, and storing, determining whether an attribute associated with the file indicates that a placeholder should not be created for the file; and preventing the creating, adding, and storing when the attribute indicates that a placeholder should not be created. 17. A computing device comprising a processor, memory, and secondary storage, the memory storing computer executable instructions that, when executed by the processor, implement an architecture for storage virtualization comprising: a storage virtualization provider for retrieving remotely stored file data from a storage location on a network; a storage virtualization filter residing in a file system of the computing device that creates and manages placeholders for files on the secondary storage of the computing device, each placeholder for a file comprising metadata concerning the file but at least some of the data for the file being stored remotely by the storage virtualization provider, the storage virtualization filter notifying the storage virtualization provider of access attempts to files having placeholders on the secondary storage and whose data is managed by the storage virtualization provider and storage virtualization filter; and a library that abstracts details of communications between the storage virtualization provider and the storage virtualization filter. 18. The computing device recited in claim 17, wherein the storage virtualization provider and library execute in user mode on the computing device, and the storage virtualization filter executes in kernel mode. 19. The computing device recited in claim 17, each placeholder further comprising information which enables any remotely stored data for the file it represents to be retrieved by the storage virtualization filter in cooperation with the storage virtualization provider. 20. The computing device recited in claim 17, the information which enables any remotely stored data for the file to be retrieved comprises a data structure that identifies which extents of the file, if any, are stored within a sparse data stream of the placeholder on the secondary storage and which extents of the file are stored remotely from the secondary storage.
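Claim 10 walks through creating a placeholder: a zero-length file is created, marked sparse, and tagged with enough information (a provider identifier, the remote file name, a per-extent record) to retrieve the remote data later. A small dictionary-based sketch of such a record follows; the field names and layout are assumptions for illustration, not the on-disk format used by the file system.

```python
# Sketch of the placeholder-creation steps of claim 10, expressed as a plain record.
def create_placeholder(provider_id, file_name, total_extents):
    return {
        "length": 0,                          # created as a zero-length file
        "sparse": True,                       # marked sparse: no space allocated yet
        "metadata": {"name": file_name},      # ordinary file metadata
        "retrieval_info": {                   # information enabling remote retrieval
            "provider_id": provider_id,
            "remote_name": file_name,
        },
        "resident_bitmap": [0] * total_extents,  # nothing resident locally yet
    }

ph = create_placeholder("cloud-provider-1", "photos/2017/beach.jpg", total_extents=8)
print(ph["sparse"], sum(ph["resident_bitmap"]))  # True 0
```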
Storage virtualization techniques allow files to be stored remotely, for example, by a cloud storage provider, but in a manner that appears to a user or application running on a local computing device as if the files are stored locally—even though the data of those files may not be resident on the local computing device. That is, the contents of files that may exist in the cloud look and behave as if they were stored locally on a computing device.1. In a computing device comprising a processor, memory, and secondary storage, the memory storing computer-executable instructions that, when executed by the processor, implement a file system for managing the storage of files on the secondary storage, a method for storage virtualization of files, comprising: storing, on the secondary storage, a placeholder for a file, the file comprising data at least some of which is stored on a network remotely from the secondary storage, the placeholder containing metadata associated with the file, a sparse data stream containing none or some data of the file that is not stored remotely, and information which enables any remotely stored data of the file to be retrieved from the network; receiving, from an application executing on the processor of the computing device, a request to read at least a portion of the data of the file; determining from the information contained in the placeholder whether any of the requested data is stored remotely from the secondary storage; and for any data determined from the placeholder to be stored remotely from the secondary storage, formulating one or more requests to a storage virtualization provider to retrieve the remotely stored data; and transmitting the one or more requests to the storage virtualization provider. 2. The method recited in claim 1, further comprising: receiving, from the storage virtualization provider in response to the one or more requests, the requested data; and issuing a request to the file system of the computing device to write the received data to the sparse data stream of the placeholder on the secondary storage. 3. The method recited in claim 2, further comprising updating the metadata contained in the placeholder to indicate that the placeholder has been modified. 4. The method recited in claim 1, further comprising: receiving, from the storage virtualization provider in response to the one or more requests, the requested data; and providing the requested data directly to the application in response to request to read the portion of the data of the file, without issuing a request to the file system to write the received data to the sparse data stream of the placeholder on the secondary storage. 5. The method recited in claim 1, the information in the placeholder which enables any remotely stored data of the file to be retrieved from the network comprising an identifier associated with the storage virtualization provider and the file name for the file. 6. The method recited in claim 1, the information in the placeholder which enables any remotely stored data of the file to be retrieved from the network comprising a data structure that identifies which extents of data of the file, if any, are stored within the sparse data stream of the placeholder on the secondary storage and which extents of the data of the file are stored remotely from the secondary storage, the determining whether any of the requested data is stored remotely from the secondary storage being performed using the data structure. 7. 
The method recited in claim 6, the data structure comprises a bitmap having a sequence of bits, each bit representing a different extent of the data of the file. 8. The method recited in claim 1, the formulating one or more requests to the storage virtualization provider further comprising: determining whether for any portion of the requested data stored remotely from the secondary storage has previously been requested from the storage virtualization provider but not yet received; and trimming the one or more requests to the storage virtualization provider so that the one or more requests do not overlap with any such previously requested but not yet received data. 9. The method recited in claim 1, further comprising: setting a timeout period for the one or more requests transmitted to the storage virtualization provider; and indicating that the one or more requests failed if a response to the one or more requests is not received from the storage virtualization provider before the expiration of the timeout period. 10. In a computing device comprising a processor, memory, and secondary storage, the memory storing computer-executable instructions that, when executed by the processor, implement a file system for managing the storage of files on the secondary storage, a method for storage virtualization of files, comprising: receiving, from a storage virtualization provider, a request to create a placeholder for a file, at least some of the data of the file to be stored on a network remotely from the secondary storage, the request comprising a file name for the file; creating, in response to the request, a zero length file representing the placeholder for the file, the placeholder comprising metadata associated with the file and a sparse data stream containing none or some data of the file; adding to the placeholder information which enables any remotely stored data for the file to be retrieved; and storing the placeholder on the secondary storage of the computing device, the placeholder appearing to an application executing on the processor of the storage device as a regular file managed by the file system. 11. The method recited in claim 10, the information which enables any remotely stored data for the file to be retrieved comprising an identifier associated with the storage virtualization provider and the file name for the file. 12. The method recited in claim 11, the information which enables any remotely stored data for the file to be retrieved further comprising a tag that associates the placeholder with a filter of the file system that is configured to create and manage placeholders for files on the secondary storage. 13. The method recited in claim 10, further comprising marking the placeholder as a sparse file such that the file system will not allocate space on the secondary storage for all of the data of the file. 14. The method recited in claim 10, the information which enables any remotely stored data for the file to be retrieved further comprising a data structure that identifies which extents of the file, if any, are stored within the sparse data stream of the placeholder on the secondary storage and which extents of the file are stored remotely from the secondary storage. 15. The method recited in claim 14, the data structure comprising a bitmap having a sequence of bits, each bit representing a different extent of the file. 16. 
The method recited in claim 10, further comprising: prior to performing the creating, adding, and storing, determining whether an attribute associated with the file indicates that a placeholder should not be created for the file; and preventing the creating, adding, and storing when the attribute indicates that a placeholder should not be created. 17. A computing device comprising a processor, memory, and secondary storage, the memory storing computer executable instructions that, when executed by the processor, implement an architecture for storage virtualization comprising: a storage virtualization provider for retrieving remotely stored file data from a storage location on a network; a storage virtualization filter residing in a file system of the computing device that creates and manages placeholders for files on the secondary storage of the computing device, each placeholder for a file comprising metadata concerning the file but at least some of the data for the file being stored remotely by the storage virtualization provider, the storage virtualization filter notifying the storage virtualization provider of access attempts to files having placeholders on the secondary storage and whose data is managed by the storage virtualization provider and storage virtualization filter; and a library that abstracts details of communications between the storage virtualization provider and the storage virtualization filter. 18. The computing device recited in claim 17, wherein the storage virtualization provider and library execute in user mode on the computing device, and the storage virtualization filter executes in kernel mode. 19. The computing device recited in claim 17, each placeholder further comprising information which enables any remotely stored data for the file it represents to be retrieved by the storage virtualization filter in cooperation with the storage virtualization provider. 20. The computing device recited in claim 17, the information which enables any remotely stored data for the file to be retrieved comprises a data structure that identifies which extents of the file, if any, are stored within a sparse data stream of the placeholder on the secondary storage and which extents of the file are stored remotely from the secondary storage.
2,100
274,045
15,492,736
2,131
A cache system stores a number of different datasets. The cache system includes a number of cache units, each in a state associated with one of the datasets. In response to determining that a hit ratio of a cache unit drops below a threshold, the state of the cache unit is changed and the dataset is replaced with that associated with the new state.
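The per-unit threshold logic recited in claim 1 of this record (load the dataset into the unit whose threshold the measured hit ratio fell below, and leave the unit whose threshold still holds untouched) can be pictured with the short sketch below. The CacheUnit class and the numeric thresholds are invented for the illustration and are not taken from the application.

```python
# Sketch of the threshold comparison in claim 1: a shared hit ratio is compared
# against each cache unit's own threshold, and a fresh dataset is loaded only
# into units whose threshold the ratio has fallen below.
class CacheUnit:
    def __init__(self, name, hit_ratio_threshold):
        self.name = name
        self.hit_ratio_threshold = hit_ratio_threshold
        self.dataset = None

def rebalance(hit_ratio, units, new_dataset):
    reloaded = []
    for unit in units:
        if hit_ratio < unit.hit_ratio_threshold:   # below this unit's threshold
            unit.dataset = new_dataset              # replace its working dataset
            reloaded.append(unit.name)
    return reloaded

first = CacheUnit("first", hit_ratio_threshold=0.80)
second = CacheUnit("second", hit_ratio_threshold=0.50)
# A measured ratio of 0.65 is below the first unit's threshold but above the
# second unit's, so only the first unit is reloaded, as in claim 1.
print(rebalance(0.65, [first, second], new_dataset="dataset-2"))  # ['first']
```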
1. A method comprising: determining, by a processing device, that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, loading a dataset into the first cache unit rather than the second cache unit. 2. The method of claim 1, further comprising: loading a first dataset into the first cache unit and the second cache unit prior to determining that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 3. The method of claim 1, further comprising: determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit, loading a third dataset into the second cache unit. 4. The method of claim 1, further comprising: determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit, loading the dataset into the first cache unit. 5. The method of claim 2, further comprising: receiving an alteration to the first dataset; and in response to receiving the alteration, loading an altered first dataset into a third cache unit. 6. The method of claim 1, further comprising: receiving a request to access a data store; and selecting the first cache unit to service the request. 7. The method of claim 1, wherein determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit comprises: identifying a first number indicative of a number of data requests received by the first cache unit and second cache unit within a window, each data request requesting a data unit; identifying a second number indicative of a number of the data requests requesting a data unit in a first dataset within the window; identifying the hit ratio of the first cache unit and second cache unit by dividing the second number by the first number; and comparing the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 8. 
The method of claim 1, further comprising generating a plurality of datasets, wherein generating the plurality of datasets comprises: receiving a first plurality of data requests requesting a respective first plurality of data units; loading the first plurality of data units into the first cache unit; receiving a second plurality of data requests requesting a respective second plurality of data units; determining a hit ratio of the first cache unit in responding to the second plurality of data requests; determining that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, storing the first plurality of data units as a first dataset of the plurality of datasets. 9. A non-transitory computer-readable medium comprising instructions that, when executed by a processing device, cause the processing device to: determine, by the processing device, that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, load a dataset into the first cache unit rather than the second cache unit. 10. The non-transitory computer-readable medium of claim 9, the processing device further to: load a first dataset into the first cache unit and the second cache unit prior to determining that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 11. The non-transitory computer-readable medium of claim 9, the processing device further to: determine that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit, load a third dataset into the second cache unit. 12. The non-transitory computer-readable medium of claim 9, the processing device further to: determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit; and 13. The non-transitory computer-readable medium of claim 10, the processing device further to: receive an alteration to the first dataset; and in response to receiving the alteration, load an altered first dataset into a third cache unit. 14. 
The non-transitory computer-readable medium of claim 9, wherein to determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, the processing device further to: identify a first number indicative of a number of data requests received by the first cache unit and the second cache unit within a window, each data request requesting a data unit; identify a second number indicative of a number of the data requests within the window requesting a data unit in a first dataset; identify the hit ratio of the first cache unit and the second cache unit by dividing the second number by the first number; and compare the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 15. The non-transitory computer-readable medium of claim 9, the processing device further to generate a plurality of datasets, wherein to generate the plurality of datasets, the processing device further to: receive a first plurality of data requests requesting a respective first plurality of data units; load the first plurality of data units into the first cache unit; receive a second plurality of data requests requesting a respective second plurality of data units; determine a hit ratio of the first cache unit in responding to the second plurality of data requests; determine that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, store the first plurality of data units as a first dataset of the plurality of datasets. 16. A system comprising: a memory; and a processing device, operatively coupled to the memory, to: determine that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, load a dataset into the first cache unit rather than the second cache unit. 17. The system of claim 16, the processing device further to: load a first dataset into the first cache unit and the second cache unit prior to determining that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 18. The system of claim 16, wherein the processing device is further to: determine that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to the determination that the hit ratio is below the second hit ratio threshold associated with the second cache unit, load a third dataset into the second cache unit. 19. 
The system of claim 16, wherein to determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, the processing device is to: identify a first number indicative of a number of data requests received by the first cache unit and the second cache unit within a window, each data request requesting a data unit; identify a second number indicative of a number of the data requests requesting a data unit in a first dataset within the window; identify the hit ratio of the first cache unit and the second cache unit by dividing the second number by the first number; and compare the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 20. The system of claim 16, the processing device further to generate a plurality of datasets, wherein to generate the plurality of datasets, the processing device further to: receive a first plurality of data requests requesting a respective first plurality of data units; load the first plurality of data units into the first cache unit; receive a second plurality of data requests requesting a respective second plurality of data units; determine a hit ratio of the first cache unit in responding to the second plurality of data requests; determine that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, store the first plurality of data units as a first dataset of the plurality of datasets.
A cache system stores a number of different datasets. The cache system includes a number of cache units, each in a state associated with one of the datasets. In response to determining that a hit ratio of a cache unit drops below a threshold, the state of the cache unit is changed and the dataset is replaced with that associated with the new state.1. A method comprising: determining, by a processing device, that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, loading a dataset into the first cache unit rather than the second cache unit. 2. The method of claim 1, further comprising: loading a first dataset into the first cache unit and the second cache unit prior to determining the that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 3. The method of claim 1, further comprising: determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit, loading a third dataset into the second cache unit. 4. The method of claim 1, further comprising: determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit, loading the dataset into the first cache unit. 5. The method of claim 2, further comprising: receiving an alteration to the first dataset; and in response to receiving the alteration, loading an altered first dataset into a third cache unit. 6. The method of claim 1, further comprising: receiving a request to access a data store; and selecting the first cache unit units to service the request. 7. The method of claim 1, wherein determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit comprises: identifying a first number indicative of a number of data requests received by the first cache unit and second cache unit within a window, each data request requesting a data unit; identifying a second number indicative of a number of the data requests requesting a data unit in a first dataset within the window; identifying the hit ratio of the first cache unit and second cache unit by dividing the second number by the first number; and comparing the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 8. 
The method of claim 1, further comprising generating a plurality of datasets, wherein generating the plurality of datasets comprises: receiving a first plurality of data requests requesting a respective first plurality of data units; loading the first plurality of data units into the first cache unit; receiving a second plurality of data requests requesting a respective second plurality of data units; determining a hit ratio of the first cache unit in responding to the second plurality of data requests; determining that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, storing the first plurality of data units as a first dataset of the plurality of datasets. 9. A non-transitory computer-readable medium comprising instructions that, when executed by a processing device, cause the processing device to: determine, by the processing device, that a hit ratio is below a hit first ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, load a dataset into the first cache unit rather than the second cache unit. 10. The non-transitory computer-readable medium of claim 9, the processing device further to: load a first dataset into the first cache unit and the second cache unit prior to determining the that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 11. The non-transitory computer-readable medium of claim 9, the processing device further to: determine that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to determining that the hit ratio is below the second hit ratio threshold associated with the second cache unit, load a third dataset into the second cache unit. 12. The non-transitory computer-readable medium of claim 9, the processing device further to: determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and equal to the second hit ratio threshold associated with the second cache unit; and 13. The non-transitory computer-readable medium of claim 10, the processing device further to: receive an alteration to the first dataset; and in response to receiving the alteration, load an altered first dataset into a third cache unit. 14. 
The non-transitory computer-readable medium of claim 9, wherein to determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, the processing device further to: identify a first number indicative of a number of data requests received by the first cache unit and the second cache unit within a window, each data request requesting a data unit; identify a second number indicative of a number of the data requests within the window requesting a data unit in a first dataset; identify the hit ratio of the first cache unit and the second cache unit by dividing the second number by the first number; and compare the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 15. The non-transitory computer-readable medium of claim 9, the processing device further to generate a plurality of datasets, wherein to generate the plurality of datasets, the processing device further to: receive a first plurality of data requests requesting a respective first plurality of data units; load the first plurality of data units into the first cache unit; receive a second plurality of data requests requesting a respective second plurality of data units; determine a hit ratio of the first cache unit in responding to the second plurality of data requests; determine that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, store the first plurality of data units as a first dataset of the plurality of datasets. 16. A system comprising: a memory; and a processing device, operatively coupled to the memory, to: determine that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, load a dataset into the first cache unit rather than the second cache unit. 17. The system of claim 16, the processing device further to: load a first dataset into the first cache unit and the second cache unit prior to determining that the hit ratio is below the first hit ratio threshold and above the second hit ratio threshold, wherein the dataset loaded into the first cache unit rather than the second cache unit is a second dataset. 18. The system of claim 16, wherein the processing device is further to: determine that the hit ratio is below the second hit ratio threshold associated with the second cache unit; and responsive to the determination that the hit ratio is below the second hit ratio threshold associated with the second cache unit, load a third dataset into the second cache unit. 19. 
The system of claim 16, wherein to determine that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, the processing device is to: identify a first number indicative of a number of data requests received by the first cache unit and the second cache unit within a window, each data request requesting a data unit; identify a second number indicative of a number of the data requests requesting a data unit in a first dataset within the window; identify the hit ratio of the first cache unit and the second cache unit by dividing the second number by the first number; and compare the hit ratio to the first hit ratio threshold and the second hit ratio threshold. 20. The system of claim 16, the processing device further to generate a plurality of datasets, wherein to generate the plurality of datasets, the processing device further to: receive a first plurality of data requests requesting a respective first plurality of data units; load the first plurality of data units into the first cache unit; receive a second plurality of data requests requesting a respective second plurality of data units; determine a hit ratio of the first cache unit in responding to the second plurality of data requests; determine that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold; and responsive to the determination that the hit ratio of the first cache unit in responding to the second plurality of data requests is below a dataset generation hit ratio threshold, store the first plurality of data units as a first dataset of the plurality of datasets.
2,100
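The cache-unit record above reduces to a concrete decision rule: claim 7 computes a windowed hit ratio by dividing the number of requests that hit the cached dataset by the total number of requests in the window, and claims 1 and 3 route a replacement dataset to the first or the second cache unit depending on which threshold that ratio has fallen below. The short Python sketch below illustrates that rule; the function names, the example data, and the assumption that the first threshold sits above the second are mine, not details taken from the filing.

```python
# Minimal sketch of the hit-ratio rule in the cache-unit record above.
# All names, and the assumption first_threshold > second_threshold, are
# illustrative rather than taken from the filing.

def windowed_hit_ratio(requests, dataset):
    """Claim 7: count the requests in the window, count those whose data
    unit is in the cached dataset, and divide the second number by the first."""
    if not requests:
        return 1.0
    hits = sum(1 for key in requests if key in dataset)
    return hits / len(requests)


def choose_unit_for_new_dataset(hit_ratio, first_threshold, second_threshold):
    """Claims 1 and 3: a ratio below the first threshold but above the second
    sends the replacement dataset to the first cache unit; a ratio below the
    second threshold sends a further dataset to the second cache unit."""
    if hit_ratio < second_threshold:
        return "second"
    if hit_ratio < first_threshold:
        return "first"
    return None


# Example: 2 of 8 windowed requests hit the currently loaded dataset.
window = ["a", "b", "c", "d", "e", "f", "a", "b"]
loaded = {"a"}
ratio = windowed_hit_ratio(window, loaded)              # 0.25
target = choose_unit_for_new_dataset(ratio, 0.5, 0.2)   # "first"
```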
274,046
15,488,324
2,131
Techniques and mechanisms to efficiently cache data based on compression of such data. The technologies of the present disclosure include cache systems, methods, and computer readable media to support operations performed with data that is compressed prior to being written as a cache line in a cache memory. In some embodiments, a cache controller determines the size of compressed data to be stored as a cache line. The cache controller identifies a logical block address (LBA) range to cache the compressed data, where such identifying is based on the size of the compressed data and on reference information describing multiple LBA ranges of the cache memory. One or more such LBA ranges are of different respective sizes. In other embodiments, LBA ranges of the cache memory concurrently store respective compressed cache lines, wherein the LBA ranges are of different respective sizes.
1. A device comprising: a cache controller including logic, at least a portion of which is in hardware, the logic to: send a first command to cause a compression operation for first data to produce first compressed data; determine a size of the first compressed data; identify a first logical block address (LBA) range of a cache memory, the first LBA range is identified based on the size of the first compressed data and reference information, the reference information to specify a plurality of LBA ranges of the cache memory, the plurality of LBA ranges including LBA ranges of different respective sizes; and based on identification of the first LBA range, the logic to: update the reference information to indicate that the first LBA range is allocated to the first compressed data; and cause the first compressed data to be written to the first LBA range. 2. The device of claim 1, the logic to: determine a size of second compressed data other than any compressed data generated based on the first command, the size of the second compressed data to differ from the size of the first compressed data; identify a second LBA range of the cache memory, the second LBA range is identified based on the size of the second compressed data and the reference information; and based on identification of the second LBA range, the logic to: update the reference information to indicate that the second LBA range is allocated to the second compressed data; and cause the second compressed data to be written to the second LBA range; wherein the first LBA range stores the first compressed data while the second LBA range stores the second compressed data. 3. The device of claim 1, the logic to: associate the first data with a first tag and to include the first tag in the first command. 4. The device of claim 3, the logic to cause the first compressed data to be written to the first LBA range includes the logic to issue a write command to include the first tag and the first LBA range to the cache memory. 5. The device of claim 1, wherein the logic to identify the first LBA range includes the logic to identify second compressed data to evict from the cache memory. 6. The device of claim 1, a storage memory separate from the cache memory is to store a version of the first data while the first LBA range is to store the first compressed data, for separate LBA ranges included in the plurality of LBA ranges, the reference information specifies: an LBA range size for the separate LBA ranges included in the plurality of LBA ranges; and a different respective LBA of the storage memory is mapped to the first LBA range. 7. The device of claim 1, the reference information to identify one or more LBA ranges other than any LBA ranges of the cache memory that currently store valid data, the reference information to further identify a respective size of the LBA range for each of the one or more LBA ranges. 8. The device of claim 1, the reference information specifies, for separate LBA ranges from among the plurality of LBA ranges, a recency of use of the separate LBA ranges, the logic to identify the first LBA range based on a recency of use of the first LBA range. 9. 
The device of claim 1, the logic to identify the first LBA range includes the logic to: identify a subset of the plurality of LBA ranges based on the LBA ranges of the subset separately having a total number of sectors of the LBA range that is equal to a minimum number of sectors sufficient to store the first compressed data; and select the first LBA range from among the subset. 10. The device of claim 9, the logic to select the first LBA range from among the subset based on the first LBA range being a least recently used LBA range of the subset. 11. The device of claim 1, the logic to identify the first LBA range includes the logic to: identify a subset of the plurality of LBA ranges based on the LBA ranges of the subset separately having a total number of sectors of the LBA range that is closest to a minimum number of sectors sufficient to store the first compressed data; and the logic to select the first LBA range from among the subset. 12. A method at a cache controller, the method comprising: sending a first command to cause a compression operation for first data, to produce first compressed data; determining a size of the first compressed data; identifying a first logical block address (LBA) range of a cache memory, the first LBA range is identified based on the size of the first compressed data and reference information, the reference information specifying a plurality of LBA ranges of the cache memory, the plurality of LBA ranges including LBA ranges of different respective sizes; and based on identification of the first LBA range: updating the reference information to indicate that the first LBA range is allocated to the first compressed data; and causing the first compressed data to be written to the first LBA range. 13. The method of claim 12, comprising: determining a size of second compressed data other than any compressed data generated based on the first command, the size of the second compressed data to differ from the size of the first compressed data; identifying a second LBA range of the cache, the second LBA range is identified based on the size of the second compressed data and the reference information; and based on identification of the second LBA range: updating the reference information to indicate that the second LBA range is allocated to the second compressed data; and causing the second compressed data to be written to the second LBA range; wherein the first LBA range stores the first compressed data while the second LBA range stores the second compressed data. 14. The method of claim 12, the reference information specifies, for separate LBA ranges from among the plurality of LBA ranges, a recency of use of the separate LBA ranges, identifying the first LBA range is based on a recency of use of the first LBA range. 15. The method of claim 12, wherein identifying the first LBA range includes: identifying a subset of the plurality of LBA ranges based on the LBA ranges of the subset separately having a total number of sectors of the LBA range that is closest to a minimum number of sectors sufficient to store the first compressed data; and selecting the first LBA range from among the subset. 16. 
A computer-readable storage medium having stored thereon instructions which, when executed by one or more processing units of a system, cause the system to: send a first command to cause a compression operation for first data, to produce first compressed data; determine a size of the first compressed data; identify a first logical block address (LBA) range of a cache memory, the first LBA range is identified based on the size of the first compressed data and reference information, the reference information specifying a plurality of LBA ranges of the cache memory, the plurality of LBA ranges including LBA ranges of different respective sizes; and based on identification of the first LBA range: update the reference information to indicate that the first LBA range is allocated to the first compressed data; and cause the first compressed data to be written to the first LBA range. 17. The computer-readable storage medium of claim 16, comprising the instructions to further cause the system to: determine a size of second compressed data other than any compressed data generated based on the first command, the size of the second compressed data to differ from the size of the first compressed data; identify a second LBA range of the cache, the second LBA range is identified based on the size of the second compressed data and the reference information; and based on identification of the second LBA range: update the reference information to indicate that the second LBA range is allocated to the second compressed data; and cause the second compressed data to be written to the second LBA range; wherein the first LBA range stores the first compressed data while the second LBA range stores the second compressed data. 18. The computer-readable storage medium of claim 16, wherein the reference information identifies one or more LBA ranges other than any LBA ranges of the cache memory that currently store valid data, and the reference information further identifies a respective size of an LBA range for the one or more LBA ranges. 19. The computer-readable storage medium of claim 16, the reference information specifies, for separate LBA ranges from among the plurality of LBA ranges, a recency of use of the separate LBA ranges, the first LBA range identified based on a recency of use of the first LBA range. 20. The computer-readable storage medium of claim 16, the instructions to cause the system to identify the first LBA range includes the system to: identify a subset of the plurality of LBA ranges based on the LBA ranges of the subset separately having a total number of sectors of the LBA range that is equal to a minimum number of sectors sufficient to store the first compressed data; and select the first LBA range from among the subset. 21. The computer-readable storage medium of claim 20, the instructions to cause the system to select the first LBA range from among only the subset based on the first LBA range being a least recently used LBA range of the subset. 22. 
A system comprising: a cache device including a cache memory; a cache controller coupled to the cache device, the cache controller including logic, at least a portion of which is in hardware, the logic to: send a first command to cause a compression operation for first data to produce first compressed data; determine a size of the first compressed data; identify a first logical block address (LBA) range of a cache memory, the first LBA range is identified based on the size of the first compressed data and reference information, the reference information to specify a plurality of LBA ranges of the cache memory, the plurality of LBA ranges including LBA ranges of different respective sizes; and based on identification of the first LBA range, the logic to: update the reference information to indicate that the first LBA range is allocated to the first compressed data; and cause the first compressed data to be written to the first LBA range. 23. The system of claim 22, the logic of the cache controller further to: determine a size of second compressed data other than any compressed data generated based on the first command, the size of the second compressed data to differ from the size of the first compressed data; identify a second LBA range of the cache memory, the second LBA range identified based on the size of the second compressed data and the reference information; and based on identification of the second LBA range, the logic to: update the reference information to indicate that the second LBA range is allocated to the second compressed data; and cause the second compressed data to be written to the second LBA range; wherein the first LBA range stores the first compressed data while the second LBA range stores the second compressed data. 24. The system of claim 22, the reference information specifies, for separate LBA ranges from among the plurality of LBA ranges, a recency of use of the separate LBA ranges, the logic to identify the first LBA range based on a recency of use of the first LBA range. 25. The system of claim 22, the logic of the cache controller to identify the first LBA range includes the logic to: identify a subset of the plurality of LBA ranges based on the LBA ranges of the subset separately having a total number of sectors of the LBA range that is equal to a minimum number of sectors sufficient to store the first compressed data; and select the first LBA range from among the subset.
2,100
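Application 15,488,324 above describes placing compressed cache lines into LBA ranges of different sizes, guided by reference information that records each range's size, allocation state, and recency of use. The Python sketch below is one plausible reading of that selection logic; the 512-byte sector size, the zlib compressor, and all class and function names are assumptions made for illustration rather than details from the filing.

```python
# Illustrative sketch of size-aware LBA-range allocation for compressed cache
# lines, loosely following claims 1, 5, 8, 9, 10 and 11 of the record above.
# Sector size, zlib, and every name here are assumptions for the example only.
import time
import zlib

SECTOR_BYTES = 512  # assumed sector size


class LbaRange:
    def __init__(self, start_lba, sectors):
        self.start_lba = start_lba
        self.sectors = sectors        # size of this range, in sectors
        self.allocated_to = None      # tag of the compressed line stored here
        self.last_used = 0.0          # recency of use (claim 8)


def sectors_needed(payload):
    """Minimum number of sectors sufficient to hold the compressed payload."""
    return -(-len(payload) // SECTOR_BYTES)  # ceiling division


def allocate_range(reference_info, compressed, tag):
    """Pick a free range whose size equals the minimum needed (claim 9), breaking
    ties by least recent use (claim 10); otherwise fall back to the free range
    closest in size (claim 11), then record the allocation (claim 1)."""
    need = sectors_needed(compressed)
    free = [r for r in reference_info if r.allocated_to is None and r.sectors >= need]
    if not free:
        return None  # a real controller would evict an existing line here (claim 5)
    exact = [r for r in free if r.sectors == need]
    candidates = exact if exact else [min(free, key=lambda r: r.sectors)]
    chosen = min(candidates, key=lambda r: r.last_used)
    chosen.allocated_to = tag
    chosen.last_used = time.monotonic()
    return chosen


# Usage: compress first, then place the line according to its compressed size.
reference_info = [LbaRange(0, 2), LbaRange(2, 4), LbaRange(6, 8)]
compressed = zlib.compress(b"example cache line" * 40)
chosen = allocate_range(reference_info, compressed, tag="line-0")
```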
274,047
15,959,532
1,673
Provided herein are methods and compositions related to treating and/or preventing kidney related diseases and disorders, treating and/or preventing acute kidney injury, and for improving kidney health in a subject by administering to the subject (e.g., orally administering to the subject) a composition comprising nicotinamide riboside and/or pterostilbene.
1. A method of treating or preventing kidney damage in a subject comprising administering to the subject a composition comprising nicotinamide riboside. 2. The method of claim 1, wherein the composition further comprises pterostilbene. 3. The method of claim 1 or claim 2, wherein the kidney damage is the result of decreased blood flow to the kidneys, back up of urine in the kidneys, sepsis, trauma, an autoimmune disease, cancer, drug-induced nephrotoxicity, or severe dehydration. 4. The method of any one of claims 1 to 3, wherein the kidney damage is caused by acute kidney injury. 5. A method of treating or preventing acute kidney injury in a subject comprising administering to the subject a composition comprising nicotinamide riboside. 6. The method of claim 5, wherein the composition further comprises pterostilbene. 7. The method of claim 5 or 6, wherein the acute kidney injury is a result of decreased blood flow to the kidneys. 8. The method of claim 7, wherein the decreased blood flow is the result of hypotension, blood loss, severe diarrhea, heart attack, heart failure, decreased heart function, organ failure, drug-induced nephrotoxicity, trauma, or surgery. 9. The method of claim 8, wherein the drug-induced nephrotoxicity is NSAID-induced nephrotoxicity. 10. The method of claim 5 or 6, wherein the acute kidney injury is a result of cancer, sepsis, vasculitis, interstitial nephritis, scleroderma, tubular necrosis, glomerulonephritis, or thrombotic microangiopathy. 11. The method of claim 10, wherein the cancer is multiple myeloma. 12. The method of claim 5 or 6, wherein the acute kidney injury is the result of blockage of the urinary tract. 13. The method of claim 12, wherein the blockage is caused by neurogenic bladder, retroperitoneal fibrosis, bladder cancer, prostate cancer, cervical cancer, an enlarged prostate, kidney stones, blood clots, or tumors. 14. A method of treating kidney disease in a subject comprising administering to the subject a composition comprising nicotinamide riboside. 15. The method of claim 14, wherein the composition further comprises pterostilbene. 16. The method of claim 14 or 15, wherein the kidney disease is the result of diabetes or hypertension. 17. The method of claim 14 or 15, wherein the kidney disease is the result of a systemic disease, a viral disease, urinary tract infections, polycystic kidney disease, or a condition resulting in inflammation of glomeruli. 18. The method of claim 17, wherein the systemic disease is lupus. 19. A method of increasing blood flow to the kidneys in a subject comprising administering to the subject a composition comprising nicotinamide riboside. 20. The method of claim 19, wherein the subject has acute kidney injury, kidney damage, or kidney disease. 21. The method of claim 16, wherein the composition further comprises pterostilbene. 22. The method of any one of claims 1 to 21, wherein the administration of the composition comprises administering one or more doses of the composition. 23. The method of claim 22, wherein each dose of the composition comprises at least 200 mg of nicotinamide riboside. 24. The method of claim 22, wherein each dose of the composition comprises at least 250 mg of nicotinamide riboside. 25. The method of claim 22, wherein each dose of the composition comprises at least 300 mg of nicotinamide riboside. 26. The method of claim 22, wherein each dose of the composition comprises at least 350 mg of nicotinamide riboside. 27. 
The method of claim 22, wherein each dose of the composition comprises at least 400 mg of nicotinamide riboside. 28. The method of claim 22, wherein each dose of the composition comprises at least 450 mg of nicotinamide riboside. 29. The method of claim 22, wherein each dose of the composition comprises at least 500 mg of nicotinamide riboside. 30. The method of claim 22, wherein each dose of the composition comprises at least 550 mg of nicotinamide riboside. 31. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 15 mg of pterostilbene. 32. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 25 mg of pterostilbene. 33. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 50 mg of pterostilbene. 34. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 75 mg of pterostilbene. 35. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 100 mg of pterostilbene. 36. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 125 mg of pterostilbene. 37. The method of any one of claims 22 to 30, wherein each dose of the composition comprises at least 150 mg of pterostilbene. 38. The method of any one of claims 22 to 37, wherein two or more doses of the composition are administered. 39. The method of any one of claims 22 to 38, wherein thirty or more doses of the composition are administered. 40. The method of any one of claims 22 to 39, wherein fifty or more doses of the composition are administered. 41. The method of any one of claims 22 to 40, wherein one hundred or more doses of the composition are administered. 42. The method of any one of claims 22 to 41, wherein the dose of the composition is administered at least once a week. 43. The method of any one of claims 22 to 41, wherein the dose is administered at least twice a week. 44. The method of any one of claims 22 to 41, wherein the dose is administered at least three times a week. 45. The method of any one of claims 22 to 41, wherein the dose is administered at least once a day. 46. The method of any one of claims 22 to 41, wherein the dose is administered at least twice a day. 47. The method of any one of claims 42 to 46, wherein the doses are administered for at least 7 days. 48. The method of any one of claims 42 to 46, wherein the doses are administered for at least 30 days. 49. The method of any one of claims 42 to 46, wherein the doses are administered for at least 60 days. 50. The method of any one of claims 42 to 46, wherein the doses are administered for at least 90 days. 51. The method of any one of claims 1 to 50, wherein the composition is formulated as a pill, a tablet, or a capsule. 52. The method of any one of claims 1 to 51, wherein the composition is administered orally. 53. The method of any one of claims 1 to 52, wherein the composition is self-administered.
1,600
274,048
15,521,427
1,673
The present invention provides a skin barrier function-improving agent including an inositol derivative as an active ingredient in which inositol and saccharide are bonded, and a composition for improving a skin barrier function including the above skin barrier function-improving agent and a pharmacologically acceptable carrier.
1. A skin barrier function-improving agent comprising an inositol derivative as an active ingredient in which inositol and saccharide are bonded. 2. The skin barrier function-improving agent according to claim 1, wherein the saccharide is monosaccharide or oligosaccharide. 3. The skin barrier function-improving agent according to claim 2, wherein the monosaccharide is glucose. 4. The skin barrier function-improving agent according to claim 2, wherein the oligosaccharide contains glucose as a structural unit. 5. The skin barrier function-improving agent according to claim 1, wherein the inositol is myo-inositol. 6. The skin barrier function-improving agent according to claim 1, which promotes the production of TRPV4. 7. The skin barrier function-improving agent according to claim 1, which promotes the production of claudins. 8. The skin barrier function-improving agent according to claim 1, which promotes the production of occludin. 9. A composition for improving a skin barrier function comprising: the skin barrier function-improving agent according to claim 1; and a pharmacologically acceptable carrier. 10. The composition for improving a skin barrier function according to claim 9, wherein the amount of the skin barrier function-improving agent is 0.01 to 50% by mass. 11. The composition for improving a skin barrier function according to claim 9, which is a skin external agent. 12. The composition for improving a skin barrier function according to claim 9, which is a cosmetic.
1,600
274,049
15,520,735
1,673
The present invention relates to hydroxy-triglycerides, their synthesis, a pharmaceutical and/or nutraceutical composition which comprises at least one of said hydroxy-triglycerides, and a method which comprises the administration to a patient of a therapeutically effective quantity of at least one of said hydroxy-triglycerides or at least one of said pharmaceutical and/or nutraceutical compositions, for the prevention and/or treatment of at least one disease selected from cancer, metabolic/cardiovascular diseases, and/or neurological/inflammatory diseases.
1. Compound of Formula I: 2. The compound of Formula I, according to claim 1, wherein said hydrocarbon moieties R1, R2 and R3 comprise, each and independently, an aliphatic chain comprising between 5 and 20 carbon atoms. 3. The compound of Formula I, according to claim 1, wherein said hydrocarbon moieties R1, R2 and R3 comprise, each and independently, an aliphatic chain comprising between 16 and 20 carbon atoms. 4. The compound of Formula I, according to claim 1, wherein a is a whole number between 1 and 6; b is a whole number between 2 and 6; and c is a whole number chosen from 0 and 3. 5. The compound of Formula I, according to claim 1, wherein R1, R2 and R3 are chosen, each and independently, from (CH2)6—(CH═CH—CH2)1—(CH2)6—CH3, (CH2)6—(CH═CH—CH2)2—(CH2)3—CH3, (CH2)6—(CH═CH—CH2)3—CH3, (CH2)3—(CH═CH—CH2)3—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)4—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)5—CH3, and CH2—(CH═CH—CH2)6—CH3. 6. The compound of Formula I, according to claim 1, wherein R1, R2 and R3 are chosen, each and independently, from (CH2)6—(CH═CH—CH2)2—(CH2)3—CH3, (CH2)6—(CH═CH—CH2)3—CH3, (CH2)3—(CH═CH—CH2)3—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)4—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)5—CH3, and CH2—(CH═CH—CH2)6—CH3. 7. Method for the production of a compound of Formula I, according to claim 1, wherein said method comprises: A) formation of a 2-hydroxy-protected fatty acid of Formula III 8. A method of preventing and/or treating at least one disease, wherein said at least one disease is chosen from cancer, metabolic/cardiovascular diseases, and/or neurological/inflammatory diseases, said method comprising using a compound of Formula I: 9. The method according to claim 8, wherein said hydrocarbon moieties R1, R2 and R3 comprise, each and independently, an aliphatic chain of between 5 and 22 carbon atoms of Formula II: —(CH2)a—(CH═CH—CH2)b—(CH2)c—CH3   II wherein a is a whole number between 1 and 6; b is a whole number between 0 and 6; and c is a whole number between 0 and 6. 10. The method according to claim 8, wherein R1, R2 and R3 are chosen, each and independently, from (CH2)4—CH3, (CH2)6—(CH═CH—CH2)1—(CH2)6—CH3, (CH2)6—(CH═CH—CH2)2—(CH2)3—CH3, (CH2)6—(CH═CH—CH2)3—CH3, (CH2)3—(CH═CH—CH2)3—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)4—(CH2)3—CH3, (CH2)2—(CH═CH—CH2)5—CH3, and CH2—(CH═CH—CH2)6—CH3. 11. The method according to claim 8, wherein at least one disease is selected from the group consisting of: a) a cancer selected from the group consisting of lung cancer, breast cancer, prostate cancer, leukaemias, gliomas, brain tumours, pancreatic cancer, liver cancer, cervical cancer, neuroendocrine cancer, mesotheliomas, male gonadal tumors, female gonadal tumours, head cancer, neck cancer, kidney tumours, and melanoma; b) a metabolic/cardiovascular disease selected from the group consisting of hypertension, atherosclerosis, arteriosclerosis, heart attacks, ictus, arrhythmia, hypertriglyceridemia, hypercholesterolemia, dyslipidaemias, obesity, diabetes, and metabolic syndrome; and c) a neurological/inflammatory disease selected from the group consisting of Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, multiple sclerosis, spinal injury, adult polyglucosan body disease, depression, anxiety, pain, schizophrenia, insomnia, general inflammation, uveitis, rheumatism, inflammatory processes derived from arthritis, arthrosis, and aging. 12. 
The method according to claim 8, wherein at least one disease is selected from the group consisting of: a) a cancer selected from the group consisting of lung cancer, breast cancer, prostate cancer, leukaemias, gliomas, pancreatic cancer, liver cancer, cervical cancer, and neuroendocrine cancer; b) a metabolic/cardiovascular disease selected from the group consisting of hypertension, hypertriglyceridemia, hypercholesterolemia, obesity, and diabetes; and c) a neurological/inflammatory disease selected from the group consisting of Alzheimer's disease, and adult polyglucosan body disease. 13. Pharmaceutical and/or nutraceutical composition which comprises a) at least one compound of Formula I, according to claim 1; and b) at least one excipient. 14. Pharmaceutical and/or nutraceutical composition, which comprises a) at least two different compounds of Formula I, according to claim 1; and b) at least one excipient. 15. Method of preparation of a pharmaceutical and/or nutraceutical composition, which comprises mixing a) at least one compound of Formula I, according to claim 1; and b) at least one excipient.
1,600
274,050
15,489,186
1,673
Hydroxy aliphatic substituted phenyl aminoalkyl ether compounds of formula (I), and compositions thereof, are useful as a medicament in the treatment of nervous system diseases and/or the treatment of developmental, behavioral and/or mental disorders associated with cognitive deficits.
1-28. (canceled) 29. A method of increasing neuroplasticity in a subject in need thereof, said method comprising administering to said subject a compound having formula (I) 30. The method of claim 29, wherein the subject in need thereof is affected by at least one disorder selected from the group consisting of fragile X syndrome, Down syndrome, Angelman syndrome, Rett syndrome, autistic disorders, Asperger syndrome, bipolar disorder, schizophrenia, cerebral dementias, post traumatic stress disorders, Pick's disease, sleep disorders, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, frontotemporal dementia, Friedrich's ataxia, epilepsy, stroke, depression, neuropathic pain or fibromyalgia, brain injury, Creutzfeldt-Jakob disease, frontotemporal lobar degeneration, Lewy body disease, multiple sclerosis, multiple system atrophy, spinal and bulbar muscular atrophy, spinal cord injuries, spinocerebellar ataxias, progressive supranuclear palsy, and tuberous sclerosis. 31. The method of claim 29, wherein the subject in need thereof is affected by at least one disorder selected from the group consisting of fragile X syndrome, Down syndrome, Angelman syndrome, Rett syndrome, autistic disorders, Asperger syndrome, bipolar disorder, schizophrenia, cerebral dementias, post traumatic stress disorders, Pick's disease, sleep disorders, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, frontotemporal dementia, Friedrich's ataxia, epilepsy, stroke, depression, and neuropathic pain or fibromyalgia. 32. The method of claim 29, wherein the subject in need thereof is affected by fragile X syndrome. 33. The method of claim 29, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, A is a biradical selected from the group consisting of C1-C6 alkylene; and a) R5 is selected from the group consisting of H and an 8-, 9- or 10-membered bicyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, C1-C8, alkyl; R6 is selected from the group consisting of H; C1-C3 alkyl and C1-C8, acyl; and R7 is selected from the group consisting of H and C1-C3 alkyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—. 34. 
The method of claim 33, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R4 is selected from the group consisting of: H; a C6-C10 aryl, optionally substituted by at least one group selected from the group consisting of halogen, —OH, and C1-C8 alkyl; and a 5- or 6-membered monocyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, and C1-C8 alkyl; a) R5 is selected from the group consisting of H and an 8-, 9- or 10-membered bicyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, C1-C8 alkyl; R6 is selected from the group consisting of H and C1-C3 alkyl; and R7 is selected from the group consisting of H and C1-C3 alkyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—; R8 is selected from the group consisting of H; acetyl; valproyl and lipoyl; R11 is a phenyl group, optionally substituted by at least one substituent selected from the group consisting of halogen and —OH; and m is an integer selected from the group consisting of 3 and 4. 35. The method of claim 34, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R1 and R3 represent hydrogen atoms; R2 is a group A-OR8, wherein A represents a C1-C6 alkylene biradical; R4 is selected from the group consisting of H; a C6-C10 aryl; and a 5- or 6-membered monocyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; and: a) R5 is H; R6 is selected from the group consisting of H and C1-C3 alkyl; and R7 is selected from the group consisting of H and methyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—; R8 is H; and R11 is a phenyl group optionally substituted by at least one halogen atom. 36. The method of claim 35, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R2 is a group A-OR8; wherein A represents an ethylene biradical and R8 represents a hydrogen atom; R4 represents a phenyl group; R5 and R6 represent hydrogen atoms; R7 represents a methyl group; and n has a value of 1. 37. 
The method of claim 29, wherein the compound of formula (I) is selected from the group consisting of: a) 1-[2-[4-(2-Hydroxyethyl)phenoxy]ethyl]pyrrolidin-2-one; b) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; c) (R)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; d) (S)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; e) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol L-ascorbic acid salt; f) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol ferulic acid salt; g) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol caffeic acid salt; h) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol valproic acid salt; i) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol (R)-lipoic acid salt; j) 4-[3-(Methylamino)-1-phenylpropoxy]phenethyl acetate hydrochloride; k) N-[3-[4-(2-Hydroxyethyl)phenoxy]-3-phenylpropyl]-N-methyl-2-propylpentanamide; l) 2-[4-(3-Amino-1-phenylpropoxy)phenyl]ethanol hydrochloride; m) (R)-2-[4-[3-(Methylamino)-1-(thiophen-2-yl)propoxy]phenyl]ethanol; n) 4-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]butan-1-ol hydrochloride; o) (E)-tert-Butyl [3-[4-(3-hydroxyprop-1-enyl)phenoxy]-3-phenylpropyl](methyl) carbamate; p) 3-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]propan-1-ol hydrochloride; q) [4-[3-(Methylamino)-1-phenylpropoxy]phenyl]methanol hydrochloride; r) 2-[4-(Morpholin-2-ylmethoxy)phenyl]ethanol hydrochloride; s) [5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dimethanol hydrochloride; t) 2,2′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]diethanol; u) 3,3′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dipropan-1-ol; v) 4,4′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dibutan-1-ol; w) (2R,3S,4S,5R,6R)-2-(Hydroxymethyl)-6-[4-[3-(methylamino)-1-phenylpropoxy]phenethoxy]tetrahydro-2H-pyran-3,4,5-triol; x) 2-[4-(3-Dimethylamino-1-phenylpropoxy)phenyl]ethanol; y) 4-[4-(3-Methylamino-1-phenylpropoxy)phenyl]but-2-en-1-ol; z) 6-[4-(3-Methylamino-1-phenylpropoxy)phenyl]hexan-1-ol; aa) 6-[4-(3-Methylamino-1-phenylpropoxy)phenyl]hex-5-en-1-ol; bb) (S)-2-[4-(3-Methylamino-1-thiophen-2-ylpropoxy)phenyl]ethanol; cc) 2-[4-(Morpholin-2-yl(phenyl)methoxy)phenyl]ethanol; dd) 2-[4-[[(3S,4R)-4-(4-Fluorophenyl)piperidin-3-yl]methoxy]phenyl]ethanol; ee) 2-[2-Dimethylamino-1-[4-(2-hydroxyethyl)phenoxy]ethyl]cyclohexanol; ff) 1-[2-Dimethylamino-1-[4-(2-hydroxyethyl)phenoxy]-ethyl]cyclohexanol; gg) 2-[4-[2-(4-Fluoroindol-1-yl)-4-methylaminobutoxy]phenyl]ethanol; hh) 2-Propylpentanoic acid 2-[4-(3-methylamino-1-phenylpropoxy)phenyl] ethyl ester, ii) 5(R)-[1,2]Dithiolan-3-ylpentanoic acid 2-[4-(3-methylamino-1-phenylpropoxy)phenyl]ethyl ester, salts thereof; and stereoisomers thereof. 38. The method of claim 37, wherein the compound of formula (I) is 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol, a salt thereof, or a stereoisomer thereof. 39. The method of claim 37, wherein the compound of formula (I) is 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride or a stereoisomer thereof. 40. The method of claim 37, wherein the compound of formula (I) is (R)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride. 41. The method of claim 37, wherein the compound of formula (I) is (S)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride. 42. A method of treating a subject suffering from a neurological disorder, said method comprising administering to said subject a compound having formula (I) 43. 
The method of claim 42, wherein said neurological disorder is impaired learning ability. 44. The method of claim 42, wherein said neurological disorder is impaired memory or memory recognition. 45. The method of claim 42, wherein said neurological disorder is depression. 46. The method of claim 42, wherein said neurological disorder is fragile X syndrome.
Hydroxy aliphatic substituted phenyl aminoalkyl ether compounds of formula (I), and compositions thereof, are useful as a medicament in the treatment of nervous system diseases and/or the treatment of developmental, behavioral and/or mental disorders associated with cognitive deficits.1-28. (canceled) 29. A method of increasing neuroplasticity in a subject in need thereof, said method comprising administering to said subject a compound having formula (I) 30. The method of claim 29, wherein the subject in need thereof is affected by at least one disorder selected from the group consisting of fragile X syndrome, Down syndrome, Angelman syndrome, Rett syndrome, autistic disorders, Asperger syndrome, bipolar disorder, schizophrenia, cerebral dementias, post traumatic stress disorders, Pick's disease, sleep disorders, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, frontotemporal dementia, Friedrich's ataxia, epilepsy, stroke, depression, neuropathic pain or fibromyalgia, brain injury, Creutzfeldt-Jakob disease, frontotemporal lobar degeneration, Lewy body disease, multiple sclerosis, multiple system atrophy, spinal and bulbar muscular atrophy, spinal cord injuries, spinocerebellar ataxias, progressive supranuclear palsy, and tuberous sclerosis. 31. The method of claim 29, wherein the subject in need thereof is affected by at least one disorder selected from the group consisting of fragile X syndrome, Down syndrome, Angelman syndrome, Rett syndrome, autistic disorders, Asperger syndrome, bipolar disorder, schizophrenia, cerebral dementias, post traumatic stress disorders, Pick's disease, sleep disorders, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, frontotemporal dementia, Friedrich's ataxia, epilepsy, stroke, depression, and neuropathic pain or fibromyalgia. 32. The method of claim 29, wherein the subject in need thereof is affected by fragile X syndrome. 33. The method of claim 29, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, A is a biradical selected from the group consisting of C1-C6 alkylene; and a) R5 is selected from the group consisting of H and an 8-, 9- or 10-membered bicyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, C1-C8, alkyl; R6 is selected from the group consisting of H; C1-C3 alkyl and C1-C8, acyl; and R7 is selected from the group consisting of H and C1-C3 alkyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—. 34. 
The method of claim 33, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R4 is selected from the group consisting of: H; a C6-C10 aryl, optionally substituted by at least one group selected from the group consisting of halogen, —OH, and C1-C8 alkyl; and a 5- or 6-membered monocyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, and C1-C8 alkyl; a) R5 is selected from the group consisting of H and an 8-, 9- or 10-membered bicyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; said heteroaryl being optionally substituted by at least one group selected from the group consisting of halogen, —OH, C1-C8 alkyl; R6 is selected from the group consisting of H and C1-C3 alkyl; and R7 is selected from the group consisting of H and C1-C3 alkyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—; R8 is selected from the group consisting of H; acetyl; valproyl and lipoyl; R11 is a phenyl group, optionally substituted by at least one substituent selected from the group consisting of halogen and —OH; and m is an integer selected from the group consisting of 3 and 4. 35. The method of claim 34, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R1 and R3 represent hydrogen atoms; R2 is a group A-OR8, wherein A represents a C1-C6 alkylene biradical; R4 is selected from the group consisting of H; a C6-C10 aryl; and a 5- or 6-membered monocyclic heteroaryl containing 1 or 2 heteroatoms, said heteroatoms being independently selected from the group consisting of N, O and S; and: a) R5 is H; R6 is selected from the group consisting of H and C1-C3 alkyl; and R7 is selected from the group consisting of H and methyl; or b) R5 and R6 taken together form a group selected from the group consisting of —O—CH2—CH2— and —CH(R11)—CH2—CH2—; and R7 is H; or c) R5 is H; and R6 and R7 taken together form a group —(CH2)m—CO—; R8 is H; and R11 is a phenyl group optionally substituted by at least one halogen atom. 36. The method of claim 35, wherein in the compound of formula (I) or the pharmaceutically acceptable salt or stereoisomer thereof, R2 is a group A-OR8; wherein A represents an ethylene biradical and R8 represents a hydrogen atom; R4 represents a phenyl group; R5 and R6 represent hydrogen atoms; R7 represents a methyl group; and n has a value of 1. 37. 
The method of claim 29, wherein the compound of formula (I) is selected from the group consisting of: a) 1-[2-[4-(2-Hydroxyethyl)phenoxy]ethyl]pyrrolidin-2-one; b) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; c) (R)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; d) (S)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride; e) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol L-ascorbic acid salt; f) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol ferulic acid salt; g) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol caffeic acid salt; h) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol valproic acid salt; i) 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol (R)-lipoic acid salt; j) 4-[3-(Methylamino)-1-phenylpropoxy]phenethyl acetate hydrochloride; k) N-[3-[4-(2-Hydroxyethyl)phenoxy]-3-phenylpropyl]-N-methyl-2-propylpentanamide; l) 2-[4-(3-Amino-1-phenylpropoxy)phenyl]ethanol hydrochloride; m) (R)-2-[4-[3-(Methylamino)-1-(thiophen-2-yl)propoxy]phenyl]ethanol; n) 4-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]butan-1-ol hydrochloride; o) (E)-tert-Butyl [3-[4-(3-hydroxyprop-1-enyl)phenoxy]-3-phenylpropyl](methyl) carbamate; p) 3-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]propan-1-ol hydrochloride; q) [4-[3-(Methylamino)-1-phenylpropoxy]phenyl]methanol hydrochloride; r) 2-[4-(Morpholin-2-ylmethoxy)phenyl]ethanol hydrochloride; s) [5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dimethanol hydrochloride; t) 2,2′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]diethanol; u) 3,3′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dipropan-1-ol; v) 4,4′-[5-[3-(Methylamino)-1-phenylpropoxy]-1,3-phenylene]dibutan-1-ol; w) (2R,3S,4S,5R,6R)-2-(Hydroxymethyl)-6-[4-[3-(methylamino)-1-phenylpropoxy]phenethoxy]tetrahydro-2H-pyran-3,4,5-triol; x) 2-[4-(3-Dimethylamino-1-phenylpropoxy)phenyl]ethanol; y) 4-[4-(3-Methylamino-1-phenylpropoxy)phenyl]but-2-en-1-ol; z) 6-[4-(3-Methylamino-1-phenylpropoxy)phenyl]hexan-1-ol; aa) 6-[4-(3-Methylamino-1-phenylpropoxy)phenyl]hex-5-en-1-ol; bb) (S)-2-[4-(3-Methylamino-1-thiophen-2-ylpropoxy)phenyl]ethanol; cc) 2-[4-(Morpholin-2-yl(phenyl)methoxy)phenyl]ethanol; dd) 2-[4-[[(3S,4R)-4-(4-Fluorophenyl)piperidin-3-yl]methoxy]phenyl]ethanol; ee) 2-[2-Dimethylamino-1-[4-(2-hydroxyethyl)phenoxy]ethyl]cyclohexanol; ff) 1-[2-Dimethylamino-1-[4-(2-hydroxyethyl)phenoxy]-ethyl]cyclohexanol; gg) 2-[4-[2-(4-Fluoroindol-1-yl)-4-methylaminobutoxy]phenyl]ethanol; hh) 2-Propylpentanoic acid 2-[4-(3-methylamino-1-phenylpropoxy)phenyl] ethyl ester, ii) 5(R)-[1,2]Dithiolan-3-ylpentanoic acid 2-[4-(3-methylamino-1-phenylpropoxy)phenyl]ethyl ester, salts thereof; and stereoisomers thereof. 38. The method of claim 37, wherein the compound of formula (I) is 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol, a salt thereof, or a stereoisomer thereof. 39. The method of claim 37, wherein the compound of formula (I) is 2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride or a stereoisomer thereof. 40. The method of claim 37, wherein the compound of formula (I) is (R)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride. 41. The method of claim 37, wherein the compound of formula (I) is (S)-2-[4-[3-(Methylamino)-1-phenylpropoxy]phenyl]ethanol hydrochloride. 42. A method of treating a subject suffering from a neurological disorder, said method comprising administering to said subject a compound having formula (I) 43. 
The method of claim 42, wherein said neurological disorder is impaired learning ability. 44. The method of claim 42, wherein said neurological disorder is impaired memory or memory recognition. 45. The method of claim 42, wherein said neurological disorder is depression. 46. The method of claim 42, wherein said neurological disorder is fragile X syndrome.
1,600
274,051
15,519,471
1,673
Disclosed is a simplified, readily scalable series of individual methods that collectively constitute a method for the synthesis of C2′epiAmB, an efficacious and reduced-toxicity derivative of amphotericin B (AmB), beginning from AmB. Also provided are various compounds corresponding to intermediates in accordance with the series of methods.
1. A compound, represented by 2. The compound of claim 1, wherein Ra is 2-alken-1-yl. 3. The compound of claim 1, wherein R2 is 2-alken-1-yl. 4. The compound of claim 1, wherein R3 is substituted or unsubstituted aryl. 5. The compound of claim 1, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; and R3 is substituted or unsubstituted aryl. 6. The compound of claim 1, wherein Ra is 2-propen-1-yl. 7. The compound of claim 1, wherein R2 is 2-propen-1-yl. 8. The compound of claim 1, wherein R3 is para-methoxyphenyl (PMP). 9. The compound of claim 1, represented by 10. A compound, represented by 11. The compound of claim 10, wherein Ra is 2-alken-1-yl. 12. The compound of claim 10, wherein R2 is 2-alken-1-yl. 13. The compound of claim 10, wherein R3 is substituted or unsubstituted aryl. 14. The compound of claim 10, wherein Rc is substituted or unsubstituted phenyl. 15. The compound of claim 10, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rc is substituted or unsubstituted phenyl. 16. The compound of claim 10, wherein Ra is 2-propen-1-yl. 17. The compound of claim 10, wherein R2 is 2-propen-1-yl. 18. The compound of claim 10, wherein R3 is para-methoxyphenyl (PMP). 19. The compound of claim 10, wherein R4 is p-(tert-butyl)benzoyl. 20. The compound of claim 10, represented by 21. A compound, represented by 22. The compound of claim 21, wherein Ra is 2-alken-1-yl. 23. The compound of claim 21, wherein R2 is 2-alken-1-yl. 24. The compound of claim 21, wherein R3 is substituted or unsubstituted aryl. 25. The compound of claim 21, wherein Rc is substituted or unsubstituted phenyl. 26. The compound of claim 21, wherein Rb is C1-C6 alkyl. 27. The compound of claim 21, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; Rc is substituted or unsubstituted phenyl; and Rb is C1-C6 alkyl. 28. The compound of claim 21, wherein Ra is 2-propen-1-yl. 29. The compound of claim 21, wherein R2 is 2-propen-1-yl. 30. The compound of claim 21, wherein R3 is para-methoxyphenyl (PMP). 31. The compound of claim 21, wherein R4 is p-(tert-butyl)benzoyl. 32. The compound of claim 21, wherein R5 is diethylisopropylsilyl. 33. The compound of claim 21, represented by 34. A compound, represented by 35. The compound of claim 34, wherein Ra is 2-alken-1-yl. 36. The compound of claim 34, wherein R2 is 2-alken-1-yl. 37. The compound of claim 34, wherein R3 is substituted or unsubstituted aryl. 38. The compound of claim 34, wherein Rb is C1-C6 alkyl. 39. The compound of claim 34, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rb is C1-C6 alkyl. 40. The compound of claim 34, wherein Ra is 2-propen-1-yl. 41. The compound of claim 34, wherein R2 is 2-propen-1-yl. 42. The compound of claim 34, wherein R3 is para-methoxyphenyl (PMP). 43. The compound of claim 34, wherein R5 is diethylisopropylsilyl. 44. The compound of claim 34, represented by 45. A compound, represented by 46. The compound of claim 45, wherein Ra is 2-alken-1-yl. 47. The compound of claim 45, wherein R2 is 2-alken-1-yl. 48. The compound of claim 45, wherein R3 is substituted or unsubstituted aryl. 49. The compound of claim 45, wherein Rc is substituted or unsubstituted phenyl. 50. The compound of claim 45, wherein Rb is C1-C6 alkyl. 51. The compound of claim 45, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; Rc is substituted or unsubstituted phenyl; and Rb is C1-C6 alkyl. 52. 
The compound of claim 45, wherein Ra is 2-propen-1-yl. 53. The compound of claim 45, wherein R2 is 2-propen-1-yl. 54. The compound of claim 45, wherein R3 is para-methoxyphenyl (PMP). 55. The compound of claim 45, wherein R6 is p-nitrobenzoyl. 56. The compound of claim 45, wherein R5 is diethylisopropylsilyl. 57. The compound of claim 45, represented by 58. A compound, represented by 59. The compound of claim 58, wherein Ra is 2-alken-1-yl. 60. The compound of claim 58, wherein R2 is 2-alken-1-yl. 61. The compound of claim 58, wherein R3 is substituted or unsubstituted aryl. 62. The compound of claim 58, wherein Rb is C1-C6 alkyl. 63. The compound of claim 58, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rb is C1-C6 alkyl. 64. The compound of claim 58, wherein Ra is 2-propen-1-yl. 65. The compound of claim 58, wherein R2 is 2-propen-1-yl. 66. The compound of claim 58, wherein R3 is para-methoxyphenyl (PMP). 67. The compound of claim 58, wherein R5 is diethylisopropylsilyl. 68. The compound of claim 58, represented by 69. A compound, represented by 70. The compound of claim 69, wherein Ra is 2-alken-1-yl. 71. The compound of claim 69, wherein R2 is 2-alken-1-yl. 72. The compound of claim 69, wherein R3 is substituted or unsubstituted aryl. 73. The compound of claim 69, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; and R3 is substituted or unsubstituted aryl. 74. The compound of claim 69, wherein Ra is 2-propen-1-yl. 75. The compound of claim 69, wherein R2 is 2-propen-1-yl. 76. The compound of claim 69, wherein R3 is para-methoxyphenyl (PMP). 77. The compound of claim 69, represented by 78. A compound, represented by 79. The compound of claim 78, wherein R3 is substituted or unsubstituted aryl. 80. The compound of claim 78, represented by 81. A method of making 2′epiAmB 82. The method of claim 81, wherein R6 is substituted aryloyl. 83. The method of claim 81, wherein Rd is aryl. 84. The method of claim 81, wherein R6 is substituted aryloyl; and Rd is aryl. 85. The method of claim 81, wherein R6 is para-nitrobenzoyl. 86. The method of claim 81, wherein Rd is phenyl. 87. The method of claim 81, wherein solvent 7 is benzene. 88. The method of claim 81, wherein di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD) or di(ethyl)azodicarboxylate (DEAD). 89. The method of claim 81, wherein R6 is para-nitrobenzoyl; di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD); and Rd is phenyl. 90. The method of claim 81, wherein R6 is para-nitrobenzoyl; di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD); Rd is phenyl; and solvent 7 is benzene. 91. The method of any one of claims 81-90, further comprising the step of: 92. The method of claim 91, wherein M is an alkali metal cation. 93. The method of claim 91, wherein M is K. 94. The method of claim 91, wherein solvent 8 is a mixture of tetrahydrofuran (THF) and MeOH. 95. The method of claim 91, wherein M is K; and solvent 8 is a mixture of tetrahydrofuran (THF) and MeOH. 96. The method of any one of claims 91-95, further comprising the step of: 97. The method of claim 96, wherein fluoride reagent is a fluoride salt. 98. The method of claim 96, wherein fluoride reagent is hydrogen fluoride pyridine. 99. The method of claim 96, wherein solvent 9 is tetrahydrofuran (THF). 100. The method of claim 96, wherein fluoride reagent is hydrogen fluoride pyridine; and solvent 9 is tetrahydrofuran (THF). 101. 
The method of any one of claims 96-100, further comprising the step of: 102. The method of claim 101, wherein Pd reagent is Pd(0). 103. The method of claim 101, wherein Re is aryl. 104. The method of claim 101, wherein Rf is aryl. 105. The method of claim 101, wherein said RfCO2H or 1,3-diketone is RfCO2H. 106. The method of claim 101, wherein Pd reagent is Pd(0); Re is aryl; Rf is aryl; and said RfCO2H or 1,3-diketone is RfCO2H. 107. The method of claim 101, wherein ligand is (PPh3)4. 108. The method of claim 101, wherein said RfCO2H or 1,3-diketone is thiosalicylic acid. 109. The method of claim 101, wherein solvent 10 is dimethylformamide (DMF). 110. The method of claim 101, wherein Pd reagent is Pd(0); ligand is (PPh3)4; and said RfCO2H or 1,3-diketone is thiosalicylic acid. 111. The method of claim 101, wherein Pd reagent is Pd(0); ligand is (PPh3)4; said RfCO2H or 1,3-diketone is thiosalicylic acid; and solvent 10 is dimethylformamide (DMF). 112. The method of any one of claims 101-111, further comprising the step of: 113. The method of claim 112, wherein acid is camphorsulfonic acid (CSA). 114. The method of claim 112, wherein solvent 11 is a mixture of water and MeCN. 115. The method of claim 112, wherein acid is camphorsulfonic acid (CSA); and solvent 11 is a mixture of water and MeCN. 116. The method of any one of claims 81-115, further comprising the step of: 117. The method of claim 116, wherein M is an alkali metal cation. 118. The method of claim 116, wherein solvent 6 is a mixture of a polar aprotic solvent and a polar protic solvent. 119. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl. 120. The method of claim 116, wherein M is K. 121. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl; and M is K. 122. The method of claim 116, wherein solvent 6 is a mixture of tetrahydrofuran (THF) and MeOH. 123. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl; M is K; and solvent 6 is a mixture of tetrahydrofuran (THF) and MeOH. 124. The method of any one of claims 116-123, further comprising the step of: 125. The method of claim 124, wherein Rb is C1-C6 alkyl. 126. The method of claim 124, wherein X5 is sulfonate. 127. The method of claim 124, wherein Rb is C1-C6 alkyl; and X5 is sulfonate. 128. The method of claim 124, wherein solvent 5 is a mixture of a polar aprotic solvent and a nonpolar aprotic solvent. 129. The method of claim 124, wherein solvent 5 is a mixture of dichloromethane (DCM) and hexanes. 130. The method of claim 124, wherein R5—X5 is diethyl(isopropyl)silyl trifluoromethanesulfonate. 131. The method of claim 124, wherein R5—X5 is diethyl(isopropyl)silyl trifluoromethanesulfonate; and solvent 5 is a mixture of dichloromethane (DCM) and hexanes. 132. The method of any one of claims 124-131, further comprising the step of: 133. The method of claim 132, wherein R4 is substituted aryloyl. 134. The method of claim 132, wherein X4 is halide. 135. The method of claim 132, wherein R4 is substituted phenyl; and X4 is halide. 136. The method of claim 132, wherein R4—X4 is p-(tert-butyl)benzoyl chloride. 137. The method of claim 132, wherein solvent 4 is tetrahydrofuran (THF). 138. The method of claim 132, wherein R4—X4 is p-(tert-butyl)benzoyl chloride; and solvent 4 is tetrahydrofuran (THF). 139. The method of any one of claims 132-138, further comprising the step of: 140. The method of claim 139, wherein Ra is 2-alken-1-yl. 141. The method of claim 139, wherein X1 is succinimidyl. 142. 
The method of claim 139, wherein Ra is 2-alken-1-yl; and X1 is succinimidyl. 143. The method of claim 139, wherein R2 is 2-alken-1-yl. 144. The method of claim 139, wherein X2 is halide. 145. The method of claim 139, wherein R2 is 2-alken-1-yl; and X2 is halide. 146. The method of claim 139, wherein R3 is substituted aryl. 147. The method of claim 139, wherein Ra is 2-propen-1-yl. 148. The method of claim 139, wherein R2 is 2-propen-1-yl. 149. The method of claim 139, wherein R3 is para-methoxyphenyl. 150. The method of claim 139, wherein solvent 1 is a mixture of a polar aprotic solvent and a polar protic solvent. 151. The method of claim 139, wherein solvent 2 is a mixture of a polar aprotic solvent and a polar protic solvent. 152. The method of claim 139, wherein solvent 3 is a mixture of a polar aprotic solvent and a polar protic solvent. 153. The method of claim 139, wherein solvent 1 is a mixture of dimethylformamide (DMF) and MeOH. 154. The method of claim 139, wherein solvent 2 is a mixture of dimethylformamide (DMF) and MeOH. 155. The method of claim 139, wherein solvent 3 is a mixture of tetrahydrofuran (THF) and MeOH. 156. The method of claim 139, wherein Bronsted acid is camphorsulfonic acid (CSA). 157. The method of claim 139, wherein Ra is 2-propen-1-yl; X1 is succinimidyl; R2 is 2-propen-1-yl; X2 is halide; and R3 is para-methoxyphenyl. 158. The method of claim 139, wherein Ra is 2-propen-1-yl; X1 is succinimidyl; R2 is 2-propen-1-yl; X2 is halide; R3 is para-methoxyphenyl; solvent 1 is a mixture of dimethylformamide (DMF) and MeOH; solvent 2 is a mixture of DMF and MeOH; and solvent 3 is a mixture of tetrahydrofuran (THF) and MeOH.
Disclosed is a simplified, readily scalable series of individual methods that collectively constitute a method for the synthesis of C2′epiAmB, an efficacious and reduced-toxicity derivative of amphotericin B (AmB), beginning from AmB. Also provided are various compounds corresponding to intermediates in accordance with the series of methods.1. A compound, represented by 2. The compound of claim 1, wherein Ra is 2-alken-1-yl. 3. The compound of claim 1, wherein R2 is 2-alken-1-yl. 4. The compound of claim 1, wherein R3 is substituted or unsubstituted aryl. 5. The compound of claim 1, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; and R3 is substituted or unsubstituted aryl. 6. The compound of claim 1, wherein Ra is 2-propen-1-yl. 7. The compound of claim 1, wherein R2 is 2-propen-1-yl. 8. The compound of claim 1, wherein R3 is para-methoxyphenyl (PMP). 9. The compound of claim 1, represented by 10. A compound, represented by 11. The compound of claim 10, wherein Ra is 2-alken-1-yl. 12. The compound of claim 10, wherein R2 is 2-alken-1-yl. 13. The compound of claim 10, wherein R3 is substituted or unsubstituted aryl. 14. The compound of claim 10, wherein Rc is substituted or unsubstituted phenyl. 15. The compound of claim 10, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rc is substituted or unsubstituted phenyl. 16. The compound of claim 10, wherein Ra is 2-propen-1-yl. 17. The compound of claim 10, wherein R2 is 2-propen-1-yl. 18. The compound of claim 10, wherein R3 is para-methoxyphenyl (PMP). 19. The compound of claim 10, wherein R4 is p-(tert-butyl)benzoyl. 20. The compound of claim 10, represented by 21. A compound, represented by 22. The compound of claim 21, wherein Ra is 2-alken-1-yl. 23. The compound of claim 21, wherein R2 is 2-alken-1-yl. 24. The compound of claim 21, wherein R3 is substituted or unsubstituted aryl. 25. The compound of claim 21, wherein Rc is substituted or unsubstituted phenyl. 26. The compound of claim 21, wherein Rb is C1-C6 alkyl. 27. The compound of claim 21, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; Rc is substituted or unsubstituted phenyl; and Rb is C1-C6 alkyl. 28. The compound of claim 21, wherein Ra is 2-propen-1-yl. 29. The compound of claim 21, wherein R2 is 2-propen-1-yl. 30. The compound of claim 21, wherein R3 is para-methoxyphenyl (PMP). 31. The compound of claim 21, wherein R4 is p-(tert-butyl)benzoyl. 32. The compound of claim 21, wherein R5 is diethylisopropylsilyl. 33. The compound of claim 21, represented by 34. A compound, represented by 35. The compound of claim 34, wherein Ra is 2-alken-1-yl. 36. The compound of claim 34, wherein R2 is 2-alken-1-yl. 37. The compound of claim 34, wherein R3 is substituted or unsubstituted aryl. 38. The compound of claim 34, wherein Rb is C1-C6 alkyl. 39. The compound of claim 34, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rb is C1-C6 alkyl. 40. The compound of claim 34, wherein Ra is 2-propen-1-yl. 41. The compound of claim 34, wherein R2 is 2-propen-1-yl. 42. The compound of claim 34, wherein R3 is para-methoxyphenyl (PMP). 43. The compound of claim 34, wherein R5 is diethylisopropylsilyl. 44. The compound of claim 34, represented by 45. A compound, represented by 46. The compound of claim 45, wherein Ra is 2-alken-1-yl. 47. The compound of claim 45, wherein R2 is 2-alken-1-yl. 48. The compound of claim 45, wherein R3 is substituted or unsubstituted aryl. 49. 
The compound of claim 45, wherein Rc is substituted or unsubstituted phenyl. 50. The compound of claim 45, wherein Rb is C1-C6 alkyl. 51. The compound of claim 45, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; Rc is substituted or unsubstituted phenyl; and Rb is C1-C6 alkyl. 52. The compound of claim 45, wherein Ra is 2-propen-1-yl. 53. The compound of claim 45, wherein R2 is 2-propen-1-yl. 54. The compound of claim 45, wherein R3 is para-methoxyphenyl (PMP). 55. The compound of claim 45, wherein R6 is p-nitrobenzoyl. 56. The compound of claim 45, wherein R5 is diethylisopropylsilyl. 57. The compound of claim 45, represented by 58. A compound, represented by 59. The compound of claim 58, wherein Ra is 2-alken-1-yl. 60. The compound of claim 58, wherein R2 is 2-alken-1-yl. 61. The compound of claim 58, wherein R3 is substituted or unsubstituted aryl. 62. The compound of claim 58, wherein Rb is C1-C6 alkyl. 63. The compound of claim 58, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; R3 is substituted or unsubstituted aryl; and Rb is C1-C6 alkyl. 64. The compound of claim 58, wherein Ra is 2-propen-1-yl. 65. The compound of claim 58, wherein R2 is 2-propen-1-yl. 66. The compound of claim 58, wherein R3 is para-methoxyphenyl (PMP). 67. The compound of claim 58, wherein R5 is diethylisopropylsilyl. 68. The compound of claim 58, represented by 69. A compound, represented by 70. The compound of claim 69, wherein Ra is 2-alken-1-yl. 71. The compound of claim 69, wherein R2 is 2-alken-1-yl. 72. The compound of claim 69, wherein R3 is substituted or unsubstituted aryl. 73. The compound of claim 69, wherein Ra is 2-alken-1-yl; R2 is 2-alken-1-yl; and R3 is substituted or unsubstituted aryl. 74. The compound of claim 69, wherein Ra is 2-propen-1-yl. 75. The compound of claim 69, wherein R2 is 2-propen-1-yl. 76. The compound of claim 69, wherein R3 is para-methoxyphenyl (PMP). 77. The compound of claim 69, represented by 78. A compound, represented by 79. The compound of claim 78, wherein R3 is substituted or unsubstituted aryl. 80. The compound of claim 78, represented by 81. A method of making 2′epiAmB 82. The method of claim 81, wherein R6 is substituted aryloyl. 83. The method of claim 81, wherein Rd is aryl. 84. The method of claim 81, wherein R6 is substituted aryloyl; and Rd is aryl. 85. The method of claim 81, wherein R6 is para-nitrobenzoyl. 86. The method of claim 81, wherein Rd is phenyl. 87. The method of claim 81, wherein solvent 7 is benzene. 88. The method of claim 81, wherein di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD) or di(ethyl)azodicarboxylate (DEAD). 89. The method of claim 81, wherein R6 is para-nitrobenzoyl; di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD); and Rd is phenyl. 90. The method of claim 81, wherein R6 is para-nitrobenzoyl; di(alkyl)azodicarboxylate is di(isopropyl)azodicarboxylate (DIAD); Rd is phenyl; and solvent 7 is benzene. 91. The method of any one of claims 81-90, further comprising the step of: 92. The method of claim 91, wherein M is an alkali metal cation. 93. The method of claim 91, wherein M is K. 94. The method of claim 91, wherein solvent 8 is a mixture of tetrahydrofuran (THF) and MeOH. 95. The method of claim 91, wherein M is K; and solvent 8 is a mixture of tetrahydrofuran (THF) and MeOH. 96. The method of any one of claims 91-95, further comprising the step of: 97. The method of claim 96, wherein fluoride reagent is a fluoride salt. 98. 
The method of claim 96, wherein fluoride reagent is hydrogen fluoride pyridine. 99. The method of claim 96, wherein solvent 9 is tetrahydrofuran (THF). 100. The method of claim 96, wherein fluoride reagent is hydrogen fluoride pyridine; and solvent 9 is tetrahydrofuran (THF). 101. The method of any one of claims 96-100, further comprising the step of: 102. The method of claim 101, wherein Pd reagent is Pd(0). 103. The method of claim 101, wherein Re is aryl. 104. The method of claim 101, wherein Rf is aryl. 105. The method of claim 101, wherein said RfCO2H or 1,3-diketone is RfCO2H. 106. The method of claim 101, wherein Pd reagent is Pd(0); Re is aryl; Rf is aryl; and said RfCO2H or 1,3-diketone is RfCO2H. 107. The method of claim 101, wherein ligand is (PPh3)4. 108. The method of claim 101, wherein said RfCO2H or 1,3-diketone is thiosalicylic acid. 109. The method of claim 101, wherein solvent 10 is dimethylformamide (DMF). 110. The method of claim 101, wherein Pd reagent is Pd(0); ligand is (PPh3)4; and said RfCO2H or 1,3-diketone is thiosalicylic acid. 111. The method of claim 101, wherein Pd reagent is Pd(0); ligand is (PPh3)4; said RfCO2H or 1,3-diketone is thiosalicylic acid; and solvent 10 is dimethylformamide (DMF). 112. The method of any one of claims 101-111, further comprising the step of: 113. The method of claim 112, wherein acid is camphorsulfonic acid (CSA). 114. The method of claim 112, wherein solvent 11 is a mixture of water and MeCN. 115. The method of claim 112, wherein acid is camphorsulfonic acid (CSA); and solvent 11 is a mixture of water and MeCN. 116. The method of any one of claims 81-115, further comprising the step of: 117. The method of claim 116, wherein M is an alkali metal cation. 118. The method of claim 116, wherein solvent 6 is a mixture of a polar aprotic solvent and a polar protic solvent. 119. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl. 120. The method of claim 116, wherein M is K. 121. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl; and M is K. 122. The method of claim 116, wherein solvent 6 is a mixture of tetrahydrofuran (THF) and MeOH. 123. The method of claim 116, wherein R4 is p-(tert-butyl)benzoyl; M is K; and solvent 6 is a mixture of tetrahydrofuran (THF) and MeOH. 124. The method of any one of claims 116-123, further comprising the step of: 125. The method of claim 124, wherein Rb is C1-C6 alkyl. 126. The method of claim 124, wherein X5 is sulfonate. 127. The method of claim 124, wherein Rb is C1-C6 alkyl; and X5 is sulfonate. 128. The method of claim 124, wherein solvent 5 is a mixture of a polar aprotic solvent and a nonpolar aprotic solvent. 129. The method of claim 124, wherein solvent 5 is a mixture of dichloromethane (DCM) and hexanes. 130. The method of claim 124, wherein R5—X5 is diethyl(isopropyl)silyl trifluoromethanesulfonate. 131. The method of claim 124, wherein R5—X5 is diethyl(isopropyl)silyl trifluoromethanesulfonate; and solvent 5 is a mixture of dichloromethane (DCM) and hexanes. 132. The method of any one of claims 124-131, further comprising the step of: 133. The method of claim 132, wherein R4 is substituted aryloyl. 134. The method of claim 132, wherein X4 is halide. 135. The method of claim 132, wherein R4 is substituted phenyl; and X4 is halide. 136. The method of claim 132, wherein R4—X4 is p-(tert-butyl)benzoyl chloride. 137. The method of claim 132, wherein solvent 4 is tetrahydrofuran (THF). 138. 
The method of claim 132, wherein R4—X4 is p-(tert-butyl)benzoyl chloride; and solvent 4 is tetrahydrofuran (THF). 139. The method of any one of claims 132-138, further comprising the step of: 140. The method of claim 139, wherein Ra is 2-alken-1-yl. 141. The method of claim 139, wherein X1 is succinimidyl. 142. The method of claim 139, wherein Ra is 2-alken-1-yl; and X1 is succinimidyl. 143. The method of claim 139, wherein R2 is 2-alken-1-yl. 144. The method of claim 139, wherein X2 is halide. 145. The method of claim 139, wherein R2 is 2-alken-1-yl; and X2 is halide. 146. The method of claim 139, wherein R3 is substituted aryl. 147. The method of claim 139, wherein Ra is 2-propen-1-yl. 148. The method of claim 139, wherein R2 is 2-propen-1-yl. 149. The method of claim 139, wherein R3 is para-methoxyphenyl. 150. The method of claim 139, wherein solvent 1 is a mixture of a polar aprotic solvent and a polar protic solvent. 151. The method of claim 139, wherein solvent 2 is a mixture of a polar aprotic solvent and a polar protic solvent. 152. The method of claim 139, wherein solvent 3 is a mixture of a polar aprotic solvent and a polar protic solvent. 153. The method of claim 139, wherein solvent 1 is a mixture of dimethylformamide (DMF) and MeOH. 154. The method of claim 139, wherein solvent 2 is a mixture of dimethylformamide (DMF) and MeOH. 155. The method of claim 139, wherein solvent 3 is a mixture of tetrahydrofuran (THF) and MeOH. 156. The method of claim 139, wherein Bronsted acid is camphorsulfonic acid (CSA). 157. The method of claim 139, wherein Ra is 2-propen-1-yl; X1 is succinimidyl; R2 is 2-propen-1-yl; X2 is halide; and R3 is para-methoxyphenyl. 158. The method of claim 139, wherein Ra is 2-propen-1-yl; X1 is succinimidyl; R2 is 2-propen-1-yl; X2 is halide; R3 is para-methoxyphenyl; solvent 1 is a mixture of dimethylformamide (DMF) and MeOH; solvent 2 is a mixture of DMF and MeOH; and solvent 3 is a mixture of tetrahydrofuran (THF) and MeOH.
1,600
274,052
15,033,725
1,673
Provided herein are compounds, compositions and methods for the treatment of Flaviviridae infections, including HCV infections. In certain embodiments, compounds and compositions of nucleoside derivatives are disclosed, which can be administered either alone or in combination with other anti-viral agents. In certain embodiments, the compounds are D-alanine phosphoramidate pronucleotides of 2′-methyl 2′-fluoro guanosine nucleoside, which display remarkable efficacy and bioavailability for the treatment of, for example, HCV infection in a human. In certain embodiments, the compounds are of Formula I or a pharmaceutically acceptable salt, solvate, stereoisomeric form, tautomeric form or polymorphic form thereof; where W and R are as described herein.
1. A compound of Formula I: 2. The compound of claim 1 according to Formula Ia or Ib: 3. The compound of claim 1, wherein the alkoxyl is —OR′, wherein R′ is alkyl or cycloalkyl, and wherein alkyl is C1 to C10 alkyl, and cycloalkyl is C3 to C15 cycloalkyl. 4. The compound of claim 1, wherein the alkoxyl is selected from the group consisting of methoxyl, ethoxyl, n-propoxyl, isopropoxyl, n-butoxyl, tert-butoxyl, sec-butoxyl, n-pentoxyl, n-hexoxyl, and 1,2-dimethylbutoxyl. 5. The compound of claim 1, wherein W is O and R is hydrogen, hydroxyl or alkoxyl. 6. The compound of claim 5, wherein W is O and R is hydroxyl or alkoxyl. 7. The compound of claim 6, wherein W is O and R is hydroxyl, methoxyl or ethoxyl. 8. The compound of any of claim 7, wherein W is O and R is ethoxyl. 9. The compound of claim 1, wherein W is S and R is hydrogen, hydroxyl or alkoxyl. 10. The compound of claim 9, wherein W is S and R is hydroxyl or alkoxyl. 11. The compound of claim 10, wherein W is S and R is hydroxyl, methoxyl or ethoxyl. 12. The compound of claim 11, wherein W is S and R is ethoxyl. 13. The compound of claim 1 having the structure: 14. A substantially pure compound of claim 1. 15. A pharmaceutical composition comprising the compound of any of the preceding claims and a pharmaceutically acceptable excipient, carrier or diluent. 16. The pharmaceutical composition of claim 15, wherein the composition is an oral formulation. 17. A method for the treatment of a host infected with a hepatitis C virus, comprising the administration of an effective treatment amount of a compound of claim 1. 18. (canceled) 19. The method of claim 17, wherein the administration directs a substantial amount of the compound, or pharmaceutically acceptable salt or stereoisomer thereof, to a liver of the host. 20. The method of claim 17, wherein the compound or composition is administered in combination or alternation with a second anti-viral agent selected from the group consisting of an interferon, a nucleotide analogue, a polymerase inhibitor, an NS3 protease inhibitor, an NS5A inhibitor, an entry inhibitor, a non-nucleoside polymerase inhibitor, a cyclosporine immune inhibitor, an NS4A antagonist, an NS4B-RNA binding inhibitor, a locked nucleic acid mRNA inhibitor, a cyclophilin inhibitor, and combinations thereof. 21. The method of claim 20, wherein the second anti-viral agent is selected from the group consisting of telaprevir, boceprevir, simeprevir, interferon alfacon-1, interferon alfa-2b, pegylated interferon alpha 2a, pegylated interferon alpha 2b, ribavirin, and combinations thereof. 22. (canceled)
Provided herein are compounds, compositions and methods for the treatment of Flaviviridae infections, including HCV infections. In certain embodiments, compounds and compositions of nucleoside derivatives are disclosed, which can be administered either alone or in combination with other anti-viral agents. In certain embodiments, the compounds are D-alanine phosphoramidate pronucleotides of 2′-methyl 2′-fluoro guanosine nucleoside which display remarkable efficacy and bioavailability for the treatment of for example, HCV infection in a human. In certain embodiments, the compounds are of Formula I or a pharmaceutically acceptable salt, solvate, stereoisomeric form, tautomeric form or polymorphic form thereof; where W and R are as described herein.1. A compound of Formula I: 2. The compound of claim 1 according to Formula Ia or Ib: 3. The compound of claim 1, wherein the alkoxyl is —OR′, wherein R′ is alkyl or cycloalkyl, and wherein alkyl is C1 to C10 alkyl, and cycloalkyl is C3 to C15 cycloalkyl. 4. The compound of claim 1, wherein the alkoxyl is selected from the group consisting of methoxyl, ethoxyl, n-propoxyl, isopropoxyl, n-butoxyl, tert-butoxyl, sec-butoxyl, n-pentoxyl, n-hexoxyl, and 1,2-dimethylbutoxyl. 5. The compound of claim 1, wherein W is O and R is hydrogen, hydroxyl or alkoxyl. 6. The compound of claim 5, wherein W is O and R is hydroxyl or alkoxyl. 7. The compound of claim 6, wherein W is O and R is hydroxyl, methoxyl or ethoxyl. 8. The compound of any of claim 7, wherein W is O and R is ethoxyl. 9. The compound of claim 1, wherein W is S and R is hydrogen, hydroxyl or alkoxyl. 10. The compound of claim 9, wherein W is S and R is hydroxyl or alkoxyl. 11. The compound of claim 10, wherein W is S and R is hydroxyl, methoxyl or ethoxyl. 12. The compound of claim 11, wherein W is S and R is ethoxyl. 13. The compound of claim 1 having the structure: 14. A substantially pure compound of claim 1. 15. A pharmaceutical composition comprising the compound of any of the preceding claims and a pharmaceutically acceptable excipient, carrier or diluent. 16. The pharmaceutical composition of claim 15, wherein the composition is an oral formulation. 17. A method for the treatment of a host infected with a hepatitis C virus, comprising the administration of an effective treatment amount of a compound of claim 1. 18. (canceled) 19. The method of claim 17, wherein the administration directs a substantial amount of the compound, or pharmaceutically acceptable salt or stereoisomer thereof, to a liver of the host. 20. The method of claim 17, wherein the compound or composition is administered in combination or alternation with a second anti-viral agent selected from the group consisting of an interferon, a nucleotide analogue, a polymerase inhibitor, an NS3 protease inhibitor, an NS5A inhibitor, an entry inhibitor, a non-nucleoside polymerase inhibitor, a cyclosporine immune inhibitor, an NS4A antagonist, an NS4B-RNA binding inhibitor, a locked nucleic acid mRNA inhibitor, a cyclophilin inhibitor, and combinations thereof. 21. The method of claim 20, wherein the second anti-viral agent is selected from the group consisting of telaprevir, boceprevir, simeprevir, interferon alfacon-1, interferon alfa-2b, pegylated interferon alpha 2a, pegylated interferon alpha 2b, ribavirin, and combinations thereof. 22. (canceled)
1,600
274,053
15,141,992
1,673
A compound of formula (I) wherein Ar can be one six-membered or two fused six-membered aromatic rings; R8 and R9 can be hydrogen, alkyl, cycloalkyl, halogens, amino, alkylamino, dialkylamino, nitro, cyano, alkoxy, aryloxy, thiol, alkylthiol, arylthiol, or aryl; Q can be O, S or CY2, where Y may be H, alkyl or halogens; X can be O, NH, S, N-alkyl, (CHR2)m where m is 1 to 10, and CY2; Z can be O, S, NH, or N-alkyl; U″ is H and U′ can be H or CH2; wherein: T can be OH, H, halogens, O-alkyl, O-acyl, O-aryl, CN, NH2 or N3; T′ and T″ can be H or halogen; and W can be H or a phosphate group. Compounds show anti-viral activity, for example with respect to varicella zoster virus.
1. A compound having the formula: 2. A pharmaceutical composition comprising a compound of claim 1 and a pharmaceutically acceptable excipient. 3. A method of prophylaxis or treatment of a viral infection comprising administering to a patient in need of such treatment an effective dose of a compound of claim 1.
A compound of formula (I) wherein Ar can be one six-membered or two fused six-membered aromatic rings; R8 and R9 can be hydrogen, alkyl, cycloalkyl, halogens, amino, alkylamino, dialkylamino, nitro, cyano, alkoxy, aryloxy, thiol, alkylthiol, arylthiol, or aryl; Q can be O, S or CY2, where Y may be H, alkyl or halogens; X can be O, NH, S, N-alkyl, (CHR2)m where m is 1 to 10, and CY2; Z can be O, S, NH, or N-alkyl; U″ is H and U′ can be H or CH2; wherein: T can be OH, H, halogens, O-alkyl, O-acyl, O-aryl, CN, NH2 or N3; T′ and T″ can be H or halogen; and W can be H or a phosphate group. Compounds show anti-viral activity, for example with respect to varicella zoster virus.1. A compound having the formula: 2. A pharmaceutical composition comprising a compound of claim 1 and a pharmaceutically acceptable excipient. 3. A method of prophylaxis or treatment of a viral infection comprising administering to a patient in need of such treatment an effective dose of a compound of claim 1.
1,600
274,054
15,142,282
1,673
The present invention relates to a composition comprising an aqueous soluble-chitosan and a pharmaceutically acceptable carrier. Said composition can be used to increase lipase activity without harming animal physiology. Together with the well-known biocompatibility of chitosan, the present invention shows that the aqueous soluble-chitosan is a potential candidate for body weight control.
1. A method for increasing the activity of adipose triglyceride lipase of a subject on a high-fat diet, comprising: applying to the subject an effective amount of an aqueous soluble-chitosan; wherein said aqueous soluble-chitosan is a chitosan modified by alkyl sultone. 2. The method according to claim 1, wherein said aqueous soluble-chitosan has a molecular weight of 0.3 to 1,500 kDa. 3. The method according to claim 1, wherein said alkyl sultone is 1,3-propanesultone, 1,4-propylenesultone, 1,4-butanesultone, 2,4-butanesultone, or a mixture thereof. 4. The method according to claim 1, wherein said effective amount is 1 to 500 mg/kgBW. 5. A method for treating obesity, comprising: applying to a subject suffering from obesity an effective amount of an aqueous soluble-chitosan; wherein said aqueous soluble-chitosan is a chitosan modified by alkyl sultone. 6. The method according to claim 5, wherein said aqueous soluble-chitosan has a molecular weight of 0.3 to 1,500 kDa. 7. The method according to claim 5, wherein said alkyl sultone is 1,3-propanesultone, 1,4-propylenesultone, 1,4-butanesultone, 2,4-butanesultone, or a mixture thereof. 8. The method according to claim 5, wherein said effective amount is 1 to 500 mg/kgBW.
The present invention relates to a composition comprising an aqueous soluble-chitosan and a pharmaceutically acceptable carrier. Said composition can be used to increase lipase activity without harming animal physiology. Together with the well-known biocompatibility of chitosan, the present invention shows that the aqueous soluble-chitosan is a potential candidate for body weight control.1. A method for increasing the activity of adipose triglyceride lipase of a subject on a high-fat diet, comprising: applying to the subject an effective amount of an aqueous soluble-chitosan; wherein said aqueous soluble-chitosan is a chitosan modified by alkyl sultone. 2. The method according to claim 1, wherein said aqueous soluble-chitosan has a molecular weight of 0.3 to 1,500 kDa. 3. The method according to claim 1, wherein said alkyl sultone is 1,3-propanesultone, 1,4-propylenesultone, 1,4-butanesultone, 2,4-butanesultone, or a mixture thereof. 4. The method according to claim 1, wherein said effective amount is 1 to 500 mg/kgBW. 5. A method for treating obesity, comprising: applying to a subject suffering from obesity an effective amount of an aqueous soluble-chitosan; wherein said aqueous soluble-chitosan is a chitosan modified by alkyl sultone. 6. The method according to claim 5, wherein said aqueous soluble-chitosan has a molecular weight of 0.3 to 1,500 kDa. 7. The method according to claim 5, wherein said alkyl sultone is 1,3-propanesultone, 1,4-propylenesultone, 1,4-butanesultone, 2,4-butanesultone, or a mixture thereof. 8. The method according to claim 5, wherein said effective amount is 1 to 500 mg/kgBW.
1,600
274,055
15,141,183
1,673
A method for producing chitosan from naturally occurring chitin-containing raw material, such as crustacean shells, includes an optional pretreatment step to remove non-chitin-rich organic material, for example shrimp flesh, from the raw material, e.g., shrimp shells. The optional pretreatment is followed by a demineralization step utilizing a mild hydrochloric acid solution and a deproteination step utilizing a mild sodium hydroxide solution. The deproteination step is followed by a deacetylation step to remove the acetyl group from N-acetylglucosamine (chitin) to form an amine group, yielding d-glucosamine (chitosan). Each step is followed by a washing step, and the product is dried, preferably at a temperature not in excess of about 65° C. Known purification and grinding steps may also be used to produce the final chitosan product. The process is carried out in equipment comprising a series of substantially identical or similar tanks (18, 26, 36, etc.) and dryers (62, 62′), suitably interconnected.
1. A process for the manufacture of chitosan from a naturally occurring chitin source consists essentially of the following steps: (a) a naturally occurring chitin source is demineralized by immersing it in a demineralization (“DMIN”) hydrochloric acid solution of from about 0.5 to about 2 M at a temperature of from about 20° C. to about 30° C. and for a DMIN time period of from about 0.5 to about 2 hours to demineralize the chitin source, and then separating the resulting demineralized chitin source from the acid solution, washing the chitin source in a DMIN wash water for a DMIN wash period of from about 0.5 to about 2 hours to remove the hydrochloric acid and calcium salts therefrom, and then separating the demineralized chitin source from the DMIN wash water; (b) subjecting the demineralized chitin source to deproteination (“DPRO”) by treating the demineralized chitin source in a DPRO sodium hydroxide solution containing from about 1% to about 10% w/w NaOH for a DPRO time period of from about 4 to about 24 hours and at a temperature of from about 60° C. to about 80° C. to deproteinize the demineralized chitin source, and then separating the resulting demineralized and deproteinized chitin source from the deproteination sodium hydroxide solution, washing the separated demineralized and deproteinized chitin source in a DPRO wash water for a DPRO wash period of from about 0.5 to about 2 hours to remove the sodium hydroxide from the demineralized and deproteinized chitin source, and then separating the demineralized and deproteinized chitin source from the deproteination wash water; (c) separating residual water from the chitin source obtained in step (b); (d) immersing the chitin source obtained from step (b) into a sodium hydroxide deacetylation (“DEAC”) solution containing from about 40% to about 50% w/w NaOH and carrying out deacetylation at a temperature of from about 90° C. to about 110° C. and for a DEAC time period of from about 1 to about 3 hours to convert acetyl groups of the chitin source obtained from step (c) to amine groups, to thereby form a chitosan biopolymer having d-glucosamine as the monomer of the chitin biopolymer, then separating the resulting chitosan biopolymer from the DEAC solution and washing the separated chitosan biopolymer in a DEAC wash water for a DEAC wash period sufficient to remove sodium hydroxide from the chitosan polymer, and then separating the chitosan biopolymer from the DEAC wash water; and (e) residual water is then separated from the chitosan biopolymer which is then dried in air at a temperature of not more than about 65° C. for a drying time period sufficient to reduce the moisture content of the chitosan biopolymer to below about 10% by weight to provide a medical-grade quality chitosan. 2. The process of claim 1 wherein step (e) is carried out under conditions comprising that the temperature is from about 50° C. to about 65° C. and the drying period is from about 2 to about 5 hours. 3. The process of claim 1 wherein: step (a) is carried out under conditions comprising that the hydrochloric acid solution is from about 0.9 to about 1.1 M, the temperature is from about 22° C. to about 26° C., the DMIN time period is from about 0.75 to about 1.25 hours, and the DMIN wash period is from about 0.9 to about 1.1 hours; step (b) is carried out under conditions comprising that the sodium hydroxide solution contains from about 4% to about 6% w/w NaOH, the temperature is from about 70° C. 
to about 75° C., the deproteination period is from about 4 to about 6 hours, and the DPRO wash period is from about 0.9 to about 1.1 hours; and step (d) is carried out under conditions comprising that the sodium hydroxide solution contains from about 45% to about 50% w/w NaOH at a temperature of from about 100° C. to about 110° C. and the deacetylation wash period is from about 0.9 to about 1.1 hours.
A method for producing chitosan from naturally occurring chitin-containing raw material, such as crustacean shells, includes an optional pretreatment step to remove non-chitin rich organic material for example, shrimp flesh, from the raw material, e.g., shrimp shells. The optional pre-treatment is followed by a demineralization step utilizing a mild hydrochloric acid solution and a deproteination step utilizing a mild sodium hydroxide solution. The deproteination step is followed by a deacetylation step to remove the acetyl group from N-acetylglucosamine (chitin) to form an amine group, yielding d-glucosamine (chitosan). Each step is followed by a washing step and the product is dried, preferably at a temperature not in excess of about 65° C. Known purification and grinding steps may also be used to produce the final chitosan product. The process is carried out in equipment comprising a series of substantially identical or similar tanks (18, 26, 36, etc.) and dryers (62, 62′), suitably interconnected.1. A process for the manufacture of chitosan from a naturally occurring chitin source consists essentially of the following steps: (a) a naturally occurring chitin source is demineralized by immersing it in a demineralization (“DMIN”) hydrochloric acid solution of from about 0.5 to about 2 M at a temperature of from about 20° C. to about 30° C. and for a DMIN time period of from about 0.5 to about 2 hours to demineralize the chitin source, and then separating the resulting demineralized chitin source from the acid solution, washing the chitin source in a DMIN wash water for a DMIN wash period of from about 0.5 to about 2 hours to remove the hydrochloric acid and calcium salts therefrom, and then separating the demineralized chitin source from the DMIN wash water; (b) subjecting the demineralized chitin source to deproteination (“DPRO”) by treating the demineralized chitin source in a DPRO sodium hydroxide solution containing from about 1% to about 10% w/w NaOH for a DPRO time period of from about 4 to about 24 hours and at a temperature of from about 60° C. to about 80° C. to deproteinize the demineralized chitin source, and then separating the resulting demineralized and deproteinized chitin source from the deproteination sodium hydroxide solution, washing the separated demineralized and deproteinized chitin source in a DPRO wash water for a DPRO wash period of from about 0.5 to about 2 hours to remove the sodium hydroxide from the demineralized and deproteinized chitin source, and then separating the demineralized and deproteinized chitin source from the deproteination wash water; (c) separating residual water from the chitin source obtained in step (b); (d) immersing the chitin source obtained from step (b) into a sodium hydroxide deacetylation (“DEAC”) solution containing from about 40% to about 50% w/w NaOH and carrying out deacetylation at a temperature of from about 90° C. to about 110° C. 
and for a DEAC time period of from about 1 to about 3 hours to convert acetyl groups of the chitin source obtained from step (c) to amine groups, to thereby form a chitosan biopolymer having d-glucosamine as the monomer of the chitin biopolymer, then separating the resulting chitosan biopolymer from the DEAC solution and washing the separated chitosan biopolymer in a DEAC wash water for a DEAC wash period sufficient to remove sodium hydroxide from the chitosan polymer, and then separating the chitosan biopolymer from the DEAC wash water; and (e) residual water is then separated from the chitosan biopolymer which is then dried in air at a temperature of not more than about 65° C. for a drying time period sufficient to reduce the moisture content of the chitosan biopolymer to below about 10% by weight to provide a medical-grade quality chitosan. 2. The process of claim 1 wherein step (e) is carried out under conditions comprising that the temperature is from about 50° C. to about 65° C. and the drying period is from about 2 to about 5 hours. 3. The process of claim 1 wherein: step (a) is carried out under conditions comprising that the hydrochloric acid solution is from about 0.9 to about 1.1 M, the temperature is from about 22° C. to about 26° C., the DMIN time period is from about 0.75 to about 1.25 hours, and the DMIN wash period is from about 0.9 to about 1.1 hours; step (b) is carried out under conditions comprising that the sodium hydroxide solution contains from about 4% to about 6% w/w NaOH, the temperature is from about 70° C. to about 75° C., the deproteination period is from about 4 to about 6 hours, and the DPRO wash period is from about 0.9 to about 1.1 hours; and step (d) is carried out under conditions comprising that the sodium hydroxide solution contains from about 45% to about 50% w/w NaOH at a temperature of from about 100° C. to about 110° C. and the deacetylation wash period is from about 0.9 to about 1.1 hours.
1,600
274,056
15,139,924
1,673
The present invention is directed to compounds, compositions and methods for treating or preventing Flaviviridae family of viruses (including HCV, Yellow fever, Dengue, Chikungunya and West Nile virus), RSV, HEV, and influenza infection and cancer in human subjects or other animal hosts.
1. A compound of Formula (A) or (B): 2. The compounds of claim 1, wherein the compounds can be present in the β-D or β-L configuration. 3. The compounds of claim 1, having one of the following formulas: 4. A compound of claim 1, having one of the following formulas: 5. A compound of claim 1, having the formula: 6. A compound of claim 1, having the formula: 7. The compound of claim 1 wherein the sugar is partially deuterated. 8. A pharmaceutical composition comprising a compound of claim 1, and a pharmaceutically-acceptable carrier. 9. The composition of claim 8, wherein the composition is a transdermal composition or a nanoparticulate composition. 10. The pharmaceutical composition of claim 8, further comprising a second antiviral agent. 11. The pharmaceutical composition of claim 10, wherein the second antiviral agent is selected from the group consisting of an interferon, ribavirin, an NS3 protease inhibitor, an NS5A inhibitor, a non-nucleoside polymerase inhibitor, a helicase inhibitor, a polymerase inhibitor, a nucleotide or nucleoside analogue, an inhibitor of IRES dependent translation, and combinations thereof. 12. A method for treating a host infected with Flaviviridae family of viruses, preventing an infection from a Flaviviridae family of viruses, or reducing the biological activity of an infection with Flaviviridae including HCV, Yellow fever, Dengue, Chikungunya and West Nile virus comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 13. The method of claim 12, wherein the virus is selected from the group consisting of HCV, Yellow fever, Dengue, Chikungunya and West Nile virus. 14. The method of claim 12, wherein the compound is administered in combination with another anti-Flaviviridae virus agent. 15. A method for treating a host infected with Norovirus or Saporovirus, preventing a Norovirus or Saporovirus infection, or reducing the biological activity of a Norovirus or Saporovirus infection in a host, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 16. A method for treating a host infected with RSV or influenza, preventing an RSV or influenza infection, or reducing the biological activity of an RSV or influenza infection in a host, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 17. A method for treating a host with cancer, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 18. A method for treating a host infected with HEV, preventing an infection from HEV, or reducing the biological activity of an infection with HEV comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof.
The present invention is directed to compounds, compositions and methods for treating or preventing Flaviviridae family of viruses (including HCV, Yellow fever, Dengue, Chikungunya and West Nile virus), RSV, HEV, and influenza infection and cancer in human subjects or other animal hosts.1. A compound of Formula (A) or (B): 2. The compounds of claim 1, wherein the compounds can be present in the β-D or β-L configuration. 3. The compounds of claim 1, having one of the following formulas: 4. A compound of claim 1, having one of the following formulas: 5. A compound of claim 1, having the formula: 6. A compound of claim 1, having the formula: 7. The compound of claim 1 wherein the sugar is partially deuterated. 8. A pharmaceutical composition comprising a compound of claim 1, and a pharmaceutically-acceptable carrier. 9. The composition of claim 8, wherein the composition is a transdermal composition or a nanoparticulate composition. 10. The pharmaceutical composition of claim 8, further comprising a second antiviral agent. 11. The pharmaceutical composition of claim 10, wherein the second antiviral agent is selected from the group consisting of an interferon, ribavirin, an NS3 protease inhibitor, an NS5A inhibitor, a non-nucleoside polymerase inhibitor, a helicase inhibitor, a polymerase inhibitor, a nucleotide or nucleoside analogue, an inhibitor of IRES dependent translation, and combinations thereof. 12. A method for treating a host infected with Flaviviridae family of viruses, preventing an infection from a Flaviviridae family of viruses, or reducing the biological activity of an infection with Flaviviridae including HCV, Yellow fever, Dengue, Chikungunya and West Nile virus comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 13. The method of claim 12, wherein the virus is selected from the group consisting of HCV, Yellow fever, Dengue, Chikungunya and West Nile virus. 14. The method of claim 12, wherein the compound is administered in combination with another anti-Flaviviridae virus agent. 15. A method for treating a host infected with Norovirus or Saporovirus, preventing a Norovirus or Saporovirus infection, or reducing the biological activity of a Norovirus or Saporovirus infection in a host, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 16. A method for treating a host infected with RSV or influenza, preventing an RSV or influenza infection, or reducing the biological activity of an RSV or influenza infection in a host, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 17. A method for treating a host with cancer, comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof. 18. A method for treating a host infected with HEV, preventing an infection from HEV, or reducing the biological activity of an infection with HEV comprising administering an effective amount of a compound of claim 1 to a patient in need of treatment thereof.
1,600
274,057
15,139,370
1,673
The present invention relates to novel 5′-substituted nucleoside compounds, pharmaceutical compositions comprising the compounds, and methods of using the compounds to treat cancer, in particular glioblastomas, melanoma, sarcomas, gastric cancer, pancreatic cancer, cholangiocarcinoma, bladder cancer, breast cancer, non-small cell lung cancer, leukemias including acute myeloid leukemia, and lymphomas.
1. A compound of the formula: 2. The compound or salt thereof according to claim 1 of the formula: 3. The compound or salt thereof according to claim 1 wherein the configuration of the chiral carbon to which the R2 substituent is attached is R, S, or a mixture thereof. 4. The compound which is 5. The compound according to claim 4 which is crystalline and characterized by an X-ray powder diffraction pattern (Cu radiation, λ=1.54060 Å) comprising a peak at 25.1° in combination with one or more of the peaks selected from the group consisting of 17.0°, 13.6°, 20.5°, 24.0°, and 14.5° (2θ±0.2°). 6. The compound according to claim 1 which is 7. The compound according to claim 1 which is 8. The compound according to claim 1 which is 9. The compound according to claim 1 wherein R1a is hydrogen, chloro, or cyclopropoxy. 10. The compound according to claim 1 wherein R1b is hydrogen or chloro. 11. The compound according to claim 1 wherein R2 is hydrogen or methyl. 12. The compound according to claim 1 wherein R3 is amino. 13. A pharmaceutical composition comprising a compound of the formula: 14. A method of treating cancer wherein the cancer is selected from the group consisting of glioblastomas, melanoma, sarcomas, gastric cancer, pancreatic cancer, cholangiocarcinoma, bladder cancer, breast cancer, non-small cell lung cancer, leukemias including acute myeloid leukemia, and lymphomas, in a patient in need of such treatment comprising administering to the patient an effective amount of a compound of the formula:
The present invention relates to novel 5′-substituted nucleoside compounds, pharmaceutical compositions comprising the compounds, and methods of using the compounds to treat cancer, in particular glioblastomas, melanoma, sarcomas, gastric cancer, pancreatic cancer, cholangiocarcinoma, bladder cancer, breast cancer, non-small cell lung cancer, leukemias including acute myeloid leukemia, and lymphomas.1. A compound of the formula: 2. The compound or salt thereof according to claim 1 of the formula: 3. The compound or salt thereof according to claim 1 wherein the configuration of the chiral carbon to which the R2 substituent is attached is R, S, or a mixture thereof. 4. The compound which is 5. The compound according to claim 4 which is crystalline and characterized by an X-ray powder diffraction pattern (Cu radiation, λ=1.54060 Å) comprising a peak at 25.1° in combination with one or more of the peaks selected from the group consisting of 17.0°, 13.6°, 20.5°, 24.0°, and 14.5° (2θ±0.2°). 6. The compound according to claim 1 which is 7. The compound according to claim 1 which is 8. The compound according to claim 1 which is 9. The compound according to claim 1 wherein R1a is hydrogen, chloro, or cyclopropoxy. 10. The compound according to claim 1 wherein R1b is hydrogen or chloro. 11. The compound according to claim 1 wherein R2 is hydrogen or methyl. 12. The compound according to claim 1 wherein R3 is amino. 13. A pharmaceutical composition comprising a compound of the formula: 14. A method of treating cancer wherein the cancer is selected from the group consisting of glioblastomas, melanoma, sarcomas, gastric cancer, pancreatic cancer, cholangiocarcinoma, bladder cancer, breast cancer, non-small cell lung cancer, leukemias including acute myeloid leukemia, and lymphomas, in a patient in need of such treatment comprising administering to the patient an effective amount of a compound of the formula:
1,600
274,058
15,140,045
1,673
The present invention relates to compounds useful for the treatment or prevention of bacteria infections. These compounds have formula I:
1.-112. (canceled) 113. A method of treating or preventing a bacteria infection in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle: 114. The method of claim 113, wherein G is C(JH1)(JH2); JH1 is OH, F, or —CH2CH2OH; JH2 is OH, CH3, cyclopropyl, F, CH2CH3, —CH2CH2OH, —CH2CH(OH)CH2OH, or phenyl optionally substituted with OCH3; or JH1 and JH2, together with the carbon atom to which they are attached, form ═N—OH or a 6-membered saturated monocyclic ring having 0-2 heteroatoms selected from oxygen, nitrogen, or sulfur; wherein said ring is optionally substituted with C1-6alkyl, OH, NH2, —C(O)OCH3, —C(O)OC(CH3)3, —C(O)C(CH3)2OH, or —S(O)2CH3. 115. The method of claim 113, wherein the Ring HH is selected from cyclopentyl, cyclohexyl, piperidinyl, piperazinyl, 1,3-dithianyl, or tetrahydropyranyl. 116. The method of claim 113, wherein XJH is C1-6alkyl and QJH is C3-6cycloaliphatic, oxetanyl, tetrahydropyrrolidinyl, piperidinyl, piperazinyl, or morpholinyl. 117. The method of claim 113, wherein Ring H, together with Ring HH, is selected from one of the following formulae: 118. The method of claim 117, wherein the compound has formula ID-a: 119. The method of claim 118, wherein Ring HH is cyclopentyl, cyclohexyl, tetrahydropyranyl, 1,3 dithianyl, piperazinyl, piperidinyl, or oxepanyl. 120. The method of claim 119, wherein Ring HH is piperidinyl or tetrahydropyranyl. 121. The method of claim 120, wherein the compound has formula ID-b: 122. The method of claim 121, wherein JHH is H, C(O)(C1-6alkyl), C(O)O(C1-6alkyl), S(O)2(C1-6alkyl), C(O)(C3-6cycloalkyl), C(O)(3-6 membered heterocyclyl), C(O)(5-6 membered heteroaryl), C(O)—(C1-4alkyl)-(5-6 membered heteroaryl), C(O)—(C1-4alkyl)-(heterocyclyl); wherein said heteroaryl or heterocyclyl has 1-3 heteroatoms selected from oxygen, nitrogen, or sulfur; JHH is optionally substituted with OH, O(C1-6alkyl), oxo, C1-6alkyl, CN, or halo. 123. The method of claim 121, wherein JHH is H, C(O)CH3, C(O)OC(CH3)3, C(O)OCH(CH3)2, C(O)OCH2CH3, C(O)OC(OH)(CH3)2, S(O)2CH3, C(O)CH(CH3)2, C(O)C(CH3)3, C(O)CH(CH3)OCH3, 124. The method of claim 113, wherein the compound of formula (ID) is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 125. The method of claim 113, wherein the compound of formula (ID) is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 126. The method of claim 113, wherein the compound is represented by any of the following structural formulae or a pharmaceutically acceptable salt thereof: 127. The method of claim 113, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 128. The method of claim 113, wherein the bacteria infection is colitis. 129. The method of claim 113, wherein the bacteria infection is Crohn's disease. 130. The method of claim 124, wherein the bacteria infection is colitis. 131. The method of claim 124, wherein the bacteria infection is Crohn's disease. 132. The method of claim 124, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 133. The method of claim 125, wherein the bacteria infection is colitis. 134. The method of claim 125, wherein the bacteria infection is Crohn's disease. 135. 
The method of claim 125, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 136. A method of inhibiting FimH in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 137. A method of inhibiting adhesion of E. coli in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 138. A method of blocking the interaction between type 1 pili and CEACAM6 in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 139. The method of claim 136, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 140. The method of claim 136, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 141. The method of claim 137, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 142. The method of claim 137, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 143. The method of claim 138, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 144. The method of claim 138, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof:
The present invention relates to compounds useful for the treatment or prevention of bacteria infections. These compounds have formula I:1.-112. (canceled) 113. A method of treating or preventing a bacteria infection in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle: 114. The method of claim 113, wherein G is C(JH1)(JH2); JH1 is OH, F, or —CH2CH2OH; JH2 is OH, CH3, cyclopropyl, F, CH2CH3, —CH2CH2OH, —CH2CH(OH)CH2OH, or phenyl optionally substituted with OCH3; or JH1 and JH2, together with the carbon atom to which they are attached, form ═N—OH or a 6-membered saturated monocyclic ring having 0-2 heteroatoms selected from oxygen, nitrogen, or sulfur; wherein said ring is optionally substituted with C1-6alkyl, OH, NH2, —C(O)OCH3, —C(O)OC(CH3)3, —C(O)C(CH3)2OH, or —S(O)2CH3. 115. The method of claim 113, wherein the Ring HH is selected from cyclopentyl, cyclohexyl, piperidinyl, piperazinyl, 1,3-dithianyl, or tetrahydropyranyl. 116. The method of claim 113, wherein XJH is C1-6alkyl and QJH is C3-6cycloaliphatic, oxetanyl, tetrahydropyrrolidinyl, piperidinyl, piperazinyl, or morpholinyl. 117. The method of claim 113, wherein Ring H, together with Ring HH, is selected from one of the following formulae: 118. The method of claim 117, wherein the compound has formula ID-a: 119. The method of claim 118, wherein Ring HH is cyclopentyl, cyclohexyl, tetrahydropyranyl, 1,3 dithianyl, piperazinyl, piperidinyl, or oxepanyl. 120. The method of claim 119, wherein Ring HH is piperidinyl or tetrahydropyranyl. 121. The method of claim 120, wherein the compound has formula ID-b: 122. The method of claim 121, wherein JHH is H, C(O)(C1-6alkyl), C(O)O(C1-6alkyl), S(O)2(C1-6alkyl), C(O)(C3-6cycloalkyl), C(O)(3-6 membered heterocyclyl), C(O)(5-6 membered heteroaryl), C(O)—(C1-4alkyl)-(5-6 membered heteroaryl), C(O)—(C1-4alkyl)-(heterocyclyl); wherein said heteroaryl or heterocyclyl has 1-3 heteroatoms selected from oxygen, nitrogen, or sulfur; JHH is optionally substituted with OH, O(C1-6alkyl), oxo, C1-6alkyl, CN, or halo. 123. The method of claim 121, wherein JHH is H, C(O)CH3, C(O)OC(CH3)3, C(O)OCH(CH3)2, C(O)OCH2CH3, C(O)OC(OH)(CH3)2, S(O)2CH3, C(O)CH(CH3)2, C(O)C(CH3)3, C(O)CH(CH3)OCH3, 124. The method of claim 113, wherein the compound of formula (ID) is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 125. The method of claim 113, wherein the compound of formula (ID) is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 126. The method of claim 113, wherein the compound is represented by any of the following structural formulae or a pharmaceutically acceptable salt thereof: 127. The method of claim 113, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 128. The method of claim 113, wherein the bacteria infection is colitis. 129. The method of claim 113, wherein the bacteria infection is Crohn's disease. 130. The method of claim 124, wherein the bacteria infection is colitis. 131. The method of claim 124, wherein the bacteria infection is Crohn's disease. 132. The method of claim 124, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 133. The method of claim 125, wherein the bacteria infection is colitis. 134. 
The method of claim 125, wherein the bacteria infection is Crohn's disease. 135. The method of claim 125, wherein the bacteria infection is urinary tract infection or inflammatory bowel disease. 136. A method of inhibiting FimH in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 137. A method of inhibiting adhesion of E. coli in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 138. A method of blocking the interaction between type 1 pili and CEACAM6 in a subject, comprising administering a therapeutically effective amount of a compound represented by Structural Formula ID as defined in claim 113 or a pharmaceutically acceptable salt thereof or a composition comprising same and a pharmaceutically acceptable carrier, adjuvant, or vehicle. 139. The method of claim 136, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 140. The method of claim 136, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 141. The method of claim 137, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 142. The method of claim 137, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 143. The method of claim 138, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof: 144. The method of claim 138, wherein the compound represented by Structural Formula ID is represented by the following structural formula or a pharmaceutically acceptable salt thereof:
1,600
274,059
15,136,979
1,673
A method for reducing or maintaining platelet inhibition in a patient by administering cangrelor prior to an invasive procedure is described. The method of this invention can be used for patients in need of antiplatelet therapy or at risk of thrombosis. The method can further be used in patients who were previously treated with long-acting platelet inhibitors without increasing the risk of excessive bleeding.
1-30. (canceled) 31. A method of maintaining P2Y12 inhibition in a patient being treated with an oral P2Y12 inhibitor who is in need of surgery, the method comprising: (a) discontinuing the treatment with the oral P2Y12 inhibitor; (b) administering intravenously a 4 μg/kg/min continuous infusion of cangrelor; and (c) continuing the administration of the continuous infusion for the longer of (i) at least two hours, or (ii) the duration of surgery. 32. The method of claim 31, wherein the oral P2Y12 therapy is selected from the group consisting of clopidogrel, prasugrel, and ticagrelor. 33. The method of claim 31, wherein the surgery is selected from percutaneous coronary intervention and coronary artery bypass grafting. 34. The method of claim 31, wherein the cangrelor is administered as a bolus infusion in addition to the continuous infusion. 35. The method of claim 34, wherein the bolus infusion is administered prior to the surgery. 36. The method of claim 35, wherein the continuous infusion is administered immediately after the bolus infusion. 37. The method of claim 34, wherein the bolus infusion is administered in less than one minute. 38. The method of claim 31, wherein the continuous infusion is continued for a total duration of up to about 4 hours. 39. The method of claim 31, wherein a second oral P2Y12 inhibitor is administered after the discontinuation of the continuous infusion. 40. The method of claim 39, wherein the second oral P2Y12 inhibitor is selected from the group consisting of clopidogrel, prasugrel, and ticagrelor.
A method for reducing or maintaining platelet inhibition in a patient by administering cangrelor prior to an invasive procedure is described. The method of this invention can be used for patients in need of antiplatelet therapy or at risk of thrombosis. The method can further be used in patients who were previously treated with long-acting platelet inhibitors without increasing the risk of excessive bleeding.1-30. (canceled) 31. A method of maintaining P2Y12 inhibition in a patient being treated with an oral P2Y12 inhibitor who is in need of surgery, the method comprising: (a) discontinuing the treatment with the oral P2Y12 inhibitor; (b) administering intravenously a 4 μg/kg/min continuous infusion of cangrelor; and (c) continuing the administration of the continuous infusion for the longer of (i) at least two hours, or (ii) the duration of surgery. 32. The method of claim 31, wherein the oral P2Y12 therapy is selected from the group consisting of clopidogrel, prasugrel, and ticagrelor. 33. The method of claim 31, wherein the surgery is selected from percutaneous coronary intervention and coronary artery bypass grafting. 34. The method of claim 31, wherein the cangrelor is administered as a bolus infusion in addition to the continuous infusion. 35. The method of claim 34, wherein the bolus infusion is administered prior to the surgery. 36. The method of claim 35, wherein the continuous infusion is administered immediately after the bolus infusion. 37. The method of claim 34, wherein the bolus infusion is administered in less than one minute. 38. The method of claim 31, wherein the continuous infusion is continued for a total duration of up to about 4 hours. 39. The method of claim 31, wherein a second oral P2Y12 inhibitor is administered after the discontinuation of the continuous infusion. 40. The method of claim 39, wherein the second oral P2Y12 inhibitor is selected from the group consisting of clopidogrel, prasugrel, and ticagrelor.
1,600
274,060
15,136,699
1,673
Isolated or pure compounds that inhibit PDE10 are disclosed that have utility in the treatment of a variety of conditions, including but not limited to psychotic, anxiety, movement disorders and/or neurological disorders such as Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, drug-induced psychosis and panic and obsessive-compulsive disorders. Pharmaceutically acceptable salts, stereoisomers, solvates and prodrugs of the compounds are also provided. Also disclosed are compositions containing an isolated or pure compound in combination with a pharmaceutically acceptable carrier, as well as methods relating to the use thereof for inhibiting PDE10 in a warm-blooded animal in need of the same.
1. An isolated compound of the following structure (I): 2. The isolated compound of claim 1, wherein R1 is methyl or hydroxymethyl. 3. The isolated compound of claim 1, wherein R2 is ethyl. 4. The isolated compound of claim 1, wherein R3 and R4 are each independently H, methyl, or glucuronidyl. 5. The isolated compound of claim 1, wherein X is ═O or —OH. 6. The isolated compound of claim 1, wherein the compound is selected from the group consisting of: 7. A compound of the following structure (I): 8. The compound of claim 7, wherein R1 is methyl or hydroxymethyl. 9. The compound of claim 7, wherein R2 is ethyl. 10. The compound of claim 7, wherein R3 and R4 are each independently H, methyl, or glucuronidyl. 11. The compound of claim 7, wherein X is ═O or —OH. 12. The compound of claim 7, wherein the compound is selected from the group consisting of: 13. The compound of claim 7, wherein the purity of the compound is 98.5% or higher. 14. The compound of claim 7, wherein the purity of the compound is 99% or higher. 15. The compound of claim 7, wherein the purity of the compound is 99.5% or higher. 16. A pharmaceutical composition comprising the isolated compound of claim 1 and at least one pharmaceutically acceptable carrier or diluent. 17. A pharmaceutical composition comprising the compound of claim 7 and at least one pharmaceutically acceptable carrier or diluent. 18. A method for inhibiting PDE10 in a warm-blooded animal, comprising administering to the animal an effective amount of an isolated compound of claim 1 or a pharmaceutical composition of claim 16. 19. A method for inhibiting PDE10 in a warm-blooded animal, comprising administering to the animal an effective amount of a compound of claim 7 or a pharmaceutical composition of claim 17. 20. A method for treating neurological disorders in a warm-blooded animal in need thereof, comprising administering to the animal an effective amount of an isolated compound of claim 1 or a pharmaceutical composition of claim 16. 21. A method for treating neurological disorders in a warm-blooded animal in need thereof, comprising administering to the animal an effective amount of a compound of claim 7 or a pharmaceutical composition of claim 17. 22. The method of claim 20, wherein the neurological disorder is selected from the group consisting of psychotic disorders, anxiety disorders, Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, bipolar disorders, posttraumatic stress disorders, drug-induced psychosis, panic disorders, obsessive-compulsive disorders, attention-deficit disorders, disruptive behavior disorders, autism, depression, dementia, epilepsy, insomnias, and multiple sclerosis. 23. The method of claim 22, wherein the neurological disorder is schizophrenia. 24. The method of claim 22, wherein the neurological disorder is post-traumatic stress disorder. 25. The method of claim 22, wherein the neurological disorder is Huntington's disease. 26. 
The method of claim 21, wherein the neurological disorder is selected from the group consisting of psychotic disorders, anxiety disorders, Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, bipolar disorders, posttraumatic stress disorders, drug-induced psychosis, panic disorders, obsessive-compulsive disorders, attention-deficit disorders, disruptive behavior disorders, autism, depression, dementia, epilepsy, insomnias, and multiple sclerosis. 27. The method of claim 26, wherein the neurological disorder is schizophrenia. 28. The method of claim 26, wherein the neurological disorder is post-traumatic stress disorder. 29. The method of claim 26, wherein the neurological disorder is Huntington's disease.
Isolated or pure compounds that inhibit PDE10 are disclosed that have utility in the treatment of a variety of conditions, including but not limited to psychotic, anxiety, movement disorders and/or neurological disorders such as Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, drug-induced psychosis and panic and obsessive-compulsive disorders. Pharmaceutically acceptable salts, stereoisomers, solvates and prodrugs of the compounds are also provided. Also disclosed are compositions containing an isolated or pure compound in combination with a pharmaceutically acceptable carrier, as well as methods relating to the use thereof for inhibiting PDE10 in a warm-blooded animal in need of the same.1. An isolated compound of the following structure (I): 2. The isolated compound of claim 1, wherein R1 is methyl or hydroxymethyl. 3. The isolated compound of claim 1, wherein R2 is ethyl. 4. The isolated compound of claim 1, wherein R3 and R4 are each independently H, methyl, or glucuronidyl. 5. The isolated compound of claim 1, wherein X is ═O or —OH. 6. The isolated compound of claim 1, wherein the compound is selected from the group consisting of: 7. A compound of the following structure (I): 8. The compound of claim 7, wherein R1 is methyl or hydroxymethyl. 9. The compound of claim 7, wherein R2 is ethyl. 10. The compound of claim 7, wherein R3 and R4 are each independently H, methyl, or glucuronidyl. 11. The compound of claim 7, wherein X is ═O or —OH. 12. The compound of claim 7, wherein the compound is selected from the group consisting of: 13. The compound of claim 7, wherein the purity of the compound is 98.5% or higher. 14. The compound of claim 7, wherein the purity of the compound is 99% or higher. 15. The compound of claim 7, wherein the purity of the compound is 99.5% or higher. 16. A pharmaceutical composition comprising the isolated compound of claim 1 and at least one pharmaceutically acceptable carrier or diluent. 17. A pharmaceutical composition comprising the compound of claim 7 and at least one pharmaceutically acceptable carrier or diluent. 18. A method for inhibiting PDE10 in a warm-blooded animal, comprising administering to the animal an effective amount of an isolated compound of claim 1 or a pharmaceutical composition of claim 16. 19. A method for inhibiting PDE10 in a warm-blooded animal, comprising administering to the animal an effective amount of a compound of claim 7 or a pharmaceutical composition of claim 17. 20. A method for treating neurological disorders in a warm-blooded animal in need thereof, comprising administering to the animal an effective amount of an isolated compound of claim 1 or a pharmaceutical composition of claim 16. 21. A method for treating neurological disorders in a warm-blooded animal in need thereof, comprising administering to the animal an effective amount of a compound of claim 7 or a pharmaceutical composition of claim 17. 22. 
The method of claim 20, wherein the neurological disorder is selected from the group consisting of psychotic disorders, anxiety disorders, Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, bipolar disorders, posttraumatic stress disorders, drug-induced psychosis, panic disorders, obsessive-compulsive disorders, attention-deficit disorders, disruptive behavior disorders, autism, depression, dementia, epilepsy, insomnias, and multiple sclerosis. 23. The method of claim 22, wherein the neurological disorder is schizophrenia. 24. The method of claim 22, wherein the neurological disorder is post-traumatic stress disorder. 25. The method of claim 22, wherein the neurological disorder is Huntington's disease. 26. The method of claim 21, wherein the neurological disorder is selected from the group consisting of psychotic disorders, anxiety disorders, Parkinson's disease, Huntington's disease, Alzheimer's disease, encephalitis, phobias, epilepsy, aphasia, Bell's palsy, cerebral palsy, sleep disorders, pain, Tourette's syndrome, schizophrenia, delusional disorders, bipolar disorders, posttraumatic stress disorders, drug-induced psychosis, panic disorders, obsessive-compulsive disorders, attention-deficit disorders, disruptive behavior disorders, autism, depression, dementia, epilepsy, insomnias, and multiple sclerosis. 27. The method of claim 26, wherein the neurological disorder is schizophrenia. 28. The method of claim 26, wherein the neurological disorder is post-traumatic stress disorder. 29. The method of claim 26, wherein the neurological disorder is Huntington's disease.
1,600
274,061
15,030,936
1,673
A process for producing a cellulose derivative, comprising: a first step including reacting a cellulose and a first reactant comprising a long-chain reactant for reacting with a hydroxy group of the cellulose to introduce a long-chain organic group having 5 or more carbon atoms, in a solid-liquid heterogeneous system, to form a cellulose derivative in a swollen state, the cellulose derivative having the long-chain organic group having 5 or more carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining, and performing solid-liquid separation to obtain the cellulose derivative as an intermediate; and a second step including reacting the intermediate cellulose derivative and a second reactant comprising a short-chain reactant for reacting with a remaining hydroxy group of the intermediate cellulose derivative to introduce a short-chain organic group having 4 or less carbon atoms to form a final cellulose derivative having the short-chain organic group having 4 or less carbon atoms introduced therein.
1. A process for producing a cellulose derivative, comprising: a first step including reacting a cellulose and a first reactant comprising a long-chain reactant for reacting with a hydroxy group of the cellulose to introduce a long-chain organic group having 5 or more carbon atoms, in a solid-liquid heterogeneous system, to form a cellulose derivative in a swollen state, the cellulose derivative having the long-chain organic group having 5 or more carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining, and performing solid-liquid separation to obtain the cellulose derivative as an intermediate; and a second step including reacting the intermediate cellulose derivative and a second reactant comprising a short-chain reactant for reacting with a remaining hydroxy group of the intermediate cellulose derivative to introduce a short-chain organic group having 4 or less carbon atoms to form a final cellulose derivative having the short-chain organic group having 4 or less carbon atoms introduced therein. 2. The process for producing a cellulose derivative according to claim 1, wherein the cellulose derivative in a swollen state has a degree of swelling within a range of 10 to 300%. 3. The process for producing a cellulose derivative according to claim 1, wherein the first reactant further comprises a short-chain reactant for reacting with a hydroxy group of the cellulose to introduce a short-chain organic group having 4 or less carbon atoms, and in the first step, a cellulose derivative having the long-chain organic group having 5 or more carbon atoms and the short-chain organic group having 4 or less carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining is formed. 4. The process for producing a cellulose derivative according to claim 1, wherein the first reactant comprises a cardanol derivative as the long-chain reactant. 5. The process for producing a cellulose derivative according to claim 1, wherein the short-chain reactant of the second reactant is a short-chain acylating agent for introducing a short-chain acyl group having 2 to 4 carbon atoms. 6. The process for producing a cellulose derivative according to claim 1, wherein, in the second step, the final cellulose derivative is recovered as a solid content by removing a reaction solution by distillation. 7. A cellulose derivative produced by the process for production according to claim 1. 8. A cellulose derivative comprising a long-chain organic group having 5 or more carbon atoms and at least one short-chain organic group having 4 or less carbon atoms introduced therein by use of hydroxy groups of a cellulose, wherein the cellulose derivative has a crystal structure derived from a cellulose derivative portion to which the short-chain organic group having 4 or less carbon atoms is linked. 9. The cellulose derivative according to claim 7, wherein an average number of hydroxy groups per glucose unit is less than 1.7. 10. A molding resin composition containing the cellulose derivative according to claim 7.
A process for producing a cellulose derivative, comprising: a first step including reacting a cellulose and a first reactant comprising a long-chain reactant for reacting with a hydroxy group of the cellulose to introduce a long-chain organic group having 5 or more carbon atoms, in a solid-liquid heterogeneous system, to form a cellulose derivative in a swollen state, the cellulose derivative having the long-chain organic group having 5 or more carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining, and performing solid-liquid separation to obtain the cellulose derivative as an intermediate; and a second step including reacting the intermediate cellulose derivative and a second reactant comprising a short-chain reactant for reacting with a remaining hydroxy group of the intermediate cellulose derivative to introduce a short-chain organic group having 4 or less carbon atoms to form a final cellulose derivative having the short-chain organic group having 4 or less carbon atoms introduced therein.1. A process for producing a cellulose derivative, comprising: a first step including reacting a cellulose and a first reactant comprising a long-chain reactant for reacting with a hydroxy group of the cellulose to introduce a long-chain organic group having 5 or more carbon atoms, in a solid-liquid heterogeneous system, to form a cellulose derivative in a swollen state, the cellulose derivative having the long-chain organic group having 5 or more carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining, and performing solid-liquid separation to obtain the cellulose derivative as an intermediate; and a second step including reacting the intermediate cellulose derivative and a second reactant comprising a short-chain reactant for reacting with a remaining hydroxy group of the intermediate cellulose derivative to introduce a short-chain organic group having 4 or less carbon atoms to form a final cellulose derivative having the short-chain organic group having 4 or less carbon atoms introduced therein. 2. The process for producing a cellulose derivative according to claim 1, wherein the cellulose derivative in a swollen state has a degree of swelling within a range of 10 to 300%. 3. The process for producing a cellulose derivative according to claim 1, wherein the first reactant further comprises a short-chain reactant for reacting with a hydroxy group of the cellulose to introduce a short-chain organic group having 4 or less carbon atoms, and in the first step, a cellulose derivative having the long-chain organic group having 5 or more carbon atoms and the short-chain organic group having 4 or less carbon atoms introduced therein and having a part of the hydroxy groups of the cellulose remaining is formed. 4. The process for producing a cellulose derivative according to claim 1, wherein the first reactant comprises a cardanol derivative as the long-chain reactant. 5. The process for producing a cellulose derivative according to claim 1, wherein the short-chain reactant of the second reactant is a short-chain acylating agent for introducing a short-chain acyl group having 2 to 4 carbon atoms. 6. The process for producing a cellulose derivative according to claim 1, wherein, in the second step, the final cellulose derivative is recovered as a solid content by removing a reaction solution by distillation. 7. A cellulose derivative produced by the process for production according to claim 1. 8. 
A cellulose derivative comprising a long-chain organic group having 5 or more carbon atoms and at least one short-chain organic group having 4 or less carbon atoms introduced therein by use of hydroxy groups of a cellulose, wherein the cellulose derivative has a crystal structure derived from a cellulose derivative portion to which the short-chain organic group having 4 or less carbon atoms is linked. 9. The cellulose derivative according to claim 7, wherein an average number of hydroxy groups per glucose unit is less than 1.7. 10. A molding resin composition containing the cellulose derivative according to claim 7.
1,600
274,062
15,030,891
1,673
The present invention provides a compound of the Formula I: wherein A is: and W, Y, X, R1, R2, R3, and R4 are as defined herein, or a pharmaceutically acceptable salt thereof, for use as an inhibitor of the EP4 receptor.
1. A compound of the formula: 2. The compound or salt according to claim 1 of the formula: 3. The compound or salt according to claim 2 of the formula: 4. The compound or salt according to claim 3 wherein A is: 5. The compound or salt according to claim 4 wherein R1 is CH3. 6. The compound or salt according to claim 5 wherein R3 is H. 7. The compound or salt according to claim 6 wherein R2 is CH2OH, CH2CH2OH, or OCH3. 8. The compound or salt according to claim 7 wherein R2 is CH2OH. 9. The compound or salt according to claim 5 wherein R2 and R3 together are a OCH2O group attached to vicinal carbon atoms. 10. The compound or salt according to claim 4 wherein R4 is Cl. 11. The compounds or salts thereof according to claim 1 which are: 3-[[6-(1,3-benzodioxol-5-yl)-3-methyl-pyridine-2-carbonyl]amino]-2,4-dimethyl-benzoic acid; 3-[[6-[3-(hydroxymethyl)phenyl]-3-methyl-pyridine-2-carbonyl]amino]-2,4-dimethyl-benzoic acid; and 3-[[3-(3-chlorophenyl)naphthalene-1-carbonyl]amino]-2,4-dimethyl-benzoic acid. 12. A method of treating osteoarthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound, or pharmaceutically acceptable salt thereof, according to claim 1. 13. A method of treating rheumatoid arthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound or pharmaceutically acceptable salt thereof, according to claim 1. 14. A method of treating pain associated with osteoarthritis or rheumatoid arthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound or a pharmaceutically acceptable salt thereof, according to claim 1. 15. (canceled) 16. (canceled) 17. (canceled) 18. (canceled) 19. A pharmaceutical composition, comprising a compound or a pharmaceutically acceptable salt thereof according to claim 1 with one or more pharmaceutically acceptable carriers, diluents, or excipients.
The present invention provides a compound of the Formula I: wherein A is: and W, Y, X, R1, R2, R3, and R4 are as defined herein, or a pharmaceutically acceptable salt thereof, for use as an inhibitor of the EP4 receptor.1. A compound of the formula: 2. The compound or salt according to claim 1 of the formula: 3. The compound or salt according to claim 2 of the formula: 4. The compound or salt according to claim 3 wherein A is: 5. The compound or salt according to claim 4 wherein R1 is CH3. 6. The compound or salt according to claim 5 wherein R3 is H. 7. The compound or salt according to claim 6 wherein R2 is CH2OH, CH2CH2OH, or OCH3. 8. The compound or salt according to claim 7 wherein R2 is CH2OH. 9. The compound or salt according to claim 5 wherein R2 and R3 together are a OCH2O group attached to vicinal carbon atoms. 10. The compound or salt according to claim 4 wherein R4 is Cl. 11. The compounds or salts thereof according to claim 1 which are: 3-[[6-(1,3-benzodioxol-5-yl)-3-methyl-pyridine-2-carbonyl]amino]-2,4-dimethyl-benzoic acid; 3-[[6-[3-(hydroxymethyl)phenyl]-3-methyl-pyridine-2-carbonyl]amino]-2,4-dimethyl-benzoic acid; and 3-[[3-(3-chlorophenyl)naphthalene-1-carbonyl]amino]-2,4-dimethyl-benzoic acid. 12. A method of treating osteoarthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound, or pharmaceutically acceptable salt thereof, according to claim 1. 13. A method of treating rheumatoid arthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound or pharmaceutically acceptable salt thereof, according to claim 1. 14. A method of treating pain associated with osteoarthritis or rheumatoid arthritis in a patient, comprising administering to a patient in need of such treatment an effective amount of a compound or a pharmaceutically acceptable salt thereof, according to claim 1. 15. (canceled) 16. (canceled) 17. (canceled) 18. (canceled) 19. A pharmaceutical composition, comprising a compound or a pharmaceutically acceptable salt thereof according to claim 1 with one or more pharmaceutically acceptable carriers, diluents, or excipients.
1,600
274,063
15,031,020
1,673
The growth and/or proliferation of mammalian cells are modulated by modulating the physical interaction between platelets (thrombocytes) and the surface of the cells. Sulfated polysaccharides, preferably glycosaminoglycans, can be used as a medicament for the inhibition of the physical interaction between the cell surface and platelets in the treatment of a medical disorder associated with unwanted cell growth and/or proliferation. The physical interaction between platelets (thrombocytes) and the surface of the cells can be modulated in vitro in order to modulate cell proliferation. Inhibition of the interaction between the cell surface and platelets can inhibit cell growth, and enhancement of the interaction between platelets and the surface of the cell can enhance cell growth.
1. A method of modulating the growth and/or proliferation of mammalian cells comprising modulating a physical interaction between platelets (thrombocytes) and a surface of said cells. 2. The method according to claim 1, wherein the modulating comprises inhibiting the physical interaction between the cell surface and platelets, resulting in inhibition of cell growth and/or proliferation. 3. The method according to claim 1, wherein said mammalian cells are human cells. 4. The method according to claim 2, wherein the inhibiting comprises administering a sulfated polysaccharide to a subject, wherein the resulting inhibition treats a medical disorder associated with unwanted cell growth and/or proliferation in the subject. 5. The method according to claim 4, wherein the degree of sulfation of said polysaccharide is >1.0. 6. The method according to claim 4, wherein the degree of sulfation of said polysaccharide is >1.2. 7. The method according to claim 4, wherein the degree of sulfation of said polysaccharide is >1.4. 8. The method according to claim 4, wherein the sulfated polysaccharide is a glycosaminoglycan. 9. The method according to claim 8, wherein the glycosaminoglycan is characterised by the absence of the terminal pentasaccharide of Heparin. 10. The method according to claim 8, wherein the glycosaminoglycan exhibits an average molecular weight of about 5000 to about 12000 daltons. 11. The method according to claim 8, wherein the glycosaminoglycan is pentosan polysulfate (PPS). 12. The method according to claim 8, wherein the glycosaminoglycan is dextran sulfate (DXS). 13. The method according to claim 8, wherein the glycosaminoglycan is heparin. 14. The method according to claim 13, wherein the heparin is a low molecular weight (LMW) heparin. 15. The method according to claim 14, wherein the low molecular weight heparin is enoxaparin. 16. The method according to claim 14, wherein the low molecular weight heparin is dalteparin. 17. The method according to claim 14, wherein the low molecular weight heparin is tinzaparin. 18. The method according to claim 4, wherein said sulfated polysaccharide is a sulfated alginate. 19. The method according to claim 4, wherein said sulfated polysaccharide is a sulfated fucoidan. 20. The method according to claim 4, wherein the medical disorder associated with unwanted cell growth and/or proliferation is a tumor disease. 21. The method according to claim 20, wherein the polysaccharide is a glycosaminoglycan. 22. The method according to claim 21, wherein the glycosaminoglycan is pentosan polysulfate (PPS) or dextran sulfate (DXS). 23. (canceled) 24. The method according to claim 4, wherein the medical disorder associated with unwanted cell growth and/or proliferation is an auto immune disease. 25.-28. (canceled) 29. A method of treating a tumor disease, comprising modulating the growth and/or proliferation of mammalian cells according to claim 20, wherein said sulfated polysaccharide is locally administered in proximity to a tumor. 30. (canceled) 31. (canceled) 32. A method of inhibiting cell growth and/or proliferation of a cell in vitro comprising adding a sulfated polysaccharide to said cell, wherein said sulfated polysaccharide is a glycosaminoglycan. 33. The method according to claim 32, wherein said sulfated polysaccharide is pentosan polysulfate (PPS) or dextran sulfate (DXS). 34. The method according to claim 32, wherein said sulfated polysaccharide is heparin or LMW heparin and is administered in vitro at 0.01 to 10 U/mL. 35. 
The method according to claim 32, wherein said sulfated polysaccharide is DXS or PPS and is administered in vitro at 0.01 to 10 ppm in solution. 36. The method according to claim 32, wherein said sulfated polysaccharide is a sulfated alginate or fucoidan. 37.-42. (canceled)
1,600
274,064
15,134,626
1,673
Methods of treating a fungal infection in a subject, the method comprising administering to the subject a modified saponin.
1. A method of treating a fungal infection in a subject, the method comprising administering to the subject a therapeutically effective amount of a compound of Formula III: 2. The method of claim 1, wherein the compound is: 3. The method of claim 1, wherein the compound is: 4. A method of treating a fungal infection in a subject, the method comprising administering to the subject a therapeutically effective amount of a compound having the structure: 5. The method of claim 1, wherein the fungal infection is infection with a Candida species fungus. 6. The method of claim 5, wherein the Candida species is C. albicans. 7. The method of claim 4, wherein the fungal infection is infection with a Candida species fungus. 8. The method of claim 7, wherein the Candida species is C. albicans.
1,600
274,065
15,135,010
1,673
The invention relates to sulphated polysaccharides which have the general structure of the constituent polysaccharides of heparin and which have a molecular weight of less than 8000 Daltons, comprising two antithrombin III-binding hexasaccharide sequences corresponding to formula (I):
1. A sulfonated polysaccharide having a polysaccharide of heparin which has a molecular weight of less than 8000 Daltons, said sulfonated polysaccharide comprising two antithrombin III-affinity sites, wherein said sulfonated polysaccharide is in mixture with other polysaccharides. 2. The sulfonated polysaccharide according to claim 1, comprising two antithrombin III-binding hexasaccharide sequences. 3. The sulfonated polysaccharide according to claim 2, wherein said sulfonated polysaccharide twice comprises the antithrombin III-binding hexasaccharide sequence corresponding to formula (I): 4. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide comprises between 12 and 22 saccharide units. 5. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide corresponds to formula (II): (A)d-(Formula(I))-(B)f-(Formula(I))-(C)g   (II) in which: the A, B and C units, which may be identical to or different from one another, represent disaccharide sequences, the units of formula (I) represent a hexasaccharide sequence corresponding to Formula (I): 6. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide corresponds to formula (II): (A)n-(Formula(I))-(B)m-(Formula(I))-(C)k   (II) in which: the A, B and C units, which may be identical to or different from one another, represent disaccharide sequences, the units of formula (I) represent a hexasaccharide sequence corresponding to formula (I): 7. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide comprises 12 saccharide units and corresponds to formula (III): 8. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide comprises 14 saccharide units. 9. The sulfonated polysaccharide according to claim 8, wherein said sulfonated polysaccharide corresponds to formulae (IV), (V) or (VI): 10. The sulfonated polysaccharide according to claim 1, wherein said sulfonated polysaccharide is in the form of a sodium salt. 11. A low-molecular-weight or ultra-low-molecular-weight heparin, comprising one or more sulfonated polysaccharides according to claim 1, in mixture with other polysaccharides. 12. The low-molecular-weight or ultra-low-molecular-weight heparin according to claim 11, comprising the sulfonated polysaccharide of formula (III), in mixture with other polysaccharides: 13. A pharmaceutical composition, comprising the sulfonated polysaccharide according to claim 1, or a pharmaceutically acceptable salt thereof, in mixture with other polysaccharides, and at least one pharmaceutically acceptable excipient. 14. A method for treating and preventing thrombosis, comprising administering to a patient in need thereof a therapeutically effective amount of the sulfonated polysaccharide of claim 1, in a mixture with other polysaccharides, or a pharmaceutically acceptable salt thereof.
1,600
274,066
15,134,030
1,673
A nucleophilic substitution reaction crosslinks cyclodextrin (CD) with rigid aromatic groups, providing high-surface-area, mesoporous CD-containing polymers (P-CDPs). The P-CDPs can be used for removing organic contaminants from water by encapsulating pollutants to form well-defined host-guest complexes with selectivities complementary to activated carbon (AC) sorbents. The P-CDPs can rapidly sequester pharmaceuticals, pesticides, and other organic micropollutants, achieving equilibrium binding capacity in seconds with adsorption rate constants 15-200 times greater than those of ACs and nonporous CD sorbents. The CD polymer can be regenerated several times, through a room-temperature washing procedure, with no loss in performance.
1. A mesoporous polymeric material comprising one or more cyclodextrins crosslinked with at least an equimolar amount of one or more aryl fluorides. 2. The mesoporous polymeric material of claim 1, wherein the molar ratio of cyclodextrin to aryl fluoride ranges from about 1:1 to about 1:X, wherein X is three times the average number of glucose subunits in the cyclodextrin. 3. The mesoporous polymeric material of claim 2, wherein the molar ratio of cyclodextrin to aryl fluoride is about 1:6. 4. The mesoporous polymeric material of claim 1, wherein the cyclodextrin is selected from the group consisting of α-, β-, γ-cyclodextrin, and combinations thereof. 5. The mesoporous polymeric material of claim 4, wherein the cyclodextrin is β-cyclodextrin. 6. The mesoporous polymeric material of claim 1, wherein the aryl fluoride is selected from the group consisting of tetrafluoroterephthalonitrile, decafluorobiphenyl, octafluoronaphthalene, and combinations thereof. 7. The mesoporous polymeric material of claim 6, wherein the aryl fluoride is tetrafluoroterephthalonitrile. 8. The mesoporous polymeric material of claim 6, wherein the aryl fluoride is decafluorobiphenyl. 9. The mesoporous polymeric material of claim 5, wherein the aryl fluoride is tetrafluoroterephthalonitrile. 10. The mesoporous polymeric material of claim 5, wherein the aryl fluoride is decafluorobiphenyl. 11. The mesoporous polymeric material of claim 5, wherein the aryl fluoride is tetrafluoroterephthalonitrile, and the molar ratio of β-cyclodextrin to tetrafluoroterephthalonitrile is about 1:3. 12. The mesoporous polymeric material of claim 5, wherein the aryl fluoride is decafluorobiphenyl, and the molar ratio of β-cyclodextrin to decafluorobiphenyl is about 1:3. 13. A composition comprising the mesoporous polymeric material of claim 1 covalently bonded to a cellulosic substrate. 14. The composition of claim 13, wherein the cellulosic substrate comprises cotton. 15. The composition of claim 14, wherein the cellulosic substrate is in the form of a fabric. 16. A method of purifying a fluid sample comprising one or more pollutants, the method comprising contacting the fluid sample with the mesoporous polymeric material of claim 1, whereby at least 50 wt. % of the total amount of the one or more pollutants in the fluid sample is adsorbed by the mesoporous polymeric material. 17. The method of claim 16, wherein the fluid sample flows across, around, or through the mesoporous polymeric material. 18. The method of claim 16, wherein the fluid sample is contacted with the mesoporous polymeric material under static conditions for an incubation period and after the incubation period the fluid sample is separated from the mesoporous polymeric material. 19. The method of claim 16, wherein the fluid sample is drinking water, wastewater, ground water, aqueous extract from contaminated soil, or landfill leachate. 20. The method of claim 16, wherein the fluid sample is in the vapor phase. 21. The method of claim 20, wherein the fluid sample comprises one or more volatile organic compounds and air. 22. 
A method of removing one or more compounds from a fluid sample or determining the presence or absence of one or more compounds in a fluid sample comprising: a) contacting the sample with the mesoporous polymeric material of claim 1 for an incubation period; b) separating the mesoporous polymeric material after the incubation period from the sample; and c) heating the porous polymeric material separated in step b), or contacting the porous polymeric material separated in step b) with a solvent, thereby releasing at least a portion of the compounds from the porous polymeric material; and d1) optionally isolating at least a portion of the compounds released in step c); or d2) determining the presence or absence of the compounds released in step c), wherein the presence of one or more compounds correlates to the presence of the one or more compounds in the sample. 23. The method of claim 22, wherein said determining is carried out by gas chromatography, liquid chromatography, supercritical fluid chromatography, or mass spectrometry. 24. The method of claim 22, wherein the sample is a food and the compounds are volatile organic compounds. 25. The method of claim 22, wherein the sample is a perfume or fragrance and the compounds are volatile organic compounds.
1,600
274,067
15,134,016
1,673
This invention relates to compounds represented by formula (I):
1-31. (canceled) 32. A method of treating obesity, comprising administering to a mammal in need thereof a therapeutically effective amount of the compound: 33. A method of treating obesity, comprising administering to a mammal in need thereof a therapeutically effective amount of the compound: 34. A method of treating obesity, comprising administering to a mammal in need thereof a therapeutically effective amount of the compound: 35. A method of treating obesity, comprising administering to a mammal in need thereof a therapeutically effective amount of the compound:
1,600
274,068
15,133,359
1,673
Compounds that selectively inhibit pathological production of human vascular endothelial growth factor (VEGF) and compositions comprising such Compounds are described. Compounds that inhibit viral replication or the production of viral RNA or DNA or viral protein and compositions comprising such Compounds are described. Also described are methods of reducing VEGF using such Compounds and methods for treating cancer and non-neoplastic conditions involving the administration of such Compounds. Further described are methods of inhibiting viral replication or the production of viral RNA or DNA or viral protein using such Compounds and methods for treating viral infections involving the administration of such Compounds. The Compounds may be administered as a single agent therapy or in combination with one or more additional therapies to a human in need of such treatments.
1-8. (canceled) 9. A compound having the structure: 10. The use of claim 9, wherein the acute myelocytic leukemia is selected from myeloblastic, promyelocytic, myelomonocytic, monocytic or erythroleukemia leukemia or myelodysplastic syndrome. 11. The use of claim 9, wherein the chronic leukemia is selected from chronic myelocytic leukemia, chronic lymphocytic leukemia or hairy cell leukemia. 12. The use of claim 9, wherein the lymphoma is selected from Hodgkin's disease or non-Hodgkin's disease. 13. The use of claim 9, wherein the multiple myeloma is selected from smoldering multiple myeloma, nonsecretory myeloma, osteosclerotic myeloma or plasma cell leukemia. 14. A compound having the structure: 15. The use of claim 14, wherein the acute myelocytic leukemia is selected from myeloblastic, promyelocytic, myelomonocytic, monocytic or erythroleukemia leukemia or myelodysplastic syndrome. 16. The use of claim 14, wherein the chronic leukemia is selected from chronic myelocytic leukemia, chronic lymphocytic leukemia or hairy cell leukemia. 17. The use of claim 14, wherein the lymphoma is selected from Hodgkin's disease or non-Hodgkin's disease. 18. The use of claim 14, wherein the multiple myeloma is selected from smoldering multiple myeloma, nonsecretory myeloma, osteosclerotic myeloma or plasma cell leukemia.
1,600
274,069
15,132,417
1,673
This invention is related to nucleic acid chemistry and describes novel 1,2-dithiolane-functionalized nucleoside phosphoramidites (1, Chart 1) and corresponding solid supports (2, Chart 1). In addition to these derivatives, the 1,2-dithiolane moiety can also be introduced at various positions of the nucleobase and sugar, as shown in Schemes 1 to 8. The nucleosides of our invention carry a DMTr (4,4′-dimethoxytrityl) group on the primary hydroxyl for chain elongation. Furthermore, the phosphoramidite function is attached at the 3′-hydroxyl of the nucleoside. This allows oligonucleotide chain extension under standard DNA and RNA synthesis chemistry conditions and techniques, thus leading to high-quality oligonucleotides. These derivatives are useful for introducing reactive thiol groups at either the 3′- or 5′-end of oligonucleotides for attachment to solid supports such as gold, silver and quantum dots.
1. A nucleoside, comprising: a guanine, a 2′-deoxyribose, a dithiolane derivative at N2 of the guanine; and a phosphoramidite derivative at 3′ or a solid support at 3′, wherein the nucleoside is represented by Structure 1 or Structure 2: 2. A nucleoside, comprising: a pyrimidine; a ribose; and a dithiolane derivative at C5 of the pyrimidine, wherein the nucleoside is represented by Structure 3 or Structure 4: 3. A nucleoside, comprising: a purine; a ribose; and a dithiolane derivative at C8 of the purine, wherein the nucleoside is represented by Structure 5 or Structure 6: 4. A nucleoside, comprising: a pyrimidine; a ribose; and a dithiolane derivative at 2′-O of the ribose, wherein the nucleoside is represented by Structure 7 or Structure 8: 5. A nucleoside, comprising: a purine; a ribose; and a dithiolane derivative at 2′-O of the ribose, wherein the nucleoside is represented by Structure 9 or Structure 10: 6. A nucleoside, comprising: a pyrimidine; a ribose; a dithiolane derivative at C5 of the pyrimidine; and a phosphoramidite group at 3′-O of the ribose, wherein the nucleoside is represented by Structure 11 or Structure 12: 7. A nucleoside, comprising: a purine; a ribose; a dithiolane derivative at C8 of the purine; and a phosphoramidite group at 3′-O of the ribose, wherein the nucleoside is represented by Structure 13 or Structure 14: 8. A nucleoside, comprising: a pyrimidine; a ribose; a dithiolane derivative at 2′-O of the ribose; and a phosphoramidite group at 3′-O of the ribose, wherein the nucleoside is represented by Structure 15 or Structure 16: 9. A nucleoside, comprising: a purine; a ribose; a dithiolane derivative at 2′-O of the ribose; and a phosphoramidite group at 3′-O of the ribose, wherein the nucleoside is represented by Structure 17 or Structure 18: 10. A nucleoside, comprising: a ribose; a succinate group at 3′-O of the ribose; and a pyrimidine or a purine, wherein the pyrimidine has a dithiolane derivative at C5 and the pyrimidine is represented by Structure 19 or Structure 20, and the purine has a dithiolane derivative at C8 and is represented by Structure 21 or Structure 22: 11. A nucleoside, comprising: a nucleobase; a ribose; a dithiolane derivative at 2′-O of the ribose; and a succinate group at 3′-O of the ribose, wherein the nucleobase is a pyrimidine or a purine, wherein the pyrimidine is represented by Structure 23 or Structure 24, and the purine is represented by Structure 25 or Structure 26: 12. A nucleoside, comprising: a nucleobase; a ribose; a dithiolane derivative at 2′-O of the ribose; and a solid support at 3′-O of the ribose, wherein the nucleobase is a pyrimidine or a purine, wherein the pyrimidine is represented by Structure 27 or Structure 28, and the purine is represented by Structure 29 or Structure 30: 13. A nucleoside, comprising a dithiolane derivative according to claims 1 to 12, wherein the nucleoside is represented by one of Structure 1 through Structure 30. 14. A conjugate, comprising: the nucleoside of claim 13, and a solid support, wherein the solid support is gold or a quantum dot, and the conjugate is presented in FIG. 14.
1,600
274,070
15,132,692
1,673
The invention relates to novel crystalline phases of 5,6-dichloro-2-(isopropylamino)-1-(β-L-ribofuranosyl)-1H-benzimidazole (Maribavir), pharmaceutical compositions thereof and their use in medical therapy.
1. A crystalline solvate of 5,6-dichloro-2-(isopropylamino)-1-(β-L-ribofuranosyl)-1H-benzimidazole, including a stoichiometric ratio of an organic solvent within a cavity of the crystal lattice, said solvent being selected from the group of: methanol, acetonitrile, ethyl acetate, diethylether, n-butylacetate, or 1-propanol, or mixtures thereof. 2. Crystalline Form VI of 5,6-dichloro-2-(isopropylamino)-1-(β-L-ribofuranosyl)-1H-benzimidazole, wherein said compound has unit cell parameters a=b=9.2825 Å, c=41.602 Å, and P41212 space group (recorded at 296 K).
1,600
274,071
15,030,416
1,673
The present invention relates to methods and pharmaceutical compositions for the treatment of polyomavirus infections. In particular, the present invention relates to a method for treating a polyomavirus infection in a subject in need thereof comprising administering to the subject a therapeutically effective amount of gemcitabine.
1. A method of treating a polyomavirus infection in a subject in need thereof comprising administering to the subject a therapeutically effective amount of gemcitabine or a gemcitabine derivative. 2. The method of claim 1 wherein the polyomavirus is selected from the group consisting of JCV, BKV, KI virus, WU virus, Merkel cell polyomavirus (MCV), Trichodysplasia spinulosa-associated polyomavirus (TSV), HPyV6, HPyV7, and HPyV9. 3. The method of claim 1 wherein the polyomavirus is BK virus. 4. The method of claim 1 wherein the subject has or is suspected of having a latent polyomavirus infection. 5. The method of claim 1 wherein the subject has been diagnosed with an active polyomavirus infection. 6. The method of claim 1 wherein the subject is at risk of developing a disease associated with a polyomavirus, and is selected from the group consisting of individuals diagnosed with an active polyomavirus infection, individuals who are immunocompromised and diagnosed with an active polyomavirus infection, and individuals who are immunocompromised and have or are suspected of having a latent polyomavirus infection. 7. The method of claim 6 wherein the individuals who are immunocompromised are selected from the group consisting of AIDS patients, patients on chronic immunosuppressive treatment regimens, patients with cancer, patients with autoimmune conditions being treated with mycophenolate mofetil or a biologic, and elderly patients with weakened immune systems that have or are suspected of having a latent polyomavirus infection. 8. The method of claim 7 wherein the patients on chronic immunosuppressive treatment regimens are organ transplant patients to whom an immunosuppressive agent is administered. 9. The method of claim 8 wherein the organ transplant patients have at least one transplanted organ selected from the group consisting of kidney, bone marrow, liver, lung, stomach, bone, testis, heart, pancreas and intestine. 10. The method of claim 6 wherein the gemcitabine or gemcitabine derivative is administered in concurrent or sequential combination with an immunosuppressive agent. 11. The method of claim 10 wherein the immunosuppressive agent is selected from the group consisting of antibodies that specifically bind to CD20, CD25 or CD3; calcineurin inhibitors; interferons; steroids; interleukin-1 receptor antagonists; mycophenolate mofetil; Prograf®; azathioprine; methotrexate; and TNF-α binding proteins. 12. A method for the prophylactic treatment of a disease associated with polyomaviruses in a subject in need thereof comprising administering to the subject a therapeutically effective amount of gemcitabine. 13. The method of claim 12 wherein the disease associated with polyomaviruses is selected from the group consisting of progressive multifocal leukoencephalopathy (PML), neural tumors, colorectal cancer, prostate cancer, Merkel cell carcinoma, nephritis and/or nephropathy in patients who have undergone renal transplantation, hemorrhagic cystitis in patients who have undergone a bone marrow or stem cell transplant, and non-hemorrhagic cystitis in patients who have undergone a bone marrow or stem cell transplant. 14. The method according to claim 1, wherein the gemcitabine derivative is a compound of formula: 15. The method according to claim 1, wherein the gemcitabine derivative is gemcitabine-5-elaidate, or gemcitabine-5-elaidate ester which has the structure of formula: 16. The method according to claim 1, wherein the gemcitabine derivative is selected from the group consisting of compounds having the general formula (I) to (VIII): 17. The method of claim 16 wherein R1, R2 and R3 are independently selected from hydrogen and C1 to C30 saturated, monounsaturated or polyunsaturated acyl groups. 18. The method according to claim 1, wherein the gemcitabine derivative is selected from the group consisting of 2′-Deoxy-2′,2′-difluoro-D-cytidine-5′-O-bis(ethoxy-L-alaninyl)-phosphate; 2′-Deoxy-2′,2′-difluoro-D-cytidine-5′-O-bis(benzoxy-L-alaninyl)-phosphate; 2′-Deoxy-2′,2′-difluoro-D-cytidine-5′-O-bis(cyclohexoxy-L-alaninyl)-phosphate; 2′-Deoxy-2′,2′-difluoro-D-cytidine-5′-O-bis(2,2-dimethylpropoxy-L-alaninyl)-phosphate; and 2′-Deoxy-2′,2′-difluoro-D-cytidine-5′-O-bis(iso-propoxy-L-alaninyl)-phosphate. 19. The method according to claim 1, wherein the gemcitabine derivative is a pegylated gemcitabine derivative. 20. The method according to claim 1, wherein the gemcitabine or gemcitabine derivative is administered to the subject in a manner to reach an active concentration in the nanomolar range. 21. The method of claim 20 wherein the gemcitabine or gemcitabine derivative is administered to the subject to reach a concentration of about 1; 1.05; 1.1; 1.15; 1.2; 1.25; 1.3; 1.35; 1.4; 1.45; 1.5; 1.55; 1.6; 1.65; 1.7; 1.75; 1.8; 1.85; 1.9; 1.95; 2; 2.05; 2.1; 2.15; 2.2; 2.25; 2.3; 2.35; 2.4; 2.45; 2.5; 2.55; 2.6; 2.65; 2.7; 2.75; 2.8; 2.85; 2.9; 2.95; 3; 3.05; 3.1; 3.15; 3.2; 3.25; 3.3; 3.35; 3.4; 3.45; 3.5; 3.55; 3.6; 3.65; 3.7; 3.75; 3.8; 3.85; 3.9; 3.95; 4; 4.05; 4.1; 4.15; 4.2; 4.25; 4.3; 4.35; 4.4; 4.45; 4.5; 4.55; 4.6; 4.65; 4.7; 4.75; 4.8; 4.85; 4.9; 4.95; 5; 5.05; 5.1; 5.15; 5.2; 5.25; 5.3; 5.35; 5.4; 5.45; 5.5; 5.55; 5.6; 5.65; 5.7; 5.75; 5.8; 5.85; 5.9; 5.95; 6; 6.05; 6.1; 6.15; 6.2; 6.25; 6.3; 6.35; 6.4; 6.45; 6.5; 6.55; 6.6; 6.65; 6.7; 6.75; 6.8; 6.85; 6.9; 6.95; 7; 7.05; 7.1; 7.15; 7.2; 7.25; 7.3; 7.35; 7.4; 7.45; 7.5; 7.55; 7.6; 7.65; 7.7; 7.75; 7.8; 7.85; 7.9; 7.95; 8; 8.05; 8.1; 8.15; 8.2; 8.25; 8.3; 8.35; 8.4; 8.45; 8.5; 8.55; 8.6; 8.65; 8.7; 8.75; 8.8; 8.85; 8.9; 8.95; 9; 9.05; 9.1; 9.15; 9.2; 9.25; 9.3; 9.35; 9.4; 9.45; 9.5; 9.55; 9.6; 9.65; 9.7; 9.75; 9.8; 9.85; 9.9; 9.95; or 10 nM. 22. The method of claim 7, wherein the patients with cancer are Hodgkin's disease patients or lymphoma patients. 23. The method of claim 7, wherein the biologic is natalizumab, rituximab, or efalizumab. 24. The method of claim 11, wherein the calcineurin inhibitor is selected from the group consisting of ciclosporin, pimecrolimus, tacrolimus, sirolimus and cyclosporine. 25. The method of claim 13, wherein the neural tumor is a medulloblastoma, an oligodendroglioma, an astroglioma or a glioblastoma. 26. The method of claim 16, wherein the nitrogen protecting group is an ester, an amide, an acetal or a ketal. 27. The method of claim 17, wherein the C1 to C30 saturated monounsaturated or polyunsaturated acyl groups are i) C8 to C26 saturated, monounsaturated or polyunsaturated acyl groups or ii) C12 to C24 saturated, monounsaturated or polyunsaturated acyl groups.
1,600
274,072
15,099,837
1,673
Disclosed are derivatives of amphotericin B (AmB) characterized by improved therapeutic index compared to AmB. The AmB derivatives include C16 ureas, carbamates, and amides according to Formula (I); C3′-substituted C16 ureas, carbamates, and amides according to Formula (II); C16 acyls according to Formula (III); C2′epi-C16 ureas, carbamates, and amides according to Formula (IV); and C16 oxazolidinone derivatives according to Formula (V). Also disclosed are pharmaceutical compositions comprising the AmB derivatives, and therapeutic methods of using the AmB derivatives.
1. A compound represented by Formula (I) or a pharmaceutically acceptable salt thereof: 2-13. (canceled) 14. A compound represented by Formula (II) or a pharmaceutically acceptable salt thereof: 15-37. (canceled) 38. A compound represented by Formula (III) or a pharmaceutically acceptable salt thereof: 39-56. (canceled) 57. A compound represented by Formula (IV) or a pharmaceutically acceptable salt thereof: 58-70. (canceled) 71. A compound represented by Formula (V) or a pharmaceutically acceptable salt thereof: 72-77. (canceled) 78. A pharmaceutical composition, comprising a compound of claim 1; and a pharmaceutically acceptable carrier. 79. (canceled) 80. (canceled) 81. A method of treating a fungal infection, comprising administering to a subject in need thereof a therapeutically effective amount of a compound of claim 1, thereby treating the fungal infection. 82. (canceled) 83. (canceled)
Disclosed are derivatives of amphotericin B (AmB) characterized by improved therapeutic index compared to AmB. The AmB derivatives include C16 ureas, carbamates, and amides according to Formula (I); C3′-substituted C16 ureas, carbamates, and amides according to Formula (II); C16 acyls according to Formula (III); C2′epi-C16 ureas, carbamates, and amides according to Formula (IV); and C16 oxazolidinone derivatives according to Formula (V). Also disclosed are pharmaceutical compositions comprising the AmB derivatives, and therapeutic methods of using the AmB derivatives.1. A compound represented by Formula (I) or a pharmaceutically acceptable salt thereof: 2-13. (canceled) 14. A compound represented by Formula (II) or a pharmaceutically acceptable salt thereof: 15-37. (canceled) 38. A compound represented by Formula (III) or a pharmaceutically acceptable salt thereof: 39-56. (canceled) 57. A compound represented by Formula (IV) or a pharmaceutically acceptable salt thereof: 58-70. (canceled) 71. A compound represented by Formula (V) or a pharmaceutically acceptable salt thereof: 72-77. (canceled) 78. A pharmaceutical composition, comprising a compound of claim 1; and a pharmaceutically acceptable carrier. 79. (canceled) 80. (canceled) 81. A method of treating a fungal infection, comprising administering to a subject in need thereof a therapeutically effective amount of a compound of claim 1, thereby treating the fungal infection. 82. (canceled) 83. (canceled)
1,600
274,073
15,029,869
1,673
The invention provides compounds of the formula:
1. A compound represented by formula I: 2. The compound according to claim 1, wherein B is the group (a′): 3. The compound according to claim 2, wherein R6 is NH2. 4. The compound according to claim 1, wherein B is the group (b′): 5. The compound according to claim 4, wherein R8 is H. 6. The compound according to claim 1, wherein B is the group (c′): 7. The compound according to claim 1, wherein R1 is a triphosphate or a tri-thiophosphate of the formula: 8. The compound according to claim 7 wherein U is O. 9. The compound according to claim 1, wherein R1 and R2 together form a bivalent linker of the formula: 10. The compound according to claim 9 wherein U is O. 11. The compound according to claim 9, wherein R3 is C1-C6alkoxy or NHC(R15)(R15′)C(═O)R16. 12. The compound according to claim 1, wherein R1 is the group (iv): 13. The compound according to claim 12 wherein U is O and R24 is H. 14. The compound according to claim 12 wherein R24 is H; R14 is optionally substituted phenyl; one of R15 and R15′ is H and the other one is C1-C3alkyl; R16 is C1-C8alkyl. 15. The compound according to claim 12, wherein one of R15 and R15′ is H and the stereochemistry is as indicated in the partial formula: 16. The compound according to claim 1, wherein R2 is H. 17. The compound according to claim 1, wherein R1 is H. 18. (canceled) 19. (canceled) 20. A pharmaceutical composition comprising a compound according to claim 1 in association with a pharmaceutically acceptable adjuvant, diluent or carrier. 21. A pharmaceutical composition comprising a compound according to claim 1, further comprising one or more additional antiviral agent(s). 22. A method for the treatment of hepatitis C virus infection comprising administering to a subject in need thereof a therapeutically effective amount of a compound according to claim 1. 23. (canceled)
The invention provides compounds of the formula:1. A compound represented by formula I: 2. The compound according to claim 1, wherein B is the group (a′): 3. The compound according to claim 2, wherein R6 is NH2. 4. The compound according to claim 1, wherein B is the group (b′): 5. The compound according to claim 4, wherein R8 is H. 6. The compound according to claim 1, wherein B is the group (c′): 7. The compound according to claim 1, wherein R1 is a triphosphate or a tri-thiophosphate of the formula: 8. The compound according to claim 7 wherein U is O. 9. The compound according to claim 1, wherein R1 and R2 together form a bivalent linker of the formula: 10. The compound according to claim 9 wherein U is O. 11. The compound according to claim 9, wherein R3 is C1-C6alkoxy or NHC(R15)(R15′)C(═O)R16. 12. The compound according to claim 1, wherein R1 is the group (iv): 13. The compound according to claim 12 wherein U is O and R24 is H. 14. The compound according to claim 12 wherein R24 is H; R14 is optionally substituted phenyl; one of R15 and R15′ is H and the other one is C1-C3alkyl; R16 is C1-C8alkyl. 15. The compound according to claim 12, wherein one of R15 and R15′ is H and the stereochemistry is as indicated in the partial formula: 16. The compound according to claim 1, wherein R2 is H. 17. The compound according to claim 1, wherein R1 is H. 18. (canceled) 19. (canceled) 20. A pharmaceutical composition comprising a compound according to claim 1 in association with a pharmaceutically acceptable adjuvant, diluent or carrier. 21. A pharmaceutical composition comprising a compound according to claim 1, further comprising one or more additional antiviral agent(s). 22. A method for the treatment of hepatitis C virus infection comprising administering to a subject in need thereof a therapeutically effective amount of a compound according to claim 1. 23. (canceled)
1,600
274,074
15,029,617
1,673
The present invention provides methods for reducing apoptosis of non-cancerous cells during a cancer treatment and beneficial effects associated with reducing such apoptosis. In particular, methods of the invention comprise administering a tyrosine kinase inhibitor to a cancer patient who is undergoing cancer treatment in order to reduce apoptosis of non-cancerous cells.
1. A method for reducing apoptosis of non-cancerous cells during a cancer treatment, said method comprising administering a therapeutically effective amount of a tyrosine kinase inhibitor prior to administering a cancer treatment to a cancer patient. 2. The method of claim 1 further comprising the step of administering said tyrosine kinase inhibitor after administering said cancer treatment to said cancer patient. 3. The method of claim 1, wherein said step of administering said tyrosine kinase inhibitor prior to said cancer treatment reduces apoptosis of non-cancerous cells by at least 30%. 4. The method of claim 1, wherein said cancer treatment consists of radiotherapy. 5. The method of claim 1, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 6. The method of claim 1, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 7. A method for treating a cancer patient, said method comprising administering a tyrosine kinase inhibitor to a cancer patient prior to administering a cancer treatment to protect noncancerous cells from said cancer treatment, wherein administration of said tyrosine kinase inhibitor significantly reduces the amount of apoptosis of noncancerous cells. 8. The method of claim 7, wherein said cancer treatment consists of radiotherapy. 9. The method of claim 7, wherein said cancer treatment consists of chemotherapy. 10. The method of claim 7, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 11. The method of claim 7, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 12. The method of claim 7, wherein said cancer comprises head and neck cancer, pancreatic cancer, stomach cancer, breast cancer, colon cancer, lung cancer, liver cancer, leukemia, bone cancer, ovarian cancer, cervical cancer, brain cancer, skin cancer, prostate cancer or thyroid cancer. 13. The method of claim 7, wherein said tyrosine kinase inhibitor is administered to said cancer patient prior to administering said cancer treatment. 14. A method for reducing a side-effect of a cancer treatment in a cancer patient, said method comprising administering a tyrosine kinase inhibitor to said cancer patient prior to administering a cancer treatment to said patient. 15. The method of claim 14, wherein said cancer treatment consists of radiotherapy. 16. The method of claim 14, wherein said cancer treatment consists of chemotherapy. 17. The method of claim 14, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 18. The method of claim 14, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 19. The method of claim 14, wherein said cancer comprises head and neck cancer, pancreatic cancer, stomach cancer, breast cancer, colon cancer, lung cancer, liver cancer, leukemia, bone cancer, ovarian cancer, cervical cancer, brain cancer, skin cancer, prostate cancer or thyroid cancer. 20. The method of claim 14 further comprising the step of administering a second tyrosine kinase inhibitor to said cancer patient after administering said cancer treatment.
The present invention provides methods for reducing apoptosis of non-cancerous cells during a cancer treatment and beneficial effects associated with reducing such apoptosis. In particular, methods of the invention comprise administering a tyrosine kinase inhibitor to a cancer patient who is undergoing cancer treatment in order to reduce apoptosis of non-cancerous cells.1. A method for reducing apoptosis of non-cancerous cells during a cancer treatment, said method comprising administering a therapeutically effective amount of a tyrosine kinase inhibitor prior to administering a cancer treatment to a cancer patient. 2. The method of claim 1 further comprising the step of administering said tyrosine kinase inhibitor after administering said cancer treatment to said cancer patient. 3. The method of claim 1, wherein said step of administering said tyrosine kinase inhibitor prior to said cancer treatment reduces apoptosis of non-cancerous cells by at least 30%. 4. The method of claim 1, wherein said cancer treatment consists of radiotherapy. 5. The method of claim 1, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 6. The method of claim 1, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 7. A method for treating a cancer patient, said method comprising administering a tyrosine kinase inhibitor to a cancer patient prior to administering a cancer treatment to protect noncancerous cells from said cancer treatment, wherein administration of said tyrosine kinase inhibitor significantly reduces the amount of apoptosis of noncancerous cells. 8. The method of claim 7, wherein said cancer treatment consists of radiotherapy. 9. The method of claim 7, wherein said cancer treatment consists of chemotherapy. 10. The method of claim 7, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 11. The method of claim 7, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 12. The method of claim 7, wherein said cancer comprises head and neck cancer, pancreatic cancer, stomach cancer, breast cancer, colon cancer, lung cancer, liver cancer, leukemia, bone cancer, ovarian cancer, cervical cancer, brain cancer, skin cancer, prostate cancer or thyroid cancer. 13. The method of claim 7, wherein said tyrosine kinase inhibitor is administered to said cancer patient prior to administering said cancer treatment. 14. A method for reducing a side-effect of a cancer treatment in a cancer patient, said method comprising administering a tyrosine kinase inhibitor to said cancer patient prior to administering a cancer treatment to said patient. 15. The method of claim 14, wherein said cancer treatment consists of radiotherapy. 16. The method of claim 14, wherein said cancer treatment consists of chemotherapy. 17. The method of claim 14, wherein said tyrosine kinase inhibitor inhibits c-Abl or Src-family kinase, or both. 18. The method of claim 14, wherein said tyrosine kinase inhibitor is selected from the group consisting of dasatinib, imatinib, ponatinib, saracatinib, and a combination thereof. 19. 
The method of claim 14, wherein said cancer comprises head and neck cancer, pancreatic cancer, stomach cancer, breast cancer, colon cancer, lung cancer, liver cancer, leukemia, bone cancer, ovarian cancer, cervical cancer, brain cancer, skin cancer, prostate cancer or thyroid cancer. 20. The method of claim 14 further comprising the step of administering a second tyrosine kinase inhibitor to said cancer patient after administering said cancer treatment.
1,600
274,075
15,099,168
1,673
The present invention relates to methods, formulations and kits comprising deuterated trehalose for treating myopathies, neurodegenerative disorders, or tauopathies associated with abnormal protein aggregation.
1. A method for treating or alleviating a disease associated with abnormal protein aggregation and/or inclusion bodies formation in myocytes, neurons and other cells or extracellular compartments or at least one symptom associated therewith, in a human subject in need thereof comprising administering to said subject a therapeutically effective amount of deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose or a pharmaceutical formulation comprising a therapeutically effective amount of deuterated trehalose or a mixture of several deuterated trehaloses, and optionally comprising non-deuterated trehalose, wherein in said deuterated trehalose at least one hydrogen atom attached to a carbon atom is replaced by a deuterium atom. 2. The method of claim 1, wherein said deuterated trehalose is α,α-deuterated trehalose. 3. The method of claim 1, wherein said deuterated trehalose is selected from α,α-[1,1′-2H2]trehalose having a structure according to Formula I 4. The method of claim 1, wherein said disease is any one of a neurodegenerative disorder, poly-alanine aggregation disorder, poly-glutamine aggregation disorder, a protein codon reiteration disorder, a myopathy and a tauopathy. 5. The method of claim 1, wherein said disease is any one of Huntington's disease, oculopharyngeal muscular dystrophy (OPMD), spinocerebellar ataxias (SCA), Friedreich's ataxia, spinal and bulbar muscular atrophy (SBMA), Parkinson's disease, Alzheimer's disease and amyotrophic lateral sclerosis (ALS), dentatorubral-pallidoluysian atrophy (DRPLA), Pick's disease, Corticobasal degeneration (CBD), Progressive supranuclear palsy (PSP) and Frontotemporal dementia and parkinsonism linked to chromosome 17 (FTDP-17). 6. The method of claim 1, wherein said deuterated trehalose or a pharmaceutical formulation comprising thereof is administered parenterally. 7. The method of claim 1, wherein said pharmaceutical formulation is an injectable solution for parenteral administration. 8. The method of claim 1, wherein said deuterated trehalose or said mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or a pharmaceutical formulation comprising thereof is administered enterally, specifically by oral administration. 9. The method of claim 8, wherein said pharmaceutical formulation is an aqueous solution. 10. The method of claim 8, wherein said pharmaceutical formulation is a solid dosage form. 11. The method of claim 10, wherein said pharmaceutical formulation comprises deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, as sole active ingredient, and optionally further comprises at least one pharmaceutically acceptable additive, carrier, excipient or diluent. 12. The method of claim 11, wherein the concentration of deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, in said formulation is between about 0.1% (w/v) and about 50% (w/v). 13-14. (canceled) 15. The method of claim 13, wherein said parenteral administration is any one of intravenous, intramuscular and intraperitoneal administration. 16-19. (canceled) 20. 
The method of claim 1, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or pharmaceutical formulation comprising thereof is administered at a frequency of between once daily and once per month. 21. The method of claim 20, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation is administered once daily at from about 1 mg/kg/day to about 1 gram/kg/day of deuterated trehalose. 22. The method of claim 21, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation is administered as a single injection administration. 23-25. (canceled) 26. The method of claim 1, wherein administration of said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation adapted for intravenous administration is completed within about 75 to about 120 minutes, specifically within less than 90 minutes. 27-29. (canceled) 30. An aqueous pharmaceutical formulation for any one of enteral or parenteral administration, comprising a therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, as a sole active ingredient, wherein in any of said deuterated trehalose at least one hydrogen atom attached to a carbon atom is replaced by a deuterium atom, optionally further comprising at least one of pharmaceutically acceptable additive, excipient, diluent and carrier. 31. (canceled) 32. The aqueous pharmaceutical formulation according to claim 30, wherein said deuterated trehalose is selected from α,α-[1,1′-2H2]trehalose having a structure according to Formula I 33-52. (canceled) 53. A kit comprising: (a) pharmaceutically acceptable deuterated trehalose or active derivative thereof; (b) at least one pharmaceutically acceptable additive, carrier, excipient and diluent; (c) means for preparing an injectable aqueous solution of the deuterated trehalose by mixing said deuterated trehalose with at least one of said additive, carrier, excipient and diluent; (d) means for parenterally administering said injectable solution to a patient in need; and (e) instructions for use.
The present invention relates to methods, formulations and kits comprising deuterated trehalose for treating myopathies, neurodegenerative disorders, or tauopathies associated with abnormal protein aggregation.1. A method for treating or alleviating a disease associated with abnormal protein aggregation and/or inclusion bodies formation in myocytes, neurons and other cells or extracellular compartments or at least one symptom associated therewith, in a human subject in need thereof comprising administering to said subject a therapeutically effective amount of deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose or a pharmaceutical formulation comprising a therapeutically effective amount of deuterated trehalose or a mixture of several deuterated trehaloses, and optionally comprising non-deuterated trehalose, wherein in said deuterated trehalose at least one hydrogen atom attached to a carbon atom is replaced by a deuterium atom. 2. The method of claim 1, wherein said deuterated trehalose is α,α-deuterated trehalose. 3. The method of claim 1, wherein said deuterated trehalose is selected from α,α-[1,1′-2H2]trehalose having a structure according to Formula I 4. The method of claim 1, wherein said disease is any one of a neurodegenerative disorder, poly-alanine aggregation disorder, poly-glutamine aggregation disorder, a protein codon reiteration disorder, a myopathy and a tauopathy. 5. The method of claim 1, wherein said disease is any one of Huntington's disease, oculopharyngeal muscular dystrophy (OPMD), spinocerebellar ataxias (SCA), Friedreich's ataxia, spinal and bulbar muscular atrophy (SBMA), Parkinson's disease, Alzheimer's disease and amyotrophic lateral sclerosis (ALS), dentatorubral-pallidoluysian atrophy (DRPLA), Pick's disease, Corticobasal degeneration (CBD), Progressive supranuclear palsy (PSP) and Frontotemporal dementia and parkinsonism linked to chromosome 17 (FTDP-17). 6. The method of claim 1, wherein said deuterated trehalose or a pharmaceutical formulation comprising thereof is administered parenterally. 7. The method of claim 1, wherein said pharmaceutical formulation is an injectable solution for parenteral administration. 8. The method of claim 1, wherein said deuterated trehalose or said mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or a pharmaceutical formulation comprising thereof is administered enterally, specifically by oral administration. 9. The method of claim 8, wherein said pharmaceutical formulation is an aqueous solution. 10. The method of claim 8, wherein said pharmaceutical formulation is a solid dosage form. 11. The method of claim 10, wherein said pharmaceutical formulation comprises deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, as sole active ingredient, and optionally further comprises at least one pharmaceutically acceptable additive, carrier, excipient or diluent. 12. The method of claim 11, wherein the concentration of deuterated trehalose or a mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, in said formulation is between about 0.1% (w/v) and about 50% (w/v). 13-14. (canceled) 15. The method of claim 13, wherein said parenteral administration is any one of intravenous, intramuscular and intraperitoneal administration. 16-19. (canceled) 20. 
The method of claim 1, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or pharmaceutical formulation comprising thereof is administered at a frequency of between once daily and once per month. 21. The method of claim 20, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation is administered once daily at from about 1 mg/kg/day to about 1 gram/kg/day of deuterated trehalose. 22. The method of claim 21, wherein said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, or deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation is administered as a single injection administration. 23-25. (canceled) 26. The method of claim 1, wherein administration of said therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, comprised in said pharmaceutical formulation adapted for intravenous administration is completed within about 75 to about 120 minutes, specifically within less than 90 minutes. 27-29. (canceled) 30. An aqueous pharmaceutical formulation for any one of enteral or parenteral administration, comprising a therapeutically effective amount of deuterated trehalose or mixture of several deuterated trehaloses, optionally together with non-deuterated trehalose, as a sole active ingredient, wherein in any of said deuterated trehalose at least one hydrogen atom attached to a carbon atom is replaced by a deuterium atom, optionally further comprising at least one of pharmaceutically acceptable additive, excipient, diluent and carrier. 31. (canceled) 32. The aqueous pharmaceutical formulation according to claim 30, wherein said deuterated trehalose is selected from α,α-[1,1′-2H2]trehalose having a structure according to Formula I 33-52. (canceled) 53. A kit comprising: (a) pharmaceutically acceptable deuterated trehalose or active derivative thereof; (b) at least one pharmaceutically acceptable additive, carrier, excipient and diluent; (c) means for preparing an injectable aqueous solution of the deuterated trehalose by mixing said deuterated trehalose with at least one of said additive, carrier, excipient and diluent; (d) means for parenterally administering said injectable solution to a patient in need; and (e) instructions for use.
1,600
274,076
15,967,675
3,638
A hybrid structural member for an insulated structural panel includes a core member surrounded on at least two sides by a high-density structural foam. The hybrid structural member may be manufactured by placing a core member in a cavity of an injection mold and surrounding the core member by insulating foam on at least two sides. The core member may be held in place by screws, posts, pins, a vacuum, or other suitable means.
1. A hybrid structural member, comprising: a core member having first and second planar surfaces, wherein the first and second planar surfaces define a length of the core member; at least one additional surface disposed adjacent to the first or second planar surfaces along its respective length to define a cross-sectional area of the core member; and high density insulating material; wherein the high density material is disposed on the core member such that at least two of the first and second planar surfaces and the additional surface are covered by the high density material, to form a hybrid structural member having dimensions substantially proportional to the length and width of the core member. 2. The hybrid structural member of claim 1, wherein the core member has a rectangular cross-section, wherein the first and second planar surfaces define a length and width of the core member, and wherein the at least one additional surface comprises third and fourth planar surfaces that are respectively parallel to the first and second planar surfaces, and wherein the core member has substantially uniform dimensions throughout its length. 3. The hybrid structural member of claim 2, wherein the high density insulating material covers the first, second, third and fourth planar surfaces. 4. The hybrid structural member of claim 3, wherein the high density insulating material covers at least one end of the core member. 5. The hybrid structural member of claim 2, wherein the core member is comprised of dimensional lumber. 6. The hybrid structural member of claim 2, wherein the core member is comprised of plywood. 7. The hybrid structural member of claim 2, wherein the core member is comprised of glue-laminated wood fibers. 8. The hybrid structural member of claim 1, wherein the cross-section of the core member along its length has the shape of an I-beam. 9. The hybrid structural member of claim 8, wherein the high density insulating material covers the exterior surfaces of the I-beam along its length. 10. The hybrid structural member of claim 9, wherein the high density insulating material covers at least one end of the I-beam. 11. The hybrid structural member of claim 9, wherein the core member is comprised of wood materials comprising at least one of wood, plywood, oriented strand board, and glue-laminated wood fibers. 12. The hybrid structural member of claim 1, wherein the core member is comprised of a metal. 13. The hybrid structural member of claim 1, wherein the core member is comprised of gypsum. 14. The hybrid structural member of claim 1, wherein the core member is comprised of a rigid plastic. 15. The hybrid structural member of claim 1, wherein the core member is comprised of a ceramic. 16. The hybrid structural member of claim 1, wherein the core member is magnesium oxide. 17. The hybrid structural member of claims 1, 2, or 9, wherein the high density insulating material comprises one of polyurethane or polyisocyanurate. 18. The hybrid structural member of claim 11, wherein the density of the high density insulating material is at least 2.2 lb/ft3. 19. 
A hybrid structural member, comprising: a core member having at least one non-planar surface, wherein the non-planar surface extends the length of the core member; and high density insulating material having a density of at least 2 lb/ft3 comprising at least one of polyurethane or polyisocyanurate; wherein the core member is surrounded by high density material along its entire length, to form a hybrid structural member having dimensions along its length that are substantially proportional to the length and width of the core member.
A hybrid structural member for an insulated structural panel includes a core member surrounded on at least two sides by a high-density structural foam. The hybrid structural member may be manufactured by placing a core member in a cavity of an injection mold and surrounding the core member by insulating foam on at least two sides. The core member may be held in place by screws, posts, pins, a vacuum, or other suitable means.1. A hybrid structural member, comprising: a core member having first and second planar surfaces, wherein the first and second planar surfaces define a length of the core member; at least one additional surface disposed adjacent to the first or second planar surfaces along its respective length to define a cross-sectional area of the core member; and high density insulating material; wherein the high density material is disposed on the core member such that at least two of the first and second planar surfaces and the additional surface are covered by the high density material, to form a hybrid structural member having dimensions substantially proportional to the length and width of the core member. 2. The hybrid structural member of claim 1, wherein the core member has a rectangular cross-section, wherein the first and second planar surfaces define a length and width of the core member, and wherein the at least one additional surface comprises third and fourth planar surfaces that are respectively parallel to the first and second planar surfaces, and wherein the core member has substantially uniform dimensions throughout its length. 3. The hybrid structural member of claim 2, wherein the high density insulating material covers the first, second, third and fourth planar surfaces. 4. The hybrid structural member of claim 3, wherein the high density insulating material covers at least one end of the core member. 5. The hybrid structural member of claim 2, wherein the core member is comprised of dimensional lumber. 6. The hybrid structural member of claim 2, wherein the core member is comprised of plywood. 7. The hybrid structural member of claim 2, wherein the core member is comprised of glue-laminated wood fibers. 8. The hybrid structural member of claim 1, wherein the cross-section of the core member along its length has the shape of an I-beam. 9. The hybrid structural member of claim 8, wherein the high density insulating material covers the exterior surfaces of the I-beam along its length. 10. The hybrid structural member of claim 9, wherein the high density insulating material covers at least one end of the I-beam. 11. The hybrid structural member of claim 9, wherein the core member is comprised of wood materials comprising at least one of wood, plywood, oriented strand board, and glue-laminated wood fibers. 12. The hybrid structural member of claim 1, wherein the core member is comprised of a metal. 13. The hybrid structural member of claim 1, wherein the core member is comprised of gypsum. 14. The hybrid structural member of claim 1, wherein the core member is comprised of a rigid plastic. 15. The hybrid structural member of claim 1, wherein the core member is comprised of a ceramic. 16. The hybrid structural member of claim 1, wherein the core member is magnesium oxide. 17. The hybrid structural member of claims 1, 2, or 9, wherein the high density insulating material comprises one of polyurethane or polyisocyanurate. 18. The hybrid structural member of claim 11, wherein the density of the high density insulating material is at least 2.2 lb/ft3. 19. 
A hybrid structural member, comprising: a core member having at least one non-planar surface, wherein the non-planar surface extends the length of the core member; and high density insulating material having a density of at least 2 lb/ft3 comprising at least one of polyurethane or polyisocyanurate; wherein the core member is surrounded by high density material along its entire length, to form a hybrid structural member having dimensions along its length that are substantially proportional to the length and width of the core member.
3,600
274,077
15,968,407
3,638
A post sleeve includes a reinforced concrete body preformed around a liner that defines a cavity extending longitudinally within the body, sized to receive a post. Standoff ribs run lengthwise within the cavity and extend inward from inner walls of the cavity. A post in the cavity is supported laterally by the standoff ribs. Drain channels between the ribs permit water to flow past the post and exit the cavity via a lower aperture. A drain tube is coupled to the lower aperture, and extends downward where it is covered with gravel at the bottom of a post hole. Concrete is poured around the post sleeve in the hole. The cavity is adaptable to receive posts of varying sizes, and at various depths. A collar closes a space between the post and the top of the cavity, permitting air circulation within the cavity while shedding water and substantially preventing insects from entering the cavity.
1. (canceled) 2. A post sleeve assembly, comprising: a body having a central longitudinal axis, a first end portion at a first end of the body along the longitudinal axis, a second end portion at a second end of the body opposite the first end along the longitudinal axis, a first aperture at the first end portion, and a second aperture at the second end portion; a cavity extending through the body along the longitudinal axis from the first aperture to the second aperture such that the cavity is exposed to an external environment of the body through the first aperture and through the second aperture, the cavity bounded by an interior wall of the body, the interior wall including a plurality of standoff elements, each standoff element having a respective innermost surface; and a plurality of stops, each of the stops adjacent to a respective one of the standoff elements, each of the stops extending from the interior wall toward the longitudinal axis without extending toward the longitudinal axis beyond the innermost surface of the respective adjacent standoff element, each of the stops having a bearing surface that is substantially perpendicular to the longitudinal axis. 3. The post sleeve assembly of claim 2, further comprising a drain tube having a first end coupled to the second aperture such that fluid exiting the cavity through the second aperture passes into the drain tube. 4. The post sleeve assembly of claim 2, further comprising a socket positioned within the cavity and configured to receive an end of a post. 5. The post sleeve assembly of claim 4 wherein the socket is one of a plurality of sockets positioned in the cavity near the second end portion, the plurality of sockets are arranged concentrically with respect to the longitudinal axis, and each of the sockets has either a different size or a different shape than the other sockets. 6. The post sleeve assembly of claim 5, further comprising a gutter extending across each of the sockets, the gutter permitting fluid to flow out of the cavity through the second aperture when a post is positioned in one of the sockets. 7. The post sleeve assembly of claim 2 wherein at least one of the first and second apertures is threaded. 8. The post sleeve assembly of claim 2 wherein an inner space collectively defined by the innermost surfaces of the plurality of standoff elements has lateral dimensions substantially corresponding to those of an end portion of a 4×4 post. 9. The post sleeve assembly of claim 2 wherein each of the standoff elements includes a standoff rib extending parallel to the longitudinal axis. 10. The post sleeve assembly of claim 9 wherein each of the stops extends between a respective pair of the standoff ribs. 11. The post sleeve assembly of claim 2, further comprising a plate having a plurality of tabs, each tab configured to bear against the bearing surface of a respective stop when the plate is positioned in the cavity. 12. The post sleeve assembly of claim 11 wherein the plate includes a plate aperture configured to provide lateral support to a post extending through the plate aperture and through the cavity along the longitudinal axis. 13. The post sleeve assembly of claim 11 wherein the plate includes a socket configured to receive an end of a post extending through the cavity along the longitudinal axis. 14. 
The post sleeve assembly of claim 2 wherein the plurality of stops is a first plurality of stops and the post sleeve assembly further comprises a second plurality of stops, each of the second plurality of stops extending from the interior wall toward the longitudinal axis without extending toward the longitudinal axis beyond an innermost surface of an adjacent one of the standoff elements, each of the second plurality of stops having a bearing surface that is substantially perpendicular to the longitudinal axis and offset along the longitudinal axis with respect to the first plurality of stops. 15. The post sleeve assembly of claim 14, further comprising a plate having a plurality of tabs, each tab configured to bear against the bearing surface of a respective one of the second plurality of stops when the plate is positioned in the cavity. 16. The post sleeve assembly of claim 14 wherein the second plurality of stops are offset along the longitudinal axis with respect to the first plurality of stops by four inches. 17. The post sleeve assembly of claim 2, further comprising a rim extending around the first aperture and laterally outward from the first end portion of the body.
A post sleeve includes a reinforced concrete body preformed around a liner that defines a cavity extending longitudinally within the body, sized to receive a post. Standoff ribs run lengthwise within the cavity and extend inward from inner walls of the cavity. A post in the cavity is supported laterally by the standoff ribs. Drain channels between the ribs permit water to flow past the post and exit the cavity via a lower aperture. A drain tube is coupled to the lower aperture, and extends downward where it is covered with gravel at the bottom of a post hole. Concrete is poured around the post sleeve in the hole. The cavity is adaptable to receive posts of varying sizes, and at various depths. A collar closes a space between the post and the top of the cavity, permitting air circulation within the cavity while shedding water and substantially preventing insects from entering the cavity.1. (canceled) 2. A post sleeve assembly, comprising: a body having a central longitudinal axis, a first end portion at a first end of the body along the longitudinal axis, a second end portion at a second end of the body opposite the first end along the longitudinal axis, a first aperture at the first end portion, and a second aperture at the second end portion; a cavity extending through the body along the longitudinal axis from the first aperture to the second aperture such that the cavity is exposed to an external environment of the body through the first aperture and through the second aperture, the cavity bounded by an interior wall of the body, the interior wall including a plurality of standoff elements, each standoff element having a respective innermost surface; and a plurality of stops, each of the stops adjacent to a respective one of the standoff elements, each of the stops extending from the interior wall toward the longitudinal axis without extending toward the longitudinal axis beyond the innermost surface of the respective adjacent standoff element, each of the stops having a bearing surface that is substantially perpendicular to the longitudinal axis. 3. The post sleeve assembly of claim 2, further comprising a drain tube having a first end coupled to the second aperture such that fluid exiting the cavity through the second aperture passes into the drain tube. 4. The post sleeve assembly of claim 2, further comprising a socket positioned within the cavity and configured to receive an end of a post. 5. The post sleeve assembly of claim 4 wherein the socket is one of a plurality of sockets positioned in the cavity near the second end portion, the plurality of sockets are arranged concentrically with respect to the longitudinal axis, and each of the sockets has either a different size or a different shape than the other sockets. 6. The post sleeve assembly of claim 5, further comprising a gutter extending across each of the sockets, the gutter permitting fluid to flow out of the cavity through the second aperture when a post is positioned in one of the sockets. 7. The post sleeve assembly of claim 2 wherein at least one of the first and second apertures is threaded. 8. The post sleeve assembly of claim 2 wherein an inner space collectively defined by the innermost surfaces of the plurality of standoff elements has lateral dimensions substantially corresponding to those of an end portion of a 4×4 post. 9. The post sleeve assembly of claim 2 wherein each of the standoff elements includes a standoff rib extending parallel to the longitudinal axis. 10. 
The post sleeve assembly of claim 9 wherein each of the stops extends between a respective pair of the standoff ribs. 11. The post sleeve assembly of claim 2, further comprising a plate having a plurality of tabs, each tab configured to bear against the bearing surface of a respective stop when the plate is positioned in the cavity. 12. The post sleeve assembly of claim 11 wherein the plate includes a plate aperture configured to provide lateral support to a post extending through the plate aperture and through the cavity along the longitudinal axis. 13. The post sleeve assembly of claim 11 wherein the plate includes a socket configured to receive an end of a post extending through the cavity along the longitudinal axis. 14. The post sleeve assembly of claim 2 wherein the plurality of stops is a first plurality of stops and the post sleeve assembly further comprises a second plurality of stops, each of the second plurality of stops extending from the interior wall toward the longitudinal axis without extending toward the longitudinal axis beyond an innermost surface of an adjacent one of the standoff elements, each of the second plurality of stops having a bearing surface that is substantially perpendicular to the longitudinal axis and offset along the longitudinal axis with respect to the first plurality of stops. 15. The post sleeve assembly of claim 14, further comprising a plate having a plurality of tabs, each tab configured to bear against the bearing surface of a respective one of the second plurality of stops when the plate is positioned in the cavity. 16. The post sleeve assembly of claim 14 wherein the second plurality of stops are offset along the longitudinal axis with respect to the first plurality of stops by four inches. 17. The post sleeve assembly of claim 2, further comprising a rim extending around the first aperture and laterally outward from the first end portion of the body.
3,600
274,078
15,968,332
3,638
According to one aspect, the invention relates to an optical security component intended for being observed under direct reflection. The component comprises a structure engraved on a layer of a material having a refraction index n2, a thin layer of a dielectric material having a refraction index n1 other than n2, deposited on the structure, and a layer of a material having a refraction index n0 other than n1, encapsulating the structure coated with the thin layer. The structure has a first pattern modulated by a second pattern such that, in at least one first region (61, 86), the first pattern comprises a low-relief with a first set of facets, the shapes of which are determined so as to generate at least one first concave or convex cylindrical reflective element, and the second pattern forms a first subwavelength grating acting, after depositing the thin layer and encapsulating the structure, as a first wavelength-subtractive filter; in at least one second region (62, 86), the first pattern comprises a low-relief with a second set of facets, the shapes of which are determined so as to generate at least one concave or convex cylindrical reflective element (64), and the second pattern forms a second subwavelength grating acting, after depositing the thin layer and encapsulating the structure, as a second wavelength-subtractive filter, separate from the first wavelength-subtractive filter. Each subwavelength grating can be a zero order diffraction grating such as a DID.
1. An optical security component intended to be observed according to an observation face in a spectral band lying between 380 and 780 nm and in direct reflection, comprising: a structure (S) engraved on a layer of a material exhibiting a refractive index n2, a thin layer of a dielectric material exhibiting a refractive index n1 different from n2, deposited on the structure; a layer of a material of refractive index n0 different from n1, encapsulating the structure overlaid with the thin layer, the structure exhibiting a first pattern modulated by a second pattern in such a way that: in at least one first region, the first pattern comprises a bas-relief with a first set of facets whose shapes are determined so as to generate at least one first cylindrical reflective element concave or convex seen from the observation face, exhibiting a first principal direction, and the second pattern forms a first sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a first wavelength-subtractive filter; in at least one second region, the first pattern comprises a bas-relief with a second set of facets whose shapes are determined so as to generate at least one second cylindrical reflective element concave or convex seen from the observation face, exhibiting a second principal direction, and the second pattern forms a second sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a second wavelength-subtractive filter, different from the first wavelength-subtractive filter. 2. The optical security component as claimed in claim 1, wherein the first and second sub wavelength gratings are defined from the projections on each of the first and second sets of facets of two, unidimensional, plane gratings arranged in a plane (II) parallel to the plane of the component and characterized respectively by first and second grating vectors of perpendicular directions, the direction of one of the grating vectors being parallel to one of the first and second principal directions. 3. The optical security component as claimed in claim 2, in which the norm of the grating vector whose direction is parallel to one of the first or second principal directions is variable in such a way that the grating projected on the corresponding set of the facets is of substantially constant spacing. 4. The optical security component as claimed in claim 1, in which the first and second sets of facets form sets of plane surfaces (Fi), oriented along respectively the first and second principal directions, and inclined with respect to the plane of the component in a continuously variable manner to respectively first and second substantially plane central facets (F0). 5. The optical security component as claimed in claim 4, in which the width (to) of the central facet of a set of facets, measured in a direction perpendicular to the principal direction, is at least equal to 5% of the length of the corresponding reflective element, measured in the same direction. 6. The optical security component as claimed in claim 4, in which at least one of the first and second sets of facets exhibits a longitudinal axis (Δ1), parallel to the principal direction of the corresponding reflective element, and centered on the central facet. 7. The optical security component as claimed in claim 4, in which at least one of the first and second central facets forms an end of the corresponding set of facets. 8. 
The optical security component as claimed in claim 4, in which, in a third region situated in proximity to the central facets of the first and second sets of facets, the first pattern of the structure is formed of a plane surface parallel to the first and second central facets and the second pattern forms one or more sub wavelength gratings acting, after deposition of the thin layer and encapsulation of the structure, as one or more wavelength-subtractive filters. 9. The optical security component as claimed in claim 1, in which the first and second principal directions are parallel. 10. The optical security component as claimed in claim 1, in which in the first region, the bas-relief comprises a set of facets whose shapes are determined so as to generate one or more concave cylindrical reflective elements arranged according to a first line, and, in the second region, the bas-relief comprises a set of facets whose shapes are determined so as to generate one or more convex cylindrical reflective elements arranged according to a second line parallel to the first line. 11. The optical security component as claimed in claim 1, in which the first and second principal directions are non-parallel. 12. The optical security component as claimed in claim 1, suitable for securing a document or a product, and comprising on the face opposite to the observation face a layer for the transfer of the component onto the document or the product. 13. The optical security component as claimed in claim 12, furthermore comprising, on the observation face side, a support film intended to be detached after transfer of the component onto the document or the product. 14. The optical security component as claimed in claim 1, suitable for the manufacture of a security thread for securing banknotes, and comprising on the observation face side and on the face opposite to the observation face, protection layers. 15. The optical security component as claimed in claim 12, furthermore comprising on the side opposite to the observation face, a colored contrast layer. 16. A banknote comprising at least one first optical security component as claimed in claim 14, said first optical security component forming a security thread partially inserted into a support of the banknote. 17. The banknote as claimed in claim 16, furthermore comprising a second optical security component positioned on a face of the banknote and forming two wavelength-subtractive filters similar to the first and second wavelength-subtractive filters of the first optical security component. 18. 
A method for manufacturing an optical security component intended to be observed in a spectral band lying between 380 and 780 nm and in direct reflection, the method comprising: the deposition on a support film of a first layer of a material of refractive index n0; the formation on the first layer of at least one engraved structure (S), the structure (S) exhibiting a first pattern modulated by a second pattern in such a way that: in at least one first region, the first pattern comprises a bas-relief with a first set of facets whose shapes are determined so as to generate at least one first cylindrical reflective element, concave or convex seen from the observation face, exhibiting a first principal direction, and the second pattern forms a first sub wavelength grating acting, after deposition of a thin layer and encapsulation of the structure, as a first wavelength-subtractive filter; in at least one second region, the first pattern comprises a bas-relief with a second set of facets whose shapes are determined so as to generate at least one second cylindrical reflective element, concave or convex seen from the observation face, exhibiting a second principal direction, and the second pattern forms a second sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a second wavelength-subtractive filter, different from the first wavelength-subtractive filter; the method furthermore comprising: the deposition on the engraved structure (S) of a thin layer of a dielectric material exhibiting a refractive index n1 different from n0; the encapsulation of the structure (S) overlaid with the thin layer by a layer of a material exhibiting a refractive index n2 different from n1. 19. The method for manufacturing a banknote as claimed in claim 17 comprising: the manufacture of a first optical security component, the incorporation of the first optical security component into a support of the banknote, and the fitting in place of the second optical security component on a face of said support.
According to one aspect, the invention relates to an optical security component intended for being observed under direct reflection. The component comprises a structure engraved on a layer of a material having a refraction index n2, a thin layer of a dielectric material having a refraction index n1 other than n2, deposited on the structure, and a layer of a material having a refraction index n0 other than n1, encapsulating the structure coated with the thin layer. The structure has a first pattern modulated by a second pattern such that, in at least one first region (61, 86), the first pattern comprises a low-relief with a first set of facets, the shapes of which are determined so as to generate at least one first concave or convex cylindrical reflective element, and the second pattern forms a first subwavelength grating acting, after depositing the thin layer and encapsulating the structure, as a first wavelength-subtractive filter; in at least one second region (62, 86), the first pattern comprises a low-relief with a second set of facets, the shapes of which are determined so as to generate at least one concave or convex cylindrical reflective element (64), and the second pattern forms a second subwavelength grating acting, after depositing the thin layer and encapsulating the structure, as a second wavelength-subtractive filter, separate from the first wavelength-subtractive filter. Each subwavelength grating can be a zero order diffraction grating such as a DID.1. An optical security component intended to be observed according to an observation face in a spectral band lying between 380 and 780 nm and in direct reflection, comprising: a structure (S) engraved on a layer of a material exhibiting a refractive index n2, a thin layer of a dielectric material exhibiting a refractive index n1 different from n2, deposited on the structure; a layer of a material of refractive index n0 different from n1, encapsulating the structure overlaid with the thin layer, the structure exhibiting a first pattern modulated by a second pattern in such a way that: in at least one first region, the first pattern comprises a bas-relief with a first set of facets whose shapes are determined so as to generate at least one first cylindrical reflective element concave or convex seen from the observation face, exhibiting a first principal direction, and the second pattern forms a first sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a first wavelength-subtractive filter; in at least one second region, the first pattern comprises a bas-relief with a second set of facets whose shapes are determined so as to generate at least one second cylindrical reflective element concave or convex seen from the observation face, exhibiting a second principal direction, and the second pattern forms a second sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a second wavelength-subtractive filter, different from the first wavelength-subtractive filter. 2. 
The optical security component as claimed in claim 1, wherein the first and second sub wavelength gratings are defined from the projections on each of the first and second sets of facets of two, unidimensional, plane gratings arranged in a plane (Π) parallel to the plane of the component and characterized respectively by first and second grating vectors of perpendicular directions, the direction of one of the grating vectors being parallel to one of the first and second principal directions. 3. The optical security component as claimed in claim 2, in which the norm of the grating vector whose direction is parallel to one of the first or second principal directions is variable in such a way that the grating projected on the corresponding set of the facets is of substantially constant spacing. 4. The optical security component as claimed in claim 1, in which the first and second sets of facets form sets of plane surfaces (Fi), oriented along respectively the first and second principal directions, and inclined with respect to the plane of the component in a continuously variable manner to respectively first and second substantially plane central facets (F0). 5. The optical security component as claimed in claim 4, in which the width (t0) of the central facet of a set of facets, measured in a direction perpendicular to the principal direction, is at least equal to 5% of the length of the corresponding reflective element, measured in the same direction. 6. The optical security component as claimed in claim 4, in which at least one of the first and second sets of facets exhibits a longitudinal axis (Δ1), parallel to the principal direction of the corresponding reflective element, and centered on the central facet. 7. The optical security component as claimed in claim 4, in which at least one of the first and second central facets forms an end of the corresponding set of facets. 8. The optical security component as claimed in claim 4, in which, in a third region situated in proximity to the central facets of the first and second sets of facets, the first pattern of the structure is formed of a plane surface parallel to the first and second central facets and the second pattern forms one or more sub wavelength gratings acting, after deposition of the thin layer and encapsulation of the structure, as one or more wavelength-subtractive filters. 9. The optical security component as claimed in claim 1, in which the first and second principal directions are parallel. 10. The optical security component as claimed in claim 1, in which in the first region, the bas-relief comprises a set of facets whose shapes are determined so as to generate one or more concave cylindrical reflective elements arranged according to a first line, and, in the second region, the bas-relief comprises a set of facets whose shapes are determined so as to generate one or more convex cylindrical reflective elements arranged according to a second line parallel to the first line. 11. The optical security component as claimed in claim 1, in which the first and second principal directions are non-parallel. 12. The optical security component as claimed in claim 1, suitable for securing a document or a product, and comprising on the face opposite to the observation face a layer for the transfer of the component onto the document or the product. 13. 
The optical security component as claimed in claim 12, furthermore comprising, on the observation face side, a support film intended to be detached after transfer of the component onto the document or the product. 14. The optical security component as claimed in claim 1, suitable for the manufacture of a security thread for securing banknotes, and comprising on the observation face side and on the face opposite to the observation face, protection layers. 15. The optical security component as claimed in claim 12, furthermore comprising on the side opposite to the observation face, a colored contrast layer. 16. A banknote comprising at least one first optical security component as claimed in claim 14, said first optical security component forming a security thread partially inserted into a support of the banknote. 17. The banknote as claimed in claim 16, furthermore comprising a second optical security component positioned on a face of the banknote and forming two wavelength-subtractive filters similar to the first and second wavelength-subtractive filters of the first optical security component. 18. A method for manufacturing an optical security component intended to be observed in a spectral band lying between 380 and 780 nm and in direct reflection, the method comprising: the deposition on a support film of a first layer of a material of refractive index n0; the formation on the first layer of at least one engraved structure (S), the structure (S) exhibiting a first pattern modulated by a second pattern in such a way that: in at least one first region, the first pattern comprises a bas-relief with a first set of facets whose shapes are determined so as to generate at least one first cylindrical reflective element, concave or convex seen from the observation face, exhibiting a first principal direction, and the second pattern forms a first sub wavelength grating acting, after deposition of a thin layer and encapsulation of the structure, as a first wavelength-subtractive filter; in at least one second region, the first pattern comprises a bas-relief with a second set of facets whose shapes are determined so as to generate at least one second cylindrical reflective element, concave or convex seen from the observation face, exhibiting a second principal direction, and the second pattern forms a second sub wavelength grating acting, after deposition of the thin layer and encapsulation of the structure, as a second wavelength-subtractive filter, different from the first wavelength-subtractive filter; the method furthermore comprising: the deposition on the engraved structure (S) of a thin layer of a dielectric material exhibiting a refractive index n1 different from n0; the encapsulation of the structure (S) overlaid with the thin layer by a layer of a material exhibiting a refractive index n2 different from n1. 19. The method for manufacturing a banknote as claimed in claim 17 comprising: the manufacture of a first optical security component, the incorporation of the first optical security component into a support of the banknote, and the fitting in place of the second optical security component on a face of said support.
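For context on the subwavelength gratings recited in this record: the claims rely on the grating period being small enough that, once the structure is coated with the thin layer and encapsulated, only the zero diffraction order propagates, so the grating acts as a wavelength-subtractive filter rather than as an ordinary diffractive. A minimal sketch of that condition using the standard grating equation; the wavelength and index values below are illustrative assumptions, not figures taken from the application:
\[
n_{\mathrm{out}}\sin\theta_m \;=\; n_{\mathrm{in}}\sin\theta_i \;+\; m\,\frac{\lambda}{\Lambda}, \qquad m \in \mathbb{Z},
\]
and zero-order-only operation at normal incidence requires every order with \(|m|\ge 1\) to be evanescent on both sides of the grating, i.e.
\[
\Lambda \;<\; \frac{\lambda}{\max(n_0,\,n_2)} .
\]
For example, with an assumed \(\lambda = 550\ \mathrm{nm}\) and \(\max(n_0,n_2)\approx 1.6\), the period must satisfy \(\Lambda \lesssim 340\ \mathrm{nm}\), well below the 380-780 nm observation band recited in the claims.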
3,600
274,079
15,968,338
3,638
Disclosed herein is a modular structural and electrical building system. The system includes first and second structural support members. In use, the second end of the first structural support member is adjacent the first end of the second structural support member. The system further includes at least one conductor bar secured to each of the structural members. In addition, the system utilizes conductor bar connectors for maintaining electrical connectivity from one conductor bar to the next on the same structural support member and a power drop connector for connecting electrical power carried by the conductor bar to various pieces of equipment. In addition, a jump cable provides a bridge for electrical power from the conductor bar on the first structural member to the conductor bar secured to an adjacent structural member.
1. A pre-wired building structural member system comprising: at least one roof structural member with first and second longitudinally opposed ends; a plurality of serially aligned conductor bars longitudinally traversing and secured to the roof structural member, the conductor bars further comprising first and second longitudinally disposed ends; at least one conductor bar-to-conductor bar connector for electrically connecting the first end of the conductor bar to the second end of an adjacent conductor bar; and at least one power drop connector for withdrawing electrical power from the serially aligned conductor bars, wherein in a use configuration, the power drop connector is configured for engagement at any location along an entire span of the plurality of conductor bars. 2. The pre-wired building structural member system of claim 1, wherein the first and second ends of the roof structural member are configured as bearing blocks. 3. The pre-wired building structural member system of claim 2, wherein the bearing blocks are supported by a roof beam. 4. The pre-wired building structural member system of claim 3, wherein the roof beam supports the first end of a first roof structural member and the second end of an adjacent roof structural member. 5. The pre-wired building structural member system of claim 4, wherein a jump cable connects the conductor bar on the first structural member to the conductor bar on the second structural member. 6. The pre-wired building structural member system of claim 1, wherein the at least one power drop connector is configured for connection to the conductor bar. 7. The pre-wired building structural member system of claim 1, wherein the conductor bar is a single pole insulated conductor rail. 8. A modularly electrified building structural member system, the system comprising: a first and second structural support member, each support member further comprising first and second longitudinally opposed ends, the second end of the first structural support member adjacent the first end of the second structural support member; at least one conductor bar longitudinally traversing at least a portion of and secured to each of the structural members, the conductor bar further comprising first and second longitudinally opposed ends; at least one conductor bar-to-conductor bar connector for electrically connecting the first end of the conductor bar to the second end of an adjacent conductor bar; at least one power drop connector for expeditiously connecting a device power cord to the electrical power carried by the conductor bar, wherein the power drop connector is configured for engagement along the entire span of the at least one conductor bar longitudinally traversing and secured to each of the structural members; and a jump cable for delivering electrical power from the conductor bar traversing and secured to the first structural member to the conductor bar traversing and secured to the second structural member, the conductor bars further comprising first and second longitudinally disposed ends. 9. The modularly electrified building structural member system of claim 8, wherein the first and second longitudinally opposed ends of the structural member are configured as bearing blocks. 10. The modularly electrified building structural member system of claim 9, wherein the bearing blocks are supported by an I-beam. 11. 
The modularly electrified building structural member system of claim 10, wherein the I-beam supports the first end of a first structural member and the second end of an adjacent structural member. 12. The modularly electrified building structural member system of claim 8, wherein the device is selected from the group consisting of lighting, a fan, a space heater and a crane. 13. The modularly electrified building structural member system of claim 8, wherein the at least one power drop connector is configured for manual insertion and extraction from the conductor bar. 14. The modularly electrified building structural member system of claim 8, wherein the first and second structural support members are selected from the group consisting of joists, trusses, truss purlins, bar joists and girders. 15. A modular structural and electrical building system, the system comprising: a first and second structural support member each support member further comprising first and second longitudinally opposed ends and a plurality of web members spanning between a lower chord and an upper chord, the second end of the first structural support member proximate the first end of the second structural support member; at least one longitudinally extending conductor bar secured to each of the structural members, the conductor bars further comprising first and second longitudinally opposed ends; at least one conductor bar-to-conductor bar connector for maintaining electrical connectivity from the second end of the conductor bar to the first end of an adjacent conductor bar; at least one power drop connector for connecting a power cord from a device to the electrical power carried by the conductor bar, wherein the power drop connector is configured for engagement anywhere along the span of the at least one conductor bar longitudinally traversing and secured to each of the structural members; and a jump cable configured to deliver electrical power from the conductor bar traversing and secured to the first structural member to the conductor bar traversing and secured to the second structural member, the jump cable further comprising first and second longitudinally disposed ends. 16. The modular structural and electrical building system of claim 15, wherein the at least one power drop connector is configured for manual insertion and extraction from the conductor bar. 17. The modular structural and electrical building system of claim 15, wherein the conductor bar further comprises at least two longitudinally extending channels, each channel at least partially lined with electrically conductive material and configured for receiving the power drop connector. 18. The modular structural and electrical building system of claim 15, wherein the conductor bar is selected from the group consisting of a single pole insulated conductor rail, a multi-pole conductor rail, a multi-pole enclosed conductor rail and conduit. 19. The modular structural and electrical building system of claim 15, wherein the engaged power drop connector is manually repositionable along the span of the at least one conductor bar. 20. The modular structural and electrical building system of claim 15, wherein the engaged power drop connector is manually fixedly secured at a specified location along the span of the at least one conductor bar.
Disclosed herein is a modular structural and electrical building system. The system includes first and second structural support members. In use, the second end of the first structural support member is adjacent the first end of the second structural support member. The system further includes at least one conductor bar secured to each of the structural members. In addition, the system utilizes conductor bar connectors for maintaining electrical connectivity from one conductor bar to the next on the same structural support member and a power drop connector for connecting electrical power carried by the conductor bar to various pieces of equipment. In addition, a jump cable provides a bridge for electrical power from the conductor bar on the first structural member to the conductor bar secured to an adjacent structural member.1. A pre-wired building structural member system comprising: at least one roof structural member with first and second longitudinally opposed ends; a plurality of serially aligned conductor bars longitudinally traversing and secured to the roof structural member, the conductor bars further comprising first and second longitudinally disposed ends; at least one conductor bar-to-conductor bar connector for electrically connecting the first end of the conductor bar to the second end of an adjacent conductor bar; and at least one power drop connector for withdrawing electrical power from the serially aligned conductor bars, wherein in a use configuration, the power drop connector is configured for engagement at any location along an entire span of the plurality of conductor bars. 2. The pre-wired building structural member system of claim 1, wherein the first and second ends of the roof structural member are configured as bearing blocks. 3. The pre-wired building structural member system of claim 2, wherein the bearing blocks are supported by a roof beam. 4. The pre-wired building structural member system of claim 3, wherein the roof beam supports the first end of a first roof structural member and the second end of an adjacent roof structural member. 5. The pre-wired building structural member system of claim 4, wherein a jump cable connects the conductor bar on the first structural member to the conductor bar on the second structural member. 6. The pre-wired building structural member system of claim 1, wherein the at least one power drop connector is configured for connection to the conductor bar. 7. The pre-wired building structural member system of claim 1, wherein the conductor bar is a single pole insulated conductor rail. 8. 
A modularly electrified building structural member system, the system comprising: a first and second structural support member, each support member further comprising first and second longitudinally opposed ends, the second end of the first structural support member adjacent the first end of the second structural support member; at least one conductor bar longitudinally traversing at least a portion of and secured to each of the structural members, the conductor bar further comprising first and second longitudinally opposed ends; at least one conductor bar-to-conductor bar connector for electrically connecting the first end of the conductor bar to the second end of an adjacent conductor bar; at least one power drop connector for expeditiously connecting a device power cord to the electrical power carried by the conductor bar, wherein the power drop connector is configured for engagement along the entire span of the at least one conductor bar longitudinally traversing and secured to each of the structural members; and a jump cable for delivering electrical power from the conductor bar traversing and secured to the first structural member to the conductor bar traversing and secured to the second structural member, the conductor bars further comprising first and second longitudinally disposed ends. 9. The modularly electrified building structural member system of claim 8, wherein the first and second longitudinally opposed ends of the structural member are configured as bearing blocks. 10. The modularly electrified building structural member system of claim 9, wherein the bearing blocks are supported by an I-beam. 11. The modularly electrified building structural member system of claim 10, wherein the I-beam supports the first end of a first structural member and the second end of an adjacent structural member. 12. The modularly electrified building structural member system of claim 8, wherein the device is selected from the group consisting of lighting, a fan, a space heater and a crane. 13. The modularly electrified building structural member system of claim 8, wherein the at least one power drop connector is configured for manual insertion and extraction from the conductor bar. 14. The modularly electrified building structural member system of claim 8, wherein the first and second structural support members are selected from the group consisting of joists, trusses, truss purlins, bar joists and girders. 15. 
A modular structural and electrical building system, the system comprising: a first and second structural support member each support member further comprising first and second longitudinally opposed ends and a plurality of web members spanning between a lower chord and an upper chord, the second end of the first structural support member proximate the first end of the second structural support member; at least one longitudinally extending conductor bar secured to each of the structural members, the conductor bars further comprising first and second longitudinally opposed ends; at least one conductor bar-to-conductor bar connector for maintaining electrical connectivity from the second end of the conductor bar to the first end of an adjacent conductor bar; at least one power drop connector for connecting a power cord from a device to the electrical power carried by the conductor bar, wherein the power drop connector is configured for engagement anywhere along the span of the at least one conductor bar longitudinally traversing and secured to each of the structural members; and a jump cable configured to deliver electrical power from the conductor bar traversing and secured to the first structural member to the conductor bar traversing and secured to the second structural member, the jump cable further comprising first and second longitudinally disposed ends. 16. The modular structural and electrical building system of claim 15, wherein the at least one power drop connector is configured for manual insertion and extraction from the conductor bar. 17. The modular structural and electrical building system of claim 15, wherein the conductor bar further comprises at least two longitudinally extending channels, each channel at least partially lined with electrically conductive material and configured for receiving the power drop connector. 18. The modular structural and electrical building system of claim 15, wherein the conductor bar is selected from the group consisting of a single pole insulated conductor rail, a multi-pole conductor rail, a multi-pole enclosed conductor rail and conduit. 19. The modular structural and electrical building system of claim 15, wherein the engaged power drop connector is manually repositionable along the span of the at least one conductor bar. 20. The modular structural and electrical building system of claim 15, wherein the engaged power drop connector is manually fixedly secured at a specified location along the span of the at least one conductor bar.
3,600
274,080
15,967,519
3,638
In accordance with example embodiments of the present disclosure, a method, system and apparatus for a modular sprung floor is disclosed. An example embodiment is a sprung floor module having interchangeable components. Interchangeable components make up standardized assemblies. An example embodiment has a frame module that may be installed in a series to cover a given area. The frame and edge modules comprise a frame that supports a performance surface. Standardized components include fiber-reinforced, composite linear-structural members combined with elastomeric joints and support members.
1. A modular grid structure for a sprung floor comprising: providing a horizontal imaginary grid having an X axis and a Y axis; and at least two elongate members parallel to said X axis; and at least two elongate members parallel to said Y axis; and at least two elastomeric pads, each having a planar surface portion; and a hollow portion open on two sides; and said at least two elastomeric pads fixedly engaged through said hollow portions open on two sides, in an upright orientation, with said elongate members parallel to the X axis; and said at least two elastomeric pads fixedly engaged through said hollow portions open on two sides, in an inverted orientation, with said elongate members parallel to the Y axis; and at least two elastomeric joint members having at least a first through hole and a second through hole; and said first and second through holes being perpendicular with respect to each other; and said elongate members parallel to the X axis fixedly engaged through said first through hole; and said elongate members parallel to the Y axis fixedly engaged through said second through hole in said joint member wherein; said planar portion of said at least two elastomeric pads fixedly engaged, in an inverted orientation, with said elongate members parallel to the Y axis movably engaged with a sub-floor; and said planar portion of said at least two elastomeric pads fixedly engaged, in an upright orientation, with said elongate members parallel to the X axis fixedly engaged with a planar floor surface substantially covering said modular grid structure, providing a sprung floor. 2. The modular grid structure of claim one wherein: said elongate members are comprised of fiber-reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 3. The modular grid structure of claim one wherein: said elongate members are hollow structures comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 4. The modular grid structure of claim one wherein: said elastomeric pads are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 5. The modular grid structure of claim one wherein: said joint members are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 6. The modular grid structure of claim one wherein: the planar surface substantially covering said modular grid structure is comprised of laminated wood. 7. The modular grid structure of claim one further comprising: a first modular grid structure comprising: at least four elongate members parallel with said X axis are engaged with said joint members which are in turn engaged with at least four of said elongate members parallel to said Y-axis providing a grid structure; and said at least four elongate members parallel to said Y axis are each engaged, at one end, part way through said second through hole in said at least two elastomeric joint members; and providing a second grid structure; wherein at least four elongate members of said second grid structure, parallel to said Y axis are engaged, at one end, the remainder of the way through said second through-hole in said at least two elastomeric joint members of said first modular grid structure; wherein multiple modular grid structures engaged in such a manner provide a structure for providing a sprung floor having multiple adjacent planar surfaces. 8. 
A modular grid structure for a sprung floor comprising: providing a horizontal imaginary grid having an X axis and a Y axis; and at least two elongate members parallel to said X axis; and at least two elongate members parallel to said Y axis; and at least two lateral channels comprising an upper surface and a lower surface; and said upper surface being substantially planar; and a lower surface having an inverted U-shaped cross section; and said at least two lateral channels upper surfaces fixedly engaged with planar sprung-floor surface material; and at least two elastomeric lateral channel supports, each having an upper portion and a lower portion; and said upper portion being substantially rectangular; and said lower portion comprising a through-hole; and said at least two channel support upper portions movably engaged with said lower surface of said lateral channels, residing within said inverted U-shaped cross sections; and said at least two lateral channel supports lower portion through-holes, each fixedly engaged with said at least two elongate members parallel to said X axis; and at least two elastomeric joint members, each comprising at least a first through hole and a second through hole; and said first and second through holes being perpendicular with respect to each other; and said elongate members parallel to the X axis engaged through said first through holes in said joint members; and said elongate members parallel to the Y axis engaged through said second through holes in said joint members wherein; elongate members parallel to the X axis and elongate members parallel to the Y axis so assembled form a grid pattern and support said lateral channels that in turn support a planar surface substantially covering said modular grid structure, providing a sprung floor. 9. The modular sprung floor of claim eight further comprising an edge assembly; and said edge assembly comprising: an elongate member parallel to the Y axis; and at least two short members parallel to the X axis; and an elastomeric joint member in combination with an elastomeric lateral channel support member, engaged with said elongate member parallel to the Y axis and with said at least two short members parallel to the X axis; wherein said short members are co-linearly engaged with said elongate members parallel to the X axis providing a supported lateral channel along one edge of a sprung floor. 10. The modular grid structure of claim eight wherein: elastomeric pads are fixedly engaged between said elongate members parallel to the Y axis and a subfloor. 11. The modular grid structure of claim eight wherein: said elongate members are comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 12. The modular grid structure of claim seven wherein: said elongate members are hollow structures comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 13. The modular grid structure of claim eight wherein: said elastomeric lateral channel supports are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 14. The modular grid structure of claim eight wherein: said elastomeric joint members are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 15. The modular grid structure of claim eight wherein: the planar surface substantially covering said modular grid structure is comprised of laminated wood.
In accordance with example embodiments of the present disclosure, a method, system and apparatus for a modular sprung floor is disclosed. An example embodiment is a sprung floor module having interchangeable components. Interchangeable components make up standardized assemblies. An example embodiment has a frame module that may be installed in a series to cover a given area. The frame and edge modules comprise a frame that supports a performance surface. Standardized components include fiber-reinforced, composite linear-structural members combined with elastomeric joints and support members.1. A modular grid structure for a sprung floor comprising: providing a horizontal imaginary grid having an X axis and a Y axis; and at least two elongate members parallel to said X axis; and at least two elongate members parallel to said Y axis; and at least two elastomeric pads, each having a planar surface portion; and a hollow portion open on two sides; and said at least two elastomeric pads fixedly engaged through said hollow portions open on two sides, in an upright orientation, with said elongate members parallel to the X axis; and said at least two elastomeric pads fixedly engaged through said hollow portions open on two sides, in an inverted orientation, with said elongate members parallel to the Y axis; and at least two elastomeric joint members having at least a first through hole and a second through hole; and said first and second through holes being perpendicular with respect to each other; and said elongate members parallel to the X axis fixedly engaged through said first through hole; and said elongate members parallel to the Y axis fixedly engaged through said second through hole in said joint member wherein; said planar portion of said at least two elastomeric pads fixedly engaged, in an inverted orientation, with said elongate members parallel to the Y axis movably engaged with a sub-floor; and said planar portion of said at least two elastomeric pads fixedly engaged, in an upright orientation, with said elongate members parallel to the X axis fixedly engaged with a planar floor surface substantially covering said modular grid structure, providing a sprung floor. 2. The modular grid structure of claim one wherein: said elongate members are comprised of fiber-reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 3. The modular grid structure of claim one wherein: said elongate members are hollow structures comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 4. The modular grid structure of claim one wherein: said elastomeric pads are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 5. The modular grid structure of claim one wherein: said joint members are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 6. The modular grid structure of claim one wherein: the planar surface substantially covering said modular grid structure is comprised of laminated wood. 7. 
The modular grid structure of claim one further comprising: a first modular grid structure comprising: at least four elongate members parallel with said X axis are engaged with said joint members which are in turn engaged with at least four of said elongate members parallel to said Y-axis providing a grid structure; and said at least four elongate members parallel to said Y axis are each engaged, at one end, part way through said second through hole in said at least two elastomeric joint members; and providing a second grid structure; wherein at least four elongate members of said second grid structure, parallel to said Y axis are engaged, at one end, the remainder of the way through said second through-hole in said at least two elastomeric joint members of said first modular grid structure; wherein multiple modular grid structures engaged in such a manner provide a structure for providing a sprung floor having multiple adjacent planar surfaces. 8. A modular grid structure for a sprung floor comprising: providing a horizontal imaginary grid having an X axis and a Y axis; and at least two elongate members parallel to said X axis; and at least two elongate members parallel to said Y axis; and at least two lateral channels comprising an upper surface and a lower surface; and said upper surface being substantially planar; and a lower surface having an inverted U-shaped cross section; and said at least two lateral channels upper surfaces fixedly engaged with planar sprung-floor surface material; and at least two elastomeric lateral channel supports, each having an upper portion and a lower portion; and said upper portion being substantially rectangular; and said lower portion comprising a through-hole; and said at least two channel support upper portions movably engaged with said lower surface of said lateral channels, residing within said inverted U-shaped cross sections; and said at least two lateral channel supports lower portion through-holes, each fixedly engaged with said at least two elongate members parallel to said X axis; and at least two elastomeric joint members, each comprising at least a first through hole and a second through hole; and said first and second through holes being perpendicular with respect to each other; and said elongate members parallel to the X axis engaged through said first through holes in said joint members; and said elongate members parallel to the Y axis engaged through said second through holes in said joint members wherein; elongate members parallel to the X axis and elongate members parallel to the Y axis so assembled form a grid pattern and support said lateral channels that in turn support a planar surface substantially covering said modular grid structure, providing a sprung floor. 9. The modular sprung floor of claim eight further comprising an edge assembly; and said edge assembly comprising: an elongate member parallel to the Y axis; and at least two short members parallel to the X axis; and an elastomeric joint member in combination with an elastomeric lateral channel support member, engaged with said elongate member parallel to the Y axis and with said at least two short members parallel to the X axis; wherein said short members are co-linearly engaged with said elongate members parallel to the X axis providing a supported lateral channel along one edge of a sprung floor. 10. The modular grid structure of claim eight wherein: elastomeric pads are fixedly engaged between said elongate members parallel to the Y axis and a subfloor. 11. 
The modular grid structure of claim eight wherein: said elongate members are comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 12. The modular grid structure of claim seven wherein: said elongate members are hollow structures comprised of fiber reinforced composite material having a bending stiffness between 325 Nmm2 and 535 Nmm2. 13. The modular grid structure of claim eight wherein: said elastomeric lateral channel supports are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 14. The modular grid structure of claim eight wherein: said elastomeric joint members are comprised of castable elastomeric material having a durometer between Shore-40A and Shore-100A. 15. The modular grid structure of claim eight wherein: the planar surface substantially covering said modular grid structure is comprised of laminated wood.
3,600
274,081
15,772,244
3,638
A formwork for manufacturing a concrete structure which is composed of one or several folding portions, provided with one or several hollow portions which in an unfolded and stretched condition have the shape of the contours of portions of the concrete structure to be formed, whereby the formwork can be placed on a bottom in an initial condition and whereby, by pouring or pumping concrete slurry in the hollow portions of the formwork, the formwork can be taken from the initial condition to the stretched condition so as to form the respective portions of the concrete structure.
1. A Formwork for manufacturing a structure, wherein it comprises one or several flexible folding portions in the shape of a fabric, membrane or fleece, whereby the formwork is foldable and is provided with one or several hollow and/or bag-like portions in which a filler can be provided and which, in an unfolded and stretched condition, have the shape of the contours of portions of the structure to be formed, and whereby the formwork can be placed on a bottom in an initial condition, which is a folded condition or an either or not partially or entirely unfolded condition, and whereby by pouring or pumping filler in the hollow and/or bag-like portions of the formwork, the formwork is gradually taken from the initial condition to the stretched condition so as to form the respective portions of the concrete structure. 2. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork in the unfolded and stretched condition have the shape of a vertical column of the structure to be formed. 3. The formwork according to claim 2, wherein several hollow and/or bag-like portions of the formwork in the unfolded and stretched condition have the shape of a vertical column of the concrete structure to be formed, whereby the respective hollow and/or bag-like portions are adjacent to one another so as to form a vertical wall of the structure. 4. The formwork according to claim 2, wherein the formwork comprises one or several flexible folding portions in the shape of a fabric, membrane or fleece which, in an unfolded condition of the formwork form a flat portion of the formwork which extends between the portions of the formwork forming the columns of the structure and whereby, by pouring a filler on the flat portions, a top plate of the structure can be formed. 5. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork in an unfolded and stretched condition, have the shape of an arched span of the structure to be formed. 6. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork, in an unfolded and stretched condition, have the shape of a base of the structure to be formed. 7. The formwork according to claim 6, wherein one or several hollow and/or bag-like portions of the formwork, in an unfolded and stretched condition, have the shape of a foot, a base plate or a beam. 8. The formwork system for forming a formwork according to claim 1, for manufacturing a structure having a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions, wherein the formwork system comprises one or several types of flexible and foldable system portions whereby every system portion of a certain type of system portions always has the same shape corresponding to a specific structure portion to be achieved, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another. 9. The formwork system according to claim 8, wherein the structure to be formed contains a regular pattern of columns which are mutually connected by means of connecting portions and whereby an above-mentioned type of flexible and foldable system portion corresponds to a vertical section through said column and one or several accompanying connecting portions. 10. 
A method for manufacturing a structure, wherein it consists in: assembling a formwork comprising one or several flexible folding portions in the shape of a fabric, membrane or fleece, whereby the foldable formwork is provided with one or several hollow and/or bag-like portions in which filler can be provided and which, in an unfolded and stretched condition, have the shape of the contours of portions of the structure to be formed; placing the formwork on a bottom in an initial condition, which is a folded condition or an either or not partially or entirely unfolded condition; pouring or pumping filler in the cavities of bag-like portions of the formwork whereby the formwork is gradually taken from the initial condition to the stretched condition so as to form the respective portions of the structure; and, letting the filler cure. 11. The method according to claim 10, wherein the formwork unfolds and is erected from the bottom while filler is being poured or pumped in the hollow or bag-like portions of the formwork. 12. The method according to claim 10, wherein the formwork is provided with one or several hollow and/or bag-like portions whereby by pouring or pumping the filler in these hollow and/or bag-like portions columns of the concrete structure are formed. 13. The method according to claim 12, wherein the formwork between the hollow and/or bag-like portions forming columns, is provided with flat portions formed by flexible portions consisting of a fabric, membrane or fleece and whereby, as soon as the filler has cured in the hollow and/or bag-like portions forming columns, in an additional step of the method, filler is provided on the flat portions so as to form a top plate which is supported by the columns. 14. The method according to claim 10, wherein the foldable formwork is provided with one or several hollow and/or bag-like portions, whereby by pouring or pumping the filler in these hollow and/or bag-like portions, corresponding arched spans of the structure are formed. 15. The method according to claim 10, wherein the structure to be formed shows a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions and whereby the formwork is at least partially assembled, on the building site where the structure is to be formed by means of system portions of a formwork system having a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions, wherein the formwork system comprises one or several types of flexible and foldable system portions whereby every system portion of a certain type of system portions always has the same shape corresponding to a specific structure portion to be achieved, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another, and whereby different system portions are coupled to one another in accordance with the pattern to be formed of the structure. 16. The method according to claim 10, wherein, while pouring or pumping, a filler is applied or several fillers are applied, whereby such a filler is one of the following or a combination thereof: a concrete slurry; sand; Argex grains; air; PU foam; PS granules; and water. 17. The method according to claim 10, wherein the formwork is filled in a single step. 18. The method according to claim 10, wherein the formwork is filled in several steps. 19. 
The method according to claim 10, wherein the structure to be formed shows a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions and whereby the formwork is at least partially assembled, on the building site where the structure is to be formed by means of system portions of a formwork system wherein the structure to be formed contains a regular pattern of columns which are mutually connected by means of connecting portions and whereby an above-mentioned type of flexible and foldable system portion corresponds to a vertical section through said column and one or several accompanying connecting portions, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another, and whereby different system portions are coupled to one another in accordance with the pattern to be formed of the structure.
A formwork for manufacturing a concrete structure which is composed of one or several folding portions, provided with one or several hollow portions which in an unfolded and stretched condition have the shape of the contours of portions of the concrete structure to be formed, whereby the formwork can be placed on a bottom in an initial condition and whereby, by pouring or pumping concrete slurry in the hollow portions of the formwork, the formwork can be taken from the initial condition to the stretched condition so as to form the respective portions of the concrete structure.1. A Formwork for manufacturing a structure, wherein it comprises one or several flexible folding portions in the shape of a fabric, membrane or fleece, whereby the formwork is foldable and is provided with one or several hollow and/or bag-like portions in which a filler can be provided and which, in an unfolded and stretched condition, have the shape of the contours of portions of the structure to be formed, and whereby the formwork can be placed on a bottom in an initial condition, which is a folded condition or an either or not partially or entirely unfolded condition, and whereby by pouring or pumping filler in the hollow and/or bag-like portions of the formwork, the formwork is gradually taken from the initial condition to the stretched condition so as to form the respective portions of the concrete structure. 2. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork in the unfolded and stretched condition have the shape of a vertical column of the structure to be formed. 3. The formwork according to claim 2, wherein several hollow and/or bag-like portions of the formwork in the unfolded and stretched condition have the shape of a vertical column of the concrete structure to be formed, whereby the respective hollow and/or bag-like portions are adjacent to one another so as to form a vertical wall of the structure. 4. The formwork according to claim 2, wherein the formwork comprises one or several flexible folding portions in the shape of a fabric, membrane or fleece which, in an unfolded condition of the formwork form a flat portion of the formwork which extends between the portions of the formwork forming the columns of the structure and whereby, by pouring a filler on the flat portions, a top plate of the structure can be formed. 5. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork in an unfolded and stretched condition, have the shape of an arched span of the structure to be formed. 6. The formwork according to claim 1, wherein one or several hollow and/or bag-like portions of the formwork, in an unfolded and stretched condition, have the shape of a base of the structure to be formed. 7. The formwork according to claim 6, wherein one or several hollow and/or bag-like portions of the formwork, in an unfolded and stretched condition, have the shape of a foot, a base plate or a beam. 8. 
The formwork system for forming a formwork according to claim 1, for manufacturing a structure having a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions, wherein the formwork system comprises one or several types of flexible and foldable system portions whereby every system portion of a certain type of system portions always has the same shape corresponding to a specific structure portion to be achieved, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another. 9. The formwork system according to claim 8, wherein the structure to be formed contains a regular pattern of columns which are mutually connected by means of connecting portions and whereby an above-mentioned type of flexible and foldable system portion corresponds to a vertical section through said column and one or several accompanying connecting portions. 10. A method for manufacturing a structure, wherein it consists in: assembling a formwork comprising one or several flexible folding portions in the shape of a fabric, membrane or fleece, whereby the foldable formwork is provided with one or several hollow and/or bag-like portions in which filler can be provided and which, in an unfolded and stretched condition, have the shape of the contours of portions of the structure to be formed; placing the formwork on a bottom in an initial condition, which is a folded condition or an either or not partially or entirely unfolded condition; pouring or pumping filler in the cavities of bag-like portions of the formwork whereby the formwork is gradually taken from the initial condition to the stretched condition so as to form the respective portions of the structure; and, letting the filler cure. 11. The method according to claim 10, wherein the formwork unfolds and is erected from the bottom while filler is being poured or pumped in the hollow or bag-like portions of the formwork. 12. The method according to claim 10, wherein the formwork is provided with one or several hollow and/or bag-like portions whereby by pouring or pumping the filler in these hollow and/or bag-like portions columns of the concrete structure are formed. 13. The method according to claim 12, wherein the formwork between the hollow and/or bag-like portions forming columns, is provided with flat portions formed by flexible portions consisting of a fabric, membrane or fleece and whereby, as soon as the filler has cured in the hollow and/or bag-like portions forming columns, in an additional step of the method, filler is provided on the flat portions so as to form a top plate which is supported by the columns. 14. The method according to claim 10, wherein the foldable formwork is provided with one or several hollow and/or bag-like portions, whereby by pouring or pumping the filler in these hollow and/or bag-like portions, corresponding arched spans of the structure are formed. 15. 
The method according to claim 10, wherein the structure to be formed shows a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions and whereby the formwork is at least partially assembled, on the building site where the structure is to be formed by means of system portions of a formwork system having a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions, wherein the formwork system comprises one or several types of flexible and foldable system portions whereby every system portion of a certain type of system portions always has the same shape corresponding to a specific structure portion to be achieved, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another, and whereby different system portions are coupled to one another in accordance with the pattern to be formed of the structure. 16. The method according to claim 10, wherein, while pouring or pumping, a filler is applied or several fillers are applied, whereby such a filler is one of the following or a combination thereof: a concrete slurry; sand; Argex grains; air; PU foam; PS granules; and water. 17. The method according to claim 10, wherein the formwork is filled in a single step. 18. The method according to claim 10, wherein the formwork is filled in several steps. 19. The method according to claim 10, wherein the structure to be formed shows a pattern with repeatedly recurring structure portions or with symmetrically placed structure portions and whereby the formwork is at least partially assembled, on the building site where the structure is to be formed by means of system portions of a formwork system wherein the structure to be formed contains a regular pattern of columns which are mutually connected by means of connecting portions and whereby an above-mentioned type of flexible and foldable system portion corresponds to a vertical section through said column and one or several accompanying connecting portions, and whereby different system portions corresponding to the pattern of the structure to be achieved can be coupled to one another, and whereby different system portions are coupled to one another in accordance with the pattern to be formed of the structure.
3,600
274,082
15,772,136
3,638
A bookbinding device includes a control unit for calculating set parameter values for a clamper and a processing unit on the basis of the thickness of a book body to be bound, and initially setting the clamper and the processing unit in accordance with the set parameter values. The control unit includes an input unit for receiving an input of a reference parameter value for two or more different book body thicknesses; a function generation unit for generating, on the basis of the reference parameter value, a function for calculating a set parameter value that has been adjusted in accordance with the book body thickness; a parameter calculation unit for calculating, on the basis of the book body thickness, the set adjusted parameter value by using the function; and an initial setting unit for initially setting the clamper and the processing unit in accordance with the set adjusted parameter value.
1. A book binding device comprising: at least one clamper movable along a predetermined conveying path, the clamper having one or more parameters adjustable depending on the thickness of a book block; a series of processing units arranged along the conveying path to carry out bookbinding for the book block gripped by the clamper, each of the processing units having one or more parameters adjustable depending on the thickness of the book block; a control unit operatively connected to the clamper and processing units to calculate set values of the parameters based on the information about the thickness of a book block to be bound and perform initial settings of the clamper and processing units according to the set values of the parameters, characterized in that the control unit comprises: an input section for receiving inputs of reference values of the parameters for two or more different thicknesses of the book block; a function generation section generating functions for calculating adjusted set values of the parameters depending on the thickness of the book block based on the reference values inputted to the input section; a parameter calculation section calculating the adjusted set values of the parameters using the functions based on the information about the thickness of a book block to be bound; and an initial setting section performing initial settings of the clamper and processing units according to the adjusted set values of the parameters. 2. The book binding device according to claim 1, wherein the functions generated by the function generation section are linear functions. 3. The book binding device according to claim 1, wherein the control unit includes a memory storing a plurality of sets of the reference values of the parameters inputted to the input section, wherein the input section receives an input of choice of one set from the plurality of sets of the reference values of the parameters stored in the memory, and the function generation section generates the functions using the one set of the reference values of the parameters. 4. The book binding device according to claim 1, wherein the series of processing units include a glue application unit and a cover attachment unit, wherein the glue application unit comprises: a glue tank; at least one glue application roller applying glue to a spine of the book block; a wiper provided for the glue application roller to adjust the thickness of the glue on the glue application roller; and a scrape roller wiping out extra glue of the book block, wherein the parameters of the glue application unit include the timing of the start and end of the glue application by the glue application roller to the book block, the thickness of the glue on the glue application roller, the height of the glue application roller, and the height of the scrape roller, wherein the cover attachment unit comprises: a bottom plate; and a pair of nip plates arranged on the bottom plate, wherein the parameters of the cover attachment unit include the gap distance between the pair of nip plates when the pair of nip plates takes a closed position, and the height of the bottom plate and pair of nip plates when the bottom plate and pair of nip plates attach a cover to the book block. 5. 
The book binding device according to claim 4, wherein the clamper comprises a pair of clamp plates, and the parameters of the clamper include the gap distance between the pair of clamp plates when the pair of clamp plates takes an open position, and the travelling speed of the clamper, wherein the series of processing units further include a milling unit, the milling unit comprising: a milling cutter; and a pair of guide plates, wherein the parameters of the milling unit include the rotating velocity of the milling cutter and the gap distance between the pair of guide plates, and the parameters of the cover attachment unit further include a time from when the book block arrives at a cover attachment position until when the bottom plate and pair of nip plates raises at a height for attachment of the cover to the book block, and the duration of nipping the book block by the pair of nip plates. 6. The book binding device according to claim 5, wherein the cover attachment unit comprises a cover supplying unit, the cover supplying unit comprising: a shelf on which a stack of covers are placed; and a cover conveying mechanism conveying the cover from the shelf onto the bottom plate and pair of nip plates of the cover attachment unit, the cover conveying mechanism having a pair of scoring roller pairs scoring at predetermined positions on the cover, wherein the parameters of the cover supplying unit include the position of each of the scoring roller pairs.
3,600
274,083
15,966,721
3,638
A retractable cantilevered watercraft whip mooring system has a retractable canopy which covers the watercraft while it is moored to a dock. The system includes a pair of whips secured to the dock by a pair of mounts, each whip including a tie-down line, so that the watercraft is held off the dock and protected from rubbing or bumping against it. The retractable canopy slides over and is secured by the pair of whips and includes a support batten. The canopy is securable relative to an exterior side of the watercraft, enabling the watercraft to be protected during severe weather conditions while it is attached to the dock.
1. A cantilevered watercraft canopy system for securing a watercraft to a dock, said cantilevered watercraft canopy system comprising: a. a first whip secured to said dock by a first dock mount, said first whip being affixed to said retractable canopy along said first edge of said retractable canopy, said first whip including a first tie down line, said first whip secured to said watercraft at a first watercraft attachment point; b. a second whip secured to said dock by a second dock mount, said second whip being affixed to said retractable canopy along said second edge of said retractable canopy, said second whip including a second tie down line, said second whip secured to said watercraft at a second watercraft attachment point; c. a first retractable canopy tie-down line, said first retractable canopy tie-down line further securing said retractable canopy to said dock; d. a retractable canopy having a first and a second edge, said first whip being affixed to said retractable canopy along said first edge, said second whip being affixed to said retractable canopy along said second edge; and e. a second retractable canopy tie-down line, said second retractable canopy tie down line further securing said retractable canopy to said watercraft; 2. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, whereby said watercraft is secured to said dock by said cantilevered watercraft canopy system to protect said watercraft from rubbing or bumping against the dock during severe weather conditions while said watercraft is attached to said dock. 3. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, wherein said retractable canopy is made of a flexible material, and said first and second whips are made of fiber reinforced plastic. 4. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, wherein said retractable canopy is stretchable, stretching over said first and second whips. 5. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, further comprising at least one support baten. 6. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, wherein said retractable canopy is secured directly to said dock. 7. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 1, further comprising a third whip secured to said dock by a third dock mount, said third whip being affixed to said retractable canopy, said third whip including a third tie down line, said third whip secured to said watercraft at a third watercraft attachment point. 8. A cantilevered watercraft canopy system for securing a watercraft to a dock, said cantilevered watercraft canopy system comprising: a. a first and a second whip secured to said dock, said first whip secured to said dock by a first dock mount, said second whip secured to said dock by a second dock mount, said first whip including a first tie down line, said first whip secured to said watercraft at a first watercraft attachment point, said second whip including a second tie down line, said second whip secured to said watercraft at a second watercraft attachment point; b. a first retractable canopy tie-down line, said first retractable canopy tie-down line further securing said retractable canopy to said dock, and a second retractable canopy tie-down line, said second retractable canopy tie down line further securing said retractable canopy to said watercraft; c. 
a retractable canopy having a first and a second edge, said first whip being affixed to said retractable canopy along said first edge, said second whip being affixed to said retractable canopy along said second edge; and d. at least one support baten extending along a width of said retractable canopy; 9. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, further comprising a third whip secured to said dock by a third dock mount, said third whip being affixed to said retractable canopy, said third whip including a third tie down line, said third whip secured to said watercraft at a third watercraft attachment point. 10. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, wherein said retractable canopy is made of a flexible material, and said first and second whips are made of fiber reinforced plastic. 11. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, wherein said canopy is retractable, stretching over said plurality of whips and attached to boat cleats on the opposite side of said watercraft from said dock. 12. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, wherein said retractable canopy is secured directly to said dock. 13. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, wherein said retractable canopy is stretchable, stretching over said first and second whips. 14. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 8, wherein said retractable canopy is secured directly to said dock. 15. A cantilevered watercraft canopy system for securing a watercraft to a dock, said cantilevered watercraft canopy system comprising: a. a first and a second whip secured to said dock, said first whip secured to said dock by a first dock mount, said second whip secured to said dock by a second dock mount, said first whip including a first tie down line, said first whip secured to said watercraft at a first watercraft attachment point, said second whip including a second tie down line, said second whip secured to said watercraft at a second watercraft attachment point; b. a first retractable canopy tie-down line, said first retractable canopy tie-down line further securing said retractable canopy to said dock, and a second retractable canopy tie-down line, said second retractable canopy tie down line further securing said retractable canopy to said watercraft; c. a retractable canopy having a first and a second edge, said first whip being affixed to said retractable canopy along said first edge, said second whip being affixed to said retractable canopy along said second edge; and d. at least one support baten extending along a width of said retractable canopy; 16. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 15, further comprising a third whip secured to said dock by a third dock mount, said third whip being affixed to said retractable canopy, said third whip including a third tie down line, said third whip secured to said watercraft at a third watercraft attachment point. 17. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 15, wherein said retractable canopy is made of a flexible material, and said first and second whips are made of fiber reinforced plastic. 18. 
The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 15, wherein said retractable canopy is stretchable, stretching over said first and second whips and attachable to boat cleats on the opposite side of said watercraft from said dock. 19. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 15, wherein said retractable canopy is secured directly to said dock. 20. The cantilevered watercraft canopy system for securing a watercraft to a dock of claim 15, wherein said retractable canopy is secured directly to said dock.
3,600
274,084
15,966,855
3,638
A threaded adjustable-height insert may be installed in a bore of a sandwich panel, such that the insert may be configured to transfer a load to the sandwich panel. The threaded adjustable-height insert may include a first insert part and a second insert part that may be selectively operatively positioned with respect to each other. The overall height of the threaded adjustable-height insert may be adjusted by longitudinally sliding the second insert part with respect to the first insert part and rotating the second insert part with respect to the first insert part. Presently disclosed threaded adjustable-height inserts may be configured for flush installation in a sandwich panel. Methods of installing such threaded adjustable-height inserts and adjusting the height of the same are also disclosed.
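Because the two insert parts both slide and thread together, the installed height can be thought of as a coarse term set by how far the second neck is slid over the first, plus a fine term set by how many turns the second part is rotated against the thread pitch (the claims that follow characterize sliding as coarse and rotation as fine adjustment). The sketch below models that relationship under assumed dimensions; none of the numbers come from the application.

```python
# Illustrative model of the adjustable-height insert, assuming the slide sets
# a coarse engagement depth and each full rotation changes the height by one
# thread pitch (all dimensions are assumed, not taken from the application).

from dataclasses import dataclass

@dataclass
class AdjustableInsert:
    max_height_mm: float = 30.0   # flanges fully apart (assumed)
    min_height_mm: float = 12.0   # flanges fully closed (assumed)
    thread_pitch_mm: float = 1.0  # height change per full rotation (assumed)

    def overall_height(self, slide_mm: float, turns: float) -> float:
        """Perpendicular distance between the outer flange faces.

        slide_mm: coarse travel of the second part toward the first part.
        turns:    additional rotations tightening the threaded engagement.
        """
        h = self.max_height_mm - slide_mm - turns * self.thread_pitch_mm
        # Clamp to the mechanically possible range.
        return max(self.min_height_mm, min(self.max_height_mm, h))

if __name__ == "__main__":
    insert = AdjustableInsert()
    # Slide 14 mm for coarse adjustment, then 2.5 turns of fine adjustment.
    print(insert.overall_height(slide_mm=14.0, turns=2.5))  # -> 13.5
```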
1. A method of installing a threaded adjustable-height insert into a bore formed in a sandwich panel, the method comprising: installing a threaded adjustable-height insert into the bore, wherein the threaded adjustable-height insert comprises: a first insert part comprising: a first flange having a first upper surface and a first lower surface; a first neck extending from the first upper surface of the first flange to a first neck end region; and a first hole extending at least through the first neck and defined at least partially by a first inner surface of the first neck, wherein the first neck comprises a first outer surface opposite the first inner surface; and a second insert part comprising: a second flange having a second upper surface and a second lower surface; a second neck extending from the second lower surface of the second flange to a second neck end region; and a second hole extending through the second neck and the second flange, wherein the second hole is partially defined by a second inner surface of the second neck, and wherein the second neck comprises a second outer surface opposite the second inner surface; wherein the second insert part is configured to be selectively operatively positioned with respect to the first insert part such that the second neck has a threaded engagement with at least a portion of the first neck, wherein the threaded adjustable-height insert is configured to have a selectively adjustable overall height such that moving the second insert part with respect to the first insert part such that the second flange is moved towards the first flange reduces the overall height of the threaded adjustable-height insert, wherein the overall height of the threaded adjustable-height insert is defined as a perpendicular distance between the second upper surface of the second flange and the first lower surface of the first flange, wherein the first hole and the second hole are at least substantially concentric when the second insert part is operatively positioned with respect to the first insert part, and wherein the first insert part and the second insert part are configured to both longitudinally slidably translate relative to each other and rotate relative to each other when the second insert part is operatively positioned with respect to the first insert part; and adjusting a height of the threaded adjustable-height insert until the second upper surface of the second flange is at least substantially flush with an outer surface of the sandwich panel, wherein the adjusting the height comprises: longitudinally sliding one of the second insert part and the first insert part with respect to the other of the second insert part and the first insert part; and rotating one of the first insert part and the second insert part with respect to the other of the first insert part and the second insert part. 2. 
The method according to claim 1, further comprising forming a plurality of bores in the sandwich panel, the sandwich panel having a first skin having a first inner surface and an opposite first outer surface, a second skin opposite the first skin, the second skin having a second inner surface and an opposite second outer surface, the first outer surface of the first skin and the second outer surface of the second skin facing away from one another, and a core sandwiched between the first inner surface of the first skin and the second inner surface of the second skin, wherein the forming the plurality of bores comprises forming the plurality of bores such that each bore extends through at least one of the first skin and the second skin and into the core, and wherein the installing the threaded adjustable-height insert comprises installing a plurality of threaded adjustable-height inserts, each respective threaded adjustable-height insert of the plurality of threaded adjustable-height inserts being installed into a respective bore of the plurality of bores. 3. The method according to claim 1, wherein the installing the threaded adjustable-height insert into the bore comprises positioning the first flange adjacent a second skin of the sandwich panel, such that the first neck extends into the bore towards a first skin of the sandwich panel, and positioning the second insert part such that the second neck is positioned between the first flange of the first insert part and the second flange of the second insert part. 4. The method according to claim 3, wherein the installing the threaded adjustable-height insert is performed before the adjusting the height of the threaded adjustable-height insert. 5. The method according to claim 3, further comprising operatively positioning the second insert part with respect to the first insert part such that the second inner surface of the second neck is at least partially positioned on the first outer surface of the first neck and such that the first hole and the second hole are substantially concentric, wherein the operatively positioning is performed before the installing the threaded adjustable-height insert in the bore of the sandwich panel. 6. The method according to claim 1, wherein the installing the threaded adjustable-height insert into the bore comprises first inserting the first insert part into the bore and then inserting the second insert part into the bore such that the second inner surface of the second neck is at least partially positioned on the first outer surface of the first neck and such that the first hole and the second hole are substantially concentric. 7. The method according to claim 1, wherein the adjusting the height of the threaded adjustable-height insert comprises sliding a thread engagement clip of the first insert part along a longitudinally-extending slot formed within a second threaded portion of the second inner surface of the second neck of the second insert part. 8. The method according to claim 7, wherein the adjusting the height of the threaded adjustable-height insert comprises rotating the second insert part with respect to the first insert part such that the thread engagement clip of the first insert part engages the second threaded portion within the second neck of the second insert part. 9. The method according to claim 8, wherein the thread engagement clip extends radially outwardly from the first outer surface of the first neck of the first insert part. 10. 
The method according to claim 9, wherein the first neck of the first insert part comprises a longitudinally-extending channel extending from the first neck end region towards the first flange, wherein the longitudinally-extending channel is formed radially outward from the first inner surface of the first neck towards the first outer surface of the first neck, wherein the longitudinally-extending channel is configured to engage a tool that is configured to substantially prevent rotation of the first insert part as the second insert part is rotated with respect to the first insert part 11. The method according to claim 10, wherein the longitudinally-extending channel comprises a plurality of longitudinally-extending channels, wherein the thread engagement clip comprises a plurality of thread engagement clips, and wherein each respective longitudinally-extending channel of the plurality of longitudinally-extending channels is positioned to be staggered with respect to each respective thread engagement clip of the plurality of thread engagement clips. 12. The method according to claim 11, further comprising: engaging the tool with the plurality of longitudinally-extending channels; rotating the second insert part with respect to the first insert part while the tool is engaged with the longitudinally-extending channels of the first insert part; and preventing rotation of the first insert part with respect to the bore, via the tool, during the rotating the second insert part with respect to the first insert part. 13. The method according to claim 12, wherein the engaging the tool with the plurality of longitudinally-extending channels comprises inserting the tool through the second hole of the second insert part and the first hole of the first insert part to access the plurality of longitudinally-extending channels when the second insert part is operatively positioned with respect to the first insert part. 14. The method according to claim 1, wherein the adjusting the height of the threaded adjustable-height insert comprises radially expanding one or more radially-expandable tabs of the second insert part such that one or more thread engagement clips of the second insert part are longitudinally passed over one or more threads of a first threaded portion on the first outer surface of the first neck of the first insert part. 15. The method according to claim 14, wherein the adjusting the height of the threaded adjustable-height insert comprises rotating the second insert part with respect to the first insert part such that the one or more thread engagement clips of the second insert part engage the first threaded portion on the first outer surface of the first neck of the first insert part. 16. The method according to claim 1, wherein the bore comprises a blind bore that extends only partially into a thickness of a core of the sandwich panel, wherein the method further comprises forming the blind bore in the sandwich panel, and wherein the installing the threaded adjustable-height insert into the blind bore comprises positioning the first flange adjacent a base of the blind bore, such that the first neck extends into the blind bore from within the blind bore. 17. 
The method according to claim 16, further comprising preventing rotation of the first insert part with respect to the blind bore while the second insert part is operatively positioned with respect to the first insert part and rotated with respect to the first insert part, wherein the preventing rotation of the first insert part with respect to the blind bore comprises engaging the first insert part with a tool through the first hole of the first insert part and the second hole of the second insert part. 18. The method according to claim 1, wherein the longitudinally sliding the second insert part with respect to the first insert part coarsely adjusts the overall height of the threaded adjustable-height insert, and wherein the rotating the first insert part with respect to the second insert part finely adjusts the overall height of the threaded adjustable-height insert. 19. The method according to claim 1, further comprising inserting at least one secondary object within the first hole and the second hole of the threaded adjustable-height insert, the at least one secondary object being configured to transfer a localized load to the sandwich panel via the threaded adjustable-height insert.
3,600
274,085
15,966,383
3,638
One embodiment is directed to a personal robotic system, comprising: an electromechanical mobile base defining a cross-sectional envelope when viewed in a plane substantially parallel to a plane of a floor upon which the mobile base is operated; a torso assembly movably coupled to the mobile base; a head assembly movably coupled to the torso; a releasable bin-capturing assembly movably coupled to the torso; and a controller operatively coupled to the mobile base, torso assembly, head assembly, and bin-capturing assembly, and configured to capture a bin with the bin-capturing assembly and move the torso assembly relative to the mobile base so that the captured bin fits as closely as possible within the cross-sectional envelope of the mobile base.
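The controller's stated objective of retracting a captured bin so it fits as closely as possible within the cross-sectional envelope of the mobile base is essentially a 2-D containment check followed by a positioning move. The sketch below shows one way such a check could be expressed, treating both the base envelope and the bin footprint as axis-aligned rectangles; the dimensions and the rectangle simplification are assumptions for illustration only.

```python
# Hedged sketch: decide how far to retract the torso so a captured bin's
# footprint sits inside the mobile base's cross-sectional envelope.
# Rectangular footprints and all dimensions are simplifying assumptions.

from dataclasses import dataclass

@dataclass
class Footprint:
    width_m: float
    depth_m: float

def fits_within(bin_fp: Footprint, base_fp: Footprint, margin_m: float = 0.0) -> bool:
    """True if the bin footprint (plus margin) fits inside the base envelope."""
    return (bin_fp.width_m + 2 * margin_m <= base_fp.width_m and
            bin_fp.depth_m + 2 * margin_m <= base_fp.depth_m)

def torso_retraction(bin_fp: Footprint, base_fp: Footprint,
                     bin_forward_offset_m: float) -> float:
    """How far to translate the torso rearward so the bin's leading edge
    ends up flush with (or inside) the front of the base envelope."""
    overhang = bin_forward_offset_m + bin_fp.depth_m / 2 - base_fp.depth_m / 2
    return max(0.0, overhang)

if __name__ == "__main__":
    base = Footprint(width_m=0.60, depth_m=0.70)
    bin_ = Footprint(width_m=0.40, depth_m=0.30)
    print(fits_within(bin_, base, margin_m=0.02))                   # True
    print(torso_retraction(bin_, base, bin_forward_offset_m=0.45))  # 0.25
```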
1. A personal robotic system, comprising: a. an electromechanical mobile base defining a cross-sectional envelope when viewed in a plane substantially parallel to a plane of a floor upon which the mobile base is operated; b. a torso assembly movably coupled to the mobile base; c. a head assembly movably coupled to the torso; d. a releasable bin-capturing assembly movably coupled to the torso; and e. a controller operatively coupled to the mobile base, torso assembly, head assembly, and bin-capturing assembly, and configured to capture a bin with the bin-capturing assembly and move the torso assembly relative to the mobile base so that the captured bin fits as closely as possible within the cross-sectional envelope of the mobile base. 2. The system of claim 1, further comprising a sensor operatively coupled to the controller and configured to sense one or more factors regarding an environment in which the mobile base is navigated. 3. The system of claim 2, wherein the sensor comprises a sonar sensor. 4. The system of claim 3, wherein the sonar sensor is coupled to the mobile base. 5. The system of claim 2, wherein the sensor comprises a laser range finder. 6. The system of claim 5, wherein the laser rangefinder is configured to scan a forward field of view that is greater than about 90 degrees. 7. The system of claim 6, wherein the laser rangefinder is configured to scan a forward field of view that is about 180 degrees. 8. The system of claim 5, wherein the sonar sensor is coupled to the mobile base. 9. The system of claim 2, wherein the sensor comprises an image capture device. 10. The system of claim 9, wherein the image capture device comprises a 3-D camera. 11. The system of claim 9, wherein the image capture device is coupled to the head assembly. 12. The system of claim 9, wherein the image capture device is coupled to the mobile base. 13. The system of claim 9, wherein the image capture device is coupled to the releasable bin-capturing assembly. 14. The system of claim 9, wherein the image capture device is coupled to the torso assembly. 15. The system of claim 1, wherein the mobile base comprises a differential drive configuration having two driven wheels. 16. The system of claim 15, wherein each of the driven wheels is operatively coupled to an encoder that is operatively coupled to the controller and configured to provide the controller with input information regarding a driven wheel position. 17. The system of claim 16, wherein the controller is configured to operate the driven wheels to navigate the mobile base based at least in part upon the input information from the driven wheel encoders. 18. The system of claim 2, wherein the controller is configured to operate the mobile base based at least in part upon signals from the sensor. 19. The system of claim 1, wherein the torso assembly is movably coupled to the mobile base such that the torso may be controllably elevated and lowered along an axis substantially perpendicular to the plane of the floor. 20. The system of claim 1, wherein torso assembly is movably coupled to the mobile base such that the torso may be controllably moved along an axis substantially parallel to the plane of the floor. 21. The system of claim 1, wherein the head assembly comprises an image capture device. 22. The system of claim 21, wherein the image capture device comprises a 3-D camera. 23. 
The system of claim 21, wherein the image capture device is movably coupled to the head assembly such that it may be controllably panned or tilted relative to the head assembly. 24. The system of claim 1, wherein the bin-capturing assembly comprises a under-ledge capturing surface configured to be interfaced with a ledge geometry feature of the bin. 25. The system of claim 24, wherein the capturing surface comprises a rail. 26. The system of claim 24 wherein the rail and ledge geometry feature of the bin are substantially straight. 27. The system of claim 1, further comprising a wireless transceiver configured to enable a teleoperating operator to remotely connect with the controller from a remote workstation, and to operate at least the mobile base. 28. The system of claim 27, wherein the controller is configured to navigate, observe the environment, and engage with one or more bins based at least in part upon teleoperation signals through the wireless transceiver from the teleoperating operator. 29. The system of claim 9, wherein the controller is configured to use the image capture device to automatically recognize the bin. 30. The system of claim 29, wherein one or more tags are coupled to the bin, the tags being configured to be recognizable and readable by the controller using the image capture device. 31. The system of claim 30, wherein at least one of the one of more tags is configured to assist the controller in determining the identification of the bin. 32. The system of claim 30, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the bin. 33. The system of claim 30, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 34. The system of claim 33, wherein the one or more tags are passive. 35. The system of claim 33, wherein the one or more tags are actively-powered. 36. The system of claim 9, wherein the controller is configured to use the image capture device to automatically recognize one or more tags associated with a location in the nearby environment. 37. The system of claim 36, wherein at least one of the one of more tags is configured to assist the controller in determining the identification of the location. 38. The system of claim 36, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the location. 39. The system of claim 36, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 40. The system of claim 39, wherein the one or more tags are passive. 41. The system of claim 39, wherein the one or more tags are actively-powered. 42. The system of claim 9, wherein the controller is configured to use the image capture device to automatically recognize one or more tags associated with an object in the nearby environment. 43. The system of claim 42, wherein at least one of the one of more tags is configured to assist the controller in determining the identification of the object. 44. The system of claim 42, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the object. 45. The system of claim 42, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 46. The system of claim 45, wherein the one or more tags are passive. 47. 
The system of claim 45, wherein the one or more tags are actively-powered. 48. A method for managing bins of physical objects in a human environment, comprising: a. providing a personal robotic system comprising an electromechanical mobile base defining a cross-sectional envelope when viewed in a plane substantially parallel to a plane of a floor upon which the mobile base is operated; a torso assembly movably coupled to the mobile base; a head assembly movably coupled to the torso; a releasable bin-capturing assembly movably coupled to the torso; and a controller operatively coupled to the mobile base, torso assembly, head assembly, and bin-capturing assembly; and b. operating the personal robotic system to capture a bin with the bin-capturing assembly and move the torso assembly relative to the mobile base so that the captured bin fits as closely as possible within the cross-sectional envelope of the mobile base. 49. The method of claim 48, further comprising providing a sensor operatively coupled to the controller and configured to sense one or more factors regarding an environment in which the mobile base is navigated. 50. The method of claim 49, wherein the sensor comprises a sonar sensor. 51. The method of claim 50, wherein the sonar sensor is coupled to the mobile base. 52. The method of claim 49, wherein the sensor comprises a laser range finder. 53. The method of claim 52, wherein the laser rangefinder is configured to scan a forward field of view that is greater than about 90 degrees. 54. The method of claim 53, wherein the laser rangefinder is configured to scan a forward field of view that is about 180 degrees. 55. The method of claim 52, wherein the sonar sensor is coupled to the mobile base. 56. The method of claim 49, wherein the sensor comprises an image capture device. 57. The method of claim 56, wherein the image capture device comprises a 3-D camera. 58. The method of claim 56, wherein the image capture device is coupled to the head assembly. 59. The method of claim 56, wherein the image capture device is coupled to the mobile base. 60. The method of claim 56, wherein the image capture device is coupled to the releasable bin-capturing assembly. 61. The method of claim 56, wherein the image capture device is coupled to the torso assembly. 62. The method of claim 48, wherein the mobile base comprises a differential drive configuration having two driven wheels. 63. The method of claim 62, wherein each of the driven wheels is operatively coupled to an encoder that is operatively coupled to the controller and configured to provide the controller with input information regarding a driven wheel position. 64. The method of claim 63, wherein the controller is configured to operate the driven wheels to navigate the mobile base based at least in part upon the input information from the driven wheel encoders. 65. The method of claim 49, wherein the controller is configured to operate the mobile base based at least in part upon signals from the sensor. 66. The method of claim 48, wherein the torso assembly is movably coupled to the mobile base such that the torso may be controllably elevated and lowered along an axis substantially perpendicular to the plane of the floor. 67. The method of claim 48, wherein torso assembly is movably coupled to the mobile base such that the torso may be controllably moved along an axis substantially parallel to the plane of the floor. 68. The method of claim 48, wherein the head assembly comprises an image capture device. 69. 
The method of claim 68, wherein the image capture device comprises a 3-D camera. 70. The method of claim 68, wherein the image capture device is movably coupled to the head assembly such that it may be controllably panned or tilted relative to the head assembly. 71. The method of claim 48, wherein the bin-capturing assembly comprises an under-ledge capturing surface configured to be interfaced with a ledge geometry feature of the bin. 72. The method of claim 71, wherein the capturing surface comprises a rail. 73. The method of claim 71, wherein the rail and ledge geometry feature of the bin are substantially straight. 74. The method of claim 48, further comprising providing a wireless transceiver configured to enable a teleoperating operator to remotely connect with the controller from a remote workstation, and to operate at least the mobile base. 75. The method of claim 74, wherein the controller is configured to navigate, observe the environment, and engage with one or more bins based at least in part upon teleoperation signals through the wireless transceiver from the teleoperating operator. 76. The method of claim 56, wherein the controller is configured to use the image capture device to automatically recognize the bin. 77. The method of claim 76, wherein one or more tags are coupled to the bin, the tags being configured to be recognizable and readable by the controller using the image capture device. 78. The method of claim 77, wherein at least one of the one or more tags is configured to assist the controller in determining the identification of the bin. 79. The method of claim 77, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the bin. 80. The method of claim 77, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 81. The method of claim 80, wherein the one or more tags are passive. 82. The method of claim 80, wherein the one or more tags are actively-powered. 83. The method of claim 56, wherein the controller is configured to use the image capture device to automatically recognize one or more tags associated with a location in the nearby environment. 84. The method of claim 83, wherein at least one of the one or more tags is configured to assist the controller in determining the identification of the location. 85. The method of claim 83, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the location. 86. The method of claim 83, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 87. The method of claim 86, wherein the one or more tags are passive. 88. The method of claim 86, wherein the one or more tags are actively-powered. 89. The method of claim 56, wherein the controller is configured to use the image capture device to automatically recognize one or more tags associated with an object in the nearby environment. 90. The method of claim 89, wherein at least one of the one or more tags is configured to assist the controller in determining the identification of the object. 91. The method of claim 89, wherein at least one of the one or more tags is configured to assist the controller in determining the geometric pose of the object. 92. The method of claim 89, wherein the one or more tags are selected from the group consisting of a QR code, an AR tag, a 2-D barcode, and a 3-D barcode. 93. 
The method of claim 92, wherein the one or more tags are passive. 94. The method of claim 92, wherein the one or more tags are actively-powered.
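Claims 15-17 and 62-64 above describe a differential-drive mobile base whose controller navigates using position input from encoders on the two driven wheels. As an illustrative sketch only, and not the algorithm of the disclosure, the following Python snippet shows a minimal dead-reckoning pose update from encoder deltas; the wheel radius, track width, and encoder resolution are assumed values chosen for the example.

```python
import math

# Assumed (hypothetical) robot geometry -- not taken from the claims above.
WHEEL_RADIUS_M = 0.10      # driven wheel radius
TRACK_WIDTH_M = 0.45       # distance between the two driven wheels
TICKS_PER_REV = 2048       # encoder resolution

def ticks_to_distance(delta_ticks: int) -> float:
    """Convert an encoder tick delta into wheel travel distance in meters."""
    return (delta_ticks / TICKS_PER_REV) * 2.0 * math.pi * WHEEL_RADIUS_M

def update_pose(x: float, y: float, heading: float,
                d_ticks_left: int, d_ticks_right: int):
    """Dead-reckon a new (x, y, heading) pose from left/right encoder deltas."""
    d_left = ticks_to_distance(d_ticks_left)
    d_right = ticks_to_distance(d_ticks_right)
    d_center = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / TRACK_WIDTH_M
    # Integrate translation along the average heading over the interval.
    mid_heading = heading + d_heading / 2.0
    x += d_center * math.cos(mid_heading)
    y += d_center * math.sin(mid_heading)
    heading += d_heading
    return x, y, heading

# Example: the base drives nearly straight with a slight left curve.
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, d_ticks_left=1000, d_ticks_right=1040)
print(pose)
```

In a real controller this estimate would typically be fused with the other recited sensors (sonar, laser rangefinder, image capture device) rather than used alone.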
3,600
274,086
15,966,536
3,638
A retention housing for receiving at least one load transfer member is provided. In some embodiments, the retention housing and load transfer member may be included in a sandwich wall panel or a double wall panel. The load transfer member may transfer loads between first and second concrete elements. The retention housing may include first and second retention members, at least one guide member, and a size indicator for aligning the retention members with respect to each other. The guide member may retain the load transfer member at a predetermined angle. In some embodiments, the size indicator may correspond to the thickness of an insulation layer, such as in a sandwich wall panel. The retention housing may further include at least one depth locating means. A retention housing including first and second retention members may further include means for connecting the first and second retention members, such as in an aligned position.
1. A retention housing for receiving at least one load transfer member, said load transfer member transferring loads between first and second concrete elements, comprising: a first retention member; a second retention member; at least one guide member to retain said load transfer member at a predetermined angle; and at least one of said first and second retention members including a size indicator for aligning said first and second retention members with respect to each other. 2. The retention housing of claim 1 further comprising a depth locating means. 3. The retention housing of claim 2 wherein said depth locating means is a depth locating tab. 4. The retention housing of claim 1 wherein said first and second retention members each include a top lip and said top lip includes said size indicator for aligning said first and second retention members with respect to each other. 5. The retention housing of claim 1 wherein said first and second retention members each include a front surface and said front surface includes said size indicator for aligning said first and second retention members with respect to each other. 6. The retention housing of claim 1 wherein said retention housing includes a top and wherein said guide member is positioned between twenty and seventy degrees from the normal of said top and wherein said predetermined angle is also between twenty and seventy degrees from the normal of said top. 7. The retention housing of claim 6 wherein said angle is between forty-five and sixty degrees from the normal of said top. 8. The retention housing of claim 1 wherein said retention housing is capable of receiving two load transfer members. 9. The retention housing of claim 1 wherein said retention housing further includes at least one removable tab for aligning said first and second retention members with respect to each other. 10. The retention housing of claim 9 wherein said retention members each include a plurality of tabs, a portion of which are removed in said aligned position, and wherein a remaining portion creates a thermal break. 11. A sandwich wall panel comprising: a first concrete layer; a second concrete layer; an insulation layer located between said first concrete layer and said second concrete layer; at least one load transfer member; at least one retention housing receiving said load transfer member comprising: a first retention member; a second retention member; at least one guide member to retain said load transfer member at a predetermined angle; and at least one of said first and second retention members including a size indicator for aligning said first and second retention members with respect to each other. 12. The sandwich wall panel of claim 11 wherein said size indicator for aligning said first and second retention members corresponds to the thickness of said insulation layer. 13. The sandwich wall panel of claim 11 further comprising two load transfer members which are received by said retention housing. 14. The sandwich wall panel of claim 11 wherein said retention housing further comprises at least one depth locating means. 15. The sandwich wall panel of claim 11 wherein said insulation layer receives said retention housing. 16. 
A double wall panel comprising: a first concrete layer; a second concrete layer; an insulation layer located between said first concrete layer and said second concrete layer; an air gap located between said insulation layer and one of said first and second concrete layers; at least one load transfer member; at least one retention housing receiving said load transfer member comprising: a first retention member; a second retention member; at least one guide member to retain said load transfer member at a predetermined angle; and at least one of said first and second retention members including a size indicator for aligning said first and second retention members with respect to each other. 17. The double wall panel of claim 16 wherein said size indicator for aligning said first and second retention members corresponds to the thickness of said insulation layer and said air gap. 18. A retention housing for receiving at least one load transfer member, said load transfer member transferring loads between first and second concrete elements, comprising: a first retention member; a second retention member; at least one guide member to retain said load transfer member at a predetermined angle; at least one of said first and second retention members including a size indicator for aligning said first and second retention members with respect to each other in an aligned position; and means for connecting said first retention member and said second retention member in said aligned position. 19. The retention housing of claim 18 wherein said retention members further include at least one tab which may be removed in said aligned position. 20. The retention housing of claim 18 wherein said retention members each include a plurality of tabs, a portion of which are removed in said aligned position, and wherein a remaining portion creates a thermal break. 21. The retention housing of claim 18 wherein said first and second retention members are identical. 22. The retention housing of claim 18 wherein said retention members are adjustable. 23. The retention housing of claim 18 wherein said size indicator for aligning said first and second retention members with respect to each other corresponds to a plurality of sizes of said retention housing. 24. The retention housing of claim 18 wherein at least one of said first and second retention members includes a projection and at least one of said first and second retention members includes a slot and wherein said slot receives said projection to connect said first and second retention members in said aligned position. 25. The retention housing of claim 18 wherein at least one of said first and second retention members includes at least one of a top lip and a bottom lip. 26. The retention housing of claim 25 wherein at least one of said first and second retention members includes a bottom lip and said bottom lip is tapered.
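Claims 6 and 7 place the guide member, and hence the load transfer member, at a predetermined angle of roughly twenty to seventy degrees from the normal of the housing top, and claim 12 ties the size indicator to the thickness of the insulation layer. The following sketch illustrates the underlying geometry only; the insulation thickness and angles are assumed example values, not dimensions from the disclosure. Tilting a member by an angle theta from the panel normal lengthens its crossing of an insulation layer of thickness t from t to t / cos(theta).

```python
import math

def span_through_insulation(insulation_thickness_in: float,
                            angle_from_normal_deg: float) -> float:
    """Length of the inclined portion of a load transfer member crossing an
    insulation layer, measured along the member.

    A member normal to the panel faces crosses a thickness t in length t;
    tilting it by theta from the normal lengthens that crossing to t / cos(theta).
    """
    theta = math.radians(angle_from_normal_deg)
    return insulation_thickness_in / math.cos(theta)

# Assumed example: 3 in of insulation, angles spanning the claimed range.
for angle in (20, 45, 60, 70):
    print(angle, "deg ->", round(span_through_insulation(3.0, angle), 2), "in")
```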
3,600
274,087
15,964,473
3,638
An anchor assembly for anchoring refractory materials within a vessel is disclosed that provides a more reliable refractory anchor and resultant refractory lining system that is easier to install, both in terms of the refractory lining and the anchor assembly itself, when compared to prior art anchor assemblies. The anchor assembly includes a base pin assembly and at least one anchor leg connected to and extending from the base pin assembly. The base pin assembly includes a mounting end formed on one end of the pin assembly adapted for securing the base pin assembly to the vessel. The mounting end has an electrical resistance contact point formed thereon. The electrical resistance contact point preferably has a flux material located thereon.
1. An anchor assembly for anchoring refractory materials within a vessel, the anchor assembly comprising: a base pin assembly; and at least one anchor leg connected to and extending from the base pin assembly, wherein the base pin assembly has a mounting end formed on one end of the pin assembly adapted for securing the base pin assembly to the vessel, wherein the mounting end has an electrical resistance contact point formed thereon. 2. The anchor assembly according to claim 1, wherein each of the at least one anchor leg has a free end, a securing portion and at least one tab formed therein, wherein the securing portion is adapted to connect the at least one anchor leg to the base pin assembly, wherein each of the at least one tab extends from the anchor leg at an angle with respect to the anchor leg. 3. The anchor assembly according to claim 2, wherein the at least one anchor leg includes a plurality of anchor legs. 4. The anchor assembly according to claim 2, wherein the free end extends at an angle with respect to the securing portion. 5. The anchor assembly according to claim 2, wherein each of the at least one anchor leg has at least one opening formed therein, wherein each of the at least one opening is created when a corresponding tab is bent to extend from the anchor leg at the angle with respect to the free end. 6. The anchor assembly according to claim 5, wherein the free end extends at a first angle with respect to the securing portion and the tab extends at a second angle with respect to the securing portion. 7. The anchor assembly according to claim 6, wherein the at least one anchor leg includes a plurality of anchor legs. 8. The anchor assembly according to claim 2, wherein the base pin assembly has at least one slot formed therein for receiving the securing portion of the at least one anchor leg therein. 9. The anchor assembly according to claim 2, wherein the securing portion of each of the at least one anchor leg is secured to a portion of the base pin assembly. 10. The anchor assembly according to claim 9, wherein the securing portion is welded to the base pin assembly. 11. The anchor assembly according to claim 1, wherein the electrical resistance contact point has a flux material located thereon. 12. The anchor assembly according to claim 11, wherein the base pin assembly is formed from one of a carbon steel and an alloy steel. 13. The anchor assembly according to claim 11, wherein the base pin assembly has a first portion formed from a carbon steel and a second portion formed from an alloy steel, wherein the first portion and the second portion are welded together. 14. The anchor assembly according to claim 13, wherein the first portion and the second portion are welded together by a bimetallic weld. 15. The anchor assembly according to claim 13, wherein the mounting end is located on the first portion. 16. The anchor assembly according to claim 11, further comprising: a ceramic ferrule extending around the mounting end of the pin assembly. 17. 
A system for anchoring refractory materials to a vessel wall within a vessel, the system comprising: a plurality of anchor assemblies arranged in an array, wherein the plurality of anchor assemblies are secured to the vessel wall, wherein each anchor assembly comprises: a base pin assembly; and at least one anchor leg connected to and extending from the base pin assembly, wherein the base pin assembly has a mounting end formed on one end of the pin assembly adapted for securing the base pin assembly to the vessel, wherein the mounting end has an electrical resistance contact point formed thereon. 18. The system according to claim 17, wherein each of the at least one anchor leg has a free end, a securing portion and at least one tab formed therein, wherein the securing portion is adapted to connect the at least one anchor leg to the base pin assembly, wherein each of the at least one tab extends from the anchor leg at an angle with respect to the anchor leg. 19. The system according to claim 18, wherein the at least one anchor leg includes a plurality of anchor legs. 20. The system according to claim 18, wherein the free end extends at an angle with respect to the securing portion. 21. The system according to claim 18, wherein each of the at least one anchor leg has at least one opening formed therein, wherein each of the at least one opening is created when a corresponding tab is bent to extend from the anchor leg at the angle with respect to the free end. 22. The system according to claim 21, wherein the free end extends at a first angle with respect to the securing portion and the tab extends at a second angle with respect to the securing portion. 23. The system according to claim 22, wherein the at least one anchor leg includes a plurality of anchor legs. 24. The system according to claim 18, wherein the base pin assembly has at least one slot formed therein for receiving the securing portion of the at least one anchor leg therein. 25. The system according to claim 18, wherein the securing portion of each of the at least one anchor leg is secured to a portion of the base pin assembly. 26. The system according to claim 25, wherein the securing portion is welded to the base pin assembly. 27. The system according to claim 17, wherein the electrical resistance contact point has a flux material located thereon. 28. The system according to claim 27, wherein the base pin assembly is formed from one of a carbon steel and an alloy steel. 29. The system according to claim 27, wherein the base pin assembly has a first portion formed from a carbon steel and a second portion formed from an alloy steel, wherein the first portion and the second portion are welded together. 30. The system according to claim 29, wherein the first portion and the second portion are welded together by a bimetallic weld. 31. The system according to claim 29, wherein the mounting end is located on the first portion. 32. The system according to claim 17, further comprising: a ceramic ferrule extending around the mounting end of the pin assembly. 33. 
A method of mounting an anchor assembly for anchoring refractory materials to a vessel wall within a vessel, the method comprising: providing at least one anchor assembly, wherein each anchor assembly includes a base pin assembly, and at least one anchor leg connected to and extending from the base pin assembly, wherein the base pin assembly has a mounting end formed on one end of the pin assembly adapted for securing the base pin assembly to the vessel, wherein the mounting end has an electrical resistance contact point formed thereon; locating the mounting end of the at least one anchor assembly on the vessel wall such that the electrical resistance contact point is contacting the vessel wall; and securing the mounting end of the at least one anchor assembly on the vessel wall by welding the mounting end to the vessel wall using electrical resistance welding. 34. The method according to claim 33, wherein the electrical resistance contact point has a flux material located thereon. 35. The method according to claim 33, wherein the at least one anchor assembly includes a ceramic ferrule extending around the mounting end of the pin assembly, wherein locating the mounting end of the at least one anchor assembly on the vessel wall includes locating the ceramic ferrule such that the ceramic ferrule extends around a perimeter of the contact point between the electrical resistance contact point and the vessel wall.
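Claim 33 secures the mounting end to the vessel wall by electrical resistance welding at the contact point. As general background rather than a description of the disclosed process, the heat developed at a resistive contact is commonly estimated with the Joule heating relation Q = I^2 x R x t; the current, contact resistance, and weld time below are assumed, purely illustrative values.

```python
def resistance_weld_heat_joules(current_a: float,
                                contact_resistance_ohm: float,
                                weld_time_s: float) -> float:
    """Approximate heat generated at the weld contact: Q = I^2 * R * t."""
    return current_a ** 2 * contact_resistance_ohm * weld_time_s

# Assumed, purely illustrative numbers for a small pin weld.
q = resistance_weld_heat_joules(current_a=800.0,
                                contact_resistance_ohm=0.0002,
                                weld_time_s=0.5)
print(f"approximate contact heat: {q:.0f} J")  # ~64 J with these values
```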
3,600
274,088
15,963,333
3,638
A tradesman trading card package for promoting various trades comprises at least one trading card with a picture of a tradesman on one side of the card, information on the one side of the card describing the tradesman including name, profession, specialty and license number, and three columns on an opposite side of the card describing work experience including year obtained, organization and qualifications. The tradesman trading cards are packaged according to geographical areas to be distributed to consumers in those geographical areas.
1. A tradesmen trading card package for promoting various trades comprising: at least one trading card with a picture of a tradesman on one side of the card; information on the one side of the card describing the tradesman including name, profession, specialty and license number; and at least one column on an opposite side of the card describing work experience including year obtained, organization and qualifications; wherein the tradesmen trading cards are packaged according to a geographical area of the tradesman described in the card to be distributed to consumers in those geographical areas. 2. A tradesmen trading card for promoting various trades comprising: a picture of a tradesman on one side of the trading card; information on the one side of the card describing the tradesman including name, profession, specialty and license number; and at least one column on an opposite side of the card describing work experience including year obtained, organization and qualifications. 3. A tradesmen trading card system for promoting various trades comprising: trading cards describing one tradesman on each card with information on the cards including a picture of the tradesman, name, profession, specialty, location and experience; a central organizing body for determining which tradesmen may qualify to be included on a card, the content of the cards in general, and the specific information for each tradesman; a distribution system for the trading cards wherein the cards of a tradesman in a geographical area are distributed only to consumers in that geographical area.
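Claim 3 recites a distribution system in which the cards of a tradesman in a geographical area are distributed only to consumers in that area. A minimal sketch of that packaging rule is a group-by on geography; the record layout, field names, and area codes below are hypothetical and are not part of the disclosure.

```python
from collections import defaultdict

# Hypothetical tradesman card records; field names are illustrative only.
cards = [
    {"name": "A. Mason", "profession": "Electrician", "area": "90210"},
    {"name": "B. Lopez", "profession": "Plumber", "area": "60601"},
    {"name": "C. Smith", "profession": "Carpenter", "area": "90210"},
]

def package_by_area(card_records):
    """Group card records so each package holds only one geographical area."""
    packages = defaultdict(list)
    for card in card_records:
        packages[card["area"]].append(card)
    return dict(packages)

for area, pack in package_by_area(cards).items():
    print(area, [c["name"] for c in pack])
```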
3,600
274,089
15,962,572
3,638
Embodiments of the present invention relate to integrated modular LED display devices. In one embodiment, a modular LED display device comprises a plastic housing with an outer surface exposed to an external environment. The modular LED display device is configured to display images using an array of pixels attached to a first side of a printed circuit board attached to the plastic housing. The modular LED display device includes a circuit for controlling a plurality of LEDs, the circuit being attached to the opposite second side of the printed circuit board. The first side of the printed circuit board is sealed to be waterproof by an overlying compound. The modular LED display device further includes a power supply including a power converter for converting alternating current (AC) power to direct current (DC) power. The modular LED display device is configured to be exposed to the external environment without additional enclosures.
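The claims that follow describe modules that attach to one another to form an integrated display surface on which a single image is shown. As an illustrative sketch only, splitting one source image across a grid of identical modules reduces to mapping each source pixel to a module index and a local pixel coordinate; the per-module resolution below is an assumed example, not a dimension from the disclosure.

```python
def map_pixel(src_x: int, src_y: int,
              pixels_per_module_x: int, pixels_per_module_y: int):
    """Map a source-image pixel to (module column, module row, local x, local y)
    for a grid of identical LED modules."""
    module_col, local_x = divmod(src_x, pixels_per_module_x)
    module_row, local_y = divmod(src_y, pixels_per_module_y)
    return module_col, module_row, local_x, local_y

# Assumed example: each module is 64 x 32 pixels.
print(map_pixel(130, 40, pixels_per_module_x=64, pixels_per_module_y=32))
# -> (2, 1, 2, 8): third module across, second module down, local pixel (2, 8)
```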
1. A modular light emitting diode (LED) display device comprising: a first side and an opposite second side, wherein the first side of the modular LED display device comprises a display surface of the modular LED display device; a plastic housing comprising a first dimension that is between six inches and four feet, a first recessed region, and an outer surface of the modular LED display device that is exposed to an external environment and being sealed to be waterproof, the outer surface being part of the opposite second side of the modular LED display device; a printed circuit board attached to the plastic housing, the printed circuit board comprising a first side and an opposite second side; a plurality of LEDs arranged as pixels attached to the first side of the printed circuit board, wherein the pixels are arranged in an array of pixels comprising a plurality of rows and a plurality of columns, each pixel in the array of pixels is separated from adjacent pixels by a constant pixel pitch, and the modular LED display device is configured to display images using the array of pixels; a compound overlying the first side of the printed circuit board, wherein the first side of the printed circuit board is sealed to be waterproof by the compound, and the modular LED display device is configured to be exposed to the external environment without additional enclosures; a circuit for controlling the plurality of LEDs, the circuit being attached to the opposite second side of the printed circuit board, wherein the circuit is disposed in the first recessed region of the plastic housing; a power supply for powering the plurality of LEDs, the power supply comprising a power converter for converting alternating current (AC) power to direct current (DC) power; a thermally conductive material thermally contacting both the power supply and the plastic housing; a framework of louvers disposed over the first side of the printed circuit board, the framework of louvers being disposed between the plurality of rows; and coupling structures, wherein the modular LED display device is configured to be modularly attached with other modular LED display devices using the coupling structures to form an integrated display surface, and the modular LED display device is configured to operate with the other modular LED display devices to display a single image on the integrated display surface. 2. The modular LED display device of claim 1, wherein the plastic housing further comprises a second recessed region, and wherein the power supply is disposed in the second recessed region of the plastic housing. 3. The modular LED display device of claim 1, wherein: the LEDs of each of the pixels are configured as a surface-mounted device (SMD); and a surface of each of the SMDs is exposed to the external environment. 4. The modular LED display device of claim 1, wherein an ingress protection rating of the modular LED display device is IP 65. 5. The modular LED display device of claim 1, wherein an ingress protection rating of the modular LED display device is IP 66. 6. The modular LED display device of claim 1, wherein an ingress protection rating of the modular LED display device is IP 67. 7. The modular LED display device of claim 1, wherein an ingress protection rating of the modular LED display device is IP 68. 8. 
The modular LED display device of claim 1, further comprising a monitoring circuit configured to monitor power consumption of the modular LED display device and send a warning message upon detecting a lack of power. 9. The modular LED display device of claim 1, further comprising a pixel health loop circuit configured to monitor power being consumed by each of the plurality of LEDs. 10. The modular LED display device of claim 1, further comprising: an integrated data and power connector electrically coupled to the power supply, wherein the integrated data and power connector is configured to be waterproof, the integrated data and power connector comprises a set of power connectors and a set of data connectors, and the integrated data and power connector is electrically coupled to the circuit and to the plurality of LEDs; and a flexible cable comprising a first end and a second end, wherein the first end is coupled directly to the modular LED display device and the second end is coupled directly to the integrated data and power connector. 11. The modular LED display device of claim 1, further comprising: a height extending from a first edge of the modular LED display device to an opposite second edge of the modular LED display device; and a width extending from a third edge of the modular LED display device to an opposite fourth edge of the modular LED display device, wherein the printed circuit board extends to within an edge distance of each of the first edge, the opposite second edge, the third edge, and the opposite fourth edge, and the constant pixel pitch is greater than the edge distance. 12. The modular LED display device of claim 11, wherein the height is substantially half of the width. 13. A modular light emitting diode (LED) display device comprising: a first side and an opposite second side, wherein the first side of the modular LED display device comprises a display surface of the modular LED display device, and wherein the modular LED display device is configured to be exposed to an external environment without additional enclosures; a plastic housing comprising a first dimension that is between six inches and four feet, a second dimension that is between one foot and four feet, the second dimension being perpendicular to the first dimension, a first recessed region, and an outer surface of the modular LED display device that is exposed to the external environment and being sealed to be waterproof, the outer surface being part of the opposite second side of the modular LED display device; a printed circuit board attached to the plastic housing, the printed circuit board comprising a first side and an opposite second side; a plurality of LEDs arranged as pixels attached to the first side of the printed circuit board, wherein the pixels are arranged in an array of pixels comprising a plurality of rows and a plurality of columns, each pixel in the array of pixels is separated from adjacent pixels by a constant pixel pitch, and the modular LED display device is configured to display images using the array of pixels; a circuit for controlling the plurality of LEDs, the circuit being attached to the opposite second side of the printed circuit board, wherein the circuit is disposed in the first recessed region of the plastic housing; a power supply for powering the plurality of LEDs; a thermally conductive material thermally contacting both the power supply and the plastic housing; a framework of louvers disposed over the first side of the printed circuit board, the framework of 
louvers being disposed between the plurality of rows; and coupling structures, wherein the modular LED display device is configured to be modularly attached with other modular LED display devices using the coupling structures to form an integrated display surface, and the modular LED display device is configured to operate with the other modular LED display devices to display a single image on the integrated display surface. 14. The modular LED display device of claim 13, wherein: the plastic housing further comprises a second recessed region; the second recessed region comprises a third dimension and a fourth dimension that is perpendicular to the third dimension; the third dimension is parallel to and smaller than the first dimension; the fourth dimension is parallel to and smaller than the second dimension; and the power supply is disposed in the second recessed region. 15. The modular LED display device of claim 13, wherein the power supply comprises a power converter for converting alternating current (AC) power to direct current (DC) power. 16. The modular LED display device of claim 13, wherein the power supply comprises a power converter for converting direct current (DC) power to DC power. 17. The modular LED display device of claim 13, wherein: the LEDs of each of the pixels are configured as a surface-mounted device (SMD); and a surface of each of the SMDs is exposed to the external environment. 18. The modular LED display device of claim 13, further comprising a monitoring circuit configured to monitor power consumption of the modular LED display device and send a warning message upon detecting a lack of power. 19. The modular LED display device of claim 13, further comprising a pixel health loop circuit configured to monitor power being consumed by each of the plurality of LEDs. 20. The modular LED display device of claim 13, further comprising: an integrated data and power connector electrically coupled to the power supply, wherein the integrated data and power connector is configured to be waterproof, the integrated data and power connector comprises a set of power connectors and a set of data connectors, and the integrated data and power connector is electrically coupled to the circuit and to the plurality of LEDs; and a flexible cable comprising a first end and a second end, wherein the first end is coupled directly to the modular LED display device and the second end is coupled directly to the integrated data and power connector. 21. The modular LED display device of claim 13, further comprising: a height extending from a first edge of the modular LED display device to an opposite second edge of the modular LED display device; and a width extending from a third edge of the modular LED display device to an opposite fourth edge of the modular LED display device, wherein the printed circuit board extends to within an edge distance of each of the first edge, the opposite second edge, the third edge, and the opposite fourth edge, and the constant pixel pitch is greater than the edge distance. 22. The modular LED display device of claim 21, wherein the height is substantially half of the width. 23. 
A modular multi-device display system comprising: a mechanical support structure comprising a plurality of beams; a plurality of light emitting diode (LED) display devices, wherein the plurality of LED display devices is arranged in an array and mounted to the mechanical support structure so as to form an integrated display; a box disposed in a first housing and mounted to the mechanical support structure, wherein the box comprises a power management unit for providing power to each of the plurality of LED display devices, wherein the box comprises a receiver card that is configured to receive data to be displayed and feed the data to be displayed and communication to each of the plurality of LED display devices; and a plurality of electrical connections electrically connecting the box with each of the plurality of LED display devices, wherein each of the plurality of LED display devices comprises a first side and an opposite second side, wherein the first side of the LED display device comprises a display surface of the LED display device, a plastic housing comprising a first dimension that is between six inches and four feet, a first recessed region, and an outer surface of the LED display device that is exposed to an external environment and being sealed to be waterproof, the outer surface being part of the opposite second side of the LED display device, wherein the first housing is separate from the plastic housing, a printed circuit board attached to the plastic housing, the printed circuit board comprising a first side and an opposite second side, a plurality of LEDs arranged as pixels attached to the first side of the printed circuit board, wherein the pixels are arranged in an array of pixels comprising a plurality of rows and a plurality of columns, wherein each pixel in the array of pixels is separated from adjacent pixels by a constant pixel pitch, and wherein the LED display device is configured to display images using the array of pixels, a compound overlying the first side of the printed circuit board, wherein the first side of the printed circuit board is sealed to be waterproof by the compound, and wherein the LED display device is configured to be exposed to the external environment without additional enclosures, a circuit for controlling the plurality of LEDs, the circuit being attached to the opposite second side of the printed circuit board, wherein the circuit is disposed in the first recessed region of the plastic housing, a power supply for powering the plurality of LEDs, the power supply comprising a power converter for converting alternating current (AC) power to direct current (DC) power, a thermally conductive material thermally contacting both the power supply and the plastic housing, and a framework of louvers disposed over the first side of the printed circuit board, the framework of louvers being disposed between the plurality of rows. 24. The modular multi-device display system of claim 23, wherein each of the plurality of LED display devices is configured to be supported by both a first interior beam of the plurality of beams and a second interior beam of the plurality of beams, wherein the first interior beam is perpendicular to the second interior beam. 25. The modular multi-device display system of claim 23, wherein the plastic housing of each of the LED display devices further comprises a second recessed region, and wherein the power supply of each of the LED display devices is disposed in the second recessed region of the plastic housing. 26. 
The modular multi-device display system of claim 23, wherein for each of the LED display devices: the LEDs of each of the pixels are configured as a surface-mounted device (SMD); and a surface of each of the SMDs is exposed to the external environment. 27. The modular multi-device display system of claim 23, wherein an ingress protection rating of each of the LED display devices is IP 65. 28. The modular multi-device display system of claim 23, wherein an ingress protection rating of each of the LED display devices is IP 66. 29. The modular multi-device display system of claim 23, wherein an ingress protection rating of each of the LED display devices is at least IP 67. 30. A modular light emitting diode (LED) display device comprising: a first side and an opposite second side, wherein the first side of the modular LED display device comprises a display surface of the modular LED display device; means for encasing components of the modular LED display device, the means for encasing comprising plastic, a first dimension that is between six inches and four feet, a first recessed region, and an outer surface of the modular LED display device that is exposed to an external environment and being sealed to be waterproof, the outer surface being part of the opposite second side of the modular LED display device; means for emitting light from the modular LED display device, the means for emitting light comprising a plurality of pixels, wherein the pixels are arranged in an array of pixels comprising a plurality of rows and a plurality of columns, each pixel in the array of pixels is separated from adjacent pixels by a constant pixel pitch, and the modular LED display device is configured to display images using the array of pixels; means for supporting the means for emitting light, the means for supporting being attached to the means for encasing, wherein the means for emitting light are attached to a first side of the means for supporting; a means for protecting the modular LED display device overlying the first side of the means for supporting, wherein the means for supporting is protected by the means for protecting, and the modular LED display device is configured to be exposed to the external environment without additional enclosures; means for controlling operation of the means for emitting light attached to an opposite second side of the means for supporting, the means for controlling operation being disposed in the first recessed region; means for supplying power to the means for emitting light, the means for supplying power comprising a power converter for converting alternating current (AC) power to direct current (DC) power; means for transferring heat thermally contacting both the means for supplying power and the means for encasing; and means for coupling the modular LED display device, wherein the modular LED display device is configured to be modularly attached with other modular LED display devices using the means for coupling to form an integrated display surface, and the modular LED display device is configured to operate with the other modular LED display devices to display a single image on the integrated display surface.
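Claims 8-9 and 18-19 above recite a monitoring circuit that warns of a lack of power and a pixel health loop circuit that monitors the power drawn by each LED. As a sketch of that monitoring logic only, and assuming hypothetical per-pixel power readings, units, and a threshold value that the application does not specify, such a check might look like this:

```python
# Hypothetical sketch of a pixel health check: flag any pixel whose measured
# power draw falls below an assumed minimum, mirroring the claimed warning
# behaviour. Readings, units, and the threshold are illustrative assumptions.

MIN_PIXEL_POWER_MW = 5.0   # assumed lower bound for a healthy pixel (milliwatts)

def find_unhealthy_pixels(power_readings_mw):
    """power_readings_mw: {(row, col): measured power in mW} -> list of failing pixels."""
    return [pixel for pixel, power in power_readings_mw.items()
            if power < MIN_PIXEL_POWER_MW]

readings = {(0, 0): 7.2, (0, 1): 0.4, (1, 0): 6.8}
failing = find_unhealthy_pixels(readings)
if failing:
    print(f"warning: possible lack of power at pixels {failing}")  # -> [(0, 1)]
```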
3,600
274,090
15,962,417
3,638
An insulative panel having a frontside and an opposing backside, and one or more other sides that extend from the frontside to the backside, the one or more other sides angling inwardly from the frontside to the backside so that the panel has a tapering profile, the panel being configured to fit into a stud cavity or other such cavity having standardized dimensions, the panel comprising an insulating material with an R-value suitable for use in building construction and remodeling.
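Two quantities in this abstract and the claims below lend themselves to simple arithmetic: the panel's overall R-value (claim 1 requires at least R-3 per inch) and the backside inset produced by the inward taper (claim 4 allows side angles of 88 to 65 degrees). The sketch below works both out; the particular widths, depths, and angle chosen are examples, not values fixed by the application.

```python
import math

# Illustrative arithmetic for the tapered insulative panel.
# total R-value = (R per inch) x (panel depth in inches)
# backside width = frontside width - 2 * depth / tan(side angle),
# where the side angle is measured from the frontside plane (90 deg = no taper).

def total_r_value(r_per_inch: float, depth_in: float) -> float:
    return r_per_inch * depth_in

def backside_width(front_width_in: float, depth_in: float, side_angle_deg: float) -> float:
    inset_per_side = depth_in / math.tan(math.radians(side_angle_deg))
    return front_width_in - 2 * inset_per_side

print(total_r_value(3.0, 3.5))                  # R-10.5 for a 3.5" deep, R-3/inch panel
print(round(backside_width(14.5, 3.5, 80), 2))  # ~13.27" backside width for an 80 deg taper
```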
1. An insulative panel having a frontside and an opposing backside, and one or more other sides that extend from the frontside to the backside, the one or more other sides angling inwardly from the frontside to the backside so that the panel has a tapering profile, the panel being configured to fit into a building cavity having parallel walls of predetermined dimensions, the panel comprising an insulating material and the panel having an R-value of at least 3 per inch. 2. The panel of claim 1 wherein the one or more other sides comprise an opposing pair of a left side and a right side and the backside has left and right side edges on its perimeter that are inset from left and right side edges on the perimeter of the frontside by a predetermined degree dependent on the angling of the opposing pair of the panel's left side and right side. 3. The panel of claim 2 wherein the panel's perimeter has a generally rectilinear form for both the frontside and the backside. 4. The panel of claim 2 wherein the inward angling is at 88-65 degrees. 5. The panel of claim 2 wherein the panel is at least 10″ wide at the frontside. 6. The panel of claim 5 wherein the panel has a depth defined by the separation of the frontside and backside of at least 1.5″ to 12″. 7. The panel of claim 6 wherein the panel comprises an open cell foam material of at least 3″ in depth. 8. The panel of claim 5 wherein the panel is between 10″ to 60″ wide at the frontside and has a depth defined by the separation of the frontside and backside of at least 1.5″ to 12″. 9. The panel of claim 5 wherein the panel is between 14″ to 18″ wide at the frontside and has a depth defined by the separation of the frontside and backside of at least 1.5″ to 6″. 10. The panel of claim 5, wherein the panel comprises an open or closed cell foam material. 11. The panel of claim 1, wherein one or more collapsible zones are provided in the panel to define one or more convergeable sections. 12. The panel of claim 11 wherein the collapsible zone comprises a cut-out or notch oriented along the longitudinal axis of the panel and between the one or more other sides, which sides are a pair of a left side and a right side that are intended to be placed adjacent the left and right sides of a stud cavity. 13. The panel of claim 12 wherein the collapsible zone comprises a collapsible foam section. 14. The panel of claim 10 wherein the panel includes a plurality of spaced apart shot holes for accepting a filler material. 15. The panel of claim 1 wherein the panel has a multi-layer construction. 16. The panel of claim 15 wherein the layer is disposed as a surface layer on the panel's frontside and/or backside to provide any one or more properties selected from the group of: a protective layer, a finish layer, a vapor or moisture barrier, a fire retarding barrier, an adhesive layer for adhering to other materials, a structural reinforcement layer, and/or a wear resistant layer. 17. A method of assembling an insulated structure, comprising placing into a cavity defined by studs or other boundary elements serving as at least one set of spaced apart, opposing sides of the cavity, the panel of claim 1, such that the panel frictionally engages the spaced apart, opposing sides and fits substantially flush with the opening of the cavity. 18. The method of claim 17 wherein the studs comprise wood or metal studs defining a rectilinear cavity. 19. The method of claim 18 wherein the studs are set at a standard 16″×16″ spacing. 20. 
The method of claim 17 further comprising filling a gap or air space in the panel with a filler material.
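Claims 17-19 above describe press-fitting the panel into a cavity framed by studs at a standard 16″ spacing. As a hedged worked example only, assuming nominal two-by studs with an actual thickness of about 1.5″ (a dimension the application does not state), the clear cavity width and the frontside interference of a slightly oversized tapered panel can be estimated as follows:

```python
# Hedged fit check for a tapered panel in a stud cavity.
# Assumption (not from the application): studs are 1.5" thick and spaced
# 16" on center, giving a clear cavity width of 16" - 1.5" = 14.5".

STUD_SPACING_OC_IN = 16.0
STUD_THICKNESS_IN = 1.5          # assumed actual thickness of a nominal two-by stud
cavity_width = STUD_SPACING_OC_IN - STUD_THICKNESS_IN   # 14.5"

panel_front_width = 15.0          # example frontside width within claim 9's 14"-18" range
interference = panel_front_width - cavity_width

print(cavity_width)      # 14.5
print(interference)      # 0.5" of oversize at the frontside to drive the friction fit
```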
3,600
274,091
15,770,530
3,638
A notebook is formed by folding a continuous web form zigzag at rouletted fold lines, one fold per unit form, to form a text unit including a preset number of unit forms. After cutting off one folded side to leave the other folded side, the text unit is sandwiched between a front cover and a back cover and is covered with a spine. The spine is pressed against the remaining folded side of the text unit, together with the front cover and the back cover, through a mesh cloth impregnated with an adhesive, so as to cover and wrap the whole back of the stack of the front cover, the text unit and the back cover. The notebook can be used conveniently in a completely spread state (180-degree horizontal state), and retains high security and high functionality. A process for producing the notebook is also provided.
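Since the text unit is built from a preset number of unit forms that are printed on both surfaces and folded zigzag into connected pairs, a little arithmetic relates unit forms to folded sheet pairs and printable pages. The figures below are only an illustration of that relationship under the stated assumptions; the example count of 40 unit forms is not taken from the application.

```python
# Illustrative counting for a zigzag-folded text unit.
# Assumptions (following the claims below): each folded sheet pair is two
# neighbouring unit forms joined at a rouletted fold line, and every unit form
# carries print on both surfaces.

def text_unit_counts(unit_forms: int):
    sheet_pairs = unit_forms // 2          # two unit forms per folded pair
    printable_pages = unit_forms * 2       # both surfaces of every unit form
    return sheet_pairs, printable_pages

print(text_unit_counts(40))   # -> (20, 80): 20 folded pairs, 80 printable pages
```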
1. (canceled) 2. The notebook according to claim 10, wherein a release agent is applied in stripes on both sides of each ridge formed on one folded side of the multiplicity of stacked folded sheet pairs forming the text unit. 3. The notebook according to claim 10, wherein each unit form has a form face, and the form face includes a main data region equipped with a plurality of horizontal ruled lines, and the horizontal ruled lines include a ruled line formed of micro characters. 4. The notebook according to claim 3, wherein the horizontal ruled lines in the main data region comprise ruled lines formed of micro characters of from 0.3 point to 0.7 point and ruled lines formed of solid or dotted lines. 5. The notebook according to claim 3, wherein the main data region is entirely formed of a mesh of horizontal ruled lines and vertical ruled lines as a whole. 6. The notebook according to claim 10, having a print pattern characterized by an uppermost or lowermost ruled line or a line above the horizontal ruled line that is formed of characters having point sizes which sequentially increase or decrease in a range of from 0.3 point up to 2 to 4 point toward an end of the line, and the point sizes become maximum or minimum at an intermediate point of the line. 7. The notebook according to claim 10, wherein said spine comprises a binding cloth paper or fabric, and said mesh cloth comprises a meshed cheesecloth or gauze cloth. 8. The notebook according to claim 10, wherein outer surfaces of said front and back covers are each recessed in a stripe form along the one folded side of the stacked unit form group, said spine is formed of a binding cloth paper and disposed to straddle over the recessed part of the front cover and the recessed part of the back cover so as to be flush with the front cover and the back cover. 9. (canceled) 10. A notebook, comprising: a stack of a folded unit form group (text unit) formed by stacking a multiplicity of sheet pairs each comprising a pair of neighboring unit forms connected and folded with each other via a rouletted fold line, a mesh cloth impregnated with a liquid adhesive and directly applied against a back folded side of the unit form group for adhesion, and a front cover, a spine and a back cover disposed so as to wrap the unit form group, thereby providing an integrated notebook structure. 11. The process for producing a notebook according to claim 17, wherein: said one folded side is cut off while leaving the other folded side of the text unit before or after the disposition of the spine onto the stack of the front cover, the text unit and the back cover. 12. The process for producing a notebook according to claim 17, wherein said continuous web form is provided through steps of: successively printing a multiplicity of unit forms including horizontal ruled lines formed of a succession of micro characters by a printer equipped with an endless belt-shaped printing plate on one surface of a continuous web form, inverting the front and back surfaces of the continuous web form with a form inverter, printing on the other surface of the continuous web form a multiplicity of unit forms including a horizontal ruled line formed of a succession of micro characters by another printer equipped with an endless belt-shaped printing plate to provide the continuous web form with prints on both surfaces. 13. 
A process for producing a notebook according to claim 17, wherein said rouletted fold line is provided between each neighboring pair of unit forms by micro rouletting in a width direction of the continuous web form. 14. The process for producing a notebook according to claim 17, wherein the disposition of the mesh cloth impregnated with an adhesive between the back of the stack of the front cover, text unit and back cover and the spine, is performed by pressing of the spine onto the back of the stack so as to wrap the latter, after successive application of the adhesive and the mesh cloth or application of the mesh cloth preliminarily impregnated with the liquid adhesive onto the back of the stack. 15. (canceled) 16. The notebook according to claim 10, wherein said front cover, spine and back cover are formed of a single sheet of paper. 17. A process for producing a notebook according to claim 10, comprising: providing an elongated continuous web form having a multiplicity of unit forms successively printed with spacing therebetween along a length on both surfaces thereof together with a rouletted fold line formed in each spacing between the successive unit forms, folding up zigzag the continuous web form for every unit form with the rouletted fold lines, cutting and separating the continuous web form for every preset number of successive unit forms in a form width direction to form a zigzag-folded text unit, cutting off at least one side including one folded side while leaving the other folded side of the text unit, thereby forming a text unit comprising a stack of a multiplicity of folded sheet pairs each of which is connected via the rouletted fold line, sandwiching the text unit with a front cover and a back cover, and covering and wrapping a side including said remaining other folded side of the text unit of the resultant stack of the front cover, the text unit and the back cover directly with a mesh cloth impregnated with a liquid adhesive, and further with a spine, to form an integrated notebook. 18. The process for producing a notebook according to claim 17, further including a step of applying a release agent in stripes on both sides of each ridge formed on said other folded side of the multiplicity of stacked folded sheet pairs forming the text unit. 19. The process for producing a notebook according to claim 17, wherein the text unit comprising a multiplicity of folded sheet pairs is formed by cutting off three sides including said one folded side while leaving the other folded side of the zigzag-folded unit form group.
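Claims 4 and 6 above characterize the ruled lines as successions of micro characters of roughly 0.3 to 0.7 point, with print patterns running up to 2 to 4 point. Converting point sizes to millimetres (1 typographic point = 1/72 inch, about 0.353 mm) shows how small those characters are; the short conversion below is just that arithmetic.

```python
# Point-size to millimetre conversion for the claimed micro characters.
# 1 typographic point = 1/72 inch = 0.3528 mm (rounded).

MM_PER_POINT = 25.4 / 72

for pt in (0.3, 0.7, 2.0, 4.0):
    print(f"{pt} pt ~ {pt * MM_PER_POINT:.3f} mm")
# 0.3 pt ~ 0.106 mm and 0.7 pt ~ 0.247 mm, i.e. far below ordinary print sizes.
```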
3,600
274,092
15,770,490
3,638
An apparatus for waste disposal comprises a seat bottom comprising a seat body having a topside for supporting a user and an underside opposite to the topside. The underside comprises a moveable bottom portion. One or more arms are attached between a bottom of the seat body and a top of the moveable bottom portion. Each arm has a first end and a second end, with the first end of each arm attached to the bottom of the seat body and the second end of each arm attached to the top of the moveable bottom portion. A first waste bag support is provided on the bottom of the seat body and a second waste bag support is provided on the top of the moveable bottom portion.
1. An apparatus for waste disposal comprising: a seat bottom comprising a seat body having a topside for supporting a user and an underside opposite to the topside, the underside comprising a moveable bottom portion; one or more arms, each arm having a first end and a second end, with the first end of each arm attached to a bottom of the seat body and the second end of each arm attached to a top of the moveable bottom portion; and a first waste bag support on the bottom of the seat body and a second waste bag support on the top of the moveable bottom portion. 2. An apparatus according to claim 1 wherein the bottom of the seat body defines a cavity sized to accommodate one or more arms such that when the moveable bottom portion is in a closed position the moveable bottom portion abuts the bottom of the seat body around the cavity. 3. An apparatus according to claim 1 wherein the moveable bottom portion defines a cavity sized to accommodate the one or more arms such that when the moveable seat bottom is in a closed position the seat body abuts the top of the moveable bottom portion around the cavity. 4. An apparatus according to claim 1 wherein the one or more arms comprise a pair of arms. 5. An apparatus according to claim 4 wherein the first end of each of the one or more arms is pivotally attached to the bottom of the seat body. 6. An apparatus according to claim 5 wherein the second end of each of the one or more arms is pivotally attached to the top of the moveable bottom portion. 7. An apparatus according to claim 6 wherein each arm comprises a lower segment and an upper segment each having a first end and a second end, with the first end of the upper segment pivotally coupled to the second end of the lower segment by a central pivot. 8. An apparatus according to claim 7 wherein each arm comprises a lower mounting bracket for mounting to the top of the moveable bottom portion and an upper mounting bracket for mounting to the bottom of the seat body, wherein the second end of the upper segment is pivotally coupled to the upper mounting bracket by an upper pivot and the first end of the lower segment is pivotally coupled to the lower mounting bracket by a lower pivot. 9. An apparatus according to claim 8 wherein each arm comprises a central torsion spring mounted on the central pivot, an upper torsion spring mounted on the upper pivot, and a lower torsion spring mounted on the lower pivot for biasing the moveable bottom portion into the closed position. 10. An apparatus according to claim 9 wherein the lower torsion spring has a smaller spring constant than the upper torsion spring and the central torsion spring. 11. An apparatus according to claim 9 comprising a locking mechanism for keeping said moveable bottom portion in an open position. 12. An apparatus according to claim 11 wherein the locking mechanism comprises a protrusion on the second end of the lower segment and an engagement structure on the bottom of the seat body configured to engage the protrusion when the upper segment is against the engagement structure and the moveable bottom portion is pulled away from the seat body. 13. An apparatus according to claim 1 wherein each arm comprises one or more springs for biasing the moveable bottom portion into the closed position. 14. An apparatus according to claim 13 comprising a locking mechanism for keeping said moveable bottom portion in an open position. 15. 
An apparatus according to claim 1 wherein the first waste bag support comprises a wire frame fixedly mounted to the bottom of the seat body and the second waste bag support comprises a wire frame fixedly mounted to the top of the moveable bottom portion. 16. An apparatus according to claim 1 wherein at least one of the first waste bag support and second waste bag support comprises a wire frame mounted by a spring loaded mechanism. 17. An apparatus according to claim 1 wherein the moveable bottom portion is shaped to conform to a shape of the underside of the seat body such that the apparatus is concealed when the moveable bottom portion is in the closed position. 18. A method for adapting a seat bottom for waste disposal, the seat bottom comprising a seat body having a topside for supporting a user and an underside opposite to the topside, the method comprising: connecting a moveable bottom portion to the underside of the seat body by means of one or more arms, each arm having a first end and a second end, with the first end of each arm attached to a bottom of the seat body and the second end of each arm attached to a top of the moveable bottom portion; and mounting a first waste bag support on the bottom of the seat body and a second waste bag support on the top of the moveable bottom portion. 19. A method according to claim 18 comprising forming a cavity in the underside of the seat body sized to accommodate the one or more arms such that when the moveable bottom portion is in a closed position the moveable bottom portion abuts the seat body around the cavity. 20. A method according to claim 18 wherein the moveable bottom portion defines a cavity sized to accommodate the one or more arms such that when the moveable seat bottom is in a closed position the seat body abuts the moveable bottom portion around the cavity.
3,600
274,093
15,745,658
3,638
The invention relates to an insert for a passport booklet data sheet, formed by a multilayer complex having at least a first layer and a second layer, the second layer having a hinge with a folding zone where the insert is intended to be sewn or stapled into a passport booklet, said second layer having, in combination, at least one layer of plastics material and at least one metal reinforcing layer that together form an extension which extends a certain distance beyond said folding zone of the hinge, so as to reinforce the resistance of said data sheet to being ripped out of and torn from the passport booklet, characterized in that said layer of plastics material and said metal reinforcing layer extend over the entire surface area of the insert.
1. An insert for a passport booklet data page, formed by a multilayer complex including at least a first layer and a second layer including a hinge having a folding zone where the insert is intended to be sewn or stapled into a passport booklet, said second layer including a combination of at least one layer made of plastic and at least one metal reinforcing layer together forming an extension that extends a certain distance beyond said folding zone of the hinge so as to improve the pull-out and tear resistance of said data page in relation to the passport booklet, wherein said layer made of plastic and said metal reinforcing layer extend over the entire surface area of the insert. 2. The insert as claimed in claim 1, further including an antenna enabling the insert to communicate with a remote reader. 3. The insert as claimed in claim 1, wherein said distance by which said extension extends beyond said folding zone is between 2 and 15 mm. 4. The insert as claimed in claim 1, wherein said metal layer has a thickness of between 10 and 30 micrometers, and said layer of plastic has a thickness of between 20 and 150 micrometers. 5. The insert as claimed in claim 1, wherein said layer made of plastic is made of polyester, and said metal layer is made of aluminum. 6. The insert as claimed in claim 5, wherein the layer of aluminum is bonded to the layer of polyester using an adhesive having a thickness of 2 to 3 micrometers. 7. The insert as claimed in claim 1, further including, along the folding zone of the hinge, perforations, enabling the insert to be locally made more flexible. 8. The insert as claimed in claim 1, wherein said extension of the metal layer includes one or more visible security elements that improve the resistance of the data page against forgery. 9. The insert as claimed in claim 8, wherein said security elements include visible designs etched into the metal layer. 10. The insert as claimed in claim 8, wherein said security elements include designs that are visible by reflection, or designs in watermark form, holograms, laser-etched designs, or designs obtained by hot stamping, by goffering, by cutting or by embossing of the metal layer. 11. A data page for a passport booklet, comprising an insert as claimed in claim 1. 12. A passport booklet, comprising a data page as claimed in claim 11. 13. The insert as claimed in claim 7, wherein the perforations have a honeycomb shape.
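As a rough numerical illustration (not stated in the application itself), the dimensional ranges of claims 3, 4 and 6 can be read together, assuming the reinforcing part of the second layer is simply the stack of the plastic layer, the adhesive and the metal layer and that no other layers contribute:

$$ t_{\mathrm{stack}} = t_{\mathrm{plastic}} + t_{\mathrm{adhesive}} + t_{\mathrm{metal}} \approx (20\text{--}150)\,\mu\mathrm{m} + (2\text{--}3)\,\mu\mathrm{m} + (10\text{--}30)\,\mu\mathrm{m} \approx 32\text{--}183\,\mu\mathrm{m} $$

i.e. under that assumption the reinforcing stack is a few tens to a couple of hundred micrometres thick, while its extension projects 2 to 15 mm beyond the folding zone of the hinge.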
3,600
274,094
15,959,977
3,638
The present disclosure relates generally to picture frames and picture cabinets, and more specifically to rotatable picture frames and picture cabinets. In some embodiments, the picture frame may be easily rotated between a landscape display and a portrait display via a set of recesses that correspond to a set of protrusions on a hanging mount. The picture frame may also support a varying number of photographs in a varying number of picture support compartments while also allowing the picture frame to remain accessible while it is mounted on a planar surface such as a wall.
1. A mounting system comprising: a hanging mount with attachment means for securing the hanging mount to a surface, wherein the hanging mount comprises a plurality of protrusions; a picture frame comprising: a first set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in a first orientation; and a second set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in a second orientation; a third set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in the first orientation, wherein the third set of recesses are configured such that the picture frame can be mounted at a different height or width in the first orientation compared to the height or width of the picture frame when mounted on the first set of recesses; and a fourth set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in the second orientation, wherein the fourth set of recesses are configured such that the picture frame can be mounted at a different height or width in the second orientation compared to the height or width of the picture frame when mounted on the second set of recesses. 2. The mounting system of claim 1, wherein the plurality of protrusions comprises two protrusions and wherein the first set of recesses, the second set of recesses, the third set of recesses, and the fourth set of recesses each comprise two recesses that correspond to the two protrusions. 3. A picture frame comprising: a first set of recesses corresponding to a plurality of protrusions from a hanging mount for releasably attaching the hanging mount to the picture frame in a first orientation; and a second set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in a second orientation. 4. The picture frame of claim 3, further comprising: a third set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in the first orientation, wherein the third set of recesses are configured such that the picture frame can be mounted at a different height or width in the first orientation compared to the height or width of the picture frame when mounted on the first set of recesses; and a fourth set of recesses corresponding to the plurality of protrusions for releasably attaching the hanging mount to the picture frame in the second orientation, wherein the fourth set of recesses are configured such that the picture frame can be mounted at a different height or width in the second orientation compared to the height or width of the picture frame when mounted on the second set of recesses. 5. 
The picture frame of claim 3, further comprising: a front frame member, the front frame member being comprised of a frame surrounding a transparent display window; a rear support member, the rear support member including a plurality of picture compartments, where each picture compartment is capable of storing multiple pictures and where the rear support member is hingeably attached to the front frame member; a rear picture support releasably mounted within the picture compartment; an adjustment means disposed between the rear picture support and the picture compartment, the adjustment means placing a constant, yet variable force upon the rear picture support in a direction toward the display window; and attachment means for releasably securing the front frame member to the rear support member. 6. The picture frame of claim 5, wherein the transparent display window is made of glass. 7. The picture frame of claim 5, further comprising: a stand comprising: a stand recess; and a notch; a storage recess with a protrusion, wherein the protrusion corresponds to the stand recess such that the stand is held in place when stored; at least one slot with a slot protrusion; wherein the slot protrusion corresponds to the notch such that the stand fits into the slot. 8. The picture frame of claim 5, wherein the picture frame further includes a removable mat board mounted on an inside portion of the transparent display window, the mat board framing a border on the inside portion of the transparent display window. 9. The picture frame of claim 8, wherein the mat board is releasably secured to the front frame member by flexible support arms attached to the front frame member. 10. The picture frame of claim 8, wherein the mat board defines one display area. 11. The picture frame of claim 8, wherein the mat board defines a plurality of display areas. 12. The picture frame of claim 5, where the rear picture support is automatically adjusting.
3,600
274,095
15,770,166
3,638
A display device for selectively displaying at least one multi-dimensional object is disclosed. In at least one embodiment, the display device provides an upper portion and a corresponding lower portion configured for selective engagement with the upper portion, thereby allowing the upper portion to remain in a substantially vertical orientation. The upper portion provides a first frame and an opposing second frame selectively engageable with the first frame. The first frame provides a first aperture, the first aperture having a first window panel spanning the first aperture. The second frame provides a corresponding second aperture sized to approximate the dimensions of the first aperture, the second aperture having a second window panel spanning the second aperture. The first and second apertures cooperate to define an enclosure therebetween for selectively receiving the at least one object therewithin.
1. A device for exhibiting an object comprising: a. a base having a connectable exterior; b. an enclosure that is slidable onto the base and has a pair of apertures disposed along the surface of the enclosure, coextensive along the edge of each one of the apertures; c. wherein the enclosure may have a protrusion that can hold the display object; and d. a means configured for supporting the enclosure along the edge portion of the pair of apertures of the first and second portions of the enclosure. 2. A device according to claim 1 wherein the means configured for supporting the enclosure is an “L”-shaped locking mechanism attached to the enclosure. 3. A device according to claim 1 wherein the base has protrusions and crevices used to interconnect with other bases of the same device. 4. A device according to claim 1 wherein the bottom part of the base has a crevice or slot for inserting the enclosure to form a stacking device. 5. A device according to claim 1 wherein the base allows the device to be mounted in all directions (front, back, left, right, top and bottom). 6. A device according to claim 1 wherein the enclosure is formed of separable frames joined together, bounding the space for holding the object. 7. A device according to claim 6 wherein the enclosure is shaped to be held in an upright position. 8. A device according to claim 1 wherein the object is exhibited between the first and the second translucent material of the two separable frames. 9. A method of exhibiting an object using the device of claim 1 wherein the object is sustained in the region bounded by the opposed alignment of the first and second apertures, thereby creating an illusion of the object floating in the enclosure. 10. A device according to claim 6 wherein the enclosure has a lip or flange along its bottom edge. 11. A device according to claim 1 wherein the base portion is designed with a guide channel to slide and hold the enclosure in an upright position. 12. A device according to claim 1 wherein the enclosure has a protrusion on top that allows it to be connected to another base. 13. A device according to claim 1 wherein the enclosure has a slot or flange for securing the first and second frames. 14. A device according to claim 1 wherein the aperture can hold a multi-dimensional object. 15. A device according to claim 1 wherein a translucent material may be attached to the apertures of the first and second frames. 16. A device according to claim 15 wherein the translucent material is made of an elastomeric material. 17. A device according to claim 15 wherein the translucent material is made of a polymeric material. 18. A device according to claim 17 wherein the polymeric material is made of polyurethane. 19. A device according to claims 17 and 18 wherein the elastomeric and polymeric materials are substantially translucent. 20. A device according to claim 1 wherein the device is preferably made of a thermoplastic material. 21. A device according to claim 1 wherein the device is formed from a material including but not limited to wood, metal, aluminum and others. 22. A device according to claim 3 wherein the protrusions are formed with a relatively narrow neck portion and a relatively wider top portion. 23. A device according to claim 1 wherein the protrusion may be positioned within the frame, projecting towards the center of the aperture to hold the object to be displayed. 24. 
A method of displaying an object comprising: inserting an object between the first and second frames of the device of claim 1; and closing the frames, such that the object is retained in the aperture defined by the opposed alignment of the first frame and the second frame.
3,600
274,096
15,770,031
3,638
A method of manufacturing an image element array includes: providing a production tool having a surface pattern of ink-receptive elements spaced by areas which are not ink-receptive, the ink-receptive elements defining the image elements of the array; applying a multi-coloured first image formed of a plurality of inks to only the ink-receptive elements; and transferring only the portions of the multi-coloured first image corresponding to the image elements from the production tool to a substrate. An image element array is thereby formed on the substrate. The production tool surface pattern is configured such that, when a viewing element array overlaps the image element array, each viewing element within a first region of the image element array directs light from a respective image element or from a respective gap between the image elements, depending on the viewing angle. Thus, depending on the viewing angle, the viewing element array in the first region directs light from either the array of image elements or the gaps between them.
1-47. (canceled) 48. A method of manufacturing an image element array for an optically variable security device, comprising: providing a production tool having a surface pattern of ink-receptive elements spaced by areas which are not ink-receptive, the ink-receptive elements defining the image elements of the desired image element array; applying a multi-coloured first image formed of a plurality of inks to only the ink-receptive elements of the surface pattern and not to the areas in between; transferring only the portions of the multi-coloured first image corresponding to the image elements of the desired image element array from the production tool to a substrate, by bringing the plurality of inks on the surface pattern into contact with the substrate or with a transfer assembly which then contacts the substrate, whereby an image element array is formed on the substrate; wherein the surface pattern on the production tool is configured such that, when a viewing element array is overlapped with the image element array, each viewing element within a first region of the image element array directs light from a respective one of the image elements or from a respective one of the gaps between the image elements in dependence on the viewing angle, whereby depending on the viewing angle the viewing element array in the first region directs light from either the array of image elements or from the gaps therebetween, such that upon changing the viewing angle, the first image is displayed by the image elements in combination across the first region of the image element array at a first range of viewing angles and not at a second range of viewing angles. 49. A method according to claim 48, wherein each of the plurality of inks is applied to the surface pattern in accordance with a respective image component representing the area(s) of the first image having a colour to which the ink contributes, at least two of the image components corresponding to different areas of the first image such that at least two of the plurality of inks are applied to different respective areas of the surface pattern. 50. A method according to claim 48, wherein at least some of the ink-receptive elements individually receive two or more of the plurality of inks in respective laterally offset areas of the element, whereby at least some of the image elements in the image element array formed on the substrate are individually multi-coloured. 51. A method according to claim 48, wherein the surface pattern comprises either: a surface relief structure of elevations and depressions, the elevations forming the ink-receptive elements and the depressions forming the areas which are not ink-receptive; or an arrangement of hydrophilic and hydrophobic parts of the surface of the production tool, the hydrophobic parts forming the ink-receptive elements and the hydrophilic parts forming the areas which are not ink-receptive. 52. A method according to claim 48, wherein the multi-coloured first image is applied to the surface pattern by either: applying each of the plurality of inks to the production tool sequentially, in register with one another; or applying each of the plurality of inks to a collection surface in register with one another and then transferring the plurality of inks simultaneously from the collection surface onto the surface pattern. 53. 
A method according to claim 48, wherein each of the plurality of inks is applied from a respective patterned tool being a patterned lithographic printing plate, a patterned chablon plate, a patterned anilox roller, or a patterned gravure roller. 54. A method according to claim 48, wherein in the first region of the image element array, the surface pattern is configured such that the image elements have substantially the same width as one another and are arranged periodically at least in the direction parallel to their width. 55. A method according to claim 48, wherein in the first region of the image element array, the surface pattern is configured either such that the image elements are elongate image elements; or such that the image elements are arranged in a periodic two-dimensional grid. 56. A method according to claim 48, wherein the surface pattern is configured such that the image elements are 100 microns or less in at least one dimension. 57. A method according to claim 48, further comprising providing a second image overlapping at least part of the image element array such that elements of the second image are exposed through the gaps between the elements of the first image, whereby the elements of both images can be viewed from the same side of the image array. 58. An image element array manufactured in accordance with claim 48. 59. A method of manufacturing a security device, comprising: (i) manufacturing an image element array using the method of claim 48; and (ii) providing a viewing element array overlapping the image element array; wherein the image element array and viewing element array are configured to co-operate such that each viewing element within a first region of the image element array directs light from a respective one of the image elements or from a respective one of the gaps between the image elements in dependence on the viewing angle, whereby depending on the viewing angle the viewing element array in the first region directs light from either the array of image elements or from the gaps therebetween, such that upon changing the viewing angle, the first image is displayed by the image elements in combination across the first region of the image element array at a first range of viewing angles and not at a second range of viewing angles. 60. A method according to claim 59, wherein in the first region of the image element array, the surface pattern is configured such that the image elements have substantially the same width as one another and are arranged periodically at least in the direction parallel to their width, and wherein at least in the first region, the viewing element array is periodic in at least one dimension. 61. A method according to claim 59, wherein the viewing element array is registered to the image element array at least in terms of orientation. 62. A method according to claim 59, wherein the viewing element array is a focussing element array, the focussing elements comprising lenses or mirrors. 63. A security device manufactured in accordance with claim 59. 64. A security article comprising a security device according to claim 63, wherein the security article is a security thread, strip, foil, insert, transfer element, label, or patch. 65. A security document comprising a security device according to claim 63, wherein the security document is a banknote, cheque, passport, identity card, driver's licence, certificate of authenticity, fiscal stamp, or other document for securing value or personal identity. 66. 
A method of manufacturing an image element array for an optically variable security device, comprising: providing a production tool having a surface pattern of ink-receptive elements spaced by areas which are not ink-receptive, the ink-receptive elements defining the image elements of the desired image element array; applying a multi-coloured first image formed of a plurality of inks to only the ink-receptive elements of the surface pattern and not to the areas in between; transferring only the portions of the multi-coloured first image corresponding to the image elements of the desired image element array from the production tool to a substrate, by bringing the plurality of inks on the surface pattern into contact with the substrate or with a transfer assembly which then contacts the substrate, whereby an image element array is formed on the substrate; wherein the surface pattern on the production tool is configured such that the image elements have substantially the same width as one another and are arranged periodically at least in the direction parallel to their width, spaced by gaps therebetween. 67. A method of manufacturing a security device, comprising: (i) manufacturing an image element array using the method of claim 66; and (ii) providing a viewing element array overlapping the image element array; wherein the image element array and viewing element array are configured to co-operate such that each viewing element within a first region of the image element array directs light from a respective one of the image elements or from a respective one of the gaps between the image elements in dependence on the viewing angle, whereby depending on the viewing angle the viewing element array in the first region directs light from either the array of image elements or from the gaps therebetween, such that upon changing the viewing angle, the first image is displayed by the image elements in combination across the first region of the image element array at a first range of viewing angles and not at a second range of viewing angles.
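To make the angle-dependent switching recited in claims 48 and 59 concrete, the following minimal sketch models a single first region as a one-dimensional periodic array of printed image elements separated by gaps, overlaid by a lens array of the same pitch. It is an illustration only: the pitch, element width, focal length and registration offset are assumed values (the claims fix none of them beyond claim 56's element width of 100 microns or less), and the simple focal-plane geometry is an assumption rather than anything taken from the application.

import math

# Assumed, illustrative parameters (micrometres); not taken from the application.
PITCH_UM = 100.0        # period of image elements + gaps, and of the viewing elements
ELEMENT_WIDTH_UM = 40.0 # printed element width (claim 56 only says <= 100 um)
FOCAL_LENGTH_UM = 150.0 # assumed lens focal length
REGISTRATION_UM = 0.0   # assumed lateral offset between lens centres and element centres

def samples_image_element(view_angle_deg: float) -> bool:
    """Return True if, at this viewing angle, each lens relays light from an ink
    element rather than from the gap between elements. Because both arrays share
    one pitch, every lens in the region gives the same answer."""
    # Lateral position on the print plane sampled by a lens centred over an element.
    offset = REGISTRATION_UM + FOCAL_LENGTH_UM * math.tan(math.radians(view_angle_deg))
    # Fold the offset into one period and compare with the element half-width.
    folded = (offset + PITCH_UM / 2.0) % PITCH_UM - PITCH_UM / 2.0
    return abs(folded) <= ELEMENT_WIDTH_UM / 2.0

if __name__ == "__main__":
    for angle in range(-40, 41, 5):
        state = "image elements (first image shown)" if samples_image_element(angle) else "gaps between elements"
        print(f"viewing angle {angle:+3d} deg -> lenses relay light from {state}")

Running this prints contiguous ranges of viewing angles at which every lens in the region relays light from an ink element (the first image is displayed) and ranges at which every lens relays light from a gap, which is the collective switching behaviour described in the claims.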
3,600
274,097
15,769,195
3,638
A binding component holding sheet is configured to detachably hold a binding component obtained by spirally winding a wire rod and includes a holder configured to hold the binding component by insertion of a circumferential portion of the spirally wound binding component. The holder is configured to hold the binding component so that an area greater than a half of the binding component in a circumferential direction protrudes to one side of the binding component holding sheet and an area less than the half of the binding component in the circumferential direction protrudes to the other side of the binding component holding sheet.
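As an illustration of the holding geometry described above: if one turn of the spirally wound binding component is approximated as a circle and the plane of the holding sheet intersects that circle at an offset from the coil axis, the circumference splits into a larger arc protruding to one side of the sheet and a smaller arc protruding to the other. The short sketch below computes those two fractions; the circular approximation, the parameter names and the example numbers are assumptions for illustration, not values from the application.

import math

def protrusion_fractions(coil_radius: float, sheet_offset: float) -> tuple[float, float]:
    """Model one turn of the coil as a circle of radius `coil_radius` centred on the
    coil axis, held by a sheet whose plane lies at height `sheet_offset` relative to
    that axis. Return the fractions of the circumference lying above and below the
    sheet plane, i.e. protruding to the two sides of the holding sheet."""
    if abs(sheet_offset) >= coil_radius:
        raise ValueError("the sheet plane must intersect the coil cross-section")
    fraction_above = 0.5 - math.asin(sheet_offset / coil_radius) / math.pi
    return fraction_above, 1.0 - fraction_above

if __name__ == "__main__":
    # Example: a 10 mm coil radius held with the sheet plane 3 mm below the coil axis,
    # so more than half of each turn protrudes to the upper side of the sheet.
    above, below = protrusion_fractions(coil_radius=10.0, sheet_offset=-3.0)
    print(f"protrudes to one side: {above:.1%}, to the other side: {below:.1%}")

Under these assumptions, any non-zero offset reproduces the greater-than-half/less-than-half split the abstract calls for, and varying the offset is one way to picture regulating the protrusion amount to one side of the sheet.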
1. A binding component holding sheet configured to detachably hold a binding component obtained by spirally winding a wire rod, the binding component holding sheet comprising: a holder configured to hold the binding component by insertion of a circumferential portion of the spirally wound binding component, wherein the holder is configured to hold the binding component so that an area greater than a half of the binding component in a circumferential direction protrudes to one side of the binding component holding sheet and an area less than the half of the binding component in the circumferential direction protrudes to the other side of the binding component holding sheet. 2. The binding component holding sheet according to claim 1, wherein the holder is configured to hold the binding component so that a part of the area greater than the half of the binding component in the circumferential direction protrudes to one side of the binding component holding sheet and a part of the area less than the half of the binding component in the circumferential direction protrudes to the other side of the binding component holding sheet. 3. The binding component holding sheet according to claim 1, further comprising: an escape hole configured to expose the binding component from one side of the binding component holding sheet to the other side by the insertion of the circumferential portion of the spirally wound binding component, wherein the escape hole has a length corresponding to multiple turns of the binding component in an axial direction of the binding component. 4. The binding component holding sheet according to claim 1, wherein the holder is configured to regulate a protrusion amount of the binding component to one side of the binding component holding sheet. 5. The binding component holding sheet according to claim 3, wherein the escape hole is configured to regulate a protrusion amount of the binding component to one side of the binding component holding sheet. 6. A binding component holding sheet configured to detachably hold a binding component obtained by spirally winding a wire rod, the binding component holding sheet comprising: an escape hole configured to expose the binding component from one side of the binding component holding sheet to the other side by insertion of a circumferential portion of the spirally wound binding component, wherein the escape hole has a length corresponding to multiple turns of the binding component in an axial direction of the binding component. 7. A binding component separation mechanism configured to separate a binding component obtained by spirally winding a wire rod from the binding component holding sheet according to claim 1, the binding component separation mechanism comprising: a contact part configured to contact against an outer peripheral surface of the binding component protruding to one side of the binding component holding sheet; a restricting part configured to restrict displacements of the binding component and the binding component holding sheet from the other side of the binding component holding sheet; and a conveying unit configured to relatively move the binding component holding sheet and the contact part in a direction in which the binding component held to the binding component holding sheet comes close to the contact part. 8. 
The binding component separation mechanism according to claim 7, wherein the restricting part is configured to support an outer peripheral surface of the binding component protruding to the other side of the binding component holding sheet. 9. The binding component separation mechanism according to claim 7, wherein the binding component is held to the binding component holding sheet with a part of the area greater than the half of the binding component in the circumferential direction protruding to one side of the binding component holding sheet and a part of the area less than the half of the binding component in the circumferential direction protruding to the other side of the binding component holding sheet, and wherein the contact part is configured to contact against the binding component between a radial center position of the binding component protruding to one side of the binding component holding sheet and the binding component holding sheet. 10. The binding component separation mechanism according to claim 7, wherein the conveying unit includes a sheet conveying unit configured to convey the binding component holding sheet. 11. The binding component separation mechanism according to claim 10, wherein the sheet conveying unit includes: a first sheet conveying roller configured to contact one side of the binding component holding sheet; and a second sheet conveying roller provided to face the first sheet conveying roller and configured to contact the other side of the binding component holding sheet, and wherein the second sheet conveying roller is configured to support the other side of the binding component holding sheet at a more upstream side than the first sheet conveying roller with respect to a conveying direction of the binding component holding sheet. 12. A bookbinding apparatus configured to bind sheets having a plurality of holes formed therein in one row by a binding component obtained by spirally winding a wire rod, the bookbinding apparatus comprising: a sheet conveyance path configured to convey a sheet processed in an image forming apparatus; a hole forming unit configured to form a plurality of holes in one row at an end portion of a sheet to be conveyed on the sheet conveyance path; a sheet aligning unit configured to stack and align a plurality of sheets having holes formed in the hole forming unit; a binding mechanism configured to bind the sheets aligned in the sheet aligning unit by conveying the binding component in an axial direction of the binding component while rotating the binding component in a circumferential direction; a binding component storing unit configured to store therein a binding component holding sheet having a plurality of binding components held thereto; the binding component separation mechanism according to claim 7 configured to separate the binding component from the binding component holding sheet, and a booklet discharging unit configured to discharge a booklet bound with the binding component. 13. 
The bookbinding apparatus according to claim 12, further comprising: a binding component conveyance path configured to convey the binding component, which is to be supplied from the binding component storing unit, to the binding mechanism, wherein the binding component conveyance path forms a curved conveyance path for conveying the binding component with being curved with respect to the axial direction to an end portion of a side, at which the binding mechanism starts insertion of the binding component, of the sheets aligned in the sheet aligning unit, at a position that is distant from the end portion by a distance smaller than a length of one binding component. 14. A binding component separation mechanism configured to separate a binding component obtained by spirally winding a wire rod from the binding component holding sheet according to claim 6, the binding component separation mechanism comprising: a contact part configured to contact against an outer peripheral surface of the binding component protruding to one side of the binding component holding sheet; a restricting part configured to restrict displacements of the binding component and the binding component holding sheet from the other side of the binding component holding sheet; and a conveying unit configured to relatively move the binding component holding sheet and the contact part in a direction in which the binding component held to the binding component holding sheet comes close to the contact part. 15. The binding component separation mechanism according to claim 14, wherein the restricting part is configured to support an outer peripheral surface of the binding component protruding to the other side of the binding component holding sheet. 16. The binding component separation mechanism according to claim 14, wherein the binding component is held to the binding component holding sheet with a part of the area greater than the half of the binding component in the circumferential direction protruding to one side of the binding component holding sheet and a part of the area less than the half of the binding component in the circumferential direction protruding to the other side of the binding component holding sheet, and wherein the contact part is configured to contact against the binding component between a radial center position of the binding component protruding to one side of the binding component holding sheet and the binding component holding sheet. 17. The binding component separation mechanism according to claim 14, wherein the conveying unit includes a sheet conveying unit configured to convey the binding component holding sheet. 18. The binding component separation mechanism according to claim 17, wherein the sheet conveying unit includes: a first sheet conveying roller configured to contact one side of the binding component holding sheet; and a second sheet conveying roller provided to face the first sheet conveying roller and configured to contact the other side of the binding component holding sheet, and wherein the second sheet conveying roller is configured to support the other side of the binding component holding sheet at a more upstream side than the first sheet conveying roller with respect to a conveying direction of the binding component holding sheet. 19. 
A bookbinding apparatus configured to bind sheets having a plurality of holes formed therein in one row by a binding component obtained by spirally winding a wire rod, the bookbinding apparatus comprising: a sheet conveyance path configured to convey a sheet processed in an image forming apparatus; a hole forming unit configured to form a plurality of holes in one row at an end portion of a sheet to be conveyed on the sheet conveyance path; a sheet aligning unit configured to stack and align a plurality of sheets having holes formed in the hole forming unit; a binding mechanism configured to bind the sheets aligned in the sheet aligning unit by conveying the binding component in an axial direction of the binding component while rotating the binding component in a circumferential direction; a binding component storing unit configured to store therein a binding component holding sheet having a plurality of binding components held thereto; the binding component separation mechanism according to claim 14 configured to separate the binding component from the binding component holding sheet, and a booklet discharging unit configured to discharge a booklet bound with the binding component. 20. The bookbinding apparatus according to claim 19, further comprising: a binding component conveyance path configured to convey the binding component, which is to be supplied from the binding component storing unit, to the binding mechanism, wherein the binding component conveyance path forms a curved conveyance path for conveying the binding component with being curved with respect to the axial direction to an end portion of a side, at which the binding mechanism starts insertion of the binding component, of the sheets aligned in the sheet aligning unit, at a position that is distant from the end portion by a distance smaller than a length of one binding component.
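The holding-sheet claims above hinge on how much of the spiral binding component's circumference protrudes on each side of the sheet: an area greater than half to one side and less than half to the other. A minimal geometric sketch in Python, assuming one turn of the coil can be modelled as a circle and using made-up dimensions (the coil radius and sheet offset are illustrative, not taken from the patent), shows how those two arc fractions follow from where the sheet plane sits relative to the coil axis.

import math

def protrusion_fractions(coil_radius, sheet_offset):
    # Model one turn of the spiral binding component as a circle of radius
    # coil_radius whose axis lies sheet_offset away from the sheet plane.
    # Hypothetical model for illustration only, not taken from the patent text.
    if not 0 <= sheet_offset < coil_radius:
        raise ValueError("the sheet plane must intersect the coil (0 <= offset < radius)")
    half_angle = math.acos(sheet_offset / coil_radius)  # half-angle of the arc beyond the sheet
    lesser_side = half_angle / math.pi                  # fraction of the turn, < 1/2 when offset > 0
    greater_side = 1.0 - lesser_side                    # fraction containing the coil axis, > 1/2
    return greater_side, lesser_side

# Example with assumed dimensions: a 6 mm radius coil held 2 mm off-centre by the sheet.
greater, lesser = protrusion_fractions(coil_radius=6.0, sheet_offset=2.0)
print(f"about {greater:.0%} of each turn protrudes to one side and {lesser:.0%} to the other")

In this model an offset of zero gives an even split, so the greater-than-half / less-than-half condition in the claims corresponds to the sheet plane sitting off the coil axis.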
3,600
274,098
15,956,562
3,638
A lap siding product has a unique shiplap joint that spaces abutting pieces of siding correctly from each other without installer measurements. The shiplap joint comprises a bottom element and a top element. A lap siding panel or board has a bottom element shiplap joint at one end, and a top element shiplap joint at the other end. The corresponding ends of two lap siding panels or boards (i.e., one bottom element and one top element) together form the unique shiplap joint of the present invention. An engineered “stop” on the underside of the top element spaces the pieces of siding correctly, without requiring measurement during installation. This also eliminates the need for caulk, pan flashing or joint covers in the joint between the pieces of siding or cladding. The shape of the joint also reduces the intrusion of water, and re-directs water down and out from behind the siding.
1. A lap siding system, comprising: a pair of lap siding panels, each panel comprising an outer face, an inner face, a first end and a second end, wherein the first end of one panel is configured to meet with and form a shiplap joint with the second end of the other panel; wherein the first end comprises a stop element disposed on the inner face configured to position a corresponding second end at a pre-determined spacing distance when forming the shiplap joint. 2. The lap siding system of claim 1, wherein the first end of each panel comprises a top shiplap joint element, and the second end of each panel comprises a bottom shiplap joint element, wherein the top shiplap joint element overlaps in whole or in part the corresponding bottom shiplap joint element when forming the shiplap joint. 3. The lap siding system of claim 2, wherein the top shiplap joint element and the bottom shiplap joint element are equal in thickness. 4. The lap siding system of claim 2, wherein the top shiplap joint element is thicker than the bottom shiplap joint element. 5. The lap siding system of claim 2, wherein the top shiplap joint element is thinner than the bottom shiplap joint element. 6. The lap siding system of claim 1, wherein the stop element is configured to break off when the pair of panels expand. 7. The lap siding system of claim 2, wherein the stop element extends longitudinally parallel to an inner face of the top shiplap joint element. 8. The lap siding system of claim 2, wherein the stop element extends perpendicularly from the top shiplap joint element. 9. The lap siding system of claim 2, further comprising one or more drainage grooves or channels in the top or bottom shiplap joint element, or both. 10. The lap siding system of claim 2, further comprising a visual indexing spacing feature in the bottom shiplap joint element. 11. The lap siding system of claim 10, wherein the visual indexing spacing feature in the bottom shiplap joint element also comprises a drainage groove or channel.
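Claim 1 has the stop element fixing a pre-determined spacing distance between abutting panel ends, and claim 6 anticipates the panels expanding against it. As a rough illustration only, using the standard linear-expansion relation ΔL = α·L·ΔT with assumed values (the panel length, expansion coefficient, and temperature swing below are hypothetical, not from the patent), the following sketch estimates the order of magnitude of end gap such a stop might need to preserve.

def required_end_gap(panel_length_m, expansion_coeff_per_degC, temperature_swing_degC):
    # Estimate how much a butt joint between two equal panels closes when both
    # panels warm up, assuming each panel is anchored near its centre so half of
    # its total growth moves toward the joint. Illustrative physics only.
    growth_per_panel = expansion_coeff_per_degC * panel_length_m * temperature_swing_degC
    # Half of each panel's growth closes the joint, from both sides: 2 * (growth / 2).
    return growth_per_panel

# Example with assumed values: 3.6 m panels, 1.0e-5 per degC, 50 degC seasonal swing.
gap_m = required_end_gap(3.6, 1.0e-5, 50.0)
print(f"suggested minimum end gap: about {gap_m * 1000:.1f} mm")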
3,600
274,099
15,955,476
3,638
A solar panel mount includes a plate, a compression spacer, a mounting shaft, and a mounting member. The plate includes a first edge and a first surface. The plate defines at least one opening spaced from the first edge. The mounting member is between the plate and the compression spacer and defines at least one channel aligned with the at least one opening of the plate to receive the mounting shaft through an opening of the at least one opening and a corresponding channel of the at least one channel. The compression spacer receives the mounting shaft.
1. A solar panel mount, comprising: a plate including a first edge, a first surface extending from the first edge, and a second surface opposite the first surface, the plate defining at least one opening spaced from the first edge, the plate defining a thickness from the first surface to the second surface, the thickness less than 0.2 inches; a mounting shaft; a mounting member defining at least one channel aligned with the at least one opening of the plate to receive the mounting shaft through an opening of the at least one opening and a corresponding channel of the at least one channel, the mounting member having a first side facing the plate; and a compression spacer facing a second side of the mounting member opposite the first side, the compression spacer receiving the mounting shaft. 2. The solar panel mount of claim 1, further comprising a plurality of flanges extending from the first edge, wherein at least one flange of the plurality of flanges defines an opening between the flange and the first edge to allow water to drain through the opening. 3. The solar panel mount of claim 1, wherein the at least one opening is spaced from the first surface, the plate defining a cavity on an opposite side of the first surface from the at least one opening, the cavity receiving the mounting member. 4. The solar panel mount of claim 1, wherein the mounting member is made of a steel alloy material. 5. (canceled) 6. The solar panel mount of claim 1, wherein a length of the plate along the first edge is less than 10 inches. 7. The solar panel mount of claim 1, wherein the cavity is closer to the first edge than a second edge opposite the first edge. 8. The solar panel mount of claim 1, further comprising a solar panel mounting bracket attached by the mounting shaft to the plate adjacent to the at least one opening. 9. The solar panel mount of claim 8, further comprising a sealing member to seal a boundary between the solar panel mounting bracket and the plate when the solar panel mounting bracket is attached to the plate. 10. The solar panel mount of claim 1, wherein the plate is made of at least one of a UV stabilized plastic or a plastic including flame retardant additives. 11. The solar panel mount of claim 1, wherein the thickness is greater than 0.04 inches and less than 0.1 inches. 12. The solar panel mount of claim 1, further comprising a sealing plug attached to a first end of the at least one channel of the mounting member opposite a second end adjacent to the plate. 13. The solar panel mount of claim 1, further comprising a sealant in the cavity between the mounting member and the first plate. 14. 
A roof mounting assembly, comprising: a plate including a first edge, a first surface extending from the first edge, and a second surface opposite the first surface, the plate defining at least one opening spaced from the first edge and from the first surface, the plate defining a cavity on an opposite side of the first surface from the at least one opening, the plate defining a thickness from the first surface to the second surface, the thickness less than 0.2 inches; a mounting shaft; a mounting member defining at least one channel sized to be aligned with the at least one opening of the plate when the mounting member is received in the cavity such that a first side of the mounting member faces the at least one channel; and a compression spacer sized to be at least partially received in a second side of the mounting member opposite the first side of the mounting member when the mounting member is received in the cavity, the compression spacer defining an opening for receiving the mounting shaft. 15. The roof mounting assembly of claim 14, wherein the plate includes a plurality of flanges extending from the first edge, wherein at least one flange of the plurality of flanges defines an opening between the flange and the first edge to allow water to drain through the opening. 16. The roof mounting assembly of claim 14, wherein the thickness is greater than 0.04 inches and less than 0.1 inches. 17. The roof mounting assembly of claim 14, wherein the mounting member is made of a 6061 aluminum material. 18. The roof mounting assembly of claim 14, further comprising a solar panel mounting bracket attached by the mounting shaft to the plate adjacent to the at least one opening. 19. The roof mounting assembly of claim 18, further comprising a sealing member which seals a boundary between the solar panel mounting bracket and the plate when the solar panel mounting bracket is attached to the plate. 20. (canceled) 21. The solar panel mount of claim 1, wherein the plate includes a first portion, a second portion, and a divider between the first portion and the second portion, the first portion and the second portion extending from the first edge and defining the first surface and the second surface, the divider defining the at least one opening of the plate and a cavity receiving the mounting member, the divider extending above the first portion and the second portion. 22. The solar panel mount of claim 21, wherein the mounting member is fully received in the cavity.
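The mount and assembly claims above carry explicit dimensional limits: plate thickness under 0.2 inch (claims 1 and 14), thickness between 0.04 and 0.1 inch (claims 11 and 16), and a first-edge length under 10 inches (claim 6). A small hypothetical helper, written only to gather those numbers in one place (the example dimensions are invented), checks a plate specification against them.

def within_claimed_ranges(thickness_in, edge_length_in):
    # Compare a plate specification against the numeric limits recited in the
    # claims above. Illustrative check only, not any kind of legal analysis.
    return {
        "thickness < 0.2 in (claims 1 and 14)": thickness_in < 0.2,
        "0.04 in < thickness < 0.1 in (claims 11 and 16)": 0.04 < thickness_in < 0.1,
        "edge length < 10 in (claim 6)": edge_length_in < 10.0,
    }

# Example with assumed dimensions: 0.08 in thick plate, 8 in long first edge.
for limit, ok in within_claimed_ranges(0.08, 8.0).items():
    print(f"{limit}: {'within limit' if ok else 'outside limit'}")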
3,600