RAID 0 provides data striping with block interleave. This level requires a minimum of two drives; data is written to each drive in succession, with each block going to the next available drive (striping), yielding faster operation and less chance of overloading any single drive. The volume can, of course, be much larger than any single drive. The major problem is the lack of redundancy: the failure of a single drive brings the entire array down. RAID 0 is the fastest and most efficient array type, but offers no room for disaster recovery.
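The round-robin block placement described above can be sketched as a simple mapping function. This is only an illustration of the idea; the function name and layout are hypothetical, not any particular controller's scheme.

```python
# Minimal sketch of RAID 0 block-interleave striping (illustrative only).
def raid0_locate(block_index: int, num_drives: int) -> tuple:
    """Map a logical block to (drive, stripe) round-robin across the drives."""
    drive = block_index % num_drives    # next available drive in succession
    stripe = block_index // num_drives  # position of the block on that drive
    return drive, stripe

# With 2 drives, logical blocks alternate between the drives:
print([raid0_locate(b, 2) for b in range(4)])  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Because consecutive blocks land on different drives, sequential transfers can keep all spindles busy at once, which is where the speed comes from.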
RAID 1 provides disk mirroring and duplexing, and requires a minimum of two drives. The drives are configured in pairs, and all data is written identically to both drives. Each drive can be duplexed by connecting it to its own interface controller. When one drive fails, the system does not fail; the surviving drive continues to operate. Of course, two drives are now used for the equivalent storage capacity of one. The major drawback is that there is no performance gain at this level. RAID 1 is the array of choice for performance-critical, fault-tolerant environments, and it is the only choice for fault tolerance if no more than two drives are desired.
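The mirrored-pair behavior can be sketched as follows: every write goes to both drives, and a read can be served by either surviving drive. The class and its fields are hypothetical, a minimal model of the idea rather than any real driver.

```python
# Minimal sketch of RAID 1 mirroring (hypothetical structure, not a real driver).
class Raid1Pair:
    def __init__(self):
        self.drives = [dict(), dict()]   # two mirrored drives (block -> data)
        self.failed = [False, False]

    def write(self, block, data):
        for d in self.drives:            # identical data written to both drives
            d[block] = data

    def read(self, block):
        for i, d in enumerate(self.drives):
            if not self.failed[i]:       # any surviving drive can answer
                return d[block]
        raise IOError("both mirrors failed")

pair = Raid1Pair()
pair.write(0, b"payload")
pair.failed[0] = True                    # one drive fails...
print(pair.read(0))                      # ...the mirror still serves the data
```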
RAID 2 provides data striping with bit interleave. Data is written across each drive in succession, one bit at a time, and checksum (ECC) data is recorded on a separate drive. RAID 2 is very slow for disk writes and is seldom used today, since ECC is embedded in almost all modern disk drives.
RAID 3 provides data striping with bit interleave and parity checking. It is similar to level 2, but more reliable. Data is striped across the drives, one byte at a time. Usually four or five drives are used, providing very high data transfer rates. One drive is dedicated to storing parity information, so the failure of a single drive can be compensated for by using the parity drive to reconstruct the failed drive's contents. Since the parity drive is accessed on every write operation, writes tend to be slower. The failure of two or more drives cannot be recovered from. RAID 3 suits data-intensive environments with long sequential records, where it speeds up data transfer. However, it does not allow multiple I/O operations to be overlapped, and it requires synchronized-spindle drives to avoid performance degradation with short records.
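The parity reconstruction described above relies on the fact that XOR parity is self-inverting: the parity byte is the XOR of the corresponding data bytes, so any single lost stripe can be recomputed by XOR-ing the surviving stripes with the parity. A minimal sketch (the `parity` helper is illustrative, not a real RAID API):

```python
# Minimal sketch of XOR parity as used in RAID 3/4/5 (illustrative helper).
from functools import reduce

def parity(stripes):
    """XOR corresponding bytes of each stripe to form the parity stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # stripes on three data drives
p = parity(data)                                # stored on the parity drive

# Drive 1 fails; rebuild its stripe from the surviving drives plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

The same arithmetic also explains the write penalty: updating one data block requires recomputing and rewriting the parity block as well.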
RAID 4 provides block-interleave data striping with parity checking. As in level 3, a single drive is dedicated to parity, but data is striped in blocks, as in RAID 0. The drives function individually, so a single drive can read a block of data on its own. A failure of the controller is, of course, catastrophic. RAID 4 offers no advantages over RAID 5 and does not support multiple simultaneous write operations.
RAID 5 provides block-interleave data striping with check data distributed across all drives. This is the one to use for NetWare. Because parity information is distributed across all drives, no single parity drive becomes a bottleneck, and RAID 5 efficiency goes up as the number of disks increases. Hot spares can be used to rebuild a failed drive on the fly. It is the best choice in multi-user environments that are not write-performance sensitive. However, at least three, and more typically five, drives are required for a RAID 5 array.
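The distributed parity can be sketched as a rotation: for each stripe, the parity block sits on a different drive. The rotation shown here (a left-symmetric-style layout) is one common convention chosen for illustration; real controllers vary.

```python
# Minimal sketch of RAID 5's rotated parity placement (one example layout).
def raid5_parity_drive(stripe: int, num_drives: int) -> int:
    """Rotate the parity block to a different drive on each stripe."""
    return (num_drives - 1 - stripe) % num_drives

# With 3 drives, parity rotates through every drive instead of pinning
# all parity writes to one dedicated drive as in RAID 3/4:
print([raid5_parity_drive(s, 3) for s in range(4)])  # [2, 1, 0, 2]
```

Note also why efficiency rises with drive count: one drive's worth of capacity goes to parity, so the usable fraction is (n - 1)/n, e.g. 2/3 with three drives but 4/5 with five.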
An extension to RAID 5 that adds a log-structured file system, providing a mapping between a disk drive's physical sectors and their logical representation. As information is written, it is placed on sequential physical disk sectors.
A striped array whose segments are RAID 1 arrays, offering the same fault tolerance as RAID 1. High I/O rates are achieved by striping across the RAID 1 segments. This is an excellent solution for those considering RAID 1, since it adds good write performance, but it is an expensive solution.
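Striping over mirrored segments can be sketched as two layers: a logical block is first striped to a mirrored pair (RAID 0 layer), then written to both drives of that pair (RAID 1 layer). The function and drive numbering are hypothetical, shown only to make the nesting concrete.

```python
# Minimal sketch of a stripe over mirrored pairs (hypothetical layout).
def striped_mirror_locate(block: int, num_pairs: int) -> tuple:
    """Return (drives, stripe): the mirrored pair of drives holding the block."""
    pair = block % num_pairs             # RAID 0 striping across the pairs
    stripe = block // num_pairs
    drives = [2 * pair, 2 * pair + 1]    # both drives of the RAID 1 pair
    return drives, stripe

# With 2 pairs (4 drives), block 3 lands on pair 1, i.e. drives 2 and 3.
print(striped_mirror_locate(3, 2))  # ([2, 3], 1)
```

The expense noted above falls out of the layout directly: every block occupies two drives, so half the raw capacity is always spent on the mirrors.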
Implemented as a striped RAID 0 array whose segments are RAID 3 arrays. RAID 53 has the same fault tolerance and overhead as RAID 3. It is an excellent solution for those considering RAID 3, since it provides additional write performance, but it is expensive and requires all drives to share the same spindle synchronization.