
Quantum Atlas V Hard Drive

The product
  • Seek Times:
    Average: 6.3ms
    Track to Track: 0.8ms
  • Average Rotational Latency: 4.17ms
  • Rotational Speed: 7,200 RPM
  • Buffer Size: 4 MB
  • Shock Durability
    Non-Operational (2ms): 280 G
    Operational (2ms): 63 G

Price: $600 USD

Rating: 8/10
When Quantum's latest hard drive arrived in the mail, I was rushing to rip open the box and lay my eyes on what they call "the next generation of hard drives". When I got down to the goods in the box, I found Quantum's new 36.7GB Atlas V. Boasting such specs as an Ultra160 SCSI interface and a whopping 4 MB buffer, I was anxious to see just how this baby performed.

The specs are pretty decent for a higher-end 7,200 RPM drive. The data transfer rates are quite good by today's standards, but the one thing that sticks out is the 4 MB buffer. This is an unusually large buffer for a drive of this type; a more typical buffer is around the 2 MB mark. In benchmarking, the larger cache flexed its muscle and posted some impressive numbers, but it certainly adds to the cost of the drive. The buffer also has a sort of hidden function: it is used to back-buffer data in the event of the drive being bumped, so that data is not lost and the system doesn't have to start re-sending data. The larger buffer and the Ultra160 SCSI interface match up quite nicely, with the buffer taking full advantage of the extra bandwidth in the burst benchmarks.

The Atlas V is available with either a 68-pin LVD SCSI connector or an 80-pin SCA-2 connector. Our evaluation unit came equipped with a U160-compatible 68-pin LVD connector, which is backward compatible with Ultra2 and Ultra2 Wide; this is nice because it allows you to pop the Atlas V into an existing Ultra2 SCSI system. Most people think of performance and low CPU utilization when they think of SCSI, and SCSI-3 is no different, pumping an amazing 160 MB/sec across the bus while using minimal CPU power to do it. The Ultra160 SCSI standard used by the drive provides many new features, including twice the data bandwidth of Ultra2 Wide SCSI, Double Transition Clocking, Double Edge Clocking, and CRC error checking, as well as being fully backward compatible, as all SCSI standards are. There were three major changes to the SCSI-3 standard that allow so much bandwidth across the bus, the first being Double Transition Clocking.

Double transition clocking changes the digital protocol to use both edges of the SCSI request/acknowledge signal to clock data. Data transfer rates can be doubled simply by increasing the speed of only the data lines. For example, the request/acknowledge signal on Ultra2 SCSI runs at 80 MHz, while data runs at only 40 MHz, or 80 MB/second on a 16-bit wide bus. By using both edges of the same 80 MHz request/acknowledge signal, the data rate can be increased to 80 MHz, or 160 MB/second on a 16-bit wide bus.

So, without all the technobabble: the data pathways are "double pumped" while the request/acknowledge pathway is only single pumped. The bus is then clocked at 80 MHz, producing double the bandwidth of Ultra2 Wide SCSI without risking signal integrity, since REQ/ACK is still single clocked.

The second new component of SCSI-3 is Double Edge Clocking. This is where the signal peak is stretched out to allow easier timing across the bus. This provides a greater timing margin for traces, capacitive loads, cables, etc., and greatly reduces the chance of errors caused by noise and artifacts on the bus. At the same time, the bus is "double pumped", where information is transmitted on both the rising and falling edges of the signal. The net effect is that the maximum frequency of the clock lines (REQ/ACK) is slowed, without slowing the data rate.
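The bandwidth arithmetic behind double transition clocking is easy to sanity-check. Here is a small sketch (my own illustration, not part of any SCSI spec document) that computes peak bus bandwidth as transfers per second times bytes per transfer, using the 40 MHz / 16-bit figures from the Ultra160 standard:

```python
def scsi_throughput_mb_per_s(clock_mhz, transitions_per_cycle, bus_width_bits):
    """Peak bus bandwidth: (mega)transfers per second x bytes per transfer.

    clock_mhz: REQ/ACK clock rate in MHz
    transitions_per_cycle: 1 = single-edge clocking, 2 = double transition
    bus_width_bits: 16 for a Wide SCSI bus
    """
    megatransfers_per_sec = clock_mhz * transitions_per_cycle
    return megatransfers_per_sec * bus_width_bits // 8  # MB/sec

# Ultra2 Wide: data clocked on one edge of a 40 MHz clock, 16-bit bus
print(scsi_throughput_mb_per_s(40, 1, 16))  # 80 MB/sec

# Ultra160: same clock, but data latched on both edges
print(scsi_throughput_mb_per_s(40, 2, 16))  # 160 MB/sec
```

Doubling the transitions per clock cycle, rather than the clock rate itself, is exactly why the data rate doubles while the REQ/ACK signaling stays conservative.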

The third new feature of the SCSI-3 standard is CRC (Cyclic Redundancy Check) error checking. SCSI-3 CRC error checking is the same method as employed by Ethernet and Fibre Channel networks, and is a tried and tested method of data validation. All single- and double-bit errors, all odd numbers of bit errors, and all burst errors up to 32 bits long are caught by this method, and a failed check lets the transfer be retried rather than silently corrupting data.
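To see the idea in miniature, here is a sketch using Python's `zlib.crc32`, which implements the same 32-bit CRC family (the Ethernet polynomial) the article refers to. The payload contents are made up for illustration:

```python
import zlib

# Checksum computed by the sender over the data going across the bus
payload = b"512 bytes of SCSI data heading across the bus"
crc = zlib.crc32(payload)

# Flip a single bit in transit: the receiver's CRC no longer matches,
# so the corruption is detected and the transfer can be retried.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert zlib.crc32(corrupted) != crc

# An intact payload verifies cleanly.
assert zlib.crc32(payload) == crc
```

The strength of CRC over simple parity is exactly the burst-error coverage mentioned above: parity catches only odd numbers of flipped bits, while a 32-bit CRC also catches any error burst up to 32 bits long.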

The last feature Quantum touts as a data saver is its new Shock Protection System. With a standard hard drive, when the drive is accidentally bumped, the head gets tossed around, causing data to get scattered across the drive, and if you're really unlucky, the head crashes into the platter, damaging the surface and scattering little particles around. Not good. With the Shock Protection System, any shock applied to the drive is absorbed by the drive itself, rather than the head and arms. This does a couple of things: first, it keeps the head from "slapping" the platter, and it also keeps any data being written to the drive from being scattered all over the place. I trust that the SPS actually works without testing it, as I was not brave enough to start tossing around a $600 drive.

In the event that the drive is writing when it is bumped, the drive simply stops all writing when it detects movement and uses the data buffer to store information until it can re-align itself.
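As a rough mental model of that behavior (a toy sketch with hypothetical names, not Quantum's actual firmware), think of a queue that keeps accepting host data while the heads are parked, then drains once the drive has re-aligned:

```python
from collections import deque

class ShockProtectedWriter:
    """Toy model of deferring writes during a shock event."""

    def __init__(self):
        self.buffer = deque()   # stands in for the drive's data buffer
        self.platter = []       # data actually committed to the platter
        self.shocked = False

    def write(self, block):
        self.buffer.append(block)   # host data always lands in the buffer
        if not self.shocked:
            self.flush()

    def on_shock(self):
        self.shocked = True         # movement detected: stop writing

    def on_realigned(self):
        self.shocked = False
        self.flush()                # drain everything that was held back

    def flush(self):
        while self.buffer:
            self.platter.append(self.buffer.popleft())

drive = ShockProtectedWriter()
drive.write("block-1")
drive.on_shock()
drive.write("block-2")   # held in the buffer rather than lost
drive.on_realigned()
print(drive.platter)     # ['block-1', 'block-2']
```

The key point is that the host never has to re-send anything: the buffered blocks land on the platter once writing is safe again.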


I installed the HD following Quantum's instructions provided with the drive to the letter and booted up the computer. The Tekram 390 U2W controller recognized the drive immediately, and NT continued to boot. My first task at hand was to run the drive through a few rounds of SiSoft Sandra to get a feel for how the drive performed, which required that I partition and format the drive. Only NT refused to format the drive, reporting a general error message. Thinking NT was just being stubborn, as it can be, I rebooted to DOS and tried to format a 2 GB partition using FAT16. No luck; DOS's format utility also refused to format the drive, also returning a general error message. I tried all sorts of partition sizes, placements, and file systems, but to no avail. I double-checked my jumper settings and physically re-installed both the controller and the HD, with no change. Having had enough of this nonsense, I popped in my Linux Mandrake 7 CD and started an installation on the Atlas. Linux recognized, partitioned, and formatted the drive in record time, the whole 1.2 GB install taking a mere 40 minutes.

Not being too familiar with Linux benchmarking, I started up The GIMP (the GNU Image Manipulation Program, roughly equivalent to Photoshop) and tried loading and saving large graphics files. To say the least, I was amazed at how fast 30 and 50 MB images popped up, usually with no noticeable lag at all. Opening several pictures and other graphics programs made nary a dent in performance as I tossed around 100 MB images in seconds, whereas my aging IDE drive dies when I start working with several large images. After several weeks of use it was quite evident that the operating system was running faster as a whole; the menus were snappier, and programs popped open pretty quickly. It was painfully evident when I returned to using an old IDE hard drive just how large the performance difference is.

When Quantum conceived the Atlas V, it was not intended as a desktop drive, but for server and high-end professional applications where large streams of data need to be moved fast and efficiently. I already knew that the drive could throw around large image files with ease, but servers are a whole different game. Starting three simultaneous servers for my LAN (an MP3 server, a general file server, and a web server) seemed like a fitting task to push the drive. Under a normal load of serving 50 Mbit/sec from the drive, the extra work was barely detectable: the system was still snappy, and large files were not a problem. It was a different story when I cranked the load on the drive up to 100 Mbit/sec; it was very evident that performance was hurting. Although the system was still very usable, the OS was slower and read/write performance in The GIMP was noticeably worse. Keep in mind that Quantum rates the read throughput at between 17 and 29 MB/sec, and considering that roughly 40% of that available throughput was being used, performance was still surprisingly good. To get some cold, hard numbers off the drive, I enlisted the popular Bonnie benchmark. Bonnie is a *NIX file system benchmark which stresses known hard drive I/O bottlenecks, using both per-character and block file access methods to test the drive's read, write, and re-write abilities.
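Bonnie's per-character versus block distinction is easy to reproduce in miniature. This sketch (my own illustration, not Bonnie's actual code) writes the same data once with a system call per byte and once in large blocks; the per-byte version pays for hundreds of thousands of extra syscalls, which is exactly the kind of bottleneck Bonnie's putc test exposes:

```python
import os
import tempfile
import time

def write_per_char(path, data):
    # One unbuffered write() call per byte, like Bonnie's per-character test
    with open(path, "wb", buffering=0) as f:
        for i in range(len(data)):
            f.write(data[i:i + 1])

def write_block(path, data, block=64 * 1024):
    # Large sequential writes, like Bonnie's block test
    with open(path, "wb", buffering=0) as f:
        for off in range(0, len(data), block):
            f.write(data[off:off + block])

data = os.urandom(256 * 1024)  # small test buffer to keep the demo quick
with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    write_per_char(os.path.join(d, "per_char.bin"), data)
    t1 = time.perf_counter()
    write_block(os.path.join(d, "block.bin"), data)
    t2 = time.perf_counter()
    print(f"per-char: {t1 - t0:.3f}s  block: {t2 - t1:.3f}s")
```

Both files end up byte-for-byte identical; only the cost of getting them onto the disk differs, which is why Bonnie reports the two access methods as separate numbers.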

When using a 100 MB test file, the drive posts very good numbers, especially with the block access method. When the test file is cranked up to 500 MB, we see drastic differences in the block read and random seek numbers. The large discrepancies are the combined effect of the Atlas's onboard cache and Linux's file caching: Linux will use free system RAM to cache files that are frequently used, and this is what throws the block read and random seek numbers out. Once the test file exceeds available system RAM, Linux's file caching largely falls away, and we see something much closer to the drive's real throughput. Even with the larger test file, the drive posted very good block benchmarks: 44 and 34 MB/sec when writing, and 178 and 28 MB/sec (no, that's not a typo; once again the drive's cache and Linux's file caching come into play) when reading.
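The cache-inflated read numbers are easy to reproduce on any Linux box. This sketch (an illustration of the page-cache effect, with actual speedups depending on free RAM and the OS) reads the same file twice; the first pass may touch the disk, while the second is typically served straight from memory:

```python
import os
import tempfile
import time

def timed_read(path):
    """Read the whole file, returning (bytes read, seconds elapsed)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return len(data), time.perf_counter() - start

# Create an 8 MB scratch file to read back
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 * 1024 * 1024))
    path = f.name

size, cold = timed_read(path)   # may come from the disk
size, warm = timed_read(path)   # typically served from the page cache
print(f"cold: {size / cold / 1e6:.0f} MB/s  warm: {size / warm / 1e6:.0f} MB/s")
os.unlink(path)
```

This is why a benchmark file smaller than free RAM measures the cache rather than the platters, and why Bonnie results only reflect the drive itself once the test file outgrows system memory.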




Copyright 1999-2007 TargetPC.com. All rights reserved. Privacy information.