[ Story so far: 2006-08 showed the onboard Adaptec RAID to be very slow, and its management awkward; Linux md RAID was used instead, directly on the disks. In 2006-09 there was a problem with 3 of 5 disks dropping from the bus at once. Foolishly, a desperate attempt was made to improve things by buying an LSI RAID card. This was also pitifully slow, though with better management programs than the Adaptec ones. Initially the tests described here were to try to get the card working. Then Linux md RAID was considered again, but with some `pressure-testing' to try to identify the source of the problem. The source was eventually seen to be SMART health-checking, run while under other load, on the 3 new Maxtor disks.

Using the Adaptec non-RAID controller and Linux md RAID (5 or 6), heavy loading was placed on the array disks by such cruelty as the following, all running simultaneously:

$ while true ; do bonnie++ -d /public/ -c 30 -s 8000 -n 100000 -x 2 -u 600 -g 1000 ; done

$ while true
  do
      cp -a /usr/portage/ /public/portage_copy
      ls -lR /public/portage_copy >/dev/null 2>&1
      rm -rf /public/portage_copy
  done

$ while true
  do
      dd if=/dev/zero of=/public/tmpfile_nt bs=$((1024*1024)) count=500
      cat /public/tmpfile_nt > /dev/null
      dd if=/dev/zero of=/public/tmpfile_nt bs=$((1024*1024)) count=600
      cat /public/tmpfile_nt > /dev/null
      rm /public/tmpfile_nt
  done

and:

$ while true ; do for d in /dev/sd? ; do smartctl -t long $d ; done ; sleep 10000 ; done

All of these together, after a time ranging from minutes to a day or so, stimulated SCSI errors ON THE THREE SIMILAR MAXTOR Atlas10k5 DISKS ONLY, causing one or more to be lost from the array. YET AGAIN, Maxtor seems to have been the cause of (by now, MANY) lost hours. Since their support was unreachable, the disks have been replaced with ones from another manufacturer. The server has since worked well for over a year. ]
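During pressure-testing like this, a small watcher makes it easier to notice the moment md drops a disk. The sketch below is a hypothetical helper, not part of the original tests: it scans an mdstat-format status file (normally /proc/mdstat) for per-array status strings such as [UU_], where an underscore marks a failed or missing member.

```shell
# Hypothetical helper (an assumption, not from the original test runs):
# scan an mdstat-format file for degraded-array status strings like [UU_],
# where each '_' is a dropped disk. Normally pointed at /proc/mdstat.
check_mdstat() {
    # Print any degraded status lines; return 1 if one was found.
    if grep -E '\[U*_[U_]*\]' "$1" ; then
        return 1
    fi
    echo "all arrays healthy"
}
```

One might run this every few seconds alongside the load generators (for example under watch, or in yet another while-true loop), so that a lost disk is seen immediately rather than at the next manual check of /proc/mdstat.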