I understand there to be three different types of SMR drives: Device managed,
Host aware and Host managed.
My understanding is that only 'Device managed' drives are currently shipping.
Any file system can use these, but the file system has no mechanism to
leverage or control the drive's behavior, so they are not good candidates
for any RAID-like system.
'Host aware' and 'host managed' SMR drives open up real possibilities for use
under ZFS and other file systems.
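To illustrate why a host-managed drive needs a zone-aware file system, here is a toy Python sketch of a zone with a write pointer; the class, zone size, and error messages are purely illustrative and do not correspond to any real ZBC/ZAC interface.

# Toy model of a host-managed SMR zone: writes must land exactly at the
# zone's write pointer or the drive rejects them.
class Zone:
    def __init__(self, start_lba, length):
        self.start = start_lba
        self.length = length
        self.write_pointer = start_lba   # next LBA the drive will accept

    def write(self, lba, nblocks):
        # A zone-aware file system must queue writes to match the pointer.
        if lba != self.write_pointer:
            raise IOError("unaligned write: zone expects LBA %d" % self.write_pointer)
        if lba + nblocks > self.start + self.length:
            raise IOError("write crosses zone boundary")
        self.write_pointer += nblocks

    def reset(self):
        # Resetting the pointer discards the zone's contents, which is why
        # in-place overwrites are not possible and data must be rewritten
        # elsewhere first.
        self.write_pointer = self.start

zone = Zone(start_lba=0, length=524288)  # e.g. a 256 MiB zone of 512-byte blocks
zone.write(0, 128)      # ok: starts at the write pointer
zone.write(128, 128)    # ok: continues sequentially
# zone.write(1024, 128) # would raise: not at the write pointer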
I found these links of interest.
Tim Feldman - Host-Aware SMR - OpenZFS Dev Summit 2014
http://open-zfs.org/w/images/2/2a/Host-Aware_SMR-Tim_Feldman.pdf
http://www.snia.org/sites/default/files/Dunn-Feldman_SNIA_Tutorial_Shingled_Magnetic_Recording-r7_Final.pdf
--mikej
-----Original Message-----
From: owner-freebsd-***@freebsd.org [mailto:owner-freebsd-***@freebsd.org] On Behalf Of InterNetX - Juergen Gotteswinter
Sent: Tuesday, March 24, 2015 1:42 PM
To: FreeBSD FS; freebsd-***@freebsd.org
Subject: Re: Zoned Commands ZBC/ZAC, Shingled SMR drives, ZFS
The HGST He8 HDDs completed their rebuild in 19 hours and 46 minutes. The
Seagate Archive HDDs completed their rebuild in 57 hours and 13 minutes.

This is ... a feature, right?
Post by Dale Kline
READ THE DOCUMENTATION THOROUGHLY on these SMR drives. There are serious WRITE restrictions on these drives because of the overlapping (shingled) tracks. I have read it over several times and am still not sure of all of the caveats. As Tom states below, they are to be used mainly in "WRITE ONCE, READ MANY" environments.
-----Original Message-----
Sent: Tuesday, March 24, 2015 12:39 PM
To: Shehbaz Jaffer
Subject: Re: Zoned Commands ZBC/ZAC, Shingled SMR drives, ZFS
Post by Shehbaz Jaffer
Hi,
I was wondering what cost advantage SMR drives provide compared to normal
CMR drives?
8TB SMR drive - $260
3TB CMR (Conventional Magnetic Recording) drive - $105
Purchase price is not irrelevant, but the key benefits are increased capacity per disk and reduced power usage, both per disk and (multiplied by the increase in capacity) per TB. In other words, the disks consume less power and you need fewer of them, perhaps allowing you to run fewer servers.
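For what it's worth, a quick back-of-envelope check using only the prices quoted above (Python, purely illustrative):

# Cost per TB from the quoted prices.
smr_cost_per_tb = 260 / 8.0   # 8TB SMR drive -> $32.50/TB
cmr_cost_per_tb = 105 / 3.0   # 3TB CMR drive -> $35.00/TB
print(smr_cost_per_tb, cmr_cost_per_tb)

So the SMR drive is already slightly cheaper per TB before counting the power and slot savings.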
Of course, you also need a mainly read-only workload. The RAID rebuild test from the linked review is *scary*. I wouldn't use these in ZFS raidz without plenty of disaster recovery testing: how long does it take to resilver the pool when you lose a disk, and what are the performance characteristics of the pool while it is doing so?
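As a rough illustration, assuming those were the 8 TB models and the rebuild touched the full capacity, the times quoted above imply roughly the following average throughput (Python, illustrative only):

# Implied average rebuild throughput from the quoted rebuild times,
# assuming 8 TB drives and a rebuild that writes the full capacity.
capacity_bytes = 8e12
hgst_seconds    = (19 * 60 + 46) * 60   # 19 h 46 m
seagate_seconds = (57 * 60 + 13) * 60   # 57 h 13 m
print("HGST He8:        %.0f MB/s" % (capacity_bytes / hgst_seconds / 1e6))
print("Seagate Archive: %.0f MB/s" % (capacity_bytes / seagate_seconds / 1e6))

That works out to roughly 112 MB/s versus about 39 MB/s, and a raidz resilver with its mixed read/write pattern could well be slower still.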
Cheers
Tom