[OpenIndiana-discuss] ZFS 0+1 across disparate drives

Mehmet Erol Sanliturk m.e.sanliturk at gmail.com
Wed Dec 1 18:38:05 UTC 2021


On Wed, Dec 1, 2021 at 8:52 PM Michelle <michelle at msknight.com> wrote:

> Good advice. Thanks for taking the time.
>
> There remains one question: can drives and partitions be mixed in a
> ZFS pool?
>
> This project is a backup project which is actually a backup of a
> backup, if that makes sense.
>
> The original data is spread across two servers, one running OI (the
> main data store) and a Cisco NAS (my video/rip store). These are backed
> up to external drives, with two backup sets for each. So I already have
> a situation with one RAID resilient copy of the data, along with two
> full external backups of the data.
>
> But I have a series of odd drives hanging around, as many people
> probably do. And I want to do something useful with them.
>
> The only other alternative is to dismantle and trash the drives.
>
> So I am going to undertake this project, and document it, so that
> others in the same situation can find some use for their drives as
> opposed to simply throwing away perfectly serviceable ones.
>
> Yes, the drives are going to be disparate. Most of them are WD Red
> units, one is WD green. For this project I will also be re-using an old
> motherboard which has been sitting in the attic for years.
>
> The operating system will be on its own SSD, and the whole thing
> will be documented so that, in the case of an OS failure, the drive
> can easily be rebuilt and reconfigured... because all the notes are
> there.
>
> So... yes... partially I'm doing this for the hell of it. And hopefully
> my success or failure will be documented and of use to other people. I
> have a small YouTube channel - not monetized - where all this will
> ultimately be recorded.
>
>
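For reference, on the question of mixing whole disks and partitions: ZFS accepts whole disks, partitions (slices), or even plain files as vdev members. A minimal sketch follows; the device names are hypothetical illustrations, not a tested recipe, so try it on disposable devices first.

```shell
# ZFS vdev members may be whole disks, partitions/slices, or files.
# Device names (c1t0d0, c2t0d0s3) are hypothetical examples in
# Solaris/illumos naming style.

# Mirror a whole 2TB disk against a 2TB partition of a larger disk:
zpool create tank mirror c1t0d0 c2t0d0s3

# A mirror's usable size is that of its smallest member, so the
# remainder of the larger disk stays free for other partitions.
zpool status tank
```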




I am not using ZFS, so I will not be able to make a useful comment
about ZFS-specific issues.

Your idea to use your spare disks as an additional backup facility is
a really good one.
You are on the right track. You will not lose anything (your efforts
will increase your experience, and therefore your expertise), and in a
serious hardware failure your additional backups would be a very
important resource for you. Over previous years I lost a serious
amount of data by not taking additional backups.

My other opinion is: do not use concatenated drives, because if one
fails, it will cause the failure of the others as well. Use
independent drives instead; if one fails, the others remain usable.
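In ZFS terms, that advice maps to creating one single-disk pool per drive rather than one pool striped across all of them. A minimal sketch, with hypothetical device names:

```shell
# Independent single-disk pools: if one drive dies, only its own
# pool is lost.  Device names are hypothetical.
zpool create backup1 c2t0d0
zpool create backup2 c2t1d0

# Contrast: striping (concatenating) the same disks into one pool
# means any single drive failure loses the whole pool:
#   zpool create backup c2t0d0 c2t1d0    # no redundancy
```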


With my best wishes and good health to all of you.







>
> On Wed, 2021-12-01 at 20:08 +0300, Mehmet Erol Sanliturk wrote:
> > On Wed, Dec 1, 2021 at 2:55 PM Michelle <michelle at msknight.com>
> > wrote:
> >
> > > I'm trying to achieve a resilient way of bringing together all my
> > > older
> > > drives for a backup solution using scraps of whatever I can get my
> > > hands on.
> > >
> > > I have close to 12TB of data, so even the 10TB won't be enough to
> > > back everything up, but this is as much for the exercise of doing
> > > it as for achieving anything solid. It won't be under pressure,
> > > but I'd rather push the envelope and see what I can do.
> > >
> > > So how would the command go?
> > >
> > > zpool create tank raidz mirror drive1 drive2 mirror drive3 drive4
> > > drive5
> > >
> > > ...which is where I come unstuck with the 2TB drive in the mix.
> > >
> > >
> > >
> >
> >
> >
> > A few days ago I lost many weeks of work because drive #1 of 3 died
> > before it had been synchronized to drives #2 and #3.
> > This made the computer unbootable.
> > I replaced the failed disk and synchronized it with #2.
> > Then disk #3 failed and also made the computer unbootable.
> > I replaced that disk as well and synchronized it with #2.
> >
> > Disk #1 was new, but bought approximately five years ago.
> >
> > You say that your disks are older.
> >
> > One "safe" but slow choice would be the following .
> >
> > Use external USB docks for each of your disks and make their file
> > systems
> > compatible
> > with your computer ( if your disks have other file systems ) .
> >
> > With your synchronization shell scripts (1) mount  (2) rsync (3)
> > un_mount
> > your drives
> > by using cron or ( manually which this option is not a good choice )
> > .
> >
> > If any one of your disks fails , it will not affect your computer .
> > This will be slow but without any other harm .
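The mount / rsync / unmount cycle described above can be sketched as a small shell script for cron. The device, mount point, and source path below are hypothetical placeholders to adapt to your own layout.

```shell
#!/bin/sh
# Sketch of the (1) mount, (2) rsync, (3) unmount backup cycle.
# DEVICE, MOUNTPOINT and SOURCE are hypothetical; adjust to taste.
set -e

DEVICE=/dev/dsk/c3t0d0s0      # USB-docked backup disk
MOUNTPOINT=/backup/dock1
SOURCE=/export/home/          # trailing slash: copy contents

mkdir -p "$MOUNTPOINT"
mount "$DEVICE" "$MOUNTPOINT"

# -a preserves permissions and times; --delete mirrors deletions.
rsync -a --delete "$SOURCE" "$MOUNTPOINT/"

umount "$MOUNTPOINT"
```

With `set -e`, a failed mount aborts the script before rsync writes anywhere unexpected; a crontab entry such as `0 3 * * 0 /root/backup-dock1.sh` would run it weekly.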
> >
> >
> > The most suitable additional measure may be to back your data up to
> > external disks regularly. Such disks are not continuously connected
> > to the computer, so they are not exposed to harmful electrical
> > events.
> >
> >
> >
> > OR
> >
> > You may use another computer (such as a single-board computer) as an
> > NFS server (or a NAS, if one is available to you) and put your
> > drives in that server. Then synchronize your drives from your
> > computer. If any disk fails, it affects only the server; your
> > computer continues to work unaffected.
> >
> >
> >
> > Mehmet Erol Sanliturk
> >
> > > On Wed, 2021-12-01 at 11:17 +0000, James wrote:
> > > > On 01/12/2021 08:31, Michelle wrote:
> > > > > Say I was to put a 2tb, three 4tb and a 6tb together (a 2 and
> > > > > two 4
> > > > > would make 10 and the other 4 and the 6 would also make 10)
> > > > >
> > > > > Would that be possible with ZFS now?
> > > >
> > > > I think it has always been possible; the question is whether it
> > > > is sensible. Try it, if you have nothing to lose. The problem is
> > > > that if one drive fails, it takes out all of one side of the
> > > > mirror.
> > > >
> > > > Why not use 2 separate 4TB mirrors? 4&4 = 4, 4&6 = 4, total
> > > > 8TB. You lose the 2TB drive completely, but (guessing) it is the
> > > > slowest and oldest. You ignore 2TB of the 6.
> > > >
> > > > You don't say what you are trying to achieve, but it's unlikely
> > > > you have the full 8TB of data, and unlikely that it can't be
> > > > split.
> > > >
> > > >
> > > >
> > > >
> > > >
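James's two-mirror layout would look something like this. The device names are hypothetical, and each mirror's usable size is that of its smaller member.

```shell
# Two mirror vdevs striped into one pool, as James suggests.
# Device names are hypothetical.  Capacity: 4&4 -> 4TB usable,
# 4&6 -> 4TB usable (2TB of the 6TB disk unused), total 8TB.
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
```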
> > > > _______________________________________________
> > > > openindiana-discuss mailing list
> > > > openindiana-discuss at openindiana.org
> > > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
>
>
>

