[OpenIndiana-discuss] Doubt on ZFS

Basil Kurian basilkurian at gmail.com
Thu Feb 3 10:03:23 UTC 2011


Hi Jeppe


Thanks a lot for your reply. It cleared all my doubts.

On 3 February 2011 13:13, Jeppe Toustrup <openindiana at tenzer.dk> wrote:

> 2011/2/3 Basil Kurian <basilkurian at gmail.com>:
> > [root at beastie /etc]# zpool create nas da0 da1
> > [root at beastie /etc]# zpool list
> > NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> > nas   23.9G  73.5K  23.9G     0%  ONLINE  -
> > [root at beastie /etc]# zpool add nas da2
> > [root at beastie /etc]# zpool list
> > NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> > nas   35.8G   134K  35.8G     0%  ONLINE  -
> >
> > *Then I stored one big file on /nas. After that, I tried to remove the
> > newly attached disk.*
> >
> > [root at beastie /etc]# du -sh /nas/huge_file
> > 464M    /nas/huge_file
> > [root at beastie ~]# zpool remove nas da2
> > cannot remove da2: only inactive hot spares or cache devices can be removed
> > [root at beastie ~]# zpool offline  nas da2
> > cannot offline da2: no valid replicas
> > [root at beastie ~]# zpool detach  nas da2
> > cannot detach da2: only applicable to mirror and replacing vdevs
> >
> > *Though the data stored in the pool is much less than the size of the
> > individual disks, I'm unable to remove any of the members from the
> > pool. How can I do that without losing data?*
>
> You can't, unless the device is part of a mirror. What you created is
> essentially a RAID 0 (striped) setup.
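>
> For what it's worth, here is a quick sketch with file-backed vdevs (the
> /diskA and /diskB paths are just placeholders) of why the mirror case is
> different:
>
>   mkfile 100m /diskA /diskB
>   zpool create demo mirror /diskA /diskB
>   zpool detach demo /diskB     # fine: /diskA still holds a full copy
>
> In your striped pool, each top-level vdev holds a slice of the data that
> exists nowhere else, so there is nothing to detach or remove it from.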
>
>
> > *I have one more doubt*
> >
> > [root at beastie ~]# zpool create nas mirror ad4 ad6 mirror da0 da1
> > [root at beastie ~]# zpool status
> >  pool: nas
> >  state: ONLINE
> >  scrub: none requested
> > config:
> >
> >    NAME        STATE     READ WRITE CKSUM
> >    nas         ONLINE       0     0     0
> >      mirror    ONLINE       0     0     0
> >        ad4     ONLINE       0     0     0
> >        ad6     ONLINE       0     0     0
> >      mirror    ONLINE       0     0     0
> >        da0     ONLINE       0     0     0
> >        da1     ONLINE       0     0     0
> >
> > [root at beastie ~]# zpool detach nas da0
> > [root at beastie ~]# zpool status
> >  pool: nas
> >  state: ONLINE
> >  scrub: none requested
> > config:
> >
> >    NAME        STATE     READ WRITE CKSUM
> >    nas         ONLINE       0     0     0
> >      mirror    ONLINE       0     0     0
> >        ad4     ONLINE       0     0     0
> >        ad6     ONLINE       0     0     0
> >      da1       ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> > [root at beastie ~]# zpool attach nas da0
> > missing <new_device> specification
> > [root at beastie ~]# zpool attach nas da0 da1
> > invalid vdev specification
> > use '-f' to override the following errors:
> > /dev/da1 is part of active pool 'nas'
> >
> >
> > *How can I reattach it to the pool ?*
>
> Each drive/partition that has been in a ZFS pool gets its last pool
> name, pool GUID, etc. written to it, and this label is checked when
> you want to use the drive again. The warning you get is there to make
> sure you don't overwrite data on the wrong drive.
> When you are sure you are pointing at the correct drive, simply add
> the '-f' option, as the message tells you to, and the drive will be
> added to the pool.
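>
> In your case that would be something like (device names taken from your
> listing; I have not tested this exact sequence):
>
>   zpool attach -f nas da1 da0
>
> Note that 'zpool attach' takes the device that is already in the pool
> (da1) first and the device being (re)attached (da0) second, which is
> also why the earlier attempt complained about da1 being in use.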
>
>
> > *Finally, one more doubt*
> > [root at beastie ~]# zpool create nas mirror ad4 ad6 mirror da0 da1
> >
> > *Can we do this in two steps? Something like:*
> >
> > [root at beastie ~]# zpool create nas1 mirror ad4 ad6
> > [root at beastie ~]# zpool create nas2 mirror da0 da1
> > [root at beastie ~]# zpool create nas nas1 nas 2
> > cannot open 'nas1': no such GEOM provider
> > must be a full path or shorthand device name
>
> Sure, but since a pool can't be used as a vdev inside another pool, you
> have to build it up with the 'add' command instead:
>
> root at Urraco:/# mkfile 100m disk1 disk2 disk3 disk4
> root at Urraco:/# zpool create testpool mirror /disk1 /disk2
> root at Urraco:/# zpool status testpool
>  pool: testpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>         testpool    ONLINE       0     0     0
>          mirror-0  ONLINE       0     0     0
>            /disk1  ONLINE       0     0     0
>            /disk2  ONLINE       0     0     0
>
> errors: No known data errors
> root at Urraco:/# zpool add testpool mirror /disk3 /disk4
> root at Urraco:/# zpool status testpool
>  pool: testpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>         testpool    ONLINE       0     0     0
>          mirror-0  ONLINE       0     0     0
>            /disk1  ONLINE       0     0     0
>            /disk2  ONLINE       0     0     0
>          mirror-1  ONLINE       0     0     0
>            /disk3  ONLINE       0     0     0
>            /disk4  ONLINE       0     0     0
>
> errors: No known data errors
>
>
> --
> Venlig hilsen / Kind regards
> Jeppe Toustrup (aka. Tenzer)
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>



-- 
Regards

Basil Kurian
<http://basilkurian.tk>

