[OpenIndiana-discuss] ashift 13?
James
lista at xdrv.co.uk
Thu Apr 9 14:07:25 UTC 2015
On 09/04/2015 01:12, jason matthews wrote:
> root@dbb005:/root# zfs list -r ashift9 ashift12 | grep postgres
> ashift12/128k/lzjb/postgres 403G 40.7G 403G
> ashift12/8k/lzjb/postgres 403G 40.7G 403G
How did you create the copies? The recordsize of the data is preserved
across send and receive, which results in the size being the same if the
ashift is the same.
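(A compressed sketch of that point, using made-up pool/dataset names rather
than the ones in the test below: receive recreates the source's existing
blocks, while rewriting the files through the filesystem uses the
destination's recordsize.)

# hypothetical names, for illustration only
zfs create -o recordsize=8k tank/src              # new writes here get 8K blocks
zfs create tank/dst                               # default 128K recordsize
zfs snapshot tank/src@s
zfs send tank/src@s | zfs receive tank/dst/copy   # copy keeps the 8K blocks
cp /tank/src/file /tank/dst/copy/file.new         # rewrite gets 128K blocks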
Test:
zfs create -o mountpoint=/junk rpool/junk
zfs set compress=on rpool/junk
zfs create rpool/junk/128k
zfs create rpool/junk/8k
zfs set recordsize=8k rpool/junk/8k
dd if=/bin/zsh of=/junk/128k/data bs=1024 count=1024
dd if=/bin/zsh of=/junk/8k/data bs=1024 count=1024
zfs snapshot -r rpool/junk@mark
zfs send rpool/junk/128k@mark | zfs receive rpool/junk/copy128k
zfs send rpool/junk/8k@mark | zfs receive rpool/junk/copy8k
zfs destroy -r rpool/junk@mark
zfs get -r recordsize rpool/junk
sync
zfs list -r -o name,used,compressratio rpool/junk
zdb -v rpool/junk/128k
zdb -v rpool/junk/8k
zdb -v rpool/junk/copy128k
zdb -v rpool/junk/copy8k
rm /junk/copy8k/data
rsync /junk/8k/data /junk/copy8k/data
sync
zfs list -r -o name,used,compressratio rpool/junk
zdb -v rpool/junk/copy8k
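(Side note, not part of the original test: zdb will also dump a single
object if you give it the object number, which is a quick way to read the
dblk of just the data file. Object 8 is the file in the first zdb dumps
below; it becomes object 9 in copy8k after the rm/rsync recreates it.)

zdb -dddd rpool/junk/8k 8        # dblk column shows the file's 8K blocks
zdb -dddd rpool/junk/copy8k 9    # rewritten copy: dblk is now 128K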
Results:
landeck:/# zfs get -r recordsize rpool/junk
NAME PROPERTY VALUE SOURCE
rpool/junk recordsize 128K default
rpool/junk/128k recordsize 128K default
rpool/junk/8k recordsize 8K local
rpool/junk/copy128k recordsize 128K default
rpool/junk/copy8k recordsize 128K default
landeck:/# zfs list -r -o name,used,compressratio rpool/junk
NAME USED RATIO
rpool/junk 4.02M 1.43x
rpool/junk/128k 856K 1.47x
rpool/junk/8k 1.09M 1.40x
rpool/junk/copy128k 856K 1.47x
rpool/junk/copy8k 1.09M 1.40x
landeck:/# zdb -v rpool/junk/128k
Dataset rpool/junk/128k [ZPL], ID 442, cr_txg 28276, 856K, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 56.0K 16K 25.00 DMU dnode
-1 1 16K 512 8K 512 100.00 ZFS user/group used
-2 1 16K 512 8K 512 100.00 ZFS user/group used
1 1 16K 1K 8K 1K 100.00 ZFS master node
2 1 16K 512 8K 512 100.00 SA master node
3 1 16K 512 8K 512 100.00 ZFS delete queue
4 1 16K 512 8K 512 100.00 ZFS directory
5 1 16K 1.50K 8K 1.50K 100.00 SA attr registration
6 1 16K 16K 16K 32K 100.00 SA attr layouts
7 1 16K 512 8K 512 100.00 ZFS directory
8 2 16K 128K 712K 1M 100.00 ZFS plain file
landeck:/# zdb -v rpool/junk/8k
Dataset rpool/junk/8k [ZPL], ID 448, cr_txg 28278, 1.09M, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 56.0K 16K 25.00 DMU dnode
-1 1 16K 512 8K 512 100.00 ZFS user/group used
-2 1 16K 512 8K 512 100.00 ZFS user/group used
1 1 16K 1K 8K 1K 100.00 ZFS master node
2 1 16K 512 8K 512 100.00 SA master node
3 1 16K 512 8K 512 100.00 ZFS delete queue
4 1 16K 512 8K 512 100.00 ZFS directory
5 1 16K 1.50K 8K 1.50K 100.00 SA attr registration
6 1 16K 16K 16K 32K 100.00 SA attr layouts
7 1 16K 512 8K 512 100.00 ZFS directory
8 2 16K 8K 972K 1M 100.00 ZFS plain file
landeck:/# zdb -v rpool/junk/copy128k
Dataset rpool/junk/copy128k [ZPL], ID 466, cr_txg 28282, 856K, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 56.0K 16K 25.00 DMU dnode
-1 1 16K 512 8K 512 100.00 ZFS user/group used
-2 1 16K 512 8K 512 100.00 ZFS user/group used
1 1 16K 1K 8K 1K 100.00 ZFS master node
2 1 16K 512 8K 512 100.00 SA master node
3 1 16K 512 8K 512 100.00 ZFS delete queue
4 1 16K 512 8K 512 100.00 ZFS directory
5 1 16K 1.50K 8K 1.50K 100.00 SA attr registration
6 1 16K 16K 16K 32K 100.00 SA attr layouts
7 1 16K 512 8K 512 100.00 ZFS directory
8 2 16K 128K 712K 1M 100.00 ZFS plain file
landeck:/# zdb -v rpool/junk/copy8k
Dataset rpool/junk/copy8k [ZPL], ID 476, cr_txg 28287, 1.09M, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 56.0K 16K 25.00 DMU dnode
-1 1 16K 512 8K 512 100.00 ZFS user/group used
-2 1 16K 512 8K 512 100.00 ZFS user/group used
1 1 16K 1K 8K 1K 100.00 ZFS master node
2 1 16K 512 8K 512 100.00 SA master node
3 1 16K 512 8K 512 100.00 ZFS delete queue
4 1 16K 512 8K 512 100.00 ZFS directory
5 1 16K 1.50K 8K 1.50K 100.00 SA attr registration
6 1 16K 16K 16K 32K 100.00 SA attr layouts
7 1 16K 512 8K 512 100.00 ZFS directory
8 2 16K 8K 972K 1M 100.00 ZFS plain file
landeck:/# rm /junk/copy8k/data
landeck:/# rsync /junk/8k/data /junk/copy8k/data
landeck:/# zfs list -r -o name,used,compressratio rpool/junk
NAME USED RATIO
rpool/junk 3.77M 1.45x
rpool/junk/128k 856K 1.47x
rpool/junk/8k 1.09M 1.40x
rpool/junk/copy128k 856K 1.47x
rpool/junk/copy8k 856K 1.47x
landeck:/# zdb -v rpool/junk/copy8k
Dataset rpool/junk/copy8k [ZPL], ID 476, cr_txg 28287, 856K, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 56.0K 16K 25.00 DMU dnode
-1 1 16K 512 8K 512 100.00 ZFS user/group used
-2 1 16K 512 8K 512 100.00 ZFS user/group used
1 1 16K 1K 8K 1K 100.00 ZFS master node
2 1 16K 512 8K 512 100.00 SA master node
3 1 16K 512 8K 512 100.00 ZFS delete queue
4 1 16K 512 8K 512 100.00 ZFS directory
5 1 16K 1.50K 8K 1.50K 100.00 SA attr registration
6 1 16K 16K 16K 32K 100.00 SA attr layouts
7 1 16K 512 8K 512 100.00 ZFS directory
9 2 16K 128K 712K 1M 100.00 ZFS plain file
This is a distraction; the real test is to note that 8k records on 4k
blocks have inefficient compression compared with 128k records on 4k
blocks or 8k records on 512-byte blocks. I believe your source is 8k in
all tests, and that block size is preserved by send/receive.
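To put rough numbers on that (a simplified model, not measured from the
pool: each compressed record is allocated in whole 2^ashift-byte sectors,
and the ~1.40x ratio is borrowed from the zfs list output above):

for ashift in 9 12; do
  sector=$((1 << ashift))
  for recordsize in 8192 131072; do
    psize=$((recordsize * 100 / 140))                    # assume ~1.40x compression
    alloc=$(( (psize + sector - 1) / sector * sector ))  # round up to whole sectors
    echo "ashift=$ashift recordsize=$recordsize allocated=$alloc"
  done
done

With ashift=12 an 8K record that compresses to ~5.7K still allocates a
full 8K, so the savings vanish, while a 128K record or an ashift=9 vdev
keeps most of them.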
James.