[OpenIndiana-discuss] improving ZFS latency with hard disks
dave at loud-mouth.net
Mon May 17 18:33:44 UTC 2021
Replies below
On 2021-05-16 00:08, Toomas Soome via openindiana-discuss wrote:
>> On 16. May 2021, at 06:54, dave at loud-mouth.net wrote:
>>
>> Let me know if ZFS questions should be asked in another group.
>>
>> Problem: A documentation application takes several minutes to open
>> documents containing large numbers of links to images. Apparently, the
>> application is making many separate file attribute queries, so the
>> separate disk-rotation latencies add up.
>>
>> The obvious solution is to move the zfs pool over to SSDs, but since
>> the entire documentation collection is less than 10GB, I was wondering
>> if there was a way to address the problem with RAM. As far as I can
>> tell from the documentation, the ARC and L2ARC are based on recently
>> used data, so it doesn't sound like a larger ARC would help with the
>> attribute lookups for all of a document's linked files the first time
>> the document is opened.
>>
>> Is there a tuning parameter that would keep more of these related
>> files in the cache, or could I even use some sort of RAM disk for the
>> files in the directory of concern?
>>
>
>
> How much RAM do you have, ...?
The machine has 16GB of RAM, but I can increase this if it will help
speed things up.
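In case it is useful, the current ARC size and its target maximum can be
read from the same arcstats kstat the script below uses (values are in
bytes):

blade% kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max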
> ...what is arcstat reporting while accessing those files?
arcstat: I'm guessing you are interested in ARC hits here, so I ran the
archits.sh script (listing below). I started loading a file from the
application at about the line starting with 79 hits:
blade% ./archits.sh
        HITS       MISSES   HITRATE
    24622916        32173    99.87%
           4            0   100.00%
          79            0   100.00%
         270            0   100.00%
         224            0   100.00%
         270            0   100.00%
         265            0   100.00%
         294            0   100.00%
         269            0   100.00%
         273            0   100.00%
         267            0   100.00%
         272            0   100.00%
         265            0   100.00%
         295            0   100.00%
         266            0   100.00%
         229            0   100.00%
         267            0   100.00%
         271            0   100.00%
         268            0   100.00%
         297            0   100.00%
         234            1    99.57%
         277            0   100.00%
         270            0   100.00%
         272            0   100.00%
         268            0   100.00%
         266            0   100.00%
         270            0   100.00%
         272            0   100.00%
         265            0   100.00%
         271            0   100.00%
         268            0   100.00%
         297            0   100.00%
         228            0   100.00%
         264            0   100.00%
         267            0   100.00%
         274            0   100.00%
         274            0   100.00%
         293            0   100.00%
         232            0   100.00%
         271            0   100.00%
         301            0   100.00%
         278            0   100.00%
         268            0   100.00%
         311            0   100.00%
         269            0   100.00%
         233            0   100.00%
         313            0   100.00%
         220            0   100.00%
         268            0   100.00%
         217            0   100.00%
           0            0     0.00%
           4            0   100.00%
blade% cat archits.sh
#!/usr/bin/sh
# Print the ARC hit/miss deltas and the hit rate once per interval.
interval=${1:-5}    # 5 secs by default
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses $interval | awk '
BEGIN {
        printf "%12s %12s %9s\n", "HITS", "MISSES", "HITRATE"
}
/hits/ {
        hits = $2 - hitslast        # delta since the last sample
        hitslast = $2
}
/misses/ {
        misses = $2 - misslast      # delta since the last sample
        misslast = $2
        rate = 0
        total = hits + misses
        if (total)
                rate = (hits * 100) / total
        printf "%12d %12d %8.2f%%\n", hits, misses, rate
}
'
Does this actually mean that everything but one request was served from
the ARC for the entire three-minute document load, and that disk latency
has nothing to do with my problem?
If I am reading this correctly, I guess I need to look elsewhere for the
problem. Maybe the context switches from all of the checking the
application is doing?
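If it is the per-file attribute lookups, I suppose I could count the
syscalls the application makes while a document loads. A rough sketch
(the process name docapp is hypothetical):

blade% truss -c -p `pgrep docapp`

Interrupting truss with Ctrl-C after the load finishes prints the
per-syscall counts and times. The same idea with DTrace, counting just
the stat family of calls:

blade% dtrace -n 'syscall::*stat*:entry /pid == $target/ { @[probefunc] = count(); }' -p `pgrep docapp`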
> Are files accessed locally or via NFS/SMB (where is the actual
> slowness appearing)?
Files are local
>
> The first open from slow media always depends on media speed; however,
> the experience also depends on block sizes and fragmentation - does
> read-ahead have a chance to help, etc.
>
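For what it's worth, the block size and pool fragmentation are easy to
check on my side; a quick sketch, with the pool and dataset names being
hypothetical:

blade% zfs get recordsize tank/docs
blade% zpool list -o name,size,fragmentation,capacity tank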
> With persistent L2ARC, the less actively used data is on L2 (active
> data is in ARC), but the L2 won't be empty after a reboot. However,
> pointers to L2 data are also kept in ARC, so in a low-RAM situation
> this may complicate things even more…
>
> ramdisk versus ssd - I’d go for ssd there.
>
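If I do end up going the SSD route, I assume adding it as an L2ARC cache
device would look something like this (pool and device names
hypothetical):

blade% zpool add tank cache c2t1d0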
> rgds,
> toomas