[Users] ioacct weirdness?
Kir Kolyshkin
kir at openvz.org
Tue Jul 21 14:33:18 PDT 2015
Just one thing -- ioacct accounts I/O at the VFS level, not at the actual
disk I/O level. It also counts all reads and writes, including those to
tmpfs, /proc, /sys, etc.
Say you rewrite a file (a real file on disk) a thousand times in quick
succession: that does not mean the data will be written to disk a
thousand times -- more likely just once.
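A rough way to see this for yourself (just a sketch; $veid stands for a
container ID and /tmp/junk is an arbitrary placeholder file):

grep write /proc/bc/$veid/ioacct   # note the current write counter (bytes)
vzctl exec $veid 'for i in $(seq 1000); do dd if=/dev/zero of=/tmp/junk bs=1M count=1 conv=notrunc 2>/dev/null; done'
grep write /proc/bc/$veid/ioacct   # counter grew by roughly 1 GB

The write counter goes up by about a gigabyte, yet most of that data only
ever lived in the page cache and reached the disk a handful of times at
most.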
Maybe you want to look into iostat rather than ioacct?
https://openvz.org/IO_statistics
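If sysstat is installed on the hardware node, plain iostat there is also a
quick sanity check of what actually reaches the disks:

iostat -d -k 5   # per-device kB read/written, refreshed every 5 seconds

The page above covers the per-container I/O statistics in more detail.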
On 07/21/2015 12:39 PM, Mark Johanson wrote:
> So, as it was explained to me before, the per-VM ioacct file resets when the VM is restarted. That seemed to work and make sense until today: I have a VM that, according to that file, has made 403 GB of writes since it was restarted yesterday.
>
> (1 day) VE WWWW 403 GB in average daily writes and 403 GB in total since it was last rebooted
> (51 days) VE XXXX 9 GB in average daily writes and 487 GB in total since it was last rebooted
> (11 days) VE YYYY 5 GB in average daily writes and 60 GB in total since it was last rebooted
> (51 days) VE ZZZZ 2 GB in average daily writes and 132 GB in total since it was last rebooted
>
> *The day counts above are just from me checking on their uptimes.
>
> My script runs as:
>
> for veid in `vzlist -Hoveid`; do # run for all running VEs
> a=`vzctl exec $veid uptime | awk '{print $3}'` # grab uptime in days from the VE
> b=`grep write /proc/bc/$veid/ioacct | awk '{print $2}'` # grab write bytes from the VE's ioacct file
> c=`echo "$b/$a" | bc` # divide writes by days up
> d=`echo "$c/1073741824" | bc` # daily writes in GB
> e=`echo "$b/1073741824" | bc` # total writes since boot in GB
> echo "VE $veid $d GB in average daily writes and $e GB in total since it was last rebooted" # print results per VE
> done
>
> The ioacct file for this VM contains:
>
> read 121361661952
> write 433471070208
> dirty 2439431475200
> cancel 2005960404992
> missed 0
> syncs_total 5
> fsyncs_total 179645
> fdatasyncs_total 48312
> range_syncs_total 0
> syncs_active 0
> fsyncs_active 0
> fdatasyncs_active 0
> range_syncs_active 0
> io_pbs 0
> fuse_requests 0
> fuse_bytes 0
>
> So, based on the write number (I do understand that dirty is data not yet written and cancel is data not written out during a flush), it comes out to 403 GB of writes since its reboot on Mon Jul 20 08:46.
>
> Am I completely off on my math? It seemed to be working just fine until this VM spit out a 403 GB number. Am I missing something else?
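(For what it's worth, the arithmetic in the script checks out -- it is the
same division bc does:

echo "433471070208/1073741824" | bc   # -> 403, i.e. ~403 GB

so the script faithfully reports the ioacct write counter; per the reply
above, the question is what that counter actually measures.)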