[Users] ioacct weirdness?

Mark Johanson mjohanson at a2hosting.com
Wed Jul 22 13:00:47 PDT 2015


So I was taking a look at the iostat file for the below-mentioned VM and I have a question. The documentation page lists 10 columns, but the iostat file has 12. Are there two undocumented ones?

root [3.80 ] ~ #cat /proc/bc/XXXX/iostat
flush XXXX . 0 0 0 0 0 1717 13736 0 0
fuse XXXX . 0 0 0 0 0 0 0 0 0
sde XXXX . 0 0 0 79275 7006242 875699 29092504 875699 0
sdc XXXX . 0 0 0 81651 7128779 856366 29185344 856366 0
sda XXXX . 0 0 0 80181 7106555 867407 28976160 867407 0
sdd XXXX . 0 0 0 57702 5168485 460601 25953608 460601 0
sdb XXXX . 0 0 0 52981 5206820 454257 24961176 454257 0
sdf XXXX . 0 0 0 1883677 4924582 418193 23170296 418193 0
1   2    3 4 5 6 7       8       9      10       11     12
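
For what it's worth, a quick sanity check on the field count per row is just plain awk (nothing OpenVZ-specific; XXXX is the container ID as above):

# Print the device name and the number of whitespace-separated fields
# on each line of the per-container iostat file.
awk '{print $1, NF}' /proc/bc/XXXX/iostat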





On 07/21, Kir Kolyshkin wrote:
> Just one thing -- ioacct accounts at the VFS level, not at the actual
> disk I/O level.
> Also, ioacct accounts all reads and writes, including tmpfs, /proc,
> /sys, etc.
> 
> Say, if you rewrite a file (a real file on disk) a thousand times real
> quick, it does not mean the data will be written to disk a thousand
> times -- more likely just once.
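> 
> A minimal way to see this from inside a container (just a sketch; it
> assumes bash and dd, and that "testfile" lives on a real on-disk
> filesystem, not tmpfs):
> 
> # Rewrite the same 1 MB a thousand times. The "write" counter in ioacct
> # grows by roughly 1000 MB, while the disk only has to store about 1 MB
> # of it once the page cache gets flushed.
> for i in `seq 1 1000`; do
>     dd if=/dev/zero of=testfile bs=1M count=1 conv=notrunc 2>/dev/null
> done
> 
> Compare the container's ioacct "write" value before and after the loop.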
> 
> Maybe you want to look into iostat rather than ioacct?
> 
> https://openvz.org/IO_statistics
> 
> 
> On 07/21/2015 12:39 PM, Mark Johanson wrote:
> >So as it was explained to me before, the VM-specific ioacct file resets when the VM is restarted. That seemed to work and make sense until today: I have a VM that, according to that file, has made 403 GB of writes since it was restarted yesterday.
> >
> >(1 day) VE WWWW 403 GB in average daily writes and 403 GB in total since it was last rebooted
> >(51 days) VE XXXX 9 GB in average daily writes and 487 GB in total since it was last rebooted
> >(11 days) VE YYYY 5 GB in average daily writes and 60 GB in total since it was last rebooted
> >(51 days) VE ZZZZ 2 GB in average daily writes and 132 GB in total since it was last rebooted
> >
> >*The day counts above are from me just checking each VM's uptime.
> >
> >My Script runs as:
> >
> >for veid in `vzlist -Hoveid`; do #run in all running vms
> >   a=`vzctl exec $veid uptime | awk '{print $3}'` #grab uptime from vm
> >   b=`grep write /proc/bc/$veid/ioacct | awk '{print $2}'` #grab write information from ioacct file for vm
> >   c=`echo "$b/$a" | bc` #divide writes by days up
> >   d=`echo "$c/1073741824" | bc` #daily writes in GB
> >   e=`echo "$b/1073741824" | bc` #total writes since last reboot in GB
> >   echo "VE $veid $d GB in average daily writes and $e GB in total since it was last rebooted" #spit out results per vm
> >done
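> >
> >One caveat in the uptime line above: the third field of the uptime output is only a day count once the container has been up for at least a full day; for shorter uptimes it is an hours:minutes (or minutes) value and the bc division goes wrong. A more robust sketch (assuming the container's /proc/uptime is virtualized to report seconds since the container started) would be:
> >
> >   a=`vzctl exec $veid cat /proc/uptime | awk '{d=int($1/86400); if (d < 1) d = 1; print d}'` #whole days up, minimum 1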
> >
> >The ioacct file for this vm contains:
> >
> >           read                          121361661952
> >           write                         433471070208
> >           dirty                        2439431475200
> >           cancel                       2005960404992
> >           missed                                   0
> >           syncs_total                              5
> >           fsyncs_total                        179645
> >           fdatasyncs_total                     48312
> >           range_syncs_total                        0
> >           syncs_active                             0
> >           fsyncs_active                            0
> >           fdatasyncs_active                        0
> >           range_syncs_active                       0
> >           io_pbs                                   0
> >           fuse_requests                            0
> >           fuse_bytes                               0
> >   So based on the write numbers (I do understand that dirty is unwritten and cancel is data that was not written during a flush), it comes out to 403 GB of writes since its reboot on Mon Jul 20 08:46.
> >   Am I completely off on my math? It seemed to be working just fine until this VM spit out a 403 GB number. Am I missing something else?
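> >
> >   For reference, the conversion itself is just integer division by 2^30:
> >
> >   echo "433471070208/1073741824" | bc   #prints 403, i.e. ~403 GB
> >
> >   so the arithmetic checks out; the question is what that "write" counter is actually measuring.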

