<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Added 10G to each VM and rebooted, but
      got the same issue. I've included the same output you asked for
      last time in case something obvious stands out.<br>
      <br>
      <br>
      [root@ovz3 ~]# pstorage -c test_cluster stat<br>
      connected to MDS#2<br>
      Cluster 'test_cluster': healthy<br>
      Space: [OK] allocatable 56GB of 63GB, free 60GB of 63GB<br>
      MDS nodes: 3 of 3, epoch uptime: 2h 20m<br>
      CS nodes:  3 of 3 (3 avail, 0 inactive, 0 offline)<br>
      License: [Error] License not loaded, capacity limited to 100Gb<br>
      Replication:  1 norm,  1 limit<br>
      Chunks: [OK] 1 (100%) healthy,  0 (0%) standby,  0 (0%) degraded, 
      0 (0%) urgent,<br>
                   0 (0%) blocked,  0 (0%) pending,  0 (0%) offline,  0
      (0%) replicating,<br>
                   0 (0%) overcommitted,  0 (0%) deleting,  0 (0%) void<br>
      FS:  1MB in 4 files, 4 inodes,  1 file maps,  1 chunks,  1 chunk
      replicas<br>
      IO:       read     0B/s (  0ops/s), write     0B/s (  0ops/s)<br>
      IO total: read       0B (    0ops), write       0B (    0ops)<br>
      Repl IO:  read     0B/s, write:     0B/s<br>
      Sync rate:   0ops/s, datasync rate:   0ops/s<br>
      <br>
      MDSID STATUS   %CTIME   COMMITS   %CPU    MEM   UPTIME HOST <br>
          1 avail      2.0%       0/s   0.0%    18m   2h 20m
      ovz1.home.int:2510 <br>
      M   2 avail      2.4%       0/s   0.1%    18m   2h 20m
      ovz2.home.int:2510 <br>
          3 avail      3.8%       1/s   0.0%    18m   2h 20m
      ovz3.home.int:2510 <br>
      <br>
       CSID STATUS      SPACE   FREE REPLICAS IOWAIT IOLAT(ms) QDEPTH
      HOST <br>
       1025 active       21GB   19GB        0     0%       0/0    0.0
      ovz1.home.int <br>
       1026 active       21GB   19GB        0     0%       0/0    0.0
      ovz2.home.int <br>
       1027 active       21GB   20GB        1     0%       0/0    0.0
      ovz3.home.int <br>
      <br>
       CLID   LEASES     READ    WRITE     RD_OPS     WR_OPS     FSYNCS
      IOLAT(ms) HOST <br>
       2089      0/1     0B/s     0B/s     0ops/s     0ops/s    
      0ops/s       0/0 ovz1.home.int <br>
       2090      0/0     0B/s     0B/s     0ops/s     0ops/s    
      0ops/s       0/0 ovz2.home.int <br>
       2091      0/0     0B/s     0B/s     0ops/s     0ops/s    
      0ops/s       0/0 ovz3.home.int <br>
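      For scripting a quick health check against output like the stat dump
      above, the Space line can be parsed with standard tools. A minimal
      sketch, assuming the exact line format captured in this thread (this
      is not an official pstorage parsing interface):

```shell
# Parse allocatable/free GB out of a captured `pstorage -c <cluster> stat`
# Space line. The sample line below is copied from the output above.
line="Space: [OK] allocatable 56GB of 63GB, free 60GB of 63GB"
alloc=$(echo "$line" | sed -n 's/.*allocatable \([0-9]*\)GB.*/\1/p')
free=$(echo "$line" | sed -n 's/.*free \([0-9]*\)GB.*/\1/p')
echo "allocatable=${alloc}GB free=${free}GB"
```

      Feeding it the live line, e.g. `pstorage -c test_cluster stat | grep '^Space:'`,
      would let a cron job alert when allocatable space approaches zero.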
      <br>
      On 01/29/2014 01:45 PM, Kirill Korotaev wrote:<br>
    </div>
    <blockquote
      cite="mid:5E14BDDC-2C69-42B6-B972-BF00379175D3@parallels.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      Edward, got it - there is a small threshold (10GB) of minimum free
      space on CSes (reserved for various cases, including recovery),
      <div>so with only ~10GB per CS you hit this threshold immediately.</div>
      <div><br>
      </div>
      <div>Most likely you are running from inside VMs, right? Just
        increase the disk space available to each CS then.</div>
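      The arithmetic behind this can be sketched as follows; the ~10GB
      reserve figure comes from this thread, and the actual reservation
      logic inside pstorage may be more involved:

```shell
# Sketch: with a ~10GB per-CS reserve, the space a chunk server can
# actually allocate is roughly its free space minus the reserve.
reserve_gb=10
allocatable_gb() {
  local free_gb=$1
  echo $(( free_gb > reserve_gb ? free_gb - reserve_gb : 0 ))
}
allocatable_gb 10   # a CS with only ~10GB free: nothing allocatable
allocatable_gb 19   # after growing the VM disks: ~9GB usable per CS
```

      Under this model a CS with only ~10GB free has nothing allocatable,
      which is the immediate stall described above.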
      <div><br>
      </div>
      <div>
        <div><br>
          <div>
            <div>On 29 Jan 2014, at 21:04, Edward Konetzko &lt;<a
                moz-do-not-send="true" href="mailto:konetzed@gmail.com">konetzed@gmail.com</a>&gt;
              wrote:</div>
            <br class="Apple-interchange-newline">
            <blockquote type="cite">
              <div bgcolor="#FFFFFF" text="#000000">
                <div class="moz-cite-prefix">[konetzed@ovz2 ~]$ sudo
                  pstorage -c test_cluster stat<br>
                  connected to MDS#3<br>
                  Cluster 'test_cluster': healthy<br>
                  Space: [OK] allocatable 28GB of 35GB, free 31GB of
                  35GB<br>
                  MDS nodes: 3 of 3, epoch uptime: 10h 25m<br>
                  CS nodes:  3 of 3 (3 avail, 0 inactive, 0 offline)<br>
                  License: [Error] License not loaded, capacity limited
                  to 100Gb<br>
                  Replication:  1 norm,  1 limit<br>
                  Chunks: [OK] 1 (100%) healthy,  0 (0%) standby,  0
                  (0%) degraded,  0 (0%) urgent,<br>
                               0 (0%) blocked,  0 (0%) pending,  0 (0%)
                  offline,  0 (0%) replicating,<br>
                               0 (0%) overcommitted,  0 (0%) deleting, 
                  0 (0%) void<br>
                  FS:  10KB in 2 files, 2 inodes,  1 file maps,  1
                  chunks,  1 chunk replicas<br>
                  IO:       read     0B/s (  0ops/s), write     0B/s ( 
                  0ops/s)<br>
                  IO total: read       0B (    0ops), write       0B
                  (    0ops)<br>
                  Repl IO:  read     0B/s, write:     0B/s<br>
                  Sync rate:   0ops/s, datasync rate:   0ops/s<br>
                  <br>
                  MDSID STATUS   %CTIME   COMMITS   %CPU    MEM   UPTIME
                  HOST <br>
                      1 avail      3.1%       1/s   0.1%    14m   9h 58m
                  ovz1.home.int:2510 <br>
                      2 avail      2.5%       0/s   0.0%    14m   9h 14m
                  ovz2.home.int:2510 <br>
                  M   3 avail      3.0%       1/s   0.3%    15m  10h 25m
                  ovz3.home.int:2510 <br>
                  <br>
                   CSID STATUS      SPACE   FREE REPLICAS IOWAIT
                  IOLAT(ms) QDEPTH HOST <br>
                   1025 active       11GB   10GB        0     0%      
                  0/0    0.0 ovz1.home.int <br>
                   1026 active       11GB   10GB        0     0%      
                  0/0    0.0 ovz2.home.int <br>
                   1027 active       11GB   10GB        1     0%      
                  0/0    0.0 ovz3.home.int <br>
                  <br>
                   CLID   LEASES     READ    WRITE     RD_OPS    
                  WR_OPS     FSYNCS IOLAT(ms) HOST <br>
                   2060      0/0     0B/s     0B/s     0ops/s    
                  0ops/s     0ops/s       0/0 ovz3.home.int
                  <br>
                   2065      0/1     0B/s     0B/s     0ops/s    
                  0ops/s     0ops/s       0/0 ovz1.home.int
                  <br>
                  <br>
                  I do have Skype, but I have meetings all day for work
                  and can't be on a computer after.  I may have time
                  tomorrow if that would work.  I am in the Central time
                  zone.<br>
                  <br>
                  Edward<br>
                  <br>
                  <br>
                  On 01/29/2014 03:14 AM, Kirill Korotaev wrote:<br>
                </div>
                <blockquote
                  cite="mid:7782404C-F66E-48F4-9348-B9A2C39893C4@parallels.com"
                  type="cite">
                  Edward,
                  <div><br>
                  </div>
                  <div>can you send me in a private email the output of:</div>
                  <div># pstorage -c &lt;cluster&gt; stat</div>
                  <div><br>
                  </div>
                  <div>Do you have Skype?</div>
                  <div><br>
                  </div>
                  <div>Thanks,</div>
                  <div>Kirill</div>
                  <div><br>
                    <div><br>
                    </div>
                    <div><br>
                      <div>
                        <div>On 29 Jan 2014, at 10:26, Edward Konetzko
                          &lt;<a moz-do-not-send="true"
                            href="mailto:konetzed@gmail.com">konetzed@gmail.com</a>&gt;
                          wrote:</div>
                        <br class="Apple-interchange-newline">
                        <blockquote type="cite">
                          <div bgcolor="#FFFFFF" text="#000000">
                            <div class="moz-cite-prefix">On 01/28/2014
                              09:51 AM, Kir Kolyshkin wrote:<br>
                            </div>
                            <blockquote
cite="mid:CAGmPdrydKUc+AgQh=A1-tPeDEdb7yVO0uEA6=Z23A+OW9QK14w@mail.gmail.com"
                              type="cite">
                              <div dir="ltr">
                                <div class="gmail_extra">
                                  <div class="gmail_quote">On 28 January
                                    2014 02:55, Kirill Korotaev <span
                                      dir="ltr">
                                      &lt;<a moz-do-not-send="true"
                                        href="mailto:dev@parallels.com"
                                        target="_blank">dev@parallels.com</a>&gt;</span>
                                    wrote:<br>
                                    <blockquote class="gmail_quote"
                                      style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
                                      <div class="im">&gt;&gt; On 25 Jan
                                        2014, at 07:38, Rene C. <a
                                          moz-do-not-send="true"
                                          href="mailto:openvz@dokbua.com">
                                          openvz@dokbua.com</a> wrote:<br>
                                        &gt;&gt;<br>
                                        &gt;<br>
                                        &gt; Hi,<br>
                                        &gt;<br>
                                        &gt; I read the website about
                                        the cloud storage and found
                                        some terms that seem familiar
                                        to me.<br>
                                        &gt;<br>
                                        &gt; May I ask which filesystem
                                        you use to be able to
                                        regularly scrub and self-heal
                                        the filesystem?<br>
                                        &gt;<br>
                                        &gt; Personally, I have used
                                        zfsonlinux in production for a
                                        long time now and am very
                                        satisfied with it. Based on
                                        your description, it seems you
                                        use something like that, with
                                        something on top of the native
                                        filesystem, to get cloud
                                        storage.<br>
                                        &gt;<br>
                                        &gt; Or do you use Ceph or a
                                        similar "filesystem", which has
                                        comparable capabilities with
                                        cloud features?<br>
                                        <br>
                                      </div>
                                      It’s more like Ceph. Data is
                                      stored in a distributed way, so
                                      unlike ZFS you have access to
                                      the data even in case of node
                                      failure (crash, CPU/memory fault,
                                      etc.), and access is available
                                      from ANY cluster node.<br>
                                      As such, we store the data and
                                      maintain checksums on every node
                                      and can do periodic scrubbing of
                                      the data.<br>
                                    </blockquote>
                                    <div><br>
                                    </div>
                                    <div>Just to clarify -- this is
                                      Parallels' own distributed/cloud
                                      filesystem, not Ceph or GlusterFS,</div>
                                    <div>but similar to them. For more info,
                                      check the links at <a
                                        moz-do-not-send="true"
                                        href="https://openvz.org/Parallels_Cloud_Storage#External_links">https://openvz.org/Parallels_Cloud_Storage#External_links</a></div>
                                  </div>
                                  <br>
                                </div>
                              </div>
                              <br>
                              <fieldset class="mimeAttachmentHeader"></fieldset>
                              <br>
                              <pre wrap="">_______________________________________________
Users mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:Users@openvz.org">Users@openvz.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="https://lists.openvz.org/mailman/listinfo/users">https://lists.openvz.org/mailman/listinfo/users</a>
</pre>
                            </blockquote>
                            Set up a cluster using CentOS 6.5 64-bit,
                            a fresh install in KVM instances.  I wanted
                            to test functionality, not actual speed.
                            <br>
                            <br>
                            All software was the latest as of last night,
                            and I followed the quick how-to here <a
                              moz-do-not-send="true"
                              class="moz-txt-link-freetext"
                              href="https://openvz.org/Parallels_Cloud_Storage">
                              https://openvz.org/Parallels_Cloud_Storage</a><br>
                            <br>
                            Everything works great until I try to create
                            an instance using the command "vzctl create
                            101 --layout ploop --ostemplate
                            centos-6-x86_64 --private
                            /pcs/containers/101" from the docs.<br>
                            <br>
                            About one MB of data is written to disk,
                            and then it just hangs.  The following is
                            the output from dmesg:<br>
                            <br>
                            [  360.414242] INFO: task vzctl:1646 blocked
                            for more than 120 seconds.<br>
                            [  360.414770] "echo 0 &gt;
                            /proc/sys/kernel/hung_task_timeout_secs"
                            disables this message.<br>
                            [  360.415406] vzctl         D
                            ffff88007e444500     0  1646   1611    0
                            0x00000084<br>
                            [  360.415418]  ffff88007ea59a68
                            0000000000000086 ffff8800ffffffff
                            000006b62934b8c0<br>
                            [  360.415428]  0000000000000000
                            ffff88007e9f2ad0 0000000000005eaa
                            ffffffffad17694d<br>
                            [  360.415437]  000000000ad7ef74
                            ffffffff81a97b40 ffff88007e444ac8
                            000000000001eb80<br>
                            [  360.415452] Call Trace:<br>
                            [  360.415492]  [&lt;ffffffff81517353&gt;]
                            io_schedule+0x73/0xc0<br>
                            [  360.415516]  [&lt;ffffffff811f39b3&gt;]
                            wait_on_sync_kiocb+0x53/0x80<br>
                            [  360.415537]  [&lt;ffffffffa04dbf47&gt;]
                            fuse_direct_IO+0x167/0x230 [fuse]<br>
                            [  360.415558]  [&lt;ffffffff8112e948&gt;]
                            mapping_direct_IO+0x48/0x70<br>
                            [  360.415567]  [&lt;ffffffff811301a6&gt;]
                            generic_file_direct_write_iter+0xf6/0x170<br>
                            [  360.415576]  [&lt;ffffffff81130c8e&gt;]
                            __generic_file_write_iter+0x32e/0x420<br>
                            [  360.415585]  [&lt;ffffffff81130e05&gt;]
                            __generic_file_aio_write+0x85/0xa0<br>
                            [  360.415594]  [&lt;ffffffff81130ea8&gt;]
                            generic_file_aio_write+0x88/0x100<br>
                            [  360.415605]  [&lt;ffffffffa04da085&gt;]
                            fuse_file_aio_write+0x185/0x430 [fuse]<br>
                            [  360.415623]  [&lt;ffffffff811a530a&gt;]
                            do_sync_write+0xfa/0x140<br>
                            [  360.415641]  [&lt;ffffffff8109d930&gt;] ?
                            autoremove_wake_function+0x0/0x40<br>
                            [  360.415655]  [&lt;ffffffff812902da&gt;] ?
                            strncpy_from_user+0x4a/0x90<br>
                            [  360.415664]  [&lt;ffffffff811a55e8&gt;]
                            vfs_write+0xb8/0x1a0<br>
                            [  360.415671]  [&lt;ffffffff811a5ee1&gt;]
                            sys_write+0x51/0x90<br>
                            [  360.415681]  [&lt;ffffffff8100b102&gt;]
                            system_call_fastpath+0x16/0x1b<br>
                            <br>
                            Even just trying to create a 10k file with
                            dd causes a task to hang.  "dd if=/dev/zero
                            of=/pcs/test.junk bs=1k count=10"<br>
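                            Since a hung FUSE write blocks dd
                            indefinitely, one way to keep the shell
                            usable is to bound the probe with coreutils
                            timeout. A sketch; TARGET defaults to a temp
                            directory so it runs anywhere, and pointing
                            it at /pcs reproduces the test from this
                            thread:

```shell
# Bounded write probe: give up after 30 seconds instead of waiting forever.
# Set TARGET=/pcs to exercise the pstorage mount instead of a temp dir.
TARGET="${TARGET:-$(mktemp -d)}"
if timeout 30 dd if=/dev/zero of="$TARGET/test.junk" bs=1k count=10 2>/dev/null; then
  echo "write completed"
else
  echo "write hung or failed"
fi
```

                            Note the caveat: a task stuck in
                            uninterruptible sleep (D state, as in the
                            trace above) ignores signals, so the timeout
                            helps in milder cases but cannot reap a
                            truly hung writer.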
                            <br>
                            <br>
                            Any ideas? Is there any more info you would
                            like for debugging?  <br>
                          </div>
_______________________________________________<br>
                          Users mailing list<br>
                          <a moz-do-not-send="true"
                            href="mailto:Users@openvz.org">Users@openvz.org</a><br>
                          <a moz-do-not-send="true"
                            class="moz-txt-link-freetext"
                            href="https://lists.openvz.org/mailman/listinfo/users">https://lists.openvz.org/mailman/listinfo/users</a><br>
                        </blockquote>
                      </div>
                      <br>
                    </div>
                  </div>
                </blockquote>
                <br>
              </div>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>