<html><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">SSDSC2CW240A3 == Intel 520. It is not server grade either, and though it behaves much better than desktop SSDs, we still saw it losing commits on power failure. So beware that your database can become corrupted or lose transactions.<div>All server-grade SSDs from Intel list the "Enhanced power-loss data protection" feature in their specs, which implies capacitors on the board for saving data to NVRAM. These are the 320, 710 and S3700 models.</div><div>The Intel S3700 is the best and fastest we have ever seen among different models, including non-Intel ones, and it is what Parallels recommends in Parallels Cloud Storage.</div><div><br></div><div><br><div><br><div><div>On Aug 29, 2013, at 13:20 , spameden <<a href="mailto:spameden@gmail.com">spameden@gmail.com</a>></div><div> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/8/29 Kirill Korotaev <span dir="ltr"><<a href="mailto:dev@parallels.com" target="_blank">dev@parallels.com</a>></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div style="word-wrap:break-word">I also want to add that the SSD models referred to in the bug (like the OCZ one) are not server grade, and you risk losing your data or corrupting your file system on power failure.<div>
You should test it heavily.<br></div></div></blockquote><div><br></div><div>Thanks for that.<br><br>But we are not using OCZ (we also know they are not reliable).<br><br></div><div>The SSD in this server is an INTEL SSDSC2CW240A3.<br>
<br></div><div>I can't try the latest Red Hat kernel on this system, because after converting it to a .deb it does not seem to work.<br><br></div><div>But I believe the fix should be in 2.6.32-358.18.1.el6.centos.plus.x86_64.<br>
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><br><div><div><div class="h5"><div>On Aug 29, 2013, at 03:52 , Kir Kolyshkin <<a href="mailto:kir@openvz.org" target="_blank">kir@openvz.org</a>> wrote:</div>
<br></div></div><blockquote type="cite"><div><div class="h5">
<div text="#000000" bgcolor="#FFFFFF">
<div>On 08/28/2013 06:34 AM, spameden wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">2013/8/28 Kir Kolyshkin <span dir="ltr"><<a href="mailto:kir@openvz.org" target="_blank">kir@openvz.org</a>></span><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>
<div>On 08/27/2013 08:20 AM, spameden wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>ArchLinux wiki says:<br>
<b>Warning: </b>Users need to be certain that
kernel version 2.6.33 or above is being used AND
that their SSD supports TRIM before attempting
to mount a partition with the <code style="display:inline-block;padding:0.1em 0.3em">discard</code> flag. Data loss can
occur otherwise!<br>
<br>
</div>
So I guess it's not in the OpenVZ kernel?<br>
<br>
I'd like to use TRIM because it drastically
increases SSD performance!<br>
</div>
</blockquote>
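The two preconditions in that warning can be checked from a shell before touching /etc/fstab. A minimal sketch, assuming a Linux box with GNU coreutils; the hdparm/lsblk commands in the comments assume the SSD is /dev/sda (adjust to your device):

```shell
# version_ge A B: succeeds when version string A >= B (compared with sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Precondition 1: kernel 2.6.33 or above.
if version_ge "$(uname -r | cut -d- -f1)" 2.6.33; then
    echo "kernel $(uname -r): new enough for discard"
else
    echo "kernel $(uname -r): too old, do NOT mount with discard" >&2
fi

# Precondition 2: the SSD itself must advertise TRIM. Check with either of:
#   hdparm -I /dev/sda | grep -i "TRIM supported"
#   lsblk --discard /dev/sda   # non-zero DISC-GRAN/DISC-MAX means TRIM works
```

Only when both checks pass is it safe to add <code style="display:inline-block;padding:0.1em 0.3em">discard</code> to the mount options.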
<br>
</div>
You'd better check it with Red Hat, looking into their
RHEL6 documentation.<br>
<br>
My quick googling for "rhel6 kernel ssd discard" shows
that the RHEL6 kernel<br>
does support TRIM; they have backported it (along with
tons of other stuff,<br>
so this is hardly a 2.6.32 kernel anymore).<br>
</div>
</blockquote>
<div><br>
</div>
<div>I've just tested via hdparm (of course it's not a perfect
tool for testing disk performance, but still); here is what
I get on the latest 2.6.32-042stab079.5:<br>
<br>
# hdparm -t /dev/mapper/vg-root<br>
/dev/mapper/vg-root:<br>
Timing buffered disk reads: 828 MB in 3.00 seconds =
275.56 MB/sec<br>
<br>
</div>
<div>on standard debian-7 kernel (3.2.0-4-amd64):<br>
# hdparm -t /dev/mapper/vg-root<br>
/dev/mapper/vg-root:<br>
Timing buffered disk reads: 1144 MB in 3.00 seconds =
381.15 MB/sec<br>
<br>
</div>
<div>and that's only a read-speed test.<br>
<br>
</div>
<div>I don't get why it differs so much.<br>
</div>
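One thing worth checking: hdparm -t measures sequential buffered reads through the page cache, so its numbers depend heavily on the kernel's readahead setting, which can default differently between a 2.6.32-based kernel and Debian's 3.2, especially on device-mapper volumes. A hedged sketch of what to compare (the device name is an assumption; run the commented commands as root on both kernels):

```shell
# blockdev reports readahead in 512-byte sectors; compare it on both kernels
# before trusting hdparm -t (commands assume /dev/mapper/vg-root):
#   blockdev --getra /dev/mapper/vg-root
#   blockdev --setra 8192 /dev/mapper/vg-root   # then re-run hdparm -t
# Sanity-check the unit conversion: the common default of 256 sectors is
readahead_sectors=256
echo "$((readahead_sectors * 512 / 1024)) KiB of readahead"
```

If the readahead values match and the gap persists, fio with --direct=1 bypasses the cache path and gives a much fairer device-to-device comparison than hdparm.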
<br>
</div>
</div>
</div>
</blockquote>
<br>
My suggestion: since this functionality is not directly related
to OpenVZ, and<br>
we usually don't change anything in this code (unless there is a
reason to), try<br>
reproducing it on a stock RHEL6 kernel. If it is reproducible,
file a bug<br>
with Red Hat; if it is not, file a bug with OpenVZ.<br>
<br>
Kir.<br>
</div></div></div><div class="im">
_______________________________________________<br>Users mailing list<br><a href="mailto:Users@openvz.org" target="_blank">Users@openvz.org</a><br><a href="https://lists.openvz.org/mailman/listinfo/users" target="_blank">https://lists.openvz.org/mailman/listinfo/users</a><br>
</div></blockquote></div><br></div>
<br></blockquote></div><br></div></div>
</blockquote></div><br></div></div></body></html>