SSD Thread (Benchmarks, Brands, News, and Advice)

Discussion in 'Hardware Components and Aftermarket Upgrades' started by Greg, Oct 29, 2009.

  1. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,791
    Likes Received:
    0
    Trophy Points:
    205
2013 and people still talk about partitioning 'n stuff... I guess that's a quick hello, and bye-bye :)
     
  2. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,867
    Messages:
    12,297
    Likes Received:
    2,311
    Trophy Points:
    631

    Just the knowledgeable ones. ;)


    lol..

    bye!
     
  3. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,791
    Likes Received:
    0
    Trophy Points:
    205
Hehe... haven't done partitions for, like, 10 years now. There's really zero need outside dual-boot setups (which, thanks to VM stuff, don't matter much either).

Anyway, any new stuff about SSDs? Bigger, faster, cheaper are the obvious things. Not much else, right?
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,867
    Messages:
    12,297
    Likes Received:
    2,311
    Trophy Points:
    631
    Cheaper, not so much.

Bigger, yes: but more capacity is needed as the NAND die shrinks and each NAND chip now holds 2x the capacity. Why needed? To ensure that all controller channels are fully populated and that each channel is optimally interleaved (and to leave room for OP'ing, for max performance and lowest WA too, of course).
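As a rough illustration (the 8-channel geometry and die sizes below are assumed numbers for a generic controller, not any specific drive's spec):

```python
# Why NAND die shrinks push the minimum "fully interleaved" capacity upward.
# All geometry below is assumed for illustration; real controllers differ.

channels = 8             # assumed number of controller channels
dies_per_channel = 2     # assumed interleave depth per channel

for die_gb in (4, 8, 16):    # die capacity roughly doubles with each shrink
    min_gb = channels * dies_per_channel * die_gb
    print(f"{die_gb}GB dies -> {min_gb}GB needed to populate every channel")
```

As each die doubles in capacity, the smallest drive that can keep all channels busy doubles too - 'bigger' is partly a necessity, not just a luxury.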

Faster, not yet. SSD manufacturers have stopped trying to improve the drives for too long now (they could have pushed for better consistency three years ago - only now are we seeing some SSDs offer this benefit... and even then, only at the high end of the scale).

We'll see faster when NGFF becomes more available on Haswell and newer platforms (and hopefully from the same manufacturers that are pushing high performance along with consistency too).

Lots of new stuff to wrap our minds around - but in the end, what is available is pretty much the same old, same old.

    Compared to the barren landscape back in 2009 though - we're in SSD paradise right now. :)
     
  5. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,791
    Likes Received:
    0
    Trophy Points:
    205
I am in that paradise. Not bothering about it ever again. Good enough all the way. Posting this from my Surface Pro, my sole PC now. Don't know and don't care what brand or specs this SSD has. It just works. (With one partition, of course.)
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,867
    Messages:
    12,297
    Likes Received:
    2,311
    Trophy Points:
    631
    Yeah, I too like the Surface Pro - hope the v2 is everything I imagine...

'Just works' is a far cry from 'working optimally' though... but glad the available 'good enough' is good enough for you.

(I haven't drunk that Kool-Aid yet.) :)
     
  7. NIGHTMARE

    NIGHTMARE Notebook Evangelist

    Reputations:
    147
    Messages:
    483
    Likes Received:
    114
    Trophy Points:
    56
@tilleroftheearth I have a Samsung 830 256GB. Partition 1 (OS): 70GB (33 used, 37 free); Partition 2 (Data): 144GB (30 used, 114 free), with 28GB left unallocated for OP. Is that fine? If I end up using around 210GB and leaving only 28GB unallocated, will it affect the lifespan of my SSD?
     
  8. idiot101

    idiot101 Down and Broken

    Reputations:
    996
    Messages:
    3,902
    Likes Received:
    169
    Trophy Points:
    131
The only thing that affects the lifespan of an SSD is the constant writes to each cell of the NAND flash. Leaving space free on the drive helps maintain its performance by giving garbage collection (aided by TRIM) room to work. The empty space behaves like over-provisioned space, but to a lesser extent. If I have messed up the explanation, the article should clear it up for you.
    AnandTech | Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs
     
  9. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,867
    Messages:
    12,297
    Likes Received:
    2,311
    Trophy Points:
    631
What affects the lifespan of an SSD (its NAND chips) is the type of workload (writes) it is presented with, the WA factor (this is where OP'ing helps immensely - especially if the controller has good GC routines; not too aggressive, but not too passive either), and the amount of time it is left 'off' without power.

If you're writing 100% large sequential data files, then you will most likely get much more than the nominal maximum stated endurance specs of your SSD...

If you're writing 100% 4K random files, the lifespan of most consumer SSDs will be measured in weeks...

If you're expecting to fill your SSD with data today, put it away in the closet and hope to see that data in late 2015 - I would bet you can kiss your data goodbye now. (The SSD specs state that data retention is one year without power applied... but a lot of newer/cheaper SSDs are closer to 3 months - I can't find that article now...).


Most workloads are somewhere in between the above extremes (some mixture of power-off time, sequential writes and random writes). What is important is that the controller is able to minimize the WA factor as much as possible - especially since what is written to a system drive is not always user-initiated; the O/S does a lot of internal/background file housekeeping too, and keeps the drive busy without any input from us. So a controller with low WA is the first defense against physically wearing out the NAND with inefficient erase/write cycles.

The best examples of such controllers are the original Intel X25-M and the more modern SandForce-based controllers (I'm only including Intel's 520 Series here, as that is what I have the most positive experience with). The SF controller is actually better than the X25-M, which had roughly a 1x WA factor - the SF controller could actually write much less to the NAND cells than what the host O/S sent it, thanks to the real-time compression the SF controller performs.

    See:
    Write amplification - Wikipedia, the free encyclopedia


WA factors before these drives were in the 3-10x or greater range (so for every gigabyte of data we saved, up to 10GB or more was written to the NAND chips, eating away at their lifespan...).
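To put rough numbers on that, here's a minimal sketch (the P/E cycle count and daily write volume are illustrative assumptions, not any drive's measured specs):

```python
# Rough endurance arithmetic: how the WA factor multiplies NAND wear.
# All figures are illustrative assumptions, not real drive specs.

capacity_gb = 238          # usable capacity of a "256GB" drive
pe_cycles = 3000           # assumed program/erase cycles per MLC cell
host_writes_gb_day = 20    # assumed host writes per day

def years_of_life(wa: float) -> float:
    """Total NAND endurance divided by daily wear at a given WA factor."""
    total_nand_endurance_gb = capacity_gb * pe_cycles
    nand_writes_gb_day = host_writes_gb_day * wa
    return total_nand_endurance_gb / nand_writes_gb_day / 365

for wa in (0.14, 1.0, 3.0, 10.0):   # SF-like, X25-M-like, older controllers
    print(f"WA {wa:5.2f}: ~{years_of_life(wa):6.0f} years")
```

Same daily host writes; an order of magnitude of difference in how fast the NAND wears out.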

What about other controllers/SSDs without the groundbreaking work Intel offered on its X25-M line, or the totally different approach the SF controller took to get WA down to 0.14 in certain workloads?


They rely on different approaches to GC to keep WA in check. Some are better than others. Some don't work at all (in real-world usage).

TRIM also helps immensely with WA - but TRIM only informs the SSD that some data is old/stale and can be cleared; it is still up to the GC routines of the controller/firmware to actually erase those NAND cells and have them ready to be written to.
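A toy model of that division of labour (purely a sketch - real firmware manages pages, blocks and mapping tables, not a flat list like this):

```python
# Toy model of TRIM vs. GC. TRIM only *marks* data stale; GC *erases*.
# Purely illustrative; real controllers are far more involved.

class ToySSD:
    def __init__(self, blocks: int):
        self.state = ["erased"] * blocks   # erased | live | stale
        self.foreground_erases = 0         # erases forced mid-write

    def write(self, block: int) -> None:
        if self.state[block] != "erased":
            # No pre-erased cell ready: erase in the write path.
            # This is exactly the case OP'ing and timely GC try to avoid.
            self.foreground_erases += 1
        self.state[block] = "live"

    def trim(self, block: int) -> None:
        # TRIM only informs the drive that this data is old/stale...
        self.state[block] = "stale"

    def gc(self) -> None:
        # ...GC actually erases the stale blocks, at its own pace.
        self.state = ["erased" if s == "stale" else s for s in self.state]
```

If writes arrive faster than gc() runs (or there's no spare erased capacity to draw on), foreground_erases climbs - the toy equivalent of WA skyrocketing.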

What helps almost all SSDs with regard to WA is OP'ing. With 'unallocated' capacity available, there are almost always enough erased/cleared NAND cells to write to, without having to read/write/erase cells while the new data is being written - which is what causes WA to skyrocket in the first place, as the controller's 'priority' then is to write the new data as fast as possible and essentially spend whatever NAND write cycles it takes to make that happen, now.
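For the OP side of that, the commonly used definition is OP% = (physical capacity - user-visible capacity) / user-visible capacity. A quick sketch (the 256GB of raw NAND is an assumption; some drives carry more):

```python
# Effective over-provisioning under the common definition:
# OP% = (physical - user-visible) / user-visible * 100.
# 256GB raw NAND is an assumption; actual raw NAND varies by model.

def op_percent(physical_gb: float, user_gb: float) -> float:
    return (physical_gb - user_gb) / user_gb * 100

physical = 256.0
print(f"{op_percent(physical, 238.0):.1f}%")  # ~7.6%: built-in GB-vs-GiB gap only
print(f"{op_percent(physical, 166.0):.1f}%")  # ~54%: with ~72GB left unallocated
```

Every gigabyte left unallocated adds to that pool of always-ready erased cells.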


I've stated this before: OP'ing, partitioning and leaving free space on our SSDs is not about making them faster. It is about making the performance they give more consistent, while also allowing for the highest possible sustained 'steady-state' performance.

Those performance benefits are a byproduct, though, of giving the GC and TRIM routines a chance to properly clean up the drive (at their own pace), so that when we need to write to it, it responds immediately and as fast as possible.

We see the results as a snappier system - but what we're really doing is simply using the SSD within its design parameters. By allowing the GC and TRIM routines to run at their own pace, we're rewarded with a huge decrease in WA, a noticeable increase in responsiveness, and a system that can run almost any workstation-class workload indefinitely at a 'steady-state' performance level that doesn't tank after a few minutes, hours or days.



    NIGHTMARE, in your specific example:

256,000,000,000 / 1024 / 1024 / 1024 = ~238GB

238GB x 0.7 = ~166GB would be the maximum I would want to use, or;

238GB x 0.3 = ~72GB is what I would leave 'unallocated'...


    to be able to use the drive in any way I want (almost indefinitely...).
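In code form, the same arithmetic (a rough sketch; the 70/30 split is just my rule of thumb, not a manufacturer's spec):

```python
# The 70/30 rule of thumb applied to a "256GB" drive.

raw_bytes = 256_000_000_000
formatted_gb = raw_bytes / 1024**3      # ~238GB, as the O/S reports it

use_gb = formatted_gb * 0.7             # the most I'd allocate and use
unallocated_gb = formatted_gb * 0.3     # left unpartitioned for OP

print(f"formatted: {formatted_gb:.0f}GB | use: {use_gb:.0f}GB | "
      f"leave unallocated: {unallocated_gb:.0f}GB")
```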

While leaving 28GB unallocated is better than nothing, in my workflows I find that it is not enough to make a difference in how the SSD responds (which to me indicates that the GC routines are still getting in the way of using the SSD when and how I want to). It may be enough for your workflows, though.


In addition to the above, I find that I need at least 25GB free on the formatted C: drive to keep Windows 8 happy, and an additional 25GB-50GB minimum free for any programs that use drive capacity as 'scratch disk' space. But the same applies to an HDD-based system anyway.

Yeah; 30% of the 'real' drive capacity is a lot to leave 'unallocated'. But that is the cost of doing business with an SSD at this time - if you want the maximum benefits and the maximum reliability that SSDs can offer now.

Without the availability of large-capacity SSDs and the huge benefits that OP'ing provides - system responsiveness, sustained-over-time performance and higher NAND endurance - SSDs would still be in their infancy for my workflows, and HDDs (1TB WD Raptors, today) would still reign.

Fortunately, I took an Intel 510 Series 250GB drive in early 2011 and experimented with using only 100GB of it - and I haven't looked back since. Sure, ~$600 for 100GB of usable capacity was a lot, but I've paid much more for the 'right' type of storage device...


So, without knowing your workflow, I can't say whether leaving 'only' 28GB unallocated will shorten your SSD's lifespan (within the expected length of your ownership of this SSD).

But I can tell you, in my experience (across many different platforms), that even in 'light' workloads I can feel the detrimental effect on performance of an SSD filled to much more than 50% of its (total) available capacity.

    Which to me indicates that WA is higher than it needs to be - which means, yes: it will affect the lifespan of the SSD.


    But,

    it may not matter if you replace/upgrade/give away/sell the SSD before that happens. :)


Everything wears out - enjoy it the best/fullest way you can. Hope the info here makes that decision a little clearer...


It's complicated; everything affects everything else. The best we can do is use systems within their design parameters and hope that we understood those design parameters accurately (at least as they apply to us)...

    Take care.
     
  10. Bstorm

    Bstorm Guest

    Reputations:
    0
    Messages:
    10
    Likes Received:
    0
    Trophy Points:
    0
I'm configuring a new R 6:8-780Powerpro / MSI GT70 barebones laptop over at Powernotebooks and I see they have the new Samsung EVO SSD in 1TB. The performance looks really good, and the price isn't bad, but as I understand it, it is TLC, which I am not so crazy about.

For my first drive I am planning to get the 512GB Samsung 840 Pro, because it has the endurance and the speed that I want.

For my second drive I need lots of space, but I would prefer not to have a spinning hard drive in my laptop. So I am torn between the Crucial M500 960GB and the Samsung EVO 1TB. The Crucial M500 is slower, but it is MLC, which I prefer, and it has already been put through its paces. The Samsung EVO 1TB has awesome performance, but it is TLC and relatively untested tech. What do you guys think, and are there any other factors I should consider?

Also, has anyone heard whether a 1TB version of the Samsung Pro is coming out? Although I would prefer to get an upgraded laptop sooner rather than later, I could delay my purchase until I've saved up enough money for it, if it would be worth it.
     