
I am searching for an answer about using SSD drives that are built for the mainstream (consumer) market. As far as I know, Ceph's marketing suggests you can build enterprise-ready storage from cheap disks, but I cannot find any details or tests about this.

For example, 1 TB SATA 2.5" drives like the SanDisk Ultra 3D or the Samsung 960 EVO are pretty cheap right now, which makes them tempting for building a fast storage array with Ceph.

But if I read on, there are a bunch of limitations mentioned for storage arrays (SANs or RAID arrays): avoid caches on the drives, ensure a high DWPD rating, and so on. Do these constraints also apply to Ceph? Taking these facts into account, an enterprise SSD is about 3 to 5 times more expensive than a mainstream SSD of the same size.

So from an economic perspective, I could buy at least three times as many mainstream SSDs as enterprise-grade SSDs intended for RAID or SAN storage.

Can someone tell me whether mainstream SSDs will work well with Ceph in production environments?

What endurance can be expected from each of the SSDs named above?
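
To make the endurance question concrete: as I understand it, a vendor's TBW rating translates into DWPD like this (the figures below are illustrative, not datasheet values):

    # Convert a vendor TBW (terabytes written) rating into DWPD
    # (full drive writes per day, sustained over the warranty period).
    def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float) -> float:
        return tbw_tb / (capacity_tb * warranty_years * 365)

    # Illustrative figures only -- check the actual datasheets:
    print(f"consumer:   {dwpd(400, 1.0, 3):.2f} DWPD")    # 1 TB, 400 TBW, 3 years -> ~0.37
    print(f"enterprise: {dwpd(1400, 1.0, 3):.2f} DWPD")   # 1 TB, ~1400 TBW, 3 years -> ~1.28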

1 Answer


I can only strongly recommend using SSDs with capacitors that protect against power interruption. For Ceph this not only protects data integrity; far more importantly, it results in much lower latency. SSDs without capacitors block while writing out their cache. SSDs with capacitors simply ignore the cache flushes that Ceph sends, because the capacitors guarantee they can always write out their cache during a power loss.
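
You can see the difference yourself by timing synchronous writes against the drive. Here is a minimal sketch (the file path is a placeholder; a tool like fio with sync writes enabled gives the same picture more thoroughly). Drives with power-loss protection typically acknowledge each fsync in a fraction of a millisecond; drives without can take many milliseconds each:

    import os
    import statistics
    import time

    # Placeholder path -- point this at a filesystem on the SSD under test.
    PATH = "/mnt/ssd-under-test/fsync-latency.bin"
    BLOCK = b"\0" * 4096   # 4 KiB writes, similar in size to journal entries
    ITERATIONS = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    latencies = []
    try:
        for _ in range(ITERATIONS):
            t0 = time.perf_counter()
            os.write(fd, BLOCK)
            os.fsync(fd)   # force the drive to commit its cache, like Ceph's flushes
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(PATH)

    print(f"avg fsync latency: {statistics.mean(latencies) * 1000:.2f} ms")
    print(f"p99 fsync latency: {sorted(latencies)[int(ITERATIONS * 0.99)] * 1000:.2f} ms")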

We started out with 860 EVOs, and throughput was a nightmare. 860 PROs were better, but the PM883 (also Samsung, with power-loss protection) really made a huge difference.

Be careful with Samsung though: they don't work well with AMD SATA controllers, at least the SB7x0/SB8x0/SB9x0 family. AMD and Samsung are pointing at each other over why NCQ support is broken. Which sucks. This is 2021.

  • What did you use the EVOs for, and which EVOs were they: 850 EVO, 860 EVO, or 870 EVO? As SATA or NVMe? And in what role: as OSDs or as journal disks?
    – cilap
    Feb 9, 2021 at 9:53
  • Thanks for the question. Updated the answer. We used 860 EVOs for OSDs back then. I simply cannot recommend them for Ceph.
    – itsafire
    Feb 10, 2021 at 21:21
  • Do you have some numbers on the throughput comparison between the EVO, the PRO and the PM883? And what sizes did you use? That also makes a difference. Could you also elaborate on your architecture? Did you have an NVMe journal drive?
    – cilap
    Feb 14, 2021 at 19:13
