• DFX4509B@lemmy.org · 4 days ago

    Good luck when SSDs are less reliable when powered off than HDDs, and still pricier for huge capacities.

    • jj4211@lemmy.world · 5 days ago

      The disk cost is now about a threefold difference, rather than an order of magnitude.

      These disks didn’t make up as much of the cost of these solutions as you’d think, so a disk-based solution with similar capacity might be more like 40% cheaper rather than 90% cheaper.
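
      To put made-up round numbers on that (the actual cost split varies by vendor; this just shows the arithmetic):

```python
# Hypothetical figures: drives are only part of the total solution cost,
# so a ~3x per-TB drive premium shrinks a lot at the system level.
hdd_drives = 100                 # arbitrary units for the HDDs
ssd_drives = 3 * hdd_drives      # ~3x per-TB premium
other = 150                      # chassis, controllers, network, software

hdd_solution = hdd_drives + other    # 250
ssd_solution = ssd_drives + other    # 450

print(f"HDD solution is ~{1 - hdd_solution / ssd_solution:.0%} cheaper")
# ~44% cheaper at the system level, not the 67% the drive price alone suggests
```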

      The market for pure capacity-play storage is well served by spinning platters, for now. But there’s little reason to iterate on your storage subsystem design: the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factor has kept evolving and the interface gets a revision with every PCIe generation.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 5 days ago

      For servers, physical space is also a huge concern. 2.5” hard drives cap out at like 6TB I think, while you can easily find an 8TB 2.5” SSD anywhere. We have 16TB drives in one of our servers at work and they weren’t even that expensive (relatively).

    • Nomecks@lemmy.ca · 5 days ago

      Spinning-platter capacity can’t keep up with SSDs. HDDs are just starting to break the 30TB mark while SSDs are shipping at 50TB+. The cost delta per TB is closing fast. With flash you can also have always-on compression and dedupe in most cases, so you get better utilization.

      • Fluffy Kitty Cat@slrpnk.net · 5 days ago

        Cost per terabyte is why hard disk drives are still around. Once SSDs cost only maybe 10% more, hard drives will be obsolete.

  • Sixty@sh.itjust.works · 6 days ago

    I’ll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P

    • Appoxo@lemmy.dbzer0.com · 6 days ago

      Considering the high prices for high-density SSD chips…
      Why are there no 3.5" SSDs with low-density chips?

      • jj4211@lemmy.world · 6 days ago

        Not enough of a market

        The industry’s answer is: if you want that much volume of storage, get something like six EDSFF or M.2 drives.

        3.5" is a useful format for platters, but it isn’t particularly needed to hold NAND chips. Meanwhile, instead of gating all those chips behind a single connector, you can have six connectors to drive performance. Again, that’s less important for a platter-based strategy, which is unlikely to saturate even a single 12Gb link in most realistic access patterns, but SSDs can keep up with 128Gb links under utterly random I/O.
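
        Rough numbers for that link-saturation point (assumed typical sequential rates, not from any benchmark):

```python
# A single platter drive pushes roughly 250 MB/s sequential at best;
# random I/O is far lower. Compare that to one 12Gb SAS link.
hdd_gbps = 0.25 * 8      # ~250 MB/s ~= 2 Gb/s
sas_gbps = 12            # single 12Gb SAS link

print(f"one HDD uses ~{hdd_gbps / sas_gbps:.0%} of the link")  # ~17%
```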

        Tiny drives mean more flexibility. That storage product can go into NAS boxes, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling. A product designed to host that many SSD boards behind a single connector would not be trivial to modify for any other use case, would bottleneck performance on a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as six drives.

  • dual_sport_dork 🐧🗡️@lemmy.world · 5 days ago

    No shit. All they have to do is finally grow the balls to build SSDs in the same form factor as the 3.5" drives everyone in enterprise is already using, and stuff those to the gills with flash chips.

    “But that will cannibalize our artificially price inflated/capacity restricted M.2 sales if consumers get their hands on them!!!”

    Yep, it sure will. I’ll take ten, please.

    Something like that could easily fill the oodles of existing bays currently occupied by mechanical drives, both on the home user/small-scale enthusiast side and in existing rackmount gear. But that’d be too easy.

    • jj4211@lemmy.world · 5 days ago

      Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price isn’t inflated by the form factor; it’s driven primarily by the cost of the NAND chips, and you’d simply need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. There’d also be a thermal problem, since 3.5" bays aren’t designed for the heat load of that much flash.

      Add to that that 3.5" bays currently top out at maybe 24Gb SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives. That’s bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND’s random-access behavior. So the product would have to be redesigned to handle that sort of workload, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
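
      A back-of-envelope check on that “over 30-fold” figure, assuming six E1.S drives each on a PCIe 5.0 x4 link (raw link rates, not measured throughput):

```python
# Approximate raw link rates in Gb/s.
sas = 24                 # one 24Gb SAS connector on a 3.5" bay
pcie5_x4 = 4 * 32        # PCIe 5.0 is ~32 Gb/s per lane, x4 per drive
edsff = 6 * pcie5_x4     # six E1.S drives in aggregate = 768 Gb/s

print(edsff / sas)       # 32.0 -> "over 30-fold"
```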

      EDSFF defined four general form factors: E1.S, which is roughly M.2-sized; E1.L, which is over a foot long and offers the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.

    • Hozerkiller@lemmy.ca · 5 days ago

      I hope you’re not putting M.2 drives in a server if you plan on reading the data from them at some point. Those are for consumers, and there’s an entirely different form factor for enterprise storage using NVMe drives.

      • jj4211@lemmy.world · 5 days ago

        I’m not particularly interested in watching a 40-minute video, so I skimmed the transcript a bit.

        As my other comments show, I know there are reasons why 3.5" doesn’t make sense in an SSD context, but I didn’t see anything in a skim of the transcript relevant to that question. They mostly talk about storage density rather than why not package bigger (and the industry is packaging bigger, just not anything resembling 3.5", because it doesn’t make sense).

        • xyguy@startrek.website · 3 days ago

          The main point is that the disk controller gets exponentially more complicated as capacity increases, and that the problem isn’t space for the NAND chips but that the controller would be too power-hungry or expensive to manufacture for disks bigger than around 4TB.