Was looking through my office window at the data closet and (due to angle, objects, field of view) could only see one server light cluster out of the six full racks. And thought it would be nice to scale everything down to 2U. Then day-dreamed about a future where a warehouse data center was reduced to a single hypercube sitting alone in the vast darkness.

    • InverseParallax@lemmy.world · 4 points · 1 day ago

      It made some sense for job separation before virtualization.

      Then Docker/k8s came along and nuked everything from orbit.
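
      To make that concrete: a minimal, hypothetical sketch of container-based job separation on one box, using the Docker SDK for Python. The image name, job names, core assignments, and memory caps are all assumptions for illustration, not anything from the thread.

      ```python
      # Hypothetical example: one host, several "jobs", each pinned to its own
      # cores and given a hard memory cap (the role separate servers used to play).
      import docker

      client = docker.from_env()

      # Assumed job names and resource slices, purely illustrative.
      jobs = {
          "batch-job-a": {"cpuset_cpus": "0,1", "mem_limit": "2g"},
          "batch-job-b": {"cpuset_cpus": "2,3", "mem_limit": "4g"},
      }

      for name, limits in jobs.items():
          client.containers.run(
              "python:3.12-slim",                     # assumed base image
              ["python", "-c", "print('job running')"],
              name=name,
              detach=True,
              cpuset_cpus=limits["cpuset_cpus"],      # pin to dedicated cores
              mem_limit=limits["mem_limit"],          # hard memory ceiling
          )
      ```

      Kubernetes expresses the same idea declaratively through resource requests and limits on a pod's containers.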

      • MNByChoice@midwest.social · 1 point · 22 hours ago

        VMs were a thing in 2013.

        Interestingly, Docker was released in March 2013. So it might have prevented a better company from trying the same thing.

        • InverseParallax@lemmy.world · 2 points · 22 hours ago

          Yes, but they weren’t as fast; VT-x and the like were still fairly new, and the VM stacks were kind of shit.

          Yeah, Docker is a shame. I wrote a thin stack on LXC, but BSD Jails are much nicer, if only they’d improve their deployment system.

      • partial_accumen@lemmy.world · 2 points · 1 day ago

        The other use case was for hosting companies. They could sell “5 servers” to one customer and “10 servers” to another and have full CPU/memory isolation (a rough sketch of that setup follows below). I think that use case still exists, and we see it all over the place in the public cloud hyperscalers.

        Meltdown and Spectre vulnerabilities are a good argument for discrete servers like this. We’ll see if a new generation of CPUs will make this more worth it.
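
        A rough, hypothetical sketch of that hosting model: one big KVM host carved into per-customer guests, each pinned to its own cores with its own memory allocation, via the libvirt Python bindings. The domain XML, names, paths, and sizes are all assumptions for illustration.

        ```python
        # Hypothetical sketch: selling "N servers" off one physical host by defining
        # KVM guests with pinned vCPUs and fixed memory through libvirt.
        import libvirt

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>{name}</name>
          <memory unit='MiB'>{mem_mib}</memory>
          <vcpu placement='static' cpuset='{cpuset}'>{vcpus}</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <driver name='qemu' type='qcow2'/>
              <source file='/var/lib/libvirt/images/{name}.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """

        def define_tenant_vm(conn, name, vcpus, mem_mib, cpuset):
            """Define and boot one tenant VM pinned to a disjoint block of host cores."""
            xml = DOMAIN_XML.format(name=name, vcpus=vcpus, mem_mib=mem_mib, cpuset=cpuset)
            dom = conn.defineXML(xml)  # persistent guest definition
            dom.create()               # start the guest
            return dom

        if __name__ == "__main__":
            conn = libvirt.open("qemu:///system")
            # "5 servers" for one customer: 8 vCPUs / 8 GiB each, non-overlapping cores.
            for i in range(5):
                define_tenant_vm(conn, f"cust-a-{i}", vcpus=8, mem_mib=8192,
                                 cpuset=f"{i * 8}-{i * 8 + 7}")
            conn.close()
        ```

        The point is the same one VPS providers have relied on for years: the hypervisor, not a separate chassis, provides the CPU/memory boundary between customers.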

        • InverseParallax@lemmy.world · 4 points · 1 day ago

          128-192 cores on a single EPYC makes almost nothing worth it; the scaling is incredible.

          Also, I happen to know they’re working on even more hardware isolation mechanisms, similar to SR-IOV but more strictly enforced.

          • partial_accumen@lemmy.world · 1 point · 23 hours ago

            > 128-192 cores on a single EPYC makes almost nothing worth it; the scaling is incredible.

            Sure, which is why we haven’t seen huge adoption. However, in some cases it isn’t so much an issue of total compute power; it’s autonomy. If there’s a rogue process running on one of those 192 cores and it can end up accessing the memory in your space, it’s a problem. There are some regulatory rules I’ve run into that actually forbid company processes on shared CPU infrastructure.

            • InverseParallax@lemmy.world · 1 point · 22 hours ago

              There are, but at that point you’re probably buying big iron already; cost isn’t an issue.

              Sun literally made their living from those applications for a long while.