Was looking through my office window at the data closet and (due to angle, objects, field of view) could only see one server's light cluster out of the six full racks. And thought it would be nice to scale everything down to 2U. Then day-dreamed about a future where a warehouse data center was reduced to a single hypercube sitting alone in the vast darkness.
I think what will happen is that we’ll just start seeing sub-U servers. First will be 0.5U servers, then 0.25U, and eventually 0.1U. By that point, you’ll be racking racks of servers, with 10 0.1U servers slotted into a frame that you mount in an open 1U slot.
Silliness aside, we’re kind of already doing that in some uses, only vertically. Multiple GPUs mounted vertically in an xU harness.
You’ve reinvented blade servers
The future is 12 years ago: HP Moonshot 1500
“The HP Moonshot 1500 System chassis is a proprietary 4.3U chassis that is pretty heavy: 180 lbs or 81.6 Kg. The chassis hosts 45 hot-pluggable Atom S1260 based server nodes”
source
That did not catch on. I had access to one, and the use case and deployment docs were foggy at best.
It made some sense for job separation before virtualization.
Then docker/k8s came along and nuked everything from orbit.
VMs were a thing in 2013.
Interestingly, Docker was released in March 2013, so it might have prevented a better company from trying the same thing.
Yes, but they weren’t as fast, VT-x and the like were still fairly new, and the VM stacks were kind of shit.
Yeah, Docker is a shame. I wrote a thin stack on LXC, but BSD Jails are much nicer, if only they improved their deployment system.
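For context, jail deployment today is basically a hand-maintained /etc/jail.conf plus the jail rc scripts; a minimal sketch of what that looks like, with the jail name, path, interface, and address all made up for illustration:

    # /etc/jail.conf -- defaults shared by every jail
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    exec.clean;
    mount.devfs;

    # one jail definition; name, path, interface, and address are placeholders
    web {
        host.hostname = "web.example.org";
        path = "/usr/jail/web";
        interface = "em0";
        ip4.addr = 192.0.2.10;
    }

Then jail -c web (or service jail start web) brings it up. It works fine, it's just nowhere near the build/push/run workflow people expect from Docker now.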
Agreed.
Highlighting how often poor software usability limits adoption of good ideas.
The other use case was for hosting companies. They could sell “5 servers” to one customer and “10 servers” to another and have full CPU/memory isolation. I think that use case still exists and we see it used all over the place in public cloud hyperscalers.
Meltdown and Spectre vulnerabilities are a good argument for discrete servers like this. We’ll see if a new generation of CPUs will make this more worth it.
128-192 cores on a single EPYC makes almost nothing worth it; the scaling is incredible.
Also, I happen to know they’re working on even more hardware isolation mechanisms, similar to SR-IOV but more strictly enforced.
Sure, which is why we haven’t seen huge adoption. However, in some cases it isn’t so much an issue of total compute power as of autonomy. If there’s a rogue process running on one of those 192 cores and it can end up accessing the memory in your space, it’s a problem. There are some regulatory rules I’ve run into that actually forbid company processes on shared CPU infrastructure.
There are, but at that point you’re probably buying big iron already, so cost isn’t an issue.
Sun literally made their living from those applications for a long while.
Yeah, that’s the stuff.