They have a few different configurations to choose from:

- 4U w/ 8 Thin-ITX blades
- 6U w/ 9 Mini-ITX blades
- 8U w/ 9 MicroATX blades
Here’s a blade:
The motherboard just mounts to the blade as normal. Each hot-swap blade can hold up to four 2.5" drives, and the fixed blades can hold up to six.
I’ve been playing around with the idea of starting up a small game server host as a side gig. Since the best game server hardware at the moment (in terms of raw power) is high-clocked i7s, this seems like it could be a good fit. The specs say it can handle CPU power draws of up to 125W, and the i7-7700K, for example, has a TDP of 91W, so definitely in the clear there. Since the cost is pretty low, if I ever went through with the idea, I’d keep a couple of spare blades primed and ready to swap in case a blade fails.
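As a quick sanity check on that power math (using the chassis’s stated 125W per-blade CPU limit and Intel’s published 91W TDP for the i7-7700K):

```python
# Per-blade CPU power headroom check.
# Figures: 125 W per-blade CPU limit (chassis spec), 91 W TDP (i7-7700K).
blade_cpu_limit_w = 125
i7_7700k_tdp_w = 91

headroom_w = blade_cpu_limit_w - i7_7700k_tdp_w
print(f"Headroom: {headroom_w} W ({headroom_w / blade_cpu_limit_w:.0%} of the limit)")
# Headroom: 34 W (27% of the limit)
```

Worth remembering that TDP isn’t a hard ceiling on draw (an overclocked 7700K can pull past 91W under load), but at stock clocks there’s comfortable margin.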
Not planning on making a move on this anytime soon, but wanted to gather some opinions on the chassis itself. I realize it’s not enterprise/server-grade hardware, so it won’t have the same survivability/reliability as a Dell or HP blade system, but for the cost savings it might be worth the risk. What do you guys think? Would a colo provider have any issues with racking this? Am I bonkers for even considering this?
Yeah, I’m iffy about it. Would be cool to talk to someone that actually uses it. Haven’t looked into whether any support/warranty is offered.
Since it’d be consumer-grade hardware going into it, I’d expect more HW failures than with typical servers. But as long as I colo near me, I wouldn’t mind having to move things around every few months if need be.
Seems neat, but I’m not sure I would want that somewhere that I couldn’t put hands on it quickly. Also, I’m not sure colo providers would be too thrilled about the power bricks. I would probably only use it with the standard power supplies instead.
Agreed! Luckily I’ll be living in the Philly area again soon, which is just a couple hours’ drive from NYC, where I’d probably colo it.
Yeah, I can see how that’d be a concern. Unfortunately I think that’s how they’re able to get the power to be so efficient. Didn’t look closely enough to see if the bricks are swappable for full-fledged PSUs.
This is neat, but as you already mentioned it’s kind of for garbage-tier hardware. There are definitely better solutions available, and I don’t really trust taking a box, cutting holes in it, and slapping fans on it to keep consumer-grade hardware alive for long. I won’t mention the power, since that’s already been covered.
Ehh, there are arguments for and against blade systems. The biggest pro to me is power efficiency. This chassis in particular is nice because each unit is completely independent with its own power supply; it’s essentially a convenient way to rack 8 machines with some fans.
Isn’t that exactly the same as having 8 x 1U boxes? Each would have their own PSU, etc. Separate fans in each box aren’t really going to run up your power bill. I don’t really equate blade systems with power efficiency, but only really have experience with HP gear.
If you’re looking to do it affordably, I’d consider at least pricing out (used) 1U boxes. I was pricing out some E3s this week and saw $350-400 E3-1270v2 w/ 16GB RAM kits (Supermicro boards w/ IPMI, chassis, PSU, etc). Obviously better deals to be had, but you can get good prices with everything except drives included.
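To put a rough number on the 1U route at the quoted range (drives excluded, as noted; the $350-400 figure is from the listings above):

```python
# Rough cost range for eight used E3-1270v2 1U nodes at $350-400 each
# (Supermicro board w/ IPMI, chassis, PSU, 16GB RAM; drives not included).
nodes = 8
low_usd, high_usd = 350, 400
print(f"${nodes * low_usd}-${nodes * high_usd} for {nodes} nodes")
# $2800-$3200 for 8 nodes
```

That gives a baseline to weigh against the blade chassis plus eight sets of consumer boards, CPUs, and RAM.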