Dell R520 - The expansion continues...

Kai Robinson

For those who are unaware, the TD Game server runs on a Dell R520, as shown here: https://tinkerdifferent.com/threads/tinkerdifferent-game-server-now-testing.2909/

In modern terms it's ancient, and it was destined for e-waste until I rescued it. I mean, who'd want a 10-core, 20-thread machine? It's SOOO old, right?

Since then, thanks to @Bolle, it's been upgraded to 192GB of RAM; I've also replaced the battery in the PERC H710 RAID card and added a few more trays, ready for 6x SAS drives.

This is of course mega-overkill for the likes of an XP64 VM; I've allocated a whole TWO cores and 8GB of RAM to it (mad, right?). There's also the Rust VM for the TD PvE server, maintained by Xenocide from the TD Discord, with 8 cores and 64GB RAM allocated to it - which still leaves plenty of RAM and 10 more cores...

I also run my Plex server off another VM on it, with another 8 cores and 16GB RAM allocated.

Anyway, the point is that general resources are not an issue here. What I wanted to work on, mostly for learning, was upgrading the disk subsystem and the networking to make this an all-in-one solution to replace my very, very old (10-year-old) Netgear ReadyNAS 104 (single-core ARM w/512MB, ugh!) with its 4 x 6TB disks (thanks to Erebus, also from the TD Discord).

Even though I work at an MSP and have been in IT for years, my experience with server hardware has some holes in it, specifically with regard to anything faster than Gigabit Ethernet. I've no hands-on experience with HBAs and NICs using SFP/SFP+, so it's a bit of a minefield.

So, with the additional 6 bays, I wanted to set up a RAID 50: 6 x 6TB disks in total, split into two RAID5 arrays striped together, for some redundancy, running off the PERC H710.

I wanted the storage pool directly available to the Plex VM, and to create a new VM to act as a file share on the network - basically an iSCSI target.

I also want a FAAAAST connection to my main PC. The plan was to get an Intel X520 10Gbit SFP+ NIC for both the R520 and my desktop (they're about £19 here) and a single 3m Dell DAC cable to connect the two directly, without relying on a 10Gbit switch. But my knowledge here is patchy - I've read up, and this LOOKS like the right way to do it, rather than using generic SFP+ transceivers and an LC-LC multimode fibre cable. It's all theory, so... would this work, and does it sound feasible?
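
For reference, this is the sort of jump I'm chasing - a rough back-of-napkin sketch, assuming ~94% of line rate ends up as usable payload after TCP/IP and Ethernet framing overhead:

```python
# Rough usable throughput of a point-to-point link, and what that means
# for copying one big file (the 94% efficiency figure is an assumption,
# typical for TCP over Ethernet at the standard 1500-byte MTU).

def payload_mb_per_s(line_rate_gbps: float, efficiency: float = 0.94) -> float:
    """Usable payload throughput in MB/s for a given Ethernet line rate."""
    return line_rate_gbps * 1e9 / 8 / 1e6 * efficiency

FILE_GB = 50  # e.g. a big 4K remux

for rate in (1, 10):
    mb_s = payload_mb_per_s(rate)
    minutes = FILE_GB * 1000 / mb_s / 60
    print(f"{rate:>2}GbE: ~{mb_s:.0f}MB/s payload -> ~{minutes:.1f} min for {FILE_GB}GB")
```

Even if the disks can't feed a full 10Gbit, that's still a huge step up from Gigabit.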
 

Kai Robinson

I have binned off the Dell-specific DAC cable in favour of one from FS. The NIC selection has also changed, as I've been reading that the Intel X520s are a bit picky about which transceivers and DAC cables they'll accept.

Instead, it's looking like I'm going to pull the trigger on some single-port Mellanox ConnectX-3s.

The VM has already been spun up and is ready for a test run this week over standard 1Gb Ethernet.

Then all that remains is to purchase a boatload of disks!
 

bakkus

My 2 cents:

I wouldn't recommend running RAID50 in a 6-disk setup.
As we touched upon on Discord, RAID50, or rather 5+0, isn't for redundancy - it's for performance: https://www.techtarget.com/searchstorage/definition/RAID-50-RAID-50

It splits the load across 2x RAID5 sets for more speed, mainly on writes.
However, if you're doing 2x 3-disk RAID5 sets, each of those arrays is running at the _absolute bare minimum_ number of disks for RAID5.
In other words, once you lose a disk, you're one hiccup away from losing that whole array during the rebuild. And the risk is doubled, since each 3-disk set is its own separate RAID5.

For a 6-disk array I'd look into RAID10 (1+0), or just RAID5 across all disks - it won't be SPEEED, but it'll be reliable, and you'll still gain performance since you're distributing the load across more spindles.
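
To put rough numbers on it, a quick sketch (assuming usable space is just label capacity, and "worst case" is the unlucky failure pattern that kills the array):

```python
# Usable capacity vs worst-case failure tolerance for 6 x 6TB disks.
# Note RAID50 gives up one disk of capacity per sub-array and still dies
# to an unlucky second failure, just like a single RAID5 would.

DISK_TB, N = 6, 6

options = [
    ("RAID5  (6 disks)",          (N - 1) * DISK_TB, "any 1 disk; a 2nd failure kills it"),
    ("RAID50 (2 x 3-disk RAID5)", (N - 2) * DISK_TB, "1 per sub-array; 2 in one set kills it"),
    ("RAID10 (3 mirrored pairs)", (N // 2) * DISK_TB, "1 per pair; both halves of one pair kills it"),
]

for name, usable_tb, survives in options:
    print(f"{name:27} {usable_tb:>2}TB usable | survives: {survives}")
```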

And yes, you can connect a DAC back-to-back.
 

Kai Robinson


Yeah, I might just YOLO it: run a RAID5 setup and then keep a separate on-site backup to a NAS, plus CrashPlan for the really important data.
 

Kai Robinson

Update time!

I did indeed end up getting some Mellanox ConnectX-3s from eBay - not bad at about £19 each. One in the R520, one in my desktop, and a single 5m DAC from FS links the two together. I created a test iSCSI share (2TB) that ran solely over the Mellanox link on the R520, using the primary RAID0 boot volume for the VHDX storage. Speeds were NUTS! It exceeded 300MB/sec, peaking around 340MB/sec, for the 56GB .mkv I sent over to it (Aliens Remastered Director's Cut in 4K). I think that'll be fast enough.
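
For scale, a quick sanity check on those numbers - a rough sketch, assuming roughly 94% of the 10Gbit line rate is usable payload:

```python
# How the observed iSCSI speed stacks up against the link itself.

FILE_GB = 56
OBSERVED_MB_S = 340
LINK_CEILING_MB_S = 10e9 / 8 / 1e6 * 0.94  # ~1175MB/s usable on 10GbE

print(f"Transfer time: ~{FILE_GB * 1000 / OBSERVED_MB_S / 60:.1f} min")
print(f"Link utilisation: {OBSERVED_MB_S / LINK_CEILING_MB_S:.0%}")
# ~2.7 min, at ~29% of the link - so the bottleneck is the RAID0 volume
# holding the VHDX, not the Mellanox link.
```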

Now the only thing I was lacking was the disks themselves. As @bakkus mentioned, RAID50 was a bad idea, so I'm going for a standard RAID5. Doing it this way also lets me expand it by a further 2 disks when I need to - i.e. in 2 months' time, when I've likely filled it :D

So, after a boatload of overtime, I pulled the trigger on these:

(Photo attachment: DSC_27812.jpg)

That's 4 x HGST HUS726060AL4210s - 6TB, 12Gbps SAS drives. Total cost? Well, I offered £30 per drive, to be cheeky... and they accepted!

System Supply Industries in the UK has you covered for second-hand enterprise gear, and it is CHEAAAAAP! Why bother with a Synology when you can get an R520 chassis with CPU and RAM for £100, and 18TB of disk space for £120 on top?
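
For the curious, the back-of-napkin maths on cost and growth - a rough sketch, assuming formatted capacity is close to label capacity and that the H710 can expand the virtual disk when more drives go in:

```python
# Usable RAID5 capacity and cost per TB as the array grows from 4 to 6 drives.

DISK_TB, PRICE_GBP = 6, 30

for n in (4, 5, 6):
    usable = (n - 1) * DISK_TB        # RAID5 spends one disk's worth on parity
    cost = n * PRICE_GBP
    print(f"{n} drives: {usable}TB usable for £{cost} (~£{cost / usable:.2f}/TB)")
```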

I've set up a separate NAS VM for it, with iSCSI enabled. The idea is: I'll browse for files on my desktop, download them locally to SSD, punt them over the iSCSI link super quick, and the VM will share everything out locally via the internal Hyper-V switch to the Plex VM, and via SMB to the rest of the network.
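
If anyone wants to repeat the speed test once their initiator is connected, this is roughly what I'd run - a minimal sketch; the T: drive letter is hypothetical, so swap in wherever your iSCSI disk actually mounts:

```python
# Quick-and-dirty sequential write test against the mounted iSCSI volume.

import os
import time

TARGET = r"T:\speedtest.bin"   # hypothetical mount point - adjust as needed
CHUNK = 64 * 1024 * 1024       # 64MiB per write
TOTAL = 4 * 1024**3            # 4GiB total, enough to get past caching

buf = os.urandom(CHUNK)
start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())       # make sure the data actually hit the target
elapsed = time.perf_counter() - start

print(f"~{TOTAL / 1024**2 / elapsed:.0f}MB/s sequential write")
os.remove(TARGET)
```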

Ideally, I'd like a switch with at least 12 PoE ports plus 2 x 10Gb SFP+ ports - but I'm not finding anything I like, at least nothing that isn't absolutely honking massive (48-port ex-enterprise gear, which really IS overkill!).

I have my eye on one of these, but they're about 10 years old now: https://www.ebay.co.uk/itm/126516738905
And then there's this wee-beastie: https://www.ebay.co.uk/itm/296403866432

Argh, decisions, decisions...
 

This Does Not Compute

The Juniper EX2300-C-12P would be a good choice if you want an enterprise-quality switch in a compact, fanless footprint - but it's best managed via the CLI, so you'd have an opportunity to learn Junos.

If you want just a "dumb" unmanaged switch, I'd recommend keeping an eye on ServeTheHome's networking section, as they frequently review no-name switches with very interesting specs at crazy low prices.