We've had the worst luck with this server, and here are the issues we've had to deal with:
We had to completely replace the server, so before doing that we had to back up the 80TB+ that this server alone holds, which took about 3-4 days. Then we got the server replaced, but as soon as we put load on the "new" server (restoring the whole backup), its load average spiked to 1000 and up to 4000+, which made zero sense.
Because of this issue, the DC took a few days to check all the drives and replace the motherboard, OS drive, and RAM sticks, basically everything but the existing HDDs, which made sense since they already had data on them. They found one drive about to fail, which we thought was the problem.
At this point we were about 1.5 weeks in.
Then... when we thought everything was fine, the newly refurbished server started hitting 100 load again with zero explanation.
Finally, the DC ordered new drives, which also took a few days to arrive, and got them swapped and installed today, in fact about 3 hours ago.
We are now syncing the current data against our backup just to make sure we haven't missed anything.
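For anyone curious what that sync step actually is: it's an incremental copy, so only changed files get transferred. A minimal sketch of the kind of rsync invocation involved (the paths and host are hypothetical examples, not our actual setup):

```
# Incremental sync of current data to the backup host; only transfers
# files that changed since the last run. Paths/host are placeholders.
rsync -aHAX --partial --delete --stats \
    /srv/data/ backup@backup-host:/backups/server01/
```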
Later today/tomorrow we will set up the new server and build the new RAID array, which takes about 3 days. Then we can restore the backup, which is another 3-4 days.
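To be clear, creating the array itself only takes a minute; the ~3 days is the initial resync across large HDDs. A rough sketch of what that step looks like, assuming software RAID6 over 12 disks (the level, disk count, and device names are assumptions, not our exact layout):

```
# Create the array (assumed RAID6 across 12 disks; adjust to the real layout)
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

# Watch the initial resync progress; with big HDDs this is the multi-day part
cat /proc/mdstat
```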
So, TL;DR -> ETA is at bare minimum 8 more days.
Things could've gone faster if we (the DC and I) were in the same time zone, but because we are not, a lot of the time when they needed my authorization to, for example, turn the server off, I'd be sleeping or working at my real job. But it is what it is; I'd rather have these drives fail during maintenance than in production.
Yes, we are aiming to sync current data with our backup once per month. That only takes about 1 day; a full RAID array rebuild like this one is a one-off (unless a drive fails).
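If we automate the monthly sync, it would just be a cron job; a minimal sketch, assuming a wrapper script at a hypothetical path:

```
# crontab entry: run the backup sync at 03:00 on the 1st of every month
0 3 1 * * /usr/local/bin/sync-backup.sh
```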