
So I heard you want redundancy AND space? Let’s get you on a RAID… 0?!

That’s right! Over the last week I had been communicating with a client who uses us to host one of their clients; they were looking at possibly doing an emergency restore of their data onto our platform. Great, right?

The reason behind them needing an emergency deployment onto our systems came out today. See, I knew there had been a catastrophic drive failure, which resulted in the loss of about a week or so of data (since that was the last time they did off-server backups!). What I did not know, and found out today, is that this server was not one they set up (thank god), but one they inherited from the previous IT. The previous group had been given the specs by GE for their CPS product; every time we get a new client we have to jump through the same hoops to spec out the client’s environment, even though we have ours tuned perfectly for us.

What bugs me is that in this calculator they give you, a 10-doc practice can be advised to have a database server with 14 spindles as a minimum.. Wait.. 14?! Yes, that’s right, 14 as a minimum, and that’s not all: they have to be SAS or SCSI drives, or you need so many spindles for SATA that you’re paying up the arse to host your product.
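
To put some rough numbers behind why the SATA option gets so expensive: random IOPS scale roughly with spindle count, and a 7.2k SATA drive pushes far fewer random IOPS than a 15k SAS drive. A quick back-of-the-envelope sketch (the per-drive IOPS figures below are generic rules of thumb, not GE’s actual numbers):

```python
# Rule-of-thumb random IOPS per spindle (rough assumptions, not vendor figures).
IOPS_PER_SPINDLE = {
    "15k SAS": 175,
    "10k SAS": 140,
    "7.2k SATA": 80,
}

# Roughly what a 14-spindle 15k SAS array delivers.
target_iops = 14 * IOPS_PER_SPINDLE["15k SAS"]

for drive, iops in IOPS_PER_SPINDLE.items():
    spindles = -(-target_iops // iops)  # ceiling division
    print(f"{drive}: ~{spindles} spindles to hit ~{target_iops} random IOPS")
```

Run that and the SATA row lands around 31 spindles just to match the 14 SAS drives, which is exactly where the hosting bill goes sideways.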

I get it, though: the more drives you have, the more redundancy and the better your read and write times. For a product that has been pieced together like Frankenstein’s creation, you have to set the hardware bar high to make up for any programming issues (and don’t get me started on those).

This leads us back to why I’m writing this post.. This client’s former IT, with full knowledge of the recommended setup, consulted, procured and set up a solution that was 100% VM, but did so in a way that just makes me wonder how some people end up in IT to begin with.

The kicker? They somehow managed to set up this virtual host with a RAID 0.. Not the RAID 5 everyone expected/thought had been done, but a RAID 0.. I mean, I can see a RAID 0 across two drives when you want tempdb to be as fast as possible with as much space as possible. But hell, that’s tempdb.. if it takes a dive, it’s not (necessarily..) the end of the world. Instead, a system that served VMs (OS unknown), ran 24/7 and had monitoring was set up with every drive in one giant RAID 0.. way to go.. just.. wow.
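
For anyone who hasn’t had to spell this out for a client, the trade-off is easy to put in plain numbers. Here’s a minimal sketch (assuming identical drives and ignoring hot spares and controller overhead) of what the common RAID levels give you in usable space versus how many drive failures you can survive:

```python
def raid_summary(level: int, drives: int, size_tb: float):
    """Usable capacity and fault tolerance for a few common RAID levels.

    Simplified: assumes identical drives, no hot spares, no controller overhead.
    """
    if level == 0:
        return drives * size_tb, 0           # striping only: one dead drive kills the whole array
    if level == 5:
        return (drives - 1) * size_tb, 1     # single parity: survives exactly one failure
    if level == 6:
        return (drives - 2) * size_tb, 2     # double parity: survives any two failures
    raise ValueError("unsupported RAID level")

for level in (0, 5, 6):
    usable, survives = raid_summary(level, drives=4, size_tb=2.0)
    print(f"RAID {level}: {usable:.0f} TB usable, survives {survives} drive failure(s)")
```

With the giant RAID 0 they built, that second column is a zero: lose any single drive and the entire array, and every VM on it, is gone. Which is exactly what happened.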

I’ve played the various thoughts through my head after hearing about this, trying to work out how a virtual host with MISSION-critical items on it could be deployed like that, with ZERO redundancy. It just astounds me. Now the client is looking at a week or two of downtime and thousands of dollars paid to a data recovery company, all because of one incompetent IT guy (thanks to him/her for giving those of us with half a brain a bad name..).

Moral of this story, guys? Don’t use a RAID 0 when you have mission-critical software on it; it’s stupid, and it’s dangerous..

What NOT to do on a Network… *shakes head*

So, I’ve been in Kansas City to turn up a nice new client for our Cloud Hosting. Since their internet had not been kind about letting me on remotely to check things out, we got onsite and had a not-so-warm welcome from their IT infrastructure..

After arriving and checking out the machines and networking equipment, we found the following:

1. The domain was a 2003-based domain named something like r512A3.com (I could only shake my head at this)
2. Workstations had no solid naming convention. After going to ~10 workstations, I did find a pattern.. they all still had the Windows XP default computer names -_- (the random letters and numbers, wtf!)
3. There were a total of about 35-40 PCs and Wyse terminals, but they had three 24-port racked switches and six hubs.. yes.. hubs.. So the reason they kept losing connectivity or had random connection drops? Yup, those daisy-chained hubs were the issue..
4. And the straw that broke the camel’s back, per se.. EVERY machine was on a static IP config… yeah, *every one*. Oh, and who wants to guess why..? Well.. it’s easy.. it’s because the old IT never set up the DHCP role on the DC.. awesome..

So now that your head hurts as much as mine has over the last day.. my rant is over.. I understand that some things are a preference of the IT that set it up (using a .com TLD instead of .local for a local domain [I prefer .local because of what it is..]), but gosh, how the HELL can someone eff up a domain so badly..

After setting up DHCP on our new firewall, changing all of the local machines over to DHCP, and giving all of the printers DHCP reservations on the firewall, we were on track for a good morning.
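
We did most of that machine to machine, but if you ever have to flip a room full of static Windows boxes back to DHCP, a little script saves a lot of clicking. A rough sketch (it assumes the NIC is named "Local Area Connection", the XP default, and that it runs locally with admin rights; adjust the adapter name to taste):

```python
# Flip a Windows box from a static IP config back to DHCP for both IP and DNS.
# Assumes the NIC is named "Local Area Connection" (the XP default) and that
# this is run locally on each machine with admin rights.
import subprocess

ADAPTER = "Local Area Connection"

def set_dhcp(adapter: str = ADAPTER) -> None:
    # Hand the address back to DHCP...
    subprocess.run(
        ["netsh", "interface", "ip", "set", "address", f"name={adapter}", "source=dhcp"],
        check=True,
    )
    # ...and the DNS servers too, so the firewall's DHCP hands those out as well.
    subprocess.run(
        ["netsh", "interface", "ip", "set", "dns", f"name={adapter}", "source=dhcp"],
        check=True,
    )

if __name__ == "__main__":
    set_dhcp()
```

The same two netsh lines work fine in a plain batch file if you’d rather not drag an interpreter around to each box.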

Fast forward to this morning: we got in, and luckily the work we did last night helped immensely. There were some printing hiccups as we ironed out all of the details for our cloud platform. Combine the printers with an emergency call to a local cable guy, and we were able to do 8 new runs (btw, this place is 11k square feet!), which let us remove the hubs and hook up our three new wireless access points, score!

At the end of the day, we found ourselves with a solid network, a wireless network ready for the docs to use, and things looking good.

Now that I have my ranting out, I’m off for the night.. For those that read these, thanks; for those that don’t, you might be missing out.

Peace
-B