2008 R2 Terminal Servers and Printers……

So, at the 8-5, I am in charge of the architecture of our GE Centricity Hosted Platform. Part of that is maintaining and enhancing the Terminal Servers that clients log into.

Over the last few weeks, we had been seeing an issue where users would log in to the system, open the EMR portion of CPS, and find that it took upwards of 100 seconds for the Chart application to come up. Wait, 100 seconds?!

No application should EVER take that long to load, but here's the catch: the EMR portion loads the user's printers at launch. Loading all of the printers when you start an application SEEMS like a good idea, but it's not, and here's why: if the print spooler or any one printer is having trouble loading, it causes a chain reaction that drags the whole launch down. Great, right?

So this circles us back to the complaints about 100+ second load times. After digging through the servers, I found that our friend, Mr. Print Spooler, was using over 300 MB of memory. Immediately after restarting the spooler, it would climb to about 75 MB and sit there until people started logging in and doing printer-related work. To test my theory, I jumped around the servers comparing how big the spooler had grown on each one and found that in our solution (we deploy Dell R410s and R420s with 32 GB of RAM), once the spooler gets above 200 MB, spooling noticeably degrades and printer performance goes down with it.

To remedy this, I created a VBScript that checks the current spooler size (very, VERY roughly) and, if it is above the 185 MB mark, restarts the service. This was extremely reliable, and in the 3 days I had it in production, it held the servers steady.
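
I wrote mine in VBScript, but to give you the gist, here is a rough PowerShell sketch of the same idea (the 185 MB threshold is the one mentioned above; everything else is just illustrative, not my production script):

# Check the print spooler's working set and restart the service if it's too big.
$thresholdMB = 185
$spooler = Get-Process -Name spoolsv -ErrorAction SilentlyContinue
if ($spooler -and ($spooler.WorkingSet64 / 1MB) -gt $thresholdMB) {
    Restart-Service -Name Spooler -Force
}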

Since we deploy both RDP and Citrix servers, and I had confirmed Citrix had similar issues, I tried putting Citrix spooler restart commands into the script. Note: they don't work. Something about the way the spooler works under Citrix... at least, my script could never get it to work, so we disabled that part for our Citrix servers.

Another issue I found with this script is that users with locally attached printers will have them knocked offline, and they will NOT come back until the user restarts.

I set this to run every 5 minutes, and it worked well (you know, minus the missing local printers). We didn't get ANY complaints about the EMR application loading, so it was a win in my book.
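
For reference, the "every 5 minutes" part is just a scheduled task; something along these lines is all it takes (the task name and script path here are placeholders, not our actual ones):

schtasks /Create /TN "SpoolerWatch" /SC MINUTE /MO 5 /RU SYSTEM /TR "cscript.exe //B C:\Scripts\CheckSpooler.vbs"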

With all of that said, if you don't have locally attached printers but do have issues with the print spooler's memory getting too high, I hope this helps you.


Exchange 2013.. is.. Awesome.

So, over the weekend I converted one of my clients to our Exchange in the Cloud solution. Their server was an old-as-can-be SBS 2003 box (yuck!), so getting them off it was waaay overdue.

Two weeks ago the client's Exchange server reported over 45 GB of space used by just 14 accounts! After cleaning it up, and after the Exchange store purged the deleted items on Friday, their store was reporting only 20 GB of data. Fantastic! So I exported all of the users' mailboxes to PST files from their respective Outlook clients.

With their fantastic DSL, I calculated that about 40% of the data would stay on site to be uploaded to our servers from there, and the other 60% would come with me to be uploaded on our end and then imported straight into their accounts once they were in our cloud.

That is where this post earns the "awesome" in its name. I had been trying to get an Outlook client set up in one of our farms, but I kept getting an error about the Global Address List not being found for the user. Since I knew I was able to set up the accounts locally on the client's machines, I looked for another way to import the data into the accounts.

I remembered that Exchange 2010 had export and import mailbox cmdlets that let me do just that, import and export, though the process had some issues. Looking to see whether Microsoft had expanded the functionality, I was overjoyed (and I do mean OVERJOYED) to find that not only had they kept it, they had expanded it into an autonomous system: as soon as you tell it to import, it queues the request and just does it!

Pretty cool, huh? I thought so. The only thing you need to be aware of is that the following command needs to be run to grant your admin account the ability to import and export (it is not enabled by default, most likely a security restriction so you don't have random people adding random PST files to the accounts on your servers).

Command for the Exchange cmdlet:
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "DOMAIN\administrator"

This will add the import and export options to the admin menu.

The import process is slick: it asks you for the location of the shared PST file, then asks what email address to send notifications to (note: even if you remove the admin, it still sends them to the admin account), and voila, you're done! I queued up 5 accounts and within 5 hours they were all complete.
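
If you would rather skip the GUI entirely, the same import can be queued straight from the Exchange Management Shell (the mailbox name and share path below are placeholders; the PST does need to sit on a UNC share the Exchange servers can reach):

New-MailboxImportRequest -Mailbox "jsmith" -FilePath "\\FILESERVER\PST\jsmith.pst"

And to check on the queued requests:

Get-MailboxImportRequest | Get-MailboxImportRequestStatistics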

So.. not so much a rant, just props to Microsoft for actually doing something right and making a cmdlet that does what it's supposed to without issue.

Till Next time..

-B

RAID array only 67% rebuilt? No problem!

So, one of the fun things about my job is that I get to work with some pretty cool stuff on a daily basis.

One of those things is servers. Today, while my boss and I were driving to a convention in Georgia (a whole different story; the 12-hour drive SUCKED), one of our clients' servers crashed, for the second day in a row.

The kicker is this: after fighting with Dell to RMA a 600 GB SAS drive and get it next-day'ed to the site, a tech had, 3 hours earlier, replaced the known bad drive with a new working one. So our first response was, WTF?! We just replaced the bad drive!

One of our techs ran onsite, and working with him we found that the new drive was marked Offline in the Dell H700 BIOS (uh oh..), and the other drive in the RAID 1 was showing as failed (SHIT!). I had him force drive 0:1 back online; naturally this should be a fruitless effort that does nothing more than turn a drive on.

The back story for my decision is that one of my own servers, a Dell R410 that also runs on an H700, had a similar issue: the RAID 5 lost two drives to excessive heat, and the fix, with NO issues, was to flip the offline drive back to online and re-establish the RAID. With that in mind, I had the tech try just turning the new, offline drive back on.

When he rebooted the server, he got a Windows prompt that the server had shut down incorrectly (YESSS!!). So far so good; it meant the rebuild of the RAID 1 array was at least somewhat of a success (if it wasn't, it shouldn't have booted up, right?). When the system came up completely, there were no errors and no signs of applications failing to start... So basically, a 600 GB (6 Gbps SAS) drive somehow managed to rebuild its ENTIRE array in just under 3 hours. Bullshit, I say, but then how do we explain this system being online without a complete RAID array rebuild?

Well, here is where it gets interesting. After my tech confirmed with the client that they could get in and work properly, one of my in-office techs called Dell and, after a lengthy chat, found that the RAID 1 that made up array 0 of the RAID 10 configuration had actually NOT finished rebuilding; the controller logs told us it got to 66.7% of the rebuild when the first drive in the array failed. Wait, 67?! Yeah, my thoughts exactly. A drive in array 0 of a RAID 10 somehow got to 67% of its rebuild, lost its source drive, and the system still booted..? Yeah. It makes no sense to us either; from my boss to the other techs, everyone says the system should be DITW (Dead In The Water), yet here it was, alive and kicking.

With that bit of fun, we have to be careful. I suspect there HAS to be some data missing, whether a Windows Update file or some system file that hasn't been accessed yet, that we will run into down the line. Zombie server. Ugh. Thankfully, we have a contingency plan in place in case the system does decide to die for good.

And that's all there is for today. Nothing better than driving for 6 hours and having a server go belly up. Awesome stuff.

We are here in Atlanta now and I'm exhausted. Hopefully tomorrow isn't so exciting, but I will post back 😀

-B