New job, New tech, more fun!

It’s been a long time since I’ve used the blog, over a year in fact (sorry..). However, a few months ago I started a new adventure! I changed roles, industries, and companies, not to mention I went from a solidified technology stack to learning a lot of new things right out of the gate.

Today I am a Senior Systems Architect in the Development group at Cloud5, a hospitality service provider. The team I’m on is tasked with all of the monitoring and data analytics projects around the company’s new client dashboard.

It’s a lot of fun, with exciting challenges: I came from a very Puppetized world to one where I’m designing and implementing the initial solutions used for gathering data on networks and user experience.

My technology stack has changed a bit as well. The former place used XenServer/Proxmox clusters for VMs and containers, all managed by a central Puppet master, and our networks were 100% Juniper, so management was easy. The new place is a bit different in that I’m not managing production systems; I’m tasked with developing new technology and delivering solutions to the operations team, which are vetted and implemented per my spec (at their discretion).

The benefits of no longer being on an operations team are fantastic: I’m not on call (HOORAY!), and as long as my work gets done in a timely manner, I can leave the office and have a proper work-life balance, which is not something I’ve seen in 10 years.

Technology-wise, I came in after the initial product design sessions. I’ve had wiggle room, but I’ve really tried to work with what was previously designed. My thought was: why reinvent the wheel (or the design spec) if it seems like a viable product, and the people before me put a TON of effort into figuring out solutions to the problems they were having at the time?

This has allowed me to work with oVirt, an open source hypervisor platform from Red Hat that compares favorably with VMware: it’s KVM (libvirt) under the hood, it’s open source, and it has the backing of a large company like Red Hat. One feature I’ve liked so far is that you can set up what’s called an ovirt-engine instance and manage your oVirt nodes with cluster and datacenter segmentation. The engine ends up managing the nodes through Ansible and an in-house agent called VDSM. I won’t get into much detail (that’s for another post), but it’s a very versatile and ACTIVE project. I’ve submitted three posts to the user list and each one has been answered within a day. I’ve had prior experiences posting to user lists only to get left hanging; here, Red Hat engineers have been quick to reply to my questions, and I’ve not had a request go unsolved, in some manner.

My latest project has been Docker. I have enough experience to be dangerous with LXC/LXD, and I had always looked at Docker as a solution that might have gotten too big too fast, so I had written it off as a viable option (among other reasons, from a pure technology viewpoint, that I won’t go into). However, between the documentation for Dockerfiles and the usability of docker-compose, my (limited to a few days) experience has been rather positive. The number of people using it has resulted in just as much activity on places like Stack Overflow, so any issues I’ve come across have been quickly resolved. I’m still on the fence about Docker from a technology standpoint, but as far as usability is concerned, so far I’m not complaining.
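To give a taste of why docker-compose won me over: a whole multi-container stack collapses into one file and one command. A minimal sketch (the services and images here are just examples, not anything we actually run):

```yaml
# docker-compose.yml -- example two-service stack
version: "2"
services:
  web:
    build: .            # image built from the Dockerfile in this directory
    ports:
      - "8080:80"       # host:container
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
```

From there, `docker-compose up -d` brings the whole stack up and `docker-compose down` tears it back down.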

I’ll try to post more often; I’ve had a lot of experiences that would be worth documenting in a blog. I’m also looking at even more interesting technology that future projects will work with, coming up down the road!

HTTP2…. is…. fascinating….

So I’ve been trying to do some more reading lately, and this one caught my eye.. I have been interested in web technologies for as long as I can remember, and seeing the latest drafts come through on HTTP2 is AWESOME..

Check out Daniel Stenberg’s site, where he has a PDF that lays out the new HTTP2 protocol standards.

I’m looking forward to future adoption by NGINX and Apache (as of writing, neither has shipped an update to support HTTP2 :sadface:) so I can start playing around with the new features HTTP2 offers web developers!

All for now!~

Some days, VB Scripting is just AWESOME.

So it’s been a while (AGAIN!). I’m such a slacker, sorry..

Last night we had to run an update for a specific version of the EHR software we host at work. While updates are normally fine, the way this particular update had to be completed resulted in the following for over 70 separate databases.

1) Find all of the instances where the plugin is installed (each database)
2) Download the update (takes about a minute)
3) Uninstall the old version of the plugin
4) Install the new version, so that it runs against the MSSQL database.

While this process on paper seems easy/quick…

It’s not.

To do 70 database updates, with 3 guys working on updating plugins and servers (think terminal servers), it took us about an hour and a half, so you can imagine how long it would take for one person to do everything. Oh, and did I mention this plugin was the LEAST used one?! Yup, the number one plugin is in use across roughly 130 databases… eep.

I have in the past attempted to contact the vendor to see if there was a way to automate the installation.

Of all of the steps posted above, I think I can circumvent the following:

- Downloading the plugin to the website. I can fix this by downloading to one central location and copying the contents to the website where it indicates what version you have. The only problem is that if a specific version fixes wording in the database (via the SQL query that runs at the end of the install), we won’t ever know. Hence my previous comment: I had talked with the vendor to see if they could release the SQL scripts embedded in the cab file (I have tried most known applications to open the cab file, but each time it leads to a compiled exe ._.). When approached, the vendor stated that because our setup is so unique, they would not help us out (you know, because being the premier cloud platform for your product, nationally recognized, doesn’t do it for you..), so I’m left hoping that the SQL job is never really needed..

This leads me to my biggest hurdle: documentation.. While things have been better since I took over, there is still too much room for error. For example, how do I know if a new database is added, and whether it has the plugins I need to update?!

That’s where my project today came into play. I noticed that my task list was light, so I took the opportunity to create a nifty (my opinion) script that takes a comma-delimited list of servers, connects to each one, grabs the available databases, and queries for the specific plugin you are looking for.

Since I’m a PHP developer and know NOTHING about proper VB coding, I went to Google once I had the direction I needed for the code and went about finding a way to query the server for a list of all of the databases.. To my delight, someone had posted a very easy and nifty way to do just that: the sysdatabases table (in the master database) contains everything I needed!

Here is the code I used to query that, using the given server.
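It boils down to a single SELECT against sysdatabases, run through ADO from VBScript. A minimal sketch (the server name and connection details are placeholders):

```vbscript
' Grab every database name on the target server via the
' sysdatabases table (server name below is a placeholder)
Dim conn, rs
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=SERVER01;" & _
          "Initial Catalog=master;Integrated Security=SSPI;"

Set rs = conn.Execute("SELECT name FROM sysdatabases")
Do While Not rs.EOF
    WScript.Echo rs("name")
    rs.MoveNext
Loop

rs.Close
conn.Close
```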

So this is called initially to grab the list of databases on the server. I then plug that list into a loop that queries each database and checks whether any rows are returned; if they are, I mark it. Typically, if the plugin is active within the database, we can find a row within a specific table.

As I loop through the databases, if a hit appears on the table, I write to a text file stored off-server so that I can view it when I need it.
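Put together, the loop looks roughly like this; note that the plugin table name and the UNC path are placeholder names, not the real ones:

```vbscript
' Loop over each database name and check for the plugin's table,
' logging hits to an off-server text file.
' PluginTable, SERVER01 and the UNC path are placeholders.
Const ForAppending = 8
Dim conn, rs, fso, logFile, sql, dbName
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=SERVER01;" & _
          "Initial Catalog=master;Integrated Security=SSPI;"
Set fso = CreateObject("Scripting.FileSystemObject")
Set logFile = fso.OpenTextFile("\\fileserver\reports\plugin_hits.txt", _
                               ForAppending, True)

Dim dbList: Set dbList = conn.Execute("SELECT name FROM sysdatabases")
On Error Resume Next    ' databases without the table will raise an error
Do While Not dbList.EOF
    dbName = dbList("name")
    sql = "SELECT TOP 1 * FROM [" & dbName & "]..PluginTable"
    Set rs = conn.Execute(sql)
    If Err.Number = 0 Then
        If Not rs.EOF Then logFile.WriteLine dbName
        rs.Close
    End If
    Err.Clear
    dbList.MoveNext
Loop

logFile.Close
conn.Close
```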

So, this script will cut down on the potential for missing a database; it reports all instances, so even if the plugin has been removed, we can verify whether the database should get the plugin or not.

tl;dr: my AMAZING VB scripts win again </sarcasm>, but it will be nice to use this down the road to get an accurate count of databases..

My end goal with this project is to make it semi-automated, where it can output the available plugins and what versions are where, and update accordingly.. It’s a long way off, but it will cut down on the human aspect of a giant upgrade process..

MySQL Clusters are so awesome they can be the biggest PITA.

So for those who don’t have me added on Facebook, you didn’t know: one of our big projects at work has been to get a MySQL Cluster online and functional.

This is big for a whole host of reasons, the biggest being that our hosted VoIP phone solution has been taking off; what this entails is more servers, with more backend databases needed.

We have been noticing that our current solution, for both our web and our phones, has been hitting the peak of its usefulness, so we thought: let’s cluster it!

After a failed setup by another co-worker (not 100% his fault; he has many, many things to do and no time to do them in), I took over and reinstalled Debian Wheezy on a Dell R410 and a Dell R420, both with 32GB of RAM (we are working on getting them boosted to 64GB, for MOAR POWA!).

I spent 6 hours last Wednesday, into the evening, getting the cluster configured and talking between all nodes. The setup consists of two management nodes, two NDB data nodes, and two mysqld nodes. This gives the ultimate level of redundancy (IMO) and makes it easy to scale with new data nodes without the worry of having to deploy new mysqld/management servers to handle load.
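For reference, that topology maps onto the cluster’s config.ini something like this (the IPs are placeholders):

```ini
# config.ini -- two of each node type (addresses are examples)
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=192.168.0.10
[ndb_mgmd]
HostName=192.168.0.11

[ndbd]
HostName=192.168.0.20
[ndbd]
HostName=192.168.0.21

[mysqld]
HostName=192.168.0.30
[mysqld]
HostName=192.168.0.31
```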

So.. tl;dr: we had one of our development days today. I spent the duration of the day making sure that, in the event of a reboot, both systems would rejoin the cluster with no issues, along with making sure I could do rolling updates without so much as a peep from any active clients on the servers.

Once I had those down pat and documented, I proceeded to move our Cacti install’s database over to the cluster; I figured if anything could be moved, this could.

Due to the nature of NDBd, I imported all of the tables into the database with their MyISAM engine structures, then changed the engines over to NDBCLUSTER.
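The engine swap itself is one statement per table; for one of the Cacti tables, for example, it looks something like:

```sql
-- flip an imported MyISAM table over to the cluster engine
ALTER TABLE poller_output ENGINE=NDBCLUSTER;
```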

The process was all unicorns and ponies until I hit the last two tables, where I got an error about a #xx-XXx-xx table being full. WTF, right? So after searching.. and searching.. and searching.. I only found instances where people said their data settings were too low. With my install, that was not the case; I had tuned the servers to use 22GB of RAM for NDBd, so that we could chuck any and all data into memory and go nuts.
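For the curious, that tuning lives in the [ndbd default] section of config.ini; 22GB of DataMemory looks roughly like this (the IndexMemory figure below is just an example, not my actual value):

```ini
[ndbd default]
DataMemory=22G     # per data node; all in-memory table data lives here
IndexMemory=2G     # hash indexes are accounted for separately (example value)
```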

Digging some more, I found that I could create new tables and assign columns and data, but if a table had existing structure, like in the Invision Power Board install I was testing a migration for, nope, it wouldn’t have it. I constantly got errors about the table being full, yet we only had 6MB of data stored in RAM.. -_-

So.. in frustration, I ended up posting on the MySQL Cluster forums, so if you or anyone you know has the answer to my quandary, PLEASE LET ME KNOW.. I have a few ideas, but I’m running out of patience with this thing..

If/when I find the answer, I’ll post a new entry letting the world know what I found, since I’m apparently the only one to have this issue..


So there was a need..

So.. I finally started a blog.. I’ve resisted for years, but heck, it had to be done.

I’ve been a professional IT guy for the last 4 years (damn.. it really has been 4 years..). While it’s been amazing, some seriously AMAZING (sarcasm implied) stuff has come up along the way.

Hopefully, if I remember, I’ll try to post things here without naming names or posting anything trade-secrety.

Oh, and FYI: I’m in IT; I’m turrible at spelling and grammar, mostly the grammar part 😀