Install CentOS 7 on a Rock64

One of my recent projects has been to build out a solution for one of our applications that gathers data from a network. Right now it runs as a VM on ESX or oVirt and gathers data all day and night; it's fantastic.

This, however, was a problem for clients whose VM hosts don't have enough system resources to fit our VM, or worse, clients with no VM host at all!

In comes this service-in-a-can idea: can we transplant our VM onto an SBC (Single-Board Computer) and have it work in a more autonomous manner, gathering data wherever we put it, however we deploy it?

Our VM uses Docker containers to compartmentalize services, whether that's a system that checks popular websites for responsiveness or a container that hosts Icinga for host checks. So I opted to spec out the VM for 2 vCPUs and 4GB of RAM, more than enough to support a system that still doesn't know what else is going to be on it. My goal was to be able to support anything the development leads wanted to put on it down the line without having to worry about the system resources available to it. The worst thing that could have happened is that we sold clients VM servers sized for this specific VM, and then had to upgrade the RAM later on to support further functionality.

The Hardware

After a fair amount of searching, while keeping in mind that a cheaper alternative was more likely to gain traction and open doors with clients, I opted to go with a Rock64 4GB RAM unit. Along with the board I got the following:

  1. Rock64 4G SBC – $44.95 USD
  2. 5V 3A Power Supply – $6.99 USD
  3. Rock64 Aluminum Casing – $12.99 USD
  4. 32GB eMMC Module – $24.95 USD

All in for $89.88. Slick, right?

Note: After completing the project, I would have dropped the eMMC module in favor of a 64GB SDXC micro-sd card.

Researching CentOS on Rock64

If you've found my post, you'll know that there is LITTLE out there for CentOS compared to, say, Debian or Ubuntu. Heck, the PINE64 Installer itself, which lets you easily flash an SD card with an operating system, only offers Debian, Ubuntu, Armbian or Android images for the Rock64.

It's a slick tool, but I was a little taken aback by the lack of non-Debian distros; is running CentOS on ARM really that hard? There is a SIG dedicated to both ARMv7 and ARMv8, with current releases alongside the normal x86 variants.

Doing some Google searching, I found the following in addition to the pine64.org forum post asking about CentOS support for the PINE boards (I assume the Rock64 just uses a newer chip but is otherwise the same).

https://github.com/umiddelb/aarch64/wiki/Install-CentOS-7-on-your-favourite-ARMv8-ARM64-AArch64-board

At first glance it looks promising, but having been through the steps, it won't work as-is. Something about the Rock64 that I never dug into kept the board from booting. If there were an easy way to get a serial console on the unit I could have gotten a better understanding of the issue, but I ended up taking a different route (although I borrowed some of the ideas in the document).

Digging into the distributions that came with the PINE64 installer, I found the github for the maintainer:

https://github.com/ayufan-rock64/linux-build

Admittedly, I used Step 2 from umiddelb's document: I kept /boot, /lib/modules and /lib/firmware, but ended up using the last AArch64 build from CentOS, which happens to be 7.4.1708.

Setup Steps

The following is the step-by-step that I took from my Fedora-based system to get an SD card set up and running with the stretch-minimal-rock64-0.7.11-1075 image from ayufan, and the CentOS-7-aarch64-rootfs-7.4.1708 tarball from one of the many CentOS 7 vaults (I used Tripadvisor's).

Download the Rock64 image (based on Stretch) as well as the most recent CentOS 7 rootfs tarball.

curl -sSL -o CentOS-7-aarch64-rootfs-7.4.1708.tar.xz https://mirrors.tripadvisor.com/centos-vault/altarch/7.4.1708/isos/aarch64/CentOS-7-aarch64-rootfs-7.4.1708.tar.xz 
curl -sSL -o stretch-minimal-rock64-0.7.11-1075-arm64.img.xz https://github.com/ayufan-rock64/linux-build/releases/download/0.7.11/stretch-minimal-rock64-0.7.11-1075-arm64.img.xz

Unpack the image (we'll extract the tarball right onto the SD card later).

unxz stretch-minimal-rock64-0.7.11-1075-arm64.img.xz

Find your SD card if you haven't already (make sure it's unmounted), and write the image to the disk.
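If you're not sure which device node the card came up as, something like this narrows it down (on my system it shows up as /dev/mmcblk0):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT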

dd bs=1M if=stretch-minimal-rock64-0.7.11-1075-arm64.img of=/dev/mmcblk0 

Note: You will need to expand the root partition by a few GB; the Debian-based partitioning scheme is smaller than what we need (since I keep all of the Debian folders). If you were getting rid of everything aside from /boot, /lib/modules and /lib/firmware, there should be enough space when extracting the CentOS rootfs.

parted /dev/mmcblk0 resizepart 7 10240 # grow partition 7 so it ends at the 10240MB (~10GB) mark
resize2fs /dev/mmcblk0p7
sync

Mount the card (I chose /mnt/sd-rock64) and move the folders and files around.

mount /dev/mmcblk0p7 /mnt/sd-rock64
cd /mnt/sd-rock64
mkdir debian-src
mv * debian-src/
tar --numeric-owner -xpJf /mnt/CentOS-7-aarch64-rootfs-7.4.1708.tar.xz -C .
# I move the CentOS folders just because I'm a pack rat; feel free to remove them instead.
mv boot boot.centos
mv lib/firmware lib/firmware.centos
cp -R debian-src/boot/ .
cp -R debian-src/lib/firmware lib/
cp -R debian-src/lib/modules lib/

Use blkid to get the UUID of mmcblk0p7, then add an etc/fstab with the root partition defined.

echo 'UUID=ccdc39df-5dee-41fb-8f85-c68ee54dcd94 / ext4 defaults 0 0' >> etc/fstab
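If you'd rather not paste the UUID in by hand, blkid can fill it in for you; something like this should work (assuming the card is still /dev/mmcblk0p7 and you're still sitting in /mnt/sd-rock64):

echo "UUID=$(blkid -s UUID -o value /dev/mmcblk0p7) / ext4 defaults 0 0" >> etc/fstab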

Lastly, change the root password hash in etc/shadow. I never could find what the default was supposed to be; some people said "centos", but it never worked for me, and this is just easier. Generate a new hash with:

openssl passwd -1
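To drop the hash into etc/shadow in one go, something along these lines should work (still from /mnt/sd-rock64; the hash characters are safe with | as the sed delimiter):

HASH=$(openssl passwd -1)   # prompts for the new root password
sed -i "s|^root:[^:]*:|root:${HASH}:|" etc/shadow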

Go ahead, unmount the root partition and stick the SD card into your Rock64; it should boot up. Having run back through these steps, I don't see any glaring errors, or at least I haven't come across anything yet that was a showstopper.
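For completeness, the teardown is just:

cd /
umount /mnt/sd-rock64
sync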

Final Thoughts

My setup may not be what you need, or have a vision for, but my hope was to provide a more verbose setup method for those that may come across this.

In the end I probably won't be using the Rock64 unit for my specific application; as it turns out, some of the vendors behind the services that run in our containers don't supply binaries compiled for ARM CPUs. I'm looking at you, Icinga; I don't really want to sit here for an hour waiting for it to compile just to find out that the permissions are wrong or something didn't compile quite right.

Hopefully your build works out. The Rock64 is an amazing piece of technology for under $100, so it will be at the top of my list down the line if I get another chance to look at ARM-based systems.

New job, New tech, more fun!

It's been a long time since I've used the blog, over a year in fact (sorry..), but a few months ago I started a new adventure! I changed roles, industries and companies, not to mention I went from a solidified technology stack to learning a lot of new things right out the door.

Today I am a Senior Systems Architect in the Development group at Cloud5, a hospitality service provider. The team I'm on is tasked with all of the monitoring and data analytics projects around the company's new client dashboard.

It's a lot of fun and exciting challenges, as I came from a very Puppetized world to one where I'm designing and implementing the initial solutions used for gathering data on networks and user experiences.

My technology stack has changed a bit as well. The former place used XenServer/Proxmox clusters for VMs and containers, managed by a central Puppet master, and our networks were 100% Juniper, so management was easy. The new place is a bit different in that I'm not managing production systems; I'm tasked with developing the new technology and delivering solutions to the operations team, which are vetted and implemented per my spec (at their discretion).

The benefits of no longer being on an operations team are fantastic: I'm not on call (HOORAY!), and as long as my work is getting done in a timely manner I can leave the office and have a proper work-life balance, which is not something I've had in 10 years.

Technology wise, I came in after the initial product design sessions; I've had wiggle room, but I've really tried to work with what was previously designed. My thought was, why reinvent the wheel (or the design spec) if it seems like a viable product and the people before me put a TON of effort into figuring out solutions to the problems they were having at the time? This has allowed me to work with oVirt, an open-source hypervisor platform backed by Red Hat that compares well with VMware: it's KVM (libvirt) under the hood, it's open source, and it has the backing of a large company in Red Hat. A feature I've liked so far is that you can set up what's called an ovirt-engine instance and manage your oVirt nodes with cluster and datacenter segmentation. The engine ends up managing the nodes through Ansible and an in-house application called VDSM. I won't get into much detail (that's for another post), but it's a very versatile and ACTIVE project. I've submitted three posts to the user list and each one has been answered within a day; I've had prior experiences posting to user lists only to be left hanging, but Red Hat engineers have been quick to reply to my questions, and I've not had a request go unresolved in some manner.

My latest project has been Docker. I have enough experience to be dangerous with LXC/LXD, and I had always looked at Docker as something that might have gotten too big too fast, so I'd put it aside as a viable option (among other reasons from a pure technology viewpoint that I won't go into). However, between the documentation for Dockerfiles and the usability of docker-compose, my (limited to a few days) experience has been rather positive. The number of people using it has resulted in just as much activity on places like Stack Overflow, so any issues I've come across have been quickly resolved. I'm still on the fence about Docker from a technology standpoint, but as far as usability is concerned, so far I'm not complaining.

I'll try to post more often; I've had a lot of experiences that are worth documenting here. I'm also looking at even more interesting technology that future projects will work with, coming up down the road!

Multiple clusters on the same L2 LAN? Check your cluster ID!

Over the last few weeks (possibly a month), the principal architect at work has been diligently working on converting our single staging environment to a dual staging environment.

The company I work for has two datacenters in the US (geographically separated, yay!), and over the past few months we have been working on various projects that have required more than a single development or staging environment. This is where our principal architect comes in; his mission has been to convert our whole Puppet code base, as well as countless servers in our home office, to a two-site format.

He conquered this mountain of a task; however, over the last few weeks he ran into an issue that took various forms and cost many, many late nights of troubleshooting.

Our dual staging environment uses SRX 650s in a cluster, with EX switches and MX routers, to properly emulate our production environments (with some budget-constrained network substitutions). It's cool to see how it's come together, being able to properly test every piece of our server and network architecture in a closed environment.

Yesterday, however, my co-worker approached me and explained the following. Part of our production network includes various L2 point-to-point links that allow redundancy and quick communication between the datacenters. He had gotten to the step of setting up this L2 connection and had cabled up the two networks, but was having issues: once he had the cables connected on either side, his attempts to ping either endpoint were futile. Curious, I logged in to both SRX clusters, ran mtr to either side, and sure enough nothing was getting through. I took the task on and began tracing everything, making notes of which port plugged into which.

After gathering all of the data I began looking through logs on the SRXs, but nothing. I started monitors on either side, looking at the VLAN each was bound to. This is where things started looking weird. Let's call the two sides Side "A" and Side "B": side A would show the ARP requests both in and out, but side B only showed them coming in, never going out.

The hell?

We were emulating a poorly designed network (it was done by a dev with aspirations of being a sysadmin), and the connections would go:

Weird, right? Oh well, it works and that’s what we have to deal with until the next refresh.

So I decided to add L3 interfaces on the switches to see if I could spot any problems at the lowest connection point. After doing so I found that pings between the EX clusters were STILL dropping horribly. Going one step further, I cut the connections from either side to the SRX clusters on that VLAN, and to my surprise I could ping between the L3 interfaces on the EX clusters.

The next step was adding one SRX cluster back into the fold to see what would happen. Starting with the A side, I enabled the VLAN on the aggregate interfaces back to the cluster and watched the pings intently.

Nothing changed.

Huh? Pings still worked, AND I could ping from the side B EX to the side A SRX IP. Blasphemy. So I enabled the side B VLANs on the trunk and watched as pings went to >90% packet loss. Frantically, I reverted the B side and decided to switch them, enabling side B and disabling side A's SRX cluster to see what would happen.

The same thing! Pings were fine, flawless even, between the A side SRX cluster and the B side EX cluster. What the heck. For science's sake I enabled the side B SRX and confirmed the ping failure was back.

My next step had me playing around on the B side EX cluster, looking at the MAC addresses the switch had learned.
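On an EX that's something along the lines of:

show ethernet-switching table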

As the MAC addresses scrolled past, an odd one caught my eye:

00:10:db:ff:10:00

Odd, right? Seems. Mechanical. So I dug into it, matching all instances of that MAC. The first thing I noticed was that this MAC appeared on BOTH EX clusters, all the while the entries carried the EXACT name of the aggregate interface from the side B SRX.

I could clear out the ARP table and it would still reappear, building back up on all of the VLANs it was apparently associated with. It just didn't feel right.

One of the useful things you can do on Juniper equipment (and I'm sure something similar is available from other vendors) is issue start shell to drop into a stripped-down BSD shell, giving you access to the filesystem and various basic commands, including mtr.
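In practice it looks something like this (the target address here is just a placeholder):

start shell
mtr 192.0.2.1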

I decided I wanted to see what would happen if I had an mtr running between each node. There was something odd about how quickly things degraded when I enabled both sides, and it didn't make sense just looking at a ping. So I disabled the B side SRX VLAN and started an mtr from the A side SRX to the B side EX, watching as everything worked perfectly while showing one hop (which is expected). I then re-enabled the B side SRX and found that the mtr was showing the IPs of the B side AND the A side SRX.

Wait, HUH?!

It started making more sense to me, well, only a little. I postulated that the issue had to do with something in the cluster: that the MAC address from one side was causing the opposite side's SRX to get confused and not reply back properly. Seeing both IPs in the mtr made the lack of connectivity make more sense; if there is a MAC conflict, you can bet you're going to lose connectivity.

I started googling for dual SRX clusters causing issues, and for MAC address conflicts with SRX clusters, and found the following.

http://forums.juniper.net/t5/SRX-Services-Gateway/Two-Pairs-of-SRX-Clusters-on-MAC-Address-Conflicts/td-p/211431

The guy had the EXACT same issue I was facing. I couldn't believe it, so I looked over the cluster IDs of each cluster, and sure enough they both had a cluster ID of 1. Unbelievable.

In that forum topic they link to a page; here is that Juniper KB, along with another, covering the need for unique cluster IDs and how you can change said ID on each node.

https://www.juniper.net/documentation/en_US/junos/topics/example/chassis-cluster-node-id-and-cluster-id-setting-cli.html

https://www.juniper.net/documentation/en_US/junos/topics/reference/command-summary/set-chassis-cluster-clusterid-node-reboot.html
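The fix from that second link boils down to an operational-mode command along these lines (cluster ID 2 to match what we ended up using; the first is run on node 0, the second on node 1, and each node reboots):

set chassis cluster cluster-id 2 node 0 reboot
set chassis cluster cluster-id 2 node 1 reboot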

After changing the cluster ID of the side B SRX cluster to 2 and rebooting it, I saw the system INSTANTLY pick up; pings between the SRX cluster endpoints were working flawlessly. The coworker who had spent weeks trying to fix ghost issues caused by this couldn't believe it: such a small thing caused such a HUGE headache (such as random VPNs flapping because of conflicts with our remote sites unrelated to the staging environment).

TL;DR – If you decide to put more than one SRX cluster on the same layer 2 network, or anywhere close to it, make sure you change the cluster ID! Giving every cluster a unique ID is best; Juniper gives you a big enough integer space. Juniper generates the reth0 MAC address based on the cluster ID, so if you have the same cluster ID across multiple clusters, you're going to have a bad day.


HTTP2… is… fascinating…

So I've been trying to do some more reading lately, and this one caught my eye. I have been interested in web technologies for as long as I can remember, and seeing the latest drafts come through for HTTP2 is AWESOME.

Check out Daniel Stenberg's site (http://daniel.haxx.se/http2/), where he has a PDF that lays out the new HTTP2 protocol standards.

I'm looking forward to future adoption by NGINX and Apache (as of writing, neither has released an update to support HTTP2 :sadface:), when I can start playing around with the new features HTTP2 offers to web developers!

All for now!~

So I heard you want redundancy AND space? Let’s get you on a RAID… 0?!

That's right! Over the last week I had been communicating with a client that uses us to host one of their clients, and we were looking at possibly doing an emergency restore of their data onto our platform. Great, right?

The reason behind them needing an emergency deployment on our systems came out today. See, I knew that there had been a catastrophic drive failure, which resulted in the loss of about a week or so of data (since that was the last time they did off-server backups!). What I did not know, and found out today, is that this server was not one they set up (thank god), but one inherited from the previous IT group. That group had been given the specs by GE for their CPS product; every time we get a new client we have to go through the hoops required to spec out the client's environment, even though we have ours tuned perfectly for us.

What bugs me is that, in this calculator they give you, a 10-doctor practice can be advised to have a database server with 14 spindles as a minimum. Wait, 14?! Yes, that's right, 14 as a minimum, and that's not all: they have to be SAS or SCSI drives, or you need so many spindles for SATA that you're paying up the arse to host your product.

I get it, though: the more drives, the more redundancy and the better read and write times you have. For a product that has been pieced together like Frankenstein's creation, you have to set the bar high to make up for any programming issues (and don't get me started on those).

This leads us back to why I'm writing this post. This client's former IT, with full knowledge of the recommended setup, consulted, procured and set up a solution that was 100% VM, but did so in a way that just makes me wonder how some people are in IT to begin with.

The kicker? They somehow managed to set up this virtual host with a RAID 0. Not the RAID 5 that was expected and assumed to have been done, but a RAID 0. I mean, I can see a RAID 0 across two drives when you want tempdb to be as fast as possible with as much space as possible. But hell, that's tempdb; if it takes a dive, it's not (necessarily) the end of the world. Instead, a system was set up that served VMs (OS unknown), ran 24/7 and had monitoring, but the drives were set up as one giant RAID 0. Way to go. Just. Wow.

I've played the various scenarios through my head after hearing about this, trying to work out how a virtual host with MISSION-critical items on it could be deployed with ZERO redundancy; it just astounds me. Now the client is looking at a week or two of downtime and thousands of dollars paid to a data recovery company because of some incompetent IT guy (thanks to him/her for giving those of us with half a brain a bad name).

Moral of this story, guys? Don't use a RAID 0 when you have mission-critical software on it; it's stupid, and it's dangerous.

2008 R2 Terminal Servers and Printers……

So, at the 8-5, I am in charge of the architecture of our GE Centricity Hosted Platform. Part of that is maintaining and enhancing the Terminal Servers that clients log into.

Over the last few weeks, we had been presented with an issue where users would log in to the system, go to open the EMR portion of CPS, and find that it took upwards of 100 seconds to bring the Chart application up. Wait, 100 seconds?!

No application should EVER take that long to load, but here's the catch: the EMR portion loads the printers for the user at launch time. Loading printers when you start an application SEEMS like a good idea, but it's not, and here's why: when you load all of the printers, if your print spooler or a printer is having issues loading, it causes a chain reaction. Great, right?

So this circles us back to the complaints of 100+ second logins. After digging through the servers I found that our friend, Mr. Print Spooler, was over 300MB. Immediately after restarting the spooler it would climb to about 75MB and sit there until people started logging in and doing printer-related functions. To test my theory I jumped around servers looking at the various sizes the spooler had grown to, and found that in our solution (we deploy Dell R410s and R420s with 32GB of RAM), once the spooler got above 200MB, spooling degraded and printer performance went down with it.

To remedy this I created a VBScript that checks the current spooler size (very, VERY roughly); if it is above the 185MB mark, the script restarts it. This has been extremely reliable, and in the 3 days I've had it in production it has seemed to hold the servers steady.
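The core of the idea is only a few lines of WMI. This is a rough sketch, not the production script; the 185MB threshold is the one mentioned above and the sleep is arbitrary:

' Restart the spooler if spoolsv.exe's working set crosses the threshold
maxBytes = 185 * 1024 * 1024
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set procs = wmi.ExecQuery("SELECT WorkingSetSize FROM Win32_Process WHERE Name = 'spoolsv.exe'")
For Each p In procs
    If CDbl(p.WorkingSetSize) > maxBytes Then
        Set svc = wmi.Get("Win32_Service.Name='Spooler'")
        svc.StopService
        WScript.Sleep 5000
        svc.StartService
    End If
Next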

Since we deploy both RDP and Citrix servers, and I had confirmed Citrix had similar issues, I tried putting Citrix spooler restart commands into the script. Note: they don't work; it's something about the way the spooler for Citrix works, or at least my script could never get it to work, so we disabled that part for our Citrix servers.

Another issue I found with this script is that if you have users with locally attached printers, those printers will be put offline and will NOT come back until the user restarts.

I set this to run every 5 minutes and it seemed to work well (you know, minus the missing local printers); we didn't get ANY complaints about the EMR application loading, so it was a win in my book.

With all of that said, if you don't have locally attached printers but do have issues with the print spooler growing too large, I hope this helps you.


Finding the elusive tables with Change Tracking enabled (MSSQL)

So this is now the third time that I've been burned by a database with two tables that have Change Tracking enabled. Each time until now I kept forgetting to write them down somewhere.

So if you come across this post, here is the query to run against a database to find out just which tables are set up with change tracking.
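Something along these lines does it; sys.change_tracking_tables is the catalog view that knows:

SELECT OBJECT_SCHEMA_NAME(ctt.object_id) AS schema_name,
       OBJECT_NAME(ctt.object_id)        AS table_name
FROM sys.change_tracking_tables AS ctt;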

And just like that, I don’t have anything else to add.. thanks for reading, back to work I go!

Some days, VB Scripting is just AWESOME.

So it's been a while (AGAIN!); I'm such a slacker, sorry..

Last night we had to do an update for a specific version of the EHR software we host at work. While normally updates are fine, the way this specific update had to be completed resulted in the following steps for over 70 separate databases:

1) Find all of the instances where the plugin is installed (each database)
2) Download the update (takes about a minute)
3) Uninstall the old version of the plugin
4) Install the new version, so that it runs against the MSSQL database.

While this process on paper seems easy/quick…

It’s not.

To do 70 database updates with 3 guys working on updating plugins and servers (think terminal servers), it took us about an hour and a half, so you can imagine how long it would take for one person to do everything. Oh, and did I mention this plugin was the LEAST used one?! Yup, the number one plugin has roughly 130 databases using it… eep.

I have in the past attempted to contact the vendor, to see if there was a way to automate the installation.

Of all of the steps posted above, I think I can circumvent the following..

- Downloading the plugin to the website. I can fix this by downloading to one central location and copying the contents to the website where it indicates what version you have. The only problem is that if, in a specific version, they fix verbiage in the database (the kind of thing the SQL query that runs at the end takes care of), we won't ever know. Hence my previous comment: I had talked with the vendor to see if they could release the SQL scripts embedded in the cab file (I have tried most known applications to open the cab file, but each time it comes down to a compiled exe ._.), and when approached, the vendor stated that because our setup is so unique they would not help us out (you know, because being the premier cloud platform for their product and being nationally recognized doesn't do it for them..), so I'm left hoping that the SQL job is never really needed.

This leads me to my biggest hurdle: documentation. While things have been better since I took over, there is still too much room for error. For example, how do I know when a new database is added, and whether it has the plugins I need to update?!

That's where my project today came into play. I noticed that my task list was light today, so I took the opportunity to create a nifty (in my opinion) script that will take a comma-delimited list of servers, connect to each one, grab the available databases, and query for the specific plugin that you are looking for.

Since I'm a PHP developer and know NOTHING about proper VB coding, I went to Google once I had the direction I needed for the code and looked for a way to query a server for a list of all of its databases. To my delight, someone had posted a very easy and nifty way to do just that: the sysdatabases table in master contains everything I needed!

Here is the code I used to query that, using the given server.
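It boiled down to an ADO connection and a query against sysdatabases; a trimmed-down version looks roughly like this (the server name is a placeholder):

' Pull the list of databases from a given server
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=SQLSERVER01;Integrated Security=SSPI;"
Set rs = conn.Execute("SELECT name FROM master.dbo.sysdatabases")
Do While Not rs.EOF
    WScript.Echo rs("name")
    rs.MoveNext
Loop
rs.Close
conn.Close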

This gets called first to grab the list of databases on the server. I then plug that list into a loop where the script queries each database and checks whether any rows are returned; if they are, I mark it. Typically, if the plugin is active within a database, we can find a row in a specific table.

As I loop through the databases, if a hit appears on the table, I write to a text file stored off-server so that I can view it when I need it.

So this script will cut down on the potential for missing a database; it reports all instances, so even if the plugin was removed we can verify whether or not the database should get the plugin.

The tl;dr: my AMAZING VB scripts win again </sarcasm>, but it will be nice to use this down the road to get an accurate count of databases.

My end goal with this project is to make it semi-automated, where it can output which plugins are available, which versions are where, and update accordingly. It's a long way off, but it will cut down on the human element of a giant upgrade process.

MySQL Clusters are so awesome they can be the biggest PITA.

So, for those who don't have me added on Facebook and didn't know: one of our big projects at work has been to get a MySQL Cluster online and functional.

This is big for a whole host of reasons; the biggest is that our hosted VoIP phone solution has been taking off, which means more servers, with more backend databases needed.

We have been noticing that our current solution for our web and our phones has been hitting the peak of its usefulness, so we thought: let's cluster it!

After a failed setup by another co-worker (not 100% his fault; he has many, many things to do and no time to do them in), I took over and reinstalled Debian Wheezy on a Dell R410 and a Dell R420, both with 32GB of RAM (we are working on getting them boosted to 64GB, for MOAR POWA!).

I spent 6 hours last Wednesday, into the evening, getting the cluster configured and talking between all nodes. The setup consisted of two management nodes, two NDB data nodes and two mysqld nodes. This setup gives the ultimate level of redundancy (IMO) and makes it easy to scale out with new data nodes without the worry of having to deploy new mysqld/management servers to handle the load.
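For the curious, that layout in config.ini terms comes out roughly like this (hostnames and the memory figure are placeholders, not our actual values):

# config.ini on the management nodes -- two of everything
[ndbd default]
NoOfReplicas=2
DataMemory=20G

[ndb_mgmd]
HostName=mgm1.example.com
[ndb_mgmd]
HostName=mgm2.example.com

[ndbd]
HostName=data1.example.com
[ndbd]
HostName=data2.example.com

[mysqld]
HostName=sql1.example.com
[mysqld]
HostName=sql2.example.com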

So, tl;dr: we had one of our development days today, and I spent the duration making sure that in the event of a reboot both systems would rejoin the cluster with no issues, along with making sure I could do rolling updates without so much as a peep reaching any active clients of the servers.

Once I had those down pat and documented, I proceeded to move our Cacti install's database over to the cluster; I figured if anything could be moved, this could.

Due to the nature of ndbd, I imported all of the tables into the database with their original MyISAM engine structures, then proceeded to change the engines over to NDBCLUSTER.
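The engine swap itself is just an ALTER per table, along the lines of (the table name here is a placeholder):

ALTER TABLE some_table ENGINE=NDBCLUSTER;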

The process was all unicorns and ponies until I hit the last two tables, where I would get an error about a #xx-XXx-xx table being full. WTF, right? So after searching, and searching, and searching, I only found cases where people said their data memory settings were too low. However, with my install that was not the case; I had tuned the servers to use 22GB of RAM for ndbd so that we could chuck any and all data into memory and go nuts.

Digging some more, I found that I could create new tables and assign columns and data, but if a table had existing structure, like in the Invision Power Board install I was testing a migration for, nope, it wouldn't have it. I would constantly get errors about the table being full, yet we only had 6MB of data stored in RAM. -_-

So, in frustration, I ended up posting on the MySQL Cluster forums, so if you or anyone you know has the answer to my quandary, PLEASE LET ME KNOW. I have a few ideas, but I'm running out of patience with this thing.

If/when I find the answer, I'll post a new entry letting the world know what I found, since I'm apparently the only one to have this issue.

PEACE.

What NOT to do on a Network… *shakes head*

So, I've been in Kansas City turning up a nice new client for our cloud hosting. Since their internet had not been kind about letting me in to check things out beforehand, we got onsite and had a not-so-warm welcome from their IT infrastructure.

After arriving and checking out the machines and networking equipment, we found the following:

1. The domain was a 2003-based domain named something like r512A3.com (I could only shake my head at this)
2. Workstations had no solid naming convention; after going to ~10 workstations, I did find a pattern: they all used the Windows XP default naming convention -_- (the random letters and numbers, wtf!)
3. There were a total of about 35-40 PCs and Wyse terminals, but they had three 24-port racked switches and 6 hubs. Yes, hubs. So the reason they kept losing connectivity or had random connection drops? Yup, that daisy-chained hub was the issue.
4. And the straw that broke the camel's back, so to speak: EVERY machine had a static IP config. Yeah, *every one*. Oh, and who wants to guess why? Well, it's easy: the old IT never set up the DHCP role on the DC. Awesome.

So now that your head hurts as much as mine has over the last day, my rant is over. I understand that some things are a preference of the IT that set it up (using a .com TLD instead of .local for a local domain, for example; I prefer .local because of what it is), but gosh, how the HELL can someone eff up a domain so badly?

After setting up DHCP on our new firewall, changing all of the local machines to DHCP and setting up all of the printers in the firewall’s DHCP reservation, we were on track for a good morning.

Fast forward to this morning: we got in, and luckily the work we did last night helped immensely; there were some printing hiccups as we made sure to iron out all of the details for our cloud platform. Between the printers and an emergency call to a local cabling guy, we were able to do 8 new runs (by the way, this place is 11k square feet!), which allowed us to remove the hubs and hook up our three new wireless access points. Score!

At the end of the day, we found ourselves with a solid network, a wireless network that was ready for the docs to use, and things looking good.

Now that I've gotten my ranting out, I'm off for the night. For those who read these, thanks; for those who don't, you might be missing out.

Peace
-B