I know it's been quite some time since I last wrote a post, and I am trying to get back into blogging again.
That aside, I was recently involved in a selection panel for a new supplier. While not normally exciting, and usually filled with a lot of standard terms and conditions, I was quite shocked to see that a major vendor's SLA section committed them to 24x7 support with a 4-hour response (24x7x4), yet made no mention of any penalty if they breached it.
We raised this with the vendor, and what was even worse was their response: "Yeah, that's our normal. If you want penalties included for breaches, then you have to pay a higher cost for the SLA."
I'm sorry, but I thought that's what an SLA is for?
I would love to hear your opinions on the matter: should all SLAs, especially enterprise-level SLAs, have penalties built in no matter what, whether financial or otherwise?
You can probably guess by now that the vendor in question was ruled out of selection on these grounds, as the cost increase for the penalty-backed SLA was approximately 30% on top of the original offering.
In April this year, Chrome issued a warning shot to its users about NPAPI: it would not exist beyond Chrome 44.
Until Chrome 45.x was officially released, users could simply re-enable NPAPI extensions in the popular browser and put up with an annoying yellow bar reminding them it would be unsupported "soon".
Well, now it's official: NPAPI is hard and fast out of Chrome, and from what I read, ActiveX extensions have likewise been stripped from the new versions of IE/Spartan.
Flash has already moved to the newer PPAPI scheme, and can be enabled by users if needed.
Unfortunately, for the majority of people who rely on NPAPI extensions (Java and Silverlight, just to name two), those plugins will no longer work.
Oracle (Java) have not released an official statement other than to recommend that users use Firefox or Safari if they need to utilise NPAPI plugins. This is a pretty poor stance in my opinion. Hopefully we hear something soon regarding a feasible outcome.
Personally, I think a lot of people who are 100% devout Chrome users will feel the pinch for a while until their plugins are updated to the new scheme or rewritten for HTML5. From a security perspective, I applaud the stance taken by Chrome in disabling a plugin API that was first introduced in Netscape Navigator 2.0 back in 1995. Java has been riddled with bugs and vulnerabilities, and when it is on the "6 Billion Devices" it claims, it is a rather large target.
Most companies with major support utilities written against NPAPI for web browsers will no doubt be scrambling to release new versions of their tools. I look forward to seeing what the developers at Dell, VMware, and EMC, just to name a few, release in the coming months!
Long time no blog; I have been really busy, unfortunately (or fortunately, depending which way you look at it).
Recently we have noticed an increase in VMs failing to reboot correctly on soft-boot.
It looks more and more like we have run into a bug between Microsoft and VMware, whereby the TSC (Time Stamp Counter) is not reset correctly upon reboot if the VM has been up for more than roughly 60 or 100 days, depending upon the configuration.
VMware tells me that this is by design in VMware virtual hardware version 10, to bring the vHardware in line with the Microsoft Server Virtualization Validation Program and to keep a better clock sync within a VM.
From here, if you are experiencing the issue, you have a couple of options:
1A) If you are running ESXi 5.5, update to 5.5 U3 when it becomes GA
1B) If you are running ESXi 6.0, update to 6.0 U1 when it becomes GA
2) Investigate the VMX workarounds described by VMware in the articles below.
3) Continue to hard-reset any affected VMs as they arise.
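For option 2, the VMX workaround is a single advanced setting that forces the TSC to be cleared on a soft reset. A sketch, from memory of VMware's KB article (check the article for your exact build before applying); with the VM powered off, add to the .vmx file:

```
monitor_control.enable_softResetClearTSC = "TRUE"
```

The VM needs a full power cycle for the setting to take effect, and you are trading away the vHW v10 clock behaviour described above, so treat it as a stopgap until the patched update is GA.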
As part of a recent push at work, we have been trying to virtualise systems that belong to other departments, primarily to increase redundancy and gain all the great things that virtualisation brings.
But what happens when you are told that something "can't be virtualised"? I recently encountered this with two popular Building Automation System (BAS) vendors.
One was a simple-ish case of "it needs a local serial connection to the controller", which a simple but high-quality MOXA P5150A would solve. Not only does it handle a range of configuration types, but it is also powered by PoE, which in turn is UPS-backed via our Cisco 3750s.
The other used a USB key plugged into a Dell workstation, which held the identity and licence for the site. This was a little more challenging, as it was not made clear this was the case until I had already run the P2V using VMware Converter Standalone. It was only when I was removing the old physical hardware that I noticed the key sitting in the back.
After some googling and digging, I found that a USB key can be passed from a workstation running the VI Client through to a VM, but this would not provide a permanent connection, nor a suitable outcome for the customer. What I did find is that if you insert the USB key into the ESXi host itself (ours are running 5.5 Update 2 with critical patches) and reboot the host that the key is connected to and the guest will run on, you can then "Add Hardware" on the VM and map the USB device directly from the host hardware through to the VM.
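Before the "Add Hardware" step, it is worth checking that the host's USB arbitrator service is up, since host-to-VM USB passthrough depends on it. A rough sketch over SSH to the ESXi host (paths per ESXi 5.x; verify against your build):

```
# On the ESXi host, check the USB arbitrator service
/etc/init.d/usbarbitrator status

# Start it if it is not running
/etc/init.d/usbarbitrator start
```

With the arbitrator running and the key inserted, the device should then be selectable when adding a USB controller and USB device to the VM in the client.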
It worked a treat: after doing this and a subsequent reboot of the VM to clear the error in the BAS software, it worked as if it were on physical hardware.
Some docs that I found really useful stem from the link below:
VMware Pubs – USB Passthrough
It was really important to us that we manage to virtualise these systems, and thankfully VMware and the broad range of tools available allowed us to complete it and deliver a faster outcome than the Windows-on-hardware approach. More to come on this topic in a future blog about when a VM is faster than local hardware.
As many Aussies know, we have two time shifts each year for daylight saving: the first Sunday in April (clocks back one hour) and the first Sunday in October (clocks forward one hour).
On Sunday evening I received an alert from my Isilon cluster saying that its time had drifted by more than 4 minutes from the AD time and that authentication would be affected.
Sure enough, it had: any CIFS/SMB access was problematic, but NFS was unfazed.
After a discussion with EMC support, it was determined that there were too many time sources on the cluster: both NTP and SMBTime. SMBTime is a service that pulls time from Active Directory, while NTP is the standard Network Time Protocol.
In our setup both eventually point to the same time source, but it became apparent that the two services have no precedence order between themselves and just end up in a race condition.
EMC support were excellent in assisting me to diagnose the issue and provide recommendations to remediate the system.
On EMC's recommendation we disabled the SMBTime service and forced a re-sync so NTP would reset the clock. Once done, data access was back to normal and the alerts cleared as a result.
It has left me with questions, as the system was set up by an integrator/partner and left in an unrecommended configuration.
More to follow….
Last week I did an OE upgrade on an EMC VNX2 Series SAN.
All pre-checks passed, new files downloaded, ready to go right? No.
I go to start the upgrade with my domain admin user via LDAP, and can't proceed due to "Insufficient CLI Permissions". Weird. Never mind, I'll log back in with the root/sysadmin user.
Hold on, that doesn't have permission either? I know full well I can log in manually if needed, so why won't the upgrade work?
Turns out that if you have complex passwords with special characters, the upgrade bombs out with an error.
The quick workaround is to make a local admin user with a simple password for the upgrade, use it, and delete it when done. Not ideal, but it works, I guess.
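For reference, the workaround in command form; a sketch assuming the classic naviseccli security syntax (hostname, username, and password below are placeholders, not the values used):

```
# Create a temporary local administrator with a simple password
naviseccli -h spa-hostname security -adduser -user upgradetmp -password Simple123 -scope 0 -role administrator

# ...run the OE upgrade logged in as upgradetmp...

# Remove the temporary user once the upgrade completes
naviseccli -h spa-hostname security -rmuser -user upgradetmp
```

Double-check the exact security subcommand options against your Navisphere CLI version before relying on this.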
I know it's not only EMC products that let you set a password with special characters and complexity, then break login due to "invalid" characters. Why do vendors allow this to happen in their software?
Better than another vendor that allows long passwords but only stores the first 8 characters…
So we got some new Dell M630 kit to virtualise SQL.
One of the briefs was to match the performance of the existing physical hardware. This was not too challenging CPU/RAM-wise, as the existing kit was 5 years old.
What was interesting was the existing PCIe Fusion-io cards in the Dell M610X blades we had. Dell no longer offers a "full height" blade to accommodate PCIe cards; instead they offered us their new NVMe PCIe SSD cards, and at 1.6TB each, they're certainly not lacking in space.
Nor are they lacking in performance. Holy hell, these things are fast! While I have only had them for two days of limited testing, based on SQLIO testing between the two systems, the NVMe drives are approximately twice as fast for 8KB random reads/writes, and three times faster for 64KB sequential reads/writes!
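For anyone wanting to repeat the comparison, the SQLIO runs looked roughly like this; the test file path, duration, and thread/queue depths here are illustrative, not the exact parameters I used:

```
:: 8KB random reads, 8 threads, 8 outstanding I/Os, latency stats, no buffering
sqlio -kR -t8 -o8 -b8 -frandom -s120 -LS -BN D:\testfile.dat

:: 64KB sequential writes, same thread and queue settings
sqlio -kW -t8 -o8 -b64 -fsequential -s120 -LS -BN D:\testfile.dat
```

Run the same parameter set on both the old and new storage and compare the IOPS, MB/s, and latency histograms that SQLIO reports.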
Thanks for coming to this new venture of mine. I've decided to start a bit of a blog; no idea how long it will last.
I'll be blogging about my work with Cisco network/security/wireless, EMC SAN/NAS, and VMware technologies.
My blogs are my own; they do not represent my employers, past, present or future.