F5 Load Balancers (LB) have been a common feature across a number of environments I’ve worked in. While administration of these devices is generally performed via the web interface, F5s also expose a REST API that allows the same management tasks to be performed programmatically. This opens up the possibility of using VMware’s vRealize Orchestrator (vRO) to manage F5 Load Balancers via that same REST API.
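Within vRO this is typically done by registering the BIG-IP as a REST host via the HTTP-REST plugin. As a quick illustration of the API itself, here’s a minimal PowerShell sketch that lists LTM pools via iControl REST – the hostname and credentials are placeholders, and lab appliances with self-signed certificates may need certificate validation relaxed:

```powershell
# Minimal sketch: list LTM pools via F5's iControl REST API.
# "bigip.example.com" and the credentials are placeholders.
$cred = Get-Credential
$pools = Invoke-RestMethod -Uri "https://bigip.example.com/mgmt/tm/ltm/pool" `
    -Method Get -Credential $cred
$pools.items | Select-Object name, partition
```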
Building Nutanix AHV Templates with Packer
Packer is a tool that many IT infrastructure professionals will be familiar with. It allows the creation of “machine images” (or base templates) in a way that’s consistent and highly repeatable. The result is machine images that can be used on a variety of platforms, from cloud providers like AWS and Azure to on-prem infrastructure like VMware, all configured to your organisation’s needs.
Nutanix has its own hypervisor, AHV (Acropolis Hypervisor), which is based on KVM. Since Packer supports building against KVM, it can be used to build templates for a Nutanix target platform. This post will detail the process I went through to create a Windows 2016 template for Nutanix.
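For a taste of what’s involved, below is a minimal sketch of a Packer template using the QEMU builder to produce a qcow2 disk image, which can then be uploaded to the AHV image service. The ISO path, checksum, password and answer file are all placeholders, not the actual template from this post:

```json
{
  "builders": [
    {
      "type": "qemu",
      "accelerator": "kvm",
      "iso_url": "./iso/windows-2016.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "<iso sha256>",
      "disk_size": 61440,
      "format": "qcow2",
      "output_directory": "output-win2016",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_password": "<local admin password>",
      "floppy_files": ["./answer_files/autounattend.xml"],
      "shutdown_command": "shutdown /s /t 10 /f"
    }
  ]
}
```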
vRealize Orchestrator – PowerShell Hosts
PowerShell Hosts are one of the types of endpoint available in vRealize Orchestrator’s Inventory. By having a PowerShell Host, you can leverage the breadth of PowerShell functionality from within your vRealize Orchestrator workflows. In this article, I’ll run through adding a PowerShell Host as well as some considerations from a technical and security point of view.
Adding A PowerShell Host
vRealize Orchestrator has a built-in workflow for adding a host under Library > PowerShell > Configuration. Run the “Add a PowerShell host” workflow to start it.
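Before running the workflow, the target Windows host needs WinRM remoting enabled and an authentication mechanism the plugin supports (Kerberos being the usual choice in domain environments). A rough sketch of the prep on the target host, to be adjusted to your own security policy:

```powershell
# Run on the target Windows host before adding it as a PowerShell Host in vRO.
Enable-PSRemoting -Force                                   # enable WinRM and firewall rules
winrm set winrm/config/service/auth '@{Kerberos="true"}'   # ensure Kerberos auth is enabled
winrm get winrm/config/service                             # review the resulting config
```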
Creating Service Accounts with vRealize Orchestrator
vRealize Orchestrator (vRO) has a lot of plugins that allow it to integrate with other systems and services. One such plugin is for Active Directory. It allows you to perform a number of standard AD activities, like creating users, and vRO already has built-in workflows to create and manipulate them. In this post, I’m going to run through what you might end up implementing if you wanted to create Service Accounts via vRO.
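For context, the equivalent of such a creation step in plain PowerShell looks something like the snippet below – the account name, OU and domain are all placeholders, and in vRO the same result would come from the AD plugin’s workflows rather than this cmdlet:

```powershell
# Hypothetical service-account creation (all names and paths are placeholders).
Import-Module ActiveDirectory
$password = Read-Host -AsSecureString -Prompt "Initial password"
New-ADUser -Name "svc-app01" `
    -SamAccountName "svc-app01" `
    -UserPrincipalName "svc-app01@example.com" `
    -Path "OU=Service Accounts,DC=example,DC=com" `
    -AccountPassword $password `
    -PasswordNeverExpires $true `
    -Enabled $true
```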
Improving the vRA Admin Experience – Reservation Alerts to Slack
The Reservation system in vRealize Automation (vRA) provides a bucket of resources to a team or business unit via a Business Group. A risk with Reservations arises from the gap between how I think VMware intended them to be used and how some organisations actually use them. I suspect VMware’s intention was that Reservations be self-managed by the Business Group associated with them. This makes sense if each individual team has its own Business Group, since the scope of what’s in the Reservation is “their stuff”. It would mean that if a Reservation reached capacity, it would be up to that team to manage the situation.
What if the Business Group were being used differently, covering multiple teams? If that Reservation became full, the impact would span more than one team. In this situation, it would be good to get a heads-up when Reservations are running low on resources. Email alerts can be set up and, yes, forwarded through to Slack, but the resulting formatting in Slack is less than desirable. So I decided to look at a way of doing it better.
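The approach hinges on Slack’s incoming webhooks, which accept a JSON payload and give you control over the message formatting. A minimal sketch – the webhook URL and message content are placeholders:

```powershell
# Minimal sketch: post a reservation alert to a Slack incoming webhook.
$payload = @{
    text = ":warning: Reservation *Prod-Cluster-01* is at 92% of its memory allocation"
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://hooks.slack.com/services/XXX/YYY/ZZZ" `
    -Method Post -ContentType "application/json" -Body $payload
```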
Improving the vRA Customer Experience – Send Chef errors to Slack
One of the issues that can be amplified by automation is logging. Some logs are ephemeral, with a short lifespan due to various factors. This can be especially painful when those logs relate to failures and contain information that could assist in fixing the problem.
This was the issue I was seeing when vRealize Automation (vRA) requests failed during Chef’s attempt to apply settings. If Chef failed critically, vRA would be made aware of it and fail the entire request. Of course, vRA would then delete the virtual machine, taking the local Chef logs with it. In many cases, there was a gap of only a minute or two between the Chef failure and the vRA cleanup tasks.
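One way to close that gap – sketched below under the assumption that chef-client logs to a known path – is to capture the tail of the log at failure time and push it to Slack before vRA tears the machine down. The log path and webhook URL are placeholders:

```powershell
# Assumed log path and webhook URL; adjust for your environment.
$logTail = Get-Content "C:\chef\log\client.log" -Tail 40 | Out-String
$payload = @{
    text = "Chef run failed on $($env:COMPUTERNAME):`n$logTail"
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://hooks.slack.com/services/XXX/YYY/ZZZ" `
    -Method Post -ContentType "application/json" -Body $payload
```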
Installing Elastic Stack Beats on vCenter 6.7
I recently redeployed a vCenter appliance on 6.7 after a power outage corrupted the 6.5 instance. A follow-up task for the virtual appliance was getting the Elastic Stack Beats (Metricbeat, Filebeat) installed again. In this post, I will go through the process of installing the Beats and some of the minor issues I ran into.
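For a sense of the process: the vCenter appliance runs Photon OS, so the Elastic-provided RPM packages can be installed directly from the appliance shell. The Beat version below is a placeholder – a rough sketch only:

```
# On the VCSA shell (Photon OS); the Beat version is a placeholder.
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.4.0-x86_64.rpm
rpm -vi metricbeat-6.4.0-x86_64.rpm
systemctl enable metricbeat && systemctl start metricbeat
```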
VMware vRealize Suite Lifecycle Manager 1.2 – First Impressions
When VMware created the vRealize brand, they grouped together some of their most complex products under one banner. vRealize Automation (vRA) required the deployment and configuration of two components – a virtual appliance and a Windows server – and the Windows server had a long list of prerequisites. In terms of operational management, using products like vRA meant ongoing work on scripts, workflows and other artifacts. The logical response to this is to create a non-production instance to protect your production instance. Moving updates to production could be achieved manually or via VMware’s Code Stream product, but both approaches left a lot to be desired. vRealize Suite Lifecycle Manager (vRSLCM, or just LCM) is a new approach to this set of problems.
Getting LCM Running
LCM is supplied as a “Virtual Application”, with a few configuration options required to provision it. One of the LCM-specific settings is whether to enable the vaguely named “Content Management”; enabling this causes the appliance to use 4 processors instead of 2. Once the appliance is deployed and started, the rest of the configuration happens via the web interface.
Blizzard’s IT Architecture and Testing at Blizzcon 2017
Last November I was able to attend BlizzCon in Anaheim. BlizzCon is the annual convention hosted by Blizzard Entertainment (creators of Overwatch, Diablo, StarCraft, World of Warcraft, etc.). In the past the focus was solely on the games and the game developers, but in the last 2-3 years there have been more panels that offer a look “behind the curtain”, with more information about design processes and engineering practices at Blizzard. There were two panels I went to that highlighted this – one about engineering and the other about level design. Some points that jumped out were:
Blizzard’s Overarching Architectural Philosophy
During the Q&A for the engineering panel, the engineers were asked about whether there was any sort of mandated technologies that have to be used across the business or in particular areas. The response? They used whatever technology or tools that made sense for that area of the business and its needs. The team that handles the websites end up using technologies that make sense in that area. This led into a discussion about the Blizzard’s use of APIs as the means to allow these different technology islands to talk to each other. This approach allows the best tools for the job in an area, but creates a reliance on ensuring any API changes to don’t have downstream effects. Which leads into the next topic…
Testing and Documentation
There was an interesting reference to how Blizzard deals with keeping documentation up to date. With their reliance on APIs, there would most likely be a process where changes have to be tested. Part of their test model involves taking sample data and assets from the documentation and running tests with them. If the documentation’s samples haven’t been updated to reflect changes in functionality, the tests should fail and flag the discrepancy. This approach isn’t completely foolproof, but it was an interesting take on the perennial issue of documentation in IT.
Giving people space to be creative
The level design panel blew away one major assumption I had about Blizzard’s level design process for World of Warcraft. My assumption was that the game designers would detail the game world to a fine degree, and the level designers would build it without much scope for changing things. The reality was that the game designers would only outline what a particular zone or area needed (mostly in terms of quest flow or general look and feel); it was the level designers who fleshed out the world. Many of those pieces of “character” or “flavour” in the game world came from the level designers filling those gaps with their own stories.
I’m hoping they’ll keep doing these sorts of panels in the future. One with a bit more focus on the infrastructure side of things would be cool to see.
Project Honolulu – First impressions
Project Honolulu is Microsoft’s attempt at revamping the server administration experience. Historically, the Windows Server toolkit has been built around numerous MMC (Microsoft Management Console) snap-ins – things like Event Viewer, AD Users and Computers and DNS Management are all built on MMC. We’ve seen a couple of attempts at revamping this in the past: Server Manager in 2008 and a refreshed version in 2012.
I suspect one of the driving forces behind Honolulu is the shift from RPC-based connectivity to WinRM for remote administration of servers. Honolulu seems to represent an alignment with this, since it supports only Server 2012 onwards as managed nodes, and its gateway component installs on Windows 10 or Server 2016. The documentation states that management functions are performed using remote PowerShell or WMI over WinRM.
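This is the same remoting model you can exercise directly from PowerShell, which gives a feel for what the gateway is doing under the hood. The target names below are placeholders:

```powershell
# Remote PowerShell over WinRM (target name is a placeholder).
Invoke-Command -ComputerName "server01" -ScriptBlock {
    Get-Service | Where-Object Status -eq "Running"
}

# WMI-style queries over WinRM via a CIM session.
$cim = New-CimSession -ComputerName "server01"
Get-CimInstance -CimSession $cim -ClassName Win32_OperatingSystem |
    Select-Object Caption, LastBootUpTime
```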
Installation
The installer for Project Honolulu is only about 30MB. While it supports installation on a system in a workgroup configuration, the TrustedHosts value for WS-Man needs to include the target nodes to be managed. The installer can handle this for you, or you can run the commands manually. On a domain-joined system this isn’t required.
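If you’d rather do it yourself, the manual equivalent is along these lines (node names are placeholders):

```powershell
# Append managed nodes to the local TrustedHosts list (workgroup scenarios only).
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "server01,server02" -Concatenate
Get-Item WSMan:\localhost\Client\TrustedHosts   # verify the resulting list
```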
Starting Up Project Honolulu
Following installation, the web interface loads with a tour splash screen. The main landing page is bare to start with.
Clicking on the Add button presents three options – a regular server connection, a failover cluster or a hyper-converged cluster. The first two options allow adding single items or importing in bulk from a text file; the hyper-converged cluster option allows only single items. When entering the server name or IP, Honolulu appears to validate the name on the fly, checking that it meets the requirements and whether it can connect with the current credentials.
Single sign-on authentication is supported and is the default option. In my test scenario, the system running Honolulu was in a workgroup and connecting to domain-based systems, so I manually specified credentials. One thing to note: even if a server is added by IP, its name appears to be resolved (perhaps by the initial connection), and subsequent connection attempts will fail if that name can’t be resolved via DNS. For domain-joined installations this should be a very rare case; in workgroup configurations it could happen. Honolulu initiates its connections to the target node on port 5985. Assuming the initial connection and authentication are successful, you should see an HTTP/1.1 200 status in a network capture. If everything is good, the status will show a tick and say “Online”. From there you can click on the server’s name, or select it and click Connect, to drill down further.
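A few quick pre-flight checks from the gateway machine can rule out the DNS and port issues described above – the node name is a placeholder:

```powershell
# Pre-flight checks before adding a node (name is a placeholder).
Resolve-DnsName "server01.example.com"                 # confirm the name resolves
Test-NetConnection "server01.example.com" -Port 5985   # confirm the WinRM port is reachable
Test-WSMan "server01.example.com"                      # confirm the WinRM endpoint responds
```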
Overview Page
The first page shown is an overview of the server. It includes metric graphs for things like CPU and RAM usage, plus the ability to shut down or reboot the server remotely.
On the left is a collapsible menu of the other sections, such as events, firewall, registry and processes.
Tools
Some of the tools seem quite functional. The Events tool allows filtering of the event log against a reasonable number of criteria and supports exporting. Files allows browsing of the target node’s file system, with the ability to create folders, rename, delete and so on. The Processes section looks to have all the main functions you would typically use in Task Manager, including remote termination of processes.
Closing Thoughts
Project Honolulu is an interesting tool from Microsoft. It seems capable of replacing the traditional Server Manager app that most Windows system administrators are familiar with. I’ll be most interested to see how it develops in terms of extensions.