Archive for the ‘Virtualisation Practice’ Category

Cisco Live

Thursday, March 21st, 2013

Keynote:

Cisco Live, held at the Melbourne Convention and Exhibition Centre, was a tremendous success. It started with the keynote presented by Carlos Dominguez, Senior Vice President, Office of the CEO, Cisco. The presentation was about the convergence of data and how it will change the way we drive cars, gather information and shop. A few examples were given, such as the Nest thermostat, which lets you manage the temperature of your home from mobile devices such as an iPhone or iPad. It also has an artificial intelligence engine that learns the temperatures you set and turns itself down when no movement is detected.

As another example of data convergence, solutions are being developed that put sensors in the humble lamp post. These will provide current information such as the weather, the next scheduled public transport service in the area and directions to it, as well as traffic information for people using personal transport. The challenge is to store data specific to each individual while remaining agnostic about which personal device they use to “control” and consume that data.

In another example of personal devices changing the way we live, applications are being developed that map your current eating habits against what your doctor prescribes. The application not only provides specific information about each meal but also guides you to the nearest place where you can find healthier options for the same type of food. It also charts what has been purchased and keeps track of vital statistics such as time spent exercising, heart rate and blood pressure. All of these applications and devices use data that was unheard of five years ago.

Break-out Sessions:

The keynote was followed by technical sessions covering Cisco Networking and Security, Unified Computing, Unified Computing Architecture, Cloud Computing, virtualisation and End User Computing. Most of these sessions were aimed at technical consultants and architects. There were also break-out sessions aimed specifically at management; these were mainly discussion sessions rather than presentations.

The technical break-out sessions provided valuable information and guidance on how Cisco approaches network and system architecture. The virtualisation stream of break-out sessions covered solutions built around VMware vSphere and Microsoft Hyper-V. There was a strong focus on stateless computing and on using the Cisco UCS platform to deliver stateful yet server-independent solutions for virtualisation.

Solutions Lounge:

The Solutions Lounge was the area where Cisco partners presented their custom solutions for customers. We had a booth of our own where we talked about Cisco IAC and Cloupia and how they support a customer’s cloud strategy. Other notable solutions were offered by major partners such as VCE, Telstra and NetApp, to name a few. The interesting part (apart from the actual solutions on show) was the variety of games and promotions run by each partner. Most of them had an iPad as the prize, which shouldn’t be a surprise considering how popular they are.

The importance of regular health checks

Tuesday, November 6th, 2012

We all know why it is important that you visit your local doctor for regular check-ups. They can find and diagnose any problems before they start, or identify problems early, when the probability of treating and curing any issues is significantly higher. Regular visits can increase your chances of living a longer and overall healthier life.

So why should your IT environment be any different? Regular health checks of your infrastructure and supporting systems can identify new or systemic issues before they start impacting end users. Early detection usually allows a measured approach to remediation, rather than the rushed ‘have to fix it now’ method. As with medical check-ups, there are common high-level areas that should be checked each time, including the configuration, performance, capacity and currency of the environment.

Scheduling regular checks and reviews of your infrastructure is one way to ensure that the expenditure and ongoing investment in the solution results in “a longer and overall healthier life” for the environment.


Transitioning from intuitive to data-driven approaches to capacity management

Wednesday, November 17th, 2010

What does capacity planning look like within your organisation?

In most organisations, existing capacity planning methodologies are rooted in decades of traditional physical server deployments and dedicated infrastructure. If an application server is at 50% load today, and load has historically doubled every 24 months, chances are your capacity planning methodology predicts that you have two years before you must add further capacity. While such an approach may work acceptably when dealing with dedicated physical server instances, the now widespread use of production server virtualisation limits how accurate, and therefore how worthwhile, such predictions can be.
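
As a rough illustration of the traditional forecast described above, the sketch below (in Python, with purely illustrative numbers) estimates how long a server has before it reaches a chosen capacity ceiling, assuming load keeps doubling at its historical rate.

    import math

    def months_until_ceiling(current_load, doubling_months, ceiling=1.0):
        """Months until load grows from current_load to ceiling, assuming
        load doubles every doubling_months (exponential growth)."""
        if current_load >= ceiling:
            return 0.0
        # load(t) = current_load * 2 ** (t / doubling_months); solve for load(t) == ceiling
        return doubling_months * math.log2(ceiling / current_load)

    # The example from the text: 50% load today, doubling every 24 months.
    print(months_until_ceiling(0.50, 24))  # 24.0 months, i.e. two years of headroom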

Further, when this approach fails, IT managers and administrators typically fall back on intuitive approaches to capacity planning –  responding to reports of application slowness, or to changes in headcount, in a linear manner that does not account for the complex relationships between application performance and each layer of the infrastructure upon which the applications are hosted.

These intuitive capacity planning methodologies are at best inefficient, resulting in needless or poorly targeted infrastructure investment. At worst, they can be completely ineffective, resulting in highly reactive approaches to infrastructure management with significant operational costs.

Virtually Unknowable

Virtualisation – along with the adoption of other shared systems, such as clustered database and web servers, hardware load-balancing appliances and storage area networks – necessitates a holistic approach to capacity planning. It is no longer enough to simply understand resource utilisation on an application-by-application basis. Instead, IT managers must consider the inter-relationships between applications: when the peak periods for individual applications occur, which applications have overlapping peak periods, how applications map to line-of-business functions, and so on. Each additional piece of data that must be included in capacity planning calculations exponentially increases the complexity of the forecasting, increasing the likelihood of error and therefore decreasing the value of the capacity planning exercise itself.
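
One of the inter-relationship questions above – which applications have overlapping peak periods – can be illustrated with a small sketch; the application names and peak-hour data here are invented purely for the example.

    from itertools import combinations

    peak_hours = {                      # hour-of-day ranges when each application is busiest (assumed)
        "payroll":   set(range(8, 11)),
        "crm":       set(range(9, 17)),
        "reporting": set(range(17, 22)),
    }

    # Flag pairs of applications that are busy at the same time, since they
    # will contend for the same shared hosts and storage during those hours.
    for a, b in combinations(peak_hours, 2):
        shared = peak_hours[a] & peak_hours[b]
        if shared:
            print(f"{a} and {b} peak together during hours {sorted(shared)}")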

Given this, it is no wonder that in most organisations capacity planning for virtualised environments remains an ad hoc process, with virtual infrastructure administrators applying the traditional physical server capacity planning methodology to ESX hosts and simply trying to manage around its shortcomings via “agility” in infrastructure procurement and deployment.

A New Approach

A new generation of tools is beginning to emerge that seeks to resolve these problems. Approaches vary across vendors, but we can see common themes among them:

  • the ability to automate application mapping, allowing analysis to incorporate relationships between servers
  • the ability to rationalise performance and capacity metrics from multiple infrastructure layers – typically application, database, operating system, hypervisor, network and storage (see the sketch after this list)
  • scenario-based modelling of growth
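
As a minimal sketch of the second theme, the snippet below (with invented collectors and metric names) rationalises samples gathered from different layers into one record per time interval, so they can be analysed together rather than in per-layer silos.

    from collections import defaultdict

    samples = [  # (timestamp, layer, metric, value) from hypothetical collectors
        ("2010-11-17T10:00", "hypervisor", "cpu_ready_ms",    320),
        ("2010-11-17T10:00", "database",   "avg_query_ms",     45),
        ("2010-11-17T10:00", "storage",    "read_latency_ms",  12),
    ]

    # Merge every layer's metrics for the same interval into a single record.
    merged = defaultdict(dict)
    for ts, layer, metric, value in samples:
        merged[ts][f"{layer}.{metric}"] = value

    for ts, record in sorted(merged.items()):
        print(ts, record)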

By automating discovery and data collection, and by operating across all layers of the application/infrastructure stack, these tools help drive a transition from the old, intuitive capacity planning methodologies to an approach based on hard data, and therefore much better able to accurately predict capacity demands within your unique environment. And, as we will discuss in a forthcoming post, such a data-driven approach is critical to managing not just capacity forecasts but application performance as well.

Data-Driven Approaches to Performance Management

Wednesday, November 17th, 2010

Does your organisation have a true performance management methodology? For the majority of organisations, the answer is simply “no” – performance management amounts to a variety of disparate, ad hoc and predominantly reactive processes. Examples of such approaches may include:

  • Server utilisation monitoring – perfmon statistics – CPU/memory/disk utilisation etc.
  • CMDB
  • User-feedback – “It seems slow”
  • Transaction response time monitoring (“stopwatch testing”)

Common limitations of these legacy approaches include:

  • Not data-driven
  • Reactive – bottlenecks are typically only identified after they cause performance problems
  • Don’t take into account shared systems – virtualisation/SAN/network
  • Obtaining more useful data requires significantly greater operational investment
  • Tools tend to focus on individual infrastructure layers, making it difficult to build processes that are useful across the entire enterprise infrastructure
  • Baseline performance benchmarking only useful for before/after analysis – cannot be used for accurate “what if” scenario planning

The largest challenge faced by infrastructure administrators in responding to performance problems is a lack of data. When users complain that “it’s slow” administrators lack the critical information needed to effectively respond – how did the application perform before the issue arose; what utilisation metrics correlated to the previous, acceptable, performance level; to what degree is performance now degraded; what has changed between then and now?
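
A simple sketch of what that missing data could look like: with a stored baseline of response times, “it’s slow” becomes a measurable deviation. The figures and the 20% threshold below are assumptions chosen for illustration.

    import statistics

    baseline_ms = [210, 195, 220, 205, 215, 200]   # response times recorded before the complaint
    current_ms  = [340, 365, 350, 330]              # response times measured now

    baseline = statistics.mean(baseline_ms)
    current = statistics.mean(current_ms)
    degradation = (current - baseline) / baseline

    if degradation > 0.20:   # arbitrary threshold: flag anything over 20% slower than baseline
        print(f"Degraded {degradation:.0%} vs baseline ({baseline:.0f} ms -> {current:.0f} ms)")
    else:
        print("Within normal variation of the baseline")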

As a result, administrators tend to fall back on intuitive approaches to performance troubleshooting – looking for errant performance metrics, reviewing code release schedules and recent infrastructure changes, and frequently resorting to crude techniques such as throwing additional computational resources at the problem in the hope of resolving performance bottlenecks. Such approaches are inefficient, are not cost-effective, and do not scale to large, complex environments.

A new wave of tools is emerging that seeks to resolve these problems. These tools are generally “cross-domain”, referring to their ability to collect and analyse data from multiple infrastructure layers. Typically, they include the ability to determine whether performance variations are due to increased load, code changes or infrastructure changes, or are caused by the performance of shared system components (i.e. where an application’s performance is degraded due to increased load on a shared component such as a virtualisation farm).

An additional benefit of a data-driven performance management approach is the ability to “right-size” infrastructure – particularly in virtualised environments, resources are often over-allocated to individual servers and are therefore wasted. Once administrators fully understand the true performance and resource requirements of applications, these wasted resources can be reclaimed and reallocated.
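
As a minimal right-sizing sketch (all sample data and thresholds are assumed), the snippet below compares a VM's allocated vCPU count with what its observed 95th-percentile utilisation actually needs, plus a fixed headroom margin.

    def recommended_vcpus(allocated, cpu_util_samples, headroom=0.25):
        """Suggest a vCPU count covering 95th-percentile demand plus headroom."""
        ordered = sorted(cpu_util_samples)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]   # 95th-percentile utilisation (0..1)
        demand_vcpus = p95 * allocated * (1 + headroom)
        return max(1, min(allocated, round(demand_vcpus)))

    # A VM given 8 vCPUs that rarely exceeds 30% utilisation:
    samples = [0.12, 0.18, 0.25, 0.22, 0.30, 0.15, 0.28, 0.20]
    print(recommended_vcpus(8, samples))   # prints 3 – five vCPUs could be reclaimed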

In addition, these tools are capable of complex scenario modelling, allowing administrators to forecast the impacts associated with, for example, a successful web marketing campaign, the hiring of 100 new office staff, or the opening of a new branch office. As a result, administrators can proactively identify future performance bottlenecks, and IT infrastructure spending can be targeted to where it will deliver the most benefit. Further, by understanding application utilisation trends and knowing where bottlenecks reside in their infrastructure, administrators are able to resolve performance issues before their users even notice them.
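
A simple scenario-modelling sketch along those lines (every figure below is assumed for illustration): scale today's measured demand by the expected effect of an event, such as hiring 100 new staff, and see which infrastructure resources run out of headroom.

    current_demand = {"cpu_ghz": 180.0, "memory_gb": 620.0, "storage_iops": 9000.0}
    capacity       = {"cpu_ghz": 320.0, "memory_gb": 768.0, "storage_iops": 15000.0}

    scenario_growth = 0.35   # e.g. 100 new office staff assumed to add ~35% more demand

    for resource, demand in current_demand.items():
        projected = demand * (1 + scenario_growth)
        headroom = capacity[resource] - projected
        status = "OK" if headroom >= 0 else "BOTTLENECK"
        print(f"{resource}: projected {projected:.0f} of {capacity[resource]:.0f} available ({status})")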