Virtualisation is revolutionising the manner in which enterprises deliver resources from their data centres to their desktops.
The technology underpins a transition in how data centres are used: from traditional storage houses for applications and operating systems into next-generation 'delivery centres' that enable companies to make better use of their existing hardware assets.
Today's fast-paced business environment requires organisations to become much more agile, starting with their data centres. According to analyst firm Gartner, about 90% of business processes are captured in software; the ability to deliver applications and desktops to users anytime, anywhere from the data centre is therefore rapidly becoming a core focus for most organisations.
As such, virtualisation is revolutionising the way IT conducts its mission to provide infrastructure services to businesses. The global virtualisation market, spurred by the increasing popularity of server virtualisation, is expected to grow by 113% over the next three years, according to IDC. The research consultancy believes global virtualisation services - worth $5.5 billion last year - will increase to $11.7 billion by 2011.
To understand the significance of end-to-end virtualisation infrastructure, it is necessary to understand the business needs that drove the conception of virtualisation more than 40 years ago, and how changing business needs have shaped it into the technology it is today.
Looking back
The term virtualisation is an old one: it has been widely used since the 1960s, and over the years has been applied to many different aspects and scopes of computing.
If we open up our history books, we will find that IBM started to explore virtualisation techniques back in the mid-1960s as a way to provide time-sharing on the IBM 7044. By creating multiple virtual images of the main machine, it allowed multiple users to share that machine's memory and resources.
This research led to the development of the VM/370, a major advancement in the mainframe arena for IBM's System/370 line. It allowed for multiple "copies" of the hardware that ran as virtual sessions while the virtual machine monitor ran directly on the real hardware. This technology led the way for many of the most successful offerings in this field. The concepts and techniques pioneered in the late 1960s and early 1970s are still used today on many large mainframe systems.
Virtualisation was effectively abandoned during the 1980s and 1990s when client-server applications and inexpensive physical x86 servers and desktops established the model of distributed computing. Rather than sharing resources centrally in the virtualisation model, organisations used the low cost of distributed systems to build up islands of computing capacity.
The broad adoption of Windows and the emergence of Linux as server operating systems in the 1990s established x86 servers as the industry standard. However, this brought with it new IT infrastructure and operational challenges for businesses, such as low infrastructure utilisation, increased infrastructure and management costs, insufficient failover and disaster protection, and high maintenance of end-user desktops.
Planning ahead
The current decade brings with it many of the business challenges of the previous one: companies today still under-utilise their servers and data centres, face insufficient disaster recovery and carry high desktop maintenance costs. Augmenting these challenges, however, are new factors stimulated by the entrance of the echo-boomer generation into the workforce; that is, greater demand for instant access to resources from anywhere, at any time. This generation alone coined the term 'Web commuter'.
Recognising that these issues would outlast the 1980s and 1990s and continue to plague CIOs into the future, ex-IBMer Ed Iacobucci, who was part of the team working on IBM's OS/2, established Citrix in 1989.
Ten years later, Citrix transcended traditional thinking to bring about the idea of thin-client computers that operate as 'information devices'. This model developed the early ideal of the technology's forefathers: that client software in client-server networks should depend on the central server for processing, rather than on the local desktop.
During that time, the entire industry was challenged to rethink the way Windows applications and desktops should be delivered to users, inspiring much of the early adoption and evolution of contemporary virtual technologies. The concept of the virtual server was also brought into sharp focus - and with it the ability to reduce the space and energy requirements of traditional physical servers and vastly increase server productivity.
To this day, virtualisation continues to evolve, ensuring business challenges are met with confidence. The latest rendition of the technology, termed end-to-end virtualisation, provides businesses with even greater agility in accessing resources in secure and high-performance infrastructures.
The growth of open source development has also stimulated further innovation in virtualisation technology. Building on open source hypervisor technology has made it possible to embrace the worldwide developer community's input, enriching the technology and making its evolution truly dynamic.
Looking into the future, virtualisation technologies promise even greater benefits to business: from consolidating servers further to reduce costs, increase scalability and 'green' IT systems, through to enabling users to access resources from remote handheld devices.
* Nick Keene is country manager of Citrix Systems Southern Africa.