Tag Archives: Virtualization

VMware: The Basics

What is a virtual machine?
A virtual machine is a set of virtual hardware and files. The virtual hardware gives us the ability to install a guest OS on top. This operating system should be “supported” by the hypervisor, although in many cases you can also install unsupported guest OSes.

Virtual machine files:
So, let’s take a look at some of the most important files that constitute a VMware virtual machine:

• .VMX file – This is the configuration file for a VM. It contains all the properties of the VM, such as the number of vCPUs, RAM, virtual NIC interfaces, guest OS and so on (the sketch at the end of this file list shows how these values can be read).

• .VMDK file – This file is also known as the virtual disk descriptor. Here the geometry of the virtual disk is described.

• -flat.vmdk – This is the data file where the actual contents of the virtual disk are stored.
• .nvram – This file contains the “BIOS” of the virtual machine (settings, etc.).
• .vswp – This file is the VM swap file. It is created when the VM is powered on and is used by the hypervisor to guarantee the memory assigned to the VM. With no memory reservation it is equal in size to the vRAM (the RAM assigned to the VM); if a memory reservation is configured for the VM, the .vswp file size is the vRAM minus the reservation.

• .vmtx – this file is only present when you mark the VM as a template. When you convert a VM to a template, essentially the only thing that happens is that the .vmx file is renamed to .vmtx.

• .vmsd – this file is the snapshot descriptor. It lists the different snapshots that exist for the VM, the files that belong to each snapshot, and so on.

• .vmsn – this file is known as the “snapshot state” file. It stores the configuration state (.vmx information) of the VM at the time the snapshot was taken. For example, if I take a snapshot while the VM has just one vCPU and later take another snapshot after adding a second vCPU, the hypervisor knows about the change in vCPU count through these snapshot state files.

• -delta.vmdk – this file contains the changes made to the virtual disk after a snapshot is taken, so we essentially have the “base” disk plus delta files that capture all further writes to the disk.

As we can see, a virtual machine is easy to migrate and manage because it is just a set of files rather than a physical server. There are other VM files as well, such as .log files and .lck lock files.
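
To make this more concrete, here is a minimal sketch in Python that pulls together the .vmx and .vswp descriptions above: it reads a .vmx-style key = "value" file and estimates the expected .vswp size as the configured vRAM minus the memory reservation. The file path and the specific keys used (memsize, sched.mem.min, numvcpus, guestOS) are illustrative assumptions, not an official parser.

# Minimal sketch: read a .vmx-style 'key = "value"' file and estimate the
# expected .vswp size as configured vRAM minus the memory reservation.
# The path and key names are illustrative assumptions, not an official API.

def parse_vmx(path):
    """Return a dict of key/value pairs from a .vmx-style config file."""
    settings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

def expected_vswp_mb(settings):
    """Swap file size = configured vRAM minus any memory reservation (in MB)."""
    configured = int(settings.get("memsize", "0"))
    reservation = int(settings.get("sched.mem.min", "0"))
    return max(configured - reservation, 0)

if __name__ == "__main__":
    vmx = parse_vmx("myvm.vmx")                      # hypothetical file
    print("Guest OS:", vmx.get("guestOS"))
    print("vCPUs:", vmx.get("numvcpus"))
    print("vRAM (MB):", vmx.get("memsize"))
    print("Expected .vswp size (MB):", expected_vswp_mb(vmx))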

Resources that can be assigned to a virtual machine:

Virtual machine hardware:
A virtual machine requires a set of “virtual devices.” These devices, or virtual hardware, provide access to the underlying physical resources. It is important to note that access to the hardware is controlled by the hypervisor. Currently VMware presents the following hardware devices to virtual machines:

• SCSI adapter – This virtual SCSI adapter allows the use of virtual disks, with a maximum of 4 SCSI adapters per VM and 15 targets (disks) per adapter (60 disks in total). There are different types of adapters: LSI Logic Parallel, LSI Logic SAS, BusLogic Parallel and VMware Paravirtual SCSI (PVSCSI). The PVSCSI adapter is a paravirtualized virtual adapter that can give us greater performance. If you want to know more about it take a look here.

• USB controller – A vSphere VM can have three types of USB controllers: UHCI (USB 1.0), EHCI (USB 2.0) and xHCI (USB 3.0), with a maximum of one controller of each type per VM (three controllers in total, one per USB version). Each controller can have up to 20 devices.
• Floppy controller – This floppy controller can have up to two devices. Usually the virtual floppy is used to present drivers to the guest via a floppy image (.flp).
• Network cards – Also known as “vNICs,” vSphere supports up to 10 network cards per VM. There are different types of vNICs that can be available to a VM depending on the virtual hardware or “VM compatibility”: vlance (an emulated 10 Mbps NIC), E1000, Flexible (can switch between vlance and VMXNET), VMXNET2 and VMXNET3. The VMXNET adapters are “paravirtualized” adapters, which allow better performance. If you want to know more about the different types of virtual NICs take a look at this great blog post. We can also have SR-IOV capable devices (virtual functions of a physical NIC/PCIe device) presented to a VM, reducing overhead and increasing performance.

• AHCI controller (SATA) – This type of controller is only available in vSphere 5.5. A VM can have up to 4 SATA controllers with a maximum of 30 disks per controller.
• Video card – provides video for the VM. We can also add 3D hardware rendering and software rendering to this “vGPU.”
• Other – a VM can have up to three parallel ports and up to four serial/com ports.
• RAM – the maximum amount of RAM that can be assigned to a VM in vSphere 5.5 is 1TB.
• CPU – the maximum number of vCPUs that can be assigned to a VM in vSphere 5.5 is 64.
It is very important to know that the CPU is not emulated by the VMkernel; the hypervisor simply schedules the different vCPUs onto physical cores using its CPU scheduler.
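
As a quick reference, the per-VM maximums quoted above (vSphere 5.5 figures as stated in this post) can be captured in a small sketch; the dictionary keys and the function name are illustrative only:

# Per-VM device maximums as quoted above for vSphere 5.5 (illustrative sketch).
VM_MAXIMUMS = {
    "scsi_adapters": 4,            # 15 targets each -> up to 60 SCSI disks
    "disks_per_scsi_adapter": 15,
    "sata_controllers": 4,         # 30 disks each
    "disks_per_sata_controller": 30,
    "usb_controllers": 3,          # one each of UHCI, EHCI, xHCI
    "devices_per_usb_controller": 20,
    "floppy_devices": 2,
    "vnics": 10,
    "parallel_ports": 3,
    "serial_ports": 4,
    "ram_gb": 1024,                # 1 TB
    "vcpus": 64,
}

def over_limits(requested, limits=VM_MAXIMUMS):
    """Return the resources in 'requested' that exceed the per-VM limits."""
    return [key for key, value in requested.items() if value > limits.get(key, 0)]

# Example: a VM asking for 12 vNICs and 48 vCPUs only breaks the vNIC limit.
print(over_limits({"vnics": 12, "vcpus": 48}))   # -> ['vnics']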

Virtual Disks
As we already know, a virtual machine can have virtual disks attached to a vSCSI adapter or a SATA controller, but we can choose between different types of virtual disks, and that choice is reflected directly in how space is consumed on the physical storage.

Let’s start by explaining what Thin Provisioning is in vSphere. Thin Provisioning enables the hypervisor to assign disk space to VMs on demand, which allows over-allocation of the physical storage. With Thin Provisioning the guest OS (the operating system installed in the VM) sees the full provisioned size, but in reality only the consumed space is allocated on the physical storage.

Example:
John creates a VM with a thin-provisioned virtual disk and assigns 80GB of space to it. He installs an Ubuntu guest OS and several applications that consume a total of 40GB of the 80GB provisioned, i.e. only 50 percent. As a result, only 40GB of space is consumed on the physical disk/storage.

Basically the hypervisor “tricks” the guest OS by reporting the total size of the disk without actually occupying all of that space on the physical storage.
Now that we know what Thin Provisioning is, let’s take a look at the currently supported types of virtual disks, or VMDKs:

• Thick provision lazy zeroed – this type of disk allocates its full assigned size on the physical layer (datastore) at creation time. Any previous data on the underlying storage is not overwritten at creation, because no zeroes are written to the blocks that make up the virtual disk; the “erasing,” or writing of zeroes, is performed on demand at first write.
• Thick provision eager zeroed – with this type of disk all of the space is allocated at creation and zeroes are written to every block that is part of the virtual disk. Because of this, creating an eager zeroed VMDK takes longer.
• Thin provision – with this type of disk, space is allocated on demand.
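
To make the three types concrete, here is a small sketch that reuses John’s 80GB/40GB example and shows how much datastore space each type consumes. It is an illustrative model of the behaviour described above, not a VMware API, and the function and type names are made up for the example.

# Datastore space consumed by a virtual disk, per provisioning type
# (illustrative model of the behaviour described above, not a VMware API).

def datastore_usage_gb(provisioned_gb, written_gb, disk_type):
    if disk_type == "thin":
        # space is allocated only as the guest actually writes data
        return min(written_gb, provisioned_gb)
    if disk_type in ("thick_lazy_zeroed", "thick_eager_zeroed"):
        # the full size is reserved on the datastore at creation time
        return provisioned_gb
    raise ValueError("unknown disk type: %s" % disk_type)

# John's example: an 80GB disk holding 40GB of guest data.
print(datastore_usage_gb(80, 40, "thin"))                # -> 40
print(datastore_usage_gb(80, 40, "thick_lazy_zeroed"))   # -> 80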

Now it’s time to talk about the different disk modes in vSphere. These modes define how a VMDK (virtual disk) behaves when we take a snapshot of the VM. The following modes can be configured:

• Dependent – with this mode the virtual disk (VMDK) is included in snapshots, so changes made after a snapshot are captured in a delta file and are discarded if you revert the snapshot.
In this mode the disk is otherwise persistent: powering the VM off does not discard the snapshot or the changes.
• Independent persistent – in this mode the virtual disk is not affected by snapshots, so no delta file is created and every change is written directly to disk.
• Independent non-persistent – in this mode changes to the virtual disk are written to a redo log (delta file) rather than to the disk itself. If you power off the VM or revert, the changes are gone.
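
The sketch below models where a guest write lands in each mode and whether it survives, following the descriptions above; it is an illustrative model only, and the mode names are just labels for this example.

# Illustrative model of the disk modes above (not a VMware API):
# where a guest write lands, and whether it survives a power-off or revert.

def describe_write(mode, snapshot_exists):
    if mode == "independent_persistent":
        return "written straight to the base disk; always survives"
    if mode == "independent_nonpersistent":
        return "written to a redo log; discarded on power off or revert"
    if mode == "dependent":
        if snapshot_exists:
            return "written to the snapshot delta file; discarded on revert"
        return "written to the base disk; survives"
    raise ValueError("unknown mode: %s" % mode)

for mode in ("dependent", "independent_persistent", "independent_nonpersistent"):
    print(mode, "->", describe_write(mode, snapshot_exists=True))
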
This article has been an introduction to VMware basics: what a virtual machine is, the files that make it up, the resources that can be assigned to it, and the types and modes of virtual disks it can use.

HP Completes Three-Year Cost-Cutting Plan

HP has completed its three-year IT shake-up and has saved around $1 billion in costs, thanks to the skills of Randy Mott, Hewlett-Packard’s chief information officer. Mott jumped ship from computer manufacturer Dell back in July 2005, just before HP announced 14,500 job cuts to balance the books. In a bizarre coincidence, IBM also laid off 14,500 staff to do exactly the same.

An ex-CIO of retail chain Walmart, Mott certainly knew how to run the IT operations of a large business. He was given a $15.3 million compensation package to encourage him to join HP and give its IT departments a damn good seeing-to – you’ve got to spend some money to make some money, right? As the past shows us, IT vendors are generally rubbish at using IT equipment cost-effectively: because they can buy it for their own use at little or no cost, there is little pressure to make it pay the bills.

In 2005, when Mott took over as HP’s CIO, the company ran 85 data centres. Since his reign began, he has consolidated these down to a “six pack” of highly virtualised data centres arranged as three mirrored pairs. Mott’s aim was to cut the 25,000 servers within those 85 data centres to just 14,000 machines within the mirrored data centres. HP has also cut back on its application portfolio, removing 6,000 applications. The company wanted to get down to 1,500 applications, perhaps even as few as 1,100, but didn’t quite manage it by the time the re-shuffle came to a close.

As part of the tightening of belts, HP has started making greater use of an SAP ERP system, which HP chief executive officer Mark Hurd spoke about last week during the company’s report on its fourth-quarter financial figures.

The company has not yet given an exact figure for the number of servers it has cut, but claimed today that it has reduced its server count by an impressive 40 percent while boosting its processing power by a staggering 250 percent. The company also boasted that it has reduced its networking costs by 50 percent over the last three years while managing to triple its bandwidth. Impressive.

With the world going a bit ga-ga over everybody’s green credentials, earth lovers will be pleased to hear that HP has reduced its power consumption by 60 percent since 2005. This is all very impressive. That a company can shed that amount of cost in just three years, partly through virtualization among other things, is a great advertisement for how other companies should be running their data centres.

HP projects $128 billion in revenue in 2009.

VMware Mobile Virtualization – Breathing new Life into Mobile Market

Virtualization pioneer VMware is looking to break into the mobile market and create virtual mobile phone technology in a similar way to the company’s server virtualization technology. On Monday the company announced the arrival of the VMware Mobile Virtualization Platform (MVP), which it has developed from technology gained through its acquisition of Trango Virtual Processors last month.

The basic premise is to abstract applications and data from a phone’s hardware, which should cut the development time required by manufacturers, as well as enabling mobile phone users to install a variety of different applications across a wider range of handsets.

According to the company, mobile phone manufacturers are spending a lot of time getting new phones to the marketplace because they have to code for multiple chipsets, operating systems, and device drivers across their product families.

Chris Hazelton, research director of mobile and wireless for The 451 Group said, “There’s a benefit to the manufacturer – it’s lower cost in terms of development because you can have software on a number of different devices, and it doesn’t need to be tweaked for each device, just for the virtualized environment.”

However, it seems there may be more to it than that, and the result could be of more benefit to both carriers and end users. Most mobile phone manufacturers are getting used to the idea of building mobile phones that use open operating systems. That being said, a phone’s core functions, and its private data, need to remain secure and in working order.

“A mobile phone has core features and responsibilities, and that’s voice — being able to work with a carrier network – and that operating system is tightly controlled by the carrier and the device vendor,” Hazelton said.

“And then you have this virtualized environment that would be open to developers or open to the user to add and install applications to customize the phone as they want – it’s this sandbox that’s very distinct and separate from the core features of the phone, and it won’t disrupt the carrier network,” he explained.

Hazelton noted that virtualization could be used to make a traditional phone work in the same way as one of the more current smartphones – without the development time involved in building one.

“It will take a couple of years before this gets some traction,” he added.

Although at this point it’s all a bit unclear exactly what VMware will do, market research company Gartner believes it will give the mobile industry a further boost.

“Gartner sees virtualization in the mobile space as a very promising and potentially a fast emerging market,” noted Monica Basso, a research vice president for Gartner.

“We predict that by 2012, more than 50 percent of new smartphones shipped will be virtualized,” she added.

VMware has also made us think about the idea of having multiple profiles on our phones. Think of it in a social networking sense: Facebook for work colleagues, Myspace for socialising. By having virtualized phones, we could separate our work profiles from our play profiles, keeping work files separate and having different contact lists.

Charles King, principal analyst for Pund-IT said, “The arrival of the iPhone and the G1 Google phone, along with the Windows-based smartphones and BlackBerries, are getting people to think about exactly what is a smartphone?”

“In the traditional sense, a smartphone was basically a cell phone with a PDA strapped to it. BlackBerry took a step further by optimizing for e-mail and instant messaging, and I think what we’re looking at now, with the iPhone and G1 – and some devices that are on the way – are full-fledged hand-held computers,” he said.

“At a certain point, you have to say, ‘If I have a handheld computer, what is the best way to utilize the system resources?’” King noted.

“And I think that’s the question VMware is looking to address,” he added.

King also believes that by virtualizing a mobile phone, you can keep your vital files and data safe from hackers.

“What happens when hackers start targeting malicious code at the smartphone market?” King said. “The browser could be isolated from the rest of the phone, or you could isolate e-mail, to keep the greater system from being damaged,” he said.

Microsoft Hypes up Hyper-V

With Windows 7 on the horizon, Microsoft has wowed its fans and heard some pre-release grumbles from its detractors, but it has not really talked about the server side of things.

Microsoft big-wigs have promised that Microsoft’s Hyper-V virtualization technology will come up with the goods, after the media tore the company’s promises to pieces over the last few months.

Bill Laing is the corporate VP of Microsoft’s Windows Server and Solutions division. He said at a recent conference, the Microsoft Windows Hardware Engineering Conference in LA, that the planned management features would be the “competitive differentiator” in the upcoming version of Hyper-V, which is due with Windows Server 2008 R2. Laing demonstrated Hyper-V performing a live migration, with a virtual machine continuing to play a video while it was moved between hypervisors.

The next version of Microsoft’s SQL Server database, Kilimanjaro, promises greater scalability and improved management in large environments than the current SQL Server 2008.

The SQL Server RDBMS engine will be changed in Kilimanjaro so that it can run on up to 256 logical processors, going beyond the current limit of 64 and eliminating the need to manually partition applications across nodes. This allows customers to run high-scale and particularly demanding applications.

Microsoft is “lighting up” new technologies, according to Quentin Clark, general manager for the SQL Server database engine. He says this will allow, for example, Excel to work with larger data sets than currently possible. “We are aligning it with more Office experience…to complete the last mile of business intelligence,” Clark said.

Last month Microsoft announced the arrival of Madison, a server-based appliance built on Kilimanjaro with hardware partners for large-scale, detailed analysis.

The company plans to release reference implementations ahead of Madison and announced that it might expand into other areas, as there has already been interest. This could mean appliances for packaged apps, like SAP, or the much more difficult ‘custom online transaction processing’ (COTP).

COTP could be hard to achieve because reference architectures tend to require knowledge of I/O, memory and CPU capabilities in addition to knowing the software’s own limits.

Unlimited Virtualization from IBM and Windows

IBM is offering customers who buy its System x rack servers and BladeCenter blade servers the option of Microsoft’s Windows Server Datacenter Edition, allowing for unlimited virtualization.

Windows Server 2008 Standard Edition (SE) can run on machines with up to four sockets and up to 32GB of main memory on x64-based servers – a more than suitable operating system for blades and most rack servers. Enterprise Edition (EE) gives you a bit more flexibility, with up to eight sockets and potentially 2 terabytes of memory. SE allows just one virtual machine on a server and EE allows four, so if you want more, you need to buy more Windows licenses.

Datacenter Edition scales up to 64 sockets for x64 servers and up to 2 terabytes of memory, and allows unlimited virtual machines.

In October 2006, Windows Server 2003 R2 Datacenter Edition allowed users to deploy an unlimited number of Standard Edition, Enterprise Edition, or Datacenter Edition VMs on their machines. Back then Datacenter Edition cost $2,999 per processor socket, not including client access licenses (CALs), which cost the user $40 each.

Windows Server 2008 Datacenter Edition remains the same, price-wise, and still allows for unlimited virtualization. However, the Microsoft Hyper-V hypervisor now comes with the Standard, Enterprise and Datacenter Editions, which means customers won’t have to buy VMware’s ESX server or Citrix Systems’ XenServer to do virtualization.

What this means is that you can afford to move from EE to Datacenter Edition on two and four socket blade servers, simplifying your software stack (all Windows), and get unlimited virtualization as well.

Businesses believe Virtualization will Drive SaaS Adoption

According to new research by web hosting firm Hostway, over three quarters of organisations believe that server virtualization will drive adoption of Software as a Service (SaaS).

More than 60 percent of respondents said that they plan to adopt SaaS in some form within the next five years. About 45 percent of those surveyed believe that the technology had not taken off until recently because of a lack of available virtualization technologies.

Before server virtualization, SaaS providers found it difficult to reliably offer software on demand, according to the research.

“Without virtualization the business model for SaaS would not be viable,” said Hostway director Neil Barton.

“The business model for SaaS means you need to get a high level of utilization from the servers that the applications in the cloud sit on. Virtualization enables this. The message to application vendors is that you need to either SaaS-enable your applications yourself, or partner with people who can allow your applications to be offered as a service.”

Software as a Service is a model of software deployment whereby an application is hosted as a service provided to customers across the internet. This means the customer does not need to install and run the application on their own computer, thus removing the need for local maintenance and support.

The advantage of SaaS is that payment for a specific application can be spread out rather than being a single expense at the point of purchase.

For a range of SaaS options please contact us here, or call 0845 094 8895.

Virtualization Explained Part 2: Business Virtualization

Businesses first found value in virtualization because it gave legacy applications the chance to keep working without tying up a physical computer. This in turn led to the next big virtualization virtue: server consolidation.

If you can avoid having to maintain old hardware for an outdated platform by running a system within a virtual PC, there is no reason you can’t avoid maintaining new hardware by continuing to virtualize your systems. By using virtualization you are able to combine multiple workloads on one physical computer. Although this equates to fewer machines, the overall work performed, and the number of environments represented, does not decrease in any way.

Imagine a medium to large scale business that houses a data centre with racks and racks of servers. Each one of these is performing a task of some kind. One could be a mail server, one a print server and several would be dedicated to a range of applications including small rarely-used apps that are only required by one person, a situation that most companies can find themselves in.

Each one of these computers is taking up space in the building. They are using a lot of power and require constant cooling, all of which costs lots of money. Now imagine the systems administrator when the business comes along and announces it has bought another special-purpose piece of software that needs a web server of its own, not tied up for any other purpose.

The systems admin has two options. He can order new hardware, taking up space and adding upkeep cost, or if he’s clever he will recognize he can use the capacity within the existing servers. This is where consolidation comes into its own, as you could run two (or more) entirely independent and even conflicting virtualized computers on the one set of hardware, which will prove much more energy efficient than two individual computers would be.

Some businesses have found that they can cut the number of servers in their organization by consolidating them as virtual servers. This can help deliver the cost savings, and help a company that wants to turn that little bit greener.

In addition to this, virtualization cuts the dependency between an operating system and the underlying computer hardware. Typically, when setting up a server there is a lot of work involved in configuring the computer itself, and the resulting setup is largely tied to that machine. If you want to upgrade to a more powerful computer you generally cannot simply back up the old computer and restore it onto the new system. Generally, you must go through the format, installation and setup process from the start.

This is not the case with virtualization: the file system is merely a file on a disk, and the virtual computer is insulated from the real hardware because it works through an abstraction layer provided by the software implementing your virtual environments.

You can now replace your hardware with a more powerful computer that need not bear any resemblance to your previous system. The virtualized systems won’t mind. Just copy their disk files over and you’re up and running once more, and unless something has gone completely wrong they’ll pick up from where they left off.

For Virtualized Network Installations and Consultancy, Contact us here.

Virtualization Explained Part 1

Quite simply, virtualization is the method of running a pseudo-computer as if it were just another application on a real computer. The virtualized system believes it has a hard drive to itself, but in reality its entire file system is contained within disk files on another, underlying file system. The virtualized computer acts as though it were physically installed, with the processor and other resources to itself.

Virtualization makes for a terrific test environment, as the virtual computer doesn’t have to be the same operating system as the host it runs on. What this means is that you could trial different Linux distros within virtual PCs on your Windows-based computer. This takes away the risk factor of harming your Windows installation, but you still reap the benefits of a real Linux deployment.

However, it doesn’t just work with Linux on Windows; you might want to trial Windows Server software on your Linux, Windows XP or Vista system. You can mock up a multi-server environment on a single computer by running more than one virtual PC, each with its own configuration.

The computer architecture doesn’t even have to match. You can easily run a 32-bit virtual PC on a 64-bit computer. This is where the advantage of legacy application support comes into play. For example, you may have an old Windows NT program your business must continue supporting, yet you don’t want to maintain a Windows NT computer. You could have amazingly shiny servers running a more contemporary server operating system – it doesn’t matter what is installed on them, nor whether they are 64-bit machines. You can create a virtual 32-bit Windows NT environment for your legacy program to run on. This gives the program the impression that it’s running on such a computer, and you’ve removed the hassle of maintaining old hardware.

Because the file systems of virtual computers are just disk files within a parent environment, you can easily perform a backup with a straightforward disk copy. This allows you to take backups prior to implementing any changes on your virtual PC. If something goes wrong then you don’t have to panic; recovery is nothing more than closing the virtual machine, replacing the file system disk file with the backup copy and restarting. As far as your virtual computer knows, anything that happened between making the backup and restoring it simply never occurred.
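
As a minimal sketch of that idea in Python, assuming the virtual machine is powered off and its disk lives in a single file (the paths and file names below are purely hypothetical):

# Minimal sketch: copy a powered-off VM's disk file before making changes,
# and put the copy back if something goes wrong. Paths are hypothetical.
import shutil
from pathlib import Path

DISK = Path("vms/legacy-nt/legacy-nt.vmdk")       # the VM's disk file
BACKUP = DISK.with_name(DISK.name + ".bak")       # where the copy is kept

def backup_disk():
    """Copy the disk file while the virtual machine is shut down."""
    shutil.copy2(DISK, BACKUP)

def restore_disk():
    """Roll back: replace the disk file with the earlier copy."""
    shutil.copy2(BACKUP, DISK)

if __name__ == "__main__":
    backup_disk()
    # ... make risky changes inside the VM; if they go wrong, run:
    # restore_disk()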

For Virtualized Network Installations and Consultancy, Contact us here.