Category Archives: Servers

VMware: The Basics

What is a virtual machine?
A virtual machine is a set of virtual hardware and files. The virtual hardware lets us install a guest OS on top. This operating system should be “supported” by the hypervisor, although in practice you can often install unsupported guest OSes as well.

Virtual machine files:
So, let’s take a look at some of the most important files that constitute a VMware virtual machine:

• .VMX file – This is the configuration file for a VM. It contains all the VM’s properties, such as the number of vCPUs, RAM, virtual NIC interfaces, guest OS, etc.

• .VMDK file – This file is also known as the virtual disk descriptor. It describes the geometry of the virtual disk.

• –flat.vmdk – This is the data file where the actual contents of the virtual disk are stored.
• .nvram – This file contains the “BIOS” of the virtual machine (settings, etc.).
• .vswp – This file is the VM swap file. It is created when the VM is powered on and is used by the hypervisor to guarantee the memory assigned to the VM. Its size is equal to the vRAM (the RAM assigned to the VM); the only exception is when a memory reservation is configured for the VM, in which case the .vswp file shrinks to the assigned vRAM minus the reservation (see the sketch after this list).

• .vmtx – This file is only present when you mark the VM as a template. When you set a VM as a template, the only thing that happens is that the .vmx file is converted into a .vmtx file.

• .vmsd – This file is the snapshot descriptor. It lists the snapshots that exist for the VM, the files that belong to each snapshot, and so on.

• .vmss – This file is known as the “snapshot state.” It stores the configuration state (.vmx information) of the VM at the time the snapshot was taken. For example, if I take a snapshot when the VM has one vCPU and later take another after reconfiguring it with 2 vCPUs, the hypervisor knows about the change in vCPU count through this file.

• -delta.vmdk – This file contains the changes made to the VM’s disk after a snapshot was taken, so we essentially have the “base” disk plus delta files that store all further changes.

As we can see, a virtual machine is easy to migrate and manage because it is a set of files rather than a physical server. There are other VM files as well, such as .log log files and .lck lock files.
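
To make the .vmx/.vswp relationship above concrete, here is a minimal Python sketch, not VMware code: memSize, sched.mem.min, numvcpus and guestOS are real .vmx options, but the tiny parser and the sample file contents are purely illustrative assumptions.

```python
# A minimal sketch, not VMware code: parse a few .vmx-style key/value lines
# and estimate the .vswp size as assigned vRAM minus the memory reservation.
# The sample values below are made up for illustration.

SAMPLE_VMX = '''
memSize = "4096"
sched.mem.min = "1024"
numvcpus = "2"
guestOS = "ubuntu-64"
'''

def parse_vmx(text: str) -> dict:
    """Turn key = "value" lines into a plain dict."""
    config = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip().strip('"')
    return config

def vswp_size_mb(config: dict) -> int:
    """Swap file size = assigned vRAM minus any memory reservation."""
    vram = int(config.get("memSize", 0))
    reservation = int(config.get("sched.mem.min", 0))
    return vram - reservation

cfg = parse_vmx(SAMPLE_VMX)
print(f"{cfg['numvcpus']} vCPUs, {cfg['memSize']} MB vRAM")
print(f"Expected .vswp size: {vswp_size_mb(cfg)} MB")  # 4096 - 1024 = 3072
```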

Resources that can be assigned to a virtual machine:

Virtual machine hardware:
A virtual machine requires a set of “virtual devices.” These devices, or virtual hardware, provide access to the underlying physical resources. It is important to note that access to the hardware is controlled by the hypervisor. Currently, VMware presents the following hardware devices to virtual machines:

• SCSI adapter – This virtual SCSI adapter allows the use of virtual disks, with a maximum of 4 SCSI adapters per VM and 15 targets (disks) per adapter (60 disks in total). There are different types of adapters: LSI Logic Parallel, LSI Logic SAS, BusLogic Parallel and VMware Paravirtual SCSI (PVSCSI). The PVSCSI adapter is a paravirtualized virtual adapter that can give us greater performance. If you want to know more about it, take a look here.

• USB controller – A vSphere VM can have three types of USB controllers: UHCI (USB 1.0), EHCI (USB 2.0) and xHCI (USB 3.0), with a maximum of one controller of each type (three controllers in total) per VM. Each controller can have up to 20 devices.
• Floppy controller – This floppy controller can have up to two devices. Usually the virtual floppy is used to supply drivers via a floppy image (.flp).
• Network cards – Also known as “vNICs.” vSphere supports up to 10 network cards per VM. Different types of vNICs are available to a VM depending on the virtual hardware version or “VM compatibility”: vlance (an emulated 10 Mbps NIC), E1000, Flexible (can switch between vlance and VMXNET), VMXNET2 and VMXNET3. The VMXNET adapters are “paravirtualized” adapters that allow better performance. If you want to know more about the different types of virtual NICs, take a look at this great blog post. We can also present SR-IOV compatible devices (virtual interfaces of a physical NIC/PCIe device) to a VM, reducing overhead and increasing performance.

• AHCI controller (SATA) – This type of controller is only available starting with vSphere 5.5. A VM can have up to 4 SATA controllers with a maximum of 30 disks per controller.
• Video card – Provides video for the VM. We can also enable 3D hardware rendering or software rendering on this “vGPU.”
• Other – A VM can have up to three parallel ports and up to four serial/COM ports.
• RAM – the maximum amount of RAM that can be assigned to a VM in vSphere 5.5 is 1TB.
• CPU – The maximum number of vCPUs that can be assigned to a VM in vSphere 5.5 is 64.
It’s very important to know that the CPU is not emulated by the VMkernel: the hypervisor simply schedules the different vCPUs onto cores of the physical system using its CPU scheduler.
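
As a quick reference, the per-VM maximums quoted above can be collected into a small lookup table. The following hypothetical Python helper (the dictionary keys and the check function are my own naming, not a VMware API) simply sanity-checks a proposed configuration against the vSphere 5.5 limits listed in this section:

```python
# Hypothetical helper: the per-VM maximums quoted above for vSphere 5.5,
# plus a quick sanity check for a proposed VM configuration.

VSPHERE_55_VM_MAXIMUMS = {
    "vcpus": 64,
    "ram_gb": 1024,                # 1 TB
    "vnics": 10,
    "scsi_adapters": 4,            # 15 targets each -> 60 virtual disks
    "disks_per_scsi_adapter": 15,
    "sata_controllers": 4,         # 30 disks each (vSphere 5.5 onward)
    "disks_per_sata_controller": 30,
    "usb_controllers": 3,          # one each of UHCI, EHCI and xHCI
}

def validate_vm(requested: dict) -> list:
    """Return a human-readable list of limits the request exceeds."""
    return [
        f"{key}: requested {value}, maximum is {VSPHERE_55_VM_MAXIMUMS[key]}"
        for key, value in requested.items()
        if value > VSPHERE_55_VM_MAXIMUMS.get(key, float("inf"))
    ]

print(validate_vm({"vcpus": 72, "vnics": 8}))
# ['vcpus: requested 72, maximum is 64']
```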

Virtual Disks
As we already know, a virtual machine can have virtual disks attached to a vSCSI adapter or a SATA controller, but we can add different types of virtual disks, and the choice is reflected directly on the physical storage.

Let’s start by explaining what Thin Provisioning is in vSphere. Thin Provisioning enables the hypervisor to assign disk space to VMs on demand, which allows overallocation of the physical storage. With Thin Provisioning the guest OS (the operating system installed in the VM) sees the full allocated space, but in reality only the consumed space is allocated on the physical storage.

Example:
John creates a VM with a thin provisioned virtual disk and assigns 80GB of space to it. John installs an Ubuntu guest OS and several applications that consume a total of 40GB of the 80GB allocated, i.e. 50 percent. Only 40GB of space is consumed on the physical disk/storage.

Basically the hypervisor “tricks” the guest OS and reports the total size of the disk without actually occupying all of that space on the physical storage.
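
As a toy illustration (this is not how ESXi actually implements VMDKs), the following Python sketch models a thin-provisioned disk that reports its full provisioned size to the guest while only allocating blocks on first write, reproducing John’s 80GB/40GB example:

```python
# Toy model of thin provisioning, not ESXi's implementation: the guest sees
# the full provisioned size, but physical space is only allocated on write.

class ThinDisk:
    def __init__(self, provisioned_gb: int):
        self.provisioned_gb = provisioned_gb   # size reported to the guest OS
        self._allocated = set()                # 1GB "blocks" in use on storage

    def write(self, block: int) -> None:
        if not 0 <= block < self.provisioned_gb:
            raise ValueError("write beyond the provisioned size")
        self._allocated.add(block)             # allocate on demand

    @property
    def physical_gb(self) -> int:
        return len(self._allocated)            # space consumed on the datastore

disk = ThinDisk(provisioned_gb=80)
for block in range(40):                        # the guest writes 40GB of data
    disk.write(block)
print(f"Guest sees {disk.provisioned_gb}GB, storage holds {disk.physical_gb}GB")
# Guest sees 80GB, storage holds 40GB
```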

Now that we know what Thin Provisioning is, let’s take a look at the currently supported types of virtual disks, or VMDKs:

• Thick provision lazy zeroed – This type of disk allocates all of its assigned space on the physical layer (datastore) at creation. Any previous data on those blocks is not overwritten at creation time, because this type of disk does not write zeroes to its blocks up front; the “erasing,” or zeroing, is performed on demand at first write.
• Thick provision eager zeroed – With this type of disk all the space is allocated on creation and zeroes are written to every block that is part of the virtual disk. Because of this, creating an eager zeroed VMDK takes longer (the sketch after this list illustrates the lazy vs. eager difference).
• Thin provision – With this type of disk, space is allocated on demand.
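
To make the difference concrete, here is a small sketch. This is only an analogy, since the real zeroing happens at the VMFS layer: the eager disk pays the zeroing cost once at creation, while the lazy disk pays it per block at first write.

```python
# Analogy only: real zeroing happens at the VMFS layer. An eager-zeroed disk
# zeroes every block at creation; a lazy-zeroed disk zeroes a block the first
# time the guest writes to it.

class ThickDisk:
    def __init__(self, blocks: int, eager: bool):
        self.data = [None] * blocks            # space fully allocated up front
        self._zeroed = set()
        if eager:                              # slower creation: zero everything now
            for i in range(blocks):
                self.data[i] = 0
                self._zeroed.add(i)

    def write(self, block: int, value: int) -> None:
        if block not in self._zeroed:          # lazy path: zero on first write
            self.data[block] = 0               # (the extra first-write cost)
            self._zeroed.add(block)
        self.data[block] = value

lazy = ThickDisk(blocks=1000, eager=False)     # fast to create
eager = ThickDisk(blocks=1000, eager=True)     # slow to create, no first-write penalty
lazy.write(0, 42)
eager.write(0, 42)
```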

Now it’s time to talk about the different disk modes in vSphere. These modes define how a VMDK (virtual disk) behaves when we take a snapshot of the VM. The following modes can be configured:

• Dependent – In this mode the virtual disk (VMDK) is included in snapshots, so if you revert to a snapshot, the changes made after it was taken are gone.
In this mode, if we power off the VM the snapshot and its changes remain persistent.
• Independent persistent – In this mode the virtual disk is not affected by snapshots, so no delta file is created and every change is written directly to disk.
• Independent non-persistent – In this mode the virtual disk is likewise excluded from snapshots, but a redo log (delta file) is created and every write or change is captured there. If you power off the VM, those changes are gone (see the sketch below).
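
The following toy Python model summarises where a write lands in each mode and what survives a power-off. It is purely illustrative, not hypervisor code, and it assumes a snapshot already exists for the dependent case:

```python
# Purely illustrative model of the three disk modes: where writes land and
# what survives a power-off. Assumes a snapshot exists for the dependent case.

class VirtualDisk:
    MODES = ("dependent", "independent_persistent", "independent_nonpersistent")

    def __init__(self, mode: str):
        assert mode in self.MODES
        self.mode = mode
        self.base = {}     # the "base" .vmdk
        self.redo = {}     # delta file / redo log, when one exists

    def write(self, block: int, value: int) -> None:
        if self.mode == "independent_persistent":
            self.base[block] = value   # straight to disk, no delta file
        else:
            self.redo[block] = value   # captured in the delta/redo log

    def power_off(self) -> None:
        if self.mode == "independent_nonpersistent":
            self.redo.clear()          # redo log discarded: changes are gone
        # dependent: the delta persists until the snapshot is consolidated

disk = VirtualDisk("independent_nonpersistent")
disk.write(0, 42)
disk.power_off()
print(disk.redo)   # {} -> the change did not survive the power-off
```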

This article has been an introduction to VMware: we defined what a virtual machine is and looked at the files that make one up, the resources and virtual hardware that can be assigned to it, and the different virtual disk types and disk modes.

Why Microsoft Needs Windows 7 to Succeed

Thursday, 22 October sees the highly anticipated arrival of Microsoft’s Windows 7 operating system, with many believing that the future of the world’s largest software company will depend on its success.

The enormous scale of Microsoft’s grip on the market becomes clear when you consider that 90% of computers rely on its Windows operating system, and over 1 billion people use it.

Microsoft’s last financial year saw a £35.7bn turnover with a net profit of approximately £9bn. Over half the profits generated were reliant upon Windows.

Experts have predicted that Microsoft’s stranglehold over the market is due to drop, with competitors Linux and Apple waiting to jump in. Many experts predict that software will shift to the “cloud,” where people connect to remote servers to access their software, in a revolution in worldwide computing.

Microsoft attracted the attention of regulators at the US Department of Justice and the European Commission with its ruthless actions towards competitors.

The release of its Vista operating system three years ago left many of its first users with unusable hardware and software, a crushing blow that seriously damaged Microsoft’s reputation with software developers and customers alike.

Most people still prefer Windows XP, Vista’s eight-year-old predecessor, with estimates suggesting that Vista holds between 18.6% and 35% of the market.

Annette Jump, research director at technology research firm Gartner, believes that “Vista is the worst-adopted operating system,” while Microsoft International’s Jean-Philippe Courtois admits, “we don’t feel great about Vista adoption.”

This could be the only chance for Microsoft to regain the confidence that took a blow during the Vista period. Many Microsoft executives feel that they learnt a lot from what went wrong with Vista.

Windows 7 looks set to be released in good time, just three years after the release of Vista. Those who have tested it report that it is fast, secure, reliable and easy to use. Microsoft have taken big steps to avoid repeating the mistakes made with Vista, and have prepared their partners for the release.

Mr Courtois believes that “the Windows ecosystem is the broadest in the world, and we have to take care of that,” with Alex Gruzen from Dell Computers surprised at how “the preparations for Windows 7 have been a remarkable step up from the days of dealing with Vista.”

He continued by revealing that “in the past, Microsoft looked at its operating system in isolation, and gave it to [manufacturers] to do whatever they wanted. Now they collaborate, help to figure out which third-party vendors are slowing down the system, help them improve their code.”

Sidekick Loss Hits T-Mobile Phone Sales

T-Mobile has had to withdraw the Sidekick in America after being made aware that customers could lose personal data stored on its servers.

Danger, the designer of the Sidekick’s software and a subsidiary of Microsoft, confirmed the fault, with the mobile phone industry condemning the issue as one of the biggest failings in recent years.

Microsoft are also coming out of the situation looking bad, having promoted cloud and online services as a less expensive alternative to enterprise storage.

Harry McCracken, editor of Technologizer.com, told BBC News: “this is the most spectacular loss of data on the web to date.”

“There have been other examples, but always from small companies. For this to involve a big name like Microsoft is a major embarrassment and a big worry for consumers and Microsoft.”

Data back-up

It is understood that Microsoft’s subsidiary Danger experienced a technical hitch that caused major data loss, with Sidekick users seeing disruption for the past week. Investigations are underway to find the cause of the server failure, with Microsoft yet to offer an explanation.

Sidekick uses an online service to back up contacts, calendar appointments, photos and other personal information saved on the mobile phone. According to Microsoft, some of the Sidekick’s one million subscribers have “almost certainly” lost personal data as a result of this glitch.

Those most at risk of losing their personal information are users who let their battery fully drain or removed it completely, causing all local copies of their data to be cleared from the phone.

“I had 411 contacts, now they are all gone. I had five e-mail accounts set up on the phone as well which are also gone, address book and all,” complained 17-year-old high-school student Kayla Hasse from New Jersey.

“I am extremely upset not only due to the fact I lost everything, but also because I pay 20 some dollars a month for THIS? It’s ridiculous.”

Mr McCracken feels it’s a “real wake-up call for customers.”

“In the past we have always tended to assume that big companies are better at backing up our data than we are. While this is true in most cases, a lot of people are going to say you can’t trust third parties, whether it’s Microsoft, Google, Apple or whoever.”

The future of cloud computing

Whilst Microsoft and T-Mobile may experience the immediate fall-out from this problem, experts fear that it may cause long term damage to customer confidence in cloud computing.

Will Strauss, president of Forward Concepts, is concerned: “Microsoft has been beating the drum for the idea of cloud computing where we all trust our stuff on some server up in Washington State.”

“This is going to throw a little cold water on that idea for the moment. Microsoft is going to have to do some explaining and give good assurances that cloud computing is viable and that it won’t lose data in the future, otherwise people won’t trust it.”

Fujitsu adds SAS, iSCSI and SSD to Eternus DX

Fujitsu is looking to strengthen its Eternus DX brand by adding SAS and iSCSI interfaces and a solid state drive, enhancements developed since its March takeover of Fujitsu Siemens Computers.

Fujitsu Siemens Computers had twin-controller arrays with Fibre Channel connectivity for small/medium businesses in the FibreCAT SX60 and SX80. SAS and iSCSI were already featured in Fujitsu’s Eternus 2000 array, which was directed at a similar customer base, while the Eternus 4000 and 8000 were offered as larger-scale arrays.

Fujitsu has been aiming to merge the SME products into one range and to re-brand it ‘Eternus DX’. In June, initial steps were made by renaming the FibreCAT SX60 and SX80 to Eternus DX60 and DX80. STEC had already been chosen a month earlier to supply the solid state drives (SSDs) for its Eternus drive arrays.

As a result of these modifications, the Eternus 2000 range has now been replaced by the Eternus DX60 and DX80, with SAS and iSCSI offered alongside Fibre Channel. The DX60 offers up to 24 15,000rpm SAS HDDs with a choice of 300GB or 400GB capacity, and the DX80 up to 120. Nearline SAS drives rotating at 7,200rpm are also available in 750GB or 1TB capacities; however, SATA drives are not supported.

The DX60/DX80 give leading-edge performance and easy-to-manage storage for a range of uses. The ability to place data on the most appropriate medium, from SAS and nearline SAS to SSD, allows customers to implement better information lifecycle management.

Both models offer two to four Fibre Channel host interfaces, or an equal number of 3Gbit/s SAS or 1Gbit/s iSCSI interfaces. The DX60 provides 4Gbit/s Fibre Channel, while the DX80 can deliver 8Gbit/s, with 100GB or 200GB SSDs.

Both products include RAID migration, Data Block Guard, eight snapshots (extendable to 512 on the DX60 and 1,024 on the DX80), redundant copy, disk encryption, and a function called Eco-mode, which is based on MAID (Massive Array of Idle Disks) technology and allows administrators to save electricity by choosing to spin down inactive disk drives.

They suit a range of uses, such as Microsoft Cluster Server or X10sure and storage consolidation. These products are also well suited to important company applications such as e-mail, data archiving, database operation and disk back-up.

The DX60/DX80 also act as a great storage aid in virtual server environments with VMware vSphere, Citrix and Microsoft Hyper-V.

AMD prepares the “fastest graphics supercomputer ever”

Advanced Micro Devices (AMD) have announced that they are preparing to release what they are calling the “fastest graphics supercomputer ever.” It is the AMD Fusion Render Cloud, a system powerful enough to let you render high-definition graphics in three dimensions in a way that has never been done before.

AMD have teamed up with OTOY to bring you the AMD Fusion Render Cloud, which is said to bring professional graphics rendering right into your home, or as close to your home as possible: the technology uses a bank of remote computers to render the 3D high-definition footage and then sends it back to your computer down a cable.

The Fusion Render Cloud is a mixture of lots of different processors that will work together to get the tough job done, including AMD Phenom II processors, ATI Radeon HD 4870 graphics processors and AMD 790 chipsets. This whole set of processing power will supply over 1 petaflop, delivered by over 1,000 GPUs.

According to the Chief Executive Officer of AMD, Dirk Meyer, the cloud supercomputer would be easily accessible and easy to use.   Meyer went on to say, “Mobile computing is never going to be the same, and cloud computing really has the opportunity to open up new vistas both for the film and game industries.  Now we’re poised for a great leap forward in visual computing as well as mobile computing.”

The announcement was made at the Consumer Electronics Show in Las Vegas. Also on stage with Dirk Meyer was Jules Urbach, the Chief Executive Officer and founder of OTOY, who are working with AMD to bring the public this render cloud. Urbach’s software company, OTOY, mainly delivers graphics content to users from server farms in a similar way to the one AMD are suggesting here, and so they would clearly be up for the task.

At the Consumer Electronics Show, Urbach showed a number of ATI graphics cards working together to bring a first-person shooter to a wired-up device, showing how the processors could work together to provide greater power.

“All of a sudden we are taking one of the world’s most complicated games and we’re putting it in a Web page. It’s huge.  All you need is an iPhone…. [or] a laptop to use it,” announced Urbach, who also claimed that the cloud would be ready for use by the third quarter of the year.

AMD need this little boost and will be promoting the cloud as a way of reinventing the company’s image. AMD recently laid off 600 workers as the economic crisis started to take its toll. On top of these redundancies, the company has recorded eight consecutive quarterly losses, and AMD have been falling behind Intel, their main rival, by not matching its releases and failing to bring out anything of real interest, while Intel have been taking the market from every angle.

IBM announce AEM v4.1

IBM has announced the arrival of its Active Energy Manager v4.1 (AEM) plug-in for its Systems Director system management tool, which is used to monitor the power consumption and temperatures of IBM systems, as well as to put a restriction on the amount of power available to selected server models. The announcement came a bit later than expected, but the plug-in should make a welcome addition to the range.

Administrators can use the plug-in on Linux partitions with Power, X64 or mainframe iron, and it can reach into Linux, Windows, z/OS and i 6.1 partitions on each machine to keep them to a set level of energy consumption.

This recent addition to the AEM tools is suitable for System z mainframes and will work with all other server platforms. The updates include features for reaching into power distribution units inside computer racks and reading the level of power currently being consumed, giving data centre managers a way to monitor at both the rack and the server level.

A great feature of the plug-in is that it can be used to put a lid on power consumption at a multi-system level, i.e. all of the servers in a rack, or all the blades in a chassis. The plug-in supports the new Power6-based Power 520 and 550 systems, which were released in mid-October, in addition to the new Power6-based JS12 and JS22 blades.

You can monitor energy use on any Power system or System z mainframe; however, only the Power6 and X64 systems allow you to cap power consumption. To use AEM v4.1 you must have Systems Director V6.1, and the software includes an updated web console, which means you don’t have to install client software on your administration PC to use the tool.

This latest edition comes on the back of multiple efforts by computer and server companies to ensure we have greener data centres across the world, and is a welcome addition to IBM’s software portfolio.

EU sets Code of Conduct for Greener Datacentres

The European Commission has requested that data centre owners voluntarily sign a Code of Conduct which encourages best practice when it comes to energy efficiency, and provide monthly energy reports and an overall annual report to an EU secretariat.

The Code of Conduct states that: “Electricity consumed in data centres, including enterprise servers, ICT equipment, cooling equipment and power equipment, is expected to contribute substantially to the electricity consumed in the European Union (EU) commercial sector in the near future.”

The man behind the plan is Paolo Bertoldi, who is part of the EU’s Renewable Energies Unit and is taking responsibility for ensuring greener datacentres. For the past two years Bertoldi has been behind a working group that has held a series of meetings with European government representatives, including DEFRA, and various manufacturers with a business interest, including Intel and Sun Microsystems.

Bertoldi has worked hard to encourage the support of multiple public sector bodies across the length and breadth of Europe, by taking on jobs that they can’t do, or don’t want to do, and getting them involved without over-committing them, an impressive feat for the generally slow-moving EU.

The EU press release said that, “Historically, data centres have been designed with large tolerances for operational and capacity changes, including possible future expansion. Many today use design practices that are woefully outdated. These factors lead to power consumption inefficiencies. In most cases only a small fraction of the grid power consumed by the data centre actually gets to the IT systems. Most enterprise data centres today run significant quantities of redundant power and cooling systems typically to provide higher levels of reliability. Additionally IT systems are frequently run at a low average utilization.”

The Code of Conduct has been put in place to reduce European data centre power consumption, and has the support of the British government.

Companies and public sector bodies will sign up to the CoC, agreeing to a set initial level of energy consumption, monitoring it month by month, and undertaking best practices such as virtualising servers, using cold-aisle cooling and not mixing hot and cold air in the data centre.

This new initiative is an encouraging step toward a greener Europe; it will be interesting to see how data centres change their current practices to adapt to the new rules over the coming months.

European SMEs Spending $7.6bn on Servers and Networks in 2008

According to a recent survey by AMI-Partners, who survey small to medium-sized businesses (SMEs) of under 1,000 employees, SMEs are expected to spend $7.6bn on servers and networking equipment in 2008 across the UK, Germany, France, Italy, Sweden, Norway and Finland.

AMI-Partners say that over half of the $7.6bn will be spent in the UK, Germany and France alone, which are also the main buyers of IT equipment – no great surprise considering they have the strongest economies of the lot.

Across the three countries, average spending in 2008 is expected to grow by around 5 percent, with around two thirds of spending coming from businesses with fewer than 100 employees.

A particularly interesting discovery was that 22 percent of small businesses have only recently bought their first servers, with many having got by using mid-range PCs until their business needed to expand.

The survey found that, across the board, Windows Server 2003 was the most popular operating system, followed by Windows 2000 Server. There is good news for Linux too, as it was showing the seeds of growth.

Networking hardware spending is expected to rise by 8 percent in 2008. This takes into account the cost of network switches, routers, network interface cards, wireless LAN gear and cabling. LAN switches account for the bulk of network spending in 2008, the survey said.

Good news for IBM: of the UK businesses surveyed, over 40 percent had bought blade servers, the highest penetration of blade server adoption among all the countries polled.

Has Google gone Under? Down Under

Google is going under. Down under. Well, that is according to Australian IT. The search giant has dispatched the oddly named “Lord of the nerds”… sorry, “Duke of the Data Centers” to one of the only places that doesn’t have a Google data center.

A small team of Americans, led by Simon Tusha (that’s the Duke to you and me), has headed to the land of Oz for some “high-level discussions” with local data center providers.

Tech mag The Register asked Google if the rumours were true, and they sort of answered the question. Kind of.

“Fast, innovative products are crucial for our users and require significant computing power,” a company spokesman said. “As a result, Google invests heavily in technical facilities around the world and is constantly on the look out for additional locations. However, we don’t comment on possible sites or locations.”

However, an Australian spokesman told Australian IT that the company was considering its options. “While we’re investing in our Australian operations, we haven’t made any decisions about whether we’ll locate a data centre here,” he said.

Google currently has 36 data centers operating or under construction across the world, but none in Australia.

According to Australian IT, some businesses have had to do without Gmail and other cloud-based apps, as routing data to Google servers overseas increases bandwidth costs.

Google currently constructs its data centers by stacking shipping containers pre-packed with servers and cooling equipment, transporting them around the world for easy set-up. And there are rumours that Google are even manufacturing their own servers and Ethernet switches.

Sun and Fujitsu reveal Sparc Enterprise M3000

Fujitsu and its server partner Sun Microsystems have released an entry-level server that fills the product gap between the companies’ quad-core Sparc T and Sparc64 VII models.

The new server is code-named “Ikkaku” and is sold as the Sparc Enterprise M3000. The Ikkaku is a single-socket box, and the four cores in that single Sparc64 VII processor run at 2.52GHz. The processor has 64KB of L1 data cache and 64KB of L1 instruction cache per core, plus 5MB of on-chip L2 cache shared by the four cores. The server’s motherboard supports up to 32GB of main memory using 4GB memory modules, and features four low-profile PCI-Express x8 peripheral slots.

The system uses the same Jupiter server bus as the larger Sparc Enterprise M servers to link the components of the system together. That bus has 17GB/sec of peak aggregate bandwidth and 4GB/sec of I/O bandwidth. According to Sun, the M3000 has double the performance of entry servers using UltraSparc-IIIi processors.

John Fowler, the executive vice president in charge of Sun’s Systems Group, says Sun has brought the M3000 to market for a few reasons. Customers deploying large Jupiter systems usually use an n-tier architecture, with larger servers running the databases behind applications, and web application servers accessing the data and running the application code that feeds off the databases.

Although some companies don’t mind mixing database and application servers of one type, some do, and Sun and Fujitsu needed a smaller server for midsized customers. The system is also relatively quiet, at just 47 decibels, which Sun say is around the same as a quiet office.

The Sparc Enterprise M3000 server is available now. A base-level machine with a single processor, 4GB of main memory, two 146GB disks, a DVD drive and a Solaris 10 licence will cost you around $15,000.
