IBM announces AEM v4.1

IBM has announced the arrival of its Active Energy Manager v4.1 (AEM) plug-in for its Systems Director system management tool, which is used to monitor the power consumption and temperatures of IBM systems, as well as to cap the power available to selected server models. The announcement came a little later than expected, but should make a welcome addition to the range.

Administrators can use the plug-in on Linux partitions on Power, x64 or mainframe iron, and it can reach into Linux, Windows, z/OS and i 6.1 partitions on each machine to keep them to a set level of energy consumption.

This recent addition to the AEM tools is suitable for System z mainframes and will work with all other server platforms. Each of the updates includes features that reach into power distribution units inside computer racks and report the level of power currently being consumed, giving data centre managers a way to monitor at both the rack and the server level.

A great feature of the plug-in is that it can be used to put a lid on power consumption at a multi-system level, i.e. all of the servers in a rack, or all the blades in a chassis. The plug-in supports the new Power6-based Power 520 and 550 systems, which were released in mid-October, in addition to the new Power6-based JS12 and JS22 blades.

You can monitor energy use on any Power system or System z mainframe; however, only the Power6 and x64 systems allow you to cap power consumption. To use AEM v4.1 you must have Systems Director V6.1, and the software includes an updated web console, which means you don’t have to install client software on your administration PC to use the tool.

This latest edition comes on the back of multiple efforts by computer and server companies to ensure we have greener data centres across the world, and is a welcome addition to IBM’s software portfolio.

New report predicts massive increase in malware and phishing in 2009

Reports from security provider MessageLabs suggest that virus writers are highly likely to release increasingly sophisticated strains of malware over the course of next year in an effort to get back into the game after some high-profile botnet shutdown operations in 2008.

The organisation predicts that hackers will set off a series of attacks in which malware exists as a virtualization layer running directly on the hardware, hidden from the operating system.

Senior analyst Paul Wood from MessageLabs explained further: “The operating system does not know it’s there, and the malware will be intercepting low-level operating system calls.”

“The problem will be in realizing it’s there and understanding how to clean up, because it’s so low-level and tangled up in the operating system that sometimes the only recourse is to reinstall the machine from scratch.”

He believes that cyber criminals will concentrate on infecting systems with sophisticated malware that can switch between different tasks as appropriate. He gives the example that if a piece of malware determines that the spam it is sending out is being blocked, it could then be told to launch denial-of-service attacks instead.

Also according to MessageLabs, mobile malware is set to increase in 2009, but not with the goal of infecting devices to create botnets – instead attackers will try to make money by subverting the phones so that they dial premium rate lines set up by the criminals – “thank you for holding, your cash is important to us”.

The company predicts that phishing scams will increase massively, and increase in cleverness, as criminals target weaknesses in the Domain Name System (DNS) to launch phishing websites by creating sub-domains in exposed accounts. This method could be used to find a way round the traditional URL filters that can detect when criminals use typo-squatting techniques, which rely on users mistyping an address into a browser.

“We have seen legitimate businesses with good domains being taken over in some way,” said Wood. “The criminals gain access to the admin function of their DNS console, add sub-domains to their records and then use these domains in phishing e-mails.”

Greener Gaming? New report shows Consoles are eating up too much power

Everywhere you look right now you see green this, green that. It’s on the tip of every debater’s tongue, and is debated at all levels of society and industry. The one place that’s escaped the notion of green computing is something that pretty much every home has these days – games consoles.

A new report from the Natural Resources Defense Council (NRDC) has warned games console makers to do more to cut down the machines’ power consumption.

The report claims that across the US, video game consoles can consume the same amount of power as it would take to light up all of the homes in San Diego, and the bulk of energy consumption actually takes place while the system sits on standby.

NRDC Senior Scientist Noah Horowitz said, “If you leave your Xbox 360 or Sony PlayStation 3 on all the time, you can cut your electric bill by as much as $100 a year simply by turning it off when you are finished playing”.

“With so many struggling in today’s economy, it’s important to realize there are simple steps gamers can take to lower their energy costs. And if manufacturers make future systems more energy efficient, they’ll be doing the right thing for consumers’ pockets, for our clean energy future, and for the environment.”

The NRDC report detailed how much energy the Xbox 360, PlayStation 3 and Nintendo Wii use when they are active, on standby or turned off.

The research found that, on average, the PS3 and 360 used a huge 150 watts and 119 watts, respectively. Over a year each system would use more than 1,000 kilowatt-hours if left on 24 hours a day, 7 days a week, which is equal to running two fridges at the same time, for the same length of time – though in reality I’ve never heard of an Xbox 360 that could manage that task… but I digress.
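
As a rough sanity check on those figures, here’s a quick back-of-the-envelope sketch in Python (it uses only the wattage numbers quoted above and assumes the console really is left on around the clock):

# Annual energy use for consoles left on 24/7, using the average power draw figures above.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

consoles = {"PS3": 150, "Xbox 360": 119, "Wii": 20}  # average draw in watts

for name, watts in consoles.items():
    kwh_per_year = watts * HOURS_PER_YEAR / 1000  # watt-hours converted to kilowatt-hours
    print(f"{name}: {kwh_per_year:,.0f} kWh per year if never switched off")

# The PS3 comes out at roughly 1,314 kWh and the Xbox 360 at roughly 1,042 kWh,
# both comfortably over the 1,000 kWh mark mentioned in the report.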

The Wii was much more economical, using just 20 watts of electricity, which is less than its predecessor, the GameCube, used. The other consoles used far more power than their earlier models.

“Video game consoles are really just specialized computers, and most computers, especially laptops, have really sophisticated energy management technology,” said Nick Zigelbaum, an energy analyst at NRDC. He believes energy efficiency “is just something these companies have not connected with their products”.

The main reason for the 360 and PS3’s high energy consumption is down to the systems’ high-definition capabilities. Using these functions causes the consoles to work extremely hard, drawing a lot of power, and the draw continues even after the console is switched off.

The PS3, for example, uses five times as much power as a standard Sony Blu-ray player when playing the same movie.

The NRDC recommends that manufacturers incorporate more energy-efficient components and automatic energy management features.

“It would just be default like when you’re typing something in Word and you close your laptop. You don’t lose the document you were working on. It’s been saved, most of the time whether you chose to save it or not. That kind of communication and coordination is something that should start happening in the gaming industry,” Zigelbaum said.

Zigelbaum noted that power-saving habits are catching on these days. He said that it used to be that many people would leave lights on when they left a room, but now people are switching them off.

“The work that is going forward now by Microsoft and Sony to include some auto-off features are a really good and necessary step. Now the focus is on how to make those features work the way they want them to and the way we want them to work,” he continued.

Is Virtual Reality becoming Virtually realistic?

When does virtual reality end and real life begin? Is it when your wife shouts that your dinner’s ready? Is it when your character gets killed online? Or is it when you’re found by your parents hanging in your bedroom closet?

Social gaming on a massive scale is meant to be a way to remove yourself from the fools of this world, and dive into a dream land that has you as a hero slaying mythical beasts, or as a smartly dressed, confident person who’s not afraid to interact with other humans, among other things.

But it often goes wildly wrong. Take the example of David Pollard, 40, who was divorced by his wife Amy Taylor after he was caught “cheating” on her with someone else’s avatar in Second Life – the hugely popular virtual reality game. The couple, who lived on benefits and collectively weighed 41 stone, gave themselves svelte in-game avatars, cool jobs and fancy clothes – complete escapism, creeping over into real life.

Fantasy gaming is more popular than ever, with around 5 million Second Life users registered in the UK (around a third of the worldwide total), and gamers queuing up – and collapsing from exhaustion – for the recent World of Warcraft expansion pack.

“When it comes to fantasy, people have always done it,” points out Alistair Ross, chartered psychologist and honorary fellow at Glasgow’s Strathclyde University.

“Don’t forget that kids used to dress up and play cowboys and Indians and pretend to kill each other.”

Gamers, it would seem, are happy to destroy some virtual worlds in the same way people do it in real life. Virtual Utopia, once a nice place to play in, is not entirely unlike the real world: it currently has a massive crime problem and high interest rates. You’d be as well not bothering.

“Residents of all Utopian societies want to build an ideal place but often have specific ideas about who fits in,” Ken Roemer, past president of the Society for Utopian Studies, said recently. “Who they don’t let in defines the boundary of who they are. As the place grows, there’s this notion of the wrong type of people coming in.”

The problem, he suggested, is that people are becoming increasingly addicted to these games, and some are finding it difficult to separate right from wrong in the real world.

One disturbing example of how bizarre online gaming can be was the McAllister family from Arizona, who played Second Life with each other. Dad Jim bought a shotgun in the game and ‘murdered’ his wife, 41, and kids Timothy, 13, and Rebecca-Anne, 9, before killing himself.

Second Life creator Linden Lab freaked out, naturally, and called the police to investigate. The police promptly broke down the family’s front door to find the family happily eating dinner together.

“Our family gets along great in real life,” McAllister told police. “But in Second Life, we couldn’t stop arguing.”

Is the line between gaming and reality getting too muddied? It’s an impossible area to regulate unless these games are taken offline, but with so many people relying on them for their confidence, and as their only means of social interaction, who could take them away and not feel guilty?

Lenovo Announces Secure Managed Client

High-spec computer manufacturer Lenovo has announced Secure Managed Client (SMC), a storage-based desktop solution that should massively reduce the costs and security worries that come with running an IT business.

The rather exciting SMC does away with local hard drives and stores all data in a remote, non-server location, all the while cunningly disguising itself as an everyday PC.

The SMC consists of a client (a hard drive-less ThinkCentre desktop PC with Intel vPro technology), a Lenovo co-developed software stack and a centralised Lenovo storage array, all powered by Intel.

The SMC gives IT managers quite a few advantages over current server-based computing options such as IBM Blade PCs, thin clients or desktop virtualization.

SMC gives the user a full Windows experience, and it works alongside and enhances current IT processes and tools; the client can even be returned to a traditional desktop PC by re-enabling its hard drives. The SMC is highly energy efficient, using far less energy than a traditional system. It is also secure, in that all information is held in a single, safe location, reducing the risk of theft dramatically.

Lenovo has estimated that its large enterprise customers in North America spend around $120 a month on running a PC, taking into account expenses from deskside IT visits, call centre support, and management costs.

Lenovo claims that using the SMC solution could reduce this expense to around $70 per PC. The SMC solution is currently being offered on the ThinkCentre M57p desktop PC, but will also be available for the M58p in 2009. The M58p is the greenest and most secure ThinkCentre yet, the company boasts.
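
Taking Lenovo’s own numbers at face value, the claimed saving is easy to put into annual terms; here’s a quick sketch (it simply assumes the $120 and $70 monthly figures above hold for a full year):

# Lenovo's estimated monthly cost of running a PC, per the figures above.
traditional_pc = 120  # dollars per month, conventional desktop
smc_pc = 70           # dollars per month, SMC-managed desktop

monthly_saving = traditional_pc - smc_pc
annual_saving = monthly_saving * 12
print(f"Claimed saving: ${monthly_saving} a month, or ${annual_saving} per PC per year")
# Around $600 per PC per year, before any hardware or deployment costs are counted.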

EU sets Code of Conduct for Greener Datacentres

The European Commission has requested that data centre owners voluntarily sign a Code of Conduct which encourages best practice when it comes to energy efficiency, and provide monthly energy reports and an overall annual report to an EU secretariat.

The Code of Conduct states that: “Electricity consumed in data centres, including enterprise servers, ICT equipment, cooling equipment and power equipment, is expected to contribute substantially to the electricity consumed in the European Union (EU) commercial sector in the near future.”

The man behind the plan is Paolo Bertoldi, who is part of the EU’s Renewable Energies Unit and is taking responsibility for ensuring greener datacentres. Bertoldi has, for the past two years, been behind a working group that has held a series of meetings with European government representatives, including DEFRA, and various manufacturers with a business interest, including Intel and Sun Microsystems.

Bertoldi has worked hard to encourage the support of multiple public sector bodies across the length and breadth of Europe, by taking on jobs that they can’t do, or don’t want to do, and getting them involved without over-committing them – an impressive feat for the generally slow-moving EU.

The EU press release said that, “Historically, data centres have been designed with large tolerances for operational and capacity changes, including possible future expansion. Many today use design practices that are woefully outdated. These factors lead to power consumption inefficiencies. In most cases only a small fraction of the grid power consumed by the data centre actually gets to the IT systems. Most enterprise data centres today run significant quantities of redundant power and cooling systems typically to provide higher levels of reliability. Additionally IT systems are frequently run at a low average utilization.”
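
The point about only a small fraction of grid power reaching the IT systems is usually expressed as power usage effectiveness (PUE): total facility power divided by the power delivered to the IT kit. The sketch below is purely illustrative; the numbers are invented and are not drawn from the Code of Conduct itself:

# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# Illustrative numbers only, not figures from the EU Code of Conduct.
total_facility_kw = 1000  # grid power drawn by the whole data centre
it_load_kw = 500          # power actually reaching servers, storage and network gear

pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # 2.00 here: half the grid power never reaches the IT systems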

The Code of Conduct has been put in place to reduce European data centre power consumption, and has the support of the British government.

Companies and public sector bodies will sign up to the CoC, agreeing to an initial level of energy consumption, monitoring it month by month, and undertaking best practices such as virtualising servers, using cold-aisle cooling and not mixing hot and cold air in the data centre.

This new initiative is an encouraging step toward a greener Europe; it will be interesting to see how data centres change their current practices to adapt to the new rules over the coming months.

Michael Dell talks about the next Generation of Supercomputers

Yesterday, in Texas, the annual Supercomputing 2008 trade show began with a speech from Michael Dell, chief executive of computer company Dell. Dell has often dabbled in high performance computing (HPC), and from his keynote speech it would appear that Mr Dell wants to delve further into the HPC area. However, his speech was reportedly more like a hardcore sales pitch.

Dell made a few points in his speech that described what the next generation of supercomputing could turn out like.

He began by showing how far we have to go to design a power-efficient, easily programmable, redundant supercomputer. He said that the human brain has around 100 billion neurons, each with 1,000 or so synapses, adding up to near 20 petaflops of “raw computing performance”; a machine like that, if it were to be built today (which it can’t be), would break through the petaflops barrier and would cost an estimated $3.6bn (£2.4bn). He rounded off his comparison by saying: “The human brain uses about 20 watts of energy, so we evidently still have a long way to go.”

Someone from the crowd asked Dell what it would take to simulate the human brain, and Dell shrugged off the question, saying that he never suggested simulating the brain; what he meant was to demonstrate that HPC clusters are not so great when compared to mother nature.

“For me, the dream and the excitement about computers was not to replace the human brain,” Dell said. However, he believes that more can be done to change the way people interact with machines: “It is a fairly rudimentary process today. We type keys and something happens. I think there is an enormous opportunity to improve the man-machine interface.”

Dell spoke on the “three waves of supercomputing”, which according to him included specialised vector machines and proprietary operating systems in the 70s, followed by microprocessors during the 80s and 90s, and, toward the end of the 90s, standards-based parallel clusters. He says that the fourth wave will “deliver higher density machines”, most likely in blade or other custom form factors with pools of shared storage, and a focus on easing the constant hassle of running and administering them. Dell went on to show figures from Tabor Research, a supercomputing market researcher, showing that around 70 percent of HPC budgets are eaten up by staff and administrative costs.

Dell wants to encourage cheap HPC setups, just as it did with its servers and home PCs in the past, and would like to see more businesses and developing countries have them.

He said that five years ago a teraflop of computing cost around $1m, but these days you can get about 25 times as much for the same price. The density has not gone up as much as the price has dropped, but it’s still pretty impressive. Just three years ago, Dell said, a 2,500-core cluster with 1,250 servers using 3GHz x64 processors would deliver nearly 9.8 teraflops. These days a 1,240-core machine using just 155 servers will deliver 10.7 teraflops – a reduction in servers of nearly 90 percent.
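
Running the numbers Dell quoted gives a feel for the change; this is just a quick sketch that takes the core counts, server counts and teraflops figures above at face value:

# Cluster figures as quoted in Dell's keynote.
old_cluster = {"cores": 2500, "servers": 1250, "tflops": 9.8}   # roughly three years ago
new_cluster = {"cores": 1240, "servers": 155,  "tflops": 10.7}  # today

server_reduction = 1 - new_cluster["servers"] / old_cluster["servers"]
print(f"Server count reduced by {server_reduction:.0%}")  # about 88 percent

old_density = old_cluster["tflops"] / old_cluster["servers"]
new_density = new_cluster["tflops"] / new_cluster["servers"]
print(f"Teraflops per server: {old_density:.4f} then, {new_density:.3f} now")
# Roughly 0.0078 versus 0.069 teraflops per server, close to a nine-fold jump in density.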

High Performance and energy efficient? Intel Core i7 has arrived

Intel’s shiny new high-end desktop processor range has hit the shelves. The new Core i7 range, previously codenamed “Nehalem”, is based on a clever new micro-architecture that is designed to deliver high performance while being much more energy efficient.

Ever on the ball, tech magazine TechNewsWorld spoke with Pund-IT analyst Charles King about the new chips: “Typically, when Intel makes an update as significant as Core i7, the first adopters are typically in the high-end desktop,” he said.

“Given i7’s video graphics capabilities, it sounds like it’ll be a good choice for a high-end gaming PC,” he added.

King noted that Intel generally scales down new chips for the notebook market, and scales them up for the server crowd.

At the minute, you can get the new chip in Dell’s Studio XPS desktops featuring Core i7 processors running at 2.66 and 2.93GHz, while the XPS 730x gaming box can support Intel’s 3.2GHz Extreme Edition processor.

The Core i7 brings some significant improvements over previous chips. For example, the i7 marks the first time that Intel has moved the system memory controller onto the CPU.

“This improves system performance and eliminates the traditional ‘north bridge’ that has been a standard part of Intel-based PCs and servers for over two decades. AMD made this same move in 2003, and others, including Sun and IBM, did it even earlier,” said Nathan Brookwood, principal analyst for Insight 64.

“Better late than never,” he added.

The Core i7 also brings back hyper-threading, Brookwood noted; the technology was last seen on the Pentium 4 and wasn’t present in the earlier Core designs.

“This improves performance by 15 to 20 percent for multi-threaded applications. Since the chip has four cores, and each core has two threads, a single chip looks like eight logical processors to Windows or Linux,” Brookwood noted.
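
To see what that looks like from the software side, a one-line check (a sketch only; it reports whatever machine it happens to run on, and assumes Python is to hand) asks the operating system how many logical processors it can schedule work onto:

import os

# On a quad-core, hyper-threaded chip such as the Core i7, the operating system
# exposes eight logical processors; os.cpu_count() reports that logical count.
print(f"Logical processors visible to the OS: {os.cpu_count()}")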

The i7 processors feature eight processing threads, 8MB of Intel Smart Cache, and three channels of DDR3 1066MHz memory, which Intel says improves performance for data-intensive applications.

Intel noted that the i7 processors’ integrated memory controller handles the data flow between main memory and the execution engine, which results in faster access to memory and lower latency for requests.

The company boasts that the i7 Extreme Edition 3.2GHz processor is the “highest performing processor on the planet”.

Brookwood said that scaled-down versions for desktops and notebooks won’t become available until the third quarter of 2009, but you only have to wait until the first quarter of the year for two-way server versions.

“Versions for four-way servers won’t show up until late ’09,” he said.

“This gives AMD’s Shanghai – announced last week – some breathing room in two- and four-way servers, and it gives AMD lots of room in the mainstream, sub-(US)$1,200 range, and value segments of the market,” he added.

Fading Fast: Are Sun Microsystems doomed?

Struggling Sun Microsystems, the server and software innovator, is to make around 6,000 workers redundant next year – a massive 18 percent of the company’s workforce.

Sun’s announcement has come amid a crumbling economy with the resulting impact on the manufacturing and financial industries spreading to the technology industry.

Sun is to re-shuffle the software side of its business and plans to merge other departments together. As part of the tightening of belts, Rich Green, the company’s vice president for software, is set to leave the position he has held since 2006.

Sun has said that the redundancies will save the company around $800 million a year, starting from the next quarter; however, it costs a lot of money to lay off so many staff, and Sun can expect to incur costs of around $600 million to do so.

Andrew Reichman, a senior analyst at Forrester Research, said: “it’s expensive to lay people off”. He says it’s unfortunate for the company to “lay off that number of people”, and thinks that “the severance packages will be for half a year to a year, just based on those numbers”.

Reichman noted that IT is suffering tough times at present, with Sun having a particularly rough time.

Sun reported massive declines compared to a year ago. In the quarter ending June 30th, the company reported a 74 percent drop in earnings, from $329 million to $88 million, while quarterly revenue fell 1.4 percent to $3.78 billion, down from $3.83 billion in the same period in 2007. Sun reportedly earned $403 million in the fiscal year ending June 30th, down by almost 15 percent from the previous year.

Sun has struggled to compete with large server and software makers like IBM or Dell, and as Reichman said: “Sun has always competed based on performance – not cost.”

“There’s a lot of competition in the hardware space from vendors who can produce higher volumes at lower costs,” he added.

When it comes to software, Linux server-software maker Red Hat is Sun’s biggest competitor.

“The bulk of the market is moving away from Unix and more towards Linux platforms,” Reichman said. “Sun made Solaris open source, and I think it is still figuring out the best way to demonstrate the value of open source Solaris compared to other options. Sun’s struggling to reinvent itself in a changing economy.”

Reichman believes that Sun has the better products, but lacks the know-how to bring those technologies to the marketplace effectively. However, rather than see it form a hardcore sales team, Reichman believes that Sun shouldn’t give up its tradition of innovation.

“The risk with so many layoffs, of course, is they could lose some innovation by letting go of this number of people at a time when they need it the most,” Reichman warned. “This is a key inflection point for Sun, and they need their best and brightest to get back on track.”

Like many other hardware and software companies, Sun has been affected by the credit crisis. The financial sector is haemorrhaging money left, right and centre, and those firms are the customers of Silicon Valley.

“This isn’t a surprise, but it’s definitely a big deal,” Reichman said. “So far, Silicon Valley has not been hit as hard as Wall Street, but the two are very tightly linked. Companies that are related to banks and financial services are struggling. This is the biggest shockwave to hit the technology sector.”

IBM starts its Winter Sales

With the economy on a perpetual downslide at the moment, it’s fair to say that companies aren’t too keen on opening up their cheque books and signing away a heap of cash. However, IBM thinks it can make businesses spend a little extra this year.

The computer giant has devised a few deals involving its System p and Power Systems products, designed to make customers spend a little to ensure IBM makes it through the cold, harsh winter.

On the Power Systems side, IBM is well aware that customers who didn’t change over to its Power6-based Power Systems machines back in July 2007 are unlikely to change over now unless they have a desperate need.

The changeover has been ignored by customers because of the application conversion process required when moving to i 6.1; since the earlier i5/OS V5R4M5 sub-release is supported on Power6 iron, customers can put the move off, though they need the conversion to unlock all the features of the Power6 chip.

IBM has got it sussed really. They know that eventually customers who have System p and Power Systems iron installed will have to buy some capacity. These servers support “capacity on demand” processor core activation, and capacity is not cheap – IBM charges quite a bit for old Power5 and Power5+ cores as well as Power6 cores.

The capacity upgrade deal covers System p 570, 590 and 595 systems, which span a range from 2 to 64 Power5 or Power5+ processor cores across the three models, and includes the Power Systems 570 machine, which is based on Power6 processors.

IBM is insisting customers have processor cards installed by November 7th to allow them to take part in the deal, and they have until December 19th to place an order to activate the latent processing capacity.

On the Power6-based 570 box, IBM is currently offering a 45 percent discount off the cost of activation of 3.5, 4.2 and 4.7GHz Power6 cores.

If by now you’re wondering how much all this costs, IBM is selling a processor card with a dual-core Power6 chip, running at 4.7GHz, for $11,500, and activating each of the cores will cost you a further $23,000. So when you consider the 45 percent discount, it’s not too bad at all.
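
To put that discount in context, here’s a rough worked example (a sketch using only the list prices quoted above, and assuming both cores on the card are activated; the discount applies to activation only, as per the offer):

# List prices quoted above for a dual-core 4.7GHz Power6 processor card.
card_price = 11_500           # the processor card itself
activation_per_core = 23_000  # charge to switch on each core
cores = 2
discount = 0.45               # IBM's promotional discount on core activation

list_total = card_price + cores * activation_per_core
discounted_total = card_price + cores * activation_per_core * (1 - discount)

print(f"List price:        ${list_total:,}")             # $57,500
print(f"With 45% discount: ${discounted_total:,.0f}")    # $36,800
print(f"Saving:            ${list_total - discounted_total:,.0f}")  # $20,700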

For older Power5 and Power5+ servers, IBM’s pricing is a little less per core, making it more appealing for customers to move to the new system. IBM wants its customers to upgrade now rather than pushing it further back, which is why it is offering a massive 60 percent discount on System p 570, 590 and 595 machines with 1.65, 1.9, 2.1 or 2.3GHz Power5 cores and 1.9 or 2.2GHz Power5+ cores sitting idle. To get the discount on the p 570 machine, customers have to activate two cores at a time, but on the 590 and 595 machines, they can activate them one at a time.