InfoWorld's 2010 Green 15 Awards: Green-tech projects coupled with innovation and collaboration yield bountiful rewards
This year's crop of Green 15 winners demonstrates how organizations of all sizes are finding innovative ways to use information technology to achieve critical and often complementary environmental and business objectives. Increasingly, companies are moving beyond out-of-the-box products and siloed approaches to making IT itself more energy efficient. Instead, they're leveraging technology as part of a broader, holistic effort to make their operations greener overall.
Telecom equipment manufacturer Ericsson, for example, has adopted a complex asset management system that the company and its global partners use to deliver parts, products, and repair services to customers in the most efficient way possible. The project promotes environmental objectives such as reuse, fuel efficiency, and material conservation -- and it saves Ericsson cash while boosting customer satisfaction.
Meanwhile, accounting company KPMG is finding ways to use IT to ingrain sustainable practices in day-to-day operations. For example, the company added a Green Travel Advisor to its internal portal that urges employees to use telepresence over air travel whenever practical. When it's not, the advisor directs them to environmentally responsible hotels.
Companies are also continuing to devise ways to enhance traditional green tech projects. In the past, data center greening projects tended to rely heavily on rolling out server virtualization, creating hot and cold aisles, and adjusting temperature and airflow. Green 15 winners including Dell and Intel have taken green data center initiatives a step further, employing homegrown techniques to drill down into how efficiently, or inefficiently, resources are being used and whether they're required at all.
Out of necessity or optimism, more organizations are thinking different in the name of thinking green. Syracuse University, for example, has done what few traditional data centers are willing to try: employing DC power in its new data center. The government of Andhra Pradesh in India embraced virtual desktops at 5,000 schools because it lacked the infrastructure for PCs. And a consortium of universities in Canada transformed a circular cement silo that formerly housed a particle accelerator into an innovatively designed cooling enclosure for a new supercomputer.
Since InfoWorld launched the Green 15 in 2008, project leaders have reiterated a shared sentiment time and again: Technology goes only so far in helping an organization achieve environmental objectives. Organizations in which executives work to promote green practices, engage employees, and drive collaboration and knowledge sharing among departments and business units will enjoy the greatest return on their green IT investments.
InfoWorld's 2010 Green 15 Award winners, in alphabetical order:
Aflac pushes for paperless practices, yields productivity gains
Embracing electronic documents and print-management technology, the insurer sees faster policy processing and lower bills
Stacks of unclaimed printouts are a common sight at organizations across the globe. At best, those pages get tossed into the nearest recycling bin, but even so, they represent a significant waste of natural resources and hard-earned company cash. As part of a companywide greening effort, insurance company Aflac instituted various technologies and policies to put the kibosh on print waste, resulting in a smaller environmental footprint, commendable cost reductions, and a surge in efficiency.
Starting last year, Aflac opted to invest only in energy-efficient network printers and has reduced printer count by 34 percent. Further, the IT department has set machines to default to two-sided printing, and the company embraced print management technology from Secureprint that requires users to confirm a print job at the printer before it will execute. Print jobs not retrieved within 24 hours are purged from the queue.
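The hold-and-purge behavior Aflac describes is simple to model. Here is a generic sketch of a pull-printing queue with a 24-hour purge; it is an illustration of the workflow, not Secureprint's actual implementation, and all names are invented:

```python
import time

PURGE_AFTER_SECONDS = 24 * 60 * 60  # jobs not confirmed within 24 hours are dropped

class HeldPrintQueue:
    """Generic pull-printing model: a job waits until a user confirms it
    at the device; stale jobs are purged instead of printed."""
    def __init__(self):
        self._jobs = {}  # job_id -> submission timestamp

    def submit(self, job_id, now=None):
        self._jobs[job_id] = time.time() if now is None else now

    def confirm(self, job_id):
        """User confirms at the printer; the job leaves the queue to print."""
        return self._jobs.pop(job_id, None) is not None

    def purge_stale(self, now=None):
        """Drop every job older than the purge window; return how many."""
        now = time.time() if now is None else now
        stale = [j for j, t in self._jobs.items() if now - t > PURGE_AFTER_SECONDS]
        for j in stale:
            del self._jobs[j]
        return len(stale)
```

The green payoff is in `purge_stale`: a forgotten print job consumes queue space rather than paper and toner.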
Aflac's paper-reduction efforts didn't end at the printer, either. The company's IT department has developed an online system called Smart App Next Generation (SNG) for enrolling and accessing policies electronically. "From a business perspective, this has helped Aflac reduce the need to handle the large amounts of paperwork usually associated with writing insurance policies. We are able to process the applications faster because there is no shuffling of paper between the agents and Aflac," said Pat Rayl, 2nd vice president of technology services at Aflac.
Agents have an added incentive to employ the environmentally friendlier approach to handling policies, Rayl noted: "Applications are approved faster through SNG. Therefore, the agents' commissions are paid faster."
The electronic delivery of policies, coupled with the electronic delivery of agents' statements, billing invoices, and corporate reports, has enabled Aflac to achieve an average of only 1.84 printed sheets of paper per active policy. There's also the ripple effect of fewer stacks of papers being sent to and fro via time-consuming, eco-taxing snail mail.
Aflac's paper-reduction initiatives are just one facet of the company's overall sustainability push. Other endeavors have included a data center makeover, encompassing server reduction through virtualization and improved cooling efficiency and airflow via techniques such as blanking panels, hot and cold aisles, and Koldlok sealing around open floor areas. Additionally, the IT department has developed and promoted Aflac's Meeting Place, which features a suite of collaboration tools, including discussion groups, blogs, wikis, shared-document management, videoconferencing, and instant messaging. By promoting this approach to collaboration among its various offices and corporate campuses, the company saw a 43 percent increase in online meetings in 2009.
Rayl attributed the success of Aflac's multiple green endeavors to two key factors: sponsorship, promotion, and communication from upper management, as well as making the processes easy for employees to embrace. In terms of communication, Aflac's IT department maintains a Green IT page on Aflac Workplace that keeps company employees up to date on key metrics such as paper usage and server efficiency. "Probably the biggest lesson we have learned is that simply implementing green initiatives is not enough. Employees must be continually educated on the benefits of these programs and how they can contribute to making an environmental impact at work and at home," Rayl said.
Andhra Pradesh overcomes resource limitations with virtual desktops
Watt-sipping virtual PCs give students in India 5,000 school computer labs and a new start
For some organizations, embracing sustainability is something of a luxury. They have an ample supply of electricity, for example, and plenty of cash on hand, so they can gradually deploy waste-reducing projects that pay for themselves over time. For other organizations, however, sustainable efforts are driven by real, immediate needs.
Such was the case for the government of Andhra Pradesh in India, which needed to supply 1.8 million students across 5,000 public schools with access to state-of-the-art computing facilities. Standard desktop PCs wouldn't cut it, due to limited funding and limited electricity resources -- but virtual PCs proved a perfect alternative.
The government chose a virtual desktop solution from NComputing. Each student is equipped with a monitor, keyboard, mouse, and an NComputing X-series device. (The devices come in kits that also include vSpace virtualization software and a PCI card.) The devices are connected to individual desktop computers, at a ratio of around 4 to 1, which perform the bulk of the processing work for the connected systems. In all, 40,000 NComputing devices were deployed, along with 10,000 full PCs provided by various OEMs, including HP and Acer.
The eco-magic of the NComputing access devices is that they require just a single watt of power to run, compared with the 65 to 250 watts a typical desktop draws. Thus, the Andhra Pradesh government uses 90 percent less electricity than it would to power labs running traditional PCs. The devices' watt-sipping nature was particularly significant given the limited electrical infrastructure in the region.
"Certain locations where the installations occurred had very weak or no electricity infrastructure. For example, certain areas only received a few hours of electricity in the day. With a project of this scale, big generators would have been needed to support the setup of each lab in all of the 5,000 schools," said Stephen Dukker, CEO at NComputing. "However, because of the low electricity consumption, the Andhra Pradesh government purchased smaller generators that are generally used in homes."
In terms of cost, the government estimates that taking the virtual desktop route conserved a whopping $20 million when factoring in savings on larger generators, fuel, electricity, and the like. From a green perspective, the virtual devices are also eco-friendlier than traditional PCs in that they last longer and contain fewer materials.
Thanks to the power of green technology, the students of Andhra Pradesh will be far better prepared for the future. "Earlier students did not even have an idea of how to switch on and off the computer," said Bhavani, a teacher at Zila Parishad School at Medak, India. "After four months, they are operating [the machines] themselves."
CLUMEQ transforms rundown particle accelerator into high-efficiency cooling enclosure
HPC consortium discovers circular shape of concrete structure yields significant cooling efficiencies
The Université Laval in Quebec, Canada, had two problems. First, its campus was home to a run-down particle accelerator, constructed in the 1960s, that needed to be decommissioned. Second, the university and 11 of its fellow institutions, members of an HPC consortium, needed a place to construct a state-of-the-art supercomputer. With a little ingenuity -- and a devotion to embracing sustainable practices -- the group was able to transform the 36-foot-wide, 65-foot-high circular concrete silo into an effective cooling enclosure for its supercomputer.
Transforming the silo into a home for a new data center presented some unusual challenges for CLUMEQ (Consortium Laval, Université du Québec, McGill and Eastern Quebec). The final design concept comprised a topology where three levels of server racks are arranged along a circle, creating an inner hot-air cylindrical core and an outer ring-shaped cold-air plenum. The large floor cross-section of the cold-side plenum results in very low air velocity, almost no turbulence (thanks to the absence of corners), and thus uniform temperature and pressure, according to Marc Parizeau, professor at Université Laval and deputy director of CLUMEQ.
"Having a single annular-shaped cold aisle with a large cross-section and thus very low air velocity is probably close to ideal if one wants to air cool today's high-power density racks without using rear-door heat exchangers or other technologies that require bringing water near the servers," Parizeau said.
The main cooling system, located in the basement, pulls the hot air down from the center using energy-efficient variable-drive fans. The hot air is cooled by forcing it through highly efficient, custom-designed walls of coils connected to the campus chilled-water loop. Designers considered employing liquid cooling, but "simulations demonstrated that [benefits would be] marginal compared with our 120,000 CFM blowing capacity, and not worth the risk -- and costs -- of putting water above the servers," Parizeau said.
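Parizeau's point about low air velocity follows directly from the geometry: velocity is volumetric flow divided by cross-sectional area, and an annular plenum spanning most of a 36-foot silo is a very large duct. A back-of-envelope check; the 120,000 CFM figure and silo diameter come from the article, while the inner diameter of the cold-air ring is an assumption for illustration:

```python
import math

# Back-of-envelope plenum air velocity: flow divided by cross-sectional area.
flow_cfm = 120_000     # blowing capacity cited by Parizeau
outer_d_ft = 36.0      # silo diameter, per the article
inner_d_ft = 24.0      # assumed inner boundary of the annular cold plenum

area_sqft = math.pi / 4 * (outer_d_ft**2 - inner_d_ft**2)  # annulus area
velocity_fpm = flow_cfm / area_sqft
print(f"{velocity_fpm:.0f} ft/min")  # ~212 ft/min: slow, near-uniform flow
```

For comparison, a conventional raised-floor cold aisle pushes the same class of airflow through a far smaller cross-section, producing the higher velocities and turbulence the silo design avoids.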
Heat waste generated by the supercomputer is put to good reuse. During eight months of the year, it's transferred from the chilled water return to the campus hot-water loop to provide heating for the school, thus reducing energy bills.
The supercomputer itself was built by Sun using the company's Constellation blade system. It's composed of 7,680 Intel Nehalem cores with 24TB of RAM and 1 petabyte of high-performance, high-availability parallel storage. The server hardware itself is "quite energy efficient, but not significantly more than the competition," Parizeau said.
Beyond enjoying the benefits of using an existing structure to house its supercomputer, the Université Laval and CLUMEQ estimate the silo design results in annual savings of more than 1.5 million kWh, compared with a traditional data center. Transforming the silo into a data center likely cost more than going the conventional square-build, raised-floor route, Parizeau said, "but this does not take into account the higher efficiency of the silo design, nor the fact that we recycled a building that was almost impossible to reuse for anything else. It may have cost a little more, but we got more for the money -- and there were no budget overruns."
Dell spurs efficiency by pulling the plug on unnecessary apps
Retiring 7,000 useless or redundant apps contributes to huge efficiency gains in Dell's data centers
Dell has taken its efficiency initiative a step beyond those of most other data center operators: Beyond consolidating physical servers through virtualization, it has pulled the plug on thousands of others outright.
One of the key strategies in Dell's efforts to slash energy waste in its data center was to identify precisely which of the 10,000 support applications were necessary. "Our first real 'aha' moment was to challenge the assumption of the phrase 'keeping the lights on' itself, which by definition implies an untouchable set of applications that you must keep running at all costs," said Robin Johnson, CIO at Dell. "We decided instead to look at that part of the business as an opportunity to turn the lights off. Rather than viewing it as the 'must run' portion of IT, we instead became maniacally focused on what could be eliminated from the fixed-cost side of IT."
The first step was to change the way IT billed departments for computer resources. Previously, departments paid a proportionate share of the total IT budget based on their percentage of overall company revenue. Under that model, there was no incentive for departments to give much thought to whether they were running more applications than they needed. Under the new model, departments were charged for their actual usage "but with a twist," said Johnson. "Rather than charging for actual usage using some complex formula of compute capacity consumed, we simply took the entire cost of the data centers and application infrastructure and divided by the total number of applications."
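Johnson's "twist" boils down to a flat per-application rate rather than metered usage. A minimal sketch of the arithmetic; the dollar figures are hypothetical:

```python
def per_app_rate(total_infrastructure_cost, total_app_count):
    """Dell-style chargeback: the total data center and application
    infrastructure cost split evenly across every supported application."""
    return total_infrastructure_cost / total_app_count

def department_bill(dept_app_count, total_cost, total_apps):
    """A department pays the flat rate for each app it keeps alive."""
    return dept_app_count * per_app_rate(total_cost, total_apps)

# Hypothetical: $100M of annual infrastructure cost across 10,000 apps
# means every app a department runs costs it $10,000 a year -- a direct
# incentive to retire apps it doesn't actually need.
print(department_bill(120, 100_000_000, 10_000))  # 1200000.0
```

The design choice is deliberate crudeness: a per-app flat rate is easy to audit and makes the marginal cost of one more application visible, which metered compute-capacity formulas tend to obscure.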
This step helped prepare company leaders for the next stage of the project. Dell's IT department conducted a thorough analysis of the various apps it was supporting and discovered thousands that had no identified owner or appeared to have little or no utilization. The servers hosting those apps were simply unplugged from the network in controlled batches. Then IT waited for trouble tickets to arrive. "Not surprisingly, for each group of 500 servers that was taken off the network, at most two or three trouble tickets were raised," Johnson said.
This entire process helped Dell eliminate around 60 percent of the 7,000 total apps it ended up removing. The remainder came from identifying niche apps that could be replaced by an enterprise-level solution, as well as weeding out and eliminating duplicate apps.
When these efforts were all done, Dell managed to reduce its number of supported apps from 10,000 to 3,000, which freed up a significant amount of data center resources. These efforts coupled with virtualization have allowed the company to remove 4,000 servers over the past year. Meanwhile, server utilization levels have doubled to 40 percent -- a number Dell is continuing to improve.
Moreover, the company has reaped even more energy savings by upgrading to high-efficiency servers and reorganizing the way it does power and cooling -- including using outside air 150 days of the year in sweltering Austin, Texas. All in all, Dell reports that through its array of data center efficiency efforts, it has increased overall computing capacity by 270 percent, reduced energy consumption by 30 percent, and saved over $50 million in assorted costs. Retiring and consolidating thousands of servers and apps has also simplified IT administration tasks, including management, accounting, and licensing.
EPA's Energy Star for servers and data centers illuminates sustainable paths
New specifications set a much-needed bar for energy efficiency in products and operations
Over the past couple of years, an increasing number of data center operators and hardware manufacturers have proudly proclaimed that the facilities they run or the hardware they produce are oh so much greener than the competition's. But such proclamations can leave observers wondering what that really means, given that standards for weighing such claims have been lacking.
That's changed in the past year as the Environmental Protection Agency rolled out not one but two brand-new Energy Star specifications, one for servers and one for data centers, that set a bar for assessing and comparing the energy efficiency of individual machines or entire facilities. While not perfect, these two specs reflect some heavy-duty data gathering and feedback solicitation from stakeholders. More important, these specs mark a couple of critical steps forward for IT sustainability in the United States and beyond.
Energy Star for Servers took well over a year to develop, with the EPA collecting comments from vendors, environmental groups, and other concerned parties. The end result was a standard applicable to machines with between one and four sockets and at least one hard drive. Servers that manage to burn the fewest watts while idling are eligible for the Energy Star designation. Power wasted in idle mode is indeed significant, particularly given that servers are notoriously underutilized. Additionally, compliant servers must be capable of measuring their own real-time power use, processor utilization, and air temperature -- all critical data for helping operators assess the overall efficiency of their facilities.
Devising the first edition of the Energy Star for Data Centers spec entailed gathering and analyzing a wealth of data center measurements, amassed over extended periods of time from an array of facilities. Through careful statistical analysis, and again drawing on feedback from stakeholders, the EPA determined what criteria do and do not account for differences in energy efficiency among data centers. The end result was an Energy Star standard based on PUE (Power Usage Effectiveness), which is the ratio of overall data center power consumption to the power consumption of IT equipment.
Energy Star for Data Centers compares a facility's actual PUE against its predicted PUE, which is effectively what the average PUE would be among similar facilities. Data centers that achieve a PUE well below the predicted level (once verified by a third party) can claim Energy Star status. A finalized version of the spec will be released in Portfolio Manager, the EPA's online benchmarking tool, later this year.
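PUE itself is a simple ratio, even though the EPA's predicted-PUE model behind the comparison is more involved. A sketch of the basic metric; the comparison margin here is illustrative, not the EPA's actual criterion:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power draw divided by
    the power delivered to IT equipment. 1.0 is the theoretical floor;
    lower is better."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def beats_prediction(actual_pue, predicted_pue, margin=0.95):
    """Illustrative check: does the facility come in well under the
    predicted PUE for comparable facilities? The EPA's real threshold
    and third-party verification process are more involved."""
    return actual_pue <= predicted_pue * margin

# A facility drawing 1,500 kW in total to run 1,000 kW of IT gear:
print(pue(1500, 1000))  # 1.5 -- every watt of IT work costs half a watt of overhead
```

Note what the ratio hides, as the article points out below: two facilities with identical PUE can differ wildly in how much useful work their IT load actually performs.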
Both sets of specs need fine-tuning. Energy Star for Servers, for example, doesn't consider a server's efficiency when it's doing actual work, nor does it take into account cores per processor. Energy Star for Data Centers is based heavily on PUE, which, though useful, hardly paints a complete picture of power usage. Further, the standard doesn't consider differences that can affect overall PUE, such as tier level or what sort of work a data center is doing. The EPA, however, readily recognizes that these standards (like other Energy Star standards) are a work in progress. The organization is already in the process of developing Version 2.0 of Energy Star for Servers and is seeking feedback from stakeholders.
In the meantime, server vendors and data center operators now have useful maps to guide them down the uncertain path toward sustainability.
Ericsson drives a greener supply chain
Web-based asset management system promotes increases in efficiency and reuse
Telecom equipment company Ericsson faced a problem not uncommon among manufacturing companies: Its supply chain was fraught with inefficiencies. The company had limited visibility into its own far-reaching inventory of products and parts, and for competitive reasons, repair providers were reluctant to share their inventory data. Thus, in order to ensure it could keep up with customer demands, the company had to maintain excess stock, which can prove both costly and wasteful. Moreover, the company determined that it was spending more time and resources than necessary to get inventory to customers -- not to mention the waste that came from disposing of excess wares that had become obsolete.
In an effort to make its supply chain more efficient and environmentally sound, Ericsson last year deployed a network asset management system from Trade Wings called Re:source Visibility. Among other capabilities, the system provides Ericsson and its 2,000 global partners with a consolidated, up-to-date view of the inventories at repair centers and service channel operation centers, as well as from new material order teams.
The greater visibility into inventory lets Ericsson and partners determine whether the products or parts a customer needs are available at a nearby repair shop, thus saving the time and expense of ordering and shipping the goods from afar. "An important piece in the success of our initiative is the ability to look beyond the normal boundaries of internal stock levels to first consider equipment that's part of our reuse and WEEE [the European Community's Waste Electrical and Electronic Equipment directive] material flows, and then if necessary, search the secondary market," said Mikael Thoren, global planning manager at Ericsson.
Additionally, Ericsson can better foresee potential shortages of in-demand wares, thus helping to reduce costlier small-production runs to fulfill customer requirements.
From a logistics perspective, Ericsson can use the system to devise efficient transportation routes, taking into account distance, fuel, and emissions when, for example, moving inventory from one location to another. "The system provides both availability of equipment and the distances from point of need, which has provided us with the ability to factor fuel consumption into the decision-making process," said Thoren. "As this initiative continues to evolve, we're working to broaden the number of variables (e.g., weight, transport type, CO2 emissions) available to us in order to maximize the environmental benefits of our reuse optimization strategy."
According to Thoren, the program also supports the company's material take-back service, a legal requirement under the WEEE directive, where customers can request that Ericsson pick up retired goods for end-of-life management. "In 2009, we received approximately 500 requests globally for WEEE collection, which amounted to about 7,045 tons," Thoren said. "Our recovery rate for treated equipment is more than 95 percent; the WEEE directive's requirement is 75 percent."
All of these benefits add up to faster, more efficient customer service, lower operating and energy costs, less electronic waste, and fewer carbon emissions. All told, the company has saved approximately $10 million and seen more than a 20 percent decrease in equipment purchases. Also, repair volumes in some repair centers have dropped by nearly 80 percent.
Intel pinpoints thousands of unproductive servers
Using a homegrown application for measuring server utilization, the chipmaker is able to reassign or retire 5,000 machines
Imagine running a company with a staff of 2,000 full-time employees who spent around 80 percent of their time doing nothing beyond waiting for some work to do. Odds are, you'd make some staffing changes pretty darn quickly to address such egregious waste. Yet in data centers around the world, servers are permitted to run 24/7, wasting power and adding to organizations' carbon footprints while operating at average utilization levels of 20 percent, 10 percent, or even less.
There are several reasons data center operators tolerate this level of waste. One is that companies lack the necessary tools to gain full visibility into the hardware they're running, such as how much work a machine is doing or whether it's powering a business-critical application. Thus, it's generally easier (and safer) to simply add new racks of servers when computing demands increase, rather than performing a time-consuming inventory of all the machines and pulling the plug on systems that appear to be performing unnecessary work.
Intel last year developed an innovative application for determining which servers were earning their keep and which ones were slacking off. Called iSHARP (Interactive System Health and Resource Productivity), the application is capable of accurately measuring and tracking utilization on the company's large distributed pools of computers. These particular machines are part of an interactive environment, used to process design and development simulations and related tasks for microprocessors.
"This was in effect an effort to drive down the cost of capital expenditures within the batch and interactive services and the ever-growing operational expenses, including data center power, cooling, and space," said Richard Meneely, Interactive Computing Product Owner for Intel's Engineering Computing group. "We would prefer to not add the expense of building and operating any additional data centers."
In developing iSHARP, Intel first had to define algorithms to correctly identify underutilized machines. Specifically, the app measures CPU and memory utilization on a frequent basis for each system within the interactive computing environment. Those measurements are written to a back-end database for reporting and analysis. The algorithms take into account the individual system's architecture, hardware configuration, and category of application when determining thresholds for identifying underutilization.
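Intel hasn't published iSHARP's internals, but the approach it describes -- frequent CPU and memory samples evaluated against thresholds that vary by system class and workload -- might look roughly like this. The categories, threshold values, and names are invented for illustration:

```python
from statistics import mean

# Hypothetical sketch of iSHARP-style underutilization flagging.
# Per-category (cpu %, memory %) floors stand in for the architecture-,
# configuration-, and workload-aware thresholds the article describes.
THRESHOLDS = {
    "interactive": (15.0, 25.0),
    "batch":       (30.0, 40.0),
}

def is_underutilized(samples, app_category="interactive"):
    """samples: list of (cpu_pct, mem_pct) readings pulled from the
    back-end reporting database. A machine is flagged only when both
    averages fall below the floors for its application category."""
    cpu_floor, mem_floor = THRESHOLDS[app_category]
    avg_cpu = mean(s[0] for s in samples)
    avg_mem = mean(s[1] for s in samples)
    return avg_cpu < cpu_floor and avg_mem < mem_floor

# A server idling at ~5% CPU and ~10% memory would be flagged:
print(is_underutilized([(5, 10), (6, 12), (4, 9)]))  # True
```

The averaging over many samples matters: it is what let Intel show design teams the same evidence its IT engineers saw, rather than a single snapshot that a skeptical user could dismiss as unlucky timing.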
Beyond the challenge of developing this application itself, Intel's engineers also had to convince end-users that they could relinquish their computing resources without fear. "Design teams were often initially reluctant to give up resources they already had and believed doing so would impact their productivity. iSHARP allowed us to communicate the same information our IT engineers saw directly with the customer," said Meneely.
"We often offered to keep the targeted systems available offline for a period of time in case the customer determined they really did need it. After a period of time, confidence grew with our customers that we could accurately measure and remove systems without impacting their productivity," Meneely concluded.
The effort proved remarkably successful. In the span of about 12 months, Intel reduced the size of its targeted server pools from 14,000 machines to under 9,000, a reduction of 35 percent. Another 2,700 servers were reallocated to more productive purposes, and 2,300 were removed entirely. The removal of those machines helped Intel shed over 8 million kWh and save $645,000 on energy costs. From a business perspective, the project helped Intel boost the efficiency and capacity of its IT environment -- without hurting productivity.
Meneely said he is now involved in an effort called LEAF, which will build on the lessons learned from iSHARP to provide detailed data for each application within Intel's interactive environment. That, in turn, will help Intel further optimize its resource allocation.
Iron Mountain finds limestone a natural fit for data center efficiency
Geothermal and subterranean conditions of former limestone mine yield significant savings on cooling
Twenty-two stories below ground, deep within the secure confines of a former limestone mine in Pennsylvania