The California Energy Commission has passed energy-efficiency standards for computers and monitors in an effort to reduce power costs, making California the first state in the nation to adopt such rules. […]
“Apple has created a subsidiary to sell the excess electricity generated by its hundreds of megawatts of solar projects. The company, called Apple Energy LLC, filed a request with the Federal Energy Regulatory Commission to sell power on wholesale markets across the US.
The company has announced plans for 521 megawatts of solar projects globally. It’s using that clean energy to power all of its data centers, as well as most of its Apple Stores and corporate offices. In addition, it has other investments in hydroelectric, biogas, and geothermal power, and looks to purchase green energy off the grid when it can’t generate its own power. In all, Apple says it generates enough electricity to cover 93 percent of its energy usage worldwide.
But it’s possible that Apple is building generation capacity that exceeds its current needs in anticipation of future growth. In the meantime, selling the excess to power companies at wholesale rates — which they then resell to end customers — helps Apple recoup its costs.
It’s unlikely that Apple, which generated more than $233 billion in revenue in fiscal 2015, will turn power generation into a meaningful revenue stream — but it might as well get something out of the investment. The company issued $1.5 billion in green bonds earlier this year to finance its clean energy projects.” (2)
“My initial expectation was that we would see the return on investment in terms of driving down our energy costs, and we have seen that,” says Pittenger, to whom Smith reports. “What wasn’t part of my expectations was the gains we would have in operational efficiencies and our abilities to do repairs and maintenance much, much better and much, much smarter.”
>” […] Over those 125 buildings on the main Microsoft campus, there are more than 30,000 building systems components — assets, in Smith’s terms — and more than 2 million points where building systems ranging from HVAC to lighting to power monitoring are connected to sensors. In a 24-hour period, those systems produce half a billion data transactions. Each one is small, but when you’re talking about half a billion of something, all those 1s and 0s add up pretty quickly.
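A quick back-of-envelope check of the scale described above, using only the figures quoted in the excerpt:

```python
# Scale check on the quoted Microsoft campus figures: 2 million sensor
# points producing half a billion data transactions per 24 hours.
SENSOR_POINTS = 2_000_000
TRANSACTIONS_PER_DAY = 500_000_000
SECONDS_PER_DAY = 24 * 60 * 60

rate_per_second = TRANSACTIONS_PER_DAY / SECONDS_PER_DAY   # campus-wide rate
per_point_per_day = TRANSACTIONS_PER_DAY / SENSOR_POINTS   # per sensor point

print(f"{rate_per_second:,.0f} transactions/second campus-wide")   # ~5,787
print(f"{per_point_per_day:,.0f} transactions per point per day")  # 250
```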
But what’s important is being able to do something with those 1s and 0s, which Microsoft could not do until recently because of the mess of systems involved, says Jim Sinopoli, managing principal, Smart Buildings, who helped set up the software pilot program.
“You have an opportunity, if you’re building a new campus or a new building, to really start with a clean slate,” he says. “But you go in these existing buildings and you generally will come upon some unforeseen obstacles.”
The project turned out to be a relatively easy sell. First, Pittenger’s background is financial, so being able to show a strong ROI was a definite plus for Smith, because his boss understands exactly what that means when it comes time to ask for funding. Second, facilities management at Microsoft benefits from a company culture that considers every department to be a key player.
“(CEO) Steve Ballmer likes to say, ‘There are no support organizations at Microsoft,'” Pittenger says. “Everybody is fundamental to the core mission of the company. And we feel that way.”
After gaining approval, the first step was deciding how those obstacles would be overcome. Smith and his team began by writing out 195 requirements for the new way of operating and what their ultimate tool would be able to do. Then they proceeded to look around for an off-the-shelf solution that would be able to do all those things — and failed to find one. So, they built it.
More specifically, they worked with three vendors in a pilot program, encompassing 2.6 million square feet, to build an “analytics blanket” of fault detection algorithms that is layered on top of the different building management systems and reports back to the operations center. If Building 17 and Building 33 have different building management systems, those systems may not be able to talk to each other or provide data to a single reporting system in the operations center. But they can talk to the analytics blanket, which can take the information from every building and combine it into a single output in the operations center. It’s not a replacement for the BMS; instead, it’s adding on functionality that enhances the benefits of the existing BMS.”<
Apple plans to invest $2 billion to build a data center in Arizona at the site of its failed sapphire manufacturing facility, the state announced Monday.
“> […] The company plans to employ 150 full-time Apple staff at the Mesa, Arizona, facility, which will serve as a command center for its global network of data centers. In addition to the investment in the data center, Apple plans to build a solar farm capable of producing 70 megawatts of power to run the facility.
Apple’s investment is expected to create up to 500 construction jobs as well, the state said.
Apple said it expects to start construction in 2016 after GT Advanced Technologies Inc., the company’s sapphire manufacturing partner, clears out of the 1.3 million square foot site. The $2 billion investment is in addition to the $1 billion that Apple had earmarked to build scratch-resistant sapphire screens at the same location.
The investment comes a few months after GTAT filed for bankruptcy protection in October, citing problems with the Arizona facility. Shortly after its bankruptcy filing, GTAT said it planned to lay off more than 700 employees in Arizona.
In October 2013, Apple had agreed to build a sapphire factory in Mesa that GTAT was going to operate. At the time, Apple had said the new factory was going to create 2,000 jobs and move an important part of its supply chain to the U.S.
However, the project struggled to consistently produce sapphire at the quality Apple demanded. In the end, Apple did not use sapphire from the facility for its latest iPhones. After GTAT’s bankruptcy, Apple said it was seeking ways to preserve the jobs lost at the Mesa facility.
Arizona’s governor said the state did not provide additional financial incentives to keep Apple in the state. For the original investment in 2013, Arizona provided $10 million to Apple to sweeten the deal for the company.”<
America’s data centers are consuming — and wasting — a surprising amount of energy.
>”Our study shows that many small, mid-size, corporate and multi-tenant data centers still waste much of the energy they use. Many of the roughly 12 million U.S. servers spend most of their time doing little or no work, but still drawing significant power — up to 30 percent of servers are “comatose” and no longer needed, while many others are grossly underutilized. However, opportunities abound to reduce energy waste in the data-center industry as a whole. Technology that will improve efficiency exists, but systemic measures are needed to remove the barriers limiting its broad adoption across the industry.
How much energy do data centers use?
The rapid growth of digital content, big data, e-commerce and Internet traffic has more than offset energy-efficiency progress, making data centers one of the fastest-growing consumers of electricity in the U.S. economy and a key driver in the construction of new power plants. If U.S. data centers were a country, they would be the globe’s 12th-largest consumer of electricity, ranking somewhere between Spain and Italy.
In 2013, U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity. That’s the equivalent annual output of 34 large (500-megawatt) coal-fired power plants — enough electricity to power all the households in New York City, twice over, for a year. […]
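The plant equivalence above only balances if you assume the plants run below their nameplate rating. A small sketch — the roughly 60 percent capacity factor is an inference from the quoted numbers, not a figure stated in the article:

```python
# Reconciling the quoted figures: 91 billion kWh (91 TWh) vs. the annual
# output of 34 large (500 MW) coal plants. The equivalence implies a
# realistic capacity factor, since plants don't run at full power all year.
PLANT_MW = 500
HOURS_PER_YEAR = 8760
DATA_CENTER_TWH = 91
NUM_PLANTS = 34

nameplate_twh_per_plant = PLANT_MW * HOURS_PER_YEAR / 1_000_000  # 4.38 TWh
implied_capacity_factor = DATA_CENTER_TWH / (NUM_PLANTS * nameplate_twh_per_plant)
print(f"Implied capacity factor: {implied_capacity_factor:.0%}")  # ~61%
```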
Fixing the problem
While current technology can improve data center efficiency, we recommend systemic measures to create conditions for best-practices across the data center industry, including:
Adoption of a simple server-utilization metric. One of the biggest efficiency issues in data centers is underutilization of servers. Adoption of a simple metric, such as the average utilization of the server central processing units (CPUs), is a key step in resolving the energy-consumption issue. […]
Rewarding the right behaviors. Data center operators, service providers and multi-tenant customers should review their internal organizational structures and external contractual arrangements and ensure that incentives are aligned to provide financial rewards for efficiency best practices. […]
Disclosure of data-center energy and carbon performance. Public disclosure is a powerful mechanism for demonstrating leadership and driving behavior change across an entire sector. […]
If just half of the technical savings potential for data-center efficiency that we identify in our report is realized (taking into account market barriers), electricity consumption in U.S. data centers could be cut by as much as 40 percent. […]”<
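The first recommendation — a fleet-wide average-CPU-utilization metric — is straightforward to sketch. A hypothetical illustration; the server names, samples, and the 2 percent “comatose” threshold are all assumptions, not from the report:

```python
# Hypothetical sketch of the simple metric NRDC recommends: fleet-average
# CPU utilization, plus flagging servers that look "comatose" (powered on
# but doing essentially no work).
def fleet_report(samples: dict[str, list[float]],
                 comatose_threshold: float = 0.02) -> tuple[float, list[str]]:
    """samples maps server name -> CPU utilization samples (0.0-1.0)."""
    per_server = {name: sum(u) / len(u) for name, u in samples.items()}
    average = sum(per_server.values()) / len(per_server)
    comatose = [n for n, u in sorted(per_server.items()) if u < comatose_threshold]
    return average, comatose

avg, comatose = fleet_report({
    "web01": [0.15, 0.20, 0.10],   # ~15% -- typical of the 12-18% range cited
    "db01":  [0.40, 0.35, 0.45],
    "old07": [0.01, 0.00, 0.01],   # drawing power, doing no work
})
print(f"fleet average: {avg:.0%}, comatose: {comatose}")
```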
New analysis suggests there’s still an opportunity to cut power consumption and save billions in 2014.
>”A new tally by the Natural Resources Defense Council (NRDC) suggests there’s still a big opportunity to cut energy usage by 40 percent, saving more than $3.8 billion in 2014 alone. Put another way, that’s like switching off 39 billion kilowatt-hours of electricity, the equivalent of 14 large, coal-fired power plants.
“Most of the attention is focused on the highly visible hyperscale ‘cloud’ data centers like Google’s and Facebook’s, but they are already very efficient and represent less than 5 percent of U.S. data center electricity consumption,” said Pierre Delforge, NRDC’s director of high-tech energy efficiency. “Our small, medium, corporate and multi-tenant data centers are still squandering huge amounts of energy.”
Here’s the likely outcome: by 2020, U.S. data centers will probably require about 140 billion kilowatt-hours of electricity annually to stay online.
The biggest culprits in wasteful IT power consumption are underutilized servers, which draw significant amounts of electricity while serving no useful purpose, according to NRDC. […]
Figures suggest the average server operates at just 12 percent to 18 percent of its capacity, which means businesses could stand to be far more aggressive about consolidating or virtualizing them. That’s particularly true of the smallest server rooms, ones that crop up with little advance planning.
“The more work a server performs, the more energy-efficient it is—just as a bus uses much less gasoline per passenger when ferrying 50 people than when carrying just a handful,” the analysis notes.
Among the recommended fixes for this persistent problem are the adoption of metrics that provide deeper insight into average server utilization, more public disclosure of data center energy performance information, and “green” data center leases that provide incentives for energy savings.
The reason why these green data center service contracts work, according to the report, is because they create financial incentives for companies to consider their energy use. […]”<
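The bus analogy quoted above reflects a well-known property of server power draw: idle power is a large fixed cost, so fixed overhead dominates at low utilization. A minimal sketch using a linear power model — the 400 W peak and 50 percent idle fraction are illustrative assumptions, not figures from the report:

```python
# Why a busier server is more efficient per unit of work (the "bus"
# analogy): idle power is high, so the fixed overhead dominates at low
# utilization. Linear power model; idle draw assumed to be 50% of peak.
def watts(utilization: float, peak_w: float = 400, idle_frac: float = 0.5) -> float:
    return peak_w * (idle_frac + (1 - idle_frac) * utilization)

for util in (0.15, 0.60):
    w = watts(util)
    print(f"{util:.0%} utilized: {w:.0f} W total, {w / util:.0f} W per unit of work")
```

Under these assumptions, quadrupling the work (15 percent to 60 percent utilization) raises total draw only from about 230 W to 320 W, so the energy cost per unit of work falls by roughly two-thirds — the efficiency gain that consolidation and virtualization capture.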
The City of Calgary has reached an agreement with Shaw Communications to provide free Wi-Fi at city-owned locations.
>After reviewing applications, the city decided Shaw had the best proposal and technical expertise, and awarded Shaw the contract.
Shaw will partner with the city to install free WiFi zones in a variety of public locations including recreation facilities, parks and LRT stations.
“The City manages a variety of public spaces and we were looking to partner with an organization that would be able to provide reliable WiFi services, at no cost to citizens, as well as meet industry regulations and provide technical support,” says Heather Reed-Fenske, the city’s manager of Information Technology.
During the initial launch of the program, public WiFi will be available in a select number of public spaces. […]
Mayor Naheed Nenshi says free public WiFi will give Calgarians better access to city services.
Once the initial zones are up and running, the city will collect feedback from Calgarians to evaluate the success of the program.
An announcement is expected soon on when the service will be available.<
See on globalnews.ca
>The structure of this agreement is similar to our earlier commitments in Iowa and Oklahoma. Due to the current structure of the market, we can’t consume the renewable energy produced by the wind farm directly, but the impact on our overall carbon footprint and the amount of renewable energy on the grid is the same as if we could consume it. After purchasing the renewable energy, we’ll retire the renewable energy credits (RECs) and sell the energy itself to the wholesale market. We’ll apply any additional RECs produced under this agreement to reduce our carbon footprint elsewhere.<
See on googleblog.blogspot.ca
WARREN, Michigan—General Motors has gone through a major transformation … a three-year effort to reclaim its own IT after 20 years of outsourcing.
>The first physical manifestation of that transformation is here at Warren, where GM has built the first of two enterprise data centers. The $150 million Warren Enterprise Data Center will cut the company’s energy consumption for its enterprise IT infrastructure by 70 percent, according to GM’s CIO Randy Mott. If those numbers hold up, the center will pay for itself within three years through those and other construction savings. […]
The data center is part of a much larger “digital transformation” at the company, Mott said. GM is consolidating its IT operations from 23 data centers scattered around the globe (most of them leased) and hiring its own system engineers and developers for the first time since 1996. Within the next three to five years, GM expects to hire 8,500 new IT employees with 1,600 of them in Warren. “We’re already at about the 7,000 mark for internal IT from our start point of about 1,700,” Mott said. […]
So far, three of the company’s 23 legacy data centers have been rolled into the new Warren data center. That’s eliminated a significant chunk of the company’s wide-area network costs. “We have 8,000 engineers at (Vehicle Engineering Center) here,” Liedel said. And those engineers are pushing around big chunks of data—the “math” for computer-aided design, computer aided manufacturing, and a wide range of high-performance computing simulations.
“Now with the data center on the same campus, we’re not paying for the WAN bandwidth we had before,” Liedel explained. “We’ve got dark fiber here on the campus, and the other major concentration of engineers is at Milford at the Proving Ground.” Milford and Warren are connected over fiber via dense wavelength-division multiplexing, providing 10 channels of 10-gigabit-per-second bandwidth.<
See on arstechnica.com
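For scale, the quoted link works out to 100 Gbps of aggregate capacity between Warren and Milford. A quick sketch — the 1 TB dataset size is an illustrative assumption, not a figure from the article:

```python
# Aggregate capacity of the Warren-Milford fiber link: 10 DWDM channels
# of 10 Gbps each. The 1 TB example dataset is illustrative of the CAD/
# simulation "math" the engineers push around.
channels, gbps_per_channel = 10, 10
aggregate_gbps = channels * gbps_per_channel              # 100 Gbps

dataset_bytes = 1e12                                      # 1 TB
seconds = dataset_bytes * 8 / (aggregate_gbps * 1e9)
print(f"{aggregate_gbps} Gbps aggregate; 1 TB in ~{seconds:.0f} s at line rate")
```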
“While we draw from our direct involvement in Google’s infrastructure design and operation over the past several years, most of what we have learned and now report here is the result of the hard work, insights, and creativity of our colleagues at Google. The work of our Technical Infrastructure teams directly supports the topics we cover here, and therefore, we are particularly grateful to them for allowing us to benefit from their experience.”