About duanetilden

Engineer, Entrepreneur, Blogger, Farmer, Traveler and Nature Lover. I love music and quotes, and I blog about Green Building & Energy.

A Modern Renaissance of Electrical Power: Microgrid Technology – Part 1


Figure 1:  The original Edison DC microgrid in New York City, which started operation on September 4, 1882 (1)

A.  Historical Development of Electric Power in the Metropolitan City

The development of electricity for commercial, municipal and industrial use proceeded at a frantic pace from the mid-1800s into the early 1900s.  The original distribution system consisted of copper wiring laid below the streets of Manhattan’s Lower East Side.  The first power plants and distribution systems were small compared to today’s interconnected grids, which span nations and continents.  These small “islands” of electrical power were the original microgrids.  In time they grew into the massive infrastructure that delivers the electrical power our modern society has come to depend upon.

1) Let There Be Light! – Invention of the Light Bulb

When electricity first came on the scene in the 1800s it was a relatively unknown force. Distribution systems fed from a central plant were a new concept, originally intended to provide electric power for the newly invented incandescent light bulb.  Thomas Edison developed his first DC electric grid to test and prove his ideas in New York, at the Manhattan Pearl Street Station in the early 1880s.  This first “microgrid” turned out to be a formidable undertaking.

[…] Edison’s great illumination took far longer to bring about than he expected, and the project was plagued with challenges. “It was massive, all of the problems he had to solve,” says writer Jill Jonnes, author of Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World, to PBS. For instance, Edison had to do the dirty work of actually convincing city officials to let him use the Lower East Side as a testing ground, which would require digging up long stretches of street to install 80,000 feet of insulated copper wiring below the surface.

He also had to design all of the hardware that would go into his first power grid, including switchboards, lamps, and even the actual meters used to charge specific amounts to specific buildings. That included the six massive steam-powered generators—each weighing 30 tons—which Edison had created to serve this unprecedented new grid, according to IEEE. As PBS explains, Edison was responsible for figuring out all sorts of operational details of the project—including a “bank of 1,000 lamps for testing the system.” (1)

Although Edison was the first to develop a small DC electrical distribution system in a city, there was competition between DC and AC power system schemes in the early years of electrical grid development.  At the same time, a hodge-podge of other power sources and distribution methods coexisted in the early days of modern city development.

In the 1880s, electricity competed with steam, hydraulics, and especially coal gas. Coal gas was first produced on customers’ premises but later evolved into gasification plants that enjoyed economies of scale. In the industrialized world, cities had networks of piped gas, used for lighting. But gas lamps produced poor light, wasted heat, made rooms hot and smoky, and gave off hydrogen and carbon monoxide. In the 1880s electric lighting soon became advantageous compared to gas lighting. (2)

2) Upward Growth – Elevators and Tall Buildings

Another innovation developing at the same time as electrical production and distribution was the elevator, a necessity for tall buildings and eventually towers and skyscrapers.  While there are ancient references to elevating devices and lifts, the first electric elevator was introduced in Germany in 1880 by Werner von Siemens (3).  A safe and efficient means of moving people and goods was vital for upward growth in urban centers and the development of tall buildings.

Later in the 1800s, with the advent of electricity, the electric motor was integrated into elevator technology by German inventor Werner von Siemens. With the motor mounted at the bottom of the cab, this design employed a gearing scheme to climb shaft walls fitted with racks. In 1887, an electric elevator was developed in Baltimore, using a revolving drum to wind the hoisting rope, but these drums could not practically be made large enough to store the long hoisting ropes that would be required by skyscrapers.

Motor technology and control methods evolved rapidly. In 1889 came the direct-connected geared electric elevator, allowing for the building of significantly taller structures. By 1903, this design had evolved into the gearless traction electric elevator, allowing hundred-plus story buildings to become possible and forever changing the urban landscape. Multi-speed motors replaced the original single-speed models to help with landing-leveling and smoother overall operation.

Electromagnet technology replaced manual rope-driven switching and braking. Push-button controls and various complex signal systems modernized the elevator even further. Safety improvements have been continual, including a notable development by Charles Otis, son of original “safety” inventor Elisha, that engaged the “safety” at any excessive speed, even if the hoisting rope remained intact. (4)


Figure 2:  The Woolworth Building at 233 Broadway, Manhattan, New York City – The World’s Tallest Building, 1926 (5)

3) Hydroelectric A/C Power – Tesla, Westinghouse and Niagara Falls

Although Niagara Falls was not the first hydroelectric project, it was by far the largest, and its massive power production capacity helped spawn a second Industrial Revolution.

“On September 30, 1882, the world’s first hydroelectric power plant began operation on the Fox River in Appleton, Wisconsin. […] Unlike Edison’s New York plant which used steam power to drive its generators, the Appleton plant used the natural energy of the Fox River. When the plant opened, it produced enough electricity to light Rogers’s home, the plant itself, and a nearby building. Hydroelectric power plants of today generate a lot more electricity. By the early 20th century, these plants produced a significant portion of the country’s electric energy. The cheap electricity provided by the plants spurred industrial growth in many regions of the country. To get even more power out of the flowing water, the government started building dams.” (6)


Figure 3:  The interior of Power House No. 1 of the Niagara Falls Power Company (1895-1899) (7)


Figure 4:  Adams Power Station with three Tesla AC generators at Niagara Falls, November 16, 1896. (7)

Electrical Transmission, Tesla and the Polyphase Motor

The problem of the best means of transmission, though, would be worked out not by the commission but in the natural course of things, which included great strides in the development of AC. In addition, the natural course of things included some special intervention from on high (that is, from Edison himself).

But above all, it involved Tesla, probably the only inventor ever who could be put in a class with Edison in terms of the number and significance of his innovations. The Croatian-born scientific mystic–he spoke of his insight into the mechanical principles of the motor as a kind of religious vision–had once worked for Edison. He had started out with the Edison Company in Paris, where his remarkable abilities were noticed by Edison’s business cohort and close friend Charles Batchelor, who encouraged Tesla to transfer to the Edison office in New York City, which he did in 1884. There Edison, too, became impressed with him after he successfully performed a number of challenging assignments. But when Tesla asked Edison to let him undertake research on AC–in particular on his concept for an AC motor–Edison rejected the idea. Not only wasn’t Edison interested in motors, he refused to have anything to do with the rival current.

So for the time being Tesla threw himself into work on DC. He told Edison he thought he could substantially improve the DC dynamo. Edison told him if he could, it would earn him a $50,000 bonus. This would have enabled Tesla to set up a laboratory of his own where he could have pursued his AC interests. By dint of extremely long hours and diligent effort, he came up with a set of some 24 designs for new equipment, which would eventually be used to replace Edison’s present equipment.

But he never found the promised $50,000 in his pay envelope. When he asked Edison about this matter, Edison told him he had been joking. “You don’t understand American humor,” he said. Deeply disappointed, Tesla quit his position with the Edison company, and with financial backers, started his own company, which enabled him to work on his AC ideas, among other obligations.

The motor Tesla patented in 1888 is known as the induction motor. It not only provided a serviceable motor for AC but also had a distinct advantage over the DC motor. (About two-thirds of the motors in use today are induction motors.)

The idea of the induction motor is simplicity itself, based on the Faraday principle. And its simplicity is its advantage over the DC motor.

An electrical motor–whether DC or AC–is a generator in reverse. The generator operates by causing a conductor (armature) to move (rotate) in a magnetic field, producing a current in the armature. The motor operates by causing a current to flow in an armature in a magnetic field, producing rotation of the armature. A generator uses motion to produce electricity. A motor uses electricity to produce motion.

The DC motor uses commutators and brushes (a contact switching mechanism that opens and closes circuits) to change the direction of the current in the rotating armature, and thus sustain the direction of rotation and direction of current.

In the AC induction motor, the current supply to the armature is by induction from the magnetic field produced by the field current.  The induction motor thus does away with the troublesome commutators and brushes (or any other contact switching mechanism). However, in the induction motor the armature wouldn’t turn except as a result of rotation of the magnetic field, which is achieved through the use of polyphase current. The different current phases function in tandem (analogous to pedals on a bicycle) to create differently oriented magnetic fields to propel the armature.  

Westinghouse bought up the patents on the Tesla motors almost immediately and set to work trying to adapt them to the single-phase system then in use. This didn’t work. So he started developing a two-phase system. But in December 1890, because of the company’s financial straits–the company had incurred large liabilities through the purchase of a number of smaller companies, and had to temporarily cut back on research and development projects–Westinghouse stopped the work on polyphase. (8)
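To make the quoted “pedals on a bicycle” analogy concrete, here is a small numerical sketch (my own illustration, not from the essay): three unit currents shifted 120 degrees apart in time, feeding three coils oriented 120 degrees apart in space, sum to a magnetic field of constant magnitude that rotates at the supply frequency. That rotating field is what drags the induction motor’s armature around without any commutator.

```python
import math

def net_field(t, f=60.0):
    """Net stator field (x, y) from three coils spaced 120 degrees apart in
    space, each carrying a unit sinusoidal current shifted 120 degrees in time."""
    w = 2 * math.pi * f
    bx = by = 0.0
    for k in range(3):
        theta = 2 * math.pi * k / 3        # coil orientation in space
        i_k = math.cos(w * t - theta)      # phase current, shifted in time
        bx += i_k * math.cos(theta)        # each coil's field lies along its
        by += i_k * math.sin(theta)        # own axis, scaled by its current
    return bx, by

# The magnitude stays constant (1.5 units) while the direction sweeps around:
for t in (0.0, 0.002, 0.004, 0.006):
    bx, by = net_field(t)
    print(f"t={t * 1000:4.1f} ms  |B|={math.hypot(bx, by):.3f}  "
          f"angle={math.degrees(math.atan2(by, bx)):6.1f} deg")
```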

4) The Modern Centralized Electric Power System

After the innovative technologies that allowed expansion and growth within metropolitan centers were developed, there was a race to establish large power plants and distribution systems linking power sources to users.  Alternating current (AC) was found to be the preferred method of power transmission over copper wires from distant sources.  Direct current (DC) transmission proved problematic over distance: resistive heating in the lines resulted in significant power losses. (9)

440px-New_York_utility_lines_in_1890

Figure 5:  New York City streets in 1890. Besides telegraph lines, multiple electric lines were required for each class of device requiring different voltages (11)

AC has a major advantage in that it is possible to transmit AC power at high voltage and convert it to low voltage to serve individual users.
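The reason is resistive line loss. For a fixed delivered power P, the line current is I = P/V, so the I²R heating loss falls with the square of the transmission voltage. A minimal sketch with made-up numbers (a hypothetical 100 kW load over a 1-ohm line, not figures from the cited sources):

```python
def line_loss_fraction(power_w, volts, resistance_ohm):
    """Fraction of transmitted power lost to resistive heating in the line."""
    current = power_w / volts               # I = P / V
    loss_w = current ** 2 * resistance_ohm  # P_loss = I^2 * R
    return loss_w / power_w

# Stepping the transmission voltage up tenfold cuts losses a hundredfold:
for v in (1_000, 10_000, 100_000):
    print(f"{v:>7} V: {line_loss_fraction(100_000, v, 1.0):.4%} of power lost")
```

This is why the transformer, which requires AC, made long-distance transmission practical.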

From the late 1800s onward, a patchwork of AC and DC grids cropped up across the country, in direct competition with one another. Small systems were consolidated throughout the early 1900s, and local and state governments began cobbling together regulations and regulatory groups. However, even with regulations, some businessmen found ways to create elaborate and powerful monopolies. Public outrage at the subsequent costs came to a head during the Great Depression and sparked Federal regulations, as well as projects to provide electricity to rural areas, through the Tennessee Valley Authority and others.

By the 1930s regulated electric utilities became well-established, providing all three major aspects of electricity: the power plants, transmission lines, and distribution. This type of electricity system, a regulated monopoly, is called a vertically-integrated utility. Bigger transmission lines and more remote power plants were built, and transmission systems became significantly larger, crossing many miles of land and even state lines.

As electricity became more widespread, larger plants were constructed to provide more electricity, and bigger transmission lines were used to transmit electricity from farther away. In 1978 the Public Utilities Regulatory Policies Act was passed, making it possible for power plants owned by non-utilities to sell electricity too, opening the door to privatization.

By the 1990s, the Federal government was completely in support of opening access to the electricity grid to everyone, not only the vertically-integrated utilities. The vertically-integrated utilities didn’t want competition and found ways to prevent outsiders from using their transmission lines, so the government stepped in and created rules to force open access to the lines, and set the stage for Independent System Operators, not-for-profit entities that managed the transmission of electricity in different regions.

Today’s electricity grid – actually three separate grids – is extraordinarily complex as a result. From the very beginning of electricity in America, systems were varied and regionally-adapted, and it is no different today. Some states have their own independent electricity grid operators, like California and Texas. Other states are part of regional operators, like the Midwest Independent System Operator or the New England Independent System Operator. Not all regions use a system operator, and there are still municipalities that provide all aspects of electricity. (10)

 


Figure 6:  Diagram of a modern electric power system (11)

A Brief History of Electrical Transmission Development

The first transmission of three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15,000 V transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt.[6][12]

Voltages used for electric power transmission increased throughout the 20th century. By 1914, fifty-five transmission systems each operating at more than 70,000 V were in service. The highest voltage then used was 150,000 V.[13] By allowing multiple generating plants to be interconnected over a wide area, electricity production cost was reduced. The most efficient available plants could be used to supply the varying loads during the day. Reliability was improved and capital investment cost was reduced, since stand-by generating capacity could be shared over many more customers and a wider geographic area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to lower energy production cost.[3][6]

The rapid industrialization in the 20th century made electrical transmission lines and grids a critical infrastructure item in most industrialized nations. The interconnection of local generation plants and small distribution networks was greatly spurred by the requirements of World War I, with large electrical generating plants built by governments to provide power to munitions factories. Later these generating plants were connected to supply civil loads through long-distance transmission. (11)

 

To be continued in Part 2:  Distributed Generation and The Microgrid Revolution

 

References:

  1. The Forgotten Story Of NYC’s First Power Grid  by Kelsey Campbell-Dollaghan
  2. The Electrical Grid – Wikipedia
  3. The History of the Elevator – Wikipedia
  4. Elevator History – Columbia Elevator
  5. The History of Elevators and Escalators – The Wonder Book Of Knowledge | by Henry Chase (1921)
  6. The World’s First Hydroelectric Power Station
  7. Tesla Memorial Society of New York Website 
  8. The Day They Turned The Falls On: The Invention Of The Universal Electrical Power System by Jack Foran
  9. How electricity grew up? A brief history of the electrical grid
  10. The electricity grid: A history
  11. Electric power transmission

Hybrid Electric Buildings; A New Frontier for Energy and Grids

Hybrid Electric Buildings are the latest development in packaged energy storage for buildings, offering several advantages including long-term operational cost savings. These buildings have the flexibility to combine several technologies and energy sources with a large-scale integrated electric battery system to operate in a cost-effective manner.

San Francisco’s landmark skyscraper, One Maritime Plaza, will become the city’s first Hybrid Electric Building using Tesla Powerpack batteries. The groundbreaking technology upgrade by Advanced Microgrid Solutions (AMS) will lower costs, increase grid and building resiliency, and reduce the building’s demand for electricity from the sources that most negatively impact the environment.

Building owner Morgan Stanley Real Estate Investing hired San Francisco-based AMS to design, build, and operate the project. The 500 kilowatt/1,000 kilowatt-hour indoor battery system will provide One Maritime Plaza with the ability to store clean energy and control demand from the electric grid. The technology enables the building to shift from grid to battery power to conserve electricity in the same way a hybrid-electric car conserves gasoline. (1)

In addition to storage solutions, these buildings can offer significant roof area for solar panel modules and arrays to generate power during the day.  In areas where sunshine is plentiful and electricity rates are high, solar PV and storage combinations for commercial installations are economically attractive.

For utility management, these systems are ideal for expansion of the overall grid: as more microgrids attach to the utility infrastructure, overall supply and resiliency improve.

In recent developments, AMS has partnered with retailer Wal-Mart to provide on-site, “behind the meter” energy storage solutions at no upfront cost.


Figure 2.  Solar Panels on Roof of Wal-Mart, Corporate Headquarters, Puerto Rico (3)

On Tuesday, the San Francisco-based startup announced it is working with the retail giant to install behind-the-meter batteries at stores to balance on-site energy and provide megawatts of flexibility to utilities, starting with 40 megawatt-hours of projects at 27 Southern California locations.

Under the terms of the deal, “AMS will design, install and operate advanced energy storage systems” at the stores for no upfront cost, while providing grid services and on-site energy savings. The financing was made possible by partners such as Macquarie Capital, which pledged $200 million to the startup’s pipeline last year.

For Wal-Mart, the systems bring the ability to shave expensive peaks, smooth out imbalances in on-site generation and consumption, and help it meet a goal of powering half of its operations with renewable energy by 2025. Advanced Microgrid Solutions will manage its batteries in conjunction with building load — as well as on-site solar or other generation — to create what it calls a “hybrid electric building” able to keep its own energy costs to a minimum, while retaining flexibility for utility needs.

The utility in this case is Southern California Edison, a long-time AMS partner, which “will be able to tap into these advanced energy storage systems to reduce demand on the grid as part of SCE’s groundbreaking grid modernization project,” according to Tuesday’s statement. This references the utility’s multibillion-dollar grid modernization plan, which is now before state regulators.  (2)
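To sketch how such a battery shaves a building’s peak demand, here is a minimal dispatch loop sized like the One Maritime Plaza system (500 kW / 1,000 kWh). The load profile and demand threshold are invented for illustration; this is not AMS’s actual control logic.

```python
def peak_shave(load_kw, battery_kw=500.0, battery_kwh=1000.0,
               threshold_kw=1500.0, hours_per_step=1.0):
    """Discharge the battery whenever building load exceeds the threshold,
    so the grid never sees demand above it (until the battery is empty)."""
    soc = battery_kwh                 # state of charge, kWh (start full)
    grid_kw = []
    for load in load_kw:
        excess = max(0.0, load - threshold_kw)
        discharge = min(excess, battery_kw, soc / hours_per_step)
        soc -= discharge * hours_per_step
        grid_kw.append(load - discharge)
    return grid_kw

# Hypothetical hourly load (kW) with an afternoon peak:
load = [1200, 1300, 1600, 1900, 1800, 1400]
print(peak_shave(load))  # peaks above 1500 kW are served from the battery
```

Because commercial tariffs bill heavily for the single highest demand interval, clipping those few peak hours is where much of the storage value comes from.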

References:

  1. San Francisco’s First Hybrid Electric Building – Facility Executive, June 28, 2016
    https://facilityexecutive.com/2016/06/skyscraper-will-be-san-franciscos-first-hybrid-electric-building/

  2. Wal-Mart, Advanced Microgrid Solutions to Turn Big-Box Stores Into Hybrid Electric Buildings – GreenTech Media, April 11, 2017  https://www.greentechmedia.com/articles/read/wal-mart-to-turn-big-box-stores-into-hybrid-electric-buildings

  3. Solar Panels on Wal-Mart Roof  http://corporate.walmart.com/_news_/photos/solar-panels-roof-puerto-rico

An Engineer’s Take On Major Climate Change

Summary:
1. Climate science is very complicated and very far from being settled.

2. Earth’s climate is overwhelmingly dominated by negative feedbacks that are currently poorly represented in our modeling efforts and not sufficiently part of ongoing investigations.

3. Climate warming drives atmospheric CO2 upward as it stimulates all natural sources of CO2 emission. Climate cooling drives atmospheric CO2 downward.

4. Massive yet delayed thermal modulations to the dissolved CO2 content of the oceans are what ultimately drive and dominate the modulations to atmospheric CO2.

5. The current spike in atmospheric CO2 is largely natural (~98%). That is, of the 100 ppm increase we have seen recently (going from 280 to 380 ppm), the move from 280 to 378 ppm is natural, while the last bit from 378 to 380 ppm is rightfully anthropogenic.

6. The current spike in atmospheric CO2 would most likely be larger than now observed if human beings had never evolved. The additional CO2 contribution from insects and microbes (and mammalia for that matter) would most likely have produced a greater current spike in atmospheric CO2.

7. Atmospheric CO2 has a tertiary to non-existent impact on the instigation and amplification of climate change. CO2 is not pivotal. Modulations to atmospheric CO2 are the effect of climate change and not the cause.

Watts Up With That?

Guest essay by Ronald D. Voisin

Let’s examine, at a high and salient level, the positive-feedback Anthropogenic Global Warming, Green-House-Gas Heating Effect (AGW-GHGHE) with its supposed pivotal role for CO2. The thinking is that a small increase in atmospheric CO2 will trigger a large increase in atmospheric Green-House-Gas water vapor. And then the combination of these two enhanced atmospheric constituents will lead to run-away, or at least appreciable and unprecedented – often characterized as catastrophic – global warming.

This theory relies entirely on a powerful positive-feedback and overriding (pivotal) role for CO2. It further assumes that rising atmospheric CO2 is largely or even entirely anthropogenic. Both of these points are individually and fundamentally required at the basis of alarm. Yet neither of them is in evidence whatsoever. And neither of them is even remotely true. CO2 is not only “not pivotal” but it…

View original post 4,012 more words

Water Conservation and a Change in Climate Ends California Drought

Water scarcity is becoming a greater problem in our world as human demand for water increases due to population growth, industry, agriculture, and energy production. When the water supply is pushed beyond its natural limits, disaster may occur.  For California residents the end of the drought is good news.  The return of wet weather raises reservoir levels and reduces wildfire risk.  However, another drought could be around the corner in years to come.  Thus government and water users need to remain vigilant and continue to seek ways to conserve and reduce water use.
Figure 1. 2017 California Major Water Reservoir Levels
By Bark Gomez and Yasemin Saplakoglu, Bay Area News Group (1)
Friday, April 07, 2017 05:17PM

Gov. Jerry Brown declared an end to California’s historic drought Friday, lifting emergency orders that had forced residents to stop running sprinklers as often and encouraged them to rip out thirsty lawns during the state’s driest four-year period on record.

The drought strained native fish that migrate up rivers and forced farmers in the nation’s leading agricultural state to rely heavily on groundwater, with some tearing out orchards. It also dried up wells, forcing hundreds of families in rural areas to drink bottled water and bathe from buckets.

Brown declared the drought emergency in 2014, and officials later ordered mandatory conservation for the first time in state history. Regulators last year relaxed the rules after rainfall was close to normal.

But monster storms this winter erased nearly all signs of drought, blanketing the Sierra Nevada, California’s key water source, with deep snow and boosting reservoirs.

“This drought emergency is over, but the next drought could be around the corner,” Brown said in a statement. “Conservation must remain a way of life.” (2)

References:

  1. https://wattsupwiththat.com/2017/04/08/what-permanent-drought-california-governor-officially-declares-end-to-drought-emergency/ 
  2. http://abc7news.com/weather/governor-ends-drought-state-of-emergency-in-most-of-ca/1846410/

What Does Moist Enthalpy Tell Us?

“In terms of assessing trends in globally-averaged surface air temperature as a metric to diagnose the radiative equilibrium of the Earth, the neglect of using moist enthalpy, therefore, necessarily produces an inaccurate metric, since the water vapor content of the surface air will generally have different temporal variability and trends than the air temperature.”

Climate Science: Roger Pielke Sr.

In our blog of July 11, we introduced the concept of moist enthalpy (see also Pielke, R.A. Sr., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211. ). This is an important climate change metric, since it illustrates why surface air temperature alone is inadequate to monitor trends of surface heating and cooling. Heat is measured in units of Joules. Degrees Celsius is an incomplete metric of heat.

Surface air moist enthalpy does capture the proper measure of heat. It is defined as CpT + Lq where Cp is the heat capacity of air at constant pressure, T is air temperature, L is the latent heat of phase change of water vapor, and q is the specific humidity of air. T is what we measure with a thermometer, while q is derived by measuring the wet bulb temperature (or, alternatively, dewpoint…

View original post 203 more words
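To illustrate the quoted formula, here is a quick sketch with standard constants and invented example values (my illustration, not from Pielke’s post): two air samples at the same thermometer reading but different humidity differ in heat content by L times the humidity difference.

```python
CP = 1005.0  # heat capacity of air at constant pressure, J/(kg*K)
LV = 2.5e6   # latent heat of vaporization of water, J/kg (approximate)

def moist_enthalpy(temp_c, q):
    """Surface air moist enthalpy CpT + Lq, in J per kg of air.
    temp_c: air temperature; q: specific humidity, kg water per kg air.
    The reference point is arbitrary; differences are what matter."""
    return CP * temp_c + LV * q

# Same 30 C temperature, different humidity, different heat content:
print(moist_enthalpy(30.0, 0.010))  # drier air: 55,150 J/kg
print(moist_enthalpy(30.0, 0.020))  # humid air: 80,150 J/kg, 25 kJ/kg more
```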

Site C Dam Construction in BC – A Political Water Grab?

Mega projects grab headlines and provide many photo opportunities for politicians.  Since the construction of the Depression-era Hoover Dam, these massive construction projects have historically provided jobs and opportunity when the economy is slow.  However, some questions remain: are these projects in everyone’s best interests, what are we losing, and is there a better way to accomplish our goals?

“‘Water grabbing’ refers to a situation in which public or private entities are able to take control of, or reallocate, precious water resources for profit or for power — and at the expense of local communities and the ecosystems on which their livelihoods are based.

The effects have been well-documented: examples include families driven away from their villages to make room for mega dams, privatization of water sources that fails to improve access for the public, and industrial activity that damages water quality.”

[…]

“…hydropower comprises about 70 per cent of the world’s renewable energy mix, and guarantees a lower amount of total emissions than fossil fuel plants, its overall impacts are not always positive. This is especially the case when dams are not planned with an emphasis on the impacts on people and the environment.

In North America, many dams built in the 1980s are now being demolished because of their impacts on fish species such as salmon. In some cases they are replaced with more modern dams that do not require building large-scale reservoirs.” (1)

A Short Political History of the Site C Dam


Figure 1.  Construction on the Site C dam on the Peace River in the fall of 2016. Photo: Garth Lenz. (2)

“On May 10, 1990, the Vancouver Sun reported remarks made by then Energy Minister Jack Davis at an Electric Energy Forum: “Power projects initiated by B.C. Hydro will be increasingly guided by environmental concerns because of mounting public pressure.” Noting the province’s abundance of power sources, he said: “We have the scope to be different.”

However, during the 1991 Social Credit party leadership campaign, the eventual winner, Rita Johnston, declared in her policy statement that she wanted to accelerate construction of the “$3 billion” dam. Johnston’s leadership was brief because the Socreds were defeated in October 1991.

In 1993, the dam was declared dead by then BC Hydro CEO Marc Eliesen. “Site C is dead for two reasons,” Eliesen said. “The fiscal exposure is too great … the dam is too costly. Also it is environmentally unacceptable.”

Despite these twists and turns, B.C. Hydro’s staff worked diligently to keep the dam alive.

Fast forward to April 19, 2010, when then B.C. Liberal Premier Gordon Campbell made his announcement that Site C was on again, now branded as a “clean energy project” and an important part of “B.C.’s economic and ecological future.”

Campbell claimed the dam would power 460,000 new homes and repeated the mantra of an increasing power demand of 20 to 40 per cent in the following 20 years.

In the ensuing seven years since the 2010 announcement, power demand has stayed virtually the same, despite BC Hydro’s forecast for it to climb nearly 20 per cent during that time. The reality is B.C.’s electricity demand has been essentially flat since 2005, despite ongoing population growth.

Campbell resigned in 2011 amidst uproar over the Harmonized Sales Tax (HST), opening the field for a leadership race, which Christy Clark won. That brings us to the May 2013 election, during which Clark pushed liquefied natural gas (LNG) exports as the solution to B.C.’s economic woes. With the LNG dream came a potential new demand for grid electricity, making Site C even more of a hot topic.

Four years on from Clark’s pronouncement there are no LNG plants up and running, despite her promise of thousands of jobs. Without a market for Site C’s power, Clark has started ruminating about sending it to Alberta, despite a lack of transmission or a clear market.

Oxford University Professor Bent Flyvbjerg has studied politicians’ fascination with mega projects, describing the rapture they feel building monuments to themselves: “Mega projects garner attention, which adds to the visibility politicians gain from them.”

This goes some way to explaining the four-decade obsession with building the Site C dam, despite the lack of clear demand for the electricity. (2)

 

References:

  1.  Water and power: Mega-dams, mega-damage?
    http://www.scidev.net/global/water/data-visualisation/water-power-mega-dams-mega-damage.html
  2. Four Decades and Counting: A Brief History of the Site C Dam https://www.desmog.ca/2017/03/23/four-decades-and-counting-brief-history-site-c-dam

Clean Energy creates more jobs than fossil fuels, with a wage premium

Sustainability as an economic driving force does not need the distracting agenda of “Climate Change” to be viable.

Work and Climate Change Report

Following on the January 2017 report US Energy and Employment from the U.S. Department of Energy, more evidence of the healthy growth of the clean energy industry comes in a report by the Environmental Defense Fund Climate Corps and Meister consultants. Now Hiring: The Growth of America’s Clean Energy and Sustainability Jobs compiles the latest statistics from diverse sources, and concludes that “sustainability” accounts for an estimated 4.5 million jobs (up from 3.4 million in 2011) in the U.S. in 2015. Sustainability jobs are defined as those in energy efficiency and renewable energy, as well as waste reduction, natural resources conservation and environmental education, vehicle manufacturing, public sector, and corporate sustainability jobs. Statistics drill down to wages and working conditions – for example, average wages for energy efficiency jobs are almost $5,000 above the national median, and wages for solar workers are above the national median of $17.04…

View original post 127 more words

California adopts nation’s first energy-efficiency rules for computers

The California Energy Commission has passed energy-efficiency standards for computers and monitors in an effort to reduce power costs, becoming the first state in the nation to adopt such rules. Th…

Source: California adopts nation’s first energy-efficiency rules for computers

Twelve Reasons Why Globalization is a Huge Problem

Globalization seems to be looked on as an unmitigated “good” by economists. Unfortunately, economists seem to be guided by their badly flawed models; they miss  real-world problems. In …

Source: Twelve Reasons Why Globalization is a Huge Problem

Benchmarking Buildings by Energy Use Intensity (EUI)

There are many metrics and measurements when it comes to evaluating energy as we use it in our daily lives.  In order to compare different sources or end uses, we often have to convert our terms so that comparisons are equitable.  This may be further complicated because different countries often use different standards of measure; however, here we will convert to common units.

Benchmarking

Benchmarking is the practice of comparing the measured performance of a device, process, facility, or organization to itself, its peers, or established norms, with the goal of informing and motivating performance improvement. When applied to building energy use, benchmarking serves as a mechanism to measure energy performance of a single building over time, relative to other similar buildings, or to modeled simulations of a reference building built to a specific standard (such as an energy code). (1)

Benchmarking is a common practice used to establish a building’s existing consumption rates, identify areas that require improvement, and help prioritize improvement projects.  Benchmarks can be established for a building, a system within a building, or even a larger campus, facility or power source.  Usually an energy or facility manager will determine energy consumption over a fixed period of time, typically 1 to 3 years, and compare it to similar facilities.  Normalized by the gross square footage of the building, EUI is usually expressed in kBtu/sf per year.
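As a minimal worked example of that normalization (hypothetical building numbers; the conversion factors are the standard 3.412 kBtu per kWh and 100 kBtu per therm):

```python
def eui_kbtu_per_sf(annual_kwh, annual_therms, floor_area_sf):
    """Site EUI in kBtu per square foot per year.
    1 kWh = 3.412 kBtu; 1 therm = 100 kBtu."""
    total_kbtu = annual_kwh * 3.412 + annual_therms * 100.0
    return total_kbtu / floor_area_sf

# A hypothetical 50,000 sf office using 600,000 kWh and 10,000 therms a year:
print(f"{eui_kbtu_per_sf(600_000, 10_000, 50_000):.1f} kBtu/sf per year")  # ~60.9
```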

Energy Intensity (EI) of a Country

Figure 1:  Energy Intensity of different economies. The graph shows the amount of energy it takes to produce a US $ of GNP for selected countries. (2)

Not to be confused with Energy Use Intensity, Energy Intensity (EI) is an economic measure of energy use normalized by the GDP of a country and is considered a measure of a nation’s energy efficiency.  Countries with a high EI incur a higher energy cost to produce a unit of GDP, whereas countries with a low EI produce the same GDP with less energy.  Many factors contribute to the EI value, including climate, energy sources and economic productivity. (2)
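A one-line illustration of the metric, with invented national totals:

```python
def energy_intensity(primary_energy_mj, gdp_usd):
    """Energy intensity: primary energy consumed per dollar of GDP, MJ/US$."""
    return primary_energy_mj / gdp_usd

# Two hypothetical economies with equal GDP but different energy appetites:
print(energy_intensity(8.0e12, 1.0e12))  # 8.0 MJ/$: energy-intensive
print(energy_intensity(4.0e12, 1.0e12))  # 4.0 MJ/$: more energy-efficient
```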

Energy Use Intensity (EUI)

The EUI of a building includes electrical power use and fuel consumption for heating and hot water generation.  Facilities carry different loads according to their primary use or function, including cooling and refrigeration.  Electricity is needed for lighting to keep occupants comfortable and for plug loads to run the equipment in the facility.  Heating, ventilation and air conditioning (HVAC) may require electricity or another fuel such as natural gas, and hot water may be generated with either.  A site may also have solar PV or solar hot water, wind power, and daylighting programs.  Building operators can also employ many strategies to reduce loads and energy consumption, including controls, storage, micro-grids, purchasing offsets, etc.

When comparing buildings, people not only talk about total energy demands, but also talk about “energy use intensity” (EUI).  Energy intensiveness is simply energy demand per unit area of the building’s floorplan, usually in square meters or square feet. This allows you to compare the energy demand of buildings that are different sizes, so you can see which performs better.

EUI is a particularly useful metric for setting energy use benchmarks and goals. The EUI usually varies quite a bit based on the building program, the climate, and the building size. (3)


Figure 2.  Typical EUI for selected buildings.  This graph is based on research EPA conducted on more than 100,000 buildings (4)

Site Energy vs Source Energy

As we go forward into the future, it is unclear how current events will affect international agreements on reducing carbon emissions.  Generally speaking, however, renewable energy sources are becoming more economical for power production.  For many facilities this means that supplementing existing grid power with on-site power production is starting to make economic sense.  Future building improvements may include sub-systems, batteries and energy storage schemes, renewable sources, and automated or advanced control systems to reduce reliance on grid-sourced power.

The energy intensity values discussed above only consider the amount of electricity and fuel that are used on-site (“secondary” or “site” energy). They do not consider the fuel consumed to generate that heat or electricity. Many building codes and some tabulations of EUI attempt to capture the total impact of delivering energy to a building by defining the term “primary” or “source” energy, which includes the fuel used to generate power on-site or at a power plant far away.

When measuring energy used to provide thermal or visual comfort, site energy is the most useful measurement. But when measuring total energy usage to determine environmental impacts, the source energy is the more accurate measurement.

Sometimes low on-site energy use actually causes more energy use upstream.  For example, 2 kWh of natural gas burned on-site for heat might seem worse than 1 kWh of electricity used on-site to provide the same heating with a heat pump.  However, 1 kWh of site electricity from the average US electrical grid is equal to 3.3 kWh of source energy, because of inefficiencies in power plants that burn fuel for electricity, and because of small losses in transmission lines.  So in fact the 2 kWh of natural gas burned on site is better for heating from a source-energy perspective. (3)
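A small sketch of the site-to-source arithmetic in the quoted example. The multipliers below approximate the ones the passage cites; they are assumed for illustration, not an official EPA table.

```python
# Approximate site-to-source multipliers (assumed for illustration):
SOURCE_FACTOR = {"grid_electricity": 3.3, "natural_gas": 1.05}

def source_energy(site_kwh, fuel):
    """Convert site energy (kWh) to source energy (kWh) for a given fuel."""
    return site_kwh * SOURCE_FACTOR[fuel]

# The heat-pump-vs-gas comparison from the passage:
print(source_energy(1.0, "grid_electricity"))  # 1 kWh on site -> 3.3 kWh source
print(source_energy(2.0, "natural_gas"))       # 2 kWh on site -> 2.1 kWh source
# The gas option uses less source energy, matching the quoted conclusion.
```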

References:

(1) Building Energy Use Benchmarking  https://energy.gov/eere/slsc/building-energy-use-benchmarking

(2) Energy Intensity  https://en.wikipedia.org/wiki/Energy_intensity

(3) Measuring Building Energy Use  https://sustainabilityworkshop.autodesk.com/buildings/measuring-building-energy-use

(4) What Is Energy Use Intensity (EUI)?  https://www.energystar.gov/buildings/facility-owners-and-managers/existing-buildings/use-portfolio-manager/understand-metrics/what-energy