Microgrid as a Service (MaaS) and the Blockchain

It is a splendid thing to watch two new technologies combine to create a new marketplace. In recent years, new sources of distributed energy have been entering the electrical grid, and they are forcing a change to the existing large-scale, centralized model of power supply.

Classic Electric Power Grid Model

Figure 1. Classic electric power grid model with bulk generators transferring power long distances to reach the consumer.  Image courtesy of NetGain Energy Advisors. (1)

The old utility model was large and centralized, and tracking transactions was simple: consumers sat on one side of the ledger and the provider on the other. Currency and energy flowed only in opposite directions between two identified parties, consumer and provider.

In the emerging markets of small-scale independent energy providers, buildings, communities and even individual residences have built the capacity to provide power, intermittently or on demand, at some times, and to consume or store power from the grid at others. Solar power, for example, is only available during the day and will require new commercial methods of energy storage.


Figure 2. An example Microgrid (2)

The transition away from the centralized utility is driving the development of the micro-grid. The micro-grid offers many benefits to society, including: (a) use of renewable energy sources that reduce or eliminate the production of GHGs; (b) increased transmission efficiency, since shorter distances require less infrastructure and lose less power; (c) improved municipal resilience against disasters and power reductions; and (d) promotion of economic activity that improves the universal standard of living.
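Benefit (b) follows from simple resistive-loss arithmetic: line loss scales with conductor length. Below is a back-of-envelope sketch in Python; the current and resistance figures are illustrative assumptions, not data from any utility.

```python
# Back-of-envelope sketch: resistive line loss P = I^2 * R grows with
# conductor length, so shorter microgrid feeders waste less power.
# All numbers below are illustrative, not from any specific utility.

def line_loss_kw(current_a, ohms_per_km, km):
    """Resistive loss in kW for a feeder carrying current_a amps."""
    return (current_a ** 2) * ohms_per_km * km / 1000.0

long_haul = line_loss_kw(200, 0.05, 100)  # 100 km transmission feeder
local = line_loss_kw(200, 0.05, 2)        # 2 km microgrid feeder
print(long_haul, local)  # 200.0 kW vs 4.0 kW at the same current
```

The same current over a fiftieth of the distance dissipates a fiftieth of the power, which is the intuition behind the efficiency claim.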

As buildings and communities evolve they are moving toward renewable energy sources to supplement their energy requirements and reduce operating costs. Even the building codes are getting into the act, requiring buildings be constructed to new energy efficiency standards. Also, we are seeing the development of new technologies and business methods, such as solar powered charging stations for electric vehicles.

The existing electrical grid and utility model has to develop and adapt to these new technologies and means of locally generating power. The future will include the development and incorporation of peer to peer networks and alternative energy supply methods. Consumers may purchase power from multiple sources, and produce power and supply it to other users via the electrical grid.

Micro-grid and the Blockchain

As new energy sources and providers emerge, complexity is added to the network. Consumers of power can also be energy providers, and may have several energy sources available. This increased functionality raises the complexity of possible transactions in the network.

Imagine a financial ledger in which each user is no longer constrained to be only a consumer, but can also be a supplier to other users. To track both the credits and the debits, it has been proposed that the exchange of blockchain tokens be used to sort out complicated energy transfer transactions in a distributed P2P network.
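The credit-and-debit bookkeeping such a ledger performs can be sketched in a few lines. This is a hypothetical illustration of token-based net settlement, not the design of any real platform; the names `Trade` and `settle` are my own.

```python
from collections import defaultdict

# Hypothetical sketch: net token settlement of P2P energy trades.
# A participant can appear as seller in one trade and buyer in another.

class Trade:
    def __init__(self, seller, buyer, kwh, tokens_per_kwh):
        self.seller = seller
        self.buyer = buyer
        self.tokens = kwh * tokens_per_kwh

def settle(trades):
    """Return each participant's net token balance after all trades."""
    balances = defaultdict(float)
    for t in trades:
        balances[t.seller] += t.tokens   # supplier is credited
        balances[t.buyer] -= t.tokens    # consumer is debited
    return dict(balances)

trades = [
    Trade("rooftop_solar_A", "household_B", kwh=5.0, tokens_per_kwh=2.0),
    Trade("household_B", "ev_charger_C", kwh=2.0, tokens_per_kwh=2.5),
]
print(settle(trades))
# household_B is debited 10 tokens as a buyer and credited 5 as a seller
```

The point is that one ledger handles participants who sit on both sides of the market, which is exactly what the classic one-way utility model could not do.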

P2P TRADING

This class of Platform Application gives retailers the ability to empower consumers (or in an unregulated environment, the consumers themselves) to simply trade electricity with one another and receive payment in real-time from an automated and trustless reconciliation and settlement system. There are many other immediate benefits such as being able to select a clean energy source, trade with neighbors, receive more money for excess power, benefit from transparency of all your trades on a blockchain and very low-cost settlement costs all leading to lower power bills and improved returns for investments in distributed renewables. (3)

One blockchain-based energy token that has caught my attention is called POWR, currently in pre-ICO sales by the Australian platform Power Ledger. One of the suggested uses of the platform is peer-to-peer trading.

 “We are absolutely thrilled with the results of the public presale,” says Dr Jemma Green, co-founder and chair of Power Ledger. “Selling out in just over 3 days is a very strong performance in line with global ICO standards, which speaks to the strong levels of interest from consumer and institutional buyers.”

The proceeds from the total pre sale were AU$17 million and the main sale on Friday offers approximately 150 million POWR tokens (subject to final confirmation before the sale opens) in an uncapped sale, meaning that the level of market demand will have set the final token price at the end of the sale. (4)

 

References

  1. The Changing Power Landscape
  2. Siemens – Microgrid Solutions
  3. Power Ledger Applications
  4. PRESS RELEASE Having Closed $17M In Their Presale ICO, Power Ledger Confirm Their Public Sale Will Commence on 8th September 2017

An Engineering Blockchain Cryptocurrency

The revolutionary aspect of the blockchain is starting serious discussions in the Professional Engineering community. Indications are that some fundamental problems in Engineering may be solved by the issuance of a token, in this case called Quant (1), which is currently in the "sand-box" phase of development.

The plan, in part, involves mining Quant to build a public database called Engipedia. There is also a "proof-of-stake" (2) aspect, which forms an engineer's private key, summarizing by algorithm the engineer's personal data such as education, qualifications, projects, and other contributions or related works.

The Quant token, which is proposed to have inherent smart contract capabilities, would be mined by engineers in a variety of ways, most of which are intended to establish an expanding knowledge base. One such enterprise is Engipedia, a knowledge base with a formidable upside for democratic technological advancement and the dissemination of workable knowledge worldwide.

As a virtual currency, the Quant token may provide a necessary bridge to financing that was previously inaccessible to engineers. Often pools of capital are controlled by vested interests or politically minded parties. Economic opportunities, which previously were unavailable due to lack of funding, may now have a financial vehicle for entrepreneurial Engineers.

The Design is the Contract

Engineering is different than finance and insurance. Finance and Insurance merely need to represent a physical object in a party / counter-party transaction script.  There is no design involved. Engineering represents a physical object – the engineering design and specification IS the smart contract. Then, what happens in construction, operations, maintenance, renovation, and replacement is far too complex to be scripted in a single smart contract. Engineering outcomes involve enormous mass, forces, and real-life consequences. (3)

References:

  1. The Market for QUANT
  2. QUANT Proof of Stake
  3. A Warning to Engineering Firms Concerning Blockchain Technology

Sustainable Smart Cities and Disaster Mitigation – Preparing for the 1000 Year Storm

Hurricanes Cause Massive Damage

In light of recent events, such as the 2017 hurricane season, which has already struck large sections of Texas with Hurricane Harvey, causing damage estimated at $180 billion by Texas Governor Greg Abbott (1), there are questions about how we can better prepare cities for disaster. One method lies in our building codes, which are constantly being upgraded and improved so that buildings are constructed to be more resilient and to handle harsher conditions.

There is a limit to what a building code can do and enforce. Areas and regions that have seen widespread destruction will have to be rebuilt, but to what standards? The existing building codes will have to be examined for their efficacy in storm-proofing buildings against high winds and water penetration, some of which has already been done.

Codes do not prevent external disasters such as storms, tornadoes, tsunamis, earthquakes, forest fires, lightning, landslides, nuclear melt-downs and other extreme natural and man-made events. What building codes do is establish minimum standards of construction for various types of buildings and structures. Buildings, vehicles, roads, power systems and other components of a city's infrastructure remain vulnerable to flooding, which cannot be addressed in a building code; other standards are needed to address this problem.

Storm-Proofing Cities

Other issues arise regarding flooding and how water can be better managed in the future to improve collection and drainage. These measures may require higher levels of involvement across a community, and perhaps beyond municipal boundaries, requiring state-wide development. Breakwaters, sea walls, levees, spillways and other structures may be added to emergency pumping stations and micro-grid generator/storage facilities as examples of infrastructure improvements.

Bigger decisions may have to be made about the level of reconstruction of buildings in vulnerable areas. Observed sea warming has some scientists pondering whether there is a connection between global warming and increased storm volatility, as indicated by rising water temperatures and tidal records (2). If bigger and more frequent storms are to come, they must be considered in future building and infrastructure planning.

Regional Infrastructure and Resiliency

Exposed regions, along with larger regional concerns such as maintaining power and roadways and diverting and draining water, are major factors in the resiliency of a community. When a social network breaks down, much is lost, and the recovery of a region can be adversely affected by the loss of property and work.

Many lower-income residents will not have insurance and will lose everything. The sick and elderly are especially exposed, lacking the means to prepare for or escape an oncoming disaster, and many will likely perish unless they can reach aid or a shelter quickly.

Constructing better sea walls and storm surge barriers may be an effective means of diverting water when a hurricane strikes densely populated coastal areas. Although considered costly to construct, they are a fraction of the cost of the damage that can be caused by a high, forceful storm surge, which can obliterate large unprotected populated areas. The Netherlands and England have made major advancements in coastal preparedness for storms.

Storm Surge Barriers

Overall Effectiveness for Reducing Flood Damage

There are only a few storm surge barriers in the United States, although major systems installed abroad demonstrate their efficacy. The Eastern Scheldt barrier in the Netherlands (completed in 1986) and the Thames barrier in the United Kingdom (completed in 1982) have prevented major flooding. Lavery and Donovan (2005) note that the Thames barrier, part of a flood risk reduction system of barriers, floodgates, floodwalls, and embankments, has reliably protected the City of London from North Sea storm surge since its completion.

Four storm surge barriers were constructed by the USACE in New England in the 1960s (Fox Point, Stamford, New Bedford, and Pawcatuck) and a fifth in 1986 in New London, Connecticut. The barriers were designed after a series of severe hurricanes struck New England in 1938, 1944, and 1954 (see Appendix B), which highlighted the vulnerability of the area. The 1938 hurricane damaged or destroyed 200,000 buildings and caused 600 fatalities (Morang, 2007; Pielke et al., 2008).

The 2,880-ft (878-m) Fox Point Barrier (Figure 1-8) stretches across the Providence River, protecting downtown Providence, Rhode Island. The barrier successfully prevented a 2-ft (0.6-m) surge elevation (in excess of tide elevation) from Hurricane Gloria in 1985 and a 4-ft (1.2-m) surge from Hurricane Bob in 1991 (Morang, 2007) and was also used during Hurricane Sandy. The New Bedford, Massachusetts, Hurricane Barrier consists of a 4,500-ft-long (1372-m) earthen levee with a stone cap to an elevation of 20 ft (6 m), with a 150-ft-wide (46-m) gate for navigation. The barrier was reportedly effective during Hurricane Bob (1991), an unnamed coastal storm in 1997 (Morang, 2007), and Hurricane Sandy. During Hurricane Sandy, the peak total height of water (tide plus storm surge) was 6.8 feet (2.1 m), similar to the levels reached in 1991 and 1997. The Stamford, Connecticut, Hurricane Barrier has experienced six storms producing a surge of 9.0 ft (2.7 m) or higher between its completion (1969) and Hurricane Sandy. During Hurricane Sandy, the barrier experienced a storm surge of 11.1 ft (3.4 m), exceeding that of the 1938 hurricane (USACE, 2012). (3)

The biggest challenge is to build storm surge barriers large enough for future hurricanes. Given the magnitude of current and future storms, there is a real question whether these barriers may be breached. Engineers design structures to meet certain standards; for weather, these were the unlikely 1-in-100-year storm events. However, this standard is not good enough, as Hurricane Katrina in Louisiana exemplified: it was rated a 1-in-250-year storm event. With climate change, these events may become more frequent.
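A "1-in-T-year" event is a statement of annual probability, and the chance of seeing at least one such event over a longer horizon is easy to compute. A short sketch (the 30-year horizon and the return periods are illustrative choices, not design values from any code):

```python
# The chance of at least one T-year event in n years, assuming independent
# years each with annual exceedance probability 1/T.

def exceedance_probability(T, n):
    """P(at least one T-year event in n years) = 1 - (1 - 1/T)**n."""
    return 1.0 - (1.0 - 1.0 / T) ** n

# Over a 30-year horizon (roughly one mortgage), a "100-year" event
# is far from unlikely:
for T in (100, 250, 500):
    p = exceedance_probability(T, 30)
    print(f"1-in-{T}-year event over 30 years: {p:.1%}")
# 1-in-100-year event over 30 years: 26.0%
# 1-in-250-year event over 30 years: 11.3%
# 1-in-500-year event over 30 years: 5.8%
```

This is why a 100-year design standard is less a safety guarantee than an accepted level of risk, a point the 500-year language in Louisiana's master plan implicitly concedes.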

Much of the damage from Katrina came not from high winds or rain but from storm surge that caused breaches in levees and floodwalls, pouring water into 80 percent of New Orleans. To the south, Katrina flooded all of St. Bernard Parish and the east bank of Plaquemines Parish. Plaquemines Parish flooded again in 2012 with Hurricane Isaac.

Soon after Katrina, Congress directed the Corps of Engineers to build a system that could protect against a storm that has a 1 percent chance of happening each year, a “1-in-100-year” storm.

The standard is less a measure of safety and more a benchmark that allows the city to be covered by the National Flood Insurance Program. Louisiana’s master coastal plan calls for a much stronger 500-year system. The corps says Katrina was a 250-year storm for the New Orleans area.

Since 2005, the Army Corps has revamped the storm protection system’s 350 miles of levees and floodwalls, 73 pumping stations, three canal-closure structures, and four gated outlets. The corps built a much-heralded 26-foot-high, 1.8-mile surge barrier in Lake Borgne, about 12 miles east of the center of the city.

During Katrina, a 15- to 16-foot-high storm surge in Lake Borgne forced its way into the Intracoastal Waterway, putting pressure on the Industrial Canal levees that breached and caused catastrophic flooding in the city’s Lower 9th Ward.

“In New Orleans, we know that no matter how high we build this or how wide we build it, eventually there will be a storm that’s able to overtop it,” New Orleans District Army Corps spokesman Ricky Boyett says, admiring the immense surge barrier from a boat on Lake Borgne. “What we want is this to be a strong structure that will be able to withstand that with limited to no damage from the overtopping.” (4)

500 Year Floods

Hurricane Harvey brought an immense amount of extreme rain, a record 64″ in one storm across the Houston metropolitan region. This is a staggering amount of water, over 5 feet in height, and it could only overwhelm low-lying areas and depressions in the topography. Flash floods can happen during extreme storms when a drainage system is designed for a 1:100-year flood event, not a 1:500- or 1:1000-year event. Roadways can easily become rivers as drainage systems back up and surface water has no place to collect.


Figure 1. 500 year flood events in the USA since 2015 (5)

New codes in development may need to accommodate more stringent standards. Existing municipal drainage systems are not designed to handle extreme rain, and other means of drainage may have to be developed to divert water away from population centers. Communities will be built to new standards, where storm water management is given a higher priority to avert flooding.


Figure 2. Floodwaters from Tropical Storm Harvey (6)

Given the future uncertainty of our climate and weather, we cannot continue to ignore the devastating effects that disasters have on cities and regions. We must ask some difficult questions regarding the wisdom of continuing to build and live in increasingly high-risk regions.

On a personal level, every citizen must take some responsibility for their choice of where to live. Governments, for their part, need to decide how best to allocate limited resources in rebuilding and upgrading storm protection systems. It is anticipated that some areas will be abandoned as the risks become too high for effective protection from future storms.

The Oil and Gas Industry

There is an irony in the possibility that storm severity is linked to global warming, while development in vulnerable regions is often driven in part by the oil and gas industry. Hurricane Harvey is the most recent storm to affect fuel prices across the USA. Refinery capacity has shrunk due to plant shut-downs, and shortages in local fuel supplies are occurring as the remaining gasoline stations run dry.

Goldman Sachs estimates that the hurricane has taken 3 million barrels a day — or about 17% — of refining capacity offline, and that’s likely to increase the overall level of crude-oil inventories over the next couple of months. (7)

Oil and gas operations are particularly exposed to the weather, and it is in the industry's own best interest to provide local protection so that it can continue extracting the resource. However, ancillary industries such as refining may be better served by relocating away from danger areas. Supply lines also become choked by disaster, with potential consequences far beyond the region directly exposed.

The Electric Vehicle in the Smart City

Such events can only put upward pressure on the price of fuel, while providing further incentive to move away from the internal combustion engine as a means of motive power. Electric vehicles would recover much more quickly from storm events, as they are not restricted by access to fuel. Micro-grids in cities provide sectors of available power through which electric emergency response vehicles can move.

By moving reliance away from carbon-based fuels to renewable electric sources and energy storage, future city development may see the benefits inherent in the electric vehicle. Burning fuels creates heat, water and carbon dioxide in the combustion process, consumes our breathable oxygen and pollutes the atmosphere. Pipelines, tankers and rail cars can break and spill, causing pollution, and exploration damages the environment.

A city that is energy efficient and reliant on renewable sources of energy that interact benignly with the environment can approach self-sustainability and a high degree of resilience against disaster, especially when combined with designs built to much higher standards that account for the current volatility of our climate and apply the lessons learned elsewhere as best practices for the future.

 

References

  1. Hurricane Harvey Damages Could Cost up to $180 Billion
  2. Global warming is ‘causing more hurricanes’
  3. “3 Performance of Coastal Risk Reduction Strategies.” National Research Council. 2014. Reducing Coastal Risk on the East and Gulf Coasts. Washington, DC: The National Academies Press. doi: 10.17226/18811.
  4. Rising Sea Levels May Limit New Orleans Adaptation Efforts
  5. Houston is experiencing its third ‘500-year’ flood in 3 years. How is that possible?
  6. Hurricane Harvey Slams Texas With Devastating Force
  7. GOLDMAN: Harvey’s damage to America’s oil industry could last several months

Turning to Net Zero for Buildings – The HERS Index

Over the last few months my time has been occupied with travel and work. Relocation and working in construction have consumed much of it. In the process I have continued to learn, observing my working environment from the perspective of a mechanical engineer.

I have upgraded some of my technology, investing in a smart phone for its utility and ease of connection. However, this newer tech is still not ideal for longer-term research and curation efforts such as this blog. I am happy to report that I have landed a longer-term residence, which will provide the stability and access to resources I need while I set up a work space for more intensive endeavours.

Now relocated to Vancouver, I have a few projects in the works and am able to get back to focusing some of my time on my own research and development, which is one of the major purposes of my blogging. Next week, on September 25th, there is a luncheon course presentation I plan to attend regarding upcoming changes to the BC Building Code introducing the Energy Step Code. More on this topic after the seminar.

In California we already see movement toward the construction of net zero buildings, with compliance to the 2016 Building Energy Standard, which applies to "new construction of, and additions and alterations to, residential and nonresidential buildings." (1) These rules came into effect January 1st, 2017. I will be reviewing this publicly available document and will provide more insight and commentary at a later time.

One measure for rating homes for energy efficiency that I have often seen referenced, and which may be a useful tool for reporting and rating homes, is the HERS Index, shown in the graphic.

Image 1:  HERS Index scale of residential home energy consumption.

As the scale shows, scores are measured against a reference home, so calculations are needed to rate a house. Online calculators let a curious homeowner input a house's data, but qualified ratings must be performed by a certified HERS Rating technician, whose performance tests ensure that a house meets the standards in actual use and performs as claimed.

A comprehensive HERS home energy rating

The HERS Rater will do a comprehensive HERS home energy rating on your home to assess its energy performance. The energy rating will consist of a series of diagnostic tests using specialized equipment, such as a blower door test, duct leakage tester, combustion analyzer and infrared cameras. These tests will determine:

  • The amount and location of air leaks in the building envelope
  • The amount of leakage from HVAC distribution ducts
  • The effectiveness of insulation inside walls and ceilings
  • Any existing or potential combustion safety issues

Other variables that are taken into account include:

  • Floors over unconditioned spaces (like garages or cellars)
  • Attics, foundations and crawlspaces
  • Windows and doors, vents and ductwork
  • Water heating system and thermostats

Once the tests have been completed, a computerized simulation analysis utilizing RESNET Accredited Rating Software will be used to calculate a rating score on the HERS Index. (3)
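The index itself expresses the rated home's modeled energy use relative to the code-built reference home: the reference scores 100 and a net zero home scores 0, so each point below 100 is roughly a 1% improvement. A simplified sketch of that relationship follows; the real RESNET calculation is considerably more involved, and the function name and inputs here are illustrative only.

```python
# Simplified sketch of the HERS idea: the rated home's modeled annual
# energy use as a fraction of the reference home's (reference = 100,
# net zero = 0). The actual RESNET formula weighs end uses separately;
# this is illustrative only.

def hers_index(rated_energy, reference_energy):
    """Index = 100 * rated / reference (simplified)."""
    return 100.0 * rated_energy / reference_energy

print(hers_index(70.0, 100.0))  # 70.0 -> 30% more efficient than reference
print(hers_index(0.0, 100.0))   # 0.0  -> a net zero home
```

A home scoring 130 under this scheme uses 30% more energy than the reference, which is why older existing housing stock typically rates well above 100.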

As buildings become more expensive and are asked to provide ever more services, there will be a movement to make these buildings more efficient to operate and maintain. As we do more with less, there will be social impacts and repercussions. To some these changes may be disruptive, while enabling newer markets in energy efficiency, renewables, energy storage, micro-grids and net zero buildings, to name a few.

References:

  1. California Building Code Title 24 – 2016 Building Energy Efficiency Standards for Residential and Nonresidential Buildings.
  2. Understanding the HERS Index
  3. How to Get a HERS® Index Score

Energy Certificates and the Blockchain Protocol

In the world of energy production, renewable energy sources, micro-grids, large-scale users and other electric power schemes, a concentrated effort is being placed on utilizing the blockchain protocol. This is because the unique way in which a unit may be defined and tracked can likewise be applied to tracking quantities of value created and used in a complex trading scheme.

In a recent article (1) it was reported that Jesse Morris, principal for electricity and transportation practices at RMI and co-founder of the Energy Web Foundation (EWF), has received $2.5 million to develop the blockchain protocol for energy purposes.

“We have a strong hypothesis that blockchain will solve a lot of long-running problems in the energy sector,” said Morris. “Overcoming these challenges could make small, incremental changes to energy infrastructure and markets in the near term, while others would be more far-reaching and disruptive.”

Certificates (also known as guarantees) of origin would assure the user that a particular megawatt-hour of electricity was produced from renewables. According to Morris, the U.S. alone has 10 different tracking systems, Asia-Pacific has several more, and each European country has its own system of certification. Blockchain could be used to transparently guarantee the origin of the electrons.

Longer-term, and more radically, RMI sees the future of electricity networks being driven by the billions of energy storage and HVAC units, EVs, solar roof panels and other devices and appliances at the grid edge.

Blockchains can allow any of them to set their own level of participation on the grid, without the need for an intermediary. And crucially, they can be configured so that if a grid operator needs guaranteed capacity, the grid-edge unit can communicate back to the grid whether or not it’s up to the task.

This is an example of what Morris described as blockchain’s ability to “fuse the physical with the virtual” via machine-to-machine communication.  (1)
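One way a registry could guarantee that each certified megawatt-hour is claimed only once is sketched below. This is a hypothetical illustration, not EWF's or any tracking system's actual design; the class and method names are my own.

```python
import hashlib

# Hypothetical sketch of a certificate-of-origin registry: each MWh gets
# a unique ID derived from its generation record, and a certificate can
# be retired (claimed) only once, preventing double counting.

class CertificateRegistry:
    def __init__(self):
        self.retired = set()

    def certificate_id(self, generator, mwh_serial, timestamp):
        """Deterministic ID committing to the generation record."""
        record = f"{generator}|{mwh_serial}|{timestamp}"
        return hashlib.sha256(record.encode()).hexdigest()

    def retire(self, cert_id):
        """Claim a certificate; raise if it was already claimed."""
        if cert_id in self.retired:
            raise ValueError("certificate already claimed")
        self.retired.add(cert_id)

reg = CertificateRegistry()
cid = reg.certificate_id("wind_farm_7", 12345, "2017-09-08T12:00Z")
reg.retire(cid)  # first claim succeeds; a second attempt would raise
```

On a blockchain the retired-set would live in a shared ledger rather than one party's database, which is the transparency gain Morris describes.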

Another example of the emergence of the usefulness and interest in the Blockchain protocol is in crowdsourcing and distributed ledger applications.

Illustration by Dan Page (2)

At its heart, blockchain is a self-sustaining, peer-to-peer database technology for managing and recording transactions with no central bank or clearinghouse involvement. Because blockchain verification is handled through algorithms and consensus among multiple computers, the system is presumed immune to tampering, fraud, or political control. It is designed to protect against domination of the network by any single computer or group of computers. Participants are relatively anonymous, identified only by pseudonyms, and every transaction can be relied upon. Moreover, because every core transaction is processed just once, in one shared electronic ledger, blockchain reduces the redundancy and delays that exist in today’s banking system.

Companies expressing interest in blockchain include HP, Microsoft, IBM, and Intel. In the financial-services sector, some large firms are forging partnerships with technology-focused startups to explore possibilities. For example, R3, a financial technology firm, announced in October 2015 that 25 banks had joined its consortium, which is attempting to develop a common crypto-technology-based platform. Participants include such influential banks as Citi, Bank of America, HSBC, Deutsche Bank, Morgan Stanley, UniCredit, Société Générale, Mitsubishi UFG Financial Group, National Australia Bank, and the Royal Bank of Canada. Another early experimenter is Nasdaq, whose CEO, Robert Greifeld, introduced Nasdaq Linq, a blockchain-based digital ledger for transferring shares of privately held companies, also in October 2015. (2)
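The tamper resistance described in the passage above comes from chaining hashes: each block commits to its predecessor's hash, so altering any earlier transaction invalidates every later link. A minimal sketch of that property (not any production implementation):

```python
import hashlib

# Minimal hash-chain sketch: each block stores the hash of the previous
# block's hash plus its own data, so any edit to an earlier transaction
# breaks verification of everything after it.

GENESIS = "0" * 64

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], GENESIS
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append((tx, h))
        prev = h
    return chain

def verify(chain):
    prev = GENESIS
    for tx, h in chain:
        if block_hash(prev, tx) != h:
            return False
        prev = h
    return True

chain = build_chain(["A pays B 5", "B pays C 2"])
print(verify(chain))                      # True
chain[0] = ("A pays B 500", chain[0][1])  # tamper with an old record
print(verify(chain))                      # False: the chain no longer checks out
```

Real blockchains add consensus among many computers on top of this structure, which is what removes the need for a central clearinghouse.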

 

References:

  1. Energy Companies look to Blockchain
  2. A Strategist’s Guide to the Blockchain

A Modern Renaissance of Electrical Power: Microgrid Technology – Part 1


Figure 1:  The original Edison DC microgrid in New York City, which started operation on September 4, 1882 (1)

A.  Historical Development of Electric Power in the Metropolitan City

The development of electricity for commercial, municipal and industrial use proceeded at a frantic pace in the mid-to-late 1800s and early 1900s. The original distribution system consisted of copper wiring laid below the streets of New York's east side. The first power plants and distribution systems were small compared to today's interconnected grids, which span nations and continents. These small "islands" of electrical power were the original microgrids. In time they grew to become the massive infrastructure that delivers the electrical power our modern society has come to depend upon.

1) Let There Be Light! – Invention of the Light Bulb

When electricity first came on the scene in the 1800s it was a relatively unknown force. Distribution systems fed from a central plant were a new concept, originally intended to provide electric power for the newly invented incandescent light bulb. Thomas Edison developed his first DC electric grid to test and prove his ideas in New York at the Manhattan Pearl Street Station, which began operation in 1882. This first "microgrid" turned out to be a formidable undertaking.

[…] Edison’s great illumination took far longer to bring about than he expected, and the project was plagued with challenges. “It was massive, all of the problems he had to solve,” says writer Jill Jonnes, author of Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World, to PBS. For instance, Edison had to do the dirty work of actually convincing city officials to let him use the Lower East Side as a testing ground, which would require digging up long stretches of street to install 80,000 feet insulated copper wiring below the surface.

He also had to design all of the hardware that would go into his first power grid, including switchboards, lamps, and even the actual meters used to charge specific amounts to specific buildings. That included even the six massive steam-powered generators—each weighing 30 tons—which Edison had created to serve this unprecedented new grid, according to IEEE. As PBS explains, Edison was responsible for figuring out all sorts of operational details of the project—including a “bank of 1,000 lamps for testing the system:” (1)

Although Edison was the first to develop a small DC electrical distribution system in a city, there was competition between DC and AC power schemes in the early years of grid development. At the same time, there was a hodge-podge of other power sources and distribution methods in the early days of modern city development.

In the 1880s, electricity competed with steam, hydraulics, and especially coal gas. Coal gas was first produced on customer’s premises but later evolved into gasification plants that enjoyed economies of scale. In the industrialized world, cities had networks of piped gas, used for lighting. But gas lamps produced poor light, wasted heat, made rooms hot and smoky, and gave off hydrogen and carbon monoxide. In the 1880s electric lighting soon became advantageous compared to gas lighting. (2)

2) Upward Growth – Elevators and Tall Buildings

Another innovation developing at the same time as electrical production and distribution was the elevator, a necessity for the development of tall buildings and, eventually, towers and skyscrapers. While there are ancient references to elevating devices and lifts, the first electric elevator was introduced in Germany in 1880 by Werner von Siemens (3). A safe and efficient means of moving people and goods was vital for upward growth in urban centers and the development of tall buildings.

Later in the 1800s, with the advent of electricity, the electric motor was integrated into elevator technology by German inventor Werner von Siemens. With the motor mounted at the bottom of the cab, this design employed a gearing scheme to climb shaft walls fitted with racks. In 1887, an electric elevator was developed in Baltimore, using a revolving drum to wind the hoisting rope, but these drums could not practically be made large enough to store the long hoisting ropes that would be required by skyscrapers.

Motor technology and control methods evolved rapidly. In 1889 came the direct-connected geared electric elevator, allowing for the building of significantly taller structures. By 1903, this design had evolved into the gearless traction electric elevator, allowing hundred-plus story buildings to become possible and forever changing the urban landscape. Multi-speed motors replaced the original single-speed models to help with landing-leveling and smoother overall operation.

Electromagnet technology replaced manual rope-driven switching and braking. Push-button controls and various complex signal systems modernized the elevator even further. Safety improvements have been continual, including a notable development by Charles Otis, son of original “safety” inventor Elisha, that engaged the “safety” at any excessive speed, even if the hoisting rope remained intact. (4)


Figure 2:  The Woolworth Building at 233 Broadway, Manhattan, New York City – The World’s Tallest Building, 1926 (5)

3) Hydroelectric A/C Power – Tesla, Westinghouse and Niagara Falls

Although Niagara Falls was not the first hydroelectric project, it was by far the largest, and its massive power production capacity spawned a second Industrial Revolution.

“On September 30, 1882, the world’s first hydroelectric power plant began operation on the Fox River in Appleton, Wisconsin. […] Unlike Edison’s New York plant which used steam power to drive its generators, the Appleton plant used the natural energy of the Fox River. When the plant opened, it produced enough electricity to light Rogers’s home, the plant itself, and a nearby building. Hydroelectric power plants of today generate a lot more electricity. By the early 20th century, these plants produced a significant portion of the country’s electric energy. The cheap electricity provided by the plants spurred industrial growth in many regions of the country. To get even more power out of the flowing water, the government started building dams.” (6)


Figure 3:  The interior of Power House No. 1 of the Niagara Falls Power Company (1895-1899) (7)


Figure 4:  Adam’s power station with three Tesla AC generators at Niagara Falls, November 16, 1896. (7)

Electrical Transmission, Tesla and the Polyphase Motor

The problem of the best means of transmission, though, would be worked out not by the commission but in the natural course of things, which included great strides in the development of AC. In addition, the natural course of things included some special intervention from on high (that is, from Edison himself).

But above all, it involved Tesla, probably the only inventor ever who could be put in a class with Edison in terms of the number and significance of his innovations. The Croatian-born scientific mystic–he spoke of his insight into the mechanical principles of the motor as a kind of religious vision–had once worked for Edison. He had started out with the Edison Company in Paris, where his remarkable abilities were noticed by Edison’s business cohort and close friend Charles Batchelor, who encouraged Tesla to transfer to the Edison office in New York City, which he did in 1884. There Edison, too, became impressed with him after he successfully performed a number of challenging assignments. But when Tesla asked Edison to let him undertake research on AC–in particular on his concept for an AC motor–Edison rejected the idea. Not only wasn’t Edison interested in motors, he refused to have anything to do with the rival current.

So for the time being Tesla threw himself into work on DC. He told Edison he thought he could substantially improve the DC dynamo. Edison told him if he could, it would earn him a $50,000 bonus. This would have enabled Tesla to set up a laboratory of his own where he could have pursued his AC interests. By dint of extremely long hours and diligent effort, he came up with a set of some 24 designs for new equipment, which would eventually be used to replace Edison’s present equipment.

But he never found the promised $50,000 in his pay envelope. When he asked Edison about this matter, Edison told him he had been joking. “You don’t understand American humor,” he said. Deeply disappointed, Tesla quit his position with the Edison company, and with financial backers, started his own company, which enabled him to work on his AC ideas, among other obligations.

The motor Tesla patented in 1888 is known as the induction motor. It not only provided a serviceable motor for AC; it also had a distinct advantage over the DC motor. (About two-thirds of the motors in use today are induction motors.)

The idea of the induction motor is simplicity itself, based on the Faraday principle. And its simplicity is its advantage over the DC motor.

An electrical motor–whether DC or AC–is a generator in reverse. The generator operates by causing a conductor (armature) to move (rotate) in a magnetic field, producing a current in the armature. The motor operates by causing a current to flow in an armature in a magnetic field, producing rotation of the armature. A generator uses motion to produce electricity. A motor uses electricity to produce motion.

The DC motor uses commutators and brushes (a contact switching mechanism that opens and closes circuits) to change the direction of the current in the rotating armature, and thus sustain the direction of rotation and direction of current.

In the AC induction motor, the current supply to the armature is by induction from the magnetic field produced by the field current.  The induction motor thus does away with the troublesome commutators and brushes (or any other contact switching mechanism). However, in the induction motor the armature wouldn’t turn except as a result of rotation of the magnetic field, which is achieved through the use of polyphase current. The different current phases function in tandem (analogous to pedals on a bicycle) to create differently oriented magnetic fields to propel the armature.  
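The rotating field described above can be sketched numerically. The snippet below is an illustrative sketch (not from any source cited here): it sums the fields of three coils spaced 120° apart, each carrying one phase of a balanced three-phase current, and shows that the resultant field keeps a constant magnitude while its direction rotates at the supply frequency.

```python
import math

def stator_field(t, omega=2 * math.pi * 60):
    """Net magnetic field vector from three coils spaced 120 degrees
    apart, each fed one phase of a balanced three-phase current."""
    bx = by = 0.0
    for k in range(3):
        theta = 2 * math.pi * k / 3            # coil's spatial angle
        current = math.cos(omega * t - theta)  # phase k lags by 120 degrees
        bx += current * math.cos(theta)
        by += current * math.sin(theta)
    return bx, by

# The resultant field has constant magnitude (1.5 x one coil's peak)
# and rotates at the supply frequency -- no commutator or brushes needed.
for t in (0.0, 1 / 240, 1 / 120):              # quarter-cycle steps at 60 Hz
    bx, by = stator_field(t)
    mag = math.hypot(bx, by)
    ang = math.degrees(math.atan2(by, bx))
    print(f"t = {t:.6f} s   |B| = {mag:.3f}   angle = {ang:6.1f} deg")
```

This is the “pedals on a bicycle” analogy in miniature: no single phase produces rotation, but the three together produce a field vector of constant strength that sweeps around the stator and drags the armature with it.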

Westinghouse bought up the patents on the Tesla motors almost immediately and set to work trying to adapt them to the single-phase system then in use. This didn’t work. So he started developing a two-phase system. But in December 1890, because of the company’s financial straits–the company had incurred large liabilities through the purchase of a number of smaller companies, and had to temporarily cut back on research and development projects–Westinghouse stopped the work on polyphase. (8)

4) The Modern Centralized Electric Power System

After the innovative technologies that allowed expansion and growth within metropolitan centers were developed, there was a race to establish large power plants and distribution systems connecting power sources to users. Alternating current (AC) was found to be the preferred method of power transmission over copper wires from distant sources. Direct current (DC) transmission proved problematic over long distances, generating resistive heat that resulted in line power losses. (9)


Figure 5:  New York City streets in 1890. Besides telegraph lines, multiple electric lines were required for each class of device requiring different voltages (11)

AC has a major advantage in that it can be transmitted at high voltage and converted to low voltage to serve individual users. Delivering the same power at a higher voltage requires proportionally less current, and resistive losses in the wires fall with the square of the current.
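A small worked example makes the advantage concrete. The numbers below are illustrative, not historical data: delivering the same power at ten times the voltage draws one-tenth the current, and since resistive loss scales with the square of the current, the loss falls a hundredfold.

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """Resistive (I^2 * R) loss for delivering a given power at a given
    transmission voltage. Higher voltage means lower current, and the
    loss falls with the square of the current."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

P = 10e6   # deliver 10 MW (illustrative figure)
R = 5.0    # line resistance in ohms (illustrative figure)

for kv in (10, 100):
    loss = line_loss(P, kv * 1e3, R)
    print(f"{kv:3d} kV: loss = {loss / 1e3:8.1f} kW ({100 * loss / P:.2f}% of load)")
```

At 10 kV the hypothetical line wastes half the delivered power as heat; at 100 kV the same line wastes half a percent. This is why transformers, which only work with AC, decided the “war of the currents” for long-distance transmission.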

From the late 1800s onward, a patchwork of AC and DC grids cropped up across the country, in direct competition with one another. Small systems were consolidated throughout the early 1900s, and local and state governments began cobbling together regulations and regulatory groups. However, even with regulations, some businessmen found ways to create elaborate and powerful monopolies. Public outrage at the subsequent costs came to a head during the Great Depression and sparked Federal regulations, as well as projects to provide electricity to rural areas, through the Tennessee Valley Authority and others.

By the 1930s regulated electric utilities became well-established, providing all three major aspects of electricity, the power plants, transmission lines, and distribution. This type of electricity system, a regulated monopoly, is called a vertically-integrated utility. Bigger transmission lines and more remote power plants were built, and transmission systems became significantly larger, crossing many miles of land and even state lines.

As electricity became more widespread, larger plants were constructed to provide more electricity, and bigger transmission lines were used to transmit electricity from farther away. In 1978 the Public Utilities Regulatory Policies Act was passed, making it possible for power plants owned by non-utilities to sell electricity too, opening the door to privatization.

By the 1990s, the Federal government was completely in support of opening access to the electricity grid to everyone, not only the vertically-integrated utilities. The vertically-integrated utilities didn’t want competition and found ways to prevent outsiders from using their transmission lines, so the government stepped in and created rules to force open access to the lines, and set the stage for Independent System Operators, not-for-profit entities that managed the transmission of electricity in different regions.

Today’s electricity grid – actually three separate grids – is extraordinarily complex as a result. From the very beginning of electricity in America, systems were varied and regionally-adapted, and it is no different today. Some states have their own independent electricity grid operators, like California and Texas. Other states are part of regional operators, like the Midwest Independent System Operator or the New England Independent System Operator. Not all regions use a system operator, and there are still municipalities that provide all aspects of electricity. (10)

 


Figure 6:  Diagram of a modern electric power system (11)

A Brief History of Electrical Transmission Development

The first transmission of three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15,000 V transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt.

Voltages used for electric power transmission increased throughout the 20th century. By 1914, fifty-five transmission systems each operating at more than 70,000 V were in service. The highest voltage then used was 150,000 V. By allowing multiple generating plants to be interconnected over a wide area, electricity production cost was reduced. The most efficient available plants could be used to supply the varying loads during the day. Reliability was improved and capital investment cost was reduced, since stand-by generating capacity could be shared over many more customers and a wider geographic area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to lower energy production cost.

The rapid industrialization in the 20th century made electrical transmission lines and grids a critical infrastructure item in most industrialized nations. The interconnection of local generation plants and small distribution networks was greatly spurred by the requirements of World War I, with large electrical generating plants built by governments to provide power to munitions factories. Later these generating plants were connected to supply civil loads through long-distance transmission. (11)

 

To be continued in Part 2:  Distributed Generation and The Microgrid Revolution

 

References:

  1. The Forgotten Story Of NYC’s First Power Grid  by Kelsey Campbell-Dollaghan
  2. The Electrical Grid – Wikipedia
  3. The History of the Elevator – Wikipedia
  4. Elevator History – Columbia Elevator
  5. The History of Elevators and Escalators – The Wonder Book Of Knowledge | by Henry Chase (1921)
  6. The World’s First Hydroelectric Power Station
  7. Tesla Memorial Society of New York Website 
  8. The Day They Turned The Falls On: The Invention Of The Universal Electrical Power System by Jack Foran
  9. How electricity grew up? A brief history of the electrical grid
  10. The electricity grid: A history
  11. Electric power transmission

Hybrid Electric Buildings: A New Frontier for Energy and Grids

Hybrid Electric Buildings are the latest development in packaged energy storage for buildings, offering several advantages including long-term operational cost savings. These buildings have the flexibility to combine several technologies and energy sources with a large-scale integrated electric battery system to operate in a cost-effective manner.

San Francisco’s landmark skyscraper, One Maritime Plaza, will become the city’s first Hybrid Electric Building using Tesla Powerpack batteries. The groundbreaking technology upgrade by Advanced Microgrid Solutions (AMS) will lower costs, increase grid and building resiliency, and reduce the building’s demand for electricity from the sources that most negatively impact the environment.

Building owner Morgan Stanley Real Estate Investing hired San Francisco-based AMS to design, build, and operate the project. The 500 kilowatt/1,000 kilowatt-hour indoor battery system will provide One Maritime Plaza with the ability to store clean energy and control demand from the electric grid. The technology enables the building to shift from grid to battery power to conserve electricity in the same way a hybrid-electric car conserves gasoline. (1)
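The arithmetic behind this “hybrid” behavior is straightforward. The sketch below uses the system ratings quoted above (500 kW / 1,000 kWh, i.e. two hours at full output); the hourly demand profile and the peak target are hypothetical, chosen only to show how a battery shaves peak demand without changing total consumption.

```python
# Hypothetical hourly building demand in kW (illustrative, not metered data)
demand = [300, 320, 400, 650, 700, 680, 500, 350]

POWER_KW = 500      # battery power rating cited in the article
ENERGY_KWH = 1000   # battery energy rating: 2 hours at full 500 kW output

cap_kw = 450        # assumed target peak after shaving
energy_left = ENERGY_KWH
shaved = []
for load in demand:
    # Discharge only the excess above the target, limited by both the
    # power rating and the energy remaining (1-hour time steps).
    discharge = min(max(load - cap_kw, 0), POWER_KW, energy_left)
    energy_left -= discharge
    shaved.append(load - discharge)

print("peak before:", max(demand), "kW   after:", max(shaved), "kW")
print("battery energy used:", ENERGY_KWH - energy_left, "kWh")
```

Since commercial tariffs typically include a demand charge billed on the single highest peak, clipping a 700 kW peak down to 450 kW can cut the bill even though the building consumes the same total energy, much as a hybrid car burns the same trip’s distance on less fuel by smoothing its power draw.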

In addition to storage solutions, these buildings can offer significant roof area for solar panel modules and arrays to generate power during the day. In areas where sunshine is plentiful and electricity rates are high, solar PV and storage combinations for commercial installations are economically attractive.

For utility management, these systems are ideal for expansion of the overall grid: as more microgrids attach to the utility infrastructure, overall supply and resiliency improve.

In recent developments, AMS has partnered with retailer Wal-Mart to provide on-site, “behind the meter” energy storage solutions at no upfront cost.


Figure 2.  Solar Panels on Roof of Wal-Mart, Corporate Headquarters, Puerto Rico (3)

On Tuesday, the San Francisco-based startup announced it is working with the retail giant to install behind-the-meter batteries at stores to balance on-site energy and provide megawatts of flexibility to utilities, starting with 40 megawatt-hours of projects at 27 Southern California locations.

Under the terms of the deal, “AMS will design, install and operate advanced energy storage systems” at the stores for no upfront cost, while providing grid services and on-site energy savings. The financing was made possible by partners such as Macquarie Capital, which pledged $200 million to the startup’s pipeline last year.

For Wal-Mart, the systems bring the ability to shave expensive peaks, smooth out imbalances in on-site generation and consumption, and help it meet a goal of powering half of its operations with renewable energy by 2025. Advanced Microgrid Solutions will manage its batteries in conjunction with building load — as well as on-site solar or other generation — to create what it calls a “hybrid electric building” able to keep its own energy costs to a minimum, while retaining flexibility for utility needs.

The utility in this case is Southern California Edison, a long-time AMS partner, which “will be able to tap into these advanced energy storage systems to reduce demand on the grid as part of SCE’s groundbreaking grid modernization project,” according to Tuesday’s statement. This references the utility’s multibillion-dollar grid modernization plan, which is now before state regulators.  (2)

References:

  1. San Francisco’s First Hybrid Electric Building – Facility Executive, June 28, 2016
    https://facilityexecutive.com/2016/06/skyscraper-will-be-san-franciscos-first-hybrid-electric-building/

  2. Wal-Mart, Advanced Microgrid Solutions to Turn Big-Box Stores Into Hybrid Electric Buildings, GreenTech Media, April 11, 2017  https://www.greentechmedia.com/articles/read/wal-mart-to-turn-big-box-stores-into-hybrid-electric-buildings?utm_source=Daily&utm_medium=Newsletter&utm_campaign=GTMDaily

  3. Solar Panels on Wal-Mart Roof  http://corporate.walmart.com/_news_/photos/solar-panels-roof-puerto-rico

What Does Moist Enthalpy Tell Us?

“In terms of assessing trends in globally-averaged surface air temperature as a metric to diagnose the radiative equilibrium of the Earth, the neglect of using moist enthalpy, therefore, necessarily produces an inaccurate metric, since the water vapor content of the surface air will generally have different temporal variability and trends than the air temperature.”

Climate Science: Roger Pielke Sr.

In our blog of July 11, we introduced the concept of moist enthalpy (see also Pielke, R.A. Sr., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211. ). This is an important climate change metric, since it illustrates why surface air temperature alone is inadequate to monitor trends of surface heating and cooling. Heat is measured in units of Joules. Degrees Celsius is an incomplete metric of heat.

Surface air moist enthalpy does capture the proper measure of heat. It is defined as CpT + Lq where Cp is the heat capacity of air at constant pressure, T is air temperature, L is the latent heat of phase change of water vapor, and q is the specific humidity of air. T is what we measure with a thermometer, while q is derived by measuring the wet bulb temperature (or, alternatively, dewpoint…
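The definition above is easy to evaluate directly. The snippet below uses approximate textbook values for Cp and L; the temperatures and humidity values are hypothetical, chosen only to show that two air masses at the same temperature can carry very different amounts of heat.

```python
# Moist enthalpy h = Cp*T + L*q, as defined in the post.
CP = 1004.0    # J/(kg K), heat capacity of dry air at constant pressure (approx.)
L = 2.5e6      # J/kg, latent heat of vaporization of water (approx.)

def moist_enthalpy(temp_k, q):
    """Heat content of surface air per kg: sensible (Cp*T) plus latent (L*q).
    temp_k is air temperature in kelvin; q is specific humidity in kg/kg."""
    return CP * temp_k + L * q

# Same thermometer reading (25 C), different humidity:
dry = moist_enthalpy(298.15, 0.005)     # q = 5 g/kg
humid = moist_enthalpy(298.15, 0.015)   # q = 15 g/kg
print(f"dry air:   {dry / 1000:.1f} kJ/kg")
print(f"humid air: {humid / 1000:.1f} kJ/kg")
```

Here the 10 g/kg difference in specific humidity adds roughly 25 kJ/kg of latent heat, equivalent to about 25 °C of sensible warming, which is exactly why temperature alone is an incomplete metric of surface heat content.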


Site C Dam Construction in BC – A Political Water Grab?

Mega projects grab headlines and provide many photo opportunities for politicians. Since the construction of the Depression-era Hoover Dam, these massive construction projects have historically provided jobs and opportunity when the economy is slow. However, some questions remain: Are these projects in everyone’s best interests? What are we losing? And is there a better way to accomplish our goals?

“‘Water grabbing’ refers to a situation in which public or private entities are able to take control of, or reallocate, precious water resources for profit or for power — and at the expense of local communities and the ecosystems on which their livelihoods are based.

The effects have been well-documented: examples include families driven away from their villages to make room for mega dams, privatization of water sources that fails to improve access for the public, and industrial activity that damages water quality.”

[…]

“…hydropower comprises about 70 per cent of the world’s renewable energy mix, and guarantees a lower amount of total emissions than fossil fuel plants, its overall impacts are not always positive. This is especially the case when dams are not planned with an emphasis on the impacts on people and the environment.

In North America, many dams built in the 1980s are now being demolished because of their impacts on fish species such as salmon. In some cases they are replaced with more modern dams that do not require building large-scale reservoirs.” (1)

A Short Political History of the Site C Dam


Figure 1.  Construction on the Site C dam on the Peace River in the fall of 2016. Photo: Garth Lenz. (2)

“On May 10, 1990, the Vancouver Sun reported remarks made by then Energy Minister Jack Davis at an Electric Energy Forum: “Power projects initiated by B.C. Hydro will be increasingly guided by environmental concerns because of mounting public pressure.” Noting the province’s abundance of power sources, he said: “We have the scope to be different.”

However, during a 1991 Social Credit party leadership campaign, the winner, Rita Johnston, declared in her policy statement that she wanted to accelerate construction of the “$3 billion” dam. Johnston’s leadership was brief because the Socreds were defeated in October 1991.

In 1993, the dam was declared dead by then BC Hydro CEO Marc Eliesen. “Site C is dead for two reasons,” Eliesen said. “The fiscal exposure is too great … the dam is too costly. Also it is environmentally unacceptable.”

Despite these twists and turns, B.C. Hydro’s staff worked diligently to keep the dam alive.

Fast forward to April 19, 2010, when then B.C. Liberal Premier Gordon Campbell made his announcement that Site C was on again, now branded as a “clean energy project” and an important part of “B.C.’s economic and ecological future.”

Campbell claimed the dam would power 460,000 new homes and repeated the mantra of an increasing power demand of 20 to 40 per cent in the following 20 years.

In the ensuing seven years since the 2010 announcement, power demand has stayed virtually the same, despite BC Hydro’s forecast for it to climb nearly 20 per cent during that time. The reality is B.C.’s electricity demand has been essentially flat since 2005, despite ongoing population growth.

Campbell resigned in 2011 amidst uproar over the Harmonized Sales Tax (HST), opening the field for a leadership race, which Christy Clark won. That brings us to the May 2013 election, during which Clark pushed liquefied natural gas (LNG) exports as the solution to B.C.’s economic woes. With the LNG dream came a potential new demand for grid electricity, making Site C even more of a hot topic.

Four years on from Clark’s pronouncement there are no LNG plants up and running, despite her promise of thousands of jobs. Without a market for Site C’s power, Clark has started ruminating about sending it to Alberta, despite a lack of transmission or a clear market.

Oxford University Professor Bent Flyvbjerg has studied politicians’ fascination with mega projects, describing the rapture they feel building monuments to themselves: “Mega projects garner attention, which adds to the visibility they gain from them.”

This goes some way to explaining the four-decade obsession with building the Site C dam, despite the lack of clear demand for the electricity. (2)

 

References:

  1.  Water and power: Mega-dams, mega-damage?
    http://www.scidev.net/global/water/data-visualisation/water-power-mega-dams-mega-damage.html
  2. Four Decades and Counting: A Brief History of the Site C Dam https://www.desmog.ca/2017/03/23/four-decades-and-counting-brief-history-site-c-dam

Twelve Reasons Why Globalization is a Huge Problem

Globalization seems to be looked on as an unmitigated “good” by economists. Unfortunately, economists seem to be guided by their badly flawed models; they miss real-world problems. In …

Source: Twelve Reasons Why Globalization is a Huge Problem