Ready, Fire, Aim! Upgrading America's Building Stock for Energy Efficiency

Posted Aug 06 2013 3:12am


Where we are - Over decades, a widely accepted mythology has blossomed in the large-building energy efficiency community. This mythology says that if we just build new buildings to be efficient, and replace or retrofit existing buildings, we can drastically reduce building energy use. It's comforting to energy policy-makers, and commercially advantageous for consultants, contractors, energy suppliers, and purveyors of energy efficiency equipment and services. Everybody knows what to do, and especially how to make money from goods and services. For building owners and operators, energy management is made to look simple: just spend money - if you have it.

 

The myth is all the more seductive because it embodies elements of truth. But the myth overlooks unsavory truths, and seriously impairs and slows our efforts toward efficiency. The myth has flourished partly from a lack of practical metrics and diagnostics that quickly show near-real-time details of how energy is used, and mis-used, in buildings. Without such tools, truly effective commissioning, and the capture of low-cost operating and maintenance ("O&M") savings, have remained frustratingly elusive to most consultants, policymakers, and especially to building owners and operators. Partly from bafflement and frustration, many are overlooking the bonanza of quick savings available from solving these problems.

 

Trouble Appears - Recent "universal benchmarking" of CO2 footprints of buildings in some cities has given us a broad sample of energy use data for large buildings. Benchmarks of this type (see http://www.energystar.gov/index.cfm?c=evaluate_performance.bus_portfoliomanager# ) don't show how energy is used, but they do show annual usage. The samples are broad enough that we can finally see there are large unexplained variations between buildings of similar vintage and occupancy, and that newer buildings tend to use more energy than older ones (see http://www.nyc.gov/html/gbee/downloads/pdf/nyc_ll84_benchmarking_report_2012.pdf ).

 

Analysis of this early data attributes the variations at least partly to different operating and maintenance practices between buildings. We should have changed course long ago, but as the fog begins to clear, we see the tip of a large iceberg, and we're dead in the water, listing to starboard.

 

Buildings use 40% of the energy consumed in the US, nearly $500 billion/yr. The need for enhanced economic competitiveness, improved balance of payments, and reduced global warming all imply the need for rapid, large reductions in energy use. Policy-makers recognize this imperative. But they have an incomplete grasp of the situation, as shown by their focus: almost solely on efficient designs for new construction, deep retrofits of existing buildings, and a preoccupation with encouraging and somehow securing funding for these expensive projects. Realities not adequately addressed by policy-makers include the need to pursue inherently verifiable commissioning of new buildings, the lack of effective planning and management before and during retrofits, and especially the lack of on-going management throughout building life cycles. Addressing these problems to ensure initial and continuing efficiency requires objective, low-cost, energy-data-driven metrics and diagnostics applicable throughout the building's life.

 

New and retrofitted buildings are being brought on-line with the naïve expectation that they will automatically operate as intended, and continue to function efficiently. Without effective diagnostics beforehand, "deep retrofits", advocated by some as the only viable long-term solution for inefficient existing buildings, will sometimes address non-existent problems and miss real ones. Deep retrofits will also be plagued by substantial under-performance unless they are properly commissioned and managed with on-going diagnostics.

 

The data from "universal benchmarking" are especially troubling. Existing buildings include many that were never properly commissioned, but should be. Commissioned or not, existing buildings need on-going data-driven tracking and management to ensure that, once fixed, they continue to operate efficiently, despite inevitable (and otherwise undetected) equipment and human failures.

 

We must improve our understanding of the scope of these problems, and our planning and efforts must address them with appropriate tools. Until we adopt better metrics and diagnostics for building characterization and on-going life-cycle management of buildings, we'll continue to charge down the wrong path, using an over-simplified problem definition that leads to flawed decisions.

 

Unrecognized Problems Lead to Flawed Decisions

 

This problem is exemplified by a plenary speaker at the ACEEE 2012 Monterey conference on energy efficiency in buildings. Addressing rational allocation of limited resources to reduce California's building energy use by 80% by 2050, the speaker discussed existing buildings using a two-valued choice - extensive retrofits, or demolish-and-replace. Quick, low-cost savings from O&M improvements were not seriously considered, yet the speaker expressed concern about constraints on the pace of progress, posed by huge capital requirements. Unfortunately, the speaker was voicing a widely accepted and commonly used decision matrix.

 

This decision process rests on implicit, incorrect assumptions, and has a missing link. First, it assumes retrofits are the cure-all for performance shortfalls in existing buildings. Second, it assumes retrofits will recognize and address problems in existing buildings, without mis-diagnoses. Third, it assumes that for all buildings built (or rebuilt) with efficient design and efficient equipment, lasting efficiency is automatically assured. Fourth, it lacks a bridge strategy to save energy quickly while waiting for the benefits of retrofits and new construction. Fifth, it neglects the reduced capital needs that result from improved O&M. Without realizing O&M savings first, retrofit scopes and associated capital needs will be over-estimated, requiring years more work and additional financing.

 

For decades, programs subsidizing efficiency improvements have given lip service to improving O&M, advocating that O&M improvements be addressed before making retrofits. But contractors historically achieve little improvement in O&M efficiency; they are often chosen as low bidders (with no incentive to pursue O&M improvements) or for their retrofit expertise. Often contractors decide to replace oversized equipment with like-sized equipment. Some contractors even argue that savings measurement and verification ("M&V") diverts capital from desperately needed improvements. For lack of data-driven diagnostics and building performance metrics, we have been making comforting - but wrong - assumptions about retrofits always being the solution. Some retrofits end up addressing overlooked O&M problems with inappropriate capital expenditures. Contractors like the assumption that retrofits are a panacea, because it increases their sales. Building and utility program managers, under pressure to do something, like the assumption because they can point out how much money they spent, and exhibit it as an accomplishment.

 

These wide-spread tendencies have spawned a new industry of "evaluation", sometimes used to independently verify that program expenditures provide real benefits. While providing reassurance that good things are happening, evaluators also often discover short-falls from the hoped-for results.

 

Building Realities

 

Buildings can suffer serious efficiency shortfalls due to poorly managed O&M. This bonanza of potential savings hasn't been generally recognized. But recent "big data" from first-generation benchmarking efforts (cf. the NYC 2012 report) indicate there are large variations between buildings with similar vintage and occupancy, built to the same codes. Bringing all buildings up to the level of their peers (the 75th percentile of performance) would save about 31% of purchased energy.

 

Buildings also under-perform from poor or no commissioning. The author recently served as a Resource Efficiency Manager in a 5.3 million SF complex. Many recently completed buildings there offered large improvement potential, despite being well insulated, with modern lighting, mechanical, and building automation systems. These buildings had the "right stuff" but didn't work as intended, mostly because nobody measured their performance. Systematic problems with how buildings were commissioned resulted in many "retro-commissioning opportunities" after new buildings completed their one-year warranty period. Personnel responsible for follow-on O&M in these buildings had no staffing, no budget, no tools, and no training for retro-commissioning. Many new buildings (typically complex and prone to O&M errors), turned over to their care after warranty, were big headaches.

 

Triage is Needed

 

We need a better alternative to the two-valued "retrofit or replace" decision model. Emergency medicine offers a better decision-making model - triage. When faced with a crisis that overwhelms scarce resources, doctors use a three-valued logic to allocate effort: decide which patients will survive without immediate treatment, which will probably die with or without immediate treatment, and which will be saved by immediate treatment. In building terms, these categories would be:

 

- Buildings performing well enough that limited resources offer a better payback if employed elsewhere

- Buildings that should be replaced, and

- Buildings that, based on measured performance, would benefit from better O&M and perhaps retrofits.

 

This type of decision matrix should be used for existing buildings, with one key improvement. Buildings needing "better O&M or retrofits" should first fix O&M problems, then be re-assessed, before final retrofit choices are made. The missing link is that such triage requires diagnostics fast and accurate enough to first categorize individual buildings, and then track their progress during either O&M improvements or retrofits.
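
For illustration only, here is a minimal sketch of how that three-way triage might be encoded. The thresholds, the fields, and the use of an ESPM score as a stand-in for measured performance are assumptions, not a standard or the author's method.

```python
# A minimal, illustrative triage sketch (thresholds and fields are assumptions).
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    espm_score: int           # 1-100 ESPM rating, if available (assumed input)
    structurally_viable: bool # judgment from a separate condition assessment

def triage(b: Building) -> str:
    """Assign one of the three triage categories described above."""
    if b.espm_score >= 75:
        return "Category 1: performing well enough; deploy limited resources elsewhere"
    if not b.structurally_viable:
        return "Category 2: candidate for replacement"
    return "Category 3: fix O&M first, re-assess, then decide on retrofits"

print(triage(Building("Example Tower", espm_score=42, structurally_viable=True)))
```

In practice, as argued below, the third category should be driven by measured performance data rather than a single annual score.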

 

Available Diagnostics

 

Current Diagnostics - Few diagnostics (low-cost or otherwise) are suitable for triage of buildings. Astoundingly, there is no equivalent to automotive "miles per gallon" in general use for buildings! EPA's Energy Star Portfolio Manager ("ESPM") benchmarking tool comes closest to "mpg" for buildings - but it measures tons of CO2 instead of energy required. It takes too long (imagine being able to measure mpg of your car only once a year, instead of just before and just after a tune-up!). Required inputs include factors that are not intrinsic to the building as operated. ESPM provides a single rating number that responds to many factors, some of which are either extrinsic or judgmental, such as what is a representative peer group for the building, the CO2 intensity of purchased electricity, or in what climate zone it operates. Nonetheless, it's the only tool in general use that comes close.

 

Energy audits are sometimes recommended, but they are costly, partly because they don't rely strictly on measurements. Audit quality is highly dependent on auditor skill, judgment, and budget, leading to less-than-pristine credibility. Among other problems, audits do not yield a numerical "figure of merit" for peer comparisons between buildings, or for before-and-after comparisons. Hence, they are neither unbiased nor useful as diagnostic "yardsticks".

 

Audits work best for follow-up, once diagnostics and benchmarks disclose specific energy use patterns of interest. At that point, audits can become a focused investigation, instead of a "cookie-cutter" exercise.

 

"Energy Star" buildings (achieving annual ESPM performance ratings in the top 25 percentile of buildings in established categories) would be candidates for the first triage category. These buildings probably have room for improvement, but they aren't the first priority to treat, with limited resources. However, single number ESPM ratings give no clue as to patterns of their energy use.

 

ESPM offers no help in categorizing buildings belonging in the second and third triage groups, other than a lower ESPM rating. Commercially convenient assumptions are that low-scoring buildings either need an audit or extensive retrofits, or should be replaced. But lower ESPM ratings don't indicate which buildings are clunkers obviously needing replacement, and which just need a tune-up or some upgrades. Studies show that some very low-scoring buildings are of recent vintage, with lots of insulation and modern equipment, and presumably meet recent codes. These shouldn't be candidates for retrofits or demolition. Correctly parsing buildings into triage categories 2 or 3 requires better metrics and diagnostics, beyond or in addition to ESPM.

 

ESPM was devised to serve EPA's mission by capturing the annual "carbon footprint" of buildings - which it does reasonably well. ESPM also compares building performance, based on monthly energy bills and semi-local weather, to that of peers in similar climates and with similar occupancies. Rather than being a hard number that can be compared to, for example, as-designed performance or past performance, it embodies assumptions about what constitutes a valid peer group and how those "comparable" buildings perform. These assumptions change with time.

 

Using ESPM for feedback on the impact of changes in individual buildings is neither fast nor precise. ESPM requires a year of continuous monthly billing data to establish a baseline, and measuring a change in building performance needs an additional year, post-change. Because ratings are built from monthly data, their limited speed and precision impair before-and-after comparisons. Even setting aside the uncertain precision, only a large change can be detected reliably with ESPM.

 

Future Diagnostics - Part of the raw data that goes into ESPM, manipulated differently, forms a moderately useful engineering "figure of merit" for a building. Monthly purchased energy, by fuel type, per square foot, trended against coincident outside temperature for the area, gives a fairly good (albeit imprecise and slow to obtain) set of characteristic curves for how a building uses those fuels. Such data show how building energy use, as operated, responds to a range of weather conditions. These numerical results do not require judgments or assumptions.
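
As a concrete illustration (not the author's tool), the sketch below computes that monthly figure of merit from a year of utility bills; the file layout, column names, and floor area are assumptions.

```python
# A minimal sketch: monthly purchased energy per square foot, by fuel type,
# trended against coincident average outside temperature.
import pandas as pd
import numpy as np

FLOOR_AREA_SF = 250_000  # hypothetical gross floor area

# Expected columns (assumed): month, kwh, therms, avg_oat_f
bills = pd.read_csv("monthly_bills.csv")

# Convert both fuels to a common unit (kBtu) and normalize per square foot.
bills["elec_kbtu_sf"] = bills["kwh"] * 3.412 / FLOOR_AREA_SF
bills["gas_kbtu_sf"] = bills["therms"] * 100.0 / FLOOR_AREA_SF

# Fit a simple linear trend of use vs. coincident outside temperature for each
# fuel. The slope is a crude weather sensitivity; the intercept approximates
# the weather-independent ("base") load.
for fuel in ("elec_kbtu_sf", "gas_kbtu_sf"):
    slope, intercept = np.polyfit(bills["avg_oat_f"], bills[fuel], 1)
    print(f"{fuel}: {slope:+.4f} kBtu/SF per degF, base {intercept:.2f} kBtu/SF/month")
```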

 

With some crucial refinements - such as using weekly data, comparing only like cycles of occupancy, and using local weather for the coincident outside temperature - one can get much more precise trends, quickly. From a theoretical standpoint, these trends maximize accuracy for occupied buildings, and they emerge faster. They are also suitable for triage, for comparisons between buildings and building models, and for before-and-after performance comparisons of the same building. The author has successfully field-tested such "diagnostic benchmarks".
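
A rough sketch of the weekly refinement might look like the following; the interval-meter and weather file formats, the week definition, and the holiday list used to keep occupancy cycles alike are all assumptions.

```python
# A sketch of a weekly diagnostic benchmark under assumed data availability.
import pandas as pd
import numpy as np

FLOOR_AREA_SF = 250_000  # hypothetical

# Assumed inputs: interval electricity data and local outside temperature.
meter = pd.read_csv("interval_kwh.csv", parse_dates=["timestamp"]).set_index("timestamp")
weather = pd.read_csv("local_oat_f.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Roll both up to whole weeks (ending Sunday) so each point covers one full
# occupancy cycle.
weekly = pd.DataFrame({
    "kbtu_sf": meter["kwh"].resample("W-SUN").sum() * 3.412 / FLOOR_AREA_SF,
    "avg_oat_f": weather["oat_f"].resample("W-SUN").mean(),
}).dropna()

# Compare only "like" weeks: drop weeks containing a holiday (example list).
holiday_weeks = pd.to_datetime(["2013-07-04", "2013-11-28"]).to_period("W-SUN")
weekly = weekly[~weekly.index.to_period("W-SUN").isin(holiday_weeks)]

slope, intercept = np.polyfit(weekly["avg_oat_f"], weekly["kbtu_sf"], 1)
print(f"Weekly trend: {slope:+.3f} kBtu/SF per degF, base {intercept:.2f} kBtu/SF/week")
```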

 

Pilot use in a large complex demonstrated that diagnostic benchmarks are very useful tools, both for triage and to identify and correct O&M problems, and to verify those corrections, for large energy savings. (Lambert, L.A., "Getting Your Money's Worth: Diagnostic Benchmarking for Commissioning", Proceedings of the 2012 ACEEE Summer Study on Energy Efficiency in Buildings, American Council for an Energy-Efficient Economy, Washington, DC, August 2012.)

 

Going Forward - with Improved Aim

 

Three key things are needed to address the disturbing findings from early "universal benchmarking". First, we must confirm our understanding of the highly variable energy use among otherwise similar buildings, and of the extent to which it is due to poor or no commissioning, and/or poor O&M practices. Second, to accomplish this, we need to use suitable benchmarks and diagnostics. Third, assuming that both improved O&M and commissioning prove necessary to enhance efficiency in buildings, we need to make improved benchmarks and diagnostics more available, and standard throughout the industry, so that we have continual triage and better feedback on our accomplishments.

 

Understanding Universal Benchmarking Data - We need early insight into existing data from "universal benchmarking", such as New York City's. How do variations in O&M, or lack of proper commissioning, affect these results? Using even a monthly version of by-fuel-type "diagnostic benchmarking" on this existing data should improve our understanding of why there is so much variation between similar buildings.

 

Proving Diagnostic Benchmarks - Pilot O&M improvement, and retro-commissioning of buildings within the "universal benchmarking" building set, using weekly diagnostic benchmarking, is needed. If outliers with high energy use are due to poor O&M, diagnostic benchmark usefulness should become apparent fairly quickly, as O&M improvements are made. If high outliers are due to lack of proper initial commissioning, that should also become apparent from diagnostic benchmarks after retro-commissioning some of these buildings.

 

Improved Benchmark Availability - EPA presently allows any building owner or operator to access and use ESPM, at no charge. Two easy changes would make diagnostic benchmarks more widely available. First, the monthly raw data used in ESPM could provide a crude diagnostic benchmark, if presented as x-y line graphs of energy use, by fuel type, per square foot, versus coincident outside temperature. This data is already available for buildings "in" ESPM. It only needs re-processing, plus a modification to ESPM to support display and download of such graphs. Some problems typically encountered, such as erratic operation, or excessive reheat in a VAV system, would become more apparent by re-processing universal benchmarking data in this way.
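
To illustrate the kind of re-processing suggested here (this is not an existing ESPM feature), a sketch along the following lines could produce such graphs; the file and column names are assumed.

```python
# A sketch: re-plot monthly data as energy use per square foot, by fuel type,
# versus coincident average outside temperature.
import pandas as pd
import matplotlib.pyplot as plt

FLOOR_AREA_SF = 250_000  # hypothetical

# Assumed export layout: one row per month per fuel, with columns
# fuel, kbtu, avg_oat_f.
data = pd.read_csv("espm_monthly_export.csv")
data["kbtu_sf"] = data["kbtu"] / FLOOR_AREA_SF

fig, ax = plt.subplots()
for fuel, grp in data.groupby("fuel"):
    grp = grp.sort_values("avg_oat_f")
    ax.plot(grp["avg_oat_f"], grp["kbtu_sf"], marker="o", label=str(fuel))
ax.set_xlabel("Coincident average outside temperature (degF)")
ax.set_ylabel("Monthly energy use (kBtu per SF)")
ax.set_title("Energy use vs. outside temperature, by fuel type")
ax.legend()
fig.savefig("diagnostic_benchmark.png")
```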

 

Second, modify ESPM to enable user input of weekly energy use data, for any user. EPA would need to support this by supplying coincident weekly local outside temperature data. In conjunction with the graphic output capability described above, this would provide access to full-precision, fast-response diagnostic benchmarks. Users could then try, for example, modified O&M one week, and see results the next.

 

Although these suggestions are offered in the context of better understanding the "universal benchmarking" data, they should be offered to all users of ESPM.

 

Modify Codes and Standards - If, as expected, diagnostic benchmarks prove useful with "universal benchmarking" buildings, we should begin broader use. The author expects general agreement these diagnostics should then be adopted for all large buildings in the US. At this point, diagnostic benchmarking should be added to such standards as ASHRAE standard 100, IPMVP, ISO 50001, and building codes. Once in wide use, we can start using this data for triage, to determine more accurately which buildings just need O&M enhancements, which really should be retrofitted, and which buildings should eventually be replaced. This is the "Aim!" part we've been missing.

 

In the author's experience, for existing buildings, best practice is to first use continuous diagnostic benchmarking to fine-tune O&M. In the process, shortcomings of the building's physical plant become more apparent, along with insights into equipment sizing requirements. For example, in one building that needed immediate boiler replacement, diagnostic benchmarking data showed that replacement boilers with only half the capacity of the original equipment were required, avoiding considerable cost. In many instances, such savings would, all by themselves, justify the additional metering and/or effort needed for diagnostic benchmarking.
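
As a purely illustrative calculation (the numbers below are invented, not the project's data), a heating-fuel trend against outside temperature can inform replacement sizing by extrapolating the fitted line to the local design temperature.

```python
# Illustrative sizing estimate from a weekly gas-use-vs-temperature trend.
# All numbers are invented for illustration.
slope_therms_per_degF = -120.0   # fitted weekly gas use vs. outside temperature (hypothetical)
balance_point_f = 60.0           # outside temperature where heating load begins (hypothetical)
design_temp_f = 5.0              # local winter design temperature (hypothetical)
boiler_efficiency = 0.80         # assumed seasonal efficiency of the heating plant
hours_per_week = 168.0
btu_per_therm = 100_000.0

# Heating-related gas use (above base load) in a week at design temperature,
# from extrapolating the fitted line.
design_week_therms = -slope_therms_per_degF * (balance_point_f - design_temp_f)

# Average heat input and output rates over that design week. An allowance
# should still be added for instantaneous peaks, since this is a weekly average.
avg_input_btu_hr = design_week_therms * btu_per_therm / hours_per_week
avg_output_btu_hr = avg_input_btu_hr * boiler_efficiency

print(f"Estimated average heating output at design conditions: {avg_output_btu_hr/1e6:.1f} MMBtu/h")
```

If the installed plant is rated well above that estimate (plus a peaking margin), the data support selecting smaller replacement equipment, as in the example above.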

 

Better knowledge of physical plant shortcomings (gained during the O&M tune-up) will help determine which retrofits are really needed, and makes higher-confidence estimates of both retrofit savings and costs feasible. That clearer understanding of retrofit costs and benefits, reached after the O&M improvements have been accomplished, reduces retrofit costs and risks, and removes much of the uncertainty from the retrofit-or-replace question.

 

For new buildings, all should be commissioned using diagnostic benchmark data for the real building, compared against pre-construction computer-model energy projections, to determine whether the building meets efficiency expectations or guarantees before the end of its warranty period.
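
A simple sketch of that comparison, under assumed file formats and an arbitrary flagging threshold, might look like this:

```python
# A sketch: compare measured weekly energy use against the pre-construction
# model's projection for the same weeks, and flag large deviations.
import pandas as pd

# Assumed inputs: one row per week, with a "kbtu" column in each file.
measured = pd.read_csv("measured_weekly_kbtu.csv", index_col="week")
modeled = pd.read_csv("modeled_weekly_kbtu.csv", index_col="week")

cmp = measured.join(modeled, lsuffix="_measured", rsuffix="_modeled").dropna()
cmp["pct_over_model"] = 100.0 * (cmp["kbtu_measured"] - cmp["kbtu_modeled"]) / cmp["kbtu_modeled"]

# Flag weeks where the real building uses substantially more energy than the
# model projected (threshold is arbitrary, for illustration only).
flagged = cmp[cmp["pct_over_model"] > 15.0]
print(flagged[["kbtu_measured", "kbtu_modeled", "pct_over_model"]])
```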

 

Then we'll see if our aim was correct, and if new construction, deep retrofits, or replacement are the only options.

 

