Electricity “Foregone” – Maximising CHP’s Virtual COP

In the Introduction to the government’s Heat Strategy (The Future of Heating), the Chief Scientific Advisor to DECC, Professor MacKay, makes the point that waste heat from power generation isn’t free. When you make use of the waste heat for space heating or industrial processes, it comes at a cost – the cost being the loss of electricity generation. He calls it the electricity “foregone”. He then goes on to compare CHP to electric heat pumps, saying that CHP systems can be regarded as “virtual” heat pumps. Both technologies effectively require some power to produce heat. What you might know as a CHP system’s Z factor is the same as a heat pump’s Coefficient of Performance (COP) – the ratio of heat out to power in (or foregone).

Professor MacKay goes on to state that whether we use heat pumps or CHP, we need to ensure that this COP (real or virtual) is as high as possible, i.e. the minimum power used for the maximum heat delivered. The key to this is to reduce heating system temperatures to as low as possible. And to do that we need to insulate homes – that way our current radiators will still be able to adequately heat our homes even though we’ve reduced the temperatures at which they operate (in order to maximise our CHP/heat pump COP).

But temperatures aren’t the only critical factor in maximising a CHP’s virtual COP. In my last blog I talked about the idea of nuclear power operating in CHP mode. I stated that the COP for a nuclear power station could be as high as 14 (in fact the graph below shows it as just over 16) – based on two key design principles. The first concerns the design and operation of the district heating system itself, to achieve low flow and return temperatures. The second key principle, which no-one seems to be discussing or even aware of, is the design of the power station itself – or more specifically the power station’s steam turbine. This design idea has been put forward by William Orchard, and while his other ideas about designing for optimal temperatures seem to be gaining ground, this one remains largely unknown.

Almost all large thermal power stations in the UK use a steam turbine to generate electricity. Sometimes (in the case of combined cycle gas turbine, CCGT, power stations) a steam turbine is combined with a gas turbine, which gives a higher overall efficiency of electrical generation: the hot exhaust gases from the gas turbine are used to raise steam for the steam turbine. But you need your fuel in gaseous form to be able to do this – which is why this combining of cycles is generally limited to gas. There are also power stations that use gas turbines on their own, but these are predominantly used for peak load and operate for very few hours in the year.

So when we talk about large power stations operating in CHP mode, it’s generally the heat extracted from the steam turbine that is used for district heating. Conventionally, when steam turbines operate in CHP mode there is just one point of extraction for the heat from the turbine. The innovation that William Orchard proposes consists of two parts: the first is to have multiple stages of extraction from the turbine, and the second is to design the turbine so that the heat is extracted at lower temperatures and pressures using an automated system. By doing this you can dramatically reduce the electricity “foregone” when the heat is extracted. The following chart shows how this plays out for different types of power station and different flow and return temperatures. Three stages of extraction from a CCGT with a 75 degree flow and 25 degree return gives a COP of 17. For nuclear, at the same temperatures, the result is slightly lower at just over 16. In characteristic form, William also shows how this compares to an air source heat pump with a COP of 3.

The implication is that if we do go ahead with building new nuclear power stations, we have a choice. We can build them with turbines optimised for CHP and build district heating connected to them, in which case we only need to build an additional 6% of electrical capacity to make up for the electricity “foregone”. If we go down an electric heat pump route then (assuming a COP of 3) we’ll need an additional 33% of electrical capacity to supply the equivalent heat output.
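The arithmetic behind these two percentages can be sketched directly: the extra electrical capacity needed to cover the electricity “foregone” is simply the heat supplied divided by the (real or virtual) COP.

```python
def extra_capacity_fraction(cop):
    """Electrical capacity needed to make up for the electricity foregone
    (or consumed), as a fraction of the heat load, for a given COP."""
    return 1.0 / cop

# Nuclear CHP with a turbine optimised for heat extraction (virtual COP ~16)
print(f"Nuclear CHP: {extra_capacity_fraction(16):.0%}")  # ~6%
# Air source heat pump (COP ~3)
print(f"Heat pump:   {extra_capacity_fraction(3):.0%}")   # ~33%
```

At a COP of 16 the foregone electricity is 1/16, just over the 6% quoted; at a COP of 3 it is a third of the heat delivered.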

In my next blog, find out more about how these turbine modifications work and what happened when William wrote to turbine manufacturers about making turbines in this way.

Source: ©Copyright Orchard Partners London Ltd 2012

Posted in CHP, Heat networks | Comments Off

Ofgem’s Proposal to Remove Triad Payments from Embedded Generators

If you’ve been involved in any CHP/DH feasibility studies you will probably know that many projects are quite marginal. The high cost of building a heat network that has to compete with an existing gas network means that very few projects are huge money spinners. One of the most critical variables is the value of the electricity generated. And given the often derisory price for exported electricity, any additional payments that CHP can obtain for providing grid services are often critical. These include STOR, Capacity Market and Triad payments.
These payments will become even more important if CHP is to play a role in backing up intermittent renewable electricity generation. In the future as renewables increase their penetration of the grid, gas CHP embedded in local distribution networks, combined with heat pumps, could either provide generation when wind and sun energy are low or demand when there is excess from these sources. For this to work grid service payments will be needed to ensure CHP still stacks up on much lower run hours.
It is therefore dismaying to find that Ofgem are proposing to remove Triad payments from embedded CHP plant. Triad payments relate to Transmission Network Use of System (TNUoS) charges (plus a smaller balancing charge), which are how National Grid recovers the cost of the transmission system’s upkeep. Every half-hourly metered non-domestic customer in the UK contributes to the upkeep cost by paying a charge relating to their kilowatt-hour usage during three half-hour periods in the year – the Triad periods. (Generators connected to the transmission system are also charged.) These are chosen retrospectively as the three half-hour periods with the highest electricity demand on the grid between November and February, each separated by at least 10 days. The logic of paying embedded CHP operators during these Triad periods is that, by generating electricity directly into the local distribution network, CHP reduces demand on the National Grid. In exactly the same way, if you are a consumer connected to the local distribution network and you’re able to reduce your demand during the Triad periods, you too are reducing demand on the National Grid. For a 1 MWe CHP plant in London, Triad revenue amounts to around £50,000 a year – not a huge amount, but potentially critical in tipping a project into financial viability.
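For feasibility work, the revenue scales simply with the capacity exporting during the Triad half-hours. The sketch below is hypothetical: the ~£50 per kW rate is just backed out of the £50,000-per-MWe example above, not a published tariff, and real rates vary by charging zone and year.

```python
def triad_revenue(capacity_kw, rate_gbp_per_kw=50.0, availability=1.0):
    """Rough annual Triad revenue for an embedded generator.

    capacity_kw  -- export capacity during the Triad half-hours
    rate_gbp_per_kw -- assumed zonal rate (illustrative, not a real tariff)
    availability -- fraction of capacity actually exporting during Triads
    """
    return capacity_kw * availability * rate_gbp_per_kw

print(f"1 MWe CHP: £{triad_revenue(1000):,.0f} per year")  # £50,000
```

The `availability` factor matters in practice: a CHP plant that happens to be down for maintenance during a Triad half-hour earns nothing for that period.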
Ofgem’s proposals are a step in the wrong direction. They will make low carbon heat networks even more difficult to justify financially and move the goalposts in a way which favours large plant that overwhelmingly dumps its waste heat rather than putting it to useful purpose.
You can read Ofgem’s rationale here:


Make your views heard by writing to Ofgem by Friday 23 September 2016. Contact details: Dena Barasi (dena.barasi@ofgem.gov.uk) or Andrew Self (andrew.self@ofgem.gov.uk).

Posted in CHP, Heat networks, Renewables | Comments Off

Nuclear CHP: why settle for less?

An intriguing title for a blog, I hope you’ll agree. Firstly, let me say that I’m not a fan of nuclear power. I’m concerned about the safety, the cost, the waste disposal issues and the fact that it’s not renewable. But I do recognise that the government seems absolutely determined to get the industry to build more nuclear power stations. For evidence of this, see its willingness to provide a fixed price for nuclear power of double the wholesale price of electricity for 35 years, with 65% of the capital costs covered by Treasury-backed loan guarantees. This is in addition to the repeated mantra we hear from government ministers that we need a “balanced” portfolio of electricity generation including nuclear, renewables and fossil fuels with (they hope) carbon capture as we move towards our low carbon future.

This emphasis on nuclear being part of the generation mix is combined with a general desire to see the electrification of both heating and transport. This means that without a very significant reduction in demand, “the balanced portfolio” will need to add up to a larger total than we currently have. Without any demand reduction at all, nuclear power capacity would need to be around three times its current level in order to maintain its contribution to the mix. These are very ballpark figures, but think of our energy demand cake as roughly a third power, a third heat and a third transport: move the heat and the transport into the power sector and you have three times our current power demand.

Now I hope you can see where nuclear CHP comes in. If, rather than the electrification of heating, we build district heating systems and connect them to the waste heat from nuclear power stations, then we can avoid having to build one third (very approximately) of those new nuclear power stations. We only have to double our capacity (to supply the newly electrified transport sector), not triple it.
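The “thirds cake” arithmetic above can be written out explicitly (keeping the same very ballpark assumption that power, heat and transport each take a third of demand):

```python
# Present demand split roughly into thirds (ballpark, as in the text)
power, heat, transport = 1 / 3, 1 / 3, 1 / 3

# Scenario A: electrify heat AND transport -> power sector must triple
all_electric = (power + heat + transport) / power

# Scenario B: heat supplied by district heating from (nuclear) CHP,
# only transport electrified -> power sector only doubles
dh_plus_ev = (power + transport) / power

print(f"Electrify everything:            {all_electric:.0f}x current power demand")
print(f"DH for heat, EVs for transport:  {dh_plus_ev:.0f}x current power demand")
```

A small correction for the electricity “foregone” (heat load divided by the virtual COP) would be added to scenario B, but at a COP in the teens it shifts the total by only a few percent.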

Now I can imagine your objections already and I’ll try and deal with them in turn.

  1. Consumers will never accept heat from nuclear power stations in their homes.

    Water in district heating systems isn’t like sea water or ground water, which we commonly hear associated with radioactivity concerns. It’s different because it’s treated to remove impurities. Pure treated water can’t carry radioactivity.

    Furthermore, and rather obviously, the water in the district heating network is not the same water as that heated directly by nuclear fission. That water is turned into steam by the heat given off by fission; the steam drives a steam turbine and is then re-condensed into water in the power station’s condensers. Generally, sea water is used to condense the steam in nuclear power stations in the UK. So the first (heat exchange) barrier is at the condensers. Potentially you could have more heat exchangers in the system, though these come at a cost in that they increase the return temperature of the water back to the power station, which increases the electricity loss. We already have an infrastructure that connects our homes to nuclear power stations: the electricity network. Arguably, contamination via a heat network is about as likely as contamination being transmitted through the wires that connect power stations to our homes.

    You also need to think through the scenario in which this district heating future would take place. The first step would not be the construction of the pipes between cities and power stations. We’d most likely build up district heating within cities with local gas-fired CHP (William Orchard’s Hub concept). As we move towards 2050 we’d see more interconnection, and in many cases nuclear would be one contributory heat source to those heat networks.

  2. Nuclear power stations are a long way from the conurbations where heat is actually needed, and heat, unlike power, can’t be cost-effectively transmitted long distances.

    It is true that nuclear power stations are not ideally placed to provide heat to cities, but it is entirely practicable to transport large quantities of heat over long distances. William Orchard examined this in an article for the CIBSE Journal using Sizewell B (a nuclear power station on the Suffolk coast) and a 128 km, 2 metre diameter heat main to London as an example. The water would actually arrive in London 0.1 degree hotter than when it left the power station, because the energy input from pumping is dissipated into the water, raising its temperature. Additionally, very large heat mains lose proportionately less heat than small ones, because the surface area through which heat is lost (the outer surface of the pipe) is smaller as a proportion of the heat transmitted. Obviously the pipes are insulated as well. Pumping costs are also proportionately lower with larger diameter pipes – less friction per unit of heat transmitted.

    In Finland the Loviisa 3 nuclear plant was put forward as a CHP plant providing heat to Helsinki’s district heating system, around 100 km away. The proposed plant was rejected by the government, but not, as far as I can discern, because of the CHP element.

    If you look at a map of current and planned nuclear power stations then it’s easy to imagine how many of the major conurbations could be supplied: e.g. London from Bradwell and Sizewell, Greater Manchester from Heysham, Newcastle and Sunderland from Hartlepool, Birmingham from Oldbury and Bristol from Hinkley Point. OK, all of those would involve huge civil works, but what route to 2050 carbon reduction doesn’t? And how does building a few hundred miles of connecting pipes compare to building double the number of nuclear power stations?
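The scaling argument about large heat mains can be made concrete. With insulation quality, temperatures and flow velocity held equal, heat loss is proportional to the pipe’s outer surface area while heat carried is proportional to its cross-sectional flow; the ratio therefore falls as 1/diameter. A minimal sketch (the 2 m/s velocity is an illustrative assumption):

```python
import math

def loss_area_per_unit_flow(diameter_m, velocity_m_s=2.0):
    """Outer surface area per unit of volumetric flow, for a 1 m pipe length.
    This is a proxy for heat lost per unit of heat carried, with insulation,
    temperatures and velocity held equal."""
    surface = math.pi * diameter_m * 1.0                       # loss area
    flow = math.pi * diameter_m**2 / 4.0 * velocity_m_s        # heat carried
    return surface / flow

small = loss_area_per_unit_flow(0.2)  # a 200 mm local distribution main
large = loss_area_per_unit_flow(2.0)  # a 2 m transmission main
print(f"A 2 m main loses ~{small / large:.0f}x less heat per unit delivered "
      f"than a 200 mm main")
```

The same 1/diameter scaling is why friction (and hence pumping cost) per unit of heat transmitted also falls with pipe size.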

  3. Building district heating is too costly. You’d be digging up streets from now until 2050.

    Well, we already are digging up streets. I’ve recently, with a mixture of frustration and sadness, watched as the gas network in the town where I live was systematically dug up and replaced with more gas network. And if we persist down a route where we electrify both heating and transport then we will need to dig up the electricity network and replace that to accommodate the increased electricity requirements. If you don’t believe this then take a look at the current activities of UK Power Networks, who are responsible for the distribution networks in London, the South East and the East of England. They are very keenly looking for alternatives to digging up London because they foresee the problems. One of their alternatives is small-scale CHP embedded in the network. This has two functions: it provides an alternative to the electrification of heat, and because electricity is generated locally it can provide an alternative to new, larger wires in the ground.

  4. Doing this locks us into nuclear power for a very long time. What if a future nuclear accident somewhere in the world causes a future government to shut the industry down? We’d then be faced with a double whammy – the loss of a significant supply of both electricity and heat.

    The situation is not really different from the scenario we are avoiding, i.e. one where we build more nuclear in order to supply heating that is then generated electrically. In our district heating future, nuclear power won’t be the only heat provider to our networks, and we’ll have large thermal stores with electricity as a back-up, so we won’t be dependent on nuclear as the heat supply. It will cause a serious problem if nuclear goes, but only in the same way that it would have done had we gone down the electric heating route with triple the nuclear capacity. Furthermore, we decrease the chances of a nuclear accident: we have to build fewer nuclear plants, so there is less nuclear around to go wrong in the first place.

    If, as we build up district heating, it turns out that, happily, we’re not going to go down the nuclear route then district heating is flexible enough to be able to take heat from a variety of sources whether that be large scale solar thermal (as is common in Denmark), geothermal  or surplus renewable electricity or hydrogen. If we go down the route of switching our heating systems to electricity and digging up the streets to put larger cables in then we won’t have that flexibility.

  5. Waste heat from power stations isn’t really free, because when you start using the waste heat from a power station you sacrifice some electricity production.

    This is true, but the amount of electricity you sacrifice depends on the design of the district heating system. Specifically, it depends on the temperature at which you reject heat compared to the temperature at which you were rejecting heat before you made the connection. So let’s say your nuclear power station was rejecting heat at sea water temperature – as most do. Once you connect up your district heating system, the temperature of the water returning from the city’s radiators and hot water cylinders is the temperature at which you are now rejecting heat from the power station.

    In a conventionally designed system the “Z factor” – the heat used divided by the electricity lost – might be as poor as 4. However, by optimising the design of the district heating to achieve a low return temperature, you can significantly reduce the electricity loss and increase the Z factor. That’s why the various innovations put forward in William Orchard’s Hub concept are so important. If you can get the return water temperature down to approaching sea water temperature then you minimise this electricity loss. A further factor is the design of the power station’s turbine and the way heat is extracted from it (again, William Orchard and Robert Hyde have come up with innovations here). This will be the subject of a separate blog so I won’t dwell on it here, but taking all these Hub innovations together means that the electricity production lost can be as low as 1/14 of the heat supplied to district heating. If you thought of CHP as a sort of heat pump, that would be a Coefficient of Performance (COP) of 14 – rather better than the COP of 3 we rather optimistically expect heat pumps to achieve today.
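A back-of-envelope way to see why the rejection temperature dominates: treat the turbine stages displaced by the heat extraction as an ideal (Carnot) engine running between the rejection temperature and sea water temperature. The electricity foregone per unit of heat is then at most the Carnot efficiency between those two temperatures, so the Z factor is bounded by T_reject / (T_reject − T_sea) in kelvin. This is an idealised sketch of the thermodynamics, not William Orchard’s actual calculation, and the 10 °C sea water temperature is an assumption.

```python
def carnot_z_bound(t_reject_c, t_sea_c=10.0):
    """Idealised upper bound on the Z factor (virtual COP): heat delivered
    per unit of electricity foregone, if the displaced turbine stages were
    a perfect Carnot engine between the two temperatures (in Celsius)."""
    t_reject = t_reject_c + 273.15
    t_sea = t_sea_c + 273.15
    return t_reject / (t_reject - t_sea)

# Conventional single extraction at ~90 C vs optimised low-temperature extraction
for t in (90, 50, 25):
    print(f"Reject at {t:>2} C: Z factor bounded by ~{carnot_z_bound(t):.0f}")
```

Rejecting at around 90 °C the bound is already down near the “as poor as 4” conventional figure, while rejecting close to sea water temperature opens up Z factors in the teens – which is the whole point of low return temperatures and staged extraction.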

It’s strange to write a blog proposing something you don’t really want but life is full of compromises. The advantage of going down the route where we allow for the possibility of nuclear CHP is clear: either nuclear does get built and less nuclear capacity is needed or it doesn’t and we have district heating that can draw its heat from a variety of other sources.

Posted in CHP, Greenhouse gas targets, Heat networks, Renewables | Comments Off

Is Consumption Modelling inherently superior to Production Modelling?

The point is continually made that pure consumption modelling, whilst being possibly more difficult to do than a mixed approach or pure production approach, is inherently superior. I would like to question this. In my view both approaches are valid and in an ideal world we might do both but one is not necessarily superior to the other.

What’s the difference?

Production modelling of carbon emissions allocates emissions geographically, based on where they are produced. You burn gas in your home in the UK, so the emissions are allocated to the UK. However, if you buy goods produced in China, the carbon emissions associated with the production of those goods get allocated to China.

Consumption Modelling allocates emissions from goods or services to where the goods and services are consumed. If a householder in the UK buys goods from China all of the emissions relating to their production, transportation and retailing are allocated to the UK.

Actually, things are a little more complicated than this. Both approaches will, to some extent, use fuel purchases rather than actual consumption or production. For instance, with transportation under NI186, fuel purchased in the UK is allocated to the road network based on traffic flows and assumptions about the efficiencies of different vehicles. Some approaches use a blend of consumption and production: the NI186 figures produced by DECC use a production approach for gas used in homes for heating, hot water and cooking, but allocate emissions from electricity generation on a consumption basis.
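The difference between the two accounting rules is easy to make mechanical. Here is a toy example with purely illustrative numbers: one good made in China and consumed in the UK, plus some domestic gas use. Each flow records where the emissions physically occur and who consumes the end product.

```python
# Toy emission flows (tonnes CO2e) - illustrative numbers only
flows = [
    {"activity": "manufacture",  "where": "China",         "consumer": "UK", "tco2e": 10},
    {"activity": "shipping",     "where": "international", "consumer": "UK", "tco2e": 2},
    {"activity": "home gas use", "where": "UK",            "consumer": "UK", "tco2e": 5},
]

def production_account(flows, country):
    """Allocate emissions to where they are physically produced."""
    return sum(f["tco2e"] for f in flows if f["where"] == country)

def consumption_account(flows, country):
    """Allocate emissions to where the goods/services are consumed."""
    return sum(f["tco2e"] for f in flows if f["consumer"] == country)

print("UK, production basis: ", production_account(flows, "UK"))   # 5
print("UK, consumption basis:", consumption_account(flows, "UK"))  # 17
```

Note that on a production basis the manufacturing and shipping emissions vanish from the UK’s account entirely, which is exactly the offshoring effect discussed below.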

So why do I think that one is not inherently superior to the other?

Let’s take the following example. If a company closes its operations in Leeds and moves them to China, then under a production modelling approach the UK’s emissions will go down. Under a consumption modelling approach they should increase, since the emissions embodied in the goods will rise because of the increased transportation distance. In this case the production approach seems morally questionable: effectively the UK is cheating by shifting emissions elsewhere whilst continuing to consume in an unsustainable way.

However, let’s take another example. Say China reduces the carbon intensity of its electricity production while the UK does nothing. Under a consumption approach, emissions in the UK will decrease. Under a production approach they’ll remain the same. The consumption approach then gives a morally questionable output: we’ve done nothing, yet our emissions have reduced.

Or here’s another scenario. A factory in Leeds which produces goods for export reduces its emissions. Under a production approach UK emissions will reduce. Under a consumption approach they remain unchanged. Does that seem right?

It seems to me that neither approach is perfect. What is important is that globally we reduce our emissions – and fast.  To do this we need to ensure that everyone is working towards that end. Currently international negotiations are based on a production approach. And as long as everyone is working towards that under a global cap, counting emissions consistently and actually reducing them in line with climate science then that’s fine. Of course they’re not, but that’s not inherent in the modelling approach.

Chris Dunham, Managing Director of Carbon Descent

Posted in Greenhouse gas targets | Comments Off

Out with the old

Old, inefficient incandescent GLS light bulbs have started to disappear from the shops and will be phased out completely over the next few years. Halogen spotlights will also be required to meet new energy efficiency standards by 2016, which will involve the phasing out of some types of halogen lamps.  This means that lamps that you currently use may no longer be available and will need replacing with an energy efficient alternative.

Compact fluorescent lamps (CFL) and tube fluorescent lighting (TFL) are the most efficient alternatives, giving the greatest savings at around 80%. CFLs are now available in a range of lamp types and fittings, and even in dimmable versions.

I can’t tolerate fluorescent flicker!

For some people, though, swapping a traditional light bulb for any kind of fluorescent lamp is not an option due to the adverse health effects that can be experienced. The constant flicker of fluorescent lighting can affect the sensory system in some individuals – some people can detect flicker that others cannot. Some also complain of headaches, migraines, eye strain and general eye discomfort from fluorescent lamps.

What are the energy efficiency alternatives other than fluorescent?

The good news is that as halogen lamps change to comply with the new lighting regulations, a range of energy saver halogen lamps is becoming available to replace the phased-out types. You can now replace your old light bulbs with equivalent halogen lamps that provide the same quality of light, and even look the same – just without the flicker.

LED Lamps

LED lamps will save more energy than halogen lamps and provide a decent quality of light to create good ambient lighting. There is now a full range of lamps available to suit every type of light fitting. However, the lamps are more expensive compared to halogen and CFL lamps.

What is the ‘Colour Temperature’ of Lamps?

Colour Temperature refers to the colour variation of light (the colour of the light) and is measured in Kelvin. This scale ranges from the flame from a candle at around 1,000 K to deep blue sky at around 10,000 K. In simple terms the colour a bulb emits is red at low temperatures, blue at high temperatures and white in the middle.

For a bulb or tube to be classified as “daylight” it will have a colour temperature of between 4,000 K and 7,000 K. It is commonly accepted that a colour temperature of 6,300 K to 6,500 K gives the closest reproduction of natural sunlight. A standard tungsten bulb is around 2,700 K.

All types of lamp are available in various colour temperatures so it is always worth considering this as another option when trying to get the right lighting effect.
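The ranges quoted above can be collected into a simple helper when comparing lamp specifications. The 4,000–7,000 K “daylight” band, the 6,300–6,500 K sunlight band and the ~2,700 K tungsten figure are from the text; the 3,300 K warm/cool boundary is an added assumption based on common lighting convention.

```python
def describe_colour_temperature(kelvin):
    """Rough label for a lamp's colour temperature in kelvin."""
    if kelvin < 3300:          # assumed warm/cool boundary (convention)
        return "warm white (standard tungsten is ~2,700 K)"
    elif kelvin < 4000:
        return "intermediate / cool white"
    elif kelvin <= 7000:       # "daylight" classification band
        if 6300 <= kelvin <= 6500:
            return "daylight (closest to natural sunlight)"
        return "daylight"
    else:
        return "blue-sky range"

print(describe_colour_temperature(2700))
print(describe_colour_temperature(6400))
```

Checking the stated colour temperature against the effect you want (warm and cosy versus crisp and daylight-like) is usually quicker than trial and error with purchased lamps.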

An illustration of two booths lit by different colour temperatures

Final energy saving tips

  • Use energy efficient compact fluorescent lamps with high frequency ballasts which can help you save up to 80% electrical energy consumption for lighting.
  • Only use low energy lamps marked with an energy label to ensure good luminous efficacy. The energy label indicates that the lamp offers a long service life and that its light output meets international standards.
  • Use timer control devices to limit unnecessary lighting in areas that are unoccupied.
  • Maximise natural light whenever possible and practical.
  • Ageing lamps or dirt on fixtures can reduce total illumination by up to 50%. Periodic cleaning can help increase illumination levels.
  • Use the appropriate number of low wattage lamps to replace the use of high wattage lamps.
  • Clean or repaint rooms with lighter colours to enhance light reflection.
Posted in Uncategorized | Comments Off

BluePrint Energy Auditing Tool

by Carbon Descent

BluePrint is Carbon Descent’s latest software solution in the low carbon sector. It is both a tablet PC application for collecting data during site visits and a desktop program for developing recommendations and producing automated reports.

My experience as an energy auditor began 5 years ago, working in a dedicated energy auditing firm. In that company we developed many of our own systems in house, including macro linked Excel sheets which partially automated the calculation of energy saving recommendations. Amongst the engineers we thought the next evolution of our systems, in terms of saving time and improving quality, was for the information collected onsite to be recorded directly onto a computer, ready for processing.

BluePrint does exactly this. BluePrint is a software tool, four years in development, that streamlines data handling in energy audits. It has been developed by Software Manager Julie Allen as the product of a Knowledge Transfer Partnership between Carbon Descent and London South Bank University.

Example of BluePrint loading on a tablet PC

The benefits of BluePrint above conventional auditing methods

Time saving – This is achieved through onsite data gathering with a tablet PC, robust recommendation calculators, automated report generation and inbuilt benchmarking features.

Consistency – The tool brings a consistent methodology from audit to audit as well as across a team of energy assessors.

Reuse of facility details for subsequent audits – Once an audit is complete, the layout of the site, clients, and other site specific information is contained in the software database. This makes subsequent audits faster with easily accessible historical data.

Configurable – The software is highly customisable. Additional recommendations can be added to the existing suite, and the report generated following an audit can be tailored for different types of site, such as a factory, a swimming pool or an office.

Up to date reference data – Data such as emissions factors and the costs of measures are regularly updated to ensure accurate assessments are produced.

Future features planned

Additional resources – BluePrint can record many features about a site that relate to its resource consumption. Whilst currently only set-up for Energy, we plan to add water and waste functionality in following releases.

Management capabilities – The tool will provide a range of preformatted management information reports for single or multiple projects. This will enable an organisation to monitor various parameters across projects such as time spent by an auditor, CO2 savings identified, and an overview of project billing.

Display Energy Certificates – The benchmarking methodology employed in BluePrint closely follows that of the Display Energy Certificates. With some planned alterations, it is intended that the product become certified to deliver Display Energy Certificates in the near future.

The official launch for BluePrint will be towards the end of 2011. Please email us at software@carbondescent.org.uk should you wish to be invited to the launch or receive news updates for this product.

The official BluePrint Website can be found here

Posted in BluePrint Software | Comments Off

Carbon reduction targets – looking forwards or backwards?

This entry looks at modelling the 34% 2020 greenhouse gas (GHG) reduction target against an adjusted 2005 baseline. The target adjustment formula is given below:

adjusted 2005 target = 1 − (1 − 34%) / (1 − reduction achieved between 1990 and 2005)

The UK Greenhouse Gas Inventory, 1990 to 2005 Annual Report for submission under the Framework Convention on Climate Change, gives a total greenhouse gas reduction from 1990 until 2005 of 15.4%. This includes land use, land use change and forestry (LULUCF). Excluding LULUCF, the reduction figure becomes 14.8%. When applying the above formula to translate the 34% reduction target onto 2005, the target adjusts to 22.0% and 22.5% respectively.

This all seems well and good, but there is the aspect of GHG vs. CO2 to consider. Whilst GHG reduced by 15.4% (inc LULUCF) and 14.8% (exc LULUCF) between 1990 and 2005, CO2 only reduced by 6.4% (inc LULUCF) and 5.6% (exc LULUCF). When applying the formula to translate the 34% reduction target onto 2005, the target adjusts to 29.5% and 30.1% respectively.
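All four adjusted figures quoted above can be reproduced from the same adjustment formula, which makes the GHG-versus-CO2 sensitivity easy to see:

```python
def adjusted_target(target_from_1990, reduction_1990_to_2005):
    """Translate a reduction target set against 1990 onto a 2005 baseline."""
    return 1 - (1 - target_from_1990) / (1 - reduction_1990_to_2005)

cases = [
    ("GHG inc LULUCF", 0.154),  # 15.4% reduction 1990-2005
    ("GHG exc LULUCF", 0.148),
    ("CO2 inc LULUCF", 0.064),
    ("CO2 exc LULUCF", 0.056),
]
for label, r in cases:
    print(f"{label}: 2020 target vs 2005 = {adjusted_target(0.34, r):.1%}")
```

The smaller the reduction already achieved by 2005, the larger the remaining cut the 2005-baselined target must carry, which is why the CO2-only figures come out near 30% rather than 22%.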

So which is correct?

It depends if you are looking forwards or backwards. If you are looking retrospectively at the 1990 baseline, and assuming that it is equally difficult to reduce all GHGs, then one could argue that taking the CO2 only reduction figures is correct.

If you are looking forward and setting 2005 as your baseline, then you could argue that the problem as it stands is to reduce emissions across the board up to the 2020 and 2050 targets. It is interesting to note that local authorities were not tasked with reducing emissions until much later than 1990, and for most authorities the first glimpse of their carbon footprint came with the National Indicator 186 and Full Local CO2 emissions data sets, both of which are post-2005. The Climate Change Act also only came into legislation in 2008. In consideration of this, it is best to take the total GHG account in 2005 as the baseline, which means modelling 2020 targets of 22.0% (inc LULUCF) and 22.5% (exc LULUCF).

Posted in Greenhouse gas targets, VantagePoint | Comments Off

What particular spatial, environmental and public constraints affect the development and success of UK wind projects? Can they be managed?

In the UK, wind energy is one of the most promising renewable resources. Great Britain has the best wind resources in Europe and some of the best in the world. Wind energy is currently the UK’s largest renewable electricity supplier (2.2% of total electricity supply). According to the Committee on Climate Change [1], the UK could produce 31% of its total electricity supply through offshore and onshore wind power.

However, there are several issues that prevent the UK from making the most of its wind resources. First of all there is the issue of geographical mismatch: the main wind resources are sited in the north, while the major energy consumption is in the south of the UK.

Another issue is transmission. Energy production and demand will increase, wind energy will play a key role, and the current grid cannot satisfy the UK’s demand without reinforcement, especially in the north – so transmission is a rather significant issue.

However, the scope of this blog is to identify the environmental and spatial constraints and externalities that hinder the development and exploitation of the UK’s wind energy resources. It also tries to identify and analyse the correlation between public attitudes and wind projects’ success and development, in order to conclude whether the above-mentioned impediments can be managed.

Spatial, Topographical and Public Considerations and Constraints

The UK has a technical potential of onshore wind energy of around 300 TWh/yr, but the practicable potential is only 8 TWh/yr. Here, technical potential is an assessment of the amount of useful energy that could be extracted from the UK’s wind resources, while practicable potential is the “accessible” share of it. Table 1 also shows the data for offshore wind potential.

Table 1 On shore and off shore UK potential [2]

This vast difference between technical and practicable potential is due to several constraints that hinder the development of wind projects. These constraints relate mainly to the environmental impact of wind turbines, to spatial and topographical issues, and to public attitudes towards wind projects.

Environmental Considerations

Planning of a wind energy project should always take into account the environmental impact of wind turbines. The main impacts on the environment are noise, electromagnetic interference and visual impact.

Noise: Wind turbines produce two kinds of noise: mechanical noise, generated by the gearbox and the generator of the turbine, and aerodynamic noise, caused by the interaction of the wind with the blades. In Denmark, the maximum acceptable wind turbine noise level is 45 dB(A) in open countryside and 40 dB(A) in residential areas. In the UK there is no specific noise limit for wind turbines; the highest acceptable noise level is 68 dB(A), referring to buildings near roads. Figure 1 shows how noise levels fall with distance.
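As a rough illustration of how turbine noise decays with distance, the following sketch uses a simple hemispherical point-source spreading model rather than the measured curve in Figure 1, and the sound power level is an assumed, broadly typical value:

```python
import math

def sound_level_at_distance(lw_db, r_m):
    """Approximate sound pressure level (dB(A)) at distance r_m metres
    from a point source of sound power level lw_db, assuming hemispherical
    spreading over hard ground: Lp = Lw - 20*log10(r) - 8."""
    return lw_db - 20 * math.log10(r_m) - 8

# Assumed sound power level, roughly typical of a large turbine
lw = 104.0  # dB(A)
for r in (100, 200, 400):
    print(f"{r} m: {sound_level_at_distance(lw, r):.1f} dB(A)")
```

Each doubling of distance shaves roughly 6 dB off the level, which is why separation distance is such an effective noise mitigation.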

Figure 1 Noise levels and distance from turbine [3]

Electromagnetic Interference: A wind turbine can severely affect radio, television or microwave signals if it is sited between the transmitter and the receiver, since it may reflect some of the radiation, causing interference between the original wave and the reflected one.

Visual Impact: The visual impact of the turbines depends on the local community’s attitude towards wind farms and on several design factors. In addition, shadow flicker, caused by the blades periodically interrupting sunlight, can be genuinely disturbing for residents with a direct view of the turbines [3].

Spatial and Topographical issues

Before planning of a wind project begins, several topographical and spatial issues have to be taken into account. One major constraint that can seriously affect the planning of a wind farm is the shape and hostility of the terrain at the building site. In particular, uneven and hostile terrain can severely disturb the three-dimensional wind flow and increase the mechanical fatigue of the blades, affecting a Wind Turbine’s (WT) energy efficiency and lifetime [4]. Figure 2 shows how different terrains affect the wind flow.

Figure 2 Wind Profile on several terrains

Other studies [5] are more specific about the factors that constrain the planning of wind farms, categorising them as topography, wind speed and direction, land use/cover, population, access, resources, hydrology, ecology and historical/cultural resources. Table 2 summarises these criteria and constraints.

Table 2 Criteria and constraint that affect wind energy projects planning

A recent case study for Wales [6] categorises the constraint factors in a more general and clearer way, identifying three major categories: absolute constraints, which are general rules and facts that block the creation of a wind farm; localised constraints, which also impede wind farm projects but depend on local conditions, facts and rules; and finally electricity distribution issues and additional criteria for area selection, which do not fit either of the above categories. These constraint factors are summarised in Table 3.

Table 3 Constraining factors for wind energy development projects

Figure 3 displays the national parks (absolute or ecological constraints) across the UK.

Figure 3 Land Use Restrictions [7]

Public issues and Management

A quite important factor that has to be taken into account before planning a wind farm is the local population at the building site. Most people do not like the idea of wind farms being built near their homes, and there are several case studies showing how public attitudes affect wind projects.

A recent UK study [8] investigated whether there was a correlation between wind farm planning decisions made by local authorities and the attitudes of local officers, parish councils and landscape protection groups. It concluded that the opinions of the local population, and their understanding of wind farm projects, significantly affected the final decisions of the local authorities.

A reasonable question is how Denmark managed to become the leading nation in wind energy development while the UK has better wind resources. Is it because the Danes happen to like the way wind turbines look? In fact, Denmark started developing its wind industry very early. In the early 1970s the Danish government very actively promoted the development of renewable energy and strongly encouraged investment in wind energy, in order to be less dependent on oil. Denmark also faced severe public opposition, which was overcome when the government encouraged small investors, through taxation, subsidies and so on, to enter the wind market and allowed ordinary people to get involved [9]. Another study [10] of the Danish case indicates that public participation has played a very significant role in the development of Denmark’s wind industry.

The UK’s case is different. According to a UK case study [11] concerning England and Wales, local communities did not have the opportunity to participate seriously in, or get involved with, the wind projects in their residential areas. In addition, a study of wind energy planning in England, Denmark and Wales [12] concludes that wind projects in which the public is highly involved are more likely to be accepted and successful.


  1. Committee on Climate Change. (2008) Building a low-carbon economy – The UK’s contribution to tackling climate change. http://www.theccc.org.uk/pdfs/key%20messages%20-%2012%20page%20version.pdf
  2. Gross, R. (2004) Technologies and innovation for system change in the UK: status, prospects, and system requirements of some leading renewable energy options, Energy Policy, No. 17, November, pp. 1905-1919. http://dx.doi.org/10.1016/j.enpol.2004.03.017
  3. Boyle, G. (1996) Renewable Energy: Power for a Sustainable Future, Oxford: Oxford University Press and The Open University
  4. Botta, G., Cavaliere, M., Viani, S., and Pospí, S. (1998) Effects of hostile terrains on wind turbine performances and loads: The Acqua Spruzza experience, Journal of Wind Engineering and Industrial Aerodynamics, Vol. 74-76, pp. 419-431. http://dx.doi.org/10.1016/S0167-6105(98)00038-5
  5. Baban, S., and Parry, T. (2001) Developing and applying a GIS-assisted approach to locating wind farms in the UK, Renewable Energy, Vol. 24, September, pp. 59-71. http://dx.doi.org/10.1016/S0960-1481(00)00169-5
  6. Cowell, R. (2009) Wind power, landscape and strategic, spatial planning – The construction of ‘acceptable locations’ in Wales, Land Use Policy. http://dx.doi.org/10.1016/j.landusepol.2009.01.006
  7. Dartmoor National Park Authority. (2009) Maps. http://www.dartmoor-npa.gov.uk/text/vi-map-nationalparks-citiesmotorways.gif
  8. Toke, D. (2005) Explaining wind power planning outcomes: some findings from a study in England and Wales, Energy Policy, Vol. 33, No. 12, August, pp. 1527-1539. http://dx.doi.org/10.1016/j.enpol.2004.01.009
  9. Krohn, S. (2002) The Wind Turbine Market in Denmark
  10. Christensen, P., and Lund, H. (1998) Conflicting views of sustainability: The case of wind power and nature conservation in Denmark, European Environment
  11. Devine-Wright, P., McAlpine, G., et al. (2001) Wind turbines in the landscape: an evaluation of local community involvement and other considerations in UK wind farm development. In: 32nd Annual Conference of the Environmental Design Research Association, Edinburgh, Scotland
  12. Loring, J. M. (2006) Wind energy planning in England, Wales and Denmark: Factors influencing project success, Energy Policy, Vol. 35, No. 4, April, pp. 2648-2660. http://dx.doi.org/10.1016/j.enpol.2006.10.008

Posted in Renewables | Comments Off

Server room temperature myth busting – energy savings, disk failure and temperature

Saving energy in server rooms is often the last area tackled in large office environments. This may be due to a lack of understanding of what can be done to save energy, and also to barriers created by out-of-date information about server temperature requirements. On this point, a particular incident comes to mind. Whilst conducting an energy assessment of a large office, I saw the IT manager of the civic centre turn the server room cooling down from 18 degrees C to 16 degrees C in response to a recent server drive failure. From witnessing actions like this, and from several discussions on server room temperature, it seems standard in many IT departments for professionals to overcool servers out of a paranoid fear of disk failure. I suspect this IT manager might not have turned down the thermostat so quickly had he read some of the studies mentioned here.

In this post, I am going to look at some surprising studies and recent changes in thought around server temperature, drive failure rates, and energy savings.

In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) changed their recommended server room temperature and humidity ranges. The table of changes, taken from ‘2008 ASHRAE Environmental Guidelines for Datacom Equipment’ [1], is displayed below:

The changes in recommended temperature and humidity ranges have been driven by the demand to reduce energy consumption in server room cooling. ASHRAE’s changes mean that free cooling, delivered to some server rooms through an economiser, can operate more often in place of energy-hungry chillers and so achieve greater savings. Free cooling limits the dependence on electrically driven chilling, which according to a recent paper [2] contributes 33% of typical data centre electricity consumption.

But a lot of servers don’t have free cooling, so are there savings in these cases?

A 2009 study published by Dell, titled ‘Energy impact of increased server inlet temperature’ [3], addresses this question through controlled testing of a variety of server configurations, loads, and cooling arrangements. The study highlights that there are three types of inbuilt server fan arrangement:

1. Fixed-speed inbuilt cooling fans whose speed does not depend on inlet temperature.
2. Stepped-speed inbuilt cooling fans that change speed at server inlet temperature thresholds.
3. Variable-speed inbuilt cooling fans that vary their speed smoothly as inlet temperature increases.

This variability in inbuilt server fan speed means that, for the second and third fan types listed above, additional power is drawn as inlet temperature increases. There is therefore a trade-off between the reduced chiller cooling load and the increased inbuilt server fan power. The findings of the study show that, depending on the server mix, cooling arrangement and load conditions, there is a sweet spot of minimum energy consumption, lying at around 24 to 27 degrees C inlet temperature. A diagram of one of the tests is displayed below:
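The trade-off can be sketched with a toy model. The two cost curves below are hypothetical illustrations, not the Dell study’s data: chiller work falls as the set-point rises, while inbuilt fan power rises roughly with the cube of fan speed, so a minimum appears in between:

```python
# Toy model (illustrative constants, not measured data): total cooling energy
# is chiller power, which falls as the set-point rises, plus inbuilt server
# fan power, which rises steeply because fan power scales with speed cubed.

def chiller_power(t_inlet):
    # Hypothetical: chiller work shrinks as the set-point rises
    return max(0.0, 60.0 - 2.0 * t_inlet)  # kW

def fan_power(t_inlet):
    # Hypothetical: fans ramp up above ~22 C; power scales with speed cubed
    speed = 1.0 + max(0.0, t_inlet - 22.0) * 0.08
    return 5.0 * speed ** 3  # kW

def total_power(t_inlet):
    return chiller_power(t_inlet) + fan_power(t_inlet)

best = min(range(16, 32), key=total_power)
print(f"Minimum total power at around {best} C inlet temperature")
```

With these assumed constants the minimum lands inside the 24 to 27 degrees C band; the real sweet spot depends entirely on the actual server mix and cooling plant.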

So raising server inlet temperature to between 24 and 27 degrees C can result in energy savings, but is it safe?

Yes, a lot of evidence suggests that it is. Google’s 2007 publication ‘Failure Trends in Large Disk Drive Populations’ [4] presents a study of the largest disk drive population examined to that date. The study highlights that ‘Contrary to previously reported results, we found very little correlation between failure rates and either elevated temperature or activity levels’. In fact, the study showed a clear trend of lower temperatures correlating with higher failure rates, with only a slight reversal of this trend at very high temperatures.

The following two graphs show key results of the study. The first shows the average failure rate (AFR) as a function of temperature (depicted as dots with error bars). The second graph shows the AFR as a function of temperature and drive age.

These graphs suggest that the IT manager’s decision to turn the set-point down from 18 to 16 degrees C may actually increase the risk of disk drive failure. It would have been interesting to measure the inlet temperature of the drive that failed.

In this post I have tried to present a clear argument for evaluating server temperatures. I would suggest thoroughly profiling the temperatures in a server room before deciding to increase temperature set-points. My recommendation would be to conduct temperature logging at several server air inlets. Do not rely on the room thermostat to tell you what is going on at the inlet of a server: an air conditioning unit set to 18 degrees C may be incapable of actually reaching that set-point, making its thermostat a poor measure of actual server room temperature. In addition, a server room can exhibit hot spots due to poor air mixing, which could result in some servers being supplied with air at inlet temperatures greatly exceeding those recommended in this post. It would also be worthwhile to conduct power logging to verify the savings before and after any alterations. More case studies are needed to debunk the myths about server room temperature.
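As a sketch of the kind of logging summary I have in mind (the rack names, readings and data layout are all hypothetical):

```python
# Summarise per-inlet temperature logs and flag hot spots.
# Assumed data shape: {inlet_name: [readings in degrees C]}.
logs = {
    "rack-A1": [19.2, 19.8, 20.1, 19.5],
    "rack-B3": [26.7, 27.9, 28.4, 27.1],  # poorly mixed corner (hypothetical)
}

RECOMMENDED_MAX = 27.0  # upper end of the sweet spot discussed above

for inlet, readings in logs.items():
    peak = max(readings)
    mean = sum(readings) / len(readings)
    flag = "HOT SPOT" if peak > RECOMMENDED_MAX else "ok"
    print(f"{inlet}: mean {mean:.1f} C, peak {peak:.1f} C -> {flag}")
```

A summary like this makes it obvious when the room thermostat and the air actually reaching a server disagree.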

Please feel free to comment below.

[1] 2008 ASHRAE Environmental Guidelines for Datacom Equipment (Expanding the Recommended Environmental Envelope) – http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf
[2] Improving Data Center PUE Through Airflow Management – http://www.coolsimsoftware.com/LinkClick.aspx?fileticket=KE7C0jwcFYA%3D&tabid=65
[3] Energy impact of increased server inlet temperature – http://i.dell.com/sites/content/business/solutions/whitepapers/en/Documents/dci-energy-impact-of-increased-inlet-temp.pdf
[4] Failure Trends in Large Disk Drive Populations – http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en//papers/disk_failures.pdf

Posted in Energy efficiency | Comments Off

Variable Speed Drives: are they a viable energy-saving solution for commercial and industrial users?


Variable Speed Drive (VSD), also known as an inverter or Variable Frequency Drive, is the term that describes equipment used to regulate the rotational speed, and hence torque, of an electric motor. Essentially, VSDs are electronic devices attached to a motor to vary its speed in response to a control input, such as temperature or pressure.

The concept is not new. VSDs were initially developed to achieve better process control in the industrial sector, and have more recently become a successful solution for smaller industries and building systems thanks to their potential to save significant amounts of energy. They can be coupled with any motor exhibiting a variable load, but the most common applications are pumps and fans operating in industrial processes or as part of heating, ventilation and air-conditioning (HVAC) systems.

Millions of motors are used by commercial and industrial users around the world; according to the ABB Group, they account for more than 65% of industrial electricity demand. During the design phase of a mechanical project the exact motor load is often unknown, which leads to motors being oversized so as to safely meet the maximum system requirements. This wastes energy, as the motor operates outside its optimum zone. To prevent this, and to gain more control over operations and processes, many commercial and industrial users turn to airflow control vanes, two-speed drives and other solutions. From an energy-saving perspective, however, these solutions are inefficient compared with VSDs. Below we describe how VSDs can save energy and money for commercial and industrial users.

VSDs operation and energy saving principles

A VSD can reduce the energy consumption of a motor by as much as 60%. This is because it controls the speed of the motor by altering the supply frequency, and therefore the power supplied to the motor. Even a small reduction in rotational speed can yield significant savings in the energy the motor consumes.

The next step is to understand, in simple terms, how altering the rotational speed of a motor saves energy. To do so we take a closer look at the so-called affinity laws, which are used in hydraulics to express relationships between the variables involved in the operation and performance of rotary machines such as pumps and fans. The following formulas apply to both axial and centrifugal flows, and express the relationship between head, volumetric flow rate, shaft speed, and power. If we assume that the diameter of the impeller stays the same, we have the following:
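In their standard textbook form, for a fixed impeller diameter, the affinity laws relate two operating points 1 and 2 as:

$$\frac{Q_2}{Q_1} = \frac{N_2}{N_1}, \qquad \frac{H_2}{H_1} = \left(\frac{N_2}{N_1}\right)^{2}, \qquad \frac{P_2}{P_1} = \left(\frac{N_2}{N_1}\right)^{3}$$

That is, flow scales linearly with speed, head with the square of speed, and shaft power with the cube of speed.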


  • Q is the volumetric flow rate (e.g. CFM, GPM or L/s),
  • N is the shaft rotational speed (e.g. rpm),
  • H is the pressure or head developed by the fan/pump (e.g. ft or m), and
  • P is the shaft power (e.g. W).

Now let’s come back to the claim that only a small reduction in rotational speed can significantly reduce the energy consumed by the motor. Assume that the rotational speed, N1, of an industrial pump is reduced by 20%. This means that:

If we combine relationships (3) and (4) we get the following expression:

Consequently, a 20% reduction in rotational speed leads to roughly a 49% reduction in the power requirement. The explanation for these relationships, and therefore for the energy savings achieved by VSDs, lies in the pressure difference across the impeller: when less pressure is produced, less acceleration of the air or fluid across the impeller is required. It is the simultaneous reduction of acceleration and pressure that multiplies the savings.
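The cube law can be checked with a couple of lines of code (a minimal sketch; the function name is mine):

```python
def power_ratio(speed_ratio):
    """Affinity law for centrifugal fans/pumps: shaft power scales with
    the cube of rotational speed (constant impeller diameter)."""
    return speed_ratio ** 3

ratio = power_ratio(0.8)   # 20% speed reduction
saving = 1.0 - ratio
print(f"Power falls to {ratio:.1%} of original; saving of about {saving:.0%}")
```

Power falls to 0.8³ = 51.2% of its original value, i.e. a saving of about 49%, exactly as stated above.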

At this point we should clarify that a VSD does not simply constrain the rotational speed of a motor to a fixed lower level in order to achieve energy savings; that would leave the power input insufficient at times. The main advantage of a VSD is that it varies the rotational speed so that the power input matches the duty required, minimising energy wastage. The graph below shows, in a simplified way, how a system’s operation changes after a VSD is deployed.

When the system operates without a VSD, the power input remains constant regardless of changes in the load output over time, because the controlling device is a throttle or damper. When a VSD is used, the input power is tailored to suit the output duty; the throttle or damper is eliminated, which also saves on maintenance.

As already mentioned, VSDs are not the only way to control an operation, so the reader may wonder why an industrial or commercial user should prefer this solution over others. The following graph shows how much more energy is saved by VSDs than by traditional flow control methods that do not vary rotational speed.

Figure 1 Energy saved by VSD compared to traditional flow control methods

Building system saving example

The following table gives an example of the energy and cost savings that could be achieved by using a typical VSD on a fan in the HVAC system of a hospital. We assumed that the fan works 15 hours a day, 7 days a week, and that electricity costs £0.075 per kWh. Finally, we assumed savings of around 25%, similar to those observed in the HVAC system of Charing Cross Hospital after the installation of VSDs.
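The table’s arithmetic can be reproduced with a short script. The fan motor rating and the installed VSD cost below are assumptions for illustration only; the hours, tariff and 25% saving come from the text:

```python
# Hypothetical figures: motor rating and VSD installed cost are assumed;
# operating hours, tariff and saving fraction follow the example in the text.
fan_rating_kw = 22.0          # assumed fan motor rating (kW)
hours_per_year = 15 * 7 * 52  # 15 h/day, 7 days/week
tariff = 0.075                # £/kWh
saving_fraction = 0.25        # similar to Charing Cross Hospital
vsd_cost = 2500.0             # assumed installed cost of the VSD (£)

annual_kwh = fan_rating_kw * hours_per_year
annual_saving = annual_kwh * saving_fraction * tariff
payback_years = vsd_cost / annual_saving
print(f"Annual saving: £{annual_saving:.0f}, payback: {payback_years:.1f} years")
```

Under these assumptions the VSD pays for itself in well under two years; the real figure obviously depends on the actual motor size, duty and installed cost.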

From the above table it becomes evident that for large buildings where motors operate frequently, the payback period can be very short.

Common applications and Summary

VSDs can provide significant energy savings in applications for Industrial users, HVAC systems and Leisure and Commercial buildings. Listed below are some typical applications for each sector:

Typical applications for Industrial users

  • Primary and secondary air fans
  • Boiler feed, chilled water, river water pumps

Typical HVAC applications

  • Variable air volume – air conditioning systems
  • Supply fans
  • Exhaust air systems, such as dust extraction, paint shop exhaust, and fume cupboards
  • Heating and chilled water pumping, duty/ standby pump sets
  • Refrigeration systems
  • Some modern compressors and chillers

Typical applications for leisure and commercial buildings

  • Swimming pool pumps and ventilation
  • Sports halls, gymnasiums and dance studios
  • Fountains
  • Ice rinks

To wrap up, VSDs are one of the most cost-efficient solutions for reducing energy consumption, carbon emissions and electricity bills. Insulating a building can take thirty years to pay back, while a VSD usually pays for itself in less than two years. It should be noted that the proper operation of a VSD depends heavily on the controls and sensors used; that, however, is a topic for a future blog.

[1] ABB Drives and Motors Catalogue 2011,  ABB standard drives for fans and pumps

[2] The calculation has been made using the online GE motors calculator, http://www.gemotors.com.br/calculator/index.asp#f

Posted in Energy efficiency | Comments Off