4.5 Exergy-to-Work Efficiency Improvements Since 1900


4.5.1 Prime Movers

In a very important sense the industrial revolution was powered by steam. The fuel required to perform a unit of mechanical work (for example, a horsepower-hour or kilowatt-hour) from steam has decreased dramatically since 1800, and even since 1900, although the decline has been very slow since the 1960s. Steam engines have become steadily more efficient since Watt's time, as shown in Figure 4.9. The largest stationary steam piston engines - cross-compound 'triple expansion' engines - generated up to 5 MW at efficiencies above 20 percent (Smil 1999, p. 145). In the case of large stationary or marine steam engines operating under optimal conditions (at constant loads), the thermal efficiency exceeded 15 percent in the best cases. However, the overall efficiency of single expansion (non-compound) coal-burning steam locomotives - the product of engine efficiency and boiler efficiency - was not nearly so high: about 6 percent on average, depending on boiler pressure, temperature, fuel and power output. Results from three sets of experiments in the late 19th century, for locomotives with indicated horsepower ranging from 130 to 692, ranged from 4.7 to 7.7 percent (Dalby 1911, table XXI, p. 847). The more powerful engines were not necessarily the most efficient. The lack of improvement in railway steam engine efficiency opened the door for diesel-electric locomotives, starting around 1930.

Figure 4.9 Performance of steam engines: fuel consumption and thermal efficiency

Factory engines were typically larger than railway engines, but not more efficient. Moreover, transmission losses in factories, where a central engine was connected to a number of machines by a series of leather belts, were enormous. For instance, if a stationary steam engine for a factory with machines operating off belt drives circa 1900 had a thermal efficiency of 6 percent, with 50 percent frictional losses, the net exergy efficiency was 3 percent (Dewhurst 1955, appendices 25-3, 25-4, cited in Schurr and Netschert 1960, footnote 19, p. 55). The Dewhurst estimate, which took into account these transmission losses, set the average efficiency of conversion of coal energy into mechanical work at the point of use at 3 percent in 1900 (when most factories still used steam power), increasing to 4.4 percent in 1910 and 7 percent in 1920, when the substitution of electric motors for steam power in US factories was approaching completion (Figure 4.10) (Devine 1982). The use of steam power in railroads was peaking during the same period.

Source: Devine (1982).

Figure 4.10 Sources of mechanical drive in manufacturing establishments (USA, 1869-1939)

A steam-electric central generating plant together with its (local) transmission and distribution system achieved around 3 percent efficiency by 1900, and probably double (6 percent net) by 1910. Thermal power plants operated at nearly 10 percent (on average) by 1920 and reached 33 percent in the mid-1960s. Electric motors in factories were already capable of 80 percent or so efficiency in reconverting electric power to rotary motion, rising to 90 percent plus in recent times.9 So, the combined efficiency of the generator-motor combination was at least 8 percent by 1920; it reached 20 percent by mid-century and nearly 30 percent by 1960. Hence the overall efficiency gain in this case (from 1920 to 1960) was of the order of fivefold - more than enough to explain the shift to electric power in factories. Motor drive for pumps, compressors and machine tools of various types, but excluding air-conditioning and refrigeration, accounted for nearly 45 percent of total electricity use in the peak year (1927), but the industrial share of motor use has declined quite steadily since then to around 23 percent in the year 2000 (Ayres et al. 2003).
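The combined generator-motor figures quoted above are simple products of the stage efficiencies. A minimal sketch, using the approximate percentages from the text:

```python
# Illustrative sketch: the overall efficiency of delivering mechanical work
# via electricity is the product of the generation/distribution efficiency
# and the motor efficiency. Values are the round numbers quoted in the text.

def overall_efficiency(*stages: float) -> float:
    """Multiply the efficiencies of conversion stages in series."""
    result = 1.0
    for eta in stages:
        result *= eta
    return result

eta_1920 = overall_efficiency(0.10, 0.80)  # ~10% plants, ~80% motors
eta_1960 = overall_efficiency(0.33, 0.90)  # ~33% plants, ~90% motors

print(f"1920: {eta_1920:.0%}, 1960: {eta_1960:.0%}")
```

The same chained-multiplication logic applies to any series of conversion stages, which is why the text repeatedly multiplies engine, transmission and load-penalty factors.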

In the case of railroad steam locomotives, average thermal efficiency circa 1920 according to another estimate was about 10 percent, whereas a diesel-electric locomotive half a century later, circa 1970, achieved 35 percent (Summers 1971). Internal friction and transmission losses and variable load penalty are apparently not reflected in either figure, but they would have been similar in percentage terms in the two cases. If these losses amounted to 30 percent, the two estimates (Dewhurst's and Summers') are consistent for 1920. Old coal-burning steam locomotives circa 1950 still only achieved 7.5 percent thermal efficiency; however, newer oil-burning steam engines at that time obtained 10 percent efficiency and a few coal-fired gas turbines got 17 percent (Ayres and Scarlott 1952, tables 6, 7). But the corresponding efficiency of diesel-electric locomotives circa 1950 was 28 percent, taking internal losses into account (ibid., tables 7, 8). The substitution of diesel-electric for steam locomotives in the US began in the 1930s and accelerated in the 1950s (see Figure 4.11).

The most attractive source of power for electricity generation has always been falling water and hydraulic turbines. Hydraulic turbines were already achieving 80 percent efficiency by 1900. The first 'large-scale' hydro-electric power plant in the US was built in 1894-5 at Niagara Falls. Alternating current was introduced at that time by Westinghouse, using Tesla's technology, for transmission beyond a few miles. The facility served local industry as well as nearby Buffalo. But most of the electricity consumers at that time were not located close to hydro-electric sites, so coal-fired steam-electric generation soon dominated the US industry.

On the other hand, Norway, Sweden, Switzerland, Austria, France, Canada and Japan relied entirely or mainly on hydro-electric power until the 1930s, and all but Japan, France and Sweden still do. Meanwhile Egypt, Brazil and Russia have also invested heavily in hydro-electric power, and China is doing so now. Unfortunately, most of the rest of the world does not have significant hydraulic resources today. Needless to say, those countries with hydro-electric power produce useful work more efficiently, on average, than the rest of the world.

Figure 4.11 Substitution of diesel for steam locomotives in the USA, 1935-57

In the case of steam-electric power, the so-called 'heat rate' in the US has fallen from 90,000 Btu/kWh in 1900 to just about 11,000 Btu/kWh by 1970 and 10,000 Btu/kWh today.10 The heat rate is the inverse of conversion efficiency, which has increased by nearly a factor of ten, from 3.6 percent in 1900 or so to nearly 33 percent on average (including distribution losses). The declining price and increasing demand for electric power are shown in Figure 4.12.
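The conversion between heat rate and efficiency is straightforward: one kWh is equivalent to roughly 3412 Btu, so efficiency is 3412 divided by the heat rate. A hedged sketch using the values quoted above (the 1900 result comes out slightly above the 3.6 percent in the text, which presumably reflects additional distribution losses):

```python
# Sketch: converting the utility 'heat rate' (Btu of fuel burned per kWh of
# electricity delivered) into a conversion efficiency. 1 kWh ~ 3412 Btu.

BTU_PER_KWH = 3412.0

def heat_rate_to_efficiency(heat_rate_btu_per_kwh: float) -> float:
    """Fraction of fuel energy converted to electricity."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

for year, hr in [(1900, 90_000), (1970, 11_000), (2000, 10_000)]:
    print(year, f"{heat_rate_to_efficiency(hr):.1%}")
```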

Steam-turbine design improvements and scaling up to larger sizes accounted for most of the early improvements. The use of pulverized coal, beginning in 1920, accounted for major gains in the 1920s and 1930s. Better designs and metallurgical advances permitting higher temperatures and pressures accounted for further improvements in the 1950s. Since 1960, however, efficiency improvements have been very slow, largely because existing turbine steel alloys are close to their maximum temperature limits, and almost all power plants are 'central', meaning that they are very large, located far from central cities and therefore unable to utilize waste heat productively.

Figure 4.12 Index of total electricity production by electric utilities (1902 = 1) and average energy conversion efficiency (USA, 1902-98)

The retail price of electricity (in constant dollars) to residential and commercial users decreased dramatically prior to 1950 and by a factor of two since then. On the other hand, the consumption of electricity in the US has increased over the same period by a factor of 1200, and continued to increase rapidly even after 1960. This is a prime example of the so-called 'rebound effect'.11 The probable explanation is that a great many new electrical devices and consumer products - from washing machines and refrigerators to electric ranges, water heaters, air-conditioners, TVs and, most recently, PCs and DVD players - were introduced after 1930 or so and penetrated markets gradually (Figures 4.13a and 4.13b).

The work done by internal combustion engines in automobiles, trucks and buses (road transport) must be estimated in a different way. In the case of heavy diesel-powered trucks with a compression ratio in the range of 15:1 to 18:1, operating over long distances at highway speeds, the analysis is comparable to that for railways. The engine power can be optimized for this mode of operation and the parasitic losses for a heavy truck (lights, heating, engine cooling, air-conditioning, power-assisted steering, etc.) are minor. Internal friction and drive-train losses and losses due to variable load operation can conceivably be as low as 20 percent, though 25 percent is probably more realistic.

Figure 4.13a Household electrification (I) (percent of households): telephone, television, refrigerator, range, laundromat, freezer

Figure 4.13b Household electrification (II) (percent of households): air-conditioning, dishwasher, garbage disposal, clothes drier, water heater, electric blanket

For vehicles operating in urban traffic under variable load (stop-start) conditions, the analysis is quite different.12 Gasoline-powered ICEs nowadays (2001) have an average compression ratio between 8:1 and 8.5:1. This has been true since the early 1970s, although average US compression ratios were higher in the 1960s, in the heyday of tetraethyl lead as an anti-knock additive, as shown in Figure 4.14 (Ayres and Ezekoye 1991).

Figure 4.14 Compression ratio in auto engines (USA, 1926-75)

The thermal efficiency of a 'real' fuel-air four-cycle auto (or truck) engine operating at constant speed (2000 rpm) is around 30 percent. By contrast, with a compression ratio of 4:1 (typical of engines in 1920) the maximum theoretical thermal efficiency would have been about 22 percent (Figure 4.15). Internal engine friction would reduce these figures by a factor of about 0.8, while the penalty for variable loads in stop-start urban driving introduces another factor of 0.75. With a manual transmission (the European average), there is a multiplier of 0.95 to account for transmission losses, but for American cars with automatic transmissions the transmission loss is more like 10 percent for small cars, less for larger ones.13 Other parasitic losses (lights, heating, air-conditioning, etc.) must also be subtracted. These items can account for 4.5 bhp on average, and up to 10 bhp for the air-conditioning compressor alone, when it is operating.
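The loss factors just described multiply together. The following is illustrative only: the factors are the round numbers quoted in the text, and parasitic bhp loads are excluded, which is one reason the overall figures in the text (around 8-10 percent) come out lower than this drivetrain-only chain:

```python
# Sketch of the multiplicative loss chain for a gasoline car in stop-start
# urban driving, using the factors quoted in the text. Parasitic loads
# (lights, A/C, etc.) are not included here.

indicated_thermal = 0.30   # 'real' fuel-air cycle engine at constant speed
friction_factor   = 0.80   # internal engine friction
variable_load     = 0.75   # stop-start urban driving penalty
manual_gearbox    = 0.95   # European manual transmission
automatic_gearbox = 0.90   # typical US automatic (small car)

eta_manual = indicated_thermal * friction_factor * variable_load * manual_gearbox
eta_auto   = indicated_thermal * friction_factor * variable_load * automatic_gearbox

print(f"manual: {eta_manual:.1%}, automatic: {eta_auto:.1%}")
```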

The net result of this analysis suggests that for a typical 'mid-size' American car with automatic transmission, the overall exergy efficiency with which the engine converts fuel energy into so-called brake horsepower at the rear wheels - where the tire meets the road - was as low as 8 percent in 1972 (American Physical Society et al. 1975), and perhaps 10 percent for a comparable European or Japanese car of the same size with manual transmission. An earlier but similar analysis based on 1947 data arrived at an estimate of 6.2 percent efficiency for automobiles, based on gasoline input (Ayres and Scarlott 1952).14

Contrary to widespread assumptions, there has been little or no improvement in thermodynamic engine efficiency since the 1970s. Four- and five-speed transmissions, overhead cams, four valves per cylinder, and electronic control and fuel injection have collectively been responsible for perhaps a 15 percent cumulative reduction in engine losses since 1972. Heavier vehicles (light trucks, vans and sports utility vehicles) exhibit lower fuel economy (10.3 mpg in 1972; 17 mpg in 1990). Heavy trucks exhibit still lower fuel economy, around 6 mpg. From 1970 to 1990, overall average motor vehicle fuel economy in the US increased from 12.0 mpg to 16.4 mpg; from 1990 to 1998 there was a very slight further increase, to 17.0 mpg (United States Department of Energy annual).15

Figure 4.15 Internal combustion engine efficiency versus compression ratio

Thanks to regulations known as the Corporate Average Fuel Economy (CAFE) standards, imposed in the aftermath of the 1973-4 Arab oil boycott, the US passenger vehicle fleet of 1990 achieved about 50 percent more vehicle miles per gallon of fuel than in 1972. This was due only partly to drive train efficiency gains, and mainly to weight reductions, smaller engines, improved aerodynamics and better tires. However, these improvements must be classified as secondary, rather than primary, efficiency gains.

A more detailed analysis of energy losses in automobile transportation (circa 1990) that reflects the impact of CAFE standards and distinguishes between urban driving (12.6 percent) and highway driving (20.2 percent) is summarized in Figure 4.16. In that year, passenger cars in the US averaged 20.2 mpg. Unfortunately, the distinction between urban (stop-start) and highway driving is not clear in the highway statistics. Assuming urban vehicle miles traveled (VMT) accounted for something like 40 percent of the total, the average thermodynamic efficiency would have been between 15 and 16 percent.16

Note: U = Urban; H = Highway. Source: Adapted from Green (1994).

Figure 4.16 Breakdown of energy requirements for a typical mid-size automobile (shown for US federal urban and highway driving cycles as a percent of the energy content of the fuel)

As noted above, for heavy diesel-powered trucks with compression ratios in the range of 15:1 to 18:1, operating over long distances at highway speeds, engine power can be optimized and parasitic losses are minor. Overall thermodynamic efficiency for such trucks could be as high as 20 percent, even allowing for friction and parasitic loads.

For aircraft up to 1945, most engines were piston-type spark-ignition ICEs and the fuel was high (100 plus) octane gasoline. Engine efficiencies were comparable to those achieved by a high-compression engine (12:1) under constant load. This would be about 33 percent before corrections for internal losses (a factor of 0.8) and the variable load penalty (a factor of 0.75), or roughly 20 percent overall. Aircraft are even more efficient in cruising, but there are heavy losses in takeoff and some in landing.

Gas turbines began replacing piston engines during World War II, and more rapidly thereafter. The turbo takeover in the commercial aviation market began around 1955 and accelerated in the 1960s. The fuel consumption index fell from an arbitrary value of 100 for the early turbo-jets of 1955 to 55 for the advanced turbo-fans of the year 2000. These improvements can be categorized as thermodynamic. Of course it takes a number of years before a new engine type penetrates the fleet, so fleet averages lag significantly (a decade or so) behind state-of-the-art.

In 1989, the US Environmental Protection Agency calculated that the average thermodynamic efficiency of all motor transportation (including trucks, buses, railroads and aircraft) was 8.33 percent.17 Because of the increasing size of motor vehicles - pickup trucks and so-called sports utility vehicles (SUVs) - sold, it is unlikely that the average efficiency of the transport sector in the US has improved since then. On the other hand, thanks to a combination of factors, such as smaller vehicles and much more intensive use of electrified railways and subways, the corresponding efficiency in Japan reached nearly 15 percent by 1990, although there has been a slight decline subsequently. The efficiency of exergy use in Japan is reviewed in Chapter 6.

4.5.2 Direct Heat and Quasi-work

A declining, but still considerable, fraction of the fuel inputs to the economy is still used for heat (Figures 4.4a and 4.4b). Process heat and space heat do not 'perform work' in the usual sense, except in heat engines. However, process improvements that exploit improvements in heat transfer and utilization may be classed as thermodynamic efficiency gains, no less than the use of turbo-chargers or recuperators in modern auto, truck or aircraft engines. It is possible in some cases to calculate the minimum theoretical exergy requirement for the process or end-use in question and compare it with actual consumption in current practice. The ratio of theoretical minimum to actual exergy consumption - for an endothermic process - is known as the 'second-law efficiency' (American Physical Society et al. 1975). The product of the second-law efficiency times the exergy input can be regarded as 'useful' heat delivered to the point of use, or 'quasi-work'.

There are three different cases. The first is high temperature heat (say, greater than 600° C), which drives endothermic processes such as carbo-thermic metal smelting, casting and forging, cement manufacturing, lime calcination, brick manufacturing and glass-making, plus some endothermic chemical processes like ammonia synthesis and petroleum refining (for example, cracking). The second case is intermediate temperature heat, namely 100° C to 600° C, but mostly less than 200° C and mostly delivered to the point of use by steam. The third case is low temperature heat at temperatures below 100° C, primarily for hot water or space heat.

We know of very little published data allocating industrial heat requirements by temperature among these cases. Based on a detailed 1972 survey covering 67 four-digit SIC groups and 170 processes, it appears that roughly half of all US industrial process heat was required at temperatures greater than 600° C and most of the rest was in the intermediate category (Lovins 1977, figure 4.1). We assume hereafter that this allocation has been constant over time, although it may well have changed.

Intermediate and low temperature heat is required for many industrial purposes, usually delivered to the point of use via steam. Examples include increasing the solubility of solids in liquids, accelerating dehydration and evaporation (for example, in distillation units), liquefaction of solids or viscous liquids for easier transportation or mixing and acceleration of desired chemical reactions, many of which are temperature dependent. For purposes of back-casting to 1900, we have assumed that all coke and coke oven gas, as well as half of the natural gas allocated to industry, as opposed to residential and commercial usage, were used for high temperature processes. Most of the rest of the fuels used for industrial purposes are assumed to be for steam generation.

We consider high temperature industrial heat first. The iron and steel industry is the obvious exemplar. In this case, the carbon efficiency of reduction from ore might appear to be a reasonable surrogate, since the reducing agent for iron ore is carbon monoxide. Thus the C/Fe (carbon to iron) ratio is a true measure of efficiency, as regards the use of this resource. There was a reduction from about 1.5 tons C per ton Fe in 1900 to a little less than 1 ton per ton in 1950, or about 0.1 tons of carbon per ton of steel saved per decade. Total energy consumption for iron smelting has declined at almost the same rate, however. In 1900 the average was about 55 MJ/kg.

From 1953 to 1974 total exergy consumption per ton of steel declined by 35 percent (adjusted for the 1973 ratio of pig iron to crude steel), while the carbon rate (coke to iron) declined even more, by 45 percent. During that period fuel oil replaced some of the coke, while electric power consumption for electric arc furnaces (EAFs) increased significantly (National Research Council National Academy of Sciences 1989). In 1973 the average exergy consumption was 20.5 GJ per tonne of steel in the US (with 36 percent EAF in that year), as compared to 18.5 GJ/t in Japan (30 percent EAF) and 24.5 GJ/t in Canada (Elliott 1991). The rate of improvement has certainly slowed since then, but the final closure of the last open hearth furnaces and the replacement of ingot casting by continuous casting have continued, as has the penetration of EAF scrap-melting furnaces as a share of the whole.

A recent study of the steel sector provides a useful update (de Beer 1998). A 'reference' integrated steel plant described in that study consumes a total of 22.6 GJ/t of exergy inputs, of which 20.2 is coal and 1.87 is the exergy content of scrap.18 Rolled steel output embodies 6.62 GJ/t, with other useful by-products from gas to tar and slag accounting for a further 4.28 GJ/t. The remaining 11.62 GJ/t is lost exergy. The second-law efficiency of such a plant would be very nearly 50 percent, counting salable by-products. Significant improvements are still possible, at least in terms of the primary product: the author expects future plants to achieve 12 GJ/t (with smaller by-product output, of course). EAF melting of scrap is much more exergy-efficient, current state-of-the-art being around 7 GJ/t, with near-term improvement potential of roughly half of this, or 3.0 GJ/t.
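As a quick consistency check on these figures, the exergy balance and second-law efficiency of the reference plant can be sketched directly from the GJ/t values quoted above (small differences from the quoted 11.62 GJ/t reflect rounding in the source):

```python
# Hedged back-of-envelope check of the de Beer (1998) reference-plant figures
# quoted in the text (all in GJ per tonne of rolled steel).

inputs      = 22.6   # total exergy inputs
product     = 6.62   # exergy embodied in rolled steel
by_products = 4.28   # salable by-products (gas, tar, slag)

lost = inputs - product - by_products
second_law = (product + by_products) / inputs  # counting by-products as useful

print(f"lost exergy: {lost:.2f} GJ/t, second-law efficiency: {second_law:.1%}")
```

The result, about 48 percent, is the basis for the 'very nearly 50 percent' statement above.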

Fairly detailed static (single-year) exergy analyses have been carried out for a number of major energy-consuming industries, including iron and steel, aluminum, copper, chlor-alkali, pulp and paper and petroleum refining. The calculated second-law efficiencies based on 1970-72 data were as follows: iron and steel 22.6 percent, primary aluminum 13.3 percent,19 cement production 10.1 percent and petroleum refining 9.1 percent (for example, Gyftopoulos et al. 1974; Hall et al. 1975; Ayres 1989c). The real question is how much improvement took place from 1900 to 1972.

If the 1974 performance was equivalent to a second-law efficiency of 22.6 percent - as noted above - the 1953 efficiency must have been about 14.5 percent and the efficiency in 1900 was probably between 9 and 10 percent, based on coke rates. If the best available technologies circa 1973 had been used, the second-law efficiencies would have been 35 percent for iron and steel, 12 percent for petroleum refining, 16.8 percent for aluminum and 17 percent for cement (Gyftopoulos et al. 1974). A 25 percent average efficiency for all high temperature industrial processes is probably a fair guess. Given a 20-year half-life for industrial plants (Landsberg et al. 1963; Salter 1960), it is probably safe to assume that the best-practice figures for 1975 became 'average' by 1995, due to incremental improvements and replacement of the least efficient facilities. If the overall second-law efficiency of the industrial sector's use of high temperature process heat was 25 percent in 1975, it is unlikely to be much better than that - perhaps 30 percent - in 2000. In countries industrializing from scratch (for example, South Korea), process efficiencies in recent years are likely to be a little higher, due to newer equipment.

Though exothermic in principle, pulp and paper manufacturing is a major energy consumer (2600 PJ in 1985 and 2790 PJ in 1994 - about 3 percent of the US national total). About half of the total energy (exergy) consumed was purchased electricity or fuel. The best short-term measure of progress in the pulp and paper industry is tons of paper output per unit of fuel (exergy) input. A similar measure would be applicable to the copper mining and smelting sector, which is also exothermic in principle (for sulfide ores). Unfortunately, we do not have reliable historical data for either of these industries. The major opportunity for future improvement is to make fuller use of the exergy content of the pulpwood feedstock, of which less than half (in mass terms) is incorporated in most grades of paper. (The exception is newsprint, which is made by a different process, known as mechanical pulping, that does not separate the cellulose from the hemi-cellulose and lignin fractions.)

For kraft (that is, 'strong') paper, the consumption of purchased energy per unit of output in the US has fallen more or less continuously, from 41.1 GJ per metric ton (air dried) in 1972 to 35.6 GJ/t in 1988 (Herzog and Tester 1991). Those improvements were largely triggered by the so-called 'oil crisis' of 1973-4, as well as environmental regulations on the disposal of so-called 'black liquor'. However, it is noteworthy that the state-of-the-art (best-practice) plant in 1988 consumed only 25 GJ/t or 70 percent as much energy as the average. Adoption of advanced technologies now being developed could bring this down to 18 GJ/t by 2010. At present, wet lignin waste is burned in a furnace for both heat and chemical recovery, but the first-law efficiency of that process is low (about 65 percent compared to 90 percent for a gas-fired furnace) (Herzog and Tester 1991). Gasification of the lignin waste followed by gas-turbine co-generation offers the potential of becoming self-sufficient in both heat and electricity (ibid).20

Significant process improvements have been recorded in the chemical industry. An example for which a time series is available is high density polyethylene (HDPE). This plastic was first synthesized in the 1930s and is now one of the most important industrial materials. In the 1940s energy requirements were 18 MJ/kg (= GJ/t), falling to 11.5 MJ/kg in the 1950s. Improvements in compressors reduced this to 9.4 MJ/kg on average in the 1970s. But Union Carbide's UNIPOL process, introduced in 1968, achieved 8.15 MJ/kg, which dropped to 4.75 MJ/kg in 1977 and 1.58 MJ/kg as of 1988 (Joyce 1991). This roughly ten-fold reduction in energy requirements is one of the reasons why prices have fallen and demand has risen accordingly.

Nitrogen fixation is another example for which data are available. The electric arc process (circa 1905) required 250 GJ/t; the cyanamide process introduced a few years later (circa 1910) reduced this to something like 180 GJ/t. The Haber-Bosch ammonia synthesis process - the original version of the process now employed everywhere - achieved 100 GJ/t by 1920 (using coal as a feedstock) (Smil 2001, appendix K). Incremental improvements and increasing scale of production brought the exergy consumption down steadily: to 95 GJ/t in 1930, 88 GJ/t in 1940 and 85 GJ/t in 1950 (ibid.). Natural gas replaced coal as a feedstock subsequently, and the reciprocating compressors of the older plants were replaced by centrifugal turbo-compressors which enabled much higher compression ratios. By 1955 exergy requirements of the best plants had dropped to 55 GJ/t, and by 1966 it was down to 40 GJ/t. Global production soared, from 5 MMT in 1950 to around 100 MMT today. Since 1950 the decline in exergy cost has been more gradual, to 27 GJ/t in 1996 and 26 GJ/t in 2000 (ibid.). According to one author, the theoretical minimum for this process is 24.1 GJ/t (de Beer 1998, chapter 6). Smil states that the stoichiometric exergy requirement for the process is 20.9 GJ/t (Smil 2001). The latter implies that the second-law efficiency of ammonia synthesis rose from 8.3 percent in 1905 to over 77 percent in 2000. Clearly there is not much more room for improvement in this case.
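The second-law efficiency trajectory for nitrogen fixation can be sketched from the figures quoted above, taking Smil's stoichiometric requirement of 20.9 GJ/t as the theoretical minimum (small differences from the 8.3 and 'over 77' percent quoted in the text are rounding):

```python
# Sketch: second-law efficiency of nitrogen fixation over time, using the
# stoichiometric minimum and the actual exergy consumption values (GJ/t NH3)
# quoted in the text from Smil (2001).

STOICHIOMETRIC = 20.9  # GJ/t, theoretical minimum

actual = {1905: 250.0, 1920: 100.0, 1950: 85.0, 1966: 40.0, 2000: 26.0}

efficiency = {year: STOICHIOMETRIC / gj for year, gj in actual.items()}
for year, eta in sorted(efficiency.items()):
    print(year, f"{eta:.1%}")
```

The 2000 figure is already above 80 percent of the stoichiometric minimum, which is why the text concludes there is little room left for improvement.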

Synthetic soda ash produced via the Solvay process is another documented case. The first plant (circa 1880) required 54.6 GJ/t. By 1900 this had fallen by 50 percent to 27 GJ/t, and by 1912 it was down to 25 GJ/t. Progress then accelerated briefly during World War I and the early postwar years. However, from 1925 to 1967, improvement was very slow (from 15 GJ/t to 12.9 GJ/t). Historical efficiency improvements for pulp and paper, ammonia, HDPE and soda ash are plotted in Figure 4.17, along with steel.

Extrapolating back to 1900 is always problematic. Except for the above examples, it is difficult to estimate an efficiency figure for 1920 or 1900, since for many industries there are virtually no time series data, at least

Figure 4.17 Exergy consumption by industrial processes (USA, 1880-2000)

in a convenient form. If one takes the efficiency improvement in the steel industry (roughly three-fold) as a model for the efficiency gains for high temperature heat elsewhere in manufacturing, it would follow that the average exergy efficiency of high temperature heat use in the industrial sector as a whole was around 9.5 percent in 1900. We make this assumption in Table 4.1 in Section 4.7.

As mentioned above, the second-law approach is also applicable to the use of direct heat for steam generation in the industrial sector and for space heating, water heating and cooking in the residential and commercial (R&C) sectors. The most optimistic assumption is 25 percent (American Physical Society et al. 1975; United States Congress Office of Technology Assessment 1983). A British study obtained a lower estimate of 14 percent (Olivier et al. 1983). The technology of boilers has not changed significantly over the years. The differences mainly depend on the temperature of the steam and the efficiency of delivery to the point of use. We think the lower estimate is more realistic. An important difference between this and most earlier (pre-1975) studies is that different measures of efficiency are used. The older studies used what is now termed first-law efficiency, namely the fraction of the chemical energy (enthalpy) of the fuel that is delivered to the furnace walls or the space to be heated.

Based on first-law analysis, in 1950 an open fireplace was about 9 percent efficient, an electric resistance heater was 16.3 percent efficient (allowing for 80 percent losses in the generating plant), synthetic 'town gas' was 31 percent efficient, a hand-fired coal furnace was 46 percent, a coal furnace with a stoker yielded 60 percent and a domestic oil or gas furnace gave 61 percent (Ayres and Scarlott 1952, table 12). Incidentally, the authors calculated that a heat pump with a coefficient-of-performance of four would be 65 percent efficient. However, as noted earlier, if alternative ways of delivering the same amount of comfort to the final user are considered, the above efficiencies are too high. In 1950, space heating accounted for 42 percent of all exergy consumption in the residential and commercial sector, with cooking and hot water adding 2.5 and 3.2 percent respectively.
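The heat pump figure follows from simple arithmetic: a heat pump delivers COP units of heat per unit of electricity, so its fuel-to-heat efficiency is the power plant efficiency times the COP. A minimal sketch using the circa-1950 values above:

```python
# Hedged arithmetic behind the Ayres and Scarlott (1952) heat pump figure:
# fuel-to-heat efficiency = (electricity per unit fuel) * (heat per unit
# electricity, i.e. the coefficient of performance).

plant_efficiency = 0.163   # electricity delivered per unit of fuel (c. 1950)
cop = 4.0                  # coefficient of performance assumed in the text

effective = plant_efficiency * cop
print(f"effective heating efficiency: {effective:.0%}")
```

With a modern 33 percent plant, the same COP would imply a fuel-to-heat efficiency well above 100 percent on a first-law basis, which is the heat pump's fundamental advantage.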

The APS summer study previously cited (American Physical Society et al. 1975) concluded that heat delivered by a conventional central oil or gas furnace to heat the rooms of a typical house to 70° F by means of hot water or hot air would correspond to a second-law efficiency of 6 percent, while the second-law efficiency for water heating was perhaps 3 percent. It made no estimate for cooking on a gas range, but similar arguments suggest that a 3 percent figure might be appropriate in this case too for 1970.

It is difficult to make a meaningful estimate for 1900, since the basic furnace technology from 1900 to 1970 changed very little, except that coal or coke were the fuels of choice in the early part of the century, whereas oil and gas had replaced coal by 1970. The oil burner or gas burner lost considerably less heat up the stack than its clumsy predecessor, and far less than a wood stove or open fireplace. We guess that the heating systems of 1970 were at least twice as efficient as those of 1900, in second-law terms. According to this logic, space heating systems in 1900 were probably 3 percent efficient in second-law terms.

A 'typical' wood-frame house in North America is poorly insulated and uses around eight times as much heat as a well-insulated one (Ayres 1989b). Assuming houses in 1900 were essentially uninsulated, while houses in 1970 were moderately (but not well) insulated, it appears that the overall efficiency of space heating in 1970 was something like 2 percent, whereas houses in 1900 achieved only 0.25 percent at best. It is interesting to note that the overall efficiency of space heating in the US by 1960 had already improved by a factor of seven-plus since 1850, due mainly to the shift from open fireplaces to central heating (Schurr and Netschert 1960, p. 49 footnote). However, we have to point out that most of the gains were due to systems optimization, rather than increased efficiency at the equipment level.

Recent investments in heating system modernization, insulation, upgrading of windows and so forth may conceivably have doubled the 1970 figure by now. Progress since 1970 has been slightly accelerated (thanks to the price increases of the 1970s), but space heating systems are rarely replaced in existing buildings, which have an average life expectancy of more than 50 years, based on average economic depreciation rates of 1.3 percent per annum (Jorgenson 1996). The penetration of new technologies, such as solar heating and electric heat pumps, has been very slow so far.
