Pistons in cylinders first saw use in
steam engines. Scotland's James Watt crafted the first good ones during
the 1770s. A century later, the German inventors Nicolaus Otto and
Gottlieb Daimler introduced gasoline as the fuel, burned directly within
the cylinders. Such motors powered the earliest automobiles. They were
lighter and more mobile than steam engines, more reliable, and easier to
start.
Some single-piston gasoline engines
entered service, but for use with airplanes, most such engines had a
number of pistons, each shuttling back and forth within its own cylinder.
Each piston also had a connecting rod, which pushed on a crank that was
part of a crankshaft. This crankshaft drove the propeller.
The cylinder is closed on one end (the
cylinder head), and the piston fits snugly in the cylinder. The piston
wall is grooved to accommodate rings which fit tightly against the
cylinder wall and help seal the cylinder's open end so that gases cannot
escape from the combustion chamber. The combustion chamber is the area
between the top of the piston and the head of the cylinder when the piston
is at its uppermost point of travel.
The up-and-down movement
of the piston is converted to rotary motion to turn the propeller by the
connecting rod and the crankshaft, just as in most automobiles. Note the
crankshaft, connecting rod, and piston arrangement and
imagine how the movement of the piston is converted to the rotary motion
of the crankshaft. Note particularly how the connecting rod is joined to
the crankshaft in an offset manner.
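The offset joint described above makes this the classic slider-crank mechanism. The following sketch is illustrative only (the function name and the example dimensions are hypothetical, not from the text): it computes the piston's distance from the crankshaft axis for a given crank angle, showing why one full turn of the crank sweeps the piston through a stroke of twice the crank radius.

```python
import math

def piston_position(theta, crank_radius, rod_length):
    """Piston distance from the crankshaft axis for crank angle theta (radians).

    Slider-crank geometry: the crank pin is offset from the shaft axis by
    crank_radius, and the connecting rod (rod_length) links the pin to the
    piston, which slides along the cylinder axis.
    """
    # Axial projection of the crank throw, plus the axial component
    # of the connecting rod (its off-axis component is taken up by
    # the crank pin's sideways swing).
    return (crank_radius * math.cos(theta)
            + math.sqrt(rod_length**2 - (crank_radius * math.sin(theta))**2))

# Hypothetical example: 5 cm crank throw, 15 cm connecting rod.
tdc = piston_position(0.0, 0.05, 0.15)      # top dead centre: 0.20 m
bdc = piston_position(math.pi, 0.05, 0.15)  # bottom dead centre: 0.10 m
print(round(tdc - bdc, 3))                  # stroke = 0.1 m, i.e. twice the crank radius
```

Turning the crank steadily therefore drives the piston back and forth, and, run in reverse, a firing piston turns the crank.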
The valves at
the top of the cylinder open and close to let in a mixture of fuel and air
and to let out, or exhaust, burned gases from the combustion chamber. The
opening and closing of each valve are done by a cam geared to the
crankshaft; in a four-stroke engine the camshaft turns at half crankshaft
speed, so each valve operates once per two crankshaft revolutions. This
gearing arrangement ensures that the two valves open and close at the
proper times.
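The timing the cam gearing enforces can be sketched as the standard four-stroke cycle. This is an idealized illustration (the function is hypothetical, and real engines open valves slightly early or late): with the camshaft at half crankshaft speed, the cycle spans 720 degrees of crankshaft rotation, 180 degrees per stroke.

```python
def stroke_at(crank_deg):
    """Stroke of an idealized four-stroke cycle at a crankshaft angle (degrees).

    The intake valve is open during the intake stroke and the exhaust
    valve during the exhaust stroke; each therefore opens once per
    720 degrees of crankshaft rotation, i.e. once per camshaft turn.
    """
    strokes = ["intake", "compression", "power", "exhaust"]
    return strokes[int(crank_deg % 720) // 180]

print(stroke_at(90))   # intake
print(stroke_at(450))  # power
```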
Engines built for airplanes had to
produce plenty of power while remaining light in weight. The first
American planebuilders—Wilbur and Orville Wright, Glenn Curtiss—used
motors that resembled those of automobiles. They were heavy and complex
because they used water-filled plumbing to stay cool.
A French engine of 1908, the "Gnome,"
introduced air cooling as a way to eliminate the plumbing and lighten the
weight. It was known as a rotary engine. The Wright and Curtiss motors had
been mounted firmly in supports, with the shaft and propeller spinning.
Rotary engines reversed that, with the shaft being held tightly—and the
engine spinning! The propeller was mounted to the rotating engine, which
stayed cool by having its cylinders whirl within the open air.
Gnome Rotary
During World War I, rotaries attained
tremendous popularity. They were less complex and easier to make than the
water-cooled type. They powered such outstanding fighter planes as
Germany's Fokker Dr.I and Britain's Sopwith Camel. They used castor oil for
lubrication because it did not dissolve in gasoline. However, they tended
to spray this oil all over, making a smelly mess. Worse, they were limited
in power. The best of them reached 260 to 280 horsepower (190 to 210
kilowatts).
Thus, in 1917 a group of American engine
builders returned to water cooling as they sought a 400-horsepower
(300-kilowatt) engine. The engine that resulted, the Liberty, was the most
powerful aircraft engine of its day, with the U.S. auto industry building
more than 20,000 of them. Water-cooled engines built in Europe also
outperformed the air-cooled rotaries, and lasted longer. With the war
continuing until late in 1918, the rotaries lost favour.
Liberty engine
In this fashion, designers returned to
water-cooled motors that again were fixed in position. They stayed cool by
having water or antifreeze flow in channels through the engine to carry
away the heat. A radiator cooled the heated water. In addition to offering
plenty of power, such motors could be completely enclosed within a
streamlined housing, to reduce drag and thus produce higher speeds in
flight. Rolls Royce, Great Britain's leading engine-builder, built only
water-cooled motors.
Air-cooled rotaries were largely out of
the picture after 1920. Even so, air-cooled engines offered tempting
advantages. They dispensed with radiators that leaked, hoses that burst,
cooling jackets that corroded, and water pumps that failed.
Thus, the air-cooled "radial engine"
emerged. This type of air-cooled engine arranged its cylinders to extend
radially outward from its hub, like spokes of a wheel. The U.S. Navy
became an early supporter of radials, which offered reliability along with
light weight. This was an important feature if planes were to take off
successfully from an aircraft carrier's flight deck.
Radial engine
You can see in the illustration that this is a five-cylinder engine -- radial engines typically have anywhere from three to nine cylinders. The radial engine has the same sort of pistons, valves and spark plugs that any four-stroke engine has. The big difference is in the crankshaft.
Instead of the long crankshaft used in a multi-cylinder car engine, there is a single hub -- all of the pistons' connecting rods attach to this hub. One rod is fixed to the hub and is generally known as the master rod. The others, called articulating rods, mount on pins that allow them to pivot as the crankshaft and the pistons
move.
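The odd cylinder counts mentioned above (three, five, nine) are no accident: in a single-row four-stroke radial, firing every other cylinder around the circle visits all of them at even intervals of crankshaft rotation. As an illustrative sketch of this well-known property (the function name is hypothetical):

```python
def radial_firing_order(n):
    """Firing order for a single-row four-stroke radial with n cylinders.

    With an odd n, skipping every other cylinder around the circle
    reaches all n cylinders, giving even firing intervals of
    720/n degrees of crankshaft rotation.
    """
    if n % 2 == 0:
        raise ValueError("single-row four-stroke radials use an odd cylinder count")
    # Cylinder numbers 1..n, firing every second one around the circle.
    return [(2 * i) % n + 1 for i in range(n)]

print(radial_firing_order(5))  # [1, 3, 5, 2, 4]
print(radial_firing_order(9))  # [1, 3, 5, 7, 9, 2, 4, 6, 8]
```

An even cylinder count would force two cylinders to fire at once or the intervals to be uneven, which is why single-row radials settled on odd numbers.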
With financial support from the Navy,
two American firms, Wright Aeronautical and Pratt & Whitney, began
building air-cooled radials. The Wright Whirlwind, in 1924, delivered 220
horsepower (164 kilowatts). A year later, the Pratt & Whitney Wasp was
tested at 410 horsepower (306 kilowatts).
Aircraft designers wanted to build
planes that could fly at high altitudes. High-flying planes could swoop
down on their enemies and also were harder to shoot down. Bombers and
passenger aircraft flying at high altitudes could fly faster because air
is thin at high altitudes and there is less drag in the thinner air. These
planes also could fly farther on a tank of fuel.
But because the air was thinner at altitude, each intake stroke drew in
less oxygen, and engine power fell sharply. Designers responded
by fitting the engine with a "supercharger." This was a pump that took in
air and compressed it. The extra air, fed into an engine, enabled it to
continue to put out full power even at high altitude.
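How severe the problem was can be illustrated with the standard-atmosphere model. This is a rough sketch, not from the text: it assumes (as a common rule of thumb) that an unsupercharged engine's power falls roughly in proportion to air density, which a supercharger counteracts by compressing the intake air back toward sea-level density.

```python
def density_ratio(altitude_m):
    """Approximate ISA air density relative to sea level (troposphere only).

    Uses the standard tropospheric lapse rate; valid up to about 11,000 m.
    """
    T0 = 288.15     # sea-level temperature, K
    L = 0.0065      # temperature lapse rate, K/m
    g = 9.80665     # gravitational acceleration, m/s^2
    R = 287.05      # specific gas constant of air, J/(kg K)
    T = T0 - L * altitude_m
    return (T / T0) ** (g / (R * L) - 1)

# Roughly, unsupercharged power tracks density: barely half remains at 6,000 m.
for alt in (0, 3000, 6000, 9000):
    print(alt, round(density_ratio(alt), 2))
```

At 9,000 metres (about 30,000 feet) the air holds under 40 percent of its sea-level density, which is why a supercharged engine held such a decisive advantage there.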
Early superchargers underwent tests
before the end of World War I, but they were heavy and offered little
advantage. The development of superchargers proved to be technically
demanding, but by 1930 the best British and American engines carried
such units routinely. In the United States, the Army funded work on
superchargers at another engine-builder, General Electric. After 1935,
engines fitted with GE's superchargers gave full power at heights above
30,000 feet (9,000 meters).
Fuels for aviation also demanded
attention. When engine designers tried to build motors with greater power,
they ran into the problem of "knock." This had to do with the way fuel
burned within them. An airplane engine had a carburettor that took in fuel
and air, producing a highly flammable mixture of gasoline vapour with air,
which went into the cylinders. There, this mix was supposed to burn very
rapidly, but in a controlled manner. Unfortunately, the mixture tended to
explode, which damaged engines. The motor then was said to knock.
Engines could avoid knock by running at low compression, but then they
produced little power. Soon after World War I, an American chemist, Thomas
Midgley, determined that small quantities of a suitable chemical added to
high-grade gasoline might help it burn without knock. He tried a number of
additives and found that the best was tetraethyl lead. The U.S. Army began
experiments with leaded aviation fuel as early as 1922; the Navy adopted
it for its carrier-based aircraft in 1926. Leaded gasoline became standard
as a high-test fuel, used widely in automobiles as well as in aircraft.
Leaded gas improved an aircraft engine's
performance by enabling it to use a supercharger more effectively while
using less fuel. The results were spectacular. The best engine of World
War I, the Liberty, developed 400 horsepower (300 kilowatts). In World War
II, Britain's Merlin engine was about the same size—and put out 2,200
horsepower (1,640 kilowatts). Samuel Heron, a long-time leader in the
development of aircraft engines and fuels, writes that "it is probably
true that about half the gain in power was due to fuel."
These advances in supercharging and
knock-resistant fuels laid the groundwork for the engines of World War II.
In 1939, the German test pilot Fritz Wendel flew a piston-powered fighter
to a speed record of 469 miles per hour (755 kilometres per hour). U.S.
bombers used superchargers to carry heavy bomb loads at 34,000 feet
(10,000 meters). They also achieved long range; the B-29 bomber could
fly non-stop from Miami to Seattle. Fighters routinely topped 400
miles per hour (640 kilometres per hour). Airliners, led by the Lockheed
Constellation, showed that they could fly non-stop from coast to coast.
By 1945, the jet engine was drawing both
attention and excitement. Jet fighters came quickly to the forefront.
However, while early jet engines gave dramatic increases in speed, they
showed poor fuel economy. It took time before engine builders learned to
build jets that could sip fuel rather than gulp it. Until that happened,
the piston engine retained its advantage for use in bombers and airliners,
which needed to be able to fly a great distance without refuelling.
Pratt & Whitney was the first to achieve
high thrust with good fuel economy. Its J-57 engine first ran on a test
stand in 1950. Eight such engines powered the
B-52, a jet bomber with intercontinental range that entered service in
1954. Civilian versions of this engine powered the Boeing 707 and Douglas
DC-8, jet airliners that began carrying passengers in 1958 and 1959,
respectively. In this fashion, jet engines conquered nearly the whole of
aviation.