How often are radio telescopes used to measure parallax? When was this first done?

The abstract of A magnetar parallax (also in MNRAS) says:

XTE J1810−197 (J1810) was the first magnetar identified to emit radio pulses, and has been extensively studied during a radio-bright phase in 2003−2008. It is estimated to be relatively nearby compared to other Galactic magnetars, and provides a useful prototype for the physics of high magnetic fields, magnetar velocities, and the plausible connection to extragalactic fast radio bursts. Upon the re-brightening of the magnetar at radio wavelengths in late 2018, we resumed an astrometric campaign on J1810 with the Very Long Baseline Array, and sampled 14 new positions of J1810 over 1.3 years. The phase calibration for the new observations was performed with two phase calibrators that are quasi-colinear on the sky with J1810, enabling substantial improvement of the resultant astrometric precision. Combining our new observations with two archival observations from 2006, we have refined the proper motion and reference position of the magnetar and have measured its annual geometric parallax, the first such measurement for a magnetar. The parallax of 0.40±0.05 mas corresponds to a most probable distance of 2.5+0.4−0.3 kpc for J1810. Our new astrometric results confirm an unremarkable transverse peculiar velocity of ≈200 km s−1 for J1810, which is only at the average level among the pulsar population. The magnetar proper motion vector points back to the central region of a supernova remnant (SNR) at a compatible distance ≈70 kyr ago, but a direct association is disfavored by the estimated SNR age of ~3 kyr.

This reports the first radio astrometric determination of parallax "for a magnetar".


  1. How often are radio telescopes used to measure parallax? Is it done semi-regularly, or is this a rarity, used only in special cases?
  2. When was the parallax of a distant object (i.e., one outside our solar system) first measured using radio astrometric techniques?

Unfortunately "first time" is ambiguous: it could mean the first time ever for this kind of measurement, or the first time for a given object, i.e., nobody knew the parallax of that particular object until the radio astrometric value was determined. It may be too much work to try to answer both. If that's the case, just indicate which kind of "first" is being reported.
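As a quick sanity check of the numbers in the abstract quoted above: to first order, a parallax p in milliarcseconds corresponds to a distance of 1/p kiloparsecs. (The paper's "most probable distance" comes from a proper probability analysis, so this naive inversion only roughly reproduces its asymmetric error bars.)

```python
def parallax_mas_to_kpc(p_mas):
    """Distance in kiloparsecs from an annual parallax in milliarcseconds."""
    return 1.0 / p_mas

d = parallax_mas_to_kpc(0.40)       # J1810's measured parallax
d_near = parallax_mas_to_kpc(0.45)  # parallax + 1 sigma -> nearer bound
d_far = parallax_mas_to_kpc(0.35)   # parallax - 1 sigma -> farther bound
print(f"{d:.1f} kpc (range {d_near:.1f}-{d_far:.1f} kpc)")  # -> 2.5 kpc (range 2.2-2.9 kpc)
```

The simple 1/p inversion lands on the same 2.5 kpc central value as the abstract.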

Observatories Across the Electromagnetic Spectrum

Astronomers use a number of telescopes sensitive to different parts of the electromagnetic spectrum to study objects in space. Even though all light is fundamentally the same thing, the way that astronomers observe light depends on the portion of the spectrum they wish to study.

For example, different detectors are sensitive to different wavelengths of light. In addition, not all light can get through the Earth's atmosphere, so for some wavelengths we have to use telescopes aboard satellites. Even the way we collect the light can change depending on the wavelength. Here we briefly introduce observatories used for each band of the EM spectrum.

A sample of telescopes (operating as of February 2013) at wavelengths across the electromagnetic spectrum. Observatories are placed above or below the portion of the EM spectrum that their primary instrument(s) observe.

The represented observatories are: HESS, Fermi, and Swift for gamma-ray; NuSTAR and Chandra for X-ray; GALEX for ultraviolet; Kepler, Hubble, Keck (I and II), SALT, and Gemini (South) for visible; Spitzer, Herschel, and SOFIA for infrared; Planck and CARMA for microwave; and Spektr-R, Green Bank, and the VLA for radio.

(Credit: Observatory images from NASA; ESA (Herschel and Planck); Lavochkin Association (Spektr-R); HESS Collaboration (HESS); SALT Foundation (SALT); Rick Peterson/WMKO (Keck); Gemini Observatory/AURA (Gemini); CARMA team (CARMA); and NRAO/AUI (Green Bank and VLA). Background image from NASA.)

Radio observatories

This artist's conception shows Earth and the Spektr-R spacecraft with an imagined radio antenna that is created by combining Spektr-R's data with that of Earth-bound radio telescopes. (Credit: Lavochkin Association)

Radio waves can make it through the Earth's atmosphere without significant obstacles. In fact, radio telescopes can observe even on cloudy days. In principle, then, we don't need to put radio telescopes in space. However, space-based radio observatories complement Earth-bound radio telescopes in some important ways.

A special technique used in radio astronomy is called "interferometry." Radio astronomers can combine data from two telescopes that are very far apart and create images that have the same resolution as if they had a single telescope as big as the distance between the two telescopes. This means radio telescope arrays can see incredibly small details. One example is the Very Long Baseline Array (VLBA), which consists of 10 radio observatories that reach from Hawaii to the U.S. Virgin Islands, nearly a third of the way around the world.
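As a rough illustration of why long baselines matter, the diffraction-limited resolution is about the observing wavelength divided by the baseline. The numbers below (a 3.6 cm observing wavelength and the VLBA's roughly 8,600 km longest baseline) are illustrative choices, not values from the text:

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Angular resolution in milliarcseconds: theta ~ wavelength / baseline."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600e3  # radians -> degrees -> mas

# 3.6 cm wavelength on an ~8,600 km baseline gives sub-milliarcsecond detail:
print(f"{resolution_mas(0.036, 8.6e6):.2f} mas")  # -> 0.86 mas
```

Sub-milliarcsecond resolution is exactly the regime needed to measure parallaxes like the 0.40 mas value reported for the magnetar above.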

By putting a radio telescope in orbit around Earth, radio astronomers can make images as if they had a radio telescope the size of the entire planet. The first mission dedicated to space interferometry was the Japanese HALCA mission which ran from 1997 to 2005. The second dedicated mission is the Russian Spektr-R satellite, which launched in 2011.

Microwave observatories

Artist's conception of the European Space Agency's (ESA's) Planck observatory cruising to its orbit. (Credit: ESA/D. Ducros)

The Earth's atmosphere blocks much of the light in the microwave band, so astronomers use satellite-based telescopes to observe cosmic microwaves. The entire sky is a source of microwaves in every direction, most often referred to as the cosmic microwave background (or CMB for short). These microwaves are the remnant of the Big Bang, a term used to describe the early universe.

A very long time ago, all the matter in existence was scrunched together in a very small, hot ball. The ball expanded outward and became our universe as it cooled. Since the Big Bang, which is estimated to have taken place 13.8 billion years ago, the universe has cooled all the way to just under three degrees above absolute zero. It is this "three degrees" that we measure as the microwave background.

The first precise measurements of the temperature of the microwave background across the entire sky were made by the Cosmic Background Explorer (COBE) satellite from 1989 to 1993. Since then, the Wilkinson Microwave Anisotropy Probe (WMAP), operating from 2001 to 2010, refined the COBE measurements. More recently, the Planck mission, launched in 2009, further improved astronomers' understanding of the CMB.

Infrared observatories

Artist's conception of SOFIA flying at sunset (Credit: NASA)

Photograph of the Keck I and II domes at dawn; the Keck telescopes operate at visible and infrared wavelengths. (Credit: Rick Peterson/WMKO)

Infrared astronomy has to overcome a number of challenges. While some infrared radiation can make it through Earth's atmosphere, the longer wavelengths are blocked. But that's not the biggest challenge – everything that has heat emits infrared light. That means that the atmosphere, the telescope, and even the infrared detectors themselves all emit infrared light.

Ground-based infrared telescopes reside at high altitudes in dry climates in an effort to get above much of the water vapor in the atmosphere that absorbs infrared. However, ground-based infrared observatories must still account for the atmosphere in their measurements. To do this, the infrared emission from the atmosphere is measured at the same time as the measurement of the cosmic object being observed. Then, the emission from the atmosphere can be subtracted to get an accurate measurement of the cosmic object. The telescopes, for both ground-based and space/airborne observatories, are also designed to prevent spurious infrared radiation from reaching the detector, and the detectors are cooled to limit their infrared emissions.

In 2003, NASA launched the Spitzer Space Telescope into an earth-trailing, heliocentric orbit, where it did not have to contend with the comparatively warm environment in near-Earth space. Another major infrared facility is the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA carries a large telescope inside a 747 aircraft flying at an altitude sufficient to get it well above most of the Earth's infrared absorbing atmosphere.

Scheduled for launch in 2018, the James Webb Space Telescope is a large space-based observatory optimized for infrared wavelengths. Webb will be able to look further back in time to find the first galaxies that formed in the early Universe and to peer inside dust clouds where stars and planetary systems are forming today. Webb will also be in a heliocentric orbit at the second Lagrange point. To keep the mirror and instruments cold (and allow the telescope to detect the faintest of heat signals from distant objects), it has a giant sunshield, which will block the light and heat from the Earth, Sun, and Moon.

Visible spectrum observatories

The Hubble Space Telescope just after it was captured by the Space Shuttle Atlantis to be serviced in 2009. (Credit: NASA)

Visible light can pass right through our atmosphere, which is why astronomy is as old as humanity. Ancient humans could look up at the night sky and see the stars above them. Today, there is an army of ground-based telescope facilities for visible astronomy (also called "optical astronomy"). However, there are limits to ground-based optical astronomy. As light passes through the atmosphere, it is distorted by the turbulence within the air. Astronomers can improve their chances of a good image by putting observatories on mountain-tops (above some of the atmosphere), but there will still be limits to how crisp their images will be, especially for faint sources.

Visible-light observatories in space avoid the turbulence of the Earth's atmosphere. In addition, they can observe a somewhat wider portion of the electromagnetic spectrum, in particular ultraviolet light that is absorbed by the Earth's atmosphere. The Hubble Space Telescope is perhaps the most famous optical telescope in orbit. Also in orbit is the Kepler observatory. Kepler is using visible light to survey a portion of the Milky Way galaxy to discover planetary systems. The Swift satellite also carries an UltraViolet and Optical Telescope (the UVOT) to perform observations of gamma-ray bursts.

Ultraviolet observatories

Artist's concept of the GALEX satellite in orbit. (Credit: NASA/JPL-Caltech)

The Earth's atmosphere absorbs ultraviolet light, so ultraviolet astronomy must be done using telescopes in space. Other than carefully selected materials for filters, an ultraviolet telescope is much like a regular visible-light telescope. The primary difference is that the ultraviolet telescope must be above Earth's atmosphere to observe cosmic sources.

The GALEX observatory was the most recent dedicated ultraviolet observatory. It was launched in 2003 and shut down operations in 2013. Its goal was to observe the history of star formation in our Universe in ultraviolet wavelengths, and it observed over a half-billion galaxies going back to when our Universe was just about 3 billion years old.

The Hubble Space Telescope and the UltraViolet and Optical Telescope on Swift can both perform a great deal of observing at ultraviolet wavelengths, but they only cover a portion of the spectrum that GALEX observes.

X-ray observatories

Artist's concept of the NuSTAR satellite. (Credit: NASA/JPL-Caltech)

X-ray wavelengths are another portion of the electromagnetic spectrum that are blocked by Earth's atmosphere. X-rays also pose a particular challenge because they are so small and energetic that they don't bounce off mirrors like lower-energy forms of light. Instead, they pass right through unless they just barely graze the surface of the mirror. (Read more about how X-rays are focused on the X-ray Telescope Introduction page.)

Focusing X-ray telescopes require long focal lengths. In other words, the mirrors where light enters the telescope must be separated from the X-ray detectors by several meters. However, launching such a large observatory is costly and limits the launch vehicles to only the most powerful rockets (the Space Shuttle in the case of the Chandra X-ray Observatory).

In 2012, the Nuclear Spectroscopic Telescope Array (or NuSTAR for short) solved this problem with a deployable mast: its mirror module and detector module sit on a mast, or boom, that was extended once the observatory reached orbit. That design allowed NuSTAR to be launched on a low-cost rocket.

Gamma-ray observatories

Artist's concept of the Fermi satellite. (Credit: NASA)

One of the HESS telescopes. (Credit: HESS Collaboration)

Not only are gamma-rays blocked by Earth's atmosphere, but they are even harder than X-rays to focus. In fact, so far, there have been no focusing gamma-ray telescopes. Instead, astronomers rely on alternate ways to determine where in the sky gamma-rays are produced. This can be done using directional properties of the detector or special "masks" that cast gamma-ray shadows on the detector.

The Swift satellite was launched in 2004 to help solve the mystery of gamma-ray bursts. Swift has a gamma-ray detector that can observe half the sky at a time, and if it detects a gamma-ray burst, the satellite can quickly point its X-ray and optical telescopes in the direction of the burst. The Fermi Space Telescope was launched in 2008 and is designed to study energetic phenomena from a variety of cosmic sources, including pulsars, black holes, active galaxies, diffuse gamma-ray emission and gamma-ray bursts.

It might be surprising to know that astronomers can use ground-based astronomy to detect the highest energy gamma-rays. For these gamma-rays, the telescopes don't detect the gamma-rays directly. Instead, they use the atmosphere itself as a detector. The HESS array has been in operation for over 10 years. The array began with four telescopes arranged in a square, and recently added the HESS II telescope to its ranks.

Principles of operation

Radio telescopes vary widely, but they all have two basic components: (1) a large radio antenna and (2) a sensitive radiometer, or radio receiver. The sensitivity of a radio telescope—i.e., the ability to measure weak sources of radio emission—depends both on the area and efficiency of the antenna and on the sensitivity of the radio receiver used to amplify and to detect the signals. For broadband continuum emission over a range of wavelengths, the sensitivity also depends on the bandwidth of the receiver. Because cosmic radio sources are extremely weak, radio telescopes are usually very large—up to hundreds of metres across—and use the most sensitive radio receivers available. Moreover, weak cosmic signals can be easily masked by terrestrial radio interference, and great effort is taken to protect radio telescopes from man-made emissions.

The most familiar type of radio telescope is the radio reflector consisting of a parabolic antenna, which operates in the same manner as a television satellite dish to focus the incoming radiation onto a small antenna called the feed, a term that originated with antennas used for radar transmissions (see figure ). This type of telescope is also known as the dish, or filled-aperture, telescope. In a radio telescope the feed is typically a waveguide horn and transfers the incoming signal to the sensitive radio receiver. Solid-state amplifiers that are cooled to very low temperatures to reduce significantly their internal noise are used to obtain the best possible sensitivity.

In some radio telescopes the parabolic surface is equatorially mounted, with one axis parallel to the rotation axis of Earth. Equatorial mounts are attractive because they allow the telescope to follow a position in the sky as Earth rotates by moving the antenna about a single axis parallel to Earth’s axis of rotation. But equatorially mounted radio telescopes are difficult and expensive to build. In most modern radio telescopes, a digital computer is used to drive the telescope about the azimuth and elevation axes to follow the motion of a radio source across the sky.

In the simplest form of radio telescope, the receiver is placed directly at the focal point of the parabolic reflector, and the detected signal is carried by cable along the feed support structure to a point near the ground where it can be recorded and analyzed. However, it is difficult in this type of system to access the instrumentation for maintenance and repair, and weight restrictions limit the size and number of individual receivers that can be installed on the telescope. More often, a secondary reflector is placed in front of (Cassegrain focus) or behind (Gregorian focus) the focal point of the paraboloid to focus the radiation to a point near the vertex, or centre, of the main reflector. Multiple feeds and receivers may be located at the vertex where there is more room, where weight restrictions are less stringent, and where access for maintenance and repair is more straightforward. Secondary focus systems also have the advantage that both the primary and secondary reflecting surfaces may be carefully shaped so as to improve the gain over that of a simple parabolic antenna.

Earlier radio telescopes used a symmetric tripod or quadrapod structure to hold the feed or secondary reflector, but such an arrangement blocks some of the incoming radiation, and the reflection of signals from the support legs back into the receiver distorts the response. In newer designs, the feed or secondary reflector is placed off the central axis and does not block the incoming signal. Off-axis radio telescopes are thus more sensitive and less affected by interference reflected from the support structure into the feed.

The performance of a radio telescope is limited by various factors. The accuracy of a reflecting surface may depart from the ideal shape because of manufacturing irregularities. Wind load can exert force on the telescope. Thermal deformations cause differential expansion and contraction. As the antenna is pointed to different parts of the sky, deflections occur due to changes in gravitational forces. Departures from a perfect parabolic surface become important when they are a few percent or more of the wavelength of operation. Since small structures can be built with greater precision than larger ones, radio telescopes designed for operation at millimetre wavelengths are typically only a few tens of metres across, whereas those designed for operation at centimetre wavelengths range up to 300 metres (1,000 feet) in diameter. For operation at relatively long metre wavelengths where the reflecting surface need not have an accuracy better than a few centimetres, it becomes practical to build very large fixed structures in which the reflecting surface can be made of simple “chicken wire” fencing or even parallel rows of wires.
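The paragraph's rule of thumb about surface accuracy can be turned into a one-line estimate. Taking "a few percent" to mean 5% (an assumption; the exact threshold varies by design), the shortest usable wavelength for a dish is roughly its surface error divided by that fraction:

```python
def shortest_wavelength_mm(surface_rms_mm, fraction=0.05):
    """Shortest usable wavelength, assuming surface errors start to matter
    once they reach `fraction` (here 5%) of the observing wavelength."""
    return surface_rms_mm / fraction

# A millimetre-wave dish with 25-micron (0.025 mm) surface accuracy can
# work down to roughly half a millimetre:
print(shortest_wavelength_mm(0.025))  # -> 0.5 (mm)
```

This is why millimetre-wave dishes must be small and precise while "chicken wire" suffices at metre wavelengths.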

Traditionally the effect of gravity has been minimized by designing the movable structure to be as stiff as possible in order to reduce the deflections resulting from gravity. A more effective technique, based on the principle of homology, allows the structure to deform under the force of gravity, and the cross section and weight of each member of the movable structure are chosen to cause the gravitational forces to deform the reflecting structure into a new paraboloid with a slightly different focal point. It is then necessary only to move the feed or secondary reflector to maintain optimum performance. Homologous designs have become possible only since the development of computer-aided structural simulations known as the finite element method.

Some radio telescopes, particularly those designed for operation at very short wavelengths, are placed in protective enclosures called radomes that can nearly eliminate the effect of both wind loading and temperature differences throughout the structure. Special materials that exhibit very low absorption and reflection of radio waves have been developed for such structures, but the cost of enclosing a large antenna in a suitable temperature-controlled radome may be almost as much as the cost of the movable antenna itself.

The cost of constructing an antenna with a very large aperture can be greatly reduced by fixing the structure to the ground and moving either the feed or the secondary reflector to steer the beam in the sky. However, for parabolic reflecting surfaces, the beam can be steered in this way over only a limited range of angle without introducing aberration and a loss of signal strength.

Radio telescopes are used to measure broad-bandwidth continuum radiation as well as narrow-bandwidth spectroscopic features due to atomic and molecular lines found in the radio spectrum of astronomical objects. In early radio telescopes, spectroscopic observations were made by tuning a receiver across a sufficiently large frequency range to cover the various frequencies of interest. Because the spectrometer had a narrow frequency range, this procedure was extremely time-consuming, and it greatly restricted observations. Modern radio telescopes observe simultaneously at a large number of frequencies by dividing the signals up into as many as several thousand separate frequency channels that can range over a much larger total bandwidth of tens to hundreds of megahertz.

The most straightforward type of radio spectrometer employs a large number of filters, each tuned to a separate frequency and followed by a separate detector that combines the signal from the various filters to produce a multichannel, or multifrequency, receiver. Alternatively, a single broad-bandwidth signal may be converted into digital form and analyzed by the mathematical process of autocorrelation and Fourier transforms (see below). In order to detect faint signals, the receiver output is often averaged over periods of up to several hours to reduce the effect of noise generated by thermal radiation in the receiver.
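The autocorrelation approach mentioned above can be illustrated with a toy example. This sketch (illustrative only, with a made-up 64-sample tone) relies on the Wiener–Khinchin relation: the power spectrum of a signal is the Fourier transform of its autocorrelation.

```python
import cmath
import math

N = 64
f0 = 8  # injected tone frequency, in units of frequency bins
signal = [math.cos(2 * math.pi * f0 * n / N) for n in range(N)]

# Circular autocorrelation: r[k] = sum_n x[n] * x[(n+k) mod N]
autocorr = [sum(signal[n] * signal[(n + k) % N] for n in range(N))
            for k in range(N)]

# Discrete Fourier transform of the autocorrelation = power spectrum
power = [abs(sum(autocorr[k] * cmath.exp(-2j * math.pi * f * k / N)
                 for k in range(N)))
         for f in range(N)]

# The spectrometer recovers the injected tone's frequency bin:
peak = max(range(N // 2), key=lambda f: power[f])
print(peak)  # -> 8
```

A real correlator does this digitally at enormous bandwidths, but the principle, correlate first, then Fourier transform, is the same.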

How far away is Alpha Centauri?

Here is an example of how to derive a star’s distance from its parallax angle. We will use the parallax reported for Alpha Centauri in a study published in January 2016 in Astronomy & Astrophysics. The value reported by the two authors, 743 ± 1.3 mas (thousandths of an arc second), refers to half the angle that would be obtained by measuring the angular displacement of Alpha Centauri six months apart. All parallax angles reported in the scientific literature always refer to half of the angle measured after six months. This makes the calculations easier. Such an angle subtends the Earth’s orbit radius (i.e., the astronomical unit), observed from the star’s distance.

To obtain the distance to Alpha Centauri, we must consider an imaginary right triangle whose minor leg is the radius of the Earth's orbit (149.6 million km). The major leg, enormously longer than the other, is the distance that separates the Sun from Alpha Centauri. Parallax is the angle opposite the minor leg. Of this right triangle, we know the parallax angle, 743 mas, and the length of the side opposite it: 1 au or one astronomical unit. We lack the length of the side adjacent to the parallax angle, which represents precisely the distance of Alpha Centauri from the Sun. The operation to be carried out is as follows:

distance = 1 au ÷ tan(parallax angle)

However, before carrying out the calculation, we must transform the parallax angle from arc seconds into radians [2], the unit of measurement of the angles used in this type of trigonometric operations. The transformation returns a value of 3.602 × 10⁻⁶ radians, i.e., 3.6 millionths of a radian, testifying that parallax angles are really tiny [3]. The calculation to be carried out becomes:

distance = 1 au ÷ tan(3.602 × 10⁻⁶ rad) ≈ 1 ÷ (3.602 × 10⁻⁶) au

The result is 277,624 astronomical units. The binary pair formed by Alpha Centauri A and B is thus almost 280,000 times the average Earth–Sun distance away from us. It is an abyss of space equivalent to 4.153 × 10¹³ km, or just over 41.5 trillion kilometers!

Since handling such large numbers is impractical, star distances are usually expressed in light-years. A light-year is the distance traveled by light in vacuum in a year and corresponds to 63,241 astronomical units or 9.461 × 10¹² km (9,461 billion km). By transforming astronomical units into light-years, we finally obtain that Alpha Centauri is 4.39 light-years from Earth. And this is how, from the measurement of a star’s parallax angle, we quickly came to derive its distance.

It must be said that, in the real world, measurements are never perfect, and astronomy is no exception. In fact, the study cited above informs us that Alpha Centauri's parallax angle is associated with an error of ± 1.3 thousandths of an arc second. It means that the actual value of that angle could be anywhere between 741.7 and 744.3 thousandths of an arc second. Using the same procedure as before, let's see what this margin of uncertainty corresponds to, transposed into astronomical units.

First, we transform the two values into radians. We get 3.5959 × 10⁻⁶ radians for the smallest value and 3.6085 × 10⁻⁶ radians for the largest. Now, we perform two divisions similar to the one performed above using the parallax angle's average value:

distance (largest angle) = 1 ÷ (3.6085 × 10⁻⁶) au ≈ 277,123 au
distance (smallest angle) = 1 ÷ (3.5959 × 10⁻⁶) au ≈ 278,094 au

We get a minimum distance of 277,123 astronomical units (4.382 light-years) and a maximum distance of 278,094 astronomical units (4.397 light-years). That's a difference of 971 astronomical units, or over 145 billion km. It may seem small, but it is a significant uncertainty, considering that Alpha Centauri is the closest star system to Earth.
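The whole calculation above can be reproduced in a few lines. A sketch in Python, using the same small-angle setup (tiny differences from the article's rounded figures come from carrying more digits through the intermediate steps):

```python
import math

MAS_TO_RAD = math.radians(1 / 3600e3)  # one milliarcsecond in radians
AU_PER_LY = 63241.077                  # astronomical units per light-year

def distance_au(parallax_mas):
    """Distance in au from a parallax in mas: d = 1 / tan(p)."""
    return 1.0 / math.tan(parallax_mas * MAS_TO_RAD)

# Largest angle -> nearest distance, smallest angle -> farthest distance
for p in (744.3, 743.0, 741.7):
    d_au = distance_au(p)
    print(f"p = {p} mas: {d_au:,.0f} au = {d_au / AU_PER_LY:.3f} ly")
```

Running this reproduces the article's ~4.38–4.40 light-year range for Alpha Centauri.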


The nearest stars, the triplet Alpha Centauri A, Alpha Centauri B and Proxima Centauri, are roughly 1000 times more distant, approximately 40.7 trillion km (25.3 trillion mi). Such huge distances are often given in terms of light-years, namely the distance that light travels in a Julian year of 365.25 days (9.461 trillion km or 5.879 trillion mi). Thus the Alpha/Proxima Centauri star system is roughly 4.3 light-years away. The Milky Way galaxy consists of some 300 billion stars in a spiral-shaped conglomerate roughly 100,000 light-years across.

The nearest spiral galaxy is the Andromeda Galaxy, which can be seen with many home telescopes. It is roughly 2.54 million light-years away. There are hundreds of billions of galaxies in the observable universe. As of the present date, the most distant observed galaxy is some 13.2 billion light-years away, which is more than 5000 times more distant than the Andromeda Galaxy. The age of the universe itself is currently estimated to be 13.75 billion years (plus or minus 0.011 billion years), so this galaxy must have formed soon after the big bang. An interesting online tool, which one can use to determine first-hand the age of the universe from known data, is available at [WMAP2009].

The scope of the universe is perhaps best illustrated by an example given by Australian astrophysicist Geraint Lewis. He noted that if the entire Milky Way galaxy is represented by a small coin, roughly one cm across, then the Andromeda galaxy would be another small coin roughly 25 cm (10 in) away. The observable universe would then extend for 5 km (3 mi) in every direction, encompassing some 300 billion galaxies (and roughly 3 × 10²² individual stars). And yet most of the universe is empty space! [Lewis2011].

So how are these distances measured? How can scientists possibly measure or calculate these enormous distances with any confidence?


The same principle is used in astronomy, where instead of using the distance between your two eyes as a baseline, researchers use the diameter of the earth's orbit around the sun, which is 2 AU or approximately 300 million km (186 million mi). As the earth travels around the sun in its orbit, relatively close stars are observed to move slightly, with respect to other "fixed" stars that are evidently much more distant. In most cases, this movement is very slight, only a fraction of a second of arc, but reasonably accurate distance measurements can nonetheless be made for stars up to about 10,000 light-years away, encompassing over 100,000,000 stars. This scheme, which relies on very basic geometry and trigonometry, is illustrated by the following diagram [courtesy Wikimedia]:

It can easily be seen, using basic trigonometry (try it!), provided p is small (which it is for all stars), that the distance D to the near star is given by 206265 AU / p, where AU is the astronomical unit mentioned above (i.e., the distance from the earth to the sun, 150 million km or 93 million miles), and p is the parallax angle measured in seconds of arc. The resulting value D when p = 1 is a unit of distance known as a parsec, which is equivalent to 3.261 light-years (i.e., 3.086 × 10¹³ km or 1.917 × 10¹³ miles).
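The relation above is easy to apply directly. A minimal sketch, using Alpha Centauri's ~0.743 arcsecond parallax as the input:

```python
def parsecs(p_arcsec):
    """Distance in parsecs: D = 1 / p, with p in arcseconds
    (equivalently, D = 206265 AU / p in astronomical units)."""
    return 1.0 / p_arcsec

def light_years(p_arcsec):
    """Same distance converted to light-years (1 pc = 3.2616 ly)."""
    return parsecs(p_arcsec) * 3.2616

print(parsecs(1.0))                  # -> 1.0 (a parsec, by definition)
print(round(light_years(0.743), 2))  # Alpha Centauri -> 4.39
```

Note how the parsec falls out of the definition: a star with a parallax of exactly one arcsecond is one parsec away.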

A closely related scheme is known as expansion parallax. Here scientists study, say, the expanding cloud ring surrounding an object such as the Crab Nebula, which is the aftermath of a supernova explosion recorded by Chinese and Arab astronomers in 1054 CE. By comparing the measured rate of angular expansion with the velocity measured by the Doppler effect [Doppler2011], the distance to the object can be calculated.
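A sketch of the expansion-parallax idea, with made-up (though Crab-like) numbers rather than measured values: if Doppler shifts give the shell's expansion speed and imaging gives its angular growth rate, the distance follows from equating the two.

```python
import math

KM_PER_LY = 9.461e12
SECONDS_PER_YEAR = 3.156e7
ARCSEC_TO_RAD = math.radians(1 / 3600)

def distance_ly(v_km_s, mu_arcsec_per_yr):
    """Distance from expansion parallax: linear growth rate (from Doppler
    velocity) divided by angular growth rate (from imaging)."""
    growth_km_per_yr = v_km_s * SECONDS_PER_YEAR
    growth_rad_per_yr = mu_arcsec_per_yr * ARCSEC_TO_RAD
    return (growth_km_per_yr / growth_rad_per_yr) / KM_PER_LY

# Illustrative inputs: a 1500 km/s shell whose angular radius grows by
# 0.15 arcsec per year comes out at several thousand light-years:
print(round(distance_ly(1500.0, 0.15)))
```

The method assumes the expansion is roughly symmetric, so the line-of-sight speed matches the speed of the growth seen on the sky; real applications must correct for asymmetries.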

It is interesting that even such a basic form of scientific reckoning as parallax is not immune to improvement. In April 2014, researchers at the NASA Goddard Space Flight Center announced that they had used the Hubble Space Telescope and a technique known as "spatial scanning" to greatly extend the range at which parallax measurements can be made. Using this technique, they have been able to measure parallax angles as small as five billionths of a degree, which permits measurements of distances to stars more than 75,000 light-years away (encompassing much of the Milky Way galaxy) [SD2014a].

Standard candles

One type of "standard candle," which has been used since the 1920s, is the class of Cepheid variable stars (stars that periodically vary in brightness), for which there is a known relation between a star's pulsation period and its absolute luminosity. There are some difficulties with such measurements, but most of the issues have now been worked out satisfactorily, and distances determined using this scheme are believed accurate to within about 7% for more nearby galaxies, and 15-20% for the most distant galaxies.

Type Ia Supernovas

In August 2011, worldwide attention was focused on a Type Ia supernova that exploded in the Pinwheel Galaxy (known as M101), a beautiful spiral galaxy located just above the handle of the Big Dipper in the Northern Hemisphere. This is the closest supernova to the earth since the 1987 supernova, which was visible mainly in the Southern Hemisphere. Three photos of the 2011 supernova, taken on 22, 23 and 24 Aug 2011 (just before detection, first detection, and one day later), are shown here [courtesy Peter Nugent of LBNL]:

At the present time, Type Ia supernovas are widely considered to be the most reliable "standard candle" for astronomical distance measurements. They have been used to measure distances to galaxies as far away as 13.2 billion light-years. The uncertainty in these measurements is typically 5%.
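The "standard candle" logic can be made concrete with the distance modulus, m − M = 5 log₁₀(d / 10 pc): comparing an object's known absolute magnitude M with its observed apparent magnitude m yields its distance. The peak absolute magnitude M ≈ −19.3 used below is a commonly quoted fiducial value for Type Ia supernovae, not a figure from this article:

```python
def distance_parsecs(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus:
    m - M = 5 * log10(d / 10 pc)  =>  d = 10 ** ((m - M + 5) / 5)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# A Type Ia supernova (fiducial peak M ~ -19.3) observed to peak at
# apparent magnitude 10:
d_pc = distance_parsecs(10.0, -19.3)
print(f"{d_pc * 3.2616 / 1e6:.1f} million light-years")  # about 23.6
```

Because every Type Ia peaks at nearly the same M, a single measured apparent magnitude is enough to place its host galaxy on the distance ladder.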

Type Ia supernovas have more than merely academic interest, because they have been the principal tool used during the past 13 years to deduce the startling conclusion that the universe is not only expanding, but accelerating. This was first discovered by two teams of scientists in 1998, one led by Saul Perlmutter of Lawrence Berkeley National Laboratory, and the other led by Brian P. Schmidt of Harvard University (now at Australian National University). Each team relied on measurements of Type Ia supernovas to reach their conclusion. In October 2011, Perlmutter, Schmidt and Adam Riess (a co-worker of Schmidt) were awarded the 2011 Nobel Prize in Physics [Overbye]. See Cosmological constant for further details.

The cosmic distance ladder

One advantage of the numerous distance-measuring schemes in use, which overlap over a range of distances from nearby to very distant, is that astronomers can calibrate and corroborate their measurements with multiple approaches. Such calibrations and corroborations thus lend an additional measure of reliability to these schemes. Indeed, by comparing results using different methods, weaknesses have been identified in certain schemes. In most cases, additional studies have demonstrated ways to guard against and correct for known difficulties.

As a single example of these multiple approaches, prior to 2011 the distance to the Pinwheel Galaxy (shown above) was determined, based on measurements of Cepheid variable stars in the galaxy, to be 20.9 million light-years, with an uncertainty of 1.8 million light-years. As of September 2011, measurements of the light output of the 2011 Type Ia supernova in the Pinwheel Galaxy are completely consistent with this distance figure.


However, these distance figures cause insuperable problems for creationists and others who insist that the earth and the universe about us can be no older than 6000 years or so. Some of these writers, such as Henry Morris, have gone so far as to theorize that the Creator deliberately placed quadrillions of photons in space enroute to earth, with patterns strongly suggestive to 20th and 21st century astronomers that events such as supernova explosions occurred millions of years ago, when they really didn't [Boardman1973, pg. 26]:

Needless to say, even many religious believers have difficulty swallowing this "God the Great Deceiver" theology. Kenneth Miller of Brown University, for example, blasted this notion in these terms [Miller1999, pg. 80]:


One of the first uses of optical interferometry was the Michelson stellar interferometer, mounted on the Mount Wilson Observatory's reflector telescope, which was used to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way, on December 13, 1920. [3] In the 1940s radio interferometry was used to perform the first high-resolution radio astronomy observations. For the next three decades astronomical interferometry was dominated by research at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array.

Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson, Betz and Townes (1974) in the infrared and by Labeyrie (1975) in the visible. [4] [5] In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometer, which operates fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer.

In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first very high resolution images of nearby stars. [6] [7] [8] In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution, and allowing even higher resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Precision Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array. A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer with up to ten movable telescopes will produce among the first higher fidelity images from a long baseline interferometer. The Navy Optical Interferometer took the first step in this direction in 1996, achieving 3-way synthesis of an image of Mizar [9] then a first-ever six-way synthesis of Eta Virginis in 2002 [10] and most recently "closure phase" as a step to the first synthesized images produced by geostationary satellites. [11]

Astronomical interferometry is principally conducted using Michelson (and sometimes other type) interferometers. [12] The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA.

Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reflex motion of the star (as used by the Palomar Testbed Interferometer and the VLTI), through the use of nulling (as will be used by the Keck Interferometer and Darwin), or through direct imaging (as proposed for Labeyrie's Hypertelescope).

Engineers at the European Southern Observatory ESO designed the Very Large Telescope VLT so that it can also be used as an interferometer. Along with the four 8.2-metre (320 in) unit telescopes, four mobile 1.8-metre auxiliary telescopes (ATs) were included in the overall VLT concept to form the Very Large Telescope Interferometer (VLTI). The ATs can move between 30 different stations, and at present, the telescopes can form groups of two or three for interferometry.

When using interferometry, a complex system of mirrors brings the light from the different telescopes to the astronomical instruments where it is combined and processed. This is technically demanding, as the light paths must be kept equal to within 1/1000 mm over distances of a few hundred metres. For the Unit Telescopes, this gives an equivalent mirror diameter of up to 130 metres (430 ft), and when combining the auxiliary telescopes, equivalent mirror diameters of up to 200 metres (660 ft) can be achieved. This is up to 25 times better than the resolution of a single VLT unit telescope.

The VLTI gives astronomers the ability to study celestial objects in unprecedented detail. It is possible to see details on the surfaces of stars and even to study the environment close to a black hole. With a spatial resolution of 4 milliarcseconds, the VLTI has allowed astronomers to obtain one of the sharpest images ever of a star. This is equivalent to resolving the head of a screw at a distance of 300 km (190 mi).
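The screw-head comparison can be checked with a quick small-angle computation; the resolution and distance are taken from the text above, while the near-infrared wavelength and 200 m baseline used in the cross-check are assumed representative values:

```python
import math

ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)  # one arcsecond in radians

# Linear size resolvable at 300 km with 4 milliarcsecond resolution
resolution_rad = 4e-3 * ARCSEC_IN_RAD
size_m = resolution_rad * 300e3   # roughly 6 mm: about a screw head

# Cross-check via the diffraction limit theta ~ lambda / B for an
# assumed 200 m baseline at an assumed wavelength of 2 micrometres
theta_rad = 2e-6 / 200.0
theta_mas = theta_rad / ARCSEC_IN_RAD * 1e3   # ~2 milliarcseconds
```

Both routes land at a few milliarcseconds, consistent with the quoted VLTI figure.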

Notable 1990s results included the Mark III measurement of diameters of 100 stars and many accurate stellar positions, COAST and NPOI producing many very high resolution images, and Infrared Stellar Interferometer measurements of stars in the mid-infrared for the first time. Additional results include direct measurements of the sizes of and distances to Cepheid variable stars, and young stellar objects.

High on the Chajnantor plateau in the Chilean Andes, the European Southern Observatory (ESO), together with its international partners, is building ALMA, which will gather radiation from some of the coldest objects in the Universe. ALMA will be a single telescope of a new design, composed initially of 66 high-precision antennas and operating at wavelengths of 0.3 to 9.6 mm. Its main 12-meter array will have fifty antennas, 12 metres in diameter, acting together as a single telescope – an interferometer. An additional compact array of four 12-metre and twelve 7-meter antennas will complement this. The antennas can be spread across the desert plateau over distances from 150 metres to 16 kilometres, which will give ALMA a powerful variable "zoom". It will be able to probe the Universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with a resolution up to ten times greater than the Hubble Space Telescope, and complementing images made with the VLT interferometer.

Optical interferometers are mostly seen by astronomers as very specialized instruments, capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is true only in the limited sense of angular resolution. The amount of light gathered—and hence the dimmest object that can be seen—depends on the real aperture size, so an interferometer offers little improvement here and the image is dim (the thinned-array curse). The combined effects of limited aperture area and atmospheric turbulence generally limit interferometers to observations of comparatively bright stars and active galactic nuclei. However, they have proven useful for making very high precision measurements of simple stellar parameters such as size and position (astrometry), for imaging the nearest giant stars and for probing the cores of nearby active galaxies.

A simple two-element optical interferometer. Light from two small telescopes (shown as lenses) is combined using beam splitters at detectors 1, 2, 3 and 4. The elements creating a 1/4-wave delay in the light allow the phase and amplitude of the interference visibility to be measured, which give information about the shape of the light source. The same result can be obtained with a single large telescope covered by an aperture mask (labelled Mask) that admits light only through two small holes. The optical paths to detectors 1, 2, 3 and 4 are then the same as in the left-hand figure, so this setup gives identical results. By moving the holes in the aperture mask and taking repeated measurements, images can be created using aperture synthesis with the same quality as the right-hand telescope would have given without the aperture mask. In an analogous way, the same image quality can be achieved by moving the small telescopes around in the left-hand figure — this is the basis of aperture synthesis, using widely separated small telescopes to simulate a giant telescope.

At radio wavelengths, interferometers such as the Very Large Array and MERLIN have been in operation for many years. The distances between telescopes are typically 10–100 km (6.2–62.1 mi), although arrays with much longer baselines utilize the techniques of Very Long Baseline Interferometry. In the (sub)-millimetre, existing arrays include the Submillimeter Array and the IRAM Plateau de Bure facility. The Atacama Large Millimeter Array has been fully operational since March 2013.

Max Tegmark and Matias Zaldarriaga have proposed the Fast Fourier Transform Telescope which would rely on extensive computer power rather than standard lenses and mirrors. [14] If Moore's law continues, such designs may become practical and cheap in a few years.

The Distances of the Stars

In 1838, Friedrich Wilhelm Bessel won the race to measure the first distance to a star other than our Sun via the trigonometric parallax – setting the first scale of the Universe.

Recently, Mark Reid and Karl Menten, who are engaged in parallax measurements at radio wavelengths, revisited Bessel’s original publications on “his” star, 61 Cygni, published in the Astronomische Nachrichten (Astronomical Notes). While they could generally reproduce the results obtained by Bessel and two contemporary 19th century astronomers, the eminent Friedrich Georg Wilhelm von Struve and Thomas Henderson, they discovered why some of these early results were statistically inconsistent with modern measurements.

Out of reverence for Bessel, Reid and Menten decided to publish their findings also in the Astronomische Nachrichten. Founded in 1821, it was one of the first astronomical journals in the world and is the oldest that is still being published.

Stamp issued by the German federal post office in 1984, on the occasion of the 200th anniversary of Friedrich Wilhelm Bessel’s birth.

Knowing the distance to astronomical objects is of fundamental importance for all of astronomy and for assessing our place in the Universe. The ancient Greeks placed the unmoving “fixed” stars farther away than the celestial spheres on which they thought the planets were moving. However, the question “how much farther?” eluded an answer for centuries after astronomers started trying to address it. Things came to a head in the late 1830s, when three astronomers zeroed in on different stars, spending many nights at their telescope, often under harsh conditions. It was Friedrich Wilhelm Bessel who won the race in 1838 by announcing that the distance to the double-star system 61 Cygni is 10.4 light years. This proved that stars are not just a little farther away from us than planets, but more than a million times farther – a truly transformational result that totally revised the scale of the Universe as it was known in the 19th century.

Bessel’s measurement was based on the trigonometric parallax method. This technique is essentially triangulation, which is used by surveyors to determine distances on land. Astronomers measure the apparent position of a “nearby” star against much more distant stars, using the Earth’s orbit around the Sun to provide different vantage points over a year’s time.

Bessel had to make his painstaking measurements over nearly 100 nights at his telescope. Astronomers now are far more "efficient". The Gaia space mission is measuring accurate distances for hundreds of millions of stars, with great impact on astronomy. However, because of the interstellar dust that pervades the Milky Way's spiral arms, Gaia has difficulties observing stars within the Galactic plane that are farther from the Sun than about 10,000 light years – this is just 20% of the Milky Way's size of more than 50,000 light years. Therefore, even a mission as powerful as Gaia will not yield the basic layout of our Galaxy, many aspects of which are still under debate – even the number of spiral arms is uncertain.

In order to better address the structure and size of the Milky Way, Mark Reid from the Center for Astrophysics | Harvard-Smithsonian and Karl Menten from the Max Planck Institute for Radio Astronomy (MPIfR) initiated a project to determine the distances to radio sources that are constrained to spiral arms of the Milky Way. Their telescope of choice is the Very Long Baseline Array, a collection of 10 radio telescopes spanning from Hawaii in the west to the eastern tips of the USA. By combining the signals of all 10 telescopes thousands of kilometers apart one can make images of what one could see were our eyes sensitive to radio waves and separated by nearly the size of the Earth.

This project is carried out by an international team, with scientists of the MPIfR making major contributions – MPIfR director Karl Menten has enjoyed a fruitful collaboration with Mark Reid for more than 30 years. When, near the start of the project, a catchy acronym was discussed, they chose to name it the Bar and Spiral Structure Legacy Survey, in short the BeSSeL Survey. Of course, they had the great astronomer and mathematician and parallax pioneer Friedrich Wilhelm Bessel on their mind.

As in all experimental or observational science, measurements only attain meaning if their uncertainties can be determined in a reliable way. This is also the bread and butter of radio astrometry and is given close attention by the BeSSeL project astronomers. In Bessel's time, astronomers had learned to pay attention to measurement errors and to account for them when deriving results from their data. This often involved tedious calculations done entirely with pencil and paper. Naturally, a scientist of Bessel's caliber was careful to track down any issues that could possibly affect his observations. He realized that temperature variations in his telescope could critically affect his delicate measurements. Bessel had a superb instrument at his observatory at Königsberg in Prussia (the present Russian Kaliningrad), which came from the genius instrument maker Joseph Fraunhofer and was the last one he built. Nevertheless, variable temperature had a major impact on the observations required for a parallax measurement, which must be spread over an entire year: some are made on hot summer nights and others on cold winter nights.

Mark Reid became interested in Bessel’s original work and studied his papers on 61 Cygni. He noticed some small inconsistencies in the measurements. To address these he and Karl Menten started to dig deeper into the original literature. Bessel’s papers were first published in German, in the Astronomische Nachrichten, although some excerpts were translated into English and appeared in the Monthly Notices of the Royal Astronomical Society. Thus, the original German versions had to be examined, where Menten’s native German came in handy.

Reid and Menten also put the results of Bessel’s closest competitors under scrutiny. Thomas Henderson, who worked in Cape Town, South Africa, targeted α Centauri, the star system now known to be the closest to our Sun. Shortly after Bessel announced his result, Henderson published a distance to this star.

The eminent astronomer Friedrich Georg Wilhelm von Struve measured α Lyrae (Vega). The literature search for von Struve’s data involved some detective work. A detailed account of it was only published in Latin as a chapter of a voluminous monograph. The MPIfR librarian traced a copy to the Bavarian State library, which provided it in electronic form. It has long been a mystery as to why von Struve announced a tentative distance to Vega, one year before Bessel’s result for 61 Cygni, only to revise it to double that distance later with more measurements. It seems that von Struve first used all of his measurements, but in the end lost confidence in some and discarded those. Had he not done so, he probably would have received more credit.

Reid and Menten can generally reproduce the results obtained by all three astronomers, but found that von Struve and Henderson underestimated some of their measurement uncertainties, which made their parallaxes appear somewhat more significant than they actually were. “Looking over Bessel’s shoulder was a remarkable experience and fun,” says Mark Reid. “Viewing this work both in an astronomical and historical context has really been fascinating”, concludes Karl Menten.

Background Information

Principle of Stellar Parallax: One wants to determine the distance, D, to a nearby (foreground) star. Over the course of a year, that star's position apparently changes relative to the positions of faraway background stars and describes an ellipse that is a projection of the Earth's orbit around the Sun. Its semi-major axis is the parallax angle π. With π measured in radians, the distance in astronomical units is then simply given by D = 1/π; equivalently, with π in arcseconds, D comes out directly in parsecs. One astronomical unit (AU), the Earth-Sun distance, is equal to approximately 150 million kilometers. The distance at which an object would have a parallax of 1 arcsecond is called one parsec (pc). It is the basic distance unit used by astronomers and corresponds to approx. 3.26 light years or 206,000 AU.
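In the form astronomers actually use, with the parallax in arcseconds, the relation gives the distance directly in parsecs. A minimal sketch, using 61 Cygni's modern parallax of roughly 0.286 arcseconds (an assumed value, quoted here only for illustration):

```python
PC_IN_LY = 3.26   # one parsec in light years (approximate)

def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds: D = 1/pi."""
    return 1.0 / parallax_arcsec

d_pc = parallax_distance_pc(0.286)   # ~3.5 pc for 61 Cygni
d_ly = d_pc * PC_IN_LY               # ~11.4 light years
```

This reproduces, to within a light year, the roughly ten-light-year distance Bessel announced in 1838.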

Surveying the Galaxy

Looking at the same object from two different positions can determine its distance.

Our Milky Way galaxy contains many exotic and newly discovered objects, including black holes, microquasars and supernova remnants. Nevertheless, the best and usually only reliable way to determine the distances to these objects has been known for thousands of years and is the standard technique used to measure distances here on Earth for construction and for determining property boundaries.

The same technique is used by many animals to build a three dimensional view of their surroundings.

The basic idea is always the same: measure the shift in the apparent direction of an object from two different positions. This shift (measured as an angle) is called a parallax.

Professional surveyors (whether property surveyors or galactic surveyors) can then use basic trigonometry to compute the distance to the object if they have the distance between the two measurements and the parallax.

Animals with two forward-facing eyes such as apes and owls can use a similar technique to build a three-dimensional model of their immediate environment. At least some scientists have concluded that the head-bobbing seen in pigeons and many other bird species with eyes on opposite sides of the head also serves to provide parallax data from two different positions (e.g. see this PDF abstract).

Astronomical parallax

In astronomy the distances involved are much larger than for ordinary surveying projects, and so a larger baseline between the two measurement positions is required. Fortunately, the Earth's orbit around the Sun conveniently provides a sufficient distance. The radius of the Earth's orbit (called the astronomical unit) is well known. In fact, it is so well known that it is used to define the parsec, the basic unit of distance beyond our solar system. A parsec is the distance of an object with a parallax of one arcsecond (that is, a 60th of a 60th, or 1/3600, of a degree) when using the radius of the Earth's orbit as a baseline. The nearest star has a distance of 1.3 parsecs and the centre of the galaxy has a distance of about 8000 parsecs.
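The size of the parsec follows directly from this definition; a short numerical check (the AU value below is the standard one):

```python
import math

AU_KM = 1.495978707e8   # astronomical unit in kilometres

# One arcsecond is 1/3600 of a degree, expressed here in radians
arcsec_rad = math.radians(1.0 / 3600.0)

# A parsec is the distance at which a 1 AU baseline subtends 1 arcsecond
parsec_km = AU_KM / math.tan(arcsec_rad)   # ~3.0857e13 km
```

About 31 trillion kilometres, or roughly 3.26 light years.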

Measuring the distance of objects in parsecs is in principle easy. In practice it is not so simple.

In principle, measuring astronomical parallax is easy: measure the position of an object in the sky relative to other objects known to be much further away (eg. quasars in distant galaxies). Wait six months for the Earth to orbit to the opposite side of the sun. Measure the position again. Use the radius of the orbit and the change in position to determine the distance to the object. It turns out that using the Earth's orbit as a baseline is enough to determine the distance of any object in the Milky Way - even objects on the far side of the galaxy.

Of course it is not so easy. The main difficulty is measuring the parallax accurately. Stars and other Milky Way objects are so distant that the first accurate parallax measurement for a star (61 Cygni) was made by Friedrich Bessel only in 1838, even though the technique was obvious once it was accepted that the Earth orbits the Sun. The first large-scale parallax measurements were only carried out by the Hipparcos satellite, launched in 1989. Even then, Hipparcos was only able to reliably measure parallax for a few thousand stars located at a distance of at most 200-300 parsecs. This does not even reach the Orion nebula, which is still within the local stellar neighbourhood.

Technology has advanced quickly since Hipparcos, and the Gaia satellite, scheduled to be launched in 2012, is expected to measure parallax for about 1 billion stars scattered over a distance of several thousand parsecs. Gaia will provide a major step forward in mapping our Milky Way in great detail.

But Gaia will not finish the job. Gaia will measure parallax for optically visible stars, and much of the Milky Way, especially the most interesting parts in the central bar and spiral arms, is obscured by dust. Another technology is required. Fortunately, that technology is now available.

Radio to the rescue

Hubble is by far the most famous telescope, but it is not the most powerful one available. The most powerful telescopes are the much less well-known EVN (based in Europe) and VLBA (based in the United States). These networks of radio telescopes use a sophisticated computer and communication technique called very long baseline interferometry (VLBI) to function as single, continent-spanning telescopes - the greatest eyes-on-the-sky ever created.

Like Gaia, the VLBI networks (which also include several smaller networks based in Canada, Australia, Japan and elsewhere) can measure parallax accurately enough to determine the positions of objects across the galaxy. Unlike Gaia, these networks can pierce the Milky Way's clouds of dust to determine the distances to masers - focused sources of microwave radiation associated with star formation regions. By mapping the masers, the VLBI networks can determine the exact locations of the Milky Way's star formation regions and so map the galaxy as a whole.

So far the VLBI networks have determined the positions of about a dozen or so carefully selected objects. This has already been enough to start to determine the positions of key parts of the Milky Way's spiral arms. Plans have been announced to dramatically expand the number of accurate radio parallax measurements in the near future. Galaxy Map will continue to keep track of these important developments.

Substitutes for parallax

Reliable VLBI parallax measurements have only started to become available within the past decade. Before this, astronomers were forced to use much less reliable techniques to estimate the distances of objects more than a few hundred parsecs from our solar system. The two main techniques were kinematic and photometric distance estimates.

Kinematic distance estimates

When an object moves towards an observer, its light is shifted to higher (blue) frequencies. When the object moves away, its light is shifted to lower (red) frequencies. This fact was first reported by the Austrian physicist and astronomer Christian Doppler in 1842. Astronomers immediately began to use this fact to measure the movement (kinematics) of stars and star clusters. These velocity measurements led the Dutch astronomer Jan Oort in 1928 to propose that the Milky Way rotates. At about the same time, other astronomers, including Edwin Hubble, concluded that the very high red shifts of distant galaxies meant that the universe was expanding.

If astronomers made the assumption that velocities observed for Milky Way stars or nebulae were due primarily to the rotation of the Milky Way and that the Milky Way had a simple rotation model, then they could use this model to determine two possible distances for each star or nebula. Astronomers then used several techniques to choose between these two possible estimates. Even better, blue or red shifts could be measured for the large clouds of hydrogen gas detected across the Milky Way by radio telescopes. For much of the second half of the twentieth century (and especially in the 1950s and 1960s) it was believed that kinematic distance estimates could map the Milky Way.
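A minimal sketch of how such a kinematic distance estimate works, assuming a flat rotation curve and illustrative values for the Sun's galactocentric radius and circular speed (this mirrors the kind of simple rotation model described above, not any specific published calibration):

```python
import math

R0_KPC = 8.5     # assumed distance from the Sun to the Galactic centre
V0_KMS = 220.0   # assumed circular rotation speed of the Milky Way

def kinematic_distances_kpc(glon_deg, v_r_kms):
    """Return the (near, far) kinematic distances in kpc for an object at
    galactic longitude glon_deg with measured radial velocity v_r_kms,
    assuming a flat rotation curve V(R) = V0.

    For a flat curve, v_r = V0 * (R0/R - 1) * sin(l) fixes the object's
    galactocentric radius R; the law of cosines then yields two possible
    line-of-sight distances (the classic near/far ambiguity).
    """
    l = math.radians(glon_deg)
    R = R0_KPC * V0_KMS * math.sin(l) / (v_r_kms + V0_KMS * math.sin(l))
    disc = R * R - (R0_KPC * math.sin(l)) ** 2
    if disc < 0:
        raise ValueError("no real solution for this velocity and longitude")
    root = math.sqrt(disc)
    return R0_KPC * math.cos(l) - root, R0_KPC * math.cos(l) + root

# An object at l = 30 degrees with v_r = +60 km/s could lie at either
# ~3.9 kpc (near) or ~10.9 kpc (far)
near, far = kinematic_distances_kpc(30.0, 60.0)
```

The two roots are exactly the "two possible distances" mentioned above; choosing between them requires extra information such as associated stars or absorption features.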

A very clear description of the process used to make kinematic distance estimates (including the formulas used) can be found in this recent paper.

It is now known from other distance estimates (especially those now available from VLBI parallax) that the main assumptions behind many kinematic distance estimates were incorrect. The Milky Way does not have a simple rotation curve (different kinds of objects rotate at different rates). The velocities measured within much of the Milky Way are strongly affected by local gas movements such as expanding bubbles and streaming within spiral arms and not just the rotation of the galaxy as a whole.

Of course astronomers have always known that the assumptions behind kinematic distance estimates were not always true. But it is now known that many kinematic estimates grossly distort the distances to important parts of the Milky Way. For example, VLBI parallax suggests that the Perseus arm in the second galactic quadrant lies at a distance of about 2500 parsecs rather than 5000 parsecs, and the Sagittarius arm is also much closer (about 1000-1500 parsecs instead of 2000-3000 parsecs).

Perhaps the best that can be said is that objects with a similar velocity and direction are more likely to lie at the same distance than those with different velocities. So velocity measurements help to group objects together but do not determine the distances for those groups. You can read more about kinematic distance estimates and view images illustrating two of the major velocity data sets available in the chapter on Velocity.

Photometric distance estimates

Stellar spectra reveal an amazing amount of information about a star. Not only does a red or blue shift reveal its velocity but the details of the spectrum often reveal its chemical composition, its temperature and sometimes even its approximate size. If the size and temperature are known, then it is possible to calculate the luminosity. Comparing the actual and apparent luminosity of a star should in theory make it possible to calculate its distance.
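The comparison of intrinsic and apparent brightness is conventionally expressed in magnitudes through the distance modulus; a minimal sketch, with an optional extinction term A to represent dimming by dust:

```python
def photometric_distance_pc(apparent_mag, absolute_mag, extinction_mag=0.0):
    """Distance from the distance modulus:
    m - M = 5*log10(d / 10 pc) + A, so d = 10**((m - M - A + 5) / 5) pc.
    """
    return 10.0 ** ((apparent_mag - absolute_mag - extinction_mag + 5.0) / 5.0)

# A star of absolute magnitude 0 seen at apparent magnitude 10 lies at
# 1000 pc if there is no dust, but at only ~631 pc if one magnitude of
# unrecognized extinction is dimming it -- hence the need for a dust model.
d_clear = photometric_distance_pc(10.0, 0.0)
d_dusty = photometric_distance_pc(10.0, 0.0, extinction_mag=1.0)
```

The sensitivity to the extinction term illustrates the first problem discussed below: the estimate is only as good as the dust model behind it.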

As with kinematic distance estimates, there are a number of problems with this photometric approach. Estimating distance from luminosity requires a model for the distribution of dust in the Milky Way and this distribution (especially in the spiral arms) is not simple. Determining the size of a star is not trivial, although this problem can be partially overcome by using probabilistic calculations applied to entire star clusters rather than individual stars. Finally, most photometric techniques apply only to optically visible stars and star clusters. Recent infrared observations have shown that many and probably most star forming regions are obscured by thick clouds of dust.

On the whole, recent VLBI parallax measurements have shown that photometric distance estimates, when available, tend to be considerably more accurate than kinematic estimates. Nevertheless, now that accurate parallax measurements are or will soon be available for many Milky Way objects, they should be preferred over any less reliable method.

Johannes Kepler (1571 - 1630)

By anyone's measure, Johannes Kepler ranks as a gold medalist in the history of science. This great German mathematician and astronomer (contemporary with the King James Bible and the Pilgrims) discovered fundamental laws of nature that have stood the test of time and are still widely used today. He advanced mathematics in science to new heights, including the first use of logarithms for astronomy and the foundation for integral calculus. He made useful inventions. He was a major force in moving science away from its subservience to authority and onto an empirical foundation, and from superstition to mathematical law. He helped mankind understand how the universe works. When the great Isaac Newton expressed that his ability to see farther than others was due to "standing on the shoulders of giants," he most certainly had Kepler in mind. Yet this humble, devout Christian, from a poor, uneducated home, had a life filled with difficulty. In spite of it, he stands as a consummate example of a Christian doing excellent science from theological motives: Kepler pursued science as a mission from God. In his words, he was merely "thinking God's thoughts after Him." Anyone who thinks Christianity is inimical to science should take a close look at the life of this giant of science – and Christian faith.

Kepler is considered the Father of Celestial Mechanics. The story of how he worked for eight years trying to figure out the orbit of Mars and the other planets from the observations of Tycho Brahe is legendary. Kepler was a perfectionist: "close enough" was not good enough. He started by assuming the common belief that the orbits of the planets were perfect circles. Moreover, he had a tempting hypothesis that the ratios of the orbital distances matched the proportions of the regular solids, but it did not quite work. It was Kepler's genius and integrity that forced him to abandon his pet theory and discover the truth. After many years of work, and thousands of pages of tedious calculations, he fit the data to the formula for an ellipse, and finally everything fell into place. This illustrates how in science a fundamental truth frequently lies lurking in the minute details that do not fit the expectations. To an honest scientist, the data must drive the conclusions, and Kepler's discovery ranks as a seminal point in the history of science. With this finding, he overcame 1500 years of error based on the thinking of Ptolemy, Aristotle and even Copernicus that the heavenly orbits must be perfect circles.

From his discovery, Kepler derived his famous Three Laws of Planetary Motion. These were the first truly scientific laws, based as they were on empirical data and not authority or Aristotelian logic. Kepler established precise mathematical relationships describing orbital motion: (1) the orbits of the planets are ellipses, with the sun at one focus; (2) the motion of a body is not constant, but speeds up closer to the sun (a line connecting the sun and the planet sweeps out equal areas in equal times); and (3) the farther away a planet is, the slower it moves (the square of the period is proportional to the cube of the semimajor axis). Newton later explained these relationships in his theory of universal gravitation, but Kepler's Laws are just as accurate today as when he first formulated them, and even more useful than he could have imagined! Even today, NASA's Jet Propulsion Laboratory navigates spacecraft around the solar system using Kepler's Laws, and astronomers routinely speak of Keplerian orbits not only for the solar system but for stars orbiting galaxies, and for galaxies orbiting clusters and superclusters. The whole universe obeys Kepler's Laws, or as he would have preferred to say, obeys God's laws that he merely uncovered. He said, "Since we astronomers are priests of the highest God in regard to the book of nature, it befits us to be thoughtful, not of the glory of our minds, but rather, above all else, of the glory of God."
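The third law is easy to put to work. For bodies orbiting the Sun, if the period is measured in years and the semimajor axis in astronomical units, the constant of proportionality is 1, so P = a^(3/2). A minimal sketch (the Mars semimajor axis used below is the standard modern value, not a figure from this article):

```python
def orbital_period_years(a_au: float) -> float:
    """Kepler's third law for the Sun: period in years from
    semimajor axis in AU, since P**2 = a**3 in these units."""
    return a_au ** 1.5

# Earth: a = 1 AU, so P = 1 year, by definition of the units.
print(orbital_period_years(1.0))               # 1.0

# Mars: a ≈ 1.524 AU gives a period of about 1.88 years.
print(round(orbital_period_years(1.524), 2))   # 1.88
```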

These discoveries would be enough to guarantee Kepler membership in the science hall of fame, but there's much more. Not only the Father of Celestial Mechanics, Kepler is also considered the Father of Modern Optics. He advanced the understanding of reflection, refraction and human vision, and produced improvements in eyeglasses for both nearsightedness and farsightedness, and for the telescopes that his colleague Galileo (with whom he corresponded) had first turned toward the heavens. He invented the pinhole camera and designed a gear-driven calculating machine. He investigated weather phenomena and also made other fundamental discoveries about the heavens, such as the rotation of the sun, and the fact that ocean tides are caused primarily by the moon (for which Galileo derided him, but Kepler was proved right). He predicted that trigonometric parallax might be used to measure the distances to the stars. Though the telescopes of his day were too crude to detect the parallax shift, he was right again, and the Hipparcos satellite used this principle to measure precise distances to more than a hundred thousand stars. Kepler's "firsts" make an impressive list of accomplishments.
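The parallax method Kepler anticipated reduces to a one-line formula once distances are expressed in parsecs: the distance in parsecs is the reciprocal of the parallax angle in arcseconds. A minimal sketch (the example value is illustrative, not a measurement from this article):

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Trigonometric parallax: a star showing a parallax of p
    arcseconds lies at a distance of 1/p parsecs."""
    return 1.0 / parallax_arcsec

# A star with a parallax of 0.1 arcsec is 10 parsecs away.
print(distance_parsecs(0.1))  # 10.0
```

Smaller angles mean larger distances, which is why Kepler's contemporaries, limited to crude instruments, could detect no shift at all.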

One would think a man must be the son of a privileged family to rise to such heights, but nothing could be farther from the truth for this, and other, great Christians in science like Newton, Carver and Faraday. Kepler was from a poor, uneducated family. He was often ill, and lived with no advantages that would have predicted his success. His mother was a flighty woman given to superstition, and his father was a roaming mercenary, frequently off to the battlefield to fight for the highest bidder. At age six, Kepler saw the Great Comet of 1577, which in those days people assumed was a bad omen, but Kepler was fascinated. Later, his father bought and operated a low-class inn, and young Johannes was required to do hard labor to help the struggling family business (later, when it failed, his father deserted the family). When given a chance to go to school, Kepler's genius coupled with diligence advanced him quickly. Devout by nature, he decided he would serve God as a clergyman. He studied for two years in a seminary at the University of Tubingen, receiving training in Latin, Greek, Hebrew, mathematics and the usual Greek philosophy, but there also became acquainted with the newer ideas of Copernicus and those who doubted that the Greeks were the last word in knowledge. It was only when he was pressured to accept a position as a mathematics instructor 500 miles away in Graz that he reluctantly postponed his goal to become a Lutheran minister. Driven away from Graz in 1597 by pressure from the Catholic counter-reformation, he moved to Prague, where he became assistant to the great but eccentric Danish astronomer Tycho Brahe, the best celestial observer of his day. When Brahe died in 1601, Kepler inherited all the Mars observations. He devoted himself to figuring out the problem of the orbit of Mars, and the rest is history. Kepler served as imperial mathematician until, in 1612, religious wars again forced a move of his family, this time to Linz. As district mathematician in Linz, he published additional works, and discovered his third law of celestial mechanics. He moved three more times, beginning in 1626, before his death in 1630.

In spite of his successes, Kepler's life was filled with hardship, poverty, political turmoil, false accusations and difficult work. Afflicted with complications from an early bout of smallpox, he suffered many ailments throughout life. His first wife was unappreciative of his work, and died early; three of their five children died in infancy. Later remarried, Kepler saw only two of their seven children reach adulthood. He repeatedly was forced to move because of the Thirty Years' War. A Lutheran, he was caught in the middle not only between Catholics and Protestants, but also between the Lutheran and Calvinistic controversies over communion, baptism and other issues. Finding neither group completely in accord with his understanding of Scripture, and loyal to the Word of God alone, he found himself at odds with some of his fellow Protestants. In a time of religious tumult and superstition, he seemed to be the only one with real wisdom and balance when poised between extreme positions. He had to defend his mother, who was falsely accused of being a witch. He was forced to move on several occasions due to war or pestilence: three times in the prime of his career, and another three times after age 55. He was never paid near what he was worth; even then, it was often in the form of IOUs that never seemed to arrive. His untimely death came about from catching fever during a hard journey trying to collect long-overdue funds owed him from the imperial treasury; even his heirs had difficulty collecting it later.

Kepler never thought of himself as famous and was often depressed by the harshness of his circumstances. Yet he had an inner joy that would make his imagination soar when he thought of the heavens and how everything worked according to the Creator's mathematical plan. Astronomy was his "escape to reality" when the hardships and follies of civilization bore down on him. He imagined space travel and speculated about earthlike planets around distant stars. He wrote 80 books, including the first science fiction story, The Dream (about an imaginary flight to the moon), and of course more technical treatises such as the consummate compilation of Tycho's data using logarithms, The Rudolphine Tables; this work did much to advance the heliocentric theory.

Kepler built on a Pythagorean conception of the universe, in which number and mathematical relationship form the essence of things, but he cast it into a distinctively Christian form. To him, the God of the Scriptures was the great Mathematician. Kepler's signature work, the Harmony of the World, described his conception of the heavenly bodies making a kind of celestial "music of the spheres" as the outworking of the mind of God, perfect in geometric harmony. It expressed his belief that the world of nature, the world of man and the world of God all fit together into a harmonious system that could be explored by science.

Kepler had once believed that becoming a clergyman was the only way to serve God and proclaim His truth, but he found that astronomy and mathematics were also a ministry, a way to open windows to the mind of God. Deeply spiritual all his life, he said, "Let also my name perish if only the name of God the Father is elevated." On November 15, 1630, as he lay dying, he was asked on what he pinned his hope of salvation. Confidently and resolutely, he testified: "Only and alone on the services of Jesus Christ. In Him is all refuge, all solace and welfare."

Craters on the moon and Mars are named in Kepler's honor, and NASA's Kepler spacecraft, launched in 2009, has searched for earth-size habitable planets around other stars.

The telescope of the future!

The reflecting telescopes made in the 18th and 19th centuries had different problems from refracting telescopes. While mirrors made of a tin and copper alloy were easier to produce than ground lenses, these metal mirrors tended to discolor, which meant they had to be polished quite often. Astronomers also ran into a practical physics problem: because the mirrors, and the telescopes that held them, were so very large, the huge instruments were difficult to move. Changing the field of view to a new area of the sky was slow and time-consuming.

By the mid-1800s, astronomers had decided that it wasn't the telescopes but the atmosphere that caused distant objects to still look hazy. They realized that as light passes through the air, it bends in unpredictable ways. To the unaided eye, this is what makes stars appear to twinkle. But through a telescope, this bending light just makes images look blurry, much as things appear when you look up from under water.

It was this problem of not being able to clearly focus distant images that led to the development of the Hubble Space Telescope.


To be a successful astronomer, you had to be awake when the stars were out. How do you think astronomers in the 16th through 19th centuries supported themselves and their families?
