# What is a parsec? Definition and calculation

The parsec is a unit of distance that is often used by astronomers as an alternative to the light year, just as kilometers can be used as an alternative to miles. Science fiction franchises such as Star Wars have been known to misuse the word parsec, misleadingly describing it as a measurement of time or speed.

In fact, according to the California Institute of Technology (Caltech), one parsec is approximately 3.26 light years, or almost 19 trillion miles (31 trillion km).

The term “parsec” is a combination of “parallax” and “arcsecond”, which comes from the use of triangulation when measuring the distance to a star.

### Arcseconds and the parallax effect

Astronomers use arcseconds to measure very small angles: 3,600 arcseconds make up one degree, just as 3,600 seconds make up one hour. These small angles help astronomers measure large distances using what is known as the parallax effect.

If you hold a pencil at arm’s length and alternately close your left and right eyes, you will notice that the pencil appears to move left and right relative to more distant objects, even if you keep it perfectly still.

Parallax can be demonstrated by looking at a pencil with one eye or the other. (Image credit: Getty Images)

This is the parallax effect, and it occurs because the angular direction of the pencil is slightly different when viewed with the left and right eyes. If you could measure this angular difference, then knowing the distance between your eyes would allow you to calculate the distance to the pencil.
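The pencil calculation is simple trigonometry: the eye separation forms the baseline of a triangle, and half the apparent angular shift gives the angle at the pencil. A minimal sketch, using made-up but plausible numbers for the eye separation and angular shift:

```python
import math

# Hypothetical numbers: eyes ~6.5 cm apart, pencil appears to shift ~5 degrees
baseline_cm = 6.5   # distance between the eyes (the "baseline")
shift_deg = 5.0     # total apparent angular shift of the pencil

# Each eye sees the pencil offset by half the total shift,
# so the distance follows from the tangent of that half-angle.
half_shift_rad = math.radians(shift_deg / 2)
distance_cm = (baseline_cm / 2) / math.tan(half_shift_rad)

print(f"Distance to pencil: {distance_cm:.1f} cm")  # roughly arm's length
```

With these numbers the pencil comes out at about 74 cm away, which is indeed roughly arm's length.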

The same principle allows astronomers to measure the distance to nearby stars. They photograph a section of the sky with a star of interest to them and other, more distant objects, such as galaxies.

Six months later, when the Earth is on the other side of the Sun, astronomers take another photo of the same patch of sky, as NASA explains. The star will appear to have moved a small angular distance relative to the background objects. Measuring this angle and then dividing it in half (because the Earth’s two positions are equal and opposite offsets from the Sun) gives the star’s parallax.

So that’s where the parsec comes from: it is the distance at which a star would show a parallax of exactly one arcsecond. In reality, all measured stellar parallaxes are smaller than this, which means stellar distances are always greater than one parsec.
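The definition above can be turned directly into a number: one parsec is the distance at which a baseline of 1 astronomical unit (the Earth–Sun distance) subtends an angle of one arcsecond. A short sketch of that calculation:

```python
import math

AU_KM = 1.495978707e8                 # astronomical unit in kilometres
LIGHT_YEAR_KM = 9.4607e12             # light year in kilometres
ARCSEC_RAD = math.radians(1 / 3600)   # one arcsecond in radians

# A star with a parallax of exactly one arcsecond sits at the distance
# where the 1 AU baseline subtends that angle.
parsec_km = AU_KM / math.tan(ARCSEC_RAD)

print(f"1 parsec = {parsec_km:.3e} km")
print(f"1 parsec = {parsec_km / LIGHT_YEAR_KM:.2f} light years")
```

This reproduces the figures quoted earlier: about 3.1 × 10¹³ km (19 trillion miles), or roughly 3.26 light years.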

### Parsec vs. light year

As logical as the definition of a parsec may be, to most people it may still seem unnecessarily complicated. The light year, by contrast, is much easier to understand: it is simply the distance light travels in a year, a concept that has been in use since at least 1838.

A light year even has a utility that goes beyond mere measurement, because it tells us that when we observe an object X light years away, we see it as it was X years ago. So why would anyone use the parsec instead?

The answer seems to be that when astronomers first started measuring stellar distances using the parallax method, they simply presented their results in terms of “parallax X seconds” rather than light years.
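This convenience is easy to see: a distance in parsecs is simply the reciprocal of the parallax in arcseconds, with no other conversion needed. A quick illustration using the nearest star, Proxima Centauri, whose measured parallax is about 0.768 arcseconds:

```python
# Distance in parsecs is the reciprocal of parallax in arcseconds.
LY_PER_PARSEC = 3.2616  # light years per parsec

parallax_arcsec = 0.768          # Proxima Centauri's parallax
distance_pc = 1 / parallax_arcsec
distance_ly = distance_pc * LY_PER_PARSEC

print(f"{distance_pc:.2f} pc = {distance_ly:.2f} light years")
```

The result, about 1.30 parsecs or 4.25 light years, matches the familiar figure for Proxima Centauri, and shows why astronomers reporting raw parallax measurements found the parsec so natural.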

Then, around 1913, Herbert Hall Turner came up with the idea of condensing that number into the parsec, and the name stuck even as other, non-parallax-based methods of measuring stellar distance were developed. Today the International Astronomical Union recommends the use of parsecs over light years in scientific papers, although the latter remains very common in popular usage.