Welcome to χ&h.eu

Astro images

A very accessible way to make observations of scientific importance is to count the sunspots with a small telescope. You would be continuing an observation that has been carried out systematically since 1848. Spots appear in groups, and the sunspot number to write down is not simply the total number of spots. Rather, for each group another ten is added to the count. Single spots also count as groups (i.e. 11 rather than one). So if there are g groups with a total of f individual spots, the sunspot number is:

R = 10 · g + f
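In code, the counting rule is a one-liner; a minimal Python sketch (the function name is mine):

```python
def sunspot_number(groups, spots):
    """Relative sunspot number: ten per group plus the total spot count."""
    return 10 * groups + spots

# A single isolated spot is one group of one spot and counts 11:
print(sunspot_number(1, 1))  # prints 11
```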

The diagram shows my sunspot numbers for the years from about 2005 to 2015. The horizontal axis is a day counter known as the Julian Date (here minus 2,450,000 days). The purple curve shows the total spot count, green shows only spots in the northern hemisphere of the Sun, and the cyan curve shows the southern spots. The daily spot counts have been averaged first over 30 days and then over 9 such 30-day periods. The sunspot number goes up and down in cycles about 11 years long. The recent minimum came somewhat late and lasted rather long, raising worries of very few sunspots for decades to come. In late 2009 it seemed possible to say, however, that the minimum was over and that increasing sunspot numbers could be expected for the next few years. To show the minimum more clearly, the data are also shown exaggerated by a factor of five for the years around the minimum.

Since then, the sunspot cycle has gone through its maximum in early 2014. In fact there was a lesser maximum before then around the turn from 2011 to 2012. In the 11-year sunspot cycle, it is common to have double maxima, but it is unusual that the second maximum is the higher of the two.

There has been a remarkably strong asymmetry between North and South, with the North responsible for the earlier peak, but the South later producing the higher peak. The asymmetry in fact already existed in the lead-up to the minimum, when the North had significantly fewer spots than the South.

Recent changes

The page layout has been changed to waste less space, and to ensure more predictable layout of pages.
The last page of the astrophotography pages (“Object”) has been re-written to make more sense.

Noctilucent cloud

Noctilucent clouds (NLC) are clouds of water ice at the top of the mesosphere, at an altitude of about 84 km. At that height these clouds can still be in sunlight after the Sun has set - or before it has risen - for the observer and the lower, tropospheric clouds. Scotland lies in the range of geographic latitude where these clouds can be seen; at low latitudes the mesosphere is too warm to form water ice, at high latitudes the summer nights are too bright. The clouds themselves can form only during the season when the mesopause is coldest, i.e. the summer months. In the northern hemisphere the NLC season runs from late May to late August.


The geometry of illumination of NLC.

The graphic shows a cut through the Earth with the observer on the left looking in the direction where the Sun is below her horizon. The observer is looking upwards into the sky. Near the observer the line of sight is in the Earth’s shadow and any cloud will probably appear dark against the sky. But further away from the observer the line of sight crosses into the sunlit part of the atmosphere and any cloud there may appear bright against the background.

Consider a cloud at height z above the surface of the Earth and along the surface a distance α away from the observer. If we were looking north the angle α would be the difference in geographic latitude of the cloud and the observer. We count α positive if the cloud is toward the Sun and negative if the observer sees the cloud at the opposite azimuth to the Sun. Given z, the altitude h seen by the observer and the distance angle α are related by

h = atan2[(R+z) cos(α) - R, (R+z) sin(α)]

α = acos[R cos(h) / (R+z)] - h

Note that the “altitude” h can be larger than 90°. This happens when α is negative: imagine the observer looking towards the horizon where the Sun is, then raising her head to look at the cloud. If h > 90° (i.e. if α < 0) she must bend over backwards, looking beyond the zenith, to see the cloud.
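Both relations are easy to evaluate numerically. A Python sketch; the mean Earth radius of 6371 km is my assumed value, not from the text:

```python
import math

R = 6371.0  # km, mean Earth radius (an assumed value)

def altitude_deg(alpha_deg, z_km):
    """Observed altitude h of a cloud at height z and ground distance angle alpha."""
    a = math.radians(alpha_deg)
    return math.degrees(math.atan2((R + z_km) * math.cos(a) - R,
                                   (R + z_km) * math.sin(a)))

def distance_angle_deg(h_deg, z_km):
    """Inverse relation: ground distance angle alpha from the altitude h."""
    h = math.radians(h_deg)
    return math.degrees(math.acos(R * math.cos(h) / (R + z_km))) - h_deg
```

As a check, the two functions are inverses of each other, and a cloud at negative α indeed comes out at h > 90°.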

But the main question is whether the cloud is in sunlight. This depends on the angle hSun, a negative angle that tells how far below the horizon the Sun is for the observer. We look for the intersection of the layer at height z with the border of the Earth’s shadow. The ground distance and the observer’s altitude of that point are

αsh = -hSun - acos[R/(R+z)]

hsh = atan2[(R+z) cos(αsh) - R, (R+z) sin(αsh)]

(Note that the maths is this simple only for the case of NLC in the same direction as the Sun or in the opposite direction of the Sun. The equations cannot be applied to NLC at azimuth significantly to the left or right of the Sun.)
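For the sunward and anti-sunward cases the shadow border is easy to compute. A Python sketch; the mean Earth radius of 6371 km is my assumed value:

```python
import math

R = 6371.0  # km, mean Earth radius (an assumed value)

def shadow_border(h_sun_deg, z_km):
    """Ground distance angle and observer's altitude of the point where the
    layer at height z crosses the border of the Earth's shadow.
    h_sun_deg is the (negative) altitude of the Sun."""
    alpha_sh = -h_sun_deg - math.degrees(math.acos(R / (R + z_km)))
    a = math.radians(alpha_sh)
    h_sh = math.degrees(math.atan2((R + z_km) * math.cos(a) - R,
                                   (R + z_km) * math.sin(a)))
    return alpha_sh, h_sh
```

At sunset this gives hsh near 180° whatever the height, and at civil twilight the 85 km level is still sunlit up to hsh ≈ 168°, as described below.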

At sunset the light/shadow border is at hsh = 180°. This is on the horizon opposite the Sun, i.e. the whole sky is still sunlit (whatever height z we are considering). When the Sun has dropped 4° below the horizon the z = 15 km level is illuminated up to only hsh ~ 35°. Around this time normal cloud is passing into the shadow and will appear dark against the sky. When the Sun is 6° below the horizon (civil twilight) the 15 km level is illuminated only to 2° above the horizon. At the same time the 85 km level is still in sunlight to hsh ~ 168°, virtually all the sky is still sunlit for clouds at that level. This is why it is recommended to start NLC observations when the Sun has dropped 4 or 6° below the horizon.

When the Sun is 9° below, the shadow at the 85 km level moves quickly across the observer’s sky, and at 12° below, NLC would be illuminated only if below an altitude of 12° and toward the Sun. The recommendation is to observe until the Sun is 16° below, because by then NLC would be confined to only 2° above the sunward horizon.

Automatic camera 2009-2012

The dSLR on its cardboard recliner, inside the cardboard light protection box.

The NLC camera moved against the window pane. The laptop is in front.

The NLC camera and laptop on the window sill. The City of Edinburgh and Kingdom of Fife beyond.
For several years, I had set up an automatic camera to take images of the northern sky every 15 min during the night. These were then inspected and reports made to the BAA Aurora Section and to the NLC observers’ home page. Reports of “no NLC” are as important as positive reports of seeing NLC. Of course, when there is tropospheric cloud or fog in the way, no report can be made.

The camera was a Canon EOS 300D digital SLR set to 400 ISO and f/3.5, connected by USB to a Linux laptop that had gphoto2 installed. This camera model is supported quite well by the software, so that the laptop could set the exposure time, release the shutter to take the image, and download the image to the laptop. The laptop was also networked, so that the images and associated logs could be copied elsewhere for reduction and analysis.

The exposure time (shutter speed) depends on the altitude of the Sun. The best exposure in seconds for a given altitude of the Sun in degrees seems to be (for 400 ISO, f/3.5)

t = 0.5 s · exp[-(hSun+7.3°)/1.2°]

Remember that hSun is a negative number. Images were taken whenever hSun < −4°. The resulting exposure times ranged from 1/4 s to 8, 15 or 30 s, depending on how dark it got at midnight. These values were good for the sky background and reasonably faint NLC. For bright NLC much shorter exposures were necessary, as short as 1 s for very bright NLC.
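The exposure law is straightforward to code; a minimal Python sketch:

```python
import math

def best_exposure_s(h_sun_deg):
    """Best exposure in seconds at 400 ISO, f/3.5.
    h_sun_deg is the altitude of the Sun, a negative number."""
    return 0.5 * math.exp(-(h_sun_deg + 7.3) / 1.2)

# 0.5 s when the Sun is 7.3 deg below the horizon; longer as it sinks:
print(best_exposure_s(-7.3))  # prints 0.5
```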

The camera drew power not from its battery, but from a power supply plugged into a wall socket. Nonetheless, it appeared to go into power save mode after a few minutes with nothing to do. The laptop ran 24/7 and its cron daemon invoked a custom Perl script every five minutes at 4, 9, 14, … minutes past the hour. The script used my Java application Sputnik (see my software page) to calculate the azimuth and altitude of the Sun for the next full minute (0, 5, 10, … minutes past the hour). If the Sun was too high to look for NLC, the script would merely get a status report from the camera, wait two minutes, get another status report, and then quit. Three minutes later, the next cron job would run and repeat the action. This was how the camera was prevented from going to power save mode.

If the altitude of the Sun was appropriate to look for NLC, then the Perl script would also take a sequence of images. The longest exposure in the sequence was calculated from the altitude of the Sun by the formula above, but subject to the longest setting possible (30 s). Other exposures were shorter by factors 2, 4, 8 etc. Normally, no images were taken when the Sun was too deep for NLC to be seen, but images could be taken all night to look for Perseid meteors or aurora borealis.
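The image sequence can be sketched as follows; the sequence length of four is my assumption for illustration:

```python
import math

T_MAX = 30.0  # longest shutter setting of the camera, in seconds

def exposure_sequence(h_sun_deg, n=4):
    """Longest exposure from the altitude of the Sun, capped at T_MAX,
    then exposures shorter by factors 2, 4, 8, ..."""
    longest = min(0.5 * math.exp(-(h_sun_deg + 7.3) / 1.2), T_MAX)
    return [longest / 2 ** i for i in range(n)]
```

Late at night, when the Sun is deep below the horizon, the cap takes over and the sequence starts at 30 s.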

Automatic camera 2013-2015
The Canon EOS 300D - both the original I had purchased in 2004 and the second-hand replacement - ended in mechanical failure. Since the 2013 season I have used a cheap compact camera without a controlling laptop. In his talk to the Astronomical Society of Edinburgh in 2012, David Small (who runs a similar camera project) introduced us to the Canon Hack Development Kit (CHDK), whereby certain Canon compact cameras can run scripts to take images autonomously.

I purchased a Canon PowerShot A810 for this project, which now runs a CHDK Lua script for the whole NLC season. At nightfall it switches to imaging mode and moves out the lens, at daybreak it returns to display mode and moves the lens back in. During the day (only!), I can plug a laptop into the USB connection to download new images, adjust the clock, etc. As before, I use Linux and gphoto2 for this. The imaging schedule is essentially the same as in previous seasons, a sequence of increasing exposures every 15 min with the longest exposure depending on the solar altitude as

t = 16.3 s · (S/ISO)−1 · (f/D)2 · exp[−(hSun+7.3°)/1.2°]

where S is the ISO setting and f/D the f ratio (focal length divided by aperture). This camera allows a maximum exposure of 60 s.
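This generalised law reduces to the earlier one: at 400 ISO and f/3.5 the leading factor becomes 16.3 s · 3.5²/400 ≈ 0.5 s. A Python sketch:

```python
import math

def best_exposure_s(h_sun_deg, iso, f_ratio):
    """t = 16.3 s * (S/ISO)^-1 * (f/D)^2 * exp(-(hSun + 7.3 deg)/1.2 deg)."""
    return 16.3 / iso * f_ratio ** 2 * math.exp(-(h_sun_deg + 7.3) / 1.2)
```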

2015 season

The camera was active from 19 May to mid August, based again at the Royal Observatory Edinburgh. Once again, Jose Sabater Montes had part of his view over the city blocked by the camera assembly.

These are the nights during which at least some useful observations were made from 2015-05-19/20 to -08-15/16.

2014 season

The camera was active from mid May to mid August, based again at the Royal Observatory Edinburgh. Once again, Jose Sabater Montes had part of his view over the city blocked by the camera assembly.

These are the nights during which at least some useful observations were made in 2014. The camera was active from 2014-05-14/15 to -08-17/18.

2013 season

The camera was active from mid May to mid August, based at the Royal Observatory Edinburgh. Once again, Jose Sabater Montes had part of his view over the city blocked by the camera assembly.

These are the nights during which at least some useful observations were made in 2013. The camera was active from 2013-05-14/15 to -08-20/21.

2012 season

In 2012, the camera was based at the Royal Observatory Edinburgh, which also provided an old laptop and network access. Thanks are due to Jose Sabater Montes, whose view from the office on the City of Edinburgh was spoilt a bit by the camera.

The camera did in fact run between the 2011 and 2012 seasons to look for aurora, but there had been several problems. While an NLC season requires about 10000 to 15000 frames, looking for aurora during the winter requires on the order of 50000 frames. This ultimately led to the death of the shutter, which had performed about 100000 actuations in total. Douglas Cooper from Doune was kind enough to sell me his old Canon EOS 300D so that automatic imaging was suspended only briefly during a period of bad weather.

These are the nights during which at least some useful observations were made in 2012. The camera was active from 2012-05-15/16 to -08-21/22.

2011 season

In 2011, the camera was back at the Royal Observatory Edinburgh, which also provided an old laptop and network access. Thanks are due to Jose Sabater Montes, whose view from the office on the City of Edinburgh was spoilt a bit by the camera.

These are the nights during which at least some useful observations were made in 2011. The camera was active from 2011-05-01/02. It failed on 2011-08-16/17.

2010 season

In 2010, the Royal Observatory Edinburgh could not be used to set up the camera: The copper domes dating from 1894 were being refurbished, and the whole building where the camera would have been sited was shrouded in tarpaulin for most of the year. Thanks are due to David Small, who lives in the Scottish Borders a few km north of Kelso and had offered his window for the camera to look out of. The camera was also using his domestic wireless network and his broadband connection to upload the images to the server at my home.

These are the nights during which at least some useful observations were made in 2010. The camera was active from 2010-05-13/14 to 2010-08-31/09-01.

2009 season

In 2009, the camera was set up at the Royal Observatory Edinburgh. Thanks are due to Michele Cirasuolo, who had his office view somewhat impaired by the equipment. The camera was then set to 200 ISO and exposures were twice as long as used from 2010 onwards, but still subject to a maximum of 30 s.

These are the nights during which at least some useful observations were made in 2009.

NLC 2009-05-13/14 - Pseudo NLC.
NLC 2009-05-29/30 - NLC undetected.


Comet 17P/Holmes on 2007-11-11.

In the 1970s and '80s photography was hard for amateur astronomers. With the limited amount of time, effort and equipment I was prepared to invest, the results were usually poorer than what the human eye could see. Around the turn of the century this turned around completely with the arrival of digital consumer cameras. Astrophotography at the level I am prepared to go to has become simple enough that usually the results show more than the eye can see.

Why is astrophotography more difficult than regular photography - portraits, landscapes, architecture etc.?


The objects are faint
There is not a lot of light in the universe; nights outside the cities are quite dark. To record faint objects we use a combination of methods:

Sensitive detector. This is one of the major strengths of digital cameras. The CCD and CMOS detectors get much more signal out of a small number of photons than photographic film did.
High ISO setting. This is actually not always a good idea; you will have to experiment with your camera. In digital cameras, the ISO setting is basically an amplifier gain setting. A very high gain setting can mean that most of what comes out of the amplifier is noise.
Long exposure. This has always been the main method to collect enough light for a good image. The main drawback is that the Earth rotates and the objects in the sky move across the field of view. Long exposures require a camera mount that can track the stars and compensate for the Earth’s rotation.
Large aperture. A bigger lens collects more light, making faint objects more accessible. However, large precision-machined pieces of glass cost a lot of money. Still, this is one reason why the astrophotographer puts the camera lens aside and uses a telescope instead.
Fast f ratio. For a given aperture, a shorter focal length gives a brighter image. Fast f ratios require thicker and more complicated lenses, and we need more money to buy them.
Not all objects are faint, though. Sometimes the problem is also one of dynamic range. In a star cluster, the brightest stars may be overexposed before the faint ones are detected. Or the sky background - due to twilight or city lights - may be so bright as to overexpose the whole image before the faint object of interest makes an impact on the detector.

A dSLR will normally give you 8-bit JPG format, but if you ask it for raw images, you may get 12-bit numbers out of it. A CCD camera will usually give you 16-bit numbers. More bits mean higher dynamic range. The smallest brightness step recorded is always one. If you have 8-bit data, saturation occurs at 255, with 12-bit data you can count 16 times further to 4095 before saturation.
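The saturation levels follow directly from the bit depth; a minimal Python sketch:

```python
def saturation_level(bits):
    """Largest value an unsigned integer of the given bit depth can hold."""
    return 2 ** bits - 1

# 8-bit JPG, 12-bit dSLR raw, 16-bit CCD camera:
print(saturation_level(8), saturation_level(12), saturation_level(16))
# prints: 255 4095 65535
```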

The objects are small

This image illustrates the issues of faint and small objects. The 10-second exposure shows no stars, and the rising crescent Moon would not impress us without the dark hill in front of the twilight sky by its side.
The objects are not really small, but they are far away, making them appear small. Either way, if we want to see a certain amount of detail on the Moon or a planet we have to magnify it enough so that it covers a significant number of pixels in our image.

Long focal length. The focal length determines how many millimetres on the detector correspond to a degree on the sky. A longer lens makes the object bigger. The downside is that a longer lens costs more and tends to have a slower f ratio, thus making matters worse for faint objects.
Smaller pixels. This means more pixels per millimetre and hence spreading the object over more pixels. This can give better resolution of the object. It may not be obvious, but the downside for faint objects still exists: A given lens, from a given object, collects a given number of photons. Spread the image of the object over more pixels and the number of photons per pixel goes down.
In addition, smaller pixels may not even deliver more resolution. The resolution is limited by two fundamental factors:

Due to Heisenberg’s uncertainty principle, high resolving power requires a large aperture. Before quantum theory, this phenomenon in optics was called diffraction. Call it what you will, if the aperture itself cannot deliver a well-resolved image, putting smaller pixels into the image plane will not help.
Due to turbulence in the Earth’s atmosphere, only very short exposures can have resolution better than a few arc seconds. Any reasonably long exposure will be blurred, and more pixels or larger aperture will not help.
High resolution, like long exposure, causes a problem with the movement of the sky due to the Earth’s rotation.
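The two limits are easy to put into numbers. A Python sketch; the 6.4 µm pixel, 50 mm lens and 60 mm aperture are illustrative values of my own choosing, and the diffraction limit uses the Rayleigh criterion:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # about 206265

def plate_scale_arcsec(pixel_mm, focal_mm):
    """Sky angle covered by one detector pixel, in arc seconds."""
    return ARCSEC_PER_RAD * pixel_mm / focal_mm

def diffraction_limit_arcsec(aperture_mm, wavelength_nm=550.0):
    """Rayleigh criterion 1.22 * lambda / D, in arc seconds."""
    return ARCSEC_PER_RAD * 1.22 * wavelength_nm * 1e-6 / aperture_mm
```

A 60 mm aperture resolves about 2.3 arc seconds at best, comparable to typical seeing; a 6.4 µm pixel behind a 50 mm lens covers about 26 arc seconds, so here the optics, not the pixel size, sets the limit.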

The objects move

Some objects themselves move, like satellites or meteors. In addition, the whole sky - stars, planets, the Moon, etc. - appears to move because the Earth rotates. This is not apparent to the naked eye, and short wide-angle exposures will also not show this. However, we often need high resolution (long focal length) to record small objects, and we often need long exposures to record faint objects. Objects that move will be smeared into trails.

Circumpolar stars.

This can be an aesthetic bonus, such as in images of circumpolar star trails. Whether desired or not, the movement does exacerbate the problem of too little light coming from faint objects. Say you take a long wide-angle exposure to record a constellation with a satellite and a meteor. The stars might take several seconds to move from one pixel to the next. The satellite will move faster and spend only a fraction of a second on each pixel. The meteor will pass over hundreds of pixels in less than a second. While the stars are recorded well, satellites and meteors may not deposit enough photons per pixel to become visible at all.

There is nothing we can do to track meteors, because they are unpredictable. Even tracking a satellite will be a challenge. But for stars, planets, Sun and Moon we can use a motorised equatorial mount to compensate for the Earth’s rotation and the predictable slow movement of solar system objects against the star background. These mounts are sophisticated mechanical devices, which makes them heavy, cumbersome, and expensive. Even so, at high resolution, they will not be accurate enough, and a guiding feedback mechanism will be needed to put a given star into the same image pixel in spite of drive irregularities. Such feedback could be achieved by putting a human eye behind a second parallel set of optics, or it could be clever software having a peek at the image as it is being exposed. Either way, there would have to be a way to vary the speed of the tracking motor to compensate for the errors in the drive gears and thus make pin-prick stellar images.

Consumer equipment

Products for the mass market are cheap. However, they are not designed for our purposes, and it is good fortune if they can do the job. Typically, easy-to-use equipment is less useful for extraordinary tasks. Auto-focus and automatic exposures are designed for run-of-the-mill tasks where the objects are large and bright. They fail at night, when most of the image appears black at short exposure.

It is vital that we can manually set things like ISO, aperture, exposure time, white balance and focus. “Bulb exposure” should be possible, and ideally the camera should be able itself to time exposures up to 30 s or more.

In most cases, we need control of focal length as well. The camera lens may have to be removed and replaced by a different “lens”, such as a telescope. Quite a lot can be done without removing the camera lens, including images through a telescope. Being able to remove the camera lens gives a lot more flexibility.

This makes a dSLR more useful than a regular digital camera. dSLRs have another advantage. Their detectors are larger, as are their lenses. The larger lenses collect more light and give better diffraction-limited resolution. The larger detectors have larger pixels and each pixel collects more photons, making the images less noisy.

Webcams are used to image planets, as they can quickly take many frames (later to be stacked into a single image), and because their small weight makes them easy to attach to a telescope. With video recording now possible in compact (and dSLR) cameras, these may be an alternative to the webcam.

Image processing

Our ideal image is one with only signal, i.e. the light from the object of interest. We sometimes find unwanted contributions like noise or a light-polluted night sky in our images. Stacking has become a common weapon to combat noise. The idea is that the noise is a random pattern that changes from one image to the next. Add up several or many images and the noise in them will partially cancel itself out, giving the signal the upper hand. However, you should not rush into image stacking without good cause. A far better way to reduce noise is to take a longer exposure. Only if that is not possible - say, if it would lead to overexposure, star trailing, image blurring - should stacking be used. If you have the choice, using raw images is much better for stacking. The conversion to gamma-corrected, compressed, 8-bit images is best done after stacking.
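The square-root effect of stacking can be demonstrated with simulated frames; a Python sketch with made-up numbers:

```python
import random
import statistics

random.seed(1)  # reproducible "noise"
SIGNAL, NOISE, N_FRAMES, N_PIXELS = 100.0, 10.0, 16, 2000

def noisy_frame():
    """One simulated frame: a flat signal plus Gaussian noise per pixel."""
    return [random.gauss(SIGNAL, NOISE) for _ in range(N_PIXELS)]

single = noisy_frame()
frames = [noisy_frame() for _ in range(N_FRAMES)]
stacked = [sum(column) / N_FRAMES for column in zip(*frames)]

noise_single = statistics.stdev(single)    # close to NOISE
noise_stacked = statistics.stdev(stacked)  # close to NOISE / sqrt(N_FRAMES)
```

Averaging 16 frames cuts the pixel-to-pixel noise by about a factor of four, the square root of 16.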

With the noise reduced, image defects from the camera will become apparent, namely bias and dark current. These can be subtracted, provided they are recorded in separate images. Those are called dark frames, because they are taken with no light reaching the detector. Along with dark subtraction, we should talk about division by a flat field. A flat field would also be a separate image taken of an object of uniform brightness. The flat field image will show vignette from the lens and possibly sensitivity variations between individual image pixels.
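The arithmetic of dark subtraction and flat fielding is simple; a Python sketch on a single row of pixel values (all numbers invented):

```python
def calibrate(raw, dark, flat):
    """Subtract the dark frame, then divide by the flat field,
    which is first normalised to a mean of one."""
    mean_flat = sum(flat) / len(flat)
    return [(r - d) / (f / mean_flat) for r, d, f in zip(raw, dark, flat)]

# A uniform scene of 100 counts, seen through a vignetted lens
# (flat response 1.1 at the centre, 0.9 at the edge) on top of
# dark levels of 10 and 5 counts:
row = calibrate([120.0, 95.0], [10.0, 5.0], [1.1, 0.9])
```

Both pixels come back as 100, the true scene brightness.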

Much astrophotography is undertaken in order to take an image, to enjoy looking at it, and to show it to others. We want the image to look its best, and so will often apply cosmetic processing: optimise the brightness and contrast (linear and non-linear stretch), perhaps emphasise small detail over large-scale features (unsharp mask), crop away boring outer regions of a large image, rescale the pixel size to match how the image is used.

Many images also have some scientific value. I take images of the Sun to count its spots on a daily basis. I also take images of the northern summer night sky to log the presence and extent of noctilucent clouds. This is visual analysis and is compatible with optimising the visual appearance of the images first.

Quantitative analysis of images - photometry and astrometry - should, however, be done on relatively raw images. By all means, stack to reduce the noise and subtract the dark frame. Perhaps also subtract a sky background. But the more cosmetic processing steps will in many cases have a detrimental effect on the numerical analysis. You can still, of course, do the quantitative analysis first and afterwards make the image look nice as well.

See also

This page is only a brief introduction to several pages that deal with different aspects in detail. If you want to read all of them, then the following is perhaps the best order in which to do so:

Image processing

Astro images

Noctilucent cloud, 2009-06-17, Edinburgh.

Algol project

While preparing a talk about the eclipsing binary stars Algol and ε Aurigae for the Astronomical Society of Edinburgh (ASE), I had the idea of handing out comparison charts so that people could go out and estimate Algol’s brightness with the naked eye. From the audience came the question whether this could not also be done with digital photography, which it can. So I have looked a bit closer into this and come up with another set of comparison stars for use in digital imaging.

Although aimed mainly at members of ASE, anyone can use the information provided here to observe Algol and then on their own or in a group draw a light curve of brightness against time. The simplest modes of observation described here are not up to a standard that would be acceptable to variable star observers, but the more advanced modes both of human-eye and digital-imaging observation should be acceptable to amateur astronomy organisations like the British Astronomical Association Variable Star Section (BAA VSS) or the American Association of Variable Star Observers (AAVSO). That said, Algol these days seems to be of little interest to serious variable star observers.

Comparison chart

Comparison chart for Algol. For offline use, see also the PDF version.
An essential tool in observing variable stars is a “comparison chart”. This is a star chart that shows the variable itself and a number of nearby stars of known, constant brightness. The graphic shows the chart itself, and below are two lists of comparison stars with their relevant data. The PDF version of the chart includes those data; print it out and use it during your observations.

Use this chart to find the variable Algol and the comparison stars. The circle indicates a 15-degree field for use in digital photography. For naked eye observation, the comparison stars and their brightnesses are:

α Persei 1.8 (alpha Persei)
ζ Persei 2.8 (zeta Persei)
ο Persei 3.8 (omicron Persei)
For digital photography, between one and six comparison stars can be used. Their colours B−V may also be used. These stars are:

  star                  V      err     B−V     err

β Persei (beta)        var            −0.050  0.001
α Persei (alpha)      1.795   0.010   +0.481  0.004
ν Persei (nu)         3.777   0.023   +0.425  0.005
ι Persei (iota)       4.049   0.007   +0.595  0.007
ω Persei (omega)      4.612   0.022   +1.115  0.006
π Persei (pi)         4.696   0.005   +0.061  0.009
κ Persei (kappa)      3.803   0.011   +0.980  0.002
(Data from J.C. Mermilliod 1991, Catalogue of homogeneous means in the UBV System, Institut d’Astronomie, Université de Lausanne.)


Brighter/fainter estimate by naked eye
The very simplest observation you can make of Algol is to just check whether it seems brighter or fainter than the second human-eye comparison star, ζ Persei. Of course note down the time as well as the brighter/fainter estimate.

Algol is at its brightest most of the time, clearly brighter than ζ Persei. For a few hours at intervals of just under three days, it is significantly fainter than ζ Persei. A long series of observations of this kind should allow you to determine the period - the length of time between successive brightness minima - and then to predict when minima will occur.

Quantitative brightness estimate by naked eye
The standard method to observe variable stars with the human eye is to estimate the variable’s brightness relative to two comparison stars, one brighter and one fainter. To do this for Algol with the chart above, first determine whether Algol is brighter or fainter than ζ Persei. If brighter, use α and ζ Persei; if fainter, use ζ and ο Persei. The result of the observation is initially noted down in the form

A x V y B
where V stands for the variable, A and B the brighter and fainter comparison star names, and x and y are numbers to indicate where between the brightness of A and B the variable is estimated to be. For example

α 1 V 2 ζ
would indicate that Algol is about twice as far in brightness from ζ as it is from α Persei.

Keep this original record of your observation - along with the time you made the brightness estimate. Later, to draw a light curve or to report the observation, convert the result to the magnitude scale. This conversion is a linear interpolation of the brightness between the two comparison stars. If we now use V, A and B as symbols for the magnitudes of the stars, and still use x and y for the numbers in our observing result, then:

V = (x · B + y · A) / (x + y)

or to use the example:

V = (1 · 2.8 + 2 · 1.8) / (1 + 2) = 2.133 ~ 2.1

At the end, round to the nearest 0.1 mag, which is the precision of the trained human eye.
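The interpolation is a one-liner; a Python sketch of the worked example:

```python
def interpolated_magnitude(x, y, mag_a, mag_b):
    """V = (x * B + y * A) / (x + y) for an estimate 'A x V y B',
    where A is the brighter and B the fainter comparison star."""
    return (x * mag_b + y * mag_a) / (x + y)

# 'alpha 1 V 2 zeta' with alpha = 1.8 mag and zeta = 2.8 mag:
v = round(interpolated_magnitude(1, 2, 1.8, 2.8), 1)  # 2.1 mag
```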

Algol is 2.1 mag most of the time. About 10% of the time - for 9 or 10 hours every 2.9 days - it is fainter. In the middle of such a minimum it is as faint as 3.4 mag, or roughly halfway between ζ and ο Persei. A good strategy would be to look every hour, but to look every quarter hour while the variable is fainter than halfway between α and ζ Persei.

Taking digital pictures

The extraction of photometry from digital images is potentially more precise than the human eye. Here are a few points to consider when taking such images:

Take raw images rather than JPG images. The analysis relies on a linear relationship between the light falling into a detector pixel and the number encoded in the resulting image file. This is usually the case for raw images. But if the camera applies a gamma correction - as it invariably does for JPG format - the data are not directly suitable to carry out star photometry.
Use a tripod. Use either a cable release or the built-in optional shutter delay to avoid shaking the camera at the start of exposure.
Choose a lens or zoom setting that includes the circled area of the comparison chart in the images. All six comparison stars will then be in the images. E.g., if you use a 60% size detector as found in the cheaper dSLR cameras (22 by 15 mm, or 60% of the old-fashioned 36 by 24 mm frames on 35 mm film), then a focal length of 50 mm is a snug fit to the circle in the chart.
Centre the image halfway between Algol and Mirfak (α Persei).
Try to reduce the effect of vignette on the circular field of interest. If you open the aperture of the lens to its maximum (smallest f number) then the corners and edges of the image get less light than the centre. If you reduce the aperture by, say, one stop (e.g. from f/2.8 to f/4 or from f/3.5 to f/5) then a larger central area will be illuminated more evenly. This helps make the photometry more accurate.
There is no need to focus with great care. On the contrary, you should deliberately de-focus a little. This will spread the light of any star over several detector pixels, which also helps the accuracy of the photometry.
Choose a length of exposure between 3 and 10 seconds. Less, and the result might be erratic due to the twinkling of stars. More, and the image might have to be tracked with a motorised mount. A little bit of trailing due to long exposure - just like a little defocus - does no harm, but long thin star trails are not handled well by photometry software.
Choose an ISO setting (and aperture) that avoids saturating the star images. This depends somewhat on the degree of defocus. For Algol (in fact for Mirfak as the brightest star of interest) a good setting may be 400 ISO, 6 s exposure and an aperture of f/4.
Check your results for saturation. One way is occasionally to inspect the highest numbers in the raw images (before dark subtraction).
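As an illustration of that saturation check, here is a minimal sketch. It assumes a 12-bit sensor whose counts top out at 4095 and a frame already loaded as a NumPy array; the function is my own, not part of any particular photometry package:

```python
import numpy as np

def check_saturation(frame, saturation=4095, margin=0.9):
    """Flag a frame whose brightest pixels approach the saturation level.

    frame: 2-D array of raw counts (before dark subtraction).
    saturation: full-scale value of the A/D converter (4095 for 12 bit).
    margin: fraction of saturation above which a pixel counts as at risk.
    Returns the peak count and the number of at-risk pixels.
    """
    peak = int(frame.max())
    at_risk = int((frame >= margin * saturation).sum())
    return peak, at_risk

# Synthetic example: an even sky background with one nearly saturated star core.
frame = np.full((100, 100), 200)
frame[50, 50] = 4000          # close to the 4095 ceiling
peak, at_risk = check_saturation(frame)
print(peak, at_risk)          # 4000 1 -> reduce ISO or shorten the exposure
```

If the peak creeps toward the ceiling, drop the ISO setting or the exposure time for the next series.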
Photometry from digital photographs
The Citizen Sky project (http://www.citizensky.org/) has prepared a good tutorial on photometry with dSLR cameras. You will need software

1. to convert the camera’s raw data into a more common and versatile format like FITS,
2. to stack multiple frames into a single image,
3. to display an image, to select stars, and to measure their “instrumental magnitudes”,
4. to analyse the instrumental magnitudes of the variable and comparison stars to return the proper magnitude of the variable.
The Citizen Sky tutorial gives pointers for all these needs. Please do study these, in particular for items 1 to 3; the tutorial gives the most appropriate information for most observers. That said, I use entirely different software.

Raw data can also be converted with Dave Coffin’s dcraw utility (http://www.cybercom.net/~dcoffin/dcraw/). Some Linux distributions already include it, so it may only be a matter of installing the package from your distribution. I hacked the source code myself to keep the converted data as original as possible, but you can get the same effect with command-line options that avoid unwanted scaling or colour “correction”:

dcraw -r 1 1 1 1 -k 0 -S 4095 -o 0 -h -f -4
The result is 16-bit PPM format. I use my own stacking utilities (chicam, available at http://www.chiandh.me.uk/soft/) either to convert to FITS or to stack the PPM frames and write the result in FITS.

To display the result and to detect and measure stars in it I use the Starlink software collection (see http://en.wikipedia.org/wiki/Starlink_Project). It is available for Linux and Mac OS X, but is perhaps not for the faint-hearted. I used to work for the project and my astronomer-users at work use the software; hence my computers have it installed anyway.

For the analysis I do recommend my own Photometry spread sheet. It is very much inspired by that of the Citizen Sky project. Mine is perhaps a little easier to use, it calculates the stellar coordinates with higher precision, and it adds some calculations regarding time scales. Most of all, the catalogue data have been filled in for Algol and the above six comparison stars, while the Citizen Sky original is pre-filled for ε Aurigae instead.

Single frame, one comparison star
The simplest way to use your digital camera is to take a single raw image and to measure Algol and α Persei as the comparison star. The precision of the result is not great, however; 0.05 mag is typical. Still, this is a bit better than the human eye can achieve, and perhaps a lot better than the untrained eye.

The photometry software gives us the instrumental magnitudes of the variable (v) and of the comparison star (a). We also know the real brightness A of the comparison star. The brightness V of the variable then is

V = v − a + A

For example, the software might give us readings

β Persei -12.881 = v
α Persei -13.483 = a
and we have A = 1.795. Hence

V = −12.881 + 13.483 + 1.795 = 2.40

Round the result to two digits. The precision is such that quoting three digits would be overkill.

Stack of frames, colour and airmass correction
One of the advantages of digital data over observation with the human eye is that we can take many frames and average them into a single image with reduced noise. To take full advantage of this, we should also correct the frames for the camera artefacts that are captured in dark frames. And in the analysis we should correct for the colour differences between our camera’s green channel and a standard Johnson V filter, and for the difference in airmass between the stars.

Airmass is a measure of the amount of air that a star’s light has to shine through to reach our camera. A star closer to the horizon has a higher airmass and appears more dimmed than a star further up.
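As a sketch, the common plane-parallel approximation X = 1/sin(h) gives the airmass from the elevation h above the horizon. This illustrative function is not the one built into the spread sheet, and real reductions use refined formulae near the horizon:

```python
import math

def airmass(elevation_deg):
    """Plane-parallel approximation X = 1 / sin(h) for elevation h in
    degrees.  Good to a few percent above roughly 20 deg elevation;
    it diverges unrealistically toward the horizon."""
    return 1.0 / math.sin(math.radians(elevation_deg))

print(round(airmass(90), 2))  # 1.0  (zenith: one atmosphere's worth of air)
print(round(airmass(30), 2))  # 2.0  (twice the air of the zenith)
```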

All the maths is in the Photometry spread sheet. What you have to do is copy the seven instrumental magnitudes (one variable and six comparison stars) into the spread sheet, fill in the when and where of your observation, and read off the V magnitude of the variable and the precision of the result.


Ephemeris (plural ephemerides) is the term for astronomical forecast data, such as where each planet is, how bright it is, when eclipses occur, etc.

At the bottom of this page are links to pages with today’s ephemeris.

International Space Station ISS/Zarya
The International Space Station is also known as Zarya (Russian for dawn), since its first module was built and launched by Russia. While in sunlight, it is very bright (about −3 mag according to http://www.heavens-above.com) and easy to spot at night. Like most satellites it orbits from west to east. In an evening pass, it will rise in the West or Southwest and move eastward. Usually it will then disappear into the Earth’s shadow, so that we cannot see it set in the East or Southeast. In a morning pass it moves in the same direction, but the shadow is on the other side: the Station emerges from the shadow and moves toward its setting on the east or southeast horizon.

Chinese space station Tiangong 1

This satellite was launched in September 2011 and reaches a brightness of about 2.5 mag. The inclination of its orbit is smaller than Zarya’s, so that from Edinburgh it rises no more than 6° above the horizon.

Iridium flares

There are about 90 Iridium satellites in Earth orbit, intended for satellite-based mobile phone communications. These satellites have become famous for producing very bright flares, which occur when one of their three largish, flattish aerials happens to reflect the sunlight onto the observer. The aerials point 40° downward from the orbit towards the Earth; one points in the forward direction, the other two point back left and back right.

For any given time and using the satellite orbital elements from http://www.celestrak.com one can calculate where the Sun, satellite and observer are, and by how many degrees the reflection of each of the three aerials misses the observer. An empirical relationship between this angle and the brightness of the reflection has been determined (Randy John, 2002, SKYSAT v0.64, http://home.comcast.net/~skysat). 2° corresponds to about 0 mag, 0.5° to −3 mag. The brightest flares are −8 or −9 mag.
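The two quoted points suggest a roughly logarithmic dependence of magnitude on miss angle. The fit below is my own illustration passing through those two points, not Randy John’s actual empirical formula:

```python
import math

# Fit m = M0 + B * log10(theta) through the two quoted points:
# (2 deg, 0 mag) and (0.5 deg, -3 mag).
B = 3.0 / math.log10(4.0)      # about 5 magnitudes per factor-of-ten in angle
M0 = -B * math.log10(2.0)      # about -1.5

def flare_magnitude(miss_angle_deg):
    """Rough brightness of an Iridium flare from the angle by which the
    aerial's reflection misses the observer.  Illustrative fit only."""
    return M0 + B * math.log10(miss_angle_deg)

print(round(flare_magnitude(2.0), 1))   # 0.0
print(round(flare_magnitude(0.5), 1))   # -3.0
```

Extrapolated in this way, a miss angle well under 0.1° would be needed for the brightest −8 or −9 mag flares.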

These are flares and not flashes. A nighttime flare lasts about 10 to 30 s, during which time the satellite moves several degrees. With a precision of about 1 s, I calculate here the time of maximum brightness of a flare, and also the times when the angle is 2°. I do not check whether it is day or night. You have to make your own judgement. The brightest flares should be visible even in daylight, perhaps those brighter than −5 mag or so.

I re-calculate each day, around midday UT, for the next 24-hour interval. The calculations are for Edinburgh. To give an idea of how far away from the city they remain valid, consider a typical distance to the satellite of 1000 km and a typical elevation of h = 40°. At that distance the 2° angle corresponds to 35 km. If you’re in the ideal spot and looking at the satellite, and then move that far sideways, you lose the flare. Forward or backward you can go further by a factor 1/sin(h). Unfortunately, my calculations don’t tell you which way to go to catch an even brighter flare. But these flares are common enough that you usually don’t need to travel to catch one.
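The footprint figures quoted above can be checked with a few lines (the numbers are the typical values from the paragraph, not a general model):

```python
import math

DISTANCE_KM = 1000.0   # typical range to the satellite
HALF_ANGLE = 2.0       # degrees at which the flare has faded to about 0 mag
ELEVATION = 40.0       # typical elevation h above the horizon, degrees

# Sideways on the ground, the 2 deg cone at 1000 km subtends:
sideways_km = DISTANCE_KM * math.tan(math.radians(HALF_ANGLE))

# Along the direction toward or away from the satellite, the ground
# footprint is stretched by the factor 1/sin(h):
along_km = sideways_km / math.sin(math.radians(ELEVATION))

print(round(sideways_km))  # 35
print(round(along_km))     # 54
```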

If you’re not near enough to Edinburgh, or if you want more information about the flares that hit you, try http://www.heavens-above.com. There you can pick a place and get a longer and more detailed forecast.

Today’s ephemerides


The software made available here comes under the GNU General Public Licence, reproduced below.

Sputnik is concerned with the calculation of ephemerides of the Sun, Moon, planets, asteroids, comets, and artificial satellites. Sputnik 1.9 is C++ source code for such an application. The Sputnik data package supports the application, but is not strictly necessary to build or use it. The Satellite package is the part of the Sputnik application that performs the orbit integration of artificial satellites. The Sputnik 1.9 package includes this, but programmers might want to use only this part to build their own application around it. Technically speaking, the Satellite package contains only the Satellite class and the classes it requires.

Sputnik 3.1 is not a full replacement for Sputnik 1.9. Sputnik 3.1 is written in Java. It does some of the basic jobs, but in a different way, and it does some things that 1.9 does not. Sputnik 3.1 contains a Java port of the same code for orbit integration of artificial satellites, in the self-contained SDP4 class. You need to download only one of the two archive files; see the README file.

The SDP4 class of Sputnik 3.1 is used in Shawn Gano’s JSatTrak (http://www.gano.name/shawn/JSatTrak/), a graphical satellite tracking programme for traditional Java platforms. The whole Sputnik 3.1 application is used in Mike Fuchs’ DroidSat (http://sites.google.com/site/droidsatproject/), a graphical satellite tracking programme for the Android platform (Google mobile phones).

chicam is a small suite of command line utilities to process frames from webcams or digital SLRs. The central engine is the stack utility, which will read one or more frames and shift and stack them into a single average image. There are now also utilities for high dynamic range tone mapping. Download one of the four tar balls. The Linux binaries were built on Debian 6.0 (squeeze). The “Windows” binary is actually a Cygwin binary. chicam is written in C and links with netpbm, fftw and cfitsio libraries.

package | version | date | intro | documentation | archive files
Satellite | 1.9.3 | 2003-04-06 | README | |
Sputnik | 1.9.4 | 2003-09-14 | README | PostScript, PDF |
Sputnik data | 1.6.1 | 2000-02-27 | README | |
Sputnik | 3.1.10 | 2015-04-05 | README | HTML | source tgz
chicam | 1.4.1 | 2012-05-20 | HTML | HTML tutorial | source tgz, Linux 32 bit tgz, Linux 64 bit tgz, Windows tgz

GNU General Public Licence
Version 2, June 1991

Copyright © 1989, 1991 Free Software Foundation, Inc.
675 Mass Ave, Cambridge, MA 02139, USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.


The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software–to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation’s software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their rights.

We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

Also, for each author’s protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors’ reputations.

Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone’s free use or not licensed at all.

The precise terms and conditions for copying, distribution and
modification follow.



TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License. The “Program”, below,
    refers to any such program or work, and a “work based on the Program”
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language. (Hereinafter, translation is included without limitation in
    the term “modification”.) Each licensee is addressed as “you”.

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program’s
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.
    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.
    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,
    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,
    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License. Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.

  5. You are not required to accept this License, since you have not
    signed it. However, nothing else grants you permission to modify or
    distribute the Program or its derivative works. These actions are
    prohibited by law if you do not accept this License. Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions. You may not impose any further
    restrictions on the recipients’ exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.

  7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License. If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all. For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded. In such case, this License incorporates
    the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time. Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.

Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and “any
later version”, you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission. For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this. Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

                     END OF TERMS AND CONDITIONS


    Appendix: How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the “copyright” line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
    Copyright (C) 19yy  <name of author>
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.
    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.
    You should have received a copy of the GNU General Public License
    along with this program; if not, write to the Free Software
    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) 19yy name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a “copyright disclaimer” for the program, if
necessary. Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision’ (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

Stockert radio telescope

Stockert radio telescope with sage in the foreground.
The Stockert radio telescope was constructed in 1956, and the Förderverein Astropeiler Stockert took the opportunity to celebrate its fiftieth anniversary in 2006. In 1985/86 I worked as an operator at the Stockert radio telescope, and even before that I had taken part in a student practical there. Two articles for the journal of the amateur astronomy society Volkssternwarte Bonn resulted from this. During the research for one of these articles, Prof. Wolfgang Priester gave me a photocopy of a newspaper article from 1957; the staff of the Universitätssternwarte Bonn had concocted this April Fool’s prank in the year after construction.

All these articles are available in transcripts of their German originals:

Scottish independence

The Saltire, Scotland’s flag based on the cross of St Andrew [1].
Robert Burns, Scotland’s national bard [2].
Between Scotland’s two national days, St Andrew’s Day on 30 November 2011 and Burns’ Night on 25 January 2012, the debate about a referendum on Scottish independence moved forward significantly. Two consultations were launched on the latter date, one by the Scottish Government [3] and one by the Government of the United Kingdom [4]. These consultations were by no means co-ordinated; they competed with each other. The Scottish Government is in favour of independence, the UK Government is against Scotland breaking away.

This difference comes about because the Scottish National Party (SNP) won an overall majority in the Scottish Parliament in 2011, while the UK Government is formed by parties in favour of preserving the United Kingdom. The latter is inevitable, given that only in Scotland and Wales is there significant support for national parties, and given that England has 83% of the population.

The difference does reflect a difference between the peoples of Scotland and England, but it is not so pronounced that the result of an independence referendum is a foregone conclusion. England will always dominate a United Kingdom in terms of population. England tends more to capitalism, Scotland more to socialism, to put it in very extreme terms. England is more insular and Atlantic, Scotland more Celtic and European. While separation is not inevitable, it seems to me something that Scots should welcome and vote in favour of.

Who’s in charge?

Distribution of seats in the UK Parliament after the 2010 election.
The push for a referendum and for independence comes from the SNP. Until 1999, there was no Scottish Parliament. Until 2007, the SNP was in the minority and a Labour / LibDem coalition ruled. In 2007 the SNP formed a minority government. In 2010 the LibDems disappointed UK voters by going into coalition in the UK Parliament with the wrong party. The expectation was that they would join forces with Labour, but Labour was not strong enough to make numbers add up. A year later the Scottish LibDem vote collapsed and the SNP was able to form the Scottish Government by itself.

The push against independence comes not only from the UK Government and the Conservatives. Because Labour and LibDems collect a significant fraction of their votes in Scotland, they need to keep Scotland in the UK. All big UK-wide parties are against Scottish independence, including the Scottish branches of those parties.

The case for the Scottish Government holding a referendum on Scottish independence is obvious. It has backing from the Scottish Parliament and hence from the people of Scotland for doing this. Scotland is undeniably a distinct territory with a distinct population, which has the right to self determination. The status quo is that Scotland and England joined by mutual agreement in 1707. While the UK cannot expel Scotland, Scotland can demand independence.

The case for the UK Government setting rules and conditions for a referendum is more legalistic. The sovereign of the United Kingdom is the UK Parliament (not the people). The Scottish Parliament exists because the UK Parliament created it and devolved certain powers to it. Holding a referendum on independence is not one of those powers. If the UK does provide a good enough mechanism for Scotland to decide on its independence, then Scotland cannot go against that mechanism. Only if the UK denies Scotland its right to a referendum, could the Scottish Parliament try to assume the power of holding a referendum.

Since the referendum cannot in the end be avoided, the UK Government has to be careful not to subvert it too obviously, as that might increase the vote for independence.

In October 2012, the Scottish Government and the UK Government agreed that the Scottish Parliament should have the temporary power to hold the independence referendum. If it goes ahead, it has to do so before the end of 2014 [5].

The only condition the UK Government insists on is that the referendum have a single question with a yes/no answer. The exact question is up to the Scottish Parliament, but it obtained advice from the Electoral Commission.

The question
The Scottish Government started out with the question “Do you agree that Scotland should be an independent country?” After the Electoral Commission pondered the issue for a while, the question will now be

Should Scotland be an independent country?

The second question

This would have been a question about considerably more powers for the Scottish Parliament, short of independence. In the event of the referendum going against independence, the second question could have helped Scotland get more autonomy than it has now. The Scottish Government tried to give the impression that it was not keen on such a second question in the referendum. The UK also appeared to be against such a question and alleged that the Scottish Government was in favour of it.

From the formal agreement between the two governments it seems that the UK Government insisted on there being no second question; the Scottish Government can continue to claim that it never wanted one and merely offered it in case anyone else was interested.

Most of the electorate are probably against independence but in favour of more devolved powers. They will now have to make up their mind and say “yes” or “no” to independence. Their best bet might be to vote “yes” and hope independence is rejected by only a narrow margin. That should give the Scottish Government a strong mandate to demand more devolved powers, but would clearly keep Scotland in the UK.

Transition to independence

Two further landmark documents were issued in February 2013, one each from the Scottish and UK parliaments. The Scottish Government set out the schedule for independence, subject to the referendum outcome [6]. The referendum will take place in autumn 2014. Formal independence would be in March 2016. The first post-independence Scottish Parliament would be elected in May 2016. Some negotiations and constitutional arrangements would come before formal independence, but the final written constitution would be formulated after May 2016 by the people, i.e. more than just the new Parliament and new Government.

The UK Government obtained advice from two legal experts from the universities of Cambridge and Edinburgh, published as an annex to its own analysis document [7]. The annex makes it clear that Scotland would be a new country outwith all international agreements, while the remainder of the UK would still be the UK and retain its status amongst the countries of the planet. But that is just the legal position, and all it says is that as an independent country Scotland is on its own and has to come to arrangements with its neighbours without being able to go to a higher power to demand concessions.

The question of membership in the EU, NATO and the United Nations is not a legal one; it is political. Will other EU members deny Scotland membership? Will they demand that Scotland become a member? Will they be indifferent? This can become tricky, say, if Scotland wants to be in NATO but to throw out the weapons of mass destruction that the UK keeps on the Clyde. Or if Scotland wanted to keep the UK’s budget rebate and Euro opt-out.

It is likely that some EU members would prefer Scotland not to vote for independence, because they may have problems keeping all their own regions together. But does that mean that after Scottish independence they would really want to punish Scotland? That would not go down well in their problem regions, and it would also go against the spirit of the EU, which is to soak up more and more European countries to stabilise democracy and justice and to foster trade and economic might.

One EU member that might veto Scotland joining would be the remainder of the UK. I don’t think that is a real possibility. More worrying is that the UK might leave the EU, moving Scotland rather further away from the nearest EU territory (Ireland and Denmark).


Where would Scotland rank amongst the 27 (then 28) countries of the European Union?

The population of EU countries ranges from 81.8 million (Germany) to 0.4 million (Malta). The UK (62.0 million) ranks 3rd between France and Italy. Scotland (5.2 million) would rank 20th between Finland and Ireland.

The size in area of EU countries ranges from 675,000 km2 (France) to 300 km2 (Malta). The UK (245,000 km2) ranks 8th between Italy and Romania. Scotland (78,000 km2) would rank 16th between the Czech Republic and Ireland.

The population density of EU countries ranges from 1318/km2 (Malta) to 16/km2 (Finland). The UK (253/km2) ranks 4th between Belgium and Germany. Scotland (67/km2) would rank 22nd between Bulgaria and Ireland.

The gross domestic product per capita of EU countries ranges from US$78,400 (Luxembourg, an outlier, with the Netherlands next at US$39,900) to US$11,900 (Romania and Bulgaria). The UK (US$34,600) ranks 8th between Belgium and Germany. The Wikipedia numbers indicate more than US$40,000 for Scotland. If true, and if sustained through the separation from the UK, this would place it second only to Luxembourg.

The tax revenue relative to gross domestic product ranges from 48.2% (Denmark) to 28.0% (Romania). The UK (37.3%) ranks 12th between the Netherlands and Slovenia. Given that a higher percentage of Scots than other UK residents work in the public sector, Scotland’s figure would probably be higher, perhaps somewhere between Hungary’s 40.4% and Finland’s 43.1%.

By none of these parameters would Scotland be near the extreme of the range. It would be one of the many small countries rather than one of the six with more than 30 million population. Scotland would have 7 votes on the EU Council and 12 or 13 members of the European Parliament. In the EU, small nations have more representation per population than large nations like the UK. Scots would be among the richer peoples of Europe.


The intention is to retain the head of state of the UK as that of Scotland, similar to Canada, Australia, etc. I would have preferred a republic, but there would be no majority for that. To me, the possibility of a head of state being in post for 60 years without ever having been elected on his or her merits makes no sense.

The electoral system would, I imagine, be similar to the current one. Proportional representation is approximated by filling about 60% of parliamentary seats with locally elected candidates, and by filling the remaining seats from party lists to get as close as possible to the proportion of those list votes.

Currently, the electorate for the Scottish Parliament are residents of Scotland who are EU citizens. It is intended that the referendum will also be decided by EU citizens resident in Scotland, so this may also be the rule for the post-independence parliament. This is unfair to some degree: Norwegian nationals have right of residency as EEA citizens, and Swiss nationals under bilateral agreements, but they cannot vote. Scots who have moved away from Scotland for a short period cannot vote, while migrant workers from the EU can vote. That may feel wrong with respect to those who stay only a year or two, but there are also those of us who stay for decades.


The question of currency is genuinely difficult. EU members are generally obliged to adopt the Euro as their currency. With the current economic turmoil, no country not already in the Eurozone would want to join. It appears that the SNP intend to keep the British Pound and to help run the Bank of England. I don’t think this is realistic. The UK may not want this, the EU would not want this, and in a few years’ time the Euro might again be stronger than the Pound.

Military, nuclear weapons, nuclear power
Scotland will want to be in NATO. As of October 2012, the SNP is changing its mind towards that point of view. The reservations about NATO are probably mostly about nuclear weapons. The entire UK nuclear weapons arsenal is based in Scotland. It is not practical to throw out the UK’s Royal Navy from the Clyde and Forth. It then makes no sense for Scotland to quit NATO either.

There are four civil nuclear installations in Scotland. The Dounreay research centre is already being decommissioned, and this would continue. Apparently, it should enter an interim care-and-surveillance state by 2036 and become a brown-field site by 2336 - about 10 generations from now. At either end of Scotland’s Central Belt, the Torness and Hunterston B power stations are owned by British Energy. Hunterston A and Chapelcross near the English border are not producing energy any more.

Scotland would be likely to phase out nuclear power, while the current UK Government seems keen to resume the building of new plants at least outwith Scotland. Scotland has a significant supply of hydro power, and significant potential for wind farms inland and offshore.

Other questions

References and credits

A lesson in TCP routing

In my work as a Linux and network administrator, my boss and I recently had occasion to learn that the routing of TCP traffic over the Internet in 2014 works very differently from what we had learnt one or two decades earlier.

TCP/IP stands for Transmission Control Protocol / Internet Protocol and is the standard by which client-server communication happens over the Internet. A client - a workstation, or a web browser on a workstation - initiates a connection to a server - say, an Apache or IIS application on a server computer - across the Internet. They exchange streams of bytes, each end acting both as source and destination of such a stream. The source chops its stream of bytes into packets; each packet finds its way across the Internet to the destination, where the packets are collected, put back in sequence, and the stream of bytes is extracted.

“The Internet” is an internet, and an internet is a combination of one or more local networks (usually Ethernets). The local networks are interconnected by “routers”. The source of traffic sends it to the nearest router, which sends it onward on a different network that leads closer to the destination. After a number of such hops from one local network onto another, the packets eventually arrive at the destination.
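The hop-by-hop forwarding just described can be sketched as a repeated next-hop lookup; the router and network names below are invented purely for illustration:

```python
# Toy model of hop-by-hop forwarding: each router looks up the next hop
# towards the destination network and passes the packet on. All names
# here are hypothetical.
ROUTING_TABLES = {
    "router-a": {"net-client": "deliver", "net-server": "router-b"},
    "router-b": {"net-server": "deliver", "net-client": "router-a"},
}

def route(first_router, dest_net):
    """Follow next-hop entries until some router can deliver locally;
    return the list of routers the packet passed through."""
    hops = [first_router]
    router = first_router
    while True:
        next_hop = ROUTING_TABLES[router][dest_net]
        if next_hop == "deliver":
            return hops
        hops.append(next_hop)
        router = next_hop

print(route("router-a", "net-server"))  # ['router-a', 'router-b']
```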

The problem

The problem isn’t really important. But the subtlety of its cause and the magnitude of its effect kicked off an investigation that overturned a few long-held ideas we had about how packets are routed between networks.

The ingredients to observe the problem are:

A Windows client workstation running a web browser. The MTU in its network interface is set correctly to 1500, and like all modern operating systems, path MTU discovery is on. The MTU is the size limit for an Ethernet frame on the local network. This limits the size of an Internet packet or packet fragment. [1]
A web server.
A router running Debian Linux with kernel 3.2.60 as announced in Debian Security Advisory 2972 [2]. The router has at least two network interfaces. The default setting for the network interfaces is to have TCP segmentation offload turned on. The term offload refers to the fact that the operating system allows the network hardware to do the work; it offloads the work to the hardware [3].
Two Ethernets, one connecting the client to the router on one interface and one connecting the server to the router on the other interface. Both Ethernets use standard frame sizes so that MTU settings of 1500 are correct everywhere in the experimental setup. [1]
The client sends a small volume of traffic to ask for a web page. The server then responds with a large volume of traffic to deliver the web page to the client. However, the router refuses to forward almost all traffic from the server, alleging that the packets are too big to be sent on via the Ethernet that leads to the client. The server cannot figure out how to react to the error messages from the router and hardly any of its traffic makes it through to the client.

The problem is quite peculiar and not really important here. There are many measures one could take, each of which on its own avoids the problem:

Use a Linux client. Note that it is something the client says to the server that causes the server’s traffic to fail; this is perplexing.
Use a smaller MTU on the Windows client. We discovered this by accident, because one client had a Cisco VPN client installed, which had changed the MTU to 1300. Again, note the client settings affect the server’s traffic success rate.
Turn path MTU discovery off on the Windows client. This has two effects. First, the MTU is reduced to 536. Second, the packets sent by the client no longer have the DF (do not fragment) flag set. [4]
Even though the web server encounters the problem and receives the error messages from the router, the details of the server do not matter. It can be Windows or Linux, Apache or IIS.
More understandable is that changes on the router eliminate the problem: Going back to kernel 3.2.57 avoids the problem. Turning off all offloading to the network cards avoids the problem.
Lessons learnt
TCP handshake and flow control
At the start of a TCP connection there is a three-way handshake. The client sends an empty packet with the SYN (synchronise parameters) flag set. The server replies with an empty packet with SYN and ACK (acknowledge receipt) set. The client responds with a packet with ACK set. [5]
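The handshake itself is performed by the kernel when an application connects; a minimal sketch over the loopback interface, using Python's standard socket module:

```python
import socket

# The kernel carries out the SYN / SYN-ACK / ACK exchange inside connect();
# the application never sees the three empty packets.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # bind to any free loopback port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))  # three-way handshake happens here
conn, _ = srv.accept()

# Once connected, both directions carry a byte stream.
cli.sendall(b"GET / HTTP/1.0\r\n\r\n")
request = conn.recv(1024)

# On Linux, the MSS negotiated during the handshake can be queried.
if hasattr(socket, "TCP_MAXSEG"):
    print("MSS:", cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG))

cli.close(); conn.close(); srv.close()
```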

What we had not really realised is how much other information the client and server are exchanging about each other in the handshake.

The handshake probably contains information sent from the client to the server that in the case of a Windows client then causes the problem.

Noteworthy is the MSS parameter that is communicated in the initial handshake. This is the “maximum segment size”, i.e. the largest packet size the sender is willing to receive. We find this typically set to 1460, which is the sender’s local MTU (1500) minus the combined length of the IP and TCP headers (20 bytes each, 40 in total). [6]
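The MSS arithmetic can be written down as a one-line sketch (the 1300 value mirrors the VPN-modified MTU mentioned earlier):

```python
def mss_for(mtu, ip_header=20, tcp_header=20):
    """Maximum segment size: the local MTU minus the IPv4 and TCP header lengths."""
    return mtu - ip_header - tcp_header

print(mss_for(1500))  # 1460, the value we typically see in handshakes
print(mss_for(1300))  # 1260, e.g. after a VPN client has lowered the MTU
```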

Also noteworthy is the win parameter. This is communicated in all packets throughout the connection and changes during the connection. It tends to start out fairly large (14600 for a Linux sender, 8192 for a Windows sender) but reduces later to something around the 500 mark. The main purpose of this parameter seems to be for the recipient of the bulk data to signal to the sender that transmission should be stopped for a while to allow the receiver to process the data received so far. [7]

Further, there is a wscale parameter, communicated in the handshake only. win characterises how much data the receiver can accept next and is used to inhibit transmission while the receiver processes. wscale addresses the problem of using the bandwidth efficiently when the connection is long-distance and high-bandwidth: the sender should then send more traffic before expecting an acknowledgement. To do so, sender and receiver keep another, larger buffer to hold traffic while it is also on the network.

In our experiment we saw win=14600,wscale=6 from a Linux client and win=8192,wscale=8 from a Windows server. The larger window sizes therefore are [8]

WL = 14600 · 2^6 = 934400
WW = 8192 · 2^8 = 2097152
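These scaled window sizes can be verified with a bit shift, using the win and wscale values we observed:

```python
def scaled_window(win, wscale):
    """Effective receive window: the 16-bit win field shifted left by wscale."""
    return win << wscale

print(scaled_window(14600, 6))  # 934400 bytes  (the Linux end)
print(scaled_window(8192, 8))   # 2097152 bytes (the Windows end)
```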

MTU variations and fragmentation

Traditionally, the sender of packets would use its local MTU as the maximum size of the packets it sends. As a packet hops from router to router, it may happen that a router cannot pass it on because the MTU of the next network is too small. What would then normally happen was that the router split the packet into fragments. Only the first fragment would retain the transport (TCP) header, but every fragment needs its own IP header with extra information to identify the original packet and the fragment’s position in the sequence. [9]

Fragments would then be reassembled into packets at the destination, not on a router. [9]
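As a rough sketch of this mechanism (real IP fragmentation works in 8-byte units and carries the offsets in the IP header; this only shows the offset bookkeeping):

```python
def fragment(payload, mtu):
    """Split a packet payload into (offset, chunk) fragments that fit the MTU.
    Simplified: real fragmentation measures offsets in units of 8 bytes."""
    return [(off, payload[off:off + mtu]) for off in range(0, len(payload), mtu)]

def reassemble(fragments):
    """The destination sorts fragments by offset and rebuilds the payload."""
    return b"".join(chunk for _, chunk in sorted(fragments))

packet = bytes(3000)                          # larger than a 1500-byte MTU
frags = fragment(packet, 1480)                # 1500 minus a 20-byte IP header
assert reassemble(reversed(frags)) == packet  # arrival order may vary
print(len(frags))  # 3
```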

It has always been possible for the sender to flag a packet as “DF” or “do not fragment”. If a router is asked to forward such a packet, but cannot, it will send a fragmentation-required error message to the source of the packet, telling it what the MTU value is that causes the dilemma. The source can then interpret the message and resend smaller packets. [9]

Path MTU discovery and DF flag
Several changes in typical behaviour have occurred over time:

All modern operating systems now try to discover the “path MTU”: they would like to learn the smallest MTU on the networks that need to be traversed from source to destination. The sender can then compose small enough packets to begin with, and the complications and inefficiencies of fragmentation are avoided. To perform path MTU discovery, the sender sends all packets with the DF flag set. No router can then ever fragment a packet; it must always send back an error message. The sender uses these error messages iteratively to learn the smallest MTU along the path to the destination, and then makes its packets small enough to slip through without fragmentation. [4]
Badly configured firewalls may block all ICMP traffic. The fragmentation-required message is ICMP traffic. Hence such firewalls break path MTU discovery. [4]
Firewalls re-assemble packets from fragments in order to carry out their rule checks. Firewalls are routers and traditionally would not have done this. [9]
These changes are at odds with each other. In the absence of fragmentation-required messages, path MTU discovery can be carried out differently, by probing with increasing packet sizes and detecting the resulting throughput problems. [4]
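The iterative discovery can be simulated in a few lines; the path of link MTUs below is hypothetical:

```python
# Hypothetical path: the link MTUs from sender towards destination.
PATH_MTUS = [1500, 1400, 1300, 1500]

def send(size):
    """Return None on success, or the MTU of the first link that is too
    small (the fragmentation-required message carries this value)."""
    for mtu in PATH_MTUS:
        if size > mtu:
            return mtu
    return None

def discover_path_mtu(local_mtu):
    """Start at the local MTU and shrink on each error until packets fit."""
    size = local_mtu
    while (smaller := send(size)) is not None:
        size = smaller          # resend with the advertised MTU
    return size

print(discover_path_mtu(1500))  # 1300, the smallest MTU along the path
```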

tcpdump and offload
In diagnosing our problem, our main tool was to record and inspect traffic as it enters and exits the router, using the tcpdump utility. To our surprise, packets received and sent were too large to fit the MTU, typically small integer multiples of 1460 (the MTU minus the IP and TCP headers).

tcpdump inspects packets as they are handed from the network hardware to the operating system. It turns out that the large packets seen by the OS kernel are an artefact of “offload”. With offload on, the network hardware does not exchange traffic frame by frame with the operating system. Rather, it combines multiple received frames and splits outgoing traffic into multiple sent frames. [10,11,12]

For a while we surmised that the network card was carrying out defragmentation. In this mental image, the sender of the traffic would have sent a packet far exceeding the network MTU, its sending network interface would have split the packet into fragments, and the receiving network interface would have reassembled the fragments into the original packet. But this did not make sense, as all traffic was marked DF - do not fragment. [9]

Using the additional ethtool utility, we found the network interfaces configured thusly:

ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-unneeded: off [fixed]
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: on
loopback: off [fixed]
It turns out that between the upper layers of the IP stack in the operating system and the network hardware, each large packet of the upper layers corresponds to multiple packets (not fragments) at the lower layer. The smaller packets all fit the network MTU, while the larger packet at the higher levels is more efficient to process. [10,11,12]

The offloading may merely shift the work of splitting and reassembling large packets from the operating system to the network card. But it also shifts the work such that tcpdump in one case sees the small MTU-matching packets and in the other the large, efficiently processable packets.

Since they are all packets, the fact that they all have the DF flag formally does not matter. The small units are full IP packets and not fragments of packets, so the DF flag is formally obeyed. Whether its intention is bypassed is debatable. One has to recall that the intention of the DF flag has changed; in the era of ubiquitous path MTU discovery the DF flag is almost meaningless (set on virtually all packets). [4]

[12] seems to indicate that the upper-level big packet in the receiving OS may correspond to the big packet in the sending OS, but we have doubts about this. The intention (efficiency in the OS) and the placement of offloading in the whole IP process seem to hint that the big packets are just ephemeral entities in the sending or receiving OS, with no intended correspondence between the two ends. De facto, however, the flow of packets and the use of PSH flags in TCP packets might approximately achieve such a correspondence: one big packet at the sending end turns into a burst of small packets on the network, and those very packets may then be reassembled into the same large packet in the receiver.

TSO: TCP segmentation offload
This applies to outbound TCP traffic. Segmentation means the splitting of the large packet at the higher level of the IP stack into MTU-sized packets before they go out on the network. [13]
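A sketch of what such segmentation amounts to; the crucial difference from fragmentation is that each resulting unit is a complete packet that would get its own IP and TCP headers, not a fragment:

```python
def segment(payload, mss=1460):
    """TSO sketch: cut one large buffer into MSS-sized, self-contained TCP
    packets. The "hdr" placeholder stands for a full per-packet IP + TCP
    header, in contrast to fragments, which share a single TCP header."""
    return [("hdr", payload[off:off + mss]) for off in range(0, len(payload), mss)]

packets = segment(bytes(4380))  # a 3 x 1460 buffer, as seen by tcpdump
print(len(packets))             # 3 packets, each fitting a 1500-byte MTU
```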

UFO: UDP fragmentation offload
Apparently it is uncommon for network cards to handle fragmentation of outgoing non-TCP traffic. Indeed, our network interface shows UFO as fixed to off. [13]

GSO: generic segmentation offload
GSO is apparently a generalisation of TSO to traffic that can be TCP or not. Given that non-TCP traffic can probably not be segmented, GSO should be equivalent to TSO. [13]

LRO: large receive offload
This applies to inbound traffic. Multiple incoming packets are combined into a bigger packet for the higher levels of the IP stack. Oddly, this is off and fixed to be off. [14]

GRO: generic receive offload
It is unclear what this is, but we can turn it on or off. Clearly it applies to inbound traffic. Also, our experiments show that incoming MTU-sized TCP packets are turned into large TCP packets. Since this is the only receive offload setting that is on, this must be the setting responsible. [14]

Should we offload or not?

The objective of offloading is efficiency and throughput. This may or may not happen, depending on the implementation in the networking hardware and on the power of the CPU and operating system. In the Linux community, enthusiasm is limited also because the offloading code is proprietary and cannot be fixed for security problems it may have. [3]

In our case, we have to turn off offloading at least in part. This is to allow the use of the 3.2.60 kernel without stifling traffic from servers to Windows clients. If we want to try a partial switching off, we need to recall the problem. After the router kernel sees the large packet assembled by the receiving network interface, it seems to refuse to pass it for transmission.

Initially, we should turn off TSO and GSO, but expect that this will not fix the problem. Then we should turn off GRO instead and expect that to fix it. We have already established that turning off all three does fix the problem.

In the end we should probably go with the open-source argument [3] and turn off all three offloads - TSO, GSO, GRO - on all network interfaces.

How to turn it off
The ethtool utility, in setting features on or off, uses feature names that have no resemblance to the feature names displayed above. In particular [10]:

ethtool -K eth1 gso off
ethtool -K eth1 tso off
ethtool -K eth1 gro off

ethtool -K eth1 lro off
Cannot change large-receive-offload
ethtool -K eth1 ufo off
Cannot change udp-fragmentation-offload
Finally, we have the extra complication that on one network interface the router does VLAN tagging [15]. In addition to eth3 itself, we have several interfaces eth3.N, where N is the VLAN number. If we turn the features off only for eth3, then the eth3.N interfaces show the feature as off but requested on. This is sufficient for the routing problem to go away.

The feature manipulation does not persist across a reboot, so the obvious place to make these settings is as a pre-up or post-up command in /etc/network/interfaces for each interface as and when it is brought up.

auto eth3.2
iface eth3.2 inet static
post-up ethtool -K eth3 gso off
post-up ethtool -K eth3 tso off
post-up ethtool -K eth3 gro off
This changes eth3 itself and not eth3.N, but that is sufficient.

The fix in practice

We returned the router to its faulty state - kernel 3.2.60 and offload on - and then turned off offload features one by one and interface by interface.


References

[1] “Maximum transmission unit (MTU)”. Wikipedia.
[2] Debian (2014). “DSA-2972-1 linux – security update”. Debian Security Advisories.
[3] “TCP offload engine (TOE)”. Wikipedia.
[4] “Path MTU Discovery (PMTUD)”. Wikipedia.
[5] Charles M. Kozierok (2005). The TCP/IP Guide.
[6] “Maximum segment size (MSS)”. Wikipedia.
[7] “Transmission Control Protocol (TCP)”. Wikipedia.
[8] “TCP window scale option”. Wikipedia.
[9] “IP fragmentation”. Wikipedia.
[10] Jeff Morriss (2012). “Re: wireshark sees jumbo TCP packets in linux”. Wireshark-users mailing list.
[11] “Capture setup – Offloading”. Wireshark wiki.
[12] Dan Siemon (2013). “Queueing in the Linux network stack”. http://www.coverfire.com
[13] “Large segment offload (LSO)”. Wikipedia.
[14] “Large receive offload (LRO)”. Wikipedia.
[15] “Virtual LAN (VLAN)”. Wikipedia.