Unsharp masking is a technique to increase the contrast of small-scale features while suppressing large-scale features. The technique originates from the darkroom of astrophotography, where it is done as follows. Say you have an astrophotograph as a negative. You take a contact copy on negative film, which is therefore a positive image of the sky. However, in making this copy you ensure that it is slightly out of focus. After developing this unsharp copy, you stack the original and the unsharp copy - which thereby becomes an unsharp mask - in the negative holder of the enlarger and use that to make your prints.
The negative has the usual effect on the print, bringing a positive image onto the paper. The mask in effect results in different exposure times between generally bright areas and generally dark areas of the image. Bright parts are exposed less, faint parts are exposed more. So overexposure in bright parts may be prevented, while faint parts are enhanced.
|A Moon detail before and after unsharp masking. Bringing out the finer detail and enhancing its contrast is paid for by amplifying the noise, too. Using more frames to reduce noise becomes more important.|
In digital image processing you can also find unsharp masking methods, although this seems to be uncommon in professional astronomy. Photo applications like Photoshop or the Gimp can do unsharp masking.
The correct algorithm for unsharp masking is (cf. http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/unsharpmask/):
F = [1/(2 c - 1)] [c I - (1 - c) U(w)]
c ∈ (0.5,1.0]
I is the original image. U(w) is the unsharp mask calculated by a Gauß convolution (a blurring with a two-dimensional bell curve); it depends on the full width at half maximum w of the bell curve. c can range from 0.5 to 1, but cannot be exactly 0.5. For c = 5/9, 3/4 and 1.0, resp.:
c = 5/9: F = 5.0 I - 4.0 U(w)
c = 3/4: F = 1.5 I - 0.5 U(w)
c = 1.0: F = 1.0 I
The sum of the coefficients is always 1. Close to c = 1 the masking effect is small; close to c = 1/2 the mask becomes almost as important as the original image.
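These coefficient properties are easy to verify. The following sketch in Python (exact rational arithmetic, not part of the KAPPA workflow) computes the coefficients of I and U(w) for any valid c and checks that they sum to 1:

```python
# Coefficients of I and U(w) in F = [1/(2c-1)] [c I - (1-c) U(w)].
from fractions import Fraction

def f_coefficients(c):
    """Return (coefficient of I, coefficient of U(w)) in F."""
    scale = 1 / (2 * c - 1)
    return scale * c, -scale * (1 - c)

for c in (Fraction(5, 9), Fraction(3, 4), Fraction(1, 1)):
    ci, cu = f_coefficients(c)
    assert ci + cu == 1   # the coefficients always sum to 1
```

For c = 5/9 the coefficients are 5 and -4, for c = 3/4 they are 1.5 and -0.5, reproducing the examples above.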
I have so far been using a simpler algorithm, typically with k = 0.7:
G = I - k U(w)
k ∈ [0.0,1.0]
k = 0.7: G = I - 0.7 U(w)
Apart from a c-dependent scaling factor, G and F are the same with
c = 1 / (k + 1)
k = 0.7 => c = 10/17
c = 10/17: F = [10 I - 7 U(w)] / 3
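The equivalence can also be checked numerically. This Python sketch uses exact rational arithmetic and the arbitrary sample pixel values I = 200, U = 180 (choices made purely for illustration):

```python
# Check that with c = 1/(k+1) the two algorithms agree up to the
# constant factor c/(2c-1).
from fractions import Fraction

k = Fraction(7, 10)                  # k = 0.7 as in the text
c = 1 / (k + 1)
assert c == Fraction(10, 17)

I, U = Fraction(200), Fraction(180)  # arbitrary sample pixel values
G = I - k * U                        # simpler algorithm
F = (c * I - (1 - c) * U) / (2 * c - 1)
assert F == (10 * I - 7 * U) / 3     # the c = 10/17 form given above
assert F == G * c / (2 * c - 1)      # G and F differ by factor c/(2c-1)
```

With k = 0.7 the scaling factor c/(2c-1) is 10/3, so F is simply G stretched by a constant.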
I use the following little procedure based on Starlink's KAPPA package to perform unsharp masking with my own algorithm. First I set the parameters, the name of the input grey image, the name of the masked image, the width of the smoothing and the fraction of the smoothed image to subtract.
in=grey ; out=masked ; fwhm=10 ; k=0.7
rm -f temp.sdf
gausmooth fwhm=$fwhm @$in temp
maths exp="ia-pa*ib" ia=@$in ib=temp pa=$k out=@$out
rm -f temp.sdf
You have to experiment with the masking parameters w (the full width at half maximum of the smoothing Gauß function) and k. I find w = 10 and k = 0.7 useful for Sun, Moon and planets when the ToUcam Pro VGA is used at 3500 mm focal length. In that case each pixel is 0.33", so w = 3.3", which is three or four times the image resolution.
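For readers without Starlink, the algorithm G = I - k U(w) is easy to sketch in plain Python on a one-dimensional signal. The kernel radius of two FWHM and the clamping at the edges are arbitrary choices for this sketch, not necessarily what gausmooth does:

```python
# Sketch of G = I - k*U(w) on a 1-D signal, standing in for
# KAPPA's gausmooth + maths.
import math

def gauss_kernel(fwhm, radius):
    """Normalised Gaussian kernel; sigma derived from the FWHM."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def unsharp_mask(signal, fwhm=10.0, k=0.7):
    radius = int(2 * fwhm)           # arbitrary truncation radius
    kern = gauss_kernel(fwhm, radius)
    out = []
    for i in range(len(signal)):
        # U(w): Gaussian smoothing, clamping indices at the edges
        u = sum(w * signal[min(max(i + j - radius, 0), len(signal) - 1)]
                for j, w in enumerate(kern))
        out.append(signal[i] - k * u)   # G = I - k*U(w)
    return out
```

On a flat signal of value 100 this returns 30 everywhere (the coefficients of G sum to 1 - k = 0.3), while a narrow spike keeps most of its amplitude - which is precisely the small-scale enhancement we are after.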
The example above is for a grey image. Can we use unsharp masking for colour? Yes. From the data reduction we have a grey NDF and a 24-bit colour PNG. One option is to use the unsharp masking in the Gimp or a similar photo processing application. Another is to apply my algorithm to RGB individually before combining the NDF images into a colour PNG. Thirdly, you can use the Gimp to layer the original colour image with the masked grey image and use one for the colour and the other for the intensity of the image.
Picking up from the reduction results (see section on grey and colour images of planets), we proceed as follows for a masked colour image:
rm -f temp.sdf
fwhm=10 ; k=0.7
in=subR ; out=maskR
gausmooth fwhm=$fwhm @$in temp
maths exp="ia-pa*ib" ia=@$in ib=temp pa=$k out=@$out
in=subG ; out=maskG
gausmooth fwhm=$fwhm @$in temp
maths exp="ia-pa*ib" ia=@$in ib=temp pa=$k out=@$out
in=subB ; out=maskB
gausmooth fwhm=$fwhm @$in temp
maths exp="ia-pa*ib" ia=@$in ib=temp pa=$k out=@$out
rm -f temp.sdf
truec black=0 white=50 red=maskR green=maskG blue=maskB | \
  pnmtopng > colour.png
For easy viewing we need to convert our final NDF image into a graphics format. I prefer PNG, because it uses lossless compression (like GIF does), and is free of the licensing politics that GIF is embroiled in. PNG also compresses more than GIF.
From Starlink's CONVERT package we get a conversion to TIFF, which we use as an uncompressed interim format. By default this scales the image so that the minimum and maximum become black and white, resp. We convert on with tifftopnm and pnmtopng, which are commonly found on Linux systems.
ndf2tiff in=masked out=temp.tif
tifftopnm temp.tif | pnmtopng > r001.png
rm temp.tif
I like to have a thumbnail 64 pixels high as well. This can be made in the Gimp, or on the command line with ImageMagick. The mogrify command seems to have changed between the versions included in Red Hat 7.1 and 8.0, respectively:
Red Hat 7.1 (ImageMagick 5.2.7):
cp r001.png r001_.png ; mogrify -geom 200x64 r001_.png

Red Hat 8.0 (ImageMagick 5.4.7):
mogrify -geometry 200x64 r001.png ; mv r001.mgk r001_.png
When we are looking for faint objects (star fields, nebulae, clusters, etc.), unsharp masking is not the right method to enhance contrast. Instead, in the simplest case, we use a linear stretch. This means that we do not accept the minimum and maximum image values as black and white in our graphics. Rather, we give the ndf2tiff command suitable image values for black and white.
|An image of the Orion nebula. Left without linear stretch, so that the brightest star is about the only thing we see. Right with very strong stretch showing the faint parts of the nebula, but the bright parts appear overexposed.|
ndf2tiff in=mosaic out=temp.tif scale=scale low=0.85 high=2
tifftopnm temp.tif | pnmtopng > linstretch.png
rm temp.tif
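To make explicit what low and high do, here is the mapping sketched in Python; the 8-bit output range and the rounding are assumptions of this sketch, not a statement about ndf2tiff's internals:

```python
# Sketch of a linear stretch: low maps to black (0), high to white (255),
# values outside the range are clipped.
def linear_stretch(value, low=0.85, high=2.0):
    t = (value - low) / (high - low)   # position between low and high
    t = min(max(t, 0.0), 1.0)          # clip to [0, 1]
    return round(255 * t)              # 8-bit grey level
```

So linear_stretch(0.85) gives 0 and linear_stretch(2.0) gives 255, matching the low and high values used above.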
As you can see, our images are so good that it is often impossible to show all their content in a simple 8-bit grey image, even when a linear stretch is applied. This is not so much a problem of how many grey levels our screen can show, but of how many our eye can distinguish.
One kind of non-linear stretch is to apply a mathematical function to the image before converting it to a graphic. A square root, fourth root or logarithm will enhance the contrast in the faint parts and reduce the contrast in the bright parts. The KAPPA command maths could do these things for us.
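Sketches of such stretches in Python; each is normalised so that the input range [0, 1] maps to [0, 1] again, and the strength parameter a of the logarithmic stretch is an arbitrary choice for illustration:

```python
# Non-linear stretches for the normalised range [0, 1]: all enhance
# contrast in the faint parts and compress it in the bright parts.
import math

def sqrt_stretch(v):
    return math.sqrt(v)

def fourth_root_stretch(v):
    return v ** 0.25

def log_stretch(v, a=1000.0):
    # a sets the strength; normalised so that log_stretch(1.0) == 1.0
    return math.log(1.0 + a * v) / math.log(1.0 + a)
```

A faint value of 0.01 becomes 0.1 after the square root and about 0.32 after the fourth root - a tenfold and a thirtyfold brightening relative to the bright end.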
Another option is to use false colour. There are more colours our eye can distinguish than there are greys. So by encoding the sky brightness in colour we can potentially see more detail. We must not forget, of course, that the colours are false and not the true colour of the object. I have written a programme to convert a grey NDF image into a false colour graphic. The colour wedge it uses runs first from black through all shades of grey to white, then flips to pink and runs through the colours of the rainbow to red and on to white again. The user can choose the input image values that should become black, pink and white.
falsec in=mosaic black=0.85 pink=2 white=50 | pnmtopng > false.png
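The falsec wedge itself is specific to my programme, but the principle of encoding brightness as colour can be sketched generically. The following Python sketch uses a plain HSV rainbow from blue (faint) to red (bright) - not the black-grey-white-pink wedge described above:

```python
# Generic false-colour mapping: brightness in [0, 1] is encoded as hue.
import colorsys

def false_colour(v):
    """Map brightness v in [0, 1] to an RGB triple, blue (faint) to red (bright)."""
    v = min(max(v, 0.0), 1.0)
    hue = (1.0 - v) * 2.0 / 3.0        # hue 2/3 = blue ... hue 0 = red
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(255 * r), round(255 * g), round(255 * b)
```

The gain is the same as with falsec: the eye distinguishes many more hues than grey levels, so more of the dynamic range becomes visible at once.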
Copyright © 2003 Horst Meyerdierks
$Id: present.shtml,v 3.3 2004/02/21 18:13:39 hme Exp $