Theory and tests

Detectors

Both my webcams have CCD detectors, although some webcams have CMOS detectors, which are said to be less sensitive. Both webcams also have 8-bit analogue-to-digital converters. Very briefly, light falling onto the pixels of the CCD is converted into electric current, which is then digitised into computer-readable numbers. No doubt there is an electronic amplifier in the middle, too.

The CCD is an integrated electronic circuit, at the heart of which is a rectangular area of light sensitive semiconductor. As light hits this area, electrons in the detector crystal are freed from fixed locations and become mobile across the detector.

By clever design and by application of appropriate voltages at the right time, this area is divided into pixels, which form the smallest units of the resulting images. The main property of the pixels is that, most of the time, the mobile electrons can move only within the pixel in which they originate and cannot stray into neighbouring pixels. The controlling voltages can clear the detector of charge, begin an exposure and end an exposure.

The controlling voltages can also shift any collected electrons from one pixel to the next. When the pixels along one edge are shifted off the detector area, their charges are "read out" and become a fragment of current in the electronic circuits of the webcam. So when the exposure has ended, pixels are read out one by one, resulting in an electric current whose strength encodes the pattern of illumination recorded during the exposure. This current is amplified and digitised. The numbers we obtain in the computer for each pixel we call ADUs (analogue digital units). Initially we do not know how many electrons or indeed photons one ADU represents; this depends on various control settings in the webcam electronics and driver software.

Each pixel has to some degree its own properties, similar to but different from those of all other pixels. These properties are described by three different numbers and captured in three different images. In the following list I include noise as a fourth item, because it is another contribution to the digitised image that is not due to star light.

Bias:
This describes the electric charge or ADU level that a pixel has even before exposure begins - or that gets added when the electrons are read out. Either way this adds a constant to the ADU read out. The bias in ADU can be measured by reading out an image without having exposed the detector at all. Once the bias has been measured in this way, it can be subtracted off other images, provided they are taken in the same conditions (temperature, webcam driver settings etc.).
Dark current:
This describes the electric current or increase of ADU during the exposure time that is not due to light, but happens even in the absence of illumination of the detector. So this adds, for each unit of exposure time, a certain amount to the ADU read out. The dark current can be measured by exposing the detector (opening the electronic equivalent of the shutter) for as long as possible, but without allowing any light onto it. The resulting image is corrected for bias and divided by the exposure time, and hence contains the dark current in ADU/s. Once the dark current has been measured in this way, it can be scaled to any exposure time one might have used for another image and can be subtracted from that image. Again this depends on temperature, driver settings etc. being the same.
Flat field:
A flat field is almost an ordinary image taken with the detector. However, care is taken that each pixel receives the same amount of light. The image is corrected for bias and dark current and then divided by a constant such that the average of all pixels amounts to 1.0. If all pixels converted light to ADU by the same conversion factor, then the flat field would contain the value 1.0 in each and every pixel. In reality, less sensitive pixels will contain a slightly smaller number and more sensitive pixels a number just above 1.0. Once this flat field has been measured, other images can be corrected for the pixel-to-pixel sensitivity differences simply by dividing those images pixel by pixel by this flat field. (The code sketch after this list illustrates these corrections.)
Noise:
There are probably several sources of noise in the process of a star emitting light and the computer receiving ADU values from the webcam. Noise is a statistical thing: it varies randomly from pixel to pixel, but also from one image to the next. One source of noise is in the light received: there are only so many photons hitting each pixel during the exposure time. As a very general rule of statistics, if you count n things of a kind (here photons) then your statistical error on that count is the square root of n. Say, if you ask 200 voters who should be prime minister and both candidates get 100 votes, each count has an error of 10, or a 5 per cent margin of the total. If you ask 20000 voters, each candidate would get 10000 votes with an error of 100; that is only a 0.5 per cent margin. Apart from the counting of photons, the detector also does a count of the electrons in each pixel that result from the photons. This is a smaller number and makes more noise than the photons themselves. The electronics down the line add to all this a thermal noise in proportion to the temperature above absolute zero. Some noise is generated before amplification and therefore amplified; some noise is generated afterwards.
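To make the bias, dark and flat corrections concrete, here is a minimal Python sketch, assuming the frames are already held as floating-point numpy arrays. The function and argument names are my own invention, not part of any webcam software.

    import numpy as np

    def calibrate(raw, bias, dark_rate, flat, exposure):
        """Apply the bias, dark and flat-field corrections described above.

        raw       -- raw frame in ADU
        bias      -- master bias frame in ADU
        dark_rate -- bias-corrected dark current in ADU/s
        flat      -- flat field normalised to an average of 1.0
        exposure  -- exposure time of the raw frame in seconds
        """
        return (raw - bias - dark_rate * exposure) / flat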

In practice it is usually easiest to combine the bias and dark measurements into a single image. Just use the relevant exposure time - the same as for the images that need correcting for bias/dark - and take the dark image without allowing any light onto the detector.

On the other hand, flat fields are not so easy to take. You might build up a library of flat fields applicable to common webcam driver settings you use. Or you might find that the effect of flat field correction is minor and can be omitted altogether.

What we do about noise may be obvious from the above: The more voters we ask, the smaller the margin of error becomes. In our case that means we take more images and average them all. The signal (wanted, truthful signal) is the same in each image and the same in their average. But the noise (unwanted, erroneous addition to the signal) varies randomly from image to image. In the average of n images it comes out 1/√n times that of an individual image, i.e. noise goes down in proportion to the square root of the number of images.
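This square root behaviour is easy to check numerically. The following sketch simulates stacks of frames with purely Gaussian noise of 2 ADU; the numbers are illustrative assumptions, not measurements.

    import numpy as np

    rng = np.random.default_rng(42)
    signal = 100.0                         # constant true signal in ADU
    for n in (1, 10, 100):
        # n frames, each the signal plus random noise of 2 ADU
        frames = signal + rng.normal(0.0, 2.0, size=(n, 100000))
        stack = frames.mean(axis=0)
        print(n, round(stack.std(), 3))    # roughly 2/sqrt(n)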

In addition to that, we try to avoid some of the noise. Taking one long exposure yields less noise than taking many shorter exposures adding up to the same total exposure time. This is because some of the noise is not in the photon or electron count, but comes about when the image is read out of the detector and amplified. The fewer readouts, the less noise we make in the first place.
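A small simulation illustrates this, under the assumed conditions of a fixed readout noise of 2 ADU per frame and a signal that accumulates with exposure time; splitting the same total exposure into more readouts multiplies the readout noise by the square root of their number.

    import numpy as np

    rng = np.random.default_rng(7)
    read_noise = 2.0              # ADU added at each readout (assumed)
    signal_rate = 10.0            # ADU/s of true signal (assumed)
    total_time = 32.0             # total exposure in seconds
    for n_sub in (1, 4, 32):
        t_sub = total_time / n_sub
        # each subframe: its share of the signal plus one dose of readout noise
        subs = signal_rate * t_sub + rng.normal(0.0, read_noise, (n_sub, 100000))
        total = subs.sum(axis=0)  # summed back to the same total exposure
        # the mean stays at 320 ADU, the noise grows as sqrt(n_sub)
        print(n_sub, round(total.mean(), 1), round(total.std(), 2))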

Here is something we definitely must not do about noise: it is tempting to tweak the webcam driver parameters until we see no noise in the images. This looks good, but it is bad. We have not done much about the noise itself; we have mainly made the analogue-to-digital conversion coarser, so that the noise is no longer apparent. We have asked the digitiser to tell us a lie about how good our data are. Our individual images are then "perfect" only in the sense that they cannot be improved by averaging many of them together.

Noise and digitisation

Why should we digitise the noise? Why not change the digitiser so that noise is below the level of 0.5 ADU or so?

To illustrate the need for sufficient noise, I have calculated an example with pixels that contain only noise, a pixel where a source contributes at the 3 σ level (a marginal detection), and a pixel where another source contributes at the 0.1 σ level (clearly not detected). I have assumed three different digitisations: in the first the noise level is digitised as 2 ADU, in the second as 0.5 ADU and in the third as 0.25 ADU. Then I have assumed that 100 images are averaged.

[Image: high noise] Histogram of 100 images. The blue line and area denote the histogram for pure noise. The red line is for a 0.1 σ source, the green line for a 3 σ source. The highest frequency is 20.
[Image: low noise] Histogram of 100 images. The blue line and area denote the histogram for pure noise. The red line is for a 0.1 σ source, the green line for a 3 σ source. The highest frequency is 68. Note how the 3 σ source is less clearly separated from the noise than in the previous graph.
[Image: very low noise] Histogram of 100 images. The blue line and area denote the histogram for pure noise. The red line is for a 0.1 σ source, the green line for a 3 σ source. The highest frequency is 96. Things have become worse as far as distinguishing the 3 σ source and especially the 0.1 σ source from the noise background is concerned.

The table below shows the numbers for

  1. single image and average of 100 images,
  2. background noise, 0.1 σ source and 3 σ source,
  3. background noise digitised to 2 ADU, 0.5 ADU and 0.25 ADU.

                     High          Low           Very low
Single   Noise       0.00 ± 2.00   0.00 ± 0.50   0.000 ± 0.25
         Undetected  0.20 ± 2.00   0.05 ± 0.50   0.025 ± 0.25
         Marginal    6.00 ± 2.00   1.50 ± 0.50   0.750 ± 0.25
Average  Noise       0.00 ± 0.202  0.00 ± 0.057  0.000 ± 0.020
         Undetected  0.15 ± 0.197  0.04 ± 0.057  0.010 ± 0.022
         Marginal    6.00 ± 0.202  1.50 ± 0.058  0.840 ± 0.037

In the high noise digitisation the average does indeed reduce the noise tenfold and turns the marginal source into a 30 σ solid detection. The undetected source is pulled up to the 1 σ level. That is expected, but insufficient for detection. We would need 1000 frames to raise it to 3 σ and therefore marginally detect it.

In the low noise digitisation the noise is in fact not reduced 10-fold, but only 9-fold. The marginal source therefore becomes only a 26 σ detection, although that is quite enough. The undetected source does not quite reach the 1 σ level, but it is getting close.

In the very low noise digitisation the noise figures for the three sources are somewhat erratic. Depending on where between integer ADU values the correct value lies, the statistics become quite different. That is bad news for the reliability of the data. The marginal source is still very well detected at 23 σ, but the undetected source is lagging further behind.

Although I am a bit surprised how well the low and very low noise digitisations are still doing, I would choose to have the noise digitised at about 1 ADU. When doing real experiments I will always correct for dark current, in which case the desired noise in the dark-corrected image will be 1.4 ADU. (In subtracting the dark from the target we "add" two images with the same level of noise, so the resulting image has √2 times as much noise.)
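The experiment is simple to repeat as a Monte Carlo simulation. The sketch below is my own rough reconstruction, not the original calculation: it quantises Gaussian noise to whole ADU at the three noise levels and averages 100 frames.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100                                # frames in the average
    for noise in (2.0, 0.5, 0.25):         # noise level in ADU
        for source in (0.0, 0.1, 3.0):     # source strength in units of sigma
            true_value = source * noise
            frames = true_value + rng.normal(0.0, noise, size=(n, 10000))
            stack = np.round(frames).mean(axis=0)   # digitise, then average
            print(noise, source,
                  round(stack.mean(), 3), round(stack.std(), 3))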

Bias and dark

[Images: bias frames] Bias frames for the QuickCam VC (left) and ToUcam Pro VGA (right). These are taken at minimum exposure (0.13 ms and 0.2 ms, resp.) and averaged from 1000 such frames. The linear stretches used are from 19.0 to 24.6 ADU and 0.42 to 2.71 ADU, resp.

The bias level is much higher in the QuickCam at 26.4 ADU, compared to 2.71 ADU in the ToUcam. The scatter is not quite as disparate, at 0.86 ADU and 0.27 ADU resp. While the QuickCam has a complex pattern of staggered horizontal bars, the ToUcam has strictly vertical stripes, though in a seemingly more random pattern. Note how the rightmost few columns in the QuickCam are very different from the rest of the detector.

[Images: dark frames] Dark frames for the QuickCam VC (left) and ToUcam Pro VGA (right). These are taken at maximum exposure (1.83 s and 40 ms, resp.) and averaged from 1000 such frames. The linear stretches used are from 0 to 10 ADU and 0 to 20 ADU, resp.

For the QuickCam, bias subtraction removes virtually all the structure. What remains is a low level of noise and a number of warm pixels: 430 pixels are between 10 and 14 ADU, only 40 pixels are brighter than that, and only 10 pixels are between 77 and 87 ADU. The ToUcam retains structure very similar to the bias, in spite of the bias having been subtracted. The level is significant, although the combined dark and bias ends up at about 20 ADU in both webcams. The dark current in the ToUcam shows significant differences of 2 ADU between odd (brighter) and even (darker) rows.

Flat field

[Images: flat fields] Flat fields for the QuickCam VC (left) and ToUcam Pro VGA (right). Averaged from 2000 frames. The linear stretches used are from 0.9 to 1.1 and 0.918 to 1.074, resp. A gradient in illumination disqualifies the right image, except for this demonstration.

The QuickCam VC shows structure in the flat that is in principle similar to its bias. Similarly, the ToUcam Pro has vertical stripes like its bias and dark frames do. Its odd rows are also one per cent less sensitive than its even rows.

In some cases I have found the use of a flat field beneficial, in other cases it seemed to make no difference. Deep images with the QuickCam VC show differences between odd and even rows, though to a lesser extent than the ToUcam. In planetary work I have on occasion removed this pattern by smoothing with a Gauß of 3 pixel full width at half maximum just before performing the unsharp mask that is routine for such observations.

It is a good idea to build a library of flat fields for the standard parameter settings of each camera. Use the webcam without lens and direct it into the light. To avoid too much light, this may have to be done at night, perhaps toward a dimmed electric light. Alternatively you can use the twilight sky, provided there are no strong streetlights about.

Noise reduction

Exposure time

For the Philips ToUcam Pro, an experiment to investigate noise as a function of exposure time shows that there is a basic level of noise, which is constant below 1 ms. For longer exposures the additional noise above this base level rises as the 1.5th power of the exposure time. This is a lot faster than you would expect from a square root law (0.5th power). Lengthening the exposure does not seem to help the ToUcam Pro.

For the Logitech QuickCam VC, an experiment that accumulates the same exposure time once in a single frame and once as an average of 32 shorter exposed frames shows that the noise in the latter image is about 6 times higher than in the former. This would only be expected if all the noise stems from the bias or readout action, and virtually none of it arises during exposure as part of the dark current. Therefore noise is kept lower by using maximum frame exposure. This is why deep sky astro webcam'ers modify their webcams the Steve Chambers way.

Average many frames

When we stack a number of frames we expect the signal to remain the same in the average frame, but the noise to reduce by a factor of √n. This is due to the - hopefully - random changes of the noise pattern from one frame to the next. To test this, I have taken what I call double dark images, and have varied the number of frames used. A double dark is where I take two sets of dark frames: one I pretend to be a target and the other to be the dark frame to match it. That is to say, I subtract the second dark from the first. The result should contain only noise.

For normal observations I would state the number n of target frames, but assume that the dark consists of another n dark frames. Similarly here I use n as the number of frames in the first of the double dark. E.g. if n = 300, I use 300 frames in the first dark and another 300 frames in the second dark. Using n = 1, 3, 10, 30, 100 and 300 we obtain the following graphs for noise versus number of frames.
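In code, the double dark measurement amounts to nothing more than differencing two independent dark stacks. A minimal sketch, assuming the raw dark frames are numpy arrays:

    import numpy as np

    def double_dark_noise(darks_a, darks_b):
        """Noise in a dark-subtracted image made from two stacks of n darks.

        darks_a, darks_b -- arrays of shape (n, height, width) in ADU
        """
        diff = darks_a.mean(axis=0) - darks_b.mean(axis=0)
        return diff.std()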

[Image: square root law] Square root law of noise reduction. The bottom graph is for the Logitech QuickCam VC, the top for the Philips ToUcam Pro VGA. The markers are the measurements, the lines represent square root laws.

The square root law is a good representation for both webcams. It begins to break down after 100 frames (183 s exposure) for the QuickCam VC and after 300 frames (12 s) for the ToUcam Pro. Note that the noise level per se is a lot higher in the ToUcam.

To compare the two webcams, consider that the ToUcam in a single frame and per exposure time gives 2.5 times the signal of the QuickCam. However a single QuickCam frame lasts 1.83 s compared to 0.04 s for a ToUcam frame. Hence the QuickCam frame has 20 times the signal of the ToUcam. At the same time it has at least 5 times less noise than the ToUcam. So the signal-to-noise ratio of the QuickCam frames is about 90 times better. On the other hand in the time it takes to take the single QuickCam frame about 9 ToUcam frames can be taken. Still, the QuickCam is 10 times better than the ToUcam in terms of elapsed time.

The number to take away from this exercise is the noise level suffered by an observation that consists of a single target frame and a single dark frame at maximum exposure time. This number is for the ToUcam Pro

N = 7.5 ADU

and for the QuickCam VC

N = 1.4 ADU

Cooling

The experiments about bias, dark and double dark are repeated with the webcams in the fridge. Instead of 23°C they are then at 6°C, which may reduce the noise and have other effects on the detector properties. The table shows the mean values and standard deviation of the bias and dark frames. These are averages of 1000 frames and should contain hardly any noise. The table also shows the noise (standard deviation) in the double dark images, once for n = 1 and once for n = 300.

                 QuickCam VC   QuickCam VC   ToUcam Pro VGA  ToUcam Pro VGA
                 6°C           23°C          6°C             23°C
Bias             25.2 ± 0.89   21.3 ± 0.86   0.112 ± 0.030   1.12 ± 0.27
Dark             -1.04 ± 0.34  1.09 ± 1.43   3.95 ± 0.59     10.0 ± 2.2
Double dark 1    1.15          1.38          2.90            6.49
Double dark 300  0.087         0.13          0.193           0.53

While the bias in the ToUcam Pro goes down by a factor of ten when cooled, for the QuickCam VC it may actually go up a bit.

The dark current is insignificant in the QuickCam VC at both temperatures, but its standard deviation reduces fourfold when cooled. The reduction is not quite that strong in the ToUcam, but still significant.

Now for the noise proper (double darks). The cooling effect is always stronger than the 6 per cent reduction of absolute temperature. For the QuickCam VC it appears that the effect for single frames is marginal, but that for a large number of frames the noise is reduced down to the square root law we determined above. So the effect of cooling is to allow more frames to be stacked without the square root law breaking down.

For the ToUcam Pro cooling is significantly more beneficial than for the QuickCam VC, and again the noise reduction is stronger when many frames are averaged. However, this level of cooling can by no means negate the earlier arguments favouring the QuickCam VC for deep sky work.

Calibration: ADU, brightness, flux, magnitude

Recall that each individual frame is an 8-bit digitisation where each pixel can have only the discrete values

I_F ∈ {0, 1, 2, 3, 4, ..., 255} ADU

Things are clearer if we give these numbers a unit, similar to metres for lengths or degrees for angles. We call this unit the analogue digital unit (ADU).

If the frame exposure time is chosen too long the brightest parts of the object of interest are represented as 255 ADU even though they might deserve 300, 400 or even 500 ADU. This is what is called saturation: we have to use 255 because higher numbers cannot be transferred, processed or stored.

Quite commonly we therefore take a large number n of frames and stack them. This is both in order to tease out faint detail from the noise and to be able to boost contrast even on bright objects. In the averaging of those frames we obtain a stack that has not only discrete integer values in its pixels, but all intermediate floating point values as well:

I_s ∈ [0, 255] ADU

If frames are summed as well as averaged, the range may be larger than this. This should be corrected. E.g. AstroVideo will add RGB to grey, and we should divide by 3 to compensate. Also, Vega or AstroVideo can add frame groups on line, and we should then divide by the number of frames in such groups.
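As a sketch of such corrections (the function and argument names are my own, purely illustrative):

    def normalise_stack(stack, rgb_added=False, group_size=1):
        """Scale an on-line sum back to a true average in [0, 255] ADU."""
        if rgb_added:
            stack = stack / 3.0       # R, G and B were added to grey
        return stack / group_size     # frames were added in groups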

The stack exposure should be the frame exposure times the number of frames

T_s = n T_F

We ignore the complication that from the QuickCam VC the same frame may be recorded multiple times. That should ideally be sorted out by eliminating duplicate raw frames and using for n the number of unique frames. (Cf. Reduction / Grey / Frame uniqueness).
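A possible way to eliminate such duplicates, assuming the raw frames are held as a list of numpy arrays (this is my sketch, not part of the software mentioned above):

    import numpy as np

    def unique_frames(frames):
        """Drop raw frames that are exact repeats of the previous frame."""
        keep = [frames[0]]
        for frame in frames[1:]:
            if not np.array_equal(frame, keep[-1]):
                keep.append(frame)
        return keep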

The stack is an average and not a sum of frames. So it is correct to calculate the brightness in ADU per second of exposure time by dividing the stack values by the frame exposure time:

B = I_s / T_F

While B tells us how bright a surface (sky background, extended nebula, planetary disc, etc.) is, often we want to calculate the flux, or the total amount of light coming from the source. This is particularly useful for point sources (stars etc.). In our images they are not exactly points, they are not even restricted to a single pixel, but extend into neighbouring pixels. There are several reasons for this, which do not concern us in this context. The flux of the source is simply the brightness added up over the area A of the source (the pixels affected by the source).

S = Σ_A B = 1.13 B_peak w²

The second form applies only to point sources, assuming that they are recorded more or less as a Gauß function (a bell curve). w is the full width at half maximum (FWHM) of the curve and B_peak is the highest B value, at the centre of the star's image.

To recall the units: I is in analogue digital units (ADU), B in ADU/s and S in ADU pix²/s. We use pix as a length unit, being the distance from one pixel to the next in the horizontal or vertical direction. The area of a pixel is 1 pix², an area of 10 by 10 pixels is 100 pix².
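Expressed as code, the point-source estimate is a one-liner; the function below is an illustrative sketch under the Gaussian assumption just stated.

    def point_source_flux(b_peak, fwhm_pix):
        """Flux of a Gaussian star image: S = 1.13 B_peak w^2.

        b_peak   -- peak brightness in ADU/s
        fwhm_pix -- full width at half maximum w in pix
        Returns S in ADU pix^2/s.
        """
        return 1.13 * b_peak * fwhm_pix ** 2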

The flux S of a star is a linear measure of how much light we receive from it, while a star's magnitude is a logarithmic measure. So we can write down a relationship between a star's magnitude m as tabled in star catalogues and its flux as determined from our images. The conversion necessarily involves information about the lens aperture D and the detector sensitivity m_0:

S = 10^(0.4 (m_0 - m)) (D/mm)² ADU pix²/s

m = m_0 - 2.5 lg[S/(ADU pix²/s)] + 5 lg(D/mm)
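The second equation translates directly into code; a sketch, with m_0 to be taken from the sensitivity table further down:

    import math

    def magnitude(flux, aperture_mm, m0):
        """m = m_0 - 2.5 lg[S/(ADU pix^2/s)] + 5 lg(D/mm).

        flux        -- S in ADU pix^2/s
        aperture_mm -- lens aperture D in mm
        m0          -- detector sensitivity in mag
        """
        return m0 - 2.5 * math.log10(flux) + 5 * math.log10(aperture_mm)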

For an extended source we will find its total magnitude and its size (diameter in arc minutes, half axes of an elliptical area, etc.) catalogued. Catalogues become very crude here, as these numbers do not reflect that a galaxy may not have an elliptical shape and that its centre is much brighter than its periphery. With these caveats we can write the relationship between the catalogued magnitude m and ellipse half axes a and b and the brightness in our images. This involves the conversion between pix and arc minutes and therefore introduces the focal length f and the detector's linear pixel size into the equation. The mathematical argument below starts with the conversion from magnitude m to flux S into our detector. It then states the area A of an ellipse given its half axes. The brightness B in the detector is the flux divided by the area (neglecting variations in surface brightness of the actual source). Then we substitute S and A into the expression for B. In this the pixel area needs conversion into square arc minutes (sq'), which differs from webcam to webcam. Substituting this into the expression for B yields the last pair of equations. The numeric constant is due to the different pixel sizes. m_0 is the detector sensitivity.

S = 10^(0.4 (m_0 - m)) (D/mm)² ADU pix²/s
A = 3.14 a b
B = S / A
B = 10^(0.4 (m_0 - m)) (D/mm)² (ADU pix²/s) / (3.14 a b)

For the Logitech QuickCam VC:

1 pix² = 740 sq' (mm/f)²
B = (235 ADU/s) 10^(0.4 (m_0 - m)) (D/f)² / (a b / sq')

and for the Philips ToUcam Pro VGA:

1 pix² = 370 sq' (mm/f)²
B = (118 ADU/s) 10^(0.4 (m_0 - m)) (D/f)² / (a b / sq')
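A sketch of the last pair of equations as a function, using the QuickCam VC constant of 235 ADU/s; swap in 118 ADU/s for the ToUcam Pro VGA:

    def surface_brightness(m, m0, d_mm, f_mm, a_arcmin, b_arcmin,
                           constant=235.0):
        """B in ADU/s for an elliptical extended source.

        m         -- catalogued total magnitude
        m0        -- detector sensitivity in mag
        d_mm      -- aperture D in mm; f_mm -- focal length f in mm
        a_arcmin, b_arcmin -- ellipse half axes in arc minutes
        constant  -- 235 ADU/s (QuickCam VC) or 118 ADU/s (ToUcam Pro VGA)
        """
        return (constant * 10 ** (0.4 * (m0 - m)) * (d_mm / f_mm) ** 2
                / (a_arcmin * b_arcmin))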

Note how the flux S for any given magnitude m grows with the square of the lens aperture D, while the brightness B for any given magnitude and source size grows with the square of the inverse f ratio D/f. A larger lens shows stars quicker, a smaller f ratio shows nebulae quicker.

Apart from the rough tying in of visual star magnitudes with detector-specific ADU and pix², we have not made the connection between our observed images and physical units of flux or brightness. This can be done only by comparing an observed source with its catalogued brightness. To do this we should pick a reasonably bright early A star from our image and look up its visual magnitude V. This converts to a source flux in the physical unit Jy (jansky). The unit is defined as

1 Jy = 10^-26 W m^-2 Hz^-1

and conversion for visual (green) magnitudes can be done with (UKIRT, 1998, Conversion of magnitudes to Janskys and F-lambda, http://www.jach.hawaii.edu/UKIRT/astronomy/utils/conver.html):

S = 3540 Jy × 10^(-0.4 V)

On the other hand we can measure the flux in ADU pix²/s as outlined above and convert the pixel size to sr (steradian, rad²). For the QuickCam VC:

1 pix² = 6.3 × 10^-5 sr (mm/f)²

and for the ToUcam Pro VGA:

1 pix² = 3.1 × 10^-5 sr (mm/f)²

To exercise this, consider an A0 star of V = 5.0 mag observed with f = 50 mm, f/1.8 and the QuickCam VC. This might conceivably register with a flux of 487 ADU pix²/s, which would be the number we extract from our image. Using the pixel size and focal length we can convert this to 1.23 × 10^-5 ADU sr/s. On the other hand, the catalogued flux is 35.4 Jy. Dividing the two gives us a conversion factor of 2.88 × 10^6 (Jy/sr)/(ADU/s). If we multiply our image with this value we convert from brightness in ADU/s into brightness in Jy/sr. These results are then independent of the detector, optics and atmospheric transparency, so images from different days can be compared without problem. We are, however, assuming that the detector basically observes visual brightness in the V band. This is not actually the case, and we do not know the relative sensitivity to light of different colours. Still, if we calibrate on white objects (an A0 star is by definition white), and are interested in white objects, we should be ok, even when comparing images from different detectors.
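The arithmetic of this worked example in a few lines of Python (the 487 ADU pix²/s is the hypothetical measurement from the text):

    # A0 star, V = 5.0 mag, f = 50 mm, QuickCam VC: 1 pix^2 = 6.3e-5 sr (mm/f)^2
    measured = 487.0                         # flux from our image, ADU pix^2/s
    pix2_in_sr = 6.3e-5 * (1.0 / 50.0) ** 2  # one pix^2 in sr at f = 50 mm
    flux_adu = measured * pix2_in_sr         # about 1.23e-5 ADU sr/s
    flux_jy = 3540.0 * 10 ** (-0.4 * 5.0)    # about 35.4 Jy
    print(flux_jy / flux_adu)                # about 2.88e6 (Jy/sr)/(ADU/s)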

Sensitivity

                ASA   m_0  m_1
QuickCam VC     2800  4.5  -0.2
ToUcam Pro VGA  7000  8.2  -0.2

For bright objects we can compare single frames with photographic film. The webcam can be underexposed, equivalent to when the negative remains clear after developing, or it can be overexposed, when the negative turns all black. As it happens, I have taken images of the Moon at similar phase with a traditional film and with both webcams. The phase of the Moon in all cases was such that 82 or 83 per cent of the lunar disc was illuminated. By comparison of the f ratios and required exposure times the ASA numbers of the two webcams can be worked out as 2800 (QuickCam VC) and 7000 (ToUcam Pro VGA).

Although this sounds great, these numbers apply only where a single webcam frame can register the object without problem. And such a frame has only an exposure time of at most 1.83 s or 40 ms respectively, while a film can be exposed for many minutes if required. Another reservation about this comparison with photographic film is that film may merit digitisation into more than 8 bit. To reach similar dynamic range with a webcam we would have to average a number of frames. This would increase the time it takes us to obtain the image and in effect reduce the sensitivity from the values tabled above.

For deep sky work the quantity m_0 describes the sensitivity of the detector. To determine it from observations, we need to calculate the flux S of stars from our images and compare this to their catalogued magnitudes. I use here two similar exposures with the two webcams, both f = 50 mm, f/1.8, and an exposure of about 10 minutes. The results are 4.5 mag and 8.2 mag resp.

This covers the conversion between magnitude and ADU, but it does not take account of the noise. A related but different measure of sensitivity is the limiting magnitude. To determine this we choose stars barely detected above the noise in our images and look up their magnitude. We have to take into account the exposure time in this case. To capture the dependency on the aperture D and exposure time T_s we write:

m_max = m_1 + 5 lg(D/mm) + 1.25 lg(T_s/s)

The same two observations with the respective webcams show that both have the same limiting magnitude (10.4 mag or 10.5 mag after 10 minutes), resulting for both webcams in the same m_1 of -0.2 mag.
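As a quick check of the formula, a sketch that reproduces the 10-minute figure (D = 50 mm / 1.8 ≈ 27.8 mm):

    import math

    def limiting_magnitude(m1, d_mm, t_s):
        """m_max = m_1 + 5 lg(D/mm) + 1.25 lg(T_s/s)."""
        return m1 + 5 * math.log10(d_mm) + 1.25 * math.log10(t_s)

    print(limiting_magnitude(-0.2, 50.0 / 1.8, 600.0))   # about 10.5 mag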

Linearity

Above we have been happily converting ADUs to fluxes and magnitudes, but this is valid in that simple form only if the detector response is linear, i.e. if 10 ADU correspond to ten times as much light as 1 ADU. In the discussion about the ToUcam Pro driver I say that it has a Gamma control and that the Gamma value seems to be 1.4 even at minimum. So it is not linear.

The QuickCam VC has no Gamma control and we might be lucky enough that it responds linearly to light. To test this I identified a number of stars in an image of a star field and looked up their magnitudes in the Hipparcos catalogue (ESA, 1997, The Hipparcos and Tycho catalogues, Astrometric and photometric star catalogues derived from the ESA Hipparcos space astrometry mission, A collaboration between the European Space Agency and the FAST, NDAC and INCA consortia and the Hipparcos Industrial Consortium led by Matra Marconi and Alenia Spazio, ESA SP-1200).

[Image: linear response] Catalogue magnitude plotted versus observed flux in a star field. Red marks are observed, the pink line represents linear response.

The graph confirms that the detector is linear. A number of stars lie below the line, but this may be due to their later spectral type, which makes them redder, so that they probably register more easily and with higher flux.


Copyright © 2003 Horst Meyerdierks