You need three levels of software. The first is the operating system (cf. Computer and operating system). The second is the driver software, which helps the operating system to make use of the additional hardware, here the webcam. Drivers may come with the operating system, with the additional hardware, or from a third party. The third level is the application that allows the user to make use of the resources in their computer and its peripheral hardware (cf. Acquisition software).
While any webcam manufacturer will give you software for Windows, and while many will also give you software for MacOS, Linux is a rather small market that they usually ignore. What then often happens is that some Linux enthusiast wants to make it work, sits down and starts investigating the webcam and its communication with Windows, and starts to write software for Linux. The manufacturer then may help them out with some insider knowledge, or even with some paid time of a programmer.
Windows and Linux have different opinions of how drivers should be used. Under Linux a webcam driver is a modular extension of the Linux kernel; it can be used by applications, but it does not interact with the user. Under Windows the driver is also an addition to the operating system and can be used by different applications. But the driver will present a graphical user interface. As a consequence I might run an application that happens to have a user interface in Spanish, but when it comes to setting the brightness and contrast it brings up an English dialogue from the driver.
Linux has the better design in terms of software engineering: the driver is down there with the kernel, close to the hardware; the application is above it; and above that is the user. But in reality different webcam manufacturers use different control mechanisms. Under Windows that is fine, because the user can talk to the driver that is specific to the webcam in use. Under Linux the user must hope that the application is sufficiently aware of the particular webcam.
The QuickCam VC driver for Windows contributes two items to the user interface. These are Video Format and Video Source:
The table lists the parameters and shows their factory default and the optimum values we should use. This is optimised for faint objects.
- Saturation and colour balance.
- The colour quality of the camera is not very good at night. Hence we set the saturation to zero and the colour to manual.
- Auto exposure.
- The other auto/manual choice affects brightness, contrast and exposure time. For our purposes the automatic setting cannot be relied upon: it may easily cause bright objects of interest to saturate their pixels.
- Video quality.
- The video quality can range from 0 to 4; the higher the value, the higher the compression of the data transfer from the webcam. A value of 4 results in serious defects. The defects become fewer and the noise becomes better behaved as the video quality value is decreased. However, going all the way to 0 results in bright corners: the background signal is lowest in the centre of the field and rises in a circular pattern, with the highest background signal in the corners of the detector. Therefore the optimum value for video quality is 1, and this is an important setting.
- Sensitivity.
- A clear statement that can be made from experiments is that the extreme values of 0 and 255 must be avoided: set to 0 we have only noise in the image, set to 255 artefacts appear in the background. While it may or may not be a good idea to increase the sensitivity above the default of 128, any such argument is quite weak and I leave this at the default setting of 128.
- Brightness and contrast.
- Contrast and brightness are not independent of each other, and their names are misleading. Increasing B scales up all signal, keeping the ratio max/min constant. Increasing C shifts all signal down, keeping the difference max-min constant. Hence decreasing C to some degree counteracts decreasing B. Closer inspection of test images - in particular histograms - confirms that both parameters should be set as high as possible. This makes best use of the dynamic range of the 8-bit digitisation: firstly, the full range of 0 to 255 tends to be used, and 255 is reached for relatively faint sources. Secondly, the histogram is smoothest: at C below 192 there is a tendency for every second valid readout value to be more frequent than the next; in effect we would have a 7-bit camera instead of an 8-bit one. So the best settings are 255 for both these parameters. (For bright sources these settings may render even the shortest exposure too long. In those cases turn down both parameters to 192, or both to 128.)
- Exposure time.
- The above leaves only the exposure time parameter E. Above a value of 222 the exposure does not in fact increase any further, so a standard setting of 224 for deep sky work makes sense. (For brighter objects this has to be reduced to avoid saturation, i.e. to avoid the readout values reaching the maximum of 255 and no longer recording an accurate measure of the incoming light. Experiments are consistent with the postulate that E is a logarithmic measure of the frame exposure time: a change by 16 means a factor of 2 in time, the mid value E = 128 corresponds to the fastest frame delivery rate of 30 Hz, and the longest exposure is 1.83 s, occurring at E = 220 and any value above. That makes the shortest exposure around 1/8000 second.)
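The postulated logarithmic behaviour of E can be written down as a small function. This is only a sketch of the model inferred from the experiments above, not a documented camera specification:

```python
# Model of the QuickCam VC exposure setting E (0..255), as postulated above:
# a change of 16 in E doubles the exposure time, E = 128 gives 1/30 s (the
# 30 Hz frame rate), and the time no longer grows above E = 220.

def exposure_time(e):
    """Approximate frame exposure time in seconds for a given setting E."""
    e = min(e, 220)                       # exposure saturates at E = 220
    return (1.0 / 30.0) * 2.0 ** ((e - 128) / 16.0)

print(exposure_time(128))        # 1/30 s, the fastest frame delivery
print(exposure_time(220))        # about 1.79 s, close to the measured 1.83 s
print(1.0 / exposure_time(0))    # 7680, i.e. roughly 1/8000 s at E = 0
```

The model reproduces the measured end points to within a few per cent, which is what makes the logarithmic postulate plausible.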
Daniele De Marchi has written a Linux V4L (Video For Linux) driver for the QuickCam VC USB (cf. http://digilander.libero.it/demarchidaniele/qcamvc/quickcam-vc.html). Although the website indicates USB support only, when you come to downloading the driver there is support for the parallel port, too. Download from http://sourceforge.net/project/showfiles.php?group_id=19538.
To build this kernel module from source, you also need to have the source code for the kernel itself installed in /usr/src/linux. In my case of a Red Hat kernel that means I have to install the kernel-source package and create a symbolic link in /usr/src, because Red Hat put the source in /usr/src/linux-2.4 instead of /usr/src/linux.
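For illustration, the link can be created as below. The command is shown against a scratch prefix so it can be tried harmlessly; on the real system the prefix would be empty, i.e. the link is /usr/src/linux pointing at linux-2.4. The prefix path here is only an example:

```shell
# Create the symbolic link the module build expects, assuming the Red Hat
# layout described above. PREFIX is only for safe experimentation; set it
# to the empty string to act on the real /usr/src (as root).
PREFIX="${PREFIX:-/tmp/kernel-src-demo}"
mkdir -p "$PREFIX/usr/src/linux-2.4"
rm -f "$PREFIX/usr/src/linux"
ln -s linux-2.4 "$PREFIX/usr/src/linux"
ls -l "$PREFIX/usr/src/linux"
```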
After building you are left with three kernel modules. To use the webcam you need to insert the qcamvc.o module plus one of the port-specific modules - for the parallel-port version this is qcamvc_pp.o - depending on which version of the QuickCam VC you have. You also need the videodev module and modules for the port itself; for me these are parport and parport_pc. I did not install the qcamvc* modules into /lib/modules, but copied them into /usr/local/libexec; in /lib/modules they are liable to be overlooked in the next system update or system re-install. Here is an order of module insertion that works:
insmod parport
insmod parport_pc
insmod videodev
insmod /usr/local/libexec/qcamvc.o
insmod /usr/local/libexec/qcamvc_pp.o
For the Philips ToUcam Pro, recording applications will offer the two menu items Video Format and Video Properties, the latter with two relevant tabs:
The table lists for the ToUcam Pro the factory defaults and optimum settings for all the parameters, separately for bright extended objects and for faint objects. Gamma is listed as the interface setting and in parentheses the Gamma value.
|Parameter|Factory default|Bright extended objects|Faint objects|
|Black and white|No|No|Yes|
|Gamma|0.67|0.0 (1.4)|0.0 (1.4)|
- Image size.
- For bright extended objects we prefer the large image size (VGA, 640x480) to make better use of the long focal length we try to achieve. (The physical size of the detector does not change; the pixels are smaller.)
- Full auto.
- This is of course not a setting we can accept: we would not be able to take dark/bias frames with the same settings as the targets. Even with everything set to manual one has to watch out: there are interdependencies such that changing the frame rate usually modifies the shutter speed and gain.
- Gain.
- The gain setting controls the electronic amplification of the current that results from reading the electrons out of the detector. This mainly makes the image brighter, but also increases the noise. We use this control to adjust the noise to be somewhat more than one ADU; that way signals of less than one ADU can still be detected by stacking lots of frames, while as little dynamic range as possible is lost to noise. From experiments at exposures of 0.04 s and 0.002 s, the optimum gain setting in general is 0.5 for VGA size. 0.75 may result in higher signal, but the noise rises even faster; going down to 0.33 will give less signal and reduce the noise too much. For SIF size the noise is reduced and we need a higher gain setting of, say, 0.65.
- Shutter speed.
- With the gain fixed, this is the only control for adjusting the brightness such that as much signal as possible is obtained in individual frames without saturating objects of interest. There is a discrete set of exposure times ranging from 0.0001 s to 0.04 s. Two of the values on offer seem to change automatically to neighbouring values: 0.0001 s tends to be adjusted to 0.0002 s, and 0.00067 s to 0.001 s.
- This control is available only if gain (and exposure time) is automatic. It appears to be merely changing gain between mid and medium high values.
- Adjusting this to maximum gives the highest count rate. Although noise increases faster than signal in this change, that may actually be helpful. For bright extended objects leave this at the default of 0.5, but for faint objects change it to 1.0.
- Black and white.
- For colour imaging this has to be off. Although one can turn colour images into grey later, common sense would suggest turning this on if grey is all we want.
- We simply leave this at the medium setting (0.5). This seems to be best even for grey imaging.
- Frame rate.
- Changing the frame rate will change exposure time and gain. Be sure to adjust exposure and gain after setting the frame rate rather than before. As the dialogue indicates, higher frame rates are paid for by a loss in image quality. In fact, even 5 Hz frame transfers are uncompressed only if the image size is also only 320x240 (SIF). So always use the lowest frame rate.
- A gamma correction modifies the brightness according to y = x^(1/γ), so that γ > 1.0 increases the brightness, and does so mostly where the image is faint. My experiments indicate γ = 2.0 at a setting of 0.33 and γ = 1.4 at a setting of 0.0. For ordinary imaging we can perhaps get away with the minimum setting. But for photometry we will have to correct the dark and the target frames for this nonlinear response before dark correction.
- White balance.
- The possible settings are automatic, indoor, fluorescent light (FL), and outdoor. We can also use two sliders to set red and blue. For grey imaging the setting is not very important. For someone like me who cannot tell how much red there is in an image, it is dangerous to use manual settings. On theoretical grounds we might choose outdoor, because that is for illumination by sunlight, but it gives images with a strong red/brown tint. A good solution seems to be to use the Moon to obtain an automatic colour balance and then to fix it by switching to manual. As the Moon is not always available, it would make sense to save this automatically acquired manual setting in the user defaults.
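The gamma relation y = x^(1/γ) and the linearisation needed for photometry can be sketched in a few lines. This is an illustrative assumption based on the value estimated above (γ = 1.4 at interface setting 0.0), not code from the camera software:

```python
# Sketch of the postulated gamma response y = x**(1/gamma) and its inverse,
# which would be used to linearise frames before dark correction and
# photometry. gamma = 1.4 is the value estimated above for setting 0.0.

def apply_gamma(x, gamma=1.4):
    """Camera response: true intensity x (normalised 0..1) -> readout (0..1)."""
    return x ** (1.0 / gamma)

def undo_gamma(y, gamma=1.4):
    """Invert the response: readout y (0..1) -> linear intensity (0..1)."""
    return y ** gamma

# A faint pixel is recorded brighter than linear, and the inverse recovers it:
x = 0.1
y = apply_gamma(x)               # about 0.19, boosted because gamma > 1
assert abs(undo_gamma(y) - x) < 1e-12
```

Both the dark frames and the target frames would be passed through undo_gamma before the dark is subtracted.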
Nemosoft Unv. have produced a driver for several Philips and other webcams. The compression algorithm used for the data transfer across the USB is proprietary. Philips have given the necessary information to the authors of the driver, but this means they cannot give some of their source code away. This in turn prevents it from being included in the Linux kernel, which must all be under the GNU General Public Licence (GPL, cf. Computer and operating system).
The makers of the Philips webcam driver for Linux have therefore split their code into two kernel modules. One, pwc.o, contains the bulk of the code, is open software under the GPL, and is now part of the Linux kernel (since Linux 2.4.5). This alone can make the webcams work, but only where no compression is needed, such as SIF size at 5 Hz. The second part is the decompression code, which cannot be published, therefore cannot be part of the kernel, and must be downloaded separately.
I have placed this pwcx-i386.o module into /usr/local/libexec and make sure it gets loaded straight after the pwc.o module; that in turn gets loaded as soon as the webcam is detected on the USB. To set the frame rate to 5 Hz I also set some options for the pwc.o module. All this is done by adding the following lines to the file /etc/modules.conf, which is read when the system loads kernel modules:
post-install pwc /sbin/insmod --force /usr/local/libexec/pwcx-i386.o >/dev/null 2>&1 || :
options pwc size=vga fps=5 compression=0
The frame rate cannot be changed later by most applications, so it has to be set here to avoid the default of 10 Hz being used. The size can be changed from some applications, but not all.
If it is necessary to switch from VGA size to SIF size, the following three commands unload both modules and reload pwc with SIF size:
/sbin/rmmod pwcx-i386
/sbin/rmmod pwc
/sbin/insmod pwc size=sif fps=5 compression=0
Not re-loading pwcx-i386 also makes extra sure that we do not use compression on the USB. If we need to switch back from SIF to VGA we do the following:
/sbin/rmmod pwc
/sbin/insmod pwc size=vga fps=5 compression=0
/sbin/insmod --force /usr/local/libexec/pwcx-i386.o
This module manipulation can be done with the webcam connected to the USB, but it must not be in use (no application must have connected to it).
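The two sequences above can be wrapped in a small shell function. This is a hypothetical convenience, not part of the original instructions; the rmmod/insmod paths can be overridden (e.g. with echo) for a harmless dry run:

```shell
# Hypothetical wrapper around the module sequences above. The webcam must
# not be in use when this is called. RMMOD and INSMOD default to the real
# commands but can be overridden for a dry run.
RMMOD="${RMMOD:-/sbin/rmmod}"
INSMOD="${INSMOD:-/sbin/insmod}"

switch_size () {
    # usage: switch_size vga|sif
    size="$1"
    $RMMOD pwcx-i386 2>/dev/null    # may already be unloaded; ignore errors
    $RMMOD pwc
    $INSMOD pwc size="$size" fps=5 compression=0
    if [ "$size" = vga ]; then
        # the decompressor is only re-loaded for VGA size
        $INSMOD --force /usr/local/libexec/pwcx-i386.o
    fi
}
```

For example, `RMMOD=echo INSMOD=echo switch_size sif` prints the commands that would be run instead of executing them.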
Copyright © 2003 Horst Meyerdierks
$Id: driver.shtml,v 3.3 2004/02/21 18:13:39 hme Exp $