There are a number of approaches to hyperspectral or multispectral imaging. For this article, we are going to look at systems that cover UV, visible, and near-infrared (NIR) light. There is no exact definition of when a multispectral system becomes hyperspectral; the difference lies in the number of bands. Certainly, a 5-band system that covers 400nm to 1100nm is considered multispectral, but a system that sees 500-700nm with 20 bands, each 10nm wide, would be considered hyperspectral.
A quick overview of the various types:
1: Spatial Scanning. A strip of a scene is passed through a slit and then dispersed through a prism to separate the different frequencies. The prism spreads the light over an image sensor. As the scanner moves across the scene, the hyperspectral datacube is acquired. An advantage of this method is that the equipment is relatively inexpensive (hyperspectral imaging tends to be *very* expensive), but a big disadvantage is that you need precisely controlled motion to construct the datacube correctly. These sorts of systems work well with items moving on a conveyor belt, but not so well when shooting from an airplane, where there are a lot of moving things going on.
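To make the datacube assembly concrete, here is a minimal sketch in Python/NumPy. It assumes a hypothetical pushbroom setup where each captured frame is one dispersed slit line (spatial pixels along one axis, spectral bands along the other); the cube is built simply by stacking frames along the scan direction. The names and dimensions are illustrative, not from any particular instrument.

```python
import numpy as np

def assemble_pushbroom_cube(frames):
    """Stack spatial-scanning (pushbroom) frames into a hyperspectral datacube.

    Each frame is a 2D array of shape (spatial_pixels, spectral_bands):
    one slit line of the scene, spread across the sensor by the prism.
    Stacking along the scan direction yields a cube of shape
    (scan_steps, spatial_pixels, spectral_bands).
    """
    return np.stack(frames, axis=0)

# Hypothetical example: 100 scan steps, a 640-pixel slit, 50 spectral bands
frames = [np.random.rand(640, 50) for _ in range(100)]
cube = assemble_pushbroom_cube(frames)
print(cube.shape)  # (100, 640, 50)
```

This is also where the motion-control requirement shows up: the stacking is only valid if each frame corresponds to an evenly spaced step across the scene.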
2: Spectral Scanning. A multispectral camera takes a series of pictures through various filters or light sources. This system requires the object to be stationary, since you need time to change filters or lights. Advantages of this system are relatively low cost and very good resolution, assuming the object being inspected does not move.
3: Spectral-Spatial Scanning. This system uses a two-dimensional image sensor with a Linear Variable Filter (LVF) placed over it. The LVF looks like a rainbow filter: one side passes one frequency of light, and the passband varies continuously across to the other side.
By slowly moving the camera or object so the image sweeps from one side of the sensor to the other while taking a series of pictures, you can use software to construct a datacube. Advantages are relatively low cost and suitability for a variety of applications; however, it requires computing resources to construct the datacube correctly. We build this sort of camera and use feature-based object recognition to construct the datacube.
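The LVF reconstruction step can be sketched as follows. This is a simplified model that assumes the scene translates horizontally by a known, fixed number of sensor columns between frames (in a real system like ours, feature-based registration estimates that motion). Since sensor column c always sees band c, a scene column drifting across the sensor gets sampled at every band in turn; the function names and shapes are illustrative.

```python
import numpy as np

def reconstruct_lvf_cube(frames, shift_per_frame=1):
    """Rebuild a datacube from Linear Variable Filter (LVF) frames.

    frames: list of 2D sensor frames of shape (rows, cols), where
    sensor column c is filtered to spectral band c. The scene is
    assumed to shift by `shift_per_frame` columns between frames.
    Returns cube[scene_col, row, band]; unsampled cells stay NaN.
    """
    n_frames = len(frames)
    rows, cols = frames[0].shape
    cube = np.full((cols + n_frames * shift_per_frame, rows, cols), np.nan)
    for t, frame in enumerate(frames):
        offset = t * shift_per_frame
        for band in range(cols):
            # at time t, sensor column `band` images scene column `band + offset`
            cube[band + offset, :, band] = frame[:, band]
    return cube

# Hypothetical example: two 4x3 frames, scene shifting one column per frame
frames = [np.random.rand(4, 3), np.random.rand(4, 3)]
cube = reconstruct_lvf_cube(frames)
```

The NaN gaps make the trade-off visible: each scene column only accumulates a full spectrum after it has crossed the whole sensor, which is why this method needs both motion and software effort.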
4: Non-Scanning. This method is a bit hard to describe: you basically take one picture and get a hyperspectral datacube. As you can imagine, the devices are very complex and tend to produce images that have lots of spectral frequency information but poor spatial resolution. From Wikipedia:
This is a method for capturing hyperspectral images during a single integration time of a detector array. No scanning is involved with this method and the lack of moving parts means that motion artifacts should be avoided. This instrument typically features detector arrays with a high number of pixels.
Although the first known reference to a snapshot hyperspectral imaging device—the Bowen "image slicer"—dates from 1938, the concept was not successful until a larger amount of spatial resolution was available. With the arrival of large-format detector arrays in the late 1980s and early 1990s, a series of new snapshot hyperspectral imaging techniques were developed to take advantage of the new technology: a method which uses a fiber bundle at the image plane and reformats the fibers at the opposite end of the bundle into a long line, viewing a scene through a 2D grating and reconstructing the multiplexed data with computed tomography mathematics, and the (lenslet-based) integral field spectrograph, a modernized version of Bowen's image slicer. More recently, a number of research groups have attempted to advance the technology in order to create devices capable of commercial use. These newer devices include the HyperPixel Array imager, a derivative of the integral field spectrograph; a multiaperture spectral filter approach; a compressive-sensing-based approach using a coded aperture; a microfaceted-mirror-based approach; a generalization of the Lyot filter; and a generalization of the Bayer filter approach to multispectral filtering.
5: Spatiospectral Scanning. This is similar to spatial scanning, but the dispersive element is placed before the slit rather than after it, so each captured frame is a slice of the datacube that is tilted in both space and wavelength, and the full cube is assembled as the system scans.
For many purposes, you don't need a hyperspectral imaging system. The systems are normally complex, expensive, require sophisticated software to create the datacube, and often suffer from motion artifacts. When you look at the reflection, absorption, and fluorescence of many (most) materials, the curves tend to be broad. For instance, a plant, paint pigment, dye, etc., does not normally have sharp spectral peaks. For most uses, you could use 50nm or 100nm wide bands and have more data than you need.
A common low-cost method is to use a multispectral camera in a room with controlled lighting and a series of camera filters to take pictures of a stationary object to be analyzed. That can work well, assuming the light source is controlled, filters are available, and you understand lighting limitations. For instance, most UV shortpass filters also pass infrared light. If you use a UV light source that also emits infrared (fluorescent tube, xenon light, HID, etc.), then a tiny bit of IR will turn your picture into a mostly IR picture, because a silicon camera sensor sees IR light very well and UV light poorly. This method also has the limitation that filters fitting the camera lens in all the frequencies you want may be expensive or simply unavailable.
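The IR-leakage problem above is easy to quantify with a back-of-the-envelope calculation. The numbers below are made up purely for illustration (real quantum efficiencies and leak fractions vary by sensor and source), but they show how a small IR leak can end up dominating a "UV" photo:

```python
# Illustrative (made-up) numbers showing why a small IR leak swamps a UV image.
# A silicon sensor can be far more responsive in the NIR than in the UV,
# and a "UV" source may emit a few percent of its power in the IR.
uv_power, ir_leak = 1.00, 0.05           # relative source output in each range
qe_uv, qe_ir = 0.02, 0.60                # assumed sensor quantum efficiency
filter_uv, filter_ir = 0.90, 0.80        # UV-pass filter that also leaks IR

signal_uv = uv_power * qe_uv * filter_uv   # 0.018
signal_ir = ir_leak * qe_ir * filter_ir    # 0.024
print(signal_ir / (signal_uv + signal_ir)) # IR is over half the recorded signal
```

With these assumed figures, a 5% IR leak contributes more signal than all the UV light combined, which is why an IR-blocking filter is often stacked with the UV shortpass filter.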
A relatively novel, low-cost method (a poor man's hyper/multispectral system) is to use a multispectral camera in a darkened room with a series of illumination sources or a tunable illumination source. We have a range of inexpensive flashlights from 280nm to 1550nm. One could simply use a range of flashlights to illuminate the object being inspected and then create the datacube in software such as the free ImageJ.
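As a sketch of how that software step might look outside ImageJ, the snippet below stacks one grayscale image per flashlight into a band cube, so any pixel's reflectance spectrum is just a slice through the stack. The wavelengths, image sizes, and function name are hypothetical; it assumes the camera and object stay fixed between shots, as described above.

```python
import numpy as np

def build_band_cube(images, wavelengths):
    """Stack per-illuminant grayscale images into a multispectral cube.

    images: list of 2D arrays, one per light source (e.g. one per
    flashlight from 280nm to 1550nm), all taken with the camera and
    object stationary. Returns a cube of shape (rows, cols, bands)
    plus the band wavelengths, so cube[y, x, :] is the spectrum at
    pixel (x, y).
    """
    cube = np.stack(images, axis=-1)
    return cube, np.asarray(wavelengths)

# Hypothetical example: five flashlights, 480x640 frames
imgs = [np.random.rand(480, 640) for _ in range(5)]
cube, wl = build_band_cube(imgs, [365, 450, 530, 660, 850])
spectrum = cube[100, 200, :]   # five-band spectrum at one pixel
```

In practice you would also normalize each band against a white reference shot under the same flashlight, since the sources differ in power and the sensor's sensitivity varies with wavelength.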
You can see these light sources by clicking here.