In World War II, the US military used infrared film cameras and special filters to detect camouflaged objects. Plants reflect near infrared (NIR) light, light above 700nm, quite strongly because NIR photons do not carry enough energy to drive photosynthesis; rather than absorb light it cannot use, the plant reflects the NIR, which also keeps the leaf cool so it does not cook in the sun. The military discovered that when viewed with an infrared camera, plants look white while a painted green object typically looks dark. Thus, while the human eye may see a camouflaged green object as quite similar in color to a green plant, the infrared camera sees the plant as white and the green paint as dark, making detection easy.
Example of healthy vegetation taken with an IR-Only camera.
As an outgrowth of the military work, Dr. Len Haslim, Senior Scientist at NASA, worked on using visible and infrared cameras to detect vegetation from space and to determine when vegetation was stressed or unhealthy. He noticed that when plants become stressed, their infrared reflectivity drops faster than their visible green color. You can see where a plant is having a problem by comparing the ratio of the NIR light to the visible green. A variety of mathematical indexes have been developed to quantify the relationship between the NIR and visible light. The oldest is called the Normalized Difference Vegetation Index (NDVI).
NDVI is defined as:

NDVI = (NIR - RED) / (NIR + RED)

where NIR and RED stand for the reflectance values of the NIR and visible red bands, measured with a spectrometer. NDVI can vary from -1.0 to +1.0. A value closer to +1.0 represents a higher likelihood of a plant, and of a healthy plant. Values close to -1.0 are often water. Values around -0.10 to 0 often represent sand, snow and barren rock. Shrubs and grassland typically have values around 0.20 to 0.40. Values approaching +1.0 are typical for temperate and tropical rainforests.
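The NDVI calculation can be sketched in a few lines of Python. The reflectance values below are illustrative examples chosen to match the ranges described above, not measured data:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for illustration:
print(round(ndvi(0.50, 0.08), 2))  # healthy vegetation -> 0.72
print(round(ndvi(0.02, 0.10), 2))  # water -> -0.67
```

Because the difference is normalized by the sum, the index stays between -1.0 and +1.0 regardless of overall illumination.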
In our vegetative camera, the blue channel captures the visible light while the red channel captures the infrared. Therefore, the equation for our camera would be:

NDVI = (Red channel - Blue channel) / (Red channel + Blue channel)
The values for the red and blue channels can be read directly using a program such as Photoshop: use the eyedropper tool to find the RGB value of a spot. Photoshop will give you RGB values from 0 to 255 representing the intensity of that spot.
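Plugging eyedropper readings into the camera equation looks like this. The pixel values are hypothetical, assuming the red channel carries NIR and the blue channel carries visible light as described above:

```python
def camera_ndvi(red, blue):
    """NDVI for the vegetative camera: the red channel carries NIR and the
    blue channel carries visible light. Arguments are 0-255 eyedropper values."""
    return (red - blue) / (red + blue)

# Hypothetical eyedropper readings from a leaf in the picture:
print(round(camera_ndvi(180, 40), 2))  # bright NIR, dark visible -> 0.64
```

A strongly NIR-reflective leaf gives a high red value and a low blue value, pushing the result toward +1.0.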
The cameras at the time used Kodak color infrared (CIR) film (Aerochrome type 1443) in conjunction with a Wratten 12 filter. The Kodak CIR film had the unusual property that the blue-recording crystals would expose with either blue light or infrared light, while the red and green crystals would expose only to visible red and green. The Wratten 12 filter blocks visible blue light and passes green, red and infrared light. By using the Kodak CIR film with the Wratten 12 filter, you would end up with the blue channel recording infrared light while the green and red channels recorded normal visible green and red. Thus, with one picture, you would get both infrared and visible data.
Kodak has largely phased out their CIR film (and film business in general) forcing people who study crop stress to find different methods.
With digital cameras, it is not easy to duplicate the CIR film, and it is typically more complicated to get the visible and NIR data. The problem has to do with the way digital cameras see color. Digital cameras start with black and white photodiodes arranged in a matrix, which makes up the base of the camera photo sensor. The photodiodes are made from silicon, which responds to light from under 400nm to about 1200nm. The human eye can see from about 400nm (blue) to 700nm (red).
In order for the black and white photodiodes to see color, a pattern of red, green and blue dots is photolithographically printed on top of them. This pattern is called the Color Filter Array (CFA). Typically, the pattern looks like this:
Because the human eye is most sensitive to green, cameras usually have 2 green, 1 blue and 1 red sensor for every 4 pixels. When you take a picture, the camera mathematically decodes the red, green and blue dots in a process called debayering and assigns an RGB value to each pixel in the digital picture.
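The 2-green, 1-red, 1-blue layout can be illustrated with a minimal sketch of pulling the raw samples out of an RGGB Bayer mosaic. Real debayering goes further and interpolates the two missing colors at every pixel; this only shows which photodiode saw which color:

```python
def split_bayer(mosaic):
    """mosaic is a 2D list of raw sensor values in RGGB layout
    (each 2x2 tile reads: R G / G B)."""
    r, g, b = [], [], []
    for y, row in enumerate(mosaic):
        for x, v in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                r.append(v)      # red-filtered photodiode
            elif y % 2 == 1 and x % 2 == 1:
                b.append(v)      # blue-filtered photodiode
            else:
                g.append(v)      # the two greens in each 2x2 tile
    return r, g, b

raw = [[10, 20],
       [30, 40]]
print(split_bayer(raw))  # ([10], [20, 30], [40])
```

Note that the green list is twice as long as the red or blue lists, reflecting the 2:1:1 ratio described above.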
The red, green and blue dyes that are used to create the dots on the electronic sensor happen to also transmit NIR light to different degrees. Because the color dyes transmit NIR, all color digital cameras must have an IR Cut Filter (ICF) placed in front of the image sensor to block the NIR light. If the digital camera did not have an ICF, then the color channels would contain both visible and infrared information mixed together. For example, the blue channel would have both visible blue light and infrared light mixed together, and likewise the red and green channels would also contain infrared data.
Below is a Canon 450D camera spectral response without an ICF.
Note that between 700nm and 1000nm, which is the NIR, the red, green and blue channels all have some response. The red channel is the most open to the NIR, with blue and green opening up around 800nm. Humans can see from approximately 400nm to 700nm; above 700nm, the eye responds only very poorly. Peak human response is at 550nm, or green.
The result is that a digital camera with the ICF removed will not respond like the old Kodak CIR film. Remember that the Kodak CIR film's red and green crystals did not respond to NIR; only the blue crystals did, and by using the visible-blue-blocking Wratten 12 filter, you could get one film picture with the blue band showing only NIR while the green and red bands contained only visible green and red.
Today, the work-around typically done by researchers is to use two different digital cameras taking simultaneous pictures. One camera takes an IR-Only picture, and the second takes a normal visible picture (no NIR). In post production, the researcher combines the two pictures by taking the red channel from the IR-Only camera and pasting it as the new blue channel in the visible picture to emulate the Kodak CIR film. Some researchers prefer to paste the IR-Only red channel as a new red channel in the visible picture because the red is easier to see than the blue. You can see the process that one of our customers follows using the two camera system here.
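The channel-swap step of that workflow can be sketched on plain (R, G, B) tuples rather than real image files. Here the IR-only camera's red channel replaces the blue channel of the visible picture, emulating the Kodak CIR film; the pixel values are hypothetical:

```python
def merge_cir(visible_px, ir_px):
    """Emulate CIR film: paste the IR-only camera's red channel in as the
    new blue channel of the visible picture. Both arguments are lists of
    (R, G, B) tuples of equal length (aligned pixel for pixel)."""
    return [(vr, vg, ir_r)                        # new blue = IR red
            for (vr, vg, _vb), (ir_r, _, _) in zip(visible_px, ir_px)]

vis = [(120, 200, 60)]     # hypothetical visible-camera pixel (leaf)
ir  = [(230, 225, 220)]    # hypothetical IR-only camera pixel, bright in NIR
print(merge_cir(vis, ir))  # [(120, 200, 230)]
```

In practice the same paste is done on whole channels in an image editor, and it only works if the two pictures are aligned pixel for pixel, which is why parallax between the two cameras matters.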
Needless to say, this two-camera procedure is complicated. It requires two different cameras, a simultaneous shutter release system and matched lenses, it introduces parallax issues, and it demands lots of post production work. Or you can use one UV+VIS+IR camera and take two pictures using two different filters such as our CC1 IR blocking filter and our 715nm IR pass filter.
In 1994, Kodak made a very limited number of 6.0 megapixel DSLR cameras called the DCS-460CIR, specifically for the USDA Forest Service at the request of Tom Bobbe of the USDA. Some estimates suggest that Kodak made fewer than 10 of the DCS-460CIR cameras. The green and blue color dyes Kodak used had limited NIR response, and along with some custom software, the camera could generate a two-band CIR type of image. Kodak made the CIR camera only in 1994 and stopped making all DSLR cameras in 2004.
We have created digital cameras that can see either Blue-Green-NIR or Green-Red-NIR. With one picture from one camera, you get both NIR and visible color information.
As a plant gets sick, its near infrared reflectivity drops. Below is an example of how the spectral response changes for a leaf that is picked. We measured the leaf over a 24 hour period with our Ocean Optics HR2000 spectrometer.
Notice how the healthy leaf at 0 hours (just picked) shows very strong infrared reflectivity above 700nm. Within 24 hours, the NIR reflectivity drops off by 75%. Think about how the NDVI ratio changes. You can see that the visible color (400nm to 700nm) has changed somewhat while the NIR has changed dramatically.
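To see what that 75% NIR drop does to the index, work through the arithmetic with illustrative reflectance values. These numbers are hypothetical, assuming the visible red reflectance stays roughly constant while the NIR collapses:

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

red       = 0.06               # visible red reflectance, assumed constant
nir_fresh = 0.50               # hypothetical NIR reflectance, just picked
nir_24h   = nir_fresh * 0.25   # NIR reflectivity down 75% after 24 hours

print(round(ndvi(nir_fresh, red), 2))  # fresh leaf  -> 0.79
print(round(ndvi(nir_24h, red), 2))    # 24h later   -> 0.35
```

Even though the visible picture of the leaf has barely changed, the NDVI has fallen by more than half, which is exactly why the index is a sensitive stress indicator.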
If we look at the Quantum Efficiency of the 450NDVI camera, you can see that the camera sees in the blue and near infrared bands.
If we overlay the Leaf Spectral Reflectivity Change graph on top of the Quantum Efficiency graph, you can see how the camera will respond to a change in the near infrared of the leaf in the red band while the blue band will pick up the leaf blue light absorption.
For those interested in further vegetation indexes, you might want to research the Perpendicular Vegetation Index (PVI), Soil-Adjusted Vegetation Index (SAVI), Atmospherically Resistant Vegetation Index (ARVI), Global Environment Monitoring Index (GEMI) and Fraction of Absorbed Photosynthetically Active Radiation Index (FAPAR).