Coined in the 1960s by a geographer named Evelyn Pruitt, the term remote sensing refers to the collection of data about Earth's surface (and atmospheric) features or phenomena without being in direct contact with such features or phenomena. This entry first provides background on electromagnetic radiation (EMR), which is the information link that enables the collection of data about Earth's surface without being in contact with the surface. Next, the entry discusses sensors that capture EMR and are mounted on platforms (aircraft or satellite), which enables the remote collection of different types of imagery. Finally, the principles and strategies of image interpretation for extracting geographic information are covered.
There are advantages and disadvantages in using remote sensing technology for geographic research and applications, relative to more conventional, ground-based or in situ (i.e., direct) observation methods. The primary advantage is that remote sensing enables spatially continuous and contiguous, large-area sampling of Earth's surface and atmosphere. This means that data are comprehensive, synoptic (i.e., many features are observed simultaneously), and efficiently collected. Other advantages are the noninvasive, nondisturbing nature of sensing remotely and the ability to collect data for areas that are inaccessible or inhospitable for making observations on the ground. Often, historical information on geographic features or conditions can only be derived through interpretation of archived imagery captured in the past. The main disadvantages of remote sensing are that remote observations are less detailed and precise and more uncertain than direct measurements made on the ground. Also, the sensing process, the image-processing steps, and the act of image interpretation all can yield more artifacts (i.e., false information) than would normally result when making in situ observations.
EMR is the important link that enables information about Earth's surface to be extracted. As the term suggests, both electric and magnetic fields are associated with EMR transfer, which can occur even in the absence of matter (i.e., in a vacuum). The orientation of the electric field determines the polarization of EMR. EMR travels at the same speed (the speed of light, or 300,000,000 m [meters] per second) through outer space and, effectively, through Earth's atmosphere, irrespective of wavelength or frequency. Wavelength and frequency are inversely related (i.e., short-wavelength EMR is high-frequency EMR).
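The inverse relation between wavelength and frequency follows from the constant speed of light, since frequency = speed ÷ wavelength. A minimal sketch (Python; the function name and the example wavelength are illustrative, not from the entry):

```python
# Speed of light in a vacuum, in meters per second (as given in the entry).
C = 300_000_000.0

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of EMR with the given wavelength: nu = c / lambda."""
    return C / wavelength_m

# Green visible light near 0.55 um (5.5e-7 m) has a frequency of ~5.5e14 Hz.
green_freq = frequency_hz(0.55e-6)

# Shorter wavelength implies higher frequency (the inverse relation).
uv_freq = frequency_hz(0.3e-6)
```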
All matter emits EMR, and the wavelength of emitted radiation is inversely proportional to the temperature of the emitting material. A change in the electrical charge of matter, which can result from atomic-level changes in energy states or molecular-level motions such as vibration and rotation, causes EMR to be emitted. The sun emits shortwave radiation in an amount approximately equal to that of a blackbody (defined as a perfectly absorbing material that is in thermal equilibrium) with a temperature slightly less than 6,000 K (kelvins). Earth's surface reflects incoming solar radiation (called solar irradiance) and emits long-wave radiation. The actual amount of radiation emitted by Earth surfaces also depends on the emissivity properties of different materials, which pertain to the efficiency of emission relative to a blackbody at the same temperature. Most of the EMR emitted by Earth surfaces is radiated within the thermal-infrared (TIR) part of the EMR spectrum at wavelengths ranging from 3 to 30 µm (micrometers), where 1 µm = 10⁻⁶ m.
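The inverse temperature-wavelength relation described above is known as Wien's displacement law. A minimal sketch (Python; the constant and the example temperatures are standard physics values, not taken from the entry):

```python
# Wien's displacement law: the wavelength of peak emission is inversely
# proportional to the absolute temperature of the emitting material.
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micrometer-kelvins

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength of maximum emission (um) for a blackbody at temperature_k."""
    return WIEN_B_UM_K / temperature_k

# The sun (~5,800 K) peaks near 0.5 um, in the visible range.
sun_peak = peak_wavelength_um(5800.0)

# Earth's surface (~288 K) peaks near 10 um, inside the 3-30 um TIR window.
earth_peak = peak_wavelength_um(288.0)
```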
Most remote sensing of the environment is based on passive imaging of solar reflective EMR, primarily in the wavelength range between 0.3 and 3.0 µm. This range includes the longer portion of the ultraviolet (0.3-0.4 µm), visible (0.4-0.7 µm), near-infrared (0.7-1.2 µm), and shortwave-infrared (1.2-3.0 µm) portions of the EMR spectrum. Passive optical remote sensors on aircraft or satellites capture EMR that reflects and/or is emitted from Earth surface materials and scatters off of or is emitted from atmospheric constituents. In most geographic applications, the surface-leaving radiation (called exitance) is the signal of interest, and the scattered or emitted atmospheric radiation is noise.
Solar radiation passes through the atmosphere in its path to Earth's surface and then, on being reflected, passes through the atmosphere again en route to a sensor on an aircraft or satellite platform. Atmospheric constituents such as gas molecules, small particles called aerosols, larger particles called particulates, and water in liquid and solid form (e.g., cloud droplets, raindrops, and ice) have the ability to absorb (intercept) and/or scatter (redirect) EMR. Of these interactions, scattering is the most important factor for solar reflected radiation, while absorption is more important when sensing long-wave emitted radiation. Atmospheric scattering diffuses the direct solar irradiance, reduces the amount of reflected solar radiation traveling to a sensor, and redirects solar radiation into the view of the sensor. Absorption limits which wavebands within the EMR spectrum are useful for sensing Earth's surface properties. For long-wave (TIR and microwave) sensing, absorption reduces the magnitude of emitted radiation leaving a surface and results in the atmosphere reradiating EMR into the view of a sensor.
The spectral signature of a surface material or object is the characteristic pattern of the magnitude of reflected or emitted EMR as a function of wavelength. As a type of inversion problem, information about the composition or condition of surface features can be extracted from spectral signatures represented in remotely sensed data. The amount of reflected EMR captured by a remote sensor is primarily a function of the reflectance properties of the surface materials. However, the reflected EMR leaving a surface is also influenced by the magnitude and direction of the solar irradiance and the view direction of the remote sensing instrument. The amount of emitted long-wave radiation leaving a surface is primarily a function of the surface temperature and, to a lesser extent, the emissivity of the surface. Radar sensing is achieved by a process in which microwave EMR is actively transmitted from an antenna and the backscattered EMR is received by the antenna, which is operated in a side-looking mode. The amount of backscatter, and therefore the strength of the received signal, is primarily a function of the moisture, microroughness, and geometric characteristics of surface materials and features.
Numerous types and sources of remotely sensed image data are available. The selection of the most appropriate image type should be based on a close match between the spatial, temporal, and attribute aspects of one's information requirements and the spatial, spectral, radiometric, and temporal characteristics of the available imagery. Two important specifications associated with these characteristics pertain to two data properties: (1) resolution or grain and (2) extent or coverage. Spatial resolution and extent aspects of an image characterize the fineness of spatial detail and the areal extent covered by an image, respectively. Each spatial characteristic must be traded off relative to the other; that is, high spatial resolution comes at the expense of limited coverage, and vice versa. Spectral resolution pertains to the range of wavelengths for a particular spectral waveband, while the number and location of wavelength bands in the EMR spectrum determine the spectral coverage. Radiometric resolution and coverage pertain to the fineness and range of EMR energy levels quantified by a remote sensor. The finer the radiometric resolution, the greater the ability to quantify fine differences in surface properties or identify subtle differences in surface material composition. Finally, temporal resolution and duration, which are mostly determined by aircraft or satellite mobility characteristics, represent the time interval between remote sensing observations and the record length of available data, respectively.
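Radiometric resolution is commonly expressed as the number of bits used to quantize each measurement; an n-bit sensor can distinguish 2ⁿ energy levels. A minimal sketch (Python; the bit depths shown are common examples, not values from the entry):

```python
def radiometric_levels(bits: int) -> int:
    """Number of distinguishable energy levels for an n-bit quantizer."""
    return 2 ** bits

# An 8-bit sensor records 256 brightness levels; a 12-bit sensor
# records 4,096, allowing subtler surface differences to be quantified.
levels_8bit = radiometric_levels(8)
levels_12bit = radiometric_levels(12)
```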
The primary differences between imagery types are associated with the type of (1) platform (airborne vs. satellite) and (2) sensor (photographic vs. digital and optical vs. microwave). Airborne imagery is captured by sensors on fixed-wing (e.g., airplanes and jets), rotary-wing (e.g., helicopters), or lighter-than-air (e.g., blimps) platforms. Airborne platforms operate at lower altitudes and tend to be more flexible and mobile than satellites. They are able to provide imagery with higher spatial and temporal resolution but usually with more limited spatial and temporal coverage. Satellites normally provide regular, large-area image coverage at set times and with coarser spatial resolution. The two principal types of satellite orbits, sun synchronous (i.e., polar orbiting) and geostationary (i.e., equatorial orbiting), determine spatial and temporal resolution and coverage characteristics. Sun-synchronous satellite systems are able to provide higher-spatial-resolution images less frequently, while geostationary systems provide frequent images that cover much of the globe simultaneously but with coarser spatial resolution. Photographic cameras capture imagery on film, providing very high spatial resolution and fidelity but limited flexibility in spectral range and image formats. Higher spectral and radiometric qualities are achieved with digital sensors such as digital camera systems and optical-mechanical scanners, which directly measure EMR levels in an electronic, computer-compatible manner. Aerial photographic film can be electronically scanned to produce digital images. Digital images are more readily corrected, enhanced, and converted into geographic information system (GIS)-compatible form. Digital image data can be transferred by electronic means through microwave downlink and wired communications.
Particular platform-sensor combinations tend to be used to capture the most commonly available imagery, though any type of imaging sensor can operate on either airborne or satellite platforms. The most common airborne imagery has been aerial photography. The use of airborne digital cameras and linear array sensors to capture high-spatial-resolution images of land areas is increasing rapidly. Aerial photographic film is available in black-and-white panchromatic, black-and-white infrared, true-color, and color-infrared (CIR, also called false-color infrared) formats. Most satellite imagery is captured by digital imaging radiometers, since it is difficult to retrieve photographic film from a satellite. These sensors capture images through push-broom sampling, using a large linear array of detectors, or whisk-broom sampling, using oscillating mirrors and a few detectors.
The three primary means for acquiring image data are as follows:
Obtaining existing imagery from an archive source
Capturing one's own imagery
Hiring a commercial firm or public agency to acquire new imagery coverage
Obtaining existing imagery is the least expensive, since the cost of acquisition has been subsidized. However, with this approach, one has no control over the spatial, spectral, radiometric, and temporal characteristics of the imagery. Of course, this is the only means for acquiring historical imagery. If current imagery is needed or no source of useful imagery exists, one can capture one's own airborne imagery. Consumer-level digital cameras and video sensors enable convenient imaging from a rented aircraft. To capture vertical imagery, one may need to design, build, and install a camera mount. Though not suitable for precise mapping, such imagery can be useful for reconnaissance and qualitative landscape assessments, particularly along linear features such as streams, roads, or utility corridors and for inventorying (i.e., making counts of) surface features.
If current imagery is needed and/or particular spatial or spectral image characteristics are required, then imagery must be purchased from a commercial vendor. Aerial survey firms can provide new and archived aerial photography and/or digital airborne imagery. Similarly, satellite imagery with high (starting in about 2000) and moderate (since 1972) spatial resolution can be obtained from both public and commercial suppliers around the world.
Image interpretation is the act of analyzing remotely sensed images and extracting useful information about a scene, the Earth surface area represented by an image. Interpreting images is both an art form and a science. Humans have innate interpretative abilities, but interpretative ability improves with experience. By following scientific principles of systematic investigation and establishing interpretative rules, information extraction can be optimized and subjectivity can be minimized.
There are four general types of image interpretative tasks that an image analyst may wish to accomplish. In order of complexity, these tasks are as follows:
Detection: Locating the occurrence of a particular type of feature or phenomenon; a binary or dichotomous search process (e.g., detecting standing water or the presence of vegetation)
Identification: Locating or delineating and determining the types of features or phenomena (e.g., identifying vegetation community types)
Measurement: Quantifying the length, area, or number of occurrences of objects (e.g., measuring the length of a road segment or counting the number of dwellings in a neighborhood), once such objects have been detected or identified
Analysis: Examining the spatial relationships and geographic attributes of a scene (i.e., the ground area covered by an image), often by incorporating information derived from detection, identification, and measurement of scene objects and phenomena (e.g., analyzing the socioeconomic characteristics of a neighborhood or inferring habitat quality from vegetation patterns)
These general image interpretation tasks are performed for a variety of geographic applications, including
renewable resource management;
exploration and engineering geology;
urban and transportation planning and engineering;
emergency response, law enforcement, and military reconnaissance.
Appropriate tools and strategies for image interpretation vary depending on whether images are in hardcopy or digital form. Most hardcopy images are aerial photographs, which are not normally geographically referenced. As with a map, the spatial scale of a hardcopy image is specified by the representative fraction, the ratio of the length of a feature on an image to its actual length on the ground. The representative fraction should be expressed as a dimensionless ratio normalized to one unit of image length (e.g., 1:10,000 scale or 1/10,000 scale). However, it is common practice for local government agencies in the United States to use dimensioned-scale equations in nonmetric units (e.g., 1 in. [inch] = 2,000 ft. [feet]). A dimensioned-scale equation can be readily converted to a dimensionless representative fraction by converting the units of one side of the equation to match the units of the other side (e.g., 1 in. = 2,000 ft. → 1 in. = 24,000 in. → 1:24,000 scale). Since the representative fraction increases when the denominator (called the scale factor) decreases, a large-scale image portrays a smaller areal extent with greater spatial resolution than a small-scale image. The scale of uncorrected aerial photographs captured over varying terrain varies across the image, making it difficult to measure lengths and areas accurately.
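The unit conversion from a dimensioned-scale equation to a representative fraction can be sketched as follows (Python; the function and variable names are illustrative assumptions):

```python
FEET_TO_INCHES = 12  # 1 foot = 12 inches

def scale_factor(ground_feet_per_image_inch: float) -> int:
    """Convert a dimensioned-scale equation (1 in. = X ft.) to the
    denominator of the dimensionless representative fraction (1:N)."""
    return int(ground_feet_per_image_inch * FEET_TO_INCHES)

# 1 in. = 2,000 ft. corresponds to a 1:24,000 representative fraction.
rf_denominator = scale_factor(2000)
```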
Hardcopy images can be interpreted with the aid of analog optical viewing tools such as magnifying loupes or microscopes. Aerial photographs are often captured with approximately 60% overlap to enable stereoscopic viewing. A stereoscope is a device that forces each eye of the interpreter to simultaneously view each overlapping portion of the photographic pair, such that the interpreter observes the scene in three dimensions. Being able to see topographic features and terrain variations in stereo is very effective in visualizing landscape relationships or mapping terrain-controlled features such as vegetation. If a digital GIS layer is to be generated from an analog map, the map must be digitally encoded by scanning or manually digitizing the interpreted features.
Digital images can be displayed and viewed on a computer monitor or printed onto a variety of hardcopy media. Computer-based image processing and display systems enable interactive viewing, processing, and interpretation in a very flexible and efficient manner known as on-screen interpretation. While a novice image analyst may not attempt to exploit the semiautomatic image interpretation (e.g., image classification or object recognition) and quantification capabilities of image-processing software, on-screen interpretation is readily achievable and powerful. The purpose of more automated approaches is to generate and update geospatial data sets in a faster, cheaper, more repeatable, and more reliable manner. Though the reliability of semiautomated approaches is improving, some form of manual, interactive editing and quality checking of the resultant products is required.
Up to three different wavelength bands, enhanced images, or dates of imagery can be displayed simultaneously in the three color planes (red, green, and blue) of a computer image display. A single-band, panchromatic (i.e., broad visible wavelength band) satellite image can be displayed simultaneously in all three color planes to yield a gray-tone (i.e., black and white) image. True-color images are displayed when red-, green-, and blue-wavelength bands are displayed in red, green, and blue color planes, respectively. When other waveband/color plane combinations are displayed, the resultant images are called false-color composites. The most common is the false-color infrared composite, where near-infrared, red, and green wavelengths are displayed in red, green, and blue color planes, respectively. As with the color representation of CIR photographs, healthy green vegetation appears bright red, and inorganic urban surfaces (e.g., concrete and roofing materials) are portrayed in blue-gray.
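The assignment of wavebands to display color planes can be sketched as follows (Python; the function name and the tiny sample brightness values are illustrative assumptions, not from the entry):

```python
def composite(band_r, band_g, band_b):
    """Stack three single-band images (2-D lists of brightness values)
    into one image of (R, G, B) display triples, pixel by pixel."""
    return [[(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
            for row_r, row_g, row_b in zip(band_r, band_g, band_b)]

# False-color infrared composite: near-infrared -> red plane,
# red -> green plane, green -> blue plane (a 1 x 2 pixel example).
nir = [[200, 210]]
red = [[60, 65]]
grn = [[80, 85]]
cir = composite(nir, red, grn)
```

High near-infrared reflectance (e.g., healthy vegetation) dominates the red display plane, which is why vegetation appears bright red in such composites.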
Many image analysis and mapping tasks can be performed with digital images displayed on a computer monitor. An analyst can conveniently modify the scale and tone/color of a digital image and, using image-processing software, correct and enhance spatial and radiometric characteristics. (Image corrections and enhancements can also be performed by image suppliers prior to obtaining the imagery.) Spatial scale may be changed by digitally zooming, which means that there is no inherent representative fraction associated with an image displayed on a computer monitor. Once an image is enhanced and displayed, an analyst can visually interpret the image on the computer monitor. Point, line, and polygon objects can be counted, measured, and/or digitized interactively by cursor control devices (e.g., mouse). Information attributes (e.g., category names) about these spatial objects must be encoded manually. These encoded objects can be saved to a digital file. If the image has been georeferenced, then a file of image-digitized features with encoded attributes is essentially a GIS layer. Similarly, a dated GIS file can be overlaid graphically with a more current georeferenced image and be updated interactively through this “heads-up digitizing” approach.
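For a georeferenced image, the length of an interactively digitized line feature follows directly from its vertex coordinates in map units. A minimal sketch (Python; the coordinates and units are illustrative assumptions):

```python
import math

def polyline_length(vertices):
    """Ground length of a digitized line feature, given georeferenced
    (x, y) vertex coordinates in map units (e.g., meters)."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]))

# A right-angle road segment: 300 m east, then 400 m north -> 700 m total.
length_m = polyline_length([(0.0, 0.0), (300.0, 0.0), (300.0, 400.0)])
```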
Remote sensing provides an efficient means for deriving geospatial data and information about Earth's surface (and atmosphere). Many geospatial data and information requirements can be met through visual interpretation of remotely sensed images. Digital imagery and image-processing software are becoming more accessible, allowing for more efficient and reliable interpretation and digital encoding of geospatial data.
Remote sensing is a rapidly advancing technology and the availability, cost, and characteristics of remotely sensed imagery change constantly. Most of these changes are positive, meaning that the imagery is becoming higher quality and is available in a more convenient, timely, and economical format.
See also: Aerial Imagery: Data; Aerial Imagery: Interpretation; Atmospheric Remote Sensing; GIScience; Image Enhancement; Image Fusion; Image Interpretation; Image Processing; Image Registration; Image Texture; Imaging Spectroscopy; Microwave/Radar Data; Multispectral Imagery; Multitemporal Imaging; Photogrammetric Methods; Radiometric Correction; Radiometric Normalization; Radiometric Resolution; Remote Sensing: Platforms and Sensors; Remote Sensing in Disaster Response; Scale in GIS; Spatial Resolution; Spectral Resolution; Temporal Resolution