Master in Architecture (GIS)
1. Details of Module and its Structure

Subject Name: M.Arch - Architecture
Paper Name: GIS
Module Name/Title: Satellite Imagery - Digital Image Processing
2. Development Team

National Coordinator:
Subject Coordinator: Dr. Monsingh D. Devadas
Paper Coordinator: Dr. Pratheep Moses
Content Writer/Author (CW): Dr. S. Vijaysagar
Content Reviewer (CR): Dr. Pratheep Moses
Language Editor (LE):
e-Text & Learn More
INTRODUCTION
Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. Access to low-cost, efficient computer hardware and software is commonplace, and the sources of digital image data are many and varied. These sources range from commercial earth resource satellite systems, to meteorological satellites, to airborne scanner data, to airborne digital camera data, to image data generated by scanning microdensitometers and other high-resolution digitizing systems. All of these forms of data can be processed and analysed using the techniques described in this chapter.
Digital image processing is an extremely broad subject, and it often involves procedures that can be mathematically complex. The central idea behind digital image processing, however, is quite simple. The digital image is fed into a computer, and the computer is programmed to insert these data into an equation or series of equations and then store the results of the computation for each pixel. These results form a new digital image that may be displayed or recorded in pictorial format, or may be further manipulated by additional programs. The possible forms of digital image manipulation are literally infinite. However, virtually all of these procedures may be categorized into one (or more) of the following five broad types of computer-assisted operations:
[Figure: Natural color composite (bands 3,2,1) and false color composite (bands 4,3,2)]
Image Rectification and Restoration
These operations aim to correct distorted or degraded image data to create a more faithful representation of the original scene. This typically involves the initial processing of raw image data to correct for geometric distortions, to calibrate the data radiometrically, and to eliminate noise present in the data. Thus the nature of any particular image restoration process is highly dependent upon the characteristics of the sensor used to acquire the image data. Image rectification and restoration procedures are often termed preprocessing operations because they normally precede further manipulation and analysis of the image data to extract specific information.
Image Enhancement
These procedures are applied to image data in order to more effectively display or record the data for subsequent visual interpretation. Normally, image enhancement involves techniques for increasing the visual distinctions between features in a scene. The objective is to create a “new” image from the original image data in order to increase the amount of information that can be visually interpreted from the data. The enhanced images can be displayed interactively on a monitor, or they can be recorded in a hard copy format, either in black and white or in color.
Image Classification
The objective of these operations is to replace visual analysis of the image data with quantitative techniques for automating the identification of features in a scene. This normally involves the analysis of multispectral image data and the application of statistically based decision rules for determining the land cover identity of each pixel in an image. When these decision rules are based solely on the spectral radiance observed in the data, we refer to the classification process as spectral pattern recognition. In contrast, the decision rules may be based on the geometric shapes, sizes, and patterns present in the image data; these procedures fall into the domain of spatial pattern recognition. In either case, the intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes, or “themes”. These categorized data may then be used to produce thematic maps of the land cover present in an image and/or to produce summary statistics on the areas covered by each land cover class.
Data Merging and GIS integration
These procedures are used to combine image data for a given geographic area with other geographically referenced data sets for the same area. These other data sets might simply consist of image data generated on other dates by the same sensor or by other remote sensing systems.
Frequently, the intent of data merging is to combine remotely sensed data with other sources of information in the context of a GIS. For example, image data are often combined with soil, topographic, ownership, zoning, and assessment information.
Biophysical Modeling
The objective of biophysical modeling is to relate quantitatively the digital data recorded by a remote sensing system to biophysical features and phenomena measured on the ground. For example, remotely sensed data might be used to estimate such varied parameters as crop yield, pollution concentration, or water depth. Likewise, remotely sensed data are often used in concert with GIS techniques to facilitate environmental modeling. The intent of these operations is to simulate the functioning of environmental systems in a spatially explicit manner and to predict their behavior under altered conditions, such as global climate change.
IMAGE RECTIFICATION AND RESTORATION
The intent of image rectification and restoration is to correct image data for distortions or degradations that stem from the image acquisition process. Obviously, the nature of such procedures varies considerably with such factors as the digital image acquisition type (digital camera, along-track scanner, across-track scanner), platform (airborne vs. satellite), and total field of view.
Geometric Correction
Raw digital images usually contain geometric distortions so significant that they cannot be used as maps. The sources of these distortions range from variations in the altitude and velocity of the sensor platform to factors such as panoramic distortion, earth curvature, atmospheric refraction, relief displacement, and nonlinearities in the sweep of a sensor’s IFOV. The intent of geometric correction is to compensate for the distortions introduced by these factors so that the corrected images will have the geometric integrity of a map.
The geometric correction process is normally implemented as a two-step procedure. First, those distortions that are systematic, or predictable, are considered. Second, those distortions that are essentially random, or unpredictable, are considered. For example, a highly systematic source of distortion involved in multispectral scanning from satellite altitudes is the eastward rotation of the earth beneath the satellite during imaging. This causes each successive scan line to cover ground slightly to the west of the previous sweep, a geometric error known as skew distortion. The process of deskewing the resulting imagery involves offsetting each successive scan line slightly to the west. A process called resampling is used to determine the pixel values to fill into the output matrix from the original image matrix.
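The resampling step can be illustrated with a short sketch. The Python fragment below is illustrative only; the shift-per-line value, array sizes, and function name are assumptions, not part of this module. It shows a nearest-neighbour resampling that offsets each successive scan line to remove skew distortion:

```python
# Hypothetical sketch: nearest-neighbour resampling used to deskew an image.
# The shift-per-line value and array shapes are illustrative assumptions;
# the direction and magnitude of the shift depend on the sensor geometry.
import numpy as np

def deskew_nearest_neighbour(image, shift_per_line):
    """Offset each successive scan line and resample by nearest neighbour."""
    rows, cols = image.shape
    max_shift = int(np.ceil(shift_per_line * (rows - 1)))
    output = np.zeros((rows, cols + max_shift), dtype=image.dtype)
    for r in range(rows):
        # Integer (nearest-neighbour) offset for this scan line.
        offset = int(round(shift_per_line * r))
        output[r, offset:offset + cols] = image[r, :]
    return output

# Usage with a small synthetic band (DN values 0-255).
band = np.random.randint(0, 256, size=(100, 120), dtype=np.uint8)
deskewed = deskew_nearest_neighbour(band, shift_per_line=0.1)
```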
Radiometric Correction
As with geometric correction, the type of radiometric correction applied to any given digital image data set varies widely among sensors. Other things being equal, the radiance measured by any given system over a given object is influenced by such factors as changes in scene illumination, atmospheric conditions, viewing geometry, and instrument response characteristics. Some of these effects, such as viewing geometry variations, are greater in the case of airborne data collection than in satellite image acquisition. Also, the need to perform correction for any or all of these influences depends directly upon the particular application.
In the case of satellite sensing in the visible and near-infrared portions of the spectrum, it is often desirable to generate mosaics of images taken at different times or to study the change in the reflectance of ground features at different times or locations. In such applications, it is usually necessary to apply a sun elevation correction and an earth-sun distance correction. The sun elevation correction accounts for the seasonal position of the sun relative to the earth. Through this process, image data acquired under different solar illumination angles are normalized by calculating pixel brightness values as if the sun were at the zenith on each date of sensing.
The earth-sun distance correction is applied to normalize for the seasonal changes in the distance between the earth and the sun. The earth-sun distance is usually expressed in astronomical units. (An astronomical unit is equivalent to the mean distance between the earth and the sun, approximately 149.6 × 10^6 km.) The irradiance from the sun decreases as the square of the earth-sun distance.
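A minimal sketch of how these two corrections are commonly combined is shown below. The function and variable names are illustrative assumptions; the exact calibration equation for any real sensor should be taken from that sensor's documentation:

```python
# Illustrative sketch of combined sun-elevation and earth-sun-distance
# normalization. Assumes at-sensor radiance values; the names and the exact
# form are common conventions, not a specific sensor's published equation.
import numpy as np

def normalize_radiance(radiance, sun_elevation_deg, earth_sun_distance_au):
    """Normalize radiance as if the sun were at the zenith and at 1 AU."""
    sun_elevation_rad = np.radians(sun_elevation_deg)
    # Divide by sin(elevation) to normalize for the seasonal solar position;
    # multiply by d^2 because irradiance falls off with the square of distance.
    return radiance * earth_sun_distance_au ** 2 / np.sin(sun_elevation_rad)

# Example usage with hypothetical radiance values.
normalized = normalize_radiance(np.array([[120.0, 95.0]]), 35.0, 0.9833)
```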
Noise Removal
Image noise is any unwanted disturbance in image data that is due to limitations in the sensing, signal digitization, or data recording process. The potential sources of noise range from periodic drift or malfunction of a detector, to electronic interference between sensor components, to intermittent ‘hiccups’ in the data transmission and recording sequence. Noise can either degrade or totally mask the true radiometric information content of a digital image.
As with geometric restoration procedures, the nature of noise correction required in any given situation depends upon whether the noise is systematic (periodic), random, or some combination of the two. For example, multispectral scanners that sweep multiple scan lines simultaneously often produce data containing systematic striping or banding. This stems from variations in the response of the individual detectors used within each band. Another line-oriented noise problem sometimes encountered in digital data is line drop. In this situation, a number of adjacent pixels along a line (or an entire line) may contain spurious DNs. This problem is normally addressed by replacing the defective DNs with the average of the values for the pixels occurring in the lines just above and below. Alternatively, the DNs from the preceding line can simply be inserted in the defective pixels. Random noise problems in digital data are handled quite differently. This type of noise is characterized by nonsystematic variations in gray levels from pixel to pixel, called bit errors. Such noise is often referred to as being ‘spikey’ in character, and it causes images to have a salt and pepper or snowy appearance.
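The line-drop repair and the treatment of random ‘spikey’ noise can both be sketched in a few lines. The example below is illustrative; the array contents and the index of the defective line are assumptions:

```python
# Sketch of two common noise fixes: replacing a dropped line with the average
# of its neighbours, and suppressing random "salt and pepper" bit errors with
# a median filter. Array shapes and the dropped-line index are illustrative.
import numpy as np
from scipy.ndimage import median_filter

band = np.random.randint(0, 256, size=(50, 60)).astype(float)

# Line drop: replace the defective scan line with the mean of adjacent lines.
bad_line = 20
band[bad_line, :] = (band[bad_line - 1, :] + band[bad_line + 1, :]) / 2.0

# Random (spikey) noise: a 3x3 median filter removes isolated extreme DNs
# while preserving edges better than a simple average would.
despiked = median_filter(band, size=3)
```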
IMAGE ENHANCEMENT
The process of visually interpreting digitally enhanced imagery attempts to optimize the complementary abilities of the human mind and the computer. The mind is excellent at interpreting spatial attributes on an image and is capable of selectively identifying obscure or subtle features. However, the eye is poor at discriminating the slight radiometric or spectral differences that may characterize such features. Computer enhancement aims to visually amplify these slight differences to make them readily observable. Most enhancement techniques may be categorized as either point or local operations.
Point operations modify the brightness value of each pixel in an image data set independently; local operations modify the value of each pixel based on neighboring brightness values. Either form of enhancement can be performed on single-band images or on the individual components of multi-image composites. The resulting images may be recorded or displayed in black and white or in color. Choosing the appropriate enhancement(s) for any particular application is an art and often a matter of personal preference. The techniques described below fall into three categories: contrast manipulation, spatial feature manipulation, and multi-image manipulation.
1. Contrast manipulation: gray-level thresholding, level slicing, and contrast stretching.
2. Spatial feature manipulation: spatial filtering, edge enhancement, and Fourier analysis.
3. Multi-image manipulation: multispectral band ratioing and differencing, principal components, canonical components, vegetation components, intensity-hue-saturation (IHS) color space transformations, and decorrelation stretching.
Level Slicing
Level slicing is an enhancement technique whereby the DNs distributed along the x-axis of an image histogram are divided into a series of analyst-specified intervals, or “slices”. All of the DNs falling within a given interval in the input image are then displayed at a single DN in the output image. Consequently, if six different slices are established, the output image contains only six different gray levels. The result looks something like a contour map, except that the areas between boundaries are occupied by pixels displayed at the same DN. Each level can also be shown as a single color. Level slicing is used extensively in the display of thermal infrared images in order to show discrete temperature ranges coded by gray level or color.
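A minimal level-slicing sketch, assuming six arbitrary slice boundaries over an 8-bit band (the boundaries and array contents are illustrative):

```python
# Minimal level-slicing sketch: divide the DN range into analyst-specified
# intervals and display each interval at a single output value (or colour).
# The slice boundaries below are arbitrary examples.
import numpy as np

band = np.random.randint(0, 256, size=(100, 100))
boundaries = [0, 43, 86, 129, 172, 215, 256]            # six slices
slice_index = np.digitize(band, boundaries[1:-1])        # 0..5 for each pixel
output_levels = np.linspace(0, 255, num=6).astype(np.uint8)
sliced = output_levels[slice_index]                       # only six grey levels remain
```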
Contrast Stretching
Image display and recording devices typically operate over a range of 256 gray levels (the maximum number represented in 8-bit computer encoding). Sensor data in a single scene rarely extend over this entire range. Hence, the intent of contrast stretching is to expand the narrow range of brightness values typically present in an input image over a wider range of gray values. The result is an output image designed to accentuate the contrast between features of interest to the image analyst. Consider a hypothetical sensing system whose image output levels can vary from 0 to 255. Figure 1a illustrates a histogram of brightness levels recorded in one spectral band over a scene. Assume that our hypothetical output device (e.g., a computer monitor) is also capable of displaying 256 gray levels (0 to 255). Note that the histogram (Figure 1a) shows scene brightness values occurring only in the limited range of 60 to 158. If we were to use these image values directly in our display device (Figure 1b), we would be using only a small portion of the full range of possible display levels.
Display levels 0 to 59 and 159 to 255 would not be utilized. Consequently, the tonal information in the scene would be compressed into a small range of display values, reducing the interpreter’s ability to discriminate radiometric detail.
A more expressive display would result if we were to expand the range of image levels in the scene (60 to 158) to fill the range of display values (0 to 255). In Figure 1c, the range of image
values has been uniformly expanded to fill the total range of the output device. This uniform expansion is called a linear stretch.
One drawback of the linear stretch is that it assigns as many display levels to the rarely occurring image values as it does to the frequently occurring values. To improve on this situation, a histogram-equalized stretch can be applied. In this approach, image values are assigned to the display levels on the basis of their frequency of occurrence. As shown in Figure 1d, more display values (and hence more radiometric detail) are assigned to the frequently occurring portion of the histogram. The image value range of 109 to 158 is now stretched over a large portion of the display levels (39 to 255). A smaller portion (0 to 38) is reserved for the infrequently occurring image values of 60 to 108.
For special analyses, specific features may be examined in greater radiometric detail by assigning the display range exclusively to a particular range of image values. For example, if a narrow range of values in a scene represented water features, stretching this small range to the full display range could enhance characteristics within the water features. As shown in Figure 1e, the output range is devoted entirely to the small range of image values between 60 and 92. On the stretched display, minute tonal variations in the water range would be greatly exaggerated. The brighter land features, on the other hand, would be washed out by being displayed at a single bright white level (255).
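The linear stretch and the histogram-equalized stretch can be sketched as follows, assuming the same 60 to 158 input range used in the hypothetical example (function names and array contents are illustrative):

```python
# Sketch of a linear stretch and a histogram-equalized stretch for an 8-bit band.
# The input range (60-158) mirrors the hypothetical example in the text.
import numpy as np

def linear_stretch(band, in_min=60, in_max=158, out_min=0, out_max=255):
    """Uniformly expand the occupied DN range to the full display range."""
    scaled = (band.astype(float) - in_min) / (in_max - in_min)
    return np.clip(scaled * (out_max - out_min) + out_min, out_min, out_max).astype(np.uint8)

def histogram_equalize(band, levels=256):
    """Assign display levels in proportion to how often DN values occur."""
    hist, _ = np.histogram(band.flatten(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()                  # cumulative frequency, 0..1
    lookup = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lookup[band]

band = np.random.randint(60, 159, size=(100, 100))
linear = linear_stretch(band)
equalized = histogram_equalize(band)
```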
SPATIAL FEATURE MANIPULATION
Spatial filtering
In contrast to spectral filters, which serve to block or pass energy over various spectral ranges, spatial filters emphasize or de-emphasize image data of various spatial frequencies. Spatial frequency refers to the “roughness” of the tonal variations occurring in an image. Image areas of high spatial frequency are tonally “rough”. That is, the gray levels in these areas change abruptly over a relatively small number of pixels (e.g., across roads or field borders). “Smooth” image areas are those of low spatial frequency, where gray levels vary only gradually over a relatively large number of pixels (e.g., large agricultural fields or water bodies). Low pass filters are designed to emphasize low frequency features (large-area changes in brightness) and de-emphasize the high frequency components of an image (local detail). High pass filters do just the reverse. They emphasize the detailed high frequency components of an image and de-emphasize the more general low frequency information.
Spatial filtering is a “local” operation, in that pixel values in an original image are modified on the basis of the gray levels of neighboring pixels. For example, a simple low pass filter may be implemented by passing a moving window throughout an original image and creating a second image whose DN at each pixel corresponds to the local average within the moving window at each of its positions in the original image. Assuming a 3 × 3 pixel window is used, the DN at each pixel in the new image would be the average of the nine pixels in the original image contained in the window at that point. This process is very similar to that described previously under the topic of noise suppression. (In fact, low pass filters are very useful for reducing
random noise). A simple high pass filter may be implemented by subtracting a low pass filtered image (pixel by pixel) from the original unprocessed image.
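A minimal sketch of these two filters, assuming a 3 × 3 moving-average window (the scipy calls and array contents are illustrative choices, not prescribed by this module):

```python
# Sketch of a simple low pass filter (3x3 moving average) and the corresponding
# high pass image obtained by subtracting it from the original, as described above.
import numpy as np
from scipy.ndimage import uniform_filter

band = np.random.randint(0, 256, size=(100, 100)).astype(float)

low_pass = uniform_filter(band, size=3)   # local mean within the moving window
high_pass = band - low_pass               # detailed (high frequency) component
```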
CONVOLUTION
Spatial filtering is but one special application of the generic image processing operation called convolution. Convoluting an image involves the following procedures:
1. A moving window is established that contains an array of coefficients or weighting factors.
Such arrays are referred to as operators or kernels, and they are normally an odd number of pixels in size (e.g., 3 × 3, 5 × 5, 7 × 7).
2. The kernel is moved throughout the original image and the DN at the center of the kernel in a second (convoluted) output image is obtained by multiplying each coefficient in the kernel by the corresponding DN in the original image and adding all the resulting products. This operation is performed for each pixel in the original image.
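A generic convolution sketch follows; the 3 × 3 equal-weight kernel shown is only an example, and other kernels would implement sharpening or edge detection:

```python
# Generic convolution sketch: each output DN is the sum of kernel coefficients
# multiplied by the corresponding DNs in the original image.
import numpy as np
from scipy.ndimage import convolve

band = np.random.randint(0, 256, size=(100, 100)).astype(float)
kernel = np.full((3, 3), 1.0 / 9.0)        # coefficients (weighting factors)

convolved = convolve(band, kernel, mode='nearest')
```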
EDGE ENHANCEMENT
We have seen that high frequency component images emphasize the spatial detail in digital images. That is, these images exaggerate local contrast and are superior to unenhanced original images for portraying linear features or edges in the image data. However, high frequency component images do not preserve the low frequency brightness information contained in the original images. Edge-enhanced images attempt to preserve both local contrast and low frequency brightness information. They are produced by “adding back” all or a portion of the gray values in the original image to a high frequency component image of the same scene. Thus, edge enhancement is typically implemented in three steps:
1. A high frequency component image is produced containing the edge information. The kernel size used to produce this image is chosen based on the roughness of the image. “Rough” images suggest a small kernel size (e.g., 3 × 3 pixels), whereas larger kernels (e.g., 9 × 9 pixels) are used with “smooth” images.
2. All or a fraction of the gray level value of each pixel in the original scene is added back to the high frequency component image. (The proportion of the original gray levels to be added back may be chosen by the image analyst.)
3. The composite image is contrast stretched. This results in an image containing local contrast enhancement of high frequency features that also preserves the low frequency brightness information contained in the scene.
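These three steps can be sketched as follows; the kernel size and the 70 percent add-back fraction are example analyst choices, not fixed values:

```python
# Edge-enhancement sketch following the three steps above: high-frequency image,
# add back a fraction of the original, then a linear contrast stretch.
import numpy as np
from scipy.ndimage import uniform_filter

band = np.random.randint(0, 256, size=(100, 100)).astype(float)

high_freq = band - uniform_filter(band, size=3)      # step 1: edge information
composite = high_freq + 0.7 * band                   # step 2: add back 70% of original

# Step 3: stretch the composite back to the 0-255 display range.
stretched = (composite - composite.min()) / (composite.max() - composite.min())
edge_enhanced = (stretched * 255).astype(np.uint8)
```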
FOURIER ANALYSIS
The spatial feature manipulations we have discussed thus far are implemented in the spatial domain, the (x, y) coordinate space of the image. An alternative coordinate space that can be used for image analysis is the frequency domain. In this approach, an image is separated into its various spatial frequency components through application of a mathematical operation known as the Fourier transform. Conceptually, this operation amounts to fitting a continuous function through the discrete DN values as if they were plotted along each row and column in an image. The peaks and valleys along any given row or column can be described mathematically by a combination of sine and cosine waves with various amplitudes, frequencies, and phases. A Fourier transform results from the calculation of the amplitude and phase for each possible spatial frequency in an image.
After an image is separated into its component spatial frequencies it is possible to display these values in a two-dimensional scatter plot known as a Fourier spectrum. If the Fourier spectrum of an image is known it is possible to regenerate the original image through the application of an inverse Fourier transform.
Fourier analysis is useful in a host of image processing operations in addition to the spatial filtering and image restoration applications we have illustrated in this discussion. However, most image processing is currently implemented in the spatial domain because of the number and complexity of computations required in the frequency domain. This situation is likely to change with improvements in computer hardware and advances in research on the spatial attributes of digital image data.
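A minimal frequency-domain sketch using the fast Fourier transform (the array contents and the logarithmic display scaling are illustrative):

```python
# Sketch of a Fourier-domain workflow: forward transform, inspect the spectrum,
# then apply an inverse transform to regenerate the image. The log scaling of
# the spectrum is only for display purposes.
import numpy as np

band = np.random.randint(0, 256, size=(128, 128)).astype(float)

spectrum = np.fft.fftshift(np.fft.fft2(band))          # amplitude and phase per frequency
display_spectrum = np.log1p(np.abs(spectrum))           # Fourier spectrum for viewing

restored = np.fft.ifft2(np.fft.ifftshift(spectrum)).real   # inverse transform
```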
MULTI-IMAGE MANIPULATION
Spectral Ratioing
Ratio images are enhancements resulting from the division of DN values in one spectral band by the corresponding values in another band. A major advantage of ratio images is that they convey the spectral or color characteristics of image features regardless of variations in scene illumination conditions. Consider, for example, two different land cover types, deciduous and coniferous trees, occurring on both the sunlit and shadowed sides of a ridgeline. The DNs observed for each cover type are substantially lower in the shadowed area than in the sunlit area. However, the ratio values for each cover type are nearly identical irrespective of the illumination condition. Hence a ratioed image of the scene effectively compensates for the brightness variation caused by the varying topography and emphasizes the color content of the data.
Ratioed images are often useful for discriminating subtle spectral variations in a scene that are masked by the brightness variations in images from individual spectral bands or in standard color composites. This enhanced discrimination is due to the fact that ratioed images clearly portray the variations in the slopes of the spectral reflectance curves between the two bands involved,
regardless of the absolute reflectance values observed in the bands. These slopes are typically quite different for various material types in certain bands of sensing. For example, the near-infrared to red ratio for healthy vegetation is normally very high, while that for stressed vegetation is typically lower, so a near-infrared to red (or red to near-infrared) ratioed image might be very useful for differentiating between areas of stressed and non-stressed vegetation. This type of ratio has also been employed extensively in vegetation indices aimed at quantifying relative vegetation greenness and biomass.
Ratio images can also be used to generate false color composites by combining three monochromatic ratio data sets. Such composites have the twofold advantage of combining data from more than two bands and presenting the data in color, which further facilitates the interpretation of subtle spectral reflectance differences. The manner in which ratios are computed and displayed will also greatly influence the information content of a ratio image. For example, the ratio between two raw DNs for a pixel will normally be quite different from that between two radiance values computed for the same pixel. The reason for this is that the detector response curves for the two channels will normally have different offsets, which are additive effects on the data. (This situation is akin to the difference one would obtain by ratioing two temperatures expressed on the Fahrenheit scale versus the Celsius scale.) Some trial and error may be necessary before the analyst can determine which form of ratio works best for a particular application.
It should also be noted that ratios can blow up mathematically (become equal to infinity) where divisions by zero occur. At the same time, ratios less than 1 are common, and rounding to integer values will compress much of the ratio data into gray level 0 or 1. Hence it is important to scale the results of ratio computations and relate them to the display device being used; one means of doing this is to employ a scaling algorithm.
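A sketch of ratio computation with a guard against division by zero and a simple scaling onto 0 to 255 gray levels follows; the band names, the scaling choice, and the NDVI example are illustrative:

```python
# Ratio-image sketch: near-infrared / red ratio with a guard against division
# by zero and a simple scaling of the result onto 0-255 grey levels. Other
# scaling algorithms are equally common; this is only one option.
import numpy as np

red = np.random.randint(1, 256, size=(100, 100)).astype(float)
nir = np.random.randint(1, 256, size=(100, 100)).astype(float)

ratio = np.divide(nir, red, out=np.zeros_like(nir), where=red != 0)

# Compress the unbounded ratio values into displayable grey levels.
scaled = np.clip(ratio / ratio.max() * 255, 0, 255).astype(np.uint8)

# The related normalized difference vegetation index stays within -1..1 by design.
ndvi = (nir - red) / (nir + red)
```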
PRINCIPAL AND CANONICAL COMPONENTS
Extensive interband correlation is a problem frequently encountered in the analysis of multispectral image data. That is, images generated from digital data in various wavelength bands often appear similar and convey essentially the same information. Principal and canonical component transformations are two techniques designed to remove or reduce such redundancy in multispectral data. These transformations may be applied either as an enhancement operation prior to visual interpretation of the data or as a preprocessing procedure prior to automated classification of the data.
The concepts involved may be expressed graphically by considering a two-channel image data set such as that shown in Figure 2. In (a), a random sample of pixels has been plotted on a scatter diagram according to their gray levels as originally recorded in bands A and B.
Superimposed on the band A/band B axis system are two new axes (axes I and II) that are rotated with respect to the original measurement axes and that have their origin at the mean of the data distribution. Axis I defines the direction of the first principal component and axis II defines the direction of the second principal component. The form of the relationship necessary to transform a data value in the original band A/band B coordinate system into its value in the new axis I/axis II system is
DN_I  = a11 DN_A + a12 DN_B
DN_II = a21 DN_A + a22 DN_B
where
DN_I, DN_II = digital numbers in the new (principal component) coordinate system
DN_A, DN_B = digital numbers in the old coordinate system
a11, a12, a21, a22 = coefficients (constants) for the transformation
In short, the principal component data values are simply linear combinations of the original data values. As in the case of ratio images, principal component images can be analyzed as separate black and white images, or they may be combined to form a color composite. If used in an image classification process, principal component data are normally treated in the classification algorithm simply as if they were original data. However, the number of components used is normally reduced to the intrinsic dimensionality of the data, thereby making the image classification process much more efficient by reducing the computation required. Canonical component analysis, also referred to as multiple discriminant analysis, may be more appropriate when information about particular features of interest is known in advance, whereas principal components are derived from a random, undifferentiated sample of image pixel values.
In Figure 2b, the pixel values shown are derived from image areas containing three different analyst-defined feature types (represented by different symbols). The canonical component axes in this figure (axes I and II) have been located to maximize the separability of these classes while minimizing the variance within each class. For example, the axes have been positioned in this figure such that the three feature types can be discriminated solely on the basis of the first canonical component (CC I) values located along axis I.
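A principal-components sketch for a two-band data set, mirroring the band A/band B discussion above (the synthetic data and variable names are illustrative):

```python
# Principal-components sketch for a multi-band image: centre the data, take the
# eigenvectors of the band covariance matrix, and project pixels onto them.
import numpy as np

bands = np.random.randint(0, 256, size=(2, 100, 100)).astype(float)   # band A, band B
pixels = bands.reshape(2, -1).T                         # one row per pixel

mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)               # 2x2 band covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by decreasing variance; the columns of `a` play the role of
# the coefficients a11, a12, a21, a22 in the transformation shown above.
order = np.argsort(eigenvalues)[::-1]
a = eigenvectors[:, order]

principal_components = (pixels - mean) @ a              # DN_I, DN_II for each pixel
pc_images = principal_components.T.reshape(2, 100, 100)
```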
IMAGE CLASSIFICATION
The overall objective of image classification procedures is to automatically categorize all pixels in an image into land cover classes or themes. Normally, multispectral data are used to perform the classification and, indeed, the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. That is, different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties. In this light, a spectral “pattern” is not at all geometric in character; rather, the term pattern refers to the set of radiance measurements obtained in the various wavelength bands for each pixel. Spectral pattern recognition refers to the family of classification procedures that utilize this pixel-by-pixel spectral information as the basis for automated land cover classification.
Temporal pattern recognition uses time as an aid in feature identification. In agricultural crop surveys, for example, distinct spectral and spatial changes during a growing season can permit discrimination on multi-date imagery that would be impossible given any single date. For example, a field of winter wheat might be indistinguishable from bare soil when freshly seeded in the fall and spectrally similar to an alfalfa field in the spring. An interpretation of imagery from either date alone would be unsuccessful regardless of the number of spectral bands. If data were analyzed from
both dates, however, the winter wheat fields could be readily identified, since no other field cover would be bare in late fall and green in late spring.
SUPERVISED CLASSIFICATION
We use a hypothetical example to facilitate our discussion of supervised classification. In this example, let us assume that we are dealing with the analysis of five-channel airborne MSS data. (The identical procedures would apply to Landsat MSS, TM, or SPOT HRV multispectral data.) Figure 2 shows the location of a single line of the MSS data collected over a landscape composed of several cover types. For each of the pixels shown along this line, the MSS has
measured scene radiance in terms of DNs recorded in each of five spectral bands of sensing: blue, green, red, near-infrared, and thermal infrared. Below the scan line, typical DNs measured over six different land cover types are shown. The vertical bars indicate the relative gray values in each spectral band. These outputs represent a coarse description of the spectral response patterns of the various terrain features along the scan line. If these spectral patterns are sufficiently distinct for each feature type, they may form the basis for image classification. Figure 3 summarizes the three basic steps involved in a typical supervised classification procedure. In the training stage (1), the analyst identifies representative training areas and develops a numerical description of the spectral attributes of each land cover type of interest in the scene. Next, in the classification stage (2), each pixel in the image data set is categorized into the land cover class it most closely resembles. If the pixel is insufficiently similar to any training data set, it is usually labeled “unknown”. The category label assigned to each pixel in this process is then recorded in the corresponding cell of an interpreted data set (an output image). Thus the multidimensional image matrix is used to develop a corresponding matrix of interpreted land cover category types. After the entire data set has been categorized, the results are presented in the output stage (3). Being digital in character, the results may be used in a number of different ways. Three typical forms of output products are thematic maps, tables of full-scene or sub-scene area statistics for the various land cover classes, and digital data files amenable to inclusion in a geographic information system (GIS). In this latter case the classification output becomes a GIS “input”.
Common strategies used in the classification stage include:
1. The parallelepiped approach
2. Minimum distance to mean
3. Maximum likelihood classification
Maximum likelihood classification is another statistical approach. It assumes multivariate normal distributions of pixels within classes. For each class, a discriminant function is built; for each pixel in the image, this function calculates the probability that the pixel is a member of that class, taking into account the mean and covariance of the training set. Each pixel is then assigned to the class for which it has the highest probability of membership.
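A maximum-likelihood sketch under these assumptions follows; the training sets, class names, and band values are synthetic placeholders, not real training data:

```python
# Maximum-likelihood sketch: fit a multivariate normal to each training class
# and assign every pixel to the class with the highest (log) likelihood.
import numpy as np

def fit_class(training_pixels):
    """Return the mean vector and covariance matrix of one training set."""
    mean = training_pixels.mean(axis=0)
    cov = np.cov(training_pixels, rowvar=False)
    return mean, cov

def log_likelihood(pixels, mean, cov):
    """Gaussian discriminant function evaluated for every pixel."""
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    diff = pixels - mean
    mahalanobis = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (logdet + mahalanobis)

# Synthetic two-band training sets for three hypothetical cover types.
rng = np.random.default_rng(0)
training = {
    'water':  rng.normal([30, 20],  5, size=(200, 2)),
    'forest': rng.normal([60, 120], 8, size=(200, 2)),
    'urban':  rng.normal([150, 90], 10, size=(200, 2)),
}
models = {name: fit_class(samples) for name, samples in training.items()}

pixels = rng.normal([60, 120], 15, size=(1000, 2))      # image pixels, one row each
scores = np.stack([log_likelihood(pixels, m, c) for m, c in models.values()], axis=1)
labels = np.array(list(models.keys()))[scores.argmax(axis=1)]
```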
[Figure: 2014 – Kharif – October]
[Figure: 2015 – Rabi – March]
Digitally classified Land Use Map – 2014-2015
Unsupervised Classification
The analyst requests the computer to examine the image and extract a number of spectrally distinct clusters…
[Figure: spectrally distinct clusters identified and saved]
The result of the unsupervised classification is not yet information until the analyst determines the ground cover for each of the clusters. It is then a simple process to regroup (recode) the clusters into meaningful information classes (the legend). The result is essentially the same as that of the supervised classification.
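An unsupervised-classification sketch using k-means clustering (one common clustering algorithm; this module does not prescribe a specific one) with an illustrative recoding step:

```python
# Unsupervised-classification sketch: k-means clustering of pixel spectra into
# spectrally distinct clusters, which the analyst then relabels (recodes) into
# information classes. Cluster count and data are illustrative.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
bands = rng.integers(0, 256, size=(4, 100, 100)).astype(float)   # 4-band image
pixels = bands.reshape(4, -1).T                                    # one row per pixel

centroids, cluster_ids = kmeans2(pixels, k=8, minit='++')
cluster_map = cluster_ids.reshape(100, 100)

# Recoding step: the analyst assigns an information class to each cluster
# after inspecting it against reference data (hypothetical labels shown).
legend = {0: 'water', 1: 'forest', 2: 'urban'}
```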
Pros
Takes maximum advantage of spectral variability in an image
Cons
The maximally-separable clusters in spectral space may not match our perception of the important classes on the landscape
After iterations finish, you’re left with a map of distributions of pixels in the clusters
How do you assign class names to clusters?
Requires some knowledge of the landscape
Ancillary data useful, if not critical (aerial photos, personal knowledge, etc.)
Comparison between Supervised and Unsupervised Classification
Both approaches use spectral (radiometric) differences to distinguish objects.
Land cover is not necessarily equivalent to land use.
Supervised classification
Training areas characterize spectral properties of classes
Assign other pixels to classes by matching with spectral properties of training sets
Unsupervised classification
Maximize separability of clusters
Assign class names to clusters after classification