In the past five years, geospatial professionals and academics alike have increasingly focused their efforts on extracting useful spectral information from panchromatic satellite and aerial imagery. Since the start of the 20th century, panchromatic imagery has been used to extract building footprints, road centerlines, and parcel boundaries. Traditionally, this extraction has been a manual process completed by rooms full of dedicated GIS professionals. With the growing commercial archives of 50-centimeter (cm) panchromatic data collected by WorldView-1 (WV1), attention has turned to automated processing techniques that can extract information from large geographies more quickly than humans can. These techniques typically rely on spectral (and at times spatial) characteristics that are easier to detect in multispectral (i.e., color plus near-infrared bands) signatures. One distinct advantage of panchromatic data is its reduced file size, which makes processing the imagery quicker. As such, if reliable techniques can be found for extracting useful information from pan-only imagery, they could streamline many remote sensing workflows.
When extracting information from satellite and aerial imagery, one can employ supervised or unsupervised classification techniques. In supervised classification, users identify a cluster of known pixels that contain the features they are hunting for; statistics are then employed to 'match' these spectral signatures to other pixels in the imagery. In theory, if the training dataset of known pixels is robust, you can find all or most of your features of interest throughout the entire scene in an automated fashion.
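The 'matching' idea above can be sketched in a few lines. This is a deliberately simplified illustration, not a specific ENVI algorithm: it labels any scene pixel whose brightness falls within a fixed tolerance of the training pixels' mean value. The function name and the tolerance rule are my own assumptions for the sake of the example.

```python
import numpy as np

def match_to_training(scene, training_pixels, tolerance=10.0):
    """Toy supervised matcher: flag scene pixels whose values lie within
    `tolerance` of the mean value of the known training pixels.

    Illustrative only -- real supervised classifiers (e.g., maximum
    likelihood) use far richer statistics than a single mean and cutoff.
    """
    target = np.mean(training_pixels)            # mean 'signature' of the training set
    return np.abs(scene - target) <= tolerance   # boolean mask of matching pixels
```

A robust training set widens the evidence behind `target`; a poor one makes every match suspect, which is why training-data quality dominates supervised workflows.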
In unsupervised classification, pixels are clustered into classes by the similarity of their spectral signatures, without the use of a training set. Similarity is measured with a variety of mathematical and statistical techniques that are beyond the scope of this article. For the analysis presented here, I focus on an unsupervised classification technique offered in ENVI called ISODATA.
ISODATA is an unsupervised classification technique that can be run on a single band: it clusters data based on the distance of each pixel's spectrum from the mean value of the class it creates. These classes are created arbitrarily at the start and are then merged and split through an iterative process. Once the iterations are completed, you are left with at most a user-specified number of classes, clustered so that all pixels fall within a user-specified distance of their class's mean spectral value.
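The iterative assign-recompute-merge loop described above can be sketched on a single band as follows. This is a minimal, hypothetical sketch of the idea, not ENVI's implementation: ENVI's ISODATA also splits classes and exposes several threshold parameters not modeled here, and all parameter names below are my own.

```python
import numpy as np

def isodata_1d(pixels, max_classes=15, max_iter=15, merge_dist=5.0):
    """Minimal ISODATA-style clustering of a single panchromatic band.

    Illustrative sketch only: start with arbitrary class means, assign
    each pixel to its nearest mean, recompute means, and merge classes
    whose means drift closer than `merge_dist`.
    """
    # Start with class means spread evenly across the band's radiometric range.
    means = np.linspace(pixels.min(), pixels.max(), max_classes)
    for _ in range(max_iter):
        # Assign each pixel to the class whose mean is nearest its value.
        labels = np.argmin(np.abs(pixels[:, None] - means[None, :]), axis=1)
        # Recompute each class mean from its member pixels; drop empty classes.
        means = np.array([pixels[labels == k].mean()
                          for k in range(len(means)) if np.any(labels == k)])
        # Merge any pair of classes whose means sit closer than merge_dist.
        means.sort()
        merged = [means[0]]
        for m in means[1:]:
            if m - merged[-1] < merge_dist:
                merged[-1] = (merged[-1] + m) / 2.0
            else:
                merged.append(m)
        means = np.array(merged)
    # Final assignment against the settled class means.
    labels = np.argmin(np.abs(pixels[:, None] - means[None, :]), axis=1)
    return means, labels
```

Note how a single dark water class can emerge naturally: very low pixel values cluster tightly around one mean, far from the brighter land classes.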
ISODATA and WorldView-1 Satellite Imagery
For this analysis, I ran multiple ISODATA classifications on 50-cm panchromatic WV1 imagery collected March 22, 2010, over Austin, Texas. Varying both the number of classes and the number of iterations, I found it most important to increase the number of iterations, as too few resulted in many unclassified pixels. In the end, I used 15 iterations with the maximum number of classes set to 15; all other default ISODATA settings remained unchanged. The results of the classification were a bit dizzying at first, as the animation below attests.
Upon closer inspection of the ISODATA results, it was apparent that the unsupervised classification technique did a 'good'* job of identifying bodies of water. As such, I extracted the water class as a shapefile and brought it into ArcGIS. Inside ArcGIS, it was apparent that many pixels were misidentified as water, and many of these erroneous identifications were small clusters of pixels. To clean up these small clusters, I filtered out all polygons that covered fewer than 500 pixels. What I was left with was surprisingly accurate, as the series of images below shows.
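The small-cluster filter amounts to an area threshold on the polygon attribute table. Here is a hypothetical sketch of that step: it assumes each water polygon's area is known in square meters, and converts the 500-pixel threshold using WV1's 50-cm ground sample distance (one pixel covers 0.25 m², so 500 pixels is 125 m²). The data structure and function name are illustrative, not ArcGIS-specific.

```python
PIXEL_AREA_M2 = 0.5 * 0.5   # one 50-cm WV1 pixel covers 0.25 square meters
MIN_PIXELS = 500            # cluster-size threshold used in the analysis

def filter_small_polygons(polygons, min_pixels=MIN_PIXELS):
    """Drop water polygons covering fewer than `min_pixels` WV1 pixels.

    `polygons` is a list of (id, area_m2) tuples -- a stand-in for the
    attribute table of the extracted water-class shapefile.
    """
    min_area_m2 = min_pixels * PIXEL_AREA_M2
    return [(pid, area) for pid, area in polygons if area >= min_area_m2]
```

In a real workflow the same cut could be made with a selection query on a shapefile's area field; the point is simply that a single size threshold removes most of the speckle-like false positives.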
As should be apparent from the full-resolution animation above, the ISODATA classification did a 'good' job of extracting water from the panchromatic WV1 imagery. The classification was certainly not without flaws, as river rapids and shallow areas in ponds and rivers were commonly misidentified. This suggests that the ISODATA classification works well on deep, flat bodies of water that have little reflectance (i.e., the blackest pixels in the imagery), while shallow areas and water with foam on top (e.g., river rapids, waves, turbulent zones) tend to be misidentified. The ISODATA classification also did a 'good' job of detecting contiguous surfaces covering larger areas, such as rooftops and parking lots, though extracting these classes to polygons in an automated fashion would likely require more elaborate GIS filtering.
In sum, the results of this analysis suggest that panchromatic data has value beyond its traditional visual uses. With additional tweaking of the ISODATA classification and more elaborate GIS filtering, land-use classes other than water could likely be extracted in an automated fashion from WV1 panchromatic imagery.
Brock Adam McCarty
* I place 'good' in quotation marks because I have done nothing to quantify these results beyond a visual analysis.