Posted on July 19th, 2012

G-FAQ – Why Can I See Some Small Objects in Imagery and Others Not?

For this month’s Geospatial Frequently Asked Question (G-FAQ), I explore the topic of spatial resolution as it relates to satellite and aerial imagery, and its impact on the objects you can and cannot see in this data. Have you ever purchased or downloaded imagery with 1-foot pixels expecting to see a feature approximately two feet in diameter, for instance a small bush, only to be disappointed when you could not find it in the full-resolution data? If you answered yes in your head just now, then this G-FAQ is the article for you, as I explore these topics:

What is meant by spatial resolution and pixel size?
How is the color of a pixel determined?
Why can I see some small objects in my imagery and others not?
What is the general rule of thumb to determine what can and cannot be seen in imagery?

To start off this discussion, let’s explore resolution as it relates to satellite and aerial imagery. There are four types of resolution: spectral, temporal, radiometric and spatial. Spectral resolution is defined by the range of the electromagnetic spectrum a sensor measures – or a band – as well as by the number of bands it can measure. Temporal resolution can be defined either as the fixed revisit time of a sensor to each location on the planet – for instance, every 16 days for Landsat 7 – or as the regularity of archive coverage for companies such as DigitalGlobe, Astrium and RapidEye. Radiometric resolution refers to the bit depth of a sensor (typically 11-bit or 12-bit); the higher the bit depth, the more finely the sensor can distinguish different intensities of photons reflected from a surface.
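To make bit depth concrete, here is a minimal sketch (my own illustration, not from any vendor’s documentation) of how the number of recordable intensity levels grows with radiometric resolution:

```python
def intensity_levels(bit_depth: int) -> int:
    """Number of discrete digital-number (DN) values a sensor of a
    given bit depth can record: 2 raised to the bit depth."""
    return 2 ** bit_depth

# An 11-bit sensor records 2,048 shades per band; a 12-bit sensor, 4,096 -
# far more than the 256 levels of a typical 8-bit consumer camera.
for bits in (8, 11, 12):
    print(f"{bits}-bit sensor: {intensity_levels(bits)} intensity levels")
```

The extra levels are what let a high-bit-depth sensor separate subtly different surfaces, such as a manhole cover against dark asphalt, that would collapse into the same value on a coarser scale.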

Spatial resolution is commonly thought to define the amount of ‘ground’ each satellite/aerial imagery pixel covers. For example, WorldView-2 features 50-centimeter (cm) spatial resolution in its panchromatic band, so each pixel covers 50 cm by 50 cm in the real world, or 0.25 square meters (sq m). The reason I use the expression ‘commonly thought’ is that IKONOS, for example, can be delivered with 80-cm pixels, but this does not necessarily equate to the actual length of each side of the square that was imaged on the ground. That length is the ground sampling distance (GSD), and the GSD rarely, if ever, matches the pixel size. The GSD of IKONOS is 82 cm at nadir (i.e. when the satellite is pointing straight down as it images the ground); it increases as the satellite tilts to collect data, so that at 30 degrees off-nadir the GSD is actually 1 m. While imaging companies can use resampling techniques to convert a 1-m GSD to a pixel size of 80 cm, they cannot create data that is not there; hence 80-cm IKONOS imagery collected at 30 degrees off-nadir will be ‘fuzzier’ than data collected at 10 degrees. This resampling from the actual GSD to a set pixel size has serious implications for the Nearest Neighbor kernel which many academics prefer – if you would like more info on this topic, drop me a line, as this discussion is outside the scope of this G-FAQ.
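The growth of GSD with off-nadir angle can be approximated with simple geometry. The sketch below assumes a flat Earth and a narrow field of view – a simplification, not the actual model any vendor uses – where the cross-track GSD grows roughly as 1/cos(θ) and the along-track GSD as 1/cos²(θ):

```python
import math

def gsd_off_nadir(gsd_nadir_m: float, off_nadir_deg: float) -> float:
    """Approximate mean GSD at a given off-nadir angle, assuming a flat
    Earth and narrow field of view. Cross-track GSD scales as 1/cos(theta),
    along-track as 1/cos(theta)**2; return the geometric mean of the two."""
    theta = math.radians(off_nadir_deg)
    cross_track = gsd_nadir_m / math.cos(theta)
    along_track = gsd_nadir_m / math.cos(theta) ** 2
    return math.sqrt(cross_track * along_track)

# IKONOS: 82 cm at nadir; at 30 degrees off-nadir this toy model gives
# roughly 1 m, in line with the figure quoted above.
print(round(gsd_off_nadir(0.82, 30), 1))
```

Even this rough model makes the key point: the pixels may still be delivered at 80 cm, but the information content of an oblique collect corresponds to a coarser ground footprint.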

In order to understand how the color of an individual pixel is determined, we first need to understand the basic principles behind optical imaging. An optical satellite and/or aerial sensor is really just an extremely advanced camera suspended above the Earth; and similar to a camera, these passive sensors require a light source (i.e. the Sun) to take an image of the ground. The Sun bathes our planet in photons – which are packets of energy – covering a large range of the electromagnetic spectrum; and of this range, the human eye can detect only a small portion, the visible spectrum. When photons strike the surface of a plant, a road or any other object, some wavelengths are absorbed by the material and others are reflected back out; this selective reflection is what gives each surface its characteristic color.

A satellite and/or aerial sensor detects the intensity of photons reflected from the surface, measuring them across a range of wavelengths, or a spectral band. If a surface is painted blue, many more ‘blue’ photons (those with a wavelength of ~450 to 475 nanometers) are reflected to the sensor than photons at any other wavelength. As such, the sensor records a higher intensity value (called a digital number) in its blue band; and then, when you display the imagery in natural color, the pixel appears blue. Most sensors have four spectral bands: three in the visible spectrum, i.e. blue, green and red, and one band in the near-infrared spectrum, as this data is important for assessing plant health.

There is one key consideration to layer on top of this discussion of pixel color, and that is the amount of ground covered by each pixel. As pixel size grows, so too does the amount of land it covers – and so too does the chance that a single pixel covers multiple surface types. So if a 50-cm WorldView-2 pixel covers part white sidewalk and part green grass, its ‘color’ will be a combination of these two surfaces (i.e. greenish-white) when displayed in natural color. The animation below will help to describe this phenomenon better than words possibly can.
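This mixing can be sketched as an area-weighted average. The RGB values below are hypothetical stand-ins for concrete and grass, and averaging display colors is a simplification of the per-band radiance averaging a real sensor performs, but the principle is the same:

```python
def mixed_pixel(colors_and_fractions):
    """Area-weighted average of the RGB colors inside one pixel footprint.
    Each surface contributes to the pixel in proportion to the fraction
    of the footprint it covers (fractions should sum to 1.0)."""
    r = sum(c[0] * f for c, f in colors_and_fractions)
    g = sum(c[1] * f for c, f in colors_and_fractions)
    b = sum(c[2] * f for c, f in colors_and_fractions)
    return (round(r), round(g), round(b))

sidewalk = (230, 230, 230)   # near-white concrete (hypothetical values)
grass    = (60, 160, 60)     # green lawn

# A 50-cm pixel that is 40% sidewalk and 60% grass blends toward greenish-white:
print(mixed_pixel([(sidewalk, 0.4), (grass, 0.6)]))
```

Neither pure surface color survives; the pixel reports only the blend, which is exactly why small features can vanish into their surroundings.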


The animation above shows how the color of a pixel is determined. First a red grid (i.e. pixels) is burned into an overhead image of a parking lot shown in natural color. Then the approximate color of each pixel is shown in words; and finally, each pixel is filled with this approximate color to show how a satellite sensor might see this parking lot from space.


This animation helps to explain why features smaller than the pixel size can still be visible in satellite and/or aerial imagery. Take, for example, the white parking lines painted on black asphalt in the animation above. If a satellite were to image this small area, each pixel covering a line would average to a light grey color. Now imagine you are looking at this imagery, and specifically at the parking lot: the majority of the pixels covering the lot are deep black, and while the parking lines might be grayish-white, they would appear simply white against such a dark background. It is this combination of a brightly colored surface surrounded by a dark background (or vice versa) that is typically responsible for objects smaller than the pixel size appearing in satellite and/or aerial imagery.

In order to develop general rules that describe this phenomenon, we need to understand the difference between detecting a feature on the ground and truly resolving it. A parking line is a simple linear object that needs little more than a row of pixels for us to identify it on the ground. Parking lines also carry strong visual clues: they are positioned in a particular way (i.e. in rows) on a black surface (i.e. a parking lot) with cars clustered between them. Now consider a dark manhole cover surrounded by black asphalt in imagery with 50-cm resolution. While this manhole may appear as a slight discoloration of the surrounding asphalt, it would not be distinct enough for you to tell whether it was indeed a manhole or simply a dark canvas bag left in the parking lot. What I have explained here is the ability to detect a manhole versus the ability to resolve a parking line – these are different terms with important implications I discuss below.
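The contrast difference between these two cases can be put in numbers. The digital numbers below are hypothetical, and the linear mixing is the same simplification used earlier, but the sketch shows why a bright sub-pixel line stands out while a dark manhole on dark asphalt does not:

```python
def pixel_dn(background_dn: float, object_dn: float, object_fraction: float) -> float:
    """DN of a pixel partially covered by an object, using simple linear
    mixing: the object and background contribute in proportion to the
    fraction of the pixel footprint each occupies."""
    return background_dn * (1 - object_fraction) + object_dn * object_fraction

asphalt = 25       # dark background (hypothetical DN on an 8-bit display scale)
paint_line = 240   # bright white parking line, covering ~30% of a 50-cm pixel
manhole = 45       # dark manhole cover, covering ~60% of the pixel

line_px = pixel_dn(asphalt, paint_line, 0.3)
manhole_px = pixel_dn(asphalt, manhole, 0.6)

# The line pixel jumps far above the asphalt background; the manhole pixel
# barely moves, so it reads as a vague discoloration at best.
print(line_px - asphalt, manhole_px - asphalt)
```

The parking line is detectable precisely because its brightness difference from the background survives the averaging; the manhole’s does not.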

Here then are some general rules of thumb to determine the objects you can detect and/or resolve in satellite/aerial imagery:

  1. An object that is brightly colored on a dark surface, or dark on a bright surface, will be the easiest to detect in imagery.
  2. Objects that have distinct linear edges, such as cars, are easier to detect and resolve than are objects with fuzzy, wavy or ‘soft’ edges such as trees and bushes.
  3. As a very general rule of thumb, an object should cover about 10 pixels of an image for it to be truly resolvable – unless it is a unique feature such as a white parking line on a black parking lot. So for 1-m satellite imagery, a feature should cover ~10 sq m, which is roughly the size of a car or light truck; and for 5-meter imagery, it should cover ~250 sq m, which is roughly the footprint of a small commercial building. If you inspect the images in our online gallery, you will see this rule of thumb works quite well.
  4. Adding the non-visible spectra (i.e. the near-infrared band or more) to your visual analysis can help you detect features that you might not see in natural color alone. This point warrants a complete G-FAQ of its own, however – so stay tuned to future editions for an elaboration of this topic.
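The ~10-pixel rule of thumb above translates directly into a minimum object size per pixel scale. The helper below is a simple illustration of that arithmetic (the 10-pixel threshold and square-object assumption come from the rule of thumb, not from any formal standard):

```python
import math

def min_resolvable_side_m(pixel_size_m: float, pixels_needed: int = 10) -> float:
    """Side length of the smallest square object that covers the ~10 pixels
    the rule of thumb calls for at a given pixel size."""
    area_sq_m = pixels_needed * pixel_size_m ** 2
    return math.sqrt(area_sq_m)

for px in (0.5, 1.0, 5.0):
    side = min_resolvable_side_m(px)
    print(f"{px}-m imagery: ~{side:.1f} m x {side:.1f} m object "
          f"({10 * px ** 2:.0f} sq m) to truly resolve it")
```

Running this recovers the examples from rule 3: about 10 sq m (a car) at 1-m resolution, and about 250 sq m at 5-m resolution.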

Until our next edition of G-FAQ, happy GIS-ing!

Do you have an idea for a future G-FAQ? If so, let me know by email.


Brock Adam McCarty

Map Wizard

(720) 470-7988

This entry was posted in The Geospatial Times by Apollo Mapping. Bookmark the permalink.
