Shining a light on the global spread of cities
Reference article: Goldblatt, R., M.F. Stuhlmacher, B. Tellman, N. Clinton, G. Hanson, M. Georgescu, C. Wang, F. Serrano-Candela, A.K. Khandelwal, M.-H. Cheng, and R.C. Balling Jr. 2018. Using Landsat and nighttime lights for supervised pixel-based image classification of urban land cover. Remote Sensing of Environment 205: 253–275. DOI: 10.1016/j.rse.2017.11.026
Mapping our urban planet
As people across the world migrate toward cities, urban areas are steadily expanding, adding an area about the size of Belgium between 1970 and 2000. This ongoing mass migration is creating new urban ecosystems out of the forests, farmlands, and other landscapes that preceded them. A new study by Goldblatt and colleagues demonstrates how we can better keep tabs on the spread of urban areas using a novel combination of satellite data.
A bird’s-eye view, 700 kilometers up

Satellite observations of the Earth’s surface can be a powerful tool for routinely monitoring human and natural processes, such as detecting deforestation in the Amazon. Earth-orbiting satellites, like the Landsat series in operation since 1972, make images of the planet’s surface by collecting reflected light of different wavelengths as they pass overhead. The mix of wavelengths in each image pixel (its spectrum) carries information about that little piece of land, because the light had to interact with the surface before being reflected back up to the sensor. An entire science of remote sensing has grown up around interpreting what these spectral mixes tell us about the land, air, and ocean, and researchers stay busy thinking up new sensor designs to fly on future missions.
Crunching the numbers
The usual approach to mapping different types of land cover is to feed satellite data through a computer program (often in the realm of “machine learning”) that compares the spectral signature of each pixel to a kind of template: a set of example pixels of the land cover of interest. This “training data” first has to be produced by a human researcher using their eyeballs to identify areas that represent each targeted class of land cover. The computer program then attempts to assign each pixel a cover type based on its statistical similarity to the training data it was given. The researchers then often have to go back and test the accuracy of their map against a set of “validation” data, another independent collection of human-classified pixels.
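To make that workflow concrete, here is a minimal sketch in Python of supervised pixel-based classification. A scikit-learn random forest stands in for the classifiers used in the study, and the arrays, band count, and labels are illustrative placeholders rather than the paper’s data.

```python
# Minimal sketch: train a classifier on hand-labeled pixel spectra, then score
# it against an independent set of validation pixels. All data here are
# synthetic stand-ins for real Landsat spectra and human-assigned labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row is one pixel's spectrum (e.g., six Landsat bands);
# each label is a human-assigned class (1 = built-up, 0 = not built-up).
train_spectra = rng.random((500, 6))
train_labels = rng.integers(0, 2, 500)
valid_spectra = rng.random((200, 6))
valid_labels = rng.integers(0, 2, 200)

# Fit the classifier on the hand-labeled training pixels...
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_spectra, train_labels)

# ...then test the resulting map against the independent validation pixels.
predicted = clf.predict(valid_spectra)
print("validation accuracy:", accuracy_score(valid_labels, predicted))
```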
All this human intervention means training and validation data tend to be scarce and expensive. However, such specific training and validation data are still frequently necessary because classification tends to work well only for limited geographical areas and times, making accurate global land cover mapping difficult. And mapping cities can prove even trickier, given the vagaries around the definition of “urban” land and the complex mix of cover found in many built-up areas.
Use a robot to train a robot

To streamline satellite mapping of urban areas, Goldblatt and colleagues turned to another signature of our cities – the nighttime glow of our electric lights. Testing their approach on the countries of Mexico, India, and the U.S., the researchers combined 30-meter-resolution Landsat data with lower-resolution data available from a constellation of military weather satellites that can detect the glow of urban lights at night (as well as gas flares and squid boats). Since these nighttime lights give an independent sense of where “urban” land is likely to be, the researchers could then tell a computer to sample these pixels to form its own training data for classification, rather than asking a human to do the work. They then used several other machine learning techniques to allow the computer to optimize and tweak parameters to get the best final fit to another independent set of validation data.
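A hedged sketch of this “robot trains a robot” idea is below: coarse nighttime-light brightness is thresholded to auto-label candidate urban and non-urban pixels, which then train a classifier on the finer Landsat spectra. The threshold values, array names, and brightness scale are illustrative assumptions, not the values used in the paper.

```python
# Auto-generate training labels from nighttime lights, then classify Landsat pixels.
# All inputs are synthetic placeholders; a real workflow would read co-registered
# Landsat bands and nighttime-light brightness for the same pixels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

n_pixels = 10_000
landsat_spectra = rng.random((n_pixels, 6))    # 30 m Landsat bands per pixel
night_lights = rng.integers(0, 64, n_pixels)   # coarse nighttime brightness (0-63)

# Auto-label: very bright pixels are treated as "urban" examples, very dark ones
# as "not urban"; ambiguous middle values are left out of the training set.
urban = night_lights > 50
dark = night_lights < 5
keep = urban | dark
auto_labels = urban[keep].astype(int)

# Train on the machine-generated labels, no human digitizing required.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(landsat_spectra[keep], auto_labels)

# Classify every Landsat pixel, including the ambiguous ones the lights skipped.
urban_map = clf.predict(landsat_spectra)
print("fraction mapped as urban:", urban_map.mean())
```

The design point is that the coarse lights only bootstrap the labels; the final map still comes from the finer Landsat spectra, which is how the result can resolve urban boundaries at 30 meters.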

The result of their work is an automated process for mapping cities that is both more accurate and of higher spatial resolution than other satellite-based maps – demonstrating a technique that can potentially be applied across the globe while requiring far less human labor. Improvements in image processing like those demonstrated in the study may allow a better and more frequently updated picture of the global extent of cities, useful in a host of fields from urban planning to climate modeling to disaster response.
The next time you find yourself looking out across the city lights, imagine that the glow you see might be quietly helping computers and satellites map all the places humans have come to live across the planet.