Isochronic travel time maps

A few years ago, a modern version of an old isochronic map from 1914 made the rounds on social media. It showed the amount of time it would take to get anywhere in the world, starting from London, UK. It's a beautiful map but it was only useful for travel times from London. I decided to make my own version with other places as the origin, using Python and QGIS. Even without detailed travel route data, it turned out pretty well.

Isochronic travel times from Toronto, Canada

General approach

The general approach is pretty simple:

  1. Pick a starting airport.
  2. Make a map that divides the world into pixels. Each pixel represents a particular latitude and longitude location. For example, with a 360 by 180 pixel map, each pixel represents one degree of latitude and longitude.
  3. For each pixel, find out the minimum time it takes to get to it from the starting airport.

In practical terms, even with flights, public transit and street direction information, there probably isn't any database that has directions to reach every pixel. Instead, there would probably be a sparse sampling of the world, where the time for some pixels is known (e.g. pixels that overlap streets) while others have unknown times. The pixels with unknown times will have to be filled in with an estimate.

I needed a way to get the directions from the home airport to various locations around the world. There is a free database of world airport locations and flight routes. With this data, I can calculate the time it takes to fly to any destination airport in the world from the starting airport.

Timing it as-the-crow-drives

To fill in the pixels between airports, I decided to simply calculate the “driving” time to each pixel from each possible destination airport (all 3000 of them). Then I added the flight time to the driving time to get the total time to a particular location by way of each of those 3000 airports. Then I took the minimum total time over all airports to get the travel time to that pixel.

The driving time is calculated by assuming a constant speed of 35 km/h over an as-the-crow-flies distance. To be more specific, the distance is the great circle distance calculated with the Haversine formula. This is a massive simplification but the resulting map seems plausible.
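As a quick sanity check, here is a plain-Python sketch of that calculation (the coordinates for Toronto Pearson and Ottawa are rough approximations):

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great circle distance in km between two lon/lat points given in degrees."""
    lon1, lat1, lon2, lat2 = (math.radians(x) for x in (lon1, lat1, lon2, lat2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Approximate coordinates for Toronto Pearson (YYZ) and downtown Ottawa.
distance = haversine_km(-79.62, 43.68, -75.70, 45.42)  # roughly 366 km
driving_time = distance / 35.0  # about 10.5 hours at the assumed 35 km/h
```

At 35 km/h, a destination a few hundred kilometres from its nearest airport already adds many hours, which is why remote regions dominate the outer contours of the map.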

PyTorch for faster processing

There are 3000 airports and I needed to calculate the driving time to every pixel in the world map. Calculating the distance from 3000 airports for each pixel turned out to be really slow with NumPy due to the number of pixels. If I wanted to print my poster at 300 dpi, I would need 10800 x 5400 pixels. I decided to use PyTorch to speed things up. While it is a machine-learning library, PyTorch also has many of the same operations as NumPy, with GPU support. Doing the calculations on the GPU is about 20 to 30 times faster.

For even faster processing, I could have also used a lower resolution map and allowed Matplotlib's contourf function to generate a smooth vector layer, but exporting it to QGIS with the right projection seemed difficult compared to just exporting a high resolution raster. Another way to speed up processing is to only update pixels that are within a certain distance from each airport, but because the airports are sparsely separated, the radius has to be pretty large to avoid gaps. This still leaves a lot of pixels to process.

Data

The airport data I used is from OpenFlights. It consists of a list of airports in the world with coordinates. There is also a list of valid routes between the airports. These databases are licensed under the Open Database License.

Code

The script here implements what I described above. It will create a result that looks like this:

These results are a little different from the map that I created because the code here is simplified. It does not use flight route information and assumes direct flights to every airport. It also lacks a few functions that would complicate the explanation.

Here's the overall script first. There is nothing tricky: just a few parameter values and setup for the calculations. The script uses matplotlib and cartopy to preview the map. Much of the work is done in utility functions, which will be described later. First, the imports:

import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib.image import imread
import numpy as np
import torch
import pandas as pd
import math
import time
 
#[... Utility Functions Described Later ...]

Then the main script:

if __name__ == '__main__':
 
    # Starting Airport
    home = 'YYZ'
 
    # Output resolution
    # Note: 300 dpi at 36" x 18" = 10800 x 5400 (about 5 GB of GPU memory)
    rows = 1800
    cols = 3600
 
    flight_speed = 835 # km/h
    ground_speed = 35 # km/h
 
    # Specifies data type and processing device for Torch.
    dtype = torch.float
    device = torch.device("cuda:0") # change to "cpu" for CPU processing.
 
    # Generate arrays for the lat/lon indices for each pixel
    latgrid, longrid = generate_grids(rows,cols)
 
    longrid_cuda = torch.from_numpy(longrid).to(device)
    latgrid_cuda = torch.from_numpy(latgrid).to(device)

The travel time map is a raster array, where each pixel represents a particular latitude/longitude coordinate on Earth. To facilitate the calculations, the latitude and longitude for each pixel is placed in two separate arrays with the same size as the desired travel time map. These arrays are then converted into PyTorch tensors and put on the GPU.

    # Read the airports list
    airports = read_airport_list('airports.dat')
 
    # Get the time to each destination from the starting airport.
    destinationtimes = destination_times(home, airports, flight_speed)

The airports.dat file is a CSV; read_airport_list reads it and stores the information in the airports variable. With the list of airports read, destination_times calculates the time to fly from the home airport to each of the other airports in the list. It returns a dictionary mapping each destination airport to the flight time needed to reach it from the home airport.

    start = time.time()
 
    # Calculate the total travel time to each pixel. The basic idea is to
    # calculate how long it takes to get to each pixel via each of the
    # destination airports and then choosing the minimum time over all airports.
 
    times = pixel_travel_times_simple_minimum(
            destinationtimes, longrid_cuda, latgrid_cuda, ground_speed)
 
    end = time.time()
 
    print('Generation time:', end-start)
 
    times[times > 36] = 37 # Clip values over 36 hours.

The pixel_travel_times_simple_minimum() function calculates the actual travel time map. This is the function that goes through the list of destination airports and finds the time it takes to drive to each pixel from each airport, giving a total travel time per pixel via that airport. It then takes the minimum time over all airports as the travel time to that pixel. With the map generated, the script clips values over 36 hours because the last level on the legend is simply “Over 36 hours”.

After generating the times matrix, it can be shown in a plot with matplotlib and cartopy:

    # Show a quick plot in Matplotlib to see what it looks like.
 
    ax = plt.axes(projection=ccrs.Mercator())
    ax.set_global()
 
    # These are the levels to show the isochronic contours
    levels = [0, 6, 12, 18, 24, 30, 36, 42]
    # These are the colours of each level as RGB.
    colors = ['#BB4444','#DD5555','#FF9999','#FFCC55','#CCEEAA','#77DDCC','#AADDFF']
    # Show the contours.
    contours = ax.contourf(longrid, latgrid, times, 
            transform=ccrs.PlateCarree(), levels=levels, colors=colors)
 
    # Make a colorbar (legend) for the ContourSet returned by the contourf call.
    cbar = plt.colorbar(contours)
    cbar.ax.set_ylabel('Travel Time')
 
    # draw coastlines.
    ax.coastlines()
 
    # Show the plot
    plt.show()

The plot uses a Mercator projection but the times matrix is Plate Carree. cartopy handles the projection transform easily. All that has to be done is set the axes projection and set a transform on the contourf plot. Plate Carree means that each raster column represents a fixed interval of longitude and each raster row represents a fixed interval of latitude. Or as Wikipedia puts it:

The projection maps meridians to vertical straight lines of constant spacing (for meridional intervals of constant spacing), and circles of latitude to horizontal straight lines of constant spacing (for constant intervals of parallels).

With the overall script out of the way, here are the utility functions:

def raster_to_map(r, c, rows, cols):
    """
    Convert rows/cols to latitude/longitude
    This is a PlateCarree projection (pixels map linearly to degrees) and the
    entire image spans -180 to 180 and -90 to 90.    
    """
 
    lat = (r / rows) * -180 + 90
    lon = (c / cols) * 360 - 180    
 
    return lat, lon

raster_to_map converts image coordinates in rows and columns to latitude and longitude. This is just a linear conversion since the projection is Plate Carree and pixels map linearly to degrees.

def generate_grids(rows, cols):
    """
    Generate the grid of latitude/longitude coordinates
    It returns two arrays: one for latitude and one for longitude
    The corresponding pixels give the lat/lon coords for that pixel.
    """
 
    # Generate 1D array of coordinate indices for rows and cols
    # rowcoords is [0,1,...,rows-1]
    rowcoords = np.linspace(0, rows - 1, rows) # row coords
    # colcoords is [0,1,...,cols-1]
    colcoords = np.linspace(0, cols - 1, cols) # col coords
 
    # Stacks the 1D arrays into 2D arrays
    colgrid, rowgrid = np.meshgrid(colcoords, rowcoords)
 
    # Translate to map coordinates.
    return map(np.float32, raster_to_map(rowgrid, colgrid, rows, cols))

The travel time map is an image that spans 360 x 180 degrees and each pixel represents a particular location. The number of rows and columns set at the beginning of the script defines the resolution of the image. For example, 360 columns x 180 rows means each pixel has a resolution of 1 degree. We want two arrays that are the same size as the image to store the latitude and longitude coordinates of each pixel. By operating on these two arrays, we can perform calculations on each pixel's geographic location. generate_grids creates these latitude and longitude arrays.

To do it, generate_grids first creates two arrays to store the raster row and column coordinates of each pixel. These are then converted to latitude and longitude using raster_to_map.

The np.meshgrid may be worth explaining. Let's say you have 3 rows and 4 columns. Then:

>>> import numpy as np
>>> rowcoords = [0,1,2]
>>> colcoords = [0,1,2,3]
>>> colgrid, rowgrid = np.meshgrid(colcoords,rowcoords)
>>> colgrid
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
>>> rowgrid
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2]])

meshgrid basically just stacks the 1D coordinate arrays (rowcoords and colcoords) into 2D arrays, which together give the (row,col) coordinates of each pixel. These are then converted to latitude and longitude by raster_to_map in a vectorized operation. The returned arrays should be float32, so I used Python's map call to apply np.float32 to the raster_to_map return values. (In Python 3, map returns an iterator, which unpacks directly into the two grid variables.)

def read_airport_list(file):
    """
    Read the airport list file.
    """
 
    airport_cols = ['AirportID','Name','City','Country','IATA',
            'ICAO','Latitude','Longitude','Altitude',
            'Timezone','DST','Tz','Type','Source']
 
    # with na_values, \N is interpreted as NaN.
    airports = pd.read_csv(file, 
            names=airport_cols, header=None, na_values=['\\N'], index_col=4)
    airports = airports.dropna(how='any') # drop any airport with missing data
 
    return airports

read_airport_list uses Pandas to read the airports.dat file. It uses column 4 as the index (the IATA code) so we can query the airports dataframe using the IATA code. For example, airports.loc['YYZ']['Latitude'] gets the latitude field for YYZ.
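To illustrate the index_col behaviour, here is a hypothetical two-row sample in the same column layout as airports.dat (the values are approximate and only for demonstration):

```python
import io
import pandas as pd

# Hypothetical sample data in the same 14-column layout as airports.dat.
sample = io.StringIO(
    '1,"Toronto Pearson","Toronto","Canada","YYZ","CYYZ",'
    '43.68,-79.63,569,-5,"A","America/Toronto","airport","OurAirports"\n'
    '2,"Heathrow","London","United Kingdom","LHR","EGLL",'
    '51.47,-0.46,83,0,"E","Europe/London","airport","OurAirports"\n'
)

airport_cols = ['AirportID','Name','City','Country','IATA',
        'ICAO','Latitude','Longitude','Altitude',
        'Timezone','DST','Tz','Type','Source']

# index_col=4 makes the IATA column (position 4 in airport_cols) the index.
airports = pd.read_csv(sample, names=airport_cols, header=None,
        na_values=['\\N'], index_col=4)

lat = airports.loc['YYZ']['Latitude']  # look up a field by IATA code
```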

def haversine_torch(lon1, lat1, lon2, lat2):
    """
    Calculate the Haversine distance using PyTorch
    The arguments need to be tensors. To convert a scalar:
    torch.tensor(val)
 
    lon2 and lat2 can be single value tensors.
 
    The result will be a tensor.
    If there is only one value in the result, then use .item() to retrieve it.
    """
 
    # Convert to radians (all 3 do the same thing):
    #lon1, lat1, lon2, lat2 = [lon1 * math.pi/180, lat1 * math.pi/180, lon2 * math.pi/180, lat2 * math.pi/180]
    #lon1, lat1, lon2, lat2 = [x * math.pi/180 for x in [lon1, lat1, lon2, lat2]]
    lon1, lat1, lon2, lat2 = map(lambda x: x * math.pi/180, [lon1, lat1, lon2, lat2])
 
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
 
    distance = 2 * 6371.0 * torch.asin(
            torch.sqrt(
                torch.sin(latdiff/2.0).pow(2) + 
                torch.cos(lat1) * torch.cos(lat2) * torch.sin(londiff/2.0).pow(2)))
    return distance

haversine_torch uses PyTorch to calculate the Haversine Distance between (lon1,lat1) and (lon2,lat2). All of the input parameters must be PyTorch tensors. PyTorch will use the GPU for calculations if the tensors are on the GPU. The distance calculation is element-wise and lon2 and lat2 can be single value tensors, so it can calculate the distance from every pixel to a particular location.

It seems that as long as lon2 and lat2 are single value tensors created by converting a scalar (e.g. torch.tensor(35)), they don't have to be explicitly put on the GPU and will still be processed on the GPU. If they aren't single value tensors, they have to be put on the GPU explicitly with .to(device).

haversine_torch is not very fast if it's repeatedly called to calculate single distances (all arguments are single valued tensors) and a simpler numpy version is much faster. This is not an issue for the script described here.
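Such a NumPy version might look like the following sketch; it mirrors haversine_torch but works directly on Python scalars or NumPy arrays, so there is no tensor wrapping or .item() call:

```python
import numpy as np

def haversine_np(lon1, lat1, lon2, lat2):
    """NumPy Haversine: same formula as haversine_torch, for scalars or arrays."""
    lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Scalar call, e.g. Toronto Pearson to Heathrow (roughly 5700 km).
d = haversine_np(-79.63, 43.68, -0.46, 51.47)
```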

def destination_times(home_airport_code, airports, flight_speed):
    """
    Simple flight time to each destination airport. 
    Calculates the time it takes to fly directly to each destination airport.
    Output is a dictionary with the destination IATA code as the key and the
    flight time + location as values: 
    { IATACode: [flight time, longitude, latitude], ...}
    """
 
    print("Finding travel time to each destination airport by simple direct flight")
 
    origin_lon = airports.loc[home_airport_code]['Longitude']
    origin_lat = airports.loc[home_airport_code]['Latitude']
 
    destinationtimes = {}
 
    for index, airport in airports.iterrows():
        # lat/lon of this airport.
        lat = airport['Latitude']
        lon = airport['Longitude']
 
        # distance from home airport to this airport.
        # In this case, haversine_torch returns a single value tensor, so use
        # .item() to retrieve it as a number.
        distance = haversine_torch(
                *map(torch.tensor, [lon, lat, origin_lon, origin_lat])).item()
        destinationtimes[index] = [distance / flight_speed, lon, lat]
 
    return destinationtimes

destination_times calculates the time to fly to each airport from the home airport by direct flight. In reality, many destinations require layovers and multi-stop flights, and the map I created makes use of the OpenFlights route database for this. But to keep this post simple, I will not get into those details.

The *map() call applies torch.tensor to the list of variables [lon, lat, origin_lon, origin_lat] and the * operator expands these back into separate parameters to pass into haversine_torch.
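The same pattern with a toy function makes this clearer:

```python
# map() applies the conversion to each value, and * unpacks the results
# into separate positional arguments.
def add3(a, b, c):
    return a + b + c

values = ['1', '2', '3']
total = add3(*map(int, values))  # same as add3(int('1'), int('2'), int('3'))
```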

def pixel_travel_times_simple_minimum(destinationtimes, longrid_cuda, latgrid_cuda, ground_speed):
    """
    Generate the travel time matrix to each pixel on the map.
    destinationtimes indicates the time it takes to get to each destination
    airport from the home airport (the flight time)
    For each pixel, find the time it takes to get to it by flying to each
    destination airport and then "driving" the straightline distance to the
    pixel. The travel time to that pixel is the minimum travel time over all the
    destination airports.
    """
 
    print("Finding final travel times by minimum time")
 
    # Use the same type, size and device type as longrid_cuda
    times_cuda = torch.ones_like(longrid_cuda) * 1e6
 
    for i, (index, destination) in enumerate(destinationtimes.items()):
 
        if (i % 100 == 0):
            print(i, index)
 
        # lat/lon of this airport.
        lon = destination[1]
        lat = destination[2]
 
        # distance to this airport for all pixels.
        distances_cuda_airport = haversine_torch(
                longrid_cuda, latgrid_cuda, *map(torch.tensor, [lon, lat]))
 
        # Find the minimum time over all airports.
        times_cuda = torch.min(times_cuda, 
                distances_cuda_airport / ground_speed + destination[0])
 
    return times_cuda.cpu().numpy()

Finally, pixel_travel_times_simple_minimum calculates the “driving” time to each pixel from each of the airports and takes the minimum over all airports as that pixel's travel time.

QGIS

The times array can be exported as a GeoTIFF using GDAL and imported into QGIS for visualization and mapping. The latitude extents had to be trimmed. This is because I wanted the map to use the Web Mercator projection, but QGIS does not handle projecting WGS 84 (latitude and longitude) coordinates that are too far north or south (Web Mercator is only defined to roughly ±85° latitude). The times layer is displayed using a discrete color ramp. For the coastlines and labels, I used the Natural Earth data set.
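As a sketch of what the export involves: a GeoTIFF needs a six-element geotransform that maps pixel coordinates to geographic coordinates. For the global Plate Carree raster used here it can be written down directly. The GDAL calls are shown as comments because they depend on a local GDAL install, and the filename times.tif is just a placeholder:

```python
# GDAL's six-element geotransform maps (col, row) to (lon, lat):
#   lon = gt[0] + col * gt[1],  lat = gt[3] + row * gt[5]
rows, cols = 1800, 3600
gt = (-180.0, 360.0 / cols, 0.0, 90.0, 0.0, -180.0 / rows)

# Sanity check: top-left corner is (-180, 90), bottom-right is (180, -90).
lon0, lat0 = gt[0] + 0 * gt[1], gt[3] + 0 * gt[5]
lon1, lat1 = gt[0] + cols * gt[1], gt[3] + rows * gt[5]

# The actual export (assuming GDAL's Python bindings are installed)
# would look roughly like:
# from osgeo import gdal, osr
# drv = gdal.GetDriverByName('GTiff')
# ds = drv.Create('times.tif', cols, rows, 1, gdal.GDT_Float32)
# ds.SetGeoTransform(gt)
# srs = osr.SpatialReference(); srs.ImportFromEPSG(4326)  # WGS 84
# ds.SetProjection(srs.ExportToWkt())
# ds.GetRasterBand(1).WriteArray(times)
# ds = None  # close the dataset to flush it to disk
```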

I had some trouble getting the Print Layout working well enough to export at the resolution and size that I wanted. I think the issue was mostly the map scale and how it translates to the Print Layout. With those issues eventually resolved, I exported the Print Layout to an image and imported it into Inkscape to do the final design.

About Peter Yu

I am a research and development professional with expertise in the areas of image processing, remote sensing and computer vision. I received BASc and MASc degrees in Systems Design Engineering at the University of Waterloo. My working experience covers industries ranging from district energy to medical imaging to cinematic visual effects. I like to dabble in 3D artwork, I enjoy cycling recreationally and I am interested in sustainable technology. More about me...


Copyright © 1997 - 2020 Peter Yu