From elevation data to visual experience

Overview

A mountain exists in the world as stone, ice, gravity, and weather. To render it on a screen, a digital system must first reduce it to numbers — a grid of elevation values, each cell recording how high the earth stands at that point above some reference datum (usually mean sea level). This grid is called a Digital Elevation Model, or DEM. Everything that follows in digital terrain visualisation — the shaded relief, the false-colour palette, the spinning flythrough, the photorealistic render — is a transformation of that grid of numbers into pixels. The mountain you see on Google Earth is not a photograph of a mountain. It is a mathematical surface, coloured and lit by algorithms, viewed through a virtual camera that obeys the same laws of projection as a Renaissance perspectival drawing. Understanding this pipeline — from raw measurement to visual output — is the key to understanding what digital terrain visualisation is, what it inherits, and what it invents.

The pipeline has three stages. First, data acquisition: how the numbers are gathered. Satellite radar (the Shuttle Radar Topography Mission, or SRTM, which in February 2000 measured the elevation of nearly the entire Earth’s surface from the Space Shuttle Endeavour), airborne LiDAR (Light Detection and Ranging — a laser scanner flown on an aircraft that pulses millions of light beams toward the ground and measures the time each takes to return, building a point cloud of extraordinary density and precision), and photogrammetry (the technique of extracting three-dimensional measurements from overlapping photographs, whether taken from aircraft or from satellites like the ALOS mission). Each method has different resolution, coverage, and accuracy. SRTM gave the world a 90-metre grid (later refined to 30 metres) of nearly the entire globe. LiDAR can achieve sub-metre resolution but covers only small areas. Photogrammetry falls between.

Second, the elevation model itself. Three terms are used, and the differences matter. A Digital Surface Model (DSM) records the height of whatever the sensor hits first — treetops, rooftops, the surface of a glacier. A Digital Terrain Model (DTM) strips away vegetation and buildings to reveal the bare earth beneath. A Digital Elevation Model (DEM) is the generic term that covers both, though in common usage it often means DTM. The difference is not pedantic: a DSM of a Himalayan valley shows the forest canopy draped over the slopes; a DTM of the same valley reveals the landforms hidden beneath — the river terraces, the moraine ridges, the fault scarps that the trees conceal. Stripping the surface to bare earth is itself a creative act, a kind of digital archaeology that reveals structure invisible to the eye.

Third, rendering. The grid of elevation values is transformed into a visual image. The simplest rendering is hill-shading — simulating a light source (conventionally placed in the northwest, following the Swiss cartographic tradition documented in the cartography report, B5) and calculating how each cell of the grid would be illuminated, producing a greyscale image of light and shadow that gives the flat grid the appearance of three-dimensional relief. Add hypsometric tinting — mapping elevation values to a colour ramp, typically green at low elevations through brown and grey to white at the highest — and you have the familiar terrain map. Drape satellite imagery over the 3D surface and you have a “natural colour” view. Place a virtual camera at an oblique angle and you have a 3D perspective view. Animate the camera along a path and you have a flythrough. Each of these is a choice, and each encodes a way of seeing.
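The arithmetic behind that hill-shading step is compact enough to show. The sketch below is a minimal Lambertian model in Python with NumPy, not the exact routine of any particular GIS package; the grid orientation (row 0 as the northern edge) and the northwest light at 45 degrees altitude are assumptions for illustration:

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian hill-shading: illuminate a DEM from one distant light source.

    Azimuth 315 degrees places the light in the northwest (the Swiss
    convention); altitude is the light's angle above the horizon. Row 0 of
    the grid is taken to be the northern edge.
    """
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    di, dj = np.gradient(dem.astype(float), cellsize)
    dz_dx = dj            # eastward gradient (columns run west to east)
    dz_dy = -di           # northward gradient (row index grows southward)
    # Unit vector pointing toward the light (azimuth clockwise from north).
    lx = np.sin(az) * np.cos(alt)       # east component
    ly = np.cos(az) * np.cos(alt)       # north component
    lz = np.sin(alt)                    # up component
    # Brightness = cosine of the angle between the surface normal and the light.
    numer = -dz_dx * lx - dz_dy * ly + lz
    denom = np.sqrt(1.0 + dz_dx**2 + dz_dy**2)
    return np.clip(numer / denom, 0.0, 1.0)   # 0 = black, 1 = white

# A plane tilted to face the northwest catches the light; its mirror image
# faces away and falls into shadow.
i, j = np.mgrid[0:20, 0:20]
nw_facing = 0.5 * (i + j)     # rises toward the south-east, so it faces north-west
bright = hillshade(nw_facing, cellsize=1.0).mean()
dim = hillshade(-nw_facing, cellsize=1.0).mean()
```

Changing `azimuth_deg` re-lights the same terrain from a different direction, which is precisely why the light position is a compositional choice rather than a neutral default.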

The state of the art includes Mapbox Terrain (vector tile-based terrain rendering for web maps), Google Earth (the application that in 2005 made terrain visualisation a mass experience), Cesium (an open-source platform for 3D geospatial visualisation), three.js terrain renderers (bringing elevation data into the browser through WebGL), and Blender GIS (a plugin that imports real-world terrain data into the 3D modelling software Blender for artistic rendering). These tools are the digital descendants of Swiss hill-shading, but they possess capabilities no analog cartographer could have imagined: real-time rotation, dynamic lighting, continuous zoom from continental scale to individual boulders, and the ability to drape any dataset — temperature, vegetation, population, history — onto the surface of the earth.

Note on method: this report is written from training knowledge. Web resources were not consulted in real time. URLs in the final section are provided from known-good sources but should be verified before use.

Origins and evolution

The digital rendering of terrain begins, like so much else in computing, with military necessity. In the 1960s and 1970s, the United States military needed to model terrain for line-of-sight analysis (can this gun emplacement see that valley?), route planning (which path avoids detection?), and missile guidance (a cruise missile flying at treetop height needs a detailed model of the ground beneath it). The Defense Mapping Agency developed some of the earliest gridded elevation datasets and the algorithms to visualise them. The first computer-generated terrain images were crude — wireframe grids rendered on cathode-ray tube displays, the mountain reduced to a mesh of green lines on black — but they established the fundamental principle: terrain as a mathematical surface that a computer can rotate, illuminate, and view from any angle.

Through the 1980s and early 1990s, terrain visualisation remained the province of specialists: military analysts, geologists, a handful of academic cartographers. The data was expensive, classified, or available only at coarse resolution. The software ran on workstations that cost tens of thousands of dollars. The aesthetic was functional — grey-shaded relief maps, false-colour elevation plots, wireframe perspectives printed on pen plotters. There was no public audience.

Two events changed everything. The first was the Shuttle Radar Topography Mission (SRTM) in February 2000. Over eleven days, the Space Shuttle Endeavour carried a radar interferometer that measured the elevation of the Earth’s surface between 60 degrees north and 56 degrees south latitude — roughly eighty percent of the planet’s land area — at approximately 90-metre resolution for initial public release (the full 30-metre data was released globally from 2014). The data was made freely available by NASA and the USGS. Overnight, anyone with a computer could download a detailed elevation model of the Karakoram, the Andes, the Alps, or any other mountain range on Earth. The SRTM dataset is the cartographic equivalent of the printing press: it democratised access to the shape of the world.

The second event was Google Earth. Originally developed as EarthViewer 3D by Keyhole, Inc. (a company partly funded by the CIA’s venture capital arm, In-Q-Tel — the military origins of terrain technology run deep), it was acquired by Google in 2004 and released as Google Earth in 2005. For the first time, a mass audience could fly over the Himalaya in three dimensions, zooming from orbital altitude down to valley level, the terrain draped in satellite imagery, the mountains rising from the screen with startling presence. Google Earth did for terrain visualisation what the Gutenberg Bible did for literacy: it made a previously elite experience universally accessible. Within a year of its release, hundreds of millions of people had seen the Earth’s surface rendered in 3D.

The rise of WebGL (a standard for rendering 3D graphics in web browsers, supported from around 2011 onward) brought terrain visualisation out of standalone applications and into the browser. Cesium, an open-source JavaScript library launched in 2012, allowed developers to build Google Earth-like experiences on the open web. Mapbox GL, released around 2014, brought hardware-accelerated map rendering to web maps with elegant cartographic styling; full 3D terrain followed in version 2 of the library, around 2020. deck.gl, developed by Uber’s visualisation team, added high-performance geospatial layers. Suddenly, a web developer with modest skills could embed a 3D terrain view in a webpage.

Simultaneously, the open data movement expanded the range of available elevation data. The Japanese Aerospace Exploration Agency (JAXA) released the ALOS World 3D dataset at approximately 30-metre resolution, derived from the ALOS satellite’s stereo imagery. The European Union’s Copernicus programme released the Copernicus DEM at 30-metre and 90-metre resolution. OpenTopography began aggregating and serving high-resolution LiDAR datasets. Resolution has steadily improved: from 90-metre SRTM in 2000, to 30-metre SRTM and ALOS by the mid-2010s, to 12.5-metre ALOS refined products, to sub-metre LiDAR datasets for selected areas. Each leap in resolution reveals finer structure — individual ridgelines, gully networks, glacial striations — that coarser data could only suggest.

The democratisation of tools has been equally dramatic. QGIS, a free and open-source geographic information system, can import elevation data, generate hill-shading and contour lines, and produce publication-quality terrain maps. Blender, a free 3D modelling application, combined with the Blender GIS plugin, allows artists to import real-world terrain data and render it with cinematic lighting, atmospheric haze, and physically based materials. Aerialod, a small free application by Ephtracy, renders elevation data as voxel landscapes with a distinctive toylike aesthetic. The tools that once required a military budget and a room-sized computer now run on a laptop.

Colour

Begin, as a painter would, with the ground. A raw DEM has no colour. It is a grid of numbers — elevation values, nothing more. To see it, you must map those numbers to something the eye can read. The simplest mapping is greyscale: low elevations rendered as dark grey, high elevations as light grey (or the reverse). This produces an image that looks like a plaster cast of the landscape, every fold and ridge revealed in neutral tone. There is a stark beauty to greyscale bare-earth DEMs — the kind of beauty you find in an unglazed clay sculpture, where the absence of colour forces attention to form. When you strip a Himalayan valley to its bare-earth DEM, the terrain becomes a sculpture in grey: alluvial fans spread like opened hands, moraine ridges trace the former extent of glaciers, river terraces step down toward the current channel like a staircase built by geological time. This is the landscape as Brancusi might have carved it — reduced to essential form.
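That first mapping is a single line of arithmetic: stretch the elevation range linearly onto 8-bit grey. A minimal sketch, with illustrative elevations:

```python
import numpy as np

def to_greyscale(dem):
    """Linearly map elevation to 8-bit grey: lowest cell -> 0, highest -> 255."""
    z = dem.astype(float)
    lo, hi = z.min(), z.max()
    if hi == lo:
        # A perfectly flat DEM carries no relief; render it mid-grey.
        return np.full(z.shape, 128, dtype=np.uint8)
    return np.round((z - lo) / (hi - lo) * 255.0).astype(np.uint8)

dem = np.array([[1000.0, 2000.0], [3000.0, 5000.0]])   # metres
grey = to_greyscale(dem)   # 1000 m -> grey 0, 5000 m -> grey 255
```

Reversing the ramp (255 minus the result) gives the inverted convention mentioned above; either way, the stretch is relative to the scene, so the same grey never means the same elevation in two different images.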

Hypsometric tinting is the convention of mapping elevation to colour. In its most familiar form — the one you have seen on a thousand wall maps and atlases — it runs from green at low elevations through yellow and brown to white at the peaks. Green means lowland, verdant, warm. Brown means highland, barren, windswept. White means snow, ice, the domain above life. This palette is so ubiquitous that it has become invisible: we no longer see it as a choice; we see it as the colour of the earth. But it is a choice, and a problematic one. The green-to-white ramp implies that low places are vegetated and high places are snowy, which is true in the Alps and the Himalaya but absurd in the Sahara or the Tibetan Plateau, where high terrain is brown desert and low terrain is also brown desert. The convention encodes a European temperate-zone assumption about what landscapes look like.
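Computationally, a hypsometric tint is a piecewise-linear interpolation between colour stops. The stop elevations and colours below are illustrative, not any standard palette; they deliberately reproduce the green-to-white convention just described:

```python
import numpy as np

# Illustrative colour stops (elevation in metres -> RGB 0-255): the familiar
# green-to-white ramp, chosen here purely for demonstration.
STOPS = [(0,    (70, 120, 50)),     # lowland green
         (1500, (200, 180, 90)),    # yellow-brown foothills
         (3500, (130, 100, 80)),    # bare brown highland
         (5000, (255, 255, 255))]   # snow and ice

def hypsometric_tint(dem):
    """Map each elevation to a colour by interpolating between the stops."""
    elevs = np.array([e for e, _ in STOPS], dtype=float)
    rgb = np.array([c for _, c in STOPS], dtype=float)
    z = np.clip(dem.astype(float), elevs[0], elevs[-1])
    # np.interp handles one channel at a time; stack the three into an image.
    channels = [np.interp(z, elevs, rgb[:, k]) for k in range(3)]
    return np.stack(channels, axis=-1).astype(np.uint8)

img = hypsometric_tint(np.array([[0.0, 5000.0], [750.0, 6000.0]]))
```

Every critique in the paragraph above lives in the `STOPS` table: swap in Imhof-style warm greys and the same code renders a completely different, and arguably more truthful, landscape.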

Worse, many digital terrain visualisations use the rainbow colour scale — the full spectral ramp from blue through green, yellow, orange, and red. This is the palette of a thousand bad scientific posters, and its problems are well-documented in perception research. The human eye does not perceive the rainbow as a smooth gradient: it sees sharp boundaries between green and yellow, between yellow and orange, that do not correspond to any real feature in the data. A rainbow-coloured DEM of the Karakoram will show false visual boundaries — apparent terraces, apparent cliffs — where the terrain is actually smooth, simply because the colour ramp happens to jump between perceptual categories at those elevations. The rainbow palette lies to the eye. It creates pattern where none exists.

The Swiss cartographic tradition, as described in the cartography report (B5), offers a superior approach. Eduard Imhof, the great cartographer of ETH Zurich, developed hand-painted hypsometric palettes for Swiss topographic maps that used a carefully modulated sequence of warm and cool tones: ochre and warm brown in the valleys, cooler grey-brown on the middle slopes, blue-grey and violet in the shadows of high rock faces, white with a faint blue tint for snow and ice. These palettes were designed not for abstract elegance but for perceptual truthfulness — they corresponded, with subtle accuracy, to what a human eye actually sees when looking at Alpine terrain under natural light. Digitising Imhof’s palettes — translating his hand-mixed watercolour gradients into numerical colour ramps that a computer can apply to a DEM — has been one of the quiet achievements of modern terrain cartography. The website shadedrelief.com, maintained by Tom Patterson (a cartographer formerly of the U.S. National Park Service), has been a key resource in this effort.

Satellite imagery draped over a 3D terrain model creates what is called a “natural colour” view — the Google Earth aesthetic. But “natural colour” is itself a construction. The satellite image is captured in specific spectral bands (red, green, blue, and often near-infrared), at a specific time of day, in a specific season, under specific atmospheric conditions. The image processing pipeline applies atmospheric correction (removing the blue haze that satellite sensors see through many kilometres of atmosphere), contrast enhancement (stretching the tonal range to fill the display), and colour balancing (adjusting the white point to compensate for the colour of the illumination). The “natural” colours of a Google Earth view of the Himalaya are no more natural than the colours of a hypsometric tint — they are a different construction, built from different assumptions, but a construction nonetheless. The season matters enormously: a winter image of the western Himalaya, with snow covering the passes and the forests bare, tells a completely different visual story from a monsoon image of the same terrain, with green valleys and cloud-shrouded ridgelines.

The best digital terrain artists use colour with restraint. There is a kinship here with shan-shui painting (A9), where the ink painter’s decision to work in monochrome is not a limitation but a discipline — the assertion that form matters more than surface appearance, that the essence of the mountain is its shape, not its colour. A beautifully rendered greyscale hill-shade of the Karakoram, lit from the northwest with careful attention to the falloff of shadow in deep gorges, can be more visually powerful than any satellite-draped 3D flythrough. Less is more. The mountain emerges from the restraint.

Composition and spatial logic

The defining compositional innovation of digital terrain visualisation is the virtual camera: a mathematical point in space, with a position, an orientation, a field of view, and a projection model, which renders the terrain surface as seen from that vantage. Unlike a physical camera, the virtual camera can be placed anywhere — in orbit, at the summit, inside the mountain, a centimetre above a glacier surface. Unlike a painter, the digital artist is not constrained by human experience: they can show the mountain from viewpoints no human eye has ever occupied and no human foot could ever reach.

This freedom is both a gift and a danger. The 3D perspective view — the oblique aerial view that tilts the terrain toward the viewer, so that mountains rise from the screen with dramatic relief — has become the default idiom of digital terrain visualisation. It is visually compelling. It conveys the three-dimensionality that plan-view maps suppress. But it also introduces distortions that the viewer rarely notices. The most pervasive is vertical exaggeration. Real mountains, displayed at true 1:1 scale on a screen, look surprisingly flat. The Himalaya, with nearly nine thousand metres of vertical relief spread across two hundred kilometres of horizontal distance, has an average gradient that is steep by geological standards but gentle to the eye at screen scale. To make mountains “look like mountains,” terrain renderers routinely apply vertical exaggeration — stretching the elevation values by a factor of 1.5, 2, or even 3. This makes the terrain more dramatic, but it distorts the viewer’s sense of slope, steepness, and form. A vertically exaggerated Everest looks like a needle; the real Everest, seen from a distance, is a broad pyramid. The exaggeration is so universal, and so rarely disclosed, that most viewers of digital terrain have never seen the mountains at their true proportions.
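The distortion is easy to put numbers on. The sketch below compares true and exaggerated slope angles; the figures are illustrative, not drawn from any particular renderer:

```python
import math

def slope_deg(rise_m, run_m, exaggeration=1.0):
    """Apparent slope angle (degrees) after stretching elevations vertically."""
    return math.degrees(math.atan((rise_m * exaggeration) / run_m))

# A flank climbing 3,000 m over 10 km of horizontal distance:
true_slope = slope_deg(3000, 10_000)                   # about 16.7 degrees
rendered = slope_deg(3000, 10_000, exaggeration=2.0)   # about 31 degrees
```

At gentle angles the arctangent is nearly linear, so doubling the elevation values nearly doubles the apparent steepness; the lie is largest exactly where the terrain is least dramatic.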

Camera position and field of view compound the distortion. A wide-angle virtual camera (like a fisheye lens) exaggerates the foreground and compresses the background, making near terrain loom and far terrain shrink. A telephoto virtual camera flattens the scene, stacking mountain ranges against each other like theatrical scenery. The choice of camera parameters is a compositional decision as consequential as the choice of vantage point in a painting — but in digital terrain visualisation it is often made by default, by whatever the software happens to set, rather than by deliberate artistic judgment.

The flythrough — an animation in which the virtual camera moves along a path through the terrain — introduces temporal composition. The viewer experiences the landscape not as a single image but as a sequence of changing views, a journey through digital space. This format inherits something from the Chinese handscroll (described in the shan-shui report, A9): the landscape unfolds in time, revealing itself progressively rather than all at once. But where the handscroll’s pace is set by the viewer’s hands, the flythrough’s pace is set by the animator. The rhythm of revelation — how quickly the camera moves, when it pauses, what it lingers on, what it rushes past — becomes the compositional structure.

Level of detail (LOD) systems, used by all real-time terrain renderers (Google Earth, Cesium, Mapbox), create a spatial hierarchy that echoes atmospheric perspective in painting. Terrain near the virtual camera is rendered at full resolution — every ridge, every gully, every boulder. Terrain at the horizon is rendered at reduced resolution — smoothed, simplified, stripped of fine detail. The effect is analogous to what happens in the atmosphere: near objects are sharp, far objects are hazy. In painting, this is called aerial perspective, and it is one of the oldest depth cues in art. In digital terrain rendering, it is an engineering optimisation (rendering far terrain at full resolution would overwhelm the graphics card), but it produces a visual effect that the eye reads as natural depth. The engineering constraint and the aesthetic principle happen to align.

Pattern and geometry

When you look at a DEM from directly above — the plan view, the cartographer’s view — the elevation data reveals geological pattern with a clarity that no ground-level observation or even aerial photograph can match. The most striking patterns are drainage networks: the branching systems of rivers and their tributaries that dissect the terrain surface. A dendritic (tree-like) drainage network, with its trunk stream gathering branches that gather smaller branches in a self-similar fractal pattern, indicates terrain of uniform geological composition — the water carves its paths without encountering structural barriers. A trellis drainage pattern, with main streams flowing along structural valleys and tributaries joining at right angles, indicates folded sedimentary rock — the rivers follow the soft strata and cut across the hard ones. A radial pattern, with streams radiating outward from a central point, indicates a volcanic cone or a structural dome. Each drainage pattern is a signature of the underlying geology, written in water on the face of the earth, and a DEM makes it legible at a glance.

Beyond raw elevation, DEM data supports derived analyses that reveal further pattern. Slope analysis calculates the steepness of each cell, producing a map of gentle and precipitous terrain. Aspect analysis calculates the compass direction each cell faces — north-facing, south-facing, east, west — producing a map that, in a Himalayan context, immediately reveals the asymmetry between sun-drenched southern slopes (warm, dry, often deforested) and shaded northern slopes (cool, moist, often forested). Curvature analysis distinguishes between convex surfaces (ridgelines, where water diverges) and concave surfaces (valley bottoms, where water converges). Each of these derived layers transforms a single elevation dataset into a rich, multi-dimensional portrait of the terrain’s character.
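The first two of those derived layers are short NumPy computations. The sketch below assumes row 0 of the grid is its northern edge and reports aspect as compass degrees clockwise from north; curvature, which needs second derivatives, is omitted:

```python
import numpy as np

def slope_aspect(dem, cellsize=30.0):
    """Per-cell slope (degrees) and aspect (compass degrees clockwise from north).

    Assumes row 0 is the northern edge of the grid and columns run west to east.
    """
    di, dj = np.gradient(dem.astype(float), cellsize)
    dz_dx = dj        # eastward gradient
    dz_dy = -di       # northward gradient (row index increases southward)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect is the downslope direction: opposite to the gradient vector.
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

# A plane rising steadily toward the north drains southward: a 45-degree
# south-facing slope (aspect 180) when the rise equals the cell size.
rows = np.arange(5, dtype=float)
dem = 30.0 * rows[::-1, None] * np.ones((5, 5))   # higher at the top (north)
slope, aspect = slope_aspect(dem, cellsize=30.0)
```

Run on a real Himalayan tile, the `aspect` layer alone makes the sunny-side/shady-side asymmetry described above visible as two interlocking populations of values near 180 and near 0.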

The fractal geometry of terrain is one of the profound mathematical insights of the twentieth century. Benoit Mandelbrot, in The Fractal Geometry of Nature (1982), observed that natural landforms exhibit statistical self-similarity across scales: a coastline looks equally jagged whether measured at the scale of a continent or a bay; a ridgeline has the same roughness whether viewed from orbit or from a hillside. This property is described by the fractal dimension — a number between 2 (a perfectly flat surface) and 3 (a surface so rough it fills three-dimensional space) — and it governs both the analysis and the synthesis of terrain. When procedural terrain generation algorithms (described in C4) create artificial landscapes, they rely on fractal noise functions — mathematical recipes that produce surfaces with the same statistical roughness as real terrain. The plausibility of procedural mountains depends on getting the fractal dimension right: too smooth and the terrain looks melted; too rough and it looks like crumpled foil.
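One standard recipe for such surfaces is spectral synthesis: generate white noise, then reshape its frequency spectrum so amplitude falls off with frequency at a rate set by the Hurst exponent H, with fractal dimension commonly given as D = 3 - H. The exponent convention (amplitude falling off as f to the power -(H + 1)) and the parameters below are one common formulation, offered as a sketch rather than a canonical algorithm:

```python
import numpy as np

def fractal_surface(n=128, hurst=0.8, seed=0):
    """Synthesise fractal terrain by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(rng.standard_normal((n, n)))
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                        # placeholder; the mean term is zeroed below
    amplitude = f ** -(hurst + 1.0)      # low frequencies dominate as H grows
    amplitude[0, 0] = 0.0                # remove the zero-frequency (mean) term
    surface = np.fft.ifft2(spectrum * amplitude).real
    # Normalise to a 0..1 height field for rendering.
    return (surface - surface.min()) / (surface.max() - surface.min())

rough = fractal_surface(hurst=0.3)    # D ~ 2.7: crumpled-foil terrain
smooth = fractal_surface(hurst=0.9)   # D ~ 2.1: gentle rolling terrain
```

Feeding the two outputs through a hill-shader makes the text's point tangible: the low-H surface reads as noise, the high-H surface as melted wax, and plausible mountains live in the band between.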

Contour lines extracted from DEM data close the circle between digital and analog cartography. A contour line is a line of constant elevation — the digital equivalent of Imhof’s hand-drawn brown curves. When a GIS application extracts contours from a DEM, it traces the boundary between cells above and cells below each chosen elevation value, producing the same sinuous, flowing patterns that a cartographer once drew by hand from field survey data. The digital contour is mathematically precise but aesthetically raw — it needs smoothing, generalisation, and careful labelling to match the quality of a hand-drawn contour map. The gap between an automatically generated contour and an Imhof contour is the gap between a MIDI piano and a Steinway: both produce the same notes, but only one produces music.
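The boundary-tracing step can be sketched directly: mark every cell at or above a chosen level that has a 4-neighbour below it. What results is the raw material of a contour, before any of the smoothing and generalisation a finished map needs; a minimal illustrative version:

```python
import numpy as np

def contour_cells(dem, level):
    """Boolean mask of cells on the boundary between above and below `level`.

    A cell is marked when it is at or above the level but at least one of its
    4-neighbours is below it: the raw cells a contour tracer connects into lines.
    """
    above = dem >= level
    edge = np.zeros_like(above)
    # Compare each cell with its north/south/east/west neighbours.
    edge[1:, :] |= above[1:, :] & ~above[:-1, :]
    edge[:-1, :] |= above[:-1, :] & ~above[1:, :]
    edge[:, 1:] |= above[:, 1:] & ~above[:, :-1]
    edge[:, :-1] |= above[:, :-1] & ~above[:, 1:]
    return edge

# A cone: contours at fixed elevations come out as concentric rings.
y, x = np.mgrid[-20:21, -20:21]
cone = 1000.0 - 10.0 * np.hypot(x, y)
ring = contour_cells(cone, level=900.0)
```

The jagged, cell-aligned ring this produces is exactly the "mathematically precise but aesthetically raw" contour described above; production tracers interpolate sub-cell crossings and smooth the result.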

Local legends and iconography

Digital terrain visualisation has no indigenous iconographic programme. It has no centuries of painted convention, no mineral pigments, no ritual context. It is, in the strict sense, a technology without a culture. But it is not without ideology. The technology carries within it a set of commitments — about who sees the earth, from what vantage, and for what purpose — that are as consequential as any iconographic programme, and considerably less visible.

The military origins are foundational. DEM data was developed for targeting, route planning, and line-of-sight analysis. The algorithms that render terrain on your screen were first written to help cruise missiles navigate valleys and avoid radar detection. The SRTM mission that gave the world free elevation data was a collaboration between NASA and the National Imagery and Mapping Agency (since renamed the National Geospatial-Intelligence Agency, NGA) — the intelligence agency responsible for geospatial surveillance. The very first commercial satellite terrain viewer, Keyhole (later Google Earth), was funded by the CIA’s venture capital arm. None of this makes the technology evil, but it does mean that when you look at a 3D mountain on a screen, you are looking through a lens ground, in the first instance, for war.

The surveillance implications intensify with resolution. At 90-metre SRTM resolution, you can see the general shape of a valley. At sub-metre LiDAR resolution, you can see individual houses, field boundaries, and footpaths. High-resolution terrain data, combined with satellite imagery, enables a form of remote surveillance that was previously impossible. Governments, military forces, and intelligence agencies can map the terrain of any region on Earth without setting foot there. This capability has obvious applications in border security, counter-insurgency, and territorial control. It is the digital continuation of the Great Trigonometric Survey’s project — making the landscape legible, governable, controllable — conducted now by satellite rather than by theodolite, but serving recognisably similar purposes.

And yet the same technology serves radically different ends. Community mapping projects use open-source GIS tools and freely available elevation data to document indigenous land rights, map customary territories, and challenge state-imposed boundaries. Disaster response organisations use terrain data to model flood inundation, predict landslide paths, and plan evacuation routes. Conservation groups use LiDAR to map forest structure, monitor deforestation, and discover archaeological sites hidden beneath jungle canopy. The technology is agnostic; the ideology is in the application.

There is, finally, what the writer Frank White called the “overview effect” — the cognitive and emotional shift experienced by astronauts who see the Earth from space. Digital terrain visualisation, with its ability to place the viewer at orbital altitude looking down at the entire Himalayan arc, approximates this experience. The overview effect is often described as a secular revelation: the Earth is one system, without borders, fragile and beautiful. This is not a traditional iconography, but it functions as one — it generates awe, it shifts perspective, it reframes the relationship between the viewer and the land. When a user zooms out on Google Earth until the Himalaya appears as a single sinuous arc of white across the brown mass of Asia, they are experiencing something analogous to what a Tibetan cosmographic painter depicts when placing Mount Meru at the centre of a painted universe: the mountain as axis, as anchor, as the structure around which the world is organised.

Key works and where to see them

The following tools, datasets, and projects represent significant moments in the evolution of digital terrain visualisation. Each is accessible online or as free software and is worth exploring firsthand.

Google Earth (2005–present). The application that made terrain visualisation a mass experience. Available as a desktop application and at earth.google.com. Its 3D terrain, draped in satellite imagery, remains the most widely experienced digital rendering of mountains in history. Navigate to the Himalaya, tilt the view, and zoom into any valley — you are looking at the convergence of SRTM elevation data, satellite photography, and real-time 3D rendering. For all its ubiquity, Google Earth rewards careful looking: experiment with the time slider to see the same terrain in different seasons and different years.

The SRTM Global Elevation Dataset (2000). The dataset that democratised terrain data. Freely downloadable from USGS EarthExplorer (https://earthexplorer.usgs.gov) and NASA’s LP DAAC. At 30-metre resolution, it covers nearly the entire Earth. Download a tile covering the Karakoram, load it into QGIS, and generate your own hill-shade — the exercise is revelatory. You will understand, in a way that no amount of reading can convey, that the shaded relief map is a construction: change the light angle and the same terrain tells a different story.

Mapbox Terrain (2014–present). Mapbox’s terrain rendering for web maps, built on vector tiles and WebGL, brought cartographically styled 3D terrain to the browser. The Mapbox house style — muted earth tones, elegant typography, careful hill-shading — inherits the Swiss cartographic tradition and translates it into pixels. Visible at mapbox.com and in applications built on the Mapbox GL JS library.

Cesium and CesiumJS (2012–present). An open-source platform for 3D geospatial visualisation. Cesium renders terrain in the browser using WebGL and supports the draping of any imagery or data layer over a 3D globe. It powers applications from flight simulation to urban planning. Available at cesium.com. For an art student, Cesium is interesting as a platform where the compositional decisions — camera angle, vertical exaggeration, lighting — are exposed as adjustable parameters rather than hidden behind defaults.

OpenTopography (2009–present). A portal for high-resolution topographic data, particularly LiDAR. While Himalayan LiDAR coverage is limited, the platform hosts extraordinary datasets from other mountain regions — the Swiss Alps, the Cascades, the Southern Alps of New Zealand — that demonstrate what sub-metre terrain data reveals. Available at opentopography.org. The visualisation tools on the site allow immediate rendering of downloaded data.

The National Geographic Everest Map (1988, Bradford Washburn). A masterwork of terrain cartography that bridges the analog and digital eras. Washburn used aerial photogrammetry to produce a 1:50,000 contour map of the Everest massif at extraordinary detail. While the cartography was analog, the photogrammetric measurements that underlie it are essentially the same data-to-surface pipeline that digital terrain visualisation uses. Available in print from National Geographic and in select map collections.

NASA Scientific Visualization Studio. NASA’s SVS (https://svs.gsfc.nasa.gov) produces terrain visualisations and flythrough animations of remarkable quality, using elevation data combined with satellite imagery and atmospheric modelling. Their Himalayan flythrough sequences — showing the arc from the Karakoram to eastern Nepal, lit by simulated sunlight, with atmospheric haze — demonstrate what is possible when terrain rendering is treated as a visual art rather than a technical demonstration.

PeakFinder and PeakVisor. Mobile applications that use the phone’s camera orientation and GPS position to identify visible mountain peaks in real time, matching the view against a background DEM and overlaying labels on the live camera image. These are augmented reality terrain visualisations — the digital elevation model is used not to replace the visible landscape but to annotate it. They demonstrate that DEM data is not only a source of images but a source of knowledge about what you are looking at.

QGIS + Blender GIS Pipeline. The combination of QGIS (free GIS software) and the Blender GIS plugin (which imports georeferenced terrain data into Blender’s 3D environment) has become the standard pipeline for artistic terrain rendering. The workflow is: download DEM and satellite imagery from free sources, import into QGIS, export a terrain mesh, import into Blender, apply materials and lighting, render. The results can be strikingly beautiful — cinematic mountain landscapes built from real data. Tutorials are widely available online and the entire pipeline is free.

Daniel Huffman’s Cartographic Art. Huffman (somethingaboutmaps.com) is a contemporary cartographer who uses digital tools — primarily QGIS, Blender, and Adobe Illustrator — to produce terrain visualisations of extraordinary aesthetic quality. His work demonstrates that the digital terrain pipeline, in skilled hands, can produce images that honour the Swiss cartographic tradition while exploring new visual territory: unconventional colour palettes, dramatic lighting angles, layered textures that recall watercolour rather than digital rendering.

Further exploration

The following resources are recommended for readers who wish to explore digital terrain visualisation further. All were accessible online at the time of writing; URLs should be verified.

Shaded Relief (https://shadedrelief.com) — Maintained by Tom Patterson, formerly of the U.S. National Park Service, this site is the single best resource on the art and technique of digital terrain rendering. Patterson is a direct heir to the Imhof tradition, and the site offers tutorials, colour palettes, and examples of shaded relief done well. Start here. The manual shading tutorial alone is worth hours of study for anyone interested in how light and shadow create the illusion of terrain.
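The effect Patterson teaches by hand can also be approximated analytically: standard hillshading computes, for each cell, how directly the local surface faces a distant light source, from the slope and aspect of the terrain. A sketch of one common slope/aspect formulation, assuming gradient-based normals and the conventional cartographic light from the upper left at 45° altitude (sign conventions for azimuth and aspect vary between GIS packages):

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Hillshade a 2-D elevation grid; returns values in [0, 1].

    1.0 = slope fully facing the light, 0.0 = fully shadowed slope.
    """
    az = np.radians(azimuth_deg)
    zenith = np.radians(90.0 - altitude_deg)
    # Elevation gradients via central differences, scaled by cell size.
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.cos(zenith) * np.cos(slope)
              + np.sin(zenith) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Synthetic "mountain": a Gaussian bump on a 100 x 100 grid.
x, y = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
dem = 1000.0 * np.exp(-(x**2 + y**2) * 4.0)
shade = hillshade(dem)
```

On flat ground the formula reduces to cos(zenith), a uniform mid-grey; the highlights and shadows that make relief legible come entirely from the slope and aspect terms.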

USGS EarthExplorer (https://earthexplorer.usgs.gov) — The portal for downloading free elevation data, including SRTM, ASTER, and other global DEMs. Registration is free. Select an area of interest, choose a dataset, and download. The experience of loading a raw DEM into QGIS and seeing the Karakoram emerge in greyscale is genuinely moving — the numbers become a landscape.

OpenTopography (https://opentopography.org) — High-resolution topographic data, primarily LiDAR. The site includes tools for visualising data in the browser and educational resources on terrain analysis. Even if your area of interest lacks LiDAR coverage, the available datasets from other mountain regions demonstrate the extraordinary detail that high-resolution elevation data can reveal.

Copernicus Open Access Hub (https://scihub.copernicus.eu) — The European Union’s portal for Copernicus satellite data, including the Copernicus DEM at 30-metre resolution. Free registration. The Copernicus DEM is one of the most recent global elevation datasets and offers excellent quality across the Himalayan region.

Mapbox Documentation (https://docs.mapbox.com) — Technical documentation for Mapbox’s terrain rendering system, including the Mapbox GL JS library. For a reader interested in how web-based terrain visualisation works under the hood — how vector tiles are constructed, how terrain shading is calculated in the fragment shader, how the camera model works — this is a clear and well-written resource.
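The tiling scheme underlying such systems is easy to sketch: at zoom level z the Web Mercator world is divided into a 2^z × 2^z grid of square tiles, and a longitude/latitude pair maps to a tile index with a little arithmetic. This is the widely shared “slippy map” convention used, with variations, by Mapbox, OpenStreetMap, and others; the production services layer vector encoding and tile buffering on top of it:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the (x, y) Web Mercator tile index containing lon/lat at a zoom.

    x runs west to east, y runs north to south; both range over 0 .. 2^zoom - 1.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 the whole world is one tile; each additional zoom level quadruples the tile count, which is why level-of-detail systems only load the tiles the camera can actually see.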

Cesium Documentation and Tutorials (https://cesium.com/learn) — Cesium’s learning resources provide a good introduction to browser-based 3D terrain rendering, including explanations of terrain tiling, level-of-detail systems, and the CesiumJS API. Useful for understanding the engineering that makes real-time terrain visualisation possible.

Blender GIS Tutorials — Search for “Blender GIS terrain tutorial” on YouTube. Multiple creators have produced step-by-step guides for importing real-world DEM data into Blender and rendering it with cinematic lighting and materials. The workflow is accessible to a motivated beginner and the results can be visually extraordinary. Klaas Nienhuis’s channel offers particularly clear instruction.

NASA Visible Earth (https://visibleearth.nasa.gov) — A curated collection of satellite images and terrain visualisations produced by NASA. Search for “Himalaya” or “Karakoram” to find rendered views of High Asian terrain. The images range from simple false-colour composites to elaborately produced visualisations with atmospheric effects and oblique lighting.

somethingaboutmaps (Daniel Huffman, https://somethingaboutmaps.com) — Huffman’s portfolio demonstrates what is possible when cartographic skill, aesthetic sensibility, and digital terrain tools converge. His work is proof that digital terrain visualisation need not look generic: in the hands of an artist, the same DEM data that produces a forgettable Google Earth screenshot can produce an image that belongs on a gallery wall.

Edward Tufte, “Envisioning Information” and “Visual Explanations” — Available in print. Tufte’s books do not focus on terrain specifically, but his principles of visual clarity, data-ink ratio, and the critique of chartjunk apply directly to terrain visualisation. His analysis of how graphical excellence arises from the union of statistical content and visual design is essential background for anyone who wants to produce terrain visualisations that communicate rather than merely decorate.