When algorithms build mountains — form without meaning

Overview

Generative mountain art uses algorithms, rules, and controlled randomness to create mountain forms. It is the newest tradition in this survey and the most technically novel. It is also, in a specific and important way, the most impoverished: it generates form without meaning.

To understand what this means, consider what a mountain is in every other tradition documented in this survey. In shan-shui painting, a mountain is a philosophical proposition about the relationship between the vast and the transient. In a Pahari miniature, a mountain is the setting for a divine love story, its layered ridges painted in specific pigments that carry specific emotional weight. In a thangka, a mountain is the seat of a deity, its geometry governed by proportional canons that encode cosmological truth. In a colonial survey drawing, a mountain is a measured object, triangulated and named, brought under imperial control through the act of mapping. In every case, the mountain means something. It has a name, a history, a community of people who live beneath it and tell stories about it.

A procedurally generated mountain has none of this. It has ridges and valleys, snowfields and treelines, atmospheric haze and dramatic lighting — but it has no name, no history, no sacred peak, no pilgrimage path, no naga’s pool. It is pure topography: convincing in form, empty of content. Understanding this gap is essential for himalaya-darshan, which must render specific, meaningful mountains — not generic algorithmic terrain.

The spectrum of generative mountain art is broad. At one end are the mathematical approaches: fractal landscapes built from noise functions, where the entire mountain is a mathematical equation made visible. Then come the simulation-based approaches: software that mimics geological processes like hydraulic erosion and tectonic uplift, carving algorithmically plausible valleys into algorithmically plausible ridges. Then the creative coding tradition: artists who write programs in languages like Processing, p5.js, or GLSL shader code to create mountain forms that are both algorithmic and aesthetically intentional — the computer as paintbrush rather than landscape architect. And most recently, AI-generated imagery: systems like Stable Diffusion, Midjourney, and DALL-E that have been trained on millions of photographs and paintings and can produce mountain images from text descriptions — “a Himalayan peak at dawn, oil painting style” — in seconds.

Each of these approaches has its own relationship to truth, beauty, and specificity. Each has something to teach himalaya-darshan about what digital tools can and cannot do. And each, in its own way, demonstrates the same fundamental lesson: an algorithm can build a mountain, but it cannot know one.

Origins and evolution

The story begins with a mathematician, not an artist. In the 1970s and early 1980s, Benoit Mandelbrot developed the mathematics of fractal geometry — the study of shapes that exhibit self-similarity across scales. His key insight was that the shapes of nature are not the smooth curves and straight lines of classical geometry. “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth,” he wrote in The Fractal Geometry of Nature (1982). Instead, natural forms have what he called “fractional dimension” — a coastline, measured at finer and finer scales, reveals ever more detail, and its effective length depends on the size of your ruler. Mountains, Mandelbrot argued, have the same property: a ridge looks jagged whether you view it from an aeroplane or from ten metres away. This mathematical observation would become the foundation of all procedural terrain generation.

The moment procedural landscape entered the visual imagination was precise: SIGGRAPH 1980, the annual computer graphics conference. Loren Carpenter, then a researcher at Boeing, presented a short film called Vol Libre — a two-minute flythrough over a fractal mountain landscape, generated entirely by recursive subdivision of triangles with random displacement at each level. The audience — professional computer graphics researchers — gave it a standing ovation. The mountains were crude by modern standards, but they were recognisably mountains, and they had been created not by an artist drawing each ridge but by an algorithm that understood, in a mathematical sense, what mountain-ness looks like at every scale. Carpenter was immediately hired by Lucasfilm, where he would help develop the technology behind the “Genesis effect” in Star Trek II: The Wrath of Khan (1982) — one of the first uses of fractal terrain in a feature film.
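Carpenter's recursive subdivision is easiest to see in one dimension, where it is known as midpoint displacement. The sketch below is a hypothetical minimal Python version, not Carpenter's code: it repeatedly inserts the midpoint of every segment and nudges it vertically by a random amount that shrinks at each level of recursion, which is what gives the result detail at every scale.

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=42):
    """1-D analogue of Carpenter's triangle subdivision: insert the
    midpoint of every segment, displace it by a random amount, and
    halve the displacement range at each level of recursion."""
    rng = random.Random(seed)
    heights = [left, right]
    displacement = 1.0
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-displacement, displacement)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        displacement *= roughness  # finer scales get smaller bumps

    return heights

ridge = midpoint_displacement(0.0, 0.0, depth=8)
print(len(ridge))  # 2**8 + 1 = 257 samples
```

The `roughness` parameter controls how quickly displacement shrinks with scale, which is the 1-D equivalent of Mandelbrot's fractional dimension: values near 1.0 give jagged, noisy profiles; values near 0 give smooth hills.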

The next crucial development came in 1983, when Ken Perlin, fresh from working on the visual effects for the film Tron (1982), invented what is now called Perlin noise — a mathematical function that generates smooth, natural-looking randomness. If you imagine pure randomness as television static (every pixel independent, no pattern), Perlin noise is the opposite: it produces gently undulating fields of value that flow smoothly from one point to the next, like the surface of gently rolling hills. By layering multiple frequencies of Perlin noise (a technique called “fractional Brownian motion” or “fBm” — adding together noise at different scales, each layer half the amplitude and twice the frequency of the last), you can generate terrain that has both large-scale mountain forms and small-scale rocky detail. Perlin noise, and its later refinement Simplex noise (2001), became the mathematical foundation of virtually all procedural terrain generation. When you see a computer-generated landscape in a film, a game, or a tech demo, the odds are very high that Perlin noise is somewhere in its ancestry.

Through the 1990s and 2000s, procedural terrain became a standard tool in visual effects and video games. The Lord of the Rings films (2001-2003) used a combination of real New Zealand landscapes and digitally extended terrain. Video games like Minecraft (2011) used noise-based terrain generation to create infinite explorable worlds. Software packages like Terragen, World Machine, and later Gaea gave artists direct control over procedural terrain — not by drawing mountains, but by designing the rules and parameters that generate them.

A parallel tradition, less commercially visible but artistically significant, is the demoscene — a subculture of programmers who create audiovisual demonstrations (“demos”) in extremely small file sizes, often 4 kilobytes or 64 kilobytes. A 4KB demo must generate everything — terrain, textures, lighting, music — from code alone, with no stored assets. The constraints produce extraordinary ingenuity. Elevated by Rgba and TBC (2009), a 4KB demo, generates a photorealistic mountain landscape with atmospheric scattering, volumetric clouds, and a sweeping camera path — all from a program smaller than this paragraph. The demoscene treats procedural landscape as a pure art form: the beauty of the result and the economy of the means are both part of the aesthetic.

The creative coding movement — artists working with tools like Processing, openFrameworks, and the GLSL shader language on platforms like Shadertoy — brought procedural terrain into the gallery and the browser. On Shadertoy, a web platform where artists share real-time shader programs, you can find hundreds of procedural mountain landscapes running live in your browser, each one a self-contained mathematical poem. The most celebrated practitioner is Inigo Quilez, a mathematician and graphics engineer whose terrain shaders achieve a level of beauty and atmospheric subtlety that rivals landscape painting — all generated from pure mathematics, with no stored images or textures.

The most recent revolution is AI image generation. Beginning in 2022, systems like Stable Diffusion, Midjourney, and DALL-E demonstrated the ability to generate photorealistic and stylised images from text prompts. Type “a snow-capped Himalayan peak at sunrise, dramatic lighting, photorealistic” and the system will produce a convincing image in seconds. These systems work not by understanding mountains but by having been trained on millions of images — they have learned the statistical patterns of what mountain photographs and mountain paintings look like, and they can recombine those patterns in novel ways. The results are often strikingly beautiful at first glance, but they raise questions about specificity, authorship, and meaning that are central to this survey’s concerns.

Colour

To describe how procedural terrain is coloured, it helps to use the language a painter would use — because the problems are, at root, the same problems painters have always faced, even though the tools are mathematical.

The simplest and oldest approach to colouring procedural terrain is the elevation gradient: assign colours based on height. Below a certain altitude, paint the surface green (vegetation). Above the treeline, paint it brown or grey (bare rock). Above the snowline, paint it white. This is the digital equivalent of hypsometric tinting in cartography — the colour bands on a physical relief map. It works in the same way and fails in the same way: it produces generic, unconvincing colour because real mountain colour depends on far more than elevation. The north face of a ridge is darker and holds snow longer than the south face. A limestone cliff is pale grey; a basalt outcrop is nearly black. A meadow in June is electric green; the same meadow in October is gold and brown. A glacier has a blue-white quality entirely different from fresh snow. The elevation gradient knows none of this. It produces a mountain that is coloured like a diagram, not like a place.
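The elevation gradient reduces to a few lines of code, which is precisely its weakness. A minimal sketch, with illustrative thresholds (the treeline and snowline values are arbitrary choices for normalised heights, not from any real dataset):

```python
def elevation_color(h, treeline=0.5, snowline=0.8):
    """Map a normalised height in [0, 1] to an RGB colour.

    The classic elevation-gradient scheme -- the digital equivalent of
    hypsometric tinting: vegetation below the treeline, bare rock up to
    the snowline, snow above. It knows nothing about aspect, lithology,
    or season, which is why it colours a mountain like a diagram.
    """
    if h < treeline:
        return (60, 120, 40)     # green: vegetation
    elif h < snowline:
        return (110, 100, 90)    # grey-brown: bare rock
    else:
        return (245, 248, 250)   # white: snow

print(elevation_color(0.2))  # (60, 120, 40)
```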

Better approaches exist. Texture splatting blends different surface textures — grass, rock, scree, snow — based on both elevation and slope angle. A steep cliff gets a rock texture regardless of its altitude; a gentle slope at high elevation gets alpine meadow. This is more convincing, but still formulaic. Physically-based rendering (PBR) goes further, simulating how light actually interacts with different surface materials — the rough micro-facets of granite scatter light differently from the smooth surface of wet rock or the translucent crystals of snow. PBR can produce surfaces that look genuinely tactile: you can almost feel the grit of sandstone, the slickness of wet slate. Atmospheric scattering simulates how the atmosphere itself colours terrain — the way distant mountains turn blue-violet, the way dust and humidity warm the light at low sun angles, the way high-altitude air is thin and the light correspondingly harsh and clear. This is the digital equivalent of atmospheric perspective — the technique that Leonardo da Vinci codified and that shan-shui painters achieved through the graduated dilution of ink.
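A simplified texture-splatting weight function might look like the following; the thresholds and smoothstep ramps are illustrative assumptions, not taken from any particular engine:

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth ramp between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def splat_weights(height, slope_deg, snowline=0.8, cliff_angle=40.0):
    """Blend weights for grass / rock / snow textures from normalised
    height and slope in degrees. Steep faces read as rock regardless of
    altitude; gentle high ground reads as snow; the remainder is grass.
    All thresholds here are illustrative."""
    rock = smoothstep(cliff_angle - 10, cliff_angle + 10, slope_deg)
    snow = smoothstep(snowline - 0.1, snowline + 0.1, height) * (1.0 - rock)
    grass = max(0.0, 1.0 - rock - snow)
    total = rock + snow + grass
    return {"grass": grass / total, "rock": rock / total, "snow": snow / total}
```

A high, gentle slope (height 0.9, slope 10°) comes out almost entirely snow; the same height on a 60° face comes out almost entirely rock — the behaviour described above, where a cliff gets rock texture regardless of altitude.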

The best procedural artists combine all of these. Inigo Quilez’s terrain shaders on Shadertoy, for instance, compute not just the shape of the terrain but the angle of the sun, the density of the atmosphere, the scattering of light through haze, and the subtle colour variation of rock surfaces — all in real-time, all from pure mathematics. The results can be breathtaking: a mountain range at golden hour, the peaks catching the last light while the valleys are already in blue shadow, mist pooling in the low places. It approaches the chromatic richness of a Hudson River School painting or a Turner watercolour.

AI-generated mountain images handle colour differently and, in some ways, more convincingly — at first glance. Because the AI has been trained on millions of photographs and paintings, it has absorbed the statistical patterns of real mountain colour: the way alpenglow warms a snowfield, the way storm light bruises a ridge, the way monsoon clouds turn the world green and grey. A Midjourney mountain can be strikingly beautiful. But look carefully and you may notice that the colours are averaged — they represent a kind of statistical composite of “mountain colour” drawn from millions of training images. The result is a mountain that looks like everywhere and nowhere: vaguely alpine, vaguely Himalayan, vaguely Rocky Mountain, but specifically none of these. The geological specificity that gives a real mountain its colour — the red sandstone of Zanskar, the black schist of Rohtang, the white granite of the Karakoram — is absent. The AI produces a “mean mountain,” chromatically plausible but geologically generic.

Composition and spatial logic

Procedural terrain generation can, in principle, extend a landscape infinitely in all directions. Add more noise, compute more triangles, and the terrain continues — over the horizon, around the planet, endlessly. This is its great technical achievement and its great compositional weakness.

Consider the spatial logic of every other tradition in this survey. A shan-shui painting composes specific mountains in a specific arrangement: the peak rises here, the mist gathers there, the scholar stands at this precise point where the path turns. Guo Xi’s theory of the three distances — high distance, deep distance, level distance — is a compositional grammar for guiding the viewer’s eye through a meaningful spatial experience. A Pahari miniature frames a specific scene within a specific landscape: Krishna and Radha meet beneath a specific tree on a specific hillside, and the layered ridges behind them create a specific rhythm of colour and form. A photograph captures a specific moment from a specific vantage point — the photographer chose to stand here, not there, and to press the shutter at this instant, not another.

Procedural terrain generates a mountain, not the mountain. The camera can be placed anywhere; the terrain extends in all directions; the composition is, by default, accidental. The most common camera mode in procedural terrain is the flythrough — a virtual camera soaring over generated landscape, the perspective of a bird or an aircraft. This is exhilarating but compositionally vacuous. There is no “here” and no “there,” no foreground subject and no background context, no moment of arrival and no sense of place. The viewer is everywhere and therefore nowhere.

Some creative coders overcome this by constraining their generative systems — writing rules that produce specific compositional qualities. A procedural system might be designed to always place a dominant peak in the upper third of the frame, to generate a valley that leads the eye from foreground to background, to simulate mist in the middle distance that creates the effect of shan-shui’s “deep distance.” These constraints transform the system from a terrain generator into something closer to a compositional tool — the algorithm proposes, and the artist’s rules dispose. The best work in this vein, such as some of the terrain shaders on Shadertoy, achieves genuine compositional beauty: a single mountain catching the light against a darkening sky, framed by atmospheric haze, the camera placed with deliberate intent. But this beauty is the artist’s contribution, not the algorithm’s. The algorithm provides the material; the human provides the meaning.

The infinite-terrain paradigm also has implications for scale. In the human traditions of this survey, the size of the mountain relative to the human figure is a deliberate artistic choice that carries philosophical weight — in shan-shui, the tiny scholar beneath the vast peak expresses a Daoist understanding of human insignificance; in a Pahari miniature, the figures are large relative to the landscape because the human drama is paramount. Procedural terrain has no inherent scale. Without a human figure, a building, or a known landmark, a procedural mountain could be ten metres tall or ten kilometres tall. This scalar ambiguity is another form of the same problem: the mountain has form but not identity.

Pattern and geometry

The mathematics underlying procedural terrain connects directly to Mandelbrot’s founding insight about the geometry of nature. A real mountain ridge exhibits statistical self-similarity: the pattern of peaks and saddles you see from a hundred kilometres away is echoed in the pattern of bumps and notches you see from a hundred metres away, which is echoed again in the texture of individual rocks. This is not perfect repetition — a mountain is not a crystal — but a statistical resemblance across scales that Mandelbrot described with the concept of fractional dimension.

Noise functions are the mathematical tool that generates this self-similarity. Perlin noise, the most widely used, produces a smooth, continuous field of pseudorandom values. At any single frequency, it creates gently rolling terrain — hills, not mountains. The trick is layering: add together multiple octaves of noise, each at a higher frequency and lower amplitude than the last. The first octave gives you the broad mountain shapes — the major peaks and valleys. The second adds medium-scale ridges and gullies. The third adds small-scale rocky detail. The fourth adds fine surface roughness. The mathematical term for this layering is fractional Brownian motion (fBm), and the result is terrain that has detail at every scale — the defining characteristic of fractal geometry.
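The octave-layering recipe can be sketched directly. The example below substitutes simple 1-D value noise for Perlin's gradient noise (easier to write self-contained, and it layers the same way); the hash function is an arbitrary illustrative choice:

```python
import math

def value_noise(x, seed=0):
    """Smooth 1-D value noise: pseudorandom values at integer lattice
    points, blended with a smoothstep curve in between. A stand-in for
    Perlin noise; gradient noise differs in detail but layers identically."""
    def hash01(n):
        # cheap deterministic integer hash -> [0, 1]
        n = (n * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n ^ (n >> 16)) / 0xFFFFFFFF
    i = math.floor(x)
    f = x - i
    t = f * f * (3.0 - 2.0 * f)          # smoothstep interpolant
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t

def fbm(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractional Brownian motion: sum octaves of noise, each at twice
    the frequency and half the amplitude of the last."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency)
        amplitude *= gain        # half the amplitude...
        frequency *= lacunarity  # ...twice the frequency
    return total

heights = [fbm(x * 0.1) for x in range(100)]
```

The first octave contributes the broad mountain shapes, each subsequent octave finer detail at lower amplitude; with the default `gain` of 0.5 and four octaves, the total amplitude is bounded by 1 + 0.5 + 0.25 + 0.125 = 1.875.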

Different noise parameters produce strikingly different geological characters. High-frequency noise with sharp amplitude falloff creates jagged, angular terrain — the splintered peaks of an alpine range. Low-frequency noise with gentle falloff creates rolling, rounded hills. Worley noise (also called cellular noise), which computes distances to randomly distributed points, produces terrain that looks like cracked mud or crystalline rock formations. By combining different noise types, the procedural artist can suggest different geological processes: the angular fracture of tectonic uplift, the smooth curves of glacial erosion, the dendritic branching of river valleys.
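A brute-force sketch of first-order Worley noise, assuming a fixed set of randomly scattered feature points (production implementations hash one point per grid cell and check only the neighbouring cells, rather than scanning every point):

```python
import math
import random

def worley(x, y, cells=8, seed=1):
    """First-order Worley (cellular) noise: the distance from (x, y) to
    the nearest of a set of random feature points scattered over a
    cells x cells region. Variants such as 1 - F1 or F2 - F1 produce
    the cracked-mud and crystalline looks described in the text."""
    rng = random.Random(seed)
    points = [(rng.uniform(0, cells), rng.uniform(0, cells))
              for _ in range(cells * cells)]
    return min(math.hypot(x - px, y - py) for px, py in points)

d = worley(4.0, 4.0)
```

Because the value is a distance to the nearest point, the field is zero at each feature point and rises in cones between them, which is what gives Worley noise its cellular character.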

Erosion simulation adds a layer of physical plausibility that noise alone cannot achieve. The simplest form, hydraulic erosion, simulates raindrops falling on the terrain surface, flowing downhill, picking up sediment where the water moves fast (steep slopes), and depositing it where the water slows (flat areas, lake beds). Run this simulation for thousands of iterations and the terrain develops river valleys, alluvial fans, and drainage networks that look remarkably like real geology. Thermal erosion simulates the way steep slopes shed material through rockfall and scree accumulation, gradually smoothing sharp peaks and filling valley floors. These simulations do not know anything about geology — they are simply applying the physics of water flow and gravity to a mathematical surface — but the results are convincing because the same physics shapes real mountains.
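A toy droplet version of hydraulic erosion, stripped of water volume, sediment capacity, and evaporation, can still show the downhill-transport idea: each droplet scours material as it descends and deposits it where it stalls.

```python
import random

def erode(heightmap, iterations=5000, strength=0.05, lifetime=30, seed=0):
    """Toy droplet-based hydraulic erosion on a 2-D height grid.

    Each droplet starts at a random cell and repeatedly moves to its
    lowest 4-connected neighbour, carving height where it descends and
    depositing its sediment where it reaches a local minimum or runs
    out of lifetime. A drastic simplification of production erosion,
    but mass-conserving: nothing leaves the grid.
    """
    rng = random.Random(seed)
    h = [row[:] for row in heightmap]  # leave the input untouched
    rows, cols = len(h), len(h[0])
    for _ in range(iterations):
        r, c = rng.randrange(rows), rng.randrange(cols)
        sediment = 0.0
        for _ in range(lifetime):
            neighbours = [(r + dr, c + dc)
                          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= r + dr < rows and 0 <= c + dc < cols]
            nr, nc = min(neighbours, key=lambda p: h[p[0]][p[1]])
            drop = h[r][c] - h[nr][nc]
            if drop <= 0:
                h[r][c] += sediment  # local minimum: deposit and die
                break
            carved = min(drop, strength)
            h[r][c] -= carved
            sediment += carved
            r, c = nr, nc
        else:
            h[r][c] += sediment  # lifetime expired: drop what's left
    return h
```

Run over many iterations, even this crude rule develops drainage-like channels on the slopes and fans of deposited material in the basins, for the reason the text gives: the same downhill physics that shapes real valleys.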

The geometry of procedural terrain is, in this sense, honest. It approximates the mathematical structure of real landforms because it uses (simplified versions of) the same physical processes that shape them. Where it differs from reality is in specificity. Real geology is the product of specific events: this fault formed when tectonic plates collided at this angle; this valley was carved by this glacier during this ice age; this cliff is limestone because this region was an ocean floor two hundred million years ago. Procedural terrain has no such history. Its geometry is generic — statistically plausible but historically empty.

Local legends and iconography

This section is necessarily different from its counterpart in the other reports in this survey, because the subject of this section is absence. The other traditions are rich in iconography, narrative, and cultural meaning — shan-shui painting has its scholar-recluses and its Peach Blossom Springs; Pahari miniatures have Krishna and Radha; thangka painting has its Buddha fields and protector deities; even colonial survey art has its named peaks and measured triangulations. Procedural terrain has none of this. A generated mountain has no history, no name, no stories, no sacred sites, no communities. It is pure form without content.

This absence is not a failure of the technology. It is an inherent limitation of the generative paradigm. An algorithm that generates terrain from noise functions is operating in a space of pure mathematics — it knows about frequencies, amplitudes, and gradients, not about pilgrimage routes, sacred groves, or the home of a naga. The mountain it produces is ontologically different from the mountains in every other tradition in this survey: it is a surface, not a place.

AI-generated mountains inherit this problem in a subtler and more insidious way. When Midjourney generates an image in response to the prompt “sacred Himalayan mountain with a temple,” it will produce something that looks plausible — snow-capped peaks, a structure that resembles a temple, perhaps prayer flags — because it has learned the visual patterns associated with those words from its training data. But the temple is not a real temple. The mountain is not a real mountain. The prayer flags are not at a real pass. The image is a statistical collage of “Himalayan-ness” assembled from millions of photographs, and it refers to no specific place, no specific tradition, no specific community of faith. It is a simulacrum — an image that resembles meaning without possessing it.

This matters profoundly for himalaya-darshan. The project’s purpose is to render specific, meaningful mountains — the Tirthan valley, the Parvati watershed, the peaks that have names in local languages and stories in local traditions. Procedural terrain generation and AI image synthesis can assist with certain aspects of this rendering: texturing rock surfaces, distributing vegetation plausibly, simulating atmospheric effects, generating the fine detail that would be tedious to model by hand. But they cannot substitute for the specificity of real terrain data (digital elevation models derived from satellite measurement of actual mountains) and real cultural knowledge (the names, stories, and sacred geographies that make a mountain meaningful to its people). The mountain must be Tirthan, not “a mountain.” The pass must be Jalori, not “a pass.” The pool must be the one where the local tradition says the naga dwells, not a randomly placed body of water that looks generically appealing.

The gap between form and meaning is the central lesson of generative mountain art for this survey. Every other tradition documented here understands, in its own way, that a mountain is not just a shape. The procedural tradition, precisely because it can generate convincing shapes so effortlessly, makes the distinction between shape and meaning impossible to ignore.

Key works and where to see them

The following works, tools, and artists represent the most significant achievements and reference points in generative mountain art. Unlike the other reports in this survey, many of these works are inherently digital — they exist as running programs, not as physical objects — and can be experienced directly through a web browser.

Loren Carpenter, “Vol Libre” (1980). A two-minute film of a fractal mountain flythrough, presented at SIGGRAPH 1980. This is the founding work of procedural landscape as a visual medium. The mountains are generated by recursive subdivision — starting with a simple triangle, splitting it into smaller triangles, and adding random vertical displacement at each level. The result is crude by modern standards but historically pivotal. The film is available in various archives online and in documentaries about the history of computer graphics.

Inigo Quilez, terrain shaders on Shadertoy (2013 onwards). Quilez, a Spanish mathematician and graphics engineer who has worked at Pixar and Oculus, is arguably the greatest practitioner of real-time procedural landscape. His terrain shaders — programs that generate and render an entire mountain landscape in a single fragment shader, running live in a web browser — achieve a level of atmospheric and chromatic beauty that rivals painting. His shader “Rainforest” and his various terrain demonstrations are landmarks of the medium. Visit shadertoy.com and search for his work (username “iq”).

Rgba+TBC, “Elevated” (2009). A 4-kilobyte demo that generates a photorealistic mountain landscape with volumetric clouds, atmospheric scattering, and a sweeping orchestral score — all from a program smaller than a typical email. Winner of the 4KB intro competition at Breakpoint 2009. It remains one of the most celebrated achievements of the demoscene and a testament to what procedural generation can accomplish under extreme constraint. Available on pouet.net.

Terragen (first released 1999, developed by Planetside Software). The leading dedicated software for procedural landscape rendering. Terragen generates terrain from layered noise functions and renders it with physically accurate atmospheric scattering, volumetric clouds, and global illumination. It has been used in numerous films and television productions to create digital landscapes that are indistinguishable from aerial photography. It represents the state of the art in “art-directed procedural terrain” — the artist designs the generation rules rather than drawing the terrain directly.

World Machine (developed by Stephen Schmitt) and Gaea (developed by QuadSpinner). Procedural terrain generation tools used extensively in film and game production. These applications allow artists to build terrain through node-based workflows: connect an erosion node to a noise node, feed the result through a thermal weathering node, and the software generates terrain that looks geologically plausible. Gaea, developed by an Indian studio, is particularly notable for its erosion simulation and its ability to generate terrain that mimics specific geological styles.

Sebastian Lague, “Procedural Terrain Generation” series (YouTube, 2016 onwards). A creative coder whose tutorial series walks through the mathematics and implementation of procedural terrain generation with exceptional clarity. His videos are among the best introductions to the subject for someone who wants to understand not just what procedural terrain looks like but how it works — the noise functions, the mesh generation, the erosion simulation. Accessible to a motivated novice.

Houdini terrain tools (SideFX). Houdini is a professional visual effects application used in major film productions. Its terrain tools combine procedural generation with simulation-based erosion and allow the results to be art-directed at every stage. Many of the CG landscapes in contemporary blockbuster films — the extended environments in the Marvel and Star Wars franchises, for example — are built in Houdini.

Blender procedural terrain (Blender Foundation). Blender, the open-source 3D application, supports procedural terrain generation through its node-based shader and geometry systems. Because Blender is free, it is the most accessible entry point for a student who wants to experiment with procedural landscape. The community has developed extensive tutorials and node setups for terrain generation.

.kkrieger (Farbrausch, 2004). A complete first-person shooter game — with procedural terrain, textures, enemies, and music — in 96 kilobytes. While not specifically a mountain landscape, it demonstrated the extreme possibilities of procedural generation and influenced a generation of creative coders. Available on pouet.net.

Further exploration

The following resources provide entry points for deeper study. Because generative mountain art is an active and rapidly evolving field, the most current work is found online rather than in print.

Shadertoy (https://www.shadertoy.com/). The central platform for real-time shader art. Search for “terrain,” “mountain,” or “landscape” to find hundreds of procedural mountain landscapes running live in the browser. Each shader’s source code is visible, so you can study the mathematics directly. Start with the work of Inigo Quilez (username “iq”) and explore from there. Requires a modern web browser with WebGL support.

Inigo Quilez’s website (https://iquilezles.org/). A comprehensive resource on the mathematics of computer graphics, written by the leading practitioner of real-time procedural landscape. Articles on noise functions, distance fields, ray marching, and terrain rendering are explained with both mathematical rigour and visual clarity. Essential for understanding the mathematical foundations.

The Book of Shaders (https://thebookofshaders.com/). An interactive introduction to fragment shaders — the programs that run on the graphics card and determine the colour of each pixel. Written by Patricio Gonzalez Vivo and Jen Lowe. It does not focus specifically on terrain, but it teaches the foundational concepts (noise, fractals, patterns) that underlie all procedural landscape generation. Beautifully designed, with interactive examples that run in the browser.

Processing (https://processing.org/) and p5.js (https://p5js.org/). Processing is a programming language and environment designed for visual artists, created by Casey Reas and Ben Fry at MIT. p5.js is its JavaScript counterpart, running in the browser. Both are excellent starting points for a novice who wants to write code that generates visual forms, including terrain. The communities are welcoming and the documentation is extensive.

Terragen (https://planetside.co.uk/). The website for the leading procedural landscape rendering software. The gallery section shows what the tool can achieve in skilled hands — photorealistic mountain landscapes that are entirely procedurally generated. A free non-commercial version is available for experimentation.

World Machine (https://www.world-machine.com/). A procedural terrain generation tool with a node-based interface. The website includes tutorials and a gallery. Useful for understanding the workflow of professional terrain generation — how noise, erosion, and texturing are combined to produce plausible geological forms.

Pouet.net (https://www.pouet.net/). The central archive of the demoscene. Search for “terrain” or browse the 4KB and 64KB categories to find procedural landscape demos. The comments and production notes often explain the technical approaches used. A fascinating subculture where mathematical elegance and visual beauty are equally valued.

Sebastian Lague’s YouTube channel (https://www.youtube.com/@SebastianLague). Tutorials on procedural terrain generation, erosion simulation, and related topics, explained with clarity and visual sophistication. His “Coding Adventures” series is particularly accessible. The terrain generation series walks through the complete pipeline from noise function to rendered landscape.

Red Blob Games (https://www.redblobgames.com/). Amit Patel’s website of interactive tutorials on procedural generation, pathfinding, and related topics. His articles on noise-based terrain generation and polygon map generation are among the clearest explanations available, with interactive diagrams that let you adjust parameters and see the results in real time. Ideal for a visual learner.

GPU Gems, NVIDIA (https://developer.nvidia.com/gpugems/). A series of books on GPU programming techniques, freely available online. Several chapters address procedural terrain generation, atmospheric scattering, and real-time landscape rendering. More technically demanding than the other resources listed here, but invaluable for understanding the rendering pipeline that turns procedural geometry into convincing visual landscapes.