What I cannot create, I do not understand. – Richard Feynman, 20th Century Physicist

To understand something, Feynman would have started with a blank piece of paper and the knowledge already in his mind — he could take any theoretical result that he was interested in, and re-derive it. The only way to be able to work something out yourself is to have a firm understanding of each step of the reasoning involved.

I couldn’t do it. I couldn’t reduce it to the freshman level. That means we don’t really understand it. – Richard Feynman, 20th Century Physicist

To understand something, we cannot just work through the applicable mathematics. We must also make our understanding intuitive enough to explain it to a novice audience that cannot follow the detailed derivation.

Science is increasingly limited not by measurement or computation, but by understanding. There is a gap between raw measurements of natural phenomena and human understanding. We capture reality through instruments - microscopes, spectrometers, seismographs, particle detectors - each producing streams of numbers. These numbers encode the structure and dynamics of nature, but they remain opaque until we transform them into forms our perceptual systems can process. We can simulate millions of atoms, image at nanometer resolution, survey the universe out to billions of light-years, and gather terabytes of data. But turning all of that into insight remains a human bottleneck.

We want to build tools for understanding, not just tools for producing. Understanding requires iteration, experimentation, and the ability to move fluidly between abstract scientific models and their concrete interactive computational representations. Above all, to follow Feynman, understanding requires the scientist to create.

Think of a sculpting-agent that does not merely render pre-conceived models, but helps reveal the digital twin hidden within the measurement data. Not by automated rendering, but through guided, iterative refinement in which each decision about which details matter becomes an act of understanding in itself.

We want to explore making nature computationally tangible. We will take the electromagnetic signatures of a protein folding, the seismic echoes of tectonic movement, or the quantum statistics of electron behavior, and give them audio-visual interactive form.

While rooted in scientific reconstruction, we want to use our creations to develop an artistic understanding of nature and existence. There is a gap between scientists’ understanding and artists’ interpretation of natural systems. Scientists understand nature through measurements, equations, data structures, and algorithms. Artists rely on their senses to interpret natural systems, presenting them through visuals, spatial arrangements, and interactive experiences. An advanced LLM agent can help bridge this gap, acting as a translator between these domains of human intellectual engagement with nature.

We do not want this translation to be trivial or automated away by an agent. The process of “sculpting” a digital twin - deciding which details matter, which measurements to emphasize, how to arrange components in space, time, or other dimensions - is itself an act of understanding. We do not want to hide this process; we want to make it productively available through tools we understand and can control.

MāyāLucIA is conceived as a personal computational environment for learning and understanding - not merely producing or visualizing. The framework is not about automating insight, but about making the process of creating digital representations an act of deep, personal understanding, rooted in Feynman’s views. MāyāLucIA helps the scientist by making the dialogue between human intuition and computational evidence more fluid, more documented, more scientific. In MāyāLucIA, individual creative acts are woven together to form a larger, more complex reality. It enables:

Reconstruction & Simulation
Building “digital twins” of natural systems (mountains, rivers, brains, proteins, etc.) from sparse or multi-modal data.
Iterative Experimentation
A non-linear, hypothesis-driven workflow where each creative act deepens the user’s grasp of the subject.
Artistic Expression
Translating scientific models into interactive visual/sonic experiences that convey intuitive understanding, making artistic expression into a practice of deep observation and analysis.
Personal Exploration
Driving inquiry and personal enrichment rather than enterprise-scale solutions.
Scientific Understanding
Emerging through interactive reconstruction and simulation of natural phenomena.

Each scientist has their own personal methodology. In MāyāLucIA we will follow these methodological principles:

Iterative exploration
Knowledge grows organically through questioning, and through physical, theoretical, and computational journeys.
Creating as understanding
The act of building reveals comprehension gaps.
Multi-scale integration
Bridging molecular to systems levels.
Sparse-to-dense
Exploit interdependencies to fill in missing data.
Art as hypothesis
Visual abstraction forces deeper observation.

The goal is scientific understanding through representation, while developing artistic expression and growing a repertoire of computational and technical skills.

A Radical Hypothesis

Henry Markram’s “radical hypothesis” — that in any complex system all parameters are interdependent, so that laying down a few pieces can reveal the missing parts — is a crystallisation of the idea that drives MāyāLucIA’s reconstructive approach. His team at the Blue Brain Project, and now the Open Brain Institute, has demonstrated how sparse experimental data, when combined with biophysical constraints, can be used to build dense, realistic models of brain microcircuits. The hypothesis asserts that nature is not a collection of independent variables; it is a tightly woven fabric. Therefore, a measurement of one aspect (e.g., the density of a neuron type in a cortical layer) implicitly constrains many other aspects (e.g., synaptic connectivity, metabolic demands, network dynamics).

In MāyāLucIA, we apply this same insight to the digital reconstruction of any natural system, be it a Himalayan valley or a cortical column. Starting from a handful of “anchor” measurements — topographic contours, river flow rates, or neuronal densities — we use scientific models (the “conceptual chisel”) to propagate constraints and fill in missing data. The process is iterative: each newly inferred piece provides additional constraints, gradually converging on a complete, self‑consistent digital twin. Markram’s vision of “a few landmark pieces” being sufficient to infer an entire map is not a claim of magic but a recognition that the rules of nature are so interlocking that once you know some parts, the rest are forced into place — provided you have the right models to capture those interdependencies.
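The propagation loop above can be sketched in a few lines: a few anchor measurements plus interdependency rules are applied repeatedly until every parameter is forced into place. All parameter names, numbers, and rules below are hypothetical placeholders, not measured values.

```python
# Sparse-to-dense sketch: anchors + interdependency rules -> dense model.
# Every number and rule here is a hypothetical illustration.

anchors = {"neuron_density": 90_000.0}  # neurons per mm^3 (hypothetical)

# Each rule derives one parameter from parameters already known.
rules = {
    "neurons_per_column": lambda p: p["neuron_density"] * 0.3,  # assume ~0.3 mm^3 column
    "synapses_per_neuron": lambda p: 1000.0,                    # assumed fixed estimate
    "total_synapses": lambda p: p["neurons_per_column"] * p["synapses_per_neuron"],
}

def propagate(params, rules):
    """Apply rules until a fixed point: no new parameter can be inferred."""
    params = dict(params)
    changed = True
    while changed:
        changed = False
        for name, rule in rules.items():
            if name in params:
                continue
            try:
                params[name] = rule(params)  # raises KeyError if inputs still missing
                changed = True
            except KeyError:
                pass  # prerequisites not yet inferred; retry on the next pass
    return params

model = propagate(anchors, rules)
```

Each newly inferred value can unlock further rules on the next pass, which is the loop-level analogue of “each newly inferred piece provides additional constraints.”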

The radical hypothesis also underscores why the process of building the digital twin is so valuable: it forces the scientist to grapple with the interdependencies directly, thereby achieving a deeper understanding than passive observation could ever yield. In this sense, MāyāLucIA is an embodiment of Markram’s philosophy, extended beyond neuroscience to the natural world at large.

The MāyāLucIA Framework Components

MāyāLucIA cannot be a single tool; it must be a collection of modular components, agents, and workflows that grows organically with the user’s curiosity.

Capabilities

    1. A Sculpting-Agent Metaphor
    • Interactive tools to guide the scientist.
    • Helps reconstruct phenomena through iterative refinement.
    • Acts as a translator between scientific data and artistic form.
    • Keeps the human in the loop - the “sculpting” process is where understanding happens.
    2. Personal, Not Enterprise
    • Designed for a single scientist / artist, not an institution.
    • Leverages modern AI (LLMs, coding agents) to handle technical complexity without requiring specialized engineering teams.
    • Allows extreme personalization while remaining connected to distributed knowledge bases.
    3. Two Entangled Phases
    • Reconstruction & Simulation
      • Build dynamic, data-driven models (e.g. a Himalayan valley’s geology, a cortical circuit’s connectivity).
      • Use algorithmic generation to fill in missing data from sparse measurements.
    • Expression
      • Transform models into generative art, interactive visuals, soundscapes.
      • Use abstraction as a method of analysis and communication.
    4. An “Observing Eye” Approach
    • Models are not static visualizations; they are statistically faithful instances that can be “observed” from any viewpoint.
    • The act of observation itself becomes a creative / exploratory process.
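The “observing eye” idea can be illustrated with a toy sketch: the model is stored as a statistical description rather than a fixed picture, and each observation instantiates a fresh, statistically faithful instance. All distributions and parameters below are invented for illustration.

```python
import random

# Toy "observing eye" sketch: the model is a statistical description
# (distributions, not pixels); each call samples one concrete instance.
# All distributions and numbers here are invented for illustration.

def instantiate_forest(n_trees=100, seed=None):
    """Sample one forest instance: tree positions and heights."""
    rng = random.Random(seed)
    return [
        {"x": rng.uniform(0.0, 1000.0),     # metres, uniform over a 1 km plot
         "y": rng.uniform(0.0, 1000.0),
         "height": rng.gauss(20.0, 3.0)}    # metres, Gaussian canopy heights
        for _ in range(n_trees)
    ]

def mean_height(forest):
    return sum(t["height"] for t in forest) / len(forest)

# Two instances: different in every detail, alike in their statistics.
a = instantiate_forest(seed=1)
b = instantiate_forest(seed=2)
```

Each instance differs tree by tree, yet summary statistics such as `mean_height` agree across instances, which is what “statistically faithful” means here.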

Technical Foundations

Distributed Knowledge Base
Personalized access to multi-modal scientific data
Agent Orchestration
LLM-powered assistants for specialized tasks (visualization, data curation)
Computational Notebooks
Living documents blending code, visualization, and narrative
Real-time Simulation
Interactive exploration of dynamic systems

Some Concrete Examples

Mountain Valleys

Goal
A digital twin of a Himalayan valley that integrates geology, hydrology, ecology, and human impact.
Method
Use topographic data, climate models, ecological surveys to simulate the valley’s past, present, and possible futures.
Artistic output
Generative landscapes, soundscapes of river flows and life, and visualizations of erosion and uplift.
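One minimal way to realize the sparse-to-dense step for terrain is plain inverse-distance weighting from a few survey points; the anchor coordinates and elevations below are invented for illustration, and a real reconstruction would add geological constraints on top.

```python
import math

# Hypothetical sparse elevation anchors: (x, y) in km -> elevation in metres.
anchors = {
    (0.0, 0.0): 3200.0,
    (1.0, 0.0): 3500.0,
    (0.0, 1.0): 2900.0,
    (1.0, 1.0): 4100.0,
}

def elevation(x, y, anchors, power=2.0):
    """Inverse-distance-weighted interpolation from sparse survey points."""
    num, den = 0.0, 0.0
    for (ax, ay), z in anchors.items():
        d = math.hypot(x - ax, y - ay)
        if d == 0.0:
            return z  # exactly on an anchor point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Densify: fill a 5x5 grid from just four anchors.
grid = [[elevation(i / 4, j / 4, anchors) for i in range(5)] for j in range(5)]
```

Because IDW is a convex combination of the anchor values, every interpolated point stays within the measured elevation range, a simple example of anchors constraining the infill.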

Brain Circuits

Goal
A personal brain-building assistant that lets a neuroscientist explore, modify, and simulate cortical circuits.
Method
Integrate multi-modal data (morphologies, densities, connectivity) into a unified model; use AI agents to help with visualization and simulation setup. Use constraint-based modeling, starting from sparse data to infer dense models through interdependencies.
Artistic output
Animations of neural activity, interactive circuit diagrams, sonifications of spike trains.
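As a toy illustration of spike-train sonification: each neuron gets its own pitch, each spike becomes a note event, and bursts (short inter-spike intervals) play louder. The neuron names, spike times, pitch mapping, and burst threshold are all assumptions for the sketch, not a standard.

```python
# Toy sonification sketch: spike trains -> note events.
# Neuron names, spike times, and the mapping are hypothetical illustrations.

spike_trains = {
    "L5_pyramidal": [0.01, 0.015, 0.02, 0.3],   # a burst, then a lone spike
    "L4_interneuron": [0.1, 0.5],
}

BASE_FREQ = 220.0   # A3; hypothetical base pitch
BURST_ISI = 0.05    # spikes closer than 50 ms count as a burst

def sonify(trains):
    """Turn spike trains into time-sorted (time_s, freq_hz, loudness) events."""
    events = []
    for idx, (name, times) in enumerate(sorted(trains.items())):
        freq = BASE_FREQ * 2 ** (idx / 12)  # one semitone step per neuron
        prev = None
        for t in times:
            in_burst = prev is not None and (t - prev) < BURST_ISI
            events.append((t, freq, 2.0 if in_burst else 1.0))
            prev = t
    return sorted(events)

events = sonify(spike_trains)
```

The resulting event list could feed any synthesizer or MIDI writer; the point of the sketch is only the mapping from physiological structure (who spikes, when, how densely) to audible form.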

Ultimate Aim

We want to create a computational ecosystem where:

  • Science becomes more intuitive through artistic representation.
  • Art becomes more informed by scientific depth.
  • The user’s personal understanding grows through the very act of building and expressing.

Beyond technical capability lies a philosophical stance: making nature computationally tangible while developing personal expression. The goal is not to advance research frontiers (which is for the scientists to do), but to enable scientists to:

  1. Develop personal understanding rooted in creation
  2. Express the sublime through computational art
  3. Witness and document transforming landscapes
  4. Cultivate a digital garden where science and art intertwine

MāyāLucIA will be a framework for making nature computationally tangible – and in doing so, making it deeply, personally understood. It will provide a computational medium where science, art, and personal pilgrimage intertwine - where rebuilding a mountain valley in code teaches geological principles, where animating neuron circuits reveals physiological insights, and where the process of representation itself becomes the path to understanding.

The measure of success is not publications or products, but the depth of personal comprehension gained through the creative act of digital reconstruction. MāyāLucIA will succeed when the scientist can:

  • Explore experimental measurement data through structured, systematic investigation
  • Build custom models from scratch or by modifying existing ones
  • Experiment in controlled, reproducible computational settings
  • Create artistic expressions that capture and communicate understanding
  • Document their journey of comprehension
  • Enjoy their science through rich audio-visual interaction

Mottos

What I cannot create, I do not understand. — Richard Feynman

We are weaving knowledge. And the cloth is for anyone who will wear it. — MāyāLoom