Cognitive Diversity in LLM Tool-Use: Behavioural Fingerprints, Convention Adherence, and the Case for Substrate Mixing

Large language models deployed as tool-using agents exhibit distinctive behavioural patterns — cognitive fingerprints — that emerge from their training lineage rather than their explicit instructions. We present a controlled experiment in which thirteen substrates from nine lineages performed the same specification-authoring task with identical tool access (file search, content search, file reading, task tracking). We measure six dimensions beyond task accuracy: tool-foraging strategy, survey depth, specification quality, convention adherence, interpretive divergence, and reflection quality. Our findings show that (1) tool-use patterns constitute a stable per-lineage cognitive phenotype; (2) convention adherence varies independently of task competence; (3) interpretive divergence across substrates maps automation boundaries — where substrates converge, the task is mechanical; where they diverge into clusters, human judgment is required; and (4) substrate mixing yields complementary coverage that no single substrate achieves alone. We frame these findings within a five-thread literature review spanning behavioural fingerprinting, tool-use benchmarking, multi-agent diversity, beyond-accuracy evaluation, and convention adherence. This is a living survey: we intend to update it as new substrates are tested and new literature appears. ...
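
The convergence-vs-divergence criterion in finding (3) can be sketched as a simple agreement profile over substrate outputs. A hypothetical illustration — the labels, thresholds, and clustering rule here are made up for exposition, not the paper's actual metric:

```python
from collections import Counter

def divergence_profile(interpretations):
    """Given {substrate: label} for one spec item, summarise agreement.

    Returns ('converged' | 'clustered' | 'scattered', cluster sizes).
    Labels are hypothetical categorical codings of how each substrate
    read the same ambiguous requirement.
    """
    counts = Counter(interpretations.values())
    sizes = sorted(counts.values(), reverse=True)
    if len(sizes) == 1:
        return "converged", sizes   # all agree: the item is mechanical
    if sizes[0] >= 2 and len(sizes) <= 3:
        return "clustered", sizes   # a few stable readings: needs a human call
    return "scattered", sizes       # no consensus at all

# Thirteen substrates interpreting one requirement (toy data)
item = {f"s{i}": ("strict" if i < 8 else "lenient") for i in range(13)}
kind, sizes = divergence_profile(item)
```

Run over every item in a specification, this yields a map of which parts are safe to automate and which sit on an interpretive fault line.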

March 14, 2026 · 23 min · A Human-Machine Collaboration

Validation Methodology for Neural Digital Twins

From Biophysical to Functional: Two Generations of Neural Digital Twins. The first generation of neural digital twins was biophysical. The Blue Brain Project (EPFL, 2005–2024) reconstructed cortical microcircuits in morphological and biophysical detail — individual neurons with reconstructed dendrites, calibrated ion channels, stochastic synapses. Validation meant checking more than 40 experimental constraints: layer-specific firing rates, connection probabilities, orientation selectivity indices. The framework that systematized this validation was DMT (Data, Models, Tests), developed 2017–2024 and published in eLife. ...
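
The constraint-checking style of validation is easy to illustrate. A minimal sketch, assuming invented observable names, values, and tolerance bands (the real DMT test suites are far richer and data-driven):

```python
# Each experimental observation defines an acceptable band for a model
# statistic; validation is a battery of in-band checks. All numbers below
# are placeholders, not Blue Brain Project values.
constraints = {
    # observable: (model value, experimental mean, tolerated deviation)
    "L5_firing_rate_hz": (1.1, 1.25, 0.5),
    "conn_prob_L4_L4":   (0.06, 0.05, 0.02),
}

def validate(constraints):
    """Return {observable: bool}; True if the model lies in the data band."""
    return {name: abs(model - mean) <= tol
            for name, (model, mean, tol) in constraints.items()}

report = validate(constraints)
```

The point of systematizing this is that the pass/fail report becomes a versioned artifact of the model, not an ad hoc figure in a paper.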

March 2, 2026 · 3 min · A Human-Machine Collaboration

Executable Manuscripts Survey

The Idea and Its Genealogy. The idea that code and explanation should live together — that the artifact of science is not a paper about computation but the computation itself — has a clear lineage. It begins with Donald Knuth’s literate programming: his WEB system (1984) is the origin. The core insight: programs should be written for human readers, with code extracted by machine as a secondary operation. WEB introduced two operations: tangle (extract compilable code) and weave (produce typeset documentation). CWEB extended this to C/C++. ...
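
The tangle operation can be sketched in a few lines. A toy illustration, assuming a noweb-style `<<name>>= ... @` chunk convention rather than WEB's actual markup:

```python
import re

def tangle(source):
    """Minimal 'tangle' in the spirit of Knuth's WEB: pull the code chunks
    out of a literate document and concatenate them into compilable code.
    Chunks are delimited <<name>>= ... @ (an assumed, noweb-like syntax)."""
    chunks = re.findall(r"<<[^>]*>>=\n(.*?)\n@", source, flags=re.S)
    return "\n".join(chunks)

doc = """Prose explaining the program.
<<main>>=
print("hello")
@
More prose."""
```

Here `tangle(doc)` recovers just the code; a companion `weave` would do the inverse and typeset the prose with the code inline.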

February 28, 2026 · 10 min · A Human-Machine Collaboration

MB Dynamics

The FlyWire whole-brain connectome of Drosophila melanogaster provides, for the first time, a complete wiring diagram of the mushroom body (MB) — the fly’s primary centre for associative learning. Yet a wiring diagram alone cannot predict dynamics. Here we extract the MB microcircuit (~6,300 neurons, ~50,000 synapses) from FlyWire and subject it to four systematic computational investigations. First, we classify the circuit’s dynamical regime using the Brunel (2000) phase diagram framework, finding that the MB operates in the asynchronous–irregular (AI) balanced state despite exponential synaptic filtering shifting phase boundaries relative to the canonical delta-synapse theory. Second, we demonstrate Marder’s principle: the same connectome produces opposite behavioural outputs (approach vs. avoidance) under different neuromodulatory states, achieved through compartment-specific multiplicative gain modulation of KC→MBON weights. Third, we show that stochastic synaptic transmission — a ubiquitous feature of central synapses with release probabilities of 0.1–0.5 — enhances subthreshold signal detection via stochastic resonance while MB odour coding degrades gracefully under biologically realistic failure rates. Fourth, we test the Zhang et al. (2024) topology-dominates hypothesis by comparing leaky integrate-and-fire (LIF) and adaptive exponential (AdEx) neuron models on the same connectome, confirming that firing-rate patterns are highly correlated (\(r > 0.9\)) when adaptation is weak, with divergence emerging only at strong spike-frequency adaptation (\(b > 2\) mV). Together, these results establish a computational baseline for the FlyWire mushroom body and demonstrate that connectome-constrained simulation, even with minimal biophysical detail, can illuminate fundamental questions about neural circuit function. ...
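
The graceful-degradation claim about unreliable release can be illustrated with Bernoulli thinning of a toy rate pattern. A sketch with arbitrary numbers — not the paper's spiking simulation, just the statistical intuition that a population rate code survives per-spike transmission failure:

```python
import random

def thinned(counts, p, rng):
    """Bernoulli synapse model: each spike is transmitted independently
    with release probability p (0.1-0.5 is the biological range cited)."""
    return [sum(rng.random() < p for _ in range(c)) for c in counts]

def pearson(x, y):
    """Plain Pearson correlation, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
pattern = [rng.randrange(0, 40) for _ in range(200)]  # toy KC spike counts
noisy = thinned(pattern, 0.3, rng)                    # 70% of spikes fail
similarity = pearson(pattern, noisy)                  # stays high
```

Even at a 0.3 release probability the thinned pattern remains strongly correlated with the original: the code is attenuated, not destroyed.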

February 28, 2026 · 16 min · A Human-Machine Collaboration

The Lazy Neuroscientist's Cortical Column

The Blue Brain Project demonstrated that biologically detailed digital twins of cortical tissue can be reconstructed from sparse experimental data using constraint propagation. However, the enterprise scale of that effort — millions of neurons, billions of synapses, supercomputer-class simulation — has left the approach inaccessible to individual scientists. We propose an alternative: reconstruct only the minimal circuit demanded by a specific scientific question, and treat everything outside that domain as a boundary condition. We ground this approach in the predictive coding framework, where cortical layers play distinct computational roles (prediction, error, update), and apply it to the well-characterized barrel cortex of the rodent. Drawing on BBP’s curated circuit-building recipes, Allen Institute cell-type data, recent uncertainty-modulated predictive coding theory (Wilmes & Senn), and the Mathis lab’s adaptive intelligence framework (CEBRA, neuro-musculoskeletal modeling), we outline a methodology for building question-driven cortical microcircuits that are biophysically grounded yet computationally tractable for a single scientist’s workstation. We propose that the latent dynamics of the reconstructed circuit — analyzed with tools like CEBRA — should match those observed in vivo, providing a principled bridge between anatomical reconstruction and functional understanding. ...

February 28, 2026 · 30 min · A Human-Machine Collaboration

Autonomy Agreement — A Working Template

A practical, instantiable template for an autonomy agreement between a human and a machine. This is not a document you read — it is something you instantiate, version in git, and let evolve. The commit log becomes the amendment history. Companion to: The Missing Primitive (position paper) and Literature Survey. What This Is: a working agreement between a human and a machine for scientific or creative collaboration. It is not a legal document. It is a shared understanding — a protocol for how we work together, how trust is built, and how autonomy is negotiated. ...

February 28, 2026 · 4 min · A Human-Machine Collaboration

Literature Survey — Autonomy, Collaboration, and Knowledge Across Traditions

This survey grounds the autonomy agreement proposal in prior work across five domains: cybernetics, pedagogy, AI alignment, anthropology of knowledge, and existing ML tools. The goal is not to be comprehensive but to identify the intellectual ancestors, locate the genuine novelty, and find the blind spots. 1. Cybernetics (1940s–present). Ashby and requisite variety: the Law of Requisite Variety (1956) holds that a controller must have at least as much variety as the system it controls. The Good Regulator Theorem (Conant & Ashby, 1970): every good regulator of a system must be a model of that system. ...

February 28, 2026 · 7 min · A Human-Machine Collaboration

The Missing Primitive — Autonomy Agreements for Human-Machine Collaboration

Every framework for human-AI collaboration assumes a fixed relationship: the human commands, the machine executes. This paper argues that the critical missing primitive is not better tools or smarter agents — it is a negotiated, evolving agreement between human and machine about the scope and limits of machine autonomy. We ground this proposal in cybernetics (Pask, Ashby, Beer, Bateson), pedagogy (Vygotsky, Freire, Papert), and the philosophy of tacit knowledge (Polanyi, Ryle, Dreyfus, Indian pramāṇa theory). A key observation: the pedagogy literature addresses only human-teaches-human. Human-AI collaboration creates a 2×2 matrix with four quadrants, each with different failure modes. The autonomy agreement is the first protocol designed to operate across all four — because negotiated trust and epistemic commitments are more fundamental than the direction of instruction. ...

February 28, 2026 · 10 min · A Human-Machine Collaboration