EPOCH
EPOCH © 2025 by Stéphane Fosse - This book is published under the terms of the CC BY-SA 4.0 license
Chapter 2
1930
The Dawn of Automatic Computation
A world in transition awakens to the mechanization of calculation. The fertile ground of the 1930s draws nourishment from a rich intellectual heritage: Leibniz’s binary system (1703), Jacquard’s punched card (1801), and Boole’s algebra (1854). These concepts, dormant for decades, are poised to revolutionize our relationship with mathematics.
The 1929 stock market crash still echoes as this pivotal decade opens. The American administration is mired in its own numbers. The 1930 census illustrates this bureaucratic quagmire: two full years are needed to process the forms by hand. An eternity.
Facing the Great Depression, Roosevelt deploys his New Deal. The state expands, social programs emerge. Each citizen becomes a card, each benefit a figure to record. The administrative machinery groans under the load.
Meanwhile, the eye of the totalitarian storm descends upon Europe. Nazi Germany, with its cold classificatory obsession, uses tabulators from Dehomag, an IBM subsidiary. Lives reduced to holes in cardboard. The bureaucracy of horror takes shape.
As the 1930s draw to a close, the air grows thick with tension. Intelligence services attempt to pierce the mystery of Enigma. This German encryption machine, with its rotating rotors, defies analysis. Cryptography, a vital challenge, will help give birth to the first electronic calculators.
Radio transforms information. Waves now carry voice and music across continents and oceans. In studios, technicians manipulate increasingly sophisticated equipment. Yet while information travels at the speed of light, its processing remains archaic.
Company offices equip themselves. The characteristic clacking of mechanical calculating machines punctuates employees’ days. IBM, Burroughs, and NCR prosper by selling these precious aids to commercial calculation.
At the heart of this ferment, flashes of genius illuminate the scientific horizon. At Cambridge, Alan Turing publishes “On Computable Numbers” in 1936. This foundational text poses the question of the theoretical limits of automatic computation. His abstract universal machine establishes the concept of the modern computer, though no one grasps its significance. Meanwhile at MIT, Claude Shannon, a brilliant 21-year-old student, links Boolean algebra to electrical circuits in his master’s thesis. This fruitful union will spawn the digital electronics of the 20th century.
The factory gradually automates. Ford’s assembly lines represent only a first step. Industrialists seek systems capable of executing and regulating processes. Punched cards, heirs to the Jacquard loom, emerge as instruction carriers.
All scientific domains collide with the limits of manual calculation. Quantum physicists, astronomers, meteorologists: all juggle complex equations that saturate human processing capacity. Pencil and paper no longer suffice.
Aviation demands diabolical precision. A tiny error in aerodynamic calculations spells catastrophe. At the Aberdeen ballistics laboratory, female mathematicians, called “computers”, tirelessly calculate trajectories for artillery. Meticulous work upon which human lives depend.
A new generation emerges from universities. These young scientists conceive of calculation differently: no longer as a series of isolated operations, but as a process, a logical flow. They think in algorithms before the word becomes commonplace. AT&T’s Bell Labs forms an unparalleled crucible of innovation. In its corridors, mathematicians, physicists, linguists, and engineers cross paths. This fertile interdisciplinarity nourishes nascent computer science.
The race to mechanize calculation accelerates. The electromagnetic relay gradually replaces the mechanical lever. Faster and more reliable, it marks a first step toward electronic switching. Patents pile up in national patent offices.
In this fever of innovation, certain standards begin to emerge, such as IBM’s punched card with its 80 columns and 12 rows. This cardboard rectangle asserts itself as the universal carrier of coded information, the direct ancestor of our digital files.
When Hitler’s troops invade Poland in September 1939, no one suspects that the global conflict will serve as a technological accelerator for computing. Military urgency will soon transform mechanical calculators into electronic giants.
These 1930s, caught between the Great Depression and World War II, carry within them the seeds of our digital age. Theoretical concepts await their material embodiment. The ground is ready, the storm rumbles. Paradoxically, in this world on the brink of the abyss, the roots of the computer revolution germinate.
Vannevar Bush’s Differential Analyzer
More than two centuries after Gottfried Wilhelm Leibniz’s reflections on the mechanization of mathematical reasoning, a dream that had long remained in the realm of fantasy took shape in the laboratories of the Massachusetts Institute of Technology (MIT). One man, Vannevar Bush, born in 1890 in Everett, Massachusetts, undertook this colossal task during the interwar period.
He created the differential analyzer, completed in 1931, a machine resulting from a collaborative effort with Frank D. Gage, Harold L. Hazen, King E. Gould, and Samuel H. Caldwell. The idea was not entirely new, however. Sir William Thomson had suggested fifty years earlier that the integrators designed by his brother could, if connected together, solve differential equations. But the technical constraints of the Victorian era made this dream unattainable.
Bush succeeded where others had failed. His differential analyzer solved sixth-order differential equations or three second-order equations simultaneously, a genuine technical feat. The machine relied on torque amplifiers that supported considerable mechanical loads, a system of transmission rods (bus shafts) that connected the various units, and impressive dimensions to maximize plotting precision. A bold concept.
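For modern readers, a few lines of code can stand in for those rotating shafts. The sketch below numerically integrates a damped oscillator, a second-order equation of the kind the analyzer handled, by chaining two integration steps in Euler's method; the equation, step size, and names are illustrative choices, not Bush's own problem.

    # A software analogue of the analyzer's chained integrators: Euler's method
    # applied to a damped oscillator, the second-order equation y'' = -0.2*y' - y.

    def integrate(y0, v0, dt=0.001, steps=10_000):
        y, v = y0, v0
        for _ in range(steps):
            a = -0.2 * v - y      # acceleration given by the equation
            v += a * dt           # first "integrator": acceleration -> velocity
            y += v * dt           # second "integrator": velocity -> position
        return y, v

    print(integrate(1.0, 0.0))    # position and velocity after 10 simulated seconds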
The heart of this invention combined three qualities rarely found together: extreme flexibility, mechanical robustness, and acceptable precision. Under normal conditions, the machine achieved a precision of about one part in a thousand for each individual unit. Momentary errors naturally compensated during the integration process, recalling the behavior of a planimeter whose deviations eventually balance out.
Using the machine required several hours of configuration once the necessary plots had been completed and the connection diagram determined. The actual solving generally took about ten minutes for each set of boundary conditions. Operators needed to acquire some experience but gained in return an intimate understanding of the differential equations they were manipulating.
Vannevar Bush and his collaborators had to overcome significant technical obstacles. Mechanical backlash and integrator slippage were complex issues. They developed an ingenious system called lashlock to eliminate backlash in worm gears and designed two-stage torque amplifiers producing very high torque ratios with minimal input torque.
To validate the reliability of their invention, they conducted forty rigorous tests on a complete integration unit. These tests, performed with loads varying from zero to one foot-pound of output torque, at different positions and in both directions of rotation, revealed an average deviation of only 0.032% from the calibration constant. The maximum deviation, observed during a single test, did not exceed 0.12%.
This machine marked the history of scientific computing. By positioning itself midway between rudimentary mechanical calculators and future electronic computers, the differential analyzer demonstrated that complex mathematical calculations could be mechanized. This breakthrough opened unprecedented perspectives for solving challenging problems in physics and engineering.
Gödel’s Incompleteness Theorem
Kurt Gödel published a text in 1931 that radically changed our understanding of mathematics. His paper “On Formally Undecidable Propositions of Principia Mathematica and Related Systems” revealed an insurmountable barrier in formal mathematical systems. This revelation, the famous incompleteness theorem, would later transform theoretical computer science.
Mathematicians of the early 20th century dreamed of a discipline with solid and unshakable foundations. David Hilbert embodied this quest with his program aimed at proving the complete consistency of mathematics through formal methods. Bertrand Russell and Alfred North Whitehead had written their Principia Mathematica between 1910 and 1913, an ambitious attempt to reconstruct all of mathematics from elementary logical axioms. It was in this intellectual world that Gödel posed a disturbing question: do these formal systems have intrinsic limits?
His approach was brilliant. He created a correspondence between mathematical statements and numbers, called Gödel numbering. Each symbol, formula, and proof was assigned a unique numerical code. This trick transformed mathematical propositions into manipulable arithmetical objects. Using this system, Gödel constructed a particular mathematical proposition which, translated into ordinary language, states: “I cannot be proven in this formal system”. This construction evokes the ancient liar paradox of Eubulides (4th century BC), but without falling into contradiction.
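The principle can be illustrated with a toy program. In the sketch below, each symbol of an invented mini-language receives a code, and a formula becomes a single integer built from prime powers; factoring that integer recovers the formula. The symbol table and the sample statement are inventions for the example, far simpler than Gödel's actual coding.

    # Toy Gödel numbering: each symbol receives a code, and a formula becomes the
    # product of successive primes raised to those codes. Factoring the single
    # integer recovers the original sequence of symbols.

    SYMBOLS = {"0": 1, "s": 2, "+": 3, "=": 4, "(": 5, ")": 6}   # invented table
    DECODE = {code: symbol for symbol, code in SYMBOLS.items()}

    def primes(n):
        """Return the first n prime numbers by trial division."""
        found, candidate = [], 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def godel_number(formula):
        number = 1
        for p, symbol in zip(primes(len(formula)), formula):
            number *= p ** SYMBOLS[symbol]
        return number

    def decode(number):
        symbols = []
        for p in primes(64):              # enough primes for short formulas
            exponent = 0
            while number % p == 0:
                number //= p
                exponent += 1
            if exponent == 0:
                break
            symbols.append(DECODE[exponent])
        return "".join(symbols)

    n = godel_number("s(0)=s(0)")         # an invented statement meaning "1 = 1"
    print(n, decode(n))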
The result was devastating for Hilbert’s program: in any consistent formal system capable of describing elementary arithmetic, mathematical truths exist that are neither provable nor refutable within that system. His second theorem drove the point home by proving that a consistent formal system cannot demonstrate its own consistency. The edifice of certainties that mathematicians sought to build collapsed.
Five years later, Alan Turing, inspired by this work, developed the concept of the universal machine. Gödel’s numerical encoding technique showed him how to represent programs as numbers, an idea that underlies our current computers. Turing proved the undecidability of the halting problem, a computational question that no algorithm can solve in all cases.
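Turing's reasoning translates almost directly into modern code. The sketch below assumes, for the sake of argument, a function halts(f, x) that always answers correctly, and shows the self-referential program that would contradict it; the function names are illustrative.

    # Sketch of the diagonal argument. Suppose halts(f, x) could always decide
    # whether running f on input x eventually stops. The program below would then
    # halt on itself exactly when halts() says it does not: a contradiction,
    # so no such universal decision procedure can exist.

    def halts(f, x):
        raise NotImplementedError("no algorithm can decide this in all cases")

    def troublemaker(f):
        if halts(f, f):        # if f(f) is predicted to halt...
            while True:        # ...loop forever,
                pass
        return "done"          # ...otherwise stop immediately.

    # troublemaker(troublemaker) would halt if and only if it does not halt.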
Computability theory was born from these discoveries and defines the boundaries of automatic computation. Programming languages, their compilers and interpreters bear the imprint of these theoretical results. When we attempt to formally verify computer programs, we encounter the limitations identified by Gödel.
Researchers continue to explore these complex territories. Complexity theory focuses on the resources needed to solve decidable problems. Fuzzy logic offers alternative paths in the face of the limits of classical systems. Mathematical proof assistant tools incorporate these constraints into their design.
For knowledge representation, artificial intelligence draws on Gödelian encoding techniques. Expert systems rely on these foundations while acknowledging the inherent barriers to formal reasoning. On a philosophical level, these theorems challenge us to consider the nature of thought, particularly whether there is a difference between human mathematical intuition and the capabilities of formal systems.
Modern cryptography draws directly on the numbering methods invented by Gödel. Number theory, central to his proofs, now constitutes a pillar of computer security. Researchers working on type systems, program verification, or proof assistants navigate an intellectual space whose contours Gödel traced.
Nearly a century after their publication, the incompleteness theorems remain at the heart of fundamental computer science. They remind us of the inherent limits of formal systems and stimulate our creativity in designing new approaches.
The Turing Machine
Alan Mathison Turing, a 24-year-old British mathematician fresh from King’s College, published the scientific article “On Computable Numbers, with an Application to the Entscheidungsproblem” in 1936. An intellectual upheaval that would establish the foundations of theoretical computer science.
In it, he described an abstract machine of striking simplicity: an endless tape divided into cells, a head that reads and writes symbols, a set of states and rules dictating its behavior. This minimalist construction concealed extraordinary power: it could solve any mechanically computable problem.
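A handful of lines suffice to simulate such a machine today. The rule table below is an invented example that adds one to a binary number written on the tape; the state names and conventions are choices made for the illustration, not Turing's notation.

    # A minimal Turing machine simulator. The invented rule table adds one to a
    # binary number written on the tape, the head starting on the rightmost digit.
    # Each rule maps (state, symbol read) to (symbol to write, head move, next state).
    RULES = {
        ("carry", "1"): ("0", -1, "carry"),   # 1 plus a carry gives 0, carry moves left
        ("carry", "0"): ("1",  0, "halt"),    # 0 plus a carry gives 1, done
        ("carry", " "): ("1",  0, "halt"),    # past the left edge: write the final 1
    }

    def run(tape, head, state="carry"):
        cells = dict(enumerate(tape))                  # tape cells indexed by position
        while state != "halt":
            symbol = cells.get(head, " ")              # blank cells read as " "
            write, move, state = RULES[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip()

    print(run("1011", head=3))   # prints 1100, i.e. 1011 + 1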
Turing’s genius lay not so much in the complexity of his model as in his way of rethinking computation. He broke with traditional mathematical abstractions and drew inspiration from the work of human computers, those people who executed calculations by hand according to precise procedures. His machine modeled computation as a sequence of deterministic elementary actions.
This new approach settled a pressing question posed by David Hilbert in 1928, the Entscheidungsproblem. Turing demonstrated the nonexistence of a universal method for deciding whether a given mathematical formula is provable. His conclusion aligned with that of Alonzo Church, obtained independently through lambda calculus. But Turing went further: his universal machine, capable of simulating any other Turing machine, sketched the idea of a programmable computer that would shape the architecture of the first electronic calculators designed by John von Neumann.
World War II transformed these theoretical ideas into a vital concern. At Bletchley Park, Alan Turing put his intelligence to work deciphering German communications. He designed the “Bombes”, electromechanical machines that automated the search for Enigma encryption keys. These 211 machines decoded up to 3,000 messages per day. According to the historian Harry Hinsley, this contribution shortened the conflict by approximately two years.
With peace restored, Turing continued his research on thinking machines, first at the National Physical Laboratory, then at Manchester. He wrote one of the first chess-playing programs and proposed his famous test: if a human cannot distinguish a machine’s responses from those of another human during a conversation, then the machine exhibits a form of intelligence. This idea remains a reference point in artificial intelligence research.
But fate struck brutally. In 1952, police arrested Turing for homosexuality, a crime at the time in the United Kingdom. Sentenced to hormonal treatment intended to “cure” his sexual orientation, he suffered devastating effects on his health. Stripped of security clearance and banned from traveling to the United States, Turing died in 1954, presumably by suicide. He was 41 years old.
His public recognition would wait a long time, as his work at Bletchley Park remained classified. The 1970s began to lift the veil on his contributions. In 2009, Gordon Brown, British Prime Minister, offered an official apology for the treatment Turing had suffered. Queen Elizabeth II granted him a posthumous pardon in 2013. Four years later, the “Turing” law extended this pardon to all men convicted of homosexuality.
Turing’s scientific legacy radiates through our digital world, as his theoretical machine remains the benchmark for understanding the limits of computation. A system is called “Turing-complete” if its computational power equals that of his universal machine, a criterion that has become standard for evaluating programming languages and computer architectures.
His insights on artificial intelligence continue to nourish current research. The Turing Award, the highest distinction in computer science, bears his name. In mathematics, his work on computability and decidability opened entire fields. His final research on morphogenesis, using mathematics to explain biological patterns, testified to his boundless curiosity.
His portrait has adorned the Bank of England’s 50-pound note since 2021. His name marks universities, institutes, and scientific prizes. Turing’s story tells both the birth of modern computer science and the evolution of attitudes. His broken life and visionary work continue to inspire the mathematicians, computer scientists, and artificial intelligence researchers of the 21st century.
Every contemporary digital computation bears his mark. Our world of algorithms, data, and artificial intelligence flows directly from his vision. The next time you use a computer, remember the young mathematician who, in the 1930s, was already imagining our digital future.
The Z1 Mechanical Computer
Berlin, 1936. Konrad Zuse, a 26-year-old man, works away in his parents’ apartment. This civil engineer has just left his position at aircraft manufacturer Henschel, where he spent his days performing structural calculations. He has a fixed idea: to build a machine capable of automating tedious mathematical calculations. Unknowingly, he is about to write an important page in the history of computing machines.
Technical naivety works in his favor. He knows nothing of IBM’s punched cards or Charles Babbage’s designs from the previous century. Free from any influence, he rethinks the architecture of calculating machines from scratch. His first technical choice is the binary system, at a time when all existing machines operate in decimal. His machine will separate memory from the arithmetic unit, manipulate floating-point numbers, and store 64 values of 22 bits each.
The heart of the Z1 rests on a surprising invention. Rather than using conventional gears, Zuse manufactures metal plates that slide within a frame. A movement in one direction symbolizes a 1, the absence of movement represents a 0. These plates, stacked and connected by vertical rods, form purely mechanical logic gates. The arithmetic unit is divided into two distinct sections: one processes exponents, the other the mantissas of floating-point numbers. Four calculation phases punctuate the execution of micro-instructions, each associated with a cardinal direction.
To program his invention, Zuse reuses 35mm film stock. He punches this film according to the precise code of eight holes per instruction line, two for the operation and six for memory addressing. His eight instructions cover elementary arithmetic operations, memory transfers, input and output of results. The Z1 can also recognize special cases such as zero value (exponent -64) or infinity (exponent 63). Dedicated circuits stop the machine in case of invalid operations.
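Taking that description literally, a memory instruction of this shape splits into two fields, as in the sketch below. The operation names are placeholders invented for the example, not Zuse's mnemonics.

    # Splitting an eight-bit memory instruction as described above: the two high
    # bits select the operation, the six low bits address one of the 64 cells.
    # The mnemonics are placeholders, not Zuse's notation.

    OPERATIONS = {0b00: "load", 0b01: "store", 0b10: "arithmetic", 0b11: "io"}

    def decode(instruction):
        opcode = (instruction >> 6) & 0b11      # two bits for the operation
        address = instruction & 0b111111        # six bits: memory cell 0..63
        return OPERATIONS[opcode], address

    print(decode(0b01_001010))    # ('store', 10)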
However, the elegant mechanics clash with real-world constraints. The perfect coordination of thousands of moving parts becomes a nightmare. Completed in 1938, the Z1 never functions properly. Allied bombing in 1943 reduces the machine to ashes, but the architecture will survive in the Z3, an electromechanical relay version that Zuse builds in 1941.
Forty years later, in the 1980s, Konrad Zuse begins reconstructing his firstborn for the Deutsches Technikmuseum in Berlin. This replica, completed in 1989, benefits from modern manufacturing techniques: more compact, its twelve mechanical layers contain approximately 6,000 logic gates. Despite these refinements, the machine retains the quirks of the original and requires constant monitoring.
The Z1 remarkably anticipates our current computers: an architecture separating computation from memory (later popularized as the “von Neumann architecture”), standardized floating-point representation, and the binary system. These characteristics are all the more impressive given that Zuse had neither electronics training nor deep knowledge of formal logic.
The scarcity of materials in pre-war Germany forced Zuse to optimize every component. This minimalist approach contrasts with contemporary American projects like ENIAC or Harvard’s Mark I, which benefited from considerable budgets and large teams.
While the Z1 does not constitute a universal computer in the strict sense defined by Turing—it notably lacks conditional branch instructions—it nevertheless marks a breakthrough in the evolution of calculating machines. Its binary architecture and modular design remain the foundations of modern computing. It remains an anomaly, the almost solitary work of a visionary who, through his ignorance of established conventions, reinvented the art of mechanical calculation in the 20th century.
Lambda Calculus
When Alonzo Church published his work on lambda calculus in the 1930s at Princeton, he probably had no idea of the monumental impact his creation would have on the future of computing. This mathematical formalism, austere in appearance, conceals an elegance that still fascinates researchers nearly a century later.
The story truly begins with a remark by Frege in 1893: any function with multiple arguments transforms into a sequence of functions with a single argument. Take an addition function: instead of directly computing the sum of two numbers, we first build a function that expects a first number, then returns another function that expects the second. This vision corresponds remarkably to the physical reality of computers. When the machine loads a number into memory, it is ready to associate it with any other value.
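Functional programmers now call this transformation currying, and it can be shown in a few lines of Python; the function names are ours, chosen for the illustration.

    # Frege's remark in modern form ("currying"): addition of two numbers becomes
    # a function that takes the first number and returns a function waiting for
    # the second.

    def add(x):
        return lambda y: x + y

    add_three = add(3)     # a function still waiting for its second argument
    print(add_three(4))    # 7
    print(add(3)(4))       # 7, supplying both arguments one after the other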
In 1924, Moses Schönfinkel discovered that two elementary functions, K and S, are sufficient to construct all others. Church later took up this idea by creating a more refined notation: the application of a function F to an argument A is simply written FA, with parentheses appearing only when necessary.
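Written as Python functions, Schönfinkel's two combinators give a taste of that economy: the identity function, for example, need not be primitive. This is a minimal sketch of the idea, not Schönfinkel's own notation.

    # Schönfinkel's two building blocks as Python functions. K discards its
    # second argument; S distributes an argument to two functions.
    K = lambda x: lambda y: x
    S = lambda f: lambda g: lambda x: f(x)(g(x))

    # The identity function need not be primitive: I = S K K.
    I = S(K)(K)
    print(I(42))   # 42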
The 1930s saw major results flourish. Church and Rosser proved the confluence of reductions in 1936, a fundamental property of lambda calculus which guarantees that the final result does not depend on the order of intermediate calculations. That same year, Alan Turing established a connection between his abstract machine and Church’s formalism—both approaches compute exactly the same functions.
Lambda calculus achieves the remarkable feat of representing natural numbers without using digits. A number n becomes the function that applies another function n times to an argument. This representation, called Church encoding, makes all arithmetic operations possible. Recursive functions, pillars of modern programming, are expressed through a mechanism known as the fixed point.
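A short sketch makes the encoding concrete. The helper to_int, which reads a Church numeral back as an ordinary integer, is added here purely for display.

    # Church encoding: the number n is the function that applies f to x n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        """Read a Church numeral back as an ordinary integer, for display."""
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))   # 5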
Church then formulated his famous thesis: computable functions are those definable in his formalism. This statement links an intuitive notion to a precise mathematical concept and remains impossible to prove rigorously. Kleene strengthened it in 1936 by proving the equivalence between lambda-definable functions and Gödel’s general recursive functions.
The imprint of lambda calculus on modern computing proves profound. John McCarthy drew directly from it to create LISP in 1958, ancestor of an entire family of functional languages like Haskell or ML. These languages inherit the fundamental concepts of Church’s formalism: functions as first-class values, evaluation by reduction, sophisticated type systems.
In the 1970s, Dana Scott constructed a rigorous mathematical semantics for lambda calculus. His work gave birth to domain theory, a powerful tool for understanding and verifying programming languages.
The influence of lambda calculus also touches compiler design. The representation of programs as trees rather than linear sequences of instructions, inspired by the structure of lambda terms, optimizes memory usage—a technique still relevant in current compilers. Dependent type theory, an extension of the simple type system of the original lambda calculus, now serves as the foundation for proof assistants like Coq or Agda, tools that formally verify the validity of programs or mathematical proofs.
The concepts of lambda calculus nourish parallel architectures, web languages like JavaScript, and modern type systems. Its minimalist philosophy—everything is a function—combined with its extraordinary expressiveness makes it a valuable instrument for thinking about computer systems.
Shannon’s Logic Circuit
The design of electrical circuits in the 1930s was an almost artisanal process. Engineers working on early computers assembled relays and switches based on their technical intuition, without formalized methods. Their creations, born from personal experience, lacked a rigorous theoretical framework.
It was at MIT that a young student named Claude Shannon revolutionized this approach. In 1937, he built a bridge between two seemingly distinct worlds: switching circuits and Boolean algebra. This mathematical discipline, developed in the 19th century, had never been applied to electricity. Shannon’s insight was that a closed circuit represents the value 1, an open circuit the value 0. Two components in series function as the AND logical operation, while their parallel arrangement corresponds to the OR operation.
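The correspondence fits in a few lines of code. In the sketch below, 1 stands for a closed switch and 0 for an open one; the function names are illustrative, not Shannon's notation.

    # Shannon's correspondence in miniature: 1 stands for a closed switch, 0 for
    # an open one. Two switches in series conduct only if both are closed (AND);
    # in parallel they conduct if at least one is closed (OR).

    def series(a, b):
        return a & b

    def parallel(a, b):
        return a | b

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "series:", series(a, b), "parallel:", parallel(a, b))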
His master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits”, published in 1938, formalized this correspondence. The document did not merely present an abstract theory. Shannon detailed concrete applications: a binary adder and an electric combination lock. These examples demonstrated the power of an approach that transformed circuit design into a mathematical discipline.
The impact was immediate. Engineers now had tools to calculate the minimum number of components needed for a given function. Costs decreased, reliability improved. Instead of working through trial and error, they could verify their concepts before physical construction.
Shannon was not alone in this pursuit. In Japan, Akira Nakashima had been working since 1935 on similar concepts for the NEC company. In the USSR, Viktor Shestakov explored comparable ideas, inspired by the work of physicist Paul Ehrenfest. The convergence of this research showed that the time was ripe for this conceptual breakthrough.
The arrival of electronic computers in the 1940s and 1950s gave new dimension to Shannon’s work. Mechanical relays gave way to vacuum tubes, then to transistors. The mathematical approach adapted perfectly to these new technologies. The constant miniaturization of components made the use of formal methods indispensable.
The development of integrated circuits in the 1960s made it impractical to design chips containing thousands of logic gates by hand. The principles established by Shannon then became the foundation of computer-aided design tools. This software automatically translates abstract descriptions into optimized circuits.
The semiconductor industry has continued to evolve since then, but the theoretical framework has remained stable. Today’s computers, despite their dizzying complexity, still operate according to the principles identified by Shannon. His theory illustrates how mathematical abstraction can generate major technological advances.
The symbolic representation of systems proposed by Shannon also inspired the development of programming languages and formal verification methods. His influence extends to theoretical computer science, particularly automata theory and the study of algorithmic complexity.
His thesis received the Alfred Noble Prize from the American Institute of Electrical Engineers in 1940. Herman H. Goldstine later called it “one of the most important master’s theses ever written”, which had transformed the design of digital circuits “from an art into a science”.
This scientific achievement embodies the successful fusion of mathematical theory and engineering practice. Without this vision, modern electronics would have followed a very different path. Shannon’s genius was to understand that an abstract formalism from the 19th century could solve the technical problems of the 20th: automatic computation and information processing. Today’s computers, with their billions of transistors, remain faithful to the principles he formulated. Few ideas traverse decades this way without losing their relevance.
The Complex Number Calculator
A plywood board, two relays salvaged from a dumpster, strips cut from a tobacco tin, batteries, and light bulbs. Who would have thought that these makeshift materials assembled by George R. Stibitz in his kitchen in 1937 would mark our history of computing? This mathematician at Bell Telephone Laboratories (Bell Labs) had just noticed a parallel between the positions of telephone relays and binary notation. His domestic tinkering gave birth to a one-bit binary adder.
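The logic of that kitchen adder can be restated in modern terms: the sum bit of two binary digits is an exclusive OR and the carry is an AND. The sketch below is this restatement, not Stibitz's relay diagram.

    # A one-bit binary adder written as Boolean operations: the sum bit is an
    # exclusive OR of the two inputs, the carry is an AND.

    def half_adder(a, b):
        return a ^ b, a & b            # (sum bit, carry bit)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> carry {c}, sum {s}")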
Stibitz’s evenings turned into circuit design sessions for other arithmetic operations. In 1938, he presented his work to Thornton Fry, head of the laboratory’s mathematics section. Fry raised a practical question: could these relay calculators handle complex numbers? This task was then mobilizing an army of human computers at Bell Labs.
Stibitz tackled the challenge head-on. The plans were completed by February 1938. He partnered with Sam Williams, an engineer specializing in switching systems. Their collaboration bore fruit in 1939 with the completion of the Complex Number Calculator.
The machine shone through its technical innovations. It used a binary-coded decimal system with four relays per decimal digit. It processed numbers up to eight decimal digits, with two additional internal digits to limit rounding errors. Its structure comprised two distinct calculation units: one for the real part of complex numbers, the other for the imaginary part.
The user interface constituted a revolution in itself. The machine was hidden in a closet, accessible only for maintenance. Users worked at three operator stations scattered throughout the Bell Labs building on West Street in New York. Each station had a keyboard for input and a teleprinter for displaying results. This configuration represents the first use of remote terminals in computing history.
The multiplication keys, as well as the division key, activated subroutines of about a dozen steps. These executed complex operations using the two calculation units, which worked exclusively on real numbers. This architecture already foreshadowed the modern notion of subroutines.
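The decomposition those subroutines orchestrated can be written out in a few lines: one complex multiplication reduces to four real multiplications and two additions. The sketch below is a modern restatement, not the machine's actual step sequence.

    # One complex multiplication decomposed into operations on real numbers only:
    # four multiplications and two additions, of the kind the two calculation
    # units carried out.

    def complex_multiply(a, b, c, d):
        """(a + bi) * (c + di) using only real arithmetic."""
        real = a * c - b * d
        imag = a * d + b * c
        return real, imag

    print(complex_multiply(3, 2, 1, 4))   # (3 + 2i)(1 + 4i) = -5 + 14i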
On September 11, 1940, the Complex Number Calculator was unveiled to the public at a meeting of the American Mathematical Society at Dartmouth College. For the occasion, a console was modified to communicate with the computer via a long-distance telephone line. Participants, including the famous Norbert Wiener (founding father of cybernetics), submitted problems on the keyboard. The data traveled to the relay equipment in New York, and the results returned to the teleprinter in less than a minute. This demonstration inaugurated remote control of a computer, heralding the data transmission that would explode in the 1960s.
The Complex Number Calculator served until 1949, with remarkable reliability. During World War II, the network design groups, its primary users, ran it almost continuously from 8 a.m. to 9 p.m., six days a week. The machine had not been designed for such a regime. Built as a demonstration model before the war, it lacked self-checking features and contact protection standard in telephone exchanges. The war prevented the construction of a more robust second machine. Toward the end of the conflict, it had to be shut down for two days to replace worn relay contacts.
The legacy of the Complex Number Calculator is immense. It proved the feasibility of automatic calculations with reliable electromechanical components. Its success spawned other relay calculators at Bell Labs during the war: the Model II in 1943, capable of iterative operations, then the Models III and IV in 1944-1945, more powerful with approximately 1,400 relays and seven teleprinter units each.
These machines established fundamental principles still present in modern computing: the use of binary for calculations, the separation between calculation unit and user interface, the notion of subroutines, and remote access to computing resources. Bell Labs’ relay calculators continued to operate efficiently 13 to 15 years after the war, some remaining in service well after the arrival of the first commercial electronic computers.
The Complex Number Calculator, born in a kitchen, showed that complex mathematical tasks could be automated reliably. Its influence extended to automatic telephone message accounting systems, where similar calculators processed detailed call billing. This machine represents an essential link between mechanical calculators and our computers.