EPOCH
EPOCH © 2025 by Stéphane Fosse - This book is published under the terms of the CC BY-SA 4.0 license
Chapter 3
1940
Computing in the Turmoil of War
When war broke out in 1939, no one imagined that this conflict would radically transform our relationship with calculation and information. Hitler invaded Poland, the world tilted, and in the shadows of general staffs, an equally decisive battle began: the battle of intelligence.
The Bletchley Park manor, some fifty miles northwest of London, housed a secret that would change history. Alan Turing designed machines capable of decrypting messages encoded by Enigma, that reputedly unbreakable system used by the Germans. Cryptanalysis, a confidential science until then, became a vital issue. The electromechanical "bombes," primitive ancestors of computers, ran day and night. Deciphering enemy communications saved thousands of lives and, according to some historians, may have shortened the war by as much as two years.
Meanwhile, the American army struggled with its ballistic calculations, too complex to be solved by hand, which slowed weapons production. At the University of Pennsylvania, a team worked relentlessly on a crazy project: ENIAC. This 30-ton machine, with its 18,000 vacuum tubes that heated like embers, consumed 200 kilowatts of electricity to calculate shell trajectories in seconds. When it was unveiled to the public in February 1946, the war had been over for months, but the era of electronic computing had just begun.
Women played a strangely forgotten role in this story. Six female mathematicians programmed ENIAC by physically reconfiguring its circuits. They invented, unknowingly, the profession of programmer. They were Kay McNulty, Betty Snyder, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas. Their names deserve to appear in history books alongside the engineers who assembled the machine.
The post-war period sketched a new landscape where the United States, virtually untouched by combat, now possessed 50% of the world’s wealth. American industry ran at full capacity to rebuild devastated Europe. IBM’s punched card machines equipped administrations and large companies, but their limitations quickly became apparent in the face of exploding calculation needs.
In the scientific world, a young Bell Labs engineer, Claude Shannon, published "A Mathematical Theory of Communication," a 1948 article that could easily have gone unnoticed. In it, he demonstrated that any information could be treated as a sequence of 0s and 1s. This deceptively simple idea would revolutionize our civilization.
The Cold War settled in and pushed both superpowers to invest in research. In Kyiv, Sergei Lebedev worked secretly on MESM, the first Soviet computer. The Americans multiplied projects: Whirlwind at MIT, IAS at Princeton, Mark III at Harvard. John von Neumann theorized an architecture that would remain the standard for more than half a century. Memory circuits improved, and vacuum tubes began to give way to the transistor, invented in 1947.
In the innovative world of telecommunications, AT&T expanded its automatic telephone network. Undersea cables multiplied. In laboratories, digital data transmission was being experimented with. These technologies would eventually converge with computing.
Universities created their first "automatic computing" departments. New vocabulary emerged: programming, algorithm, bit, byte. Commercial machines appeared. Ferranti released its Mark I in Great Britain. Konrad Zuse, in Germany, created the first European computer company. Remington Rand delivered UNIVAC I to the American Census Bureau in 1951.
Libraries and documentation centers, overwhelmed by the explosion of post-war scientific literature, sought new solutions. The Universal Decimal Classification showed its limits. Theoretical work on automated documentation systems emerged. Vannevar Bush, in his famous 1945 article, imagined an office equipped with a "Memex," a machine capable of storing and linking documents through associations, an uncanny prefiguration of the Web we know.
Computing did not yet exist as a discipline; people spoke of cybernetics, a term proposed by Norbert Wiener in 1948 to designate the science of control and communication. In France, IBM translated "computer" as "ordinateur," on the suggestion of Professor Jacques Perret. The term quickly took hold in the Francophone world.
Who could have predicted, in 1940, that ten years later, electronic machines would process information at dizzying speed? These early achievements, despite their monumental size and exorbitant cost, would transform every aspect of our daily lives in the following decades.
The desire to create thinking machines was nothing new. Pascal and Leibniz had built mechanical calculators in the 17th century. Babbage had designed his analytical engine in the 19th. But it took the urgency of war, massive state investment, and collaboration between universities and industries for the dream to become reality. The modern computer was born from this complex alchemy where military necessity, mathematical research, and electronic engineering intertwined.
In 1950, computing was barely emerging from its prehistory. Machines were enormous, fragile, costly. Only a few initiates understood their operation and possibilities. Yet the path to digital transformation was laid out. The theoretical and technical foundations were established. The first civilian applications appeared.
The adventure that was beginning would last longer than most technological revolutions. Seventy-five years later, we live in the wake of this transformation. The computer, a child of war turned universal tool, continues to modify our relationship to knowledge, work, and communication. Its history truly begins in the 1940s, when men and women, facing the challenges of a world in flames, invented thinking machines.
At the dawn of the 1950s, a page turned. Pioneers gave way to industrialists. Fundamental research stepped back before practical applications. Computing entered its commercial phase. Names like IBM, Remington Rand, Ferranti, and Bull established themselves. A new industry took flight, driven by the growing needs of a booming economy. No one suspected the scale of the upheavals to come.
Atanasoff-Berry Computer
In the winter of 1937, John Vincent Atanasoff, a physicist and mathematician at Iowa State University, was driving home on a freezing night. During this journey, his mind suddenly lit up with an insight that would transform the world: four fundamental principles that would form the conceptual foundation of the first electronic computer. These principles were the use of electronics for calculations, the adoption of the binary system, the separation of computation from memory, and the periodic refresh of data in memory.
Born in 1903 in Hamilton, New York, Atanasoff had studied electrical engineering in Florida before turning to mathematics at Iowa State College. His doctorate in theoretical physics, completed in 1930 at the University of Wisconsin, had confronted him with the limitations of existing mechanical calculators. These obsolete machines hindered his research on the dielectric constant of helium, a problem requiring computing power that did not yet exist.
The meeting with Clifford Berry changed everything. This brilliant young electrical engineering student brought the technical skills necessary to implement Atanasoff’s ideas. Their collaboration began in 1939 with a rudimentary prototype that validated the basic concepts. This initial success earned them a modest grant of $850 to build the complete machine, the Atanasoff-Berry Computer (ABC).
This machine weighed over 300 kg and featured several innovative elements. Rotating drums covered with capacitors stored data in binary form. Vacuum tubes executed logical operations. Punched cards handled data input and output. A central clock synchronized the entire system. Unlike other calculators of its day, the ABC did not process decimal numbers directly but converted them to binary, prefiguring all modern computers. Its specific purpose was to solve systems of linear equations with up to 29 equations and 29 unknowns, with a remarkable precision of fifteen decimal places.
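To give a sense of the kind of problem involved, here is a toy illustration in Python of solving a small linear system by successive elimination of unknowns. It is only a sketch of the general method on a two-equation example, not the ABC's actual binary, drum-based procedure:

```python
# Solving a linear system by eliminating one unknown at a time, then
# back-substituting. Purely illustrative (no pivoting, tiny example).

def solve(a, b):
    n = len(b)
    for col in range(n):                      # eliminate one unknown per pass
        pivot = a[col][col]
        for row in range(col + 1, n):
            factor = a[row][col] / pivot
            a[row] = [r - factor * c for r, c in zip(a[row], a[col])]
            b[row] -= factor * b[col]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):          # back-substitution
        s = sum(a[row][col] * x[col] for col in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# 2x + y = 5 ; x + 3y = 10  ->  x = 1, y = 3
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```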
The fate of this machine took a tragic turn. Completed in 1942, the ABC was abandoned when its creators left the university to contribute to the war effort. The machine suffered from reliability issues, particularly with the punched card system. The university neglected to finalize the patent procedure initiated by Atanasoff. These unfortunate circumstances led to the dismantling of the ABC in 1948, without its contribution to computing being recognized.
The injustice deepened with the appearance of ENIAC in 1945. This machine designed by Presper Eckert and John Mauchly at the University of Pennsylvania was long celebrated as the first electronic computer. The reality was quite different. Mauchly had visited Atanasoff at Iowa State in June 1941, examined the ABC in detail, and read the 35-page manuscript describing its architecture. Many fundamental concepts from the ABC found their way into ENIAC, without attribution to their true creator. Mauchly’s media savvy and generous military funding did the rest: ENIAC stole the spotlight, Atanasoff fell into obscurity.
The truth came to light during a landmark trial. In the 1960s, Honeywell challenged the validity of the ENIAC patent held by Sperry Rand. The legal battle, which lasted from 1971 to 1973, exposed Mauchly’s appropriation of Atanasoff’s ideas. The judge ruled unambiguously: Eckert and Mauchly had not invented the electronic digital computer but had drawn inspiration from Atanasoff’s work. The patent was invalidated, placing the invention of the computer in the public domain.
Official recognition finally came for Atanasoff. In the following decades, he received multiple distinctions, including the prestigious National Medal of Technology presented by President George H. W. Bush in 1990. Books were devoted to him, notably Clark Mollenhoff's 1988 biography Atanasoff: Forgotten Father of the Computer, along with a 2012 documentary. An exact replica of the ABC was built in the 1990s and displayed at Iowa State University, alongside a rare original memory drum.
Atanasoff’s story illustrates how solitary genius can transform a discipline. By radically questioning existing approaches, he laid the conceptual foundations for all twentieth-century computing. Some of his concepts, such as the use of electronics and binary representation, became established with ENIAC and later EDVAC. Others, like regenerative memory, took longer to become industry standards. The ABC played a catalytic role in the evolution of electronic computing, fully justifying Atanasoff’s title as the father of modern computing.
Artificial Neural Networks
In the middle of the 20th century, research in artificial intelligence experienced a remarkable breakthrough with the work of Warren McCulloch and Walter Pitts. These two researchers, who seemed to have nothing in common, combined their talents to design a mathematical model of the neuron. Their proposal advanced our understanding of the brain and laid the foundation for future machine learning technologies.
In 1943, McCulloch, a seasoned neuropsychiatrist, and Pitts, a young mathematical prodigy, published a seminal article. This groundbreaking text, A Logical Calculus of the Ideas Immanent in Nervous Activity, established a correspondence between the functioning of biological neurons and logical operations. The authors relied on the all-or-nothing character of neuronal activity to model nerve cells as binary devices, either active or inactive. This simplification enabled them to construct an equivalence with Boolean logic and its true/false values.
The backgrounds of these two researchers deserve attention. McCulloch, trained in psychiatry, had long harbored philosophical questions about the logical nature of thought and its physiological substrate. During the 1930s, he devoted himself to research in neurophysiology and collaborated notably with J. G. Dusser de Barenne at Yale, where they studied the localization of brain functions. Pitts, for his part, stood out for his precocious genius in mathematics. Without formal academic training, he developed a passion for mathematical logic from a very young age, devouring Bertrand Russell’s Principia Mathematica. In the early 1940s, he joined Nicolas Rashevsky’s team at the University of Chicago, a pioneering group in applying mathematics to biology.
The meeting between McCulloch and Pitts occurred in 1942, orchestrated by Jerome Lettvin. Despite their significant age difference, they quickly discovered their common interests. Pitts was enthusiastic about the concept of a logical machine capable of executing reasoning, an idea he connected to the work of Leibniz and Turing. McCulloch, for his part, had been attempting since the 1920s to translate neuronal activity into logical calculus. Together, they wondered whether the nervous system might function as such a logical machine.
Their 1943 article boldly combines elements of neurophysiology, philosophy, and mathematics. They present a simplified model of the neuron as a binary entity that activates or not according to the sum of its excitatory and inhibitory inputs, emitting a signal when this sum exceeds a certain threshold. By connecting these idealized neurons, they demonstrate how to construct networks that compute the fundamental logical functions (AND, OR, NOT). Their approach follows an axiomatic method: from simplifying assumptions about neuronal activity, they build a logical calculus of relationships between neurons.
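To make this concrete, here is a minimal sketch in Python of such a unit, under the simplest assumptions of the model (a fixed threshold and absolute inhibition); it is an illustration, not the authors' original notation:

```python
# A McCulloch-Pitts unit: fires (outputs 1) only if no inhibitory input is
# active and the sum of excitatory inputs reaches the threshold.

def mcp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):          # a single inhibitory pulse silences the cell
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Elementary logic gates built from such units
AND = lambda a, b: mcp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mcp_neuron([1], [a], threshold=1)   # constant excitation, inhibited by a

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0 =", NOT(0), " NOT 1 =", NOT(1))
```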
The authors themselves emphasize the theoretical limitations of their model. They do not seek to faithfully represent biological neurons, but rather to establish a formal framework for reasoning about hypothetical networks with known properties. They acknowledge that the activity of real neurons proves more continuous than discrete and that phenomena such as learning permanently modify the structure of networks. Their ambition lies elsewhere: to provide a mathematical tool for rigorously manipulating known networks and easily creating networks with desired properties.
This theoretical approach to biology, of which Nicolas Rashevsky was an ardent promoter, marks McCulloch and Pitts’s article. A physicist by training, Rashevsky founded a program of mathematical biophysics at Chicago in the 1930s. According to him, mathematical biology should proceed through abstractions and idealizations, similar to physics. Just as the physicist studies ideal concepts such as the material point or perfect fluid, the theoretical biologist must start from simplified models to progressively grasp the complexity of living systems. Mathematics, according to Rashevsky, offers a valuable framework for formalizing biological phenomena and deducing properties through rigorous reasoning.
It is in this spirit that McCulloch and Pitts shaped their model of logical neural networks. Their project aims not so much to describe the real nervous system as to show how a network of simple elements, properly connected, develops logical functions and, potentially, complex reasoning. Their work also connects to contemporary reflections on the logical foundations of computation, illustrated by the Turing machine. In 1936, Turing had shown how to define the process of computation by an abstract machine manipulating symbols according to precise rules. McCulloch and Pitts drew inspiration from this idea to conceive the brain as a logical machine, whose elementary components would be all-or-nothing neurons.
The impact of this article exceeded all expectations. On the theoretical level, it laid the foundations of computational neuroscience and cognitive science: the idea that complex cognitive functions arise from networks of simple units inspired numerous works in artificial intelligence and neuroscience. The article also presents a philosophical dimension by proposing a mechanistic vision of the mind, where thought would reduce to logical operations executed by neural networks. This conception had a lasting impact on debates about the nature of intelligence and its relationship to biological substrate.
The proposed model certainly has important limitations, which the authors acknowledge. It idealizes the neuron as a binary entity with a fixed threshold, whereas biological neurons possess continuous and adaptive dynamics. It assumes a fixed network structure, when neural connections constantly reconfigure through plasticity. Nevertheless, through its formal clarity, the article opened a vast field of research. It showed how a mathematical approach illuminates the functioning of the nervous system and, more broadly, the biological bases of intelligence.
This work testifies to the richness of interdisciplinary exchanges. It emerges at the crossroads of philosophical questions about the mind, experimental advances in neurophysiology, and formal developments in mathematical logic. It is through this intersection that McCulloch and Pitts arrive at a new vision, where the brain appears as a system for logical information processing. While their model remains rudimentary, it sketches the conceptual outlines of future research on formal neural networks and their applications.
Colossus
At the heart of the secret war waged by British cryptanalysts, an electronic machine enabled the decryption of enemy messages and changed the course of history: Colossus.
Between 1943 and 1945, in the silence of the buildings at Bletchley Park in Buckinghamshire, England, Tommy Flowers led the construction of a series of calculators designed to crack the codes of the German Lorenz SZ40 machine. British cryptanalysts had nicknamed it "Tunny"—the tuna. The Germans called it Schlüsselzusatz or SZ. Colossus’s mission was to find the initial rotor settings of Lorenz to decode intercepted messages.
Technically, Colossus embodied a remarkable achievement. The machine read 5,000 characters per second from punched tapes, performed Boolean calculations on the fly, and immediately printed the results. Its heart beat to the rhythm of 1,500 vacuum tubes, at a frequency of 5,000 pulses per second. It had binary arithmetic circuits, a primitive form of memory, and conditional branching capabilities.
This achievement would never have come to fruition without the marriage of mathematics and engineering. Max Newman's theoretical vision shaped the design of this machine; Alan Turing did not directly participate in the project, but his work on the "universal machine" nevertheless infused its spirit.
The first model became operational in early 1944. Such was its success that around ten units were built by the end of the war. Remarkably for the time, Colossus operated without breaking down. A rare reliability, inherited from the telephony expertise of the Dollis Hill team. This machine delivered usable results almost immediately, whereas the first electronic computers required many months of calibration before functioning properly.
Colossus’s impact on the outcome of the conflict remains difficult to measure precisely. Accounts suggest a crucial role in anticipating German movements around the Normandy landings. The machine apparently decrypted messages signed by Hitler.
For three decades, Colossus remained shrouded in military secrecy. The American ENIAC received the honors while Colossus slept in oblivion. It was not until the mid-1970s and the work of Brian Randell that its existence was revealed.
The exact classification of Colossus remains debated. Was it a true computer? Randell defined it as a "programmable electronic calculator for special purposes." Tommy Flowers preferred to speak of an "electronic processor." For Colossus did not compute like conventional computers. It executed a fixed program, without any real possibility of modification. Its logic, based on bit streams rather than words, distinguished it from traditional architectures. This hybrid nature represents its true strength and testifies to the richness of approaches during this foundational period. The history of machines did not follow a straight line toward the modern computer but explored different paths to address concrete problems.
Colossus’s success also lies in its practical relevance. It perfectly met the needs of cryptanalysts in an urgent context. Its history reminds us that technological development is nourished by theoretical innovations, fitness for purpose, reliability, and operational efficiency.
Harvard Mark I
In the 1930s, the scientific landscape lacked tools capable of solving highly complex mathematical problems. Howard Aiken, then a physics graduate student at Harvard, set out to fill this gap. During 1937, he sketched the plans for a calculator that would process entire rows instead of the columns favored by machines of the era. He envisioned a machine capable of juggling positive and negative numbers, handling transcendental functions, and automatically chaining calculations.
After a rocky path with various manufacturers, Aiken finally convinced IBM. The company christened the project the Automatic Sequence Controlled Calculator (ASCC). March 1939 marked the signing of an agreement whereby IBM would build the machine as a gift to Harvard, which would provide facilities and personnel. Construction began two months later in IBM’s laboratories in Endicott, New York. Aiken worked closely with engineer Clair D. Lake to bring this ambitious project to fruition.
The prototype emerged in January 1943 and successfully passed its initial tests. Transported disassembled to Harvard, it took up residence in the basement of the Physics Research Laboratory. In spring 1944, as it became operational, its use was diverted toward the war effort. Robert Campbell took command of a U.S. Navy team that included Grace Murray Hopper and Richard Bloch. Aiken, a reserve officer, joined active service in 1941 to work on naval mines before taking command of the Mark I.
On August 7, 1944, an official ceremony brought together James Bryant Conant, president of Harvard, and Thomas J. Watson Sr., president of IBM. The event turned sour following a press release drafted under Aiken’s influence that attributed all credit for the invention to himself, almost entirely erasing IBM’s contribution. Watson, furious, threatened to boycott the ceremony. Aiken had to back down and acknowledge the role played by Lake, Hamilton, and Durfee in creating this machine.
The mechanical behemoth impressed with its dimensions: 15.5 meters long, 2.4 meters high, with a mass of 4.5 tons. Its interior housed electromagnetic relays, rotary decimal counters, and perforated tapes for instruction input. Data entered via punched cards and exited through modified electric typewriters. Slow compared to future electronic marvels, the Mark I compensated with exemplary reliability. It ran without interruption, day and night, producing a continuous flow of results.
Beyond the technical achievement, this machine embodied the feasibility of large-scale automatic calculators. These machines could now follow a sequence of operations from beginning to end without human intervention. The worldwide media coverage generated by its prowess swept away the last doubts about the future of such machines in the scientific sphere. The Harvard Mark I symbolizes the transition between the era of manual calculation and the dawn of modern computing.
Aiken did not stop at this first success. He developed a series of four increasingly sophisticated calculators. A visionary, he understood that these new machines called for a new generation of mathematicians capable of programming them. He persuaded Harvard to create a specific curriculum, first at the master’s level and then at the doctoral level, in this nascent discipline that was computer science. The first tenured professor in this brand-new field, he initiated what appears to have been the first academic program of its kind in the world. His teaching shaped pioneers such as Gerrit Blaauw, Frederick Brooks Jr., Kenneth Iverson, and Anthony Oettinger.
ENIAC
In 1943, the U.S. Army launched an ambitious project to accelerate its ballistic calculations. The war was in full swing, and artillery firing tables, vital for bombing accuracy, required endless calculations. The Moore School of Electrical Engineering at the University of Pennsylvania was tasked with creating a radically new machine under the direction of John Mauchly and J. Presper Eckert.
Named ENIAC (Electronic Numerical Integrator and Computer), this machine marked a break from electromechanical calculators. Its development mobilized tremendous energy: approximately 200,000 hours of work were needed to produce, in November 1945, a technological monster. Too late to serve during the war, ENIAC nevertheless stood out as an unprecedented engineering feat.
The numbers are staggering: 30 tons of equipment spread over an area of 150 m², 40 large panels housing 18,000 vacuum tubes, 1,500 relays, 10,000 capacitors, and 6,000 manual switches. Unlike Atanasoff’s ABC, which used binary, ENIAC directly manipulated 10-digit decimal numbers. Its memory consisted of accumulators, each storing one number, supplemented by function tables for constants.
ENIAC was programmed in a rudimentary and tedious manner. Thousands of switches had to be set and hundreds of cables plugged in, an operation that could take several days. Once configured, however, its performance was astonishing: 5,000 additions or 385 multiplications per second, shattering the records of existing machines.
On February 14, 1946, a carefully orchestrated press conference unveiled ENIAC to the world. The effect was striking. Newspapers went wild, calling the machine a "giant brain," a "magic brain," or a "meteor calculator." For the first time, the media attributed quasi-human capabilities to a machine. This anthropomorphization captured the collective imagination, despite Mauchly’s attempts to temper the enthusiasm by reminding people that ENIAC simply performed calculations without truly "thinking." The myth was born, the machine capable of solving all problems had appeared.
What the public didn’t know was that ENIAC drew heavily on the earlier work of John Atanasoff and Clifford Berry. As later revealed during a sensational trial, Mauchly had visited Atanasoff in 1941 and studied the ABC in detail, borrowing several fundamental concepts without ever crediting the inventors. This controversy over the paternity of the electronic computer remained buried for decades, while ENIAC reaped all the honors.
ENIAC was also the first computer programmed by women, another little-known reality. Female mathematicians and physicists, previously employed as human "computers" to perform calculations by hand, were selected for their excellence. Six of them became the first programmers in history. They developed the initial programs and performed countless calculations, often in the shadows, their contribution remaining long ignored.
The Cold War offered ENIAC a second life. Installed at the Aberdeen Proving Ground in Maryland, it served for the originally planned ballistic tables, as well as for calculations related to the first thermonuclear bombs. Its ability to rapidly solve complex equations accelerated research in nuclear physics and other scientific fields.
Although retired from service in 1955, ENIAC left a lasting imprint on the development of computing. It stimulated research in the United States, both in hardware and theory. The lessons learned from its design directly inspired more advanced machines like EDVAC and UNIVAC. Meanwhile, figures like John von Neumann, Alan Turing, and Claude Shannon developed the theoretical foundations that would structure the discipline.
ENIAC’s influence extended far beyond American borders. Its media coverage made the world aware of the potential of electronic computers. Similar projects emerged in the United Kingdom, the USSR, Germany, and Japan. Within a decade, computing transitioned from experimental to industrial. The concept of "computer" became embedded in popular culture, mixing fascination and anxiety about these "electronic brains" with mysterious capabilities.
ENIAC’s story took a twist in the 1970s. A sensational trial, pitting Honeywell against Sperry Rand (holder of the ENIAC patent), revealed Mauchly’s appropriation of Atanasoff’s ideas. The judge ruled in 1973 that ENIAC was not the first electronic computer and invalidated its patent, thereby placing the invention in the public domain. This legal decision had critical consequences for the subsequent evolution of computing.
The cancellation of the ENIAC patent unleashed tremendous innovation potential. Deprived of exclusivity over fundamental concepts, laboratories and companies had to rely on specific technical improvements rather than legal protection of basic ideas. This situation fostered a remarkable flowering of competing machines: IBM, DEC, Control Data Corporation, and others rushed into the breach, each offering their vision of computer architecture. Without this decision, the digital landscape might have resembled an industry dominated by a few patent holders imposing their conditions on the market. Computing instead took a unique path where foundational principles remained accessible to all while specific innovations could be protected. This singular dynamic partly explains the stunning speed at which computers evolved from multi-ton monsters to personal machines, then to the mobile devices we now know.
This shift in the perception of calculating machines was perhaps ENIAC’s most enduring contribution, despite its contested paternity. In short order, the public image of computers was radically transformed: from simple calculating tools, they became entities potentially capable of surpassing human intelligence. This vision, though exaggerated, guided computing research for decades to come, notably with the emergence of artificial intelligence in the 1950s.
Delay Line Memory
The first electronic computers faced the challenge of storing their data and instructions. ENIAC had only about twenty vacuum tube registers, a prohibitively expensive solution for any substantial memory.
J. Presper Eckert then had an innovative idea: transform electrical signals into sound waves traveling through a tube of mercury. These vibrations, captured at the other end, became electrical again before being reinjected at the input. The loop was complete. This technique multiplied memory capacity a hundredfold at equal cost, with each line capable of containing up to 500 bits.
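The principle can be pictured with a small toy model in Python. It says nothing about the actual acoustic electronics, but it shows the circulate-and-regenerate loop and why access time depended on where a bit happened to be in the line:

```python
from collections import deque

# A toy model of a recirculating delay line: bits leave one end, are
# re-amplified, and are fed back in at the other. Reading a given bit means
# waiting for it to come around.

class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)      # bits currently "in flight"
        self.ticks = 0

    def step(self):
        bit = self.line.popleft()    # bit arrives at the receiving transducer
        self.line.append(bit)        # ...and is regenerated at the input
        self.ticks += 1
        return bit

    def read(self, position):
        """Wait until the requested bit position reaches the output."""
        waited = 0
        while (self.ticks % len(self.line)) != position:
            self.step()
            waited += 1
        return self.step(), waited

line = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
bit, latency = line.read(5)
print(bit, latency)   # the latency depends on where the bit sits in the loop
```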
Maurice Wilkes and his Cambridge team brought this idea to life in EDSAC in 1949. Their machine utilized 32 mercury tubes storing 512 words of 36 bits. A word circulated every 48 microseconds on each of the 128 lines, while a simple addition required 864 microseconds.
This technology addressed the needs John von Neumann had identified in his description of EDVAC in 1945. He emphasized the necessity of an “internal” memory distinct from external devices such as punched cards. This memory had to preserve intermediate results, instructions, function tables, initial conditions, and results of successive iterations.
Despite its ingenuity, this technology suffered from uncertain reliability and high costs. Manufacturers preferred magnetic drum memory. IBM 650 systems were thus equipped with drums capable of storing up to 2,000 ten-digit words, with an average access time of 2.4 milliseconds, limited by the drum’s rotation at 12,500 revolutions per minute.
EDVAC illustrated this transition: in 1954, a 4608-word drum memory complemented its delay lines. Drums remained present until the 1960s, primarily as secondary memory.
Delay line memory proved that large-capacity memories could be built separate from computing units, thus realizing von Neumann’s architecture. It was the founding concept of our current memory hierarchy.
The 1960s saw the advent of magnetic cores. Without moving parts, they offered truly random access with cycle times of approximately 6 microseconds—a thousand times faster than drums. Capacities exploded: the 1965 CDC 6600, clocked at 100 nanoseconds, had approximately 128,000 words accessible in 1 microsecond.
The delay line symbolizes the delicate balances sought by pioneers between cost and performance, reliability and capacity, complexity and feasibility. It testifies to the inventiveness of early computer architects, confronted with challenges using the technical means of the 20th century.
The principle of data circulation and regeneration it implemented is found, transformed, in contemporary DRAM memories. Its trajectory illustrates a recurring pattern in computing history: a technical innovation overcomes an obstacle before being supplanted by more efficient solutions.
Delay line memory represents the moment when computers acquired true working memory, distinct from computing units and permanent storage. This separation, which has become axiomatic in modern computer architecture, found its first concrete expression with this technology.
Monte Carlo Method
The history of the Monte Carlo method begins in the unique context of the Manhattan Project during World War II. Researchers at Los Alamos were grappling with nuclear physics calculations of unprecedented complexity. The lack of suitable tools stimulated their mathematical imagination.
As early as the 1930s, Enrico Fermi, an Italian physicist who became a naturalized American citizen in 1945, experimented with statistical sampling techniques to solve his neutronics equations. These early attempts remained confidential, lacking computers capable of processing large volumes of calculations.
The true birth of Monte Carlo occurred in 1946. Stanislaw Ulam, a Polish mathematician, was recovering from an illness while playing solitaire. A question nagged at him: how to precisely calculate the odds of winning a game? He hit upon the idea of simulating a vast number of games to obtain a reliable statistical estimate. This brilliant insight met with the enthusiasm of John von Neumann, who programmed the first simulations on ENIAC in 1947. For these calculations, he created a technique for generating pseudo-random numbers, the middle-square method, to feed the simulations and estimate numerical values. The name “Monte Carlo” arose from a joke by Nicholas Metropolis, a playful reference to Ulam’s uncle, a compulsive gambler at the Monaco casino.
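For the curious reader, here is a short Python sketch of both ingredients: von Neumann's middle-square generator and a Monte Carlo estimate in the spirit of Ulam's solitaire question. The card game and figures are illustrative stand-ins, not the 1947 ENIAC computation:

```python
import random

def middle_square(seed, n):
    """Yield n pseudo-random 4-digit numbers by squaring and keeping the middle digits."""
    value = seed
    for _ in range(n):
        value = (value * value) // 100 % 10000   # middle 4 digits of the 8-digit square
        yield value

print(list(middle_square(5731, 5)))

# Monte Carlo estimation: probability that a shuffled 52-card deck leaves at
# least one card in its original position (a stand-in for Ulam's solitaire odds).
def estimate(trials=100_000):
    hits = 0
    deck = list(range(52))
    for _ in range(trials):
        random.shuffle(deck)
        if any(card == pos for pos, card in enumerate(deck)):
            hits += 1
    return hits / trials

print(estimate())   # converges near 1 - 1/e, about 0.632
```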
Between 1946 and 1947, during an extended ENIAC outage, Enrico Fermi designed FERMIAC, a remarkable mechanical device capable of simulating neutron diffusion according to the Monte Carlo principle. This invention testifies to the immediate enthusiasm of physicists for this radically new approach.
A conceptual leap occurred in 1953 with the algorithm devised by Metropolis and his team, including the physicists Marshall and Arianna Rosenbluth as well as Augusta and Edward Teller. Their method exploited Markov chains to generate samples following a given distribution and explore complex spaces. In 1970, Wilfred Keith Hastings enriched this approach, resulting in the Metropolis-Hastings algorithm, a pillar of modern computational statistics.
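The idea behind the algorithm fits in a few lines. The sketch below is a bare-bones Metropolis sampler in Python with a symmetric proposal (so the Hastings correction vanishes), targeting a standard normal distribution purely as an illustration:

```python
import math, random

def target(x):
    return math.exp(-0.5 * x * x)        # unnormalized N(0, 1) density

def metropolis(n_samples, step=1.0, x0=0.0):
    samples, x = [], x0
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)        # symmetric random walk
        accept_prob = min(1.0, target(proposal) / target(x))
        if random.random() < accept_prob:                 # accept or keep current state
            x = proposal
        samples.append(x)
    return samples

chain = metropolis(50_000)
print(sum(chain) / len(chain))                 # close to 0, the target mean
print(sum(v * v for v in chain) / len(chain))  # close to 1, the target variance
```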
The 1980s saw inspired variants flourish. The simulated annealing algorithm, designed by Kirkpatrick, Gelatt, and Vecchi in 1983, adapted Monte Carlo to combinatorial optimization by drawing inspiration from the controlled cooling of metals. A year later, the Geman brothers applied Gibbs sampling to image processing, opening new horizons in computer vision.
The early 1990s marked the triumph of Markov chain Monte Carlo methods thanks to the foundational work of Gelfand and Smith. These advances revolutionized Bayesian inference and transformed everyday statistical practice. This prolific decade also saw the emergence of Peter Green’s reversible jump algorithm in 1995, which enabled exploration of variable-dimension spaces, as well as the perfect sampling of Propp and Wilson in 1996, guaranteeing exactly distributed samples.
The evolution of Monte Carlo illustrates the fruitful dialogue between theory and technology. From the first calculators to today’s supercomputers, each leap in power has exponentially increased the potential of these methods. Their fields of application have diversified: statistical physics, molecular chemistry, quantitative finance, machine learning, and 3D graphics with Eric Veach’s Metropolis light transport in 1997.
Historical irony: this technique initially developed for nuclear weapons now serves modern medicine. Monte Carlo is used in radiotherapy to simulate with unparalleled precision the interactions between radiation and biological tissues. Its tremendous scientific impact earned it designation as one of the ten most influential algorithms of the 20th century by the Society for Industrial and Applied Mathematics.
Williams Tube
The postwar period witnessed a frantic quest to solve the thorny problem of data storage in early electronic computers. While Presper Eckert and John Mauchly toiled over mercury delay lines in Philadelphia and Jay Forrester began his research on magnetic core memory at MIT, Frederic Williams charted a different course at the Telecommunications Research Establishment in Great Malvern.
Williams’s visit to the ENIAC project in 1946 gave him two key insights: the concept of the stored program and the vital need for fast electronic storage. His background in radar circuits during the world war led him, along with his doctoral student Tom Kilburn, to explore cathode ray tubes as memory devices. Computers required approximately 32,000 bits organized into 1,024 words of 32 bits. Electronic flip-flops would have demanded over 64,000 vacuum tubes, making reliable operation impossible.
Williams and Kilburn’s breakthrough rested on a subtle physical phenomenon: electron bombardment on the screen created a positive potential well. A second bombardment nearby generated secondary electrons that partially filled the first well. Detection of this filling during a subsequent scan determined whether a 1 or 0 had been written. A metal plate against the screen captured these minute charge variations.
Five variants of this principle emerged. The dot-dash system encoded a 0 as a dot and a 1 as a dash. The dash-dot inverted this logic. The defocus-focus and focus-defocus systems played with beam sharpness. The anticipation system exploited a predictive signal during charge destruction.
In early 1947, now at the University of Manchester, Williams and Kilburn stored their first stable bit. By autumn, they maintained 1,024 digits for several hours. Their stroke of genius was introducing a regeneration mechanism. Since the charge persisted for only a few tenths of a second, the content was constantly read and rewritten—a technique that survives in our current DRAM memories.
June 1948 saw the birth of the Manchester Baby, the nickname for the Small-Scale Experimental Machine. This prototype used three tubes: one for main storage (32 words), one for the accumulator (calculation register), and a final one for control (program counter and current instruction). Its architecture allowed random access to memory, whereas delay lines imposed sequential access. Its instruction set was limited to seven operations: JMP (jump), JRP (relative jump), LDN (load negative), STO (store), SUB (subtract), CMP (compare), and STP (stop).
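Those seven operations are simple enough that the machine's behavior can be sketched in a few lines of Python. The toy interpreter below is illustrative rather than cycle-accurate: it keeps instructions as readable pairs instead of the real 32-bit words, but it follows the Baby's logic, including its habit of incrementing the program counter before each fetch:

```python
# A toy interpreter for the Baby's seven operations. The store mixes
# instructions and data, as on the SSEM.

def run(store, max_steps=1000):
    acc, ci = 0, 0                    # accumulator and current-instruction counter
    for _ in range(max_steps):
        ci += 1                       # the Baby incremented CI before fetching
        op, addr = store[ci]
        if op == "JMP":               # jump to the address held in the store
            ci = store[addr]
        elif op == "JRP":             # jump relative, by the value held in the store
            ci += store[addr]
        elif op == "LDN":             # load the negated value into the accumulator
            acc = -store[addr]
        elif op == "STO":             # store the accumulator
            store[addr] = acc
        elif op == "SUB":             # subtract from the accumulator
            acc -= store[addr]
        elif op == "CMP":             # skip the next instruction if accumulator < 0
            if acc < 0:
                ci += 1
        elif op == "STP":             # stop
            return store
    raise RuntimeError("no STP executed")

# With only LDN, SUB and STO, adding a and b means computing -(-a - b).
program = {
    1: ("LDN", 20),   # acc = -a
    2: ("SUB", 21),   # acc = -a - b
    3: ("STO", 22),   # store -(a + b)
    4: ("LDN", 22),   # acc = a + b
    5: ("STO", 22),   # store the sum
    6: ("STP", 0),
    20: 7, 21: 35, 22: 0,             # a, b and the result cell
}
print(run(program)[22])               # prints 42
```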
Two new versions emerged in 1949. The first offered 128 words of 40 bits with an electronic multiplier. It served for sophisticated calculations such as investigating the Riemann hypothesis and testing the primality of Mersenne numbers. The second combined Williams memory with a magnetic drum storing an additional 40,960 bits, thus creating a two-level memory hierarchy that foreshadowed the organization of future computers.
IBM seized this technology in 1951, acquired a license, and launched the manufacture of Williams tubes for its 701, the company’s first commercial computer. This industrialization required remarkable achievements; the company created ultra-clean environments ahead of its time. Personnel wore orlon smocks to avoid textile particles. Sticky mats trapped dust from shoes. These practices anticipated those later adopted for semiconductors.
The final version of the Manchester computer, delivered in 1951, aligned eight Williams tubes providing a total main memory of 10,240 bits, supplemented by a magnetic drum of 150,000 bits. It processed instructions in 1.2 milliseconds and performed multiplications in 2.16 milliseconds. Its applications touched biology, optics, crystallography, chemical engineering, and even chess.
The technical constraints of Williams tubes proved numerous. The quality of the phosphorescent screen was critical: the slightest imperfection—carbon particle or microscopic hole—compromised storage. The beam required perfect uniformity across the entire surface. The scanning speed necessitated absolute precision control.
Despite these challenges, the technology equipped several machines of the 1950s: the Ferranti Mark I (commercial successor to the Manchester machine), the IBM 701, and the MANIAC I at Los Alamos. By mid-decade, it yielded to Forrester’s magnetic core memory, which was more reliable and denser. In 1998, to celebrate the fiftieth anniversary of the Manchester Baby, Chris Burton led the construction of a functional replica, shedding new light on the problems faced by the 1948 team.
Association for Computing Machinery
In 1947, a small group of pioneers, including E. C. Berkeley, R. V. D. Campbell, J. H. Curtiss, H. E. Goheen, J. W. Mauchly, T. K. Sharpless, R. Taylor, and C. B. Tompkins, decided to create an informal organization bringing together people interested in the new computing and reasoning machines. On June 25, 1947, they circulated a note proposing the creation of an Eastern Association for Computing Machinery.
On September 15, 1947, Columbia University hosted these 78 pioneers. Curtiss took the presidency, Mauchly the vice-presidency, and Berkeley, the driving force of the group, assumed the secretariat. Ambitions grew quickly. By January 1948, the Eastern qualifier disappeared in favor of a national vision.
The organization took shape around four geographical hubs: Boston, New York, Philadelphia, and Washington. The structure was articulated around two levels of exchange: local and national meetings. The sections, initially conceived as restricted activity centers, expanded rapidly. Truly local chapters would only appear later.
The 1950s saw outreach to other scientific communities. The ACM forged ties with the American Institute of Electrical Engineers and the Institute of Radio Engineers. These collaborations gave birth, in 1961, to the American Federation of Information Processing Societies, the voice of the United States in the international arena. Rapid growth disrupted the informal operations of the early days. In 1953, the evidence was clear: a volunteer secretary was no longer sufficient. The New York Academy of Sciences lent support to provide the necessary staff and facilities, before the association established its own headquarters in New York.
Intellectual influence was built through publications. The Journal of the ACM came into being in 1954, followed by the Communications of the ACM. These journals shaped the academic landscape, set standards of excellence, and structured a field in the midst of definition.
With the 1960s came the Special Interest Groups, true communities within the community. These specialized groups became idea laboratories, organized their own scientific meetings, distributed their bulletins, and gave rise to fertile subdisciplines.
The ACM then tackled the need for education in this new science. In 1968, under the aegis of William F. Atchison, the curriculum committee published its recommendations, a founding text for computer science education.
Honors came to recognize the great builders of this science. The Turing Award, created in 1966, established itself as the supreme distinction. Alan J. Perlis inaugurated it, before Wilkes, McCarthy, and so many other major figures. Other awards followed: the Grace Murray Hopper Award (1971) and the Distinguished Service Award (1976).
Faced with the proliferation of programs and concerned with contributing to social issues, the ACM created an accreditation committee in 1968. It developed a code of ethics and set up a mediation system for problems related to the computerization of society.
The 1970s saw the flourishing of new services such as national lecture series, continuing professional education, or the documentation center at headquarters. The association thus responded to the expectations of a community that was growing day by day.
As early as the 1960s, the ACM became concerned with preserving the history of computing. Grace Hopper and Robert Bemer initiated a campaign to preserve historical computing documents. These efforts led to the creation of ACM archives, initially housed at the Moore School of Electrical Engineering at the University of Pennsylvania.
Over time, the ACM evolved into a global reference. Its arsenal of publications expanded: dozens of specialized journals, thousands of conference proceedings, a digital library rich with more than 430,000 articles. Each year, more than 275 events bring together its 118,000 members from over a hundred countries. The digital library is an accessible treasure, bringing together all publications since 1951. Bibliometric analysis tools, author and institution profiles, and usage statistics enrich this scientific goldmine.
For three-quarters of a century, the ACM has shaped computer science as a discipline and as a profession. Through its publications, conferences, training programs, and awards, it serves as a compass in the digital world.
Transistor
The winter of 1947 saw the birth of an innovation that would transform our relationship with technology. At Bell Labs, three men—John Bardeen, Walter Brattain, and William Shockley—designed the transistor, a tiny device whose significance no one yet suspected. Their work would earn them the Nobel Prize in Physics in 1956.
This discovery was no accident. As early as the 1930s, Mervin Kelly, director of research at Bell Labs, had anticipated the limitations of vacuum tubes that equipped all electronic devices. These tubes, as fragile as glass, consumed enormous amounts of energy and occupied considerable space. Kelly sought an alternative and invested in semiconductors, materials halfway between conductors and insulators, then poorly understood by the scientific community.
To explore this path, Kelly patiently built a multidisciplinary team. He recruited William Shockley and Dean Wooldridge in 1936, then James Fisk and Charles Townes three years later. Walter Brattain, already at Bell Labs since 1929, joined the venture. The team held seminars to decipher semiconductor physics. Their initial experiments with copper oxide came to nothing. At the dawn of the war, Russell Ohl proposed focusing on silicon.
The world conflict accelerated this research. Radars required frequencies too high for vacuum tubes. Germanium and silicon were ideal candidates, especially since the military funded sophisticated purification techniques for these materials. In 1945, Kelly restructured his laboratories and created a section dedicated to solid-state physics under Shockley’s leadership. The team expanded to include Walter Brattain, Gerald Pearson, and the newly recruited John Bardeen.
The autumn of 1947 saw a remarkable acceleration in their work, a period Shockley would later call the "magic month." On November 17, Brattain and Robert Gibney discovered that a semiconductor device immersed in an electrolyte produced an amplification effect. Experiments followed in rapid succession with germanium. On December 16, 1947, Bardeen and Brattain achieved significant amplification with a device featuring two closely spaced contact points. The transistor had just been born.
The story unfolded in the following weeks. Between December 1947 and February 1948, Shockley developed the theory of the junction transistor, which would prove industrially viable. He understood that amplification came from an injection of minority charge carriers and envisioned a sandwich of three semiconductor layers. On February 23, 1948, an experiment by John Shive brilliantly validated this hypothesis.
Manufacturing the first junction transistor as Shockley had imagined required two additional years of development. Gordon Teal and Morgan Sparks succeeded in April 1950, thanks to metallurgical innovations. In November 1950, Shockley published Electrons and Holes in Semiconductors, an instant bible for scientists in the field.
The transistor’s first applications surprised its creators. AT&T’s telephone networks, though responsible for the invention, only adopted this technology in the late 1950s. Transistors, however, conquered portable radios, hearing aids, and especially the military. The Minuteman missile program (1958-1962) integrated transistors into its computer guidance system.
Transistorized computers emerged in the early 1950s. Bell Labs designed TRADIC, the first airborne transistor computer. IBM launched its transistorized calculator, the model 608, in 1957. Philco commercialized its TRANSAC computers based on its surface-barrier transistor technology. Texas Instruments, General Electric, and RCA offered transistors specifically designed for computing.
This technological transformation also revolutionized production methods. Fairchild Semiconductor, founded in 1957 by eight defectors from Shockley’s company, including Gordon Moore and Robert Noyce, invented new manufacturing methods. These innovations gradually shaped what would become Silicon Valley, the global epicenter of the semiconductor industry.
Cybernetics
At the heart of the turbulence of World War II, while the great powers mobilized their scientists, a new science was born: cybernetics. This discipline, which could be described as transdisciplinary, emerged through the work of Norbert Wiener, a mathematician at MIT.
In 1940, facing the threat of German bombers, Wiener tackled an exceptionally complex technical problem: how to shoot down aircraft flying at 600 km/h at dizzying altitudes? His answer took the form of a trajectory predictor called the “AA predictor,” designed in collaboration with Julian Bigelow.
Wiener’s brilliant insight was to consider the pilot-aircraft ensemble as a single system whose behaviors obeyed certain predictable statistical laws. This approach broke radically with the traditional mechanistic view. To implement this idea, Wiener drew upon three emerging technologies: radar, servomechanisms, and analog computers.
But Wiener’s contribution extended far beyond the initial military context. From this work emerged a fundamental reflection on control and communication mechanisms, both in artificial and natural systems. The term “cybernetics,” from the Greek kubernêtikê (art of steering), was chosen to convey this unifying vision.
At the core of cybernetic thought lie several foundational concepts: feedback, information as a measurable quantity, and self-regulating systems. Wiener postulated that a cybernetic system, whether living or artificial, must necessarily possess an internal model of its environment to interact effectively with it. This model guides the extraction of relevant information through sensors, their processing according to internal rules, then action on the environment through effectors, all in an uninterrupted loop.
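A deliberately simple sketch in Python can make this loop tangible. It models a proportional controller nudging a room temperature toward a set point; the example and all its numbers are invented for illustration, not drawn from Wiener:

```python
# Sense, decide, act, repeat: a proportional feedback loop. The "environment"
# leaks heat toward the outside, and the controller counteracts the error.

def regulate(setpoint=20.0, temperature=12.0, gain=0.3, outside=5.0, leak=0.1, steps=30):
    for step in range(steps):
        error = setpoint - temperature                    # sensor + comparison to the goal
        heating = max(0.0, gain * error)                  # decision proportional to the error
        temperature += heating                            # effector acts on the environment
        temperature -= leak * (temperature - outside)     # the environment pushes back
        print(f"step {step:2d}  temperature {temperature:5.2f}")

regulate()   # the temperature settles near an equilibrium below the set point
```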
The publication of Cybernetics: Or Control and Communication in the Animal and the Machine in 1948 marked the official advent of this discipline. The work generated immediate enthusiasm in international scientific circles, as evidenced by its simultaneous publication in the United States and in France by Hermann.
Cybernetics, however, experienced divergent destinies across countries. In the United States, under the aegis of MIT and researchers like Jay Forrester, it became firmly rooted in engineering sciences, giving birth to concrete applications such as the SAGE air defense system. In France, it took more the form of theoretical and philosophical reflection, sometimes at the expense of its practical applications.
The famous Macy Conferences, held between 1946 and 1953, played a decisive role in the expansion of this discipline. These extraordinary gatherings brought together brilliant minds from diverse backgrounds: mathematicians, engineers, neurologists, psychologists, and anthropologists mingled to explore the multiple ramifications of cybernetic thought.
The postwar period saw Wiener distance himself from military research, horrified by the destruction at Hiroshima and Nagasaki. He directed his work toward civilian and medical applications, seeing in cybernetics a tool to combat social entropy.
Modern computing bears the indelible mark of this thinking. The concepts of feedback, information processing, and modeling of complex systems now constitute the conceptual foundation of our digital systems. The attention Wiener paid to human-machine interfaces finds a striking echo in our current concerns.
Cybernetics nourished several adjacent fields: information theory developed by Claude Shannon, game theory formalized by John von Neumann, and the first reflections on artificial intelligence. These parallel currents progressively built the theoretical edifice upon which 21st-century computing rests.
In the 1960s-1970s, the limitations of cybernetics became apparent. Its claim to provide a universal explanatory framework came up against the irreducible complexity of many natural and social phenomena. Its totalizing ambitions were scaled back, but its fundamental concepts retained their relevance for understanding information and control systems.
In our information-saturated digital world, where complex systems interlock with one another, Wiener’s insights retain remarkable freshness. While the discipline’s scope has been drawn more narrowly since then, his vision of a science of control and communication has lost none of its explanatory power. The principles he identified continue to illuminate our understanding of computer systems and their interaction with human users, in an incessant dialogue between human and machine that he had foreseen as early as the 1940s.
Magnetic Drum
We can say it with humility and without judgment: the computing world knew some strange machines in its early days. Among them, a rotating cylinder dominated three decades of technological evolution. Gustav Tauschek, an Austrian inventor, created in 1932 an object that would change the face of data storage: the magnetic drum.
A metal cylinder, its surface coated with a ferromagnetic layer, captured binary information. All around the drum, read-write heads inscribed and retrieved data in the form of tiny magnetized dots. The constant rotation of the cylinder provided access to stored information, in a precise and regular mechanical ballet.
The first versions of the 1940s already showed the technical limitations of the era. Recording density reached only 50 bits per inch, with about twenty tracks per inch. Alignment demanded watchmaker precision: technicians set the gap between heads and surface to within a thousandth of an inch, using differential screws and high-precision machining.
The IBM 650, marketed from 1954, perfectly embodied this technology. Its drum, made of a cobalt-nickel alloy, measured 4 inches in diameter by 14 inches in length. It spun at the dizzying speed of 12,500 revolutions per minute on ultra-precise ball bearings. Its recording density reached 50 magnetic dots per inch, pulsing at 128 kHz. The data organization followed a particular logic: parallel storage at the bit level within each digit, but serial for the digits of a single word. Fifty words were inscribed on one circumference of the drum, each containing ten decimal digits and a sign. The system coded the minus sign by the digit 8, and the plus sign by the digit 9. A space equivalent to one digit separated each word, and five parallel tracks encoded each value.
David Macklin, a programmer at Republic Aviation in 1957, recounts the constraints imposed by this architecture. Developers had to account for the actual physical position of data on the drum to optimize access times. With its 2,000 addressable positions distributed across parallel tracks, the machine imposed constant intellectual gymnastics. The programmer either calculated the location of the next instruction or let the SOAP assembler handle it. On average, the machine completed three or four instruction executions per complete revolution, depending on where the preceding data lay.
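A back-of-the-envelope calculation, using the figures given above (12,500 revolutions per minute, 50 words per track), shows why this placement mattered so much. The short Python sketch below is illustrative only:

```python
# Rotational latency on a drum: if the next word sits just behind the read
# head, the machine waits almost a full revolution; a few words ahead, it
# waits almost nothing. This is what drum-aware programmers (and SOAP) optimized.

RPM = 12_500
WORDS_PER_TRACK = 50
MS_PER_REV = 60_000 / RPM                     # 4.8 ms per revolution
MS_PER_WORD = MS_PER_REV / WORDS_PER_TRACK    # 0.096 ms per word position

def wait_time(current_pos, target_pos):
    """Rotational delay (ms) before target_pos passes under the head."""
    gap = (target_pos - current_pos) % WORDS_PER_TRACK
    return gap * MS_PER_WORD

print(round(wait_time(10, 9), 3), "ms")    # just behind the head: ~4.7 ms lost
print(round(wait_time(10, 14), 3), "ms")   # a few words ahead: ~0.38 ms
```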
The technology evolved significantly with the invention of the hydrodynamic bearing. This advance, developed by IBM for the SAGE air defense computer, made it possible to achieve densities comparable to magnetic tapes without resorting to the complex mechanical adjustments of the past.
The legacy of the magnetic drum survives in certain modern UNIX systems, where /dev/drum designates the virtual swap device. This name comes directly from the historical use of the drum as memory paging support.
The arrival of magnetic core memories signaled the gradual decline of the drum as primary memory. However, these rotating cylinders remained in use as memory extensions into the 1960s, thanks to their reliability and moderate cost compared to contemporary alternatives.
The electronic heart of the machine built around the drum beat to the rhythm of about 2,000 tubes, primarily models 5965, 6211, 12AY7, 6AL5, 2D21 and 5687. Types 6211 and 5965 were similar to the 12AV7 but met IBM's specific acceptance tests. Nearly 3,600 crystal diodes completed the logic circuits. The power supply consumed 16 to 18 kVA at 208 V, 60 Hz, single-phase. Selenium rectifiers provided direct current, avoiding the need for complex electronic regulation.
Beyond simple storage, the magnetic drum shaped the design of early computers and the method of writing programs. Developers delved into the physical reality of storage to optimize their code, creating a hardware-software symbiosis characteristic of that era. Its limitations in terms of access and capacity stimulated the search for alternatives. This work led to magnetic disks, faster and of larger capacity. IBM's RAMAC, the first commercial hard disk, marketed in 1956, marked the beginning of the end for the magnetic drum as a primary storage device.
Claude Shannon’s Information Theory
It is difficult to imagine our digital universe without the theoretical breakthroughs of Claude Shannon, an American of singularly brilliant mind. In his foundational 1948 text, A Mathematical Theory of Communication, Shannon established the mathematical foundations that underpin all our digital communications today.
The story begins at AT&T's Bell Labs in the 1940s. Telephone and radio were no longer novelties but maturing technologies. However, these systems lacked a solid theoretical framework. Shannon tackled this conceptual void. His genius lay in the intuition to treat information as a mathematical quantity. He invented the illuminating concept of information entropy, a measure that quantifies the uncertainty of a message. Rather than leave the notion vague, Shannon captured information in precise equations and formulas, demonstrating that the higher the entropy, the more information a message carries. This was the revelatory idea that transformed our view of the world.
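The idea can be sketched in a few lines of modern code. This is only an illustration in today's notation, not Shannon's own formalism, and the probabilities are arbitrary examples:

import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin toss is maximally uncertain
print(entropy([0.99, 0.01]))  # about 0.08 bit: a heavily biased coin tells us very little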
His channel coding theorem constitutes another decisive breakthrough. He proved that it is possible to transmit information over a noisy channel with almost zero probability of error. The only condition is not to exceed the channel’s capacity. This discovery shed new light on the theoretical limits of our communication systems.
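A textbook special case makes the theorem tangible: a binary symmetric channel that flips each bit with probability p has capacity C = 1 - H2(p). The sketch below uses illustrative figures and is not drawn from the 1948 paper itself:

import math

def binary_entropy(p):
    """H2(p), the entropy of a biased coin, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel that flips each bit with probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0: a noiseless channel carries one full bit per use
print(bsc_capacity(0.11))  # about 0.5: half of every transmission must go to redundancy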
Shannon’s concepts found concrete applications from the early days of computing. His work guided the creation of robust error-correcting codes. No modern storage or transmission system would function without them. From CDs to Wi-Fi, including satellite communications, Shannon’s fingerprint is visible everywhere.
Data compression owes just as much to this visionary mathematician. The Huffman and Lempel-Ziv algorithms, which allow you to send photos or store movies, draw their basic principles from Shannon’s work. Without him, our hard drives would be 10 times larger and our internet connections 10 times slower.
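The principle behind Huffman coding fits in a short sketch: frequent symbols receive short codewords, rare ones longer codewords. Real codecs add canonical orderings, headers, and error handling on top of this bare idea:

import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix code in which frequent symbols get short codewords."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        low = heapq.heappop(heap)
        high = heapq.heappop(heap)
        # Prefix the codewords of each half with 0 or 1 and merge them.
        low[2] = {s: "0" + c for s, c in low[2].items()}
        high[2] = {s: "1" + c for s, c in high[2].items()}
        heapq.heappush(heap, [low[0] + high[0], count, {**low[2], **high[2]}])
        count += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)
print(sum(len(codes[c]) for c in "abracadabra"))  # total bits used, versus 88 in plain 8-bit ASCII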
But Shannon's influence does not stop at the borders of computer science. His theory has spilled over into multiple disciplines. In psychology, it inspired models of cerebral information processing. Biologists use it to decode DNA and understand genetic transmission. Economists have drawn tools from it to model financial markets, and quantum physics has appropriated certain "Shannonian" concepts.
Over time, his theory has branched out. Quantum information theory explores the novel properties of quantum systems. Algorithmic information theory studies the fundamental limits of computation and complexity. These extensions testify to the robustness of Shannon's original ideas.
More than 75 years after its initial publication, this theory remains a guiding star for computer science and telecommunications in the 21st century. It still guides researchers in cutting-edge fields such as machine learning or data science. Who would have thought that an article published when computers occupied entire rooms would continue to illuminate the era of nanometric chips?
When you send a message that travels thousands of kilometers without error, when you compress a file to send by email, when you watch a streaming movie without interruption, you directly benefit from his discoveries. If our data travels through time and space without degrading, it is thanks to his equations. Pure mathematics always ends up finding practical applications, sometimes far beyond what their creators imagined.
Short Code
Short Code was born into a world where programming resembled electrical engineering more than code writing as we conceive it today. In the late 1940s, computing was in its infancy, with monumental machines filling entire rooms to deliver computing power that now seems laughable.
The story begins in the laboratories of the Moore School of Engineering at the University of Pennsylvania. It was there that John Mauchly, a physicist who had transitioned to electronics, worked from 1941 on a military project aimed at calculating ballistic tables. Faced with the limitations of calculators, he envisioned a fully electronic machine: ENIAC.
This colossal machine, commissioned in 1946, executed 5,000 additions per second—an unprecedented technical achievement. ENIAC, however, suffered from a flaw: the absence of a stored program. Each calculation required manually modifying circuit connections, a tedious task that could take several days.
To overcome this constraint, Mauchly and his collaborator J. Presper Eckert designed EDVAC in 1944, incorporating the concept of a program stored in memory. Their departure from the university in 1946, following disagreements over intellectual property, delayed the project, which was not completed until 1951. Mauchly then proposed, in July 1949, a more accessible language than machine code: Short Code, initially named Brief Code.
Short Code stood out for its conceptual simplicity. Rather than writing directly in binary, programmers used mathematical expressions. The language supported floating-point numbers, variables, conditional jumps, and calls to subroutines stored separately. The UNIVAC manual illustrated this innovative syntax with examples such as:
00 IXABC    I is set to the value X + (A*B*C)
01 BVII     Go to instruction 11 if B is positive
02 HRDW     Print the values of R, D, and W
This approach, though rudimentary by today’s standards, represented a break from previous methods. Programs remained complex, however, and had to be manually translated into machine language—the compiler did not yet exist. Tests showed that code written in Short Code executed approximately 50 times slower than an equivalent program in machine language, arousing skepticism among many specialists. Despite these limitations, Short Code marked a decisive milestone. For the first time, the algorithm took shape independently of the machine, enabling the emergence of more advanced languages such as Fortran (1957) or Algol (1958). The concepts it introduced—mathematical expressions, variables, branching, subroutines—became the foundations of modern programming.
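Purely as an illustration of what writing programs as symbolic expressions rather than machine words means, one can imagine a small dispatcher working through instructions of this kind. The encoding and variable values below are invented for the sketch and do not reproduce the UNIVAC word format or Short Code's real instruction set:

# A toy dispatcher, invented for this illustration.
variables = {"X": 2.0, "A": 3.0, "B": 4.0, "C": 0.5,
             "I": 0.0, "R": 1.0, "D": 2.0, "W": 3.0}

program = [
    ("SET",   "I", "X", ("A", "B", "C")),   # I is set to the value X + (A*B*C)
    ("PRINT", ("R", "D", "W")),             # print the values of R, D and W
]

for instruction in program:
    if instruction[0] == "SET":
        _, destination, addend, factors = instruction
        product = 1.0
        for name in factors:
            product *= variables[name]
        variables[destination] = variables[addend] + product
    elif instruction[0] == "PRINT":
        print(*(variables[name] for name in instruction[1]))

print(variables["I"])   # 8.0: each symbolic instruction is decoded and carried out one at a time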
Mauchly and Eckert's work was not limited to Short Code. After ENIAC and EDVAC, they created BINAC (Binary Automatic Computer, 1949), the first operational stored-program computer built in the United States, then UNIVAC I (1951), the first commercial computer produced in the United States. William Schmitt developed a prototype of Short Code on BINAC, later adapted to UNIVAC I, and finally to UNIVAC II. By founding their own company in 1946, soon renamed the Eckert-Mauchly Computer Corporation, they were the first to attempt to commercialize these extraordinary machines, whose usefulness remained mysterious to the general public.
Short Code, by partially freeing developers from hardware complexity, helped democratize programming and accelerated the growth of the computer industry. More than seventy years after Mauchly's work, the quest for expressive, safe, and efficient languages, such as Rust, remains at the heart of computer scientists' concerns in the 21st century.
EDSAC
In the aftermath of the Second World War, as Europe tended its wounds, a project was taking shape in the laboratories of Cambridge University. Maurice Wilkes and his team were designing a machine that would forever mark the history of automatic computation: EDSAC.
This automatic calculator with delay line storage, built between 1947 and 1949, bore no resemblance to the computers we know today. Bulky, noisy, power-hungry, EDSAC embodied the first practical application of von Neumann’s theories on computer architecture, a conceptual framework that still governs our current machines.
In the post-war British context, where resources were sorely lacking, building such a device was quite an undertaking. Yet the project was part of a rich tradition, alongside the codebreaking work at Bletchley Park and Turing's ACE at the National Physical Laboratory. EDSAC's distinctive feature lay in its general-purpose nature: unlike predecessors dedicated to specific calculations, it was designed to be adaptable to all types of mathematical problems.
Its memory constituted a fascinating technical feat. Imagine tubes filled with mercury where ultrasonic pulses propagate, representing through their presence or absence the famous 0s and 1s of binary language. This “delay line memory” stored 1024 words of 17 bits each. The team had also designed miniature versions of these lines for the arithmetic unit’s registers, a solution as ingenious as it was elegant despite its practical drawbacks.
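The behaviour of such a memory can be mimicked in a few lines. The sketch below follows EDSAC's commonly cited figures (long tanks of 32 short words of 17 bits) but simplifies the timing and the interface; it is an illustration, not a faithful model:

from collections import deque

WORDS_PER_LINE = 32          # one long tank held 32 short words
WORD_LENGTH = 17

class DelayLine:
    """Bits circulate through the tube and are re-injected at the input."""
    def __init__(self):
        self.bits = deque([0] * WORDS_PER_LINE * WORD_LENGTH)

    def step(self):
        # One pulse time: the bit leaving the tube is re-amplified and fed back in.
        self.bits.rotate(-1)

    def write_word(self, index, value):
        for i in range(WORD_LENGTH):
            self.bits[index * WORD_LENGTH + i] = (value >> i) & 1

    def read_word(self, index):
        return sum(self.bits[index * WORD_LENGTH + i] << i for i in range(WORD_LENGTH))

line = DelayLine()
line.write_word(3, 0b10101)
for _ in range(WORDS_PER_LINE * WORD_LENGTH):   # one full circulation of the line
    line.step()
print(line.read_word(3))     # 21: the data survives only because it keeps recirculating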
The control unit physically separated the decoding of instructions from their execution. This architecture, with 17-bit instructions in a single-address format, included conditional jumps, a hardwired multiplier, and shift operations. Numbers were represented with a sign bit and a fixed point, handling either short integers or near-double-precision values spread across two memory words.
In May 1949, EDSAC performed its first public demonstrations, thus becoming the first practical, full-scale computer based on von Neumann architecture. With its modest 650 instructions per second and roughly 2 kilobytes of memory (a million times less than our current phones), the machine found its place in Cambridge's scientific research.
The 1950s saw EDSAC at the heart of scientific advances. Rosalind Franklin used it for her work on DNA structure, while researchers in astronomy, economics, or linguistics trained in its use. A genuine computing community was emerging around this pioneering machine.
On the software front, David Wheeler created the first assembler on EDSAC in 1951, replacing tedious binary codes with mnemonic instructions. A few years later, Alick Glennie’s Autocode language foreshadowed high-level languages like Fortran or Algol. Libraries of reusable subroutines appeared, initiating the structured programming methods of the future.
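What Wheeler's innovation made possible can be illustrated with a toy translator; the mnemonics, opcode values, and 16-bit layout below are invented for this sketch and are not EDSAC's actual order code:

# A toy illustration of what an assembler does: translate mnemonics into
# numeric operation codes and addresses.
OPCODES = {"ADD": 0b00001, "SUB": 0b00010, "STORE": 0b00011, "JUMP": 0b00100}

def assemble(lines):
    words = []
    for line in lines:
        mnemonic, address = line.split()
        # Pack a 5-bit operation code and an 11-bit address into one word.
        words.append((OPCODES[mnemonic] << 11) | int(address))
    return words

for word in assemble(["ADD 100", "STORE 101", "JUMP 7"]):
    print(format(word, "016b"))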
EDSAC inspired other British universities to build their own machines, among them its successor EDSAC 2 at Cambridge and MOSAIC. Across the Atlantic, the University of Illinois developed ILLIAC, directly derived from plans generously shared by Wilkes and his team.
EDSAC's decommissioning in 1958, after a decade of loyal service, marked the end of an era but not that of its legacy. Without knowing it, EDSAC users were already manipulating the fundamental concepts we use daily: the program stored in memory, sequential architecture, the instruction set, and subroutines.
Application Programming Interface
Application Programming Interfaces, known by the acronym API, have shaped our digital environment. Their history begins in post-war England, within the laboratories of Cambridge. Maurice Wilkes and David Wheeler, two little-known pioneers, developed a modular program library for the EDSAC computer. These British researchers stored their creations on punched tapes in a simple metal cabinet, accompanied by a catalog detailing their use. This modest arrangement constitutes the first documented trace of an API.
Wilkes and Wheeler formalized their work in 1951 in the book The Preparation of Programs for an Electronic Digital Computer. This book marks the conceptual birth of standardized interfaces between computer components, well before their current denomination.
The term “API” made its first scientific appearance in 1968 in the work of Ira Cotton and Frank Greatorex. Their paper “Data structures and techniques for remote computer graphics,” presented at an AFIPS conference, described ways to abstract the specifics of graphics peripherals. Fortran programmers then used subroutine calls to free themselves from hardware constraints.
The 1970s saw the concept extend to data systems. In 1974, CJ Date published “The Relational and Network Approaches: Comparison of the Application Programming Interface,” integrating these interfaces into the ANSI/SPARC architecture of database management systems. This scientific contribution considerably broadened their scope of application.
The network computing of the 1980s transformed developers' needs: they now had to reach libraries hosted on different machines. APIs evolved to support remote procedure calls. This period saw the birth of interfaces promoting compatibility between disparate platforms.
The Internet revolution of the 1990s pushed APIs toward data exchange between applications via standardized protocols. Carl Malamud proposed in 1990 a definition that has endured through the ages: a coherent set of services made available to a programmer to accomplish specific tasks. This decade laid the groundwork for emerging web services.
The year 2000 was a milestone, with Roy Fielding's doctoral thesis, Architectural Styles and the Design of Network-based Software Architectures. This work established REST as the reference architectural style for communications between connected systems. Also in 2000, Salesforce presented the first modern API at the IDG Demo conference, thus inaugurating a new technological era.
During this decade, online services multiplied. Salesforce, eBay, and Amazon innovated by exploiting HTTP to disseminate machine-readable data, formatted in JSON or XML, via web APIs. From small players to industrial giants, companies massively adopted the model based on the cloud and APIs. Amazon played a pioneering role by requiring that all shared digital resources have an API.
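Seen from a developer's desk, the model is disarmingly simple: an HTTP request returns machine-readable data. The sketch below uses a placeholder URL and response, not a real service:

import json
import urllib.request

url = "https://api.example.com/v1/orders/42"   # hypothetical endpoint, for illustration only
request = urllib.request.Request(url, headers={"Accept": "application/json"})

with urllib.request.urlopen(request) as response:
    order = json.load(response)                 # e.g. {"id": 42, "status": "shipped"}

print(order["status"])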
From 2010 onward, the explosion of social networks catalyzed API evolution. Organizations sought cost-effective solutions to create adaptable applications. These interfaces simplified integration with external services such as payment processors or customer relationship systems. The arrival of Kubernetes around 2015 accelerated the transition toward distributed architectures composed of autonomous microservices, each with its own API.
The 2020 health crisis intensified our dependence on web services, multiplying the use of programming interfaces. Their role extended to connected objects and the creation of artificial intelligences. Organizations deployed their applications across different cloud providers, placing APIs at the center of communication between services.
Currently, evolution focuses on two major axes. Security is a priority given the growing vulnerabilities of distributed systems. The zero trust principle is becoming standard, systematically verifying authentication and authorization, including for internal requests. Rate limiting mechanisms protect against denial-of-service attacks, while observability tooling helps detect and resolve incidents.
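A common form of rate limiting is the token bucket, which can be sketched in a few lines; the parameters here are illustrative, not a recommendation:

import time

class TokenBucket:
    """Each request consumes a token; tokens refill at a steady rate up to a cap."""
    def __init__(self, rate_per_second, capacity):
        self.rate = rate_per_second
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self):
        """Return True if the request may proceed, False if it should be rejected (HTTP 429)."""
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_second=5, capacity=10)
print([bucket.allow() for _ in range(12)])   # the last requests of the burst are rejected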
In parallel, API governance establishes rules for the design, deployment, and maintenance of interfaces within organizations. The objective is to guarantee their consistency, security, and scalability in harmony with business strategy.
APIs have revolutionized the way developers create applications, with increasingly sophisticated abstractions. What began as a simple exchange between two computers in a British university laboratory today forms the backbone of 21st century digital systems, particularly in the distributed universe of cloud computing.
Whirlwind I
At the heart of MIT, the Whirlwind I project began in 1944, driven by the US Navy. Initially, the ambition was modest: to build a flight simulator for pilot training. Jay Forrester, a researcher at the servomechanics laboratory, took charge of the project the following year. The initial idea of an analog machine was quickly discarded in favor of a digital approach, which the Navy validated in 1946.
This decision radically transformed the project’s scope. Instead of a simple simulator, MIT was committing to creating a universal digital computer, capable of meeting the simulator’s requirements while opening up prospects for other scientific applications. The central machine became fully operational in 1951.
Whirlwind I’s primary purpose shaped its design. From the outset, the team conceived it as a real-time system. This approach required ultra-fast memory. The chosen solution was cathode ray tube memory, the fastest available at that time. A 1952 report noted, however, that this memory remained "the most important factor affecting the Whirlwind I system’s reliability."
To address this problem, the team developed rigorous systematic testing procedures, aimed at detecting hardware failures before they could compromise calculations. At the same time, the military’s growing use of the machine demanded flawless reliability. In the tense Cold War climate, the Whirlwind, whose funding shifted from the Office of Naval Research to the US Air Force, became integrated into the American defense system.
Its production version, the AN/FSQ-7, became a component of the SAGE continental air defense system. Dissatisfied with the limitations of cathode ray tubes, the researchers sought alternatives. Towards the end of the 1940s, Jay Forrester and other scientists explored the use of magnetic cores. William Papian, a team member, mentioned in his notes Harvard's work on "static magnetic delay lines."
The summer of 1953 saw the installation of core memory on the Whirlwind. A project report highlighted two advantages: "magnetic core memory has two major advantages: (1) greater reliability with consequent reduction in maintenance time devoted to memory; (2) shorter access time (core access time is 9 microseconds versus approximately 25 microseconds for tubes), which increases the computer’s operating speed." This innovation made the Whirlwind the first large computer equipped with this technology, which dominated the market until the 1970s.
Whirlwind I's architecture was based on a stored-program model with 16-bit words. Its power reached 20,000 instructions per second, with a random access memory of 2,048 words. It used single-address instructions and fixed-point numbers in one's complement. Its structure was divided into three parts: a storage unit with 32 electrostatic tubes, an arithmetic element for calculations, and a control unit orchestrating the whole. The electronic components relied primarily on flip-flops to store binary states and gate tubes to direct pulses.
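As an illustration of that number format, a fraction can be packed into a 16-bit one's-complement word with one sign bit and fifteen fraction bits. The helper names and layout below are ours, simplified for the example:

WORD_BITS = 16
FRACTION_BITS = 15

def encode(value):
    """Encode a fraction in [-1, 1) as a 16-bit one's-complement word."""
    magnitude = round(abs(value) * (1 << FRACTION_BITS))
    word = magnitude
    if value < 0:
        word = ~magnitude & ((1 << WORD_BITS) - 1)   # invert every bit for negative values
    return word

def decode(word):
    if word >> (WORD_BITS - 1):                      # sign bit set: negative number
        word = ~word & ((1 << WORD_BITS) - 1)
        return -word / (1 << FRACTION_BITS)
    return word / (1 << FRACTION_BITS)

print(format(encode(0.5), "016b"))    # 0100000000000000
print(decode(encode(-0.25)))          # -0.25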
The central concern was speed. The initial objective of 50,000 operations per second was not achieved, but the 20,000 actually reached already represented a feat. Execution times of 3 microseconds for an addition and 16 for a multiplication were far shorter than those of contemporary machines. The instruction set, reduced to about thirty commands, included conditional jumps that let the machine alter its course according to intermediate results. Specialized instructions, such as automatic coordinate transformations, further accelerated calculations.
Whirlwind I’s human-machine interface was bold. Beyond the photoelectric punched tape reader and conventional typewriters, the machine had cathode ray screens—alphanumeric or vector—with photographic capture of results. This advanced system gave programmers immediate visual feedback on their work.
The programming cycle prefigured what we know today. The programmer first broke their problem down into elementary steps and translated them into instructions, which operators entered via special equipment. This code was converted to an intermediate language, then to binary, before being stored on punched or magnetic tape. The control unit then loaded the instructions sequentially from memory, decoded them, and sent the signals necessary for their execution.
Whirlwind I’s applications touched numerous domains: economics, magnetism, radar, mechanical design, signal processing. But its most enduring legacy is undoubtedly its advances in real-time computing applied to industrial and air traffic control. This extraordinary machine contributed to improving computer architecture and accelerated the development of computing, both hardware and software.