Stéphane FOSSE

EPOCH

EPOCH © 2025 by Stéphane FOSSE - This book is published under the terms of the CC BY-SA 4.0 license

Chapter 6
1970

When the Digital World Transformed Everything

While bombs rain down on Vietnam, three engineers at Intel complete the design of the first commercial microprocessor. Their invention, almost unnoticed in a world obsessed with the Cold War, will change everything.

The decade opens in a peculiar climate. America bleeds in Vietnam. Young people take to the streets. The USSR and the United States eye each other warily. Oil becomes a weapon. And amidst this global chaos, a handful of men and women invent the digital future.

American campuses are boiling over. Social protest rejects established authority, including that of large centralized computers symbolizing institutional power. At Berkeley as at MIT, students dream of a different kind of computing—more personal, more accessible. In university laboratories, people work on machines that bear no resemblance to the air-conditioned colossi of corporations. There, away from the paths marked out by IBM, another vision of the computer is born.

In 1973, the oil shock strikes. Lines grow at gas stations. Inflation soars. Faced with this crisis, companies seek to reduce their costs. Computerization appears as a solution. Banks are the first to embrace it, as they must manage an increasing number of transactions in an unstable economic context. Their needs propel the development of ever more sophisticated databases. Oracle emerges in this context, based on an idea by Larry Ellison inspired by a theoretical IBM article.

In the East, behind the Iron Curtain, the USSR attempts to keep pace. It launches its own computing plan, sometimes copying Western architectures. The SALT agreements change nothing—technology remains a power issue. Soviet computers equip factories and administrations but suffer from a growing lag. Computing visibly marks the difference between the two blocs.

Japan plays its own tune. Without the military constraints weighing on American industry, Japanese companies focus on consumer electronics. They excel in miniaturization. Sony, Panasonic, and Sharp establish themselves worldwide. Their components soon equip the first personal computers. The country, defeated in 1945, takes its technological revenge.

Europe advances piecemeal. France relies on its Plan Calcul and the Compagnie Internationale pour l'Informatique (CII) to create a national champion. Germany depends on Siemens. The United Kingdom develops ICL. But these national initiatives struggle against American giants. European universities, however, shine in theoretical research. The Pascal language, created by Niklaus Wirth at the Swiss Federal Institute of Technology in Zurich, marks an advance in programming language design.

Television then reigns supreme over Western households. Images of the first step on the Moon, viewed by hundreds of millions of spectators, demonstrated the power of communication networks. Telecommunications engineers reflect on digital data transmission, particularly with the first videotex experiments. In France, the PTT launches studies that will lead to Minitel, a mass-market online network that predates the Internet era.

In schools and universities, computer science takes its first steps as a teaching discipline. The BASIC and Logo languages are designed with an educational objective. Seymour Papert, at MIT, theorizes learning through programming. These reflections open a new field: computing is not only a calculation tool but also a means of learning differently.

The working world transforms—cathode ray tube screens gradually replace cardboard files. In offices, secretaries discover word processing. The first daisy wheel printers reproduce typewriter quality but with electronic flexibility, changing administrative professions. Unions worry and negotiate agreements on screen working conditions, but the march toward all-digital is launched.

Science fiction seizes upon the computer theme. “2001: A Space Odyssey” popularized the image of HAL 9000, an artificial intelligence gone mad. In 1973, the film “Westworld” imagines uncontrollable robots. These works express the hopes and fears aroused by computerization. Simultaneously, the first commercial video games like Pong attract a new audience to these machines once reserved for scientists.

Healthcare doesn’t escape this wave. Hospitals equip themselves with computer systems for patient management. The first medical scanners, appearing at the beginning of the decade, produce digital images that only computers can process. Pharmaceutical research increasingly relies on molecular modeling. Medical computing emerges as a new discipline.

Large cities become laboratories where urban computing applications are tested. Traffic light management and the optimization of water or electricity networks benefit from real-time control systems. These applications require computers capable of reacting instantly to their environment, stimulating the development of embedded systems.

International trade metamorphoses with the adoption of electronic exchange standards. Barcodes appear in American supermarkets. Banking transactions are automated with the SWIFT network. Supply chain computerization begins. These developments accelerate the globalization of subsequent decades, where information flows will circulate as quickly as goods.

Steve Jobs and Steve Wozniak found Apple in 1976. Their Apple I symbolizes the spirit of this new computing: artisanal, personal, almost rebellious. At the same time, Bill Gates and Paul Allen create Microsoft. These young entrepreneurs bear no resemblance to the leaders of traditional large computing companies. They inaugurate a new era where the computer is both a consumer product and an object of passion.

Fundamental research advances rapidly. Alan Kay’s work at Xerox PARC on graphical interfaces enables more intuitive computing. Dennis Ritchie and Ken Thompson’s creations at Bell Labs—the C language and the UNIX system—launch modern computing. These innovations, initially confidential, will transform the industry’s face over subsequent decades.

In the mid-1970s, the first amateur computer clubs gather enthusiasts around rudimentary machines like the MITS Altair 8800. These communities develop a culture of sharing that contrasts with the closed world of professional computing. Specialized magazines publish programs that readers copy line by line. No one yet imagines that these obscure practices prefigure 21st-century computing.

The computer, once a tool reserved for large organizations, begins its transformation into a daily companion during the 1970s. The ideas emerging during this period will shape our digital world for the following 50 years, building a society where information processing is as essential as the production of material goods.


ALOHAnet

In September 1968 at the University of Hawaii, Norman Abramson faced a problem that seemed almost insurmountable. How could he connect computers scattered across multiple islands, separated by hundreds of kilometers of ocean? Cables cost a fortune, telephone links were slow and unreliable. Yet the university needed to connect its main campus at Manoa, near Honolulu, to research centers scattered across Oahu, Kauai, Maui, and the Big Island of Hawaii.

The idea germinating in his mind seemed somewhat crazy: use radio waves to enable computer communication. In 1968, no one had yet attempted such an endeavor. Computer communications were confined to direct cable connections, point-to-point, following patterns inherited from telegraph and telephone systems. But Hawaii imposed its own rules. The volcanic geography of the archipelago transformed what elsewhere would be a constraint into a laboratory of innovation.

The system Abramson envisioned relied on a deceptively simple architecture. At the heart of the setup stood an IBM 360/65 equipped with 750 KB of memory, installed on the main campus. This machine communicated with a more modest computer, an HP 2115A christened MENEHUNE, named after the small beings of Hawaiian mythology reputed for accomplishing “impossible” tasks in a single night. The MENEHUNE managed radio transmissions through two 100 kHz UHF channels: one broadcast at 407.350 MHz to remote terminals, the other collected their responses at 413.475 MHz. Each channel operated at 24,000 baud.

But the project’s true boldness lay elsewhere. In the downlink direction, from MENEHUNE to the terminals, everything was conventional: a single source transmitting, recipients listening, scheduling following well-established priority rules. It was in the other direction that innovation emerged. How to manage communications when dozens of terminals all want to speak simultaneously on the same channel? Traditional multiplexing techniques would impose on each terminal a fixed time or frequency slice, whether it used it or not. An unacceptable waste for computer traffic consisting of short bursts separated by long pauses.

The solution proposed by Abramson’s team broke all established codes. Each terminal transmitted whenever it pleased, without asking anyone’s permission. Data traveled in standardized packets: 80 eight-bit characters for payload, plus 64 bits for identification, control, and error detection. When MENEHUNE received an intact packet, it sent an acknowledgment. Otherwise the sender waited a random delay and tried again. This approach accepted the unacceptable: packet collisions. Two terminals transmitting simultaneously would see their signals mix, creating gibberish incomprehensible to the receiver. Rather than seeking to avoid these collisions at all costs, ALOHA considered them a necessary evil, the price to pay for unmatched operational simplicity.

Abramson’s mathematical calculations revealed the limits of this philosophy. The system exploited at best only 18.4% of the channel’s theoretical capacity, a performance reached when offered load represented exactly half of that capacity. Beyond that, collisions multiplied, retransmissions accumulated, and the network collapsed into chaos of interfering signals.

ALOHA proved that a system could function without a conductor, without central coordination, without pre-established scheduling. Terminals managed among themselves, accommodated conflicts, found their equilibrium in what resembled organized anarchy.

In 1973, a young Xerox engineer named Robert Metcalfe visited the installation. He understood that ALOHA’s principles could adapt to media other than radio waves. A few years later, Ethernet would be born, a wired transposition of Abramson’s ideas that would revolutionize local area networks.

Meanwhile, the Hawaii researchers refined their creation. They invented “slotted ALOHA,” a variant where transmissions could only begin at predefined instants, like beats of an invisible metronome. This synchronization doubled theoretical efficiency, raising maximum throughput to 37% of channel capacity. The improvement paved the way for an entire family of derived protocols.
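These two ceilings, 18.4% for the original scheme and roughly 37% for its slotted variant, follow from the classic throughput formulas S = G · e^(-2G) and S = G · e^(-G), where G is the offered load in packets per packet time. A short C sketch, written for illustration rather than taken from the Hawaiian system, tabulates both curves:

```c
/* Throughput of pure and slotted ALOHA as a function of offered load G.
 * Pure ALOHA:    S = G * exp(-2G), maximum 1/(2e) ~ 18.4% at G = 0.5.
 * Slotted ALOHA: S = G * exp(-G),  maximum 1/e    ~ 36.8% at G = 1.0. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double e = exp(1.0);
    printf("   G   pure    slotted\n");
    for (double g = 0.25; g <= 2.0 + 1e-9; g += 0.25) {
        double pure    = g * exp(-2.0 * g); /* a packet survives only if no other
                                               transmission overlaps its 2-packet window */
        double slotted = g * exp(-g);       /* time slots halve the vulnerable window */
        printf("%5.2f  %5.3f   %5.3f\n", g, pure, slotted);
    }
    printf("ceilings: %.1f%% (pure), %.1f%% (slotted)\n", 100.0 / (2.0 * e), 100.0 / e);
    return 0;
}
```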

The project’s success illustrates a truth about technological innovation. Often, the most severe constraints generate the most creative solutions. Hawaii could not simply copy continental recipes. Geographic isolation forced researchers to fundamentally rethink computer communications. In doing so, they discovered universal principles that transcended their particular situation.

Wi-Fi networks, satellite communications, cellular systems—all incorporate mechanisms inspired by the Hawaiian protocol. The idea that a network can function without strict central control, managing conflicts rather than avoiding them, has become a paradigm of distributed computing. More than fifty years after its creation, ALOHA retains its inspirational force. Not so much through its raw performance, far surpassed by current standards, but through the conceptual audacity it exemplifies. Sometimes, you just need to let the machines fend for themselves.


FORTH

Charles "Chuck" Moore worked at MIT, then at the Stanford Linear Accelerator Center. The computing landscape of the time frustrated him: FORTRAN and ALGOL monopolized attention, yet these languages forced him to spend an inordinate amount of time cycling between writing code, compiling it, and executing it. He was looking for something more direct, more immediate.

His solution took the form of a text interpreter that he wrote in ALGOL. The principle was that everything you gave it was either a number or a word representing a command. Gone were the separate phases of editing, compilation, and execution. Everything happened interactively, in real time. He invented special words like ":" to tell the system it needed to start compiling the following code, and ";" to tell it to return to normal interpretation mode. When the compiler encountered such a block, it stored this "word"—the equivalent of our current functions—in a dictionary for later reuse.

FORTH’s architecture relies on two stacks working together. The first holds temporary values and parameters; the second, the return stack, keeps track of return addresses as words call one another. This approach produces compact and fast code, but above all, code that is infinitely extensible. In 1970, Moore was working on a third-generation IBM computer that limited names to a maximum of five characters. He therefore named his language "FORTH," an allusion in English to a fourth generation of programming language.

The National Radio Astronomy Observatory in Arizona welcomed the first public version of FORTH in 1971. The system was used to collect and analyze data while controlling the radio telescope in real time. The astronomy community adopted this technology. Moore then created several versions: MiniFORTH for minicomputers, MicroFORTH for microcontrollers. Each variant adapted to the specific constraints of its environment.

FORTH uses postfix notation, also called reverse Polish notation. Operators come after their operands. The expression 1 2 + means that 1 and 2 are added. The operation is interactive: when the user types this expression, the interpreter first pushes 1 onto the stack, then 2, before executing "+" which retrieves these two values, adds them, and puts the result back on the stack.
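To make that stack discipline tangible, here is a minimal reverse-Polish evaluator in C. It borrows only the postfix idea described above; the dictionary, colon definitions, and the rest of a real FORTH are deliberately left out:

```c
/* Minimal postfix (reverse Polish) evaluator: numbers go onto a data stack,
 * operators pop their operands and push the result, as in FORTH. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long stack[64];
static int  sp = 0;                       /* index of the next free slot */

static void push(long v) { stack[sp++] = v; }
static long pop(void)    { return stack[--sp]; }

static void eval(const char *src) {
    char buf[256];
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (char *tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
        if      (strcmp(tok, "+") == 0) { long b = pop(), a = pop(); push(a + b); }
        else if (strcmp(tok, "-") == 0) { long b = pop(), a = pop(); push(a - b); }
        else if (strcmp(tok, "*") == 0) { long b = pop(), a = pop(); push(a * b); }
        else if (strcmp(tok, ".") == 0) { printf("%ld ", pop()); }  /* FORTH's print word */
        else                            { push(strtol(tok, NULL, 10)); }
    }
}

int main(void) {
    eval("1 2 + .");        /* prints 3  */
    eval("10 4 - 3 * .");   /* prints 18 */
    printf("\n");
    return 0;
}
```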

FORTH’s real strength comes from its ability to grow. Programmers create new commands, "words," which become an integral part of the language. This extensibility adapts FORTH to the precise needs of each application. The language combines low-level calculations with a powerful mechanism for declaring functions. Once compiled, FORTH programs run almost as fast as hand-written assembly. With a compiler that optimizes aggressively, FORTH is highly efficient in numerical processing. For embedded systems, prototype hardware, or boot loaders, a minimal compiler kernel is all that’s needed. Once this kernel is available, you can build a complete FORTH compiler using nothing but FORTH.

FORTH’s minimalist philosophy is reflected in memory management. The language only offers storage of single or double cells, typically integers. To build an array, you must allocate a variable and advance an internal pointer in the dictionary to reserve more space. An array in FORTH is just a region of memory, with no descriptor or bounds information attached. The language doesn’t offer a native heap either; programmers write their own memory management routines.
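That way of building arrays, advancing a single dictionary pointer to reserve raw cells, amounts to a bump allocator. A hypothetical C sketch of the idea follows; the names here and allot merely echo FORTH's HERE and ALLOT words:

```c
/* FORTH-style array creation: reserve raw cells by advancing a single
 * pointer into a dictionary area; the "array" is just that region. */
#include <stdio.h>

#define DICT_CELLS 4096
static long  dictionary[DICT_CELLS];
static long *here = dictionary;           /* next free cell, like FORTH's HERE    */

static long *allot(int cells) {           /* like FORTH's ALLOT                   */
    long *start = here;
    here += cells;                        /* no bounds check, in the FORTH spirit */
    return start;
}

int main(void) {
    long *table = allot(10);              /* a 10-cell "array": no type, no bounds */
    for (int i = 0; i < 10; i++)
        table[i] = (long)i * i;
    printf("table[7] = %ld\n", table[7]); /* prints 49 */
    return 0;
}
```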

Concrete applications of FORTH are multiplying. Sun Microsystems integrates it into the OpenBoot firmware of its workstations. WearLogic has developed an electronic wallet based on an AVR microcontroller programmed in FORTH. AEMS (Advanced Energy Monitoring Systems) uses FORTH for its Yatesmeter, a sophisticated instrument that monitors pumps in real time. These successes show that FORTH remains relevant for embedded systems and hardware control.

Technological evolution has not diminished interest in FORTH. Its interactive approach and ability to build abstractions step by step make it a valuable educational tool. Projects like pbFORTH for LEGO Mindstorms robots reveal its potential for teaching programming. Its particular syntax and mental model, which differs from conventional languages, offer students another perspective on programming concepts.

The FORTH community is active. User groups still exist in several countries, conferences like euroFORTH are held regularly. Recent developments include GNU Gforth, a portable version that complies with standards, and VFX Forth, a modern optimizing compiler that speeds up FORTH programs by a factor of three to five compared to traditional compilers.

Rather than imposing an abstraction between man and machine, FORTH creates a direct link to the hardware while allowing the construction of useful abstractions.


Pascal

Niklaus Wirth works at the Swiss Federal Institute of Technology in Zurich. A frustration seizes him when faced with existing programming languages. These tools seem incoherent to him, their constructs defy all logical explanation. For a mind accustomed to systematic, methodical, and orderly reasoning, this situation is unbearable.

He designs Pascal with two distinct ambitions. He first wants to create a language truly suited to teaching, where each concept is naturally reflected in the syntax. But he refuses to sacrifice efficiency for the sake of pedagogy: his implementations must run reliably on the machines of the era.

The legacy of Algol 60 weighs heavily in this approach. This language meets pedagogical requirements better than any other at the time. Wirth adopts its structuring principles and the form of its expressions. Yet he resists the temptation to make Pascal a simple subset of Algol 60, because certain declaration mechanisms would prevent him from integrating the new features he has in mind. These features primarily concern data structures, whose absence in Algol 60 largely explains why that language is confined to a restricted application domain. By introducing records and files, Wirth hopes that Pascal can tackle commercial management problems, or at least serve to demonstrate them effectively in a programming course.

The first version appears in 1970, accompanied by an implementation on CDC 6000 computers. Three years of intensive use, teaching, and experimentation give birth to a revised version in 1972. The changes reflect lessons learned in the field: constant parameters give way to value parameters, the class structure disappears, file handling is rethought.

Standardization arrives in 1983 with the ISO 7185 standard, which defines unextended Pascal. An update follows in 1990, the same year the Extended Pascal standard, ISO 10206, appears. This distinction addresses the tension between preserving the simplicity of the original language and meeting the growing needs of modern programming.

Extended Pascal brings a separation between modularity and compilation. Each module allows the export of interfaces containing values, types, schemas, variables, procedures, and functions. The system finely controls visibility, and character strings receive unified treatment. Fixed-length strings, character values, and variable-length strings are compatible with each other. The concatenation operator combines them freely, while the programmer specifies the maximum lengths of variable strings. Variable binding constitutes a remarkable innovation. A variable declared as bindable can connect to file storage, the real-time clock, or command lines. Only variables explicitly declared for this purpose can be connected in this way, which keeps external access under control.

The emergence of object-oriented programming pushes certain Pascal compilers toward this new paradigm. In 1993, the Pascal standards committee publishes a technical report on “Object-Oriented Extensions to Pascal”. The members of this committee represent a surprising range of organizations: from Pace University to the US Air Force, from Apple Computer to Microsoft and Digital Equipment Corporation.

Pascal marks the history of programming through its clear syntax and strong typing philosophy. An entire generation of programmers discovers through it the principles of structured programming and rigorous design. Its massive use in teaching during the 1970s and 1980s forges disciplined programming habits that will long outlive it. Commercial success accompanies pedagogical success, notably thanks to Borland’s Turbo Pascal. These commercial versions often enrich the standard language with practical extensions, never betraying the clarity and rigor that characterize the original.

The Pascal language embodies the idea that a program must be a precise mathematical construct, whose correctness can be formally proven. This vision contrasts with current trends that favor development speed and flexibility, sometimes at the expense of formal rigor, and therefore sometimes of security. Wirth may have been right: in a world where software governs our lives, the discipline he advocated has never been more necessary.


DEC PDP-11

In 1970, Digital Equipment Corporation revolutionized the minicomputer world with the PDP-11. This machine marked a decisive step in computer evolution, not through raw power, but through an architectural vision: the UNIBUS. This unified bus connected processor, memory, and peripherals on a common interface, where previous machines multiplied specialized connections. The innovation appeared technical, yet it radically transformed computer system design.

The PDP-11’s history began around 1968 in DEC’s laboratories. Gordon Bell, Harold McFarland, and Roger Cady led a team confronted with the limitations of minicomputers of the era. Cramped address space, insufficient registers, absence of a hardware stack, crude interrupt handling... These defects handicapped the development of sophisticated applications, so the engineers explored different approaches. A prototype called PDP-X emerged, followed by a DCM (Desk Calculator Machine) version. These experiments ultimately converged toward the PDP-11’s definitive architecture.

The result exceeded expectations. The PDP-11 offered an orthogonal instruction set: the same addressing modes worked with all instructions. Its eight general-purpose registers provided unprecedented flexibility, one serving as a stack pointer, another as a program counter. Direct byte addressing, still rare at the time, facilitated character processing and variable-length data. These technical innovations translated into more natural and efficient programming.

The market responded enthusiastically. Between 1970 and 1975, DEC sold more than 20,000 PDP-11 units. This success relied on the architecture’s modularity which allowed an extended range from small systems costing a few thousand dollars to configurations reaching several hundred thousand dollars. Technological advances accompanied this rise in power. The first models relied on magnetic core memory, soon replaced by MOS semiconductor memory, then by fast bipolar cache memory.

The family expanded. The initial PDP-11/20 model gave way to the 11/45 in 1972, a more robust machine that gained in performance. The 11/70 arrived in 1975 with its cache memory and extended addressing capacity of 4 megabytes. At the opposite end, the 1975 LSI-11 exploited large-scale integrated circuits to miniaturize and reduce costs. From this diversity, the PDP-11 would conquer varied territories such as scientific computing, business management, industrial process control, and communications.

DEC understood that a computer without an operating system is an empty shell. The company developed several software environments adapted to specific needs: RT-11 for simple real-time applications, RSX-11M for complex real-time multitasking, RSTS/E for interactive time-sharing. These mature systems, accompanied by quality development tools, facilitated the PDP-11’s adoption by a diverse clientele.

The introduction of DECnet in 1975 interconnected PDP-11s with each other and with other DEC machines according to a distributed approach that contrasted with IBM’s hierarchical philosophy. DECnet deployed a complete network architecture, from the physical layer to applications. Its successive evolutions enriched its capabilities: Phase II in 1978 established gateways to VAX systems, Phase III introduced adaptive routing.

The PDP-11’s architecture directly inspired future processors like the Motorola 68000. Competitors adopted UNIBUS as a de facto standard. The general-purpose register organization and sophisticated addressing modes of the PDP-11 permeated modern processor design. Mechanisms like program counter-relative addressing or auto-increment/auto-decrement became widespread throughout the industry.

The PDP-11 also found its place in UNIX history. Ken Thompson and Dennis Ritchie chose this machine at Bell Labs to develop their operating system. The clarity and consistency of the PDP-11’s architecture made it an excellent pedagogical support. Numerous university textbooks used it as a reference, thus training an entire generation of computer scientists in its concepts.

But time gradually revealed this architecture’s weaknesses. The 16-bit address space limited memory to 64 kilobytes per program, a constraint that weighed increasingly heavily with applications’ growing sophistication. DEC added memory management mechanisms to circumvent this limitation, at the cost of increased complexity. The proliferation of instruction set variants, notably the different floating-point options, complicated software portability.

The decline began in the 1980s. 16-bit microcomputers gained power while remaining much less expensive. Above all, DEC launched the VAX which offered 32-bit address space and relegated the PDP-11 to second place. Production nevertheless continued until the 1990s for specialized applications that did not require the VAX’s capabilities.

The PDP-11 left its mark on the era. It established DEC as a major player in the computer industry, and demonstrated that a well-designed architecture can transform a market and influence technological evolution.


B-tree

When discussing modern computing, certain innovations remain in the shadows despite their ubiquity. The history of the B-tree begins in the late 1960s. Computers were already juggling considerable volumes of data, but accessing this information stored on magnetic disks posed a real challenge. Each read required time-consuming mechanical movements, and programmers desperately sought ways to organize their indexes to limit these accesses. At Sperry Univac, Howard Chiat and Meyer Schwartz tackled this challenge in collaboration with Case Western Reserve University. They were not alone: Bruce Cole, Stewart Radcliffe, and Michael Kaufman conducted parallel research at Control Data Corporation, supported by Stanford University.

These scattered efforts found their theoretical culmination through Rudolf Bayer and Edward McCreight. At Boeing Scientific Research Labs, these two researchers formalized the emerging concepts and published their paper “Organization and Maintenance of Large Ordered Indices” in 1972. This text laid the mathematical foundations of the B-tree and revolutionized index management.

The genius of the B-tree lies in its conceptual simplicity. Where classic binary trees limit each node to two children, the B-tree allows wider nodes containing multiple keys. This architectural freedom radically transforms operational efficiency. The tree automatically maintains its balance during insertions and deletions, guaranteeing a logarithmic height that minimizes disk accesses. Imagine an intelligent phone directory that constantly reorganizes itself so that each search requires an identical number of steps, regardless of the directory’s size.
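A fragment of C can make the structure concrete. Each node holds up to 2t-1 sorted keys and, unless it is a leaf, one child pointer for every gap between keys; a search visits one node per level, so the number of disk reads grows only with the logarithm of the key count. This is an illustrative sketch, not the code of any particular database:

```c
/* Skeleton of a B-tree node and of the search procedure. A node packs many
 * sorted keys, so the tree stays shallow and a lookup touches few nodes,
 * i.e. few disk reads. */
#include <stdbool.h>
#include <stdio.h>

#define T 2   /* minimum degree; disk-oriented systems use values in the hundreds */

struct btree_node {
    int  nkeys;                        /* number of keys currently stored             */
    bool leaf;                         /* true if the node has no children            */
    long keys[2 * T - 1];              /* keys kept in ascending order                */
    struct btree_node *child[2 * T];   /* child[i]: keys between keys[i-1] and keys[i] */
};

/* Each loop iteration descends one level, i.e. typically one disk read. */
static bool btree_search(const struct btree_node *node, long key) {
    while (node) {
        int i = 0;
        while (i < node->nkeys && key > node->keys[i])
            i++;                                     /* find the first key >= key */
        if (i < node->nkeys && key == node->keys[i])
            return true;                             /* found in this node        */
        if (node->leaf)
            return false;                            /* nowhere left to descend   */
        node = node->child[i];                       /* subtree that may hold key */
    }
    return false;
}

int main(void) {
    struct btree_node left  = { .nkeys = 2, .leaf = true, .keys = { 5, 9 } };
    struct btree_node right = { .nkeys = 2, .leaf = true, .keys = { 21, 34 } };
    struct btree_node root  = { .nkeys = 1, .leaf = false, .keys = { 13 },
                                .child = { &left, &right } };
    printf("%d %d\n", btree_search(&root, 34), btree_search(&root, 7));  /* prints 1 0 */
    return 0;
}
```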

Other variants emerged in the following years. Donald Knuth proposed the B+-tree, a clever modification that concentrates all data in the tree’s leaves. Internal nodes serve only for indexing, considerably simplifying sequential data traversal. This variant became the reference standard in database management systems. In 1977, Bayer partnered with K. Unterauer to create the Prefix B+-tree. Their innovation consisted of storing only the prefixes necessary to distinguish entries, saving space and increasing the branching factor.

Industrial adoption followed naturally. IBM integrated B-trees into VSAM (Virtual Storage Access Method), demonstrating their commercial viability. The system exploited a variant of the B+-tree and introduced optimizations such as replicating sequence nodes on disk cylinders. This concrete implementation proved that the theory could indeed improve production performance.

Multi-user requirements emerged, along with the need for multiple programs to simultaneously modify the same structure without corrupting it. Bayer and Mario Schkolnick solved this problem in 1977 with sophisticated locking protocols. Their system allowed multiple simultaneous readers while properly managing concurrent modifications, a technical feat from which modern databases inherit.

Theoretical research accompanied these practical developments. Raymond A. Miller studied optimal tree construction in 1977, while Andrew Chi-Chih Yao analyzed the probabilistic behavior of nodes in 1978. Yao’s calculations revealed an average occupancy rate of about 69%, a figure that helped designers size their systems.

The influence of B-trees extends beyond their origins. MySQL exploits them in its InnoDB engine, PostgreSQL uses them for its indexes, and numerous file systems such as NTFS, HFS+, or Ext4 organize their data according to these principles. This ubiquity testifies to their conceptual robustness.

The 1990s and 2000s brought new problems. Multicore architectures and complex memory hierarchies required adaptations. Researchers developed variants optimized for processor caches and created write-optimized structures such as the B𝜀-tree, well suited to SSDs. These evolutions showed how an idea can adapt to emerging technological constraints.

Big Data and real-time processing pushed the limits even further. New versions improved parallelism and scalability, while current research explores optimization for non-volatile memories and distributed architectures.

More than fifty years after its formalization, the B-tree continues to structure our daily data. From smartphones to data centers, this discreet yet essential invention silently organizes digital information.


Floppy Disk

In 1967 at IBM, David Noble was tasked with solving an apparently trivial problem: finding a way to load programs into the new System/370 computers at startup. The semiconductor memories equipping these machines lose their content as soon as they’re powered off, unlike the older magnetic core memories. An external, reliable medium was therefore needed.

Noble’s team, which included Alan F. Shugart, Herbert Thompson, Ralph Flores, and Warren L. Dalziel, began by exploring magnetic tape cartridges. But Noble had another idea: why not use a flexible disk? The first experiments bordered on amateurism. The team bought a 45 RPM vinyl record player from a store and jury-rigged a magnetic read head mounted on the arm. The flexible disk rested on a foam-lined platter.

These experiments quickly revealed a nightmare, as the slightest speck of dust or stray hair caused data loss. Worse, these impurities literally “crawled” across the disk, pushed by the read head. The brilliant idea emerged from an evening work session between Herbert Thompson and Ralph Flores. Frustrated with the turntable approach, they abandoned this path and rushed to a grocery store to buy pink absorbent paper, the kind used for cleaning.

By cutting a file folder to the right size and gluing this pink paper to it, they created the first protective envelope. They cut openings for the read head and the central mounting system. The dust problems disappeared instantly. This pink paper would later give way to a specialized white material, but the principle remained unchanged for decades.

The first IBM floppy disk drive, dubbed “Minnow”, hit the market in 1971. It was read-only and equipped the IBM 3330 hard disk controllers as well as certain System/370 models. The following year, Memorex made a major breakthrough with the 651, the first OEM drive capable of both reading and writing. Wang Laboratories and DEC eagerly adopted it to replace their aging punched tape readers.

The real turning point came in 1973 when IBM launched its 3740 data entry system equipped with the 33FD drive. A single floppy disk could now store the equivalent of 3,000 punch cards, drastically reducing paper consumption. Its storage capacity of roughly 240 kilobytes revolutionized working habits. The industry adopted this format as a de facto standard. Shugart Associates, the company founded by Alan Shugart, took the lead in the compatible drive market.

In 1975, the market was organized around well-defined segments. Data entry accounted for 12,500 units sold, intelligent terminals 10,200, minicomputers 3,100, word processing 800 units, not to mention other emerging applications. A drive cost about $380, a considerable sum.

The year 1976 marked the arrival of the 5.25-inch format, born from an informal conversation at Shugart Associates. Customers complained about the bulkiness of the 8-inch drives, which weighed 17 kg each. The team developed a prototype the size of a cocktail napkin in six weeks. This innovation preserved the technical performance of the 8-inch format in a case adapted to the emerging desktop computers.

IBM responded in 1977 with the 53FD model, the first double-sided drive that doubled capacity while maintaining compatibility with single-sided disks. Adapting this technology proved to be an uphill battle. Shugart Associates nearly went bankrupt with its double-sided read heads inspired by Winchester hard drives. It was ultimately Tandon that found the viable solution with one fixed head and one mobile head.

The year 1977 also saw the birth of the Apple II; the following year, its Disk II drive made it the first mass-produced personal computer built around 5.25-inch floppies. Personal computers adopted this configuration, which helped standardize storage media. The software industry could finally take off.

The genesis of the 5.25-inch format reveals entertaining anecdotes. Wang Laboratories dreamed of a desktop computer with a $100 drive. Steve Jobs, unknown to the general public, regularly showed up at Shugart Associates seeking an affordable drive to replace the audio cassettes Apple used for data storage. Don Massaro and Jimmy Adkisson’s team deliberately chose the 5.25-inch dimension because it was the smallest size that didn’t allow the diskette to slip into a pocket, thus avoiding damage from carelessness.

The 8-inch format experienced its golden age in the early 1980s with 1.5 million units produced annually. The industry comprised 25 American manufacturers, 11 Japanese, and 8 European. Margins eroded in the face of fierce competition, dropping from 55% to 40%. Japanese manufacturers, obsessed with production quality, gradually dominated the global market.

The decline of the 8-inch format began with the success of the 5.25-inch, but production continued until the late 1990s. IBM maintained contracts with YE Data in Japan to deliver 5,000 to 10,000 drives per year, exclusively for maintaining old systems still under contract.

The 1980s saw most personal computers accommodate both a 5.25-inch drive and a 3.5-inch drive. The latter format eventually became the new standard. The 1990s marked the peak of the 3.5-inch floppy with over five billion units sold each year. But the arrival of rewritable CDs, DVDs, and then USB flash drives spelled the end for this technology. In 2011, Sony, the last global manufacturer, closed the book on floppy disk production.

The floppy disk saga implicitly tells the story of the complete transformation of computer storage media. From a simple program loading tool, it evolved into a versatile peripheral that heralded modern removable storage. Without it, neither the software industry nor the democratization of personal computing would have reached the scale we know today.


ed

Some tools mark their era through radical simplicity rather than sophistication. The ed editor belongs to this category of innovations that, born at Bell Labs in the early 1970s, shaped how we conceive text editing. Its development responded to the hardware constraints of teletype terminals that could only print one line at a time on paper, but its design reveals a philosophy that transcends these technical limitations.

The genius of ed lies in its ability to transform text editing into a dialogue between user and machine. Unlike modern editors that expose the entire document, ed requires thinking of text as a sequence of numbered lines, stored in a memory space called a buffer. Far from being a weakness, this abstraction is a strength that forces users to develop a precise mental representation of their document’s structure.

The syntax of ed stands out through its conciseness. The commands a, i, d, s, and p suffice to cover most editing operations. Behind this apparent simplicity lies an addressing system of remarkable richness. Users can designate the current line with a simple dot, the last line with a dollar sign, or specify entire ranges using regular expressions. This approach revolutionizes text manipulation by introducing a formal grammar to describe textual patterns.

Regular expressions constitute perhaps the most enduring legacy of ed. These patterns, which describe text structures rather than literal strings, transform search and modification into logical operations. The dot representing any character, the asterisk quantifying repetitions, the brackets defining sets—all concepts that would later inspire programming languages and text analysis tools. The substitution command s illustrates this power since it can transform entire documents with a single instruction, provided the user masters the language of patterns.
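That grammar outlived ed. POSIX systems still expose essentially the same machinery through the regcomp and regexec functions, and a short, purely illustrative C sketch shows the dot, star, and bracket constructs at work:

```c
/* The pattern vocabulary popularized by ed, '.' for any character, '*' for
 * repetition, '[...]' for character sets, survives in the POSIX regex API. */
#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* "one letter or underscore, then any number of letters, digits or _" */
    const char *pattern = "^[A-Za-z_][A-Za-z0-9_]*$";
    const char *samples[] = { "buffer", "ed_line1", "2fast", "hello world" };

    if (regcomp(&re, pattern, REG_NOSUB) != 0)   /* default: basic REs, as in ed */
        return 1;

    for (int i = 0; i < 4; i++)
        printf("%-12s %s\n", samples[i],
               regexec(&re, samples[i], 0, NULL, 0) == 0 ? "matches" : "no match");

    regfree(&re);
    return 0;
}
```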

This editing philosophy finds its purest expression in ed’s minimalist interface. When faced with an error, the editor simply displays a laconic ?. This parsimony, which disconcerts beginners, reflects a conception where the tool must disappear behind the task. The creators of ed operated on the principle that efficiency arises from mastery, and that verbosity harms concentration.

Technical limitations become apparent in certain constraints, such as the 512-character maximum per line, or the 256-character limit for global commands. These restrictions, which seem trivial today, testify to the ingenuity required to create powerful tools with limited resources. Programmers had to consider every byte, optimize every algorithm.

The influence of ed extends beyond the realm of text editing. The grep utility takes its name from the command g/re/p (global search for regular expression and print), transforming an ed feature into a standalone tool. The sed editor extends ed’s logic to stream processing, while vi preserves its fundamental concepts while adapting them to visual editing. This genealogy reveals how innovations propagate and evolve within the UNIX ecosystem.

ed persists in all UNIX and Linux systems. Its presence is more than anecdotal: in constrained environments, over limited network connections, or in administration scripts, its original qualities regain their relevance. The editor that seemed destined for computing museums sometimes proves to be the most suitable tool for certain situations.

The original ed documentation, written by Brian W. Kernighan, sets the standards for UNIX technical documentation. It combines theoretical rigor and pragmatism, explaining what the tool does and why it does it that way. This pedagogical approach, where each concept builds on concrete examples, still influences how we document software.


File Transfer Protocol

In 1971, the ARPANET resembled a technological landscape where every simple operation required mastering cryptic commands. The idea of exchanging files between remote machines was a technical challenge in itself. This is when the File Transfer Protocol, better known by the acronym FTP, was born with the publication of RFC 114, written by Abhay Bhushan of MIT for the ARPANET's Network Working Group. This document describes the first version of a protocol that would span decades.

Initially, RFC 114 from April 1971 proposed a simple architecture with a single full-duplex connection to carry both commands and data. This straightforward approach reflected the state of computer networks at the time. But the protocol’s designers would come to understand the limitations of this approach. Through successive revisions in the 1970s, a major innovation emerged with the complete separation of the control channel and the data channel. This architecture, formalized in RFC 765 from June 1980 and then perfected in RFC 959 from October 1985, establishes two distinct TCP connections. The first, on port 21, serves solely for command exchanges between client and server. The second, traditionally on port 20, is dedicated to the actual file transfer. This evolution, seemingly minor, conceals remarkable intelligence. If the transfer crashes, the control connection remains active and error messages continue to flow.

The protocol spread throughout American universities. Students discovered they could submit their assignments to remote servers, developers distributed their software, and system administrators synchronized their data between machines. FTP joined the pillars of the Internet alongside email and remote access.

But FTP conceals subtleties. The protocol offers two operating modes that reflect the evolution of network architectures. Active mode, implemented first, has the server initiate the data connection to the client. This approach worked perfectly in the early days of the Internet, when machines were directly connected to the network. The arrival of firewalls and network address translation changed everything. How can a server establish a connection to a client hidden behind a NAT? Passive mode reverses the logic: it’s the client that establishes both connections. This evolution perfectly illustrates how protocols adapt to emerging technical constraints.

The distinction between ASCII mode and binary mode betrays the legacy of early computer systems. UNIX terminated its lines with one character, Windows with two, and Mac with yet another. FTP’s ASCII mode automatically converts these markers according to the destination system. Binary mode, on the other hand, copies the file bit by bit without any transformation. Though seemingly archaic today, this differentiation testifies to the diversity of systems that FTP had to connect.

Anonymous authentication constitutes one of FTP’s most democratic innovations. Rather than forcing each user to create an account, the protocol accepts the username “anonymous” with an email address as a password. This functionality revolutionized the distribution of free software and public documents. At the same time, FTP supports traditional authentication by username and password for restricted access.
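The control-channel side of such an anonymous, passive-mode session can be sketched in a few dozen lines of C. The commands are the real ones from RFC 959 (USER, PASS, PASV, QUIT); the server address is a placeholder and error handling is reduced to the bare minimum:

```c
/* Minimal FTP control-channel dialogue: connect to port 21, log in as
 * "anonymous", request passive mode and decode the data-channel address.
 * Sketch only: one recv() per reply, no multi-line replies, no transfer. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static char reply[512];

static void command(int fd, const char *line) {
    if (line)
        send(fd, line, strlen(line), 0);             /* every command ends with \r\n */
    ssize_t n = recv(fd, reply, sizeof reply - 1, 0);
    reply[n > 0 ? n : 0] = '\0';
    printf("%s%s", line ? line : "", reply);         /* echo the whole dialogue */
}

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(21) };
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* placeholder server address */
    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0)
        return 1;

    command(fd, NULL);                                /* 220 service ready       */
    command(fd, "USER anonymous\r\n");                /* 331 password required   */
    command(fd, "PASS guest@example.org\r\n");        /* 230 logged in           */
    command(fd, "PASV\r\n");                          /* 227 (h1,h2,h3,h4,p1,p2) */

    int h1, h2, h3, h4, p1, p2;
    char *paren = strchr(reply, '(');
    if (paren && sscanf(paren, "(%d,%d,%d,%d,%d,%d", &h1, &h2, &h3, &h4, &p1, &p2) == 6)
        printf("data channel: %d.%d.%d.%d port %d\n", h1, h2, h3, h4, p1 * 256 + p2);

    command(fd, "QUIT\r\n");                          /* 221 goodbye             */
    close(fd);
    return 0;
}
```

Note that the username and password travel in clear text, which is exactly the weakness the next paragraph describes.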

Security gradually became a critical issue. FTP, in its original version, transmits everything in clear text: credentials, commands, and data. An eavesdropper placed on the network could capture passwords or the contents of exchanged files. This weakness stimulated the development of secure extensions. FTPS encapsulates FTP in SSL or TLS, either implicitly on port 990 or explicitly by negotiating encryption on a standard connection. SFTP, despite its name, is actually a distinct protocol based on SSH that offers a modern and secure alternative.

The integration of FTP into operating systems reveals its status as a fundamental protocol. UNIX natively integrates command-line clients and servers. Windows incorporates FTP support into its file explorer, democratizing access to the protocol. Software like FileZilla enriches the user experience with intuitive graphical interfaces while scrupulously respecting standards.

Resume capability after interruption transforms the user experience: it is no longer necessary to restart the download of a multi-gigabyte file from scratch after a network failure. IPv6 support and extensions for large files modernize the protocol. Each evolution preserves backward compatibility, a cardinal principle of Internet protocol design.

FTP’s security flaws raised awareness in the technical community about the importance of encryption. The complications of active mode with firewalls fueled reflection on network protocol architecture. Managing two simultaneous connections enriched understanding of the challenges in securing communications.

The emergence of cloud storage and modern web protocols gradually eroded FTP’s market share. HTTPS became the standard for many consumer uses thanks to its simplicity and integrated security. Yet FTP maintains its relevance in certain professional contexts, particularly for web server administration and website maintenance.

This longevity is explained by certain qualities, including the protocol’s simplicity, its proven robustness, and its ability to evolve without breaking compatibility. The separation between control channel and data channel, a 1970s innovation, proved invaluable for error management and transfer resumption. Standardized numeric response codes facilitate automation and programmatic management. These evolutions prove that a fifty-year-old protocol can adapt to modern requirements. FTP perfectly illustrates that in computing, the value of a technology is not measured solely by its novelty, but by its ability to evolve and the strength of its foundations.


Intel 4004

On November 15, 1971, a small advertisement appeared in Electronic News. Intel Corporation presented the 4004, which it described as a “microprogrammable computer on a chip.” Behind this wording lay an invention that would change the world: the first commercial microprocessor in history.

This innovation was born from an unlikely encounter between an American startup specializing in memory and a Japanese calculator company. In 1969, Busicom was looking for a partner to develop integrated circuits for its future calculators. Intel, founded the previous year by Bob Noyce and Gordon Moore, accepted this contract to balance its finances while waiting for its main business to take off.

Busicom’s initial specifications called for twelve different integrated circuits, each dedicated to a specific function. Three Japanese engineers, including Masatoshi Shima, arrived at Intel in June 1969 to supervise the project. Ted Hoff, the company’s twelfth employee and head of applications, had to liaise with this team from Asia. He examined the specifications and frowned. The proposed system required too many connections between chips, which would make the final product prohibitively expensive. He proposed abandoning the idea of specialized circuits to create a universal programmable processor, accompanied by a few memory chips. The architecture would be simpler, cheaper, and above all infinitely more flexible.

Stanley Mazor joined the venture in September 1969. His experience at Fairchild in computer design brought the necessary technical credibility to the project. Together, Hoff and Mazor refined their vision and managed to convince Busicom’s skeptics. They developed a 20-byte interpreter to execute macro-instructions, thus demonstrating the viability of their approach.

In October 1969, at an official meeting, Busicom gave Intel’s new architecture the green light. Shima extended his stay until December to write the calculator’s basic programs. The signed contract granted the Japanese company exclusivity over this system. All that remained was to transform this great idea into physical reality. However, neither Hoff nor Mazor mastered integrated circuit design. Intel recruited Federico Faggin in April 1970. This Italian had developed the silicon-gate MOS technology at Fairchild, essential for achieving the desired performance and integration density.

The team finalized the MCS-4 system architecture. Four circuits made up the whole: the 4004 processor, the 4001 ROM memory, the 4002 RAM, and the 4003 input-output register. The heart of the system, the 4004, contained approximately 2,300 transistors on a 3.0 × 4.0 mm chip. It executed on the order of 60,000 instructions per second and addressed up to 4 KB of ROM and 640 bytes of RAM.

Manufacturing of the first prototypes began in October 1970. The peripheral chips worked fairly quickly. The 4004 proved more stubborn and required several revisions before reaching maturity in March 1971. A month later, Busicom validated the entire system in its calculator. Intel pulled off a masterstroke in negotiations. In May 1971, the company recovered the marketing rights for the MCS-4 for all applications other than desktop calculators, in exchange for a price reduction for Busicom. Ed Gelbach, the new vice president of marketing, orchestrated the official launch of the 4004 in November 1971.

The success exceeded expectations. Intel followed up with the 8008, the first 8-bit microprocessor, announced in April 1972. These two processors opened two distinct paths that still structure the semiconductor industry today: on one side, integrated microcontrollers for embedded systems, heirs to the 4004; on the other, processors for programmable computers, descendants of the 8008.

The 4004’s legacy can be measured in staggering figures. In 1995, twenty-four years after its birth, 4-bit microcontrollers represented half of the units produced worldwide. The 8008 architecture evolved into the 8080, then into the x86 processors that equip the majority of today’s personal computers.

In 2024, nearly 70% of semiconductors manufactured globally are microprocessors, microcontrollers, or associated components. This market is worth over 100 billion dollars. From automobiles to phones, household appliances to connected devices, these chips descended from the 4004’s DNA populate our daily lives.

This technology has changed our relationship with machines and transformed the way we live. It perfectly illustrates how an elegant technical solution to a specific industrial problem sometimes generates unpredictable societal upheavals. The 4004 was initially just a component for Japanese calculators. It became the foundation of our digital civilization.


Intel 8008

In April 1972, Intel launched the Intel 8008, the first 8-bit microprocessor in history. This innovation did not come out of nowhere, however. Three years earlier, Computer Terminal Corporation (CTC) was looking for a solution to modernize its Datapoint 2200 terminal. The company contacted Intel, a company founded just one year earlier by Gordon Moore and Robert Noyce, which was limited to memory manufacturing.

CTC had a concrete problem: its terminal used MSI (Medium Scale Integration) components scattered across multiple boards. The idea of concentrating all this logic on a single chip appealed to the company, which named this project with the codename 1201. Stan Mazor and Ted Hoff took charge of the functional specifications before handing over to Federico Faggin and Hal Feeney for the technical implementation.

The architecture faithfully replicated that of the existing CTC processor. An 8-bit accumulator worked with six 8-bit general-purpose registers (B, C, D, E, H, L). The processor handled 14-bit addressing, providing access to 16 KB of memory. Its instruction set remained basic but covered common arithmetic and logical operations, with rudimentary interrupt handling. Technically, approximately 3,500 PMOS transistors etched at 10 microns were packed into an 18-pin package. The clock frequency reached 200 kHz, enabling execution of 60,000 instructions per second.

This 18-pin constraint nearly compromised the project. Intel wanted to control its production costs and refused to increase the number of connections. The design team found itself at a technical impasse, wondering how to pass all the necessary information with so few available connections. The solution came from multiplexing the address and data buses, a technique that complicated usage but made manufacturing economically viable.

The first prototypes came off the production lines in late 1971. Unfortunately, they revealed operational defects. The engineers had to correct the etching masks and restart production. March 1972 finally saw the birth of the first fully functional chips, but CTC had changed its mind in the meantime. The company now considered the 8008 too slow and preferred to develop its own solution in TTL logic.

Intel found itself with an orphaned product on its hands but decided to commercialize it anyway. The gamble proved successful. The 8008 found its place in computer terminals, industrial controllers, and scientific instruments. Its launch price—$120 for the standard version, $180 for the fast variant—remained affordable for many designers.

The creation of this processor disrupted the habits of draftsmen who drew the masks by hand on enormous sheets exceeding 1.5 meters in width, at a scale 500 times larger than the actual circuit size. Logic simulations were laboriously performed on the few available mainframe computers in time-sharing mode. The team invented techniques such as bootstrap loads to improve the circuit’s electrical performance.

Intel did not leave its customers to fend for themselves with this unprecedented technology. The company published detailed programming manuals, complete reference documentation, and offered evaluation boards like the SIM-8. These resources enabled engineers to master the microprocessor’s specificities.

Kodak integrated the 8008 into its photocopiers, Toledo Scale into its industrial scales. McDonald’s used it for its first point-of-sale terminals. Some applications continued to use this processor more than ten years after its release. The lack of binary compatibility with subsequent generations like the 8080 explains this loyalty. Rather than reprogramming everything, many preferred to keep their existing developments.

This small component demonstrated that a single chip could house a complete calculator, mass-produced at reasonable cost. Electronic designers began to see things differently. The 8008’s architecture, with its general-purpose registers and bus multiplexing, had a lasting influence on subsequent generations of microprocessors.

Sales remained modest: a few hundred thousand units over its entire commercial career. These figures paled compared to those of the 4004 or the future 8080. Intel’s management, moreover, hesitated to push harder; it feared becoming a competitor to its own computer manufacturer customers by entering this market too boldly. Texas Instruments attempted to create a rival to the 8008 without succeeding. Ironically, this company filed in 1971 the first patent covering the microprocessor concept, while Intel had only protected certain specific technical aspects of its implementation. The ensuing legal battle illustrated the radical novelty of these technologies.

The lessons from the 8008 directly fed into the design of the 8080. This time, the engineers adopted a 40-pin package that finally gave them the necessary flexibility. They kept the general architecture but improved it considerably in every aspect.

The entire x86 family that dominates the personal computer market retains traces of the 8008. Architectural choices such as byte order (little-endian) or register organization survived fifty years and are found in the most modern processors. As the first 8-bit microprocessor ever commercialized, the 8008 established technical and commercial directions that shaped the semiconductor industry for subsequent decades. Its creation required innovations in methodology and tooling. Despite its technical limitations, it revealed the tremendous potential of microprocessors.


C

At Bell Labs in the early 1970s, the mood was far from optimistic. The company had just walked away from Multics, that monumental project undertaken with MIT and General Electric that had consumed considerable budgets without delivering on its promises. Disappointed, Ken Thompson decided to take matters into his own hands to create his own computing environment, according to his personal vision.

He salvaged an old DEC PDP-7, a machine already outdated with its meager 8K words of memory. On it, he built the first versions of UNIX, initially in assembly language. But programming directly in assembly was a daily nightmare. Thompson had tested Fortran, without success. He created B, a stripped-down version of BCPL that fit within the PDP-7’s memory constraints.

Dennis Ritchie joined the team and found that B had its limitations. Between 1969 and 1973, he undertook to evolve it into something more ambitious. 1972 was the decisive year. The move to the PDP-11 revealed B’s weaknesses: its character handling was problematic, and its pointers generated absurd conversions between word indices and addresses. B completely ignored floating-point numbers, even though the PDP-11 would soon be able to handle them.

Ritchie first introduced types with “New B”, adding int and char with their arrays and pointers. But it was with C that the idea truly took shape. The type system became sophisticated, allowing the composition of complex structures from basic types. Ritchie had a brilliant insight for declaration syntax: make it mirror usage. If you write int *p, it’s because *p gives you an int.
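A minimal sketch of this “declaration mirrors use” rule, with variable names invented for the example:

    #include <stdio.h>

    int main(void)
    {
        int n = 42;            /* n is an int                                    */
        int *p = &n;           /* *p yields an int, hence the declaration int *p */
        int a[3] = {1, 2, 3};  /* a[i] yields an int, hence int a[3]             */
        /* the same rule scales up: int (*f)(void) declares f such that (*f)() is an int */

        printf("%d %d %d\n", *p, a[0], a[2]);   /* prints: 42 1 3 */
        return 0;
    }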

Array handling sparked debates. In C, an array name automatically converts to a pointer to its first element in expressions. This approach elegantly solved the string problem while preserving existing B code. The preprocessor arrived in 1972-1973, modest at first with just file inclusion and a few macros.
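A short sketch of that conversion: the hypothetical my_strlen below receives a plain pointer, and at the call site the array name word silently becomes the address of its first element.

    #include <stdio.h>

    /* The parameter is declared as a pointer: an array name "decays"
       to the address of its first element when used in an expression. */
    size_t my_strlen(const char *s)
    {
        const char *start = s;
        while (*s)
            s++;
        return (size_t)(s - start);
    }

    int main(void)
    {
        char word[] = "prolog";           /* an array of 7 chars, '\0' included   */
        printf("%zu\n", my_strlen(word)); /* word converts to &word[0]; prints 6  */
        return 0;
    }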

By 1973, C was mature. Ritchie and Thompson tackled a complete rewrite of the UNIX kernel. It was a bold gamble: no one had yet proven that a high-level language could produce code fast enough for an operating system. The gamble paid off. Between 1973 and 1980, C continued to grow: unsigned and long types, union, enumerations, structures becoming first-class citizens.

Brian Kernighan and Dennis Ritchie published The C Programming Language in 1978. This small white book, which everyone called “K&R”, became the language’s bible. Its remarkable clarity contributed greatly to popularizing C. But the language lacked an official standard, and each compiler brought its own small variations.

Standardization became necessary in the early 1980s. C was overflowing from UNIX and colonizing other systems. The ANSI X3J11 committee formed in 1983 at the initiative of M. D. McIlroy. By the end of 1989, the ANSI C standard was ready, adopted as international standard ISO/IEC 9899:1990. The committee remained cautious, introducing only one major change: integrating argument types into function signatures. This idea, borrowed from C++, strengthened type checking without breaking existing code.

C’s expansion was meteoric. In the 1980s, compilers appeared on virtually every architecture and system. Commercial software developers for microcomputers adopted it massively. This success stemmed from the language’s balance: simple enough to master, close enough to the machine to produce efficient code, abstract enough for portability.

C spawned a prolific lineage. Bjarne Stroustrup created C++, and others gave birth to Objective-C and Concurrent C. Subsequently, the modern languages that borrowed its syntax and concepts are countless. Pointers, array handling, and type declaration syntax remain lasting contributions, even if they spark debate.

The language has its flaws. Modularizing large systems remains problematic with its two-level naming structure. Manual dynamic memory management regularly traps programmers. These weaknesses have not, however, prevented the development of complete operating systems in C.

The history of C shows how a tool born from immediate practical necessities becomes, through judicious design choices, a standard that spans eras. Its longevity in a constantly shifting technological landscape testifies to the soundness of its creators’ insights. More than fifty years after its birth, C still ranks among the most widely used languages in systems and embedded programming. Go, co-created by Ken Thompson, is now trying to win ground from it.


Prolog

In 1970, Alain Colmerauer was not trying to invent a new programming language. This assistant professor of computer science at the University of Montreal was leading the TAUM project (Traduction Automatique de l’Université de Montréal) and simply wanted to advance his research on computational language processing. Two 25-year-old instructors from the newly established Faculty of Science at Luminy, Robert Pasero and Philippe Roussel, joined him for two months. They discovered Q-systems, a language that Colmerauer had developed for his translation project.

Jean Trudel, a Canadian doctoral student under Colmerauer’s supervision, was working on automated theorem proving. He built upon Alan Robinson’s seminal paper on the resolution principle, published in 1965. He had the advantage of having taken a logic course with Martin Davis in New York and already mastered unification using a modern approach where all computations reduced to pointer manipulation.

In early 1971, the team reconvened in Marseille. Alain Colmerauer obtained a position as associate professor of computer science, and Jean Trudel accompanied him thanks to a two-year scholarship from Hydro-Québec. Their goal was to perform deductions from French texts. The work was divided such that Jean Trudel and Philippe Roussel tackled deductions, while Robert Pasero and Alain Colmerauer focused on natural language. Their working environment was luxurious by French standards of the time. Their department’s IBM 360-44 offered nearly 900 KB of internal memory, without virtual memory. Jean Trudel developed an interactive monitor, and they monopolized the machine at night.

In June 1971, the team invited Robert Kowalski, one of the inventors of SL-resolution. This meeting changed everything. For the first time, they engaged with a specialist capable of explaining the resolution principle, its variants, and its refinements. Kowalski discovered researchers passionate about his work and determined to apply it to natural language processing.

In February 1972, the Institut de Recherche d’Informatique et d’Automatique (IRIA) granted them 122,000 FF for 18 months. This funding allowed them to invest in a teletype terminal, a dedicated 300-baud connection to the IBM 360-67 in Grenoble, and the recruitment of Henry Kanoui.

The fall of 1972 saw the birth of the first Prolog system. Philippe Roussel developed it in Algol-W while Alain Colmerauer and Robert Pasero created in parallel the French-language human-machine communication system. The interaction between Philippe, who coded Prolog, and Alain and Robert, who programmed in a language under development, became permanent. It was Philippe’s wife who suggested the definitive name: Prolog, an abbreviation of “PROgrammation en LOGique” (Programming in Logic).

The first major program written in Prolog was the human-machine communication system. Its 610 clauses were distributed as follows: Alain wrote 334 for analysis, Robert Pasero 162 for the deductive part, and Henry Kanoui developed French morphology in 104 clauses. This morphology handled the relationships between singular and plural for all common nouns and all verbs, both regular and irregular, in the third person singular present tense.

Official recognition came in 1973. The CNRS established an “associated research team” named “Human-machine dialogue in natural language” and allocated 39,000 FF for the first year. Users of the preliminary version of Prolog had programmed enough for their experience to inform a second version, oriented toward a true programming language rather than merely an automated deductive system.

Between June and December 1973, three DEA students—Gérard Battani, Henry Méloni, and René Bazzoli—wrote the interpreter in FORTRAN and its supervisor in Prolog. This version introduced all the basic features of current Prologs. It was also during this period that the occur check disappeared, deemed too costly.

Dissemination truly began in 1974-1975. David Warren stayed in Marseille from January to March 1974 and used Prolog to write Warplan, his plan generation system. The interactive version of Prolog running in Grenoble via teletype generated numerous requests. Gérard Battani and Henry Méloni were soon flooded with requests for copies. They shipped Prolog to Budapest, Warsaw, Toronto, Waterloo, and traveled to Edinburgh to help David Warren install it on a PDP 10. Hélène Le Gloan, a former student, installed it at the University of Montreal. Michel Van Caneghem did the same at IRIA in Paris before joining the team. Maurice Bruynooghe took Prolog to Louvain after three months in Marseille.

In 1975, the entire team participated in porting to a 16-bit minicomputer, Télémécanique’s T1600. The machine had only 64 KB and required a specific virtual memory management system. Pierre Basso tackled this task and won the competition for the shortest instruction sequence that achieves 32-bit addressing while testing for page faults. Each lab member then translated two pages of FORTRAN into machine language. The reassembled fragments worked. After five years, they finally had their own machine, and their Prolog ran—slowly, but it ran.

The story of Prolog’s birth ends in late 1975, but its influence endures. Alan Robinson’s January 1965 article, “A machine-oriented logic based on the resolution principle,” contained the seeds of the Prolog language. While this article gave rise to a significant body of work on automated theorem proving, the specific contribution of the Marseille team was transforming this theorem prover into a programming language. To achieve this, they did not hesitate to introduce purely computational mechanisms and restrictions that constituted heresies for the existing theoretical model. These modifications, often criticized, ensured Prolog’s viability and thus its success.

Robert Kowalski isolated the concept of “Horn clause,” legitimizing their main heresy: a linear demonstration strategy with backtracking and unifications only at clause heads. The Marseille team had ideal conditions for this creation. Colmerauer belonged to the first generation of French computer science PhDs, and his specialty was language theory. He had gained valuable experience creating Q-systems within the University of Montreal’s machine translation project. The meeting with Philippe Roussel and the particular circumstances in Marseille did the rest. The team benefited from freedom of action in a newly created scientific center and, without external pressures, could devote itself fully to its project.

Smalltalk

In the early 1970s, at the heart of Xerox PARC in Palo Alto, Alan Kay began dreaming of a different kind of computer, amid the excitement of the first interactive machines, emerging graphical displays, and the mouse that Douglas Engelbart had recently invented. Kay envisioned a “personal medium” capable of amplifying human intelligence. From this radical vision would emerge Smalltalk, a language that would revolutionize our approach to programming.

The idea grew from three major influences. Simula had introduced objects, LISP mastered symbolic manipulation, and Sutherland’s Sketchpad demonstrated what graphical interaction could achieve. Alan Kay mixed these ingredients with his characteristic boldness and set out to define “the most powerful programming language in the world” in a single page of code. He succeeded with Smalltalk-71, a first version created in just a few days that demonstrated an entire system could be based on message passing between objects. Gone were the languages where objects coexisted with primitive types in an awkward hierarchy. Here, everything is an object: numbers as well as control structures, windows as well as characters. This conceptual radicalism was no intellectual whim. It forged a system of rare coherence, where every element obeyed the same fundamental rules.

The concept matured through successive versions. Between 1972 and 1976, the PARC team refined this world of communicating objects. Dan Ingalls made a decisive contribution with the invention of overlapping windows. This graphical interface would later become an industry standard, particularly when Apple drew inspiration from it for the Macintosh.

With Smalltalk-76, Ingalls consolidated the gains from previous experiments by proposing a complete system: incremental compilation to compact bytecode, automatic memory management, integrated graphical interface. The development environment allowed code modification while the program was running, something unprecedented. This flexibility transformed programming into continuous exploration.

But it was with Smalltalk-80, released publicly in 1980, that the system reached maturity. Ten years of research culminated in this version that standardized the language and its environment. The fundamental concepts crystallized: everything is an object, objects communicate through messages, classes are themselves objects, inheritance enables code reuse. These principles would lastingly influence the subsequent evolution of computing.

Smalltalk’s strength lies in its conceptual purity. Where other languages like C++ or Java awkwardly graft object-oriented features onto a procedural foundation, Smalltalk applies its paradigm throughout. This theoretical coherence is accompanied by remarkable practical innovations: the integrated development environment, the garbage collector that frees the programmer from memory management, the modern interface with its windows, menus, and mouse.

Smalltalk’s expansion coincided with the emergence of personal computers powerful enough to host it. Steve Jobs discovered the system during his visit to PARC in 1979. The impact was such that it directly shaped the development of the Macintosh interface. The influence then spread throughout the industry: Objective-C, Ruby, Python, Java each borrowed elements from Smalltalk’s object model.

Alan Kay had initially designed Smalltalk for teaching programming to children. Objects intuitively model real-world entities, the graphical interface facilitates experimentation. This educational dimension survives in projects like Squeak and Etoys, which perpetuate the idea of computing accessible to all.

Smalltalk’s development illustrates the importance of institutional context in innovation. Xerox PARC offered its researchers exceptional freedom, without immediate commercial pressure. This autonomy made the emergence of an innovative system possible. Paradoxically, Xerox never managed to capitalize commercially on this major innovation.

The 1990s and 2000s saw numerous implementations flourish: VisualWorks for professionals, Squeak for education, GNU Smalltalk for the free software world. This diversity testifies to the persistent vitality of the model and its adaptability.

Smalltalk is no longer a dominant language today, but the principles it introduced—objects, messages, inheritance, interactive environment—have become industry standards. The evolution toward touch interfaces and block-based programming extends its vision of intuitive and accessible computing. This story reveals the importance of long-term vision in technological progress. Smalltalk combines an elegant theory of object-oriented programming with practical innovations in user interface. Its journey shows that ideas need time to find their place, and that the true impact of an innovation is often measured by subsequent generations.


IBM VM/370

In 1972, IBM launched VM/370, standing for Virtual Machine Facility/370. Behind this technical name lay the ability to run multiple operating systems in parallel on a single physical machine. The idea was not entirely new, as it extended work done on CP-67/CMS for the System/360 Model 67, but its systematic implementation scaled to enterprise computing.

It took boldness to envision this concept at the time. VM/370 created “virtual machines,” complete but simulated computing environments, each believing it had its own computer. This illusion rested on four components: the CP control program that orchestrated physical resources, the CMS conversational system for interacting with users, the RSCS subsystem that managed file exchanges, and IPCS which handled incidents. The architecture masked complexity in favor of efficiency, with each virtual machine operating in its bubble, completely unaware of its neighbors. This isolation enabled different IBM systems—DOS, OS/VS1, OS/VS2—to coexist without interfering with one another. The technical feat lay in this ability to partition a single computer into multiple sealed environments.

On the memory side, VM/370 introduced dynamic address translation that converted virtual addresses into real physical addresses. The system divided memory into 64 KB segments and 4 KB pages, creating multiple levels of abstraction. This intellectual gymnastics allowed running virtual machines whose combined memory exceeded available physical memory. The innovation extended to peripherals with “spooling,” this technique that managed input-output operations for all virtual machines. A disk or printer could serve multiple environments without creating conflicts. VM/370 juggled resources like a magician with his props.
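A rough sketch of this two-level translation, assuming 24-bit virtual addresses split into the 64 KB segments and 4 KB pages described above. The page table here is a simple in-memory array invented for the illustration; the real hardware walked segment and page tables held in storage.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative split of a 24-bit virtual address: 8 bits of segment index,
       4 bits of page index within the 64 KB segment, 12 bits of byte offset. */
    #define PAGE_SHIFT    12
    #define SEG_SHIFT     16
    #define PAGE_MASK     0xFFFu
    #define PAGES_PER_SEG 16

    /* A hypothetical page table: one real frame number per (segment, page). */
    static uint32_t frame_table[256][PAGES_PER_SEG];

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t seg    = vaddr >> SEG_SHIFT;            /* which 64 KB segment     */
        uint32_t page   = (vaddr >> PAGE_SHIFT) & 0xFu;  /* which 4 KB page in it   */
        uint32_t offset = vaddr & PAGE_MASK;             /* byte within the page    */
        uint32_t frame  = frame_table[seg][page];        /* real frame number       */
        return (frame << PAGE_SHIFT) | offset;           /* real (physical) address */
    }

    int main(void)
    {
        frame_table[1][2] = 7;                 /* pretend segment 1, page 2 lives in frame 7 */
        printf("%06X\n", translate(0x012345)); /* prints 007345 */
        return 0;
    }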

Each user received their identifier and password, and resource access followed strict security rules. Isolation between virtual machines constituted a natural barrier sufficient to ensure that a crash in one would not affect the others. This robustness immediately appealed to developers who could test their programs without fear of bringing down the production system.

The impact on the industry exceeded IBM’s expectations. Companies discovered they could exploit their expensive computers much more intensively. Instead of leaving resources idle, they shared them among various applications. Gone were the endless waits for computer scientists to test their programs—everyone had their own virtual environment.

VM/370 gradually adapted to new IBM hardware, from the modest Model 135 to the powerful Model 168. The system integrated support for 3330, 3340, and 3350 disks, gained network functionalities, and improved in performance. This constant evolution testified to the vitality of the concept.

The fundamental principles of VM/370—isolation, resource sharing, dynamic memory management—still shape modern virtualization technologies. VMware, Xen, KVM, and other contemporary solutions draw from this conceptual heritage forged in the 1970s. More than a technical feat, VM/370 changed the way enterprise computing was conceived. The system proved it was possible to partition computing resources without sacrificing performance or security. Future architectures would therefore be more flexible and economical.

The system’s influence on computer science education was considerable. Schools and universities adopted VM/370 to teach operating systems. Students could experiment on their own virtual machines without monopolizing an entire computer. This democratization of access to computing resources trained a generation of computer scientists familiar with virtualization. VM/370 illustrates how a technical innovation can transform an entire sector. By demonstrating the viability of virtualization, IBM was redefining computer usage. This transformation continues to resonate in modern data centers, where virtualization now constitutes the norm rather than the exception.


Ethernet

In 1972, at Xerox’s laboratories in Palo Alto, Bob Metcalfe and David Boggs worked on connecting computers to laser printers within a building. Their solution, which they named Ethernet, went far beyond this initial requirement by achieving 2.94 megabits per second.

The basic idea was brilliantly simple. All computers share the same cable, like people talking around a table. Before speaking, each listens to ensure nobody else is talking. If two start at the same time, they stop, wait a random moment, then try again. This method, known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection), works remarkably well despite its apparent chaos.
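A compilable sketch of that listen-then-talk loop. The functions channel_busy and transmit_with_collision are stubs invented for the example to stand in for the hardware; the doubling wait window is the binary exponential backoff discussed further on.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stubs standing in for carrier sense and collision detection on the cable. */
    static int channel_busy(void)            { return rand() % 4 == 0; }
    static int transmit_with_collision(void) { return rand() % 3 == 0; }

    #define SLOT_TIME_US 51     /* one contention slot, roughly 512 bit times at 10 Mbit/s */
    #define MAX_ATTEMPTS 16

    /* CSMA/CD in outline: listen before talking, and after a collision wait a
       random number of slots drawn from a window that doubles each time. */
    static int send_frame(void)
    {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            while (channel_busy())
                ;                               /* defer: someone else is talking */

            if (!transmit_with_collision())
                return 0;                       /* frame went through             */

            int window = 1 << (attempt < 10 ? attempt + 1 : 10);
            int slots  = rand() % window;       /* binary exponential backoff     */
            printf("collision, waiting %d slots (%d us)\n", slots, slots * SLOT_TIME_US);
        }
        return -1;                              /* give up after too many collisions */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        return send_frame() == 0 ? 0 : 1;
    }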

The first Ethernet used a 75-ohm coaxial cable. Machines connected through special boxes that detected collisions and electrically isolated each station. It was possible to connect or disconnect a machine without paralyzing the entire network, which represented a considerable advantage over centralized systems. Four years later, Metcalfe and Boggs described their invention in a paper in the “Communications of the ACM”. Ethernet left the laboratory to reach the industrial world. The first installations demonstrated the ability to connect hundreds of machines over more than a kilometer.

Xerox then understood the commercial potential of its creation. In 1980, the company partnered with Intel and Digital Equipment Corporation to transform Ethernet into an open standard. This strategic decision proved successful: freeing the technology from its creator accelerated its adoption. The new specification pushed throughput to 10 megabits per second and adopted 50-ohm coaxial cable. Machine addresses expanded from 8 to 48 bits, ensuring their worldwide uniqueness.

The IEEE took over with its 802.3 committee in the early 1980s. This standardization inscribed Ethernet into the OSI model and made it completely vendor-independent. The CSMA/CD protocol became the MAC sublayer of the data link layer, while physical aspects fell under 802.3 standards.

In 1990, the 802.3i standard abandoned the coaxial bus in favor of twisted pairs and star topology. Gone were the cables snaking through false ceilings; each machine now had its own link to a central hub, making installation and maintenance a breeze. Fast Ethernet arrived in 1995 with its 100 megabits per second, followed three years later by Gigabit Ethernet.

Metcalfe and Boggs made technical choices that endured. Ethernet handles neither priorities nor quality of service, leaving these questions to higher-level protocols. This apparent limitation proved to be a strength because the simplicity of the basic mechanism allowed for all subsequent adaptations. The system’s core remained remarkably stable. The controller monitors the channel and applies CSMA/CD rules. The transceiver interfaces with the physical medium. Frames carry source and destination addresses, a type field to identify the encapsulated protocol, data, and a CRC checksum. This streamlined architecture crossed decades without major modifications.
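This frame layout can be pictured as a C structure. The field widths follow the description above; byte order on the wire and structure padding, which a real driver would handle explicitly, are deliberately ignored here.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    struct ethernet_frame {
        uint8_t  destination[6];   /* 48-bit destination address             */
        uint8_t  source[6];        /* 48-bit source address                  */
        uint16_t type;             /* EtherType of the encapsulated protocol */
        uint8_t  data[1500];       /* payload, classic maximum of 1500 bytes */
        uint32_t crc;              /* frame check sequence                   */
    };

    int main(void)
    {
        struct ethernet_frame f;
        memset(&f, 0, sizeof f);
        memset(f.destination, 0xFF, sizeof f.destination);  /* broadcast address */
        f.type = 0x0800;                                     /* IPv4 payload      */
        printf("type field: 0x%04X\n", f.type);
        return 0;
    }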

During the 1990s, switches radically transformed Ethernet’s nature. Instead of a shared channel where everyone listens, the network evolved into a set of point-to-point links. Different conversations could occur simultaneously. Full-duplex mode gradually replaced CSMA/CD, which became less useful with switching.

Why did Ethernet triumph where others failed? First through its low cost, a direct consequence of its simplicity. Then through its ability to evolve: throughput increases, media change, but fundamental principles remain. Finally through its openness, with no manufacturer able to monopolize the network.

The designers validated their choices through experimentation. The experimental Ethernet provided precise measurements showing medium utilization approaching 98% under certain conditions. The binary exponential backoff (BEB) mechanism, which spaces out retransmissions after a collision, keeps the network stable even under heavy load. These concrete data guided the development of the industrial standard.

Today, Ethernet carries Voice over IP, connects storage arrays, and powers data centers. Throughput reaches 400 gigabits per second over fiber optics. The technology born to connect a few computers to printers now structures the global internet. The most enduring solutions are not always the most sophisticated.


TCP/IP

In the late 1960s, in the particular context of the Cold War, the United States sought to establish a military communication network capable of withstanding a Soviet nuclear attack. The U.S. Department of Defense, through ARPA (the Advanced Research Projects Agency, later renamed DARPA), launched the ARPANET project in 1969, which aimed to interconnect the country’s main computing centers. Military constraints shaped the network’s design from the outset. There could be no Achilles’ heel, no central point whose destruction would compromise the entire system. Messages had to be able to take alternative routes dynamically if certain portions failed. More complex still, the network had to connect computers running different operating systems.

ARPANET started with the NCP protocol, but its shortcomings became apparent in short order. In 1974, two researchers changed the game: Vinton Cerf and Robert Kahn published “A Protocol for Packet Network Interconnection”, a foundational article that introduced TCP. This publication outlined what would become TCP/IP, standardizing communication between machines. The genius of TCP/IP lies in its layered structure. IP handles routing data packets from one point to another on the network, while TCP ensures transmission control and reassembles complete messages upon arrival. This division of labor makes the system more robust and easier to evolve, an engineering principle that would prove its worth.

In 1983, TCP/IP officially replaced NCP on ARPANET. Though the claim is debated, the Internet, in the literal sense of a network of interconnected networks, was truly born at that moment. TCP/IP’s success rests on three decisive advantages. It works on any hardware, computer, or physical network. It maintains communication when part of the network fails through dynamic routing. It breaks messages into packets that can take different paths before being reassembled. First-generation IP addressing uses 32-bit addresses that uniquely identify each machine. The system classifies these addresses into categories A, B, and C to simplify management. But the explosive growth of the network quickly revealed the limitations of this approach. Computer scientists then developed workarounds such as subnetting, while awaiting IPv6 and its 128-bit addresses.

The TCP/IP architecture is organized in four layers: Network Access for physical connection, Internet (IP) for routing, Transport (TCP/UDP) for transmission, and Application for user programs. This hierarchy facilitates adding new functionality without disrupting the existing system, a tremendous advantage for the system’s evolution. Several auxiliary protocols complement TCP/IP. ARP translates logical IP addresses into physical network card addresses. ICMP carries control and error messages. These mechanisms contribute to the overall system’s reliability and efficiency.

Berkeley UNIX contributed to TCP/IP’s dissemination during the 1980s. Native integration of the protocol into BSD facilitated its adoption by universities and research centers that extensively used this operating system. Technical documentation was public and accessible to all in the form of RFCs, documents open to community contributions. This decentralized approach stimulated innovation and continuous improvement, an approach that would durably influence technological development.
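The sockets interface that Berkeley UNIX popularized still exposes this layering: an application simply asks for a TCP stream and lets IP handle the routing underneath. A minimal client sketch, in the interface’s modern form (getaddrinfo came much later; the host name and port are placeholders):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        /* Resolve a name, open a TCP (stream) socket, connect: routing,
           retransmission and reassembly all happen below this API. */
        struct addrinfo hints = {0}, *res;
        hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;    /* TCP          */

        if (getaddrinfo("example.org", "80", &hints, &res) != 0)   /* placeholder host/port */
            return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return 1;

        const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
        send(fd, req, sizeof req - 1, 0);

        char buf[256];
        ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);            /* first bytes of the server's reply */
        }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }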

The explosion of the World Wide Web in the 1990s spectacularly validated the choice of TCP/IP. The protocol absorbed without flinching an exponential growth in the number of users and the arrival of resource-intensive applications. Its flexibility allowed the addition of advanced features such as multicast for broadcasting to multiple recipients, and quality of service to prioritize certain flows. The history of TCP/IP continues to be written. The anticipated exhaustion of IPv4 addresses and new security and performance needs drove the development of IPv6 in the 1990s. This version offers a virtually infinite address space (3.4 × 10^38 addresses) and substantial improvements. Migration to IPv6 is progressing very slowly, complicated by the need to maintain compatibility with existing systems.

From military project to universal standard, TCP/IP has spread to all contemporary domains. Computers of course, but also smartphones, connected devices, industrial equipment—the list grows longer every day. This ubiquity stems from its remarkable ability to adapt to technological developments without breaking with its legacy. The standardization of TCP/IP made possible the emergence of a global network where billions of machines communicate, regardless of their origin or geographic location.


Telnet

In the computing world of the early 1970s, ARPANET had just been born, but its promises of universal interconnection were running up against technical reality. Each manufacturer developed its own protocols and standards, creating so many isolated technological islands. A terminal operating with the EBCDIC character set simply could not communicate with a PDP-10 computer configured for ASCII. Machines from different manufacturers needed to communicate with each other.

The Telnet protocol emerged to address this problem. Its designers understood that they needed to invent a common computer language that would allow any terminal to connect to any remote computer. This insight gave birth to the Network Virtual Terminal (NVT), a concept of remarkable elegance in its simplicity. It functions as a universal translator. Rather than forcing each machine to understand all the possible dialects of its peers, the system imposes a single standardized format. Each computer needs to master only one translation: that between its native language and this virtual lingua franca. An EBCDIC terminal translates to the NVT, which then retranslates to the ASCII expected by the destination machine. The efficiency of this approach immediately struck ARPANET engineers.

But Telnet’s history contains a delightful episode. In 1972, Bernard Cosell and David Walden, both BBN employees working on the Terminal Interface Message Processor, found themselves facing a recurring puzzle. Telnet specifications were constantly evolving, and each modification required simultaneously updating all existing instances across the network. It was a monumental task that considerably slowed development.

It was on a plane en route to UCLA that Cosell had his epiphany. Why not let the two ends of a connection negotiate between themselves the functionalities they wished to use? This idea, which he developed with Walden during the flight, revolutionized the approach to network protocols. Back on the ground, they designed a mechanism based on four terse commands: DO, DON’T, WILL, and WON’T. These four magic words transformed Telnet into a living protocol, capable of evolving without breaking what already existed. When a machine proposed a new functionality via WILL, its counterpart could accept it with DO or politely decline with DON’T. If it did not understand the proposal, a simple refusal sufficed, and communication resumed on the common ground of the NVT. This ongoing negotiation allowed the protocol to gradually enrich itself, like a language that naturally integrates new terms without losing its coherence.

Ray Tomlinson, already famous for inventing email, was then working on the Telnet implementation for the TENEX system. As soon as he discovered the mechanism imagined by Cosell and Walden, he immediately grasped its significance. David Walden formalized the idea in RFC 435, published in January 1973, laying the groundwork for what would become a lasting standard.
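The negotiation rides on the same byte stream as the data, each command being announced by an “Interpret As Command” (IAC) byte; the numeric values below are those the final standard would fix. A sketch of the refusal logic just described, with an invented helper and the ECHO option (code 1) as the only option this side understands:

    #include <stdio.h>

    /* Telnet command bytes. */
    enum { IAC = 255, WILL = 251, WONT = 252, DO = 253, DONT = 254 };

    /* Pretend we only understand one option, chosen here for the example. */
    #define OPT_ECHO 1

    /* When the peer announces "IAC WILL <option>", either accept it with DO
       or fall back to the plain NVT by answering DONT. */
    static void handle_will(unsigned char option, FILE *out)
    {
        unsigned char reply[3] = { IAC, (option == OPT_ECHO) ? DO : DONT, option };
        fwrite(reply, 1, sizeof reply, out);    /* raw bytes onto the connection */
    }

    int main(void)
    {
        handle_will(OPT_ECHO, stdout);   /* supported option: answer IAC DO 1    */
        handle_will(42, stdout);         /* unknown option: answer IAC DONT 42   */
        return 0;
    }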

Final standardization came ten years later, with RFC 854 in 1983. Telnet gained its modern form: bidirectional TCP connection on port 23, 7-bit ASCII encoding, and sophisticated data flow management. Engineers added an ingenious synchronization mechanism, designed for emergency situations where a user must regain control of a runaway remote program. The “Synch” signal uses TCP’s urgent capabilities to bypass normal flow control and guarantee immediate transmission of critical commands. This elegant architecture made Telnet far more than a simple terminal emulator. The protocol became the backbone of remote access to computer systems. In the 1980s and 1990s, it fueled the explosion of the first online services and BBSes, those electronic bulletin boards that preceded today’s forums. Its simplicity appealed to developers and users alike.

Designed in the 1970s, when computing existed in a world of relative trust, Telnet encrypted nothing. Passwords, sensitive data—everything traveled in clear text across networks. This weakness, trivial at first, became prohibitive with the democratization of the Internet and the emergence of security concerns. The arrival of graphical interfaces dealt an additional blow to the venerable protocol. Users gradually abandoned text-mode terminals for colored windows and clickable icons. SSH, which appeared in the 1990s, inherited Telnet’s strengths while adding the encryption it so sorely lacked.

Though banished by security experts, Telnet has not disappeared. In server rooms, it remains the diagnostic tool par excellence. Network administrators have long used it—perhaps still do—to test connectivity on a given port or configure recalcitrant old equipment. This persistence testifies to the robustness of its original design.

The virtual terminal concept spread to countless subsequent protocols. The option negotiation mechanism, born on that plane to UCLA, inspired engineers facing the challenge of extensibility. How do you evolve a protocol without breaking what exists? Cosell and Walden had found a disarmingly simple answer that continues to serve as a model. Faced with the growing complexity of systems, the temptation was strong to develop sophisticated solutions with abundant features. Telnet chose to favor interoperability and flexibility over sophistication.


The SQL Language

In 1969, an IBM researcher named Edgar Frank Codd published a seemingly innocuous paper: “Derivability, Redundancy, and Consistency of Relations Stored in Large Data Banks”. No one suspected that this text would revolutionize the way we store and manipulate computer data. The following year, Codd followed up with a more ambitious paper in Communications of the ACM: “A Relational Model of Data for Large Shared Data Banks”. This time, the industry took notice.

Codd’s relational model broke with the hierarchical and network approaches dominant at the time. He proposed organizing data in simple tables, linked to each other through mathematical relationships. The idea seems almost trivial today, but it revolutionized computer science thinking. Academic and industrial communities embraced these theoretical concepts.

At IBM, Donald Chamberlin and his team wasted no time. In 1974, they developed System R, the first full-fledged relational database prototype. The associated query language was named SEQUEL, for Structured English QUEry Language. This choice revealed an ambition: to create a language close to natural English, accessible to non-specialist users. Gone were cryptic commands, replaced by instructions comprehensible to the masses.

The project progressed incrementally. SEQUEL-XRM emerged between 1974 and 1975, followed by a complete rewrite in 1976-1977. This new version, dubbed SEQUEL/2, introduced essential functionalities: multi-table queries, multi-user support, advanced data management. But an unexpected obstacle arose: the British aircraft manufacturer Hawker Siddeley already owned the SEQUEL trademark. IBM had to rename its language with the acronym SQL.

The story took an unexpected turn in 1979. While IBM was perfecting its prototypes, a small California company named Relational Software commercialized the first relational database management system. In a stroke of marketing genius, this company numbered its product “Version 2”, thus avoiding the usual reluctance toward first versions. Relational Software would become Oracle Corporation, beating the IBM giant at its own game.

IBM caught up with SQL/DS in 1981, and DB2 in 1983. Meanwhile, customer tests conducted as early as 1978 had validated the approach: SQL won over users with its ease of use and power, allowing them to describe what they wanted to obtain without specifying how to obtain it. This declarative approach contrasted with traditional procedural languages and democratized database access.

Standardization naturally became necessary. In 1986, the American National Standards Institute certified SQL as an official standard. The International Organization for Standardization followed in 1987. These first standards, approximately 150 pages long, standardized the SQL dialect of IBM DB2. However, they suffered from gaps and imprecision, consequences of the divergent commercial interests of software vendors.

The SQL2 standard of 1992 corrected these flaws. This 500-page document eliminated some of the weaknesses of the previous version and standardized advanced conceptual functionalities, sometimes beyond acceptable technical capabilities. Three levels of compliance emerged: Entry, adding only minor changes over the previous standard; Intermediate, covering at least 50% of the new standard; and Full, for total compliance.

SQL99, or SQL3, marked a new evolution at the end of the 1990s. This 2000-page standard integrated object-relational models, call-level interfaces, and advanced integrity management. It replaced the old compliance levels with two categories: Core SQL99 and Enhanced SQL99. The document was now structured into five specialized parts, testifying to the growing complexity of the language.

The 21st century saw the birth of SQL:2003, SQL:2008, and SQL:2011, each version enriching the language with new capabilities. But beyond this technical chronology, SQL owes its success to deeper qualities. Its syntax inspired by natural English makes it accessible to non-specialists. Its declarative nature frees users from procedural constraints. The underlying relational model structures data in a clear and coherent manner.

The concepts of primary and foreign keys, normalization, and referential integrity guarantee the consistency of stored data. This mathematical rigor, combined with flexibility of use, explains SQL’s massive adoption by the industry. Oracle, MySQL, PostgreSQL, Microsoft SQL Server, and so many other systems perpetuate this legacy, extending the language’s capabilities while preserving compatibility.


System R

During 1974, a team of researchers at IBM’s San José laboratory sought to demonstrate that the relational model theorized by Edgar F. Codd four years earlier could work in reality, with real data and real users. This project, named System R, was not merely a technical experiment. It was about building a bridge between the mathematical elegance of the relational model and the harsh constraints of the industrial world.

The team led by W. F. King began with what they called “Phase Zero”, an exploratory period that lasted from 1974 to 1975. The first prototype, limited to a single user, already implemented the foundations of what would become SQL. This initial version relied on the XRM relational access manager developed by Raymond Lorie at IBM’s Cambridge Scientific Center. The researchers discovered that transforming an elegant theory into functional code raised unexpected questions. How to store data efficiently? How to guarantee transaction integrity? How to ensure queries execute within reasonable timeframes?

The real revolution occurred during “Phase One”, between 1976 and 1977. The team designed a two-tier architecture that clearly separated responsibilities. The RSS (Research Storage System) handled everything related to physical storage, locks, and modification logging. Above it, the RDS (Relational Data System) managed permissions and query optimization. This separation was not just engineering elegance: it enabled system maintenance and evolution.

In early 1976, Raymond Lorie proposed the idea of compiling SQL queries instead of interpreting them. Traditionally, database systems analyzed and executed queries on the fly, which limited their performance. System R’s approach transformed each query into optimized machine code. For applications that repeated the same operations thousands of times, the gain was spectacular. For one-off queries, the compilation overhead was largely offset by execution speed.

The automatic query optimizer constituted the system’s brain. It analyzed each request, consulted statistics stored in the system catalog, and chose the most efficient data access strategy. Its algorithm calculated input-output costs and processor time for different approaches. Indexes, in the form of B-trees, guaranteed fast access to large data volumes.

Hierarchical locking addressed concurrent access issues with variable granularity. A user could lock a single record for a specific modification, or an entire table for a maintenance operation. Three isolation levels allowed developers to choose between strict consistency and performance. The most stringent level guaranteed that reading a record multiple times within a transaction always yielded identical results.

Recovery after failure relied on an ingenious mechanism of “shadow pages”. During a modification, the system created a new version of the page at a different disk location, preserving the old one until transaction validation. Periodic checkpoints purged obsolete versions. While this complicated the physical data organization, this approach considerably simplified crash management.

“Phase Two”, from 1978 to 1979, took System R out of the laboratories and installed it with real users. IBM pilot sites and three selected customers tested the system under real conditions. These experiments confirmed the initial intuition: developers gained productivity through the uniformity of the SQL language. Whether writing application code, launching ad-hoc queries, or defining data structures, they used the same syntax.

Testing also revealed the system’s limitations. For simple transactions manipulating little data, using hash tables or direct pointers would have yielded better performance. The shadow page mechanism, despite its advantages for recovery, prevented efficient grouping of related data on disk.

An unexpected phenomenon emerged during load testing. The team called it a “convoy”: when a process holding a highly demanded lock was put into long wait by the scheduler, all other processes became blocked in a chain behind it. The solution required rethinking the lock release protocol to avoid these bottlenecks.

System R transformed the industry permanently. Its architectural innovations—query compilation, automatic optimization based on statistics, hierarchical locking—were adopted by practically all relational systems that followed. In 1981, IBM commercialized SQL/DS, its first relational database product, followed two years later by DB2 for mainframes. Oracle, Ingres, and other vendors developed their own systems, democratizing this technology.

The impact went far beyond the technical. System R proved that a computer system could combine ease of use with internal sophistication. Users manipulated intuitive concepts—tables, rows, columns—without worrying about the complex mechanisms that guaranteed consistency and performance. This separation between logical and physical views became a principle of business computing.

The project validated Edgar F. Codd’s vision on the importance of abstraction in data management. It also established the importance of a rigorous experimental approach in technological development. Rather than settling for elegant theoretical demonstrations, the San José team built a complete system and tested it under real conditions.

Faced with the emergence of NoSQL systems that challenge certain aspects of the relational model, System R’s architectural principles retain their relevance. Declarative query optimization, transaction management, and the separation of logical and physical levels remain benchmarks.


Intel 8080

In the technological ferment of the early 1970s, Intel was attempting to establish itself in the microprocessor market. Robert Noyce and Gordon Moore’s company had just launched the 4004 in 1971, followed by the 8008 the next year, its first 8-bit processor. However, these initial achievements, despite their innovative nature, revealed their limitations in the face of growing market expectations.

The arrival of Italian physicist Federico Faggin in Intel’s ranks had provided decisive momentum to these developments. His expertise in logic design and silicon-gate MOS technologies enabled the integration of all circuits onto a single chip, where other manufacturers struggled to master this complexity. Yet by 1972, criticism was mounting against the 8008. Plessey and Nixdorf, two European computer companies, pointed to a primitive interrupt structure, address and data multiplexing that penalized performance, and a sluggish operating speed of 0.5 MHz.

These criticisms resonated all the more strongly as they came from potential customers with real needs. The Sac State 8008 project, designed to manage medical records, illustrated the possibilities of 8-bit processing well, but also revealed the processor’s shortcomings for professional applications. Intel understood that a complete overhaul was necessary.

Development of the 8080 began in November 1972 under Masatoshi Shima’s direction, supervised by Federico Faggin. Shima brought to the project his experience gained at Ricoh in designing interfaces between peripherals and minicomputers. His working method contrasted with usual practices: he created detailed tables to validate each instruction, applying a systematic approach to logic verification that foreshadowed modern design methods.

Sixteen months elapsed between the first sketches and commercialization in January 1974. The time breakdown reveals the meticulousness of the process: one month to define the product, eight months to design the chip, and five months to manufacture masks, produce wafers, and debug the whole system. This rigor paid off: the 8080 integrated approximately 5,000 transistors in a coherent and efficient architecture.

The NMOS technology adopted for the 8080 came directly from Intel’s developments on 4K dynamic memories. This technical lineage explains the three supply voltages required: +5V, +12V, and -5V, a constraint that would disappear with subsequent generations. The 40-pin package, more than twice the 8008’s 18 pins, testified to the project’s ambition and finally freed the processor from connectivity limitations.

The internal architecture marked a clean break with the 8008. Shima abandoned the 8008’s limited internal stack in favor of a stack pointer in main memory, providing hitherto unknown flexibility. The registers were organized according to a thoughtful hierarchy: three pairs (BC, DE, HL) with differentiated capabilities, HL being the most powerful, BC the most basic. This asymmetry, sometimes criticized by programmers accustomed to uniform architectures, resulted from a deliberate choice to optimize hardware resources.

The instruction set was considerably enriched while preserving assembly-level compatibility with the 8008. A remarkable innovation, the 8080 introduced 16-bit operations: increment, decrement, addition, and subtraction. Shima favored this generic approach over adding a specialized index register, a debated choice that nonetheless offered appreciable programming flexibility.

Initial performance tests revealed the magnitude of the technological leap. Two benchmark programs – a 256-byte memory transfer and 16-digit decimal additions – demonstrated gains of 10 to 20 times compared to the 8008 at identical frequency. Running at 2 MHz, the 8080 widened the gap with its predecessor.

Intel accompanied the launch with a bold commercial strategy. The initial price of $360 per unit might have seemed prohibitive, but the company was betting on its processor’s added value. This gamble proved successful: manufacturers accepted this premium to benefit from the enhanced capabilities. The ecosystem developed around the 8080 reinforced this position: specialized circuits such as the 8224 (clock generator) and 8228 (bus controller), Intellec development systems, all contributed to simplifying processor adoption.

The success of MITS’s Altair 8800, based on the 8080, confirmed Intel’s intuitions. This personal microcomputer, sold as a kit, paved the way for a new generation of machines accessible to hobbyist enthusiasts. Personal computing found its first true technical foundation there.

A design flaw nearly tarnished this success. The main ground line, too narrow, limited use to low-power TTL circuits. Intel responded by offering the corrected 8080A version, which became the market reference. This responsiveness to technical problems illustrated the company’s newfound maturity.

The 8080’s influence extended far beyond Intel’s borders. Federico Faggin, who left to found Zilog, designed the Z80 drawing directly from his work on the 8080. Motorola responded with the 6800, a competing architecture but with similar principles. This emulation stimulated innovation throughout the microprocessor sector.

Beyond its performance, the 8080 established lasting standards: 8-bit data bus separated from 16-bit address bus, 40-pin package, general-purpose register architecture. These characteristics were found in many subsequent processors, testimony to the soundness of Shima and Faggin’s technical choices.

The 8080’s legacy is also measured against the industry it helped create. By demonstrating the commercial viability of 8-bit microprocessors, it enabled personal computing to come into its own and confirmed Intel’s dominant position in this strategic market. Its balanced architecture and performance established benchmarks that guided technological evolution for years.


CP/M

In 1972, Gary Kildall, then a consultant for Intel through his company Microcomputer Applications Associates, developed PL/M, a programming language designed for Intel microprocessors. This language, derived from the XPL language used for writing compilers, was the first high-level language specifically conceived for microprocessor programming. The first substantial program written in PL/M was a text editor for the Intel 8008 microprocessor, which would later become the ED editor of CP/M (Control Program for Microcomputers).

In 1974, Kildall recognized the need for an operating system to exploit Shugart Associates’ new 8-inch floppy disk drives. That year, the dominant storage medium was punched paper tape, which cost approximately 100 times more than floppy disks for equivalent data capacity. Kildall proposed to Intel the development of a complete operating system including an editor, assembler, and loader, for $20,000. Intel declined the offer but provided Kildall with the necessary hardware to continue his development.

CP/M development took place in the workshop behind Kildall’s house at 781 Bayview Avenue in Pacific Grove, California. Unable to create alone the complex electronic interface between the microprocessor and the floppy disk drive, he called upon John Torode, a university friend teaching at Berkeley, who designed the necessary controller. In fall 1974, Kildall successfully ran CP/M for the first time, loading the system from paper tape to floppy disk and then booting the computer from it.

One of CP/M’s major innovations was its modular design separating hardware-dependent functions into a distinct component called BIOS (Basic Input/Output System). CP/M could thus easily adapt to different hardware configurations by modifying only the BIOS. The rest of the system, including the file manager and command interpreter, remained identical regardless of the platform. This portability proved decisive for CP/M’s commercial success.
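The principle can be pictured as a small table of machine-specific entry points that the portable part of the system calls without ever looking behind them. The real BIOS was a jump table written in 8080 assembly; the C rendering below is only an analogy, with invented names and trivial stand-in routines.

    #include <stdio.h>

    /* The hardware-dependent layer: one set of routines per machine. */
    struct bios {
        int  (*console_status)(void);                 /* is a key waiting?  */
        int  (*console_in)(void);                     /* read one character */
        void (*console_out)(int c);                   /* write one character */
        int  (*read_sector)(int track, int sector, void *buf);
        int  (*write_sector)(int track, int sector, const void *buf);
    };

    /* A trivial "port" of this layer to the host's stdio, for illustration. */
    static int  host_status(void)                        { return 1; }
    static int  host_in(void)                            { return getchar(); }
    static void host_out(int c)                          { putchar(c); }
    static int  host_read(int t, int s, void *b)         { (void)t; (void)s; (void)b; return 0; }
    static int  host_write(int t, int s, const void *b)  { (void)t; (void)s; (void)b; return 0; }

    static const struct bios host_bios = {
        host_status, host_in, host_out, host_read, host_write
    };

    /* The portable part of the system only ever calls through the table. */
    static void print_string(const struct bios *b, const char *s)
    {
        while (*s)
            b->console_out(*s++);
    }

    int main(void)
    {
        print_string(&host_bios, "hello from the portable side\r\n");
        return 0;
    }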

In 1976, Gary and Dorothy Kildall founded Digital Research Inc. (initially named Intergalactic Digital Research) to commercialize CP/M. Initial sales were conducted from a rented office at 716 Lighthouse Avenue in Pacific Grove. CP/M floppy disks were sold for $70 each, and the couple went to the post office daily to collect checks resulting from advertisements placed in magazines such as Byte and Dr. Dobb’s Journal.

A licensing agreement with IMS Associates, Inc. in 1977 significantly increased CP/M’s credibility in the industry, making the system a standard adopted by most personal computer manufacturers of the era, including Altair, Amstrad, Kaypro, and Osborne. In 1978, revenue reached $100,000 per month and Digital Research moved into a Victorian house at 801 Lighthouse Avenue. By 1980, the company employed over 20 people and generated $3.5 million in revenue, five times that of Microsoft at the time.

In 1980, Bill Gates, who knew Kildall from discussions about a possible merger between their companies, recommended Digital Research to IBM for the operating system of its PC under development. Negotiations failed over confidentiality and business model issues: IBM wanted to purchase CP/M without royalties, while Digital Research preferred a per-copy-sold model. Microsoft seized this opportunity by purchasing a CP/M clone from Seattle Computer Products, renamed PC-DOS for IBM and MS-DOS for other manufacturers.

Faced with this situation, Kildall threatened IBM with a lawsuit for illegal copying of CP/M. IBM agreed to offer CP/M-86 as an option on the PC, but at $240 versus $40 for PC-DOS. This price difference massively steered the market toward PC-DOS/MS-DOS. Digital Research nevertheless continued to thrive for several years, developing multitasking operating systems for the IBM PC-XT and introducing graphical interfaces before Apple and Microsoft.

In the mid-1980s, MS-DOS’s dominance in the IBM-compatible PC market seriously affected Digital Research’s business. Gary Kildall, who never particularly enjoyed managing a large company, sold Digital Research to Novell in 1991. The assets were subsequently resold to Caldera in 1996, which would use DRI’s intellectual property in a lawsuit against Microsoft.

CP/M underwent major evolutions. MP/M (Multi-Programming Monitor) added multitasking and multi-user capabilities. CP/NET, introduced in 1980, connected CP/M computers to MP/M servers via various network protocols. A 16-bit version, CP/M-86, was developed for Intel 8086/8088 processors. However, the 8-bit version achieved the greatest success, with approximately 200,000 installations on over 3,000 different hardware configurations.

CP/M established fundamental concepts such as the separation between operating system and hardware through the BIOS, the use of standardized commands, and disk file management. These principles directly influenced MS-DOS’s design and, by extension, the evolution of personal operating systems. In 1995, the Software and Information Industry Association awarded Gary Kildall a lifetime achievement award, citing eight major contributions to the microcomputer industry. In 2014, the IEEE installed a commemorative plaque in front of Digital Research’s former headquarters, recognizing CP/M as an important milestone in personal computing history.

CP/M’s success illustrates the importance of open standards and software compatibility in the computer industry’s development. By enabling the development of a hardware-independent software ecosystem, CP/M helped transform microcomputers from specialized tools into general-purpose platforms accessible to all. This standardization fostered the emergence of an independent software market, foreshadowing the dominant economic model of the personal computer industry.


Diffie-Hellman

In the 1970s, cryptography existed in a closed world. Military and government agencies held the keys to this secret science, leaving the rest of society to make do with poorly secured communications. The systems of that era imposed a significant constraint: two people wishing to exchange secret messages first had to meet physically to share a common key. Imagine having to cross the Atlantic to hand-deliver a secret code before being able to send a simple encrypted telegram!

Whitfield Diffie and Martin Hellman, two researchers from Stanford, revolutionized this logic in 1976. Their paper New Directions in Cryptography, published in IEEE Transactions on Information Theory, proposed the unthinkable: creating a shared secret between two strangers without them ever having exchanged any confidential information. This idea seemed as absurd as asking two people to choose the same card from a shuffled deck without consulting each other.

The feat relied on modular arithmetic and the fascinating properties of one-way functions. Diffie and Hellman exploited the fact that calculating g^x mod p is easy but retrieving x when knowing only the result is extraordinarily difficult. Each participant generates their secret number, calculates a public version which they transmit openly, then combines this information with their correspondent’s public number. Through a kind of mathematical magic, they both obtain an identical secret result.
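With deliberately tiny numbers (a real exchange uses values hundreds of digits long), the whole mechanism fits in a few lines of C; the public parameters p and g and the two private exponents below are purely illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* Modular exponentiation by repeated squaring: computes (base^exp) mod m. */
    static uint64_t power_mod(uint64_t base, uint64_t exp, uint64_t m)
    {
        uint64_t result = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1)
                result = (result * base) % m;
            base = (base * base) % m;
            exp >>= 1;
        }
        return result;
    }

    int main(void)
    {
        const uint64_t p = 23, g = 5;       /* tiny public parameters, for illustration   */
        const uint64_t a = 6,  b = 15;      /* the two private numbers, never transmitted */

        uint64_t A = power_mod(g, a, p);    /* one side sends g^a mod p = 8   */
        uint64_t B = power_mod(g, b, p);    /* the other sends g^b mod p = 19 */

        uint64_t key_one = power_mod(B, a, p);   /* (g^b)^a mod p */
        uint64_t key_two = power_mod(A, b, p);   /* (g^a)^b mod p */

        /* Both sides end up with the same secret, here 2. */
        printf("%llu %llu\n", (unsigned long long)key_one, (unsigned long long)key_two);
        return 0;
    }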

This discovery revolutionized traditional cryptography. AT&T quickly grasped the commercial interest of this innovation by developing the Common Channel Interoffice Signaling system to protect their telephone communications. But it was with the explosion of the Internet in the 1990s that the protocol found its true destiny. SSL, and its successor TLS, integrated Diffie-Hellman at the heart of their mechanisms, transforming each HTTPS connection into a living demonstration of this mathematical prowess.

The protocol’s security relies entirely on the difficulty of calculating discrete logarithms. No classical computer can efficiently solve this mathematical problem, which explains why the protocol has held strong for nearly fifty years. However, quantum computing threatens this balance. Shor’s algorithm could one day transform this reputedly unsolvable problem into a simple calculation exercise.

Mathematicians have been working tirelessly to adapt and improve the original protocol. Victor Miller and Neal Koblitz proposed a variant using elliptic curves in 1985, called ECDH. This version drastically reduces the size of keys needed while maintaining a constant level of protection. A real efficiency gain that has won over many developers concerned with optimizing their applications.

The Diffie-Hellman protocol gave rise to RSA, published by Rivest, Shamir and Adleman in 1978, and inspired an entire generation of cryptographers. This innovation also democratized cryptography research, breaking the monopoly of secret agencies and giving birth to a worldwide academic community.

Official recognition took time. Diffie and Hellman did not receive the Turing Award until 2015, nearly forty years after their discovery. This wait was probably due to the Cold War climate surrounding any cryptographic innovation. Moreover, British researchers at GCHQ had developed similar ideas a few years earlier, but their work remained classified until 1997. A fine illustration of how state secrecy can sometimes delay scientific progress.

The anticipated arrival of quantum computing is currently mobilizing cryptographers worldwide. They are working tirelessly on post-quantum variants of the protocol, exploring new mathematical problems resistant to future quantum machines. This research extends the pioneering spirit of Diffie and Hellman, pursuing their vision of cryptography accessible to all. They proved that an open and academic approach to cryptography could rival the most secret government laboratories. This new intellectual paradigm unleashed researchers’ creativity and gave birth to a thriving computer security industry.

Every time you connect to your online bank or buy something on the Internet, you benefit from the legacy of these two visionaries. Their protocol continues to protect billions of daily communications, a living testament to an era when two researchers were able to imagine the impossible and transform it into reality.

Altair 8800

In January 1975, when MITS (Micro Instrumentation and Telemetry Systems) launched the Altair 8800, no one suspected that this small metal box selling for less than $400 would revolutionize the computing world. Yet this first personal computer to achieve genuine commercial success had just been born in the facilities of a company on the brink of bankruptcy.

The story begins with H. Edward Roberts, an electrical engineering graduate from Oklahoma State University who served as an officer in the US Air Force. At the Albuquerque weapons laboratory, he met Forrest Mims III. Together, they worked on laser and model rocket projects. This collaboration gave birth to MITS in 1969, a company specializing in telemetry instruments for model aviation.

The company then turned to electronic calculators and enjoyed several profitable years. But in 1974, competition from major manufacturers caused prices to collapse. Roberts saw his revenue plummet. He needed to find something else, fast. That’s when he envisioned an affordable personal computer, an idea that seemed crazy when computing was the exclusive domain of large corporations and universities.

The Intel 8080 microprocessor had just arrived on the market. This 8-bit component far surpassed its predecessor, the Intel 8008. Roberts saw an opportunity to build his machine. The Altair’s architecture leveraged the processor’s capabilities: a 16-bit address bus accessing 65,536 bytes (64 KB) of memory, a basic instruction set of 78 instructions, and most importantly an expandable bus system. The basic configuration offered only 256 bytes of RAM, expandable by the user according to their needs and budget.

The Altair’s front panel was striking in its apparent complexity. Rows of switches and LEDs enabled direct programming in machine language. A spartan approach that discouraged novices, but one that reflected the cost constraints of the era.

Roberts bet everything on an article in the January 1975 issue of Popular Electronics. With William Yates, he wrote a paper with a catchy title: “World’s First Minicomputer Kit to Rival Commercial Models.” The magazine came out in December 1974, and the response exceeded all expectations. MITS had hoped to sell a few hundred units; orders arrived by the thousands within the first few days.

This wave revealed the existence of a market that no one had truly imagined. The Altair attracted two young programmers, Bill Gates, then a Harvard student, and his friend Paul Allen, who developed a BASIC interpreter for the machine. This was the birth of Microsoft. In March 1975, the Homebrew Computer Club formed around the Altair in Silicon Valley. Steve Wozniak cut his teeth there and presented the work that would lead to the creation of Apple Computer.

The Altair’s technical influence extended to making its system bus a standard known as S-100. Many manufacturers adopted this architecture for their own machines. The Altair could run the CP/M operating system, which would dominate the microcomputer market until the arrival of the IBM PC and MS-DOS. The modular approach, with its expansion cards, inspired an entire generation of computers.

The machine’s limitations reflected the state of technology in 1975. No screen, no keyboard in the standard version. Programming was done through the front panel switches, a laborious exercise that deterred beginners. Memory was scarce and users had to wait for expansions to have real storage capabilities. But these constraints stimulated creativity: a multitude of manufacturers developed compatible cards and peripherals, creating a genuine ecosystem.

MITS discovered the joys and sorrows of unexpected success. The company struggled to fulfill orders and maintain an acceptable level of quality on hand-assembled kits. Technical support, provided notably through the Computer Notes bulletin, was not always sufficient in the face of questions from users sometimes overwhelmed by the complexity of their purchase. These growing pains did not prevent success, but they illustrated the challenges of creating a new market.

The Altair adventure ended in 1977 with the sale of MITS to Pertec Computer Corporation. Production ceased, but the legacy endured. The machine had proven that a personal computer could be commercially viable and technically credible. It gave birth to a community of enthusiasts that foreshadowed modern computing culture.

The Altair 8800 marks the boundary between two worlds: that of institutional computers, reserved for insiders, and that of personal computing, promised to all. Its open and modular architecture laid the foundation for principles that still govern the industry today. The enthusiasm it sparked among hobbyists transformed a technical curiosity into a cultural movement. More than an engineering feat, the Altair demonstrated that a different relationship with computing was possible.

X.25

In 1976, when the International Telegraph and Telephone Consultative Committee adopted X.25, no one suspected that this protocol would survive for more than thirty years. Yet this technical standard, born from heated debates between French and British engineers, would lastingly shape the architecture of data networks.

The first questions about the future of digital communications were emerging. ARPANET had been taking its first steps on the other side of the Atlantic since 1969, but in Europe, the situation was radically different. Telecommunications administrations reigned supreme over the infrastructures. In France, the Direction générale des télécommunications held the reins, just as the British Post Office did in the United Kingdom. These public monopolies did not look favorably upon the relative anarchy that characterized American developments.

Packet switching sparked passionate technical debates. This method, which breaks messages into small portions to optimize their circulation, divided the scientific community. On one side, Rémi Després from CNET staunchly defended virtual circuits: each communication follows a predetermined path through the network, which guarantees a certain order. On the other, Louis Pouzin and his team at IRIA advocated for datagrams: packets navigate freely, finding their route according to network availability.

This technical quarrel concealed much broader stakes. Virtual circuits appealed to traditional telephone operators through their predictability. They recalled the familiar world of classic telephone switching, where each call establishes a dedicated circuit. Datagrams, on the other hand, embodied a different philosophy, closer to the libertarian spirit blowing across American campuses.

The adoption process at the CCITT resembled a game of diplomatic chess. Larry Roberts, despite being the father of ARPANET, paradoxically supported virtual circuits from his private company. But without voting rights, his influence was limited. A similar situation existed for the Canadians at Bell Canada, who were nevertheless developing Datapac on this technical basis.

The turning point came with the unexpected rallying of the British Post Office to French positions. Halvor Bothner-By, the Norwegian rapporteur, then abandoned his initial position. He who had coined the term “datagram” ultimately sided with virtual circuits. This about-face sealed the fate of X.25, adopted in March 1976 and then solemnly confirmed in the fall.

Ironically, this French victory immediately benefited the international community. Spain and the Netherlands built their national networks on X.25. Bell Canada integrated it into Datapac, thus validating European technical choices. IBM, ever pragmatic, soon adapted its equipment to the standard.

In France, X.25 found its apotheosis with Transpac. This national network would support Minitel traffic for more than three decades, from 1980 to 2012. A remarkable performance for a technology that some had already deemed outdated in the 1990s. Banks, in particular, remained loyal to it: its inherent reliability and security corresponded perfectly to their strict requirements.

The emergence of the Internet could have sounded the death knell for X.25. TCP/IP, a direct heir to Pouzin’s datagrams, gradually established itself as the de facto standard. But X.25 persisted in its specialized niches. Costa Rica was still using it in 1999, France until 2012. This longevity contrasts with the resounding failure of OSI, that pharaonic ISO project that claimed to standardize all computer communications.

X.25 perfectly illustrates the caprices of technical innovation. A standard can be born from a political compromise, flourish in unexpected uses, and survive far beyond its initial predictions. It also testifies to the ability of French engineers to carry weight in international technical debates, when technical quality meets diplomatic skill.

This story above all reveals the complexity of technological ecosystems. Contrary to popular belief, the “best” technical solution does not always prevail. X.25 and TCP/IP coexisted for decades, each meeting specific needs. This peaceful coexistence contrasts with the standards wars that regularly shake the computer world.

Cray-1

The year 1976 witnessed the birth of a machine that would redefine the boundaries of scientific computing. The Cray-1, the fruit of Seymour Cray’s technical obsession, emerged from an already remarkable journey in the world of supercomputers. Cray had previously designed the Control Data 6600 and 7600, two benchmark systems that had established his reputation as a visionary architect. But this time, something else was at stake: building the fastest machine ever conceived.

In 1972, Cray left Control Data to create his own company, Cray Research. The stated objective could be summed up in a few words: build the world’s most powerful computer. Four years of intensive development culminated in a spectacular achievement. The Cray-1 first struck observers with its appearance: a cylinder 2.75 meters in diameter and nearly 2 meters high, breaking with the usual rectangular aesthetics of computers. This circular form was not an aesthetic whim. It responded to an implacable technical constraint: minimizing connections between components to gain precious nanoseconds.

The figures were staggering. The machine reached 80 million floating-point operations per second in normal operation, with peaks at 250 million. A performance that shattered all established records. This raw power relied on a decisive innovation: vector architecture. The Cray-1 simultaneously processed entire data series through a "chaining" technique that orchestrated eight vector registers of 64 elements each.
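As a conceptual sketch only, a modern Python/NumPy analogy (not Cray hardware or software; only the 64-element length mirrors the Cray-1’s vector registers) shows what processing a whole register at once buys compared with a scalar loop:

```python
# A conceptual analogy in modern Python/NumPy, not Cray code:
# only the 64-element length mirrors the Cray-1's vector registers.
import numpy as np

a = np.arange(64, dtype=np.float64)   # one "vector register" of 64 elements
b = np.full(64, 2.0)                  # a second register
c = np.full(64, 1.0)                  # a third register

# Scalar style: one element per instruction, 64 iterations per operation.
scalar = np.empty(64)
for i in range(64):
    scalar[i] = a[i] * b[i] + c[i]

# Vector style: the multiply and the add are expressed over whole registers.
vector = a * b + c

assert np.allclose(scalar, vector)
```

On the real machine, chaining meant the results leaving the multiply unit could feed the adder as they were produced, without waiting for the full register to be written back.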

Beneath its futuristic casing, the machine concealed a design philosophy of disarming simplicity. Seymour Cray used only four different types of integrated circuits: 5/4 NAND gates in two speed grades, register chips, and memory chips, all based on ECL technology. This minimalist approach nonetheless masked a dizzying complexity: 1,662 modules distributed across 113 different types, with each module capable of containing up to 288 integrated circuits. The ensemble mobilized the equivalent of 2.5 million transistors.

The electronic density generated a formidable thermal challenge. Concentrating so many components in such a reduced space produced heat that traditional cooling systems could not evacuate. Cray then developed a Freon-based solution of remarkable sophistication. Aluminum and stainless steel cooling bars wound through each column of the machine, maintaining a stable temperature of 21°C. This technical feat was essential to the reliable operation of the entire system.

Memory reached one million 64-bit words, organized into 16 independent banks. This architecture offered the ability to access data simultaneously from multiple processes, reducing wait times that penalized performance. A SECDED system protected calculation integrity by automatically correcting single errors and detecting double errors.

Commercial success exceeded all predictions. Between 1976 and 1982, Cray Research sold approximately 80 machines at a unit price of 19 million dollars. Early customers included American government research centers, but also European institutions such as the European Centre for Medium-Range Weather Forecasts. This international dimension confirmed the technological lead Cray had achieved.

Application domains revealed the machine’s versatility. Meteorology, aeronautics, nuclear research: wherever numerical simulation required intensive floating-point calculations, the Cray-1 excelled. Its ability to rapidly solve complex systems of differential equations opened new perspectives for scientific modeling.

The technical influence went far beyond raw performance. The operating system and FORTRAN compiler, specially optimized to exploit vector architecture, established new standards. Programming proved more accessible than on competitors like the ILLIAC IV, favoring adoption by the scientific community.

Development involved considerable challenges. Perfecting the cooling system required eighteen months of intensive research. Manufacturing the five-layer cards, with their extremely precise interconnections, pushed the limits of existing techniques. These constraints stimulated innovations that benefited the entire industry.

In 1990, fourteen years after its commissioning, Lawrence Livermore National Laboratory retired its Cray-1, surpassed by new generations. The machine was sold at auction in 1993 for $10,000, or 0.05% of its initial price. This vertiginous drop illustrated the acceleration of technological progress in supercomputers.

The Cray-1 crystallized an era when the pursuit of absolute performance drove innovation. Its influence persists in current design principles: optimization of inter-component communications, sophisticated thermal management, simplified architecture despite the complexity of technical challenges. The machine embodied Seymour Cray’s vision, which prioritized elegance and coherence in the quest for technical excellence.

This achievement demonstrated that bold architecture, supported by meticulous engineering, could dramatically push the frontiers of scientific computing. The principles established by the Cray-1 confirm its place as a foundational milestone in the evolution of high-performance computing.

UUCP

In 1976, Mike Lesk was working on a practical problem at Bell Labs. ARPANET already existed, but access remained the privilege of a restricted circle: in January of that year, only 63 hosts were connected to it. For UNIX users scattered across the United States, exchanging files or executing remote commands was an uphill battle. Lesk designed UNIX-to-UNIX Copy Protocol (UUCP), a pragmatic solution that transformed ordinary telephone lines into digital arteries.

The first version of UUCP was integrated into UNIX in February 1978. The protocol operated on 300-baud connections, a speed that seems trivial but represented notable progress. A few months later, in October, the Seventh Edition of UNIX included a revised version of the protocol. This time Lesk partnered with Dave Nowitz, with help from Greg Chesson, to correct the initial system’s teething problems.

In late 1979 at the University of North Carolina, Steve Bellovin, a student seeking to master the system’s intricacies, developed a modest news distribution program. Meanwhile, Tom Truscott and Bellovin experimented with a UUCP connection between UNC and Duke University. From this experimentation emerged an idea that would transform UUCP usage: why not distribute information to other sites using Duke as a hub? Remote sites reimbursed Duke for telephone costs, a rudimentary but effective system.

In early 1980, the network had three participants: UNC, Duke University, and Duke Medical Center’s physiology department. Jim Ellis presented the concept at a USENIX meeting in Boulder in January. The reception was favorable. Steve Daniel produced an implementation of the software called A News, distributed on the summer 1980 USENIX tape in Newark. The network expanded to 15 sites.

Armando Stettner and Bill Shannon, working at Digital Equipment Corporation, proposed an attractive deal to the University of California at Berkeley: they would cover Berkeley’s connection costs in exchange for network access via their decvax machine, located in New Hampshire. Stettner didn’t stop there and financed the first international connections to Europe, Japan, and Australia.

The network was named Usenet, in homage to USENIX. Growth exceeded all predictions: within a year, more than 100 sites were exchanging approximately 25 articles daily. This expansion revealed the initial system’s weaknesses. Mark Horton, a student at Berkeley, partnered with Matt Glickman, then a high school student, to completely rewrite A News. Their creation, B News, appeared in 1981. Horton continued to refine his work until 1984, before passing the torch to Rick Adams from the Center for Seismic Studies.

In June 1984, a geographic map of USENET revealed the phenomenon’s scope. Connections now extended to Australia, Hawaii, Canada, and Europe. Version 2.11 of B News, released in 1986, incorporated contributions from Rick Adams, Spencer Thomas, Ray Essick, and Rob Kolstad. Although new versions appeared until 1994, the system showed its limitations as early as 1989.

The NNTP (Network News Transfer Protocol), published in February 1986 by Brian Kantor and Phil Lapsley, heralded a new era. A year later, Geoff Collyer and Henry Spencer from the University of Toronto launched C News, a more efficient alternative that definitively buried the old systems.

The financial question obsessed network actors. Lauren Weinstein presented the Stargate project at the 1984 summer USENIX conference, a bold proposal to use unused portions of the television video signal to transport ASCII data at 65 kbps. Despite support from several companies and successful tests, the project bogged down and was eventually abandoned.

Facing the inadequacies of existing commercial networks, Rick Adams proposed to USENIX in 1985 the creation of a central site accessible via Tymnet. UUNET was born in 1987 with an initial budget of $35,000. The service met a real need: many people lost their email and Usenet access when changing employers. Success exceeded expectations. Installed on a 16-processor Sequent B21 computer, UUNET had more than 50 subscribers by June 1987 and several thousand five years later.

UUNET participated in the creation of the Commercial Internet Exchange Association in 1991. The following year, the company co-founded MAE-East with Metropolitan Fiber Systems, the world’s largest Internet exchange point for a time. The IPO occurred in May 1995, consecrating the concept’s commercial success.

UUCP’s success stemmed from its innovative technical characteristics. The system operated asynchronously, with task queuing. Transmissions occurred in the background, accommodating the constraints of often-busy telephone lines. The protocol required no operating system modifications and adapted equally well to direct connections and modems.

Security aspects were not neglected. Each site configured files and commands accessible to remote systems. Authentication relied on passwords, and some sites used automatic callback to verify caller identity. Sequence counters could be activated to counter identity spoofing attempts.

UUCP persisted into the 1990s, particularly in situations where permanent Internet connection was inaccessible or too expensive. On slow serial lines, it proved more efficient than TCP/IP. Its batch architecture was perfectly suited to email and Usenet news transport.

UUCP’s legacy spans decades. The bang-path address syntax (host!user) used to route mail between UUCP sites owes its existence to it, as do certain email routing mechanisms. The protocol demonstrated that it was possible to build large-scale decentralized computer networks relying on existing telecommunications infrastructure. Its influence is found in the subsequent development of the Internet and asynchronous communication technologies.

Apple I

It was 1975, in the intellectual ferment of the Homebrew Computer Club in Silicon Valley. Steve Wozniak, a discreet but passionate figure at these electronics hobbyist meetings, had cobbled together something unusual: a computer he wanted to show the other club members. His friend Steve Jobs, whom he’d met four years earlier, immediately sensed something more than just an enthusiast’s project.

The heart of this machine was based on a MOS Technology 6502 microprocessor running at 1.023 MHz. Wozniak had the idea of integrating a video interface directly into his system, capable of displaying 24 lines of 40 characters on any standard television set. This was far from obvious. The machine came with 4 KB of RAM, expandable to 8 KB, with dynamic shift registers managing the display.

Everything changed when Paul Terrell showed up. This owner of the Byte Shop, a computer store in Mountain View, ordered 50 machines outright. Jobs and Wozniak looked at each other: they had neither the money nor the infrastructure to produce anything at that scale. Wozniak sacrificed his precious HP-65 calculator, Jobs his Volkswagen van. With this money and a 30-day credit negotiated with suppliers, they launched into the venture.

Contrary to the legend of the family garage, the reality of production proved more prosaic. The boards were manufactured and wave-soldered in a real factory, probably Santa Clara Circuits. The famous Jobs family garage mainly served for final testing and assembly. Daniel Kottke, Jobs’s friend, spent his evenings testing the machines while Wozniak intervened to troubleshoot the trickiest cases.

The Apple I hit the market in a form that surprised people. No case, no keyboard, no screen: just a bare motherboard that the buyer had to complete according to their means and desires. Paul Terrell was stunned when he received his order. He expected complete computers, not circuit boards. The price was set at $666.66, not out of provocation but because Wozniak loved repeating numbers.

On the software side, the Apple I remained spartan. A small monitor in ROM allowed direct entry of programs in machine language. Wozniak then developed his own BASIC interpreter, inspired by his experience at HP with calculators. But this BASIC had some flaws: impossible to manipulate decimals or numbers beyond 32,767, a limitation that would also plague the first version of the Apple II.

The audio cassette interface, added later, turned into a nightmare. Liza Loop, the first official buyer, struggled like many other users with her Apple I that refused to cooperate. Wozniak, brilliant with digital electronics, floundered a bit when tackling analog electronics. The Apple II would fortunately benefit from the support of other engineers to fix the problems.

Nevertheless, Wozniak incorporated remarkable technical innovations into his design. He cleverly diverted the video circuit counters to refresh the RAM, a technique he would refine on the Apple II. The video circuit kept its mysteries: certain parts remained obscure, even for Wozniak, who may have drawn inspiration from Don Lancaster’s “TV Typewriter,” published in Radio-Electronics in 1973.

In the end, fewer than 200 Apple Is came off the production lines. A modest score that in no way foreshadowed the tidal wave that would follow. Today, between 30 and 50 specimens survive in private collections or museums. Some come from Apple employees who recovered them in the 1970s from unsold stock or customer returns.

The Apple I was never a commercial success, but it forged something infinitely more precious: experience. Jobs and Wozniak learned on the job the ins and outs of computer design, production, and sales. Every flaw in the Apple I was a lesson for the Apple II, which would break all records with more than 6 million units sold over sixteen years.

This first machine, beyond a technical anecdote, embodied the idea that a computer could address the general public, escape from laboratories and universities to land in the average person’s home. The Apple I opened a breach through which millions of other machines would rush.

It also testified to that blessed era when a few electronics enthusiasts, armed only with their ingenuity and a few thousand dollars, could shake up an entire industry. The story of its creation perfectly captured this alchemy between Wozniak’s technical genius and Jobs’s commercial instinct, a recipe that would propel Apple to unimaginable heights.

vi

In 1976, at the University of California, Berkeley, Bill Joy faced a problem familiar to all programmers of that era: how to edit text decently on a UNIX system? The available tools, such as ed, were more akin to intellectual torture than programming assistance. Peter H. Salus did not hesitate to describe them as “the most user-hostile editors ever created”. Imagine working line by line, never seeing your entire text!

Joy, a graduate student, refused to endure this situation. He was already working on a Pascal interpreter and collaborating with Chuck Haley to improve ed. The university had a PDP-11, a modest machine compared to the PDP-10s that MIT or Stanford could afford. This hardware constraint pushed them to innovate differently, seeking efficiency rather than raw power.

The summer of 1976 brought an unexpected breakthrough. Joy and Haley discovered em (Editor for Mortals), created by George Coulouris at Queen Mary College. Here was an editor that dared to display two error messages instead of one and offered a more visual approach to editing. Nothing groundbreaking in itself, but enough to spark inspiration. The two colleagues then developed ex, whose “visual” mode would give birth to the name “vi”.

The genesis of vi owes as much to chance as to genius. Joy remembers designing the interface one Saturday in 1976, with the radio in the background broadcasting Jimmy Carter’s question-and-answer session shortly after his election. This anecdote reveals the relaxed atmosphere in which a tool destined to mark computing history was born. The editor integrated its innovations while adapting to existing terminals, notably the ADM-3A. The famous mode system—normal, insert, command—naturally arose from these technical constraints and the art of making the most of every keystroke.

Joy did not bother with sophisticated commercial channels to distribute his creation. He offered copies on 9-track magnetic tapes for $50, of which $40 went to fund development. This artisanal distribution met with unexpected success: hundreds of copies went to universities and laboratories. The technical superiority of vi over its competitors explained this enthusiasm, reinforced by its distribution with BSD, Berkeley’s version of UNIX.

The arrival of VAX machines in 1977-1978 gave the project a second wind. Joy adapted vi to the capabilities of these new machines, particularly exploiting virtual memory. Development progressed well until a technical incident changed everything: a tape drive failure caused the source code to be lost. The planned advanced features, such as multi-window support, were abandoned. Joy preferred to stabilize what existed and produce proper documentation.

The story took a surprising turn when AT&T incorporated vi into UNIX System V. Joy learned this news after the fact, discovering his editor as a standard throughout the UNIX universe. This decision was explained by the massive adoption of vi within Bell Labs itself. Paradoxically, this popularity created problems: each team maintained its own copy, wasting machine memory.

vi established an editing philosophy that still divides opinion. Its modal system requires substantial initial learning but then offers unmatched text manipulation power. Some find it too complex, others swear by its efficiency once mastered. This duality has persisted through the decades without losing its relevance.

The 1990s saw the birth of vim (Vi IMproved), which extended the legacy of the original editor while adding features expected by modern users. vi is present on all UNIX systems and their descendants, including Linux and macOS. Its influence extends beyond these boundaries, and many modern editors offer a “vi mode” to attract its devotees.

This longevity is surprising in a field where obsolescence is rapid. A tool designed in the 1970s continues to serve millions of users daily. Its endurance stems from its formidable efficiency for certain tasks, its universal presence, and its proven reliability. System administrators and programmers appreciate its guaranteed availability and rock-solid stability.

The vi adventure illustrates the importance of initial choices in software design. The constraints that guided Joy—memory economy, optimal keyboard exploitation, compatibility with limited terminals—produced a tool whose utility transcends technological eras. This success reminds us that good software is judged not only by its features but by its conceptual coherence and its response to users’ fundamental needs. Fifty years after its creation, vi continues to prove this truth every day.

DEC VAX-11

In the mid-1970s, mainframe computers still dominated large organizations, while Digital Equipment Corporation’s 16-bit minicomputers ruled an expanding market. But at DEC, a few visionary engineers understood they were reaching the limits of this architecture. Programs were growing, data multiplying, and 16 bits were showing their weaknesses.

In March 1975, a team quietly formed in the Maynard, Massachusetts offices. Their mission: to design an innovative 32-bit architecture. The project bore poetic code names: “Star” for the hardware, “Starlet” for the operating system. This unusual approach was immediately striking: unlike the customary practice where hardware and software evolved separately, the teams worked hand in hand from day one.

Three hundred man-years of intensive development elapsed before DEC unveiled its masterpiece. On October 25, 1977, at the annual shareholders’ meeting, the presentation of the VAX-11/780 and its VMS system made a lasting impression. The chosen demonstration bordered on boldness: the computer faced a human in Scrabble. The word “sensibly” earned it 127 points and victory. This playful wink concealed an impressive technical reality: the VAX-11/780 combined the power of a mainframe, the interactivity of a minicomputer, and a finally affordable price.

The VAX-11/780 architecture challenged conventions. The 32-bit virtual memory freed programmers from the addressing constraints that had limited their ambitions. Error correction integrated into memory, a world first, transformed system reliability. DEC embraced a bold philosophy: designing a machine meant to last fifteen to twenty years, turning its back on the planned obsolescence already practiced by its competitors.

The market responded enthusiastically. By 1979, DEC’s sales crossed the $2 billion threshold. Universities massively adopted these machines for their research laboratories. Companies discovered a tool capable of handling their most demanding scientific applications. VMS, the operating system, impressed with its robustness and advanced features.

The VAX family gradually expanded. The VAX-11/750 arrived in 1980 with its semi-custom gate array technology. Two years later, the VAX-11/730 democratized access to this architecture. In 1984, the VAX 8600 set new performance records with quadruple the power of the original model.

A remarkable innovation emerged in 1983: VAXclusters. For the first time in the industry, multiple VAX computers could connect in a network and form a single system. This technical feat revolutionized high availability. Stock exchanges, industrial control systems, all critical environments embraced this technology that tolerated failures like no other.

In 1985, the MicroVAX concentrated the entire VAX architecture on a single chip, making more compact and less expensive systems possible. Success exceeded all expectations: 20,000 MicroVAX IIs found buyers during the first year of commercialization.

Technical evolution continued at a sustained pace. The 1987 CVAX adopted CMOS technology, more energy efficient. The Rigel processor in 1989, then the NVAX in 1991, continually pushed performance limits. The latter reached a speed thirty times that of the VAX-11/780 that had started it all fourteen years earlier.

VMS shaped the evolution of operating systems. Custom-designed for the VAX, it established unprecedented standards of reliability and functional sophistication. The native integration of DECnet anticipated the growing importance of machine-to-machine communications, foreshadowing the era of computer networks.

The early 1990s saw the emergence of 64-bit RISC processors. DEC prepared its transition to the Alpha architecture without abandoning its legacy. VMS, renamed OpenVMS, migrated to this new platform while maintaining support for older systems. The company continued its VAX production, releasing the 4000 model in 1996.

The VAX’s success rested on a skillful blend of ingredients: computing power, legendary reliability, sophisticated operating system, and exceptional technical support from DEC. Anecdotes abound about this extraordinary robustness. That VAX-11/780 that continued functioning after falling from a forklift. Those systems that ran for years without interruption, defying the laws of electronic wear.

The VAX’s 32-bit architecture influenced the design of many modern processors. VMS concepts such as advanced virtual memory management and clustering are found in our current operating systems. This integrated hardware-software design philosophy now stands as self-evident throughout the industry.

The VAX testifies to a bygone era when computer innovation privileged long-term vision and technical excellence over the race for novelty. Its exceptional longevity—some VAX systems still function today—validates this approach. Beyond the technical object, the VAX embodies a foundational stage in computer history, that pivotal moment between the era of mainframes and that of distributed systems shaping our digital world.

RSA

In 1976, no one could have imagined that a research paper published by Whitfield Diffie and Martin Hellman would revolutionize the world of cryptography. Their publication, “New Directions in Cryptography,” introduced an idea that seemed purely theoretical: asymmetric cryptography. Until that date, all cryptographic systems operated on a single principle: two people wishing to communicate securely first had to meet to exchange a secret key. This constraint paralyzed the development of digital communications.

The trio of researchers—Diffie, Hellman, and Ralph Merkle—proposed three concepts that would revolutionize the field: public-key encryption, digital signatures, and key exchange. Their paper, however, contained only one concrete implementation: the key exchange method that would bear their names. This Diffie-Hellman protocol constituted the first practical method for establishing a shared secret without prior meeting.

The following year, three researchers from MIT would take the decisive step. Ron Rivest, Adi Shamir, and Len Adleman formed a team with complementary talents. Rivest excelled at applying theoretical concepts to concrete problems. He devoured scientific literature and generated a constant stream of new ideas. Shamir possessed that rare ability to pierce through to the essence of a problem beyond its apparent complexity. Adleman, a rigorous mathematician, evaluated each proposition with the precision of a Swiss watchmaker.

For months, the trio explored various avenues without success. Then came that April evening in 1977. Rivest had spent the Passover evening at a student’s home, where they had shared Manischewitz wine. Back home, unable to sleep, he settled on his couch with a mathematics textbook. The question that had obsessed him for a year turned over in his mind: does there exist a mathematical function that is easy to compute in one direction but impossible to reverse without particular information? By sunrise, Rivest had drafted the entire paper describing the RSA system.

The genius of this discovery lies in its conceptual simplicity. The RSA system relies on a fundamental arithmetic property: multiplying two prime numbers is child’s play, but finding those numbers from their product is a nightmare when they reach a respectable size. The system generates two mathematically linked keys: a public key that anyone can know, and a private key that only its owner possesses. What is encrypted with the first can only be decrypted with the second.
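A toy run of the system can be sketched in a few lines of Python, using the small textbook primes 61 and 53 (illustrative values only; real keys rely on primes hundreds of digits long, and the modular inverse via pow requires Python 3.8 or later):

```python
# A toy RSA run with small illustrative primes; real keys are far larger.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # 2753: private exponent, modular inverse of e

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key  -> 2790
recovered = pow(ciphertext, d, n)  # decrypt with the private key -> 65
assert recovered == message
```

Anyone may encrypt with the public pair (e, n); only the holder of d, whose computation requires knowing the secret primes, can reverse the operation.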

To demonstrate their confidence in this new system, the three inventors launched a bold challenge in the pages of Scientific American. They published a 129-digit number, the product of two secret prime numbers, accompanied by an encrypted message. One hundred dollars awaited whoever could decipher it before April 1, 1982. This wager testified to their conviction in the mathematical robustness of their creation.

RSA-129 would resist well beyond the fixed deadline. It would take until 1994 to see this numerical giant collapse under the coordinated assault of 600 people spread across 24 countries. Arjen Lenstra, Paul Leyland, Michael Graff, and Derek Atkins orchestrated this collaborative enterprise that mobilized 1,600 computers for seven months. The secret message revealed—“THE MAGIC WORDS ARE SQUEAMISH OSSIFRAGE”—rewarded this technical feat with poetry of rather relative quality.

This collective victory did not signal the death of RSA but illustrated the permanent evolution of the balance of power between cryptographers and cryptanalysts. Over the decades, numerous attacks emerged, often targeting implementation weaknesses rather than the mathematical foundations of the system. Michael Wiener demonstrated in 1990 that an excessively small private exponent compromises security. Paul Kocher revealed in 1996 that a smart card could betray its secret key through the simple time it takes to compute. Daniel Bleichenbacher discovered in 1998 that certain error messages divulge valuable information to a patient attacker.

These discoveries did not undermine the confidence placed in RSA, which became one of the pillars of the modern internet. Every time you connect to your online bank, make a purchase, or check your email, RSA works silently to protect your data. Its discreet yet omnipresent presence has made possible the explosion of e-commerce and the digitalization of financial services.

Yet, a sword of Damocles hangs over this cryptographic giant. In 1994, Peter Shor published an algorithm that could, on a sufficiently powerful quantum computer, efficiently factor very large numbers. This theoretical threat stimulates research in post-quantum cryptography, a race against time to develop new systems before quantum computers become reality.

Forty-five years after its birth, RSA retains its relevance and reliability. Its journey demonstrates that an innovation can transform society far beyond its creators’ initial intentions, as new cryptographic challenges emerge with the advent of the quantum era.

Apple II

With the release of the Apple II in 1977, Steve Wozniak and Steve Jobs were no longer content to tinker in a garage—they wanted to reach the mass market. Their first consumer machine bore no resemblance to the electronic kits cluttering computer clubs at the time. Gone were the bare motherboards and makeshift cases: the Apple II came in an elegant beige plastic housing with an integrated keyboard, ready to use right out of the box.

Wozniak designed the architecture around the 6502 microprocessor, cheaper than the Intel 8080 that powered its competitors. His technical genius shows through in the simplicity of the circuit board, where each component finds its place with remarkable economy of means. The machine started with 4 KB of RAM, expandable to 48 KB, and came with Integer BASIC built directly into ROM. But what immediately stands out is the native support for color graphics and sound. Wozniak had actually developed these features with a specific goal in mind: programming Breakout in BASIC.

Jobs focused on marketing and design. He categorically refused metal cases that made computers look like industrial equipment. Jerry Manock designed a molded plastic housing with rounded corners and that characteristic beveled shape that would become Apple’s visual signature. The device had to fit into a living room like any television set.

Sales began in June 1977 at $1,298 for the base configuration. Success exceeded expectations. Individuals, schools, and small businesses competed for the machine. The open architecture with its eight expansion slots facilitated upgrades: additional memory, various interfaces, disk controllers. A complete ecosystem of third-party manufacturers developed.

With the arrival of the Disk II floppy drive in 1978, Wozniak hit a home run: for $495, his creation offered 113 KB of storage on 5.25-inch diskettes. The DOS operating system, in its successive versions 3.1, 3.2, and then 3.3, simplified file management. This reliable and affordable solution definitively opened the doors of the professional market to the Apple II.

The year 1979 would be remembered. Apple released the Apple II Plus, which replaced Integer BASIC with the more sophisticated Applesoft BASIC in ROM, featuring floating-point numbers. Standard memory jumped to 48 KB. But most importantly, VisiCalc appeared. This first spreadsheet in history revolutionized computer use in business. Many bought an Apple II solely to run VisiCalc.

The Apple IIe of 1983, the true star of the lineup, had 64 KB of RAM expandable to 128 KB and an 80-column display, making it a credible office productivity tool. The full keyboard with lowercase letters facilitated text entry. Compatibility with existing software was total, ensuring longevity of software investments.

Apple took a gamble on compactness with the Apple IIc in 1984. This integrated model with a built-in floppy drive targeted homes and schools. Abandoning expansion slots in favor of standardized ports simplified usage but limited upgradability. Its polished design earned it several aesthetic awards, a first for Apple.

The swan song came with the Apple IIGS in 1986. Its 16-bit processor was compatible with the 6502, its graphics and sound capabilities were impressive, and its interface drew inspiration from the Macintosh. Alas, Apple was already betting everything on the Macintosh. The IIGS, despite its undeniable qualities, remained in the shadow of its bigger sibling.

Production of the Apple II ended definitively in 1993 after sixteen years of an exceptional career. Over six million units sold testify to a success that transcends computing. This machine democratized the personal computer, proving it could be both powerful and accessible.

Commodore P.E.T

In 1977, when Commodore International unveiled its Personal Electronic Transactor, home computing barely existed. Computers remained the domain of universities, businesses, and a few enthusiasts capable of assembling complex kits. Jack Tramiel, Commodore’s boss, sensed an opportunity. Microprocessor prices were plummeting, and he envisioned a different device: a complete, ready-to-use computer in a single housing.

The P.E.T broke with everything that existed at the time. Gone were components scattered across multiple boards, gone the jury-rigged television screen and tape recorder diverted from its original purpose. Commodore integrated everything: processor, memory, 9-inch monochrome screen, keyboard, and cassette drive. This approach revealed an astute commercial vision. Where other manufacturers targeted electronics hobbyists, Tramiel aimed directly at future everyday users.

The machine’s heart beat to the rhythm of a MOS Technology 6502 processor running at 1 MHz. This technical choice, far from trivial, relied on the recognized qualities of this component: simple architecture, reduced cost, and honorable performance. The 4 KB RAM represented a reasonable compromise between functionality and cost. Moreover, Commodore offered enhanced versions with 8, 16, and 32 KB.

Microsoft BASIC resided in read-only memory, instantly transforming the P.E.T into a programmable machine upon power-up. This version of the language, adapted to the computer’s specificities, integrated commands dedicated to peripherals and the screen’s rudimentary graphics capabilities. Users could immediately type their first lines of code or load a program from cassette.

Storage posed a problem. Commodore’s Datasette, derived from a standard tape recorder, proved exasperatingly slow. Loading a program of a few kilobytes required long minutes of waiting, punctuated by characteristic beeps and squeals. Yet this solution remained the only economically viable way to democratize access to computing. Floppy disk drives still cost too much to equip households en masse.

Early users discovered the limitations of the original keyboard. Commodore had adopted the pocket calculator principle, with flat, closely-spaced keys that made typing painful. Complaints poured in, and the American firm corrected course by offering a full keyboard on subsequent models. This responsiveness to criticism demonstrated attentive market listening.

The P.E.T’s screen displayed 40 columns by 25 lines in green characters on a black background. No colors, no animated sprites like on future gaming machines. But this sobriety concealed unsuspected possibilities. Programmers ingeniously repurposed semi-graphic characters to draw shapes, create rudimentary interfaces, or design action games. The P.E.T also featured an optional high-resolution mode, MTU’s Visible Memory, which allowed individual control of each screen pixel.

In schools, the P.E.T found its place. Its robustness, reliable operation, and affordable price made it the ideal tool for introducing students to computing. Generations of students learned their first programming basics on these green and white machines. The IEEE-488 interface, inherited from the world of scientific instrumentation, enabled connection of professional measuring devices, extending the P.E.T’s use to laboratories and research centers.

The user community organized spontaneously. The P.E.T User Group emerged in 1978 and brought together thousands of members worldwide. Specialized publications, cassette program exchanges, regional meetings: a vibrant ecosystem developed around the machine. This collective dynamic compensated for technical limitations through overflowing creativity.

Independent developers created remarkable applications. VisiCalc, the first consumer spreadsheet, found in the P.E.T an ideal platform for attracting small businesses and independent professionals. Sophisticated games, management software, and educational tools progressively enriched the available catalog. This software diversity transformed the P.E.T from a programming computer into a genuine productivity tool.

Commodore learned lessons from this pioneering experience. The P.E.T’s successes and failures directly informed the design of the VIC-20 and then the legendary Commodore 64. The all-in-one approach validated, the importance of the price factor confirmed, the necessity of a robust software ecosystem understood: so many lessons that would guide the California company’s future creations.

The P.E.T gradually faded in the early 1980s, supplanted by more powerful and cheaper machines. This machine proved that a personal computer could appeal beyond the circle of initiates. It outlined a new market where technology yielded to usage, where complexity disappeared behind simplicity of use. The P.E.T fascinates collectors and computer historians. Not for its performance, long since surpassed, but for what it represents: one of the first steps toward democratizing the personal computer.

Tandy TRS-80

In 1976, in the offices of Tandy Radio Shack in Fort Worth, Texas, Don French harbored a bold idea. This computer enthusiast and executive at a company with nearly 3,500 electronics stores across the United States wanted to convince his president, Lewis Kornfeld, to venture into selling computer kits. The idea seemed crazy: selling a $199 product in stores accustomed to moving merchandise priced around $30. Management was skeptical. But times were changing. Sales of CB radios, those Citizens Band units that had made the company’s fortune, were beginning to falter. The company needed a breath of fresh air, a product capable of maintaining its margins. French’s project finally got the green light.

Spring 1976 saw French and his colleagues visit National Semiconductor. There, they met Steve Leininger, a chip designer and member of the famous Homebrew Computer Club. Initially solicited as a mere consultant, Leininger eventually accepted a position at TRS. The man would revolutionize the initial project. While kits dominated the market against much more expensive assembled systems, he pushed TRS toward the latter option. His bet: offer a fully assembled computer for $399.95. With the monitor, the package would reach $599.95. The price remained low enough to appeal to management.

Caution prevailed at Tandy. Don French confirmed it: only 3,500 units were initially produced, exactly matching the number of Radio Shack stores. The logic seemed obvious: if the venture went south, the machines would serve in the retail locations. On August 3, 1977, New York’s Warwick Hotel hosted the official presentation of the TRS-80. The press conference went almost unnoticed. Two days later, fortune smiled on the company: the TRS-80’s presence at the Personal Computer Faire in Boston earned a front-page Associated Press article.

Upon his return, French discovered that his brainchild had taken on unexpected proportions. The numbers speak for themselves: approximately 250,000 units sold for this first model. The attractive price compared to competition from Apple or Commodore doesn’t explain everything. Radio Shack had an ace up its sleeve: its brand recognition and its roughly 5,000 stores and franchises in 1977. Anyone could now see and test a TRS-80 in a shopping mall or on a commercial street. The computer emerged from the circle of enthusiasts to reach the general public.

Businesses began taking interest in the TRS-80, but the first generation quickly showed its limitations. Compared to machines like the IBM 5110, it lacked power and storage space. Tandy struck back in June 1979 at the National Computer Conference in New York with the TRS-80 Model II. Mind you, this wasn’t a successor to the original machine, which only then took on the name Model I. Radio Shack’s advertising hammered the point: this was a different machine, considerably more expensive with a base price of $3,490 including a disk drive. The Z80 processor jumped from 1.77 to 4 MHz, memory climbed from 4K (expandable to 16K) to 32K (expandable to 48K), graphics gained resolution. Gone was the cassette drive, replaced by a specific TRSDOS and an enhanced BASIC.

July 1980 marked the arrival of the TRS-80 Model III, the true heir to the Model I. Electromagnetic interference from early personal computers prompted the FCC to tighten its directives. The Model I was discontinued in 1981. Like its big brother Model II, the Model III integrated everything into a single case: monitor, keyboard, disk drive. This design reduced radio interference and appealed to educational institutions, less exposed to theft than with separate components. The Z80 processor reached 2.0 MHz and featured a more powerful ROM BASIC 2. The architecture preserved broad compatibility with the Model I, but TRSDOS 1.3 was specific to the Model III. A complete model cost $2,495 in 1980, with the entry-level version priced at $699.

April 1983 saw the birth of the TRS-80 Model 4. Burned by compatibility problems between the Model I and Model III, developers bet on total compatibility with its predecessor. The computing world applauded this decision but lamented the absence of native software. The Model 4 broke new ground: 4 MHz clock, 64K of RAM with the possibility of adding 64K more, 80x24 screen with reverse video, TRSDOS 6 with Microsoft BASIC and CP/M compatibility. The basic configuration with a 16K cassette drive started at $999, the complete model with 64K and two floppy drives peaked at $1,999. Two limited versions would follow, but the original configurations would last until 1991.

The TRS-80’s triumph can be explained in different ways. First, that immediate availability in the vast Radio Shack network. Then, a remarkable pedagogical approach with the Level I BASIC manual, exemplary in its clarity. The modular design also appealed: everyone could upgrade their machine according to their means and needs. Radio Shack provided after-sales service, a mark of confidence. The company developed an entire ecosystem: varied peripherals, software ranging from games to professional applications (accounting, inventory management). Well over 100,000 systems found buyers, placing the TRS-80 among the first widely distributed personal computers.

The impact on the computerization of small businesses, schools, and American households proved considerable. The TRS-80 democratized access to computers far beyond the circles of specialists and professionals. It embodied this transformation of the microcomputer market between the late 1970s and early 1980s, this shift from a niche market toward the general public and the professional world.

Berkeley Software Distribution

When Ken Thompson, Dennis Ritchie, and Rudd Canaday created UNICS at Bell Labs in 1969, they had no idea they were launching a lineage that would give birth to one of the most influential operating system families in computing history. UNICS was renamed UNIX two years later, and as early as 1973-1974, the system saw its first branches with PWB/UNIX and MERT. While PWB/UNIX would evolve into System III, MERT disappeared from the radar.

The true BSD adventure began on March 9, 1978, in the offices of the Computer Systems Research Group at the University of California, Berkeley. This first version of BSD was modest, but versions 2 and 3 marked a turning point. DEC’s 32-bit VAX machines gradually replaced the aging PDP-11s, and BSD adapted to this technological transition. The developers integrated Thompson’s reworked Pascal compiler, the vi editor that would become legendary, and the C shell that was a game-changer for system administrators.

The story might have ended there without DARPA’s providential intervention. Bob Fabry secured eighteen months of funding that transformed BSD 3 into an innovation laboratory. Virtual memory made its appearance, accompanied by demand paging and sophisticated page-replacement algorithms. These improvements laid the groundwork for BSD 4 and the integration of the ARPANET protocols—the ancestors of TCP/IP—along with redesigned memory management. The termcap database advanced terminal handling, while device support gained robustness.

Jim Kulp contributed job control to BSD 4.1, but it was above all automatic hardware detection that transformed users’ lives. Gone were the tedious kernel recompilations at every configuration change: the system recognized and configured devices automatically at boot time, and a single kernel could run on different machines without human intervention.

The year 1983 marked the pinnacle with BSD 4.2. Bill Joy and Kirk McKusick delivered their masterpiece with the Berkeley Fast File System, which far surpassed the original UNIX file system in performance and disk-space utilization. Reworked signals improved process control, the new socket interface structured inter-process communication, and TCP/IP established itself as the networking standard. These innovations inspired a generation of systems: SunOS 1.2, Ultrix-II, Mach, and MIPS OS drew directly from this technological goldmine.

BSD 4.3 was released in 1986 with the mission of consolidating achievements and refining TCP/IP. The system reached remarkable technical maturity, but the commercial dimension then entered the picture. In 1992, BSDi launched BSD/386, built on the BSD Net/2 code released the previous year. This commercial version positioned itself as a credible alternative to AT&T’s System V, with a compelling argument: a more affordable price. However, the sword of Damocles of AT&T’s license still hung over BSD.

The bombshell came in 1992 when USL, AT&T’s UNIX subsidiary, took BSDi to court for trademark violation and false advertising. After an initial setback, USL came back in 1993, this time against both BSDi and the University of California. The charge: unauthorized distribution of proprietary code. The situation became more complex when AT&T sold USL and the UNIX rights to Novell. The settlement reached on February 4, 1994, finally cleared the way for 4.4BSD-Lite, purged of all elements under AT&T license.

Meanwhile, Bill Jolitz was working behind the scenes on 386BSD. His approach differed: rather than negotiating with lawyers, he developed alternatives to the problematic components. This approach resulted in a completely free version of BSD, freed from the legal constraints that had hindered its development. Distributions temporarily slowed, but this technical liberation constituted a strategic victory for free software.

The following years saw several branches emerge: NetBSD was born from the BSD Net/2 code, FreeBSD extended the legacy of 386BSD, DragonFlyBSD later split from FreeBSD, and Apple discreetly integrated FreeBSD elements into Mac OS X’s hybrid Mach/BSD kernel.

BSD’s technical legacy spans decades. The FFS file system still inspires contemporary developers. Berkeley’s TCP/IP stack remains a reference for understanding network protocols. Job control and automatic hardware detection are now part of the standard UNIX landscape. But beyond these technical contributions, BSD inaugurated an unprecedented development model: collaboration between university and industry, collective code improvement, sharing of innovations.

Its permissive license stands in contrast to more restrictive approaches. This openness favored the adoption of BSD code in commercial projects, creating an ecosystem where academic research and industrial development mutually enriched each other. macOS perfectly illustrates this symbiosis: a commercial system that integrates and values free components.

BSD’s fragmentation into multiple variants—FreeBSD, NetBSD, OpenBSD—paradoxically reflects its vitality. Each branch explores specific territory: NetBSD focuses on universal portability, OpenBSD prioritizes absolute security, DragonFlyBSD rethinks multiprocessor architectures. This diversity enriches the ecosystem rather than weakening it.

In the grand history of computing, BSD embodies the transmission of technical knowledge across generations of developers. The system traveled from university to industry, from research to critical applications, without losing its soul or intrinsic qualities.


Space Invaders

In 1978, at Taito Corporation’s offices in Tokyo, Tomohiro Nishikado was working on a project that would soon revolutionize the nascent video game industry. The Japanese engineer had no idea he was about to create one of the most enduring cultural phenomena of his time. His Space Invaders emerged in a context where Japanese manufacturers were attempting to catch up with the Americans, who had dominated the sector since the resounding success of PONG six years earlier.

Where PONG relied on rudimentary TTL circuits, Space Invaders introduced an architecture coupling a microprocessor with bitmap video memory. This combination enabled the simultaneous display of dozens of animated graphic objects, an unmatched performance for the era. The game’s visual ingenuity lay in its monochrome projection onto a background colored by filters—an economical yet remarkably effective solution.

Nishikado’s genius extended beyond purely technical aspects. He forged the conventions of modern video gaming: a scoring system, multiple lives, increasing difficulty, an endless play loop. The concept appeared disarmingly simple: a laser cannon facing hordes of aliens descending inexorably. As the player thinned their ranks, the remaining invaders accelerated their deadly descent. This mechanic created tension that never relented.

The Japanese public’s reception bordered on collective frenzy. Arcade halls transformed into temples dedicated to the new phenomenon. Some establishments abandoned all their other games to install exclusively Space Invaders machines. Legend has it that a shortage of 100-yen coins struck the archipelago—an invented story but symptomatic of the national craze. The numbers speak for themselves: hundreds of millions of dollars in revenue within a few months.

This lightning success redrew the industry’s geography. Japanese manufacturers, galvanized by the unexpected triumph, poured capital into research and development. Namco, Sega, and Nintendo emerged from this creative effervescence and would dominate the following decade. The arcade game industry reached economic maturity.

The technical constraints of the original system—incapable of filling more than a quarter of the screen in real time—paradoxically stimulated innovation. Namco responded in 1979 with Galaxian and its sprite technology, which managed moving objects more efficiently. This technological arms race never ceased: digital signal processors for 3D in the 1990s, then specialized graphics processors at the turn of the millennium.

Beyond entertainment, Space Invaders crystallized the geopolitical tensions of its time. The America of the 1970s viewed Japan’s economic ascent with apprehension. The pixelated invaders echoed fears of an "invasion" of Japanese products into the American market. Ironically, American companies licensed the game for the United States (Midway for the arcades, Atari for home consoles), illustrating the complexity of exchanges between the two powers.

This cultural mutation came with professionalization of the sector. Gone were the days of solitary tinkerers: developing a game now required a structured team. Programmers, hardware engineers, designers, graphic artists, and producers formed the first proper studios. This organization marked the beginning of the sprawling industry we know today.

The game’s social impact surprised its creators. Spectators gathered around arcade machines to admire virtuoso performances. These spontaneous gatherings held the seeds of video game spectacle culture that would explode with contemporary e-sports. Space Invaders revealed the collective dimension of a medium thought to be solitary.

Its influence endures in current creations, notably the defensive principle of protecting one’s territory against waves of assailants. Score, progression, mounting tension: so many elements that have become inseparable from the video gaming experience. Nishikado’s game demonstrates how creative intuition, supported by technical innovation, can transform an entire sector.

Space Invaders, this small program of a few kilobytes, redefined our relationship with digital entertainment. It marks the moment when video games emancipated themselves from their artisanal origins to establish themselves as an autonomous form of expression, capable of generating their own aesthetic and social conventions.


TeX

In 1977, Donald Knuth received the proofs for the new edition of the second volume of The Art of Computer Programming, and the result disappointed him so much that he decided to put everything on hold. The typographic composition of his mathematical formulas looked so mediocre that he chose to create his own system rather than keep publishing his books in that state. The printing industry was undergoing a technical upheaval: it was abandoning hot-metal typesetting, where each character was cast in metal and assembled by hand, in favor of phototypesetting. The new method was cheaper and faster, certainly, but Knuth would not accept the decline in quality that came with it.

The Stanford computer scientist devoted his 1977-1978 sabbatical year to developing TeX. He did not simply write a page layout program: he designed a complete system with its document description language and its DVI output format, independent of the printer used. His first trials took place on a DEC PDP-10 computer, in the SAIL language. Frank Liang and Michael Plass, his students, tested successive prototypes and actively participated in the development.

TeX redefined the art of typesetting through its sophisticated algorithms. Frank Liang developed a hyphenation system that adapts to different languages, while Michael Plass helped Knuth design paragraph justification. The technical innovation lies in the concept of “boxes and glue”: TeX manages the spaces between elements like virtual springs that compress or stretch according to needs. This approach produces justification of unmatched quality.
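
As a rough illustration of the idea (a toy model, not Knuth’s actual line-breaking algorithm), the following Python sketch treats a line as fixed-width boxes separated by glue, each gap having a natural width plus stretch and shrink components, and distributes the leftover space in proportion to each gap’s stretchability.

    def set_glue(box_widths, glue, target):
        """Give each gap its final width so the line exactly fills `target`.

        glue: one (natural, stretch, shrink) triple per gap between boxes.
        """
        natural = sum(box_widths) + sum(g[0] for g in glue)
        excess = target - natural
        if excess >= 0:                              # line too short: stretch the glue
            total = sum(g[1] for g in glue) or 1
            return [g[0] + excess * g[1] / total for g in glue]
        total = sum(g[2] for g in glue) or 1         # line too long: shrink the glue
        return [g[0] + excess * g[2] / total for g in glue]

    # Three 30-, 20- and 25-unit "words" on a 100-unit line: the two spaces,
    # each 5 units wide with equal stretchability, absorb the missing 15 units.
    print(set_glue([30, 20, 25], [(5, 3, 1), (5, 3, 1)], 100))   # [12.5, 12.5]

Real TeX goes much further, scoring every possible set of line breaks for a whole paragraph and choosing the combination with the least overall "badness".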

Knuth accompanied TeX with a second system: METAFONT. Rather than drawing letter outlines as PostScript and TrueType would later do, METAFONT programs characters by simulating pen movements. This parametric method generates entire font families by modifying a few variables. The Computer Modern fonts, created with this tool, have visually marked all TeX documents for forty years.

In 1982, the rewrite in Pascal gave birth to TeX82. This language change improved portability and served as the foundation for all subsequent versions. Knuth then adopted an original version numbering system: the numbers tend toward π for TeX and toward e for METAFONT. This mathematical approach reflects his desire to definitively stabilize the system. He committed to correcting only serious bugs, guaranteeing that a document would always produce an identical result, even decades later.

The American Mathematical Society placed its trust in TeX for its publications. This adoption by a reference institution contributed to its dissemination in universities around the world. Leslie Lamport understood that Knuth’s system was too complex for many users. In 1984, he created LaTeX, a layer of simplified commands that hides technical complexity behind a logical interface. The author could now focus on the structure of their document rather than its formatting, and its success surpassed that of “plain” TeX.

Knuth also reinvented software documentation. He developed specifically for TeX the principles of literate programming, blending code and explanations in a single document. This approach produced exceptionally complete documentation that helps other programmers understand and modify the system. A user community formed around TeX. The TeX Users Group was founded in 1980 and published TUGboat, a technical journal that accompanied the system’s evolution.

TeX remains the reference tool for scientists. Mathematicians, physicists, and computer scientists use it extensively to write their papers. Extensions like pdfTeX, XeTeX, and LuaTeX have added new functions without altering the original typographic quality. Paradoxically, Knuth’s innovations have never been adopted by mainstream software. Microsoft Word or LibreOffice Writer offer significantly inferior typesetting despite the power of current computers.

The story of TeX shows how one man’s demand for quality can change an entire discipline. It demonstrates the importance of open source code and exemplary documentation for a software’s survival. In a world where technologies succeed one another at a frantic pace, TeX crosses decades while maintaining all its relevance. Knuth had planned to spend a few months on this personal project. Nearly half a century later, his work continues to typeset the most beautiful mathematical or literary texts on the planet.


WordStar

In May 1978, Seymour Rubinstein stormed out of IMSAI Manufacturing Corporation. A disagreement with Bill Millard had just shattered his career as marketing director. Two weeks earlier, Rob Barnaby, a talented programmer, had already jumped ship. Rubinstein tracked him down and made him a simple proposal: create software together for these microcomputers starting to emerge from California garages.

June 1978 saw the birth of MicroPro International Corporation. The name reflected Rubinstein’s ambition: to develop “PROfessional” programs for “MICRO-computers”. The first fruits arrived as early as September with WordMaster and SuperSort. WordMaster built on the ideas of a text editor that Barnaby had developed at IMSAI, completely rewritten to run on CP/M. SuperSort drew inspiration from IBM 360 sorting utilities. These two programs brought in $10,700 in September 1978 alone, enough to face the future with confidence.

But resellers were demanding something else: a real word processor that handled printing directly. In early 1979, Barnaby embarked on a technical tour de force. In six months, he wrote over 137,000 lines of assembly code, a job IBM reportedly estimated at 42 man-years. The first version of WordStar was released in June 1979.

The software disrupted established practices. For the first time, the screen showed what would appear on paper. WordStar integrated context-sensitive help that adapted to the user’s actions, guiding their steps without cluttering the interface. Typists found familiar ground: all commands remained accessible from the keyboard, and page breaks displayed on screen—an unheard-of luxury.

Success exceeded all expectations. MicroPro climbed from $500,000 in 1979 to $1.8 million in 1980, then $5.2 million in 1981. The arrival of venture capitalist Fred Adler in 1981 transformed the company. A real sales force was put in place. Revenue jumped to nearly $25 million, with a loss of one million for the fiscal year. In 1982, revenues reached $45 million.

Rubinstein thought big and aimed for international markets. He established European headquarters in Zug, Switzerland, to take advantage of favorable tax treatment. Offices opened in Germany, the United Kingdom, France, and Japan. WordStar was released in 42 languages. The entire world discovered this futuristic word processor.

WordStar inherited the philosophy of text editors rather than that of typewriters. Text was now a continuous flow, no longer a succession of pages. Formatting commands were inserted directly into the document via special characters, a method borrowed straight from programming. This approach appealed to academics and writers who handled lengthy manuscripts.

In January 1984, Rubinstein collapsed from a heart attack as the company prepared for its initial public offering. The company’s lawyers persuaded him to sign documents that stripped him of control. H. Glen Haney took the reins and gradually sidelined the founder.

The new management accumulated missteps. WordStar 2000 was released in total incompatibility with the original version, sowing confusion among loyal users. Advertising for WordStar disappeared for three years. The network of 1,500 resellers was sacrificed in favor of a few large distributors. Sales plummeted from $72 million to $40 million.

WordPerfect arrived in 1980, combining editing and formatting in a single display. Microsoft Word followed in 1983 with a novel approach: treating each character as an independent entity carrying its own attributes. This method, initially slower, would come into its own with the emergence of graphical interfaces.

The 1990s sounded the death knell for WordStar. The transition to Windows went poorly. The software passed from hand to hand, acquired by companies specializing in the distribution of low-cost programs, and finally by The Learning Company. Corel obtained a license but preferred WordPerfect, which it had just acquired. WordStar survived thanks to a few groups of nostalgic users.

Yet the legacy remains immense. WordStar proved that a microcomputer could rival professional typewriters. Its interface innovations inspired an entire generation of software. It helped structure the software industry by creating user license agreements and developing international distribution.


dBASE

At NASA’s Jet Propulsion Laboratory in the 1970s, Wayne Ratliff probably had no idea he was about to write a chapter in the history of personal database systems. His JPLDIS system, designed to process information for the space laboratory, would become the ancestor of one of personal computing’s most significant software products.

Ratliff transformed his initial work into a product he named Vulcan, which he sold through classified ads in specialized press. The commercial venture took a different turn when Ashton-Tate decided to distribute the software under a new name: dBASE II. This naming convention, which cleverly avoided version I, immediately suggested the maturity of an already proven product. The strategy worked perfectly. On 8-bit microcomputers running CP/M, dBASE II became the essential reference for data management.

What distinguished dBASE from other tools was its comprehensive approach. Where competitors offered isolated functions, dBASE provided a fully integrated work environment. Once launched, users accessed a complete universe: table creation, data manipulation, information retrieval, export to other formats. The software went further by offering a programming language to automate repetitive tasks or build custom interfaces.

This integration pushed the concept to include a text editor, file management tools, and the ability to interact directly with the operating system. Everything happened within a single environment, without having to juggle between different programs. The command-line interface, recognizable by its famous dot prompt, became increasingly familiar to users who learned to interact directly with their data.

The technical architecture relied on a logical organization of information. Data tables, stored in .dbf files, structured information into columns and rows following the classic relational model. The system relied on different file types: indexes to speed up searches, memory-variable files for temporary values, report formats for presentation, screen formats for interaction, and programs for automation.
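
To make this organization concrete, here is a minimal Python sketch based on the commonly documented dBASE III .dbf layout: a fixed 32-byte header carrying the record count and sizes, followed by one 32-byte descriptor per field (the helper name is illustrative, not part of any dBASE tool).

    import struct

    def read_dbf_header(path):
        """Read the table structure of a dBASE III-style .dbf file."""
        with open(path, "rb") as f:
            head = f.read(32)
            # Offsets 4-7: record count; 8-9: header size; 10-11: record size.
            n_records, header_len, record_len = struct.unpack("<IHH", head[4:12])
            fields = []
            while True:
                first = f.read(1)
                if first == b"\r":            # 0x0D closes the field descriptors
                    break
                desc = first + f.read(31)     # each descriptor is 32 bytes long
                name = desc[0:11].split(b"\0")[0].decode("ascii")
                ftype = chr(desc[11])         # C (text), N (number), L, D or M
                fields.append((name, ftype, desc[16]))
            return n_records, record_len, fields

Each data record then occupies record_len bytes: a deletion flag followed by the fields’ fixed-width ASCII values, column after column.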

dBASE III marked a major technical breakthrough. Maintenance and code evolution were facilitated following Ashton-Tate’s abandonment of assembly language in favor of C. This version introduced date management and significantly increased processing capabilities. dBASE III Plus, released in 1986, reached a new level with pull-down menus inspired by Framework, doubled performance, and native multi-user support.

The numbers speak for themselves: unlimited files, up to one billion records per file, 128 fields maximum per record, 4 KB per record. Tests showed that sorting was twice as fast as the previous version, while indexing gained a factor of ten. Compatibility extended to PFS:FILE, Lotus 1-2-3, VisiCalc DIF, MultiPlan, and standard ASCII files.

Networking took on particular importance. dBASE III Plus directly integrated multi-user functionality at no extra cost. Automatic record and file locking prevented data corruption during simultaneous access. Eight levels of password protection secured access to sensitive information. Data encryption and decryption occurred automatically at each workstation level.

Yet by the late 1980s, the machine began to seize up. Wayne Ratliff’s departure from Ashton-Tate in 1988 deprived the company of its original creator, while competition intensified with the arrival of products like FoxBASE and Clipper, often more efficient and less expensive. Legal problems mounted: Ashton-Tate lost several copyright infringement lawsuits, having never disclosed that dBASE descended directly from JPLDIS.

Unlike WordStar, which had successfully recalled its founders during similar difficulties, Ashton-Tate persisted in a strategy that proved losing. The market reorganized around new players: Borland absorbed Ashton-Tate, Microsoft acquired Fox (FoxBASE’s publisher), Computer Associates purchased Nantucket (Clipper’s creator).

After his departure, Wayne Ratliff developed Emerald Bay, a new product that bore troubling resemblances to Microsoft Access. In a 2007 interview with Visual Pro Magazine, Ratliff highlighted these similarities without making explicit accusations, leaving doubt about potential “borrowing” of concepts.

dBASE’s concept of an integrated development environment, direct interaction with data through an intuitive interface, data-oriented programming: all innovations that would leave a lasting mark on the industry. Microsoft Access, among others, directly inherited these pioneering concepts.

Jerry Pournelle perfectly captured the product’s ambiguity in BYTE magazine’s columns by describing it as “exasperating but excellent”. This phrase sums up the mindset where the personal software industry still hesitated between pure technical innovation and ease of use. dBASE embodied this tension: innovative in its concepts, sometimes bewildering in daily use. Above all, dBASE demonstrated how database management systems became indispensable tools, both in personal and professional computing.


Ada

Picture for a moment the administrative nightmare that Pentagon IT projects represented in the mid-1970s. Over 400 programming languages and dialects were proliferating across the U.S. Department of Defense’s systems, generating three billion dollars in maintenance costs annually. This absurd situation often originated in the misguided initiative of a programmer who, convinced they were improving productivity, would cobble together a dialect of an existing language for their particular application. Twenty years later, entire generations of developers still had to learn that dialect to maintain what had become a legacy program.

David Fisher was then leading the DoD’s initiative to break out of this deadlock. The idea wasn’t new: in the 1960s, the department had already mandated COBOL in its defense contracts. But this time, the approach would be different. For the first time in the history of programming languages, the requirements would be defined by a team completely separate from the one that would design the language.

The consultation process mobilized military, industrial, and academic experts from around the world. Between 1975 and 1977, several requirements documents emerged with evocative codenames: Strawman, Woodenman, Tinman, and finally Ironman. These specifications placed reliability, readability, and maintainability at the heart of concerns, far beyond the classic objectives of portability and efficiency.

Twenty teams responded to the call for proposals launched in 1977. Four were selected and designated by colors: Green, Red, Blue, and Yellow. After six months of hard work, only the Green and Red teams remained in the running. In May 1979, the Green team’s proposal won. Its leader, Jean Ichbiah of CII Honeywell Bull, had an ace up his sleeve: his experience with LIS, the “System Implementation Language” developed since 1972 at his company.

LIS had been designed to improve operating system reliability and maintainability. This philosophy fit perfectly with the DoD’s requirements. Ichbiah and his team drew on this experience to shape their proposal.

The next phase, called “Test and Evaluation”, transformed the project into a true global laboratory. About a hundred teams spread across all continents tested the language by recoding existing applications. Their feedback fueled successive refinements that led, in 1980, to a proposed standard.

The standardization process took on a pharaonic dimension. Over a thousand people around the world participated in this endeavor. Ichbiah’s team had to process approximately 7,000 comments on the proposed standard, relying on a computerized database to manage this flood of feedback. A technical feat.

In February 1983, Ada’s ANSI standardization was finalized. The language stood out for remarkable innovations. Its package structure allowed a clear separation between a package’s specification, the interface exposed to its users, and its implementation. The concept of linear reading transformed code comprehension: a programmer could read an Ada program line by line, their understanding at any given line depending only on the preceding lines.

Robert Dewar developed the first Ada compiler at New York University, actually an interpreter intended for teaching. The second compiler, created by Rolm and Data General for the Eclipse minicomputer, was validated in June 1983. Western Digital offered the third with their MicroEngine, marking Ada’s entry into the microcomputer world.

In January 1984, the DoD struck a major blow: a directive mandated the use of Ada for all critical applications. This decision came after the validation of the Data General compiler, which proved the language’s practical viability. But Ada did not remain confined to military applications. The civilian sector adopted it, particularly in civil aviation, rail systems, and real-time embedded systems where reliability is paramount.

The language evolved over the decades. Ada 95 introduced object-oriented programming. Ada 2005 and Ada 2012 brought substantial improvements in contract-based programming. Today, Ada powers critical applications where the slightest failure can have potentially dramatic consequences.

Ada’s design introduced practices that have now become standard: rigorous separation between specification and implementation, formal compiler validation, meticulous standardization process. These innovations influenced the design of other languages and transformed software development methods.

Jean Ichbiah had a clear vision: “Developing a large program may take less than two years, but its maintenance will extend over more than twenty years.” This philosophy permeates every aspect of the language. Ada prioritizes code clarity and comprehension over ease of writing. An approach that seems counterintuitive at first but proves worthwhile in the long run.

The language’s name pays tribute to Augusta Ada Lovelace, considered the first programmer in history. This choice was not insignificant: it reflected the project’s ambition to make history in computing. Forty years later, Ada is regarded as a model of rigorous and methodical design. It demonstrates that investing in code quality generates substantial savings over a software’s entire lifecycle. A lesson that many IT projects would do well to consider.


Usenet

The fall of 1979 at Duke University looked like any other: leaves were yellowing, students were returning to classes, and in the computer science department, Tom Truscott and Jim Ellis were wracking their brains over an apparently mundane problem. How could they efficiently connect the different computers on campus running UNIX? A technical question that would revolutionize the way people exchange information.

Computing still lived under the domination of large centralized systems. DEC’s PDP-11s were just beginning to colonize university departments, while personal microcomputers remained technological curiosities. For remote communication, people used sluggish 300-baud modems that crackled over telephone lines. ARPANET existed, certainly, but its access remained jealously guarded by the military and a few privileged laboratories.

Truscott and Ellis found their solution by teaming up with Steve Bellovin, a student at neighboring University of North Carolina. The latter developed the first version of Usenet in just 150 lines of shell script, building on UUCP, a UNIX protocol that had just appeared and allowed copying files between remote machines. The idea was simple: create thematic forums called newsgroups where anyone could post messages, which would automatically propagate from one site to another.

The chosen architecture reflected the constraints of the era as much as its creators’ ingenuity. Rather than centralizing data, each site locally stored all messages and transmitted them to its neighbors. Duke served as a central hub to limit telephone costs, a very real concern when each long-distance call counted in dollars. The A News protocol adopted a rudimentary but effective format: each message began with the letter A followed by a unique identifier, and headers indicated the path traveled to prevent messages from looping endlessly.
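
The flooding logic itself fits in a few lines. The sketch below, written in Python rather than the original shell script, shows the principle just described: a site drops any article whose recorded path already includes it, otherwise stamps itself onto the path and forwards the article to every neighbor that has not yet seen it (the site names and the send helper are hypothetical).

    MY_NAME = "duke"                 # hypothetical site name
    NEIGHBORS = ("unc", "phs")       # sites reached over UUCP dial-up links

    def relay(article, send):
        """article: dict with an 'id' (e.g. 'Aunc.42') and a 'path' like 'unc!research'."""
        visited = article["path"].split("!")
        if MY_NAME in visited:
            return                                        # already came through here: drop it
        article["path"] = MY_NAME + "!" + article["path"] # record our passage
        for neighbor in NEIGHBORS:
            if neighbor not in visited:                   # never send it back where it has been
                send(neighbor, article)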

In January 1980, at a Usenix conference in Boulder, the team publicly presented their system. They described it as a “poor man’s ARPANET”, an economical alternative to the elite network. The source code was distributed freely on magnetic tape, in the spirit of sharing that characterized the UNIX community. This generosity wasn’t purely philosophical: without financial means, the creators bet on network effects to make their invention prosper.

The beginnings were nevertheless modest. In April 1981, only fifteen sites comprised the network, orbiting around Duke, Berkeley, and San Diego. The takeoff came from Mary Ann Horton, who had the idea of creating a gateway between Usenet and popular ARPANET mailing lists. Suddenly, discussions from SF-LOVERS and other renowned groups became accessible to Usenet users. Horton also developed the uuencode and uudecode tools, which solved the thorny problem of transferring binary files over a network designed for text.

The B News version corrected the original system’s growing pains. Gone were the days of having to read messages in arrival order: users could now freely navigate discussions, follow conversation threads, organize their reading. These improvements came at just the right time because the network was literally exploding. From 150 sites in 1981, Usenet jumped to 11,000 sites in 1988, processing 1,800 daily articles.

This growth reflected the broader evolution of computing. UNIX was spreading through universities and businesses, modems were getting faster, telephone lines more reliable. Above all, Usenet answered the need for technical mutual aid and knowledge sharing. Groups proliferated, covering every imaginable topic, from the most specialized to the most trivial.

But this expansion revealed the flaws of an architecture designed for a few dozen sites. Central nodes struggled to relay the growing flood of messages. Governance posed questions: while creating an official group required a community vote, the “alt.” hierarchy let everyone freely create their own forums. This organized anarchy fostered innovation but complicated controlling abuses.

The 1990s paradoxically marked both Usenet’s peak and the beginning of its decline. The system served as a platform for announcements that would change the world: Linus Torvalds presented Linux there, Tim Berners-Lee described the World Wide Web. Ironically, it was precisely the Web that would supplant Usenet by offering a more accessible interface to the general public. The arrival of AOL, then social networks, gradually diverted users toward more user-friendly solutions.

Usenet didn’t disappear entirely. The NNTP protocol replaced UUCP to adapt to the Internet, and data volume continued to grow, mainly for binary file sharing. Technical groups retain their usefulness for developers who appreciate the quality of discussions and absence of advertising.

Usenet proved that decentralized and participatory social networks could be created long before these concepts were fashionable. It enriched Internet vocabulary with words like “spam” or “troll”, introduced the idea of discussion threads, experimented with collaborative moderation. Its technical choices have durably influenced our way of communicating online.

Technological revolutions sometimes arise from very concrete needs, and from solutions cobbled together by a few students in a university laboratory. Between total openness and necessary control, between freedom of expression and moderation of abuses, Usenet embodied the tensions that still run through our digital communities.


VisiCalc

In 1978, Dan Bricklin was attending classes at Harvard Business School when an ordinary situation sparked an idea that would transform personal computing. During a case study on Pepsi-Cola, he watched his professor fill in a financial spreadsheet on the blackboard. With each data modification, the instructor had to laboriously recalculate all the figures, erase, and rewrite. This scene reminded Bricklin of his own calculation sessions with his calculator: scribbled sheets of paper, errors creeping into transferred values, tedious restarts.

The student had solid technical experience from Digital Equipment Corporation, where he worked on digital typesetting systems. He knew about computer systems designed to reduce the number of operator manipulations. This experience led him to envision what he would later call an “electronic blackboard and chalk.” A system where calculations display, modify, and automatically propagate from one cell to another.

Back in his dorm room, Bricklin cobbled together a first prototype in BASIC on a borrowed Apple II. His program managed a small grid of twenty rows by five columns, but the principle worked. He reconnected with Bob Frankston, a friend from MIT with whom he had worked on Multics, the experimental operating system that familiarized them with sophisticated user interfaces. Frankston possessed the technical skills necessary to transform the idea into a commercial product.

Their meeting with Dan Fylstra was a game changer. Fylstra ran Personal Software and wrote for Byte Magazine. He immediately grasped the project’s potential and steered the two men toward the Apple II, a machine that had the decisive advantage of a disk drive. Without external storage, a spreadsheet would remain unusable. Fylstra also suggested adapting the software to HP85 and HP87 programmable calculators, but this avenue led nowhere.

In January 1979, Bricklin and Frankston founded Software Arts Corporation. To develop their spreadsheet, they acquired a Prime 350 minicomputer, chosen for its connection to Multics and its support for the PL/I language. This cross-development approach, unusual for the time, gave them access to tools far more powerful than those available on microcomputers. They wrote and compiled code on the Prime, then transferred it to the Apple II for testing.

Hardware constraints dictated their technical choices. The Apple II provided 48 KB of RAM, part of which was used by the operating system. Every byte mattered. Rather than using faster binary arithmetic, they opted for decimal calculation. This decision sacrificed performance but avoided rounding errors that would have unsettled users accustomed to their desk calculators.
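
The trade-off is easy to reproduce today: binary floating point introduces rounding surprises that decimal arithmetic avoids, which mattered to users checking results against their desk calculators. A two-line Python illustration:

    from decimal import Decimal

    print(0.10 + 0.20)                         # 0.30000000000000004 with binary floats
    print(Decimal("0.10") + Decimal("0.20"))   # 0.30 exactly with decimal arithmetic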

The interface they designed was striking in its apparent simplicity. A grid of cells identified by coordinates: A1, B2, C3. Each cell could contain a number, text, or a formula. Users pointed to other cells rather than manually entering their values. When data changed, all calculations updated instantly. This simulation feature—the famous “what if”—simplified exploring different scenarios with a single gesture.
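
The mechanism can be sketched in a few lines of Python (a toy model, nothing like VisiCalc’s hand-tuned assembly engine): cells hold either values or formulas referring to other cells by coordinate, and reading a cell re-evaluates everything it depends on.

    class Sheet:
        def __init__(self):
            self.cells = {}                    # "A1" -> number or "=formula"

        def set(self, ref, value):
            self.cells[ref] = value

        def get(self, ref):
            value = self.cells.get(ref, 0)
            if isinstance(value, str) and value.startswith("="):
                # Resolve every other cell, then evaluate the formula text.
                env = {r: self.get(r) for r in self.cells if r != ref}
                return eval(value[1:], {}, env)
            return value

    s = Sheet()
    s.set("A1", 120)
    s.set("A2", 80)
    s.set("A3", "=A1 + A2")
    print(s.get("A3"))       # 200
    s.set("A1", 150)         # the "what if": change one number...
    print(s.get("A3"))       # ...and the total becomes 230

Circular references would trap this toy version in infinite recursion, a reminder of how much engineering the real product hid behind its apparent simplicity.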

This design stemmed from their experience with large computer systems and their desire to create a tool accessible to the general public. The coordinate notation actually resulted from deep reflection on ergonomics. How to reference a cell intuitively? How to make calculation logic visible?

Personal Software unveiled VisiCalc in May 1979 at a price of $100, with shipments following that autumn. Success didn’t come immediately, but it came quickly. In July, Ben Rosen published an analysis in the Morgan Stanley Electronics Letter that would prove prophetic. VisiCalc was the first "killer app" of personal computing, the first software that let you use a computer without knowing how to program. Buyers purchased an Apple II solely to run VisiCalc.

The program found its place on other machines. The 1981 IBM PC version employed the same optimization techniques as the original, demonstrating the soundness of the initial choices. But success attracted competition. Microsoft launched Multiplan in 1982. The software struggled in the United States but found success elsewhere in the world.

The arrival of Lotus 1-2-3 in 1983 reshuffled the deck. Mitch Kapor, a former Personal Software employee, had developed a spreadsheet specifically optimized for the IBM PC. His program integrated graphics, managed databases, named cells, and automated tasks with macros. Lotus 1-2-3 dominated the professional world.

VisiCalc couldn’t withstand this renewed competition. Relations between Software Arts and Personal Software deteriorated. Each company developed its own projects: VisiOn for Personal Software, TK!Solver for Software Arts. Conflicts escalated into cross-lawsuits in 1984, paralyzing product development. The advanced version of VisiCalc arrived too late in an already conquered market.

The legacy of the first electronic spreadsheet transcends its commercial success. VisiCalc proved that a microcomputer could serve as a serious professional tool. It accelerated business computerization and permanently influenced user interface design. Microsoft Excel, launched in 1985 on Macintosh and in 1987 on Windows, adopted its fundamental principles while adding macro programming capabilities.

Technically, VisiCalc established practices that became standard: cross-development between development machine and target platform, code optimization for limited resources, the importance of an intuitive interface. But this ease of use sometimes conceals underlying complexity. Studies reveal that most spreadsheets contain errors, with approximately 3% of cells being incorrect on average. These mistakes can be costly in financial terms or strategic decisions.

The VisiCalc adventure also illustrates the evolution of the software industry. The collaboration model between technical developer and commercial publisher, initially perceived as an asset, strained relationships when the market evolved. The separation between creation and commercialization, which seemed natural, revealed its limitations in the face of technological changes.

Forty years after its creation, VisiCalc continues to inspire. Its principles—polished user interface, resource optimization, balance between innovation and concrete needs—remain relevant. Its story reminds us that software succeeds as much through its technical design as through its ability to solve a real problem.