EPOCH
EPOCH © 2025 by Stéphane Fosse - This book is published under the terms of the CC BY-SA 4.0 license
Chapter 5
1960
The Dawn of a New Technological Era
The 1960s transformed our relationship with machines. Between two blocs glaring at each other in an undeclared war, computing carved out a path that went beyond mere military concerns. The shock caused by Sputnik in 1957 still echoed through the corridors of Washington. The creation of ARPA, later renamed DARPA, in 1958 was America’s response: a research agency that would revolutionize our relationship with technology.
America then invested colossal sums in research. MIT, Stanford, Berkeley—these names became synonymous with innovation. In their laboratories, young men in white shirts and narrow ties imagined the future. The American government, worried about the rising Soviet power, spared no expense to maintain its technological edge.
Companies began equipping themselves with these enormous machines at dizzying prices. Banks, insurance companies, and heavy industries sought to tame the growing flood of data overwhelming their accounting departments. These first computers, true electronic cathedrals, transformed work organization. Punched cards gradually replaced handwritten ledgers.
AT&T modernized its network with electronic switching. This technical change, hardly spectacular for the general public, laid the first building block of a colossal edifice: the computer networks to come. Bell Labs engineers conducted the first digital transmission experiments.
The youth of the 1960s viewed the world differently. University campuses bubbled with new ideas. For these students, the computer was not merely a military or accounting tool. They saw it as a means of expression, a tool for freedom. This alternative vision of computing germinated in the minds of future microcomputer pioneers.
The space race stimulated innovation. How could a man be sent to the Moon without powerful computers? NASA demanded reliable computers capable of withstanding extreme conditions. Embedded computing was born from this constraint. Later, these advances were adapted to civilian needs. The Apollo program thus gave birth to innovations that subsequently extended beyond the space domain.
Europe, scarred by war, tried to keep pace. In France, Charles de Gaulle launched the Plan Calcul in 1966, dreaming of technological independence. The Compagnie Internationale pour l’Informatique was to embody this French ambition. The United Kingdom capitalized on the legacy of Alan Turing and the Bletchley Park computers. Germany, focused on its industrial reconstruction, automated its factories.
On the other side of the world, Japan awakened. Fujitsu and NEC, names unknown in the West, developed their first computers. The Land of the Rising Sun initiated its future dominance in electronics.
In 1964, IBM changed the rules of the game with its System/360. This family of mutually compatible computers created a de facto standard. Gone were the days when each machine spoke a different language. This standardization accelerated the adoption of computing in medium-sized companies. The software market found fertile ground to develop.
MIT bubbled with ideas. The Compatible Time-Sharing System allowed multiple people to use the same computer simultaneously, a concept that heralded interactive computing. A few years later, at Xerox PARC, researchers would invent the graphical interfaces that made computers accessible to everyone.
These technological transformations worried unions, who feared massive job losses. Questions that feel strikingly current in the age of artificial intelligence were already being asked. Society discovered that a technological revolution always brings social upheavals.
The computer fascinated the general public. Television showed these mysterious machines with blinking lights. The computer in Stanley Kubrick’s 2001: A Space Odyssey, HAL 9000, embodied both the promise and threat of artificial intelligence. Computing entered the collective imagination.
The late 1960s saw the birth of a new concern: personal data protection. The computerization of administrations raised unprecedented questions about privacy. The first legal texts framing these issues began to appear.
The 1960s thus saw computers leave laboratories to enter the real world. This decade defined the contours of our relationship with machines. The technical, economic, and cultural choices of this period continue to influence our digital present. In American universities of the 1960s, the history of the 21st century was being written. Computers weighed several tons and cost the equivalent of 4,000,000 euros today. Nowadays, everyone carries in their pocket a machine infinitely more powerful.
Quicksort
In 1961, Charles Antony Richard Hoare had no idea he had just designed one of computing’s most enduring algorithms. His Quicksort emerged in a particular context where computers were processing ever-growing volumes of data and traditional sorting methods were showing their limits. As a young British researcher, he proposed a radically different approach.
The idea seems simple at first glance. You choose an element from the array, the "pivot," then reorganize the other elements around it: smaller ones on one side, larger ones on the other. You then repeat the operation on each half until you obtain a completely sorted array. This "divide and conquer" method revolutionized the approach to sorting in computing.
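To make the mechanism concrete, here is a minimal sketch of this divide-and-conquer scheme, written in Python for readability rather than in any notation of the period; the function and variable names are purely illustrative.

```python
def quicksort(arr):
    """Sort a list by partitioning it around a pivot, then recursing on each side."""
    if len(arr) <= 1:                                  # nothing left to sort
        return arr
    pivot = arr[0]                                     # early versions simply took the first element
    smaller = [x for x in arr[1:] if x <= pivot]       # elements placed on the pivot's left
    larger = [x for x in arr[1:] if x > pivot]         # elements placed on its right
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([7, 2, 9, 4, 4, 1]))                   # [1, 2, 4, 4, 7, 9]
```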
His 1962 publication in the Computer Journal presented the first formal results. He demonstrated that his algorithm clearly outperformed existing methods. But Quicksort’s story was only beginning.
As early as 1965, R.S. Scowen spotted a weakness in the random choice of pivot. He developed Quickersort, which instead selected the middle element of the array. This modification changed the game for partially sorted arrays, which had posed problems for Hoare’s original version.
R.C. Singleton took a new step in 1969 with his median of three method. Instead of settling for a single element, he examined three values: the first element, the middle one, and the last. He then chose the median of these three as the pivot. This trick improved average performance by about 5%, a substantial gain for the time.
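Singleton’s selection step fits in a few lines; the following Python fragment is only a sketch of the pivot choice, not of his full 1969 routine.

```python
def median_of_three(arr, lo, hi):
    """Return the median of the first, middle and last elements of arr[lo..hi]."""
    mid = (lo + hi) // 2
    return sorted([arr[lo], arr[mid], arr[hi]])[1]     # the middle value of the three candidates

print(median_of_three([8, 3, 5, 1, 9], 0, 4))          # 8: the median of 8, 5 and 9
```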
Robert Sedgewick truly transformed understanding of the algorithm. Defended at Stanford in 1975, his doctoral thesis mathematically dissected Quicksort in all its aspects. His work, subsequently published in 1977 in Acta Informatica, established precise formulas for calculating execution time on real machines. He didn’t stop at theory and introduced loop unrolling, a technique that reduced the overhead of managing inner loops.
The following year, in 1978, Sedgewick proposed a new partitioning method that would become the reference. Two indices traversed the array in opposite directions, gradually approaching each other. This approach minimized the number of element swaps, speeding up execution.
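A minimal sketch of such two-index partitioning, in the same Python style; it captures the general idea (two cursors converging and swapping out-of-place elements) rather than Sedgewick’s exact 1978 code.

```python
def partition(arr, lo, hi):
    """Partition arr[lo..hi] around arr[lo]; return the pivot's final index."""
    pivot = arr[lo]
    i, j = lo + 1, hi
    while True:
        while i <= j and arr[i] <= pivot:   # advance from the left past small elements
            i += 1
        while i <= j and arr[j] >= pivot:   # advance from the right past large elements
            j -= 1
        if i > j:                           # the cursors have crossed: partitioning is done
            break
        arr[i], arr[j] = arr[j], arr[i]     # swap the two out-of-place elements
    arr[lo], arr[j] = arr[j], arr[lo]       # put the pivot in its final position
    return j
```

Because elements move only when both cursors have stopped, the number of swaps stays low, which is what sped up execution.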
The 1980s saw the birth of more sophisticated variants. Roger L. Wainwright explored a hybrid approach in 1985 with Bsort, which mixed Quicksort and bubble sort techniques. Two years later, he developed Qsorte, capable of detecting already sorted subsequences and thus avoiding unnecessary partitioning.
In 1993, Bentley and McIlroy’s work aimed to create an optimized version for the C language standard library. They designed an adaptive algorithm that changed strategy according to array size. Their most remarkable innovation remained the three-part fat partitioning, specifically designed to efficiently handle arrays containing many identical elements.
At the end of the 20th century and the beginning of the 21st, researchers adapted Quicksort to character strings, parallel architectures, and large databases. The algorithm became embedded in virtually all standard libraries of modern programming languages.
Beyond its performance, Quicksort captured minds through its elegant structure. It served as a textbook case for teaching divide-and-conquer techniques and probabilistic analysis of algorithms. Computer science students worldwide dissected its operation, learned to calculate its complexity, and understood why it worked so well on average.
Quicksort’s evolution reflects that of algorithmics as a whole. Early work focused on theoretical complexity, seeking to prove mathematically the efficiency of methods. Gradually, attention shifted toward practical performance.
More than sixty years after its birth, Quicksort remains fully relevant. Its average speed, low memory consumption, and reliability make it the default choice for many computing systems. Modern processors, new architectures, and ever-growing data volumes continue to inspire researchers who still adapt this venerable algorithm.
Bull Gamma 60
In 1960, Compagnie des Machines Bull, a flagship of French industry, unveiled an exceptional creation: the Gamma 60. This machine, designed to handle both commercial and scientific tasks, made its mark through bold technical innovations.
The Gamma 60 stood out through its radical approach to information processing. Its distinctive feature was its ability to carry out several functions or problems simultaneously, without the programmer having to describe this parallelism explicitly, a technical achievement that enabled maximum utilization of hardware resources.
The internal organization followed a forward-thinking modular logic. Its core operated around a main unit equipped with magnetic core memory, coupled with a superior control unit. The latter comprised two strategic elements: a program distributor and a data distributor. Various specialized units revolved around these: arithmetic, logic, comparison, translation, magnetic drum storage, not to mention input-output devices such as card and tape readers, punches, and printers.
The main memory utilized saturated magnetic core technology, with 32,768 “catenæ”—blocks of 24 bits representing 786,432 bits in total. This capacity allowed storage of 196,608 decimal digits or 131,072 alphabetic characters, with each catena accessible in just 11 microseconds. The Latin term “catena” meaning “chain” was deliberately chosen by the engineers to distance themselves from Anglo-Saxon computing vocabulary. This evidently did not catch on.
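The quoted capacities follow directly from the catena size; a quick arithmetic check in Python:

```python
catenae = 32_768                      # number of 24-bit memory blocks
bits = catenae * 24                   # 786,432 bits in total
decimal_digits = bits // 4            # 4-bit decimal code  -> 196,608 digits
characters = bits // 6                # 6-bit alphanumeric  -> 131,072 characters
print(bits, decimal_digits, characters)
```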
The Gamma 60’s coding system demonstrated remarkable adaptability. Numerical data used a 4-bit decimal code, while alphanumeric data employed 6 bits. Instructions appeared in pure binary. This variety of formats aimed to save space, encode efficiently, simplify conversion between media, and maximize transfer rates.
The arithmetic unit performed operations on both fixed-point and floating-point numbers. The numerical representation utilized two catenæ, structured with a sign bit, 40 bits to encode 10 decimal digits, and 7 bits dedicated to the decimal exponent, covering a range from 0 to 79. Depending on their complexity, instructions required 2 to 4 catenæ. For example, a three-address floating-point addition took 150 microseconds, compared to 88 microseconds for a single-address version.
The system’s peripherals displayed solid performance. The magnetic tape units used half-inch media with 8 information channels. With a density of 200 bits per inch and a running speed of 50 inches per second, they achieved throughputs of 20,000 decimal digits or 13,333 alphabetic characters per second. The readers processed 300 punched cards per minute, while flywheel printers produced 300 lines per minute, each capable of containing 120 characters from a palette of 60.
The Gamma 60’s true breakthrough lay in its control organization. Each functional element had its own program and its own control unit. Once configured, these modules operated in complete autonomy. Only the main memory and transfer buses were shared among the different units. The program distributor orchestrated the distribution of instructions and established the necessary locks to prevent conflicts between concurrent programs. In parallel, the data distributor managed access priorities to the transfer buses.
This sophisticated architecture enabled the side-by-side execution of highly heterogeneous tasks: payroll calculation, matrix inversion, multiple conversions from tape to printer, from cards to magnetic tape, or from punched ribbons to magnetic tape. And this with virtually no performance loss: the time lost in this simultaneous execution almost never exceeded a few percent compared to sequential processing.
Emerging from the French computer industry of the 20th century, the Gamma 60 was marketed starting in 1960, but its trajectory was unfortunately cut short by Bull’s financial difficulties, which led to the company’s acquisition by General Electric in 1964. This acquisition ended the independent development of the machine, but its concepts regarding parallel processing and modularity were innovative.
APL
At Harvard, Kenneth E. Iverson worked on his doctoral thesis under the supervision of Howard Aiken and Wassily Leontief. His research focused on the automatic solution of linear differential equations within an economic model. This initial foray into programming raised a question for him: how could algorithms be better represented and data manipulated?
The answer began to take shape in 1956 when Iverson taught in the brand-new Automatic Data Processing program that Aiken had just established at Harvard. He started developing a formal notation to describe and analyze data processing. This notation, which did not yet have a name, initially served as a pedagogical tool. Together with Frederick P. Brooks Jr., then a doctoral student, he wrote a book entitled “Automatic Data Processing” that formalized these early ideas.
In 1957, Iverson worked as a consultant at McKinsey & Company. His notation found concrete application there, acting as a communication language between designers and programmers working on complex systems. The teams discovered the value of this approach for clarifying architectures that were otherwise difficult to grasp.
Career circumstances brought Iverson to IBM in 1960. There he met Adin Falkoff, who immediately took interest in this notation for his work on associative memory systems. This collaboration would transform an academic idea into a new computing paradigm.
In 1962, Iverson published “A Programming Language”. The work formally presented his notation, but it remained confined to blackboards and paper. To use it on a computer, it had to be manually translated into Fortran or another existing language. A frustrating but necessary intermediate step.
Everything changed in 1964. Iverson and Falkoff decided to transform this theoretical notation into an operational programming language. Lawrence M. Breed joined them in 1965 to design a practical version of APL intended for interactive time-sharing systems. The first constraint to overcome concerned characters. The IBM 1050 terminal could only display 88 symbols. This limitation, initially perceived as a handicap, pushed the team to rationalize and generalize several aspects of the language. Ultimately, this constraint proved beneficial as it forced a salutary conceptual clarification.
By 1970, APL won over its first users. Thousands of people adopted it, including IBM’s research and design teams, staff at NASA’s Goddard Space Flight Center, and financial planners at major corporations. The language attracted users through its elegance and practicality, despite criticism from the established computing community who judged it too esoteric.
At Johns Hopkins University’s Applied Physics Laboratory, the F.T. McClure Computing Center pushed innovation further. The team developed special APL system software for the IBM 360/91 computer. These developments exploited the machine’s vector processing capabilities and radically transformed execution performance. In 1978, the center took another step forward with an APL system capable of handling very large applications through virtual memory technology.
APL’s true innovation lay in its treatment of arrays as primitive objects. This approach revolutionized the way algorithms were conceived. Programmers naturally thought in terms of vector and matrix processing rather than loops and conditions. The language distinguished itself through its remarkable syntactic simplicity, with only three possible types of statements: name assignment, branching, or neither. The semantic rules remained few and the definitions of primitive functions were independent of data representations.
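The contrast with loop-based thinking can be suggested with a small example; it is written in plain Python, not APL, so only the array-at-a-time spirit is illustrated, not the notation.

```python
values = [3, 1, 4, 1, 5]

# Loop-oriented style: explicit iteration and running total.
total = 0
for v in values:
    total += v * v

# Array-oriented style in the spirit of APL, where whole arrays are the
# primitive objects (APL itself would express this as a one-liner, +/V*2).
total_array_style = sum(v * v for v in values)

assert total == total_array_style == 52
```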
The 1970s saw APL progress despite limited support from major computer vendors. The introduction of the shared variable system in 1973 with APLSV was an innovation that solved several APL/360 system problems, particularly communication between concurrent programs and with external devices. The possibilities expanded considerably.
The following decade confirmed this expansion. APL was installed on virtually every computer on the market, including personal computers such as the IBM PC and the Tandy TRS-80. Many commercial time-sharing companies offered APL as a service. Some specialized in international financial databases updated in real time, exploiting the language’s ability to efficiently process large volumes of numerical data.
Unlike other major languages of the era, which moved away from traditional mathematical notation, APL continued to build upon it. It retained established symbols when possible and employed consistent mathematical terminology. This fidelity to mathematical conventions did not prevent occasional departures dictated by principles of simplicity and uniformity, such as adopting a single form for dyadic and monadic functions.
APL’s development illustrates the importance of clearly stated principles. First, simplicity, pursued through uniformity, generality, and conciseness. Then pragmatism, manifested by attention to real applications and hardware limitations. The fact that the language was developed for eight years without a machine installation provided a freedom to modify that would have been impossible for languages constrained by an existing user base.
Simula
In 1961 at the Norwegian Computing Center, Kristen Nygaard initiated the development of a new programming language intended for discrete event simulation. This project, which would become Simula, marked the beginning of a close collaboration with Ole-Johan Dahl, who joined him in 1962. The two researchers shared experience at the Norwegian Defence Research Establishment, but their backgrounds differed: Nygaard had focused on operations research and Monte Carlo simulations, while Dahl had developed system software and programming languages.
The development of Simula responded to the concrete need for appropriate tools to simulate complex systems such as neural networks, communication systems, or traffic flows. However, programming such simulations in machine language or Fortran proved difficult. What was missing was a coherent set of concepts for understanding and describing the structure and interactions in these complex systems.
The first version of the language, Simula I, was developed between 1962 and 1964 under contract with UNIVAC. The designers chose to base their language on ALGOL 60, particularly for its block structure and programming safety. The major innovation of Simula I lay in the introduction of the concept of processes, which represented entities capable of executing actions in quasi-parallel while carrying their own data. These processes could interact and be programmed to simulate the behavior of real-world objects.
The development of Simula I required a new memory management system based on linked lists, developed by Dahl, freeing itself from the “last in, first out” (LIFO) constraint of ALGOL 60 and allowing processes to exist independently of the block call structure. The compiler was operational by the end of 1964, and the language achieved some success in the simulation community. It was notably used on UNIVAC 1100, Burroughs B5500, and URAL 16 computers. However, Dahl and Nygaard realized that the concepts developed for Simula I could be useful beyond simulation, for general programming and system design.
This reflection led to the development of Simula 67, a complete redesign of the language integrating the concepts of class and inheritance. The inspiration came partly from Tony Hoare’s work on record manipulation, presented at a NATO summer school in 1966. The conceptual breakthrough occurred in January 1967 with the idea of “class prefixing”, meaning that one class could prefix another, thus creating an inheritance relationship where the second inherits the properties of the first.
Simula 67 introduced fundamental concepts that would have a lasting influence on programming: objects as data structures with their associated operators, classes as templates for creating these objects, inheritance for specializing classes, and virtual procedures allowing the modification of object behavior in subclasses. The language distinguished the internal view of an object (its variables and constants) from its external view (the procedures accessible remotely).
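A minimal sketch of these ideas in Python (Simula’s own syntax differs): a class as a template, a subclass inheriting from it, and a virtual procedure redefined by the subclass.

```python
class Vehicle:
    """A template ("class") from which objects are created."""
    def __init__(self, speed):
        self.speed = speed            # internal view: the object's own data

    def describe(self):               # external view: an operation others may call
        return f"a vehicle moving at {self.speed} km/h"

class Bus(Vehicle):                   # inheritance: Bus specializes Vehicle
    def __init__(self, speed, seats):
        super().__init__(speed)
        self.seats = seats

    def describe(self):               # the "virtual" procedure, redefined in the subclass
        return f"a bus with {self.seats} seats moving at {self.speed} km/h"

for obj in (Vehicle(90), Bus(50, 40)):
    print(obj.describe())             # each object answers with its own behavior
```

Calling describe on either object runs the version attached to that object’s class, the late-binding behavior that Simula 67’s virtual procedures made possible.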
The standardization of Simula 67 took place at the Simula Common Base Conference in June 1967, which established the Simula Standards Group to manage the language’s evolution. The first installations appeared as early as 1969 on Control Data computers, followed by versions for UNIVAC and IBM. However, the language’s dissemination was hindered by the high prices demanded by the Norwegian Computing Center, under pressure from the Norwegian Research Council, which considered that a programming language had only a three-to-five-year lifespan.
Alan Kay and his team at Xerox PARC drew inspiration from Simula to create Smalltalk, which popularized object-oriented programming in the 1970s. Simula’s concepts influenced the development of graphical interfaces, first on the Macintosh and then in Windows. The language inspired numerous successors, such as C++ by Bjarne Stroustrup, which widely disseminated object-oriented programming, Eiffel by Bertrand Meyer, and more recently Java.
The success of Simula’s ideas is partly explained by their origin in simulation modeling, where design in terms of cooperating objects is natural. The generality of the approach allowed its application to numerous aspects of software development. The concepts introduced by Dahl and Nygaard have become so ubiquitous that they now seem self-evident.
SNOBOL
The year 1962 witnessed the birth of a project in the corridors of Bell Labs in Whippany, New Jersey, that would change the way string processing was conceived. David J. Farber, Ralph E. Griswold, and Ivan P. Polonsky were working at the Programming Research Studies department, a unit that would soon relocate to Holmdel. Under Chester Lee’s direction, this laboratory explored diverse territories ranging from automata theory to graph analysis and associative processors. Their daily work confronted them with the frustrating limitations of the SCL language, created by Chester Lee himself. Disappointing performance, restricted data space, and convoluted syntax that turned simple operations into veritable obstacle courses finally convinced them that an alternative was necessary.
After some attempts with COMIT, a language dedicated to natural language processing, the trio decided to take matters into their own hands. The development of what would become SNOBOL began in utmost secrecy, unbeknownst to Chester Lee, who was precisely refining a new version of his SCL. This somewhat comical situation took an embarrassing turn in November 1962 when both projects came to light. The department could not afford to fund two competing developments, but the SNOBOL team had a considerable head start. Chester Lee, being a good sport, finally accepted the inevitable.
The naming of the new language was rather improvised. First called SCL7, then renamed SEXI for String EXpression Interpreter, it ultimately inherited the name SNOBOL with a deliberately whimsical pseudo-acronym: StriNg Oriented symBOlic Language. This joke would later embarrass its creators during official presentations, never imagining the impact their creation would have.
In early 1963, the first operational version ran on the IBM 7090. The team made the bold choice of pure interpretation, immediately dismissing the idea of a traditional compiler, which they deemed unsuitable for the intended specificities. This decision proved rewarding, as three weeks were sufficient to obtain a usable version, an achievement in which the team took legitimate pride and which proved that a programming language could come to life quickly without consuming considerable budgets.
SNOBOL disrupted established practices by treating character strings as unified entities rather than simple arrays. This approach was accompanied by sophisticated pattern recognition mechanisms with backtracking, opening unprecedented perspectives for text analysis. The success achieved at Bell Labs naturally led to enhanced versions. SNOBOL2 arrived as early as 1964 with built-in functions for manipulating strings and numbers. SNOBOL3 followed by introducing user-definable functions.
In 1966, SNOBOL4 revolutionized the approach. Patterns became data objects in their own right, multiple types appeared including arrays and tables, dynamic compilation made its entrance with unevaluated expressions. The use of an abstract machine language called SIL guaranteed remarkable portability.
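To give a feel for what “patterns as data objects” and backtracking mean in practice, here is a deliberately tiny sketch in Python rather than SNOBOL: patterns are ordinary values built from smaller ones, and alternatives that fail are simply abandoned in favor of the next one.

```python
# A pattern is a function: (text, position) -> list of positions where it can end.
def lit(s):
    """Match the literal string s."""
    return lambda text, pos: [pos + len(s)] if text.startswith(s, pos) else []

def alt(*patterns):
    """Try each alternative; branches that fail are abandoned (backtracking)."""
    return lambda text, pos: [end for p in patterns for end in p(text, pos)]

def seq(*patterns):
    """Match patterns one after another, exploring every combination of endpoints."""
    def match(text, pos):
        ends = [pos]
        for p in patterns:
            ends = [e2 for e1 in ends for e2 in p(text, e1)]
        return ends
    return match

# Patterns are ordinary values that can be stored, combined and reused.
greeting = seq(alt(lit("hello"), lit("hi")), lit(" "), lit("world"))
print(bool(greeting("hello world", 0)))   # True
print(bool(greeting("hi world", 0)))      # True
print(bool(greeting("hey world", 0)))     # False
```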
SNOBOL’s distribution philosophy broke from the practices of the time. Unlike established customs, the language spread freely, without any restrictions, source code included. This unusual generosity, accompanied by free technical support from Bell Labs, facilitated its massive adoption in universities and contributed greatly to its influence in the computing community.
Applications exceeded initial intentions. Designed to manipulate formulas and analyze graphs, SNOBOL found its way into document formatting and humanities computing. Compilation, rapid prototyping, and non-numeric data processing further expanded its scope. Its concepts of string manipulation and pattern recognition influenced numerous subsequent creations, while SNOBOL extensions enriched Fortran, PL/I, and many others.
Starting from a practical need of a few researchers, SNOBOL ended up transforming the approach to string processing and durably influencing the evolution of programming languages. It also demonstrates that freedom of experimentation and generosity in distribution, well before free software, constitute powerful drivers for technological innovation.
Spacewar!
In 1962, MIT had just received its new PDP-1 from Digital Equipment Corporation, a machine that stood apart from the usual computing behemoths. This computer featured a cathode ray tube display capable of showing images in real time, a technical feat that would prove game-changing.
Steve Russell, a young programmer passionate about science fiction, discovered this machine with wonder. An avid reader of Edward Elmer Smith’s space adventures and his Lensman series, he immediately envisioned recreating on this screen the cosmic battles that had so captivated his imagination. With a few accomplices, he embarked on creating what would become Spacewar!, unaware he was giving birth to the first interactive video game in history.
The principle they developed seemed deceptively simple: two spaceships battling in space, each controlled by a player. Rotation, acceleration, torpedo firing—the basic controls were enough to create fierce duels. But Russell and his team placed a star at the center of the screen whose gravitational pull disrupted trajectories and turned each match into a tactical challenge.
The PDP-1 wasn’t designed for this kind of exercise. Lacking multiplication and division instructions, it forced the programmers to exercise considerable ingenuity in calculating rotations and trajectories. This technical constraint, far from discouraging the team, stimulated their creativity. Alan Kotok and Bob Saunders also built custom control boxes, abandoning the computer’s crude panel switches for controls worthy of a space cockpit.
Peter Samson pushed the refinement even further. He programmed a star field of astronomical precision, faithfully reproducing the celestial vault. This addition, ironically dubbed Expensive Planetarium, transformed the screen into a window on the cosmos. The team soon added a “hyperspace” function, a panic button that teleported the distressed ship to a random point on the screen, at increasing risk of self-destruction with each use.
The distribution of Spacewar! followed a different rhythm than the legends tell. The 55 units that came off the production lines, about twenty of which were equipped with displays, made the PDP-1 a rare machine. The game first circulated within the closed circle of universities and research centers that possessed these exceptional computers. It took until the late 1960s and the arrival of new display models from IBM, DEC, HP, and Control Data Corporation for Spacewar! to truly begin spreading.
Each installation developed its own variants. At the University of Minnesota, Albert Kuhfeld devised the “Minnesota Panic Button” in 1967, a camouflage device that made ships invisible. At Cambridge, M.S. Peterson and John C. Viner created DUEL, a version where space transformed into a “viscous” environment that gradually slowed combatants. At the University of Illinois, on the PLATO system, Richard Blomme adapted the game with ASCII character graphics and invented a player matchmaking system of striking modernity.
These programming nights at MIT never took place during official hours. Students waited for the machines to free up from their research tasks before indulging in their experiments. This nocturnal clandestinity forged an unprecedented computing culture, where the pleasure of programming trumped institutional constraints. Spacewar! was an unexpected training ground where students who programmed it absorbed computing principles faster than with traditional academic methods.
The game soon attracted attention beyond the academic world. In 1971, Nolan Bushnell and Ted Dabney, after discovering Spacewar! at Stanford, created Computer Space, the first attempt at commercializing video games in arcades. Bill Pitts and Hugh Tuck followed an identical path with Galaxy Game, an adaptation installed at Stanford. These initiatives marked the early stirrings of an industry that would revolutionize entertainment.
Russell and his team’s approach broke with the utilitarian vision of computing. Their machines, initially designed for scientific calculation and administrative management, suddenly revealed their creative and recreational potential. Spacewar! demonstrated the importance of interactivity, graphical interface, and user-centered programming—concepts that would influence the development of personal computing.
The source code freely circulating among universities represented the spirit of sharing that would later characterize free software. This openness encouraged programmers to modify and enrich the game, creating a multitude of versions. DEC adopted Spacewar! as a test program for its computers: if the game ran correctly, the machine was declared operational.
In 1972, Stewart Brand organized the “Spacewar Olympics” at the Stanford Artificial Intelligence Laboratory, a tournament that thrust the game into the spotlight. Rolling Stone magazine covered the event, introducing the general public to this new form of electronic competition. Ralph Gorin had developed for the occasion a sophisticated version supporting up to five simultaneous players, enhanced with space mines and other tactical innovations.
ASCII
The ASCII code was born in the 1960s, when every computer manufacturer was inventing its own rules. At that time, exchanging data between machines was an ordeal. A character on an IBM computer was unreadable on a Control Data machine, and vice versa. This technological cacophony hindered any serious development of computing.
In 1961, the American Standards Association decided to end this chaos. Its X3 committee received the mission to create a universal standard. The project landed in the hands of subcommittee X3.2, then working group X3.2.4. John L. Little, an engineer at the National Bureau of Standards, took charge of this delicate undertaking. He had to convince manufacturers attached to their own systems, like UNIVAC with the military 6-bit FIELDATA code or IBM with the 8-bit EBCDIC it was preparing. But each one held firm to their positions.
After months of debate, the committee decided on a 7-bit code. This compromise made it possible to represent 128 different characters, sufficient to cover the Latin alphabet, numbers, and punctuation. A 6-bit code would have been too restrictive, while 8 bits would have increased transmission costs. The 7 bits also left the possibility of adding an eighth bit for error detection, a valuable function when telephone lines crackled.
ASCII was born in 1963, underwent a revision in 1967 before its final version in 1986. The designers reserved the first 32 positions for control characters, inherited from teleprinters, such as carriage return, line feed, and tabulation. The following 95 positions accommodated letters, numbers, and symbols. Nothing was left to chance in this organization. The numbers aligned to facilitate conversions, the letters followed alphabetical order, and punctuation found its logical place.
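That deliberate layout is still visible in the code values themselves; a quick illustration in Python:

```python
print(ord("0"), ord("9"))        # 48 57: the ten digits occupy a contiguous block
print(ord("7") - ord("0"))       # 7: a digit's value is its code minus 48, easing conversions
print(ord("A"), ord("Z"))        # 65 90: letters follow alphabetical order
print(ord("a") - ord("A"))       # 32: upper and lower case differ by a single bit
print(ord("\r"), ord("\n"))      # 13 10: carriage return and line feed, two of the 32 control codes
```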
In November 1968, the National Bureau of Standards adopted ASCII as the first federal information processing standard. President Lyndon Johnson signed this decision, making ASCII mandatory for all US government computer purchases. Since the government was the sector’s largest customer, this measure pushed the entire industry toward ASCII.
In reality, 7-bit ASCII is generally integrated into an 8-bit code: either the additional bit is set to zero, or it serves for parity checking. This approach works well with computers that process information in groups of 8 bits. The parity bit detects transmission errors: in an even parity system, it adjusts to maintain an even number of 1 bits. If an error occurs during transfer, the count changes and signals the problem.
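A minimal sketch of even parity applied to a 7-bit ASCII code, in Python; the helper name is illustrative.

```python
def add_even_parity(code7):
    """Return an 8-bit value whose top bit keeps the total number of 1 bits even."""
    parity = bin(code7).count("1") % 2     # 1 if the 7-bit code currently has an odd number of 1s
    return (parity << 7) | code7

byte = add_even_parity(ord("A"))           # 'A' = 0b1000001 has two 1 bits, so the parity bit is 0
print(bin(byte).count("1") % 2 == 0)       # True: the transmitted byte has an even number of 1 bits

corrupted = byte ^ 0b0000100               # a single bit flipped in transit
print(bin(corrupted).count("1") % 2 == 0)  # False: the receiver detects the error
```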
ASCII’s success inspired national variants. Canada developed its version for French characters, India created ISCII, Vietnam VISCII, Yugoslavia YUSCII. Each country adapted the code to its linguistic needs. In 1972, ASCII was integrated into the international standard ISO/IEC 646, which authorized certain national rearrangements. But ASCII had established itself as the de facto standard, sometimes creating confusion when other countries redefined certain positions.
The arrival of 16- and 32-bit computers opened the way for 8-bit extensions. These versions kept the original 128 ASCII characters and added 128 new positions. The IBM PC introduced code page 437, which replaced control characters with graphic symbols. Digital Equipment Corporation designed the multinational character set for its VT220 terminal, one of the first extensions designed for international use rather than graphics. The ISO/IEC 8859 standard eventually prevailed, derived from Digital’s system. Microsoft then developed Windows-1252, which added typographic symbols necessary for quality printing. These three systems—7-bit ASCII, ISO-8859-1, and Windows-1252—dominated computing until the late 2000s.
Unicode has since taken over from ASCII, capable of handling tens of thousands of characters. But Unicode’s designers had the wisdom to preserve compatibility: the 128 ASCII characters retain their original codes. ASCII is valid in UTF-8, the currently dominant encoding, making it a subset of Unicode.
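This backward compatibility is easy to verify: a string made only of ASCII characters produces the same bytes whether ASCII or UTF-8 encoding is requested.

```python
text = "EPOCH 1960"
print(text.encode("ascii") == text.encode("utf-8"))   # True: ASCII bytes are valid UTF-8 unchanged
```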
More than sixty years after its birth, ASCII continues to structure our digital exchanges. When everything collapses, when sophisticated formats fail, ASCII endures. The Internet rests on it, communication protocols use it, and our smartphones still decode its characters. This exceptional longevity testifies to a remarkable design, the fruit of a rare industrial consensus.
Bell 103 Modem
In 1963, AT&T commercialized the Bell 103, a modem model that would transform how computers communicate with each other. The device was simple yet groundbreaking: it enabled bidirectional transmission of digital data over ordinary telephone lines at the respectable speed of 300 bits per second.
That year, computer networks were few and far between. Computers, those costly behemoths, were just beginning to spread to a few large corporations. Each machine remained isolated in its data center, unable to communicate with its peers. How could digital information travel over a network designed for the human voice? Yet the switched telephone network represented the only truly extensive and immediately available communication infrastructure.
AT&T found the solution in a technique called frequency-shift keying, carried out by a device whose very name contracts MOdulation/DEModulation: the modem. The principle rests on the clever idea of transforming each bit into a musical tone. The Bell 103 translated 0s and 1s into distinct sound frequencies that telephone lines could carry. For transmission, the calling modem used 1070 Hz to represent a 0 and 1270 Hz to represent a 1. On reception, it listened on different frequencies: 2025 Hz for a 0 and 2225 Hz for a 1. This ingenious distribution allowed simultaneous communication in both directions, known as full duplex.
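A minimal sketch of this frequency-shift keying, assuming the frequency pairs quoted above; it only shows how each bit selects a tone, not a complete modem.

```python
import math

ORIGINATE = {0: 1070, 1: 1270}    # Hz, tones used by the calling modem
ANSWER = {0: 2025, 1: 2225}       # Hz, tones used by the answering modem (listed for completeness)

def fsk_samples(bits, tones, baud=300, sample_rate=8000):
    """Turn a bit sequence into audio samples, one sustained tone per bit."""
    samples_per_bit = sample_rate // baud
    out = []
    for i, bit in enumerate(bits):
        freq = tones[bit]
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            out.append(math.sin(2 * math.pi * freq * t))
    return out

signal = fsk_samples([1, 0, 1, 1], ORIGINATE)
print(len(signal))    # 104 samples, roughly 13 ms of audio for 4 bits at 300 bit/s
```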
The connection between two Bell 103 modems resembled a musical ritual. The calling device first intoned a continuous note at 1270 Hz. Its correspondent answered with a tone at 2225 Hz. This acoustic handshake synchronized the two machines and confirmed they spoke the same language. Once this sequence was completed, data could flow freely.
The Bell 103 displayed disarming simplicity. No mechanism verified the integrity of transmitted data. Information traveled bare, without protection against interference or transmission errors. This minimalist approach reflected the technical constraints of that period, but also a certain confidence in the quality of the telephone network.
AT&T simultaneously launched its DATA-PHONE service, which opened the doors of the public telephone network to corporate computers. Machines previously imprisoned in their climate-controlled rooms suddenly discovered the existence of their counterparts. Files traveled from site to site, databases became remotely accessible, distributed processing took its first steps.
The speed of 300 bits per second makes us smile today, but it revolutionized the practices of the time. Thirty characters transmitted each second were more than sufficient for many professional applications. The computer interface respected the RS-232 standard, the famous serial port that would equip computer systems for decades. This standardization accelerated the adoption of the Bell 103 by guaranteeing its compatibility with different computer brands.
The success of AT&T’s modem created a de facto standard in the telecommunications industry. Its modulation frequencies and transmission method set the example. Competitors aligned with these specifications to ensure interoperability. This technical convergence culminated in 1981 when the International Telecommunication Union formalized these parameters in its V.21 recommendation, eighteen years after the birth of the Bell 103.
This exceptional longevity was explained by the robustness of the chosen technical solution. Frequency-shift keying, though rudimentary, also worked on failing telephone lines. It tolerated distortions, withstood interference, and maintained communication when other more sophisticated techniques failed.
Using the Bell 103 sometimes required picturesque manipulations. To establish a connection, the operator dialed the phone number, waited for the answer tone, then delicately placed the handset on an acoustic coupler. This device, resembling a plastic cradle, ensured the conversion between sound and electrical signals. Later versions fortunately integrated automatic dialing, making the procedure less acrobatic.
Against all odds, the Bell 103 survived until the 1990s, valiantly resisting the arrival of faster modems. This remarkable persistence stemmed from different factors. Its technical simplicity made it equipment of unshakeable reliability. When line quality degraded, it continued to function where its successors gave up. Its modest speed was then an asset, providing an acceptable fallback solution. Its ubiquity in businesses created a powerful network effect.
After the Bell 103, modem manufacturers refined modulation techniques to increase throughput. They added error correction protocols to secure transmissions. Automatic dialing and answering simplified daily use. These successive improvements led to the high-speed modems of the 1990s, capable of reaching 56,000 bits per second on the same analog telephone lines as their ancestors.
Music V
The 1960s saw the birth of the wild idea of making computers sing. At Bell Labs, Max Mathews observed oscilloscopes tracing the curves of the human voice and thought that after all, if speech could be converted into numbers, why couldn’t the reverse be done? In 1963, he published an article in Science that would raise eyebrows: “The Digital Computer as a Musical Instrument”. The idea seemed absurd. Computers calculate, they don’t make music.
Yet the principle Mathews developed could be explained in a few lines. The computer generates a sequence of numbers which, transformed into electrical pulses and smoothed out, produce sounds. Nothing simpler in theory. In practice, it’s another story. To obtain something decent, you need at least 30,000 samples per second if you want to cover audible frequencies up to 15,000 Hz. The catch is that Bell Labs’ IBM 7090 only calculates 5,000 numbers per second for even moderately complex sounds. Mathews then found a workaround by first storing everything on magnetic tape, then playing it back at the right speed. This workaround worked perfectly, but it was only a beginning, because the real innovation came with MUSIC, and MUSIC V, a programming language entirely dedicated to sound creation.
The power of the innovation lay in the separation between instrument construction and their use. Mathews invented “unit generators”, small program blocks that each did one specific thing like oscillating, generating noise, adding signals. By combining them, you create “instrument units”, true virtual instruments. This modularity predated by several years the Moog and Buchla synthesizers that wouldn’t see the light of day until after 1964.
The composer therefore worked in two stages. First, they built their instruments by assembling these generators like mounting an electronic circuit. Then they wrote their score as lists of numbers: start time, duration, pitch, amplitude... Each note was a line of figures. This numerical approach had to win over musicians first, but it offered unprecedented precision on every sound parameter.
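A very rough sketch of these two stages in Python rather than MUSIC V’s own notation: one “unit generator” (a sine oscillator) serves as the instrument, and a score written as a plain list of numbers drives it. The figures are illustrative, not taken from the original system.

```python
import math

SAMPLE_RATE = 30_000   # samples per second, the order of magnitude cited above

def oscillator(freq, amp, duration):
    """Unit generator: a plain sine oscillator producing a block of samples."""
    n = int(duration * SAMPLE_RATE)
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def play_score(score):
    """The 'score' is just a list of numbers: start time, duration, frequency, amplitude."""
    total = max(start + dur for start, dur, _, _ in score)
    out = [0.0] * int(total * SAMPLE_RATE)
    for start, dur, freq, amp in score:
        offset = int(start * SAMPLE_RATE)
        for i, s in enumerate(oscillator(freq, amp, dur)):
            out[offset + i] += s          # notes may overlap; their samples are summed
    return out

score = [
    # start, duration, frequency (Hz), amplitude
    (0.0, 0.5, 440.0, 0.4),
    (0.5, 0.5, 660.0, 0.4),
]
samples = play_score(score)
print(len(samples))    # 30,000 samples: one second of sound, computed offline
```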
Jean-Claude Risset, who joined Mathews’ team at Bell Labs between 1964 and 1969, pushed the system to its limits. He analyzed trumpet sounds under an acoustic microscope and discovered that their distinctive timbre came from subtle correlations: the louder the note, the more high harmonics it contained. Armed with this observation, he programmed virtual trumpets that reproduced this behavior. The result fooled listeners. But he didn’t stop at imitation and explored the unprecedented possibilities offered by digital synthesis. He created sounds impossible in the physical world, like his famous “infinite glissando” that seemed to rise eternally while returning on itself, thus exploring the limits of auditory perception through the computer.
The research conducted with MUSIC V transformed the understanding of sound. It showed the importance of attack transients in timbre recognition, the role of random micro-variations in perceived richness. The computer was an acoustics laboratory where each parameter could be isolated, modified, and its effect on perception observed. However, MUSIC V suffered from crippling limitations. Real-time playing was impossible, you had to wait for the computer to finish its calculations before hearing the result. Programming with lists of numbers put off musicians accustomed to musical staves, and sound quality depended on a fine understanding of acoustics that few possessed. These obstacles didn’t prevent certain composers from adopting the tool between 1964 and 1969, using the catalog of synthesized sounds published by Jean-Claude Risset as a reference. MUSIC V demonstrated that a computer could become a full-fledged artistic creation instrument, not just a calculating machine.
MUSIC V’s modular architecture directly inspired analog and digital synthesizers. Its separation between instrument definition and musical control foreshadowed the MIDI standard of the 1980s. It established programming principles – modularity, abstraction, separation of concerns – that went far beyond the sound domain.
MUSIC V transformed the conception of computing by showing that the machine was also a means of expression rather than a simple automaton. Mathews’ intuition finds its culmination today in every modern audio application that carries within it a bit of MUSIC V’s DNA, that first digital cry uttered in the carpeted corridors of Bell Labs.
Sketchpad
In 1963, Ivan Sutherland worked in the laboratories at MIT on the problem of interaction between humans and machines beyond simply typing lines of code on a keyboard. His answer was called Sketchpad, and it would be a game-changer. For the first time, drawing directly on a computer screen with a light pen became possible.
The idea was straight out of science fiction. Engineers who wanted to draw an electrical circuit or a mechanism first had to describe it in writing, line by line, instruction by instruction. Needless to say, designing even the simplest geometric shape was an ordeal. Ivan Sutherland understood that this logic needed to be broken.
He installed his system on the Lincoln Laboratory’s TX-2 computer, a machine equipped with a 9-inch cathode ray tube display. The arsenal consisted of a light pen for drawing and selecting, push buttons to launch commands, switches to activate various functions. A few rotary potentiometers made it possible to rotate or enlarge what appeared on screen. But the real magic happened elsewhere. Unlike a traditional drawing fixed on paper, a Sketchpad drawing kept in memory the relationships between its various elements. The system knew that one line remained parallel to another, that a circle stayed tangent to a straight line, that an angle preserved its measurement. When you moved part of the drawing, the entire assembly automatically adjusted to respect these relationships. This technical feat relied on the ring data structure, a bold software architecture. Each stroke, each curve transformed into a block of information that stored its shape, position, and connections with the other elements composing the drawing. The elements communicated with each other, coordinated, maintained their geometric agreements.
Ivan Sutherland took the concept further with his master drawings. You drew a symbol once—an electrical resistor or a gear—and could then reproduce it at will in the drawing, with each copy linked to its original model. If you modified the base form, all its instances transformed instantly. Industrial draftsmen, accustomed to constantly redrawing the same components, discovered an incredible time-saver.
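A purely illustrative sketch of the master/instance idea in Python (nothing here reproduces Sketchpad’s actual ring structure): every placed copy keeps a reference to its master, so editing the master is enough to change them all.

```python
class MasterSymbol:
    """A shape drawn once and referenced by every copy placed in the drawing."""
    def __init__(self, strokes):
        self.strokes = strokes                 # list of (x, y) points making up the symbol

class Instance:
    """A placed copy: it stores only its position plus a reference to the master."""
    def __init__(self, master, x, y):
        self.master, self.x, self.y = master, x, y

    def render(self):
        # Each stroke of the master is drawn relative to the instance's position.
        return [(x + self.x, y + self.y) for x, y in self.master.strokes]

resistor = MasterSymbol([(0, 0), (4, 0)])          # a deliberately tiny "symbol"
copies = [Instance(resistor, 10, 0), Instance(resistor, 20, 5)]

resistor.strokes.append((4, 2))                    # edit the master once...
print([c.render() for c in copies])                # ...and every placed copy reflects it
```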
Applications multiplied quickly. Printed circuit board designers found their footing in drawing repetitive patterns. Mechanics simulated the operation of joints and connecting rods. Engineers calculated the distribution of forces in a structure like a bridge and immediately visualized the results. The interface established rules that would span decades. The light pen functioned in two modes: it captured coordinates when positioning a new element, and it designated existing objects when you wanted to modify them. The system often guessed the user’s intentions through its “pseudo-position”: the cursor automatically locked onto nearby points and lines, facilitating drawing precision. Sketchpad transformed the screen into an intelligent drawing board.
The limitations of the era nevertheless constrained certain ambitions. Constraint calculations sometimes slowed the system down on complex drawings. Screen resolution and storage capacity imposed their law. But as early as 1963, Timothy Johnson extended Sketchpad into the third dimension with Sketchpad III. Lawrence Roberts built upon these data structures for his work on object recognition in photographs, thus initiating computer vision.
Ivan Sutherland conceived of the computer not as a sophisticated calculator, but as an amplifier of human intelligence. This vision inspired the entire personal computing movement that followed. By showing that a natural conversation between humans and machines involved images as much as text, Sketchpad redrew the contours of computing. Sixty years later, our touchscreens and drawing software still bear the trace of this pioneering intuition, but so too do generative artificial intelligences with their vision capabilities.
GECOS
Bull was going through a turbulent period when General Electric acquired the company in 1964. This acquisition triggered the creation of GECOS – General Electric Comprehensive Operating System – designed to equip the GE-635 system. The name bears the mark of the industrial upheavals that characterized computing at that time. Six years later, Honeywell absorbed Bull General Electric and once again redrew the company’s boundaries.
These successive ownership changes were not mere financial maneuvering. They directly shaped the technical evolution of operating systems and determined future strategic directions. When CII and Honeywell Bull merged in 1976, the legacy became even more complex. The commercial offering presented a bewildering peculiarity where three distinct operating systems coexisted under the same GCOS banner.
This tripartition reflected the fragmented geographic and technical origins of the system. GCOS6 originated from Honeywell’s research laboratories in Boston, GCOS7 emerged from Bull’s engineering offices in Paris, while GCOS8 inherited the initial developments from General Electric in Phoenix. Three cities, three teams, three technical approaches that gave birth to systems bearing the same name but incapable of communicating with each other.
The mutual incompatibility of these three lines constituted a troubling commercial paradox. Despite their common designation, no application could migrate from one to another. The only possible exchanges occurred through telecommunications, as if these systems belonged to distinct technological universes. This situation perfectly illustrated the integration difficulties that arose when multiple technical legacies collided abruptly.
Bull’s architecture in the 1980s also included CTOS, a multi-station system developed by Convergent Technology, which joined the family after CIT Transac acquired the technology. DNS completed this heterogeneous ensemble by managing the lower layers of telecommunications: X21 and X25 protocols, Ethernet, transport, and various terminals. This system ran on a Datanet front-end shared between GCOS7 and GCOS8, while integrating directly under GCOS6. This diversity reflected Bull’s strategic dilemmas in the face of changes in the computing market. As early as 1982, Jacques Stern formulated the clear-eyed assessment that Bull would never reach the critical mass to compete globally with its proprietary systems alone, regardless of their technical quality. This realization triggered reflection on the balance between maintaining the existing base and preparing for the future.
Bull’s engineers then devised technical solutions to facilitate this delicate transition. A UNIX coprocessor was created for GCOS7 and GCOS8, functioning as a front-end tightly coupled to the proprietary processor. This hybrid architecture allowed the addition of modern functionalities to legacy applications such as relational databases, web interfaces, or contemporary development tools. For GCOS6, the approach differed with the creation of an emulator under AIX, IBM’s operating system, designed to recover the code from business applications that had survived the transition to the year 2000. The maintenance of these three distinct lines was organized according to a geography inherited from Bull’s turbulent history. Boston continued to develop GCOS6, Paris handled GCOS7, Phoenix steered the evolution of GCOS8. This international distribution testified to the company’s ability to preserve distinct centers of expertise despite economic turbulence and ownership changes.
The alliance with IBM in 1992 stemmed from a realistic analysis: Bull could no longer keep pace alone with the frenetic evolution of the UNIX market. The adoption of AIX was accompanied by Bull’s specific technical contributions, particularly in telecommunications and multiprocessor architectures. Facing American dominance of the sector, this cooperation illustrated a form of European specialization. The European Commission actually encouraged this approach by supporting the development of a strong European computing industry. The creation of the "Round Table of 12" brought together the continent’s major companies, with Bull at the forefront, to coordinate efforts against American giants. This political initiative reflected awareness of the strategic stakes linked to technological independence. In 2025, we can see that the results are mixed, and the same questions remain.
The persistence of GCOS in the contemporary computing landscape is surprising for its longevity. Despite Bull’s repeated transformations and accelerated technological evolution, these systems continue to run critical applications in numerous private and governmental organizations. This resilience is explained by the intrinsic technical quality of the systems, customer loyalty, and Bull’s ability to evolve its platforms while preserving compatibility.
The history of GECOS reveals the contradictions and limits of the European computing industry. Between technical legacy and innovation, between local specificities and international standards, between independence and cooperation, Bull navigated a complex environment while attempting to preserve its technological distinctiveness. This experience illuminates contemporary issues of digital sovereignty and technological diversity in an increasingly uniform computing world.
IBM System/360
In 1964, IBM bet $5 billion on a groundbreaking idea, equivalent to $50 billion today. Never before had a private company invested such sums in a technological project. At that time, IBM dominated the world of electromechanical computing machines with its famous 80-column punched card. But the company was facing a paradoxical crisis. Its product line included eight computers that were completely incompatible with each other. Each migration to a more powerful system forced customers to repurchase all their peripherals and rewrite their programs. This situation was untenable.
Thomas Watson Jr., who chaired IBM, entrusted this thorny issue to Thomas Vincent Learson. Learson appointed Bob O. Evans, head of systems development in Poughkeepsie. Evans quickly reached a radical and bold conclusion: all ongoing projects had to be abandoned and everything started from scratch. The decision sparked heated debates, particularly with Frederick P. Brooks Jr., who was leading the development of the 8000 series.
Evans’ team devised an audacious vision. Code written for the smallest model in the family had to run without modification on the most powerful processors. Printers, communication devices, and storage units had to work with any computer in the range. This universal compatibility required a complete rethinking of computing. Operating system software had to be created for a family of compatible processors, unprecedented peripherals had to be designed, and an integrated circuit technology called Solid Logic Technology had to be developed. IBM mobilized its worldwide resources. Teams of engineers worked simultaneously on six new processors and dozens of peripherals, both in America and Europe. Hundreds of programmers began writing millions of lines of code.
On April 7, 1964, IBM unveiled the System/360 with military precision. The company organized press conferences in 165 American cities and 14 countries. The presentation revealed six new processors and 44 peripherals. The reception exceeded all expectations: more than a thousand machines were ordered in four weeks. But the euphoria of orders gave way to two years of hell. The production of SLT modules posed the first problem. IBM built a factory of over 9,200 square meters in East Fishkill. Production soared from 500,000 modules in 1963 to 12 million in 1964, then 28 million the following year. Quality defects piled up, delaying deliveries by two to four months. In 1966, the factory produced 90 million modules, more than all other semiconductor manufacturers combined.
Logistics turned into a nightmare. Orders flooded in while delays accumulated in component manufacturing and software development. Tension mounted between sales teams and production departments. One amusing episode illustrates the period: the company ran out of copper circuit breakers, temporarily paralyzing all production.
Software development represented the third pitfall. Initially budgeted between $30 and $40 million, the final cost reached $500 million. The team had to create an operating system that worked across different processors and peripherals, a feat never before accomplished. The code grew from 100,000 lines for the IBM 1401 to over one million for the System/360.
By 1966, seven to eight thousand systems equipped companies, generating over $4 billion in new revenue. IBM hired 25,000 employees that year and built over 270,000 square meters of additional production space. Between 1964 and 1970, revenue doubled, rising from $3.2 to $7.5 billion.
The System/360 standardized the 8-bit byte and introduced numerous innovations in storage and memory. Its architecture immediately inspired competitors. RCA launched the Spectra 70 series, compatible with the System/360. The Soviet Union developed the Ryad, an almost exact copy. Gene Amdahl, who participated in the system’s design, founded Amdahl Corporation to produce compatible processors. An entire industry emerged around compatible peripherals with companies like Telex, Memorex, and Storage Technology.
Bank of America, NASA, and other major organizations chose the System/360 for applications ranging from banking transaction processing to scientific calculations. Thanks to its unified architecture, companies could now compare price and performance across an entire product range, simplifying their purchasing decisions and IT planning.
In 1985, President Ronald Reagan presented the National Medal of Technology to Erich Bloch, Fred Brooks, and Bob Evans for their contribution to the System/360. This official recognition underscored the project’s historical importance in the development of modern computing. Its architecture influenced all subsequent generations of IBM computers, from the System/370 to the System/390, up to today’s IBM zSeries servers. The principles of compatibility and standardization it established remain pillars of the contemporary computer industry.
BASIC
In 1964, in the corridors of Dartmouth College, New Hampshire, John G. Kemeny and Thomas E. Kurtz completed BASIC, an acronym for Beginner’s All-purpose Symbolic Instruction Code. This creation was born from legitimate frustration with the state of computing at the time. Giant, expensive computers required skills possessed by only a few initiates. To program, one first had to go to a punch card center where instructions were laboriously translated into punched cards, then wait in line at the computing center. A single error in the code meant starting all over again.
A former collaborator on the Manhattan Project and assistant to Albert Einstein, Kemeny had joined Kurtz at Dartmouth in 1956. Their first meeting had given birth to a collaboration that would last for decades. Before BASIC, they had attempted to create simplified languages such as Darsimco and DOPE, but these experiments had never really taken off. The idea that drove them remained to make computing accessible to the greatest number of people.
Having become director of the Kiewit Computing Center, Kurtz envisioned an innovative time-sharing system. This system required a language that was both simple and powerful. The two men set ambitious goals for a versatile language that was easy to learn, extensible, interactive, and above all hardware-independent. Drawing inspiration from FORTRAN and ALGOL, they designed a structure that favored common English words. Program lines were numbered, an innovation that greatly simplified editing: to delete a line, one simply indicated its number, and to modify it, one rewrote it with its number.
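A minimal Python sketch (not Dartmouth’s implementation) shows why this convention made editing so easy: the program is nothing more than a table keyed by line number, so retyping a number replaces that line, and typing the bare number deletes it.

```python
# A sketch of line-numbered editing: the program is a mapping from line number
# to statement text. Statements here are purely illustrative.
program = {}

def enter(line: str) -> None:
    """Handle one line typed at the terminal."""
    number, _, statement = line.partition(" ")
    number = int(number)
    if statement:                 # "20 PRINT X" creates or replaces line 20
        program[number] = statement
    else:                         # "20" alone deletes line 20
        program.pop(number, None)

enter("10 LET X = 1")
enter("20 PRINT X")
enter("10 LET X = 2")             # retyping line 10 overwrites it
enter("20")                       # typing the bare number removes line 20
print(sorted(program.items()))    # [(10, 'LET X = 2')]
```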
On May 1st, 1964, at four o’clock in the morning, two BASIC programs ran simultaneously on Dartmouth College’s General Electric 225. This date marked the official birth of the language. Kemeny and Kurtz then made a decision that would influence the entire history of computing: they declined to patent their invention. This generosity would transform BASIC into a worldwide phenomenon.
General Electric, which had provided the computer to Dartmouth, was among the first users of the new language. Around 1970, the company marketed machines equipped with the fifth version of BASIC, without waiting for the next version in preparation. This rush marked the beginning of the language’s fragmentation. Kemeny and Kurtz published the sixth version of their BASIC in 1971, but other developers were already creating their own dialects.
Gordon Eubanks, the future CEO of Symantec, developed BASIC-E in the mid-1970s, using an innovative approach in which instructions were first compiled into intermediate code and then executed by a run-time interpreter. This technique foreshadowed the one that Java would use decades later with byte code. Unlike BASIC’s creators, Eubanks protected his next version, CBASIC, which he marketed through his company Compiler Systems.
The proliferation of BASIC versions concerned ANSI, which recognized in 1974 the need for standardization. A committee formed to define two standards: minimal BASIC and standard BASIC. The process stretched over years, long after the language had conquered the planet. ANSI finally published its specifications in 1978 for minimal BASIC and in 1987 for standard BASIC. ISO adopted these standards in 1984 and 1991 respectively.
The year 1975 saw the birth of Bob Albrecht and Dennis Allison’s TinyBASIC, a marvel of miniaturization that functioned with only 2 KB of RAM. William H. Gates III and Paul Allen created their version for the MITS Altair. Constrained by memory limitations, they chose an interpreter rather than a compiler. This solution, which fit in 4 KB, presented the unexpected advantage of greatly facilitated debugging through the interactivity it provided. Kemeny and Kurtz, initially skeptical of this approach, would eventually acknowledge its role in spreading their language.
The explosion of personal computers in the late 1970s transformed the BASIC landscape. Each manufacturer developed its own version: Radio Shack with its Level I BASIC for the TRS-80, Apple with the Integer BASIC of the Apple II in 1977, Commodore with PET BASIC, Atari with its BASIC in 1978. The list grew with Timex Sinclair 1000 BASIC, Texas Instruments’ TI-BASIC, and the Commodore BASIC of the VIC-20 and C64.
In the early 1980s, IBM disrupted the market by integrating an interpreted BASIC into its PC’s ROM. This built-in BASIC could be extended by loading BASICA, which shipped on every PC-DOS diskette. Its MS-DOS equivalent, GW-BASIC, quickly dominated the compatible PC market. Microsoft took a new step in 1984 with the commercialization of BASCOM, its first BASIC compiler.
Evolution accelerated with QuickBASIC in 1985, which reached version 4.5 in 1988. The language was enriched with structured syntax, sub-functions, customizable data types, and multi-file management. In 1990, Microsoft BASIC Professional Development System 7.1 crossed the symbolic barrier of 64 KB of memory.
The arrival of Windows once again transformed BASIC with Visual Basic, specially designed for graphical applications. This version, whose “object-oriented” character was debated among programmers, achieved phenomenal success. By the late 1990s, estimates suggested that 90% of Windows software was developed with Visual Basic.
The history of BASIC ultimately tells the story of technological democratization. This language born in a university laboratory accompanied the transformation of computing from a science reserved for experts to a tool accessible to all. It trained entire generations of programmers.
PL/I
Computing in the early 1960s resembled an archipelago of isolated communities. On one side, scientists operated their IBM 7090s, coded in FORTRAN, and gathered at SHARE meetings—a user group for IBM mainframes. On the other, the business world worked on IBM 7080s, swore by COBOL, and organized its own circles within G.U.I.D.E—another IBM computer user group. A few specialized machines like the IBM 7750 had their dedicated languages, such as JOVIAL (Jules’ Own Version of the International Algorithmic Language).
This geographical partition of computing territory was beginning to show its limits. Researchers no longer wanted to wait hours at their terminals to retrieve a simple numerical value; they demanded structured, readable reports worthy of their work. Their data files were swelling and now rivaled the volumes handled by businesses. On the commercial side, marketing departments were discovering the joys of statistical analysis and demanding floating-point calculations on their sales data. IT managers found themselves juggling two operating systems, two teams of programmers who literally didn’t speak the same language, and budgets that were spiraling out of control.
IBM had already laid the groundwork for a solution with the System/360 and OS/360, this family of machines and operating systems designed to unify everyone’s needs. What remained was to create a programming language that would follow the same philosophy. In October 1963, IBM and SHARE created the Advanced Language Development committee. Its mission was to define a universal language that would end this computing Tower of Babel.
The assembled team blended backgrounds and skills. Hans Berg from Lockheed brought his training expertise, Jim Cox from Union Carbide knew FORTRAN inside and out, Bruce Rosenblatt from Standard Oil chaired the discussions. On the IBM side, C. W. Medlock was the expert in optimized FORTRAN compilers, Bernice Weitzenhoffer moved effortlessly between FORTRAN and COBOL, George Radin led the project with his scientific programming experience. Meetings followed one after another every two weeks, three or four days at a stretch, mainly between New York and Los Angeles. Members outside IBM kept their full-time positions, with this new language remaining a side mission. Originally planned for December 1963, the specifications freeze ultimately slipped to February 1964. Deadlines were tight.
The idea of extending FORTRAN was abandoned. Its syntax didn’t fit modern terminals, its declarations organized by type rather than by identifier clashed with conventions, its column-major array storage puzzled business application programmers. The necessary extensions would have so transformed FORTRAN that maintaining any compatibility lost its meaning.
The language was initially called NPL—New Programming Language—in the grand tradition of descriptive acronyms. In 1965, a conflict with the British National Physical Laboratory forced IBM to rename it PL/I (sometimes written PL/1). The first official presentation took place in March 1964 in San Francisco, before the SHARE assembly. Reactions were mixed. Some praised its comprehensiveness and attention to programmers of all levels. Others pointed to its complexity and redundancies. One SHARE member compared it to “a Swiss Army knife with a hundred blades,” a metaphor that spoke volumes about the project’s ambitions.
Compiler development landed in IBM’s Hursley laboratories in England. John Fairclough’s team took charge of stabilizing the language and making it practically usable. Without the Hursley programmers, PL/I would have remained a beautiful theory. They built its compilers, participated in its standardization, and transformed an ambitious specification into an everyday working tool.
PL/I shook up established conventions. It introduced asynchronous tasks, defined ON conditions to specify code blocks to execute during particular events, authorized recursive procedures with static and automatic storage. Its palette of data types was impressive: arithmetic or character strings, decimal or binary, fixed or floating, real or complex. Arrays accepted arbitrary dimensions with freely chosen bounds. Input-output received particular care, representing more than 10% of the language definition. PL/I married FORTRAN’s formatting capabilities with certain COBOL functionalities for record processing, while exploiting OS/360’s new possibilities like asynchronous input-output.
One of PL/I’s most controversial peculiarities resided in its declaration handling. The language didn’t require all attributes to be explicitly declared. In case of omission, implicit types were assigned according to variable usage in the program. Missing attributes received default values depending on those explicitly mentioned. This flexibility responded to the constraints of an era when each program submission involved waiting and delays.
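The flavor of those default rules can be sketched in a few lines; the version below keeps only the commonly cited convention that undeclared identifiers beginning with I through N defaulted to fixed binary and everything else to float decimal, and ignores the many other attributes and contexts the real language handled.

```python
# A simplified sketch of PL/I-style default attributes for undeclared names.
# The two attribute strings are illustrative of the commonly cited defaults.
def default_attributes(identifier: str) -> str:
    first = identifier[0].upper()
    if "I" <= first <= "N":            # FORTRAN-style convention kept by PL/I
        return "FIXED BINARY (15,0)"
    return "FLOAT DECIMAL (6)"

print(default_attributes("INDEX"))     # FIXED BINARY (15,0)
print(default_attributes("RATE"))      # FLOAT DECIMAL (6)
```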
PL/I’s evolution testifies to a constant formalization effort. The first 1964 documents left numerous gray areas and ambiguities. Ray Larner and John Nicholls developed an initial formal semantic definition called Universal Language Definition II. The Vienna laboratory then took over with an even more rigorous definition. PL/I thus joined the select circle of the best-defined languages of its time.
Success was forthcoming. In the 1970s, IBM sold more PL/I compiler licenses than FORTRAN for System/370. COBOL certainly kept first place for business applications, but PL/I had found its niche. The language and its variants were used to program entire operating systems like MULTICS and OS/VS2 Release 2, as well as numerous compilers. Educational subsets facilitated its teaching in universities.
PL/I favored object code efficiency and ease of use over theoretical elegance. Its development relied on programmers’ concrete experience rather than abstract mathematical considerations. This philosophy contrasts with current concerns like extensibility or formal program verification.
IBM CP-40
The history of virtual machines began in the laboratories of MIT, where Fernando Corbató and his team were working on the Compatible Time-Sharing System. This system transformed computer usage by introducing time-sharing, but its implementation required delicate hardware modifications. IBM, the supplier of MIT’s machines, maintained an on-site liaison office to support these technical adaptations.
In 1964, IBM unveiled its new System/360 line, intended to unify enterprise computing. However, the MIT customers discovered with disappointment the absence of address translation capabilities in these new machines. This function seemed essential to them for developing their time-sharing systems. Faced with this setback, IBM responded by creating the Cambridge Scientific Center under the leadership of Norm Rasmussen. The objective was to catch up in the field of time-sharing.
Robert Creasy, a veteran of the CTSS project, took charge of the new CP-40 project. With Les Comeau, he conceived at the end of 1964 a radically different approach: simulating several independent System/360 machines on a single physical computer. This idea appealed through its conceptual simplicity. Each user had their own virtual machine, completely isolated from the others. Gone were the risks of interference between programs.
Realizing this concept required technical prowess. The CSC team modified an IBM System/360 model 40 by grafting onto it a device called the Cambridge Address Translator. Bruce Lindquist and Rex Seeber designed this mechanism around a 64-word associative memory. The challenge consisted of translating addresses without slowing program execution. Mission accomplished: the translation occurred without noticeable performance loss.
The CP-40 system architecture rested on a clear separation between two components. On one side, the Control Program managed the creation and administration of virtual machines. On the other, the Cambridge Monitor System provided a simple operating environment for these virtual machines. This division of labor constituted a remarkable architectural innovation. John Harmon led the development of CMS with his team including Lyndalee Korn and Ron Brennan. They drew heavily on the CTSS interface to create a user-friendly system. Paradoxically, CMS was single-user, and all the complexity of resource sharing fell to the Control Program.
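That division of labor can be caricatured in a few lines of Python. The classes, names, and memory sizes below are purely illustrative, not CP-40’s structures, but they capture the split between a control program that owns the real machine and the isolated virtual machines it hands out.

```python
# Conceptual sketch: the control program allocates isolated virtual machines;
# whatever runs inside one machine cannot see another. Entirely illustrative.
class VirtualMachine:
    def __init__(self, owner: str, memory_kb: int):
        self.owner = owner
        self.memory = bytearray(memory_kb * 1024)   # private, isolated memory

    def run(self, program: str) -> str:
        return f"{self.owner}'s machine ran {program}"

class ControlProgram:
    """Owns the real hardware and hands each user an isolated virtual machine."""
    def __init__(self):
        self.machines = {}                           # user name -> VirtualMachine

    def logon(self, user: str) -> VirtualMachine:
        vm = VirtualMachine(user, memory_kb=256)
        self.machines[user] = vm
        return vm

cp = ControlProgram()
vm1 = cp.logon("alice")
vm2 = cp.logon("bob")
print(vm1.run("CMS"))      # a simple single-user environment in one machine
print(vm2.run("OS/360"))   # a full operating system in another, unaware of the first
```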
January 1967 saw the production deployment of CP-40 and CMS. The system immediately demonstrated its ability to run OS/360 in a virtual machine. This backward compatibility would prove decisive for the technology’s future adoption. The team also discovered the pitfalls of virtual memory. The thrashing phenomenon surprised them: when paging became excessive, performance collapsed abruptly.
The success of CP-40 paved the way for CP-67, adapted to the System/360 model 67, which natively integrated address translation. Dick Bayles, Dick Meyer, and Harit Nanavati led this evolution. CP-67 brought substantial improvements such as dynamic memory management and flexibility in virtual machine configuration.
May 1968 marked the first distribution of CP-67 to eight pilot sites. In June, the system became available as a Type III program, a status reserved for contributions from IBM employees. The success exceeded all expectations. Former employees created two commercial companies that sold time-sharing services based on the CP/CMS platform: National CSS and Interactive Data Corporation.
The versatility of the virtual machine concept appealed to different communities. Operating system developers found in it a secure test environment. End users appreciated the coexistence of different operating systems on the same machine. Computer centers exploited this capability to facilitate their technological migrations.
August 1972 saw the birth of VM/370 as part of the System/370 Advanced Functions. The Burlington team, enriched by Dick Newson, Carl Young, and Dave Tuttle, pushed innovation further. VM/370 introduced the ability to run VM under itself, a considerable simplification for development and testing.
The separation between hardware management and system services still structures our contemporary hypervisors. The hardware interface as an isolation boundary is a central paradigm. This transparent simulation of hardware resources flourishes today in cloud computing.
Electronic Mail
Sixty years ago, the business world operated at the pace of paper. Executives dictated their letters to their secretaries, who mastered the delicate art of shorthand before transcribing on typewriters, with that characteristic sound of keys striking the ink ribbon. Each message followed the rigid protocol of the official letterhead, codified courtesy formulas, and carbon copies for the archives. The mail then departed into the maze of the postal system.
As early as 1965, at MIT, Tom Van Vleck and Noel Morris tinkered with something unprecedented. On their laboratory’s time-sharing computer, they created a command called mail that allowed users to send messages stored in virtual mailboxes. The idea seemed trivial, but it already disrupted habits: no more need to physically move to transmit information, no more waiting, no more paper lying around.
Six years later, Ray Tomlinson accomplished the gesture that would change the history of human communication. This engineer at Bolt Beranek and Newman was working on ARPANET when he decided to adapt the sndmsg program to enable distant computers to communicate. He needed a way to distinguish the user from their machine, and it was almost by chance that he chose the @ symbol on his keyboard. This at sign, which accountants used to note “so many units at such a price,” bridged the gap between digital identity and the computer’s physical address. First message sent: “QWERTYUIOP” or something close. Tomlinson forgot the exact content, which shows he didn’t grasp the magnitude of his invention.
The contagion effect was immediate on ARPANET. Researchers and engineers discovered they could exchange messages instantaneously, free of time or geographic constraints. No more interrupted phone calls, no more letters taking a week to cross the country. Larry Roberts, who directed ARPA’s IPTO office, quickly understood the value of this technology. In 1972, he developed rd, a program that sorted and classified received messages. Barry Wessler improved it with nrd, and John Vittal took a decisive step in 1974 with MSG, which introduced “reply” and “forward” functions. Electronic mail ceased being a simple transmission system to become a true conversation tool.
Technology followed usage. Early message formats resembled makeshift solutions, with each system having its own conventions. RFC 561 of 1973 attempted to impose order by defining the From:, Subject:, and Date: fields. This standardization work continued laboriously until RFC 822 of 1982, which defined the foundations of the format still used today.
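Those few fields remain the skeleton of every message today. A short Python sketch using the standard email library (the addresses are placeholders) shows the shape that RFC 822 stabilized: named headers followed by a plain-text body.

```python
# A sketch of the header fields RFC 561 began standardizing, built with
# Python's standard email package. Addresses and content are illustrative.
from email.message import EmailMessage
from email.utils import formatdate

msg = EmailMessage()
msg["From"] = "tomlinson@bbn-host.example"
msg["To"] = "roberts@arpa-office.example"
msg["Subject"] = "Network mail"
msg["Date"] = formatdate()
msg.set_content("Messages are plain text preceded by a small set of named headers.")

print(msg.as_string())
```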
Jon Postel, a legendary figure of the Internet, tackled the problem of message transport. Early systems used FTP, but this protocol proved unsuitable for the growing volume of exchanges. He designed Simple Mail Transfer Protocol in 1982, a protocol of elegant simplicity that would span decades without aging. This exceptional longevity in the computing universe testifies to the quality of its design.
The 1980s saw a proliferation of incompatible networks. UUCP connected UNIX machines, CSnet served universities, Bitnet linked computing centers. Each network had its addressing rules, conventions, and limitations. System administrators juggled with bewildering addresses like ihnp4!ucbvax!user@host, veritable computer roadmaps that specified the exact path of the message. This complexity disappeared with the arrival of the domain name system in 1984 and MX records two years later. The user@domain.com address then established itself as the universal standard.
In 1981, Eric Allman developed sendmail, and Dave Crocker created MMDF in 1982. These sophisticated programs handled message routing between heterogeneous systems, but ran into a limitation: electronic mail could only transport ASCII text. Impossible to send a photo, formatted document, or presentation. Users worked around the problem by encoding binary files, but the manipulation was tricky. The MIME standard of 1992 finally freed electronic mail from these constraints by allowing multimedia content. This evolution transformed messaging into a true professional exchange platform.
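What MIME added can be seen with the same standard library: the message becomes multipart, and binary content is encoded so that it survives transport designed for ASCII text. The file name and attachment bytes below are illustrative.

```python
# A sketch of a MIME multipart message: a text part plus a binary attachment.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report with attachment"
msg.set_content("The figures are attached.")           # text/plain part
msg.add_attachment(b"\x89PNG...binary data...",        # base64-encoded by MIME
                   maintype="image", subtype="png",
                   filename="chart.png")

print(msg["Content-Type"])    # multipart/mixed, with a boundary parameter
```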
From a few hundred pioneer users on ARPANET, electronic mail now brings together more than 4.37 billion people. This massive adoption is explained by remarkably judicious technical choices: simple and robust protocols, open standards, and decentralized architecture. The system adapted to evolving uses without ever breaking its fundamental compatibility. The emergence of instant messaging and social networks has not succeeded in completely dethroning this sixty-year-old technology, which continues to route more than 350 billion messages across the planet each day in 2024.
Multics
In 1962, Joseph Carl Robnett Licklider, then in charge of ARPA’s computer research program, launched the idea for the Multics operating system. The contract was awarded in August 1964. Multics extended the work of the Compatible Time-Sharing System, developed at MIT under Fernando Corbató. Computers still operated as batch processing machines, with programs executing one after another, and users sometimes waiting entire days for their results. The first time-sharing systems struggled to manage more than a few simultaneous programs, forced to offload inactive ones onto magnetic tape.
The project brought together an unusual alliance consisting of MIT for research, General Electric for hardware, and Bell Labs for software development. GE secured the contract after IBM rejected the proposed paging and segmentation concepts. This convergence of academic and industrial talent assembled some of the greatest minds in computing of that era.
The technical ambition exceeded anything previously attempted. Multics integrated virtual memory, segmentation-based protection, hierarchical file system, shared-memory multiprocessing, security, and online reconfiguration. The system was among the first to be developed primarily in a high-level language and to support multiple programming languages. MacLisp and troff found their origins in Multics, as did much of the modern UNIX command line.
Implementation proved challenging. The GE 645 hardware arrived late, and Multics only achieved self-hosting in 1968. In 1969, DARPA threatened to pull the plug, pushing Bell Labs toward the exit. This withdrawal would give birth to UNIX, created by former Multics developers: Kenneth Lane Thompson, Dennis MacAlistair Ritchie, Douglas McIlroy, and Joseph Francis Ossanna.
Honeywell, which acquired General Electric’s computer business in 1970, commercialized Multics from 1973 until 1985 at an initial price of $7 million. The customer base consisted of the US Air Force, universities such as the University of Southwestern Louisiana and the French university system, and major corporations like General Motors and Ford. In total, approximately 80 licenses were sold. The last Multics system, operated by the Canadian Department of National Defence, was shut down in 2000.
The system architecture rested on two pillars: processes and segments. Processes provided execution contexts, while segments stored code, data, and I/O devices. These segments were organized in a directory hierarchy, ancestors of current file systems. A process’s protection domain delimited accessible segments and authorized operations.
Regarding security, Multics introduced protection rings, a hierarchical structure ranging from ring 0, the most privileged, to higher, less powerful rings. The GE 645 provided for 64 rings, but only 8 were actually implemented. The Multics supervisor, confined to rings 0 and 1, could only be modified by code executing in these privileged rings. This approach still influences modern processors, which typically use two levels: supervisor (kernel) and user.
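The underlying rule is simple enough to sketch: an access succeeds only if the caller runs in a ring at least as privileged (numerically lower or equal) as the one the target requires. The Python below is a conceptual illustration with invented segment names, not Multics’s actual mechanism.

```python
# Conceptual sketch of ring-based protection. Ring numbers and segments
# here are illustrative only.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    required_ring: int   # highest (least privileged) ring allowed to access it

def can_access(caller_ring: int, segment: Segment) -> bool:
    return caller_ring <= segment.required_ring

supervisor = Segment("supervisor_code", required_ring=1)
user_prog  = Segment("user_program", required_ring=7)

print(can_access(0, supervisor))   # True: ring 0 may touch the supervisor
print(can_access(4, supervisor))   # False: user rings cannot modify it
print(can_access(4, user_prog))    # True
```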
In 1974, Paul Karger and Roger Schell of the US Air Force conducted a vulnerability analysis that revealed several flaws. A hardware vulnerability bypassed access checks in certain indirect addressing cases. Software flaws appeared, particularly related to the master mode of execution in user rings, a modification introduced to improve performance. These discoveries illustrated the difficulty of maintaining security in a complex system under performance optimization pressures.
Several factors explain Multics’s relative commercial failure. Integrated circuits evolved at an exponential rate, transforming the computing landscape. When Multics reached the market in 1973, it perfectly addressed the problems of 1964, but the world had changed. The system cost too much, depended too heavily on proprietary architecture, and was too focused on centralized shared computing while decentralized departmental computing was taking off. Meanwhile, Digital Equipment Corporation was marketing entry-level versions of the PDP-11, and the 8080 microprocessor was running the first versions of CP/M as early as 1974.
DEC PDP-8
In 1965, Digital Equipment Corporation surprised the computing world by introducing the PDP-8, a computer that fit in half a cabinet. The era of steel monsters occupying entire rooms seemed over. This machine inaugurated the minicomputer era and radically transformed our vision of what a computer system should be.
The story began three years earlier with the PDP-5. DEC experimented with the input-output bus, an innovative concept. Gone were the complex radial connections that required anticipating each piece of equipment and its wiring to a central point. The bus allowed the progressive addition of peripherals, an unprecedented flexibility. But the PDP-5 was a sketch, and DEC’s engineers saw further ahead.
The PDP-8 was born from this ambition. Logic technologies were evolving and promised spectacular speed gains. Ferrite core memories improved storage performance by reducing cycle time from 6 to 1.6 microseconds. Made possible by falling logic component costs, the program counter, previously stored in memory, migrated to hardware, drastically reducing execution times.
The PDP-8 architecture stood out for its simplicity. This 12-bit single-address computer did not seek complexity to impress. It embraced its limitations of reduced arithmetic calculations and modest primary memory. These constraints were its strengths in certain domains, particularly industrial process control and laboratory applications that found in it an ideal partner. Controlling a pulse height analyzer or a spectrum analyzer did not require the power of a giant; precision and reliability were sufficient.
The market responded enthusiastically. Approximately 50,000 units sold until 1979, not counting the CMOS versions that followed. For a computer not produced by IBM, this figure was remarkable. The PDP-8’s size partly explained this success. Placed on a laboratory bench or integrated into larger equipment, it adapted to space constraints. This flexibility actually gave birth to the OEM market, where manufacturers purchased computers to incorporate into their systems.
The PDP-8’s evolution tells the story of successive technologies. In 1966, the PDP-8/S attempted to conquer the low-end market with a serial architecture where data was processed bit by bit sequentially. The savings came at a steep price as performance collapsed and sales disappointed. DEC learned its lesson. In 1968, the PDP-8/I exploited medium-scale integrated circuits to surpass the original model while costing a third less. The PDP-8/L targeted customers content with simple configurations. Then came the PDP-8/E and its Omnibus architecture. This unified 144-pin bus revolutionized modularity. The processor, which occupied 100 cards in the PDP-5, now required only three 8 × 10-inch cards. Miniaturization accelerated and redefined possibilities. The PDP-8/M further compacted the concept to attract cramped installations. In 1975, the PDP-8/A leveraged hexagonal cards and semiconductor memories. Its microprogrammed processor fit on a single card, a remarkable technical achievement.
The culmination came in 1976 when Intersil etched the first PDP-8 processor on a single chip, using CMOS technology. This feat came with minimal power consumption. This miniaturization enabled systems of unthinkable compactness just a few years earlier, such as the VT78 video terminal.
The PDP-8’s input-output bus became a reference and its architectural simplicity inspired generations of engineers. This machine proved that a computer could reconcile simplicity, power, and economy. It opened a new market between costly systems and specialized calculators.
CDC 6600
In 1957, William C. Norris left his position at Sperry Rand with a handful of colleagues to set up shop in a disused Minneapolis warehouse. Their plan was to establish Control Data Corporation to build the world’s fastest computers. This audacious ambition took shape when Seymour Cray joined the team in 1958. Six years later, their CDC 6600 revolutionized computing by becoming the first true supercomputer.
IBM dominated the market with its 7094, the benchmark machine competing against CDC’s 1604. These computers represented the pinnacle of available computing power, but their legacy architectures constrained any major evolution. Maintaining backward compatibility imposed technical compromises that limited performance gains. Transistors were reaching their physical limits and could no longer drive machines to new heights on their own.
Seymour Cray and his team recognized that a breakthrough was needed. James Thornton, one of the engineers, attended a seminar at UCLA on high-performance machines where one thing became clear: they all exploited some form of parallelism. The team discovered that a quarter of computation time was lost in signal transmission between logic gates. This observation would shape their approach with the arrival of silicon transistors from Fairchild Semiconductor. These new components enabled unprecedented density and shortened critical paths. Circuit boards could now be mounted back-to-back, with all components housed inside. This organization solved at a stroke the cooling problems posed by increased circuit concentration.
The CDC 6600 broke the mold with its ten specialized functional units working in parallel under the coordination of a central control unit. With this architecture, different parts of a program executed simultaneously, an unprecedented technical feat. The central processor focused on scientific calculations while ten peripheral processors handled input-output and system tasks.
The approach remarkably anticipated future RISC processors. Memory was organized into pages protected by an advanced system. A sophisticated hardware scoreboard managed dependencies between instructions and optimized their parallel execution. These innovations propelled the machine toward one million instructions per second, a performance never before achieved.
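A schematic sketch of scoreboard-style issue logic, far simpler than the 6600’s, conveys the principle: an instruction is dispatched only when a functional unit of the right kind is free and none of its registers conflicts with a result still in flight. Unit and register names below are illustrative.

```python
# Simplified scoreboard sketch: structural, read-after-write, and
# write-after-write hazards block issue. Completion handling (which frees
# the unit and the destination register) is omitted for brevity.
busy_units = {"add": False, "multiply": False}    # illustrative unit names
pending_writes: set[str] = set()                  # registers with results in flight

def try_issue(unit: str, dest: str, sources: list[str]) -> bool:
    blocked = (busy_units[unit]                              # structural hazard
               or any(r in pending_writes for r in sources)  # RAW hazard
               or dest in pending_writes)                     # WAW hazard
    if blocked:
        return False          # the instruction waits its turn
    busy_units[unit] = True
    pending_writes.add(dest)
    return True

print(try_issue("multiply", "X6", ["X1", "X2"]))   # True: issued
print(try_issue("add", "X7", ["X6", "X3"]))        # False: must wait for X6
```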
IBM responded to the 6600 announcement by promising its own supercomputer. This counterattack temporarily stalled orders, as customers preferred to wait for the giant’s response. Problem: IBM’s machine existed only on paper. CDC filed an antitrust lawsuit and promptly launched its 7600, openly defying IBM, which this time remained silent. The 7600 was one of the most powerful computers ever built. Five years later, CDC won its lawsuit.
Scientific laboratories, universities, and military research centers massively adopted the CDC 6600. Its unmatched power advanced meteorology, nuclear physics, and aerodynamic design. SCOPE, the operating system developed specifically for the machine, provided advanced resource management and multiprogramming capabilities. Its modular design, innovative use of parallelism, and sophisticated resource management inspired future developments. Modern processors adopted its principles in instruction-level parallelism and the organization of specialized functional units. Control Data established itself as the leader in scientific computing and developed the CYBER series as a continuation of the 6600.
In 1972, Seymour Cray left CDC to found Cray Research. The company invested $300,000 in this new venture, maintaining close ties with its former architect. The emergence of microprocessors and mini-supercomputers in the 1980s gradually eroded CDC’s dominant position. Competition from DEC, IBM, and Cray Research intensified. CDC accumulated losses, pushing William Norris out in 1986. Despite several restructurings, the company split its computing operations in 1992, marking the end of an era.
The CDC 6600 remains a major technical innovation that established the foundations of high-performance computing. Its elegant design demonstrated that a carefully conceived architecture could transcend the apparent limitations of components. This machine ushered in the age of supercomputers and set performance standards that durably guided the evolution of scientific computing.
ELIZA
In 1966, Joseph Weizenbaum published an article in the “Communications of the ACM”. In it, he described ELIZA, a program capable of conversing in natural language with a human being. At MIT, where the researcher worked, this achievement created a sensation in artificial intelligence circles.
The name chosen for this program was not random. Eliza Doolittle, the heroine of George Bernard Shaw’s Pygmalion, embodies the transformation of a simple flower seller into a distinguished lady through the lessons of Professor Higgins. Joseph Weizenbaum saw in this metaphor a machine that learns to “speak” under a teacher’s supervision while retaining its limitations—a perfect illustration of his program.
On MIT’s MAC time-sharing system, ELIZA ran using the MAD-SLIP language on an IBM 7094. Users connected from remote terminals and began what resembled genuine conversation. They typed their sentences, the program responded. The illusion worked. Yet the mechanisms animating ELIZA were disconcertingly simple. The program scrutinized each sentence searching for keywords. Once a term was identified, it applied predefined transformation rules. If no keyword was detected, it drew from a repertoire of generic responses or reused a previous transformation.
This approach revealed its full subtlety in the hierarchization of keywords. When a sentence contained “I” and “everyone”, the program favored the latter term, because excessive generalizations often deserve attention. The transformation rules broke sentences down according to preset patterns to reconstitute them into coherent responses.
ELIZA’s modular architecture constituted its true innovation. The conversational rules lived in scripts separate from the program’s core. This design allowed behavioral modifications without touching the source code. Scripts thus emerged in German and Welsh, proof of the system’s flexibility.
The script that would make ELIZA famous simulated a Rogerian psychotherapist. Carl Rogers had developed a person-centered therapy where the therapist reformulates the patient’s statements without claiming to understand everything. This approach perfectly matched the limited capabilities of Joseph Weizenbaum’s program. Faced with “I am sad”, ELIZA responded “I am sorry to hear that you are sad” or “Can you explain what makes you sad?” The program maintained the illusion through reformulation, open-ended questions, and requests for clarification.
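A toy Python version of the mechanism, with a two-rule script invented for the example rather than taken from Weizenbaum’s DOCTOR script, shows how little is needed to produce such replies: ranked keywords, decomposition patterns, reassembly templates, and default responses.

```python
# A toy sketch of ELIZA's mechanism. The rules below are illustrative,
# not Weizenbaum's script.
import re
import random

RULES = [
    # (priority, pattern, responses reusing the captured fragment)
    (2, re.compile(r"\beveryone\b", re.I),
        ["Can you think of anyone in particular?"]),
    (1, re.compile(r"\bI am (.+)", re.I),
        ["I am sorry to hear that you are {0}.",
         "Can you explain what makes you {0}?"]),
]
DEFAULTS = ["Please go on.", "Tell me more."]

def respond(sentence: str) -> str:
    for _, pattern, responses in sorted(RULES, key=lambda r: -r[0]):
        match = pattern.search(sentence)
        if match:
            fragment = match.group(1) if match.groups() else ""
            return random.choice(responses).format(fragment)
    return random.choice(DEFAULTS)

print(respond("I am sad"))              # reformulates the user's own words
print(respond("Everyone ignores me"))   # the higher-priority keyword wins
```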
Joseph Weizenbaum did not anticipate the impact of his creation. Users attributed genuine understanding to ELIZA. Some requested private sessions with the program. This reaction troubled the researcher, who realized that humans projected their own interpretations onto the machine’s mechanical responses. They themselves constructed the meaning of the exchange. This spontaneous anthropomorphization still resonates in our relationships with voice assistants and other chatbots. ELIZA revealed how elementary mechanisms generate the illusion of intelligence, raising lasting questions about the nature of understanding.
The program’s technical weaknesses are glaringly obvious: no memory of past exchanges, no model of the interlocutor, no knowledge of the real world. Responses emerged from syntactic transformations blind to meaning. But these limitations do not erase ELIZA’s historical importance, which established the foundations of automatic language processing: keyword recognition, identification of minimal contexts, selection of appropriate transformations, generation of default responses.
Beyond the technical aspects, ELIZA crystallized our reflection on artificial intelligence. Joseph Weizenbaum observed this paradox where, once the mechanisms had been explained, the “magic” evaporated and the program revealed itself as a collection of trivial procedures. Yet the effect on users persisted. ELIZA teaches us that the perception of intelligence in our exchanges with machines depends as much on our projections as on the actual capabilities of the systems.
The questions raised by Joseph Weizenbaum’s program about the nature of understanding, the boundaries of simulation, and the ethical stakes of artificial intelligence have lost none of their relevance. ELIZA remains that foundational moment when computing began to explore the ambiguous territories between mechanical language processing and genuine intelligence.
Information Management System
In 1966, in the engineering offices of Rockwell’s Space Division, engineers working on the Apollo program juggled thousands of technical drawings, with each modification to a component triggering a cascade of verifications across the entire space capsule. There was no room for error when it came to sending men to the Moon.
Uri Berman, an IBM consultant seconded to Rockwell, observed this organized chaos. Together with Pete Nordyke, they envisioned a bold solution that completely separated data management from application code. Their prototype, which they named DATE (Disk Applications in a Teleprocessing Environment), radically transformed how engineers accessed technical information. Gone were the endless trips to paper archives—a terminal connected to an IBM 7010 now sufficed to check the status of a drawing and its ongoing modifications. This separation between data and processing formed the foundation of Data Language/One (DL/I), an interface that freed programmers from the physical constraints of disk storage. The idea gained ground that an application should not reinvent the wheel of file management. DL/I standardized these operations, improved data integrity, and simplified program maintenance.
The project took on new scope in 1966 with the arrival of IBM System/360 computers. Dr. Robert R. Brown assumed technical leadership, supported by consultant Bob Patrick. The team set itself the goal of creating software that met IBM Type 2 standards without modifying the OS/360 operating system. This seemingly limiting constraint would actually ensure the longevity of their creation.
The collaboration expanded beyond Rockwell. Caterpillar Tractor joined the venture, bringing its specific industrial management needs. About twenty programmers worked on what became IMS/360, integrating DL/I into a more ambitious architecture. On August 14, 1968, the first READY message from IMS appeared on a Rockwell terminal. The Apollo mission could now count on a new computer companion.
The IMS architecture rested on three pillars: the database manager (IMS DB), the transaction manager (IMS TM), and common services. This modularity allowed à la carte usage. Some clients exploited only the database component (DBCTL), while others focused on transaction processing (DCCTL). Everyone found what they needed according to their specific requirements.
An organizational innovation accompanied the technical deployment. Dan Gilbert became Rockwell’s first Database Manager in history, foreshadowing the database administrator profession. This position centralized responsibility for data organization and integrity, breaking with the previous dispersion of these tasks across different departments. IMS’s hierarchical approach contrasted with the relational world that would later dominate. This structure, born from the needs of managing space bills of materials, excelled at predefined access patterns. Relationships between data were established in advance, creating optimized paths. This philosophy differed radically from relational databases, where relationships are constructed dynamically during query execution.
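The difference is easy to sketch: in the hierarchical model, a query walks a path fixed when the database was designed. The Python below uses invented segment names and parts, but mirrors an ASSEMBLY → PART → DRAWING traversal of the kind described above.

```python
# A sketch of the hierarchical idea behind DL/I: data is reached by walking a
# predefined parent-child path, not by joining tables at query time.
# Segment names and bill-of-materials content are illustrative.
capsule = {
    "ASSEMBLY": {
        "name": "Command module",
        "PART": [
            {"number": "P-100", "DRAWING": [{"id": "D-1", "rev": "C"}]},
            {"number": "P-200", "DRAWING": [{"id": "D-7", "rev": "A"}]},
        ],
    }
}

def drawings_for_part(root: dict, part_number: str) -> list[dict]:
    """Follow the fixed ASSEMBLY -> PART -> DRAWING path."""
    for part in root["ASSEMBLY"]["PART"]:
        if part["number"] == part_number:
            return part["DRAWING"]
    return []

print(drawings_for_part(capsule, "P-200"))   # [{'id': 'D-7', 'rev': 'A'}]
```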
In 1988, twenty years after its inception, IMS equipped 7,000 installations worldwide. In the 2000s, the system processed more than 50 billion transactions daily and managed over 15 million gigabytes of data for more than 200 million users. These volumes testified to its robustness and capacity for evolution.
Technological evolution did not spare IMS, which adapted without renouncing its principles. The introduction of OTMA (Open Transaction Manager Access) opened the system to TCP/IP applications, breaking the isolation of IBM’s proprietary network architectures. Java, XML, and web services gradually found their place, inscribing IMS into the service-oriented architecture era. Integration with Microsoft .NET or SAP further expanded its ecosystem.
IBM’s Parallel Sysplex gave IMS a second youth. Sharing data and messages between multiple instances optimized resource utilization while strengthening availability. Sophisticated disaster recovery mechanisms made it a preferred choice for applications where downtime is not an option.
Today, IMS remains a pillar of transaction processing in the financial, manufacturing, and government sectors. This longevity in a field marked by frantic innovation cycles validates the initial architectural choices. Separation of concerns, data integrity, and processing performance remain central priorities, more than fifty years after the first technical drawings of the Apollo capsule.
LOGO
In 1966, four researchers at BBN (Bolt, Beranek and Newman) in Cambridge, Massachusetts, embarked on a project that would transform learning: Seymour Papert, Wallace Feurzeig, Daniel Bobrow, and Cynthia Solomon created LOGO.
Seymour Papert’s background explains much. A trained mathematician, he spent five years in Geneva with Swiss psychologist Jean Piaget, absorbing his theories on cognitive development. Piaget rejected the idea that learning consists of filling an empty brain. For him, children actively construct their knowledge. In 1964, Papert joined the MIT Artificial Intelligence Laboratory, directed by Marvin Minsky. The two men shared a common obsession with understanding human thought by observing how machines learn.
The name LOGO comes from the Greek logos, meaning “word.” Wallace Feurzeig proposed this name because the language initially focused on manipulating words and sentences. The creators started from the simple observation that children naturally play with words, while mathematics often terrifies them. Technically, LOGO descends directly from LISP, the artificial intelligence language invented by John McCarthy. It inherited interactive evaluation, recursion, and list manipulation, while keeping its syntax accessible to young people.
The first full-scale experiment took place in 1969 at Muzzey Junior High School. Twelve terminals connected to a PDP-1 computer hosted 12-year-old students programming in LOGO. They created sentence generators, coded games like tic-tac-toe, and developed educational programs. Also in 1969, Seymour Papert and Cynthia Solomon left BBN to found the LOGO group at MIT’s AI laboratory.
The following year brought the innovation that would make LOGO famous: the “turtle.” This small robot moved across the floor, drawing a line behind it. Then a graphical version appeared on screen. Turtle geometry transformed mathematics learning: children “played turtle” with their bodies, establishing a direct bridge between physical movement and mathematical abstraction.
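That turtle survives almost unchanged in Python’s standard turtle module. The classic first exercise, discovering a square by walking and turning four times, still reads much like LOGO (the script needs a graphical display to run).

```python
# Drawing a square with Python's standard turtle module, the direct
# descendant of LOGO's turtle geometry.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # walk 100 steps, leaving a trace
    t.left(90)       # turn 90 degrees to the left

turtle.done()        # keep the window open until it is closed
```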
The 1970s saw the birth of the “microworlds” concept: learning environments designed around specific domains. Language and geometry were joined by new territories of exploration. Andrea diSessa developed the “dynaturtle” to explore Newtonian physics. The sprites on the TI-99/4 enabled animation. Radia Perlman pushed innovation further by creating interfaces for very young children: button boxes and the slot machine made programming accessible without knowing how to read or write.
In 1980, with the advent of personal computers, Seymour Papert published Mindstorms, a book that presented his pedagogical vision, where the computer transformed into an instrument that helps children think about their own thinking. Success was immediate. Commercial integrations multiplied: Terrapin and LCSI released their versions for the Apple II in 1981, soon followed by adaptations for practically all microcomputers of this generation.
The language evolved and diversified over the following decades. Object orientation entered LOGO with Object LOGO and TLC LOGO. LCSI’s LogoWriter combined programming and word processing. StarLogo introduced parallel programming and complex system simulation. Variants proliferated and over 300 dialects emerged.
LOGO pedagogy rests on principles that disrupted traditional teaching. Debugging ceased being the shameful correction of errors to become a natural process of learning through trial and error. Developing procedures taught children to break down problems into named sub-problems. Students chose their projects, teachers guided them.
These pedagogical innovations sparked fierce academic debates. Some studies, notably those by Pea and Kurland, failed to demonstrate the announced benefits on general problem-solving abilities. These results sometimes cooled institutional enthusiasm for LOGO, but these studies faced severe criticism. Their methodology and restrictive vision of learning limited their scope.
LOGO traversed the decades. Scratch, born at the MIT Media Lab, adopted the construction-based philosophy in a colorful block programming environment. NetLogo continued the multi-agent simulation tradition. App Inventor transposed LOGO’s principles to mobile programming. LOGO proposed a radical vision of education. Computing was not an additional subject to teach but a tool for rethinking learning. By giving children control of technology, it offered an alternative to lecture-based teaching. LOGO is a programming language and a philosophy that places the child at the heart of their learning.
Read Only Memory
In the world of electronic components, ROM stands apart. While most memories hasten to forget their data the moment power is cut, ROM carefully preserves its information, like a digital safe. This characteristic earns it its name: Read Only Memory.
The story begins in the 1950s, when engineers discovered they needed to store data permanently. The first ROMs emerged from this necessity, entirely hand-wired in factories. Each bit was literally etched into the material, frozen when the silicon took shape. These mask ROMs, as their creators named them, suited mass production perfectly. Manufacturing a hundred thousand identical copies of a program became economically viable. But woe to anyone who discovered an error after fabrication: no turning back.
This rigidity eventually became burdensome. The 1960s saw the birth of PROM, a ROM that users could program themselves. The idea of purchasing a blank chip filled with binary ones, then customizing it with specialized equipment, proved appealing. The operation remained irreversible, which earned these components the unflattering nickname of “one-time programmable.” One typo, and into the trash it went.
The following decade brought a breath of freedom with EPROM. For the first time, developers could correct their mistakes. The secret lay in a small quartz window positioned atop the chip. Exposing the component to intense ultraviolet light completely erased its contents, like an electronic eraser. True, the operation took time and cost more, but what a revolution for those who needed to test and retest their code!
EEPROM arrived in the 1980s with the promise of eliminating UV sessions. This time, erasure was electrical, with precision that modified each byte individually. Designers appreciated this information surgery, never mind if write times dragged on and cost discouraged intensive use.
Flash memory, appearing toward the end of the 1980s, finally combined all the sought-after advantages: respectable density, affordable cost, decent read speed, and electrical reprogramming. Erasure only worked on entire blocks, typically from a few hundred bytes to several kilobytes, which was its sole constraint. A technical detail that didn’t prevent its commercial triumph.
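The constraint is easy to simulate: programming can only clear bits, so restoring any bit to 1 means erasing the whole block it belongs to. The block size and contents below are illustrative.

```python
# A sketch of flash semantics: byte-level programming clears bits,
# erasure works only on whole blocks and resets every bit to 1.
BLOCK_SIZE = 4096                               # illustrative block size in bytes
flash = bytearray(b"\xFF" * (BLOCK_SIZE * 4))   # four erased blocks, all bits set

def program(offset: int, data: bytes) -> None:
    """Programming can only turn 1 bits into 0 bits."""
    for i, byte in enumerate(data):
        flash[offset + i] &= byte

def erase_block(block: int) -> None:
    """Erasure restores every bit of one whole block to 1."""
    start = block * BLOCK_SIZE
    flash[start:start + BLOCK_SIZE] = b"\xFF" * BLOCK_SIZE

program(0, b"boot code v1")
erase_block(0)        # rewriting even a single byte means wiping its whole block
program(0, b"boot code v2")
```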
This evolution transformed our relationship with machines. ROM populates our electronic devices. It houses computer BIOS, those primitive instructions that wake the machine and verify its proper functioning. Calculators, printers, mobile phones—all these familiar objects conceal ROM that preserves their basic programs. Its nature naturally protects against unauthorized modifications, forming a silent but effective bulwark against intrusions.
Industry found in this technology a balance between permanence and flexibility. This trajectory illustrates a trend in computing: the constant pursuit of adaptability. Laboratories continue exploring new avenues such as resistive memories (ReRAM), phase-change technologies, or quantum approaches. This research sketches the contours of future components that could marry ROM’s traditional robustness with unprecedented performance. Some researchers also mention intrinsic protection against ransomware. The ROM adventure seems far from reaching its epilogue.
NLS
On December 9, 1968, Douglas Engelbart took the stage in a San Francisco auditorium packed with 2,000 spectators. For an hour and a half, this quiet man would revolutionize everyone’s vision of computing. His NLS (oN-Line System) unveiled what personal computers would become in the following decades. The mouse, hypertext, videoconferencing, real-time collaborative editing... so many inventions that seemed to appear out of nowhere but which, in reality, had been brewing in his mind for more than twenty years.
In the mid-1940s, Engelbart, an electronics technician in the Navy, came across an article by Vannevar Bush entitled “As We May Think.” Bush described the Memex, an imaginary machine capable of organizing and linking all human knowledge. This reading struck Engelbart like a revelation. Back in civilian life, he pondered this idea. Faced with the growing complexity of the world and increasingly difficult problems, he was convinced that tools needed to be created to multiply our capacities for thinking and collaboration.
In 1957, he joined the Stanford Research Institute as a consultant. There, he matured his vision of computing that augments human intelligence rather than replacing it. Unlike proponents of artificial intelligence who dreamed of thinking machines, he bet on human-machine symbiosis. In 1962, he put his ideas down in a paper that would make history: “Augmenting Human Intellect: A Conceptual Framework.” This manifesto earned him support from ARPA, NASA, and other organizations. The following year, he founded the Augmentation Research Center within SRI. The team he assembled would give birth to the NLS system. Bill English joined the adventure and co-invented with Engelbart that small wooden object that would revolutionize the human-machine interface: the mouse. Two orthogonal wheels in a housing, which advantageously replaced the light pen of the era. Other attempts emerged: devices controlled by the knee, by the head, but nothing matched the simplicity of the mouse. The team also developed a chord keyboard that transformed the user into a computing pianist, composing commands with one hand.
NLS transformed how people worked with information. Gone were programs that ran in batch mode; in their place came direct interaction. Documents became structured, linked to each other, modified in real time by multiple users simultaneously. It became possible to view them from different angles, search them, annotate them. Messaging integrated naturally into the system, as did videoconferencing. The screen was divided into windows, objects manipulated directly.
The 1968 demonstration orchestrated all of this with remarkable showmanship. Engelbart sat at his console, connected to a computer 50 kilometers away. His screen and image were projected onto a 6-by-5-meter wall. Behind the scenes, Bill English coordinated a technical team juggling video, sound, and remote connections. The technology of the time made the feat all the more impressive. On stage, he edited text as if it were natural, created hierarchical lists, inserted graphics. Fifty kilometers away, Bill Paxton collaborated with him live. Their two cursors danced together on the giant screen. The video conversation with the Menlo Park team astonished the audience. These actions seemed like magic at the time.
This “Mother of All Demos” planted the seeds of our modern computing. In the audience, Alan Kay and Charles Irby watched attentively. They would later join Xerox PARC, where the first commercial graphical interfaces would be born. Yet immediate success eluded its creator. Renamed Augment in 1978, NLS remained a tool for specialists. The system required expensive hardware and expert users. For Engelbart, power took precedence over ease of use. He compared learning his system to learning a musical instrument. This philosophy clashed with the emergence of personal computers that prioritized simplicity. NLS’s centralized architecture aged poorly in the face of this change. The team dispersed, its members leaving to develop more accessible interfaces elsewhere.
NLS was only a first step toward transforming human collaboration. Engelbart envisioned a process of bootstrapping where each generation of tools spawned the next, in a virtuous spiral between technical progress and the evolution of practices. This legendary demonstration embodied a certain vision of computing as an amplifier of collective intelligence. Our current multimedia practices stem from this performance, blending television, telecommunications, and computing. Above all, it reminds us that the potential for augmenting our cognitive capacities remains largely untapped.
Engelbart would continue until his death in 2013 to explore these questions, notably at the Bootstrap Institute he created in 1989. The Turing Award he received in 1997 crowned a visionary work.
ARPANET
In American laboratories of the early 1960s, computers lived as islands. These expensive machines operated in isolation, each jealously guarding its data and programs. But at MIT, Joseph Carl Robnett Licklider saw further. In 1962, this visionary man sketched in his memoranda the outline of a “galactic network” that would radically transform our relationship with machines. He imagined a world where anyone could instantly tap into computing resources scattered across the country. His dream finally found concrete ground when he took charge of ARPA’s computer research program, this Pentagon agency specialized in cutting-edge projects.
Leonard Kleinrock published the first paper on packet switching at MIT in July 1961, an idea he would develop three years later in a complete book. Meanwhile, Paul Baran was working at RAND Corporation on a communication network without a central point, where redundancy would guarantee the survival of exchanges in case of partial destruction. Across the Atlantic, Donald Davies at the British National Physical Laboratory was exploring parallel paths.
In the summer of 1965, Lawrence Roberts and Thomas Merrill managed to connect two computers separated by an entire continent, one in Massachusetts, the other in California. A simple telephone line was sufficient. The experiment worked, certainly, but it also revealed the glaring limitations of the traditional telephone network for this type of use.
Roberts joined ARPA the following year and gave substance to the ARPANET project. His detailed plan, presented in 1967, initially aroused distrust among the researchers involved. The proposed technical architecture seemed too complex to them. Wesley Clark saved the situation by suggesting an ingeniously simple solution: create computers dedicated to routing data, called Interface Message Processors. These famous IMPs immediately won approval. In December 1968, Bolt Beranek and Newman secured the construction contract. The network awakened in September 1969. The first node took up residence at UCLA, where Leonard Kleinrock directed the Network Measurement Center. The second was installed at Douglas Engelbart’s Stanford Research Institute, which housed the Network Information Center. The University of California at Santa Barbara and the University of Utah completed this founding quartet.
October 29, 1969 would go down in history: on that day, the first bits traveled between UCLA and Stanford. Contrary to popular belief, ARPANET was not born from a military obsession related to the Cold War. The much more pragmatic objective was to make colossal investments pay off by sharing these prohibitively expensive computers among research centers. ARPA was funding often incompatible machines that sat idle much of the time. Connecting them was simply sound economic sense.
Steve Crocker headed the Network Working Group, which delivered the Network Control Protocol in December 1970, the first communication protocol between ARPANET host computers. Adoption occurred gradually between 1971 and 1972, allowing the first real applications to emerge. Bob Kahn orchestrated a public demonstration in October 1972 that caused a sensation at the International Computer Communication Conference in Washington. From the basements of the Hilton Hotel, visitors discovered this mysterious network in real time. The success exceeded all expectations. Electronic mail, which Ray Tomlinson had introduced in 1971, became the first application that truly won over users.
Growth became exponential. Fifteen nodes by the end of 1971. In 1973 came the first international connections, to University College London and to the NORSAR installation in Norway. ARPANET had just become the first international packet-switching network.
Vinton Cerf and Bob Kahn achieved a decisive breakthrough in 1973 by designing TCP/IP. They wanted to interconnect networks of different types, such as future radio and satellite networks. Their open architecture allowed each network to retain its specificities while communicating with others via TCP/IP. On January 1, 1983, ARPANET switched to TCP/IP. This technical migration required modifying all the network’s computers simultaneously. To force the transition, the designers deliberately switched off the old NCP protocol during 1982, first for a single day and then for longer periods, before the final shutdown. The operation succeeded perfectly, demonstrating the maturity of the new architecture.
ARPANET also invented a development method that survives today. Steve Crocker created the Request for Comments in 1969: informal working documents in which researchers shared their thoughts. RFCs became the reference tool for documenting and standardizing the network. This tradition of openness has endured across the decades. The initial Network Working Group evolved into the Internet Working Group and various structures such as the Internet Activities Board. Researchers and engineers developed protocols together, far from commercial or political pressures.
ARPANET transformed scientific collaboration. It gave birth to an international community united by a common culture of openness and cooperation. The technical and organizational choices of the 1970s influenced the architecture and governance of the contemporary Internet. In 1990, ARPANET quietly faded away, replaced by NSFNET and the first commercial networks that used TCP/IP.
B
In 1969, in the corridors of Bell Labs, Ken Thompson and Dennis Ritchie were working on UNIX, that operating system promising to revolutionize computing. But how could they program efficiently on these resource-limited minicomputers without getting lost in the maze of assembly language?
Thompson was well-acquainted with BCPL, the language developed by Martin Richards at Cambridge. Powerful, certainly, but too heavy for the PDP-7 and PDP-11 machines available to the researchers. The idea then emerged to create a simplified, streamlined version that would preserve BCPL’s spirit while adapting to hardware constraints. Thus B was born, a name reduced to a single letter that would prove prescient with the later arrival of the C language.
B’s philosophy lay in its radical simplicity. Gone were the complex types and syntactic subtleties that weighed down development. In B, everything was a machine word: a variable simply occupied whatever word the underlying architecture provided, 36 bits on some of the mainframes of the era. This approach, which might seem limiting, actually concealed formidable efficiency for systems programming. Every byte counted when resources were measured in kilobytes.
The syntax was inherited from BCPL, but Thompson stripped it of its embellishments. Classic control structures remained: if, while, for. Arithmetic and logical operators kept their familiar notation. One innovation left a lasting mark on the computing landscape: the use of braces to delimit code blocks. This seemingly trivial convention would span the decades and reappear in countless modern languages.
B supported pointers and arrays, concepts essential for manipulating memory at the lowest level. This capability proved indispensable for UNIX, where every processor cycle had to be optimized. Thompson and Ritchie developed a remarkably compact compiler, fitting in just a few thousand lines. Later, they added an interpreter to facilitate interactive development. This compilation-interpretation duality offered the best of both worlds: performance for production, flexibility for testing.
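To make this heritage tangible, here is a small fragment written in C rather than authentic B, a deliberately rough sketch that keeps only what the chapter describes: brace-delimited blocks, the classic control structures, and direct pointer-and-array manipulation over plain machine words.

    #include <stdio.h>

    /* Illustration in C, not authentic B source: brace-delimited blocks,
       classic if/for control structures, and pointer arithmetic over an
       array, the style B passed down to its descendants. */
    int sum(int *values, int count) {
        int total = 0;
        for (int i = 0; i < count; i++) {
            total = total + *(values + i);   /* same cell as values[i] */
        }
        return total;
    }

    int main(void) {
        int words[4] = {7, 3, 12, 5};        /* in B, each element is simply a machine word */
        if (sum(words, 4) > 20) {
            printf("total exceeds 20\n");
        }
        return 0;
    }

Readers who know C, Java, or JavaScript will recognize every construct, which is precisely the point.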
Entire portions of the operating system, initially coded in assembly, were rewritten in B, making UNIX its privileged testing ground. The experience revealed B’s strengths and weaknesses. On one hand, development accelerated considerably and code became easier to maintain. On the other, certain limitations emerged, notably in string manipulation, where the absence of any distinction between characters and integers complicated operations.
Portability guided design choices. In a computing world fragmented among incompatible architectures, B promised to free developers from these hardware constraints. Theoretically, an application written in B could migrate from one machine to another through simple recompilation. Though reality would prove more nuanced, this forward-thinking vision would influence future languages.
The evolution of architectures toward 8- and 16-bit words made B’s word-oriented model less and less suitable. UNIX’s growing needs progressively exceeded its capabilities. Dennis Ritchie, drawing on the experience gained, undertook to design a successor. In 1972, C was born, preserving B’s spirit while resolving its main limitations through a more sophisticated type system.
This evolution perfectly illustrates the iterative nature of computing progress. B lived only a few years, but its influence exceeded this brief existence. Every programmer who writes a for loop in Java or JavaScript uses syntax inherited from B. The braces that structure code, the familiar operators, the procedural approach—so many elements that B bequeathed to posterity.
At Bell Labs and elsewhere, B served as training for an entire generation of programmers. Its relative simplicity made it an excellent pedagogical vehicle for understanding systems programming. Many who would later contribute to the rise of UNIX and the Internet cut their teeth on this language. Brian Kernighan wrote a tutorial that established enduring standards of pedagogical quality, demonstrating that a technical language could be taught with clarity and method.
B’s philosophy—favoring simplicity over complexity, efficiency over exhaustiveness—transcends eras. This minimalist approach, where each element must justify its presence through practical utility, still characterizes languages like Go, developed by Bell Labs veterans. An entire development culture is transmitted.
This story reminds us that computing innovation rarely proceeds through abrupt ruptures. It advances through accumulation, each generation building on the achievements of the previous one while seeking to surpass its limits. B drew inspiration from BCPL and spawned C, which would give birth to countless descendants; every link in the chain mattered.
The First RFC
In 1969, a handful of students and researchers tackled a project that would change the world: ARPANET. This initiative by the Pentagon’s Advanced Research Projects Agency aimed to connect four sites in the American West: UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah. Each site had different machines, with their own systems and technical peculiarities. This diversity, which might seem an asset, proved to be more of a headache.
Steve Crocker, a student at UCLA, struggled to sleep one night in April. Technical discussions between teams were piling up, but nothing was being documented anywhere. How could these vastly different machines communicate, and how could the solutions found be documented? In the darkness of his room, he picked up a sheet of paper and began drafting what would become RFC 1, simply titled “Host Software”.
This document was born within the Network Working Group, an informal collective of young researchers who had to travel to meet in person and discuss the very network meant to make such travel unnecessary. The irony of the situation escaped no one. These meetings took place without any particular hierarchy, without official mandate, in a relaxed atmosphere that contrasted sharply with the usual administrative burdens.
Crocker deliberately chose the term “Request for Comments” rather than a more authoritative formulation. He feared that too academic a tone might offend more experienced experts who were perhaps, he imagined, working on the same questions from their offices on the East Coast. This initial caution shaped the spirit of RFCs: documenting without imposing, proposing without constraining.
On April 7, 1969, this first RFC addressed questions about message structuring between computers that seem rudimentary today. Crocker proposed 16-bit headers with destination addresses of only 5 bits, which limited the network to 32 machines. He discussed links, those fixed connections between machines, far removed from today’s dynamic connections. The network latency, estimated at around one second, seems enormous compared to today’s milliseconds.
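The arithmetic behind that ceiling is easy to check: a 5-bit field can distinguish only 2^5 = 32 values. The short C sketch below makes the point; its bit layout is purely illustrative and is not the format actually specified in RFC 1.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a 5-bit destination field carved out of a 16-bit
       header word. The bit positions are an assumption for the example,
       not a description of RFC 1's real layout. */
    #define DEST_BITS 5
    #define DEST_MASK ((1u << DEST_BITS) - 1)   /* 0x1F: 32 possible addresses */

    static unsigned destination(uint16_t header) {
        return header & DEST_MASK;               /* keep the low 5 bits */
    }

    int main(void) {
        printf("addressable hosts: %u\n", 1u << DEST_BITS);   /* prints 32 */
        printf("destination field of 0xABCD: %u\n", destination(0xABCD));
        return 0;
    }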
A visionary proposal emerged from this document: DEL, or Decode-Encode Language, a terminal description language intended to enable communication between heterogeneous systems. The idea foreshadowed the interoperability standards that would later structure the internet.
The publication of this RFC established new principles. First, complete free access: the documents could be consulted by anyone without paying a cent. Second, openness: anyone could write an RFC, whether a tenured professor or a simple student. These democratic rules challenged academic and industrial conventions. Distribution was initially by postal mail, before evolving to FTP and then email. The first RFCs were sometimes handwritten or typewritten, witnesses to an era when computing was finding its way. Some documents still bear the marks of pen corrections, touching remnants of this artisanal period.
Jon Postel became the first official RFC editor in 1971. Under his benevolent authority, the process became more structured without becoming rigid. The documents gained uniformity while preserving their original spontaneity. He established quality criteria that raised the overall standard without discouraging contributions. Fifty years later, over 8,500 RFCs have been published. They cover all aspects of the internet, from fundamental protocols to the most sophisticated applications. This collection constitutes the technical memory of the internet, a documentary heritage unique in its scope and coherence.
The RFCs prove that it is possible to maintain rigorous standards within an open and collaborative framework. This adventure also reminds us that the foundations of the internet were laid by young people driven by the desire to share rather than to dominate.
RAM
In American laboratories of the 1960s, a glaring contradiction tormented engineers. On one hand, they were miniaturizing logic components down to the size of a small pea. On the other, they retained magnetic-core memory systems as large as barrels. These devices inherited from the 1950s worked marvelously, but their bulk was grotesque compared to the progress in integration.
Bob Norman of Fairchild Semiconductor sought a way out as early as 1961. He envisioned integrating several flip-flop circuits on a silicon chip, with each element storing one bit of information. The idea looked good on paper, but the technology refused to cooperate. Fairchild’s management shelved the project, deeming it unfeasible. However, Texas Instruments delivered to the US Air Force a small computer equipped with a few hundred bits of semiconductor memory. This demonstration proved the concept had legs, at small scale as well as large. IBM sensed the potential and published an ambitious report in 1965 titled Potential for Monolithic Megabit Memories. The company pushed experimentation by developing a 16-bit chip for the System/360 Model 95, a machine specifically designed for NASA’s needs in 1966.
Tom Longo turned the tables the following year. With his team at Transitron, he developed the first commercial multi-cell memory chip. This 16-bit device used TTL technology and immediately found a buyer in Honeywell. The success prompted Fairchild and Texas Instruments to develop their own versions.
From IBM’s laboratories, Robert Dennard revolutionized the approach in 1966. He invented the single-transistor DRAM cell, a technical feat that radically simplified memory architecture. Existing SRAM cells required multiple transistors per stored bit. This innovation paved the way for modern DRAM memories.
Fairchild revived its ambitions in 1967 with the SAM (Semiconductor Advanced Memory) program. The goal was to produce a 64-bit chip. The following year, manufacturers offered TTL memories of this capacity, organized into 16 words of 4 bits each, with access times as low as 60 nanoseconds.
Radiation Inc., future Harris Semiconductor, introduced PROM (Programmable Read-Only Memory) in 1969. These user-programmable components appealed to computer developers. They allowed microcode modification during debugging phases, an unprecedented flexibility that accelerated system development.
In 1970, William Regitz and Joel Karp, supported by Ted Hoff and Ted Rowe, developed the Intel 1103, the first commercially viable DRAM. This 1,024-bit chip still suffered from limitations in speed and ease of use, but it definitively proved that semiconductor memories could equip computers.
Dov Frohman made a strong impact the following year with the Intel 1702, the first EPROM memory. This invention exploited a floating-gate technique conceived at Bell Labs. The principle: electrically program the memory and erase it by exposure to ultraviolet light through a small quartz window built into the package.
Bob Proebsting of Mostek changed the rules in 1973. His MK 4096 offered 4 Kbit in a package of only 16 pins thanks to a clever multiplexed addressing technique, in which the row and column halves of an address are presented one after the other on the same pins. This approach influenced all subsequent generations of DRAM and became a de facto industry standard.
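A back-of-the-envelope sketch shows why multiplexing pays off. Assuming a square 64-by-64 cell array, an assumption made for the example rather than a datasheet fact about the MK 4096, 4,096 cells require 12 address bits, but presenting the 6 row bits and then the 6 column bits on the same pins halves the number of address lines.

    #include <stdio.h>

    /* Illustrative arithmetic: splitting a 12-bit DRAM address into row and
       column halves that can share the same 6 package pins, as multiplexed
       addressing does. The 64x64 organization is assumed for the example. */
    int main(void) {
        unsigned address = 2765;          /* any cell index from 0 to 4095 */
        unsigned row = address >> 6;      /* upper 6 bits: 0..63 */
        unsigned col = address & 0x3F;    /* lower 6 bits: 0..63 */
        printf("cell %u -> row %u, column %u\n", address, row, col);
        printf("address pins: 6 multiplexed instead of 12 dedicated\n");
        return 0;
    }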
The capacity race accelerated. The 16 Kbit chips arrived in 1976, exploiting dual-layer polysilicon technology that optimized the arrangement of memory cells. The 64 Kbit followed in 1979, then the 256 Kbit in 1982. Each generation brought its share of technological innovations and pushed the limits of miniaturization.
In 1983, Intel struck a major blow with the first 1 Mbit CMOS DRAM. This transition to CMOS pointed the way for the rest of the sector. Paradoxically, Intel soon abandoned the DRAM market, leaving the field open to its Japanese and Korean competitors, who rushed into the breach.
Innovation continued with the 1 Mbit chips in 1986. These components inaugurated the use of non-planar memory cells, stacked or trenched to save space. Capacities climbed relentlessly: 4 Mbit in 1988, 16 Mbit in 1991, 64 Mbit in 1994. Each technological leap came with increasingly formidable manufacturing constraints.
The year 1998 saw the birth of 256 Mbit DRAMs that introduced high-permittivity dielectrics. This material innovation allowed the continuation of miniaturization while maintaining the electrical properties necessary for proper memory cell operation.
This frantic race for performance followed Moore’s Law with striking regularity. The doubling of capacities approximately every two years was accompanied by a vertiginous drop in cost per stored bit. This economic democratization propelled random-access memories into all electronic devices, from personal computers to mobile phones.
UNIX
Bell Labs was buzzing with activity in the late 1960s. A team of researchers was working tirelessly on Multics, a titanic project carried out with MIT and General Electric. The ambition was colossal: to create an operating system capable of managing multiple users and tasks simultaneously. Among these visionaries were Ken Thompson and Dennis Ritchie, two men who would soon shake up computing.
Multics was struggling. AT&T, weary of astronomical costs and endlessly stretching deadlines, pulled out in 1969. Thompson and Ritchie found themselves with solid experience but no machine to continue their research. Management refused their request to acquire a new computer. Thompson had the idea of recovering an old PDP-7 gathering dust in the laboratories. On this obsolete machine, he began building something unprecedented. He developed a basic kernel, an assembler, a shell, and some fundamental tools like rm, cat, and cp. The name UNIX emerged from a play on words with Multics, an ironic nod to simplicity in the face of its predecessor’s complexity.
The year 1970 was a game-changer. The team secured a PDP-11. This machine had no disk and only 24 KB of memory, but it enabled the emergence of the first official edition of UNIX. Development accelerated. Thompson created the B language, a streamlined version of BCPL, while Ritchie worked on what would become the C language between 1971 and 1973.
This invention of C revolutionized everything. With UNIX’s fourth edition in 1973, Thompson and Ritchie dared to completely rewrite it in this new language. This gamble, audacious at a time when operating systems were written in assembly, made UNIX portable from one architecture to another. The first six editions came in quick succession between 1971 and 1975.
The sixth edition, nicknamed V6, marked UNIX’s first major distribution outside Bell Labs. AT&T distributed the system to universities with its complete source code for next to nothing, a decision that shook the computing world.
The University of California at Berkeley entered the UNIX saga in 1975 when Bob Fabry installed the fourth edition there. Ken Thompson spent a sabbatical year there that energized two brilliant students: Bill Joy and Chuck Haley. They threw themselves into enhancing the system, developing the ex and vi editors, the latter remaining a cornerstone of text editing today, inspiring projects like neovim. In 1977, Bill Joy assembled the Berkeley Software Distribution (BSD), printed in only about thirty copies but destined to spawn many followers.
The 1980s saw a myriad of UNIX versions flourish. AT&T commercialized System III and then various releases of System V through its UNIX System Group. Berkeley meanwhile continued BSD development, bringing remarkable innovations like the Fast File System and especially the TCP/IP network stack. The latter, integrated into version 4.2BSD in 1983, propelled BSD to glory.
Bill Joy left Berkeley in 1982 to co-found Sun Microsystems, a company that would weigh heavily in UNIX’s evolution. Sun developed the Network File System (NFS) and the VFS/vnode architecture, making file sharing over networks invisible to users. In 1987, an unexpected alliance formed between Sun and AT&T to create System V Release 4, merging the best features of SunOS and System V Release 3.2.
This proliferation of UNIX versions created a crying need for standardization. The /usr/group association, founded in 1980 to promote UNIX, tackled the standardization of system interfaces. These efforts led, through the IEEE working group formed in 1985, to the POSIX standard (Portable Operating System Interface), now an essential reference for compatibility between UNIX systems. The X/Open group, created in 1984, developed the X/Open Portability Guide (XPG), integrating POSIX and other standards.
The early 1990s saw an unexpected rival emerge: Linux. In 1991, Linus Torvalds, a student at the University of Helsinki, developed a UNIX-compatible kernel capable of using GNU tools from the Free Software Foundation. Linux captured UNIX’s spirit while adapting it to modern architectures. Its free license accelerated its distribution and massive adoption.
UNIX revolutionized computing through its innovative concepts. The hierarchical file system, where everything is a file, radically simplified resource management. Pipes, suggested by Doug McIlroy in 1972, connected one program’s output to another’s input, creating processing chains of unmatched power. The shell offered a complete programming environment from the command line.
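Pipes are easy to see in action. The minimal C sketch below, built only on the standard POSIX calls pipe, fork, dup2, and execlp, wires the output of ls into the input of wc -l, exactly the processing chain a shell user writes with a single | character.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* A hand-built UNIX pipeline, the equivalent of running "ls | wc -l"
       in the shell: one process writes into the pipe, the other reads. */
    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) {             /* fd[0]: read end, fd[1]: write end */
            perror("pipe");
            return 1;
        }
        pid_t writer = fork();
        if (writer == 0) {                /* first child: ls writes into the pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls");
            _exit(1);
        }
        pid_t reader = fork();
        if (reader == 0) {                /* second child: wc -l reads from the pipe */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp wc");
            _exit(1);
        }
        close(fd[0]);                     /* the parent keeps no end open */
        close(fd[1]);
        waitpid(writer, NULL, 0);
        waitpid(reader, NULL, 0);
        return 0;
    }

The shell performs exactly this plumbing behind every |, which is what makes composing small tools so natural.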
UNIX’s portability, made possible by its implementation in C, facilitated its adaptation to a multitude of architectures. This characteristic, combined with its modular design and exemplary documentation, explains its remarkable longevity. UNIX manuals, present from the first edition, established a technical documentation standard that remains a reference today. Its design philosophy still permeates software development: create simple programs that excel in their domain, use text as a universal interface, compose elementary tools to solve complex problems.