Stéphane FOSSE

EPOCH

EPOCH © 2025 by Stéphane Fosse. This book is published under the terms of the CC BY-SA 4.0 license

Chapter 8
1990

The Dawn of a Connected World

The fall of the Berlin Wall in November 1989 upset the global balance. This fracture in history created fertile ground for digital technologies that were waiting to spread. The redrawn geopolitical landscape changed the stakes for computing and its dissemination.

When Soviet republics opened their doors to Western technologies, they unlocked unsuspected markets for computer firms. Eastern mathematicians, renowned for their excellence, shared their approaches with their Western colleagues. This intellectual cross-fertilization enriched computer science thinking on both sides.

The 1990s saw the global economy break free from traditional barriers. Large corporations had to adapt to this new reality: managing planetary operations required information systems capable of processing mountains of data and synchronizing activities across time zones.

Finance underwent a radical transformation. Transactions became dematerialized, algorithms invaded trading floors. Stock exchanges multiplied the pace and volume of exchanges, demanding ever faster machines and ever more robust networks.

The Internet left the closed circle of academics and military personnel to invade the daily lives of individuals. Internet service providers flourished, first with modest modem connections that whistled and crackled, then with increasingly generous bandwidths. This opening gave birth to new services and modes of communication.

Citizens created their digital spaces of expression. Forums, electronic messaging, and personal websites wove invisible links between individuals. Virtual communities formed, bringing together enthusiasts across distances. This new socialization transformed user interfaces, which now had to appeal to an audience unfamiliar with computing intricacies.

Computers became fixtures on every professional desk. Companies massively invested in hardware and training. Word processing buried the typewriter. Spreadsheets conquered management. Slideshows revolutionized meetings by replacing transparencies and their overhead projectors. This rapid computerization created a voracious appetite for software adapted to professional realities.

In schools and universities, computing found its place in curricula. Computer labs became familiar places for students. This evolution responded to the demands of a job market seeking technical skills.

Video gaming transformed into a major industry during this period. Consoles pushed technical boundaries, with increasingly striking graphics. This sector energized research on graphics processors and brought computing into households through the gateway of entertainment.

Digital works flourished with the advent of multimedia. CD-ROMs made it possible to combine text, images, sounds, and videos in an accessible format. Digital encyclopedias gave reference works a second life. Entertainment applications reinvigorated personal computing.

Electronic components continued to shrink, giving birth to the first consumer laptops and personal digital assistants. Mobility became a requirement, forcing the adaptation of interfaces and operating systems.

Mobile telephony experienced explosive growth. Digital networks deployed across cities and countryside. Text messages invented a new language, concise and immediate. The future fusion between telephone and computer was becoming apparent.

Administrations modernized their IT infrastructures, digitized their dusty archives, and developed the first online services for citizens. These large-scale projects stimulated the market for professional solutions.

With the multiplication of connections, security became a central concern. The first viruses targeting personal computing sowed panic, followed by the development of antivirus software and firewalls. The protection of personal data emerged as an issue for the dawning 21st century.

Free software found its way with GNU and Linux. This alternative philosophy to proprietary software disrupted established models by proposing a collaborative and open approach to software development.

The final years of the decade saw the flourishing of the first online stores and the swelling of the Internet bubble. Investors rushed toward anything related to digital technology, with a fervor sometimes more passionate than rational. The bursting of this bubble awaited at the turn of the millennium.

This decade changed our relationship with computing. Computers settled into our daily lives, transformed our professions, and modified our leisure activities. They were no longer strange entities confined to air-conditioned laboratories. The household equipment rate soared, reaching 23% in France in 1999, compared with barely 8.5% in 1990.

The world discovered itself connected, without yet fully measuring the consequences of this transformation.

Computing in the 1990s cannot be reduced to a simple technical evolution; it constitutes an anthropological rupture whose magnitude we are only beginning to perceive.


Adobe Photoshop

In 1987, Thomas Knoll was working on his doctoral thesis in computer vision at the University of Michigan. On his Macintosh Plus, launched in January 1986, he tinkered with a small program he called Display, designed to show grayscale images on his black-and-white bitmap screen. Just an academic distraction, nothing more. Thomas had no idea he had just planted the first seed of what would become the world’s reference tool for image processing.

John Knoll, Thomas’s brother, discovered this program. John worked at Industrial Light & Magic, Lucasfilm’s special effects division. At ILM, they were searching for solutions to process images digitally. John immediately saw the potential of his brother’s small software and proposed they work together. This fraternal collaboration was about to change the game: Thomas adapted his code to handle color on the new Macintosh II, while John developed processing routines that foreshadowed the software’s future filters.

In 1988, John sensed the commercial potential of their creation. Thomas hesitated, aware of the titanic work required to transform a personal program into a true commercial application. John’s optimism prevailed. The two brothers set to work, enriching the features, adding support for various file formats, and developing sophisticated selection tools. After numerous naming attempts, the program received its definitive name: Photoshop.

Finding a publisher proved more complicated than expected. The Knoll brothers knocked on several doors. Barneyscan, a scanner manufacturer, agreed to market an initial version under the name Barneyscan XP. Approximately 200 copies of this version 0.87 accompanied their devices. The software circulated in the small world of imaging professionals. At Apple, engineers were enthusiastic and shared copies with their colleagues.

In September 1988, Adobe Systems entered the picture. Russell Brown, the company’s art director, fell in love with the software. Negotiations resulted in a distribution license agreement rather than an outright purchase. A judicious decision for the Knolls, who would receive royalties on each sale. The formal signing occurred in April 1989.

In February 1990, Photoshop 1.0 hit the market. Despite some early bugs, the reception was enthusiastic. Compared with the competition, the software stood out for its optimized code and intuitive interface. Russell Brown delivered spectacular demonstrations, revealing the program’s creative possibilities and ease of use.

Version 2.0 in 1991 marked a decisive milestone. CMYK support revolutionized the printing industry by democratizing color separation techniques, previously reserved for professionals equipped with expensive hardware. Version 2.5 broke new ground by arriving on Windows, multiplying the potential audience.

1994 saw the birth of Photoshop 3.0, and with it, layers. This feature radically transformed digital image manipulation. Users could now overlay and modify elements independently. Contrary to rumors at the time, this innovation didn’t copy competitor Live Picture but stemmed from Thomas Knoll’s earlier research.

Version 4.0 in 1996 disrupted established habits. Andrei Herasimchuk redesigned the user interface to harmonize it with other Adobe products. The changes initially ruffled the user community, but their coherence ultimately won users over.

With version 5.0 in 1998, Mark Hamburg revolutionized the creative process by introducing the History palette. This innovation broke with the traditional linear logic of undoing, allowing non-sequential backtracking through modifications. Simultaneously, the direct integration of color management tools simplified the production of images intended for different media.

Photoshop’s success exceeded all predictions. The software established itself in photography, advertising, publishing, and web design. Adobe saw its revenue soar from $16 million in 1986 to over $1 billion in 1999. A flourishing ecosystem developed around the program: plugins, training courses, specialized publications.

By democratizing image manipulation, Photoshop nourished contemporary visual arts and transformed our relationship with photography. The verb “to photoshop” entered common vocabulary, proof of the software’s cultural omnipresence.

This exceptional success resulted from Thomas Knoll’s technical genius, John’s commercial intuition, and Adobe’s marketing efficiency, all carried by the rise of personal computing and digital design. Thomas would never complete his thesis, but his “distraction” revolutionized the world of digital imaging. He continues to collaborate with Adobe as a consultant, while John pursues his career in special effects at Industrial Light & Magic.


IBM PS/1

The PS/1 story begins in 1987, when IBM watched with concern as PC clones gained ground. These compatible machines were eating into its market share, selling for far less than Big Blue’s computers. Home users and small businesses turned away from IBM, more sensitive to price than brand prestige.

IBM was not ignoring this segment. Three years earlier, the company had taken a shot at it with the PCjr, a computer meant to appeal to American households. This machine offered interesting innovations: game cartridges, television output, and a wireless keyboard called the "Freeboard." But the PCjr crashed and burned in 1985, leaving IBM empty-handed in the consumer market.

Meanwhile, the company concentrated its efforts on the PS/2 line, aimed at businesses. These computers matched the competition in performance and offered extensive expansion possibilities, but their pricing put them out of reach of home users. With over twenty models in the catalog, the lineup was bewildering for anyone simply looking for a family computer.

In 1990, IBM launched the Personal System/1. This time, the approach changed completely. The first models adopted a bold all-in-one design: the color monitor and system unit formed a compact assembly, connected by thick cables. The power supply was housed in the monitor, creating total dependency between the two components. This configuration recalled Apple’s compact Macintosh computers, except that Apple integrated everything into a single case.

Later versions would abandon this aesthetic choice to return to separate components, foreshadowing the future Aptiva line. The PS/1 ran PC-DOS, IBM’s version of the MS-DOS system. To simplify life for novice users, IBM developed a graphical interface nicknamed "4-quad." Windows or OS/2 remained technically installable, but IBM discouraged their use due to the performance and stability problems this caused.

A technical peculiarity distinguished these early machines: unlike typical installations on hard disk or floppy disks, the operating system and interface resided directly in read-only memory. Tandy had experimented with this approach on certain models in its 1000 line. The advantage proved immediate: fast startup and increased reliability.

Technical evolution accelerated over the years. The launch models featured an Intel 80286 processor at 10 MHz, modest performance but sufficient for the era’s office applications. Four years later, when IBM ceased production, the final PS/1 models integrated an Intel 486 DX2 running at 66 MHz, multiplying computing capacity.

This particular arrangement of the power supply in the monitor was not a first. The Amstrad PC1512 had adopted this solution, as had the Coleco Adam, which housed its power supply in the printer. But this configuration required using a proprietary connector between monitor and system unit, limiting replacement or upgrade possibilities.

IBM did not simply ship a machine. The company deployed an ambitious marketing strategy to recapture the consumer market. A telephone support service operated seven days a week. PS/1 owners could interact with each other through a dedicated online service, the “Users’ Club,” prefiguring the user communities that would flourish with the Internet.

The PS/1 adventure ended in 1994. IBM halted production, marking a turning point in its strategy. The following year, the PS/VP and PS/2e variants also disappeared from the catalog. These decisions reflected IBM’s gradual withdrawal from the consumer personal computer market, a sector where component standardization and price wars drastically reduced margins.


Haskell

About a dozen functional languages coexisted in the late 1980s, each with its peculiarities but all sharing similar semantic foundations. This dispersion frustrated researchers who struggled to gain adoption for their ideas beyond narrow circles. The notion of unifying these efforts emerged during a meeting in September 1987, at the Functional Programming Languages and Computer Architecture conference in Portland. The participants decided to create a committee to design a common language that would serve as a stable reference.

This collective approach seems surprising nowadays. It’s hard to imagine that a committee could produce an elegant language, as design by consensus appears doomed to compromises and inconsistencies. Yet Haskell proves otherwise. The secret lies in the alignment of individual objectives and the importance placed on mathematical beauty.

John Backus had paved the way in 1978 with his Turing lecture. The creator of Fortran presented functional programming as a credible alternative to the von Neumann model. Coming from such a figure, this endorsement transformed the perception of the functional paradigm, which ceased to be seen as an academic curiosity.

Lazy evaluation fascinated Haskell’s designers. This technique, discovered independently by several teams in the 1970s, radically changed how programs were conceived. Dan Friedman and David Wise at Indiana, Peter Henderson at Newcastle, James H. Morris Jr. at Xerox PARC, David Turner at St Andrews and Kent: all explored this promising path. Turner demonstrated its elegance in SASL and KRC, using lazy lists to simulate complex behaviors with disarming simplicity.
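
Under lazy evaluation, nothing is computed until a value is actually demanded. As a rough modern analogue (Python generators rather than Haskell itself), the following sketch builds an infinite sequence that is only ever evaluated on demand:

    from itertools import islice

    def naturals():
        """An 'infinite list': each value is produced only when requested."""
        n = 0
        while True:
            yield n
            n += 1

    squares = (n * n for n in naturals())  # still nothing computed here

    # Only the ten requested values are ever evaluated.
    print(list(islice(squares, 10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]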

April 1, 1990 marked the publication of the first Haskell report. The date, chosen by chance, would fuel numerous jokes. Who would have thought that a language born on April Fools’ Day would become so influential? Development continued for fifteen years, primarily via email, in an era when technical exchanges still passed through thoughtful messages rather than incessant notifications.

Miranda influenced Haskell’s design. David Turner commercialized this non-strict functional language through his company Research Software Limited, with notable success: 250 universities and 50 companies adopted it. Haskell inherited many syntactic characteristics from Miranda but distinguished itself through significant innovations.

The type class system constituted the first of these innovations. Philip Wadler introduced it in February 1988, elegantly solving the problem of overloading numeric operators. More systematic than the solutions adopted by Miranda or SML, this modular approach has since served as a model for other languages.

Purity is Haskell’s second pillar. As a pure functional language, it guarantees that a function always returns the same result for identical arguments, with no side effects. This constraint, closely tied to lazy evaluation, initially complicated input-output handling. But this apparent difficulty led to the invention of monadic input-output, recognized as a major contribution to computer science.

Monads transform a technical problem into a conceptual solution. They encapsulate side effects within a rigorous mathematical framework, preserving the language’s purity while allowing interactions with the outside world. This theoretical elegance hides a practical complexity that sometimes discourages newcomers.
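
The sequencing discipline at the heart of a monad can be sketched without Haskell. This loose Python illustration uses possible failure rather than input-output as the effect: a single bind function hides the plumbing, and each step runs only if the previous one produced a usable value:

    def bind(value, f):
        """Chain computations that may fail: None short-circuits the rest."""
        return None if value is None else f(value)

    def parse_int(s):
        return int(s) if s.isdigit() else None

    def reciprocal(n):
        return None if n == 0 else 1 / n

    print(bind(bind("25", parse_int), reciprocal))    # 0.04
    print(bind(bind("oops", parse_int), reciprocal))  # None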

In 1999, “Haskell 98” stabilized the language. The community had been calling for this standardization, tired of the permanent evolution that compromised code portability. The committee then dissolved, letting the language evolve freely. This two-speed approach has worked remarkably well: Haskell serves both as a laboratory for exploring advanced ideas and as a practical tool for real applications.

Successive versions progressively enriched the language. Version 1.1 in 1991, 1.2 in 1992, 1.3 in 1996, 1.4 in 1997: each iteration brought its share of improvements. This sustained pace testified to the vitality of a community engaged in exploring new conceptual territories.

Haskell’s innovations inspire numerous languages and frameworks. Advanced type systems, effect management via monads, lazy evaluation: all concepts that now radiate throughout the computing ecosystem. Imperative languages also integrate functional elements, testifying to the lasting influence of this approach.

The name “Haskell” honors Haskell B. Curry, a mathematician and logician whose work on combinatory logic influenced programming language theory. The choice is not trivial: it anchors the language in a rigorous mathematical tradition, reminding us that functional programming draws its roots from formal logic.

Haskell occupies a singular position. Neither a mainstream language nor an academic curiosity, it maintains a subtle balance between theoretical rigor and practical utility. Its influence is measured less by direct adoption than by its capacity to change programming practice. In a world where elegance often gives way to efficiency, Haskell reminds us that other paths exist, more demanding but infinitely more satisfying.


HTML

The HTML language was born in 1991 at CERN, in Tim Berners-Lee’s office. This British physicist was working on a concrete problem: how to share documents among researchers scattered across the world? His solution consisted of a few lines of code that would revolutionize our relationship with information. HyperText Markup Language, its full name, was inspired by an existing standard, the Standard Generalized Markup Language (SGML), standardized by ISO in 1986. Berners-Lee added one ingredient: the ability to create links between documents.

The idea represented an immense conceptual leap at the time. Imagine a text capable of taking you to another text with a simple click, then to a third, and so on, creating a web of interconnected information. This is exactly what was made possible with HTML’s anchor tags. The language works through markers enclosed in angle brackets: <p> for paragraphs, <h1> for main headings. These tags serve a dual role: they structure content and determine its on-screen appearance.
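
The mechanism lends itself to a small demonstration. This Python sketch (the page content is invented for the example) uses the standard html.parser module to extract the href targets of anchor tags, the links that weave documents into a web:

    from html.parser import HTMLParser

    page = """
    <h1>Hypertext</h1>
    <p>See the <a href="history.html">history</a> and
    <a href="https://www.w3.org/">the W3C</a>.</p>
    """

    class LinkExtractor(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                # The href attribute is what lets one document point to another.
                print(dict(attrs).get("href"))

    LinkExtractor().feed(page)
    # history.html
    # https://www.w3.org/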

As early as September 1991, a mailing list called www-talk brought together the project’s first enthusiasts. Dave Raggett, an engineer at Hewlett-Packard, Marc Andreessen from the National Center for Supercomputing Applications, and many others contributed to enriching the language. Their exchanges gave birth to HTML+, an enhanced version that foreshadowed future developments.

This creative ferment concealed a trap. Each browser—Lynx for text terminals, Mosaic for graphical interfaces, Arena as a European alternative—developed its own extensions. “Proprietary tags” multiplied, threatening the universality dear to Berners-Lee. A document readable on Mosaic became unreadable on Lynx. The web risked fragmentation before truly existing.

To avoid this balkanization, Berners-Lee founded the World Wide Web Consortium, the famous W3C, in 1994. The initiative received financial support from CERN, the American DARPA, and the European Commission. Three laboratories hosted this new institution: MIT in the United States, INRIA in France, and Keio University in Japan. This geography reflected the desire to make the web a truly global project.

The W3C’s first task was to standardize what existed. HTML 2.0 was released in November 1995, cataloging and harmonizing the tags used by different browsers. This version coincided with a pivotal moment: the price of personal computers dropped below $5,000, making them accessible to the general public. In the United States, political initiatives encouraged internet development, sensing its economic potential.

But standardization had to contend with conflicting interests. Netscape and Microsoft, which dominated the browser market, sought to impose their own innovations to stand out from the competition. Their engineers invented spectacular tags: <blink> made text flash in Netscape, <marquee> made it scroll in Microsoft. These additions provoked the ire of W3C purists, who saw in them a dangerous confusion between document structure and visual presentation.

The debate went beyond pure technology. It opposed two visions of the web: on one side, content creators demanding more control over their pages’ appearance; on the other, pioneers defending a strict separation between content and form. This tension still runs through discussions about web standards evolution.

The solution emerged in late 1996 with CSS (Cascading Style Sheets). This new standard delegated visual aspects to a specialized language, freeing HTML from this responsibility. The division of labor was formalized with HTML 4.0 in December 1997: HTML for structure, CSS for styling.

The W3C became professionalized. A “Process Working Group” including Netscape, HP, IBM, and Microsoft formalized standardization procedures. Their work resulted in 1999 in a “Process Document” that established the consortium’s governance rules. These bureaucratic mechanisms may seem tedious, but they guarantee transparency and fairness in the standardization process.

HTML’s history reveals the political stakes hidden behind apparent technical neutrality. The interoperability advocated by the W3C collides with browser vendors’ commercial strategies. The ideal of a decentralized web confronts attempts by certain dominant players to maintain control. These questions still resonate in the face of the growing influence of Google, Apple, or Meta on standards evolution.

Yet HTML has kept its original promise. It has given birth to an open web where anyone can publish without prior authorization. This vision follows in the lineage of the cybernetic utopias of the 1960s, which dreamed of horizontal and collaborative digital spaces. While the contemporary web sometimes strays from these founding ideals, HTML remains a pillar of internet architecture alongside the HTTP protocol and the URL system.

The W3C continues its mission. The arrival of actors like the Motion Picture Association of America or scientific publishers broadens the spectrum of interests to reconcile. The entertainment industry demands anti-piracy measures, publishers want to protect their content, while digital rights defenders fear a restriction of the web’s openness.

This permanent tension between innovation and preservation of founding principles constitutes HTML’s DNA. Thirty years after its creation, this language continues to evolve while maintaining its original simplicity. A tag is a tag, a link is a link. In a digital world in perpetual flux, this stability is reassuring. It reminds us that the internet is not just a marketplace or a technological playground, but also a public space that must be preserved and passed on.


Gopher

At the University of Minnesota in 1990, Mark McCahill and Farhad Anklesaria led a team facing a practical problem, similar to Tim Berners-Lee’s: how to organize information on campus? The official committee proposed a solution that the team deemed too cumbersome and unsuitable. They decided to create their own system.

In the spring of 1991, the first version of Gopher was born. The name refers to two things: the university’s mascot (a gopher) and the idea of “going to fetch” information. The team designed a hierarchical architecture, similar to a file system. Users navigated through menus to access documents, just as they would in a traditional library where books are organized by subject.

The interface was deliberately simple: text-based menus, designed to work even on slow connections. The creators added a full-text search engine running on NeXT computers, making it possible to find the desired content. This combination of structured navigation and free-text search was immediately appealing.
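
The protocol was as simple as the interface. A client opened a TCP connection to port 70, sent a selector followed by CRLF, and read back tab-separated menu lines. A minimal sketch in Python, with gopher.floodgap.com given only as an example of a long-running public server:

    import socket

    def gopher_menu(host, selector="", port=70):
        """Fetch one Gopher menu: send the selector + CRLF, read until EOF."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            data = b""
            while chunk := sock.recv(4096):
                data += chunk
        # Each line: type char + display string, then selector, host, port,
        # separated by tabs; a lone "." ends the listing.
        for line in data.decode("latin-1").splitlines():
            if line == ".":
                break
            field = line.split("\t")[0]
            print(field[:1], field[1:])  # item type, display string

    # gopher_menu("gopher.floodgap.com")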

The distribution strategy followed the free software model: the server source code circulated freely, but the team kept control of the clients to maintain system consistency. Administrators installed their servers easily and created links to other Gopher resources.

The success exceeded all expectations. By November 1994, more than 8,000 Gopher servers were running on the Internet. Librarians massively adopted the protocol, appreciating its logical organization and ease of use. The team developed extensions like Gopher+ to handle metadata and different document formats.

This story took place during a major transition period. Computing was gradually abandoning mainframes in favor of distributed systems. McCahill’s team, coming from microcomputer support, advocated a decentralized vision that opposed that of the central computer administrators. This tension reflected the paradigm shifts of the time.

Gopher democratized access to digital information before the arrival of the Web. “Gopherspace” was an information space that users navigated intuitively. The protocol elegantly solved the problem of organizing digital content by combining hierarchical navigation and textual search.

In 1993, the university made a strategic mistake: it attempted to monetize Gopher by imposing licensing fees on commercial users. This decision, driven by budget constraints, caused controversy. But the real challenge came from elsewhere: Tim Berners-Lee proposed merging Gopher with the World Wide Web. The Web’s apparent complexity made the team hesitate.

This hesitation proved costly. Mosaic popularized the Web and gradually relegated Gopher to the background. The limited support for the protocol in this browser, combined with the growing richness of Web pages, diverted users toward the new standard. McCahill’s team lost control of the user experience, a determining factor in a protocol’s evolution.

The ideas developed for Gopher nonetheless survived. The hierarchical organization of content still influences many current systems. The constraints that guided its design—limited bandwidth, ease of use—have found renewed relevance with mobile applications and resource-constrained environments.

The experience taught several important lessons. First, the importance of controlling the user interface in the success of a network protocol. Second, the difficulties of monetizing Internet protocols. Finally, the impact of architectural choices on a system’s longevity. With this project, the University of Minnesota developed recognized expertise in distributed information systems.

The questions raised by Gopher remain relevant: how to organize access to information? How to navigate digital spaces? How to balance simplicity and functionality? System designers continue to seek elegant answers to these issues.


PCMCIA

In 1985, the JEIDA (Japan Electronic Industry Development Association) identified a problem that would mark the computer industry. Portable computers were beginning to gain widespread adoption, but they suffered from a frustrating limitation: it was impossible to easily add peripherals as one would on a desktop machine. Each manufacturer developed its own memory card formats, creating a technological cacophony where nothing was compatible.

Everything changed in San Jose, California in 1989. About twenty American companies gathered around a table to create PCMCIA. The acronym "Personal Computer Memory Card International Association" concealed the ambition to ensure that all memory cards would speak the same language. Poqet Computer, a pioneer in this field, had envisioned a computer running solely on removable memory cards. But convincing software publishers to distribute their programs on these media proved to be an uphill battle without a unified standard.

The first PCMCIA standard was released in June 1990. The result of collaboration with JEIDA, it defined credit-card-sized cards equipped with a 68-pin connector. This version 1.0 was limited to governing memory cards, but the engineers on the technical committees quickly grasped the broader potential of this format. Why not add modems or network cards?

One year later, in September 1991, version 2.0 widened the scope: beyond memory, cards could now carry input/output functions, turning portable computers into modular platforms. The standard then defined three thicknesses: Type I cards at 3.3 mm for memory, Type II at 5 mm for communication peripherals, and the substantial Type III at 10.5 mm intended for miniature hard drives.

The software architecture envisioned by PCMCIA’s designers demonstrated remarkable vision. Three layers worked together harmoniously: Socket Services managed hardware at the lowest level, Card Services handled resource allocation and automatic configuration, while card-specific drivers topped the ensemble. Inserting and removing cards without powering down the system became possible.

PCMCIA transformed mobile computing. Manufacturers competed ingeniously to offer modems, network cards, wireless adapters, and multiple peripherals in this compact format. Portable computers finally gained the flexibility they had been lacking, becoming true electronic Swiss Army knives.

Technical evolutions followed in rapid succession according to market needs. 1995 marked the arrival of version 5.0 with the 32-bit CardBus, capable of transferring up to 132 MB/s. Low-power 3.3V cards appeared, power management became more refined, and multifunction cards emerged. The following year, Zoomed Video created a direct link between PCMCIA card and graphics controller, freeing the processor from video decoding tasks.

What impresses about PCMCIA is its early technical sophistication. Each card contained its own Card Information Structure (CIS), a sort of electronic identity card describing its capabilities to the operating system. This plug-and-play approach preceded the arrival of USB by several years. The computer automatically recognized the inserted card and configured the necessary resources, without user intervention.

The physical robustness of PCMCIA cards established lasting benchmarks. Specifications required 10,000 insertion cycles in office environments and 5,000 under harsh conditions. The electrical design protected components: power pins connected first and disconnected last, preventing destructive voltage surges.

At the turn of the 2000s, CardBay modernized the interface by introducing a high-speed serial link based on USB, while maintaining compatibility with existing systems. This approach illustrated the maturity of the standard, capable of evolving without breaking the ecosystem built around it.

The concepts of hot-swapping and automatic configuration from PCMCIA directly inspired USB and PCI Express. The cooperation between American and Japanese industrialists demonstrated that a worldwide standard could emerge from collaboration rather than commercial warfare.

Beyond portable computing, PCMCIA spread throughout consumer electronics. Digital cameras, television decoders, and automotive embedded systems adopted this format for its reliability and ease of use. This versatility testified to the quality of the original design.

However, PCMCIA did not escape the limitations of its era. Its simple bus, inherited from a direct memory interface, lacked elaborate synchronization signals. The 16-bit width restricted performance for demanding applications. These technical constraints prepared its gradual replacement by more recent standards like ExpressCard.

PCMCIA belongs to computer history, but it proved that an industrial consortium could create a durable standard, capable of stimulating innovation while solving complex interoperability problems.


MP3

In 1987, in the laboratories of the Fraunhofer Institute in Germany, researchers were working on Digital Audio Broadcasting, a project that seemed unremarkable. No one suspected they were laying the groundwork for a revolution that would shake the global music industry. The CD had just taken its first steps five years earlier, and the question of digital audio compression was being raised.

Karlheinz Brandenburg, a graduate in electrical engineering and mathematics from the University of Erlangen, led this research with an obsession: how to drastically reduce the size of audio files without destroying their quality? The answer lay in the quirks of our auditory system. The human ear has flaws that Brandenburg intended to exploit. Some sounds mask others, certain frequencies disappear in the shadow of more dominant ones. These psychoacoustic phenomena would become the pillars of the future MP3.

In 1988, ISO created the Moving Picture Experts Group. This international consortium had the mission of establishing audio and video compression standards. Fraunhofer’s work finally found its institutional framework. Four years of development passed before the MPEG-1 Layer 3 standard officially came into being in 1992. This unwieldy name would soon shrink to three letters that would change the world: MP3.

The magic works through a process of formidable complexity. The audio signal first undergoes meticulous temporal segmentation, then passes through a hybrid filter bank combining modified discrete cosine transform and polyphase filtering. A psychoacoustic model then calculates masking thresholds for each frequency band. Everything the ear does not naturally perceive simply vanishes from the final file. Only truly audible sound information remains, encoded with the necessary care.

This surgical approach produces spectacular results. A piece of music occupies about 10 megabytes per minute on a CD. With MP3 compression at 128 kilobits per second, that same piece fits into 1 megabyte. Ten times less space for quality that most listeners find acceptable. This technical feat would soon meet the Internet.
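
The arithmetic behind these figures is easy to verify; a back-of-the-envelope calculation in Python:

    # CD audio: 44,100 samples/s x 16 bits x 2 channels.
    cd_bits_per_second = 44_100 * 16 * 2
    cd_mb_per_minute = cd_bits_per_second * 60 / 8 / 1_000_000
    # MP3 at 128 kilobits per second.
    mp3_mb_per_minute = 128_000 * 60 / 8 / 1_000_000

    print(f"CD audio : {cd_mb_per_minute:.1f} MB per minute")   # ~10.6
    print(f"MP3 128k : {mp3_mb_per_minute:.2f} MB per minute")  # ~0.96
    print(f"ratio    : {cd_mb_per_minute / mp3_mb_per_minute:.0f}:1")  # ~11:1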

The mid-1990s saw an explosive convergence. The Internet expanded, connections improved, and MP3 found its way. In 1995, the format began circulating on computer networks. WinAmp, a small audio player released in 1997, accompanied this diffusion and turned the personal computer into a hi-fi system. But this democratization came with an unexpected phenomenon: music piracy exploded.

Napster shook the record industry. This peer-to-peer exchange platform connected millions of Internet users eager to share their digitized music collection. Record labels discovered with horror that their business model was faltering. Lawsuits rained down, debates about copyright ignited. MP3 became the symbol of an unprecedented legal and economic battle.

The industry responded by creating legal offerings. Saehan Information Systems marketed the first portable MP3 player in 1998. Three years later, Apple made a big splash with the iPod and iTunes. Steve Jobs succeeded where many had failed: creating a viable ecosystem combining hardware and legal digital music distribution. The iPod was the essential technological object of the early 21st century.

The technical flexibility of MP3 partly explains its success. The format offers a range of bitrates from 32 to 320 kilobits per second. Each user adapts the size-quality tradeoff according to their needs and constraints. Three compression modes coexist: CBR maintains a constant bitrate, VBR varies the bitrate according to the complexity of the musical passage, ABR finds a compromise between the two approaches. This diversity satisfies both the demanding audiophile and the hurried user.

Under the hood, the technology impresses with its sophistication. The spectral analysis relies on the fast Fourier transform, which decomposes the signal into its frequency components. A “bit reservoir” mechanism dynamically redistributes the bitrate between frames according to the needs of each audio segment. This intelligent management optimizes every available byte.

However, MP3 does not escape the compromises inherent in all compression. Artifacts can appear, like the pre-echoes that precede sharp instrumental attacks. High frequencies sometimes lose their clarity. These defects, imperceptible to many, annoy purists and motivated the development of more efficient successors like AAC or Opus.

The legal adventure of MP3 found its epilogue in 2017. The patents of the Fraunhofer Institute and Thomson Licensing expired that year. Their creators officially announced the end of the licensing program, encouraging the adoption of more recent technologies. A page turned after twenty-five years of dominance.

MP3 has transformed our relationship with music. Gone are the CD shelves, finished are the walkmans and their fragile cassettes. Music is mobile, accessible everywhere, infinitely copyable. This new freedom upends listening habits but raises questions about the valuation of artistic work. How to fairly compensate creators in a world where their work becomes dematerialized?

Beyond music, MP3 perfectly illustrates how fundamental research can transform society. Work on psychoacoustics and signal processing revolutionized an entire industry. This technology also demonstrates the power of open standards and public research in technological innovation.

The legacy of MP3 is found in all modern audio codecs. Spotify, Deezer, Apple Music rely on the principles it established. Technically surpassed, MP3 remains the reference format in the collective consciousness.


WAVE

In 1991, when Microsoft and IBM joined forces, they created far more than just an audio file format. WAVE was born from this collaboration, at a time when personal computers were truly discovering multimedia. Machines were beginning to produce something other than electronic beeps, and there was a need to store this new sonic richness somewhere.

The technical choice centered on RIFF, a file architecture designed as a large modular container. Each element found its place in chunks, small segments identified by four characters. An approach that transformed multimedia data management. Developers appreciated this flexibility of being able to add, remove, and reorganize information without breaking the overall structure.

The WAVE philosophy can be summed up in a few words: preserve the integrity of sound. No compression, no artifice, just the raw digital representation of the analog signal. The format consists of just two essential elements: the “fmt” segment describing the technical characteristics, and the “data” segment containing the audio samples. This apparent simplicity actually conceals remarkable robustness.
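
The structure is simple enough that a few lines of Python can walk it. This sketch assumes an uncompressed PCM file named example.wav: it reads the 12-byte RIFF header, then lists each chunk’s four-character identifier and size:

    import struct

    def wav_chunks(path):
        """Walk a RIFF file: each chunk is a 4-char ID plus a 32-bit size."""
        with open(path, "rb") as f:
            riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
            assert riff == b"RIFF" and wave_id == b"WAVE"
            while header := f.read(8):
                cid, csize = struct.unpack("<4sI", header)
                yield cid, csize
                f.seek(csize + (csize & 1), 1)  # chunk data is word-aligned

    for cid, csize in wav_chunks("example.wav"):
        print(cid.decode("ascii"), csize)
    # fmt  16
    # data 1764000  (for example)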

The 1990s saw WAVE gradually establish itself. Its natural compatibility with Windows gave it a definite advantage, but Macintosh and Linux systems adopted it as well. Audio professionals discovered a reliable tool, capable of reproducing exactly what they had recorded. Gone were the degradations caused by compression algorithms: what you heard at the output faithfully corresponded to what you captured at the input.

The European Broadcasting Union quickly grasped the format’s potential. In 1997, it launched the Broadcast Wave Format, an extension that enriched WAVE with specialized metadata. Suddenly, an audio file could tell its story: who created it, when, under what conditions, with what equipment. This traceability quickly became indispensable in the audiovisual industry, where every second of programming must be documented and verifiable.

But WAVE also revealed its limitations. The 4 GB boundary, inherited from the original RIFF structure, proved problematic when studios moved toward high resolution. A concert recording at 96 kHz could easily exceed this limit. The EBU responded in 2009 with RF64, an extension that pushed back these technical constraints. Archivists and producers could finally work without worrying about the duration of their recordings.

Data representation in WAVE follows precise logic. 8-bit samples use unsigned bytes, from 0 to 255, while 16-bit adopts two’s complement, between -32,768 and 32,767. This technical consistency greatly facilitates programmers’ work and guarantees faithful conversion to the analog signal.
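
The standard library of a language like Python can demonstrate this encoding directly. The sketch below writes one second of a 440 Hz tone as signed 16-bit samples (the file name tone.wav is arbitrary):

    import math, struct, wave

    rate, freq = 44_100, 440.0
    samples = (int(32_767 * math.sin(2 * math.pi * freq * t / rate))
               for t in range(rate))

    with wave.open("tone.wav", "wb") as w:
        w.setnchannels(1)    # mono
        w.setsampwidth(2)    # 2 bytes: two's complement, -32,768 to 32,767
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))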

The archival world massively adopted the format. The International Association of Sound and Audiovisual Archives made it an official recommendation. Their reasoning? A simple, documented format without compression guarantees readability fifty years from now. The British Library digitizes its audio collections in WAVE, as does the Library of Congress. These institutional choices definitively anchored the format in digital preservation culture.

In the 2000s, FLAC arrived and compressed losslessly, halving file sizes while preserving audio quality. Yet WAVE didn’t falter. In recording studios, editing suites, and archives, it remained the reference. Its simplicity ultimately constituted its strength: no complex algorithm to decode, no patent to circumvent, just raw, readable data.

The format evolved discreetly. Metadata became enriched with INFO segments, XMP or ID3 tags. These additions gradually transformed WAVE from a simple audio container into a true digital document, capable of carrying rich contextual information. Modern professional workflows take advantage of this flexibility to automate content management.

Tools like JHOVE accompanied this evolution by offering validation and technical analysis. Verifying WAVE file compliance, extracting its metadata, detecting potential corruption—these are indispensable functions when managing thousands of hours of audio archives. The community thus developed an entire ecosystem around the format.

More than thirty years after its creation, WAVE has weathered the ages without showing its age. Its longevity stems from a subtle balance between technical simplicity and versatility of use. Where other formats disappear with their original technologies, WAVE persists because it meets the need to preserve sound in its purest form, without artifice or compromise.

Microsoft and IBM’s 1991 audio format continues to capture our most precious sounds, proof that certain innovations transcend generations without losing their relevance.


Python

The story begins with a weekend in December 1989. Guido van Rossum, a researcher at the National Research Institute for Mathematics and Computer Science in the Netherlands, uses his holiday break to tinker with a new programming language. Nothing too ambitious at first: he wants to create something more practical than ABC, a language he had worked on, something that works better with UNIX without being chained to it.

The name he chooses reveals his taste for British humor. Python is a nod to Monty Python’s Flying Circus, the troupe that left its mark on English comedy. This choice announces the mindset that will inhabit the developer community: a blend of technical seriousness and welcome irreverence in a sometimes too austere world.

He publishes the first version in February 1991 on alt.sources, the forum where programmers share their creations. Version 0.9.0 contains what will make Python unique, namely a syntax that prioritizes readability, an approach that contrasts with the languages of the time. Indentation, a syntactic element in its own right, is a direct inheritance from ABC. This idea grates at first. Programmers accustomed to braces and semicolons find it strange. Yet this constraint will prove liberating: it forces clean and readable code.

The philosophy crystallizes around a simple principle: “There should be one—and preferably only one—obvious way to do it.” This maxim guides all design decisions. Where other languages multiply syntaxes for the same operation, Python chooses simplicity. The code is more predictable, therefore easier to maintain and understand.

Python 1.0 arrives in January 1994 with a solid functional arsenal. Lambda, map, filter, reduce: these tools borrowed from functional programming enrich its expressive possibilities. The language finds its footing and attracts its first followers. Version 1.5 in December 1997 consolidates these foundations and prepares the ground for what follows.

The year 2000 brings Python 2.0 and its novelties. The garbage collector automates memory management, freeing the programmer from this chore. List comprehensions introduce an elegant syntax for manipulating collections: [x**2 for x in range(10)] advantageously replaces several lines of traditional loops. These additions don’t betray the original spirit but refine it.
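
The gain in concision is easy to see when the comprehension quoted above is set against the explicit loop it replaces:

    # The explicit loop...
    squares = []
    for x in range(10):
        squares.append(x ** 2)

    # ...and its one-line equivalent.
    assert squares == [x ** 2 for x in range(10)]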

Python’s growing success reveals a different approach to development. The language encourages modularity without brutally imposing it. Modules combine naturally; code can be reused without contortions. This fluidity appeals to programmers tired of other tools’ complexity.

Then comes 2008 and Python 3.0, a version that disrupts habits. Van Rossum and his team choose to break compatibility to correct design flaws. This courageous decision temporarily fractures the community: old programs no longer work directly with the new version. But this boldness pays off in the long run. Python 3 establishes healthier, more consistent foundations.

Python’s dynamic typing facilitates rapid prototype writing. No need to declare a variable’s type before using it, the language handles it. This flexibility accelerates initial development while sometimes demanding more rigor in testing. Error handling, inspired by Modula-3, structures programs without weighing them down.

Object orientation integrates naturally into Python from its early versions. No conceptual shock, just a pragmatic approach that leaves the choice to the programmer, whether writing simple procedural code or organizing programs into sophisticated classes according to needs.

The richness of the standard library distinguishes Python from its competitors. “Batteries included,” as they say in the community. Email, web, cryptography, compression: most common needs find an answer in the base distribution. This abundance avoids searching for third-party libraries for standard tasks.

Accessibility makes Python a language of choice for learning. Its syntax, close to natural language, demystifies programming. Books like Python for Kids popularize its pedagogical use. Universities adopt it massively to introduce their students.

PyPI, the Python package index, transforms code sharing into child’s play. A simple pip install command suffices to add functionality to a project. This ease nourishes a dynamic ecosystem where everyone contributes and benefits from others’ work.

PEPs (Python Enhancement Proposals) organize the language’s evolution democratically. These improvement proposals structure debates, document technical choices. This open governance maintains coherence while welcoming external contributions.

Google adopts Python from its early days for parts of its search engine. This legitimization by a major technology company opens doors. Other organizations follow, attracted by the language’s productivity and its ecosystem’s quality.

The explosion of artificial intelligence propels Python to the forefront. TensorFlow, PyTorch, scikit-learn: specialized libraries multiply. Data scientists massively adopt the language for its analysis and visualization capabilities. NumPy transforms Python into a serious competitor to MATLAB for scientific computing.

Django revolutionizes web development in Python. This framework offers a complete approach: integrated ORM, automatic administration interface, sophisticated template system. Instagram, Pinterest, Mozilla use Django for their high-traffic sites, proving its industrial robustness.

Recent versions refine the user experience. Python 3.8 introduces the walrus operator (:=) which assigns a value while using it in an expression. Python 3.9 improves dictionary performance, the language’s central data structure. These evolutions reflect growing technical maturity.
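
A small illustration of the walrus operator, on arbitrary sample data: the value is bound and tested within the same expression, sparing a separate assignment statement:

    data = [4, 8, 15, 16, 23, 42]

    # Bind len(data) to n and test it at the same time.
    if (n := len(data)) > 5:
        print(f"{n} elements, more than expected")

    # The same trick keeps read-and-test loops compact.
    while (pair := data[:2]):
        print(pair)
        data = data[2:]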

Portability remains a major asset: a Python program runs identically under Windows, macOS, or Linux. This universality, combined with the language being free of charge and open source, eliminates many barriers to adoption. Companies appreciate this freedom, which protects them from technological lock-in.

Integration with other languages opens perspectives. Cython compiles Python to C for performance gains, Jython runs it on the Java virtual machine, and PyPy compiles it on the fly. These bridges allow optimizing critical parts without abandoning Python’s productivity.

Python establishes itself in various domains: system automation, web development, data analysis, artificial intelligence, scientific computing. This versatility reflects van Rossum’s initial design choices. By prioritizing simplicity and readability, he created a tool that adapts to computing’s changing needs. Thirty years after that December 1989 weekend, the language continues to evolve while keeping its soul: making programming accessible without sacrificing power.


Linux

UNIX was an operating system written in C that could be adapted to different hardware, a way around the headache of incompatible machines from different manufacturers such as IBM and Burroughs. Only the kernel required specific adjustments for each architecture. Clever, but UNIX licenses cost a fortune: resellers inflated prices up to ten times the initial rate.

Andrew Tanenbaum, an American professor working in the Netherlands, then took a different approach. He wanted to show his students how an operating system actually worked. His MINIX, designed for Intel 8086 processors that were flooding the market, didn’t match the performance of commercial systems. Its strength lay elsewhere: Tanenbaum published the 12,000 lines of code in his book “Operating Systems: Design and Implementation”. This practice was already seen for BASIC programs on personal computers, like games, but not for entire operating systems. Curious programmers could finally look under the hood.

Linus Torvalds, a second-year computer science student in Helsinki, was among those passionate readers. This self-taught Finn was simultaneously discovering Richard Stallman’s ideas. Stallman had started his career at MIT’s AI lab, where he created the Emacs editor. In the early 1980s, he observed with bitterness how software companies poached the best programmers by imposing draconian confidentiality clauses. In his view, software should remain free, copyable, and modifiable by all.

The GNU system project (a recursive acronym for “GNU’s Not UNIX”) was born in 1983. Stallman began with the GCC compiler in 1984, a technical masterpiece that surpassed the achievements of entire teams of commercial developers. But the system kernel was still missing.

Torvalds, inspired by MINIX, took on the task, and his first Linux version (0.01) appeared in September 1991, followed by 0.02 in October. The reception was immediate: programmers from around the world downloaded the code, tried it, improved it, and sent their contributions back to Torvalds. An unprecedented collaborative dynamic was set in motion.

Tanenbaum was not kind to this newcomer. In early 1992, he harshly criticized Linux’s monolithic kernel architecture, calling it a major design flaw. “Linux is obsolete,” he asserted. Torvalds took the criticism in stride and continued his work. History would prove Torvalds right.

The association with GNU programs transformed Linux into a complete system. The GPL license ensured that the source code would remain accessible, creating a virtuous circle: more contributors, more improvements, more users. Students and programmers flocked to this open playground.

Distributions such as Red Hat and Debian appeared, assembling precompiled software to simplify installation. The KDE and GNOME graphical interfaces made the system accessible to non-experts. Linux was leaving the labs to conquer desktops.

Tux the penguin, the project’s mascot, tells a delightful story. During a vacation in the southern hemisphere, Torvalds encountered a penguin that bit his hand. This amusing incident later inspired the choice of symbol. Humor was never far away in this adventure.

Linux developed its own philosophy. Each tool accomplishes a single task but does it perfectly. The interface treats files and input-output devices uniformly. Users combine these simple tools to create sophisticated functionality, customizing their environment to their liking.
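
In the shell, this composition takes the form of a pipeline. The same chaining of small single-purpose tools can be sketched from Python with the subprocess module (the ps and grep commands are assumed to be available, as on any UNIX-like system):

    import subprocess

    # Equivalent of the shell pipeline: ps aux | grep python
    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "python"], stdin=ps.stdout,
                            stdout=subprocess.PIPE)
    ps.stdout.close()  # let ps receive SIGPIPE if grep exits first
    matches, _ = grep.communicate()
    print(len(matches.splitlines()), "processes mention 'python'")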

The robustness is impressive: a failing program doesn’t bring down the entire system. Each process’s memory remains compartmentalized, preventing unauthorized interference. The development environment natively integrates a full suite of programming tools.

Linux represents the transformation of a student project into a global phenomenon. It runs web servers, supercomputers, and our Android mobile devices. An international community of developers proves it can create and maintain a sophisticated operating system, challenging established proprietary models. Linus’s small kernel has grown to challenge industry giants, championing collaborative development and open source code.


Visual Basic

Visual Basic began with a simple idea: making Windows programming accessible to a broader audience than just C or assembly language experts. In May 1991, Microsoft made a major breakthrough by launching Visual Basic 1.0. For the first time, creating a graphical interface no longer required writing hundreds of lines of obscure code. All it took was dragging and dropping buttons, text boxes, and menus directly onto the screen.

This visual approach revolutionized established practices. Where previously one had to master the intricacies of the Windows API and manipulate complex structures, Visual Basic offered an intuitive logic: what you see on screen corresponds exactly to what the end user will see. A button placed in a corner of the window will appear in the same location in the finished application.

Microsoft didn’t stop there. In September 1992, a DOS version was released, attempting to transpose this ease of use into the text environment. Extended ASCII characters simulated graphical elements as best they could, but the experience was rudimentary compared to the Windows version.

Version 2.0 of November 1992 corrected initial flaws and accelerated program execution. The development environment gained fluidity, which encouraged even more programmers to take the plunge. With version 3.0 in summer 1993, Microsoft introduced a major innovation: direct database access. Integration with Access simplified the creation of management applications, a rapidly expanding market.

Visual Basic 4.0 was released in 1995. Windows 95 democratized 32-bit systems, and Microsoft adapted its language accordingly. This version navigated between two worlds: it generated 16-bit code for legacy systems while exploiting the capabilities of new 32-bit processors. Object-oriented programming made its appearance, timidly admittedly, but it opened new architectural perspectives.

Two years later, Visual Basic 5.0 definitively abandoned 16-bit support. This bold decision demonstrated Microsoft’s confidence in the future of 32-bit computing. Native compilation replaced traditional BASIC interpretation, significantly accelerating program execution. Developers discovered the ability to create their own reusable controls, paving the way for an ecosystem of third-party components.

Visual Basic 6.0, released in mid-1998, ventured into web territory. Internet Explorer evolved into a full-fledged development platform, enabling the creation of hybrid applications blending desktop and web. This version enjoyed phenomenal success and remains in use in some companies, despite the end of official support.

The new millennium brought us Visual Basic .NET in 2002. Microsoft executed a complete break with the past. Managed code, automatic memory management, and integration with the .NET Framework radically transformed the nature of the language. This abrupt transition unsettled part of the community, accustomed to the simplicity of previous versions.

Visual Basic .NET 2003 consolidated this new direction. Support for the .NET Compact Framework opened the doors to mobile development, still a niche market at the time. Microsoft provided automatic migration tools, but the transformation was often laborious.

In 2005, Microsoft simplified the nomenclature and dropped the .NET suffix. Visual Basic 2005 recovered part of its lost identity. The “Edit and Continue” feature revolutionized debugging by allowing real-time code modifications. The “My” namespace facilitated access to system resources, recapturing the spirit of simplicity from the early versions.

Subsequent versions accompanied the evolution of the .NET Framework: 2008 exploited version 3.5, 2010 relied on 4.0, 2012 used 4.5. Each iteration brought its share of technical improvements, but the essence of the language remained faithful to its initial mission: democratizing software development.

Event-driven programming constitutes the soul of Visual Basic. Unlike traditional programs that follow a linear path from beginning to end, Visual Basic applications operate at the rhythm of user interactions. A button click triggers a procedure, a keystroke activates another function. This reactive approach transforms the logic of programming.
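
Visual Basic hid the machinery, but the pattern underneath is easy to sketch. The following C fragment (an illustration of the pattern, not how VB was implemented) shows the essential shape: procedures attached to events, and a loop that dispatches each incoming event to its handler:

    #include <stdio.h>

    /* The event-driven pattern in miniature: handlers are attached to
       events, and a loop dispatches each event to the right procedure. */
    typedef void (*handler_t)(void);

    static void on_button_click(void) { printf("button clicked\n"); }
    static void on_key_press(void)    { printf("key pressed\n");   }

    enum { EVT_CLICK, EVT_KEY, EVT_QUIT };

    int main(void) {
        handler_t handlers[] = { on_button_click, on_key_press };
        int events[] = { EVT_CLICK, EVT_KEY, EVT_CLICK, EVT_QUIT };
        for (int i = 0; ; i++) {      /* the event loop */
            if (events[i] == EVT_QUIT)
                break;
            handlers[events[i]]();    /* dispatch, as VB does invisibly */
        }
        return 0;
    }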

This event-driven philosophy reflects the evolution of personal computing. Computers in the 1990s became increasingly interactive, and users expected responsive and intuitive interfaces. Visual Basic perfectly met this expectation by offering a development model aligned with this new reality.

The massive adoption of Visual Basic testifies to its relevance: by the late 1990s, nine out of ten Windows applications came from this environment. This dominance is explained by the explosion of the computer market, the standardization of Windows, and above all the accessibility of the language.

Visual Basic transformed the software development landscape by making the creation of Windows applications accessible to a much broader audience than just seasoned computer professionals. Accountants created their own management tools, engineers developed calculation applications, teachers programmed educational software. This democratization reshuffled the cards of the software industry.

Top

DEC OSF/1

In 1992, Digital Equipment Corporation was going through a period of transformation. The company, once a giant of minicomputing, sought to establish itself in the enterprise server market against increasingly fierce competition. It then launched the development of a particular version of OSF/1, adapted to its new Alpha architecture and intended for AlphaServer servers.

The project was born out of necessity: to create a system capable of competing with existing UNIX solutions while fully exploiting the capabilities of Alpha processors. The engineering teams set ambitious goals: managing at least two processors, achieving 120 transactions per second on TPC-A benchmarks, and supporting at least 1 gigabyte of memory. These specifications represented considerable technical work.

The chosen architecture relied on a system bus with a maximum of eight nodes. Seven slots accommodated up to four processors each, with the eighth reserved for the system-to-PCI bridge. This modular design responded to a clear philosophy: offering scalability without compromising performance. The engineers integrated emerging technologies such as PCI and EISA buses, a risky bet that would prove successful for compatibility with the PC ecosystem.

Memory management posed particular constraints. The system had to juggle processors of different speeds while maintaining perfect cache coherence. The adopted solution employed a 1 MB secondary cache using 15-nanosecond SRAM components. This choice resulted from a delicate compromise between performance and cost, with faster memories being prohibitively expensive for a commercial product.

Stream buffers constituted one of the system’s most remarkable innovations. These devices monitored transaction addresses for reads and anticipated needs by preloading frequently used data, dropping access times from 9 to 7 cycles—a significant gain that impacted overall performance.

On the I/O side, DEC adopted a hybrid approach between PCI and EISA. This conservative decision was explained by economic constraints. Although older, EISA remained less expensive than PCI while offering acceptable performance for certain applications. The system thus supported up to 18 network ports.

The system integrated carefully designed high-availability features. Hot-swap storage devices allowed on-the-fly replacement of failing components. Even more impressive, the boot process automatically recovered from corrupted flash ROMs without human intervention—a technical feat that would spare system administrators many nightmares.

Multi-OS compatibility represented another significant asset. DEC OSF/1 coexisted with Windows NT and OpenVMS on the same machine, thus meeting the heterogeneous needs of enterprises. This versatility, rare at the time, would attract many customers concerned with preserving their existing software investments.

Version 3.0, unveiled in 1994, marked the culmination of these efforts. Full support for symmetric multiprocessing (SMP) placed the system among the major players. Developers had rethought synchronization mechanisms and kernel algorithms to take advantage of parallelism. The result exceeded expectations: measured performance far surpassed initial objectives.

The market reception was enthusiastic. The AlphaServer 2100, equipped with DEC OSF/1, set the benchmark for price-performance ratio. Independent tests confirmed the system’s superiority, consolidating DEC’s position in the enterprise server market.

This technical success had a lasting influence on the industry. The concepts developed for DEC OSF/1—advanced memory management, optimized multiprocessor support, modular architecture—spread to other systems. The balanced approach between innovation and pragmatism would serve as a model for subsequent developments.

In 1995, DEC renamed its system Digital UNIX, then Tru64 UNIX after the acquisition by Compaq in 1998. HP, which absorbed Compaq in 2002, continued development until 2012. This longevity testified to the solidity of the foundations laid by DEC’s teams in the early 1990s. In retrospect, DEC OSF/1 perfectly illustrates the challenges of this pivotal decade. Faced with the rise of RISC architectures and the explosion of distributed computing needs, DEC managed to create a system that lived up to its ambitions.

Top

Blowfish

In 1993, as the Data Encryption Standard was showing its first signs of weakness with its 56-bit key becoming vulnerable to brute-force attacks, Bruce Schneier introduced Blowfish to the world—a symmetric encryption algorithm that would revolutionize the approach to computer security.

He had not chosen this path by chance. Proprietary solutions dominated the market, often accompanied by restrictive patents and prohibitive costs. The idea of a free, no-cost, and high-performance algorithm was taking shape in the mind of this American cryptographer who wanted to democratize access to strong encryption. Blowfish was born from this vision: to offer a credible alternative to established standards, free from legal or financial constraints.

The technical specifications of Blowfish broke with the practices of the time. Unlike DES and its fixed 56-bit key length, this new algorithm accepted variable-length keys, ranging from 32 to 448 bits. This flexibility addressed the diverse needs of users, whether individuals seeking basic protection or governments requiring maximum security. The heart of the algorithm relied on a 16-round Feistel network, a proven architecture enhanced by a remarkable innovation: key-dependent S-boxes.

These substitution boxes constituted Blowfish's signature. Rather than using fixed tables like its predecessors, the algorithm generated its own S-boxes from the provided key. Initialization began by loading the hexadecimal digits of π, a “nothing up my sleeve” constant guaranteeing the absence of secret backdoors, then reworked the tables through a process driven by the key itself. This approach made each keyed instance unique while preserving the desired cryptographic properties.
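
The round structure itself is compact. This C sketch follows Schneier's published design for the encryption core; the array names are conventional, and the key schedule that fills them (described below) is omitted:

    #include <stdint.h>
    #include <stdio.h>

    uint32_t P[18];        /* subkeys, filled by the key schedule */
    uint32_t S[4][256];    /* key-dependent S-boxes, likewise */

    /* Blowfish round function: four bytes combined through the S-boxes */
    static uint32_t F(uint32_t x) {
        uint8_t a = x >> 24, b = x >> 16, c = x >> 8, d = x;
        return ((S[0][a] + S[1][b]) ^ S[2][c]) + S[3][d];
    }

    /* Encrypt one 64-bit block, held as two 32-bit halves */
    void blowfish_encrypt(uint32_t *left, uint32_t *right) {
        uint32_t l = *left, r = *right, t;
        for (int i = 0; i < 16; i++) {   /* 16 Feistel rounds */
            l ^= P[i];
            r ^= F(l);
            t = l; l = r; r = t;         /* swap halves */
        }
        t = l; l = r; r = t;             /* undo the final swap */
        r ^= P[16];
        l ^= P[17];
        *left = l; *right = r;
    }

    int main(void) {
        uint32_t l = 0x01234567, r = 0x89abcdef;
        /* with P and S filled by a real key schedule this would be a
           genuine Blowfish encryption; here it only shows the call */
        blowfish_encrypt(&l, &r);
        printf("%08x %08x\n", l, r);
        return 0;
    }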

The official presentation took place in December 1993 at the Fast Software Encryption workshop in Cambridge, with the paper appearing in the 1994 proceedings. The reception was mixed: while the technical innovation impressed, the cryptographic community remained cautious about this newcomer. The first security tests followed one another. Serge Vaudenay identified classes of weak keys in variants reduced to fewer than 14 rounds, but these vulnerabilities did not affect the standard version. Vincent Rijmen developed a second-order differential attack against a 4-round variant, a brilliant technical demonstration that did not in any way challenge the robustness of the complete algorithm.

The commercial adoption of Blowfish exceeded its creator’s expectations. Citi-Software Ltd’s Access Manager integrated it into its password manager, leveraging its execution speed to secure sensitive data. The AEdit word processor made it its document encryption engine, while Markus Hahn’s Blowfish Advanced CS used it for secure file deletion. This proliferation of implementations testified to the confidence the industry placed in this free algorithm.

Performance was one of Blowfish’s major assets. On 32-bit processors, it required only 18 clock cycles per encrypted byte, a remarkable performance compared to DES’s 45 cycles or IDEA’s 50 cycles. This efficiency came from the judicious choice of simple operations: XOR, addition on 32-bit words, access to pre-calculated tables. The subkey arrays fit in the cache of processors like the 80486 or 68040, thus optimizing memory accesses.

However, this execution speed came with a downside: initialization. Subkey generation required the equivalent of 521 encryption iterations (the 18 subkeys and the 1,024 S-box entries make 1,042 thirty-two-bit words, filled two at a time by successive encryptions), a long process that penalized applications requiring frequent key changes. This characteristic naturally directed Blowfish toward uses where the key remained stable: file encryption, established secure communications, data storage.

The theoretical security of Blowfish impressed with its mathematical solidity. Schneier had calculated that a 197-bit key would resist even if all the energy produced by the Sun were converted into computational operations. These projections, though hypothetical, illustrated the considerable security margin offered by the algorithm. A 128-bit key required billions of powerful computers for millions of years to be broken by brute force.

The key-dependent S-boxes reinforced this security by complicating differential cryptanalysis. Each key generated its own substitution tables, making it difficult to establish exploitable patterns. The Feistel network ensured optimal diffusion: after a few rounds, each output bit depended on all input bits, a central property for resisting statistical attacks.

Despite its undeniable qualities, Blowfish revealed certain limitations over time. Its 64-bit block size, standard in the 1990s, seemed insufficient given the growing security requirements that now favored 128-bit blocks. The algorithm’s memory footprint, with its multiple tables, posed problems on resource-constrained embedded systems such as smart cards.

These constraints did not prevent Blowfish from establishing itself durably. Thirty years after its creation, the algorithm still powers numerous systems, testament to its solid and balanced design. Its successor Twofish, a finalist in the 1998 AES competition, adopted several of its innovations while correcting the identified limitations. But Blowfish retained its followers, attracted by its ease of integration and proven performance.

Blowfish’s impact went far beyond the technical realm. It demonstrated that a free cryptographic algorithm could rival the most sophisticated commercial solutions. This success inspired developers and researchers, contributing to the rise of the free cryptography movement. OpenSSL and other projects like GnuPG relied on this demonstration to legitimize their collaborative approach.

The public analysis that Blowfish received also validated Kerckhoffs’s principle, according to which the security of a cryptographic system must rely only on the secrecy of the key, not on that of the algorithm. This transparency, far from weakening security, strengthened it by subjecting the algorithm to the critical scrutiny of the international scientific community.

Blowfish remains an essential pedagogical reference for understanding the mechanisms of modern symmetric cryptography. Its history illustrates the transition from hardware encryption to software encryption, the democratization of cryptographic tools, and the growing importance of performance on consumer architectures. This successful synthesis between theoretical security and practical efficiency made it a model for many subsequent algorithms.

Top

JPEG

In the 1980s, a paradoxical situation characterized the digital world: images proliferated but no standard existed to compress and exchange them. Telecommunications were taking their first steps toward multimedia while computing discovered the joys of color. Faced with this normative void, an international initiative emerged in 1986: the Joint Photographic Experts Group, better known by its acronym JPEG.

This collaboration brought together researchers from ISO and CCITT (Comité consultatif international télégraphique et téléphonique, the predecessor of ITU-T). Their mission seemed clear on paper: to invent a universal compression method. In reality, the challenge proved far more complex. How could visual quality and storage space economy be reconciled? How could a format be created that would work equally well for color fax transmission and for future applications not yet imagined?

The work began with an all-encompassing exploration phase. In June 1987, twelve different compression techniques competed during comparative tests. The atmosphere must have been electric: each team defended its method with the conviction that theirs would change the world. After this first confrontation, three approaches stood out and warranted further investigation.

The three finalists underwent a new battery of evaluations in January 1988 in Copenhagen. It was there that the technique which would dominate the following decades emerged: the Discrete Cosine Transform, better known by the acronym DCT. This mathematical approach, developed in the 1970s, finally found its ideal application field.

The JPEG principle resembles a sophisticated recipe. The image is first divided into small 8×8 pixel squares, as if cutting it into a mosaic. Each piece undergoes the famous DCT transformation which converts pixels into spatial frequency coefficients. This step reveals the ingenuity of the process: rather than storing each pixel individually, the variations and repetitions in the image are encoded.

Then comes quantization, the most delicate step in the process. Here, the algorithm makes choices: it eliminates information that the human eye perceives poorly or not at all. This ruthless selection constitutes the heart of lossy compression. Huffman encoding completes the chain by compacting the resulting data according to their frequency of appearance.
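
The two lossy steps can be written down directly from their definitions. This C sketch applies the textbook 8×8 DCT-II and then divides by a quantization table; real encoders use fast factored transforms, and the table values here are left to the caller:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Forward 8x8 DCT-II followed by quantization, the two lossy steps
       of baseline JPEG (naive triple loop, for clarity only). */
    void dct_quantize(uint8_t block[8][8], uint16_t q[8][8], int16_t out[8][8]) {
        for (int u = 0; u < 8; u++) {
            for (int v = 0; v < 8; v++) {
                double sum = 0.0;
                for (int x = 0; x < 8; x++)
                    for (int y = 0; y < 8; y++)
                        sum += (block[x][y] - 128)           /* level shift */
                             * cos((2 * x + 1) * u * M_PI / 16)
                             * cos((2 * y + 1) * v * M_PI / 16);
                double cu = (u == 0) ? M_SQRT1_2 : 1.0;
                double cv = (v == 0) ? M_SQRT1_2 : 1.0;
                /* dividing by the quantization step and rounding is where
                   information is discarded: high frequencies collapse to 0 */
                out[u][v] = (int16_t)lround(0.25 * cu * cv * sum / q[u][v]);
            }
        }
    }

    int main(void) {
        uint8_t block[8][8]; uint16_t q[8][8]; int16_t out[8][8];
        for (int x = 0; x < 8; x++)
            for (int y = 0; y < 8; y++) {
                block[x][y] = (uint8_t)(128 + 8 * x);  /* smooth gradient */
                q[x][y] = 16;                          /* flat table */
            }
        dct_quantize(block, q, out);
        printf("DC = %d, first AC = %d\n", out[0][0], out[1][0]);
        return 0;
    }

Everything that rounds to zero in the high frequencies costs almost nothing after entropy coding; that is the whole economy of the format.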

The results exceeded expectations. Compression ratios of 10:1, even 20:1, became commonplace without visible loss of quality. For the first time, hundreds of images could be stored on the digital equivalent of a few floppy disks.

Standardization stretched over years, revealing the complexity of the exercise. The JPEG committee opted for a modular architecture that authorized different operating modes. The baseline mode constitutes the minimal foundation: every decoder worthy of the name must support it. This pragmatic approach avoided the pitfall of standards that are too rigid or too permissive.

The first drafts circulated as early as 1990 in laboratories and companies. ITU-T validated the specification in 1992 under reference T.81, followed by ISO in 1994 with standard ISO/IEC 10918-1. These official dates mask a more nuanced reality: the format became established in practice well before its administrative validation.

The committee made a decision that would prove decisive for the future: the baseline mode components remain royalty-free. Only certain optional features could be subject to RAND licenses. This policy contrasted with an era when patents often constituted barriers to adoption.

The Independent JPEG Group published a complete implementation under a free license in 1991. This source code, distributed under terms similar to BSD, became the technical reference. Developers worldwide could finally integrate JPEG into their applications without complex license negotiations.

The IJG didn’t stop there. Its developers regularly refined their code, corrected bugs, and added advanced features from the standard. This collaborative approach heralded the open source spirit that would triumph a few years later.

The first applications remained faithful to the initial objectives: fax and videotex exploited the new format's capabilities. But it was the Web explosion in the mid-1990s that propelled JPEG toward planetary success. Web pages became enriched with photographic images without becoming impractical on the dial-up modems of the day.

The arrival of digital cameras amplified the phenomenon. Kodak, Canon, and Nikon massively adopted the format for their devices. Memory cards, expensive and limited in capacity, directly benefited from JPEG compression. A camera from 2000 stored several hundred images on a 64 MB CompactFlash card.

At this time, the number of JPEG images in circulation was already counted in billions. Each day brought its share of new digital photographs. The format was invisible, so naturally did it establish itself. Users manipulated .jpg files without knowing of the committee’s existence that made them possible.

Yet limitations appeared progressively. At high compression rates, characteristic artifacts manifested: blocking effects, oscillations around sharp edges, degradation of uniform areas. These defects were troublesome when compression was pushed to the extreme, while they remained imperceptible in moderate use.

HDR images revealed other shortcomings of the original format. Traditional JPEG is confined to 8 bits per color channel, a ceiling that seemed derisory compared to modern sensors capable of 12 or 14 bits.

These limitations motivated the development of JPEG 2000, a complete overhaul based on wavelets rather than DCT. This new version promised better quality at equal compression rates and natively supported HDR images. It also integrated advanced features like progressive resolution enhancement.

Paradoxically, JPEG 2000 never managed to dethrone its predecessor. Technological inertia played its role: why change a format that works? The quality gains, real but subtle, did not justify the migration cost for most applications.

Classic JPEG still dominates digital photography. Smartphones generate trillions of images in .jpg format each year. Instagram, Facebook, and Twitter rely heavily on this technology that is over thirty years old.

This lasting success is explained by favorable factors. Efficient compression is the main argument, but other elements count just as much. The standard’s flexible architecture allowed its adaptation to very diverse uses. The open policy regarding patents facilitated industrial adoption. The free nature of IJG democratized access to the technology.

Thirty-five years after its creation, the JPEG committee continues its work. The modular approach, rigorous comparative testing, and balanced management of intellectual property constitute lessons for today’s standardizers. It now explores new avenues such as artificial intelligence applied to compression. But its firstborn remains its finest legacy: a format that has become so universal that it is no longer noticed.

Top

IBM ThinkPad

In IBM’s corridors during the 1920s, a small black notebook accompanied every employee. On its cover, a simple word: “THINK.” This pocket notepad embodied the company’s philosophy and its constant call to reflection. Seventy years later, this spirit would give its name to one of the most influential laptop lines in computing history.

IBM’s awakening to the realities of the portable market dates back to 1980. A team of internal analysts delivered a bold prediction: laptops would surpass desktop machines in sales volume before 1996. This revelation prompted John Akers, CEO at the time, to restructure the organization. In 1992, he created the Personal Computer Company, an independent division entrusted to Robert J. Corrigan to catch up with the accumulated lag behind Japanese and Californian manufacturers.

The first steps proved laborious, but the ThinkPad 700T was unveiled in April 1992. Equipped with an Intel 386SX/20 processor and a 10-inch monochrome STN screen, it struggled to convince. Its 4 or 8 MB of memory and two 10 MB storage units seemed paltry against market expectations. But this experience laid the groundwork for a more ambitious project.

At IBM’s Yamato laboratory, near Tokyo, Arimasa Naitoh orchestrated a passionate team. This Japanese engineer enjoyed an exceptional reputation for solving the most intricate technical problems. Alongside Ken Yonemochi and Koichi Higuchi, he leveraged the Japanese art of miniaturization to rethink every component. Meanwhile, Richard Sapper, a German designer consulting for IBM since 1980, made a radical decision. Gone was the ubiquitous beige in computing: the future ThinkPad would be black.

This unusual color reflected an aesthetic ambition that transcended conventions. Sapper drew inspiration from Japanese elegance and German functionality to design a different computer. The ThinkPad 700C revolutionized established norms.

The 10.4-inch color TFT screen constituted its first major breakthrough. While competitors still settled for monochrome displays, IBM bet on color. This technological risk-taking was accompanied by a more controversial innovation: the TrackPoint. Ted Selker, a scientist at IBM, designed this small red button located at the heart of the keyboard. Pressure-sensitive, it replaced the traditional mouse. The idea divided opinions. Some users fell in love with it immediately, others hated it. But this originality forged the ThinkPad identity.

Under the hood, the specifications impressed. The IBM 486 SLC processor clocked at 25 MHz, memory expandable up to 16 MB, and a 120 MB hard drive placed the machine at the peak of performance. The 3.8-hour battery life challenged the standards. Marketed at $4,350, the 700C displayed a price 15% lower than its direct rival, the Toshiba 4400. This pricing aggressiveness paid off: 100,000 orders poured in during the first two months.

Success encouraged IBM to multiply experiments. The 1993 ThinkPad 500 explored ultra-portability with its 7.24-inch screen. Exclusively in Japan, the 550BJ model integrated a Canon printer, anticipating total mobility needs. The 750P laid the groundwork for touch technology with its pressure-sensitive screen, a concept perfected by the 360P through a rotating mechanism. The 755CD marked another milestone by incorporating a CD-ROM drive, equipment that would remain standard for two decades.

But it was the ThinkPad 701C that crystallized IBM’s genius. John Karidis, an engineer at Big Blue, imagined the unthinkable: a keyboard that unfolds when the screen opens. Two mechanical parts articulate to form a full keyboard in a compact chassis. This achievement earned the 701C, nicknamed “Butterfly,” a place in the permanent collections of New York’s Museum of Modern Art. Cruel paradox: despite this artistic consecration, the model quickly disappeared from the catalog, victim of its mechanical complexity.

The following years saw the lineup take shape. The T series established itself as the professional reference, combining robustness and performance. The X line targeted demanding nomads, prioritizing lightness without sacrificing reliability. The A models addressed the brute power needs of mobile workstations. This clear segmentation transformed the ThinkPad into a true cultural phenomenon. Owning a black ThinkPad became a social marker in corporations. In the first year alone, sales exceeded one billion dollars.

Irony struck in 2004. IBM, inventor of the PC and creator of the ThinkPad, recorded a one-billion-dollar loss in its personal computing division. Markets evolved too quickly, Asian competition pressed hard. In May 2005, Lenovo acquired IBM’s PC business for $1.75 billion. The ThinkPad changed ownership but retained its soul.

The T60, the first Lenovo-branded ThinkPad, reassured the worried. Its visual identity endured, the TrackPoint persisted, the robustness remained. Intel dual-core processors made their appearance, the magnesium alloy cage reinforced the structure. Lenovo understood it had inherited precious heritage.

Transformations nevertheless began. The switch to 16:10 screens with the 2008 T400 sparked passionate debates on specialized forums. The adoption of the chiclet keyboard on the 2012 T430 divided the user community. But it was the removal of physical buttons from the touchpad on the T440 that provoked the most virulent uprising. Lenovo partially backtracked on the T450, proof that the ThinkPad legacy cannot be handled with impunity.

The X1 Carbon series, launched in 2012, embodied assumed modernity. Its carbon fiber chassis pushed the limits of lightness without compromising the brand’s legendary solidity. These technical innovations were accompanied by faithfulness to the original aesthetic codes: matte black, the red TrackPoint, the sobriety of lines.

More than 150 million ThinkPads circulate worldwide. This exceptional longevity transcends trends. Peter Hortensius, an executive who navigated between IBM and Lenovo, offers an enlightening explanation: the ThinkPad embodies “less an object than a value system.” Reliability, sobriety, efficiency: these principles span decades and resist the sirens of marketing.

Top

OpenGL

In the computer graphics world of the 1990s, each manufacturer had its own programming interface, its own rules, its own vision of 3D rendering. Silicon Graphics used IrisGL, others had their in-house solutions. Developers who wanted to run their applications on multiple platforms had to rewrite their code as many times as there were different systems.

It was amid this chaos that Kurt Akeley and Mark Segal, both at SGI, had an idea that seemed almost utopian: create an open and universal standard for 3D graphics programming. They didn’t just dream about it, they made it happen. In June 1992, OpenGL was born, the first truly cross-platform specification for 3D.

To prevent OpenGL from becoming another proprietary SGI product, the company immediately established a supervisory board: the Architecture Review Board. Digital Equipment, Evans & Sutherland, Intel, IBM, Hewlett Packard, Intergraph, Microsoft, Silicon Graphics, and Sun Microsystems all sat down at the table together. Imagine the scene: direct competitors agreeing to work together on a common project.

OpenGL’s gamble lay in a few simple guidelines. First, standardize access to graphics hardware without meddling in what didn’t concern it. Window management? Left to the operating system. User interface? Not our problem. This minimalist approach became its strength.

The creators made bold choices. They refused to include functions that couldn’t be hardware-accelerated. Everything related to convenience or data management was delegated to higher-level libraries. GLU, the utility library that accompanied OpenGL, handled the matrices and NURBS surfaces that developers were asking for.
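
The division of labor shows in even the smallest program. In this C sketch, OpenGL only draws; opening the window and running the event loop are handed to a companion toolkit such as GLUT, one of the libraries that grew up around the standard precisely because of that minimalism:

    #include <GL/glut.h>

    /* OpenGL issues the drawing commands; window creation and the event
       loop are delegated to GLUT, exactly as the standard intended. */
    static void display(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);            /* immediate mode, OpenGL 1.x */
        glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glFlush();
    }

    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        glutCreateWindow("OpenGL 1.x sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }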

In January 1996, OpenGL 1.1 reached an important milestone. Extended textures, logical operations in RGB mode, and vertex arrays were game changers. These improvements addressed the pressing demands of an industry beginning to understand the potential of real-time 3D.

Adoption didn’t happen overnight. First came scientific applications, followed by industry. CAD software understood OpenGL’s value: front-face culling, dashed lines, stencil buffers—all these features were there, ready to use.

The video game world remained skeptical until John Carmack of id Software made a decision that would change everything. In 1996, he decided to use OpenGL for a new version of Quake, at a time when consumer PCs with affordable 3D acceleration were still rarities. Yet this version of Quake demonstrated that an open standard could compete with any proprietary solution. Graphics card manufacturers took notice.

OpenGL grew with its era. When GPUs became programmable, the 2004 version 2.0 introduced GLSL, the shader language that gave developers direct control over vertex and pixel processing. This major evolution reflected the emergence of a new generation of graphics processors.

The mobile device explosion gave birth to OpenGL ES. This streamlined version, tailored for embedded systems, became the graphics soul of smartphones and tablets. Its success strengthened the OpenGL ecosystem as a whole, creating a virtuous circle between desktop and mobile.

In 2006, the Khronos Group took over the reins of OpenGL. This non-profit organization ensured the standard’s continuity while adapting it to new technologies. The governance change marked a new stage in OpenGL’s maturity.

Thirty years after its birth, OpenGL’s application domains are staggering. Medical imaging relies on it to visualize 3D scans. Flight simulators use it to recreate realistic environments. Hollywood employs it for special effects in film and television. Adobe integrated it into After Effects, Premiere Pro, and Photoshop. Major television networks like CBS, NBC, CNN, and BBC used OpenGL for their election coverage.

OpenGL’s extension mechanism partly explains this exceptional longevity. Manufacturers can add new features without breaking existing functionality. These extensions, once widely adopted, eventually join the official standard. This flexibility allowed OpenGL to evolve without ever losing backward compatibility.

OpenGL remains the only truly universal graphics API. Windows, macOS, Linux, embedded systems—wherever there’s a screen and a graphics processor, OpenGL can be deployed. WebGL brings its capabilities to web browsers, Vulkan takes over for high-performance applications. The legacy continues.

Top

IBM Simon

When IBM unveiled the Simon Personal Communicator in 1992, no one fully grasped the significance of this invention. The idea seemed almost outlandish: merging a mobile phone with a personal digital assistant in a single device. IBM engineers envisioned a mobile terminal that would do everything at once: make calls, organize appointments, send electronic messages. A vision that exceeded market expectations.

BellSouth Cellular launched it commercially in 1994. The price tag: $900, a hefty sum equivalent to approximately $1,960 today. The device impressed with its dimensions: 20.3 by 6.4 by 3.8 centimeters. Some compared it to a military walkie-talkie, so massive it appeared. Yet beneath this black chassis lay cutting-edge technology: an x86-compatible processor, integrated fax modem, Type II PCMCIA card slot, and most notably an 11.4 by 3.8 centimeter LCD touchscreen.

For this was the Simon’s true revolution: its touchscreen. Gone was the traditional physical keyboard; users interacted directly with their fingers or a stylus. This approach upended established conventions. The graphical interface featured eleven pre-installed applications covering daily needs: calendar, address book, calculator, notepad, email, fax, and of course telephony. Additional software could even be loaded from PCMCIA cards, an expandability virtually nonexistent on mobile devices in 1994.

The custom-developed operating system offered an intuitive interface with clear icons. Switching from one function to another required just a simple tap on the screen. Users could scribble handwritten notes, sketch drawings, compose and send emails. The onboard memory stored all this data while the PCMCIA slot opened interesting expansion possibilities.

Alas, success did not follow. Barely 50,000 units were sold during the six months of commercialization; remaining stock was returned to BellSouth and eventually destroyed. The reasons for this commercial failure? First, the price, prohibitive for many. Second, the battery life, disappointing. Third, the weight, a deal-breaker for daily use. Finally, the ergonomics, despite real innovations, left room for improvement.

But the real problem lay elsewhere. In 1994, networks weren’t ready. Mobile infrastructure remained rudimentary, data rates paltry, coverage uneven. How could one fully exploit mobile email and fax when the connection struggled to carry a single page of text? Users sought above all mobility and simplicity. A phone that makes calls, period. The Simon arrived ten years too early.

This unfortunate experience nevertheless concealed a remarkable conceptual achievement. IBM anticipated uses that wouldn’t become widespread until some fifteen years later. Apple’s iPhone in 2007 adopted many of the Simon’s ideas: touchscreen, intuitive graphical interface, convergence of functions. The first Android smartphones also followed this lineage.

The Simon’s story illustrates a truth about technological innovation: technology alone isn’t enough. A product can be technically perfect yet commercially disastrous if the ecosystem isn’t mature. Networks, usage patterns, and mindsets must evolve together. IBM learned this the hard way, but this lesson enriched understanding of mobile markets.

Thirty years later, the Simon still fascinates computing enthusiasts. As the first device to marry communication and personal organization, it laid the conceptual groundwork for today’s smartphones. Its commercial failure doesn’t erase its historical value. It bears witness to an era when manufacturers dared bold technological gambles without waiting for proven demand, even at the risk of suffering crushing setbacks.

IBM had envisioned pocket computing ahead of its time. Engineers dreamed of a single device capable of managing all its owner’s digital needs. The Simon didn’t win over consumers of its era, but this premature vision from 1994 now dominates the mobile market.

Top

Adobe PDF

In 1990, John Warnock set an ambitious goal for Adobe Systems: to create a file format that would preserve the exact formatting of documents, no matter where they were viewed. This internal project, called "The Camelot Project," was born from a daily frustration: exchanging digital documents was an uphill battle. Fonts would disappear, layouts would fall apart, and systems refused to cooperate.

Warnock and his team had a major advantage: PostScript, the language Adobe had developed in the 1980s. But transforming this programming language into a practical file format required a different approach. Three years later, in 1993, PDF emerged in a structured binary form, abandoning PostScript’s complexity in favor of enhanced performance for interactive display.
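
That structure is simple enough to emit by hand. This hypothetical C sketch (not Adobe's code) writes a minimal one-page document: numbered objects, a cross-reference table of byte offsets, and a trailer pointing at the root, the skeleton every PDF shares:

    #include <stdio.h>
    #include <string.h>

    /* Writes a minimal one-page PDF: numbered objects, a cross-reference
       table of byte offsets, and a trailer naming the document root. */
    int main(void) {
        FILE *f = fopen("hello.pdf", "wb");
        if (!f) return 1;
        long off[6];
        const char *text = "BT /F1 24 Tf 72 720 Td (Hello, PDF) Tj ET";

        fprintf(f, "%%PDF-1.4\n");
        off[1] = ftell(f);
        fprintf(f, "1 0 obj << /Type /Catalog /Pages 2 0 R >> endobj\n");
        off[2] = ftell(f);
        fprintf(f, "2 0 obj << /Type /Pages /Kids [3 0 R] /Count 1 >> endobj\n");
        off[3] = ftell(f);
        fprintf(f, "3 0 obj << /Type /Page /Parent 2 0 R"
                   " /MediaBox [0 0 612 792] /Contents 4 0 R"
                   " /Resources << /Font << /F1 5 0 R >> >> >> endobj\n");
        off[4] = ftell(f);
        fprintf(f, "4 0 obj << /Length %d >> stream\n%s\nendstream endobj\n",
                (int)strlen(text), text);
        off[5] = ftell(f);
        fprintf(f, "5 0 obj << /Type /Font /Subtype /Type1"
                   " /BaseFont /Helvetica >> endobj\n");

        long xref = ftell(f);                  /* where the table starts */
        fprintf(f, "xref\n0 6\n0000000000 65535 f \n");
        for (int i = 1; i <= 5; i++)           /* one 20-byte entry each */
            fprintf(f, "%010ld 00000 n \n", off[i]);
        fprintf(f, "trailer << /Size 6 /Root 1 0 R >>\n"
                   "startxref\n%ld\n%%%%EOF\n", xref);
        fclose(f);
        return 0;
    }

Opened in any reader, the file shows its one line of text: the same bytes, the same layout, everywhere.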

The beginnings were modest. Adobe launched two complementary products: Acrobat to create PDF files, and Acrobat Reader to read them, initially sold for a fee before being distributed free of charge. This dual strategy, paid creation on one side, free reading on the other, eventually paid off, but adoption remained gradual during the early years.

The Internet changed everything. Around the mid-1990s, the Web exploded and PDF found its natural place: a document could now travel across the network while retaining its original appearance. Companies, government agencies, and then individuals adopted the format for their digital communications. PDF fulfilled a simple promise: what you see is what your recipient will see.

Adobe didn’t rest on its initial success. Successive versions enriched the format with features that transformed it from a simple container for static documents into a true interactive platform. Annotations made it possible to mark up texts, hyperlinks to navigate between sections, forms to collect data, and digital signatures to authenticate documents. Multimedia made its appearance, integrating videos and sounds into files.

The underlying technology kept pace with this functional evolution. PDF developed a sophisticated approach to compression, adapting its methods to the content: JPEG for photographs, CCITT Group 4 for black-and-white images, LZW for text and graphics. This technical flexibility maintained quality while controlling file size, a delicate but essential balance.

Font management illustrated the format’s ingenuity well. Rather than being at the mercy of operating systems and their variable font collections, PDF offered two solutions: directly embedding fonts in the document or using intelligent substitution mechanisms. The document thus retained its appearance on a computer lacking the original fonts.

The year 2008 marked a turning point. Adobe entrusted the PDF specification to the International Organization for Standardization, transforming its proprietary format into the open standard ISO 32000-1:2008. This strategic decision confirmed the format’s maturity and paved the way for even wider adoption.

Specialized versions emerged for particular uses. The PDF/X format conquered the graphics industry and professional printing, PDF/A addressed long-term archiving needs, PDF/E adapted to engineering documents, and PDF/UA integrated accessibility requirements. Each variation testified to the format’s ability to adapt to the specific constraints of different sectors.

PDF 2.0 arrived in 2017 with the ISO 32000-2 standard, bringing improvements in digital signatures, metadata, and multimedia support. This version strengthened security and improved interoperability with contemporary technologies, proof that the format continued to evolve.

During these decades of evolution, the PDF ecosystem grew well beyond Adobe solutions. Open-source libraries, online services, and third-party applications offered their own approaches for creating, modifying, or viewing PDF files. This diversification consolidated the format’s status as a universal standard.

PDF gradually integrated Unicode for international text, XMP metadata to describe documents, and digital rights management mechanisms to secure content. It adapted to mobile screens and new viewing modes, proving its ability to adapt to technological transformations.

PDF dominates practically all professional and personal digital exchanges. Its success lies in a promise: to guarantee that documents retain their exact appearance, wherever they are viewed. This visual fidelity, combined with universal compatibility and a constant capacity for evolution, has made PDF much more than a file format—it’s a true common language of digital communication.

Top

Common Gateway Interface

The early Web resembled an immense frozen library. Consulting a page was like leafing through a book: one could read, but interaction was impossible. This situation changed radically a few years later when users began demanding more. They wanted to fill out forms, search for information, personalize their experience. Static pages were already reaching their limits.

Rob McCool worked at the National Center for Supercomputing Applications on the NCSA HTTPd web server. In 1993, he conceived an elegant solution to this problem: creating a bridge between the web server and external programs. This idea gave birth to the Common Gateway Interface, better known by the acronym CGI. The term gateway perfectly reflected its purpose: connecting two previously separate worlds.

The genius of CGI lay in its simplicity. When a visitor clicked a link or submitted a form, the server no longer simply returned an existing file. It launched a program on the server that generated a customized response, then transmitted this response to the browser. Suddenly, the Web came alive. Developers could write their scripts in the language of their choice, provided it respected a few basic rules.

These rules defined a standardized communication protocol. The server transmitted request information via environment variables called meta-variables. The script retrieved this data, processed it, and generated a response conforming to the HTTP protocol. This universal approach worked with any programming language, from C to Python to Perl.
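
A complete CGI program is startlingly small. This C sketch reads two of the standard meta-variables and emits an HTTP response on standard output, which is all the interface asks of it:

    #include <stdio.h>
    #include <stdlib.h>

    /* A complete CGI program in C: the server passes the request through
       environment variables and expects an HTTP response on stdout. */
    int main(void) {
        const char *query  = getenv("QUERY_STRING");   /* meta-variables */
        const char *method = getenv("REQUEST_METHOD");

        printf("Content-Type: text/html\r\n\r\n");     /* header, blank line */
        printf("<html><body>\n");
        printf("<p>Method: %s</p>\n", method ? method : "(none)");
        printf("<p>Query: %s</p>\n",  query  ? query  : "(empty)");
        printf("</body></html>\n");
        return 0;
    }

Dropped into a server's cgi-bin directory, it answers requests like any static page, except that the answer is computed on the spot.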

Perl indeed became CGI’s champion. Its ability to manipulate text, its permissive syntax, and its availability on all platforms won over web developers. The first scripts processed HTML forms, queried databases, and assembled personalized pages. It was an era of experimentation.

These experiments gave rise to collections of shared scripts. Matt Wright, a high school student in Colorado, created “Matt’s Script Archive” in 1995, which became an essential reference. His FormMail script allowed form content to be sent by email and was downloaded thousands of times. Problem: Wright was young, inexperienced, and his scripts contained glaring security flaws. The Perl community responded by creating “Not Matt’s Scripts”, offering more robust alternatives.

For security posed a constant challenge. Each CGI script represented a potential gateway into the system. Developers had to scrupulously validate input data, avoid buffer overflows, and guard against malicious code injection. Administrators generally confined these scripts to a special directory, the famous cgi-bin, to limit damage in case of problems.

CGI’s architecture suffered from a congenital defect: each request triggered the launch of a new process. On a lightly trafficked server, this went unnoticed. But as soon as traffic increased, performance collapsed. The machine spent more time creating and destroying processes than actually handling the requests themselves.

Solutions emerged to work around this limitation. FastCGI kept processes alive between requests, eliminating the startup overhead. Mod_perl integrated the Perl interpreter directly into Apache, transforming scripts into persistent modules. These optimizations gave CGI a new lease on life.

Meanwhile, other approaches gained ground. PHP, created in 1994 by Rasmus Lerdorf, offered native integration with the web server. No more external processes needed: code executed directly within Apache. Microsoft developed ASP for its IIS servers, while Sun worked on JSP for the Java universe. Each sought their path toward dynamic web programming.

These new technologies did not erase CGI’s legacy. They adopted its fundamental concepts: separation between server and application logic, information transmission via the environment, request-response model. CGI had blazed the trail, others were widening it.

Evolution accelerated at the turn of the 2000s. Modern web frameworks brought higher-level abstractions, sophisticated state management, service-oriented architectures. Ruby on Rails revolutionized web development in 2004, Django did the same for Python. These tools rendered CGI obsolete for most uses.

Yet CGI refused to disappear completely. Its official standardization in RFC 3875 in 2004 recognized its historical importance. This specification codified ten years of practical experience, detailing every technical aspect of the interface. Apache and other servers continued supporting it, preserving compatibility with legacy applications.

This longevity is explained by the robustness of CGI’s conceptual model. The idea of a standard interface between web server and external programs remains relevant today. Modern microservices architectures and REST APIs take up this principle on a larger scale, including the stateless philosophy.

Top

FreeBSD

To understand FreeBSD’s history, we must return to the 1970s, when the University of California at Berkeley received UNIX source code from Bell Labs. Researchers then began developing their own improvements to this system, creating what was called BSD, for Berkeley Software Distribution. This academic work resulted in several major versions that marked history, the last being 4.4BSD-Lite.

This 4.4BSD-Lite version concentrated a decade of technical innovations. The socket interface for network communications appeared there, accompanied by the reference implementation of the TCP/IP protocol. The fast file system improved performance, while NFS support and the mmap virtual memory model laid foundations that remain current. These technical elements were not mere academic exercises: they transformed how computers communicated and managed their resources.
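
The socket interface in particular became second nature to generations of programmers. A minimal TCP server in C still reads almost exactly as it did in 4.4BSD (a sketch, with error handling trimmed for brevity):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* The Berkeley socket interface: open a TCP endpoint, wait for one
       connection, answer it, and close. */
    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 8);
        int client = accept(fd, NULL, NULL);        /* wait for a caller */
        const char msg[] = "hello from a BSD socket\n";
        write(client, msg, sizeof msg - 1);         /* sockets are fds too */
        close(client);
        close(fd);
        return 0;
    }

The descriptor returned by accept behaves like any other file descriptor: read, write, close. The Unix uniformity again.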

In the early 1990s, the landscape was peculiar. The computing community sought a free and, above all, functional operating system. A group of developers decided to create a complete distribution from 4.4BSD-Lite. But they rejected the usual model of single leadership. Seven people formed what they named the Core Team, a collective direction that would define the project’s culture for decades to come.

The initial technical choice focused on the Intel 386 architecture. Developers aimed for stability and performance, particularly for networking and storage. The kernel integrated sophisticated mechanisms inherited from Mach technology for virtual memory management. The programming interface remained faithful to UNIX standards, facilitating the migration of existing applications.

Governance evolved in 2000. The nine Core Team members became elected every two years by active contributors. This democratic system allowed regular renewal and the emergence of new ideas. The project equipped itself with centralized development tools: version control, bug tracking. These infrastructures made remote collaboration much more fluid than before.

The ports system represented a major innovation. This collection of software prepared for automatic installation radically simplified the addition of third-party applications. Users no longer had to worry about dependencies or compilation problems. The pkg system later further improved package management, making the user experience comparable to the most advanced Linux distributions.

Documentation received unusual attention. Developers established a team dedicated to writing and maintaining manuals. These contributors obtained the same rights as programmers, a rare recognition that underlined the importance placed on documentation. This approach bore fruit: FreeBSD became renowned for the quality of its technical documentation.

In 2000, the FreeBSD Foundation brought institutional support to the project. This non-profit organization provided technical infrastructure and funded specific developments. It now employs about twenty people who work on development, documentation, and promotion of the system.

The choice of the Berkeley license was strategic. Less restrictive than Linux’s GPL, it allowed companies to integrate the code without obligation to publish their modifications. This flexibility attracted numerous companies in embedded systems and network devices. Apple found there the basis of Darwin, the core of iOS and macOS.

Exchanges with other BSD projects enriched development. While NetBSD brought its multi-architecture expertise and automated testing methods, OpenBSD contributed significantly to security with the SSH program and encryption components for HTTPS. This collaboration between cousin projects created a coherent technical ecosystem.

Internet infrastructure adopted FreeBSD. Its stability and network performance made it the preferred choice of access providers in the 1990s. Yahoo! and Hotmail used it for their servers, demonstrating its capacity to handle massive loads. Web hosts followed, finding in this system a reliable foundation for their services.

Community culture played a determining role. Developers established respectful communication rules, avoiding the conflicts that paralyzed other projects. This inclusive approach encouraged participation from contributors worldwide. The community became diverse and productive, far from the ego wars that harmed certain competing projects.

Thirty years later, FreeBSD remains relevant. The system continues to integrate modern technologies without sacrificing its stability. Its code appears in varied commercial products, from routers to game consoles. Network storage systems use it for its robustness. This discreet but omnipresent presence testifies to the quality of work accomplished.

This longevity is explained by mutually reinforcing factors. The technical foundation inherited from BSD was solid. Community organization proved effective. Complete documentation facilitated adoption. The permissive license attracted companies. But above all, constant attention to technical quality and balanced governance allowed weathering the storms of the computing world without losing course.

Top

Intel Pentium

In 1993, Intel abandoned its numerical nomenclature. The processor that was supposed to be called 586 ultimately took the name Pentium. This break was far from trivial, as it reflected a defensive strategy against AMD, which had marketed its Am486 by exploiting the similarity in naming. Intel attempted to register "586" or "i586" as trademarks, but was denied. A simple sequence of numbers lacked distinctive character in the eyes of the competent authorities.

The first Pentium was based on the P5 architecture. This superscalar processor operated between 60 and 66 MHz, had 16 KB of L1 cache, and used a system bus clocked at the same frequency. Its manufacture using 800-nanometer technology represented a technical feat for the time. In 1997, the Pentium MMX integrated new instructions dedicated to multimedia processing. Intel had meanwhile developed the P6 architecture, marketed under the name Pentium Pro in 1995. This version introduced out-of-order execution and incorporated a level-2 cache in a multi-chip package.

The family expanded in 1997 with the Pentium II, which combined the advances of the Pentium Pro and MMX instructions. This model adopted a new physical format, the SECC (Single Edge Contact Cartridge), which simplified assembly and testing. The Pentium III arrived in 1999 with the SSE (Streaming SIMD Extensions) instruction set and 128-bit registers enabling simultaneous processing of four single-precision floating-point numbers.
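
Those 128-bit registers survive in today's compilers as intrinsics. This small C example, using the standard xmmintrin.h header, adds four single-precision floats in one SSE instruction, exactly the kind of work the Pentium III accelerated:

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics, introduced with the Pentium III */

    /* Four single-precision additions in one instruction, courtesy of
       the 128-bit XMM registers brought by SSE. */
    int main(void) {
        float a[4] = { 1, 2, 3, 4 }, b[4] = { 10, 20, 30, 40 }, r[4];
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(r, _mm_add_ps(va, vb));   /* r = a + b, element-wise */
        printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
        return 0;
    }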

Intel switched to the NetBurst architecture with the Pentium 4 in 2000. This new design favored high clock frequencies through an extended pipeline. Mobile versions emerged for laptops, with optimizations for power consumption. In 2005, the Pentium D ushered in the multi-core era by integrating two Pentium 4 processors in a single package.

Intel modified its strategy in 2006. The Pentium brand now positioned itself between entry-level Celerons and the new high-end Core series. Recent Pentiums use the same chips as Core processors, but limited: reduced frequencies, partially disabled L3 cache, advanced technologies removed. The processors are less expensive while maintaining compatibility with the x86 architecture.

Technical developments necessitated hardware adaptations. Early models inserted into Socket supports, then Slot formats appeared to accommodate SECC cartridges. The increase in required connections led to the development of LGA 775, where pins are located on the socket rather than on the processor.

Miniaturization progressed, with manufacturing process technology advancing from 800 nanometers in 1993 to 32 nanometers for recent versions. This evolution enabled an increase in the number of transistors, improved performance, and reduced power consumption. In 2011, the Sandy Bridge architecture integrated graphics capabilities directly into the processor. This integration addressed the growing needs of common applications for graphics rendering.

Pentium processors contributed to democratizing personal computing. Their x86 architecture established itself as a de facto standard and created a vast software ecosystem. Backward compatibility maintained across generations allowed users to keep their applications when upgrading their hardware, even though we know in hindsight that this has its limits. The Pentium’s influence extends beyond personal computers. These processors served as the foundation for developing other product lines: Celeron for entry-level, Xeon for servers and workstations, embedded versions such as the EP80579 for systems-on-chip.

Industry shifts are reflected in the evolution of the Pentium. The rise of multimedia applications motivated the addition of specialized instructions. The growing importance of energy efficiency led to the development of optimized mobile versions. The emergence of parallel computing drove the adoption of multi-core architectures.

The Pentium illustrates a remarkable form of longevity in the computer industry. Created in 1993, the brand endured for three decades, adapting to technological shifts and market needs, until Intel began phasing it out in 2023 in favor of the plain “Intel Processor” label.

Top

Lua

In 1993, three Brazilian researchers from the computer graphics technology group at the Pontifical Catholic University of Rio de Janeiro, Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes, were working on computer projects for Petrobras, the national oil company. They had no idea they were about to create one of the most influential scripting languages in video game history.

Between 1977 and 1992, a strict protectionist policy made importing foreign software difficult in Brazil. Local companies had no choice but to develop their own tools. This is how Tecgraf had designed two specialized languages: DEL for data entry, and SOL for generating lithological reports intended for Petrobras. These two precursors contained the foundations of what would become Lua.

The first version of the language merged the capabilities of DEL and SOL into a more general approach. The name “Lua,” which means “moon” in Portuguese, echoes SOL (“sun”), thus creating a poetic continuity with the earlier work. The creators first chose a restrictive license, limiting commercial use, before gradually adopting a more open philosophy starting with version 2.1, culminating in the MIT license with Lua 5.0 in 2003.

Simplicity forms the heart of Lua. The language relies on a few fundamental concepts: tables (associative arrays), functions, and coroutines. This economy of means translates into a remarkably compact implementation of approximately 17,000 lines of C code, capable of running on nearly all platforms, from microcontrollers to supercomputers. But it is primarily its ability to integrate with other languages, specifically C, that constitutes its strength. The interface between Lua and C, called the C API and a central element of the language, transforms Lua into an excellent extension tool for existing applications.
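
Embedding through the C API takes only a handful of calls. In this minimal sketch, a host program creates a Lua state, runs a chunk of configuration code, and reads a value back (the config table is an invented example):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    /* Embedding Lua through its C API: create a state, run a chunk of
       Lua, then pull a result back across the stack. */
    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        luaL_dostring(L, "config = { width = 640, height = 480 }");
        lua_getglobal(L, "config");           /* push the table */
        lua_getfield(L, -1, "width");         /* push config.width */
        printf("width = %d\n", (int)lua_tointeger(L, -1));
        lua_close(L);
        return 0;
    }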

Successive versions mark significant technical milestones. Version 2.1 from 1995 introduced semantic extensibility mechanisms, giving programmers the ability to adapt the language’s behavior. Version 3.0 from 1997 unified C and Lua functions into a single type. Version 4.0 from 2000 completely redesigned the C API to make it more consistent and reentrant. In 2003, Lua 5.0 revolutionized the internal architecture with the introduction of full lexical scoping, coroutines, and a register-based virtual machine, a remarkable innovation for a scripting language at that time.

The video game industry discovered Lua in 1998 when LucasArts used it for Grim Fandango. This pioneering adoption triggered a movement that would never stop. World of Warcraft, Angry Birds, The Sims, and hundreds of other titles integrated Lua to manage their game logic. The reasons for this success? Easy integration, exceptional lightness, satisfactory execution speed, and an accessible learning curve.

Beyond games, Lua established itself in unexpected domains. Adobe integrated it into Photoshop Lightroom, where over 40% of the code relies on this language. It can be found in network routers, digital televisions, scientific instruments, and all sorts of embedded applications. This versatility stems from a particular design philosophy: providing mechanisms rather than imposing policies. Instead of dictating a way to program, Lua offers general tools allowing developers to build their own solutions, whether in procedural, functional, object-oriented, or data-oriented programming.

The language’s development reflects a coherent vision maintained by a stable team since its inception. The creators systematically favor simplicity and consistency over the accumulation of features. Each new version undergoes rigorous testing, with alpha versions already displaying great stability and beta versions being virtually final. This caution guarantees the language’s reliability.

The Lua community is primarily organized around a mailing list created in 1997. This technical forum maintains a high level of discussion while remaining open to beginners. The 2003 publication of the book Programming in Lua greatly contributed to the language’s dissemination and the training of new users.

Lua’s internal architecture testifies to this pursuit of minimalism. The compiler operates in a single pass without intermediate representation, thus efficiently processing large data files. This characteristic proves valuable in the video game domain, where Lua often serves to describe game resources.

The history of Lua demonstrates that a project born from specific local constraints can become a universal tool. Its success rests on clear design principles, rigorous implementation, and a stable development team that never lost sight of the language’s original qualities: simplicity and efficiency. What was meant to be a temporary solution to a Brazilian problem became a worldwide standard, proving that the best ideas sometimes emerge from the most constraining situations.


R

In the early 1990s, two statisticians at the University of Auckland, Ross Ihaka and Robert Gentleman, were seeking a statistical environment for their teaching laboratory on Macintosh. Their reading of Abelson and Sussman’s Structure and Interpretation of Computer Programs, and their interest in the Scheme language, led them to develop a minimalist interpreter of roughly a thousand lines of C. What began as an experiment would become one of the most widely used tools in statistics and data analysis.

The choice to adopt a syntax close to the S language, created at Bell Labs by John Chambers and his team, was natural. This decision ensured a degree of familiarity for statisticians accustomed to S, while leaving freedom to innovate technically. R retained features inherited from Scheme, notably lexical scoping, which allowed functions to access variables defined where the function was created, a characteristic that distinguished R from its contemporaries. Its memory management relied on a fixed-size workspace with garbage collection, which limited paging problems.

In August 1993, Ihaka and Gentleman uploaded their first binaries to StatLib and announced their work on the s-news mailing list. Martin Mächler from ETH Zurich, intrigued by the project, encouraged them to release the source code under the GNU GPL license. This suggestion was initially met with caution, as the two researchers hesitated to fully open their code. Yet they took the leap in June 1995. This decision radically transformed the nature of the project: from a closed development between two collaborators, R became an international collaborative initiative.

The creation of automated mailing lists at ETH Zurich in 1996 accelerated external contributions. Bug reports, suggestions, and patches flowed in. The language progressively gained new functionalities. Faced with the scale of contributions, a larger group of core developers formed in 1997, establishing an organizational structure that endures.

The release of version 1.0.0 in February 2000 marked an important milestone, signaling that the language was considered stable enough for production use. In 2003, members of the development team created the R Foundation for Statistical Computing, a non-profit organization based in Vienna. The foundation assigned itself three objectives: supporting the continued development of R, providing a reference point for interactions with the community, and managing the copyright of the software and its documentation.

R established itself as a comprehensive statistical tool with extensive capabilities for data manipulation, computation, and graphical visualization. Its modular architecture allowed the addition of functionalities through packages, constantly enriching its application possibilities. The system integrated powerful operators for matrix calculations, a collection of statistical analysis tools, and sophisticated graphical functionalities. The documentation adopted a format close to LaTeX, ensuring comprehensive documentation accessible both online and in print. This approach, combined with the availability of source code, gave users the means to understand function behavior.

R’s development model illustrates the advantages of free software in the scientific domain. International collaboration among developers, peer review of source code, and thorough testing in varied real-world situations contributed to the software’s robustness. A user community estimated at tens or even hundreds of thousands of people multiplied code checks and improvements, further reinforcing this solidity.

The development cycle follows a regular rhythm with major versions published annually since 2013. Each version undergoes rigorous testing, including alpha, beta, and release candidate phases, concerning both source code and precompiled binary versions for different platforms. The team maintains a version management system based on Subversion, with distinct branches for the stable version and the development version. Bug fixes are integrated into the stable branch, while significant new functionalities are developed in the development branch.

R’s impact on statistical research and data analysis extends beyond the academic framework. Its adoption in industry, particularly in the finance, pharmaceutical research, and big data analysis sectors, testifies to its technical maturity. The availability of interfaces with other languages such as C, C++, and Fortran extends its capabilities to intensive computing.

R’s legacy lies in its combination of a truly functional programming language with a comprehensive statistical environment. This association gives statisticians the means to develop and test new analytical methods while offering end users a practical and extensible tool. The project’s sustainability rests on a distributed infrastructure, including a worldwide network of CRAN mirrors (Comprehensive R Archive Network) that ensures resource availability. This network distributes the core software as well as thousands of complementary packages developed by the community.


DHCP

TCP/IP networks in the 1980s remained modest and their configuration was entirely static. Administrators manually assigned an IP address to each machine, which stored this information in its secondary storage. Any modification required direct intervention at the console, typically followed by a system reboot. This hands-on approach suited the infrastructures of the time, but it would soon show its limitations.

Network growth and the arrival of affordable workstations without secondary storage disrupted this organization. It became urgent to centralize the administration of the links between IP addresses and machines. The RARP (Reverse Address Resolution Protocol) protocol emerged as the first answer: a machine connected to a network segment could now discover its IP address and initiate TCP/IP communications normally. Meanwhile, BOOTP (Bootstrap Protocol) facilitated the configuration of diskless stations by retrieving all the TCP/IP parameters and system data needed for startup. The introduction of BOOTP relay agents made it possible to cross the boundaries of a single network segment. BOOTP already included an extension mechanism, a 64-byte vendor-specific area at the end of the message reserved for additional data, an idea that DHCP would adopt and generalize.

RFC 1531 defined DHCP as a standard in October 1993. This extension of BOOTP corrected two major weaknesses: the requirement for manual intervention to add configuration information for each client, and the inability to reuse IP addresses. The protocol gained popularity, which led to successive clarifications. In 1997, RFC 2131 became the reference for IPv4 networks, a status it still holds today.
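
The continuity with BOOTP is visible in the wire format itself, which DHCP retained. The sketch below is an illustrative C rendering of the message layout shared by RFC 951 and RFC 2131; the field names follow the RFCs, and the fixed-size options buffer stands in for what is, on the wire, a variable-length field:

    #include <stdint.h>

    /* Illustrative rendering of the BOOTP/DHCP message layout
       (after RFC 951 and RFC 2131); not taken from any implementation. */
    struct dhcp_message {
        uint8_t  op;            /* 1 = BOOTREQUEST, 2 = BOOTREPLY          */
        uint8_t  htype;         /* hardware type, 1 = Ethernet             */
        uint8_t  hlen;          /* hardware address length, 6 for Ethernet */
        uint8_t  hops;          /* incremented by relay agents             */
        uint32_t xid;           /* transaction ID chosen by the client     */
        uint16_t secs;          /* seconds since the client began          */
        uint16_t flags;         /* broadcast flag, added by DHCP           */
        uint32_t ciaddr;        /* client address, if already known        */
        uint32_t yiaddr;        /* "your" address: the one being assigned  */
        uint32_t siaddr;        /* next server in the bootstrap sequence   */
        uint32_t giaddr;        /* relay agent address                     */
        uint8_t  chaddr[16];    /* client hardware (MAC) address           */
        uint8_t  sname[64];     /* optional server host name               */
        uint8_t  file[128];     /* boot file name for diskless stations    */
        uint8_t  options[312];  /* BOOTP's vendor area, grown into a
                                   variable-length options field: magic
                                   cookie 99.130.83.99, then TLV options
                                   such as 53 (DHCP message type); 312
                                   bytes is the minimum a client accepts   */
    };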

The arrival of IPv6 required the development of DHCPv6, documented in RFC 3315. This was not a simple transposition of DHCPv4 to IPv6 addresses, but a substantially different protocol. The most notable change concerned client identification: DHCPv4 relies on the MAC address, while DHCPv6 introduces the DUID (DHCP Unique Identifier). The design of DHCPv6 abandons the principle of a single address per device, allowing devices to request multiple addresses. RFC 3633 added prefix delegation, a novel functionality with no equivalent in DHCPv4. RFC 3736 extended the protocol’s capabilities to support configuration of clients using stateless address autoconfiguration.

The Internet Systems Consortium (ISC) played a decisive role in the development of DHCP. The organization maintains two major systems: ISC DHCP and Kea. Ted Lemon and Vixie Enterprises wrote the first implementation, ISC DHCP, as a reference for the new protocol. Version 1.0 was released in June 1998, followed a year later by version 2.0. Version 3.0, published in 2001, integrated support for the IETF failover standard and asynchronous DDNS updates. Version 4.0, in 2007, brought IPv6 support.

A dedicated engineering team has been working on ISC DHCP since 2004. Ted Lemon and Shawn Routhier, former ISC employees, contributed to the project for many years. Thomas Markwalder has been the primary maintainer since 2016, while Francis Dupont has been actively involved in maintaining the software since 2007. The community has enriched the project with an LDAP lease storage system and a lease display script.

Kea represents an entirely new implementation, intended to replace the aging ISC DHCP. Initially designed within the BIND 10 application framework to support multiple DNS and DHCP applications, the project refocused on DHCP after DNS development was discontinued in 2014. Tomek Mrugalski and Marcin Siodelski led the initial development of Kea, which stands out for its modern REST management interface and modular architecture. Unlike ISC DHCP, Kea separates the DHCPv4, DHCPv6 and dynamic DNS daemons, and offers optional libraries to extend the core DHCP server functionality.

The DHCP community has organized itself around complementary tools. The perfdhcp software, distributed with Kea, evaluates DHCP server performance by generating heavy traffic from multiple simulated clients. It tests both IPv4 and IPv6 servers, providing statistics on response times and lost requests. The Anterius project, part of ISC’s Google Summer of Code program, demonstrated the possibility of developing a lightweight management dashboard for Kea. More recently, the Stork project, launched in 2020, offers a robust and extensible web interface, integrating with the Prometheus time-series database and the Grafana visualization tool.

DHCP illustrates the adaptability of network protocols to the changing needs of computing. From a simple automatic configuration tool, it has evolved into a sophisticated system managing dynamic address allocation in modern networks. Its continued development, driven by an active community, ensures its lasting relevance in the face of contemporary network infrastructure challenges.


NCSA Mosaic

NCSA Mosaic arrived in 1993 and permanently transformed how people access resources on the Internet. The story begins a year earlier, when Marc Andreessen, a student at the University of Illinois, spent more time at the National Center for Supercomputing Applications than in lecture halls. The timing was no coincidence: supercomputing centers were undergoing a major transformation. Cray computers, having become prohibitively expensive compared to increasingly powerful microprocessors, were giving way to computer networks connecting researchers and educators.

The Internet at that time resembled a patchwork of disparate protocols and services. FTP transferred files, Gopher organized menus, WAIS searched for information. Tim Berners-Lee had just created HTTP and HTML at CERN, but the available tools didn’t really take advantage of these innovations. A few pioneering browsers like Erwise or ViolaWWW offered interesting features but never progressed beyond prototypes confined to specific platforms.

Andreessen and his colleague Eric Bina developed the alpha version of Mosaic during Christmas break. Their browser stood out from the start: it displayed images directly within the text, radically changing the experience. The intuitive graphical interface enabled navigation through simple clicks on hypertext links. The takeoff was meteoric. From twelve users at launch in early 1993, Mosaic had hundreds of thousands by mid-year.

This massive adoption stemmed from concrete technical reasons. The browser ran on UNIX, Windows, and Macintosh, handled the various Internet protocols, and installed easily thanks to a single executable. Internet users appreciated the bookmarks that saved their favorite pages and the browsing history. Image loading remained dependent on the modest bandwidth of that period, but the display speed was adequate.

The success prompted the NCSA to grant licenses for the source code. Spyglass became the primary distributor and supplied a version to Microsoft, which would serve as the starting point for Internet Explorer. Andreessen left the NCSA in late 1993 to start Mosaic Communications with Jim Clark, a company later renamed Netscape. Their commercial strategy was striking: distribute the browser free to individuals and educational institutions, charge businesses.

The impact on the Internet was dramatic. Web traffic jumped from 1.5% to 23.9% of total volume on NSFNet in two years. The number of accessible servers exploded: around fifty in January 1993, over 1,500 in June 1994. This growth raised governance questions. The NSF then entrusted domain name management to Network Solutions Inc., creating a commercial monopoly that would generate considerable controversy.

The Mosaic browser established the graphical interface standards adopted by all its successors: address bar, navigation buttons, integrated image display. It popularized hypertext navigation among the general public and inspired the freemium business model adopted by so many online services. The issues it exposed around security, later addressed in part by the SSL protocol, and around privacy protection remain highly relevant today.

In computing history, Mosaic marks the moment when the Internet transitioned from an academic tool to a mass medium. Its ease of use and technical innovations made the modern Web possible.


Secure Sockets Layer

When the World Wide Web opened to the general public in the early 1990s, no one truly grasped the security challenges that would follow. By 1994, however, Netscape Communications understood that Internet communications needed protection. The company designed SSL (Secure Sockets Layer) version 1.0, a protocol meant to secure communications between web browsers and servers; deemed too flawed, this first version was never released publicly.

The underlying idea was nothing new: back in 1978, Loren Kohnfelder had already proposed using digital certificates to guarantee the authenticity of public keys. This theoretical insight would serve as the foundation for SSL, which combines asymmetric and symmetric cryptography to ensure confidentiality and integrity of transmitted data.

The first public version, SSL 2.0, was released in 1995. But it came with a host of flaws: message authentication relied solely on MD5, the keys used for authentication and encryption were identical, and TCP connection closures exposed the protocol to truncation attacks. Worse still, nothing truly protected the initial negotiation against man-in-the-middle attacks.

Netscape responded quickly and released SSL 3.0 in 1996. This new iteration significantly improved key generation from the master secret. The protocol adopted a preliminary version of HMAC for message authentication and now mandated support for DH/DSS algorithms and Triple-DES.

Three years later, the IETF took over and standardized the protocol under the name TLS (Transport Layer Security) 1.0. Netscape’s proprietary technology became an open standard. TLS 1.0 was compatible with SSL 3.0 but strengthened security through complete use of HMAC.

Versions then followed at a steady pace: TLS 1.1 in 2006, TLS 1.2 in 2008, then TLS 1.3 in 2018. Each iteration abandoned obsolete mechanisms and corrected vulnerabilities discovered in the meantime. SSL 2.0 was officially deprecated, SSL 3.0 followed suit. From 2015 onward, TLS 1.0 no longer sufficed for systems processing banking data.

Behind SSL and TLS lies a complex public key infrastructure (PKI). Certificate authorities (CAs) issue X.509 certificates to entities operating web servers with specific DNS names. These certificates are typically signed by intermediate CAs, creating a chain of trust that traces back to a root present in certificate stores of operating systems or browsers.

Certificate validation by CAs has evolved over time. Domain validation (DV) simply verifies authority over a domain name. Organization validation (OV) authenticates the entity requesting the certificate. Extended validation (EV), introduced later, requires CAs to follow a strict protocol to verify the applicant’s identity.

SSL and TLS adoption remained sluggish for a long time, until Let’s Encrypt arrived in 2015. This automated and free certificate authority removed the financial barriers that had hindered certificate acquisition. Google and other web giants began favoring secure sites in their search results, which significantly accelerated protocol adoption.

SSL and TLS operation follows four main stages. The session begins with negotiation where client and server exchange their cryptographic capabilities. Next comes authentication, typically from server to client only. The third phase establishes a shared key via asymmetric cryptography. Finally, the session uses this key to encrypt exchanged data with a symmetric algorithm.
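
A modern TLS library hides these four stages behind a handful of calls. The sketch below, a minimal client written against the OpenSSL 1.1+ API with a placeholder hostname, opens a TCP connection, lets SSL_connect() perform negotiation, server authentication, and key establishment, then sends a request under the resulting symmetric encryption:

    /* Minimal TLS client sketch using the OpenSSL 1.1+ API (illustrative
       only; host and request are placeholders).
       Build with something like: cc tls_client.c -lssl -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int main(void) {
        const char *host = "example.com";

        /* TLS sits on top of an ordinary TCP connection. */
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "443", &hints, &res) != 0) return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
        freeaddrinfo(res);

        /* Stages 1 to 3 (negotiation, server authentication, key
           establishment) all happen inside SSL_connect(). */
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_default_verify_paths(ctx);  /* trust the system CA store */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);
        SSL_set_tlsext_host_name(ssl, host);    /* SNI, for shared servers */
        if (SSL_connect(ssl) != 1) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        printf("negotiated %s, cipher %s\n",
               SSL_get_version(ssl), SSL_get_cipher(ssl));

        /* Stage 4: application data now flows under the symmetric key. */
        const char *req = "HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n";
        SSL_write(ssl, req, (int)strlen(req));

        SSL_shutdown(ssl);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        close(fd);
        return 0;
    }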

Web browsers have gradually integrated visual indicators to signal a secure connection. The padlock, which became the universal symbol of SSL and TLS security, appeared in the earliest versions of Netscape Navigator. Modern browsers display these indicators in the address bar, with color codes and explicit messages that inform users about the security level.

Early SSL applications primarily concerned e-commerce and online banking services. The protocol then expanded to secure email, virtual private networks, and many other services requiring confidential communications.

The evolution of computer threats has led to enriching TLS with new features. The HSTS (HTTP Strict Transport Security) security policy mechanism now forces the use of the secure protocol. Certificate Transparency promotes the detection of fraudulent certificate issuance.

The history of SSL and TLS clearly shows that in security matters, nothing is ever guaranteed. Successive versions of the protocol have had to continuously strengthen communication protection while maintaining compatibility with existing systems. This technology has established itself as the de facto standard for securing Internet communications, contributing to the growth of e-commerce and online services that demand confidentiality.


Microsoft Windows NT 3.51

Microsoft launched Windows NT 3.1 (for “New Technology”) in July 1993. This first version inaugurated a family of operating systems that would reshape the professional computing landscape. The platform targeted Intel x86 and RISC processors. IDC analysts at the time defined an advanced operating system by its 32-bit APIs, preemptive multitasking managing processes and threads, integrated networking, and demand-paged virtual memory.

Feedback on Windows NT 3.1 pushed Microsoft to refine the system, first with version 3.5 in 1994, then with version 3.51, released in 1995. Two variants appeared: Windows NT Workstation and Windows NT Server. The former sought maximum responsiveness for interactive applications, while the latter optimized network performance. This choice reflected a clear segmentation strategy between workstations and servers.

Windows NT Workstation 3.51 reduced its memory footprint by 4 to 8 MB compared to its predecessor. 16-bit applications gained 25 to 50% performance on typical desktop configurations. On RISC systems (Alpha AXP and MIPS), Intel emulation for Windows 16-bit and MS-DOS applications received substantial improvements.

Windows 16-bit applications could now run in separate DOS virtual machines (VDMs). A crashing application no longer compromised the stability of other programs. Integration mechanisms like DDE (Dynamic Data Exchange) and OLE (Object Linking and Embedding) continued to function across VDMs.

Microsoft enriched Windows NT 3.51 with OpenGL, a 3D graphics library initially developed by Silicon Graphics. A committee bringing together Digital Equipment Corporation, IBM, Intel, and Microsoft validated this technology. CAD, industrial design, and scientific analysis applications leveraged these new 3D capabilities.

Network connectivity improved significantly. The system integrated a NetWare-compatible redirector that facilitated access to files and printers on Novell servers via IPX/SPX. The TCP/IP stack doubled its performance. PPP and SLIP enabled TCP/IP connections over asynchronous lines, attracting the UNIX and Internet communities.

Security was reinforced with account lockout after several failed login attempts. This protection blocked brute force attacks on passwords. NTFS gained per-file and per-directory compression, reducing data size by 40 to 50% depending on their nature.

Microsoft improved reliability. When a fatal system error occurred, Windows NT 3.51 automatically saved the memory state to a debug file and restarted. This approach, borrowed from UNIX systems, maximized the availability of network-connected machines.

The system targeted four categories of users. First, enterprises sought its stability to limit the costs of frequent restarts: with 2,000 workstations each restarting 4 times per month for 5 minutes, an organization lost about 667 hours of productivity per month, or 8,000 hours per year. Developers appreciated its robustness for creating Win16 and Win32 applications. Technical users (engineers, scientists, statisticians) exploited its performance for intensive calculations, often ported from UNIX. Computing enthusiasts adopted its innovations without losing productivity.

Peripheral management extended to PCMCIA cards for high-end laptops. The system now handled SCSI scanners and HP plotters. Absolute input mode facilitated the use of digitizers and touch screens. Windows NT 3.51 also integrated drivers for Cinepak and Indeo video formats.

The user interface adopted Windows 95’s common controls. File Manager, Print Manager, and other system applications inherited a modernized presentation with tooltips. The login sequence was customizable, allowing, for example, authentication with bank smart cards.

Microsoft offered different technical support formulas, including telephone assistance available 24 hours a day for critical incidents. Enterprises could choose different service levels, from basic support to Premium contracts with guaranteed response time. A network of authorized centers completed the system.

Administration was simplified with five backup modes: normal (complete copy), copy (without marking), incremental (modified files), differential (changes since the last complete backup), and daily (same-day modifications). The system documented each backup with the tape name, date, operator identity, and sequence number.

Windows NT 3.51 marked an important milestone in Microsoft systems history. Its solid architecture, performance, and advanced features attracted enterprises and demanding users. The system established technical foundations that would influence future versions of Windows, demonstrating Microsoft’s ability to finally design reliable professional systems.

Apple Power Macintosh

The announcement of an alliance between Apple, IBM, and Motorola in May 1991 would have made any computer industry observer smile. These three companies, two of which were fierce competitors in the personal computer market, had nevertheless decided to collaborate. IBM was Apple’s historical enemy, the one against which the company with the apple logo had built itself since its beginnings.

This unlikely union had its roots in Apple’s growing difficulties with its Motorola 68000 processors. Intel was steadily eating away at market share with its x86 chips, which were less technically elegant in the eyes of purists but increasingly more powerful. Motorola could no longer keep pace with its American competitor. The new Macintosh computers were falling worryingly behind, and their competitiveness was eroding.

Apple had to respond. The company turned to RISC architecture, an acronym for Reduced Instruction Set Computing. This technology came from IBM, which had experimented with it since the 1970s in its 801 project, conducted in a building of the same name in Yorktown Heights, New York State. John Cocke led a team that was developing a modular processor architecture capable of powering both personal computers and larger machines.

On July 3, 1991, Apple and IBM officially sealed their partnership. Motorola joined them as the third party. This AIM alliance was to give birth to a new generation of RISC processors intended for future Macintosh computers. IBM and Motorola would design the chips together. Apple and IBM would collaborate on an object-oriented operating system. The PowerPC 601 emerged from this collaboration. This first chip was derived from IBM’s RSC processor. Apple wanted computers faster than Intel-based PCs without sacrificing price competitiveness. The Power Macintosh 6100/60 launched the line with this processor clocked at 60 MHz.

The RISC architecture brought tangible benefits. The processors simplified their instruction set to execute programs more efficiently. They included 32 general-purpose registers, compared to only eight for Intel x86 chips. This richness reduced external memory accesses and boosted performance.

The migration to PowerPC represented a technical puzzle for Apple. It was necessary to preserve compatibility with existing applications written for the 68000 architecture. The company achieved this through a software emulator that allowed users to continue running their usual programs on the new Power Macintosh computers with acceptable performance.

Apple streamlined its product line around three models: the 6100, 7100, and 8100 series. This simplification contrasted with the previous proliferation of references that complicated customer choice. The machines were distinguished by their clock speed, indicated after the model number: 60 MHz for the 6100/60, 66 MHz for the 7100/66, 80 MHz for the 8100/80.

The Power Macintosh computers offered advanced features: 16-bit stereo audio, serial ports compatible with LocalTalk and GeoPort, integrated Ethernet connectivity. RAM reached up to 72 MB for the 6100/60 model, 136 MB for the 7100/66, 264 MB for the 8100/80. These characteristics placed them in a strong position against competing PCs.

A curious detail deserves attention in this story. Motorola never manufactured a single PowerPC 601 processor, despite its communications suggesting otherwise. IBM Microelectronics handled production alone. Motorola nevertheless participated in the development of subsequent generations such as the 603, 604, and 620.

This arrangement satisfied everyone. IBM penetrated the personal computer market with its RISC technology, initially designed for its RS/6000 workstations. Apple obtained powerful processors at a good price. Motorola maintained a foothold in the Macintosh ecosystem, although it lost its status as exclusive supplier.

The Power Macintosh computers achieved notable commercial success. They gave Apple the ability to sell machines faster than equivalent PCs in certain applications, particularly graphics. A Macintosh Quadra equipped with a 40 MHz 68040 processor already outperformed a PC with a 66 MHz 486. The new Power Macintosh computers widened the gap.

This technical success did not overturn Apple’s market position. The company maintained a share of approximately 10 to 15%, facing the overwhelming dominance of IBM-compatible PCs. Professional users, guided primarily by budget considerations, remained loyal to the Intel/Windows platform.

The PowerPC architecture continued its evolution for years. The processors gained in power and energy efficiency. Apple used it until 2006, before switching to Intel processors. This transition closed a singular technological chapter, where historical competitors had joined forces to create an alternative to established architectures.


CD-RW

The idea of storing information on an optical disc dates back to the 1950s. Americans such as David Paul Gregg and James Russell envisioned writing with electron beams and reading with laser beams. The principle of the rotating disc and a reflective surface transformed these intuitions into concrete possibilities.

Around 1970, Hollywood became interested in optical discs for distributing films. MCA and Philips joined forces and launched the first consumer laser videodisc, LaserVision, in late 1978. Helium-neon lasers read the molded pits on a 30 cm disc, whose size recalled that of vinyl records. The video information was encoded in the variable spacing between the edges of these pits, arranged in a spiral.

In 1974, Philips laboratories embarked on developing an optical disc audio system. Their engineers relied on existing technologies while betting on future advances in integrated circuits and semiconductor lasers. The project gained momentum, and the team concluded that digital technology would surpass analog recording. The scale of the undertaking prompted Philips to seek a partner: Sony joined the venture in 1979.

The compact disc adopted a diameter of 120 mm, much more compact than LaserVision. The designers knew that contemporary semiconductor lasers delivered approximately 1 mW at 800 nm. They adjusted the optics accordingly. The laser beam passes through a transparent substrate of 1.2 mm before reaching the data engraved on the aluminum layer of the disc.

The first audio CD players arrived in stores in 1982. The technology found its place in the computing world. Philips and Sony announced the CD-ROM in 1984, and the first drives were delivered the following year as peripherals for large systems. International organizations validated the standard in 1985. However, data organization remained proprietary until 1988, when ISO 9660 became the reference.

Research on recordable and rewritable discs accelerated in the 1970s in the United States, Europe, and Japan. The limited power of lasers hindered progress. In France, Thomson-CSF and later Alcatel Thomson Gigadisc experimented with glass discs coated with layers containing malleable gold. Writing created microscopic bumps, but repeated laser readings deformed these features.

Another approach proved more fruitful for WORM (Write Once Read Many) media: coating glass or plastic with polymer dye mixtures. The optics were those of read-only discs, provided peak power of 50 to 100 mW was available. Philips and Sony defined the recordable CD (CD-R) in their 1988 “Orange Book”. By the late 1990s, the required lasers became affordable and CD-R burning spread throughout personal computing.

Two technologies competed for the rewritable disc market: magneto-optical recording and phase change. The former got off to a strong start in the early 1970s. It relied on synchronizing laser heating with magnetic field modulation. Read/write heads were complex, but the media tolerated a virtually unlimited number of cycles.

Phase-change media use a thin layer of chalcogenide alloy, such as AgInSbTe or GeSbTe. This layer is stable in two states: amorphous and microcrystalline, each exhibiting distinct reflectivity. A brief, intense laser pulse melts the layer, which cools in an amorphous state. A longer but less energetic pulse heats the film without melting it, triggering crystallization. From the 1970s to the 1990s, research refined alloy compositions and deposition processes.

In the mid-1990s, Korean and Taiwanese manufacturers entered the market. The first CD-ROM production in Taiwan and Korea began in 1994. Two years later, Taiwanese manufacturers accounted for 12% of global production. In 1996, LG alone represented nearly 10% of worldwide sales. This competition drove down drive prices.

The number of optical drive manufacturers rose from 2 in 1983 to 16 in 1985, climbed to 65 around 1995, then dropped back to 44 in 1999. Newly created specialized companies initially stimulated technical progress. Before 1988, the year of ISO 9660, the fastest drives relied on proprietary formats. After standardization, electronics and computing giants dominated the performance race.

Prices for optical burners plummeted: $15,000 in 1991, $5,000 in 1993, and under $1,000 in 1995. CD-RW established itself firmly in the computing landscape until the arrival of rewritable DVD in the 2000s. The technology left its mark on the history of digital storage.


IBM Aptiva

IBM launched the Aptiva in September 1994. The computer succeeded the PS/1 line in the consumer market. Manufacturers were trying to stand out in a standardized world built around Intel and Windows, the famous “Wintel.” But the task would prove difficult.

The first Aptiva computers received Intel 80486 processors before evolving to Pentiums and AMD chips. IBM manufactured most machines in-house, except for the E series entrusted to Acer. The computers were sold as complete solutions: system unit, monitor, speakers, keyboard, and mouse. The first generation ran on IBM PC DOS 6.3 and Windows 3.1. Pentium versions offered Windows 95, and on certain models a “select-a-system” option that let users choose between PC DOS 7/Windows 3.1 and OS/2 Warp.

On the M, A, C, and S models, IBM integrated an in-house Mwave card to handle both sound and modem functions. This proprietary solution accumulated compatibility and performance problems. IBM eventually abandoned it in favor of standard components, and had to settle a dispute by compensating buyers for the cost of purchasing compatible replacement peripherals.

In 1996 came the S “Stealth” series, which marked an aesthetic departure. IBM abandoned the traditional beige of computers for a black design that broke with convention. Consumer electronics now influenced the appearance of machines. The S series separated the floppy and CD-ROM drives from the main case containing the motherboard. These elements were housed in a thin media console that served as a base for the monitor. A 1.8-meter cable connected everything to the system unit. This architecture freed up workspace: the main case could slip under the desk or into furniture while keeping the drives within reach.

IBM reduced the number of configurations to five models. The entry-level model featured a 166 MHz Pentium for $2,499. The high-end model used a 200 MHz Pentium sold for $3,099, monitor not included (budget between $499 and $799 depending on size). All machines had at least 16 megabytes of RAM, hard drives from 2.5 to 3.2 gigabytes, and 28,800 bits per second modems.

Between 1994 and 2001, IBM released the Aptiva in several series identifiable by a letter: M (Magic), A, C (Courageous), E, L, and S (Stealth). The first models adopted a desktop format (reference 2144) or tower (2168). Production then shifted exclusively to tower format. Many machine types followed: 2134-2138, 2140-2144, 2151-2159, 2161-2168, 2170-2178, 2193-2198, 2255, 2270-2274, 6832, and 6864.

IBM discontinued the Aptiva in 2001 without offering a direct replacement. This decision was part of the company’s withdrawal from the consumer market. Customers were redirected to the NetVista line, rather intended for professionals. This strategic shift reflected the difficulties traditional manufacturers faced with market standardization and the arrival of new players.


Netscape Navigator

In 1994, Jim Clark and Marc Andreessen founded Netscape Communications with a clear vision: to create a universal interface providing access to the Web from any device. Clark had made his fortune with Silicon Graphics, while Andreessen had just graduated from the University of Illinois where he had led the development team for Mosaic, the first browser designed for the general public. Their partnership would revolutionize how the world accesses the Internet.

The first version of Navigator was released in December 1994 and achieved immediate success. Internet users adopted it massively. A year later, Netscape’s market value reached $7 billion. The company wasn’t limited to the browser: it developed server solutions that relied on Internet protocols to run intranets, extranets, and other business applications.

Navigator stood out through its technical advances. Its rendering engine loaded text and images in parallel, whereas competing browsers displayed text first before showing images. A caching system accelerated the loading of regularly consulted pages. Security received particular attention with the integration of the SSL (Secure Sockets Layer) protocol to protect sensitive data; later versions reinforced this protection with anti-spyware and ad-blocking mechanisms.

Navigator 1.1’s interface offered nine ways to browse. Users could directly enter a URL, follow hypertext links, use breadcrumbs to retrace their path, click on shortcut buttons, go back, consult history, manage bookmarks with annotations, preview links, or rely on visual markers. This functional richness made the Web accessible to those with no technical training.

Microsoft quickly understood that Navigator threatened its empire. In 1995, Bill Gates launched Internet Explorer and integrated it free of charge into Windows 95. The strategy paid off. Navigator’s market share, which had approached 90% in early 1996, began to decline. By the end of 1997, it fell below 50%.

Financial losses accumulated. In November 1998, Netscape agreed to be acquired by America Online (AOL) for $4.3 billion. AOL simultaneously concluded an agreement with Sun Microsystems worth $1.25 billion to market Netscape software and take over its divisions.

Netscape released Navigator’s source code in March 1998 under the banner of the Mozilla project. It was the first time a publicly traded company had joined the free software movement. Thousands of people downloaded the code as soon as it went online. Developers from around the world contributed voluntarily to the project, like the young Pavlov from Georgia. Netscape was counting on this community to improve the browser and counter Microsoft. The adopted license was inspired by the GPL (GNU General Public License) while authorizing commercial exploitation.

Navigator’s codebase exploded, growing from a few hundred thousand lines to over 2 million. Engineers worked back-to-back shifts, some sleeping at the office to meet deadlines. Multi-platform compatibility further complicated matters: the code had to run on Windows, Mac OS, and various versions of UNIX.

In 1997, Netscape launched an ambitious overhaul with the Communicator 6.0 project. The goal: restructure the code into modules that were easier to maintain and rewrite certain parts in Java to simplify multi-platform development. In early 1998, the project was abandoned. Java didn’t offer the expected performance. Modularization continued more gradually in versions 4.5 and 5.0.

The merger with AOL in 1999 marked the end of Netscape’s independence. Many executives sold their shares during the final year, but CEO Jim Barksdale increased his stake and converted his securities into AOL shares worth over half a billion dollars at the time of the merger. Several engineers left the ship, disappointed by AOL’s marketing-oriented direction.

Netscape’s legacy remains immense. The company made the Web accessible to the masses by simplifying navigation. Its innovations in security and performance established standards that endure. The Mozilla project inspired numerous open source initiatives and gave birth to Firefox.

The Netscape adventure illustrates how quickly things can shift in the software industry. In less than five years, the startup had become a mere division of a large corporation. This story also demonstrates the strategic importance of web browsers, which have become the universal interface for accessing online services.

In 2003, Navigator’s market share was negligible. AOL maintained development until 2008, when the project ended definitively. The developers joined the Firefox team, which inherited Navigator’s patents. This succession testifies to the durability of Netscape’s innovations in the contemporary Web landscape.


World Wide Web Consortium

In 1994, four years after Tim Berners-Lee created the Web, the World Wide Web Consortium (commonly referred to as W3C) was established. The international organization set itself a mission: to lead the World Wide Web to its full potential through the development of protocols and guidelines that ensure its long-term growth. In the 1990s, the rapid proliferation of Web usage made this structure indispensable.

Three technical elements form the foundations of the Web. The HTTP protocol handles communications, URIs enable universal resource identification, and HTML structures document markup. From 1993 onward, the combination of these technologies triggered explosive expansion. The free and open nature of these technologies largely explains this success: they remain accessible without usage rights, whereas other systems like Gopher failed after the University of Minnesota attempted to impose a paid license.

The W3C adopted an original structure. This international consortium brings together companies, universities, and public institutions. This configuration associates the various Web stakeholders in a neutral forum, away from direct commercial interests. Three institutions host the organization: MIT in the United States, ERCIM in France, and Keio University in Japan. Eighteen regional offices complete the setup and ensure a global presence.

Standards development forms the core of the activity. Technical recommendations follow a rigorous process. Working groups, composed of experts from member organizations, develop proposals. These are then subjected to public comment periods. This method ensures technical quality and adequacy to real needs. The early years produced fundamental standards: XML, CSS style sheets, the DOM document model.

In 2001, the W3C proposed allowing the inclusion of patented technologies in its standards, subject to RAND (Reasonable And Non-Discriminatory) licenses. The developer community, attached to free software principles, strongly opposed this proposal. After months of intense debate, the organization finally adopted a policy that favors royalty-free technologies. The episode illustrates the W3C’s ability to incorporate community feedback.

Areas of intervention expanded over the years. Accessibility is a central concern; recommendations emerged to make the Web usable by everyone, regardless of physical or mental capabilities. Internationalization constitutes another line of work, particularly Web usage in different languages and writing systems. The consortium simultaneously developed standards for adaptation to mobile terminals, anticipating the diversification of access methods.

The Semantic Web represents a central direction of work since the early 2000s. This evolution aims to enrich web content with machine-understandable metadata. The RDF and OWL technologies, standardized by the consortium, describe relationships between information and simplify automated processing. Specialized groups, such as the one dedicated to life sciences and health (HCLS), explore practical applications of these technologies in various domains.

The organization continuously adapts its working methods. Working groups, which initially brought together only member representatives, gradually opened up to invited experts. Public comment periods took on increasing importance and enabled the integration of feedback from a broader community of developers and users. This evolution reflects the desire to maintain a balance between technical expertise and field needs.

The 2010s saw the emergence of new technical initiatives. Native multimedia capabilities introduced by HTML5 reduced dependence on proprietary technologies. Standards for progressive web applications enabled the creation of sophisticated applications directly in the browser. The W3C simultaneously developed recommendations for privacy protection and security, in response to growing concerns in these areas.

The consortium’s governance follows an original model that combines technical direction and broad consultation. Tim Berners-Lee, as director, maintains a strategic guidance role while fostering the emergence of consensus. A permanent team of approximately fifty people ensures activity coordination, while over six hundred experts participate in various working groups.

The production of technical standards is accompanied by significant documentation and education work. The W3C publishes usage guides, organizes workshops, and maintains educational resources. This action contributes to the dissemination of best practices and helps with the adoption of new technologies. Translations of main documents are produced in numerous languages, which strengthens the international accessibility of standards.

The history of the W3C demonstrates that it is possible to collectively manage a global technical infrastructure. Through its operating mode that combines technical expertise, public consultation, and consensus building, the organization maintains coherent Web development while preserving its open and universal character.


Yahoo!

In 1994, two Stanford electrical engineering students, David Filo and Jerry Yang, began compiling a list of websites they enjoyed. Their small personal project, which they initially named “Jerry and David’s Guide to the World Wide Web,” addressed a simple need: finding one’s way around an emerging Web. The name Yahoo! came to them later, an acronym for “Yet Another Hierarchical Officious Oracle,” a phrase that captured the humor and casual attitude of the two founders.

Word-of-mouth did the rest. In April of that year, the directory listed a hundred sites and attracted a thousand visitors per week. Five months later, the numbers surged: 2,000 sites indexed, 50,000 page views per day. Netscape, which had just launched its browser, decided to display a link to Yahoo! on its homepage, and traffic exploded.

Stanford’s servers couldn’t keep up. Marc Andreessen, from Netscape, offered to host the service for free. In March 1995, Yahoo! incorporated and received one million dollars from Sequoia Capital. The two students left the university, set up offices in Mountain View, and hired their first employees.

Online advertising began in August 1995. General Motors and Visa were among the first advertisers. Yahoo! experimented with targeted advertising based on users’ interests, an innovative approach for the time. The site simultaneously added news, weather, stock quotes, and other free content.

The April 1996 IPO turned euphoric. The stock rose 154% on the first day, valuing the company at $848 million. A performance that surpassed Netscape’s a few months earlier. Yahoo! used these funds to expand internationally, first in Japan with Softbank.

Acquisitions followed in rapid succession: Four11 for email, GeoCities for personal webpage hosting, Broadcast.com for multimedia. The group diversified its offerings while maintaining its guiding principle: a free, general-purpose web portal. The Internet bubble propelled it to great heights, with its market capitalization exceeding $125 billion in 2000. Then the bubble burst, advertising revenues collapsed. Most importantly, Yahoo! struggled to adapt. Google dominated with its PageRank algorithm. Facebook redefined social interactions. Turnaround attempts, notably under Marissa Mayer’s leadership between 2012 and 2017, failed to reverse the trend.

Verizon acquired Yahoo!’s Internet operations in 2017 for $4.5 billion, a fraction of its former value. Nevertheless, Yahoo! remains a case study of the early days of commercial Web. The company made the Internet accessible to the general public with a simple portal. It established viable business models, particularly through targeted advertising.

The strategic mistakes speak for themselves. Yahoo! outsourced its web search to Google and later to Microsoft, choosing not to invest heavily in this area. The culture of innovation withered, giving way to an unresponsive bureaucracy. Costly acquisitions often resulted in integration failures.

Some Yahoo! brands survive at Verizon. Yahoo! Finance still has a notable audience. This company’s history illustrates how quickly dominant positions erode in technology. It underscores the imperative of continuous innovation and adaptation to user behavior. Yahoo!’s journey embodies both the promises and pitfalls of commercial Web, the difficulties of building lasting technology companies.

Yahoo! is tied to the history of the Internet. By democratizing the Web in the 1990s, the company participated in spreading this technology. Its model of a free portal funded by advertising inspired numerous services, even though Yahoo! did not maintain its dominance.


AltaVista

In 1995, Paul Flaherty, a researcher at DEC, had an idea during his vacation. He wanted to demonstrate the power of the Alpha processor that his company had just developed. From this intuition would emerge AltaVista, the first true Web search engine. Two DEC engineers, Michael Burrows and Louis Monier, took charge of developing the project.

The infrastructure relied on two evocatively named machines: Scooter and Turbo Vista. The first, equipped with a 20-gigabyte hard drive and one gigabyte of RAM, crawled web pages. The second, with 250 gigabytes of storage and two gigabytes of RAM, stored data and returned results. This configuration represented impressive computing power at the time.

AltaVista brought a major technical breakthrough. For the first time, a search engine indexed the full text of web pages. Its predecessors limited themselves to titles and headers. The interface, deliberately minimalist, nonetheless offered advanced features: keyword search, exact phrase search, and the ability to restrict results to a particular domain.

Success was immediate. From launch, 300,000 queries poured in each day. Two years later, this figure reached 80 million. In 1998, a study conducted among professional researchers revealed that AltaVista was their preferred search engine at 45%, far ahead of HotBot which garnered 20% of votes.

The speed of query processing, made possible by the Alpha processor architecture, made all the difference. The search engine offered innovative features such as multilingual search and automatic translation through its Babel Fish service. Its database contained more than 16 million web pages in 1996, a figure that kept growing.

But AltaVista’s story illustrates how a technological lead can melt away for lack of strategic vision. In 1998, Compaq acquired DEC. The following year, under Rod Schrock’s leadership, AltaVista abandoned its streamlined interface to transform into a web portal. The company wanted to compete with Yahoo!. This decision, however, diluted AltaVista’s core expertise in information retrieval.

In 1999, Compaq sold 83% of AltaVista’s shares to CMGI, owner of the Lycos search engine. CMGI prepared an IPO, but the bursting of the dot-com bubble forced the company to abandon it. Meanwhile, Google, founded in 1998, was gaining ground thanks to its exclusive focus on search and its PageRank algorithm.

In 2003, Overture Services acquired AltaVista, before being bought by Yahoo!. The pioneering search engine found itself integrated into the Yahoo! platform, losing its own identity. On July 8, 2013, Yahoo! ended AltaVista’s existence, redirecting its domain to its own search engine.

AltaVista’s legacy is nonetheless significant in Web history. Its technical architecture influenced subsequent search engines. It established the standard for large-scale full-text search. Its simple and effective interface inspired numerous competitors, particularly Google.

AltaVista’s failure can be explained by several factors. DEC initially considered the search engine merely as a technology demonstrator rather than a commercial opportunity. Frequent ownership changes prevented any coherent long-term strategy. The transformation into a web portal betrayed a misunderstanding of user expectations, who favored simplicity and search efficiency.

On the technical level, however, AltaVista had everything right. Its crawler, named Scooter, indexed millions of pages. The search engine offered advanced search operators such as NEAR to measure term proximity. It was the first to analyze web page metadata and to use linguistic analysis to improve results.

AltaVista’s decline coincided with Google’s rise, which refined concepts introduced by its predecessor. Google added the notion of page popularity to its ranking algorithm, whereas AltaVista focused mainly on textual relevance. This evolution better addressed the needs of users confronted with a rapidly expanding web.

AltaVista’s disappearance marked the end of a pioneering Web era, where pure technical innovation was no longer sufficient to guarantee success against competitors who better mastered commercial and marketing aspects.


Apache Web Server

The Apache web server was born in 1995, but its history begins two years earlier with NCSA HTTPd. In 1993, Rob McCool developed the first portable HTTP server at the National Center for Supercomputing Applications at the University of Illinois. Distributed free of charge with its source code, the software enjoyed immediate success. Website administrators adopted it en masse, preferring this free solution to Netscape’s commercial servers, which cost several thousand dollars. NCSA HTTPd was then the most widely deployed server on the Internet.

But in 1994, the original team left NCSA to join Netscape, McCool included. Development came to an abrupt halt. Users found themselves with frozen software: functional, certainly, but far from perfect. They had grown accustomed to fixing bugs themselves, adding features according to their needs, and then sharing these modifications on mailing lists in the form of patches. Brian Behlendorf was one of these active contributors; he notably created a password authentication system for the HotWired website.

Faced with the project’s abandonment, a small group of webmasters decided to take up the torch. In February 1995, eight volunteer programmers including Brian Behlendorf and Roy Fielding created the “new-httpd” mailing list to coordinate their work. Two months later, in April, they released their first version: 0.6.2, built on NCSA HTTPd 1.3 and enriched with patches developed by the community.

The origin of the Apache name remains contested to this day. The foundation long explained that it paid homage to the Apache Native American tribe, known for its endurance and warrior skills. Yet between 1996 and 2001, the project’s FAQ offered another explanation: “a patchy server”, a direct allusion to the numerous patches applied to the original code. Brian Behlendorf has at different times endorsed both versions, suggesting that the two explanations may well coexist.

Apache 1.0 arrived in December 1995. Robert Thau led this complete code rewrite, internally dubbed “Shambhala”. This version introduced a modular architecture that changed everything and simplified the addition of new features. Development became decentralized, innovation came from everywhere in the community, and results followed swiftly. By April 1996, Apache was the most widely used web server on the Internet. It has never relinquished that top position since.

The project drew inspiration from the IETF’s operating principles and its motto: “rough consensus and running code”. The rules remained simple. Anyone could contribute, but only core developers voted on official releases. Entry of new voting members required a nomination and unanimity among existing members. This system preserved the project’s coherence while remaining open.

Apache prevailed through its technical innovations. Virtual hosting, available from the summer of 1995 onward, addressed a pressing need for Internet service providers. This feature could manage multiple websites on a single server, up to 10,000 sites on one machine. Commercial solutions offered nothing comparable. The accessible source code provided another decisive advantage: users modified the software according to their needs, without waiting for a publisher to deign to implement a new feature or fix a bug.

IBM took interest in Apache in 1998. The computing giant was seeking a web server for its WebSphere line. Rather than developing its own solution, it chose Apache, recognizing its technical superiority. IBM understood where the value resided: not in the server itself, but in services and proprietary extensions like e-commerce systems. This collaboration marked the beginning of a lasting relationship. IBM in turn contributed to Apache’s development, enriching the project with its expertise.

The project’s structure took a new step in 1999 with the creation of the Apache Software Foundation, a non-profit organization. This evolution resolved legal and administrative aspects while preserving the project’s independence and collaborative culture. The ASF gradually developed a portfolio of projects, including Tomcat, the reference implementation of the Java Servlet and JavaServer Pages specifications.

The core development team numbered about fifteen people, spread across the United States, Great Britain, Canada, Germany, and Italy. Contrary to stereotypes, these developers were not passionate teenagers but seasoned professionals: PhD students, doctors of computer science, experienced developers, company executives. Core developers produced about 80% of new features, but more than 400 people contributed to the code and over 3,000 reported issues.

Apache dominated the market despite fierce competition. Microsoft integrated its Internet Information Server into Windows NT as early as 1996, but IIS never exceeded a 30% market share over the period. Other players occupied niches: Zeus, for example, served very high-traffic sites. Some disappeared, like Netscape’s server, which had nevertheless reached 10% of the market.

In September 2009, Apache served more than 54% of global websites and more than 66% of the most visited sites. This dominance became self-sustaining: widespread adoption generated feedback and contributions that strengthened the software’s quality and stability, in turn attracting new users.

This success inspired countless other projects and shaped open source development practices.

BeOS

In 1990, Jean-Louis Gassée and Steve Sakoman left Apple to found Be Inc. Their project was to create an entirely new operating system capable of taking advantage of modern machines without dragging along the burden of backward compatibility. Apple was going through a difficult period at the time, with its aging system showing its limitations in the face of hardware advances.

BeOS was born from this desire to start from scratch. Be’s engineers built a system designed primarily for multimedia, without worrying about the usual compromises. Multithreading ran through every layer of the system, with each task divided into multiple execution threads to make the best use of resources. The 64-bit BFS file system incorporated database capabilities and journaling to protect data. Symmetric multiprocessing support was equally refined, and the system’s overall efficiency was striking: observers noted that BeOS felt faster on a Pentium 233 with 64 MB of memory than Mac OS X did on machines eight times more powerful.

The 1995-1996 demonstrations were impressive. BeOS simultaneously ran multiple real-time 3D rendering windows, played video, and browsed the web without faltering, where competing systems struggled. Apple, desperately seeking a modern replacement for its aging system, entered into discussions to acquire Be Inc. in 1996. The negotiations failed over price. Apple turned to NeXT, bringing Steve Jobs back to the company he had created.

Be Inc. continued on its path and targeted the Intel PC market starting in 1998, while keeping its PowerPC version. The system won people over with its elegant interface, responsiveness, and technical innovations like dynamic queries in the file system. Users appreciated the stability, execution speed, and fine-grained file attribute management.

But BeOS failed to break through. Applications were sorely lacking. Developers hesitated to invest time in a platform with few users, and users waited for applications before adopting the system. A vicious circle. Windows dominated the PC market, Mac loyalists stayed on their platform, and space was tight for a newcomer.

Be Inc. attempted a pivot in 1999 toward emerging Internet appliances. Too late. Investors grew weary of funding a project that couldn’t find its audience. The company closed in 2001 and sold its patents to Palm.

The story doesn’t end there completely. Haiku, an open source project, took up the torch and reconstructed BeOS in spirit. Some of the system’s ideas—advanced metadata management or multimedia-oriented architecture—spread elsewhere.

BeOS illustrates the difficulties of a market where technical excellence is not enough. Timing plays its role: BeOS probably arrived too late to disrupt established systems. What remains is the mark of a modern architecture, an attention to performance that continues to inspire. The system shows that by freeing oneself from historical constraints, remarkable results can be achieved.

The lesson concerns the software ecosystem. Without popular applications, a system, however brilliant, withers away. This reality still weighs on current strategies, where building a developer community matters as much as technical prowess.

BeOS’s commercial failure does not erase its technical contribution or the questions it raised about market dynamics.

DVD

In 1994, the industry sought a successor to the CD-ROM. Storage needs were growing, digital video was advancing, and the CD was showing its limitations. Two camps formed around competing technologies: the Super Density disc on one side, the Multimedia CD on the other. Nobody wanted to relive the fratricidal war that VHS and Betamax had fought in the 1980s. The manufacturers eventually agreed on a common standard.

This convergence brought together players who could have been at odds: Japanese consumer electronics manufacturers, Hollywood studios, American computing giants. Together they created a disc the size of a CD but capable of storing seven times more data. The Japanese discovered the first players in November 1996. Americans had to wait until August 1997.

The DVD relies on a simple but effective technical trick. Where the CD consists of a single 1.2 mm layer of polycarbonate, the DVD bonds together two 0.6 mm substrates. This thinner substrate, combined with a laser of shorter wavelength (650 or 635 nanometers versus 780 for the CD), lets the tracks tighten and the pits shrink. A single-layer DVD contains 4.7 GB. The dual-layer version reaches 8.5 GB per side. A double-sided disc stores 17 GB in theory, but this format remains marginal.

Hollywood embraced the DVD enthusiastically. MPEG-2 compression fit a two-hour feature film on a single layer, with an image that surpassed VHS tapes and LaserDiscs. The studios appreciated the interactive menus, multiple audio tracks, and bonus features. Viewers chose their language, activated subtitles, discovered deleted scenes. It was no longer just a film you watched: it was an experience you explored.

Manufacturers learned from the past. Every DVD player could read existing CDs. This backward compatibility reassured consumers, who kept their collections of audio CDs and game CD-ROMs. The transition happened painlessly, without any obligation to replace everything. Microsoft integrated DVD support into Windows 98, standardizing its use in the PC world.

Computing benefited from this momentum. DVD-ROM drives progressively replaced their predecessors in computer towers. Video game and multimedia software developers suddenly had generous space for their creations. Myst and Riven had to make do with multiple CDs; their successors would fit on a single DVD.

The market grew more complicated with the arrival of recordable formats. DVD-R appeared in 1997 for write-once use. Then came DVD-RAM, DVD-RW, and DVD+RW, each with its supporters and technical specificities. This profusion created confusion but also stimulated innovation and drove prices down. Consumers sometimes got lost, but multi-standard burners eventually prevailed.

Content protection obsessed the studios. Matsushita and Toshiba developed the Content Scrambling System, an encryption scheme meant to prevent pirated copies. CSS did not impact playback performance but gave film producers the guarantee they demanded. Without this protection, many of them would likely have been reluctant to adopt the format.

The sales figures speak for themselves. Between 1997 and 2002, more than 27 million players found buyers in the United States. The entry price plummeted: more than $1,000 in 1997, less than $200 in 2001. This lightning-fast democratization sealed the fate of VHS. Video stores transformed their shelves, households renewed their equipment.

The DVD attempted a few ventures outside its natural territory. DVD-Audio, with mixed success, sought to dethrone the CD with superior sound quality and multimedia functions. DVDPlus, and the similar American DualDisc, offered a DVD side and a CD side on the same disc. These variations enriched the ecosystem without really taking hold, as the DVD remained primarily associated with home video.

Beyond entertainment, the format found its place in education and professional training. Its ability to combine high-quality video, multichannel sound, and interactivity made it a valued pedagogical tool. Libraries used it to digitize and distribute their audiovisual collections. Companies distributed their product catalogs on interactive DVDs.

From the early 2000s, laboratories explored the future. Holographic DVD promised 100 GB per disc. This research prepared the arrival of Blu-ray and high definition. The DVD accomplished its mission: democratizing digital video, standardizing practices, preparing the ground for subsequent generations.

GPU

In the 1980s, graphics processing remained the domain of rudimentary VGA controllers. These components merely received image data, organized it, and transmitted it to a monitor. Nothing more.

The IBM Professional Graphics Controller was released in 1984. This card integrated an Intel 8088 microprocessor specifically dedicated to graphics tasks, thereby freeing up the main processor. Priced at $5,500 and hampered by narrow compatibility, it saw only limited distribution. But the idea took root: entrusting graphics calculations to a specialized processor.

The OpenGL standardization by SGI stimulated an entire ecosystem of hardware solutions dedicated to graphics rendering. The following decade accelerated the movement. SGI introduced the RealityEngine in 1993, whose architecture established the foundations of the modern graphics pipeline. This pipeline principle, which transforms 3D coordinates into 2D pixels through successive stages, still structures GPU operation today.

Then came the 3dfx Voodoo in 1996. This consumer card dedicated to 3D acceleration revolutionized video gaming on personal computers. One million transistors, 4 MB of 64-bit DRAM memory, a frequency of 50 MHz: these specifications delivered unprecedented performance at the time.

NVIDIA made a strong impact in 1999 with the GeForce 256. It was the first card marketed under the GPU name. This terminological choice was far from trivial: it emphasized the complete integration of graphics processing functions on a single chip, including geometric transformation and lighting. Its 23 million transistors and 32 MB of memory inaugurated the modern era of the graphics processor.

The early 2000s brought programmability. In 2001, on NVIDIA’s GeForce 3, developers could program certain pipeline stages via shaders. This newfound flexibility opened horizons for visual effects and 3D rendering.

The year 2002 saw the arrival of a generation of fully programmable GPUs with the GeForce FX and the Radeon 9700. Developers could now program operations per pixel and per vertex, precisely controlling graphics rendering. The GeForce FX, with its 80 million transistors and 128 MB of DDR memory, testified to the growing complexity of these components.

NVIDIA’s GeForce 8800 in 2006 marked a turning point. Its unified architecture replaced specialized processing units with versatile processors capable of executing different types of graphics calculations. CUDA accompanied this evolution: this programming environment authorized GPU usage for general calculations, well beyond simple graphics.
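
A minimal sketch gives the flavor of the shift: the same chip that shaded pixels could now run arbitrary data-parallel code, one thread per array element. The kernel below is illustrative, not drawn from any particular application.

    // CUDA kernel: each GPU thread adds one pair of elements.
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    // Host side: launch enough 256-thread blocks to cover n elements, e.g.:
    // add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);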

The Fermi architecture, which NVIDIA unveiled in 2009, confirmed this orientation toward general-purpose computing. The first GPU architecture designed for scientific computing, it integrated a cache memory hierarchy, ECC error correction, and improved double-precision performance. GPUs moved closer to conventional processors while retaining their massively parallel processing power.

This convergence trend could be observed everywhere. AMD launched its Fusion line in 2011, which integrated GPU and CPU on a single chip. Intel developed Larrabee, an architecture combining x86 cores with wide vector units. The convergence between graphics and general-purpose processors accelerated.

Modern GPUs embody the power of parallel processing. Their architecture, the result of continuous evolution from the simple graphics controllers of the 1980s, now runs an extensive range of applications: 3D rendering, scientific computing, artificial intelligence. This versatility, combined with considerable computing power, makes GPUs indispensable components of contemporary computer systems.

Microsoft Internet Explorer

The mid-1990s saw the Web take off. The first mainstream browsers appeared, and Microsoft decided to join the fray. On August 16, 1995, the Redmond company launched Internet Explorer 1.0. This marked the beginning of an adventure that would last more than twenty years and leave its mark on Internet history.

Internet Explorer was not born in Microsoft’s laboratories. The company took a shortcut by licensing the source code of Spyglass Mosaic, as other companies did at the time. Spyglass’s browser was itself based on the work of the National Center for Supercomputing Applications, which had created NCSA Mosaic, the first graphical browser to achieve widespread success.

Initially, the team assigned to the project was tiny: half a dozen developers at most. Microsoft distributed its browser in several ways. It was included in the Microsoft Plus! pack for Windows 95, a set of extensions and utilities sold separately. Computer manufacturers received an OEM version to preinstall on their machines. Users could install it via the Internet Jumpstart Kit provided with the Plus! pack. An interesting point: it ran on the original version of Windows 95, without any system updates.

A few months later, Microsoft released Internet Explorer 1.5 for Windows NT. This version added basic support for HTML tables, an element that was becoming essential for building Web pages.

Microsoft’s strategy evolved quickly. The company realized that the Internet was not a passing fad and changed its approach. In subsequent versions of Windows 95, the browser became increasingly integrated with the operating system. This growing integration would trigger controversies that would mark the browser’s history.

Microsoft’s internal documents reveal that the company had not planned to include a browser with Windows 95. In early 1994, the plans mentioned only the addition of basic elements for connecting to the Internet: the TCP/IP stack and support for PPP and SLIP protocols. Steven Sinofsky wrote in June 1994 that no client software like Mosaic or Cello was planned for Windows 95.

The change occurred in stages. In 1995, Microsoft was still hesitating. Should the browser be marketed in the "Plus!" pack or integrated directly into Windows? Internal discussions show questions about the relevance of this integration, particularly due to space constraints on installation diskettes.

Technically, Internet Explorer 1.0 reflected its era. The graphical interface remained simple, the features limited. Users navigated page by page, without tabs or ad blocking. But this matched the needs of 1995 users who were discovering the Web.

Computer manufacturers contributed to the browser’s distribution. Microsoft signed agreements with them to preinstall Internet Explorer on new machines. This approach ensured massive distribution of the software.

The software architecture laid the groundwork for future versions. The code inherited from Spyglass Mosaic was reworked to adapt to Windows. Developers modified the interface and optimized performance for Microsoft’s operating system. Documentation shows, however, that Microsoft considered Internet Explorer as a product distinct from Windows. Internal and external communications presented the browser as a standalone application, with no connection to the operating system.

This first version constituted a milestone in Web history. Microsoft entered a rapidly expanding field with a pragmatic approach: acquiring existing technology rather than developing everything in-house.

Java

In 1991, a team of engineers at Sun Microsystems launched the “Green” project under the leadership of James Gosling. Their ambition was to create a distributed system that would enable electronic devices to communicate with each other in a heterogeneous network. The idea was to offer consumer electronics manufacturers a technology suited to their needs.

The engineers began their work with C++. But the need to adapt code to different processors led them to consider compiler modifications. Despite possible extensions, C++ proved too complex to meet their requirements. This dead end led them to design a new language: Oak, named after the tree that Gosling could see from his office window.

Oak was created to program interactive television control devices and home automation systems. After several months of intense work, the team produced a complete operating system, a toolkit, an interface, an innovative hardware platform, and three custom chips. All of this culminated in the “*7” (Star 7), which Gosling described as a “handheld remote control”.

The Green team, which became FirstPerson Inc., responded to a 1993 call for tenders for a set-top box operating system and video-on-demand technology. Their solution stood out for its technical excellence, but the market went to SGI for reasons that Gosling considered non-technical. FirstPerson persisted in the set-top box sector until 1994, when reality set in: this market was not viable. The company disappeared, and half of its staff joined Sun Interactive to work on digital video data servers.

Everything changed in mid-1994. The Internet and the World Wide Web exploded. The team then decided to reorient Oak toward web applications. The language’s characteristics, particularly its architectural independence and platform neutrality, were remarkably well-suited to this environment.

In January 1995, Oak became Java, a name available for registration. The language transformed into a tool for creating web applications. To demonstrate its potential, the team designed HotJava, a web browser written entirely in Java, capable of executing Java applications (Applets) directly integrated into web pages. Netscape and Microsoft would adopt this functionality. Sun then released the initial development kit (JDK) along with HotJava.

Java’s simplicity was immediately striking. Code could be read and written easily, reducing the risk of errors. The runtime environment automatically handled memory allocation and garbage collection, eliminating a major source of bugs that plagued C or C++.

Java enforced a rigorous object-oriented approach that structured and modularized programming. Its interpretation allowed programs to run on various platforms without recompilation, thanks to the Java Virtual Machine (JVM). Programs compiled to bytecode, an intermediate format independent of hardware. This characteristic guaranteed the portability of Java applications on any system equipped with a Java interpreter.
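
A minimal sketch makes the principle concrete (the class name is illustrative): compiled once with javac, the file below becomes bytecode that the java command can run on any system with a JVM.

    // Hello.java -- javac compiles this to portable bytecode, not machine code.
    public class Hello {
        public static void main(String[] args) {
            // The same .class file runs unchanged on any platform with a JVM.
            System.out.println("Write once, run anywhere");
        }
    }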

The language’s robustness stemmed from its strong typing system and thorough compilation checks. The absence of pointers, those recurring sources of problems in other languages, reinforced program reliability.

Java then evolved into several editions. Java 2 Standard Edition (J2SE) targeted standard desktop computers. Java 2 Enterprise Edition (J2EE) addressed enterprise applications. Java 2 Micro Edition (J2ME) covered mobile and embedded devices.

Improvements followed one after another. Java 1.02, released in 1996, suffered from significant limitations, such as the inability to print. Subsequent versions progressively enriched the functionalities. Java 1.2, published in December 1998 and renamed Java 2 three days later, notably introduced more sophisticated graphics libraries.

The developer community embraced Java for various reasons. The language simplified the creation of network applications, a legacy of its initial design for distributed systems. Its integrated security model protected against malicious programs. Its multitasking capabilities facilitated the development of high-performance and responsive applications.

The architecture of the Java Virtual Machine testified to the technical sophistication of the system. The class loader, method area, heap, and Java stack composed this runtime environment. The execution engine combined an interpreter and a Just-In-Time compiler that optimized performance by translating bytecode into native code.

Java transformed web programming by introducing applets, which executed code in browsers. This innovation enriched the user experience on the Internet and inspired new approaches to web development.

The “Write Once, Run Anywhere” philosophy influenced the design of subsequent languages and platforms. Java’s security model, automatic garbage collector, and object-oriented programming approach established standards that endure in contemporary software development.

JavaScript

In 1995, the Web was undergoing rapid expansion. Netscape and Microsoft were engaged in an intense commercial battle over their respective browsers. Brendan Eich, an engineer at Netscape, received an unusual assignment: design a programming language in ten days, a tool that would execute directly in the Netscape browser. But this request was part of a broader strategic vision. Netscape no longer viewed the browser as a simple application; the company considered it, along with the server, as a new form of distributed operating system.

At that time, Sun Microsystems’ Java was establishing itself as the reference solution for complex Web applications. This compiled language generated bytecode for its virtual machine, adopted object-oriented principles from C++, and promised performance comparable to native languages. However, Netscape identified a gap: a second language was needed, lighter and interpreted, that would complement Java. An accessible tool for amateur programmers that would integrate seamlessly into Web pages.

Brendan Eich then designed JavaScript as a language with multiple influences. The syntax borrowed braces and semicolons from C, along with data structures, while the models drew inspiration from Smalltalk. The symmetry between data and code, characteristic of LISP, found its place in the design. For event handling, Eich turned to HyperCard. As for the object-oriented approach, it relied on runtime semantics using prototypes, in the manner of the Self language, rather than on class syntax like Java.
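
A brief illustrative sketch, not drawn from Netscape’s code, shows what this prototype-based model looks like: behavior is attached to a shared object at runtime, with no class declaration in sight.

    // Constructor plus prototype, in the manner of Self.
    function Point(x, y) { this.x = x; this.y = y; }

    // Added at runtime; every Point delegates to this shared object.
    Point.prototype.norm = function () {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    };

    var p = new Point(3, 4);
    p.norm();   // 5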

The language’s early years remained modest. JavaScript was mainly used to scroll messages in the browser’s status bar or animate a few images. Nothing particularly spectacular. But the language contained enough fundamental elements to endure. Its initially limited adoption gave it time to evolve gradually, particularly through the ECMA standardization process, which improved its performance and robustness.

The real turning point came between 2004 and 2005 with the arrival of Ajax. JavaScript could now retrieve data from servers and update HTML documents without reloading the entire page. Microsoft introduced this functionality in Internet Explorer via the XMLHttpRequest object, which was adopted by other browsers. The user interface partially shifted to the browser, creating much richer experiences. Gmail and Google Maps illustrated this transformation.
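
In code, the pattern looked roughly like this; the URL and element name are placeholders. A request goes out in the background, and only a fragment of the page is updated when the response arrives.

    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        // readyState 4: response complete; status 200: success.
        if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById("inbox").innerHTML = xhr.responseText;
        }
    };
    xhr.open("GET", "/messages", true);   // true: asynchronous
    xhr.send();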

This change quickly revealed the weaknesses of browser JavaScript engines. Web pages no longer restarted every few minutes. They maintained extended sessions, manipulated large volumes of dynamic data, and communicated continuously with servers in the background. Google developed Chrome and its V8 engine to meet these new performance requirements. The market followed suit, and JavaScript engine performance improved across all vendors.

In 2012, Mozilla launched asm.js, a strict subset of JavaScript optimized for executing code compiled from C or C++. Thanks to this innovation, complex games were ported to the Web. Epic Games provided a spectacular demonstration by adapting its Unreal Engine 3 to JavaScript in just a few days. The success of asm.js would directly influence the creation of WebAssembly, a more efficient binary format for executing native code in browsers.

Node.js then expanded JavaScript’s playing field beyond the browser. The language moved into server application development. Its event-driven nature, present from the beginning, facilitated the creation of highly scalable Web applications without the complexity of multithreaded programming.

Ongoing standardization through ECMAScript regularly brought new features, ES6 modules among them, all while maintaining the backward compatibility essential to the Web’s longevity. JavaScript gradually transformed from a simple animation tool into a versatile language capable of handling complex applications, whether client-side or server-side.

The ecosystem surrounding it constantly enriched itself. Transpilers enabled the use of the latest features while remaining compatible with older browser versions. Package managers like npm facilitated code sharing among developers. React, Angular, and Vue.js standardized the development of complex user interfaces.

JavaScript’s success owes much to its accessibility and adaptability. Despite its initial imperfections, linked to its rushed creation, the language managed to respond to the Web’s changing needs. Its direct interpretation, flexibility, and native integration in browsers make it a central element of modern Web development.

JavaScript’s dominant position does not, however, signal the end of its evolution. WebAssembly opens new perspectives as a universal compilation format for the Web. Other languages can now execute in the browser. JavaScript nevertheless retains its role as the privileged interface with the DOM and Web APIs. It demonstrates its ability to coexist with new technologies.

PHP

In 1994, Rasmus Lerdorf, a Canadian programmer of Greenlandic origin, developed a few Perl scripts to manage visits to his online CV. These tools, dubbed “Personal Home Page Tools,” initially served to display his curriculum vitae and track traffic. Nothing spectacular at first, just a practical solution to a personal need.

Working as a web development consultant in Toronto, Lerdorf noticed he was constantly rewriting the same CGI scripts in C for his various clients. This repetition led him to consolidate his code into a C library that he integrated with the NCSA web server. He added a template system to simplify calls. The idea remained pragmatic: save time and work more efficiently.

On June 8, 1995, he released the first public version under the name “Personal Home Page Tools version 1.0.” He had rewritten his scripts as CGI binaries, adding functionality to process web forms and interact with databases. The whole thing then took the name “Personal Home Page/Forms Interpreter,” or PHP/FI. Lerdorf mainly sought to detect bugs faster and improve his code through feedback from other developers.

This first iteration already contained Perl-inspired variables, form handling, and the ability to embed directly in HTML. The syntax borrowed from Perl while remaining more accessible, though consistency wasn’t always present. Web developers began taking interest, attracted by this simpler way of creating dynamic pages.
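
A minimal sketch, written in the syntax of later PHP versions rather than the original PHP/FI, illustrates this proximity to HTML:

    <html>
      <body>
        <p>
          <?php
            // Code lives inside the page; its output replaces the block.
            $visitor = "world";          // a Perl-style variable
            echo "Hello, " . $visitor;
          ?>
        </p>
      </body>
    </html>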

Lerdorf didn’t consider PHP his property. Rather, he saw it as a tool for solving problems. When other developers asked to use his scripts, he willingly shared them. His value lay in his ability to solve his clients’ problems, not in exclusive code ownership.

The real turning point came in 1997. Zeev Suraski and Andi Gutmans, two Israeli developers from Technion IIT, rewrote the parser. This overhaul gave birth to PHP 3, where the name became a recursive acronym: “PHP: Hypertext Preprocessor.” Public testing began, and the official version was released in June 1998. The two Israelis continued their work by rewriting PHP’s core, thus creating the Zend engine in 1999. They founded Zend Technologies in Ramat Gan.

PHP 4 was released on May 22, 2000, powered by Zend Engine 1.0. This version significantly improved object-oriented programming support and overall performance. The language matured and established itself as a real tool for building dynamic websites.

Version 5 arrived on July 13, 2004, with Zend Engine II and a new object model. Object-oriented programming became truly usable, and PHP Data Objects (PDO) offered a unified interface for database access. In 2008, PHP 5 was the only actively developed stable version.

PHP’s history shows how much the community matters in its evolution. The language grew richer through contributions from numerous developers. The PHP Extension Community Library (PECL) served as an incubator where groups of volunteers experimented with new ideas before they were eventually integrated into the main distribution.

Project management followed a meritocratic logic. Code took precedence over theoretical debates. If a developer proposed a working implementation, it would probably be adopted, regardless of its degree of perfection. This philosophy accelerated innovation and feature additions, sometimes at the expense of overall consistency.

PHP’s success owed much to its ease of use and proximity to HTML. Developers could directly insert PHP code into their web pages, making the language accessible to beginners. This ease of learning largely explains its growing popularity.

Development continued with PHP 7.0 in 2015. The Zend Engine 3 significantly boosted performance. Execution speed improved dramatically, and new features appeared such as scalar type declarations and the combined comparison operator.

PHP 8.0 was released in 2020, marking a new stage with the JIT (Just-In-Time) compiler, union types, and match expressions. These developments strengthened the language’s robustness without sacrificing its ease of use.

Over the years, PHP has become essential in web development. Its evolution reflects changes in programming practices and the needs of the modern web. From a simple personal tool, it has transformed into a complete language supporting object-oriented programming, exception handling, and numerous advanced features.

Top

Ruby

On February 24, 1993, two Japanese computer scientists, Yukihiro Matsumoto and Keiju Ishitsuka, discussed online what to name their new programming language project. They wanted the name of a gemstone. After considering “Coral”, they chose “Ruby”: one character shorter, thus simpler to type.

Yukihiro Matsumoto, whom everyone calls Matz, wanted to create a truly enjoyable object-oriented scripting language. He borrowed flexibility from Perl, object-oriented vision from Smalltalk, rigor from Eiffel, reliability from Ada, and expressiveness from Lisp. From this blend emerged a language distinguished by its elegance.

The first public version, 0.95, was released on December 21, 1995 on Japanese forums. Success in Japan was immediate. Version 1.0 appeared in 1996, version 1.1 in August 1997. Development accelerated. In 1997, netlab.jp, a Japanese company specializing in open source, hired Matz to work full-time on Ruby.

Matsumoto designed Ruby according to the principle of least surprise. Methods bear names drawn from common English vocabulary that naturally describe their function. For character strings, we find verbs like strip, split, delete, or upcase. Learning is intuitive, code maintenance easier.

In Ruby, everything is an object. Truly everything, including numbers and primitive types. This approach, inherited from Smalltalk, unifies the syntax. Developers apply the same principles to all elements they manipulate. The language gains consistency.

Ruby allows modification of its essential components, enabling programmers to redefine and enrich existing classes and add methods to built-in classes. For example, one can enrich the number class with a plus method to make code more expressive. This flexibility appeals.
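
A hedged sketch of the idea, in modern Ruby syntax (early versions named the class Fixnum):

    # Reopening a built-in class to add a method at runtime.
    class Integer
      def plus(other)
        self + other
      end
    end

    3.plus(4)   # => 7 -- even a literal number is a full object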

In 2001, Ryan Leavengood began developing RubyGems, a tool to simplify library distribution. The project stopped when Leavengood left at version 0.4.0, but a new team took it over in 2003 and completely rewrote it. RubyGems then became a central element of the ecosystem.

The year 2004 changed everything. David Heinemeier Hansson created Ruby on Rails, a web framework that propelled Ruby onto the international stage. This platform attracted developers with its pragmatic approach and productivity. Apple integrated Ruby on Rails into Mac OS X v10.5 “Leopard”. This recognition by an industry giant accelerated the language’s adoption.

Ruby’s popularity peaked in 2006, when it received the title of “Programming Language of the Year”. In 2016, it reached eighth place in the TIOBE index. Between 2013 and 2020, Ruby evolved from version 2.0.0 to version 3.0.0. The language reached technical maturity.

The community remains dynamic. In 2022, there were 2.4 million Ruby developers worldwide. Airbnb, Shopify, Coinbase, and GitLab use Ruby and contribute to its development. The Rails Foundation, recently created, works on documentation, training, and language promotion.

Version 3.0, released in 2020, brought significant performance improvements. The “Ruby 3x3” project aimed to make this version three times faster than Ruby 2.0. Developers optimized the MJIT compiler, refined the garbage collector, and improved concurrency support through Ractors.

Several alternative implementations enrich the ecosystem. JRuby runs on the Java virtual machine, TruffleRuby offers high performance thanks to GraalVM. Matz leads the development of mruby, a lightweight version designed for embedded systems. These variations expand the language’s usage possibilities.

SSH

In 1995, at the University of Helsinki, Tatu Ylönen had just experienced a network attack on his institution’s systems. Passwords were circulating in plain text, intercepted by attackers exploiting the glaring vulnerabilities of protocols like Telnet, rlogin, or rsh. Faced with this concrete threat, the Finnish researcher decided to design a radically different solution: a protocol capable of encrypting communications and robustly authenticating the parties involved. SSH, for Secure Shell, was born from this immediate necessity to protect exchanges over the Internet.

The first version was released in July 1995 as free software. The success was immediate in academic and technical circles. Everyone understood the urgency of securing remote connections, and SSH met this need. Given this enthusiasm, Ylönen founded SSH Communications Security in December 1995. The company commercialized professional versions while maintaining a free version for non-commercial uses, a delicate balance between economic logic and broad dissemination of the protocol.

The following year marked a turning point with SSH-2, a complete redesign of the initial version. This new iteration adopted a three-layer architecture: the transport protocol handled encryption and server authentication, the authentication protocol verified user identity, and the connection protocol multiplexed communication channels within a single secure tunnel. This modularity brought increased flexibility and robustness.

The IETF launched a working group dedicated to SSH standardization in 1997. Between 2004 and 2006, several RFCs specified SSH-2 standards. This standardization promoted interoperability between implementations and established SSH as the de facto standard for remote system administration. But an event would disrupt the ecosystem in 1999: the appearance of OpenSSH, a free implementation developed within the OpenBSD project. This free and open alternative accelerated the universal adoption of the protocol. OpenSSH quickly became the reference, integrated by default into most UNIX and Linux systems.

On the technical level, SSH relies on a hybrid approach combining asymmetric and symmetric cryptography. When establishing the connection, public-key mechanisms authenticate and exchange keys. Once the link is established, symmetric encryption takes over to protect data with better performance. The protocol introduced the "known hosts" mechanism: clients memorize the public keys of contacted servers, thus detecting any man-in-the-middle attack attempts. SSH also enabled tunneling of other TCP/IP protocols via port forwarding, considerably extending its scope of application.
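
Port forwarding still takes a single command today. In this illustrative example, with placeholder hostnames, connections to local port 8080 travel through the encrypted tunnel and emerge on port 80 of the remote machine:

    # Local port 8080 is relayed, encrypted, to port 80 on the far side.
    ssh -L 8080:localhost:80 admin@gateway.example.com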

The flexibility of SSH-2 allowed support for various authentication methods: passwords, public keys, host-based authentication. This adaptability made it possible to calibrate security according to contexts. Data compression and channel multiplexing optimized bandwidth usage, making the protocol even more efficient.

In the 2000s, SSH established itself as a key building block of Internet infrastructure. Its use extended far beyond remote access to encompass secure file transfer with SFTP, entire system management, and application deployment. Companies massively adopted it to secure their administrative operations. The protocol continued to evolve, integrating more recent cryptographic algorithms such as AES and elliptic curves. The developer community remained mobilized to adapt SSH to emerging threats and use cases.

SSH’s impact on modern computing remains considerable. As a practical and secure solution for remote administration, the protocol has accompanied the growth of the Internet and the explosion of cloud computing. Its modular design has inspired other security protocols. SSH remains ubiquitous in development environments, data centers, and cloud services. It illustrates how a technical response born from a specific incident can become an indispensable international standard.

SSH’s lasting success lies in its ability to combine robust security with ease of use. By automating cryptographic complexity while remaining accessible, the protocol has democratized security best practices in system administration.

Lynx

When discussing web browsers, the image that spontaneously comes to mind is that of a colorful, animated, interactive graphical interface. Yet, there has existed since 1992 a program that defies this representation: Lynx, a fully text-based browser born at the University of Kansas. Its history reveals how accessibility and simplicity can span three decades without aging a day.

Initially, Lynx was nothing like a web browser. Developers at Kansas were seeking to build an interface to access their local information system. Hypertext was gradually grafted onto it, then the Gopher protocol made its appearance. Only after these successive additions did Lynx learn to speak the language of the Web. This layered genesis explains the code structure: a mosaic of pieces assembled over time, without an initial master plan. Some might see this as a flaw; others, the mark of organic adaptation to real needs.

The program’s name refers to the lynx, a feline renowned for its keen vision. The analogy is not trivial: where other browsers multiplied visual ornaments, Lynx got straight to the point. Early users discovered a raw, stripped-down, yet functional Internet. This deliberate bareness proved a major asset for blind or visually impaired people, who could finally access the web via screen readers.

Navigation in Lynx relies on the keyboard. Arrow keys move the cursor between links, the Enter key activates the selected one, and a few shortcuts provide access to advanced features. Nothing more. This economy of means nevertheless conceals genuine technical sophistication. The program can handle HTML forms, understands cookies, and offers extensive configuration via the .lynxrc file. You can customize colors (yes, even in text mode), define your preferences, or hook up your favorite text editor to modify a page directly.

Image handling well illustrates the project’s philosophy. Rather than ignoring them outright, Lynx displays their alternative text—that “alt” attribute that web developers too often forget. This approach, born from a technical constraint, has become a reference point for accessibility. It forces one to ask: what does someone who cannot see actually see?

In today’s web, saturated with JavaScript and complex style sheets, Lynx retains its usefulness. System administrators working on the command line use it regularly. Low-bandwidth connections benefit from it. And then there are those who prefer to read without being distracted by animations or flashing advertisements. That said, the reading mode of today’s browsers serves this purpose perfectly well too.

Web developers still use Lynx as a diagnostic tool. Displaying a page in this browser means seeing its bare HTML skeleton. If the content is comprehensible under these conditions, that’s a good sign. This method is part of what’s called progressive enhancement: starting from a simple functional base, and enriching the experience for those with more advanced technical means.
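
That diagnostic often comes down to a single command: lynx -dump renders a page and prints the resulting text to standard output, ready to be read or piped into other tools (the URL is a placeholder).

    lynx -dump https://www.example.com/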

That Lynx has survived this long is no accident. Its open source code and community of developers have managed to evolve it without betraying its founding principles. Updates maintain compatibility with modern standards, while the interface remains faithful to the original spirit. Few software programs can pride themselves on such consistency over three decades.

MySQL

In 1979, Michael "Monty" Widenius was working at TcX, a small company, when he developed a reporting tool in BASIC. The hardware constraints of the era were severe: 16 KB of RAM, a 4 MHz processor. In this Spartan environment, Widenius learned to write lightweight and fast code. This experience shaped his approach to software development. His position as co-owner at TcX gave him a rare freedom for a programmer: control over his code. This combination of technical skill, intellectual property, and autonomy would prove crucial.

During the 1990s, TcX clients demanded an SQL interface to manipulate their data. Widenius explored several options: buying a commercial database license, integrating mSQL code... None truly satisfied him. He decided to create his own solution. In May 1996, MySQL version 1.0 was released, but remained confined to a limited circle. The first public version (3.11.1) arrived in October 1996, initially only for Solaris. A month later, Linux was supported.

MySQL gradually extended to other operating systems. Its features grew richer with each version. The distribution model relied on a particular license: commercial use was free as long as you didn’t redistribute the software with your own products. TcX also sold technical support. These revenues funded development.

MySQL version 3.22 already offered a substantial portion of the SQL language. Its optimizer was impressive, especially for a project driven mostly by a single person. The system excelled in speed and stability. APIs multiplied, making it usable from almost any programming language. But gaps remained: no transactions, no subqueries, no foreign keys, no stored procedures or views. Table-level locking sometimes considerably slowed operations.

Around 1999-2000, MySQL AB emerged as a separate company. The team grew with the recruitment of developers. A partnership with Sleepycat was formed to create an SQL interface to Berkeley DB files. The goal was to provide MySQL with transactional capabilities. Version 3.23 resulted from this work.

The Berkeley DB integration didn’t go as planned. Stability was never really achieved. But this effort wasn’t in vain: MySQL’s source code was now equipped to accommodate different storage engine types. In April 2000, with support from Slashdot, master-slave replication was introduced. The old ISAM engine was reworked to become MyISAM, which brought improvements including full-text search.

Around the same time, Heikki Tuuri proposed integrating InnoDB, his own storage engine with similar functionality. The table management interface, born from the Berkeley DB work, facilitated this integration. MySQL version 4.0, combined with InnoDB, was declared stable in March 2003.
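
The resulting interface let administrators pick an engine table by table. A hedged sketch, using the modern ENGINE spelling of a clause written TYPE in versions of that era:

    -- Same SQL surface, different trade-offs underneath.
    CREATE TABLE page_views (id INT, url TEXT)
        ENGINE=MyISAM;   -- fast, full-text search, no transactions
    CREATE TABLE orders (id INT, total DECIMAL(10,2))
        ENGINE=InnoDB;   -- transactional, row-level locking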

But the real innovation of 4.0 was the query cache. This feature drastically improved the performance of many applications. The replication code on the slave was rewritten with two threads: one for network I/O from the master, the other to process updates. The optimizer gained efficiency. The client/server protocol became SSL-compatible.

During the development of 4.1, work on the 5.0 branch progressed. This version brought stored procedures, server-side cursors, triggers, views, XA transactions, and major optimizer improvements. Why create a separate branch? The developers wanted to prevent the addition of stored procedures from delaying the stabilization of 4.1. Version 5.0 was released in alpha in December 2003. Two alpha branches temporarily coexisted, creating some confusion.

Between 2005 and 2006, Oracle successively acquired Innobase and then Sleepycat, the two transactional storage engine providers for MySQL. These acquisitions raised concerns in the community. Larry Ellison, Oracle’s CEO, casually commented that he had spoken to almost every company in the sector. He described MySQL as a tiny company with revenues oscillating between 30 and 40 million dollars, compared to Oracle’s 15 billion in annual revenue.

Yet MySQL AB’s commercial trajectory followed an upward curve. In 2001, Mårten Mickos took over as CEO and transformed a technical startup into a viable business. Revenues climbed from 6.5 million dollars in 2002 to 75 million in 2007. The business model initially relied on dual licensing for OEM integrators, then shifted toward support subscriptions for end users with the launch of MySQL Network in 2005. The company counted 8 million active installations in 2006, employed 320 people spread across 25 countries, 70% of whom worked from home. That year, Mickos announced preparations for a 2008 IPO with a target of 100 million in revenue.

But Sun Microsystems struck first. In January 2008, the California-based company announced the acquisition of MySQL AB for approximately 1 billion dollars. Jonathan Schwartz, Sun’s CEO, called this the most important operation in his company’s history. He saw it as an opportunity to reposition at the heart of the web economy. The numbers spoke for themselves: 75% of MySQL installations ran on hardware from manufacturers other than Sun, 80% used Linux rather than Solaris. The acquisition gave Sun access to a massive installed base to which it could sell servers and software. With this billion dollars, MySQL established a new valuation standard for open-source software companies.

The integration at Sun proved chaotic. Widenius and Axmark, two of the three founders, left the company shortly after the acquisition. In 2009, Mickos resigned in turn to become entrepreneur-in-residence at Benchmark Capital. Within a few months, Sun lost the technical and commercial leaders who had built MySQL’s success.

That same year, Oracle announced the acquisition of Sun for 7.4 billion dollars. MySQL thus became the property of the main commercial competitor in the database field. This prospect raised many questions. Would Oracle continue to develop MySQL or seek to stifle it to protect its own database? Could companies that depended on MySQL still trust it? The European Commission closely examined the operation before giving its approval, not without demanding guarantees on maintaining MySQL’s development.

Widenius himself didn’t remain idle. As early as 2009, he started developing MariaDB, a fork of MySQL. This new database maintained full compatibility with MySQL while adding its own improvements. The non-profit MariaDB Foundation was created in 2012. The name this time came from Maria, the founder’s other daughter. This fork gradually became the reference alternative for those wary of Oracle. Wikipedia switched to MariaDB, Google too. Linux distributions began replacing MySQL with MariaDB in their default repositories.

MySQL nevertheless continued its evolution under Oracle’s governance. The team remained largely the same as before the acquisition. NoSQL features appeared. Oracle maintains MySQL under dual GPL and commercial licensing.

RealPlayer

In April 1995, the Internet was breaking into households worldwide. The 28.8k modems of the day struggled to transmit even .wav files, and no one yet imagined listening to music or watching a video without waiting for it to download completely. Rob Glaser, who had just left Microsoft after ten years working on applications and multimedia, saw things differently. He founded RealNetworks with an idea that seemed almost crazy at the time: streaming audio directly over the Internet.

The first RealAudio Player disrupted usage patterns from its release. For the first time, people could listen to audio content via streaming, a word that wasn’t yet part of everyday vocabulary. Success came quickly: three years later, Microsoft integrated the player into Windows 98. This achievement becomes even more impressive when you realize that YouTube wouldn’t appear until ten years later.

The RealNetworks teams worked on two fronts. On one hand, they developed compression techniques that delivered decent quality despite ridiculously low bandwidth. On the other, they invented an original way of combining connection-based and connectionless Internet protocols to smooth out streaming. It was sophisticated tinkering, but it worked.

The company chose a strategy that would later become common: a free player for users, paid servers for those wanting to broadcast. Jim Breyer, from Accel Partners, joined the board of directors and investors poured in. In 1996, against the advice of some team members, RealNetworks launched a premium version of the player. The gamble paid off handsomely: more than three million copies sold, most of them online.

Video arrived in 1997 with RealSystem. Connections remained slow, but Glaser anticipated their improvement. To prove his technology was solid, he partnered with Spike Lee, who directed the first short films designed for the Internet. More than 500,000 people watched them in a month. The message got through.

The software changed names several times over the years—RealAudio, RealOne Player, RealPlayer G2—each version bringing its share of new features. Format conversion, video downloading, playlists, audio extraction... The features piled up. In 2015, the company tried something different with RealTimes, a mobile app for creating video montages from photos. It was a way of keeping up with new usage patterns even as its core business lay elsewhere.

Glaser retained from his time at Microsoft a certain vision of convergence between technology and media. He pushed his company toward constant innovation, even at the risk of taking chances. In 2002, RealNetworks launched Helix, a platform capable of streaming any audio or video format. The company even opened part of its source code to developers, to encourage collective improvement of solutions.

RealPlayer transformed our relationship with online media, moving us from tedious downloading to instant streaming. Current platforms owe it a great deal, though few people remember this. The company lasted twenty-five years by constantly adapting.

Sun UltraSPARC

Research conducted at Berkeley between 1980 and 1984 on RISC architectures, carried out in parallel with Stanford’s work on MIPS, led to the creation of Scalable Processor ARChitecture (SPARC) by Sun Microsystems in 1985. Four years later, Sun founded SPARC International to open this architecture to other manufacturers.

The first version, SPARC V7, was released in 1986. V8 arrived in 1990 with hardware support for multiplication and division, an MMU, and 128-bit floating-point operations. V9 represented a major milestone: it handled 64-bit registers and operations, redesigned exception handling, and integrated prefetching.

In 1995, Sun launched the first UltraSPARC, the first processor to integrate SPARC V9. Marc Tremblay participated in its design. This superscalar microprocessor executed instructions out of order and processed four simultaneously. Its pipeline had eight stages. Engineers had simplified the execution unit compared to the SuperSPARC to gain frequency, particularly by modifying the ALU branching.

The processor exposed 32 64-bit registers at any given time through a system of eight register windows, for a total of 144 registers. This technique prevented called functions from wasting time saving and then restoring registers. Seven input registers and three output registers provided access to two ALUs and the memory management unit. A single ALU handled multiplication and division.

The floating-point unit was divided into five blocks: addition and subtraction, multiplication, division and square roots, followed by two blocks dedicated to SIMD instructions of the Visual Instruction Set. Thirty-two 64-bit registers served these operations, including five for inputs and three for outputs.

The primary cache was divided into two 16 KB sections, one for instructions and one for data. An external unified secondary cache complemented the system, with a capacity between 512 KB and 4 MB, accessible in one cycle. This cache used synchronous SRAM clocked at the processor speed.

Texas Instruments manufactured UltraSPARC using the EPIC-3 CMOS process at 0.5 μm on four metal layers. The manufacturer abandoned BiCMOS, which offered little advantage at this level of miniaturization. The chip integrated 3.8 million transistors in a 521-pin PBGA package.

Ten years later, the UltraSPARC T1 broke with this approach. It embodied Sun’s concept of throughput computing, which favored multiplying cores and threads over increasing frequencies or making pipelines more complex. In its full version, the T1 had eight cores each executing four threads, for a total of 32 threads. Sun sacrificed floating-point performance and cache size in favor of parallelism.

The T1 used a short in-order execution pipeline: fetch, thread selection, decode, execute, memory, and write-back. Its ALUs operated with a latency of one cycle, multipliers and dividers over multiple cycles. A single floating-point unit was shared among the eight cores. Each thread had its own instruction buffer.

The processor switched between threads according to several rules. By default, it alternated between available threads using an LRU policy. Long instructions triggered a thread switch to keep the pipeline active. The L1 caches used a write-through policy, and the L2 was inclusive of the L1. The T1 deliberately made do with a reduced cache: the large number of threads masked memory latencies.

The UltraSPARC T1 embedded a hypervisor that added a privilege level above user and supervisor modes. This lightweight software layer offered a complete virtualization interface to guest systems. Sun fully documented this API and released the T1’s RTL source code under an open license, renamed OpenSPARC T1.

The T1 consumed no more than 70 watts, half that of a Xeon or an Itanium. This efficiency mattered for data centers where air conditioning could cost more than the hardware itself.

This architecture suited web servers, client-server applications, and certain databases that exploited its high parallelism. Sun acknowledged that scientific workloads, especially those making heavy use of floating-point operations, were not these processors’ forte.

UltraSPARC reflects the evolution of processor architectures, from early RISC to massively multicore designs. It demonstrates the strategies explored to increase performance: pipeline complexity on one hand, massive parallelization on the other.

Wiki

In 1994, Ward Cunningham invented something that would change how we create content on the Internet. His “Wiki Wiki Web” was initially just a tool to facilitate teamwork at Tektronix, but it would become much more than that.

Cunningham had grown up in the American Midwest during the Sputnik years, tinkering with electronic circuits from childhood. After studying electrical engineering at Purdue and earning a master’s degree in computer science, he became passionate about object-oriented programming. While working with Smalltalk and Lisp, he noticed that these languages produced code that could be understood by reading it anywhere, without needing the full context. This idea of immediate readability would influence him.

The name wiki comes from Hawaiian and means “fast”. Cunningham had discovered it while taking the Wiki Wiki shuttles at Honolulu airport. It was exactly what he was looking for: a word that conveyed the speed with which web pages could be modified. Gone were the cumbersome processes requiring writing, review, correction, and finally publication. With the wiki, you wrote, published immediately, and others could review and correct afterward. A real game-changer.

Technical simplicity was at the heart of the system. A simple web browser was enough. No need for complicated software or knowledge of obscure markup languages. The syntax remained basic: to create a link to another page, you simply wrote a word in CamelCase, with a capital letter in the middle. The system automatically created the link.
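
A few lines of Python are enough to mimic this automatic linking. The sketch below is a loose reconstruction, not the original implementation: the regular expression and the URL scheme are assumptions made for the example.

    import re

    # A CamelCase word: two or more capitalized runs glued together.
    CAMEL = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")

    def render(text):
        # Every CamelCase word becomes a link to the page of the same name.
        return CAMEL.sub(lambda m: '<a href="/wiki/%s">%s</a>'
                         % (m.group(0), m.group(0)), text)

    print(render("See WikiWikiWeb and RecentChanges for details."))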

At launch, Cunningham wrote about a hundred pages on computer programming. This trick gave the impression that a community already existed, that others were contributing. Visitors saw how it worked and felt encouraged to participate. He invited developers to create their personal pages, gradually familiarizing them with collaborative editing.

Developers embraced the concept. Within the first few weeks, clones of the system appeared. The wiki contrasted sharply with traditional content management systems that imposed validation at every level. Here, freedom was total.

Many feared vandalism and inevitable abuse. But the community self-regulated naturally. Cunningham explained it simply: removing inappropriate content required less effort than creating it. This asymmetry favored the emergence of relevant information. The system relied on mutual trust between users, built through complete transparency of modifications and the ability for anyone to contribute.

The wiki’s impact exploded in 2001 with Wikipedia. Jimmy Wales took the principles of the Wiki Wiki Web and added an appropriate license and a multilingual global vision. The wiki became a worldwide cultural phenomenon.

Research conducted at the University of Ulster demonstrated the educational effectiveness of wikis. Over 80% of students found the navigation intuitive and appreciated the freedom to edit. Without technical barriers, they focused on content rather than tools.

The wiki introduced lasting technical innovations: page versioning, automatic link creation, full-text search. These features have become standards of modern collaborative work. It proved that an online community could organize itself effectively without rigid hierarchy. This philosophy of self-organization has inspired countless collaborative projects on the Internet.

Thirty years later, the original Wiki Wiki Web is still running. Simplicity and openness remain enduring principles in collaborative technologies.

Microsoft Windows 95

On August 24, 1995, at midnight, lines stretched in front of CompUSA and Best Buy stores. Such a thing had never been seen for software. Microsoft orchestrated the launch of Windows 95 with unprecedented scale: 300 million dollars invested in a worldwide advertising campaign, the Rolling Stones and their "Start Me Up" as the soundtrack, Jay Leno hosting the event alongside Bill Gates. The Redmond company even produced an hour-long show with Jennifer Aniston and Matthew Perry, then at the height of their fame thanks to Friends. The result? Over 40 million copies sold in five weeks.

Behind this media hype lay a radical overhaul of the computing experience. Windows 95 definitively turned the page on MS-DOS and Windows 3.1. The desktop now displayed icons, a taskbar ran along the bottom of the screen, and a Start button concentrated program access. This interface transformed the computer into a tool that anyone could master, without sacrificing the advanced features that interested experienced users.

The system’s 32-bit architecture required a compatible processor but unlocked new possibilities. Memory was managed better, resources were allocated more efficiently. Preemptive multitasking entered the scene for 32-bit applications, allowing multiple programs to run in parallel without the system bogging down. However, Microsoft had to solve a delicate equation: making old DOS and 16-bit Windows software coexist with new 32-bit programs. The Win32 API provided an answer by giving developers a consistent set of functions, compatible with both Windows NT and Windows CE. Programmers could thus port their applications from one platform to another without too much trouble.
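
To give a taste of what that consistent set of functions meant in practice, here is a minimal sketch in Python’s ctypes, calling the same user32.dll entry point a C program of the era would have linked against. It runs only on Windows, and the strings are invented for the demo.

    import ctypes

    # MessageBoxW(hwnd, text, caption, type): one Win32 function with the
    # same signature on Windows 95 and Windows NT, which eased porting.
    ctypes.windll.user32.MessageBoxW(None, "Hello from Win32", "Demo", 0)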

Plug and Play technology aimed to simplify peripheral installation. In reality, its chaotic beginnings earned the system the ironic nickname "Plug and Pray." Despite these initial hiccups, the principle of automatic hardware detection later became self-evident. File management also gained flexibility: long filenames, up to 255 characters, replaced the cramped 8.3 naming scheme inherited from DOS. Mac OS had long enjoyed this freedom, and Windows 95 finally caught up. The VFAT file system preserved compatibility with older applications, a compromise typical of Microsoft’s approach.

The integrated network interface and native TCP/IP support arrived just as the Internet was really taking off. Microsoft simultaneously launched the Microsoft Network (MSN), a proprietary online service meant to compete with AOL, accessible through a monthly subscription. Internet Explorer was not part of the base system: it shipped separately in the Microsoft Plus! pack before being bundled into later releases. The "Briefcase" feature allowed file synchronization between different computers, foreshadowing the cloud services that would flourish a decade later. User profiles gave each member of a family or business the ability to customize their work environment.

The system included advanced network features: peer-to-peer file sharing, support for NetBEUI and IPX/SPX protocols, clients for NetWare and Microsoft networks. These characteristics facilitated its adoption in businesses, where it gradually supplanted Windows 3.1 and Windows for Workgroups. The competition hadn’t given up. Mac OS captivated with its elegance and multimedia capabilities but remained confined to Apple hardware. IBM’s OS/2 impressed technically but suffered from poor marketing. Linux, still an emerging free system, attracted technical users with its stability and modularity.

A typical computer of this era featured a 486 or Pentium processor, an IDE or SCSI hard drive, a double-speed CD-ROM drive, and a Sound Blaster sound card. These machines had to contend with Windows 95’s requirements: 4 MB of RAM at minimum, 8 MB recommended, and substantial disk space. Microsoft evolved the system through several versions (OSR1, OSR2) that integrated FAT32, USB, and AGP. Windows 95 remained available until 2001, gradually giving ground to Windows 98 and then Windows ME.

The system’s impact on the computer industry was considerable. It standardized the modern graphical interface and helped make the computer a familiar object in homes. This period marked the height of Microsoft’s dominance in the consumer market. A redesigned interface, a modernized architecture, aggressive marketing: Windows 95 established a standard that influenced operating system development for the following twenty years. The system remains in memory as the one that democratized personal computing.

Microsoft DirectX

Windows 95 posed a significant problem for video game developers. Most titles still ran under DOS, which allowed applications to access hardware directly. With Windows and its secure memory management, graphics performance collapsed. Microsoft risked losing developers if nothing changed.

The solution came from a small company, RenderMorphics, whose source code Microsoft acquired. This became the starting point for DirectX. The early versions impressed no one: bugs galore and a programming interface of daunting complexity. The system relied on execute buffers, packages containing lists of vertices to display and their instructions. Developers hated this approach.

Everything changed with DirectX 5. The DrawPrimitive interface radically simplified programming without sacrificing performance. Microsoft had just created a credible option for developing games under Windows. The video game market was exploding, and the battle between DirectX and OpenGL began in earnest.

John Carmack, the lead architect at id Software, didn’t mince words in December 1996. He judged DirectX “terribly flawed,” an API that imposed absurd constraints on programmers without providing anything in return. Other prominent developers bombarded Microsoft with letters, all advocating the adoption of OpenGL.

Microsoft organized comparative demonstrations to prove DirectX’s superiority. These tests were contested: the comparison methods seemed dubious. Debates often revolved around software rendering performance, an aspect that would quickly lose importance with the arrival of powerful and affordable graphics cards.

DirectX prevailed despite the controversies. The API integrated comprehensive tools for managing sound and input devices. The Apple market weighed little, which reduced interest in cross-platform compatibility. DirectX could also query graphics hardware capabilities, an appreciable asset.

The technical architecture relied on COM (Component Object Model), which allowed its use from C++, C# or Visual Basic. Pure C remained possible, at the cost of increased complexity. DirectX progressed alongside OpenGL in supporting custom shaders, giving programmers fine control over the graphics pipeline.

DirectX 8 abandoned DirectDraw, the 2D API that had long been a pillar of the system. Six years later, DirectX 10 arrived with Windows Vista after development conducted in collaboration with application creators and hardware manufacturers.

DirectX 10’s major contribution was the geometry shader, a new programmable stage in the graphics pipeline. Positioned between vertex processing and rasterization, it could generate new primitives from a single input, within a limit of 1,024 scalar output values per invocation. The designers kept certain stages as fixed functionality, preferring performance to total flexibility.

DirectX 10 standardized the processing of different programmable stages via a “common core.” This virtual machine unified the management of input-output registers, temporary registers, and resource binding points. It used 32-bit assembly language, but Microsoft encouraged the use of the high-level language HLSL (High Level Shading Language) for programming shaders.

The technology has continued to evolve since. Microsoft wanted to keep Windows attractive to game creators, even if it meant weathering criticism about its business practices and lock-in strategy. DirectX has become an essential industry standard in video gaming.

A contested API at its beginnings transformed into a reference solution through well-conceived technical improvements and a keen understanding of developers’ needs. Technical and strategic choices weigh heavily in technology evolution, intertwining with commercial stakes and community dynamics.

ICQ

In 1996, four Israeli developers tinkered with a messaging system that would change how people communicate online. Yair Goldfinger, Sefi Vigiser, Amnon Amir, and Arik Vardi, along with Yossi Vardi, the latter’s father, weren’t trying to create such a tool. Their initial project simply aimed to make checking messages on beepers more convenient. ICQ was born almost by chance, as an auxiliary tool to facilitate their own collaborative work.

At that time, staying permanently connected was a luxury. Modems tied up the phone line, and maintaining an online presence on IRC for chatting wasn’t always feasible. The team faced communication difficulties when working remotely. Hence the idea of ICQ, whose name says it all: I Seek You. The software’s primary function was to enable users to find each other online.

Two months were enough to develop the program. Each user received a unique identification number, the UIN, comparable to a phone number. Low numbers indicated early registration, creating a kind of social prestige among early adopters. You could see who was connected, exchange messages in real time, and search for other users through an integrated directory.

The software spread on its own, without a marketing campaign. Online gaming communities were the first to be won over. Players of MUDs, Meridian 59, or Diablo on Battle.net massively adopted ICQ to coordinate their games and stay in touch between sessions. This spontaneous adoption foreshadowed the role Discord would play much later in the gaming universe.

Success attracted America Online, which acquired Mirabilis in June 1998 for $407 million. Under AOL, ICQ continued its expansion while maintaining its identity and its Tel Aviv-based team. By 2001, the service had 100 million registered users, mostly young people between 13 and 29 who spent five hours a day connected.

Versions followed one another with their share of new features. ICQ integrated encryption starting with version 99b, while earlier versions lacked it. Voice calls followed, then video conferencing, file transfer, messages to mobile phones, and ICQphone for calling landlines. In 2013, ICQ became the first service to offer end-to-end encrypted video calls with the ZRTP protocol.

Social networks and mobile messaging reshuffled the deck. In 2010, AOL sold ICQ to Russian group Mail.ru for $188 million. The service was popular in Eastern Europe and Russia, with 42 million daily users. In 2021, Hong Kong experienced renewed interest in ICQ: downloads increased 35-fold in one week after controversies over WhatsApp’s privacy practices.

ICQ adapted to the mobile era with applications for Android, iOS, Windows, and macOS. Artificial intelligence made its appearance to suggest quick replies and transcribe voice messages. During the COVID-19 pandemic, the service deployed video conferencing features for up to 30 participants, meeting remote work needs.

These efforts weren’t enough. Against WhatsApp, Messenger, or Telegram, ICQ didn’t reclaim its position. In May 2024, VK announced a definitive shutdown for June 26, inviting users to migrate to VK Messenger and VK WorkSpace. A pioneer disappeared.

ICQ laid the foundations for modern instant messaging: presence statuses, real-time messages, contact lists, user search. The service demonstrated the importance of asynchronous and personal communication on the Internet, before the advent of social networks and their messaging features. Its influence persists in the features of our current applications.

Through its innovative character, ICQ transformed the Internet into a space for personal and immediate communication. Romantic encounters, discussions between players, professional conversations, family exchanges: for nearly three decades, the service wove a web of human relationships throughout the digital world.

KDE

On October 14, 1996, a message appeared on Usenet. Matthias Ettrich announced the launch of a project called KDE, standing for Kool Desktop Environment. His ambition? To build a complete graphical desktop environment for UNIX and Linux. At that time, interfaces on these systems consisted mostly of rudimentary window managers like Feeble Virtual Window Manager for the X Window System. Users confronted with these spartan tools struggled to find the comfort of Windows or Mac desktops.

Ettrich made a technical bet: to rely on the Qt library. This choice immediately triggered controversy, as Qt’s license clashed with some free software purists. But the project moved forward. A first beta version was released in 1997, followed by KDE 1.0 nine months later. A community of developers rallied around this initiative.

The architecture rested on several building blocks. KWin handled window management. Konqueror, which succeeded KFM, served both as a file manager and web browser. DCOP, later replaced by D-Bus, enabled applications to communicate with each other. The KHTML rendering engine deserves special mention: Apple would draw inspiration from it to create WebKit, which would power Safari and Chrome in their early days.

Successive iterations, KDE 2 and then KDE 3, enriched the functionality during the 2000s. In 2008, KDE 4.0 changed the game. The interface and technical architecture were completely overhauled. Plasma, the new desktop environment, transformed the user experience. Applications adopted Qt 4 and exploited the possibilities offered by this new version.

Organizationally, the project became more structured. KDE e.V. was established, a German legal association managing financial and legal matters. It collects donations, protects intellectual property, and organizes Akademy, the annual conference that brings the community together. This professionalization did not diminish the project’s collaborative spirit.

The following decade brought KDE Frameworks 5, which broke down libraries into reusable modules. Plasma 5 modernized the desktop while preserving its legendary flexibility. The project adapted to touchscreens with Plasma Mobile, proof that KDE knows how to keep pace with evolving usage patterns.

The development model is resolutely open. Decisions are made without a hierarchical pyramid, by consensus. Contributors from around the world participate: developers of course, but also translators, designers, and documentation writers. Google Summer of Code and other mentoring programs regularly attract new talent.

The applications cover all daily needs. Calligra for office work, Amarok for music, KMail for email, Kate for text editing... These programs rival proprietary solutions. Some KDE innovations, such as automatic detection of forgotten email attachments, have been adopted elsewhere.

KDE’s influence extends beyond technology. The project proves that complex and innovative applications can be created as free software. It facilitates Linux adoption on personal computers by offering an interface accessible to Windows users. Distributions like openSUSE or Kubuntu have chosen it as their default environment, which speaks volumes about its maturity.

Thirty years after its birth, KDE stays the course: giving users back control of their digital environment, respecting their freedom and privacy. The project innovates in accessibility and security. Its longevity stems from its ability to attract new contributors without renouncing its values.

Cascading Style Sheets

In 1994, at CERN, the birthplace of the Web, Håkon Wium Lie identified a problem: the Web lacked a means to style documents. Having come from the MIT Media Laboratory, where he had worked on personalized newspaper presentations, he could see that this gap was holding back electronic publishing.

The idea of separating structure from presentation already existed. Tim Berners-Lee had designed his NeXT browser in 1990 with a rudimentary style sheet. But he had published nothing, believing that each browser should find its own display solution. Others were experimenting on their own: Pei Wei with his ViolaWWW browser in 1992, which offered its own style language.

Then came NCSA Mosaic in 1993. This browser, which would popularize the Web, paradoxically represented a step backward: users could only modify a few colors and fonts. Web page creators complained. Marc Andreessen, one of Mosaic’s programmers, initially refused any changes before reversing course when cofounding Netscape.

On October 13, 1994, three days before the announcement of the Netscape browser, Håkon published the first version of Cascading HTML Style Sheets. Dave Raggett, principal architect of HTML 3.0, had pushed him to publish before the “Mosaic and the Web” conference in Chicago. Raggett had understood that HTML would never become a page description language.

Bert Bos joined Håkon. He was then developing Argo, a highly customizable browser with style sheets. Their two initial proposals differed from current CSS but already contained its founding concepts. A distinctive feature of the language: it recognizes that the style of a web document cannot come solely from the author or the reader. Their preferences must be combined in a cascade, taking into account the capabilities of the display device and the browser.
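
That combination rule is the cascade of the name. As a rough sketch of the idea, the Python below merges declarations by origin priority alone (browser defaults, then reader, then author); real CSS also weighs selectors, specificity, and !important.

    def cascade(*origins):
        # Origins are given from lowest to highest priority;
        # a later declaration for the same property wins.
        computed = {}
        for style in origins:
            computed.update(style)
        return computed

    browser = {"color": "black", "font-size": "16px"}   # built-in defaults
    reader  = {"font-size": "20px"}                     # the user wants big text
    author  = {"color": "navy"}                         # the site’s design
    print(cascade(browser, reader, author))             # navy, 20px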

The World Wide Web Consortium (W3C) became operational in 1995. A workshop on style sheets was held in Paris. Thomas Reardon from Microsoft committed to integrating CSS into Internet Explorer. At the end of 1995, the W3C established the HTML Editorial Review Board to validate future HTML specifications. The CSS specification became a work item toward a W3C Recommendation.

Internet Explorer 3 was released in August 1996. It was the first commercial browser to support CSS. Chris Wilson’s team at Microsoft reliably implemented most of the color, background, font, and text properties. Netscape followed with Navigator 4.0, but its implementation remained limited. Opera entered the race in November 1998 with version 3.5, which offered more complete CSS1 support.

CSS got its own working group at the W3C in February 1997. Chris Lilley chaired it. The group developed CSS level 2, which became a recommendation in May 1998. Meetings were generally held by phone for one hour per week, with approximately four annual face-to-face meetings in different cities around the world.

Eric Meyer improved the testing process by developing a CSS1 test suite with the help of many volunteers. This suite significantly facilitated the developers’ work. The Web Standards Project (WaSP) monitored browser compliance with W3C recommendations.

CSS developed into more than 60 modules defining different capabilities. The language adapted to new uses: creating electronic books in EPUB format, designing graphical user interfaces. Derived languages appeared: Qt Style Sheets for Qt widgets, JavaFX Style Sheets for JavaFX, MapCSS for describing map styles.

In 2008, web fonts underwent a major evolution. CSS level 2 had specified how to embed fonts in a web document, but only Internet Explorer implemented the feature. Microsoft and Monotype developed the proprietary Embedded OpenType format. The situation changed with the emergence of freely distributable fonts on the Web, with the WOFF format being the solution adopted by the community.

CSS has established itself as an essential Web standard: less than five percent of HTML pages do not use style sheets. Specialized conferences and numerous books testify to the vitality of this technology, which continues to evolve to meet the needs of the modern Web.

Flash

Jonathan Gay is a high school student programming his first graphics applications on an Apple II in 1985. His experimentation with the Pascal language leads him to collaborate with Charlie Jackson, founder of Silicon Beach Software. Together, they develop Airborne!, a game for Macintosh that combines animation and digital sound.

He continues by creating SmartSketch, a drawing software that works on both Macintosh and Windows. Users can draw on the screen with an electronic stylus. This innovation gives birth to FutureWave Software in the mid-1990s. When the Internet develops, the company transforms SmartSketch into a two-dimensional animation tool capable of displaying animated graphics on the Web. FutureSplash Animator is born in 1995.

Microsoft adopts the software for its MSN site. Success is immediate. In 1996, Macromedia acquires FutureWave and renames the program Flash. A period of intense innovation begins: audio, video, and interactivity transform the creation and distribution of content on the Internet.

The late 1990s and early 2000s mark Flash’s heyday. The technology becomes the standard for online games, interactive websites, and animations. In 2000, ActionScript makes its appearance. This object-oriented programming language gives developers the ability to create sophisticated web applications. Millions of sites integrate Flash elements, from simple animated buttons to complete immersive environments.

Technical advantages explain this success. The vector format guarantees consistent graphic quality regardless of display size. The plug-in, lightweight (less than 1 MB), installs easily. Distribution reaches 98% of computers connected to the Internet. The sandbox security model protects users by isolating Flash content, and third-party publishers can create compatible tools following the publication of the SWF format.

Adobe Systems acquires Macromedia in 2005 and continues development. But smartphones and tablets arrive. In 2010, Steve Jobs publishes a letter that sharply criticizes Flash: insufficient security, instability, excessive battery consumption. Apple refuses to integrate the technology into iOS. The decline begins.

HTML5 accelerates the movement. This new standard natively offers in web browsers what previously required Flash: animation, audio, video, advanced interactions. Developers migrate toward these open, standardized, and more efficient technologies.

Adobe adapts. The company redirects Flash toward creating HTML5 animations and renames it Adobe Animate. In 2017, Adobe announces the end of support for late 2020. Twenty-five years of a technology that shaped a generation’s web experience thus come to an end.

Flash nevertheless leaves its mark on modern web development. The interactivity and user experience it popularized remain relevant. Fluid animation, audio-video synchronization, dynamic interfaces: these concepts are now integrated into Web standards. They testify to Flash’s lasting impact on the evolution of the Internet.

This trajectory illustrates the rapid changes in digital technologies. It shows the importance of adapting to new uses and standards. The transition from Flash to HTML5 symbolizes the shift toward a more open, secure, and resource-efficient Web. A technology endures only if it responds to users’ changing needs and the technical requirements of its era.

OpenBSD

The history of OpenBSD is rooted in the epic of UNIX and the Internet. When IBM and Digital dominated enterprise computing in the 1970s, the US Department of Defense was funding research on distributed networks. At the University of Berkeley, a group of developers coordinated by researchers created BSD UNIX, which would become the reference implementation of the Internet’s TCP/IP protocols.

In the early 1990s, the Berkeley Computer Science Research Group, which coordinated BSD development, was about to be dissolved. BSD had started as a collection of software for AT&T’s UNIX and had evolved over the years into a complete operating system; several groups wanted to continue its development. Lynne and Bill Jolitz adapted the system to Intel x86 processors with 386BSD. Developers began sharing improvements in the form of “patchkits,” giving rise to two distinct projects: FreeBSD, focused on optimizing for PC hardware, and NetBSD, aiming for maximum portability.

In October 1995, Theo de Raadt forked NetBSD to create OpenBSD. The first version was released in July 1996. This new distribution distinguished itself through its radical commitment to security and elegant design. The team launched a comprehensive source code audit, systematically searching for and fixing security vulnerabilities. This approach produced a system renowned for its exceptional reliability, with only two remotely exploitable vulnerabilities in the default installation since its inception.

OpenBSD innovated by integrating preventive security mechanisms. The system adopted the W^X (Write XOR Execute) principle, where memory cannot be simultaneously writable and executable. It introduced address space layout randomization (ASLR) to vary jump targets and gaps between memory regions at each execution. Unreadable and non-writable guard pages were placed at the end of allocated memory blocks to detect overflows. The developers implemented privilege separation, where daemons run with minimal rights in a restricted environment.
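
The "drop" half of privilege separation is a pattern any UNIX program can reproduce. The sketch below, written in Python for readability, chroots into an empty directory and sheds root; the uid, gid, and jail path are placeholders chosen for the example rather than OpenBSD’s actual values.

    import os

    def drop_privileges(uid=32767, gid=32767, jail="/var/empty"):
        # Must be running as root. First confine the filesystem view...
        os.chroot(jail)
        os.chdir("/")
        # ...then shed root for an unprivileged identity, groups first.
        os.setgroups([])
        os.setgid(gid)
        os.setuid(uid)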

The project distinguished itself by publishing its source code in real time via anonymous CVS, an innovative practice at a time when most projects worked behind closed doors. This transparency allowed the community to continuously verify and improve the system’s security. OpenBSD was a pioneer in the use of strong cryptography, becoming in 1997 the first free system to integrate IPSec by default. This decision led to complications with US restrictions on the export of cryptographic technologies, circumvented through project coordination from Canada.

Beyond its operating system, OpenBSD became the source of many widely used components. OpenSSH, created to replace proprietary secure connection solutions, established itself as the worldwide reference. The PF (Packet Filter) firewall, LibreSSL, the OpenSMTPD mail server, and other tools from the project were adopted by numerous systems. These software components are now found in most connected devices, from Cisco routers to Apple products, including Android systems.

OpenBSD’s philosophy prioritizes code correctness and security over immediate ease of use. The system activates only a minimum of services by default, requiring the administrator to explicitly configure desired functionalities. This approach, sometimes criticized for its rigidity, has proven effective in preventing numerous compromises. The developers strive to maintain compatibility with the original BSD model while strengthening security within the constraints of this framework.

The BSD license, more permissive than the GPL, allows the use of code in closed commercial products. This freedom has favored the adoption of OpenBSD as a foundation for various network security solutions. The system has established itself in critical infrastructures requiring maximum security: firewalls, edge-of-network servers, intrusion detection systems. While its use on workstations is limited by restricted support for consumer applications, OpenBSD continues to influence the entire industry through its security innovations.

PostgreSQL

In 1986, at the University of California, Berkeley, Professor Michael Stonebraker launched the POSTGRES project. DARPA, the Army Research Office, and the National Science Foundation funded this initiative, which followed INGRES, a relational system developed a few years earlier by the same team. The ambition was to push the boundaries of the relational model in a context where criticism was coming from all sides.

The relational model, introduced by Edgar F. Codd, sparked heated debates in the computer science community. Traditionalists insisted that this type of system could not function properly and that the proposed query languages remained too complex for users. The Berkeley team decided to silence the critics by creating a working prototype.

The first demonstration version appeared in 1987. The technical choices were bold: UNIX as the operating system and the C language for programming. These decisions proved judicious, influenced by exchanges with Ken Thompson, a Berkeley graduate. The team did not hesitate to rewrite entire system components to improve quality, applying an iterative development method that proved successful.

The project’s early years saw intense research and development activity. In 1989, version 1 was distributed to a few external users. Version 2 was released in 1990 with a new rules system. Version 3, published in 1991, brought its share of improvements: support for multiple storage managers, query execution engine optimization, and a redesigned rules system.

POSTGRES found its place in varied applications. It appeared in financial data analysis, aircraft engine performance monitoring, an asteroid tracking database, medical information systems, and geographic applications. Universities adopted it as a teaching tool. In 1992, it became the primary data manager for the Sequoia 2000 scientific project.

The user community doubled in 1993. This growth generated a considerable support burden for the university team. In 1994, two students, Andrew Yu and Jolly Chen, added an SQL interpreter to the system. This evolution gave birth to Postgres95, a more compact and efficient version. The code base was reduced by 25% and performance improved by 30 to 50% on Wisconsin benchmark tests.

In 1996, the project took the name PostgreSQL. This new name emphasized continuity with POSTGRES and the integration of SQL functionalities. This new incarnation marked the beginning of a collaborative and open source approach. The system distinguished itself through its extensibility, which allowed adding new data types and operators without touching the core.
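
This extensibility can be shown in a few SQL statements, driven here from Python. The sketch assumes a reachable server, a database named demo, and the psycopg2 driver; the complex number type is just a textbook example, not a built-in.

    import psycopg2  # assumption: psycopg2 installed, server running

    conn = psycopg2.connect("dbname=demo")
    cur = conn.cursor()
    # A new data type and a new operator, without touching the server core.
    cur.execute("CREATE TYPE complex AS (re float8, im float8)")
    cur.execute("""
        CREATE FUNCTION complex_add(complex, complex) RETURNS complex AS
        'SELECT ROW(($1).re + ($2).re, ($1).im + ($2).im)::complex'
        LANGUAGE SQL IMMUTABLE
    """)
    cur.execute("""
        CREATE OPERATOR + (leftarg = complex, rightarg = complex,
                           procedure = complex_add)
    """)
    conn.commit()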

PostgreSQL’s development rests on a minimalist philosophy. The system favors using external tools for functions such as connection management, replication, or backup. This approach promotes the creation of a rich ecosystem of complementary projects and adaptation to technological evolutions like SSD storage or cloud deployment.

The project’s organizational structure reflects its collaborative nature. The core team operates with a minimal charter, and no single company employs a majority of its members. This distributed governance ensures that PostgreSQL remains under the control of its community of users and developers.

The emergence of the World Wide Web and mobile applications brought new constraints for traditional relational databases. PostgreSQL adapted by naturally integrating non-relational data types: complex documents, geographic data, GPS information, social network content. This ability to evolve without restructuring constitutes a first-rate technical advantage.

The 2000s saw the growing adoption of PostgreSQL by companies and organizations of all sizes. The system became a serious alternative to proprietary databases from Oracle, IBM, and Microsoft. Its open development model allows for continuous innovation and responsiveness to user needs that traditional vendors struggle to match.

PostgreSQL’s longevity and success result from sound initial architectural choices and stable governance. The system maintains a balance between technical innovation and stability, while preserving its independence from commercial interests. This unique combination explains its current position as the most advanced open source database management system.

Contributions to PostgreSQL come from varied backgrounds: academic researchers, user companies, service providers, independent developers. This diversity nurtures a dynamic of continuous improvement in code, documentation, and associated tools. The project maintains high quality standards while remaining accessible to new contributors.

PostgreSQL continues to evolve to meet modern requirements: large-scale performance, complex data analysis, artificial intelligence. Its extensible design allows the integration of these new functionalities without compromising its consistency or reliability. This adaptability, combined with its technical maturity, makes it a structural technology of contemporary computing capable of competing with historic proprietary solutions.

USB

In 1990, connecting a peripheral to a computer was an ordeal. The back of machines looked like a tangled mess of disparate and dusty connectors: serial and parallel ports for printers, PS/2 ports for keyboards and mice, 5-pin DIN connectors for older keyboards, not to mention SCSI ports and game ports for enthusiasts. Each accessory required its own type of connection, with its specific hardware characteristics and dedicated drivers. A real headache for users.

It was Intel that decided to bring order to this chaos by launching a project in 1992. The idea? Create a universal connector that would gradually replace all the others. Two years later, seven companies joined forces to give birth to the Universal Serial Bus: Intel, Compaq, DEC, IBM, Microsoft, NEC, and Nortel. Apple briefly joined the group before withdrawing to develop FireWire, its own technology. The ambition was clear: unify the ports, connectors, and electronic signals that linked computers and peripherals.

The first version was released in 1996, offering 1.5 Mbps in low-speed mode and 12 Mbps in full-speed mode. Though modest, the innovation lay mainly in the principle: a single type of cable to transfer both data and power. The designers wanted a compact and inexpensive connector, capable of acting as a translator between the different ways peripherals communicated. Computers could thus manage several accessories simultaneously without them conflicting with each other.

Intel filed a patent for USB in 1997. But here’s the thing: patents can kill a standard in its infancy. When a single company holds all the rights, it can block competitors or demand prohibitive royalties. Intel understood this well. The company created a patent pool, where rights are shared among several holders who split the royalties. This approach allowed manufacturers to freely produce USB-compatible devices without fear of lawsuits.

The promoters founded the USB Implementers Forum (USB-IF), which brought together more than 700 companies, to get the standard adopted by as many as possible, far beyond the circle of computer manufacturers. This diversity played a decisive role in USB’s success.

Adoption was not immediate. IT companies began seriously integrating USB toward the end of the 1990s. The technology progressed rapidly: in 2000, USB 2.0 arrived to counter FireWire, which boasted speeds of 400 Mbps and allowed simultaneous bidirectional transfers. On paper, USB 2.0 reached 480 Mbps, but tests sometimes showed performance inferior to its rival. However, its broad compatibility and reasonable licensing fees limited FireWire’s expansion. Apple finally integrated USB into its machines in 2003, before completely abandoning FireWire 400 in 2005.

In 2011, seven billion USB-equipped devices were manufactured each year worldwide. The standard continued to progress, reaching speeds of 40,000 Mbps in 2019. The USB port became the reference for just about everything: mice, keyboards, printers, external hard drives. The Micro-USB and Mini-USB variants further expanded its territory. In 2019, the global USB peripherals market was worth 31 billion dollars.

Standardization transformed the user experience. Any USB accessory works on any computer, among thousands of available options. Manufacturers can design their products knowing exactly how they will connect to machines, which reduces risks when developing new devices. This standardization has unleashed innovation: Wi-Fi adapters, optical drives, Ethernet ports, mobile network dongles—all have been able to develop thanks to this common foundation.

The environmental aspect deserves emphasis. By making accessories compatible with any new hardware, USB limits the quantity of electronic devices discarded. Less waste, fewer raw materials extracted, lower CO₂ emissions related to the manufacture of proprietary cables and accessories.

In the Internet of Things universe, energy consumption is crucial. The USB protocol stipulates that in active mode, a peripheral has limited current and must respond immediately to computer requests. If no communication occurs for a few milliseconds, the device enters suspension and its consumption drops. Recent innovations, such as low-power mode and crystal-free oscillators, have improved energy efficiency for battery-powered devices.

The patent pool has enabled manufacturers to converge on a common solution, even among competitors. The presence of alternative technologies, such as wireless connections or inductive charging, continues to push USB to evolve to remain relevant in an increasingly connected world.

VPN

The virtual circuits of the 1970s laid the groundwork for what would later become virtual private networks. These logical connections traversed networks to route long messages, creating a path between the source port and its destination. The concept worked well on highly meshed networks, but no one imagined then that it would become a cornerstone of communications security.

The expansion of the Internet in the 1990s changed everything. Companies found themselves facing a dilemma: how to connect their remote sites and mobile employees while preserving the confidentiality of their communications? Dedicated lines cost a fortune, and the Internet, while attractive for its ubiquity, exposed data to prying eyes. The solution came from encryption, an old technique brought up to date to create secure tunnels at the heart of the public network.

In 1996, Gurdeep Singh Pall and his team at Microsoft designed PPTP, the Point-to-Point Tunneling Protocol. PC Magazine awarded it the innovation of the year prize, and for good reason: it finally standardized what the industry had been seeking for years. PPTP carved tunnels through the Internet, secret passages where data could flow safely away from curious eyes. It was the first true industrial VPN solution.

Other protocols followed. L2TP and IPSec enriched the available toolkit. IPSec deserves closer attention: by working directly at the network layer level, it protected end-to-end communications with a robustness that made it an indispensable standard. Companies now had the tools they needed to build proper virtual private networks.

The 2000s saw VPNs grow in sophistication. Firewalls, strong authentication, fine-grained access management: solutions became more comprehensive. Remote work was gaining momentum, and with it the need to connect an ever-increasing number of remote users. Companies began abandoning their expensive dedicated connections in favor of VPN connections over the Internet. The savings were substantial, security remained solid.

SSL and TLS brought a welcome simplification. Gone were the complex client software deployments: a simple web browser was now enough to access corporate applications remotely. OpenVPN arrived in this context as a free and flexible alternative to proprietary solutions. The SSL/TLS technology it used appealed through its ease of use and compatibility with existing infrastructure.
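
The underlying building block is shown below: an ordinary TCP socket wrapped in TLS, using Python’s standard ssl module against a public host. An SSL VPN does essentially this, then routes traffic through the protected channel; the host name here is only a stand-in.

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as raw:
        # The handshake authenticates the server and negotiates session keys.
        with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
            print(tls.version())   # e.g. 'TLSv1.3'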

Cloud computing and smartphones reshuffled the deck in the 2010s. VPN infrastructures had to adapt to unprecedented volumes of simultaneous connections. Mobile users multiplied, juggling between their different devices. Solutions integrated quality of service, performance optimization, everything needed to maintain user experience despite growing complexity.

Today, VPNs combine several technologies: IPSec for remote sites, SSL/TLS for web access, firewalls and strong authentication for security. They interconnect scattered offices, protect access to internal resources, and secure communications for mobile workers. This technological convergence meets current needs, but it wasn’t obvious twenty years ago.

The history of VPNs ultimately reflects that of network usage patterns. Starting from rudimentary virtual circuits, they transformed into true digital fortresses. Mobility, cloud computing, cybersecurity: all challenges that VPN technology has successfully addressed. It continues to evolve, integrating the latest cryptographic advances to remain relevant.

Individuals have also embraced VPNs, going far beyond their initial professional scope. Protecting privacy on the Internet has become a major concern, and consumer VPN services have flourished. Some use them to bypass geographic restrictions for political reasons, others to mask their browsing activity. This democratization has changed the perception of the technology, once reserved for network experts.

The Internet of Things and 5G will generate new secure connectivity needs. Threats evolve and become more sophisticated, and VPNs must keep pace. Post-quantum cryptography looms on the horizon, promising new algorithms to resist tomorrow’s computers. The VPN has not finished its evolution.

Hotmail

In 1996, checking your email required a certain amount of logistical planning. You had to install software on your computer, configure server settings, and above all, remain tied to that machine to access your messages. Jack Smith and Sabeer Bhatia, two former Apple employees who had passed through Stanford, found this situation absurd. They wanted to read their emails from anywhere, without running up against their company’s strict firewalls. From this mundane frustration would emerge Hotmail, the first email service that could be used directly in a web browser.

The two partners pooled $4,000 in 1995 to build a prototype. The demonstration won over Draper Fisher Jurvetson, which injected $300,000 for 15% of the equity. But the money ran out quickly. A month before launch, the coffers were empty. Doug Carlisle threw them a lifeline by offering $100,000 in additional investment. Smith and Bhatia turned down more equity, fearing they would lose control of their creation, and chose to borrow instead.

On July 4, 1996, American Independence Day, Hotmail opened its doors to the public. The name played on the letters HTML, those four characters that structure web pages, by inserting them in capitals within “HoTMaiL”. The service offered 2 megabytes of free storage. That was small, but no one complained at the time. Bhatia’s real stroke of genius lay in a seemingly trivial detail: every message sent from Hotmail carried the signature “PS: I Love You” followed by a link to the service. This trick, which would later be called viral marketing, sent the numbers through the roof. One hundred thousand signups in less than a month, one million users in less than six months.

Microsoft observed this surge in power. By late 1997, Hotmail claimed 10 million users and captured a quarter of the webmail market. Bill Gates didn’t let the opportunity slip. On December 30, 1997, the deal closed for $400 million. Bhatia hesitated at first, not thrilled by the monopolistic image that stuck to Microsoft, but ultimately acknowledged Gates’s instinct.

Integration into the MSN galaxy accelerated growth. By early 1999, MSN Hotmail crossed 30 million users and registered 150,000 new signups every day. But this rapid expansion came with embarrassing setbacks. In 1999, anyone could log into any account by simply typing the password “eh”. Microsoft spoke of an “unknown security issue” and dismissed the possibility of a deliberate backdoor. Two years later, another flaw: a simple URL manipulation was enough to read someone else’s messages.

Google launched its Gmail service in 2004 with a gigabyte of storage, five hundred times more than Hotmail’s 2 megabytes. Microsoft responded by raising capacity to 250 megabytes and allowing 10-megabyte attachments. A complete overhaul of the service began, but the final version didn’t emerge from beta until 2007 under the name Windows Live Hotmail. These delays left the field open to Gmail, while Hotmail began to look like a dinosaur.

Microsoft then multiplied adjustments: Bing integration, compatibility with Firefox and Chrome, quick filters, automatic inbox cleanup. In 2010, the Exchange ActiveSync protocol arrived, followed the next year by aliases and instant actions. Despite these efforts, the service carried a tarnished reputation. Savvy users shunned it, spammers made it their favorite playground.

July 2012 marked a fresh start with Outlook.com. Microsoft abandoned the Hotmail name and offered a streamlined interface that contrasted sharply with the old version. Success was immediate: more than 10 million voluntary signups in two weeks. The migration of Hotmail accounts was completed in May 2013. Microsoft then announced 400 million active accounts. Users could keep their @hotmail.com address or switch to @outlook.com.

In 2019, a hacker used a support agent’s compromised credentials to access certain accounts. Dark mode also appeared that year, to save battery and ease eyestrain.

Millions of @hotmail.com addresses remain in circulation, alongside @live.com, @msn.com, @passport.com, and @outlook.com. Hotmail’s story primarily tells that of a pioneering service, swallowed by a giant, which had to constantly reinvent itself to avoid disappearing into digital oblivion.

Internet Archive

Brewster Kahle created Internet Archive in 1996. He wanted to build a universal digital library that would preserve the history of the Internet. Not an archive locked away in a digital vault, but an open space where anyone could retrieve vanished web pages. The Wayback Machine, launched in October 2001, provides access 24 years later to over one trillion web pages and more than 99 petabytes of data. Staggering.

Kahle established simple rules from the start. Use only standard hardware. Ban commercial software. Keep the architecture simple enough that you don’t need a PhD to maintain it. These pragmatic choices probably explain why the service has been running for over twenty years with fewer than five people managing it. An almost incredible longevity in the web universe.

The infrastructure relies on distributed storage nodes and front-end web servers. In its earlier configuration, data was spread across more than 2,500 nodes, representing over 6,000 hard drives, for a total capacity approaching one petabyte. Each day, the system processed tens of millions of requests and transferred more than 40 terabytes of data.

Unlike conventional websites, Internet Archive doesn’t use caching. This absence is surprising at first glance. But the designers quickly realized that archive queries are too dispersed in time and space. A traditional cache would serve no purpose. They preferred to directly optimize access to stored data.

The ARC format structures the storage of archived pages. Each file groups approximately 100 megabytes of unrelated pages, along with their headers. The organization makes storage efficient but requires sequential reading to retrieve a specific page. Each file exists on at least two different nodes, to avoid losing everything in case of failure.

To locate data, the system adopts an original method. No centralized index. Queries are sent via UDP to all storage nodes. Each node keeps the list of its files in memory and responds if it has what’s being sought. This distributed approach withstands failures well and simplifies adding or removing nodes.
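
In outline, the lookup looks like the sketch below. The node list, port, and wire format are invented for the illustration; only the fan-out-and-wait shape comes from the description above.

    import socket

    NODES = [("node%02d.example.org" % i, 8404) for i in range(40)]

    def locate(arc_file, timeout=0.5):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for node in NODES:                    # ask every storage node at once
            sock.sendto(arc_file.encode(), node)
        holders = []
        try:
            while True:                       # only nodes holding the file reply
                _, addr = sock.recvfrom(512)
                holders.append(addr)
        except socket.timeout:
            pass
        return holders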

Statistics show that English dominates the other languages in accessed content. European languages follow, far behind. More than 82% of human sessions arrive through external links. Wikipedia leads the sites that send traffic to Internet Archive. Most pages accessed no longer exist on the active web. Proof that the archive fulfills its role as memory.

Two types of users share access: humans and robots. Robotic sessions are ten times more numerous but generate as much data as human sessions. Robots never arrive with a referrer in their requests, whereas human users almost always arrive via links.

The web’s growth imposes constant challenges. The index of archived URLs exceeds 2 terabytes. Initially, updating it was problematic. Disks overheated under intense activity. Moving to a better-cooled data center and an incremental update method solved the problem. Moreover, SSD drives open new possibilities. They outperform mechanical disks for random access, frequent in archive usage. These technologies could make cache systems efficient, previously impossible with traditional media.

Internet Archive collaborates with other institutions like Bibliotheca Alexandrina in Egypt or the European Archives. These collaborations serve to replicate data geographically. Availability improves, as does preservation. The service also offers specialized collections: the Million Book Project, the Prelinger Archives. The mission expands beyond web archiving.

In 2024, Google Search facilitates access to the past more than ever. Archived versions of web pages are directly accessible with a simple link to the Internet Archive’s Wayback Machine.

Internet Archive demonstrates that a minimalist approach can result in a sustainable service. Its architecture favors simplicity and robustness over sophistication. A large-scale archiving system is capable of functioning with a small team. This success now inspires other digital archiving projects worldwide.

Wi-Fi

When Norman Abramson launched the ALOHAnet project at the University of Hawaii in the 1970s, he probably had no idea he would transform our daily relationship with information. This experiment, which involved linking computers via radio waves across the Hawaiian islands, demonstrated that data could be transmitted without cables.

Things accelerated in 1985 when the Federal Communications Commission, under the impetus of Michael Marcus, authorized free use of certain frequency bands: 902-928 MHz, 2.4-2.4835 GHz, and 5.725-5.850 GHz. These ISM (Industrial, Scientific and Medical) bands, initially reserved for industrial and medical uses, became the playground for consumer wireless technologies. Other countries followed suit.

The first IEEE 802.11 standard appeared in 1997, with a modest throughput of 2 Mbit/s on the 2.4 GHz band. But it was Apple that truly popularized the technology with the general public in 1999, by integrating Wi-Fi into its AirPort base station and iBook laptop. These devices used the IEEE 802.11b standard, which reached 11 Mbit/s. The name “Wi-Fi,” created by a marketing agency, actually has no particular technical meaning, despite its phonetic similarity to “Hi-Fi.”

The technical architecture of Wi-Fi draws directly from work on the ALOHA protocol. The CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) principle allows a radio channel to be shared among multiple devices without constant interference. Research by Kleinrock, Tobagi, and Lam at UCLA helped refine these medium access mechanisms, adapted to the specific constraints of radio communications.
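
A toy version of the mechanism fits in a few lines of Python. The sketch models the channel as a coin flip and ignores real-world details such as inter-frame spacings and acknowledgments; only the sense-then-back-off logic of CSMA/CA is kept.

    import random

    def csma_ca(busy_prob=0.3, max_attempts=7, cw=15, cw_max=1023):
        idle = lambda: random.random() > busy_prob   # crude carrier sense
        for attempt in range(1, max_attempts + 1):
            backoff = random.randint(0, cw)          # random slots to wait
            while backoff > 0:
                if idle():                           # counter runs only on idle slots
                    backoff -= 1
            if idle():                               # channel clear: transmit
                return attempt                       # attempts that were needed
            cw = min(2 * cw + 1, cw_max)             # widen the contention window
        return None                                  # gave up

    print(csma_ca())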

Standards then followed in rapid succession. IEEE 802.11a introduced OFDM modulation and the 5 GHz band in 1999. In 2003, IEEE 802.11g brought 54 Mbit/s to the heavily congested 2.4 GHz band. The real breakthrough came in 2009 with IEEE 802.11n and MIMO technology, which multiplies antennas to reach 600 Mbit/s. IEEE 802.11ac crossed the symbolic gigabit-per-second threshold in 2013, with a theoretical throughput of 3.5 Gbit/s.

This performance race responds to growing connectivity needs. Early Wi-Fi networks struggled to load a simple web page, while today we stream 4K movies on multiple devices simultaneously. IEEE 802.11ax, branded as Wi-Fi 6 and published in 2021, pushes throughput to 9.6 Gbit/s while better managing dense environments like stadiums or airports. The standard incorporates sophisticated spectrum-sharing mechanisms and reduces device power consumption.

Wi-Fi has changed far more than just how we connect to the internet. The technology has redesigned workspaces, enabling professional nomadism. Coffee shops have become makeshift offices, airports have turned into meeting rooms. In our homes, Wi-Fi has enabled the explosion of connected devices, from thermostats to light bulbs to speakers. Schools and universities have rethought their teaching methods around this permanent connectivity.

Radio communications security was long an Achilles’ heel, with early protections like WEP quickly becoming obsolete. WPA3 encryption now offers a satisfactory level of protection. Interference between devices, coexistence with other wireless technologies like Bluetooth, limited signal range: all problems that have found technical solutions over successive versions.

The future looks more ambitious. Wi-Fi 7 (IEEE 802.11be), finalized in 2024, targets 40 Gbit/s in theoretical throughput. These performance levels open prospects for virtual or augmented reality, which require minimal latency and considerable throughput. The integration of artificial intelligence in network management should improve dynamic spectrum allocation and user experience.

From the ALOHAnet experiment to Wi-Fi 6 networks, the journey illustrates computing’s ability to transform a laboratory idea into everyday technology. Wi-Fi has become invisible precisely because it is omnipresent, in our pockets, our homes, our cars. This unobtrusiveness paradoxically testifies to its success: the best technologies are those we no longer notice.

Google

The Internet entered a new dimension in the late 1990s. Web pages multiplied at an unimaginable pace, and search engines of the time struggled to keep up. AltaVista, Lycos, Excite: all faced the same problem. How could they sort through these millions of pages and separate the useful from the trivial?

Sergey Brin and Larry Page met in 1995 at Stanford. The two brilliant doctoral students began working on BackRub, a project that examined the links between web pages. The idea came to them quite naturally: an important page must be one that many other pages point to. The more references flow in, the more legitimacy the page gains. It’s simple, almost obvious in hindsight.

A year later, BackRub transformed. The project took on a new name, inspired by the mathematical term "googol" which denotes 10 to the power of 100. This reference reflected their ambition: to index a colossal mass of documents. The PageRank algorithm was born from this thinking, assigning a score to each page based on the quantity and quality of incoming links, creating a sort of spontaneous hierarchy of web content.
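
The heart of the idea fits in a few lines. The sketch below runs a PageRank-style power iteration over an invented four-page web; the 0.85 damping factor is the value cited in the original paper, while the pages and their links are made up for the example.

```python
# Invented four-page web: each page lists the pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
damping = 0.85                              # value cited in the original paper
rank = {p: 1 / len(pages) for p in pages}   # start from a uniform score

for _ in range(50):                         # iterate until scores stabilize
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new[target] += share            # a page passes its score to those it cites
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "C" wins: three pages point to it, including the well-ranked "A".
```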

In September 1998, the two students founded Google Inc. in a garage in Menlo Park. Andy Bechtolsheim, cofounder of Sun Microsystems, advanced them $100,000. A few months earlier, they had tried to sell their technology to Excite for one million dollars. George Bell, the CEO, refused. This decision ranks among the most monumental mistakes in Internet history.

The technology made the difference. In 1999, Google was already processing 3.6 million queries daily. The interface was clean, the speed impressive, and above all the results were accurate. Internet users quickly realized this. The company developed efficient indexing methods and built a distributed infrastructure capable of absorbing unprecedented data volumes.

Google didn’t just read page content. The algorithm examined each page’s position in the web’s network and analyzed the text of the links pointing to it, which made it possible to index pages the crawler had never visited. This approach worked remarkably well for content with little indexable text, such as images or PDFs.

The business model gradually took shape. In October 2000, the AdWords advertising system was launched, with advertisers buying keywords to appear in results. The innovation lay in the auction system and the clear separation between sponsored links and organic results. User trust remained intact.

Growth accelerated at the turn of the 2000s. The number of indexed pages went from 1 billion in 2000 to over 4 billion in 2004. Google built its own data centers, equipped with custom-designed servers. This technical mastery maintained performance despite the traffic explosion.

The corporate culture fostered innovation. The motto "Don’t be evil" set a course. Engineers had 20% of their time for personal projects. Gmail and Google News would emerge from this freedom. The company attracted the best talent by offering them a stimulating environment and generous conditions.

The 2004 IPO used an unusual Dutch auction system. The success exceeded expectations: the valuation reached $23 billion on the first day. Google’s dominance in the digital economy was no longer in doubt.

Google transformed our relationship with knowledge. The search engine became a daily reflex for hundreds of millions of users. Our ways of searching and accessing information changed permanently. The influence extended beyond search: the techniques developed by the company, particularly in processing massive data sets, inspired countless innovations.

PageRank found applications in network analysis, far beyond web search. Advances in distributed data processing, infrastructure management, and machine learning established benchmarks that still structure 21st-century computing.


VMware

IBM had already invented virtualization in the 1960s. The mainframes of that era allowed users to share the resources of a single physical machine across multiple virtual systems. But this technology remained confined to mainframes for decades, until a small team at Stanford University decided to transpose it to x86 processors.

In the mid-1990s, Professor Mendel Rosenblum and his students were attempting to build a supercomputer. To run different operating systems on their prototype, they experimented with virtualization. The results exceeded their expectations. Rosenblum then realized he had something bigger than a simple university project. He founded VMware in 1998 with his wife Diane Greene, Edouard Bugnion, Scott Devine, and Edward Wang.

The initial idea was simple: data centers were wasting their resources. Servers ran at 10 or 15% of their capacity because they were sized to handle peak activity. The rest of the time, processors and memory remained unused. Virtualization would allow multiple workloads to run on a single physical machine.

VMware Workstation 1.0 was released in 1999. It required a 266 MHz Pentium II and 64 MB of RAM. This first version supported MS-DOS 6, Windows 95/98/NT, Red Hat Linux 5.0, and a few other systems. The technical challenge was considerable: the x86 architecture had never been designed for virtualization. The engineers had to be clever.

Their solution combined two approaches. Direct execution for non-privileged application code, dynamic binary translation for system code. This hybrid method delivered decent performance while isolating virtual machines from each other. It was a feat: no one had managed to properly virtualize the x86 architecture until then.
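
A deliberately naive sketch, with invented function names, can convey the dispatch logic: unprivileged code runs untouched, privileged code passes through a translation step whose result is cached. The real monitor rewrites x86 instruction streams, not Python objects.

```python
translation_cache = {}

def execute_directly(block):
    # stand-in for letting the CPU run guest code natively
    return f"{block}: ran unmodified at native speed"

def binary_translate(block):
    # stand-in for rewriting sensitive instructions into monitor-safe sequences
    return lambda: f"{block}: ran through translated code"

def run_guest_block(block, privileged):
    if not privileged:                      # application code: direct execution
        return execute_directly(block)
    if block not in translation_cache:      # system code: translate once, reuse
        translation_cache[block] = binary_translate(block)
    return translation_cache[block]()

print(run_guest_block("user-space loop", privileged=False))
print(run_guest_block("page-table update", privileged=True))
```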

ESX Server 1.5 arrived in 2002. This hypervisor could manage 64 virtual machines simultaneously with 3.6 GB of RAM each. Unlike Workstation, which installed on an existing operating system, ESX ran directly on bare metal. Performance was improved. That year, VMware secured its first patent (US 6397242) for its x86 virtualization techniques.

The following year marked the arrival of vMotion. This technology was a game changer: you could move a virtual machine from one physical server to another without service interruption. No need to stop applications for maintenance. VMware also launched vCenter Server, a centralized console to manage the entire virtual environment.

EMC Corporation acquired VMware in 2004 for 635 million dollars. The company opened offices in the United Kingdom, China, India, and Ireland. Three years later, EMC decided to take VMware public while retaining 90% of the capital. The stock was offered at 29 dollars. It closed its first day at 51 dollars, valuing the company at 19.1 billion dollars.

VMware vSphere was released in 2009. This platform integrated numerous advanced features for virtual resource management. Things accelerated in 2013 with NSX for network virtualization and vSAN for storage. Virtualization was no longer limited to servers: network infrastructure and storage also moved through the software layer.

Meanwhile, processors had evolved. AMD and Intel added hardware extensions dedicated to virtualization (AMD-V and VT-x). VMware adapted its products to take advantage of them. These hardware features facilitated 64-bit system virtualization and boosted performance.

Dell acquired EMC in 2015 and thus inherited VMware. The company maintained its operational autonomy and continued its development. It invested in environmental projects like Carbon Neutrality, aimed at reducing CO2 emissions from data centers. In 2023, Broadcom finalized its acquisition of VMware.

One figure sums up VMware’s impact beyond its own hypervisor: as early as 2009, the number of virtual servers exceeded that of physical servers. x86 virtualization had transformed the industry. It reduced infrastructure costs, optimized resource utilization, and brought flexibility to information systems. It also laid the foundation for cloud computing techniques: automation, resource elasticity, everything relied on these virtualization mechanisms.

By making virtualization accessible on the x86 architecture, VMware modernized infrastructure management. Its success pushed other players to develop their own solutions, like AWS Nitro, creating competition that benefits everyone.


Microsoft Windows 98

After the triumph of Windows 95, Microsoft dominated the personal computing market in 1998. The company had parted ways with IBM, whose consumer computing business was in decline and whose OS/2 was losing the market to Windows and NT. The operating system that Bill Gates had nearly seen disappear on several occasions now ruled over the desktop computer universe.

Windows 98, codenamed Memphis, was released on June 25, 1998. The OSR2 version of Windows 95 had already integrated Internet Explorer 3 and then 4, the FAT32 file system, and USB support. This new iteration was meant to consolidate these advances and bring significant improvements. Jim Allchin, senior vice president of the Personal and Business Systems Group, set the course: improve network functionality, simplify the Control Panel, and automate the system further. The team prioritized qualitative optimization over adding eye-catching features. The trade press did not hesitate to criticize this approach, lamenting the absence of spectacular innovations.

Development began in January 1997 with an initial developer version. The first alpha version (build 1387) was released on February 7, 1997. On October 3, beta 2.1 (build 1602) finally allowed upgrades from Windows 3.x. Release Candidate 0 arrived on February 16, 1998, followed by the final beta version on May 9. Technology enthusiasts thus had time to test the system before its official release.

On the technical front, Windows 98 introduced the Windows Driver Model (WDM), which gradually replaced the VxD model inherited from Windows/386 2.x. This evolution enabled the arrival of the ACPI (Advanced Configuration and Power Interface) interface. Without WDM, system hibernation would have been impossible, Plug and Play device performance would have plateaued, and audio processing would have suffered significant limitations. Hardware manufacturers adopted this new model gradually, but this modernization greatly facilitated later transitions between Windows versions.

Hardware support improved considerably. USB management matured with the addition of hubs, imaging, and audio devices. Compatibility with IDE and SCSI controllers expanded, and AGP support appeared. DirectX 5.2 enriched the system’s multimedia capabilities, particularly for gaming. Multi-monitor management and DVD playback rounded out these new features.

Network functionality underwent a major overhaul. The integration of Winsock 2, SMB signing, DHCP optimization, and NDIS 5.0 modernized connectivity. Internet Connection Sharing simplified home networking. These technical improvements did not unfold without hiccups. During a public presentation at COMDEX in Chicago on April 20, 1998, Chris Capossela and Bill Gates suffered an embarrassing blue screen while attempting to demonstrate the system’s Plug and Play capabilities.

Hardware requirements included, at minimum, a 486DX2 processor at 66 MHz, 16 MB of RAM, and 500 MB of available disk space. Official recommendations called for a Pentium with 24 MB of RAM, with some university IT departments suggesting 32 MB. Users discovered that by using the /nm parameter during installation, it was possible to bypass these prerequisites and install the system on an 80386 with only 4 MB of RAM. A minimal installation occupied only 120 MB of disk space. Times have certainly changed.

The first days of commercialization saw 530,000 copies sold in stores. This figure, lower than the record sales of Windows 95, can be explained by more modest promotion and less spectacular changes. Some retailers and customers nevertheless attempted to recreate the atmosphere of Windows 95 launches, with waiting lines and special midnight events.

Windows 98 Second Edition, released on May 5, 1999, primarily brought bug fixes but included some improvements. Internet Explorer 5 replaced version 4, FireWire support appeared, and SBP-2 technology improved USB storage management. DirectX moved to version 6.1, while WDM support for audio and modems improved significantly. Support for the WinG API was dropped in favor of DirectX, and Windows Media Player replaced RealPlayer 4.

Microsoft’s development organization anticipated software methods that have since become industry practice. One team focused on the current product while another prepared the next version. This separation maintained a clear direction while preserving compatibility with earlier systems. The gradual introduction of WDM illustrates this approach: old drivers remained usable while manufacturers adopted the new model.

Windows 98 marked a milestone in the evolution of operating systems, despite mixed reception from the technical press. Its legacy lies in its ability to have consolidated the innovations of its era while preparing for future developments. The system was gradually supplanted by Windows 2000 and then Windows XP, which offered increased stability and security thanks to their NT architecture. The end of official support by Microsoft in the mid-2000s marked the conclusion of this period.


XML

In 1996, the W3C entrusted Jon Bosak, an engineer at Sun Microsystems, with leading an ambitious working group. The mission was to design a markup language that would combine SGML’s structural power with a simplicity of use that SGML had never achieved. HTML dominated the web, but its limitations were becoming glaring: designed for visual formatting, it proved inadequate for logically organizing information.

Two years later, XML was officially born as a W3C recommendation. The language introduced strict tag-based syntax, imposed the notion of well-formed documents, and gave developers the freedom to create their own tag vocabularies through DTDs. Each sector could now shape its own data structures with flexibility without waiting for an organization to standardize them.

The major innovation lay in the radical separation between content and presentation. An XML document could be transformed into a web page, a PDF, or a text file, depending on current needs. XSLT, standardized the following year, provided the transformation tools necessary for this modularity. Data acquired unprecedented autonomy, freed from their destination format.

Web services emerged in 2000, e-commerce took hold, and businesses discovered the value of exchanging information between disparate systems without resorting to haphazard conversions. XML naturally became established. Its ecosystem grew richer: XPath let us navigate through documents, XQuery queried data, XML Schema gradually supplanted DTDs by providing more refined validation mechanisms.
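
A few lines are enough to see this self-describing quality and the XPath-style navigation at work. The tag vocabulary below is invented for the example; Python's standard ElementTree module, which understands a subset of XPath, does the parsing.

```python
import xml.etree.ElementTree as ET

document = """<catalog>
  <book lang="fr">
    <title>Histoire des reseaux</title>
    <year>1998</year>
  </book>
  <book lang="en">
    <title>Annual Report</title>
    <year>2003</year>
  </book>
</catalog>"""

root = ET.fromstring(document)              # parsing fails unless well-formed
for book in root.findall("./book"):         # XPath-style path expression
    print(book.get("lang"), "->", book.findtext("title"))
```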

This story has its roots much earlier, in 1969, in IBM’s laboratories. Charles Goldfarb, Edward Mosher, and Raymond Lorie developed GML there, SGML’s direct ancestor. Their insight: a document’s structure must be described independently of its presentation. This principle, groundbreaking at the time, still underpins our approach to managing digital information.

Databases adapted. Traditional relational systems integrated native XML document storage, while specialized solutions appeared to meet specific needs. Entire sectors built their standards on this format, which became the pivot of service-oriented architectures (SOA): digital publishing reorganized its production chains, finance structured its data exchanges.

XML’s relative simplicity compared to SGML partly explains its success among developers. But it’s primarily its self-descriptive nature that appeals: an XML document carries within itself the keys to its understanding. This characteristic ensures a certain longevity: files remain readable and usable years after their creation, with external documentation mostly remaining optional.

JSON’s arrival in the mid-2000s somewhat disrupted the landscape. Lighter and more direct, it won over web development. Yet XML didn’t yield ground. It retains its relevance where structural rigor matters, where formal validation is essential, where complex transformations are necessary. The two formats coexist, each excelling in its domain.

XML has transformed our approach to documentation, encouraged structured thinking about information, and changed how organizations design their systems. The hierarchical markup principles it popularized are found in much more recent formats. The semantic web and modern microservice architectures bear its imprint.

The W3C orchestrated this entire standardization process with a method that serves as a reference. XML and its satellite technologies today form a coherent set of specifications, the result of remarkable collective work. These standards continue to evolve, adapt to emerging needs, and testify to the vitality of a format approaching its thirtieth year without showing its age.


GNOME

In 1997, Miguel de Icaza embarked on an adventure that would transform the landscape of free systems: creating a complete graphical desktop environment for GNU/Linux. The GNOME project (GNU Network Object Model Environment) addressed a very real frustration. UNIX systems ran smoothly on servers, but their user interface remained rudimentary. How could they hope to attract the general public with such austere tools?

The situation became more complex with the emergence of KDE, which relied on Qt, a library whose license conflicted with free software principles. The community faced a dilemma: should they recreate Qt as free software or pursue something else? The GNOME team chose the latter. Emulating an existing API seemed like a dead end, as demonstrated by the difficulties of projects such as GNUstep, Wine, or LessTif.

The choice fell on Gtk+, a graphical library created for GIMP’s needs, the image editing software developed by Peter Mattis and Spencer Kimball. Written in C but designed with objects in mind, Gtk+ provided a solid foundation. Components like Gdk Imlib for image management and VFS for file manipulation were built around it.

The project’s beginnings reflected the spirit of free software: open discussions on mailing lists, source code shared via CVS, direct repository access for proven contributors. Version 0.0 was released in August 1997. March 1999 saw the arrival of version 1.0, immediately adopted by Red Hat Linux as its default environment.

GNOME was built in a modular fashion. Libraries like libgnome and libgnomeui coexisted with basic applications and development tools. This organization provided flexibility because everyone worked on their component without getting in each other’s way. The interface followed strict style rules to ensure overall consistency.

From the outset, internationalization was among the priorities. The GNU gettext system translated interfaces into numerous languages. GNOME became accessible to users worldwide, far beyond the English-speaking sphere.

The year 1999 saw the birth of two companies around the project: Eazel and Ximian, the latter founded by Miguel de Icaza. Eazel developed Nautilus, the file manager, but closed its doors in 2001. Ximian fared better by offering an enhanced version of GNOME and tools like Evolution, before Novell acquired it in 2003.

A milestone was reached in 2000 with the creation of the GNOME Foundation. This structure coordinated the efforts of various stakeholders and preserved the initial objectives. The foundation combined a democratically elected board of directors and an advisory board that brought together companies and non-profit organizations.

That year, Sun Microsystems made a notable choice: GNOME replaced CDE on Solaris. The project gained recognition in the professional world. Sun went further by creating a laboratory dedicated to accessibility, opening the environment to people with disabilities.

The technical architecture relied on CORBA (Common Object Request Broker Architecture) to enable component communication. This technology allowed interaction between applications written in different languages. The developers created ORBit, a lightweight CORBA implementation tailored to GNOME’s needs.

The project encouraged the use of scripting languages like Scheme and Perl. This openness simplified application customization and automation. The GNOME card game used Scheme to define the rules for different solitaire variants.

Red Hat established itself as a major contributor, notably through its advanced research laboratory. The company implemented daily compilation that improved code quality by quickly detecting problems. Novell, Collabora, and Intel also contributed to the effort.

GNOME established a rigorous development cycle with major releases every six months. This sustained pace, innovative for the time, included phases of progressive freezes: first features, then the interface, and finally translations. This method stabilized releases without rushing.

The 2.0 version of 2002 consolidated the project’s architecture. It improved component integration and further harmonized the interface. Subsequent versions continued this work of consistency while adding new capabilities.

GNOME demonstrated that a free project can bring together volunteers and companies around common objectives. Its balanced governance, blending community democracy and economic stakeholder participation, inspired other initiatives. The project proved that a free alternative to proprietary environments could achieve a professional level, both in quality and functionality.


Bluetooth

In Ericsson’s laboratories in 1994, engineers set out to solve a simple problem: replacing the cables connecting electronic devices with an inexpensive, low-power radio link. This project would take on a surprising name, borrowed from Viking history. Harald Bluetooth, King of Denmark in the 10th century, had unified Denmark and Norway. The designers saw it as a perfect symbol for a technology designed to connect different devices.

Four years later, in February 1998, five electronics giants—Ericsson, IBM, Intel, Nokia, and Toshiba—created the Bluetooth Special Interest Group. This association’s mission was to define common technical specifications. The movement gained momentum: 3Com, Microsoft, Lucent, and Motorola joined the group, which already had more than 1,900 members by 2000.

The first commercial version was released in 1999. Devices from different manufacturers struggled to communicate with each other. Version 1.1, in 2002, corrected these teething problems and stabilized throughput at 1 Mbit/s. Two years later, version 2.0 tripled transmission speed thanks to Enhanced Data Rate.

Bluetooth operation relies on piconets, small networks in which a master device coordinates up to seven active slaves. The 2.4 GHz frequency band serves as the operating space, with an interesting feature: the transmission frequency changes 1,600 times per second among 79 available channels. This frequency hopping limits interference.
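
A toy simulation gives the flavor of the mechanism. Real Bluetooth derives its hop sequence from the master's address and clock through a dedicated selection algorithm; the hash used below merely stands in for it, showing how two devices sharing the same inputs land on the same channel at each slot.

```python
import hashlib

CHANNELS = 79                               # 1 MHz channels in the 2.4 GHz band
# One slot lasts 625 microseconds, hence 1,600 hops per second.

def hop_channel(master_address: str, clock: int) -> int:
    seed = f"{master_address}:{clock}".encode()
    return hashlib.sha256(seed).digest()[0] % CHANNELS

# Devices in the same piconet share the master's address and clock,
# so they land on the same channel slot after slot; a neighboring
# piconet with another master follows a different sequence.
for slot in range(5):
    print(f"slot {slot}: channel {hop_channel('00:11:22:33:44:55', slot)}")
```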

Security was not overlooked. The designers included three levels of protection, from open mode to encrypted connection, with authentication and authorization mechanisms. But it was in 2010 that everything changed. Version 4.0 introduced Bluetooth Low Energy, a variant that consumes much less power. Smartwatches, health sensors, and home automation devices found their gateway to the wireless world.

Six years later, version 5.0 quadrupled range and doubled speed. The Internet of Things exploded, and Bluetooth adapted. Protocol layers stack up: the radio layer handles waves, the baseband controls packets, the link manager establishes connections. Above, L2CAP segments and distributes data. Protocols like RFCOMM also emulate serial ports to maintain compatibility with existing systems.

Profiles define uses: file transfer, synchronization, telephony, audio. The Generic Access Profile serves as the foundation for all others. This layered architecture enabled Bluetooth to go beyond its initial role as a simple cable replacement. Forecasts predicted 5.4 billion devices shipped in 2023.

In 2017, the addition of mesh networking opened new possibilities. Extended networks became feasible, useful for connected lighting and building automation. In hospitals, sensors monitor patients remotely. Factories collect data to anticipate breakdowns. Bluetooth beacons guide customers through stores and revolutionize proximity marketing.

This technology illustrates how a simple radio link can evolve into a complex system. From point-to-point to mesh networks, from cable replacement to the backbone of IoT, Bluetooth has managed to transform itself without breaking compatibility with previous versions. Throughput increases, range extends, energy efficiency improves. And yet, a recent device can still communicate with an old peripheral from the 2000s.


MSN Messenger

When MSN Messenger arrived in 1999, instant messaging was nothing new. IRC had existed since 1988, born from the imagination of the Finnish developer Jarkko Oikarinen. This real-time chat system had already proven its usefulness in 1991 during events like the Gulf War in January or the attempted coup against Mikhail Gorbachev in August. Thousands of internet users from around the world gathered there to exchange information in real time, creating a primitive form of global social network.

Microsoft entered a particular context. Home computing was becoming mainstream, and Bill Gates’ vision of a connected computer in every household, echoed by President Clinton, was beginning to materialize. Graphical interfaces had made machines accessible to novices. The ground was ready for consumer applications that no longer required advanced technical skills.

But MSN Messenger didn’t simply replicate IRC. Where the latter favored public channels and group discussions, Microsoft bet on the intimacy of private conversations between authorized contacts. The inspiration came from ICQ, created in 1996 by four Israeli developers from Mirabilis. This nimble little program with its punning name (“I seek you”) had attracted 100,000 simultaneous users by June 1997. AOL purchased it in 1998, proof that the instant messaging market was attracting interest.

The technical architecture relied on a client-server model. Unlike some competitors who opted for peer-to-peer, Microsoft centralized exchanges on its servers with proprietary protocols. This approach gave the company total control over its platform, but also created specific security vulnerabilities. Communications went through TCP port 1863, but resourceful users quickly learned to circumvent restrictions by redirecting traffic to other ports.

Features became richer with each version. Contact lists with presence indicators, instant text conversations, file transfers, customizable avatars, mood messages, and of course the memorable Nudges. The ability to modify one’s status (online, busy, away, invisible) represented a notable social innovation. For the first time, users explicitly controlled their availability, something impossible with traditional telephones.

The arrival of mobile devices during the 2000s disrupted usage patterns. Versions for mobile phones appeared, but their adoption was mixed. The phones of that era suffered from crippling limitations: weak battery life, tiny screens, expensive connections. A study conducted in Sweden between 2007 and 2008 showed that users considered the mobile version as merely a supplement. They used it mainly to kill time, not really to communicate seriously.

Security posed problems from the start. Users exposed themselves to all sorts of threats: viruses, Trojan horses, conversation eavesdropping, identity theft. CERT documented attacks exploiting social engineering. Hackers played on users’ trust to spread malware disguised as legitimate improvements. The fact that many teenagers used MSN Messenger without parental supervision worsened the problem.

In businesses, the tool sparked heated debates. Some organizations saw it as a way to streamline internal communication and informal collaboration. Others worried about the risks of confidential information leaks and decreased productivity. Many ended up deploying secured and controlled versions, when they didn’t outright block access.

Interoperability remained a thorny issue. Users would have liked to chat with their contacts regardless of their messaging service. But Microsoft, AOL, and Yahoo maintained closed ecosystems despite repeated requests and pressure from the FCC. Solutions like Trillian allowed these barriers to be circumvented by aggregating multiple services, but without guaranteeing all native features.

The internet evolved. Social networks emerged and redefined online communication. Facebook offered its own integrated messaging. Modern smartphones made desktop applications obsolete. Microsoft eventually replaced MSN Messenger with Skype in 2013, marking the end of an era that had lasted fourteen years.

Contemporary messaging applications incorporate concepts invented or popularized by MSN Messenger: contact lists, presence indicators, instant conversations, file sharing, emojis. WhatsApp, Telegram, Signal, and Discord owe much to this pioneer.

MSN Messenger permanently altered our ways of communicating. By making instant messaging accessible to the general public, it transformed interpersonal relationships in the digital age. Its history reflects the technological and social transformations at the turn of the 20th and 21st centuries, a period when connected computing became an integral part of daily life.


RSS

Dave Winer launched scriptingNews in 1997 while working at UserLand. This format sought to simplify content sharing on the Internet. Two years later, Netscape adopted the idea and created RSS 0.90 (RDF Site Summary), an XML format with an RDF header that it immediately deployed on my.netscape.com.

UserLand responded with scriptingNews 2.0b1, which integrated RSS 0.90 features. Netscape countered with RSS 0.91, a streamlined version without RDF but retaining the essence of scriptingNews. UserLand then abandoned its own format to adopt RSS 0.91. Netscape subsequently discontinued development of the technology.

The year 2000 marked a turning point. A group led by Rael Dornfest at O’Reilly designed RSS 1.0, a complete overhaul based on RDF and XML namespaces. This version broke with the 0.9x lineage. Dave Winer continued on his own with RSS 0.92, adding optional elements to version 0.91.

In 2002, after leaving UserLand, Winer created RSS 2.0. This iteration enhanced RSS 0.92 while maintaining backward compatibility with previous versions. Harvard University published the specification under a Creative Commons license in 2003, opening the format to everyone.

RSS enabled websites to announce their updates in a standardized format. Users subscribed to these feeds through aggregators and received new content without visiting each site. Blogs greatly benefited from this automation.

Libraries and documentation centers quickly adopted the technology for their alerts and monitoring services. The format distributed scientific journal summaries and new acquisitions. BioMed Central and Nature integrated it into their platforms.

The diversity of reading tools exploded. FeedReader and NetNewsWire coexisted with online solutions like Bloglines. Firefox and Opera added native feed management functions. Google Reader became a reference until its closure in 2013.

RSS’s XML structure ensured interoperability. Each feed contained mandatory elements: title, link, description. Optional metadata enriched the whole. The hierarchy organized content by categories and accepted multimedia attachments.
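
A complete, minimal feed illustrates this structure. The titles and URLs below are invented, and the parsing relies only on Python's standard library, much as an aggregator of the time would have done.

```python
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Lab Notebook</title>
    <link>https://example.org/notes</link>
    <description>Research updates</description>
    <item>
      <title>New preprint online</title>
      <link>https://example.org/notes/42</link>
      <description>The draft is now available.</description>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(feed).find("channel")
print(channel.findtext("title"))            # one of the three mandatory elements
for item in channel.findall("item"):        # what an aggregator polls for
    print(item.findtext("title"), "->", item.findtext("link"))
```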

Online media transformed how information was consumed thanks to RSS. Readers centralized tracking of their favorite sources and filtered according to their interests. This autonomy reduced dependence on traditional portals.

Digital marketing seized upon the format to distribute commercial offers and newsletters. Syndication reached audiences through aggregators. Google later extended AdSense to monetize RSS feeds, opening up advertising opportunities.

Social networks and Twitter gradually changed habits. RSS nevertheless remained useful for information professionals and users concerned with controlling their sources.

The format inspired other standards such as Atom, standardized in 2005 by the IETF. These complementary specifications enriched the web ecosystem by facilitating structured content aggregation.

RSS’s distributed architecture influenced the social web. Automated publishing and subscriptions inspired social network notifications. The activity feed principle, central to modern interfaces, partly originated from there.

RSS’s simplicity made it a pillar of the participatory web of the 2000s. The technology demonstrated the importance of open standards for service interoperability. Its legacy endures in current content distribution protocols.

In 2024, millions of sites and applications still use RSS. Feeds power monitoring systems, news aggregators, and automation tools. This longevity testifies to the relevance of a decentralized model for information distribution.


SOAP

In late 1997, Microsoft began exploring an idea that would transform communication between applications: using XML for remote procedure calls over HTTP. The ambition was simple: enable machines to communicate across networks with standard data types, without the complications of proprietary protocols. DevelopMentor, accustomed to collaborating with Microsoft, and UserLand, which saw the Web as a publishing platform, joined the venture. The name SOAP emerged in early 1998.

But things quickly became complicated. The DCOM team at Microsoft put up strong resistance. Rather than adopting this new approach, they preferred to leverage the company’s dominant position to impose DCOM through HTTP tunneling. The in-house XML experts found the idea appealing but premature: they were waiting for the advanced features promised by XML Schema and namespaces. Faced with this deadlock, UserLand took the initiative and published its own version of the specifications under the name XML-RPC during the summer of 1998.

In 1999, Microsoft made progress on XML Schema and integrated namespaces into its products. SOAP regained momentum, though the BizTalk team remained reluctant as their model was based on messaging, not remote procedure calls. On September 13, 1999, SOAP 0.9 was released for public review and submitted to the IETF. Three months later, SOAP 1.0 appeared with few changes.

In March 2000, the W3C announced it was considering activity around XML protocols. At the XTech conference, a lively session brought together several visionaries who debated future directions without reaching consensus. But May 8, 2000 marked a real turning point: SOAP 1.1 arrived at the W3C with IBM as co-author. This unexpected support changed everything. The new version proved much more modular and extensible, dispelling fears that SOAP would impose proprietary Microsoft technologies. IBM immediately published a Java implementation which it contributed to the Apache XML project for open source development. Skeptics began taking the protocol seriously. Sun expressed interest and worked to integrate web services into J2EE. Other vendors and open source projects followed suit.

In September 2000, the W3C formed a working group dedicated to the XML protocol, taking SOAP 1.1 as its starting point. After months of modifications, improvements, and difficult decisions about what to keep or abandon, SOAP 1.2 became an official recommendation in June 2003.

SOAP prevailed because it represented the best industrial compromise for standardizing XML-based multi-platform distributed computing. Its simplicity was its major asset: historically, architectures that achieved mass adoption did so thanks to this quality.

The protocol defines the unit of communication through an envelope that frames all information. A message contains a body where arbitrary XML can be placed, accompanied by headers that carry data outside the main body. The processing model establishes precise rules for handling messages when extensions come into play. SOAP faults handle errors by identifying their source and cause, while allowing the exchange of diagnostic information between participants.

Extensibility works through SOAP headers that carry extension data with the message and can target specific nodes along its path. SOAP offers a flexible data representation mechanism: it accepts data already serialized in various formats (text, XML) and provides a convention for representing abstract structures like programming language types in XML.

Remote procedure calls and their responses “map” naturally to SOAP messages. This is a common type of interaction in distributed computing that corresponds well to procedural language constructs. The binding framework defines an architecture for building bindings that send and receive SOAP messages over arbitrary transports. This framework notably serves to move SOAP messages across HTTP, the ubiquitous Internet protocol.
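
Here is what such an RPC-style message can look like on the wire, hand-built for the example: the service namespace, method, and parameter are invented, and real toolkits would generate this envelope from a service description rather than by hand.

```python
import xml.etree.ElementTree as ET

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- extension data, authentication for instance, travels here -->
  </soap:Header>
  <soap:Body>
    <GetQuote xmlns="http://example.org/stock">
      <symbol>ACME</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"
body = ET.fromstring(envelope).find(SOAP_NS + "Body")
call = list(body)[0]                        # the remote procedure being invoked
print(call.tag, "->", call.findtext("{http://example.org/stock}symbol"))
```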

SOAP remains widely used in enterprises for application and service integration, especially with legacy systems. The banking and financial sectors are among its loyal users, and platforms such as PayPal, Amazon, and eBay long exposed SOAP interfaces alongside their other APIs. The protocol maintains its relevance thanks to its technical robustness and its ability to handle complex operations that require maintaining conversational state and contextual information. Its evolution enabled the development of web services and the rise of distributed computing.


Transport Layer Security

The security of communications on the Internet today relies on a discreet yet essential technical foundation. In 1994, Netscape sought to protect exchanges on its Web browser and invented SSL (Secure Sockets Layer). The first public version, SSL 2.0, was released the following year. However, this initial attempt contained too many flaws to be truly reliable. SSL 3.0 arrived in 1996 to correct course.

Three years later, the IETF took over the project and renamed it TLS 1.0. This name change also marked a takeover by the international technical community. The protocol creates an encrypted tunnel between a client and a server, ensuring that no one can read or modify the data in transit.

The 2000s saw the emergence of players like Comodo, specializing in issuing digital certificates. Different formulas coexisted: DV (Domain Validation) certificates, simple to obtain, and, from 2007, EV (Extended Validation) certificates, which require thorough verification of the requesting organization. TLS 1.1 was released in 2006 with some additional protections against certain attacks, without really convincing system administrators to migrate en masse.

In 2008, TLS 1.2 brought its share of innovations: extension support, enhanced security mechanisms. Yet adoption lagged. It would take several high-profile incidents for the sector to understand the urgency of modernizing its infrastructures.

Between 2011 and 2014, alerts multiplied. The attack against DigiNotar in 2011 revealed that a hacker had managed to penetrate this Dutch certificate authority and issue hundreds of fraudulent certificates. The BEAST attack exploited a vulnerability in TLS 1.0. The wake-up call was brutal.

In 2014 the Heartbleed flaw affected more than 300,000 public web servers. Three years after its discovery, nearly 180,000 devices remained vulnerable. POODLE in 2014, FREAK and DROWN in 2015-2016 confirmed that vigilance must be constant. Each incident accelerated the migration to TLS 1.2 and pushed for the development of a truly new version.

TLS 1.3 arrived in 2018 after several years of gestation. This version did not simply patch up the existing system: it thoroughly cleaned up the protocol. Obsolete or dangerous functionalities disappeared. The handshake, the initial exchange between client and server, went from two to just one round trip in most cases. The speed gain was immediately felt.

This version imposed Perfect Forward Secrecy, a property that prevents retrospective decryption of communications even if long-term keys fall into the wrong hands later. The number of cryptographic suites dropped from over 35 to just five. This major cleanup improved security but required adjustments in existing infrastructures.
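
In today's terms, enforcing this floor takes only a few lines with Python's standard ssl module. The sketch below builds a client context that refuses anything older than TLS 1.3; the host name is a placeholder, and running it requires network access to a TLS 1.3-capable server.

```python
import socket
import ssl

context = ssl.create_default_context()            # certificate checks stay on
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below

host = "example.org"                              # placeholder host name
with socket.create_connection((host, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=host) as tls:
        print(tls.version())    # "TLSv1.3" after the single-round-trip handshake
        print(tls.cipher()[0])  # one of the five remaining cipher suites
```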

The history of TLS reflects that of threats on the Internet. The protocol had to constantly adapt, balancing innovation against compatibility with systems already deployed. Successive versions hardened cryptographic mechanisms and simplified their use. Sometimes, inertia slowed down developments. But security incidents regularly reminded us that we could not rest on our laurels.

Now, TLS protects a large portion of web traffic. The protocol is no longer limited to banking transactions or sensitive data. It has become the standard for all online communication, especially since Google indicated in 2014 that encryption influenced search ranking, even though this requirement is more a consequence of the PRISM affair and the need to raise the overall security level of the web. Through its modular architecture, new algorithms are added as cryptography advances.

The lessons from TLS development have spread to other security protocols. The importance of formal validation, the gradual transition to new versions, the delicate balance between security and performance have become guiding principles in the field. Accumulated experience shows that the security of digital communications requires continuous evolution.


Napster

Shawn Fanning was 18 years old when he created Napster in 1999. The story begins simply: this student wanted to share music files with his friends. He had no idea he was about to trigger one of the greatest revolutions in the history of music.

The technical principle was based on peer-to-peer. Users exchanged MP3 files directly between their computers, without a central server to store the data. Napster only maintained a directory of files available on connected machines. When someone searched for a song, the software established a direct connection with another member’s computer that had the file.
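
The split between a central directory and peer-to-peer transfers can be sketched in a few lines. The addresses and file names are invented, and the real service spoke its own protocol rather than Python calls.

```python
index = {}   # central directory: song title -> peers announcing it

def announce(peer, songs):
    """A client connects and declares what sits in its shared folder."""
    for song in songs:
        index.setdefault(song, []).append(peer)

def search(song):
    """The server answers only 'who has it'; the MP3 itself then
    travels directly between the two members' computers."""
    return index.get(song, [])

announce("192.0.2.10:6699", ["song_a.mp3", "song_b.mp3"])
announce("192.0.2.77:6699", ["song_a.mp3"])
print(search("song_a.mp3"))   # ['192.0.2.10:6699', '192.0.2.77:6699']
```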

In less than two years, more than 26 million users joined the service and exchanged over 80 million songs. This explosive growth was due to the simplicity of use and the fact that it was free. All you had to do was type in a song title to download it. The MP3 format, which compressed audio files without too much quality degradation, made these transfers fast with the Internet connections of that era.

The record companies quickly understood the threat. They had always controlled music distribution through the sale of physical media. Napster disrupted this model by allowing the free circulation of works on the Internet. The Recording Industry Association of America (RIAA) estimated its losses at 55 billion dollars over a decade.

The lawsuit began in December 1999. The RIAA was suing Napster for contributory copyright infringement. The accusation was not about direct infringement, but about facilitating these violations by users. David Boies, Napster’s lawyer, relied on the Audio Home Recording Act of 1992, which authorized private copies for non-commercial personal use.

The courts had to settle an unprecedented question: did file sharing constitute “fair use” under American law? Four criteria came into play: the purpose of the use, the nature of the work, the proportion used, and the impact on the market. In July 2000, Judge Marilyn Patel ordered the halt of distribution of protected files. An appeals court temporarily suspended this decision, but Napster shut down in 2001.

This closure stopped nothing. Other services took over with more decentralized architectures: Gnutella, Kazaa, then BitTorrent. Each learned from the legal flaws of the previous one to better protect themselves from prosecution. The music industry found itself facing a technical problem that it could not solve by legal means alone.

Apple seized the opportunity by launching iTunes in 2003. Steve Jobs understood that a legal alternative that was as simple as Napster had to be offered. The bet worked: people accepted paying if the service remained convenient. This approach showed that a new economic model was possible.

The story of Napster mainly reveals the inadequacy of copyright laws in the face of the possibilities opened up by the Internet. These laws had been designed for a world where copying a work required industrial means. Digital technology allowed anyone to reproduce and distribute content without any degradation. The legal frameworks of the 20th century no longer held.

The service transformed music consumption habits. An entire generation became accustomed to free access and immediacy. This expectation did not disappear with the closure of Napster. It forced the industry to rethink its models, ultimately leading to streaming services like Spotify that attempt to reconcile free access and artist compensation.

The international dimension complicated matters. Napster reached users all over the world, but copyright laws remained national. How to apply the American legal framework to cross-border exchanges? This question remains relevant for all digital platforms.

The lawsuits created important case law. They showed that the Internet could not function as a lawless zone. Legislators around the world drew inspiration from them to develop their own regulatory frameworks. The Napster case established precedents that still influence debates on digital regulation. The balance between innovation, creators’ rights, and user expectations remains fragile.

Year 2000 Bug

The story of the Year 2000 bug begins well before the years of panic that preceded the turn of the millennium. In the 1960s, when programmers worked with punch cards limited to 80 columns and every byte of memory cost a fortune, coding dates with two digits seemed a perfectly reasonable decision. Who would have imagined that these programs would still be running forty years later? This space-saving measure, trivial at the time, would transform into one of the greatest technical headaches of the late 20th century.

The mechanism of the problem was disarmingly simple. Systems that stored only the last two digits of the year could not distinguish 1900 from 2000. Calculating the duration between two dates, chronological sorting, validity checking: all operations that risked producing absurd results. The first signs appeared as early as the 1980s, well before the expression “Year 2000 bug” made newspaper headlines. In 1988, the British chain Marks & Spencer rejected a delivery of canned goods after its system interpreted an expiration date in 2000 as 1900, deeming the products expired for nearly ninety years. The anecdote raised smiles, but it heralded far more serious troubles.

Awareness truly began around 1990. Experts started multiplying warnings: banking malfunctions, electrical grids shutting down, air traffic control paralyzed, medical equipment failing. In 1992, Mary Bandar, a 104-year-old woman, received an invitation to join a kindergarten class—her birth year “88” having been read as 1988 rather than 1888. The incident could have been amusing if it hadn’t revealed the scale of the looming problem.

Governments finally reacted. The United Kingdom created “Taskforce 2000” in 1996, followed the next year by “Action 2000”, endowed with an initial budget of one million pounds sterling that climbed to 17 million. The United Nations established in February 1999 the International Y2K Cooperation Center, funded by the World Bank. Companies launched massive compliance programs. The New York Stock Exchange devoted 30 million dollars to it over seven years.

But the problem wasn’t confined to large mainframe systems. Programmable logic controllers, ubiquitous in industry for controlling machines and processes, often used two-digit dates. PCs presented their own complications, related to their real-time clock and BIOS. Certain versions, like Award v4.50 from 1994-1995, simply couldn’t handle dates beyond 1999, and even Windows 95 needed patches to cross into the year 2000 cleanly.

Faced with this situation, a few technical strategies emerged. Expanding years to four digits represented the most solid solution, but it required thoroughly revising programs and databases. The so-called “windowing” technique preserved the two-digit format while adding interpretation logic: “00” to “19” corresponded to 2000-2019, “20” to “99” to 1920-1999. This method, less costly, had the flaw of simply postponing the problem to a later date.
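
In code, the windowing fix amounts to a single comparison. The sketch below uses the pivot described above; choosing that pivot, system by system, was precisely the work of the remediation teams.

```python
PIVOT = 20   # the interpretation rule described above

def expand_year(two_digits: int) -> int:
    """Map a two-digit year onto four digits around the pivot."""
    if two_digits < PIVOT:
        return 2000 + two_digits    # "00".."19" -> 2000..2019
    return 1900 + two_digits        # "20".."99" -> 1920..1999

assert expand_year(5) == 2005       # an expiry date lands where expected
assert expand_year(88) == 1988      # but a birth in 1888 is still misread:
                                    # windowing buys time, it does not solve
```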

Audits multiplied from 1997 onward. Firms began requiring their clients to certify the compliance of their critical systems to guarantee business continuity. Specialized companies developed automated tools to detect and correct problems in source code. COBOL programmers came out of retirement, demanding comfortable salaries to work on systems they had built decades earlier.

Tests revealed worrying flaws. The British Rapier missile system proved inoperable after 2000. A Swedish nuclear power plant shut down automatically during a millennium transition test. At Chrysler, a 1997 test paralyzed a plant’s security system, blocking access and preventing payroll management.

The overall cost of corrections ranged between 300 and 500 billion dollars. The United States spent 34 billion, Italy 2.5 billion, Venezuela 100 million. Industrialized countries, whose economies relied more heavily on digital systems, invested proportionally more than others.

Ultimately, the transition to the year 2000 caused less chaos than feared. Incidents occurred: fifteen nuclear reactors shut down, card payment systems malfunctioned, power outages affected Hawaii. But the predicted catastrophe didn’t happen, a direct result of efforts deployed during the preceding decade.

This experience left its mark. It demonstrated that software engineering principles like abstraction and encapsulation weren’t just theoretical concepts. It highlighted the dangers of single points of failure and the value of loose coupling between systems. It proved that international mobilization in the face of an identified computer threat remained possible.

Paradoxically, the success of “Y2K” management hindered the adoption of certain lessons. The industry continued to favor speed to market over robustness. Supply chains tightened, reducing their ability to absorb shocks. The current cybersecurity crisis testifies to this persistent difficulty in designing computer systems that are both robust and secure. The Year 2000 bug was, in the end, a missed opportunity to truly learn from our mistakes.

This episode illustrates two architectural principles worth keeping in mind in any design work, computer-related or not: “Today’s problems come from yesterday’s solutions” and “Cause and effect can occur at distant times and places”.