Stéphane FOSSE

EPOCH

EPOCH © 2025 by Stéphane Fosse. This book is published under the terms of the CC BY-SA 4.0 license

Chapter 9
2000

When humanity went online

The 21st century began in turmoil. The September 11, 2001 attacks shook our world to its very foundations. Overnight, the vision of digital technology was transformed. Governments toughened their approach to computer security, critical infrastructures were fortified, and communications were placed under surveillance. In Washington, the Patriot Act reshaped our relationship with digital privacy.

The Internet became democratized like never before. We witnessed the birth of what was called Web 2.0, a transformation where users became active participants. MySpace and Facebook created new virtual social territories. Twitter changed our relationship with news. Personal blogs flourished everywhere thanks to WordPress, giving voice to anyone who wished it. This ferment redefined our way of existing online.

The digital economy weathered a salutary storm. The bursting of the Internet bubble swept away the most fragile projects. The survivors had to prove their real value. Google established itself as the cartographer of this new information universe through its PageRank algorithm. Amazon revolutionized commerce by creating a completely reimagined shopping experience.

Mobility took center stage in our daily lives. The first smartphones began to transform our habits. The iPhone in 2007 caused a total disruption. The way we touched and manipulated devices would never be the same again. Wireless networks multiplied, Wi-Fi became ubiquitous, and 3G gave us the Internet in our pockets.

E-commerce matured and payment systems became secure, inspiring confidence in even the most reluctant consumers. Data analysis enabled online retailers to know us sometimes better than we knew ourselves, suggesting products with unsettling accuracy. To meet this explosion in usage, data centers had to reinvent themselves, become more efficient, less energy-intensive.

A new sharing experience emerged with peer-to-peer technologies. Napster rattled the music industry, while BitTorrent transformed our sharing patterns. The major record labels initially resisted before adapting. Last.fm and Spotify invented new ways of listening to music, heralding the coming video streaming revolution.

In businesses, work was transformed. Broadband and collaborative tools made remote work possible. Organizations invested massively in their information systems, deploying ERP and CRM solutions to better manage their operations and customer relationships.

Open source gained legitimacy. Linux, once considered a fringe project, took its place in the server rooms of major corporations. Firefox proved that a free browser could hold its own against giants. Passionate communities formed around these projects, demonstrating the power of collective development.

The digital divide concerned governments worldwide. School equipment programs were launched. Internet access became a matter of social equality. In Asia, China and India built powerful technology industries in record time, transforming the global balance of the sector.

In datacenters, virtualization took off and proved a game-changer. A single physical server could host multiple virtual machines. This evolution gave rise to cloud computing. CIOs began to outsource their infrastructures, gaining agility.

Cybersecurity became a major issue. Cyberattacks multiplied and became sophisticated. Viruses evolved into complex ransomware and malware. Faced with this growing threat, security solutions had to constantly adapt. Organized crime moved online, targeting banking and personal data.

In schools, digital technology made its entrance. Interactive whiteboards replaced chalk. Virtual Learning Environments created new connections between teachers, students, and parents. Educational resources were enriched, diversified, and digitized.

Scientific research achieved decisive breakthroughs thanks to growing computing power. Supercomputers made possible precise climate simulations, advances in genomics, and discoveries in fundamental physics. Scientists learned to work with distributed computing systems to solve increasingly complex problems.

Automation intensified across all sectors. Embedded systems became smarter, more communicative. Industrial robotics reached a milestone, making factories more productive but also raising important social questions.

Environmental awareness appeared in the IT world. The power consumption of datacenters became a concern. Green IT emerged as a necessity, no longer just a marketing argument.

New forms of collaboration emerged. Wikipedia, launched in 2001, showed the way toward truly collective intelligence. Wikis and other collaborative tools transformed the way teams worked together and shared their knowledge.

Our electronic devices converged into a connected ecosystem. Televisions, cameras, Hi-Fi systems: everything became “smart” and interconnected. This convergence required new standards so our equipment could communicate with each other.

This decade shaped our current digital world. It was not merely a technological evolution, but a transformation of our society. We changed the way we communicate, work, and entertain ourselves. The period 2000-2010 will remain in history as the time when humanity truly went online, when our world became irrevocably connected.


C#

Anders Hejlsberg arrived at Microsoft in 1996 with a solid reputation: he had developed the Turbo Pascal compiler and led the creation of Delphi at Borland. Microsoft then entrusted him with an ambitious mission: designing a new programming language for its future .NET platform. This language, initially called "Cool" during its conception, would become C#. The goal was clear: offer a modern alternative to Java, avoiding its pitfalls while adopting its best ideas.

C# distinguished itself from the start with its unified type system. All types, even basic ones like int or double, inherit from a single root type called object. This architecture simplifies data manipulation, regardless of its nature. The language also incorporates automatic garbage collection, which reclaims space occupied by unused objects, and structured error handling through exceptions.

Each new version enriched the language with substantial features. Version 2.0 introduced generics, which strengthened code reusability and type safety. Anonymous delegates expanded possibilities in event-driven programming. With version 3.0 came LINQ (Language Integrated Query) features, which radically transformed how data is queried and manipulated.
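The leap LINQ represented can be hinted at with a rough analog in another language. The sketch below is Python rather than C#, with an invented Product type: the comprehension pipeline mirrors what a C# Where/OrderBy/Select chain expresses directly in the language.

```python
from dataclasses import dataclass

# Hypothetical product data, invented for the example.
@dataclass
class Product:
    name: str
    price: float

products = [
    Product("keyboard", 45.0),
    Product("mouse", 19.0),
    Product("monitor", 180.0),
]

# C#: products.Where(p => p.Price < 100).OrderBy(p => p.Price).Select(p => p.Name)
# Python analog: a comprehension over the same data, cheapest first.
cheap_names = [
    p.name
    for p in sorted(products, key=lambda p: p.price)
    if p.price < 100
]
print(cheap_names)  # ['mouse', 'keyboard']
```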

Hejlsberg and his team paid particular attention to version compatibility. Existing applications had to continue functioning with new library versions. This concern shows through in several design choices: the virtual and override modifiers remain distinct, method overload resolution rules are clearly defined. This rigor guarantees program longevity.

Research at Microsoft played a decisive role in the language’s evolution. Don Syme and Andrew Kennedy, researchers at Microsoft Research Cambridge, developed a prototype called Gyro to experiment with generics before their integration into C#. This collaboration between research and development illustrates well the pragmatic approach adopted: testing new ideas in exploratory projects before incorporating them into the production language.

C# gradually opened up to functional programming. Type inference and lambda expressions, concepts drawn from languages like Haskell or ML, were integrated over time. This evolution follows a general trend in the industry: programming paradigms blend together to offer developers more flexibility. The language nonetheless remains accessible to beginners, a characteristic intended from its conception.

The question of database integration comes up constantly. Developers must often juggle between C# and SQL, what Hejlsberg calls an "impedance mismatch" between these two worlds. LINQ addresses this problem by unifying the querying of various data sources directly in C# code, with consistent syntax.

Microsoft evolved C# and Visual Basic .NET in parallel, two languages that share many features while preserving their specificities. This strategy aimed to respond to the distinct preferences of their respective communities. The development team also maintains regular exchanges with researchers working on F#, the .NET platform’s functional language, to explore new directions.

ECMA-334 and ISO/IEC 23270 standardization established technical standards guaranteeing implementation consistency. This standardization reinforced the portability of C# applications and their independence from Microsoft.

C# resolutely adopts a component orientation. The language offers specific syntactic constructs for creating and using autonomous, self-documented software modules. This modular approach facilitates the development of complex applications and their long-term maintenance.

The C# developer community has considerably expanded, creating a rich ecosystem of libraries, tools, and best practices. Microsoft has managed to keep the language cutting-edge while preserving its stability and backward compatibility. The shift toward more declarative programming styles continues: developers describe their objectives rather than implementation details, leaving the compiler to optimize execution.

C# has adapted to the industry’s changing needs without denying its founding principles of simplicity and productivity. Guided by a clear vision and nourished by research, its continued development makes it an indispensable tool for modern software development.

FLAC

Josh Coalson launched the development of FLAC in 2000. His ambition? To create an open format for lossless audio compression, in response to the proprietary solutions that dominated the market at the time. Music lovers and sound professionals alike were seeking alternatives to lossy formats such as MP3, and Coalson would meet this demand.

FLAC’s lossless compression typically reduces an audio file to 50 or 60% of its original size without losing a single bit of sound information. A 300 MB Broadcast Wave Format file can thus shrink to roughly 150 to 180 MB after compression. Once decoded, the result is strictly identical to the original in acoustic terms. This capability radically distinguishes FLAC from lossy formats that sacrifice information to save space.
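The principle can be demonstrated without FLAC itself. The sketch below uses Python's zlib as a stand-in lossless codec, on an invented byte pattern standing in for audio samples, to show that the round trip restores the data bit for bit.

```python
import zlib

# Stand-in for audio samples: a repetitive byte pattern compresses well.
original = bytes(range(256)) * 1000  # ~256 KB of "audio" data

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# Lossless means the round trip is bit-for-bit identical.
assert restored == original
print(len(compressed) / len(original))  # compression ratio, well below 1.0
```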

The project joined the Xiph.Org Foundation in 2003. This nonprofit organization already hosted open multimedia formats such as Vorbis, Opus, and Theora. FLAC’s integration into this ecosystem of open standards reinforced its legitimacy. Version 1.2.1 was released in 2007 and marked a form of maturity: the specifications have since undergone no major modifications. The reference tools continue to evolve, with version 1.3.2 released in January 2017.

FLAC’s designers favored a pragmatic approach. The format consumes little memory during decoding and relies exclusively on integer operations. These technical choices enable its deployment on systems with modest resources. The data stream is organized as metadata followed by audio frames. Each frame contains a header with a 14-bit synchronization code, subframes, padding bytes, and a frame footer.

The flexible format accepts up to 8 audio channels and samples encoded between 4 and 32 bits. For stereo, four encoding modes coexist: independent, left/side, right/side, and mid/side. The compression thus adapts to the nature of the processed sound signal.

The music industry adopted FLAC in successive stages. Initially confined to audiophile circles and alternative platforms, the format gained ground with the emergence of high-quality streaming. TIDAL and Deezer Elite launched their services in 2014, with Qobuz following suit. Online distribution evolved in parallel: 7digital and Bandcamp now offer FLAC downloads.

Native support by operating systems represented a decisive step. Android integrated it from version 3.1, and Windows 10 followed. Apple maintained its own strategy with ALAC, initially proprietary and then released as open source in 2011. Recent web browsers, Chrome 56 and Firefox 51, play FLAC files directly.

Digital preservation finds considerable advantages in FLAC. Each file embeds an MD5 fingerprint of the original, unencoded audio data in its header to verify integrity. CRC checksums identify corrupted frames during streaming. These built-in validation mechanisms give FLAC an advantage over the WAV format, which requires external verification.
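A minimal sketch of that integrity check, using Python's hashlib on invented stand-in bytes rather than a real FLAC decode:

```python
import hashlib

audio_data = b"\x00\x01" * 44100  # invented stand-in for raw PCM samples

# Store a fingerprint of the unencoded audio, as FLAC does in its header.
fingerprint = hashlib.md5(audio_data).hexdigest()

# Later, after decoding, recompute and compare to detect corruption.
decoded = audio_data  # a lossless decode returns identical bytes
assert hashlib.md5(decoded).hexdigest() == fingerprint

# A single flipped byte is enough to change the fingerprint.
corrupted = b"\xff" + audio_data[1:]
assert hashlib.md5(corrupted).hexdigest() != fingerprint
```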

Heritage institutions have taken interest in the format’s potential. The audio and video division of the U.S. Library of Congress considered its use for accessing sound recordings as early as 2005. The Internet Archive uses it in its Live Music Archives, where concerts and sessions are often available in both FLAC and WAV. The European Broadcasting Union uses it to distribute concert recordings via Musipop: original WAV files are converted to FLAC before satellite transmission.

Specialized tools accompanied the format’s growth. VLC, Clementine, and Foobar2000 provide playback across different platforms. Conversion utilities like dbPowerAMP and FFmpeg facilitate exchanges with other formats. Mp3tag and Kid3 manage metadata.

The Internet Engineering Steering Group approved the creation of the CELLAR working group in 2016 to formalize FLAC specifications. This initiative aims to establish an official standard and strengthen the format’s sustainability in the digital audio landscape.

An open project like FLAC can establish itself as a de facto standard in its field. The absence of patents or royalties on its use favors its adoption by the music industry and cultural institutions. This format remains a relevant choice for digital archiving and high-quality music distribution.


JSON

Douglas Crockford was working at State Software and looking for a simple way to enable communication between a server and a web browser. He noticed that JavaScript had a convenient syntax for representing objects: key-value pairs within braces. The idea came to him to extract this notation from the language and turn it into a standalone data exchange format.

In 2001, Crockford formalized this subset of JavaScript under the name JSON, for JavaScript Object Notation. He published the specification on json.org and provided implementations in several languages. The format consisted of just a few rules: six data types were sufficient (strings, numbers, booleans, arrays, objects, and the null value) to represent complex structures. This stark simplicity contrasted with the verbosity of XML, which was ubiquitous in data exchange at the time.
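Those six types fit in a few lines. The sketch below, in Python, round-trips one value of each through the standard json module; the document itself is invented for the example.

```python
import json

# One value for each of JSON's six data types.
doc = {
    "string": "hello",
    "number": 3.14,
    "boolean": True,
    "null": None,
    "array": [1, 2, 3],
    "object": {"nested": "value"},
}

text = json.dumps(doc)
assert json.loads(text) == doc  # the round trip preserves the structure
print(text)
```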

Official standardization came in 2006 with IETF RFC 4627. The document established the MIME type application/json and defined the precise syntax. Other versions followed: RFC 7158 in 2013, RFC 7159 in 2014, and RFC 8259 in 2017. Each revision clarified technical points and adjusted the specification to meet emerging needs in the field.

JSON became dominant because it addressed concrete needs. Its minimalist syntax remained human-readable while being easy for machines to parse. The absence of tags reduced the size of exchanged messages. And most importantly, it naturally meshed with JavaScript, the language that already dominated client-side web development. Developers had nothing new to learn because they were using structures they manipulated daily.

The web giants adopted it one after another. Yahoo! was among the first to integrate it into its services, followed by Google, Facebook, and Twitter. This massive adoption transformed JSON into a de facto standard for web APIs. REST architectures, which were becoming the norm for designing web services, aligned perfectly with this lightweight format.

The ecosystem around JSON developed at an astonishing pace. Libraries appeared in all major languages. Specialized variants emerged: GeoJSON for geographic data, JSON Schema for validating document structure. NoSQL databases like MongoDB or CouchDB adopted it as their native storage format, extending its influence beyond simple data transport.

Modern JavaScript frameworks like Angular or React manipulated JSON natively. Mobile development leveraged its lightweight nature to conserve bandwidth. JSON gradually became an invisible but ubiquitous component of web infrastructure, present in nearly every exchange between applications.

This simplicity also revealed its limitations. The inability to add comments hindered documentation. Number representation raised precision questions related to floating-point calculations. The absence of a standard format for dates forced developers to establish their own conventions. Variants like JSON5 or JSONC appeared to fill these gaps, at the risk of fragmenting the ecosystem.
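Two of these limitations are easy to demonstrate with Python's json module; the date below is an arbitrary example of the common ISO 8601 convention, which the format itself does not mandate.

```python
import json
from datetime import datetime, timezone

# Comments are not part of the JSON grammar: this parse fails.
try:
    json.loads('{"key": "value"}  // not allowed')
    comment_accepted = True
except json.JSONDecodeError:
    comment_accepted = False

# No date type either: an ISO 8601 string is a convention, not a standard.
stamp = datetime(2007, 1, 9, 9, 41, tzinfo=timezone.utc).isoformat()
payload = json.dumps({"released": stamp})
print(comment_accepted, payload)
```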

A recent analysis of public JSON files shows that developers intensively exploit the format’s capacity to represent nested structures. Objects contained in arrays themselves nested within other objects have become commonplace. This flexibility allows great expressiveness while maintaining a certain regularity that facilitates automated processing.

JSON has transformed development practices. It has fostered the growth of decoupled architectures where components communicate through well-defined interfaces. Its ease of manipulation has encouraged the adoption of functional and reactive paradigms. Development tools have integrated specific features for working with this format, from syntax highlighting to automatic validation.

The JSON specification is deliberately minimalist. Its designers chose to preserve the original simplicity rather than add features. Specific needs are covered by extensions and complementary tools that don’t affect basic compatibility. This stability ensures the format’s longevity in a constantly evolving technological environment.

The format was not imposed by a standardization body but naturally adopted because it met a real need. Its success stems from its alignment with existing practices and its ability to remain simple when so many other solutions sought complexity. JSON has become a pillar of modern web infrastructure, one of those elements we no longer notice because they seem so self-evident.


BitTorrent

By the early 2000s, the Internet was showing certain limitations. Transferring a large file was extremely challenging: mailboxes saturated, FTP servers collapsed under the load. Bram Cohen had had enough of Internet bubble startups. These companies promised the moon, burned through investors’ money, then disappeared before releasing anything at all. He decided to take matters into his own hands.

His professional experience had taught him one thing: fragmenting files for secure storage worked well. This idea kept nagging at him. What if this principle were applied to data sharing? Instead of a single server distributing a file to hundreds of users, why not split the file into pieces and let everyone share what they had locally?

This is how BitTorrent was born in 2001. The concept could be summed up briefly: a file is divided into segments. You download these segments from multiple sources simultaneously. Meanwhile, you’re already sharing the pieces you’ve received with others. The more people involved, the better it works. Cohen named this reciprocity mechanism tit-for-tat, a name that captures the spirit well: give and take.

Summer 2001 saw the release of the first beta version. Cohen presented his protocol the following year at a conference. His initial ambition was to distribute Linux distributions. However, the technical architecture concealed genuine sophistication. The .torrent files contain only metadata: filename, size, segmentation, tracker address. This tracker coordinates exchanges between participants without ever storing a single byte of content.

Two algorithms made the difference. The first, called rarest first, prioritizes the least widespread segments in the network. The second, choking, regulates transfers by favoring those who actively share. These mechanisms ensure that a file remains available when thousands of users download it simultaneously.
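Rarest first can be sketched in a few lines. The swarm state below is invented for illustration, and real clients add many refinements, but the core idea is just a count over what peers advertise.

```python
from collections import Counter

# Hypothetical swarm state: each peer advertises the piece indices it holds.
peers = {
    "peer_a": {0, 1, 2},
    "peer_b": {0, 1},
    "peer_c": {0, 3},
}

def rarest_first(peers, have):
    """Pick the piece we still need that the fewest peers hold."""
    counts = Counter(
        piece
        for pieces in peers.values()
        for piece in pieces
        if piece not in have
    )
    # Fewest copies first; ties broken by lowest index for determinism.
    return min(counts, key=lambda piece: (counts[piece], piece))

print(rarest_first(peers, have=set()))  # pieces 2 and 3 are rarest; picks 2
```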

Success materialized. By 2005, BitTorrent represented a substantial portion of global Internet traffic. Some used it for pirated content, as with all sharing protocols before it. Others saw an opportunity. Opera integrated BitTorrent into its browser that year and used it to distribute its updates. Blizzard Entertainment adopted it to distribute World of Warcraft patches. The Internet Archive followed in 2012 to make its digital collections accessible.

Regarding the protocol’s internal mechanics, files are divided into blocks, typically 256 KB each. Each block receives a unique identifier via SHA-1 hashing that guarantees its integrity. The system maintains a dynamic list of connected peers and continuously adjusts transfers according to each connection’s performance. This decentralization ensures robustness and scalability.
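A minimal sketch of that piece hashing, assuming the historical 256 KB default and an invented payload:

```python
import hashlib

PIECE_SIZE = 256 * 1024  # the historical default piece length

def piece_hashes(data: bytes) -> list:
    """Split a payload into pieces and hash each one, as stored in a .torrent."""
    return [
        hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
        for i in range(0, len(data), PIECE_SIZE)
    ]

payload = b"x" * (PIECE_SIZE * 2 + 100)  # two full pieces plus a short final one
hashes = piece_hashes(payload)
print(len(hashes), len(hashes[0]))  # 3 pieces, each a 20-byte SHA-1 digest
```

A downloader recomputes each piece's hash on receipt and discards any piece that does not match, which is how integrity survives fetching from untrusted peers.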

Cohen had hit the mark: a peer-to-peer system could solve massive distribution problems. The bandwidth savings attracted businesses. Distributing a multi-gigabyte software update to millions of users cost a fortune with the traditional approach. BitTorrent changed the game.

The open source code fostered the emergence of a developer community. The protocol’s relative simplicity accelerated its adoption. The incentive mechanisms for sharing guaranteed its practical effectiveness. These three elements explain why BitTorrent became so widely adopted.

More than twenty years later, the protocol is still running. Use cases have evolved, competing technologies have multiplied, but BitTorrent remains relevant for distributing large files. An idea born from frustration with technical limitations can durably transform Internet usage.


Apple Mac OS X

Apple went through a difficult period in the 1990s. Its operating system Mac OS, created in 1984 with the first Macintosh, lagged technically behind the competition. Originally designed for a computer with 128 KB of memory, it lacked preemptive multitasking and memory protection. MultiFinder, introduced in 1988, only provided a rudimentary form of cooperative multitasking. The architecture was aging poorly. By the mid-1990s, Apple relied on decade-old code, initially written for Motorola 68000 processors and adapted to PowerPC with varying degrees of success. Part of the kernel even ran in a 68K emulator, which only made performance worse.

Apple then launched the Copland project in 1994, with the ambition to create a modern system that would retain the Mac OS interface and compatibility. The company held numerous presentations for developers and published documentation, but Copland never achieved acceptable stability. The project was cancelled in 1996. Apple found itself in an urgent situation.

The solution came from elsewhere, through the acquisition of NeXT, the company Steve Jobs had founded after leaving Apple in 1985. NeXT had built NEXTSTEP, a remarkable system based on the Mach microkernel from Carnegie Mellon University. NEXTSTEP combined this modern kernel with a BSD layer and offered preemptive multitasking, memory protection, support for multiple processor architectures, and an object-oriented development environment in Objective-C as early as 1989.

Transforming NEXTSTEP into Mac OS X required considerable work. Apple updated the Mach 2.5 code to version 3.0, updated the BSD portion with code from 4.4BSD and FreeBSD, added support for HFS file systems and Apple network protocols. The team developed I/O Kit, a new driver system that replaced the old DriverKit. This layer, written in a subset of C++, enabled an object-oriented approach for managing peripherals.

The Mac OS X architecture was based on modularity. The XNU kernel (X is not UNIX) combined three components: Mach managed system resources, BSD provided UNIX compatibility and network services, and I/O Kit handled drivers. This organization facilitated maintenance and evolution. Drivers loaded dynamically as KEXTs (Kernel Extensions), modules that could be added without recompiling the kernel.

For application transition, Mac OS X offered several environments. Cocoa, heir to NeXT’s OpenStep libraries, created native applications. Carbon offered a modernized version of classic Mac OS APIs and facilitated porting existing software. A Classic environment ran legacy Mac OS applications without modification.

The innovations went beyond the kernel. The system introduced the Mach-O binary format, whose “fat” variant could contain code for several architectures in a single universal binary. This feature simplified software distribution and prepared for future hardware transitions. The boot process relied on Open Firmware for PowerPC, then on EFI for Intel processors, providing extended capabilities compared to traditional BIOS.

Memory management demonstrated the care given to performance. On x86 systems, Mac OS X adopted a 4/4 approach where user space and the kernel were not mapped simultaneously, unlike the classic 3/1 or 2/2 GB split. This design made better use of available memory.

The transition to Intel processors in 2005-2006 demonstrated the architecture’s flexibility. Rosetta technology, based on Transitive’s QuickTransit, executed PowerPC applications on the new x86 processors through dynamic recompilation. The 32-bit kernel continued to function on 64-bit machines in compatibility mode, preserving compatibility with existing extensions while providing access to 64-bit features.

Mac OS X’s influence extended beyond desktop computers. The system served as the foundation for iOS for the iPhone and iPod touch, adapting the XNU kernel and certain components to the ARM architecture. This code reuse gave Apple technological consistency across its platforms.

The release of Darwin’s source code, the core of Mac OS X, under the Apple Public Source License in 1999 marked a turning point in Apple’s strategy. This openness allowed developers to study the system’s implementation and contribute to it. Projects like PureDarwin demonstrated that derivative systems could be created.

Mac OS X achieved a synthesis between UNIX heritage, NeXT innovations, and Apple’s expertise in interfaces. The system enabled Apple to modernize its platform while preserving compatibility with existing applications. This solid technical foundation supported Apple’s expansion into new markets and continues to evolve with user needs and technological advances.


SHA-256

The late 1970s marked the beginning of cryptographic hash functions. These algorithms, which transform any data into a fixed-size digital fingerprint, would become major building blocks of computer security.

In 1976, Whitfield Diffie and Martin Hellman published their groundbreaking paper on public-key cryptography. They explained that a one-way hash function is necessary to construct digital signatures. The first concrete work arrived shortly after: Michael Rabin proposed a design based on DES encryption that produced a 64-bit result. Gideon Yuval demonstrated that collisions could be found by exploiting the birthday paradox with a complexity of 2^(n/2). Ralph Merkle, for his part, established the basic requirements: resistance to collisions, second preimages, and preimages.
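The birthday bound is easy to observe on a deliberately weakened hash. The sketch below truncates SHA-256 to a toy 24-bit function and finds a collision after on the order of 2^12 messages, far fewer than the 2^24 a naive search would suggest.

```python
import hashlib

def truncated_hash(msg: bytes, bits: int = 24) -> int:
    """A toy 24-bit hash: the first three bytes of SHA-256."""
    return int.from_bytes(hashlib.sha256(msg).digest()[:bits // 8], "big")

# With only 2**24 possible outputs, a collision is expected after roughly
# 2**12 = 4096 random messages -- the birthday bound.
seen = {}
collision = None
for i in range(100_000):
    h = truncated_hash(str(i).encode())
    if h in seen:
        collision = (seen[h], i)
        break
    seen[h] = i
print(collision)  # two distinct messages with the same truncated hash
```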

The 1980s saw numerous proposals emerge. The cryptographic community realized the importance of these primitives for securing digital communications. Ivan Damgård formalized the definition of collision resistance in 1987. Two years later, Moni Naor and Moti Yung introduced a variant called Universal One Way Hash Functions.

At the turn of the 1990s, Ronald Rivest created MD5, an evolution of MD4 optimized for software. This function quickly became successful: it proved to be approximately ten times faster than DES in software implementation. More importantly, MD5 escaped the export restrictions that weighed on encryption algorithms and could be used freely.

The National Security Agency then developed the SHA (Secure Hash Algorithm) family. NIST published SHA-0 in 1993. But the agency discovered a vulnerability two years later and released a corrected version called SHA-1. In 2001, facing advances in cryptanalysis and the growing power of computers, the NSA designed the SHA-2 family. SHA-256 is part of it.

SHA-256 produces a 256-bit fingerprint and operates on 32-bit words. The algorithm processes messages in 512-bit blocks after specific padding. The process includes message expansion and iterative compression based on a Merkle-Damgård function. This construction ensures that the security of the compression function extends to that of the complete hash function.
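Two visible consequences of this design can be checked directly with Python's hashlib: the output is always 256 bits regardless of input size, and a one-character change scrambles the entire fingerprint (the avalanche effect).

```python
import hashlib

# Whatever the input size, the digest is always 256 bits (32 bytes).
empty = hashlib.sha256(b"").digest()
large = hashlib.sha256(b"x" * 1_000_000).digest()
assert len(empty) == len(large) == 32

# Avalanche effect: a one-character change yields an unrelated fingerprint.
a = hashlib.sha256(b"epoch").hexdigest()
b = hashlib.sha256(b"epocs").hexdigest()
print(a)
print(b)
```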

The first cracks in MD5 appeared in 1992. Den Boer and Bosselaers found collisions for the compression function. In 1996, Hans Dobbertin discovered collisions for MD5 with a random initial value. These results did not yet really concern the community.

Everything changed in 2004. Xiaoyun Wang’s team achieved a breakthrough by perfecting differential cryptanalysis. Their work enabled finding collisions for MD5 in milliseconds. The techniques developed by Wang significantly reduced SHA-1’s security margin. The cryptographic community launched into a wave of research.

On December 31, 2008, Alexander Sotirov and his team made a major impact. They created a malicious CA certificate that exploited MD5 collisions. This attack theoretically allowed impersonating any website. The industry understood that MD5 had to be abandoned, despite its massive presence in existing systems.

NIST launched a competition in November 2007 to select SHA-3, a new standard hash algorithm. The objective: diversify available options and prepare for SHA-2’s replacement if needed. The competition attracted 64 submissions, of which 51 were selected for the first round. In July 2009, 14 candidates advanced to the second round.

Meanwhile, SHA-256 held firm. The best known attacks only compromised a limited number of the algorithm’s steps. SHA-256 resisted the techniques that had broken MD5 and weakened SHA-1. This robustness explains its growing adoption in numerous security protocols and applications.

Bitcoin’s arrival in 2009 gave SHA-256 an unexpected dimension. The cryptocurrency’s proof of work requires finding block hashes below a target value, a brute-force search whose difficulty rests on the function’s preimage resistance. This application demonstrated SHA-256’s versatility.
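A toy version of such a proof of work: the sketch below searches for a nonce whose SHA-256 hash falls below a target, with an invented header and a difficulty far lower than Bitcoin's.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce so that SHA-256(data + nonce) starts with difficulty_bits zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Roughly 2**16 = 65536 attempts are needed on average at this difficulty.
nonce = mine(b"block header", difficulty_bits=16)
print(nonce)
```

Verification is the asymmetry that makes the scheme work: checking the winning nonce takes a single hash, while finding it takes tens of thousands.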

SHA-256 is also found in the Internet of Things, distributed embedded systems, random number generation, and data encryption. This diversification testifies to its maturity and the trust it inspires.

This history shows that reliable alternatives must always be available when weaknesses appear in existing standards. It recalls the importance of maintaining standardized options. SHA-256’s longevity proves that rigorous design anticipates developments in cryptanalysis and available computing power.


Microsoft .NET Framework

At Microsoft’s Professional Developers Conference in 2000, the company unveiled .NET, a new framework for Windows that would redefine how software was built on its platform. This project, which had started under the rather uninspiring name of “Next Generation Windows Services”, represented more than just an update to existing tools. Developers received the first beta versions of .NET 1.0 that same year.

Microsoft wanted to solve several problems at once. On one hand, Java was gaining ground and the Redmond company needed a credible response. On the other hand, Windows development remained fragmented across different technologies that didn’t always communicate well with each other. The idea was to create something more coherent that would truly simplify developers’ work. At the heart of this new platform was the Common Language Runtime, an execution environment that automatically handled previously complex tasks such as memory management and security aspects.

The first commercial release arrived in 2002 with Visual Studio .NET and framework 1.0. This version laid the foundations for an architecture that would endure through the following years: a common class library, a unified type system, and the ability to write code in multiple languages such as C# or Visual Basic .NET. Version 1.1 followed in 2003, bringing its share of performance improvements and new features.

With Visual Studio 2005 and framework 2.0, things took on another dimension. Generics made their appearance, allowing developers to write more robust and reusable code. This version consolidated .NET as an essential platform. The year 2008 marked a new turning point with multi-targeting, which gave the freedom to simultaneously target different framework versions—2.0, 3.0, and 3.5.

Versions then followed at a regular pace. Visual Studio 2010 integrated .NET 4.0, then the 2012 version welcomed 4.5. This progression allowed the platform to adapt to new software development requirements, particularly in web and cloud computing.

.NET’s technical architecture rests on a well-designed stack of layers. At the base, the CLR executes code and automatically manages memory through its garbage collector. Above come the class libraries, which provide everything needed to build applications. Depending on requirements, the framework offers different environments: console applications for processing without an interface, Windows Forms for traditional desktop software, ASP.NET for the web.

.NET’s compilation model stands out through an original approach. Source code doesn’t transform directly into machine language. It first goes through an intermediate step, MSIL (Microsoft Intermediate Language). This strategy offers several benefits: code is portable across different architectures, it can be optimized at runtime, and programming languages are interoperable.
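CPython offers a loose parallel to this model: Python source is likewise compiled to an intermediate bytecode executed by a virtual machine, which the standard dis module can display. This is only an analogy for MSIL and the CLR, not .NET itself.

```python
import dis

def add(a, b):
    return a + b

# CPython compiled the function body to bytecode for its own virtual
# machine -- a rough parallel to C# compiling to MSIL for the CLR.
print(add.__code__.co_code.hex())  # the raw intermediate bytecode
dis.dis(add)                       # human-readable disassembly
```

In both cases the intermediate form, not the source, is what the runtime actually executes, which is what makes runtime optimization and language interoperability possible.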

This interoperability between languages actually constitutes one of .NET’s strengths. Beyond C# and Visual Basic .NET, the platform welcomes F#, IronPython, and even IronRuby. The Common Type System defines how to declare and use data types, while the Common Language Specification establishes rules ensuring that all these languages can work together.

Anders Hejlsberg played a decisive role in this story. Chief architect of C#, he had previously worked at Borland on Turbo Pascal and Delphi. This experience informed the framework’s design. Under his guidance, C# became .NET’s flagship language, regularly integrating innovations such as LINQ, which allowed writing queries directly in code.

.NET’s influence on the software industry was massive. The platform transformed how companies developed their Windows applications, giving them a truly productive and modern environment. The transition to the web became simpler with ASP.NET, which made creating solid web applications less challenging.

The framework introduced advanced concepts that few platforms offered at the time. Reflection, attributes, declarative metadata: all tools that enabled the construction of more sophisticated applications. Automatic memory management freed programmers from tedious and error-prone tasks. Deployment became less problematic thanks to automatic installation of necessary components.

Microsoft progressively enriched .NET to keep up with technological developments. Cloud computing support, mobile development, universal Windows applications: the platform continually adapted to new paradigms and developers’ expectations.

Creating .NET represented a colossal investment. Hundreds of engineers worked on it for years. This effort paid off: the framework became a de facto standard for Windows development, adopted by millions of developers worldwide. Its influence endures with .NET Core, its modern, cross-platform version.


Mozilla Firefox

In 1993, the Mosaic browser was created at the National Center for Supercomputing Applications. Its success was swift. Marc Andreessen, one of its creators, founded Netscape Communications Corporation the following year. The new browser, Netscape Navigator, captured the Web market.

Microsoft responded by developing Internet Explorer, which it integrated directly into Windows 95. The strategy worked. Netscape gradually lost ground. Facing this decline, the company made a bold decision in 1998: it released its browser’s source code through the Mozilla project. The mozilla.org organization was created to coordinate this community development.

The situation became more complicated in 1999 when AOL acquired Netscape. The integration went poorly. Mitchell Baker, a lawyer at Netscape and then at AOL, led mozilla.org with the rather original title of “Chief Lizard Wrangler”. Her team continued development while maintaining autonomy from AOL.

After four years of work, Mozilla was released in 2002 in its first stable version. The browser was packed with features: tabs, pop-up blocking, download manager. But its overly cluttered interface put off mainstream users. A small team then decided to start over with a simpler foundation.

The new project went through several names: Phoenix, Firebird, before becoming Firefox to avoid trademark conflicts. Version 1.0 launched in November 2004. Success was immediate. Firefox introduced innovations that would become industry standards: natively integrated tabs, a modular extension system, a streamlined interface. The approach contrasted sharply with what existed at the time.

AOL discontinued Netscape Navigator in 2003. Mozilla transformed into an independent nonprofit foundation, with $2 million from AOL as seed funding. The Mozilla Foundation could thus develop Firefox freely, staying true to free software principles and an open Web.

The browser arrived at the right time. Internet Explorer had stagnated technically. Firefox offered a real alternative, more secure and better at respecting privacy. Its open development attracted a global community of contributors. The foundation signed agreements with search engines, particularly Google, which generated revenue without compromising the project’s independence.

From 2004 to 2008, Firefox steadily gained market share from Internet Explorer. It reached 20% of the global market in 2008. An open source project was directly competing with a dominant proprietary software. Firefox pushed Web standards and interoperability, where Microsoft had favored its closed ecosystem.

Google Chrome appeared in 2008. This browser, first based on WebKit then on Blink, gradually established itself. Firefox nonetheless retained its loyal users, attached to its vision of an open Web that respects individual freedoms.

The mobile explosion led Mozilla to launch Firefox OS in 2013. This mobile operating system, built on Web technologies, aimed to offer an open alternative to iOS and Android. Despite partnerships with manufacturers, Firefox OS never took off. The project ended in 2015.

In 2024, Firefox has approximately 200 million users. The browser continues to innovate in privacy protection, performance, and standards compliance. The Mozilla Foundation pursues its mission for an open and accessible Internet for all.

Firefox demonstrates that an open source project can disrupt a market dominated by commercial giants. The browser proved the importance of transparent governance and an engaged community. Mozilla’s business model remains exceptional, however. The foundation derives substantial revenue from its partnerships with search engines without abandoning its nonprofit mission. This balance remains rare in the software world.


Apple Xserve

In May 2002, Apple returned to the server market with the Xserve, a computer designed for server racks. The company had not ventured into this territory since the crushing defeat of its Network Servers in the 1990s. At that time, Macs equipped many Fortune 500 companies, but they remained at the doorstep of their IT departments, never truly gaining entry.

The Xserve adopts a compact 1U format with a height of 4.4 cm. The first generation features PowerPC G4 processors at 1 GHz and DDR memory, a first for Apple. The system architecture relies on four hot-swappable hard drive bays, each with its own Ultra ATA-100 bus. These proprietary modules come in 60 or 120 GB capacities, reaching a maximum total of 480 GB.

For connectivity, the device offers two Gigabit Ethernet ports, three FireWire ports, two USB ports, and one serial port for terminal connections. The front panel displays disk activity, processor load, and network status in color. Apple launches two versions: $2,999 for the single-processor model, $3,999 for the dual-processor model, including an unlimited Mac OS X Server license.

The machine evolves quickly. As early as 2004, the Xserve G5 takes over with PowerPC G5 processors reaching up to 2 GHz. This performance boost requires a complete chassis redesign to dissipate heat, which reduces the number of bays. This version introduces ECC memory, a novelty in Apple’s catalog, and switches to SATA drives.

Apple completes its offering with the Xserve RAID, a 3U rack-mounted storage system. This device accommodates 14 modules of 180 GB each and communicates with the server via 2 Gb/s Fibre Channel links, achieving throughput of 400 MB/s. Its pricing challenges the competition: in 2004, the cost per gigabyte stands at $3.14, far below Dell ($9.05) and HP ($11.39).

The transition to Intel occurs in 2006. The Xserve adopts 64-bit Xeon processors reaching up to 3 GHz, multiplying the G5’s performance by five. These processors generate less heat, which allows for the addition of redundant power supplies and increases storage capacity to 2.5 TB. Options multiply: dual-layer DVD burner, 32 GB of RAM, integrated RAID card.

The final iterations emerge between 2008 and 2009 with quad-core Xeon 5400 series processors. These machines reach new heights thanks to their refined design: independent 1,600 MHz front-side buses, 800 MHz DDR2 ECC memory, PCI Express 2.0 connectors. Storage breaks new ground with 15,000 RPM SAS drives offering 125 MB/s throughput and 3.5 ms access times.

Mac OS X Server constitutes the Xserve’s major asset. This UNIX-certified system integrates file and printer sharing, web hosting, messaging, and media streaming. The 10.5 Leopard Server version enhances collaborative functions with iCal Server for calendars, Wiki Server for teamwork, and Podcast Producer for automatic multimedia content publishing.

Technical support adapts to enterprise needs: phone and email support available around the clock. Spare parts kits enable technicians to perform on-site interventions. The Server Monitor application continuously monitors critical components: temperatures, fans, power supplies, hard drives, and network interfaces.

Apple discontinues the Xserve RAID in February 2008, favoring Promise Technology solutions certified for Xsan 2 instead. The Xserve server disappears in January 2011, but it remains a fine example of technical expertise in the enterprise server domain. The company then recommends using Mac mini or Mac Pro systems configured as servers. This withdrawal reflects a shift in Apple’s priorities, turning more toward consumers and professionals than toward IT departments. For nine years, Xserve offered an elegant alternative to conventional solutions, combining performance, ease of use, and compact size.


Tor

In the 1990s, the U.S. Naval Research Laboratory was working on a problem that seemed quite removed from public concerns: how to protect the communications of its agents around the world? The solution they developed was called Onion Routing. No one imagined then that this military technology would become one of the most important tools for defending privacy on the Internet.

The technical principle is ingenious. Each message passes through several intermediate servers, encrypted in successive layers like the peels of an onion. At each step, a server removes one layer and forwards the packet to the next relay, never knowing either the origin or the final destination. An observer intercepting the traffic sees only one node among others, unable to trace the complete chain.
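The layered principle can be sketched in a few lines of Python. The construction below is a deliberate toy (XOR with a hash-derived stream, nothing like the AES circuits and TLS links Tor actually uses); it shows only how each relay strips exactly one layer.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Stretch a key into a pseudo-random stream (toy construction only --
    Tor really negotiates a separate symmetric key with each relay)."""
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def layer(data: bytes, key: bytes) -> bytes:
    """XOR one encryption layer on or off (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The client wraps the message once per relay: exit layer first,
# entry layer last, like the skins of an onion.
relay_keys = [b"entry", b"middle", b"exit"]
packet = b"hello"
for key in reversed(relay_keys):
    packet = layer(packet, key)

# Each relay peels exactly one layer; none of them sees both the
# origin and the final destination.
for key in relay_keys:
    packet = layer(packet, key)
assert packet == b"hello"
```

Each relay holds only its own key, so removing its layer reveals nothing but the next hop.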

This era also saw the emergence of cypherpunks, those libertarian technologists convinced that encryption represented a bulwark against state surveillance. The meeting between these activists and military researchers at a university seminar created an unlikely alliance. The military needed a vast network of civilian users to drown their sensitive communications in abundant traffic. The cypherpunks wanted to democratize privacy protection technologies. Each party found what it needed.

In 2002, the protocol was reimplemented for the general public and released under a free license. This source code transparency mattered greatly: how could one trust a security tool funded by the U.S. State Department without being able to verify its operation? The community could now audit it freely.

The early days raised thorny technical questions. Should they add dummy data that masks genuine communications? Theoretical security would benefit, but connections would become unbearably slow. The developers made a choice: a network that’s too slow would never attract enough users. Yet Tor’s strength relies precisely on numbers: the more diverse the profiles, the better sensitive communications blend into the mass of daily traffic.

This philosophy guided the entire evolution of the project. Tor had to remain accessible to the greatest number while maintaining a high level of security. The gamble paid off. Journalists protecting their sources, human rights activists in authoritarian regimes, companies concerned about their trade secrets, ordinary citizens refusing advertising tracking: all found in this network an answer to their needs.

The network also makes it possible to host hidden services, recognizable by their ".onion" addresses. The physical location of these sites is masked, which raises controversies. Silk Road, that illegal marketplace shut down by the FBI, used Tor. But reducing the network to its criminal uses would be dishonest. Edward Snowden’s revelations in 2013 showed that the NSA was failing to compromise Tor on a large scale, confirming its technical robustness against the world’s most powerful adversary.

The architecture relies on thousands of volunteer servers scattered around the world. Individuals, associations, sometimes universities, run these relays out of conviction: preserving a free Internet. This decentralized infrastructure constitutes a technical bulwark against mass surveillance. It also illustrates that an alternative governance of networks is possible, outside commercial or state logic.

Tor’s major innovation concerns not so much the encryption of content as the protection of metadata. Who talks to whom, when, how many times? This information often reveals more than the content of the messages themselves. By tackling this structural problem, the network has inspired many other privacy protection projects.

Tor’s sustainability rests on a fruitful paradox. The U.S. State Department continues to fund its development to promote freedom of expression in countries that censor the Internet. Activists sometimes use it to escape Western surveillance. This duality, far from weakening the project, strengthens its legitimacy and resilience. Serving state and citizen interests simultaneously is not a contradiction but a condition for survival.

Faced with the limitations of abstract mathematical models, the developers favored empirical observation of real threats and concrete uses. This pragmatic approach has produced an imperfect but effective system against mass surveillance. The initial technical challenges forged this method: better a usable tool than a theoretically perfect but practically unusable solution.

The project shows that an alternative infrastructure can emerge from the margins of the Internet to become a pillar of online freedom of expression. Technical communities can create tools serving the public interest, outside the usual circuits. Onion routing born in a military laboratory has become a digital commons.


Apple iTunes

In 2000, the music industry faced an unprecedented crisis. Illegal MP3 file sharing on the Internet was exploding, major record labels clung to their old business models, and users were abandoning record stores. Steve Jobs, who had returned to Apple in 1997, saw an opportunity: to create a cohesive ecosystem around digital music.

The story truly begins in 2001 with the first iPod. This elegant portable player needed software so people could transfer their music to it. iTunes was born from this practical necessity: to offer Mac owners a simple way to convert their CDs into digital files and sync them with their device. Nothing new in itself, but a solution that actually worked.

Two years later, in 2003, everything changed. Apple launched the iTunes Store, an online shop where you could buy individual tracks for 99 cents. The concept was bold at the time. Record labels sold complete albums, not isolated songs. Jobs had to negotiate hard with the big five record companies. His argument? Ease of use and attractive pricing would turn people away from piracy. The label executives, skeptical but with their backs against the wall, eventually agreed.

The success exceeded all expectations. One million tracks sold in one week. Users appreciated being able to preview a song before buying, paying only for the songs they really wanted. The interface was intuitive, without unnecessary frills. iTunes introduced smart playlists, rankings, and personalized recommendations that changed how people discovered music.

In October 2003, Apple made a surprising choice: adapting iTunes for Windows. The Cupertino company abandoned its usual exclusivity and opened its doors to millions of new users. This strategic decision expanded the service’s audience far beyond Mac enthusiasts. Anyone with a PC could now buy music online and sync their iPod.

Between 2004 and 2006, the music catalog constantly expanded. Deals with independent labels multiplied the available offerings. iTunes no longer just sold music: the platform welcomed podcasts, then TV series, and finally movies. The software gradually became a complete multimedia hub, far from its original purpose as a simple music library manager.

The App Store launched in 2008, designed for iPhone and iPod Touch applications. iTunes transformed into a digital crossroads where music, videos, applications, and digital books converged. The unified payment system and automatic synchronization between devices attracted more and more users. Apple had won its bet: creating an ecosystem that’s hard to leave once you’re in it.

The impact on the music industry is difficult to measure, but a few figures speak for themselves. In 2010, iTunes was the largest music seller in the United States, ahead of Walmart. Physical sales continued to collapse, certainly, but digital distribution revenues offset part of the losses. The service legitimized online music purchases and established a viable economic model for the digital era.

Technology evolved alongside usage patterns. Users accessed their library from any device thanks to cloud storage, introduced with iTunes Match in 2011. Streaming features gradually appeared, responding to new ways of consuming music. The AAC format chosen by Apple offered better sound quality than MP3 for a comparable file size. FairPlay digital rights management, less restrictive than competitors’, allowed playback on multiple devices while reassuring record labels.

In 2019, Apple announced the end of iTunes. The software would be split into separate applications for music, podcasts, and video on macOS. iTunes proved that a polished interface, combined with an adapted business model, satisfies consumers without sacrificing content creators’ interests. The platform redefined digital content distribution and inspired countless services that followed.


Skype

It’s 1998. Niklas Zennström is working for Tele2, a Swedish telecom operator, when he pitches an Internet telephony project to management. The idea finds no takers at the time, but it keeps turning over in Zennström’s mind. With the Dane Janus Friis, he first launches Kazaa, the peer-to-peer file-sharing software that will make headlines, before returning to his original project: Internet telephony.

In 2000, the Everyday.com portal gathers a team in which Zennström and Friis find three Estonian developers: Jaan Tallinn, Ahti Heinla, and Priit Kasesalu. Toivo Annus soon joins them. These developers come from Bluemoon, an Estonian company where they honed their technical skills creating video games during the Soviet era. The group forms the nucleus that will give birth to Skype.

Peer-to-peer technology, which the team perfected with Kazaa, becomes the heart of the system. Users communicate directly with each other, without going through central servers. Infrastructure costs plummet, communication quality soars. The name Skype comes from Sky peer-to-peer, first conceived as "Skyper," then shortened when they discover the domain name is unavailable.

Finding financing proves difficult. The legal controversies surrounding Kazaa have damaged the founders’ reputation. They have to wait until July 2002 for the Draper Richards fund to agree to invest $250,000 for 5% of the capital. A modest valuation that will later prove to be a spectacular investment.

On August 29, 2003, Skype opens its doors to the public. About twenty people have developed this software that offers free computer-to-computer calls. The ease of use and sound quality immediately appeal. On the first day, 10,000 people download the application. A few months later, there are a million.

The company cultivates its own distinct style. The offices display no signage, a legacy of Kazaa’s legal troubles that taught the founders discretion. In Tallinn, an inflatable pool sits in the meeting room. The team codes on the fly: as soon as a feature is designed, it’s integrated into the software.

In 2004, Skype launches SkypeOut. The service opens up to calls to landlines and mobile phones at rock-bottom prices. Calls between Skype users remain free, but it’s the beginning of monetization. Investors come running. The company raises $18 million from Index Ventures and Bessemer Venture Partners, among others.

The explosive growth attracts the giants. In September 2005, eBay acquires Skype for $2.6 billion. The Estonian developers become millionaires overnight, but the corporate culture begins to crack. Tensions rise between the Tallinn teams and those in London. The integration of a startup into a large company shows its limits, and relations deteriorate quickly. The founders retained intellectual property ownership of the peer-to-peer technology through their company Joltid, creating an inextricable situation. Microsoft finally acquires Skype in 2011 for $8.5 billion. A new chapter begins.

Under Microsoft’s leadership, Skype evolves. Edward Snowden’s revelations in 2013 show that the service, once renowned for its confidentiality, now cooperates with intelligence agencies. The rebel startup has become an institutional player in the digital world.

Skype’s impact on telecommunications remains massive. The service made free international calls accessible to everyone, popularized video conferencing. In 2012, Skype handled 167 billion minutes of international calls, more than all traditional telecom operators combined.

The founders have taken different paths. Jaan Tallinn invests in projects related to humanity’s survival; he now considers time more precious than money. The Estonian team, despite its fortune, maintains a certain modesty. No ostentatious spending, thoughtful investments.

Skype transformed an ambitious technical project into a universal communication tool. The service demonstrated the power of peer-to-peer and the importance of a simple interface. A technology can evolve, moving from disruptive innovation to institutionalization, while remaining useful to millions of people every day.


WordPress

In 2003, a software called b2/cafelog came to an abrupt end. Its creator abandoned the project, leaving behind open source code that two developers, Matt Mullenweg and Mike Little, decided to take over. Mullenweg was only 19 years old and his programming skills came from self-teaching, far from traditional university curricula. From this initiative was born WordPress, which would become one of the most widely used content management systems in the world.

The first versions of the software displayed two priorities: compliance with web standards and simple installation. The team developed the Famous Five Minute Installation, an installation process that required only five minutes where competitors often demanded more than half an hour. This philosophy of accessibility found its roots in the values of free software: offering everyone the freedoms to use, study, modify, and distribute the program. In contrast to proprietary software that locks down its code, WordPress opened itself completely.

Technical evolution followed a sustained pace. In 2004, version 1.2 introduced plugins, those extensions that allowed adding functionality without touching the system’s core. The following year, themes and static pages appeared in version 1.5, before 2.0 brought caching and a redesigned administration interface.
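The mechanism behind plugins can be sketched as a callback registry attached to named hooks. WordPress itself is written in PHP; the Python names below (add_filter, apply_filters, the hook name "the_title") merely echo its vocabulary and are not its API.

```python
# Minimal hook registry in the spirit of WordPress filters.
# (Illustrative sketch only -- WordPress is PHP, not Python.)
hooks = {}

def add_filter(name, callback):
    """A plugin registers a callback without touching the core."""
    hooks.setdefault(name, []).append(callback)

def apply_filters(name, value):
    """The core runs every registered callback over the value."""
    for callback in hooks.get(name, []):
        value = callback(value)
    return value

# A "plugin" that uppercases post titles, attached from outside the core.
add_filter("the_title", str.upper)
print(apply_filters("the_title", "hello world"))  # HELLO WORLD
```

The core never needs to know which plugins exist; it simply fires its hooks, and whatever has been registered runs.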

The year 2007 saw improvements in rapid succession. Version 2.1 integrated auto-save and spell checking. Widgets arrived with 2.2, facilitating customization of sidebars. Version 2.3 introduced tags and overhauled the taxonomy system, making content classification more flexible.

In 2008, the Happy Cog agency designed a new administration interface for version 2.5. The dashboard gained modular widgets and a shortcode API was born. A few months later, version 2.7 improved overall usability, added automatic updates, and simplified plugin installation. Threaded comment management enriched interactions between authors and readers.

Alongside technical development, Matt Mullenweg founded Automattic, the commercial company that hosts WordPress.com, the online version of the system. The organization adopted an unusual working model: teams were entirely distributed, with no central office. In 2014, Automattic exceeded one billion dollars in valuation with more than 300 employees spread across 37 countries.

WordPress’s success rested on several pillars. The plugin system today counts more than 35,000 extensions, covering needs as varied as e-commerce, security, or performance optimization. The software’s architecture appealed to developers who could rely on its components to create complex web applications, far beyond simple blogging.

The community organized around WordCamps, local conferences that spread worldwide. These gatherings created connections between users, developers, and businesses, nurturing a dynamic ecosystem where knowledge and best practices were exchanged.

Major organizations eventually adopted the platform. The New York Times, the Wall Street Journal, and the Washington Post used it for their websites. WordPress powered approximately 23% of global websites, proving its capacity to meet the most diverse requirements, from local artisans to international media.

Subsequent versions continued to enrich the system. Version 3.0 in 2010 brought custom post types and multi-site management. Responsive design integrated naturally, adapting to the smartphones and tablets flooding the market. The visual editor improved, emoji support arrived, and site icons integrated directly into the interface.

WordPress evolved into a complete application platform. Developers exploited its APIs to transform a simple blog into a booking system, an online store, or a learning portal.

The business model played on two fronts: the software remained free and open, but services like hosting, technical support, or premium features generated revenue. This approach guaranteed the project’s sustainability without sacrificing its accessibility.

Recruitment at Automattic reflected this particular culture. Candidates went through practical tests and trial periods, with actual skills counting more than diplomas. This method aligned with free software values, where effective contribution outweighed formal qualifications.

By democratizing website creation, WordPress enabled millions of people to exist and express themselves online. Its collaborative development model inspired other free projects, demonstrating that a global community could create and maintain large-scale professional tools.

Project management followed a pragmatic line. Technical decisions responded to users’ real needs rather than passing trends. This stability reassured businesses choosing WordPress for their strategic projects, knowing the platform would evolve without abrupt disruption.

WordPress has become a universal web platform. This transformation was achieved while preserving the founding principles of simplicity and openness, while integrating the innovations necessary for its growth. The software managed to grow without denying its origins, remaining faithful to the spirit that drove its two creators in 2003.


Xen

In 1999, at the Computer Laboratory of the University of Cambridge, Dr. Ian Pratt and his team launched the XenoServers project. Their vision: build an infrastructure of distributed servers across the Internet where individuals or organizations, once authenticated, could purchase computing resources on demand. The idea was not new in principle, but its technical implementation promised to be ambitious. The name XenoServers refers to the Greek xenos, the stranger to whom one offers hospitality without granting blind trust.

The project’s early steps focused on using Java virtual machines to execute client code. This solution ran into a prohibitive constraint: it required rewriting every application in Java. The team then changed course and aimed higher: virtualizing x86 hardware directly to run complete operating systems. At the time, virtualizing the x86 architecture was a technical challenge. Available approaches relied on instruction emulation and binary rewriting, methods that notably degraded performance.

Faced with these limitations, the researchers invented paravirtualization. The principle involves slightly modifying guest operating systems so they collaborate with a hypervisor, while allowing applications to run without modification. Keir Fraser, a doctoral student on the team, took charge of implementing this idea. He developed the core of the hypervisor and adapted Linux 2.2 to this new architecture. The hypervisor received the name Xen, which condenses that of the initial project.
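The contract can be sketched as follows. Everything here is an illustrative toy: real Xen hypercalls trap into a privileged hypervisor, and page-table updates involve validated machine words, not Python dictionaries.

```python
# Toy sketch of the paravirtualization contract: the guest kernel is
# modified to request privileged operations through hypercalls instead
# of executing privileged x86 instructions directly. All names are
# illustrative, not Xen's actual interface.
class Hypervisor:
    def __init__(self):
        self.page_tables = {}  # one isolated mapping per guest

    def hypercall_update_page_table(self, guest_id, virt, phys):
        # The hypervisor validates each request, preserving isolation
        # between guests sharing the same physical machine.
        self.page_tables.setdefault(guest_id, {})[virt] = phys

class ParavirtualizedGuest:
    def __init__(self, guest_id, hypervisor):
        self.guest_id = guest_id
        self.hv = hypervisor

    def map_page(self, virt, phys):
        # Instead of writing its page tables itself (a privileged
        # operation), the modified guest kernel asks the hypervisor.
        self.hv.hypercall_update_page_table(self.guest_id, virt, phys)

hv = Hypervisor()
guest_a = ParavirtualizedGuest("coca", hv)
guest_b = ParavirtualizedGuest("pepsi", hv)
guest_a.map_page(0x1000, 0x9000)
guest_b.map_page(0x1000, 0x5000)
# Same virtual address, distinct physical pages: isolation holds.
```

Because the guest cooperates rather than being trapped and emulated instruction by instruction, the overhead stays small, which was paravirtualization’s whole point.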

One of Xen’s technical bets lies in its ability to host virtual machines belonging to different clients on the same physical server. Isolation must be total: a client cannot access another’s data or disrupt their operations, and each receives exactly the resources they pay for. The designers like to illustrate this requirement with a compelling example: simultaneously selling services to Coca-Cola and Pepsi while guaranteeing them absolute confidentiality.

The following years saw the project gain momentum. Laboratories like Intel Research and HP Labs provided financial and technical support. In 2003, the first free version of Xen was released under a dual license: GPLv2 for the hypervisor, BSD for components integrated into operating systems. This BSD license choice was not insignificant: it simplified porting to proprietary systems. Microsoft Research became interested in the project, with Paul Barham leading a prototype port of Windows XP.

The publication of the article "Xen and the Art of Virtualization" at the ACM SOSP conference in 2003 marked the project’s academic consecration. Internet service providers began deploying Xen commercially, exploiting its capabilities for virtualizing computing, networking, and storage.

Christian Limpach contributed in 2004 to porting NetBSD to Xen before joining the Cambridge team. Under Pratt’s leadership, the open source community around Xen expanded. RedHat, SuSE, Sun, IBM, AMD: all adapted their operating systems or ported Xen to their hardware.

The researchers realized that extending the x86 architecture would make virtualization simpler and more efficient, while reducing the amount of privileged code required. A collaboration with Intel led to the development of Xen’s HVM mode, which leverages Intel’s VT-x extensions. This mode dispenses with some of the paravirtualization changes to operating systems, while maintaining paravirtualization where it provides real gains. AMD-V support followed.

The year 2004 also saw the main contributors create XenSource in response to growing demands for consulting services. Funding an open source-based company in the United Kingdom and Europe remained a challenging path, as these business models were not well established there. Ian Pratt reconnected with Simon Crosby, a former university colleague, and went to Silicon Valley where XenSource secured a first financing round of $8 million from KPCB and Sevin Rosen.

XenSource’s first year focused on open source development. The initial strategy? Provide management tools and collaborate with Linux distributions. But when the latter announced their own solutions, this approach showed its limits. At the end of 2005, Peter Levine and Frank Artale joined the company through Accel Ventures. Their commercial vision reoriented XenSource toward direct confrontation with VMware. A strategic partnership with Microsoft in 2006 strengthened the company’s credibility.

In 2007, Citrix acquired XenSource. Disappointed by VMware’s entry into the desktop virtualization market, which directly competed with its application virtualization business, Citrix saw in this acquisition the opportunity to have a Microsoft-compatible platform.

Meanwhile, Amazon built its cloud infrastructure on Xen starting in late 2005. With millions of virtual machines deployed, Amazon became Xen’s largest user, and probably the one deriving the greatest financial benefit from it. Amazon’s ability to virtualize Windows rests on the adoption of Intel platforms equipped with virtualization extensions. Since then, at AWS, Xen has gradually given way to Nitro, the in-house solution.

In September 2022, following Citrix’s acquisition by Vista Equity Partners and Evergreen Coast Capital, XenServer became an independent business unit within the Cloud Software Group. This reorganization opens a new chapter for a technology born in a British university laboratory.

Xen’s success testifies to the role of open source in technological innovation and the capacity of academic projects to produce solutions that leave a lasting mark on the industry.


Apache Cassandra

Web giants hit a wall. Facebook, with hundreds of millions of users, must handle a seemingly simple feature: inbox search. But beneath this simplicity lies a formidable technical challenge. Every day, billions of writes flood into the systems, and growth shows no sign of slowing down. Traditional databases, designed in the 1970s, reveal their limitations when faced with these volumes.

Avinash Lakshman and Prashant Malik set out to create a new solution to the problem. Lakshman doesn’t come empty-handed: at Amazon, he helped design Dynamo, a distributed storage system that serves as a reference. Their insight is to combine two previously distinct approaches. On one side, the architectural principles of Dynamo, proven at Amazon. On the other, the data model of BigTable, Google’s in-house solution. From this meeting, Cassandra is born, named after the prophetess of Greek mythology.

In July 2008, Facebook makes a decision that will change the game. Rather than jealously guarding its code, as Google and Amazon do by merely publishing papers describing their architectures, the social network releases Cassandra’s entire source code under the Apache license. The gesture marks a break from the practices of technology companies. The following year, the project joins the Apache Foundation incubator, before achieving in February 2010 the coveted status of top-level project.

Cassandra’s architecture breaks with established patterns. Where classic databases organize themselves according to a master-slave hierarchy, Cassandra adopts a model where all nodes play the same role. No conductor, no single point of failure. Data distributes automatically among participating machines, and the system scales horizontally without requiring an administrator to intervene. This total decentralization guarantees continuous availability, even when some nodes fail.

Data redundancy is embedded in the system’s DNA. Cassandra automatically replicates information across multiple nodes. The administrator simply indicates the desired number of copies, the rest happens automatically. If a machine fails, the data remains accessible elsewhere. This approach, simple in principle, proves effective in practice.
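
To make this placement concrete, here is a toy Python sketch of a consistent-hash ring with a replication factor. It is illustrative only, not Cassandra’s code: the node names are invented, and Cassandra itself uses a Murmur3-based partitioner rather than MD5.

```python
import hashlib
from bisect import bisect_right

def token(key: str) -> int:
    # Hash a key onto the ring (MD5 here for simplicity;
    # Cassandra's default partitioner uses Murmur3).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, replication_factor=3):
        self.rf = replication_factor
        # Each node is placed on the ring at its own token.
        self.ring = sorted((token(n), n) for n in nodes)

    def replicas(self, key: str):
        """Walk clockwise from the key's position and pick RF distinct nodes."""
        start = bisect_right([t for t, _ in self.ring], token(key))
        picked = []
        for i in range(len(self.ring)):
            node = self.ring[(start + i) % len(self.ring)][1]
            if node not in picked:
                picked.append(node)
            if len(picked) == self.rf:
                break
        return picked

ring = Ring(["node-a", "node-b", "node-c", "node-d"], replication_factor=3)
print(ring.replicas("user:42"))  # three distinct nodes hold copies of this row
```

Because every node can compute the same placement from the same ring, any node can route a request without consulting a central directory.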

Netflix offers a compelling example of what Cassandra makes possible. In 2011, the streaming service deploys a cluster of 288 instances in the cloud. The system handles 1.1 million writes per second coming directly from clients. With replication across three different availability zones, this figure climbs to 3.3 million writes per second. These performances would have been unthinkable a few years earlier with conventional technologies.

The arrival of the CQL language greatly facilitates adoption. Its syntax resembles that of SQL, known to millions of developers worldwide. A programmer familiar with relational databases can get to grips with Cassandra without relearning everything. Certainly, differences remain: no joins for example, which reflects the system’s denormalized data model. But the entry barrier drops considerably.

The community grows rapidly. In 2012, over 1,000 production deployments are already running, from eBay to Disney and Netflix. Tools multiply, integrations with Hadoop, Spark, or Solr enrich the ecosystem. The big data world finds in Cassandra a reliable pillar.

DataStax emerges in 2010 and brings a commercial dimension to the project. The company hires the main contributors and offers an enterprise version with additional features, technical support, and administration tools like OpsCenter. The open source model finds there its viable economic counterpart.

The numbers speak for themselves. A comparative study presented at the Very Large Data Bases (VLDB) conference in 2012 pits Cassandra against HBase. Read latency times prove up to 100 times faster, throughput eight times higher. These results don’t come out of nowhere: they stem directly from the initial architectural choices.

Versions follow one another and bring their share of improvements. Version 1.0 in 2011 optimizes performance and compresses data. Version 2.0 in 2013 introduces lightweight transactions via the Paxos protocol. Version 3.0 in 2015 adds materialized views. Version 4.0 in 2021 deploys zero-copy streaming and supports Java 11. Each iteration pushes the system’s capabilities a bit further.

Instagram, acquired by Meta, relies on Cassandra to manage data for over one billion active users each month. Use cases diversify: time series, real-time analysis, multimedia content management, e-commerce. Wherever volumes explode and unavailability costs dearly, Cassandra finds its place.

The impact extends beyond the purely technical dimension. Cassandra demonstrates that a database can combine high availability, linear scalability, and high performance. These three pillars respond exactly to the needs of contemporary web applications, where every minute of downtime counts in millions and where data never stops growing.

Cassandra finds favorable ground in hybrid and multi-cloud deployment. The system operates transparently across multiple data centers or different cloud providers. While keeping their data consistent, companies thus escape dependence on a single vendor.

Other projects draw inspiration from Cassandra. ScyllaDB incorporates its concepts in C++ to gain even more performance. The decentralized architecture and tunable consistency model now serve as a reference in the field of distributed storage.

More than fifteen years after its birth, Apache Cassandra remains a cornerstone of the NoSQL ecosystem. Its story tells of the maturation of distributed architectures and their capacity to adapt to the ever-increasing demands of modern applications. What began as an internal solution at Facebook has become a de facto standard for anyone who must manage massive data with strict availability guarantees.


Google Gmail

In 2001, the free email market was dominated by three established players: Microsoft Hotmail, Yahoo! Mail, and AOL. Each offered roughly the same thing—a few megabytes of storage space for messages. Users spent their time sorting through emails, regularly deleting old messages to avoid exceeding the imposed limit. This was the norm, and no one really complained, probably because the service was free.

At Google, an employee finally voiced her frustration. She was tired of this constant chore, those 4 megabytes that forced her to constantly archive, organize, and delete. Paul Buchheit, an engineer at the company, heard this complaint. At the time, Google only did web search. Developing an email service represented a complete pivot for the company, a departure from its usual territory.

Buchheit wasn’t attempting this for the first time. Before joining Google, he had worked on a prototype webmail that never saw the light of day. Drawing on that failed attempt, he decided this time to move quickly. In a single day, he created a first version by recycling existing code from Google Groups, which was used to search through Usenet archives. This first draft did only one thing: search through one’s own emails. Nothing more, but it was a start.

The real technical breakthrough lay in the use of JavaScript and the XMLHttpRequest object. JavaScript had a bad reputation at Google, associated with invasive ads that polluted the web. However, this technology proved essential for building a responsive interface, without those page reloads that disrupted the user experience. The combination of JavaScript and XMLHttpRequest, which would later be called AJAX, made it possible to create something that truly resembled software installed on a computer.

Development proceeded step by step, with each new feature first tested by Google employees. Feedback came directly, improvements followed immediately. One of the notable ideas was organizing messages into conversations rather than isolated emails. This presentation adopted the logic of Usenet discussion groups, where following a thread of exchanges happened naturally.

The question of money arose. Marissa Mayer, who oversaw product development, leaned toward the classic model: limited free space, a paid option for those who wanted more. Buchheit proposed something else: displaying ads related to message content. The idea sparked debate about privacy and confidentiality. But this approach gave birth to AdSense, which became one of Google’s main revenue sources.

On April 1st, 2004, Google announced Gmail with an incredible promise: 1 gigabyte of free storage. Two hundred and fifty times more than the competition. The announcement date and the scale of the offer sowed doubt. Google loved April Fools’ pranks, and offering so much space seemed technically impossible, economically absurd. Many believed it was a joke.

The service launched by sending out invitations. Early users received a few invitations to distribute, creating a viral distribution system. This artificial scarcity allowed for gradual server scaling while building an image of an elitist, desirable service. Gmail invitations were even traded on eBay.

Gmail transformed Google. The company proved it could do more than web search, that it could create sophisticated web services. The techniques developed for Gmail, particularly that famous use of AJAX, became web development standards. Google published the Google Web Toolkit in 2006, a framework that allowed other developers to use these technologies.

The beta phase lasted five years. Gmail emerged from it in 2009, after progressively integrating new features: chat, spell checking, spam detection, connections with Calendar and Drive. The interface evolved but kept its original principles, that simplicity which was its strength.

The impact went far beyond email. Gmail redefined what could be expected from a web application. Microsoft and Yahoo! had to go back to the drawing board, drastically increasing their storage space. More broadly, the service showed that a web application could rival software installed on a hard drive.

Gmail reflects how Google approaches innovation. A trial-and-error approach, rapid iterations, attention to real needs. This method blends technical thinking with field observation. The service continues to evolve by integrating artificial intelligence to automatically sort messages, suggest responses, and detect phishing attempts.


Scala

Martin Odersky launched Scala’s development in 2001 within the Programming Methods Laboratory at the École Polytechnique Fédérale de Lausanne. His approach addressed a specific question: how to better support component-based software in programming languages?

Three years later, in January 2004, the first public version of Scala emerged on the Java Virtual Machine platform. A .NET version followed in June. The chosen name, “Scala”, is a contraction of “scalable language”: an extensible language that adapts to its users’ needs. This notion of adaptability runs through the entire design: from small scripts to large systems, the tool must be able to keep pace.

Scala’s originality lies in its fusion of object-oriented and functional programming under static typing. Where other languages keep these paradigms separate, Scala unites them. Its type system incorporates advanced concepts such as abstract types and path-dependent types, inherited from the νObj calculus. Modular composition relies on mixins and traits. Views, in turn, offer modular component adaptation.

Compatibility with Java strongly shapes the design. Scala adopts a substantial portion of Java’s syntax and type system. A Java developer finds familiar territory. Java libraries can be used directly in Scala, and vice versa. Scala classes inherit from Java classes and implement their interfaces. This interoperability enables the insertion of Scala code into existing Java projects without rewriting everything.

March 2006 marked the release of a second major version. The type system gained robustness, class composition mechanisms became more refined. The syntax remains deliberately conventional, yet conceals technical sophistication that allows the expression of complex concepts in few lines.

Scala’s influences are manifold. On the object side, Simula and Smalltalk inspire the uniform object model. Universal nesting—this ability to nest almost all constructs within one another—comes from Algol, Simula, and then Beta. The functional approach recalls the ML family with SML, OCaml, or F#. Implicit parameters find their roots in Haskell’s type classes, adapted here to the conventional object world. The concurrency library based on actors owes much to Erlang.

Rather than imposing a fixed set of constructs, Scala relies on extensibility. Programmers create their own abstractions. This philosophy evokes the bazaar rather than the cathedral, according to Eric Raymond’s metaphor: the language grows through the addition of constructs invented by its users, not through modifications to its core.

Scala’s technical contributions are notable in several areas. The uniform treatment of generic types and abstract types breaks new ground. Class composition via traits offers an original path. The extraction mechanism enables pattern matching independent of representation. These innovations have been presented at various specialized conferences.

Industry has recognized Scala’s value, especially for distributed applications and massive data processing. Apache Spark, written in Scala, demonstrates the language’s capabilities on complex data processing systems. The concise syntax reduces code size compared to Java—up to 50% in common cases.

The Swiss National Science Foundation, Microsoft Research, the MICS Research Competence Center, the European PalCom project, and the Hasler Foundation supported the development. The developer community actively participates with feedback and code contributions.

Scala’s evolution reflects a pragmatic approach. There’s no intent to build a perfect and rigid system. Adaptability and extensibility take precedence. This strategy gives developers the means to enrich the language according to their needs, creating a thriving ecosystem of libraries and frameworks.

In 2025, Scala continues its journey while maintaining its founding principles: the unification of object-oriented and functional programming, extensibility, and interoperability with established platforms. The language nourishes reflections on programming language design and software system architecture.


NGINX

Igor Sysoev had a problem. In 2002, this Russian engineer was working for Rambler, an internet portal that was seeing hundreds of thousands of simultaneous connections flooding in. The servers were struggling. Apache, the dominant solution of the time, was showing its flaws: each connection demanded its own system resources, creating a burden that became unsustainable when traffic surged. Sysoev was looking for something else, a different approach to handle this avalanche of requests.

He then set out to write a new web server, which he named NGINX (pronounced engine-x). Its architecture broke with conventions: instead of allocating a process or thread per connection, the software adopted an asynchronous, event-driven model. A master process supervised everything, a few worker processes shared the workload, and this handful of components was enough to handle thousands of requests in parallel, achieving spectacular resource efficiency.
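
The contrast with one-connection-per-process can be sketched in a few lines of Python using the standard selectors module. NGINX itself is written in C on top of mechanisms like epoll and kqueue; this sketch only conveys the shape of the event-driven idea.

```python
import selectors
import socket

# One event loop in a single process watches many connections at once,
# instead of dedicating a thread or process to each.
sel = selectors.DefaultSelector()

# Simulate three simultaneous client connections with socket pairs.
pairs = [socket.socketpair() for _ in range(3)]
for i, (client, served) in enumerate(pairs):
    served.setblocking(False)
    sel.register(served, selectors.EVENT_READ, data=i)
    client.sendall(b"request-%d" % i)

handled = {}
while len(handled) < len(pairs):
    # select() reports only the connections ready to read:
    # the loop never sits blocked on any single client.
    for key, _ in sel.select(timeout=1):
        handled[key.data] = key.fileobj.recv(1024)
        sel.unregister(key.fileobj)
        key.fileobj.close()

for client, _ in pairs:
    client.close()

print(sorted(handled))  # [0, 1, 2]
```

Because the per-connection cost is a small bookkeeping entry rather than a whole process, the same machine can keep tens of thousands of connections open.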

In October 2004, Sysoev released version 0.1.0. He chose this date as a reference to the launch of Sputnik, a nod to the first artificial satellite. The code was distributed under a BSD license, freely available. The first to try it were Estonian and Russian sites like rate.ee and zvuki.ru, which used NGINX to serve their MP3 files. The newcomer proved its worth.

The software gained ground, first in the Russian-speaking sphere, then beyond. System administrators discovered its talents: serving static content with remarkable speed, acting as a reverse proxy, distributing load across multiple servers, managing cache, and serving as a FastCGI gateway. NGINX proved versatile, reliable, fast. Its reputation crossed borders.

Seven years after the first public version, in 2011, Sysoev and Maxim Konovalov founded NGINX, Inc. The idea? Offer commercial support and advanced features while preserving the open source version. Investors reacted quickly: BV Capital and Runa Capital injected 3 million dollars as early as October 2011. Other rounds followed: 10 million in 2013, 20 million in 2014, 43 million in 2018. The company developed NGINX Plus, a paid offering that complemented the community version with extended monitoring and analytics capabilities.

F5 Networks acquired NGINX in March 2019 for 670 million dollars, a striking validation of the software’s importance in contemporary web infrastructure. But the story took an unexpected turn in December of the same year: police raided NGINX’s Moscow offices. Rambler, Sysoev’s former employer, claimed intellectual property rights over the code. The matter was resolved thanks to the intervention of Sberbank, a Rambler shareholder, but the episode left its mark.

On the technical front, NGINX continued to evolve. It integrated HTTP/2, refined its compression and caching mechanisms, handled headers with flexibility. Shared memory was used to store cache, manage session persistence, control throughput. This efficiency proved appealing: in 2022, over 33% of websites worldwide ran on NGINX, surpassing Apache in various categories. Platforms like WordPress.com trusted it. The software adapted to modern architectures, including microservices and cloud.

January 2022: Sysoev announced his departure from NGINX and F5. He left behind a project that had reshaped the web’s infrastructure. His latest creation, NGINX Unit, launched in 2017, targeted multilingual environments and microservices architectures. The legacy endures.

From Rambler to servers worldwide, the software grew as the web changed its face. Its success stems from clever architectural choices, a pragmatic approach to performance issues, and a successful balance between open source code and a viable business model.


OAuth

In 2006, Blaine Cook was working on Twitter’s API and facing a problem that was becoming increasingly common among web developers. He was looking for a way to connect Twitter to Flickr, the photo-sharing service. The only available solution was to ask users to entrust their Flickr credentials to Twitter. A security aberration, but it was the norm at the time.

This technical impasse prompted Cook and other developers from major web platforms to come together. Google had AuthSub, Yahoo used BBAuth, AOL offered OpenAuth, and Flickr had its own API. Everyone was cobbling together their own solution independently. They needed to move beyond this fragmentation. In October 2007, their collaboration resulted in OAuth 1.0, an open protocol designed to solve this puzzle: how to authorize a third-party application to access a user’s data without ever handling their password?

The mechanism relies on a simple yet effective principle. Imagine an online printing service that wants to access photos stored on another site. Instead of demanding the photo account credentials, it obtains a specific access token, limited to only the necessary operations. The user maintains control: they explicitly authorize access from a secure interface on the site hosting their photos. The printing service never knows the password.

The first security flaws emerged, and in June 2009, OAuth 1.0a corrected vulnerabilities that allowed identity spoofing. This revision clarified gray areas in the initial protocol and strengthened protections against attacks. The following year, in April 2010, the IETF published OAuth 1.0 as RFC 5849, conferring official technical specification status upon it.

The protocol defines three actors: the resource owner (the user), the client (the application requesting access), and the authorization server. The application first obtains temporary credentials. It then redirects the user to an authorization page where they review the requested permissions. After validation, the application receives a token it will use for future requests. At no point does it handle the user’s credentials.

Security relies on safeguards. Each request must be signed with a secret key. The protocol incorporates unique tokens (nonces) and timestamps to block any malicious reuse attempts. The designers anticipated session hijacking, replay attacks, and spoofing.
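
The shape of such a signature can be sketched in Python. This is a simplified illustration, not a compliant RFC 5849 implementation: the real rules for percent-encoding and parameter collection are stricter, and the URL and secrets below are invented.

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote, urlencode

def sign_request(method, url, params, consumer_secret, token_secret):
    """Simplified OAuth 1.0-style HMAC-SHA1 signing."""
    # A fresh nonce and a timestamp make each signature single-use:
    # a captured request cannot simply be replayed later.
    oauth_params = dict(params,
                        oauth_nonce=secrets.token_hex(8),
                        oauth_timestamp=str(int(time.time())))
    # Parameters are normalized (sorted, encoded) so both sides
    # compute the signature over exactly the same string.
    normalized = urlencode(sorted(oauth_params.items()))
    base_string = "&".join(
        quote(part, safe="") for part in (method.upper(), url, normalized))
    key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request("GET", "https://photos.example/api/photos",
                   {"photo_id": "42"}, "consumer-secret", "token-secret")
print(len(sig))  # 28: a Base64-encoded HMAC-SHA1 value
```

The password never appears anywhere: only the shared secrets and the token enter the computation, and only the resulting signature travels with the request.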

Facebook, Google, Microsoft, and Twitter adopted OAuth. This massive adoption by web giants created an ecosystem of interconnected applications. Libraries appeared in all common programming languages, making integration more accessible.

But OAuth 1.0 showed its limitations. The implementation complexity, particularly for cryptographic request signing, discouraged developers. Native mobile applications, increasingly numerous, posed problems not originally anticipated. These constraints motivated a complete overhaul of the protocol with OAuth 2.0.

The OAuth protocol established the standard for delegated authorization on the web. It introduced concepts now central: explicit user consent, strict limitation of granted privileges, clear separation between authentication and authorization.

OAuth also played an educational role. Developers and users became aware of the issues surrounding personal data protection. The protocol demonstrated that it was possible to create interconnected services without sacrificing security. This awareness accompanied the evolution of privacy regulations.

Moreover, modern architectures bear the mark of OAuth. OpenID Connect builds on its foundations to add a standardized authentication layer. REST APIs and microservices adopt its architectural principles. Connected devices and decentralized identity systems explore new paths starting from the foundations laid by OAuth.

The community continues to adapt the protocol to new uses. Feedback from large-scale deployments feeds reflections on its future evolution. OAuth remains a pillar of web security, nearly twenty years after its creation.


Git

In 2005, Linus Torvalds found himself in an embarrassing situation. BitKeeper, the version control system the Linux community had been using for years, was no longer freely available to kernel developers. The story began with Andrew Tridgell, a respected developer, who decided to study BitKeeper’s protocol through reverse engineering. Larry McVoy, BitKeeper’s creator, considered this approach a violation of the terms of use and revoked the free license granted to Linux developers.

Torvalds attempted mediation for a few weeks, without result. Faced with this impasse, he had to find an alternative. He examined existing systems, but none satisfied him. Though he considered version control one of the least interesting areas of computing, just after databases, he embarked on creating his own tool. His experience with BitKeeper had at least taught him one thing: the advantages of a distributed system where each developer has a local copy of the repository.

Torvalds then disappeared for a week. When he resurfaced, he presented the first version of Git. Initial development progressed at a remarkable pace. In ten days, Git was capable of managing its own source code. This speed did not come from a massive amount of code written, but from deep reflection on data organization. Torvalds had designed an architecture that favored conceptual simplicity and performance.

The system relied on several technical choices. Each developer had a complete copy of the repository, which eliminated dependency on a central server and solved write access issues. Git used SHA-1 cryptographic hashing to identify objects in the repository. This approach guaranteed code integrity and made any modification of the history impossible without detection.
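
The object-identification scheme is simple enough to reproduce in a few lines of Python. For a file’s contents, Git hashes a short header ("blob", the size in bytes, a NUL byte) followed by the bytes themselves:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the identifier Git assigns to a file's contents:
    SHA-1 over the header "blob <size>\\0" followed by the bytes."""
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The identifier of the empty file, familiar to any Git user.
print(git_blob_id(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Trees hash the blobs they contain, and commits hash trees: changing a single byte anywhere changes every identifier above it, which is what makes undetected rewriting of history impossible.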

Branch and merge management constituted one of Git’s major innovations. These operations, known for being complex in previous systems, were simple and fast. Torvalds sometimes performed dozens of merges per day during intense periods of Linux kernel development. This fluidity transformed the way developers worked.

A few months after its creation, Torvalds entrusted Git’s maintenance to Junio Hamano. He recognized in him a developer with excellent technical judgment. Under his direction, Git evolved to become more accessible while retaining its qualities of speed, reliability, and flexibility.

Git’s adoption happened gradually. Google’s Android project chose it as its version control system in 2009, which brought important validation from a large company. The creation of GitHub in 2008 by Tom Preston-Werner, Chris Wanstrath, and P.J. Hyett accelerated the tool’s democratization. The platform offered a user-friendly web interface and social networking features for developers.

Microsoft, long reluctant about open source software, gradually changed its position. In 2012, the company began contributing to libgit2, a Git development library. The following year, it integrated Git support into Visual Studio and offered hosting via Azure DevOps. In 2017, Microsoft migrated Windows development to Git, creating the world’s largest Git repository with 300 GB of source code.

Microsoft’s acquisition of GitHub in 2018 for $7.5 billion marked a symbolic moment. This transaction illustrated Microsoft’s transformation and confirmed Git’s dominant position in the software industry. Stack Overflow’s 2018 statistics revealed that 88.4% of developers used Git, far ahead of Subversion (16.6%) or Mercurial (3.7%).

Git’s design reflected the specific needs of Linux kernel development. The ability to handle a large number of simultaneous contributions, the speed of operations, robustness against errors: these characteristics, combined with its distributed nature, made it a tool suited to large-scale open source projects. Developers could work autonomously on their local copies, then synchronize their modifications with the rest of the team when they wished.

In 2024, Git remains the de facto standard for version control in the software industry. It has transformed the way developers collaborate and share their code. The principles of its distributed architecture have inspired many other tools and practices in the field of software development.


Redis

In 2009, Salvatore Sanfilippo, an Italian developer known by the pseudonym “antirez,” ran into a frustrating problem. The web log analyzer he was developing for his startup was painfully slow with MySQL. Rather than patching together yet another optimization, he decided to create his own database. What started as a prototype in Tcl evolved into a C project called Redis, short for REmote DIctionary Server.

A few weeks after writing the first lines of code, Sanfilippo made a choice that would seal his project’s future: he released it as open source. The decision proved wise. Developers embraced Redis en masse, drawn to its execution speed and apparent simplicity. The project achieved a success beyond its creator’s expectations.

Redis breaks with traditional database conventions. Where most store their data on disk, Redis keeps everything in RAM. This architecture provides unparalleled access speed, but requires persistence mechanisms to guard against unexpected system shutdowns. Yet Redis isn’t just fast: it natively handles varied data structures like lists, sets, hash tables, and sorted sets. A richness that distinguishes it from simple key-value stores.

VMware hired Sanfilippo in 2010 to work on Redis full-time while preserving the project’s BSD license. Pieter Noordhuis, a leading contributor, joined the team shortly after. This professionalization accelerated development and Redis gradually established itself in modern architectures.

Tech giants recognized its value. GitHub and Instagram were among the notable early adopters. Twitter, Uber, Stripe, Slack, and Alibaba followed suit. This widespread adoption stems from the tool’s versatility: Redis works equally well as a database, cache system, or message broker. Three roles for the price of one.

Sanfilippo designed Redis as a construction set. Rather than a monolithic product with fixed features, he created elementary building blocks that developers can assemble according to their needs. Lua scripting, the publish-subscribe system, and data streams were added progressively, following this modular philosophy. Redis resembles programmer’s Lego, where everyone builds their own solution.

The creation of Redis Labs in 2015 (which became simply Redis) professionalized the ecosystem. This structure offers commercial services around the project while maintaining its open nature. After eleven years leading Redis, Sanfilippo stepped back in 2020 to focus on writing and his family. A withdrawal from daily operations, not a complete break.

Redis’s technical architecture makes bold choices. Its single-threaded model avoids locking headaches and simplifies concurrency management. For data persistence, two mechanisms coexist: periodic snapshots and the append-only file (AOF). Master-slave replication distributes load and ensures availability. Pragmatic solutions that have proven their worth.
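
The append-only file is the easier of the two persistence mechanisms to picture: journal every write, then replay the journal to rebuild memory after a restart. A toy Python sketch, with JSON lines standing in for Redis’s actual on-disk command format:

```python
import json

class AppendOnlyLog:
    """Sketch of the append-only-file idea (illustrative, not Redis code)."""

    def __init__(self):
        self.log = []   # stands in for the file on disk
        self.state = {}

    def set(self, key, value):
        self.log.append(json.dumps(["SET", key, value]))  # journal first...
        self.state[key] = value                           # ...then apply in RAM

    def replay(self):
        """Rebuild the in-memory state from the journal alone."""
        recovered = {}
        for line in self.log:
            cmd, key, value = json.loads(line)
            if cmd == "SET":
                recovered[key] = value
        return recovered

aof = AppendOnlyLog()
aof.set("lang", "C")
aof.set("lang", "Tcl")   # the later write wins on replay, as on the live server
aof.set("year", "2009")
print(aof.replay() == aof.state)  # True
```

A snapshot, by contrast, dumps the whole state at one instant; Redis lets the two coexist, trading replay time against durability granularity.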

Redis introduces a distinctive way of thinking about data. Instead of limiting itself to classic key-value pairs, it directly manipulates complex structures through atomic commands. This approach reduces application code complexity and limits network exchanges between client and server. A significant time saver when working at scale.
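
A sketch of that idea, assuming a leaderboard built on a sorted set: the toy class below (illustrative Python, not Redis code) exposes an atomic increment in the spirit of ZINCRBY, so updating a score is one server-side command rather than a read-modify-write sequence that another client could interleave with.

```python
import threading

class TinySortedSet:
    """Toy sorted set: members with scores, queryable in score order.
    Each command runs under a lock, mimicking Redis's
    one-command-at-a-time execution model."""

    def __init__(self):
        self._scores = {}
        self._lock = threading.Lock()

    def zincrby(self, member, delta):
        # Atomic read-modify-write on the "server" side.
        with self._lock:
            self._scores[member] = self._scores.get(member, 0) + delta
            return self._scores[member]

    def ztop(self, n):
        # Highest-scoring members first, like ZREVRANGE 0 n-1.
        with self._lock:
            ranked = sorted(self._scores.items(), key=lambda kv: -kv[1])
            return [member for member, _ in ranked[:n]]

board = TinySortedSet()
for player, points in [("ada", 5), ("bob", 3), ("ada", 4), ("eve", 10)]:
    board.zincrby(player, points)

top_two = board.ztop(2)  # eve has 10 points, ada has 9
```

One command, one network round trip, one consistent result: that is the saving the paragraph above describes.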

The story could end there, but Sanfilippo returned in 2024 as Redis evangelist. His new mission: adapting the tool for the AI era. Beyond the controversy over the new license, no longer open source by OSI standards, he’s working on vector sets, an extension of sorted sets tailored for embeddings and language models. Redis evolves with its time.


YouTube

On February 14, 2005, three former PayPal employees registered the trademark, logo, and domain name for a project that would change the way we consume video on the Internet. Chad Hurley, Steve Chen, and Jawed Karim had the intuition that something was missing on the web: a simple way to share videos. The idea emerged during a dinner in San Francisco, fueled by two events in 2004 that highlighted this technological gap. First, Janet Jackson’s wardrobe malfunction at the Super Bowl, whose videos remained impossible to find online. Then, the Asian tsunami, whose images struggled to circulate on the web.

In May 2005, the site launched in beta version and immediately attracted 30,000 visitors per day. The first video uploaded gave no indication of the phenomenon to come. In this 19-second sequence titled "Me at the Zoo," Jawed Karim filmed the elephants at San Diego Zoo and commented on the size of their trunks. Nothing spectacular, yet this mundane fragment has now accumulated over 375 million views. Four months later, a Nike advertisement featuring Ronaldinho crossed the one-million-view threshold, revealing the platform’s commercial potential.

The official launch took place on December 15, 2005, made possible by a $3.5 million investment from Sequoia Capital. This funding was invested in improving servers and increasing bandwidth, as growth was dizzying. In January 2006, the site recorded 25 million daily views. March saw the number of available videos exceed 25 million, with 20,000 new uploads each day. The following summer, the 100-million daily views milestone was reached.

This explosive expansion generated considerable costs in IT infrastructure and Internet connections. The copyright issue surfaced when numerous media companies discovered their protected content in videos uploaded by users. In October 2006, Google settled the matter by acquiring YouTube for $1.65 billion in stock. Rather than merging the platform with Google Video, its own service that was struggling to gain traction, the Mountain View giant chose to keep YouTube running as an autonomous operation.

Google then negotiated agreements with entertainment companies to reduce legal risks related to copyright. These arrangements authorized the use of protected content and allowed users to incorporate certain copyrighted music into their videos. In 2008, a partnership with MGM went further by authorizing free streaming of films and series, funded by advertising.

Through the Partner Program in May 2007, content creators could now monetize their viral videos, transforming their passions into profitable activities. Some users generated six-figure revenues. Institutional adoption followed in 2009, when the U.S. Congress and the Vatican opened their official channels.

YouTube Live arrived in April 2011, streaming concerts, sporting events, and royal weddings live. In December 2012, the "Gangnam Style" music video became the first to exceed one billion views, and its runaway count later forced the platform to upgrade a view counter about to overflow the limits of a signed 32-bit integer.
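
The arithmetic behind that ceiling is short. Assuming the counter was a signed 32-bit integer, the largest value it can hold sits just above two billion, and one more increment wraps around to a large negative number:

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647, the largest signed 32-bit value

def as_int32(n):
    """Interpret n as a signed 32-bit integer (two's-complement wrap)."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

wrapped = as_int32(INT32_MAX + 1)  # what one extra view would have produced
```

Moving the counter to 64 bits pushed the limit past nine quintillion, far beyond any plausible view count.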

The business model diversified with paid services. Music Key appeared in 2014, replaced by YouTube Red in 2015, and YouTube Premium in 2018. These offerings provided ad-free viewing and additional features. In 2016, YouTube Go targeted emerging markets with an application optimized for limited connections.

Artificial intelligence entered the platform. In 2019, the Video Reach service used machine learning to optimize advertising campaigns. The following year, YouTube abandoned its manual categorization system in favor of more sophisticated classification algorithms.

In 2023, the platform claimed approximately 2.5 billion monthly active users, making it the second-largest social network globally behind Facebook. The YouTube Shorts format, a direct response to the short videos popularized by TikTok, generated over 50 billion daily views. YouTube Premium reached 125 million subscribers in March 2025.

In twenty years, YouTube has revolutionized online video content consumption. The platform established a new democratized distribution model, where anyone can become a creator. Its influence on digital culture, education, entertainment, and advertising continues to expand, making this service a pillar of the modern Internet.


Amazon Web Services

In 2003, Amazon was an e-commerce site growing rapidly. The servers struggled to keep up with the pace of orders, and every activity spike revealed the limitations of an infrastructure hastily built in 1994. Jeff Bezos and his teams sought ways to handle the load without endlessly multiplying hardware investments.

The company had launched a service meant to enable retailers like Marks & Spencer to build their own stores on the Amazon platform. Except no one had anticipated the architectural chaos awaiting developers. Code had piled up over the years without any real master plan. Extracting the software components cleanly to offer them to third parties was impossible. Everything had to be rebuilt from the ground up.

Amazon therefore began breaking down its infrastructure into distinct services, each accessible through well-documented programming interfaces. This reorganization made the service functional, but it had an unexpected side effect: internal teams began working differently. Each service became autonomous, reusable, standardized. A new discipline took root in software development.

Meanwhile, Andy Jassy, Bezos’s chief of staff, noticed a troubling paradox. Amazon was hiring engineers extensively, yet development speed stagnated. Digging deeper, he discovered that teams were losing three months setting up their technical environment before starting to code. Database, compute servers, storage: everyone was rebuilding the same foundations on their own. A tremendous waste of time and energy.

In summer 2003, Bezos gathered his executives at his home for a strategic retreat. What began as a routine session on Amazon’s core competencies stretched well beyond the planned thirty minutes. Beyond e-commerce and logistics, one thing became clear: Amazon knew how to manage complex IT infrastructures. The constraint of thin margins had forced the company to optimize its data centers to excellence. This expertise was invaluable.

The idea then emerged to transform this infrastructure into an operating system for the Internet. All the components already existed at Amazon, but no one else had access to them. Developers worldwide were reinventing the same building blocks, losing the same three months, hitting the same problems. Amazon could sell them these ready-made components.

The vision took time to materialize. Amazon Simple Queue Service opened to developers in 2004; Simple Storage Service and Elastic Compute Cloud followed in 2006, and together these three services formed the foundation of AWS. Developers could now rent computing power, storage, and message queues without buying a single server. Everything billed based on usage, like water or electricity.
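
The utility metaphor translates directly into arithmetic: a bill is metered usage multiplied by unit prices, nothing more. The rates below are invented for illustration; they are not real AWS prices.

```python
# Hypothetical unit prices (illustration only, not actual AWS rates).
PRICES = {
    "compute_hours": 0.10,       # per instance-hour
    "storage_gb_month": 0.03,    # per GB stored for a month
    "requests": 0.0000004,       # per API request
}

def monthly_bill(usage):
    """Pay-as-you-go billing: sum of usage times unit price."""
    return sum(usage[item] * PRICES[item] for item in usage)

# A small site: one instance all month, 50 GB, two million requests.
startup_usage = {"compute_hours": 720, "storage_gb_month": 50,
                 "requests": 2_000_000}
bill = monthly_bill(startup_usage)  # about 74.30
```

No servers purchased, no idle capacity: when traffic halves, the next bill roughly halves with it.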

The business model disrupted established practices. Gone were the massive investments in machine rooms sitting idle half the time. Resources adjusted to demand, scaled up during peaks, scaled down during lulls. Technical complexity disappeared behind simple interfaces. Companies could finally focus on their business rather than managing servers.

AWS entered a virgin market. Surprisingly, competitors took years to react. Microsoft, IBM, Google watched the train pass before launching their own efforts. Too late: AWS had set the standards for cloud computing and captured over 30% of the market by 2016.

Revenue climbed to 10 billion dollars annually. From startups to giants like Netflix or Dropbox, everyone adopted Amazon services. An obsession with operational efficiency partly explained this success. Amazon built massive data centers in strategic locations, where electricity and cooling were cheap. Economies of scale allowed it to regularly lower prices while remaining profitable.

AWS marked the transition from ownership computing to consumption computing. Like public utilities during the industrial revolution, the cloud became a commodity available on demand.


Hadoop

In 2006, Yahoo! faced a major challenge. Its WebMap infrastructure, the technology that built the web graph powering its search engine, could no longer keep up. The graph contained over 100 billion nodes and one trillion edges. The previous infrastructure, called Dreadnaught, had reached its limit at 800 machines. The architecture needed a complete overhaul to keep pace with the growth of the web. Yahoo! chose Apache Hadoop.

The story began two years earlier. In 2004, researchers at Google published two papers that would change everything, describing the Google File System and the MapReduce architecture. These publications established the technical foundation: a distributed file system capable of storing immense data volumes, combined with a model for processing them in parallel at large scale.

The first version of Hadoop emerged within the Apache Nutch project, before becoming an independent project in 2006. Four years later, Google obtained a patent on the MapReduce algorithm but granted a license to the Apache Software Foundation, paving the way for massive industry adoption.

The initial architecture rested on two pillars. HDFS (Hadoop Distributed File System) managed distributed data storage, while MapReduce coordinated the execution of processing tasks. A central service, the JobTracker, oversaw all operations, and TaskTrackers executed tasks on each cluster node.
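
The MapReduce model itself fits in a few lines. Here is a minimal word count in plain Python, with the shuffle step, which Hadoop performs across the whole cluster, reduced to an in-memory grouping:

```python
from collections import defaultdict
from itertools import chain

def map_words(line):
    # Map phase: turn each input record into (key, value) pairs.
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle phase: group values by key (the framework's job in Hadoop).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_counts(key, values):
    # Reduce phase: collapse each key's values into one result.
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = chain.from_iterable(map_words(line) for line in lines)
counts = dict(reduce_counts(k, v) for k, v in shuffle(mapped).items())
```

Because map calls are independent and each reduce call needs only its own key’s values, both phases parallelize across thousands of machines without the programmer writing any coordination code.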

This monolithic architecture quickly revealed its weaknesses. Developers attempted to repurpose MapReduce for uses it was never intended for; some launched map-only jobs just to run web servers or iterative calculations. These workarounds, despite their ingenuity, compromised the system’s stability and efficiency.

The Apache community undertook a major overhaul, from which YARN (Yet Another Resource Negotiator) emerged. This new version separated resource management from the programming model: a central ResourceManager allocates cluster resources to applications, and each application has its own ApplicationMaster to coordinate its execution.

This architectural redesign transformed the ecosystem. New frameworks emerged: Apache Tez for executing task graphs, Spark for in-memory processing, Storm for real-time processing, Giraph for graph computing. Users were no longer constrained by MapReduce’s limitations.

Hadoop adoption in industry reached considerable scale. Yahoo! deployed YARN across all its production clusters, where the system processed approximately 500,000 jobs daily, representing over 230 years of computation. On a 2,500-machine cluster, the transition to YARN doubled average CPU utilization: the company ran twice as many tasks as before.

Performance improved as well. In 2013, the MapReduce implementation on YARN set records in the Daytona and Indy GraySort benchmarks, sorting 1.42 terabytes of data per minute on a 2,100-node cluster. These results demonstrated its ability to process massive volumes efficiently.

Hadoop’s architecture influenced the design of modern distributed systems. Its ability to run on commodity hardware, its fault tolerance, and its simple programming model made it a reference point. The concept of data locality, where computation moves to the data rather than the reverse, became a defining principle of distributed computing.

Hadoop’s success inspired numerous competing platforms. Twitter threw its weight behind Mesos, a Berkeley-born project, while Facebook designed Corona and Google developed Omega. Each system brought its own innovations, but all shared Hadoop’s architectural heritage: the separation between resource management and programming models.

Hadoop’s evolution reflects the transformation of data processing needs. From a specialized tool for web indexing, it grew into a generic platform supporting a multitude of applications, a transformation accompanied by continuous improvements in performance, flexibility, and ease of use.

By demonstrating the viability of large-scale distributed processing on commodity hardware, Hadoop democratized massive data analysis. Companies of all sizes could now build high-performance data processing infrastructures without investing in expensive specialized hardware. The collaboration between companies, developers, and researchers within the Apache Foundation enabled rapid advances and widespread knowledge sharing.


jQuery

In 2005, John Resig was a computer science student at the Rochester Institute of Technology. He spent his days coding websites and constantly ran into the same problem: making JavaScript work across different browsers was an uphill battle. Each browser had its own rules, its own quirks. You had to write multiple versions of the same code, test, fix, and start over. This situation exhausted developers and slowed everyone down.

He decided to create his own solution. The following year, at a technology event called BarCampNYC, he presented the first version of jQuery. His ambition could be summed up in one sentence: make JavaScript programming enjoyable. He drew inspiration from the work of Simon Willison, a British developer whose getElementsBySelector script allowed selecting page elements with CSS expressions. Resig took this idea and pushed it much further.

jQuery offered a simple interface for manipulating web pages. Developers could select HTML elements with complex expressions, modify them, and animate them. The selection engine worked remarkably well. Later, this engine was extracted into a separate library called Sizzle, as it represented such a significant technical advancement.

But jQuery’s real strength didn’t lie solely in its code. Resig had understood something that many other developers overlooked: good documentation was worth its weight in gold. Between 2006 and 2007, jQuery was the only open-source JavaScript library to offer complete and clear documentation. Other projects left developers to fend for themselves with the source code. This attention to detail changed everything.

Resig’s first hire was not an additional developer but a community manager: someone whose job was to answer questions and support users. He was thinking ahead; a technical tool doesn’t spread on its own.

Initially, jQuery remained a side project. Resig was working on other things. His hiring at Mozilla changed the game. The company allowed him to dedicate work time to jQuery. He could then build the necessary infrastructure and prepare the project to continue without him. He later left Mozilla for Khan Academy, but jQuery was already launched.

The library achieved tremendous success in the late 2000s. It made sophisticated techniques like animations and Ajax requests accessible. Developers adopted it massively. According to W3Techs, approximately 74% of websites used jQuery at its peak. A staggering figure.

The project grew, and so did its governance. A jQuery Board emerged in 2011, followed by a jQuery Foundation in 2012. Then came a merger with the Dojo Foundation in 2015, creating the JS Foundation. In 2019, another merger with the Node.js Foundation: the OpenJS Foundation was born, and jQuery retained its status as a major project.

But the web was evolving. Browser vendors—Apple, Google, Microsoft, and Mozilla—began working together within the Web Hypertext Application Technology Working Group. Incompatibilities between browsers decreased. Browsers themselves integrated features that made jQuery less indispensable. The Fetch API replaced Ajax functions. The querySelector and querySelectorAll methods enabled element selection just as jQuery did. The classList interface simplified CSS class manipulation.

Browsers adopted automatic and continuous updates. Fixes and new features reached users automatically, without manual action. Developers could exploit new web features more quickly. They needed fewer compatibility solutions.

Internet Explorer gradually died. Microsoft discontinued support for versions 10 and earlier in 2016, keeping only IE 11. This disappearance freed developers from the constraints that had originally justified jQuery.

New frameworks emerged: React, Angular, Vue. They proposed a different approach, component-based, facilitating the construction of complex interfaces. They encouraged declarative programming where the developer describes the desired state rather than the steps to achieve it. jQuery, on the other hand, remained imperative.

Some major players abandoned jQuery. GitHub removed it from its interface. Bootstrap announced it would drop it in version 5, as jQuery constituted its main client-side JavaScript dependency.

Yet jQuery didn’t disappear. For simple sites requiring little interactivity, the library remains a proven, easy-to-implement solution, and its imperative approach stays more accessible to beginners than modern declarative paradigms. In certain professional contexts where old versions of Internet Explorer still run, it retains its usefulness. The library continues to evolve, maintained by a loyal community, and many developers keep using its concise syntax by choice even where native alternatives exist. The progression of web standards and the emergence of new approaches testify to the continuous evolution of the web, a process to which jQuery contributed greatly by establishing practices that endure.


PowerShell

In 2002, Jeffrey Snover wrote a document at Microsoft that would transform system administration on Windows. The “Monad Manifesto” presented an uncommon vision at the time: creating a shell that manipulates objects rather than text. The project’s name referred to Leibniz and his concept of fundamental units that could, through aggregation, form more elaborate systems.

Windows administration was going through an identity crisis at the time. On one side, simple graphical tools for basic operations. On the other, complex programming languages for advanced automation. In between, a void. Administrators struggled to compose automation solutions without diving into heavy programming. Microsoft had spent years simplifying Windows for novices, paradoxically making certain tasks more difficult for experienced users who favored the efficiency of command lines.

Snover drew inspiration from UNIX shells while seeking to overcome their limitations. In these environments, commands produce unstructured text that subsequent commands must parse, a fragile and error-prone process. PowerShell’s innovative idea lay in its object-oriented approach: the shell passes structured .NET objects between commands, eliminating this fragility.
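
The contrast can be sketched in a few lines of Python (an analogy, not PowerShell syntax): the first pipeline must re-parse text and silently breaks if the column layout changes, while the second passes structured records untouched.

```python
# Text pipeline (UNIX-style): downstream stages re-parse strings.
listing = "PID   NAME\n312   sshd\n1049  nginx\n"
rows = [line.split() for line in listing.splitlines()[1:]]
pids_from_text = [int(pid) for pid, name in rows]  # breaks if columns move

# Object pipeline (PowerShell-style): stages receive structured records.
processes = [
    {"pid": 312, "name": "sshd"},
    {"pid": 1049, "name": "nginx"},
]
pids_from_objects = [proc["pid"] for proc in processes]  # no parsing at all
```

In PowerShell the records are full .NET objects, so a downstream cmdlet can also read typed properties or call methods, not just split fields.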

The project nearly never came to be. Between 2004 and 2006, several major initiatives based on managed code at Microsoft experienced resounding failures, causing Windows Vista delays. Promoting a scripting language relying on .NET was risky. The Exchange Server team, the first to adopt PowerShell, developed an intermediate API layer that would allow them to switch to another solution if problems arose.

PowerShell finally arrived in 2006. It introduced “cmdlets”, lightweight commands implemented as .NET classes. This architecture ensured syntactic and semantic consistency rarely found in traditional shells, where each command defines its own conventions. Version 2.0, delivered with Windows 7, enhanced the tool with remote script execution and background tasks.

The standardization of verbs in command names (Get-, Set-, New-) and the introduction of providers marked a breakthrough. Providers expose different data sources (file systems, the Windows registry, Active Directory) through a uniform syntax. Administrators no longer had to relearn a set of commands for each source, which significantly flattened the learning curve.

Adoption accelerated when Microsoft integrated PowerShell into its server products. Exchange Server 2007 made PowerShell mandatory for certain tasks. This decision forced administrators to take the plunge, quickly recognizing the advantages in terms of efficiency and automation.

The shell’s architecture broke with the traditional approach. Instead of creating standalone executables, developers write .NET classes inheriting from a common base class. This class provides parameter parsing, data validation, logging, and error handling out of the box. This code sharing reduces development costs and improves the user experience.

The Integrated Scripting Environment (ISE), added later, offered administrators a modern tool for writing and debugging scripts. Syntax highlighting, auto-completion, and a graphical debugger supported the transition from graphical interfaces to script-based automation.

An active community formed around PowerShell. Administrators shared scripts and best practices, while software vendors adopted the tool to administer their products. This adoption validated Snover’s initial vision of a unified automation platform.

Security occupies a central place in PowerShell. The shell integrates code signing mechanisms and detailed activity logging. Organizations can control script execution and trace their use, meeting security and compliance requirements.

The impact on Windows administration proved considerable. PowerShell transformed interaction with the system, replacing repetitive manual actions with reproducible automated solutions. This evolution improved efficiency and reduced human errors in system operations.

In 2016, Microsoft made PowerShell open source under the MIT license and cross-platform. This decision reflected the company’s evolution toward greater openness and acknowledged the reality of modern IT environments, where Windows coexists with Linux and macOS.


MongoDB

In 2007, three American entrepreneurs, Dwight Merriman, Eliot Horowitz, and Kevin Ryan, embarked on a platform-as-a-service project. They found that traditional relational databases struggled to meet the scalability and agility requirements of their infrastructure. This constraint led them to design their own solution.

The context of the era saw the myth of the universal database crumbling. IT architectures were diversifying, with everyone seeking answers to specific problems. NoSQL databases were emerging, as were time-series databases. This fragmentation of the technological landscape forced a rethinking of tooling choices, as each type of system addressed particular needs.

MongoDB was built around a document-oriented model that uses BSON, a binary variant of JSON. This architecture allows for flexible data representation while maintaining high performance. The system finely manages cache and data locality, with a limit set at 16 MB per document to ensure optimal memory usage.

Technical evolution brought its share of innovations. WiredTiger, the storage engine later adopted, introduced ACID compliance in transactions. MongoDB thus managed to marry the flexibility of NoSQL with the transactional rigor of traditional relational systems.

Horizontal scaling ranks among MongoDB’s major assets. Two sharding mechanisms coexist: range-based and hash-based. This architecture distributes data across multiple servers, making scaling transparent to applications. The query router exploits this distribution to direct each query to the right shards, so the code written by developers can remain identical regardless of the physical placement of data.
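
The two routing strategies can be sketched as follows (simplified Python with made-up shard names; MongoDB’s real router, mongos, consults config servers rather than hard-coded tables):

```python
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2"]

def hash_shard(key):
    """Hash-based sharding: spreads keys evenly, ignores ordering."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Range-based sharding: each shard owns a contiguous key interval.
RANGES = [(0, 1_000, "shard-0"), (1_000, 2_000, "shard-1"),
          (2_000, 10**9, "shard-2")]

def range_shard(key):
    """Range-based sharding: neighboring keys land on the same shard."""
    for low, high, shard in RANGES:
        if low <= key < high:
            return shard
    raise KeyError(key)
```

Hash-based placement avoids hotspots but scatters range scans across every shard; range-based placement keeps scans local at the risk of concentrating hot keys on one server.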

Availability reaches 99.99% without human intervention thanks to automatic replication and self-healing capabilities. Replica sets automatically fail over during an outage, with a secondary node taking the place of the primary through a well-established election process.

Security, often the Achilles’ heel of NoSQL databases, has benefited from continuous improvements. MongoDB now integrates encryption at rest via TDE, LDAP and Kerberos authentication, TLS/SSL encryption, as well as protections against injection attacks. These additions place it in a strong position against its NoSQL competitors.

Use cases emerge naturally. Real-time analytics applications on large volumes, such as those in financial markets or Twitter stream processing, find MongoDB a valuable ally. E-commerce leverages its behavioral analytics capabilities to refine customer experience. The Internet of Things represents another prime territory, as the flexibility of the data model effortlessly handles the heterogeneity of information collected by sensors.

Commercialization comes in two versions: enterprise and open source. This dual approach creates an active developer community while ensuring steady revenue. Integration with major cloud platforms like Amazon Web Services and Azure strengthens its presence in the modern database market.

Some limitations nevertheless appear. MongoDB is poorly suited to highly relational data, environments requiring strict referential integrity, or operations involving multiple joins between documents. The absence of stored procedures and triggers constrains business logic outside the database. Data warehouses and complex graph representations do not correspond to its strengths.

MongoDB’s influence on the database ecosystem remains tangible. Its success validated the NoSQL approach for enterprise applications and shaped the design of modern data management systems. Its ability to process unstructured, semi-structured, and structured data within the same environment has redefined certain industry standards.

Ongoing development reflects the changing needs in data management. The progression toward distributed architectures, the growing importance of real-time analytics, and the emergence of new application models have charted its evolution. This continuous market adaptation explains its current position.


Apple iPhone

In January 2007, Steve Jobs took the stage at Macworld in San Francisco to present a device whose existence was known to almost no one. Two years of absolute secrecy preceded this moment. Only three AT&T executives had been allowed to see the iPhone before its public unveiling. Jobs had negotiated directly with the American carrier for an exclusive partnership that would redraw the balance of power in the telecommunications world.

The phone market in 2007 looked much like it had several years earlier. BlackBerry dominated with its physical keyboards, Palm offered its Treo devices, and users made do with complex interfaces that required real learning time. Mobile Internet remained a frustrating experience, messaging worked after a fashion, and the idea of a truly intuitive phone seemed out of reach.

The iPhone arrived with its 3.5-inch touchscreen and a simple idea: you touch directly what you want to do. No stylus, no keyboard hidden beneath the screen. Fingers are enough, and the surface responds to multiple simultaneous touch points. Apple patented this multipoint technology and placed it at the heart of a device that claims to be three things at once: a phone, an iPod, and a way to browse the Internet. This convergence had never truly worked before.

Jobs applied his usual obsessiveness to the iPhone. He scrutinized every screw, examined the case curves from every angle, and even had the packaging redone because the adhesive strip wasn’t exactly where it should be. At Apple, the product unboxing experience matters as much as the product itself. This obsession with detail reflected a particular vision of what a technological object should be: something that works before you’ve read the manual.

When the iPhone reached stores in June 2007, lines formed spontaneously. Apple and AT&T sold two versions: $499 for 4GB, $599 for 8GB. These prices frightened no one. The company modestly aimed for 1% of the global market by 2008, or 10 million units. It would far exceed this target.

The distribution broke with industry practices. The iPhone would not be sold at Best Buy or RadioShack. Apple imposed its own stores and AT&T’s as the only points of sale. This restriction gave Apple complete control over the device’s presentation and the purchasing experience. Traditional retailers would wait impatiently for months.

Technically, the iPhone accumulated innovations like the proximity sensor that turns off the screen when you bring the phone to your ear. An ambient light sensor adjusts brightness according to surrounding light. An accelerometer rotates the display when you turn the device. The operating system derives from OS X and handles multitasking with an efficiency that astonished observers. Apple built hardware and software simultaneously, making them communicate from the design stage. Its competitors separated these two worlds.

Success transformed Apple. The company dropped the word “Computer” from its corporate name in favor of simply Apple Inc. This change reflected the reality of a company that had long since stopped selling only computers. The iPhone accelerated this transformation and shifted the center of gravity toward mobile devices.

A culture of secrecy reigned within the company. Teams worked on project fragments without seeing the whole. Only a handful of managers knew the complete vision. This compartmentalization frustrated some engineers but ensured that nothing leaked before official announcements. Jobs directed everything from a highly centralized structure, refusing to create autonomous divisions that might dilute control.

The iPhone landed in the United Kingdom and France in late 2007, then in Asia the following year. In each country, Apple replicated the American model: a single carrier, an exclusive agreement, controlled distribution. These practices raised legal questions in Europe, where bundling a phone with a single carrier ran up against certain regulations. Apple negotiated case by case.

Competitors responded as best they could. Nokia launched its N95 with enhanced multimedia capabilities. Other manufacturers rushed toward touchscreens, often without understanding that Apple’s success stemmed as much from the software interface as from the hardware. The market shifted in a matter of months: what the iPhone offered became the new benchmark.

In 2008, Apple opened the App Store and changed the rules again. External developers could create applications and sell them directly to users. The iPhone thus gained thousands of functions that no one had imagined at the outset. An economic ecosystem emerged almost instantly, attracting independent programmers and large companies alike.

Harvard University estimated the value of free publicity generated by the iPhone launch at $400 million. Apple had spent almost nothing on traditional campaigns. The media did the work for free, relaying every announcement, every waiting line, every first impression. This marketing success rested on meticulous preparation and a keen sense of spectacle.

Apple’s market value doubled in the twelve months that followed. Analysts understood that the iPhone was not just another product but the company’s new pillar. Jobs had been right: a device that combines phone, media player, and Internet browser in a fluid interface can change usage patterns. It marked a breakthrough in the history of mobile computing and telecommunications.

Clojure

In 2005, Rich Hickey embarked on the Clojure adventure. He financed this venture with his retirement savings, taking a sabbatical year to design a language that would solve the problems he had been encountering daily for years. His ambition? To create a tool as acceptable in enterprise settings as Java or C#, but free from the unnecessary complexities that burden information system development.

Hickey was no novice. Since 1987, he had worked as a software architect, navigating between C++, Java, and C#. This long experience taught him one thing: traditional object-oriented languages impose a steep price. Imperative programming with mutable state demands enormous effort whenever you need to modify a program without breaking data consistency. These rigid systems resist change despite encapsulation and abstractions.

Alongside his work, Hickey explored Common Lisp in his personal projects. This practice opened his eyes: all the complexity he fought at the office wasn’t inevitable. With Lisp’s flexibility, you tackle a problem at exactly the right level, without superfluous layers. But Common Lisp remained confined to experimentation. Companies refused to deploy applications written in it. Twice, Hickey had to rewrite systems he had developed in Common Lisp in C++, or transform them into SQL stored procedures, to satisfy his clients.

The idea then germinated to marry Lisp’s power with Java’s acceptability. Hickey chose the Java Virtual Machine as his foundation. This tactical decision immediately eliminated adoption barriers: Clojure is a first-class citizen of the Java ecosystem, with its native interoperability and ability to reuse existing libraries.

The first technical challenge was to make immutable data structures performant enough for practical use. During the winter of 2006-2007, Hickey implemented a persistent version of hash array mapped tries with a branching factor of 32. Java’s array copy primitives proved efficient. This breakthrough delivered near-constant performance for common operations, finally making immutability viable in production.
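The trick can be sketched in a few lines. The following Python toy is an illustration of the path-copying idea only, not Clojure's actual Java implementation (which also maintains a tail buffer and other refinements). It shows why an update touches so few nodes: one node per level is copied, and every other subtree is shared between the old and new versions.

```python
# Toy persistent trie in the spirit of Clojure's vectors: "updating" copies
# only the nodes on the path to the changed slot; all other subtrees are
# shared between old and new versions. (Illustration only; tails and
# error handling are omitted.)
BRANCH = 32  # Clojure uses 32-way branching: 5 bits of the index per level

class Node:
    __slots__ = ("children",)
    def __init__(self, children):
        self.children = children

def update(node, index, value, depth):
    """Return a new root reflecting the change; only one node per level is copied."""
    children = list(node.children)           # shallow copy of a single node
    if depth == 0:
        children[index % BRANCH] = value     # leaf level holds the values
    else:
        slot = (index >> (5 * depth)) & (BRANCH - 1)
        children[slot] = update(children[slot], index, value, depth - 1)
    return Node(children)
```

Because untouched subtrees are shared by reference, both the old and new versions remain fully usable at near-constant cost per operation.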

In October 2007, the first alpha version of Clojure appeared on the jFli mailing list. The clojure.org site opened its doors, accompanied by a Google group. The reaction exceeded all predictions: 30,000 visits in three days. The eighteen months that followed saw the language expand considerably. The standard library quadrupled, growing from 100 to over 400 functions.

Clojure distinguishes itself through its state management. The language offers several reference types according to needs: refs use a transactional system inspired by databases, agents handle asynchronous updates, atoms offer simple atomic modifications. This approach contrasts with the uniform but limited model of traditional imperative languages.
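Atoms are the simplest of these reference types: a value is only replaced by applying a pure function in a retry loop. This rough Python sketch mimics the behavior; real Clojure atoms rely on the JVM's lock-free compare-and-swap instruction, whereas here a lock simulates the atomic compare step.

```python
# Rough Python analogue of a Clojure atom: state changes only by applying
# a pure function in a compare-and-swap loop, so concurrent writers never
# install a torn or lost update. (A lock stands in for the JVM's CAS.)
import threading

class Atom:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def swap(self, fn, *args):
        while True:
            old = self._value            # snapshot
            new = fn(old, *args)         # compute outside the critical section
            with self._lock:
                if self._value is old:   # nobody changed it meanwhile?
                    self._value = new
                    return new           # otherwise: retry with a fresh snapshot
```

The caller never coordinates explicitly; contention just causes a retry, which is exactly the programming model atoms offer.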

May 2009 marked the release of Clojure 1.0, accompanied by the first book dedicated to the language. Version 1.1 arrived in December with chunked sequences and transients, bringing substantial performance gains. Version 1.2 in 2010 introduced protocols and user-defined types, completing the language’s polymorphic arsenal.

The year 2011 opened a new chapter with ClojureScript. This implementation runs on JavaScript engines, extending Clojure to client-side web development. The project demonstrates the robustness of the language’s abstractions: they work on a platform radically different from the JVM without conceptual modification.

In 2012, Hickey launched Datomic, a database that transposes Clojure’s principles to storage: immutability, explicit time, functional semantics. The EDN format (Extensible Data Notation) formalizes the serialization syntax used by Clojure, offering an alternative to traditional formats.

The community grew at its own pace. The Clojure/conj conferences began in 2010. ClojureBridge was born in 2014 to diversify the ecosystem through free workshops. The language found its place in various sectors: finance, climatology, retail, data analysis, publishing, healthcare, advertising, genomics.

Major players adopted it for critical systems. Walmart processes electronic receipts from its 5,000 stores with Clojure. Netflix analyzes some two trillion events per day with the language. Nubank, a digital bank with over 12 million customers, builds its infrastructure on Clojure and Datomic.

The success rests on functional programming with performant immutable structures, native interoperability with the host platform, powerful concurrency abstractions, and generic data processing. These strengths make Clojure a tool suited to contemporary information systems. In 2025, development continues its measured course, improving tooling and developer experience rather than accumulating features.

F#

Microsoft sought to consolidate its .NET platform in the face of Java’s rising dominance. The F# language emerged from a unique intersection: that between academic work on functional programming and the pragmatic needs of the software industry.

F#’s roots reach back to the 1970s, with Robin Milner’s work on ML (Meta Language). This language introduced type inference and pattern matching, concepts initially designed for manipulating mathematical proofs. The ML family, which includes Standard ML and OCaml, formed the theoretical foundation from which F# would inherit its core principles.

In 1997, Microsoft launched "Project 7," an ambitious initiative aimed at integrating fourteen languages, both academic and commercial, into the .NET ecosystem. Don Syme, a researcher at Microsoft Research in Cambridge, began developing F# in 2002. His goal? To marry the elegance of functional languages with the richness of the .NET platform.

The first public version arrived in 2005. F# distinguished itself through its ability to fuse functional and object-oriented programming. But above all, it offered seamless interoperability with other .NET languages. Developers could use existing libraries without friction, a decisive advantage for adoption.

Thanks to the inferred type system inherited from ML, the compiler automatically deduces the types of expressions. Code gains conciseness without sacrificing safety. Active patterns facilitate extensible pattern matching, while computation expressions simplify the writing of asynchronous code. These mechanisms transform the manipulation of complex data into fluid operations.

Microsoft integrated F# into its official lineup in 2007. The language found its place in financial and scientific domains, where the rigor of typing and the conciseness of functional code meet critical requirements. F# 2.0 was released in 2010, the first version fully supported by Visual Studio.

In 2012, F# 3.0 introduced type providers. This mechanism automatically generates types corresponding to external data schemas, whether from SQL databases, web services, or structured files. With data integration made almost trivial, institutions like Credit Suisse and Morgan Stanley adopted F# for their analysis and trading systems, where every error can cost millions.

In 2010, Microsoft released F#’s source code under the Apache 2.0 license. This openness catalyzed the formation of a contributor community. The F# Software Foundation was established in 2014, providing an organizational framework for the language’s development. F# gradually emerged from Microsoft’s shadow to become a vibrant community project.

The language adapted to the changing landscape of computing. Projects like WebSharper and Fable enabled writing web applications in F#. Support for .NET Core opened the path to cross-platform development. F# was no longer confined to Windows; it ran on Linux and macOS, aligning with the cloud computing era.

C# borrowed several of F#’s features: pattern matching, non-nullable types, and certain functional constructs. The approach to asynchronous programming via computation expressions also inspired other languages. F# demonstrated that a functional language could coexist with object-oriented programming and even enrich it. It remains a niche language, certainly, but a niche where code quality and application safety matter above all else.

Groovy

In 2003, Java reigned over the computing world with its considerable library of components. Yet its rigid architecture discouraged those who simply wanted to script a task, prototype, or write minimal code. Ruby, Python, and Smalltalk seemed more agile, but their syntax bewildered programmers accustomed to Java. James Strachan then created Groovy: a language that would remain faithful to the Java spirit while eliminating its burdens.

The idea was simple: keep Java’s appearance to avoid disorienting developers, but lighten the syntax and reduce constraints. Strachan wanted a “Python with Java flavor,” where Groovy would serve as glue between existing Java libraries. The language had to integrate naturally into the JVM ecosystem without disrupting established habits.

March 2004 marked a decisive milestone: Groovy was submitted as Java Specification Request (JSR 241) and accepted by vote. This process required rigorous specification, a reference implementation, and compatibility tests. The workload proved overwhelming. Tensions emerged within the community over which directions to take. Strachan eventually withdrew, leaving Guillaume Laforge to take up the torch.

The JSR was never voted on in its final form, but this mandatory process forced the team to structure the language. Without this constraint, the features accumulated during initial development would likely have formed a disparate set rather than a coherent whole.

January 2007 saw the birth of version 1.0. It brought lexical closures, scripts, builders, interpolated strings (GStrings), and named parameters. The language natively integrated regular expressions, operator overloading, and literal syntax for lists and maps. The Groovy Development Kit enriched approximately 430 standard Java classes with new methods.

Two years later, Groovy exceeded its initial role as a scripting language. It was now used to develop complete applications. Its runtime metaprogramming capabilities, already robust, were supplemented by compile-time features. This evolution reduced code complexity without sacrificing performance.

In 2012, the language was gaining ground on projects traditionally reserved for Java. Users demanded more compile-time type checking and better performance. With Groovy 2.0, the team added optional static type checking and static compilation. Groovy didn’t abandon its dynamism: it simply offered the choice between static and dynamic typing according to project needs.

AST (Abstract Syntax Tree) transformations, introduced with version 1.6, were game-changing. They allowed extending the language without touching its grammar. This declarative approach, based on annotations, facilitated the application of Java design patterns. The @Immutable annotation, for example, automatically generated all the code necessary to create an immutable class conforming to best practices.
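As a point of comparison, Python reaches a similar effect with frozen dataclasses: the constructor, equality, and mutation guards are generated for you, though through runtime class decoration rather than Groovy's compile-time AST rewriting. (The Point class below is a made-up example.)

```python
# Rough analogue of Groovy's @Immutable: the decorator generates
# __init__, __eq__, __repr__ and blocks mutation after construction.
# (Runtime decoration in Python, versus compile-time AST rewriting in Groovy.)
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int
```

In both languages the point is the same: boilerplate for a well-known pattern is generated from a single annotation instead of being written by hand.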

Version 2.4 brought Android support, which led to numerous optimizations in bytecode generation. The amount of code produced decreased, memory consumption of internal structures dropped, and performance improved. February 2020 saw the arrival of version 3.0 with its new parser based on Antlr4, offering better compatibility with modern Java syntax.

Groovy’s organizational history resembles an obstacle course. First hosted on Codehaus, the project migrated to the Apache Software Foundation in 2015 after the platform’s closure. G2One, SpringSource, VMware, Pivotal, and OCI sponsored its development in turn. Despite these changes, the open source community stayed the course and ensured the language’s progression.

Groovy left its mark on other technologies. Kotlin drew inspiration from its lexical closures, builder concept, default “it” parameter for closures, templates and interpolated strings, Elvis operator, and safe navigation. Swift and C# adopted similar features like the safe navigation operator.

Groovy’s philosophy rests on pragmatic extensibility. Rather than integrating every possible feature into the language core, it provides mechanisms for users to enrich the language according to their needs. AST transformations, type checking extensions, and domain-specific language (DSL) creation illustrate this approach.

The language remains alive and its community continues to innovate while preserving compatibility with Java, which remains at the heart of its design. Groovy maintains its balance between increased productivity for Java developers and dynamic features for specific use cases.

Android

In 2003, Andy Rubin founded Android Inc. in Palo Alto with the ambition of creating an operating system for mobile devices. The name suited him well: at Apple and later at MSN, his colleagues had nicknamed him “Android” because of his passion for robotics. A few years earlier, he had already founded Danger Inc., the company that gave birth to the Danger Hiptop, the phone that T-Mobile commercialized as the Sidekick in 2002.

Initially, the Android Inc. team focused on digital cameras. But that market collapsed while mobile phones were taking off. Rubin and his cofounders Rich Miner, Nick Sears, and Chris White decided to pivot toward a mobile operating system built on the Linux kernel. Money was desperately scarce. Steve Perlman, a friend and investor, even had to bring 10,000 dollars in cash to prevent bankruptcy.

Google acquired Android Inc. in 2005 for 50 million dollars. Larry Page and Sergey Brin were thinking ahead: they wanted to extend their empire beyond computer search. On July 11, 2005, the entire Android team joined the Mountain View offices.

At that time, the mobile landscape belonged to others. Nokia’s Symbian dominated everything with 63.5% of the global market in 2007. Microsoft’s Windows Mobile captured 12%, RIM’s BlackBerry carved out 9.6%. And Apple arrived in January 2007 with the iPhone, redefining what a smartphone was. Facing these giants, Google chose a different path: offering an open and free system.

The Open Handset Alliance was born in 2007. This consortium brought together 34 companies, from phone manufacturers to operators to various technology companies. Their mission was to develop open standards for mobile devices. Android was this alliance’s reference system, distributed as open source under the Apache license.

The first public version was released in 2008. The architecture relied on the Linux kernel, which managed the hardware. Above it ran Dalvik, a virtual machine specially designed by Google to optimize execution on mobile devices. Each application ran in its own instance of this virtual machine, so multiple programs could execute simultaneously.

Versions followed one another, named after desserts. Cupcake (1.5) arrived in 2009 with widgets and GPS navigation. Donut (1.6) improved search and accepted different screen resolutions. Eclair (2.0-2.1) managed different email accounts and refined the interface.

Froyo (2.2) in 2010 accelerated performance and strengthened security. Gingerbread (2.3) integrated NFC and better conserved battery. Honeycomb (3.0) in 2011 adapted Android for tablets. Ice Cream Sandwich (4.0) unified the experience between smartphones and tablets, with innovations like facial unlocking.

Jelly Bean (4.1-4.3) in 2012 made the interface smoother thanks to “Project Butter”. KitKat (4.4) streamlined the system so it could run on modest machines. Lollipop (5.0) in 2014 introduced “Material Design”, Google’s new aesthetic. Marshmallow (6.0) rethought application permissions and extended battery life.

The business model contrasted sharply with competitors. Google offered the system to manufacturers, making money through advertising and services like the Play Store. Apple kept its ecosystem closed, Microsoft charged licensing fees: Android took a third route.

Adoption exploded. In 2010, more than 100 million devices ran Android. In 2021, there were 3 billion worldwide. This dominance attracted the attention of competition authorities. The European Commission imposed a 4.34 billion euro fine on Google in 2018 for abuse of dominant position.

The system quickly exceeded its initial scope. It equipped tablets, smartwatches, televisions, cars. Android Wear handled wearables and Android Auto handled automobiles, making the system a universal platform for mobile computing.

The technical architecture evolved with needs. ART progressively replaced Dalvik to improve performance and energy efficiency. The system integrated facial recognition, contactless payment, augmented reality. Security was continuously strengthened: data encryption, application isolation in sandboxes, monthly updates. The more transparent permission model gave users greater control.

Android transformed the mobile industry. The system democratized access to smartphones by allowing manufacturers to offer devices at all price points. It stimulated innovation through a platform open to developers. Its open-source model inspired other projects and influenced the evolution of mobile systems.

This success rests on a balance: the power of the Linux kernel, the efficiency of a virtual machine designed for mobile, an intuitive interface. Android satisfied manufacturers, developers, and users while adapting to technological changes and new use cases.

Google Chrome

In September 2008, Google launched the beta version of Chrome, its own web browser. At that time, Internet Explorer dominated the market and Firefox had established itself as the essential alternative. But web applications were growing in complexity, HTML5 was in preparation, and existing browsers were showing their limitations. Google observed that browsers handled these new applications poorly, regularly causing crashes that affected the entire browsing experience.

Chrome arrived with a multi-process architecture. Each tab runs in its own process, isolated from the others. When a page crashes, only the affected tab closes. The browser’s other components continue to function normally. This break from traditional monolithic browsers transformed the user experience.

Security occupied a central place from the design phase. Sandboxing isolates rendering processes by drastically limiting their privileges and access to the operating system. A compromised process cannot infect the rest of the system. Chrome relies on four Windows security mechanisms to strengthen this protection against malware.

The V8 JavaScript engine marked a major technical breakthrough. Developed by a Google team in Denmark, it compiles directly to machine code without going through an interpreter. Hidden classes optimize the processing of similar objects. The garbage collector is precise while remaining lightweight, making JavaScript execution much faster than on competing engines.

Chrome’s interface is striking in its simplicity. The developers followed the principle of "simple interface, powerful core." They maximized page display space by reducing interface elements to the strict minimum. The address bar and search field merge into the omnibox. This minimalist approach would later inspire many other browsers.

Google simultaneously released Chromium, the open source version (and foundation) of Chrome. The majority of the source code was released under the BSD license, allowing developers to use it freely. This strategic choice promoted the adoption of Chrome technologies and stimulated innovation throughout the browser ecosystem.

Performance remains at the heart of concerns. In 2015, Chrome introduced a JavaScript bytecode cache. Frequently used scripts have their compiled code saved to disk. Pages load faster, even after a browser restart. This optimization illustrates the constant attention paid to execution speed.

Chrome initially used WebKit, the rendering engine created by Apple, as its foundation. But Google retained only the WebCore component and developed its own adaptation layer. This customization gave it complete control over browser performance, and in 2013 Google forked the engine entirely into its own project, Blink.

Success came quickly. One year after its launch, in August 2009, Chrome was the third most widely used browser in the world, surpassing Safari and Opera. In November 2010, it crossed the 10% market share threshold in the United States. The progression accelerated year after year.

In 2020, Microsoft abandoned EdgeHTML and rebuilt Edge based on Chromium. Brave, Opera, and many other browsers rely on this technology. A rich ecosystem developed around the project.

The Chrome Web Store, launched in 2010, centralized extension distribution. This structured platform facilitated extension discovery and installation, an asset in Chrome’s adoption, enriching the browser’s functionality.

Automatic updates changed the way browsers are distributed. Chrome updates silently in the background, without user intervention. This approach improves security by ensuring everyone has the latest fixes. No more waiting for users to decide to manually update their browser.

Chrome evolves with the Web. The browser progressively integrates advanced technologies like the CSS Paint API for customizing graphic rendering, or CSS 3D transformations for sophisticated visual effects. These features meet the needs of modern web applications that now rival native applications.

Chrome’s architecture inspires operating system design. Chrome OS, launched in 2011, extends the browser’s principles to the scale of a complete OS. Google sees this as the future of computing, a web-centric world where the browser is the primary application execution platform. This vision has been validated by the explosion of SaaS solutions in enterprises.

The pursuit of speed never stops. Optimizations affect page loading, JavaScript execution, and image rendering. Chrome introduced resource preloading and data compression to further reduce loading times. Every millisecond counts.

In 2023, fifteen years after its launch, Chrome dominates the web browser market. Its influence on the evolution of web standards and browsing technologies is immense. Chrome’s technical innovations have become widespread. The web has transformed into a true platform for sophisticated applications, and Chrome has played a decisive role in this transformation.

HTML5

At the turn of the 2010s, the Web was going through a period where smartphones were proliferating, social networks were exploding, and developers were running up against a frustrating limitation: how to write code that works everywhere without having to rewrite everything for each platform? HTML5 arrived in this context, not as a simple technical update, but as a redesign of the Web’s architecture.

The story begins in 2004, when Ian Hickson, then at Opera Software, cofounded the WHATWG (Web Hypertext Application Technology Working Group). This group brought together engineers from Apple, the Mozilla Foundation, and Opera Software, all convinced that the foundations of the Web needed rethinking. The W3C eventually incorporated their work into a broader vision of an open platform.

HTML5 wasn’t just a new markup language. It was a coherent set that encompassed the Document Object Model, CSS style sheets, JavaScript, and a whole series of standardized interfaces for video, geolocation, local storage, and graphics. This comprehensive approach was a boon for developers.

Matthew McVickar, a web developer at Ocupop, expressed it clearly: HTML5 formalized practices that developers were already using, but in a makeshift way. With the JavaScript interface for geolocation, for example, mobile browsers accessed GPS without going through proprietary solutions. No more workarounds, it was time for standards.

CSS3 added its contribution by transforming how page appearance was managed. Developers now created visual effects directly in the browser, without going through Photoshop or other external tools. The time savings were considerable, the flexibility increased.

Local storage marked a break with traditional cookies. This function allowed the storage of large volumes of structured data on the client side, thus enabling Web applications to run without an internet connection. Graphics capabilities were also enhanced with native support for SVG vector graphics and the Canvas element, which opened the door to 2D and 3D animations via JavaScript.

Ian Hickson, who had meanwhile moved to Google, emphasized the technical rigor brought by HTML5. Previous versions of HTML left room for interpretation: two browsers could implement the same specifications differently while claiming to respect the standards. HTML5 established precise rules that guaranteed true compatibility.

But deployment faced obstacles. Video crystallized tensions: no single standard for compression, streaming, or digital rights management. In 2010, Apple refused Flash on the iPhone and iPad, preferring its own version of HTML5. Microsoft and Google developed their approaches, forcing developers to juggle multiple versions of their code.

Hui Zhang, a professor at Carnegie Mellon and cofounder of Conviva Inc., knew the problem well. For him, video remained the thorniest aspect of online content. Bringing together codecs, streaming protocols, and rights management required time and compromises between industry players.

The W3C planned in 2014 to finalize the HTML5 specification. Philippe Le Hégaret, head of the interaction domain, nevertheless tempered expectations: development continued. The organization regularly received requests to add features, such as voice-to-text conversion directly in the browser.

The WHATWG took a radical turn in 2011 by abandoning version numbering. HTML was a “living standard” in permanent development, a vision that fit the reality of a constantly evolving Web. Ian Hickson said it bluntly: HTML development would continue as long as the technology remained relevant.

This technology showed how standardization could stimulate innovation without sacrificing compatibility. It also illustrated the importance of collaboration between industry players in advancing the Web.

Memcached

In 2003, Brad Fitzpatrick faced a major challenge. LiveJournal, the blogging platform he had created, was literally exploding: over 2.5 million accounts and an infrastructure beginning to show its limits. Some 70 machines ran day and night, but the databases were buckling under the pressure of queries. The site had a particular feature that complicated everything: each piece of content had different security levels and appeared in multiple views. Generating static pages was impossible when the elements composing them each had their own lifecycle.

Fitzpatrick then observed a technical reality that would shape his entire approach. Processors were gaining speed year after year, while hard drives lagged behind. Why not leverage this computing power rather than exhaust resources with endless disk accesses? The idea emerged to use the RAM sitting idle in web servers, that memory just waiting to be put to work.

A first prototype was built in Perl. The trial proved disappointing: too slow, too memory-hungry. Fitzpatrick started over in C and built a single-process, single-thread daemon that relied on asynchronous I/O. To ensure his system would work everywhere, he integrated libevent, a library that automatically chooses the best file descriptor management strategy based on the execution environment.

The principle behind Memcached can be stated simply: transform available server memory into a vast shared pool. The result is a distributed hash table in which clients spread keys across machines, typically via consistent hashing. Nothing superfluous in the architecture: servers don’t communicate with their peers, store nothing on disk, and offer three basic operations—set, get, delete—and that’s it. This intentional austerity delivers formidable performance, with O(1) operations that respond instantly. The slab memory allocator prevents the fragmentation that plagued early versions using malloc.
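The distribution logic lives entirely in the client. The hypothetical Ring class below sketches one common approach, a consistent-hash ring with virtual nodes to smooth the distribution; the servers themselves stay oblivious to one another, exactly as described above.

```python
# Sketch of client-side key distribution for a memcached-style pool.
# Each server gets many virtual points on a hash ring; a key is served
# by the first point at or after its own hash. With consistent hashing,
# adding a server remaps only a fraction of the keys.
import hashlib
from bisect import bisect

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, replicas=100):
        # place `replicas` virtual points per server on the ring
        self.points = sorted(
            (_hash(f"{srv}:{i}"), srv)
            for srv in servers
            for i in range(replicas)
        )
        self.keys = [h for h, _ in self.points]

    def server_for(self, key):
        # walk clockwise to the next virtual point, wrapping around
        i = bisect(self.keys, _hash(key)) % len(self.keys)
        return self.points[i][1]
```

A client would then send its set/get/delete for a given key to `server_for(key)`, and nothing else is needed for the pool to behave as one large cache.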

Facebook adopted Memcached in 2007 and later developed mcrouter, a protocol-compatible routing proxy that shards requests across enormous pools of servers. Twitter followed with twemcache, tailored to its specific needs. These web giants validated Fitzpatrick’s approach and made Memcached a de facto standard.

The figures are staggering. At Facebook, the system handles billions of requests per second and stores trillions of objects. The cache hit rate regularly reaches 92%: only a handful of requests actually need to query the database. The load is reduced accordingly. The system introduced “leases” to manage write conflicts and prevent hordes of clients from all attempting to regenerate an expired entry simultaneously.
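The lease idea can be mimicked in a few lines. In the real system, memcached itself hands the first client a lease token on a miss; this simplified, in-process Python sketch only reproduces the effect: one caller recomputes, the rest wait.

```python
# Simplified sketch of lease-style stampede prevention: on a miss, the
# first caller "wins the lease" and recomputes; concurrent callers block
# on an Event instead of all hitting the database at once.
# (In-process illustration only; real leases are tokens issued by the server.)
import threading

class LeaseCache:
    def __init__(self):
        self.data = {}
        self.leases = {}              # key -> Event held by the lease winner
        self.lock = threading.Lock()

    def get(self, key, compute):
        with self.lock:
            if key in self.data:
                return self.data[key]         # cache hit
            event = self.leases.get(key)
            holder = event is None
            if holder:                        # first miss wins the lease
                event = self.leases[key] = threading.Event()
        if holder:
            value = compute()                 # only the lease holder recomputes
            with self.lock:
                self.data[key] = value
                del self.leases[key]
            event.set()                       # wake the waiting clients
            return value
        event.wait()                          # others wait instead of stampeding
        with self.lock:
            return self.data[key]
```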

The distributed architecture allows welcome flexibility. Servers can be added to or removed from the pool on the fly; the system adjusts seamlessly. This horizontal scaling capability quickly became common practice in modern web infrastructure design.

The community embraced the project. Client libraries sprouted for PHP, Python, Ruby, Java, C#, adding object serialization and transparent compression, making the tool more manageable in various contexts. The source code continues to evolve thanks to contributions from a thriving community that shows no signs of slowing down.

In 2024, Memcached still holds its ground in web infrastructures. The project maintains its original philosophy: stay simple, fast, reliable. This minimalist approach contrasts with more baroque alternatives like Redis, but perfectly meets pure caching needs. Memcached demonstrates that an elegant, stripped-down solution sometimes solves complex problems better than sophisticated machinery.

OpenStack

By the end of the 2000s, hardware and software resources were gradually shifting toward cloud computing, promising enhanced security and centralized management. In 2009, NASA was looking for a platform for its internal web services. The market only offered proprietary solutions, locked in by a few industry giants.

The American space agency’s developers believed that an open-source alternative would better serve a broad community: academic researchers, government institutions, and various businesses. Nothing like this existed. So they decided to create this solution themselves. The initiative attracted Rackspace, whose engineers were working on a similar project to support their hardware and services sales business.

In early 2010, Rackspace representatives contacted NASA. The two teams met in a Thai restaurant, and this modest collaboration gave birth to OpenStack. The official launch took place in July 2010, with the ambition of helping organizations deploy cloud services on standard hardware. Four months later, the first version, named Austin, was released. It provided cloud computing and object storage for Infrastructure as a Service.

The initial projects enabled control of resource pools: compute, storage, and networking, all through a centralized dashboard in data centers. Administrators had fine-grained control, while users obtained their resources instantly. The community then committed to releasing new versions every six months.

The OpenStack Foundation was established in 2012. This independent nonprofit organization had the mission of promoting the development and adoption of the software. The structure ensured neutral management of intellectual property and resource sharing, creating a level playing field for all participants.

The governance model rested on three pillars: a board of directors set the vision and governance, a technical committee managed releases and cross-project requirements, and a user committee gathered field feedback and advocated for their interests. The community operated according to four principles: open source code under the Apache 2.0 license, an open community producing an active ecosystem, transparent development with public code reviews, and open design with biannual summits accessible to all.

Contributors came from diverse backgrounds: software engineers contributing to the code, writers providing documentation and translations, application developers creating solutions on the infrastructure, and operators and users deploying and managing their clouds. Working groups gathered requirements from specific segments such as telecommunications or the enterprise world. The commercial ecosystem included cloud providers and vendors adding their value to the base software.

Adoption accelerated across different sectors. BMW, Disney, and Walmart proved the solution’s viability in production. PayPal adopted it for the agility and availability demanded by its flagship products. Time Warner Cable relied on this flexibility to manage significant data volumes in its centers.

The technology expanded far beyond its initially intended uses. CERN employed it to analyze data from its Large Hadron Collider, managing over 115,000 compute cores. Adobe Digital Marketing used it to transform its virtualization environment into self-service computing. Australia’s NeCTAR research cloud adopted it for national collaborative research, supporting over 6,000 researchers.

In 2024, Rackspace reaffirmed its commitment with OpenStack Enterprise, a fully managed cloud solution for mission-critical workloads. The company had contributed over 5.6 million lines of code and ranked among the world’s largest OpenStack cloud providers. Its expertise included over one billion server-hours of production experience.

The impact on the IT industry was considerable. The platform enabled organizations of all sizes to deploy their own cloud infrastructures, thus escaping the dominance of proprietary services. Its open-source nature fostered collaborative innovation and reduced infrastructure costs. The open APIs and support from virtually all major IT vendors gave users the freedom to move their workloads between private and public clouds according to their needs.

The benefits included accelerated time to market through a self-service portal, an API-based platform for developing cloud-native applications, and drastically reduced provisioning times. Interoperability and hybrid cloud capability allowed businesses to choose the best environment for their applications without depending on a single vendor.

Evolution continued with the integration of new technologies such as containers, virtual machines, and bare metal servers. The global community now counts over 56,000 individual members in more than 180 countries and over 600 companies. This growth testifies to the platform’s continued relevance in a constantly changing technological landscape, where demand for flexible and open cloud solutions continues to grow.

Bitcoin

In October 2008, as the financial crisis shook the entire world and trust in banks collapsed, a mysterious document circulated on the internet. Its author, Satoshi Nakamoto, whose identity—whether a single person or a group—would never be known, proposed nothing less than a revolution: creating a digital currency beyond the control of financial institutions. The document’s title, "Bitcoin: A Peer-to-Peer Electronic Cash System," clearly announced its ambition. The goal was to build a payment system where transactions occur directly between users, without intermediaries.

Three months later, on January 3, 2009, Nakamoto mined the first block of what would be called the blockchain. This genesis block contained a message that spoke volumes about the project’s motivations: "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks." This reference to the British newspaper was no accident. It reminded everyone that banks were about to receive yet another bailout package, financed by public money. Bitcoin was thus born from protest, from a rejection of a banking system deemed to have failed.

The beginnings remained modest. The first subsequent block appeared only six days later, on January 9. This unusual delay still intrigues people today. Some see it as a testing period, others as a biblical reference to the six days of creation. In any case, the network started slowly. At the time, anyone could mine bitcoins with a home computer. The difficulty was set to the minimum, at 1, a trivial level compared to current standards that require industrial facilities.

Active until December 2010, Nakamoto disappeared without a trace. His voluntary departure left the community of developers to continue alone. This absence fuels speculation about his identity, but also ensures that no one controls the project. Bitcoin is truly a common good, with no leader or owner.

Exchange platforms emerged in 2011, allowing bitcoins to be bought and sold for traditional currencies. Mt. Gox established itself and soon handled 70% of global transactions. But in 2014, the hacking of this Japanese platform resulted in the loss of 850,000 bitcoins. The shock shook the ecosystem and brutally reminded everyone of the risks associated with these new assets.

The Bitcoin protocol relies on ingenious mechanisms. Proof of work requires miners to solve complex calculations to validate transactions. In return, they receive a reward in bitcoins. This reward, initially 50 units per block, halves approximately every four years. This process, called "halving," limits total issuance to 21 million bitcoins. This programmed scarcity echoes that of gold and contributes to the cryptocurrency’s valuation.
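
The arithmetic behind the 21 million cap can be checked in a few lines. The sketch below sums the schedule the paragraph describes: a 50 BTC reward, halved every 210,000 blocks, counted in integer satoshis as the protocol does. The variable and function names are ours, not Bitcoin's.

```javascript
// Sum the total issuance implied by the halving schedule.
// Amounts are integers in satoshis (1 BTC = 100,000,000 satoshis),
// and the reward halves by integer division, as in the protocol.
const BLOCKS_PER_HALVING = 210000n;
const SATOSHIS_PER_BTC = 100000000n;

function totalIssuanceBtc() {
  let total = 0n;
  let reward = 50n * SATOSHIS_PER_BTC; // initial 50 BTC reward
  while (reward > 0n) {
    total += BLOCKS_PER_HALVING * reward;
    reward /= 2n; // halving: integer division drops the remainder
  }
  return Number(total) / Number(SATOSHIS_PER_BTC);
}

console.log(totalIssuanceBtc()); // just under 21,000,000
```

Because the reward is rounded down at each halving, the total lands slightly under the 21 million figure, which is why the cap is stated as a limit rather than an exact sum.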

Gradually, companies accepted Bitcoin as a means of payment. In 2012, a few pioneers took the plunge. Major players like Microsoft, Dell, and PayPal joined the movement. This adoption conferred new legitimacy on what had initially been merely a cryptographic experiment.

Bitcoin’s price went through spectacular cycles. In 2013, it climbed from less than $20 to over $1,000, before falling back. In 2017, it surged again to nearly $20,000, then collapsed the following year. These extreme variations reflected the speculative nature of the asset. No one really agrees on its actual value, if such a thing even exists.

Bitcoin’s success inspired other projects. Hundreds of alternative cryptocurrencies, known as "altcoins," emerged with various technical proposals. The blockchain found unexpected applications: smart contracts, decentralized finance, product traceability. This technology, designed for a digital currency, extended far beyond its initial use.

Some states reacted by outright banning Bitcoin, while others attempted to regulate it. Concerns focused on money laundering, investor protection, and tax evasion. The environmental impact of mining, which consumes large amounts of electricity, also raised growing criticism. The debate remains heated between advocates of absolute financial freedom and those calling for strict regulation.

More than fifteen years after its creation, Bitcoin has established itself as an asset class in its own right, recognized by institutional investors. This shows how a radical idea, carried by a few lines of code, can shake the certainties of a millennia-old financial system. The technology continues to evolve: the Lightning Network attempts to solve transaction speed problems, while new improvements are regularly proposed.

This unprecedented experiment opened an immense field in cryptography and distributed systems. Bitcoin is an object of fascination, a symbol of protest that rejects intermediaries, central authorities, and controls. Ultimately, what is money? Who should control it? How should we organize trust in a digital world? These questions continue to fuel reflections on the future of our connected societies.

CouchDB

In 2005, Damien Katz left IBM with an idea in mind. His years working on Lotus Notes had taught him what a document database was, but also what it could become. At that time, the computing world revolved almost exclusively around relational databases. Yet something was off. Modern web applications and collaborative tools demanded something else, a flexibility that SQL couldn’t quite deliver.

He started by writing CouchDB in C++. The early prototypes worked, but concurrency management tripped him up. He searched for a solution, browsed technical forums, and stumbled upon the blog "Lambda the Ultimate" where someone was discussing Erlang. This language, originally designed for telecommunications at Ericsson, had exactly what he needed. Fault tolerance, native distribution, concurrent processing: it was all there. Katz then made a radical decision. He threw away his C++ code and started over entirely in Erlang.

This technical choice shaped CouchDB. Erlang brought with it a philosophy: systems must survive failures, processes can fail without bringing down the whole system, data can be distributed naturally. These properties, designed for telephone exchanges that needed to run without interruption, fit perfectly with the needs of a modern database. The parallel between telecommunications and data storage wasn’t obvious at first glance, but it proved relevant.

CouchDB broke with several established conventions. No rigid schema: JSON documents could evolve freely, transform as needed without requiring database migration. This freedom addressed the realities of web development where structures constantly change. The interface spoke HTTP, used GET to read, PUT to write, DELETE to remove. Nothing more, nothing less. A web developer found themselves on familiar ground.

JavaScript became the query language. Where other databases imposed their own dialect, CouchDB bet on a language every web developer knew. Views, those custom indexes that allow querying data, were written in JavaScript. MapReduce too.
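
As an illustration, here is a toy, in-memory version of what such a view does, not CouchDB's actual engine: a user-written JavaScript map function emits key/value rows for each document, and the rows are sorted by key for querying. In real CouchDB, emit is a global available inside the map function; this sketch passes it explicitly.

```javascript
// Build a sorted key/value index from documents using a JavaScript
// map function, in the spirit of a CouchDB view (toy version).
function buildView(docs, mapFn) {
  const rows = [];
  const emit = (key, value) => rows.push({ key, value });
  for (const doc of docs) mapFn(doc, emit);
  rows.sort((a, b) => (a.key < b.key ? -1 : a.key > b.key ? 1 : 0));
  return rows;
}

const docs = [
  { _id: "a", type: "post", tags: ["db", "erlang"] },
  { _id: "b", type: "post", tags: ["db"] },
];

// The kind of map function a user would write for a view.
const byTag = buildView(docs, (doc, emit) => {
  for (const tag of doc.tags) emit(tag, doc._id);
});
// byTag rows: db -> a, db -> b, erlang -> a
```

The appeal for web developers was exactly this: the indexing logic is ordinary JavaScript over ordinary JSON, with no separate query dialect to learn.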

Bidirectional replication stood among CouchDB’s most distinctive strengths. Two instances could synchronize, exchanging their modifications after operating separately for days. For example, a smartphone loses its connection, continues working locally, then synchronizes everything once reconnected. Arriving shortly after Apple’s iPhone launch, this capability anticipated a world where mobile devices and intermittent connections would be the norm rather than the exception.
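
A drastically simplified sketch of that synchronization, under the assumption that each document carries a plain integer revision where higher means newer (the real protocol exchanges and compares full revision histories):

```javascript
// One-way replication: copy every document the target lacks or holds
// at an older revision. Running it in both directions reconciles two
// stores that diverged while disconnected.
function replicate(source, target) {
  for (const [id, doc] of Object.entries(source)) {
    const existing = target[id];
    if (!existing || doc.rev > existing.rev) {
      target[id] = { ...doc }; // shallow copy of the newer document
    }
  }
}

// A phone and a server each make changes while disconnected...
const phone  = { note1: { rev: 2, text: "edited offline" } };
const server = { note2: { rev: 1, text: "created online" } };

// ...then exchange modifications in both directions once reconnected.
replicate(phone, server);
replicate(server, phone);
// Both stores now hold note1 at rev 2 and note2 at rev 1.
```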

To handle concurrent writes, CouchDB adopted an optimistic approach. No locks that block, no forced waits. Each document has a revision number, somewhat like Git with its commits. When two modifications conflict, CouchDB keeps both versions and lets the application decide. This conflict management, inspired by version control systems, provided unusual flexibility for a database.
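
The write path can be sketched with a tiny in-memory store (hypothetical code, not CouchDB's API): every update must cite the revision it was based on, and a stale citation is refused on the spot, the optimistic counterpart of taking a lock. Over HTTP, CouchDB signals this refusal with a 409 Conflict response.

```javascript
// Toy model of revision-checked writes. Each update cites the
// revision it read; a stale or missing revision is refused
// immediately instead of making the writer wait on a lock.
class DocStore {
  constructor() { this.docs = new Map(); }

  put(id, body, rev) {
    const current = this.docs.get(id);
    if (current && current.rev !== rev) {
      return { ok: false, error: "conflict" }; // caller re-reads and retries
    }
    const next = { rev: (current ? current.rev : 0) + 1, body };
    this.docs.set(id, next);
    return { ok: true, rev: next.rev };
  }

  get(id) { return this.docs.get(id); }
}

const store = new DocStore();
const v1  = store.put("doc", { title: "draft" });         // ok, rev 1
const bad = store.put("doc", { title: "blind write" });   // no rev cited: conflict
const v2  = store.put("doc", { title: "final" }, v1.rev); // cites rev 1: ok, rev 2
```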

The story took a turn in 2011. CouchOne, the company Katz had built around CouchDB, merged with Membase, a distributed cache technology with remarkable performance. From this union came Couchbase, which combined CouchDB’s document richness with Membase’s raw speed. The hybrid creature inherited from both worlds: persistence and flexibility on one side, velocity on the other. CouchDB itself continued its life as an Apache project.

Technically, CouchDB rests on a few well-designed components. The storage engine writes to disk while maintaining indexes for fast access. The view system transforms documents into queryable indexes. The replication protocol ensures consistency between distant instances. These components interact by leveraging Erlang’s strengths, which orchestrates everything with a certain elegance.

CouchDB participated in the emergence of the NoSQL movement, and proved that data storage could be thought of differently. Its influence can be seen in the architecture of many modern databases that adopted its ideas: schema-less documents, web-friendly APIs, intelligent replication. PouchDB, its JavaScript derivative for browsers, extended synchronization all the way to the web client.

Use cases multiplied where flexibility mattered. Mobile applications that needed to work offline, collaborative systems where users modified the same data, tools that needed to replicate between distant sites. CouchDB excelled in these situations where connection wasn’t guaranteed, where structures evolved quickly, where data distribution was a necessity.

Node.js

In 2009, Ryan Dahl presented Node.js at a European JavaScript conference. This technology would be a game-changer by enabling JavaScript to run outside web browsers. Until then, the language had remained confined to the front-end since its creation by Brendan Eich in 1995 for Netscape. Originally, JavaScript was mainly used to animate web pages and manipulate the DOM. Netscape had attempted a server-side version with Netscape Enterprise Server, but it never caught on.

The name JavaScript reflects a marketing strategy: riding the wave of Java’s popularity. Yet the two languages have nothing in common. JavaScript owes far more to Scheme and Self than to Java, from which it borrows only a few syntax elements. For years, serious programmers looked down on JavaScript, deeming it too slow and unsuitable for real applications. This reputation shattered with the arrival of more powerful engines, notably Google Chrome’s V8. Through just-in-time compilation, code inlining, and dynamic optimization, JavaScript could rival C++ on certain benchmarks and outperform Python in most cases.

Node.js leverages the V8 engine to build a robust server platform. The architecture breaks with traditional approaches. Where Apache creates a new thread or process for each connection, Node.js relies on a single-threaded event loop. This strategy avoids the overhead of creating and managing threads, which becomes significant when the server is overwhelmed with requests.

Input-output management makes all the difference. Instead of blocking execution while waiting for an operation to complete, Node.js uses callbacks that handle results asynchronously. The server can thus juggle multiple requests while an operation runs in the background. Resources are better utilized, idle time disappears.
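
The ordering this produces can be shown in a few lines; setTimeout stands in here for a slow disk or network operation, and the log array and delays are ours, for illustration:

```javascript
// Non-blocking style: schedule the slow operation and keep working.
// The callback runs later, once the event loop sees the timer expire.
const log = [];

setTimeout(() => log.push("slow I/O completed"), 10); // scheduled, not awaited
log.push("next request handled"); // runs immediately, nothing blocks

setTimeout(() => {
  console.log(log); // order: "next request handled", then "slow I/O completed"
}, 50);
```

A blocking server would invert that order: nothing else could run until the slow operation finished, which is precisely the idle time Node.js eliminates.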

The community quickly embraced the project. In 2010, Isaac Schlueter launched NPM (Node Package Manager), the true cornerstone of the ecosystem. Developers can share and reuse code without friction, which accelerates adoption. The following year, a native Windows version emerged and considerably expanded the audience.
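
Sharing code through NPM revolves around a package.json manifest that names a package and declares what it depends on. A minimal sketch of the format (the package name, entry point, and versions below are illustrative):

```json
{
  "name": "my-service",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

Running npm install reads this file and fetches the declared dependencies, which is what made reuse frictionless enough to drive the ecosystem's growth.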

Joyent, a software and services company based in San Francisco, steered development for several years before the project joined the Node.js Foundation in 2015. During this transition to more open governance under a technical steering committee, Node.js became a collaborative project of the Linux Foundation. In 2019, the merger with the JS Foundation gave birth to the OpenJS Foundation, which better coordinates the JavaScript ecosystem.

The NPM registry now hosts over two million packages, making it the world’s largest software repository. It contains frameworks like Express, development tools like browserify and gulp, and over 8,000 database clients, from redis to mongoose.

The single-threaded event loop might seem like a bottleneck on modern multi-core processors. Node.js circumvents this obstacle with the cluster module, which creates worker processes sharing the parent server’s ports. A round-robin algorithm distributes incoming connections among the available workers.

Node.js’s success popularized event-based asynchronous programming. This approach suits concurrency-intensive applications well, such as instant messaging or real-time services. The platform also fostered isomorphic JavaScript, where the same code runs on both client and server.

Node.js continues to evolve. Support for ECMAScript modules, performance improvements, and security enhancements are among current priorities. The platform remains relevant for microservices, serverless applications, development tools, and desktop applications via Electron. By offering a different way to build servers, Ryan Dahl redefined the construction of high-performance, scalable web applications.

Raspberry Pi

In 2006, Eben Upton and his colleagues at the Cambridge Computer Laboratory faced an unexpected problem: applicants to their program were becoming less and less proficient in computing. Worse still, their numbers were declining year after year. The diagnosis was clear. The computers of the 1980s, with their visible internals and direct programming, had disappeared in favor of sealed machines, carefully hidden behind plastic casings and polished interfaces. Computers no longer invited tinkering; they were simply used.

Upton dreamed of an affordable machine that would restore to young people the joy of experimentation he had known at their age. His initial prototype, named ABC Micro as a tribute to the BBC Micro, was limited to a board built around an Atmel microcontroller and a few memory chips. Early tests with children, however, revealed that this minimalist approach failed to spark the expected enthusiasm.

Also in 2006, Upton joined Broadcom. This opportunity gave him access to far more powerful components, notably the BCM2835, which integrated an ARM11 processor. This chip concentrated everything needed to run a real computer into a few square centimeters: HDMI output, 3D acceleration, video decoding, USB controller. The addition of a general-purpose ARM processor changed everything, particularly because it could run Linux.

The Raspberry Pi Foundation was established in 2009. Two years later, the final board took shape. Upton and his team envisioned limited production, a few hundred units per year at most. But the enthusiasm exceeded their wildest expectations. The foundation reoriented its strategy: it retained design, branding, and prototypes, but delegated manufacturing to RS Components and Premier Farnell.

In early 2012, the first 10,000 Raspberry Pis sold out within hours. The name circulated in technical circles and maker communities. The foundation seized this momentum to revive the British electronics industry: after starting in China, production gradually migrated to Wales until achieving complete localization.

Users immediately diverted the board from its original educational purpose. The general-purpose input-output (GPIO) pins, which Upton had added almost as an afterthought, became the most prized feature. Raspberry Pi-powered drones appeared, along with stratospheric balloons and home automation installations. One hobbyist transformed his microwave by adding a touchscreen, voice commands, a web interface, and a barcode reader connected to a recipe database.

These sophisticated projects, requiring advanced skills, paradoxically reinforced the educational dimension of the project. They showed young people what could be accomplished with programming knowledge. Workshops in schools confirmed this intuition: sometimes merely changing the color of a snake in a game was enough to make a student refuse to stop. This first sensation of control over the machine produced an effect comparable to what Upton had experienced in his youth.

The Raspberry Pi combines technical and economic characteristics that explain its success. Its 700 MHz ARM1176JZF-S processor, coupled with a VideoCore IV GPU, delivers remarkable multimedia performance for a device of this size. The SDRAM memory, increased from 256 MB on Model A to 512 MB on Model B, is sufficient to run Linux comfortably. The absence of a hard drive, replaced by a simple SD card, keeps the price low and simplifies operating system changes.

Hardware updates followed in succession without disruption. The Raspberry Pi 4 features a Broadcom BCM2711 quad-core 64-bit ARM Cortex-A72 at 1.5 GHz, up to 8 GB of LPDDR4 memory, Bluetooth 5.0, dual-band Wi-Fi, and USB 3.0 ports. These developments preserve backward compatibility while expanding the realm of possibilities.

The Raspberry Pi has proven that a market exists for accessible single-board computers. With 19 million units sold, it ranks third among general-purpose computers. It has revived interest in hands-on computer learning and inspired an entire generation of engineers and creators. Its story demonstrates how a local educational initiative, by hitting the mark, can trigger an international movement that redefines our relationship with technology.