Stéphane FOSSE

EPOCH

EPOCH © 2025 by Stéphane Fosse. This book is published under the terms of the CC BY-SA 4.0 license

Chapter 11
2020

The Digital Era Confronts Global Upheavals

Covid-19 struck the world without warning at the beginning of the 2020s, upending our most deeply rooted habits. Overnight, our screens became our only windows to the world. We witnessed an accelerated transformation of society: conference rooms gave way to video calls, classrooms to online learning platforms, cafés to virtual discussions. This abrupt shift to all-digital life revealed both our technological dependence and our capacity for adaptation.

Digital infrastructures, once invisible to the general public, suddenly found themselves in the spotlight. The question of who controlled these vital resources became a political debate. Europe became acutely aware of its fragility in the face of American cloud giants: Amazon Web Services, Microsoft Azure, Google Cloud. The old continent responded by launching GAIA-X, an attempt to counter this hegemony. An ambitious project, perhaps belated, but symbolic of a will for strategic autonomy.

On the global chessboard, the Sino-American technological war took a dramatic turn. Washington wielded semiconductors as a weapon to slow China’s ascent, banning the export of advanced chips to Beijing. In response, China redoubled its efforts to create its own production chain.

The public appearance of ChatGPT in late 2022 captured the collective imagination. Artificial intelligence and language models demonstrated they could write, code, invent, and converse. This wave sparked both fascination and concern. Parliaments worldwide took up the issue, with Europe leading the charge through its AI Act. The question was no longer whether AI would transform our professions, but how quickly and to what extent.

Cybersecurity moved from the status of a technical concern to that of an existential issue. Ransomware attacks targeted hospitals, administrations, and vital industries. Russia’s invasion of Ukraine added a new dimension to the conflict: the war was also being fought in cyberspace, each ground offensive accompanied by its digital counterpart. This reality forced states and companies to strengthen their invisible defenses.

The cryptocurrency roller coaster captivated media attention. The spectacular collapse of FTX in 2022 served as a reminder of this young sector’s volatility. Regulators, long hesitant, began to establish guardrails. Central banks, initially skeptical, worked on their own digital versions of currency. Beyond speculation, blockchain found its place in industrial traceability and document certification.

Digital technology’s environmental footprint came into the spotlight. The servers that power our connected lives consume the energy equivalent of entire countries. Cryptocurrency mining was singled out for its excessive environmental impact. In response to this awareness, tech companies multiplied their green promises: data centers powered by renewable energy, more durable hardware, less resource-intensive software.

Our homes filled with connected devices: voice assistants in constant listening mode, watches tracking our vital signs, cameras monitoring our comings and goings. This technological intrusion raised questions about our privacy. European GDPR became a global reference, inspiring similar legislation around the world, from Brazil to California.

The social media galaxy went through a turbulent period. Elon Musk’s acquisition of Twitter transformed the platform into a controversial laboratory of unlimited free speech. TikTok conquered global youth, under the wary eye of Western governments concerned about the Chinese influence behind it. New, more intimate networks emerged in reaction to the omnipresent giants.

5G wove its global web, despite controversies surrounding Chinese equipment makers, Huawei foremost among them. This technology enabled unprecedented industrial applications including hyper-connected factories, autonomous and communicating vehicles, and remote surgery. Meanwhile, laboratories worked on 6G, promising dizzying speeds and infinitesimal latency.

Space, the new digital frontier, transformed into a playground for visionary billionaires. Low Earth orbit satellite constellations, like SpaceX’s Starlink, began weaving a global internet network, bringing connectivity to the most isolated regions. This space democratization nonetheless ran up against astronomers’ concerns, worried about seeing the night sky dotted with artificial points of light.

Virtual and augmented reality sought their path beyond the simple gadget. The concept of the "metaverse," resurrected by Mark Zuckerberg, crystallized both hope and skepticism. While the general public remained lukewarm, these technologies found their utility in professional training, industrial maintenance, and architectural design. Digital immersion progressed step by step, far from futuristic promises.

Quantum computing reached important milestones. Experimental processors achieved stability unthinkable just a few years earlier. While the universal quantum computer remains a distant horizon, specific applications emerged in molecular simulation and logistics optimization. This disruptive technology sparked strategic interest, each power fearing to fall behind in this twenty-first-century race.

Robots left factory cages to work alongside humans. These "cobots," more flexible and intelligent than their predecessors, adapted to complex and variable tasks. Post-pandemic labor shortages accelerated this trend. In warehouses, hospitals, and fields, these new machines changed the nature of work, raising as much hope as apprehension.

The state digitized at full speed. Administrations offered smoother and more personalized online services. Cities equipped themselves with sensors to optimize traffic, lighting, and waste collection. Digital identity advanced, caught between administrative convenience and fears of authoritarianism. The path toward digital citizenship took shape, winding and fraught with obstacles.

Education retained lasting traces of the Covid period. Classrooms made educational software a permanent fixture. Online courses gained pedagogical maturity. Artificial intelligence personalized learning paths. This accelerated digitization nonetheless widened inequalities between connected and disconnected students, between innovative and neglected institutions.

Medicine embraced digital tools. Teleconsultation entered common practice. Diagnostic assistance algorithms proved themselves in detecting cancers and rare diseases. Connected medical devices enabled remote monitoring of chronic patients. This digital breakthrough raised questions about confidentiality and the human dimension of care, while opening new therapeutic perspectives.

A collective awareness of digital ethical issues gradually developed. Algorithmic bias, organized disinformation, the attention economy, widespread surveillance: these subjects left specialized circles to enter public debate. Tech-ethics established itself as a discipline influencing digital service design.

The first half of the 2020s will be remembered as the period when digital technology became both our greatest strength and our major vulnerability. Cascading crises revealed its central place in the organization of our societies. More than a simple economic sector, computing asserted itself as the nervous system of our civilization, carrying our collective hopes as much as our anxieties. This awareness calls for renewed governance, where innovation goes hand in hand with responsibility, and where technical progress harmonizes with social justice and environmental sustainability.


Apple M1

When Apple unveiled the M1 chip in November 2020, few observers grasped the magnitude of the impending disruption. Yet this idea had its roots in an old frustration of Steve Jobs: dependence on processor suppliers. Since the Apple I and its modest MOS Technology 6502, the company had been subject to the technical choices and schedules imposed by others. This situation hardly suited a company that cultivated absolute control over its products.

The first attempts at emancipation date back to the 1990s with the Newton, the ahead-of-its-time tablet for which Apple was already collaborating with ARM. The Newton’s commercial failure masked a valuable lesson: Apple discovered the subtleties of processor design. The iPod then marked an intermediate step with its PortalPlayer PP5002 system-on-chip, built around two ARM cores. The same ARM-based approach carried over into the original iPhone, whose chip was manufactured by Samsung.

In 2008, Apple acquired P.A. Semi for $278 million, an acquisition that went relatively unnoticed at the time but proved decisive. This Californian company brought the missing expertise to design in-house processors. The first fruit of this union was born two years later: the A4 that powered the first-generation iPad, then the iPhone 4.

Samsung manufactured these A4 chips, but Apple later shifted production to Taiwan Semiconductor Manufacturing Company (TSMC). This migration revealed a carefully considered strategy: mastering the design while relying on Taiwanese manufacturing expertise. The subsequent A-series processors confirmed the relevance of this approach. Each generation exceeded expectations, delivering remarkable performance with controlled power consumption.

The secret lies in optimization. Unlike general-purpose processors from Intel or AMD that must satisfy a thousand different use cases, Apple’s chips focus on a closed ecosystem. This specialization pays off: an iPhone with less RAM than a rival Android smartphone often displays superior performance. The harmony between silicon and operating system makes the difference.

This success in mobile naturally pushed Apple toward computers. In June 2020, at WWDC, Tim Cook announced the Mac transition to ARM architecture. That November, the first MacBook Air, Mac mini, and 13-inch MacBook Pro powered by the M1 chip hit the market.

The M1 marked a technological breakthrough. Etched in 5-nanometer technology at TSMC, it brings together 16 billion transistors on a tiny surface. Its architecture breaks conventions: CPU, GPU, Neural Engine, and unified memory coexist on the same substrate. This integration eliminates traditional bottlenecks between separate components. Data flows faster, latency decreases, energy efficiency improves.

The M1’s ARM architecture favors simplicity. Its RISC (Reduced Instruction Set Computing) instructions contrast with the growing complexity of Intel’s x86 processors. This minimalist philosophy, inherited from mobile processors, adapts perfectly to the needs of modern computing. Eight computing cores share the tasks: four optimized for pure performance, four others for energy efficiency. This intelligent distribution automatically modulates consumption according to workload.

The first tests astounded the industry. An M1 MacBook Air outperformed a 16-inch MacBook Pro equipped with an Intel Core i9 processor in many benchmarks, all while operating fanless. Battery life doubled, sometimes tripled compared to previous models. These spectacular gains reshuffled the laptop market.

Apple didn’t stop there. The M1 Pro, with its 33.7 billion transistors and ten computing cores, targets creative professionals. The M1 Max pushes the exercise to the extreme with 57 billion transistors and a 32-core graphics processor that rivals dedicated graphics cards. These variants established Apple as a key player in high-performance processors.

This rise in power disrupted the computing ecosystem. Intel, accustomed to dictating its terms for decades, discovered that a competitor could design more efficient processors by starting from a different architecture. Apple’s vertical approach, which controls the entire chain from silicon to applications, demonstrated its superiority over the traditional model of assembling generic components.

The shockwave extended beyond Apple. Qualcomm accelerated the development of ARM processors for Windows laptops. Microsoft adapted its operating system to better exploit this architecture. AMD and Intel rethought their strategies in the face of this new competitive landscape.

Beyond raw performance, the M1 embodies a different vision of computing. It favors the harmonious integration of components rather than the race for specifications. This philosophy extends to the entire Apple ecosystem, where each element is designed in symbiosis with the others.

Patient strategy can revolutionize an entire sector. Starting from legitimate frustration, Apple methodically built its expertise until it surpassed the historical leaders. This transformation anticipates the future of an industry where the boundary between hardware and software is fading in favor of a comprehensive approach to innovation.


Zig

In 2015, Andrew Kelley embarked on the ambitious endeavor of creating a new systems programming language. At that time, developers had a substantial arsenal at their disposal with C, C++, Rust, and Go. Yet Kelley identified a recurring problem: each language carried its own limitations that complicated the writing of truly reliable software.

His reasoning began with a striking comparison. In aviation or the elevator industry, safety systems are layered to make accidents nearly impossible. Software, however, still suffers from a reputation for fragility and unpredictability. Kelley wanted to change this with Zig.

The new language adopted a radical philosophy: do less, but better. Zig eliminates C’s preprocessor, a mechanism deemed too complex and error-prone. This decision might appear regressive, but Kelley introduced other mechanisms that solve the same problems more elegantly.

Memory management in Zig breaks with current trends. No garbage collector here, unlike Java or Python. The unpredictable program interruptions to free memory disappear. Zig relies on a system of “allocators” that gives developers fine-grained control over memory, without the burden of traditional C. Developers know exactly when and how their program uses memory.
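
To make this concrete, here is a minimal sketch using the standard library’s general-purpose allocator (the exact names vary slightly between Zig versions):

    const std = @import("std");

    pub fn main() !void {
        // The allocator is an explicit value: nothing allocates behind your back.
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit(); // reports leaked allocations in debug builds
        const allocator = gpa.allocator();

        // Memory is requested explicitly, and its release is scheduled just as explicitly.
        const buffer = try allocator.alloc(u8, 1024);
        defer allocator.free(buffer);

        @memset(buffer, 0);
        std.debug.print("allocated {} bytes\n", .{buffer.len});
    }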

Errors, every programmer’s nightmare, receive special treatment. Gone are exceptions that can pop up anywhere in the code. Zig integrates errors directly into function return types. The compiler literally forces developers to handle every possible error. This constraint, burdensome at first, significantly improves the reliability of the final code.
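
A short sketch shows the mechanism (the function and error names here are invented for illustration; the error-union return type and the try/catch keywords are Zig’s actual constructs):

    const std = @import("std");

    const ConfigError = error{ MissingPort, InvalidPort };

    // The return type says: this yields either a u16 or one of these errors.
    fn parsePort(text: ?[]const u8) ConfigError!u16 {
        const value = text orelse return ConfigError.MissingPort;
        const port = std.fmt.parseInt(u16, value, 10) catch return ConfigError.InvalidPort;
        return port;
    }

    pub fn main() void {
        // The compiler refuses to let the error case go unhandled.
        const port = parsePort("8080") catch |err| {
            std.debug.print("bad configuration: {s}\n", .{@errorName(err)});
            return;
        };
        std.debug.print("listening on port {}\n", .{port});
    }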

The Zig compiler can also execute code during compilation (“comptime”), which opens up metaprogramming possibilities without resorting to the complex macros and templates of C and C++. Many errors become detectable before the program ever runs, and performance can be tuned upfront.
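
For example, an ordinary function can take a type as a compile-time parameter, which is how Zig expresses generics without a separate macro language (a minimal sketch):

    const std = @import("std");

    // `comptime T: type` is evaluated during compilation: the compiler generates
    // a specialized version of the function for each type it is called with.
    fn maxOf(comptime T: type, a: T, b: T) T {
        return if (a > b) a else b;
    }

    pub fn main() void {
        const biggest_int = maxOf(i32, -4, 7);
        const biggest_float = maxOf(f64, 2.5, 1.25);
        std.debug.print("{} {d}\n", .{ biggest_int, biggest_float });
    }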

Interoperability with C represents a strategic asset for Zig. The language doesn’t merely use existing C libraries; it functions itself as a high-performance C compiler. This duality facilitates gradual adoption in existing projects. Developers can start by using Zig as a simple C compiler, then progressively migrate to its advanced features.
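
That interoperability cuts both ways in practice. A Zig file can pull a C header straight in (a sketch; building it requires linking the C library, for instance with zig build-exe hello.zig -lc), while the zig cc command can be dropped in wherever a C compiler is expected.

    // The C header is translated at compile time and exposed as ordinary Zig declarations.
    const c = @cImport({
        @cInclude("stdio.h");
    });

    pub fn main() void {
        // Calling the C function looks like calling any Zig function.
        _ = c.printf("hello from C, called from Zig\n");
    }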

The integrated build system replaces tools like Make or CMake with a unified solution. No need to juggle different systems depending on the platform: Zig compiles identically everywhere. This standardization drastically simplifies life for developers working across multiple systems.
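
As an illustration, a minimal build.zig looks roughly like this (the build API has shifted between releases; this sketch follows the shape of the 0.12/0.13-era interface):

    const std = @import("std");

    pub fn build(b: *std.Build) void {
        // Target and optimization mode come from the command line
        // (zig build -Dtarget=... -Doptimize=...), with sensible defaults.
        const target = b.standardTargetOptions(.{});
        const optimize = b.standardOptimizeOption(.{});

        const exe = b.addExecutable(.{
            .name = "app",
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
        });
        b.installArtifact(exe);
    }

The same file drives the build on Linux, macOS, or Windows, and cross-compiling is a matter of changing the -Dtarget value.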

Several significant projects have adopted Zig. Bun, the Node.js alternative that has been making waves, is developed with this language. Its creator, Jarred Sumner, attributes the choice to Zig being much easier to learn than C++ or Rust while still offering modern features and strong safety checks during development.

The Zig community is growing around carefully crafted documentation and an ecosystem expanding on GitHub. The Zig Software Foundation, with Loris Cro as VP of community, structures the language’s development and promotion. This organization lends legitimacy to the project against established giants.

Zig’s different compilation modes deserve closer examination. In Debug and ReleaseSafe modes, the compiler inserts runtime checks that automatically detect many common errors, helping developers identify problems. In ReleaseFast mode, these checks vanish to unleash the full power of the processor (a ReleaseSmall mode favors binary size instead). This flexibility addresses the contradictory needs of development: safety during creation, performance in production.
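
A tiny example makes the contrast concrete (a sketch: the same source, compiled twice with different -O settings):

    const std = @import("std");

    pub fn main() void {
        var counter: u8 = 250;
        var i: u8 = 0;
        while (i < 10) : (i += 1) {
            counter += 1; // overflows past 255 on the sixth iteration
        }
        std.debug.print("{}\n", .{counter});
    }

Compiled with zig build-exe demo.zig, the program panics with an “integer overflow” message at the faulty line; compiled with zig build-exe demo.zig -O ReleaseFast, the check disappears along with its cost and the overflow becomes undefined behavior.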

The path to version 1.0 is still long. This deliberately distant milestone reflects the creators’ caution, preferring to stabilize the language rather than rush a release. Despite this development status, Zig runs in production in certain projects, proof of its technical maturity.

Zig doesn’t claim to replace C overnight. The ambition is more subtle: to propose a modern evolution that corrects C’s historical flaws without losing its qualities. This progressive approach makes Zig adoption gradual for developers, starting with using it as a C compiler before exploring its innovations.

The Zig adventure illustrates a constant quest in computing: finding the perfect balance between simplicity, safety, and performance. By eliminating accidental complexity while preserving control over hardware, this language charts an original path in the crowded landscape of systems development tools.


Mojo

In 2022, Chris Lattner, creator of LLVM and chief architect of Swift during his years at Apple, founded Modular with Tim Davis, a former Google employee. They wanted to create a language that would finally reconcile Python with performance. A common ambition given the numerous past attempts, but their approach differs from previous solutions.

Artificial intelligence is exploding. GPUs are running at full capacity, TPUs (Tensor Processing Units) are proliferating, and yet developers remain stuck between two worlds: on one side Python, readable but sometimes desperately slow, on the other C++ or Rust, fast but with a complexity that discourages many. This divide comes at a high cost. Research teams write their prototypes in Python, then production teams completely rewrite them in another language. A considerable waste of time and energy.

Mojo was born from this frustration. But unlike past attempts, Lattner and Davis are not trying to replace Python. They want to extend it, elevate it. The gamble is bold: preserve Python’s familiar syntax while integrating advanced concepts borrowed from Rust for memory management and LLVM for compilation.

The language has incorporated sophisticated optimization tools from the outset. Tiling optimization, for example, automatically reorganizes calculations to best exploit processor caches. The auto-tuning module adjusts execution parameters according to the available hardware. These technical innovations remain transparent to the programmer, who writes seemingly simple code.

Compatibility with Python is the trump card. Libraries like NumPy or Matplotlib work without modification. This interoperability avoids starting from scratch, a classic pitfall for new languages. A developer can migrate gradually, replacing only critical parts with optimized Mojo.
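
A sketch of what that reuse looks like, following the pattern shown in Modular’s early documentation (Mojo’s syntax was still moving fast at the time, so details may differ between releases):

    from python import Python

    def main():
        # Load the ordinary CPython NumPy package and drive it from Mojo.
        np = Python.import_module("numpy")
        a = np.arange(15).reshape(3, 5)
        print(a.shape)
        print(np.sum(a))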

The numbers speak for themselves. Some benchmarks show speedups of up to 68,000 times compared to standard Python. This spectacular performance is explained by the use of MLIR, a compiler technology developed at Google to optimize code for different types of processors. Where Python interprets each instruction, Mojo compiles and optimizes everything.

A complete ecosystem is growing up around the language: Basalt for machine learning, Endia for scientific computing, Lightbug HTTP for the web. These libraries, written entirely in Mojo, demonstrate the language’s capabilities while serving as examples for developers.

On May 2, 2023, the Mojo Playground opened its doors. This online platform immediately attracted attention: 120,000 registrations in a few months and a community of 19,000 members on Discord and GitHub. The enthusiasm surprised even its creators, as developers shared their experiments, created libraries, and proposed improvements.

Mojo’s technical architecture breaks with certain Python conventions. Inferred static typing improves performance without making writing more cumbersome. Value semantics, where functions receive copies rather than references, avoids many classic bugs. These design choices reflect Lattner’s experience with Swift and his understanding of modern development pitfalls.
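
To give an idea of the flavor (again a sketch in the style of early Mojo releases): a fn declares its parameter and return types and is checked and compiled ahead of time, while def keeps Python’s permissive behavior.

    # fn functions are statically typed; the compiler verifies every call site.
    fn scaled_sum(a: Float64, b: Float64, factor: Float64) -> Float64:
        return (a + b) * factor

    def main():
        # Called like ordinary Python code, but compiled to native machine code.
        print(scaled_sum(1.5, 2.5, 10.0))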

Tooling follows the language’s development. A Visual Studio Code extension, a Jupyter kernel for notebooks, advanced debugging features. This software suite facilitates adoption by teams already familiar with the Python ecosystem.

The impact on the industry is beginning to take shape. Research teams are using Mojo for their artificial intelligence projects, eliminating the gap between prototype and production. Optimized algorithms reduce the energy consumption of data centers, an issue that has become critical with the explosion of language models.

The community is producing its first remarkable projects. Maxim Zaks implements sophisticated data structures, others develop libraries for matrix computation. This creativity demonstrates rapid appropriation of the language by its users.

Modular continues development with ambition. The stated goal: make Mojo a complete superset of Python. The gradual opening of the source code should accelerate this evolution. Support for macOS has arrived, with Windows on the roadmap, broadening the base of potential users.

In 2024, Mojo continues its growth. Performance improves, new features appear, the ecosystem grows richer. The democratization of artificial intelligence and the explosion of high-performance computing needs create favorable ground for this type of innovation.

Solutions often emerge from the creative combination of existing approaches rather than conceptual revolutions. Lattner and Davis were able to identify a widely shared frustration and propose a pragmatic response. Their success is due as much to technical quality as to a fine understanding of modern developers’ needs.


WebGPU

The University of Illinois at Urbana-Champaign faced an unexpected challenge in 2013. Massive online courses were experiencing explosive growth – between 2002 and 2011, digital education in higher education had climbed 17.3% annually, increasing student enrollment from 1.6 to 6.7 million. But how could GPU programming be taught remotely when most learners lacked access to these specialized processors?

Traditional solutions quickly revealed their limitations. Lending computers equipped with GPUs? Impractical at scale. Reserving computer labs? The idea seemed outdated given the scope of MOOCs. As for traditional computing clusters, they accumulated disadvantages: prohibitive costs, complex maintenance, and a learning curve that discouraged beginners.

WebGPU was born from a simple yet bold insight: transform any web browser into a gateway for GPU programming. The idea revolutionized the pedagogical approach at the time. No more laborious installations or specific configurations – an internet connection was all that was needed to program in CUDA, OpenCL, or OpenACC.

The architecture envisioned by the Illinois team consisted of three complementary elements. On one side, web servers hosted a streamlined user interface. In the center, a database orchestrated the management of work submitted by students. Finally, GPU-equipped compute nodes executed code under secure conditions. This technical triangulation concealed genuine sophistication: each code fragment was first analyzed to detect forbidden system calls, then confined to an environment with drastically limited permissions.

The large-scale test came with the “Heterogeneous Parallel Programming” course on Coursera. Over 100,000 participants enrolled, validating the initial intuition while also revealing the specifics of massive education. For while attendance was impressive, it came with a well-known phenomenon: 85% of enrollees dropped out along the way. WebGPU therefore had to manage dramatic fluctuations, with activity spikes concentrated on assignment due dates.

The technical team constantly adjusted the number of available GPUs, relying on the flexibility of Amazon AWS cloud services. This continuous juggling built valuable expertise in elastic management of computing resources. The experience accumulated enough feedback to envision a major overhaul.

WebGPU 2.0 emerged in 2015 with enhanced ambitions. The user interface migrated to OpenEdX, simplifying the integration of educational content. But the most striking innovation lay in adopting Docker to isolate execution environments. This technology, still young at the time, brought unprecedented granularity in resource management and facilitated the deployment of custom configurations.

Docker containers were dynamically associated with physical GPUs according to the specific needs of each practical assignment. This architectural flexibility allowed learning environments to be finely tailored while preserving the overall system’s security. The University of Illinois deployed this new version for its traditional courses ECE 408 and ECE 598HK.

Success extended beyond Illinois borders. North Carolina State University adopted WebGPU, followed by the University of Tennessee. The PUMPS summer school in Barcelona integrated the platform into its intensive training sessions. This geographical spread confirmed the approach’s relevance beyond the initial context.

WebGPU’s deliberately simplified interface, limited to six main operations, illustrated a particular pedagogical philosophy. Unlike traditional development environments that pile on functionality, this restriction guided learning by avoiding distraction. Experience showed that a constrained framework sometimes fostered understanding better than total freedom.

Some experiments nevertheless failed. Peer evaluation, attempted during the first sessions, had to be abandoned due to implementation difficulties. These failures fed reflection on the limits of automation in technical education.

The design team meticulously documented their work in academic publications. This transparency contributed to advancing knowledge about online educational platforms. Issues of scalability, security, and automated assessment found in WebGPU a testing ground rich in lessons.

The platform’s history revealed the transformations of higher education in the digital age. WebGPU addressed the specific requirements of parallel programming while adapting to the constraints of massive training. Its development testified to the growing importance of cloud infrastructure in modern education.

In 2024, as artificial intelligence stimulated unprecedented demand for GPU training, WebGPU’s legacy persisted. Current platforms drew inspiration from its innovations in elastic resource management and automated assessment. This story demonstrated how a technical solution born from a specific pedagogical need could generate lasting advances in digital education.

WebGPU’s modular design remained relevant. Its founding principles – accessibility, scalability, security – continued to inspire creators of new learning platforms. The experience gained with this system enriched our understanding of issues related to democratizing advanced computing technologies, revealing that a pedagogical innovation could sometimes anticipate the technological developments it accompanied.