Stéphane FOSSE

EPOCH

EPOCH © 2025 by Stéphane FOSSE. This book is published under the terms of the CC BY-SA 4.0 license.

Epilogue

This book ends where perhaps the greatest technological rupture in our history begins. The technologies recounted here already belong to the past. Some survive, others have disappeared. But what lies ahead surpasses in scope everything we have known since the invention of the transistor.

Quantum computing is no longer a distant promise. In 2024, Google demonstrated that one of its quantum processors was 13,000 times faster than the world's most powerful supercomputer. IBM plans to deliver systems capable of executing 15,000 quantum operations by 2028. Public investment reached 10 billion dollars in 2025, compared with 1.8 billion the previous year. These figures reflect a brutal acceleration. The error-correction threshold, which long blocked any practical application, is beginning to be crossed. Simulating complex molecules for drug discovery, optimizing industrial processes, and creating new materials are becoming feasible in the short term. But quantum computing also threatens current encryption systems, which will collapse like houses of cards when confronted with these machines. The race is already on to develop post-quantum algorithms before the first truly operational computers leave the laboratories.

In just a few years, artificial intelligence has crossed the wall that separated experimentation from mass adoption. It appears everywhere: in recommendation systems, voice assistants, text and image generation, autonomous driving. And it is the legacy of the hardware, systems, languages, and software presented in this book, among other things, that made its development and execution possible. But this omnipresence reveals flaws. AI models can be manipulated, biased, hijacked. Their energy consumption is exploding. Training a single latest-generation language model consumes as much electricity as a small city. Data centers are multiplying to absorb this demand. And no one really knows where this trajectory is leading us. Ethical questions are piling up faster than answers. Who controls these systems? Who is responsible when they make mistakes? How can we ensure they do not reproduce the discrimination present in their training data? Europe is attempting to regulate with the AI Act, but the technology advances faster than legislative texts, which remain static by nature.

Cyberattacks have changed scale. In 2024, the average cost of a data breach reached 4.88 million dollars, up 10% in one year. Ransomware accounts for 59% of the attacks organizations suffer, and the average ransom paid exceeds 2 million dollars. Hospitals, critical infrastructure, and transportation systems have become prime targets. In 2024, the attack on Change Healthcare disrupted medical claims processing for more than 100 million Americans. Snowflake saw data belonging to over 100 of its clients exfiltrated, including AT&T, Ticketmaster, and Santander. Unpatched vulnerabilities remain the main entry point: 20% of breaches begin with the exploitation of a known but unpatched flaw. The average time to identify and contain a breach is 241 days, more than nine months during which attackers can move freely through systems. Against this, defenses remain inadequate. Companies recruit security teams they struggle to train and retain. AI-powered defense automation is beginning to prove its worth, cutting detection time by 108 days for those who use it. But AI also serves attackers, who automate reconnaissance, exploitation, and exfiltration. The battle now takes place at machine speed.

Brain-computer interfaces are leaving the laboratories. Neuralink implanted its first volunteers in 2024; a quadriplegic patient now controls a computer cursor by thought to play chess. Other companies, such as Synchron and Precision Neuroscience, are pursuing less invasive approaches. In 2025, about twenty clinical trials are underway. Medical applications initially target people with paralysis, ALS, or other neurodegenerative diseases. But the stated ambition goes further: restoring vision to the blind with visual prostheses, enhancing cognitive abilities, merging human and artificial intelligence. These projects raise dizzying questions. Where does the human begin and the machine end when a chip reads and modifies brain activity? The risks of misuse are immense. Who guarantees that a state or a company will not seek to monitor or manipulate thoughts through these devices? Ethical safeguards exist on paper. Their application in a context of fierce international competition remains to be proven.

The user experience is shifting toward new paradigms. Augmented reality glasses are emerging as the next major screen after the smartphone. Meta, Apple, Google, and Xreal are all investing massively. Ray-Ban Meta glasses already integrate AI to identify objects in real time, and the 2025 models display information directly in the field of vision across a 70-degree angle. The market is expected to grow from 678,600 units in 2023 to 13 million in 2030. These devices change how we interact with information. No need to take out your phone: directions float on the sidewalk, translations appear as overlays, notifications are superimposed on the real world. But this fusion of physical and digital raises concerns. Surveillance becomes permanent with always-on cameras. Privacy erodes in the face of incessant data collection. Tablets failed to replace the PC because they did not offer enough power for complex tasks. AR glasses could succeed where tablets failed by freeing the hands while providing access to an unlimited virtual workspace. Will the laptop and the smartphone survive this transition? Hard to say. Some predict that within five years AR glasses will replace the smartphone as the primary device. Others estimate it will take another decade to overcome the technical and cultural obstacles.

Will we all wear a chip under our skin or in our brain connected to the internet via Wi-Fi? The idea is no longer science fiction. RFID implants are already used for contactless payments or access to secure buildings. Thousands of people have taken the plunge. But widespread brain implants remain a distant horizon. The technical obstacles are immense. Long-term biocompatibility is not guaranteed. Risks of infection or rejection exist. And above all, social acceptability remains very low. Few people would agree today to have their skull drilled to improve their cognitive performance. Glasses or augmented contact lenses represent a less invasive and probably more acceptable path. Several companies are working on lenses capable of displaying information or measuring physiological parameters. But component miniaturization and energy management still pose problems. We will likely have to wait until the 2030s to see these devices leave the prototype stage.

Corporate IT infrastructures are becoming more complex. Hybrid clouds, microservices architectures, containers, and edge computing create environments of unprecedented sophistication. Managing these systems requires expensive skills. AI-powered automation promises to simplify this management, but it introduces new dependencies. What happens when a critical system relies on an AI model whose internal workings no one truly understands? Failures become harder to diagnose. Regulations are tightening. The GDPR in Europe and data protection laws in California and elsewhere impose ever greater constraints. Companies must trace the origin of data, document how it is processed, and guarantee its security. Fines for non-compliance run into the hundreds of millions. This regulatory pressure pushes toward more transparency, but it also slows innovation. Finding the balance between protecting users and preserving the agility of businesses remains an unresolved challenge.

The return of geopolitical conflicts is accelerating the militarization of computing. AI embedded in drones, ground robots, and autonomous weapon systems is becoming an operational reality. Boston Dynamics may promise not to weaponize its robots, but others have no such scruples. The Spot robot is being tested by several armies for reconnaissance missions in hostile zones. Variants equipped with CBRN sensors detect chemical, biological, radiological, or nuclear threats. Armed prototypes are already circulating in certain countries. China, the United States, Russia, and Israel are all developing AI-driven weapon systems. In certain configurations, the decision to fire can already be made by an algorithm. International treaties struggle to keep up. A ban on lethal autonomous weapons has been discussed for years without result. Each state fears falling behind its adversaries. This race exhausts resources, but no one dares to slow down. Computing is becoming a sovereignty issue on par with energy or access to raw materials. Semiconductors, data, and AI algorithms are now strategic weapons. Sanctions and export restrictions are multiplying. The fragmentation of the digital world into rival geopolitical blocs is intensifying.

What technologies will need to be invented to deal with all this? Post-quantum encryption is under development, but its large-scale deployment will take years. Zero-trust architectures, where no actor is considered trustworthy by default, are being deployed slowly. Confidential computing, which protects data even during processing, is progressing. But each new layer of security adds complexity and slows systems. Finding the right balance between protection and performance remains more of an art than a science. Formal verification technologies, which mathematically prove that a program does exactly what is expected of it, could become more widespread. Today confined to critical systems like aeronautics or nuclear power, they could extend to other domains if the tools become more accessible. Quantum computing itself could provide solutions in cryptography or simulation of complex systems. But it won’t be a magic wand. Each advance opens new problems.
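To give a concrete sense of what formal verification looks like, here is a minimal sketch using the Lean 4 proof assistant (one tool among several; the example is purely illustrative and not taken from this book): a tiny function is written alongside a theorem stating what it must compute, and the machine checks the proof.

```lean
-- Minimal, hypothetical illustration of formal verification:
-- the function and its specification live side by side, and Lean checks the proof.

def double (n : Nat) : Nat := n + n

-- Specification: `double n` is exactly twice n, for every natural number n.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double   -- expose the definition: the goal becomes n + n = 2 * n
  omega           -- built-in linear-arithmetic tactic closes the goal
```

Checked this way, the property holds for every input rather than only for the cases covered by tests; that guarantee is what makes the approach attractive for critical systems, and why the accessibility of the tooling is the real bottleneck to wider adoption.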

The history of computing has taught us one thing: the ruptures are never the ones we expect. No one predicted the scale of the web in the 1990s. No one anticipated the explosion of smartphones after 2007. No one really knows which technology will disrupt the next decade. The field is wide open. But one certainty remains: computing will continue to reinvent itself, to surprise us, to force us to rethink our ways of working, communicating, and thinking. This book tells a piece of a story that never ends. Every epoch is a new beginning. The technologies we will use in ten years may already exist, somewhere in a laboratory, in the form of a prototype that no one yet takes seriously. Or perhaps they do not exist yet, waiting for a brilliant idea to cross the mind of a researcher or engineer somewhere in the world.

But what freedoms are we willing to give up for more security or comfort?