AI and global power relations: A kind of “Muotathaler Wetterschmöcker” forecast to 2025 and beyond

The dynamic geopolitics between the USA, Europe and China is characterized by intense economic and technological rivalries, summarized as the “rise of AI and fall of states”. The USA and China are at the center of a global race for technological supremacy, particularly in the field of artificial intelligence (AI). Europe is trying to find a balanced position by both promoting innovation and introducing strict data protection and security regulations.

The rapid development of AI technologies has triggered a real hype, but it also brings numerous challenges. Issues such as cybersecurity, data protection and data security are in the spotlight, as they are crucial for trust in AI systems. At the same time, regulations must be designed so that they do not hinder innovation and still allow rapid development cycles. Hopefully, a viable corridor between risks and opportunities will remain as a way forward.

For some time now, and especially in the current wild, highly dynamic times, traditional multi-year strategies have given way to more dynamic planning with observation and action-reaction periods of 3, 6 and 12 months. This was demonstrated in particular by the “tsunami-like” Sputnik effect triggered by the publication of a revolutionary AI model by the Chinese company DeepSeek in mid-January 2025, and by the resulting flurry of events and even shorter development and publication cycles for new AI functions.

It is therefore difficult to predict the further development of AI; at best, one can gaze into the “crystal ball”. I would even venture to say that in this context the “Muotathaler Wetterschmöcker” make more reliable predictions, at least when it comes to the weather.

But one thing can be predicted with certainty: AI harbors opportunities and risks, just as a cloud sometimes harbors rain and water. And water always finds its way, a way we should try to help shape where possible.

The following is a pitiful attempt at best, with a deliberate pinch of “dystopia” and even “doom and gloom” in an increasingly “algorithm-controlled” world order:

Mid-2025: Impressive AI agents

The first advanced AI agents appear on the market. They are advertised as “personal assistants” and can perform simple tasks such as online ordering. Despite impressive examples, they are still expensive and often unreliable.

End of 2025: The relevance of and rescue by open source in AI

In 2025, the importance of open source in AI will become increasingly clear. While large companies continue to develop their proprietary, partially closed models, the demand for, and even urgent need of, transparent and accessible AI solutions and models is growing. Open source provides an open, “democratic” platform for innovation and collaboration, allowing researchers and developers to share their findings and work together to improve AI technologies. This is particularly important at a time of increasing geopolitical tensions and economic rivalries between the US, Europe and China.

Early 2026: The most expensive AI in the world

The fictitious company, let’s call it “OpenBrain”, is building the largest data centers in the world to train powerful AI models. A new AI agent, let’s call it “Agent-1”, outperforms all previous models and significantly accelerates AI research and AI innovation.

Mid-2026: Automation of coding

AI agents are increasingly taking on research and software development tasks. OpenBrain achieves 50% faster algorithmic progress, while companies, after years of waiting, begin in earnest to integrate AI more deeply into their work processes, accompanied by much more comprehensive internal “upskilling” competence initiatives.

Mid-2026: China is catching up (and overtaking)

China has long been investing heavily in AI research and training (AI literacy has been a compulsory subject from school age since 2025) and is centralizing its development resources to compete with OpenBrain. At the same time, it intensifies espionage efforts to copy, adapt and optimize Western AI technologies.

End of 2026: AI replaces the first more complex jobs and disempowers humans

With the release of cheaper AI models, AI and robotics systems begin to compete with and replace real jobs, especially in screen-based work in general and in software development in particular. This leads to social protests and economic upheaval. Despite all the prophecies of doom, quality and professional craftsmanship are rightly gaining in value and esteem. The advancing “cognitive” relief or “mental offloading” of people, through strong AI dependency and sometimes excessive delegation to AI, contributes to further incapacitation unless something is proactively done about it through “lifelong learning” and “upskilling”, in favour of personal sovereignty, maturity and democratization in the digital space. Good old common sense and critical thinking are enjoying a healthy “revival”.

Early 2027: Agent-2 is constantly learning

OpenBrain is developing Agent-2, which undergoes continuous training and further accelerates AI research. In the accentuated “war for AI agents”, the best AI agents increasingly take over further processes, interfaces and interactions between companies, platforms and software. Security concerns are growing, as Agent-2 has potentially dangerous capabilities pointing towards autonomous, uncontrollable learning. (Further stages towards AGI, “Artificial General Intelligence”, and “metacognitive” capabilities.)

Early 2027: A state steals Agent-2

State spies steal the OpenBrain model, further increasing geopolitical tension. States respond with tighter security controls and further military cyber measures or “cyber strikes” on critical infrastructures.

Early 2027: Breakthrough in algorithms

Agent-3 is developed with new technological improvements. This AI agent is an extremely efficient software developer and researcher, and accelerates AI development X-fold.

Early 2027: Challenges of AI safety

Researchers are trying to design Agent-3 so that it does not pursue unexpected and dangerous goals. Nevertheless, problems such as a tendency towards very tactical, advanced deception and “intelligently orchestrated” manipulation are emerging.

An important aspect of this development is the so-called “kill switch” – an (at least theoretical) emergency shutdown for AI systems. With the progression towards autonomous, self-learning systems, especially in the context of quantum computing and the possible singularity, a “kill switch” at hardware or software level could actually become a necessary security measure.
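The software-level variant of such a “kill switch” can be illustrated with a minimal sketch. All names here (`KillSwitch`, `agent_loop`) are hypothetical and not part of any real AI system: the pattern is a one-way latch that an external watchdog or human operator can trip, and that the agent loop must check before every single action.

```python
import threading
import time

class KillSwitch:
    """Software-level emergency stop: a one-way latch that cannot be reset once tripped."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        """Trip the switch, e.g. from a human operator or an external watchdog."""
        self._tripped.set()

    @property
    def tripped(self):
        return self._tripped.is_set()

def agent_loop(kill_switch, max_steps=1000):
    """Hypothetical agent loop: checks the kill switch before every action."""
    steps = 0
    for _ in range(max_steps):
        if kill_switch.tripped:
            break               # halt immediately; no "cleanup" phase the agent could exploit
        steps += 1              # stands in for one agent action
        time.sleep(0.001)       # stands in for the time one action takes
    return steps

switch = KillSwitch()
# An external watchdog trips the switch after 10 ms (standing in for an operator).
threading.Timer(0.01, switch.trip).start()
steps_done = agent_loop(switch, max_steps=1000)
print(f"agent halted after {steps_done} of 1000 steps")
```

A hardware-level variant would instead cut power or network connectivity outside the system’s own control; the sketch only shows the software pattern of a non-resettable latch polled on every action.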

Mid-2027: National security concerns

Government authorities recognize the enormous potential and risks of AI. Security measures are being tightened to protect AI technologies from espionage. The threat is (still) not entirely tangible, but very real. Even trust in, for example, the semi-annual weather forecasts of the “Muotathaler Wetterschmöcker” is rising another notch.

Mid-2027: The self-improving AI

OpenBrain now has a “virtual, agentic nation of geniuses in a data center.” AI research is so automated that human researchers can barely make a contribution.

Mid-2027: The cheap remote workforce

OpenBrain publishes an AI model suitable for the mass market that efficiently replaces many office and screen-based jobs. This leads to a further upheaval in the world of work and far-reaching social discussions.

End of 2027: The rise of self-optimizing AI

Agent-3 continues to develop and requires hardly any human intervention. Researchers realize that the AI is not only developing new algorithms but also improving itself, faster than expected. Discussions about control and security gain urgency. (Further steps towards AGI, “Artificial General Intelligence”, and preliminary stages towards the “AI singularity”.)

End of 2027: Global tensions escalate

With AI technology stolen by states and OpenBrain’s progress, geopolitical uncertainty grows. China begins large-scale testing of Agent-2 in research and the military, while the US steps up its security measures. Economic and diplomatic relations between the two superpowers deteriorate rapidly.

End of 2027: The AI economy dominates

Artificial intelligence takes over large parts of the global economy. Many traditional office jobs disappear or are transformed, while AI managers and control mechanisms become increasingly important. The public debate is divided: supporters celebrate the efficiency gains, while critics warn of uncontrollable risks.

End of 2027: “Muotathaler Wetterschmöcker” autumn meeting

At the traditional autumn meeting, relatively reliable weather forecasts continue to be presented by the united and experienced “weather prophets”. And this without any AI support (hopefully).

Fridel Rickenbacher is a former co-founder, co-CEO, partner, member of the Board of Directors and now a participating “entrepreneur in the company” / “senior consultant” at Swiss IT Security AG / Swiss IT Security Group. At federal level, he is represented as an expert and actor in “Digital Dialog Switzerland” + “National Strategy for the Protection of Switzerland against Cyber Risks NCS”. In his mission “sh@re to evolve”, he has been active for years as an editorial member, expert group and association activist at e.g. SwissICT, swissinformatics.org, isss.ch, isaca.ch, bauen-digital.ch in the fields of digitalization, engineering, clouds, ICT architecture, security, privacy, data protection, audit, compliance, controlling, information ethics, in corresponding legislative consultations and also in education and training (CAS, federal diploma).

This article was first published in Schwyzer Gewerbe magazine in May 2025 and is reproduced here with the author’s permission.

Photo: AI generated.