There is much more on the hyperautomation menu than just “fish and chips”

Hyperdigitalization and hyperautomation are processes that are fundamentally changing the way we work, live and interact. This profound transformation is based not only on individual technological advances, but on a complex interplay of several key components. At least five essential building blocks are required to successfully shape this change: chips, energy, data, talent and AI models.

Chips are at the heart of every digital application. High-performance processors and GPUs make it possible to process large volumes of data and run AI models in real time. Companies are investing increasingly in the development and optimization of this hardware in order to improve the efficiency and speed of their systems. One example is the collaboration between start-ups and established technology companies to develop innovative, increasingly efficiency-optimized solutions and bring them to market.

Energy is another critical factor. Digital systems, and AI models in particular, require enormous amounts of electricity for their complex calculations. It is estimated, for example, that ChatGPT alone consumes as much electricity every day as around 35,000 US households, which corresponds to roughly 1 GWh.
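As a rough plausibility check of this order of magnitude (assuming an average US household consumption of about 10,500 kWh per year, a commonly cited ballpark rather than an official figure), the arithmetic looks like this:

```python
# Back-of-the-envelope check of the "35,000 households = ~1 GWh per day" figure.
# The household consumption value is an assumed ballpark, not an official statistic.
avg_household_kwh_per_day = 10_500 / 365       # ~28.8 kWh per household per day
households = 35_000
total_gwh_per_day = households * avg_household_kwh_per_day / 1_000_000  # kWh -> GWh
print(f"{total_gwh_per_day:.2f} GWh per day")  # prints roughly 1.01 GWh per day
```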

It is therefore crucial to use sustainable and efficient energy sources in order to minimize the environmental impact and reduce operating costs, which ultimately also benefits overall profitability.

Data is the raw material (the “gold mine in your own backyard”) of digital transformation. Models cannot be trained and used effectively without high-quality, comprehensive and contextually relevant data sets. Companies must ensure that their data is correct, contextually appropriate, up to date and well structured. Data protection and data security play a central role in gaining users’ trust and meeting legal requirements.

Talent is the driving force behind any successful digital initiative. Specialists with in-depth knowledge are essential – from AI applications and the right toolset through to business engineering, data science, machine learning and software development. Companies need to invest in the training and development of their employees to build up the necessary skills and knowledge. Programs such as champions programs and certifications in areas like low code / no code / automation can help to expand employees’ skills and increase their motivation as part of a realigned mindset across the entire organization.

AI models are, so to speak, the “tools” – the digital shovels, steam engines and motors – that drive hyperdigitalization. They make it possible to recognize previously undetected patterns in large amounts of data, to model the past more comprehensively, to support better predictions, and to optimize and even automate broader decision-making. Companies must ensure that the AI tools they use, or even their own AI models, are secure, robust, transparent and ethical. The integration of AI models into business processes can lead to significant gains in efficiency and innovation.

More than “fish and chips”

In summary, chips, energy, data, talent and AI models are the five pillars on which the future of hyperdigitalization is based. Companies that use these components effectively and link and orchestrate them intelligently will be able to develop innovative solutions, make better decisions, increase their competitiveness and ultimately optimize their “digital customer proximity”.

And yet / fortunately: anyone who wants to master the digital and the “real” world first has to understand it for “real”

The further development of artificial intelligence (AI) towards Artificial General Intelligence (AGI) is a fascinating and challenging process. In principle, AGI aims to create an AI that is capable of performing any intellectual task autonomously, and even better or more efficiently than a human can. Getting there requires the broader integration and further development of various technologies such as robotics, sensor technology, haptics and computer vision, which provide the AI with extended senses such as sight, hearing, smell, taste, touch, balance and even a sense of depth in the experience of one’s own body and life.

This is much more than just a possible analogy to “If you don’t want to hear, you have to feel”: AI has long been able to hear or read us (e.g. our entire knowledge of the world), but it is still learning to touch and see – be it through computer vision, haptic sensors, robotics or multimodal models that attempt not only to interpret the world, but also to understand it contextually and react to it in a meaningful way.

Sensor technology in general and robotics in particular play central roles in the physical interaction of AI systems with the real world. Advances in robotics are enabling machines to take on complex tasks that were previously reserved for humans. Sensors and haptics are crucial to giving machines a better understanding of their environment and the ability to interact sensitively. Computer vision enables AI systems to interpret and react to visual information, which is crucial for many applications.

These technologies are currently undergoing rapid development and are driving the next evolutionary stage of AI. One notable driver of this development was, for example, the so-called “Sputnik effect” triggered by DeepSeek from China in mid-January 2025. This effect describes the sudden, intense surge of innovation that groundbreaking technological advances can set off. DeepSeek’s novel approaches to AI research and application, some of which have since been put into perspective, caused a worldwide sensation and further fueled the competition for supremacy in AI technology.

Quantum leaps and quantum computing

The further combination of these technologies and the associated advances – up to and including outright “quantum leaps” (quantum computing is also waiting around the corner, armed with more than one trick up its sleeve…) – brings us closer to the so-called singularity: the point at which AI systems surpass human intelligence and, above all, continue to develop independently. This development holds enormous opportunities as well as challenges that need to be mastered and, where necessary, adequately regulated.

We do not need to worry so much about the further development of technology, and AI in particular, as about those who use it – including states, cybercriminals, terrorists and disinformation actors – and deliberately turn it against people and organizations.

The “kill switch” – from a science fiction joke to a real security issue?

Another interesting aspect of this development is the so-called “kill switch” – an emergency shutdown for AI systems that has often been treated as a gag or science fiction joke until now. However, with the progress towards autonomous, self-learning systems, especially in connection with quantum computing and the possible singularity, a “kill switch” at hardware or software level could actually become a necessary safety measure.

While today’s AI models are still clearly controllable, future systems that develop independently could become more difficult to predict or regulate. A “kill switch engineer” – a specialized role responsible for implementing such rule- or event-based shutdown mechanisms – could take on an essential function in AI safety research, comparable to fail-safe and fallback systems in nuclear power plants or aircraft control systems.
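Purely as an illustration of what such a rule- or event-based shutdown mechanism could look like at the software level, here is a minimal, hypothetical sketch; all rule names and thresholds are invented for this example and are not taken from any real system:

```python
import time

# Hypothetical, minimal sketch of a rule-/event-based "kill switch":
# a watchdog checks defined safety rules on every cycle and halts the
# autonomous loop as soon as one of them is violated. All rules, names
# and thresholds here are invented purely for illustration.

KILL_RULES = [
    ("budget exceeded",      lambda state: state["actions_taken"] > 1_000),
    ("operator stop signal", lambda state: state["stop_requested"]),
    ("out-of-scope action",  lambda state: state["last_action"] not in state["allowed_actions"]),
]

def kill_switch_triggered(state: dict) -> str | None:
    """Return the name of the first violated rule, or None if all checks pass."""
    for name, rule in KILL_RULES:
        if rule(state):
            return name
    return None

def run_agent_loop(agent_step, state: dict) -> None:
    """Run an autonomous loop, but stop immediately if any kill rule fires."""
    while True:
        reason = kill_switch_triggered(state)
        if reason is not None:
            print(f"Kill switch activated: {reason} - shutting down.")
            break
        agent_step(state)   # one autonomous action of the system
        time.sleep(0.1)     # pacing; a real system would log and audit here
```

In practice, such a software check would of course have to be backed by logging, auditing and hardware-level safeguards, which is exactly where the comparison with fail-safe systems comes in.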

It remains to be seen whether the “kill switch” will ultimately become just a psychological reassurance measure or a necessary control instrument. What is very likely to become an issue, however, is a new specialization towards a so-called JRE “Junkware Removal Engineer” as part of quality management: a role with specialized expertise in detecting, optimizing or removing unwanted or security- and compliance-critical code and functions in apps, prompts, automations and agents. This is especially relevant given the increasing number of power users who co-develop such functions using “no code” / “low code” approaches, initially without in-depth programming knowledge or experience with data protection, data security and compliance requirements as part of ICT risk management.
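To make this idea slightly more concrete: one very simple building block of such quality management could be an automated scan of no-code/low-code artifacts and prompts for obviously risky patterns. The following sketch is purely illustrative; the pattern list, names and sample data are assumptions and nothing close to a real compliance tool:

```python
import re

# Illustrative, deliberately naive scanner for risky patterns in prompts,
# scripts or automation definitions. The patterns below are examples only
# and are far from a complete security or compliance check.

RISKY_PATTERNS = {
    "hard-coded credential": re.compile(r"(password|api[_-]?key)\s*[:=]\s*\S+", re.IGNORECASE),
    "personal data export":  re.compile(r"export\s+(customer|employee)\s+data", re.IGNORECASE),
    "unrestricted deletion": re.compile(r"delete\s+\*|drop\s+table", re.IGNORECASE),
}

def scan_artifact(name: str, content: str) -> list[str]:
    """Return a list of findings for one app, prompt or automation artifact."""
    findings = []
    for label, pattern in RISKY_PATTERNS.items():
        if pattern.search(content):
            findings.append(f"{name}: possible {label}")
    return findings

if __name__ == "__main__":
    demo = {"invoice-flow": 'api_key = "sk-123"', "cleanup-job": "DROP TABLE customers"}
    for artifact, text in demo.items():
        for finding in scan_artifact(artifact, text):
            print(finding)
```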

Fridel Rickenbacher is a former co-founder, co-CEO, partner and member of the Board of Directors, and now a participating “entrepreneur in the company” / “senior consultant” at Swiss IT Security AG / Swiss IT Security Group. At federal level, he is involved as an expert and contributor in “Digital Dialog Switzerland” and the “National Strategy for the Protection of Switzerland against Cyber Risks NCS”. In his mission “sh@re to evolve”, he has been active for years as an editorial member, expert group and association activist at, for example, SwissICT, swissinformatics.org, isss.ch, isaca.ch and bauen-digital.ch in the fields of digitalization, engineering, clouds, ICT architecture, security, privacy, data protection, audit, compliance, controlling and information ethics, in corresponding legislative consultations and also in education and training (CAS, federal diploma).

This article was first published in Schwyzer Gewerbe magazine in April 2025 and is reproduced here with the author’s permission.

Photo: AI generated.