Between progress and control of AI
The physicist and AI researcher Max Tegmark puts it clearly: “We discovered fire, made lots of mistakes, and then invented the fire extinguisher. But this principle of trial and error is not suitable for artificial intelligence, which is far more powerful than fire. With technologies of this scale, such as nuclear weapons combined with superhuman AI, we can’t hope to learn from mistakes – we need to take the right measures from the start.”
The EU Parliament’s newly adopted AI law aims to make the use of artificial intelligence in the European Union safer. It is intended to ensure that AI systems are transparent, comprehensible, non-discriminatory and environmentally friendly. A key point is that these systems are monitored by people and not exclusively by other technologies.
Is this law the “fire extinguisher” for AI that Tegmark demands? Certainly not, but it can be a first step. The question remains, however, how we can control AI. We ourselves integrate AI components into our software system (DISKOVER) for particularly complex tasks such as production control. This gives us first-hand experience of how difficult it is to ensure transparency in the results of AI-supported software. It becomes even more complicated when external systems such as ChatGPT or Gemini are involved.
One example: We are currently experimenting with AI systems that record our meetings. The software joins our online meetings like another participant and listens in. It then creates the minutes, which can be accessed in the cloud at any time. The results are indeed promising. You can even choose between a detailed report and just the most important points, which makes the work much easier.
However, the AI now decides which points from the meeting are classified as important, and it is not clear whether it does so neutrally. The AI could skew the results or even discriminate. And who remembers all the content of a meeting two weeks later? AI could thus influence our decisions unnoticed.
Should the provisions of the new EU law on the use of AI already apply in such cases? Can external or internal stakeholders hold us accountable if we continue to use the software? Are we obliged to switch off the AI? That would be counterproductive and would hinder progress in AI applications. We would be abandoning the software too early, without having exhausted its possibilities and remedied its weaknesses.
Other countries do not yet have comparable regulations on the use of AI and can continue to operate freely. The gap between Europe and the USA and China will thus continue to widen!
Greetings,
Bernd Reineke