Skeptic or enthusiast from day one – there is hardly a topic that divides opinion as much right now as artificial intelligence (AI).
In this interview, BAYOOTEC CTO and AI expert David Ondracek talks to us about the opportunities and risks of AI and about false expectations, provides insights into software development with AI-based tools, and tells us where he believes further research is still needed.
As with most new technologies, there are two camps when it comes to artificial intelligence. Which one do you belong to, David – are you for or against AI?
There is no general answer to that question, because AI has already been around for a long time.
I can’t be against daylight either. I may not like it, but it’s there and it’s not going away.
If you ask me whether I’m more on the lover’s side or the hater’s side, I’m definitely on the lover’s side. I believe that AI offers a lot of potential to simplify and fundamentally change our lives.
“A lot of people see AI as a danger because they don’t understand what it even means.”
Do you understand then that there is an opposite side, that some people see AI as a danger?
Absolutely, and it’s very important that there is a counterweight, because an uncontrolled introduction of AI could lead to big problems.
However, in conversations I notice again and again that many people are against it because they don’t understand what it means in the first place. Often fear is stoked, or there is the false expectation that artificial intelligence knows everything and can do everything – a bit like Skynet in Terminator.
You mean this false expectation of AI is causing frustration?
Absolutely, and ChatGPT is a good example of this. It feels so natural to talk to this chatbot; everything it says sounds logical and is well formulated, so you quickly expect factually correct information. But that’s exactly what ChatGPT doesn’t provide.
ChatGPT is not about facts; it is not a database I can search for facts. But if I ask a question with that expectation and get a wrong answer, I’m disappointed, and many people then dismiss the whole of AI technology across the board. That quickly leads to the thought: “AI can’t be that good after all.”
Surely this is also due to the fact that many tools are called AI where there is actually no real AI involved, isn’t it?
Yes, even today process automations are sometimes advertised as AI, although only decision tables run in the background: If condition A applies, then say sentence B, otherwise say sentence C.
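The decision-table logic David describes, which is sometimes sold as AI, fits in a few lines. The function and sentence names below are purely illustrative:

```python
# A minimal decision table: a fixed rule, no learning or statistics involved.
# "Condition A", "Sentence B", and "Sentence C" mirror the example above.
def decision_table_reply(condition_a: bool) -> str:
    """If condition A applies, say sentence B, otherwise say sentence C."""
    if condition_a:
        return "Sentence B"
    return "Sentence C"
```

However complex such rule sets become, their behavior is fully spelled out in advance, which is exactly what separates them from statistical models.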
Generative models like ChatGPT, however, are a different matter entirely. These are statistical models, and there is so much going on in the background that it can hardly be captured. Such models have to be thought of more like human intelligence.
“The fundamental principle with AI is to handle personal data responsibly.”
Does AI pose any tangible risks? You read about this again and again, especially in the field of data protection.
Of course, AI poses a risk where data protection is concerned. But it is no bigger a risk than social media. The same rule applies: always handle personal and critical data responsibly.
It makes no difference whether someone posts pictures of their children on Facebook or gives their full name and address to an AI – we all need to educate ourselves in media literacy, and AI is now part of that. So I do see a risk there, but not a new one.
Turning now to the positive side, the reason why AI should be used in the first place: What can AI already do better than us today, where can it perhaps support us?
At the root of it all is pattern recognition; that is what AI is based on. But “pattern” has to be understood broadly here: patterns in behavior, in code, and generally in any data – and in incredible amounts of data.
AI can sift through a great deal of data in a very short time and recognize possible correlations and patterns – better than any human ever could. It just depends on how you use it.
“In cybersecurity, AI can support.”
Do you have a good example of this from the field of software development? How do you use AI at BAYOOTEC?
An important field of our work at BAYOOTEC is cybersecurity. If there is an attack on a website, AI can detect it based on anomalies in the accesses. Implemented as a pure alerting function, the system then sends a notification to a responsible person with the note: “Caution, something is happening here, you should take a look at this urgently.”
Artificial intelligence can also react itself by blocking the conspicuous source address, the entire website, or a specific area of it. Potential attacks can then be fended off and sensitive data protected.
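A minimal sketch of this anomaly-based alerting might look as follows. The median-based threshold, the function names, and the log format are illustrative assumptions, not BAYOOTEC’s actual implementation:

```python
from collections import Counter
from statistics import median

def detect_anomalies(access_log, factor=10):
    """Return source addresses whose request count is far above the median.

    access_log: list of source IP strings, one entry per request.
    factor: how many times the typical volume counts as anomalous.
    """
    counts = Counter(access_log)
    typical = median(counts.values())
    return [ip for ip, n in counts.items() if n > factor * typical]

def handle(access_log, mode="alert"):
    """Either notify a responsible person or block the source directly."""
    for ip in detect_anomalies(access_log):
        if mode == "alert":
            print(f"Caution: unusual traffic from {ip}, please investigate")
        elif mode == "block":
            print(f"Blocking {ip}")  # in practice: a firewall/WAF rule

# Example: one address floods the site while others browse normally.
log = ["10.0.0.1"] * 500 + ["10.0.0.2", "10.0.0.3", "10.0.0.4"] * 2
handle(log)
```

Real systems would learn what “normal” traffic looks like from historical data instead of using a hard-coded factor, but the shape of the pipeline – detect the anomaly, then alert or block – is the same.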
Let’s take a look at the future: where do you think further research is needed in the field of artificial intelligence? What challenges, perhaps problems, do you see coming?
The timeliness of the data is an issue for which a solution is definitely still needed.
ChatGPT is the best example of this. Many people are not even aware that the data used to train the AI is already two years old (as of September 2021). This means ChatGPT has not learned anything since 2021 and accordingly knows nothing about the latest technologies, developments, and the current state of knowledge.
This is partly because it takes a long time to train AI models such as ChatGPT, and partly because data must also be prepared in advance.
There are already approaches in AI research on how to get very good models with little data and how the data can be prepared and filtered in advance.
At the BAYOONET Group, some employees are also academically engaged in this field. If AI models were to learn completely unfiltered, this would lead to major problems.
“If we’re not afraid, AI has a huge potential.”
What do you mean?
AI has no ethical ideas, no boundaries, no standards of morality. When AI models learn continuously from unfiltered input, things can go really wrong. A good example is Microsoft’s AI bot Tay, which was let loose on Twitter in 2016 and learned from user interaction. Its first responses were harmless, but within a short time the bot was posting racist, extremist, and anti-feminist comments. Just 16 hours after launch, Microsoft had to shut it down again.
For this reason, the replies of chatbots like ChatGPT often pass through a kind of filtering layer, a moral center of sorts that checks statements. You notice the existence of these filtering mechanisms whenever you ask ChatGPT about politics, for example.
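The filtering layer described here can be sketched roughly as follows. The phrase list and function name are hypothetical, and real moderation systems use trained classifiers rather than simple phrase matching:

```python
# Hypothetical sketch of an output filter (a "moral center") placed in
# front of a generative model. Illustrative only: production systems
# score outputs with dedicated moderation models, not keyword lists.
BLOCKED_PHRASES = {"extremist slogan", "personal attack"}  # placeholder list

def moderate(model_output: str) -> str:
    """Check a model reply before it reaches the user."""
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "I can't answer that."
    return model_output
```

The key design point is that the filter sits outside the model: the model itself has no morality, so the check has to happen on the way out.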
That is, what is currently missing is a solution that can learn faster and use current data, but still has sophisticated filtering mechanisms?
Exactly, a kind of control authority.
In the end it is and remains a balancing act; AI cannot do 100% of our thinking for us. But as long as we are not afraid and are aware of its enormous potential, it can do exactly what it was developed for: be a tool that makes our work easier and optimizes processes.
Even beyond AI tools, artificial intelligence is part of our everyday work. With AI, we optimize processes in organizations, increase security, and reduce costs, especially in the field of e-commerce. How does that work? You can read about it in our BAYOOTEC service overview.