AI meets existing system: why established IT landscapes are not an obstacle to AI, but often an advantage

AI is no longer a question of the future in software development. According to the latest trends report from CodeSignal, more than 80 percent of software developers already use AI in their work, almost half of them daily. And yet there is a gap here that is still not discussed openly enough in the industry: there is a world of difference between using AI and using it responsibly.

We don’t want to ignore this gap. This article shows how we at BAYOOTEC work with AI, where we deliberately do not use it, and why we see governance and regular training not as bureaucracy, but as a quality feature.

Productivity boost with side effects

There’s no doubt about it: AI support in development works. Microsoft researchers were able to show that teams with GitHub Copilot completed the same task around 50 percent faster than comparison groups without AI support. This order of magnitude is impressive and explains why AI tools have so quickly become commonplace in development teams around the world.

But the same development has a downside that is slowly becoming apparent. A recent study by the security company Apiiro shows that developers who use AI assistants write three to four times more code, but also introduce ten times as many risks. These include unchecked open source dependencies, hidden backdoors in the generated code and inadvertently published access data. According to industry estimates, around 40 percent of the code written worldwide in 2025 will be AI-generated. So-called “vibe coding”, i.e. the intuitive generation of code through AI prompts without a deep understanding of the result, is a reality.

Awareness of this is growing: according to the Stack Overflow Developer Survey, developers’ confidence in the accuracy of AI-generated code fell from 42% to 33% between 2024 and 2025. Senior engineers are particularly skeptical, and there is a reason for this: AI-generated errors are often well hidden. The code looks clean and well thought-out at first glance, but the weak points lie deeper. It often takes experienced developers to recognize them in the first place. Those who really understand code can also see what the AI has done wrong.

Where AI is actually used at BAYOOTEC

At BAYOOTEC, we use AI in a targeted manner rather than across the board. AI support brings us real added value in three areas.

  • Code review and security scanning:
    AI-supported review tools help us make security smells, anti-patterns and insecure dependencies visible directly in the pull request. A clear internal rule applies here: AI findings are never acted on automatically. They are input for human reviewers, not a replacement for them. An AI recommendation is a suggestion, not a commit. A designated person is always responsible, be it the lead developer or the security champion of a project.

  • Test generation:
    AI is good at writing tests, and we are happy to leave that to it. People decide what is tested: our QA team defines the relevant scenarios and the AI implements them (a short sketch follows this list). We read the fact that 89 percent of AI proposals pass code review unchanged as a warning signal, not as a success metric.

  • Documentation and change logs:
    AI helps us to explain code, generate API documentation and write change logs. This can save a lot of time, especially with larger refactoring cycles. However, in regulated projects and wherever customers explicitly check the documentation, the final approval always lies with experts. Versioned, traceable, audit-proof.
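
To make this division of labor concrete, here is a minimal sketch, assuming pytest; the `validate_iban` function and the test data are invented for illustration. The scenario table is the human-defined part, the parametrized test body is what an assistant may draft:

```python
import pytest

def validate_iban(iban: str) -> bool:
    """Minimal IBAN check (ISO 13616 mod-97), just enough for the example."""
    s = iban.replace(" ", "").upper()
    if not (15 <= len(s) <= 34) or not s.isalnum():
        return False
    rearranged = s[4:] + s[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)  # A=10 ... Z=35
    return int(digits) % 97 == 1

# Human-defined: QA specifies the relevant scenarios as plain data.
SCENARIOS = [
    ("DE89370400440532013000", True),       # well-formed IBAN
    ("DE89 3704 0044 0532 0130 00", True),  # same IBAN, with spaces
    ("DE00370400440532013000", False),      # wrong check digits
    ("", False),                            # empty input
    ("X" * 500, False),                     # oversized input
]

@pytest.mark.parametrize("iban, expected", SCENARIOS)
def test_validate_iban(iban, expected):
    # The mechanical part an assistant may draft; reviewed like any other code.
    assert validate_iban(iban) is expected
```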

Where we consciously do without AI

The fact that we use AI does not mean that we use it everywhere. We do not use AI support in projects with particularly sensitive customer data or highly sensitive source code. In its impulse paper on GenAI and cyber security, Fraunhofer AISEC explicitly warns of the risk of information leakage through so-called inference attacks: anyone who transfers confidential data to a large language model can lose control of this information.

In strictly regulated environments, i.e. where GDPR, the EU AI Act, MDR or ISO standards must be complied with, we work with particular caution. The OWASP Top 10 for LLM applications lists prompt injection, insecure output handling and data leaks as critical risks. For security-critical components such as cryptography, authentication logic or access control, our general rule is: no automatic generation without dedicated manual checking and explicit approval.

And we do not use AI where customers expressly do not want it. This is not a restriction for us, but a professional attitude. At a time when, according to the study “Future of Application Security in the Era of AI”, only 18% of companies have clear AI guidelines, while 81% knowingly ship insecure code, a conscious no to shortcuts is not a step backwards. It is the standard we set for ourselves.

Why AI code is not a free ride

AI-generated code looks clean. Often it is. But it can hard-code secrets or handle them insecurely, use crypto APIs incorrectly, introduce libraries without version or vulnerability checks, and implement input validation and authorization checks incompletely. Fraunhofer AISEC states it clearly in its impulse paper: Generative AI can improve development quality and speed, but the unchecked use of AI responses can create security vulnerabilities, especially if generated code is used in an uncontrolled manner.
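
Two of these failure patterns side by side, as an illustration rather than real generated output; the key and parameter values are placeholders:

```python
# Illustrative only: two typical AI failure patterns next to the versions
# a review should insist on. Key and parameter values are placeholders.
import hashlib
import os
import secrets

# --- Pattern 1: a secret hard-coded into the source ---
API_KEY = "sk-live-..."  # generated code often inlines credentials like this

def get_api_key() -> str:
    # Fix: secrets come from the environment or a vault, never from the source.
    return os.environ["API_KEY"]

# --- Pattern 2: the wrong crypto API for the job ---
def hash_password_weak(password: str) -> str:
    # Fast, unsalted hash: unsuitable for passwords, but a common suggestion.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password(password: str) -> str:
    # Fix: salted, deliberately slow key derivation (PBKDF2 via the stdlib).
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()
```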

For this reason, we treat AI-generated code internally as “unaudited code”. This means that it goes through the same secure coding gates as any other code, sometimes with additional steps. In addition, we have formulated our own secure coding rules that explicitly address AI-specific failure patterns: insecure defaults, questionable library recommendations, missing boundary checks.
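
What “unaudited code” can mean mechanically, as a sketch: we assume here that AI-assisted commits carry a marker such as an `Assisted-by:` trailer (a convention we invent for this example, not a standard) and that the review system passes an approval flag to the pipeline.

```python
# Sketch of a CI gate: AI-assisted commits need explicit human sign-off to merge.
# The "Assisted-by:" trailer and the approval flag are assumptions for this
# illustration, not an established standard.
import subprocess
import sys

def commit_messages(base: str, head: str) -> list[str]:
    # Full commit messages between base and head, NUL-separated for safe splitting.
    out = subprocess.run(
        ["git", "log", "--format=%B%x00", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [m for m in out.split("\x00") if m.strip()]

def main(base: str, head: str, approved: bool) -> int:
    ai_assisted = [m for m in commit_messages(base, head) if "Assisted-by:" in m]
    if ai_assisted and not approved:
        print(f"{len(ai_assisted)} AI-assisted commit(s) without reviewer sign-off.")
        return 1  # fail the pipeline: unaudited code does not merge
    return 0

if __name__ == "__main__":
    # The approval flag would come from the review system, e.g. a PR label.
    sys.exit(main(sys.argv[1], sys.argv[2], "--approved" in sys.argv))
```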

This is not mistrust of technology. It is quality assurance.

AI in everyday development: how we think about cybersecurity and responsibility together

Our AI governance: who decides what, and how

Analysis is not responsibility. That sounds simple, but it describes the core of the problem very precisely. AI can find weak points, make suggestions and prioritize risks. But risk appetite, policies and approval processes cannot be delegated. These are human decisions.

At BAYOOTEC, we have established clear governance structures for this purpose:

  • Firstly, an AI policy that defines where AI is permitted and where it is not. This explicitly excludes projects with safety-critical control systems, highly sensitive health data or contractual restrictions. A risk class is determined for each project and a decision is made on the use of AI based on this evaluation.

  • Secondly, clearly defined roles and responsibilities: Who is allowed to formulate prompts? Who checks generated code? Who releases? These questions must be answered before deployment, not afterwards.
  • Thirdly, logging and traceability in the CI/CD process: which tools were used, in which version, with which prompts, and which suggestions were adopted? This transparency is not a bureaucratic obligation, but a prerequisite for traceable software development, especially in projects with audit requirements. A minimal sketch of such a record follows this list.
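
One way to make such a record concrete, as a sketch; the field names and the JSONL artifact are illustrative choices, not a fixed schema:

```python
# Sketch: one provenance record per adopted AI suggestion, written from CI.
# Field names are illustrative, not a fixed schema; the point is that the
# log is versioned, append-only and queryable during an audit.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AiProvenanceRecord:
    tool: str              # which assistant was used
    tool_version: str      # exact version, for reproducibility
    prompt_hash: str       # hash instead of prompt text, so nothing sensitive is stored
    suggestion_adopted: bool
    reviewer: str          # the human who approved the change
    commit: str            # the commit the suggestion ended up in
    timestamp: str         # UTC, ISO 8601

record = AiProvenanceRecord(
    tool="copilot",                      # placeholder values throughout
    tool_version="1.250.0",
    prompt_hash="sha256:9c1e...",
    suggestion_adopted=True,
    reviewer="lead-dev",
    commit="a1b2c3d",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Appended as a build artifact alongside the pipeline run.
with open("ai-provenance.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```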

This approach is in line with what international governance frameworks such as the NIST AI Risk Management Framework or the OWASP LLM Top 10 recommend: human oversight for high-risk decisions, clear responsibilities, structured documentation. And from August 2, 2026, the EU AI Act will apply to the majority of AI systems. Anyone who has not established governance by then is already running late.

Training courses: What “AI-competent” means for us in concrete terms

Regular internal training on AI is not a box-ticking exercise for us, but the practical lever we use to ensure that efficiency gains through AI do not come at the expense of security or code quality.

Theodor W. Adorno once wrote: “The half-understood and half-experienced is not the precursor of education, but its enemy.” This applies to AI tools in everyday development more aptly than to almost any other tool.

Our training courses therefore do not start with rules, but with understanding: How do large language models work? What can they do, what can’t they do, and why? Anyone who understands AI not as a magical black box, but as a tool with known weaknesses, can use it safely. Only on this basis do the other topics follow: typical failure patterns, secure prompting without sensitive data, and legal foundations such as the EU AI Act and GDPR.

Technical guardrails complement the training program: approved tools, restricted access to certain models, automated checks in the pipeline. Training and tooling must work together, because knowledge alone does not guarantee security.
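
A guardrail of the “approved tools” kind can be as simple as an allowlist check in the pipeline. A sketch, with tool and model names as invented placeholders:

```python
# Sketch: a pipeline step that rejects runs using a model outside the allowlist.
# Tool and model names are invented placeholders, not a real policy.
APPROVED_MODELS = {
    ("copilot", "gpt-4o"),
    ("internal-llm", "v2"),
}

def check_model_allowed(tool: str, model: str) -> None:
    # Fails fast so an unapproved model never touches project code unnoticed.
    if (tool, model) not in APPROVED_MODELS:
        raise RuntimeError(
            f"Model '{model}' via '{tool}' is not on the approved list; "
            "consult the AI policy before using it."
        )

check_model_allowed("copilot", "gpt-4o")            # passes silently
# check_model_allowed("copilot", "beta-model-x")    # would raise RuntimeError
```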

AI and cybersecurity: two sides of the same coin

There is another dimension that remains underexposed in many blog posts on AI in development: AI is not only changing how software is built, but also how it is attacked.

AI is making cyberattacks faster, more scalable and more targeted. Today, malware can be generated with a single prompt. AI helps attackers identify and exploit vulnerabilities in code, including AI-generated code. This connection is not a hypothetical risk: according to Apiiro, companies using AI coding assistants see over 10,000 new security findings from AI-generated code every month.

For BAYOOTEC, this means that AI support in development and cybersecurity expertise are not separate topics that are dealt with in separate silos. They belong together. Anyone using AI must also keep an eye on the attack surface that this creates. This is the core of our approach: responsible use not as the antithesis of efficiency, but as a prerequisite for efficiency to work in the long term.

Conclusion: AI yes, but with attitude

AI is here to stay. It is becoming more powerful, more accessible and more deeply integrated into development processes. This is not a threat, but it is not something that takes care of itself either. The decisive factor is how companies and teams deal with it.

At BAYOOTEC, we have deliberately decided against accepting AI as a black box or using it as a shortcut. Instead, we rely on clear policies, human approval processes, secure prompting, structured logging and regular training. That takes time. In places, it also costs some speed. But it is the basis on which we can deliver code that we stand behind, for customers who rightly expect it.

Analysis is not responsibility. Responsibility lies with us.

FAQ: AI in software development – what companies need to know

How can companies use AI securely in software development?

AI can be used securely if clear governance structures are in place: an AI policy, defined roles, logging in the CI/CD process and regular training for developers. Generated code should always pass through the same secure coding gates as manually written code and should never be adopted automatically without human review.

What risks does AI-generated code pose?

AI-generated code can hard-code credentials, misuse crypto APIs, introduce insecure libraries without vulnerability checks and implement authorization checks incompletely. It often looks clean, but contains typical failure patterns that are overlooked without additional scanning and dedicated code reviews. Fraunhofer AISEC and the OWASP LLM Top 10 document these risks in detail.

What is an AI governance framework and why is it needed?

An AI governance framework defines who may use AI and how, which data may flow into prompts, how decisions are documented and who issues approvals. Without this framework, uncontrolled risks arise: missing logs, unclear responsibilities, compliance gaps. From August 2026, such a framework will be de facto mandatory for many companies under the EU AI Act.

May AI be used in regulated industries?

AI may be used in regulated industries if strict requirements are met: human-in-the-loop control, traceable documentation, compliance with GDPR, MDR and the EU AI Act. AI systems that are integrated into medical devices are often considered high-risk AI and are subject to correspondingly stricter compliance obligations. Special care is required when using generative AI for software development in this environment.

What do developers need to be trained in?

Developers should be trained in four areas: how AI models work and where their limits lie, typical failure patterns, secure prompting without sensitive data, and legal foundations such as the EU AI Act and GDPR. Training should be supplemented by technical guardrails, such as approved tools and automated checks in the CI/CD pipeline, as knowledge alone does not guarantee security.

What does human-in-the-loop mean in software development?

Human-in-the-loop means that AI makes suggestions, but humans make the final decisions. In the development context, this means that AI-generated code is never adopted automatically, but is always checked and approved by a designated person. Risk appetite, security guidelines and approval processes remain the responsibility of humans and cannot be delegated to a model.

How does AI change the cybersecurity situation?

AI increases the attack surface in two ways: first, AI-generated code often contains vulnerabilities that give attackers new entry points; second, cyber criminals use AI themselves to carry out attacks faster and at greater scale. Companies that use AI in development must therefore adapt their cybersecurity measures at the same time and actively address AI-specific risks.
