AI is no longer a question of the future in software development. According to CodeSignal's latest trends report, more than 80 percent of software developers already use AI in their work, almost half of them daily. And yet there is a gap that is still not discussed openly enough in the industry: there is a world of difference between using AI and using it responsibly.
We don’t want to ignore this gap. This article shows how we at BAYOOTEC work with AI, where we deliberately do not use it, and why we see governance and regular training not as bureaucracy, but as a quality feature.
Productivity boost with side effects
There’s no doubt about it: AI support in development works. Microsoft researchers were able to show that teams with GitHub Copilot completed the same task around 50 percent faster than comparison groups without AI support. This order of magnitude is impressive and explains why AI tools have so quickly become commonplace in development teams around the world.
But the same development has a downside that is slowly becoming apparent. A recent study by the security company Apiiro shows that developers who use AI assistants write three to four times more code, but also introduce ten times as many risks. These include unchecked open source dependencies, hidden backdoors in the generated code and inadvertently published access credentials. According to industry estimates, around 40 percent of the code written worldwide in 2025 will be AI-generated. So-called "vibe coding", i.e. the intuitive generation of code through AI prompts without a deep understanding of the result, is a reality.
Awareness of this is growing: according to the Stack Overflow Developer Survey, developers' confidence in the accuracy of AI-generated code fell from 42 to 33 percent between 2024 and 2025. Senior engineers are particularly skeptical, and for good reason: AI-generated errors are often well hidden. The code looks clean and well thought out at first glance, but the weaknesses lie deeper, and it often takes experienced developers to spot them at all. Those who really understand code can also see what the AI has got wrong.
Where AI is actually used at BAYOOTEC
At BAYOOTEC, we use AI in a targeted manner rather than across the board. AI support brings us real added value in three areas.
Where we consciously do without AI
The fact that we use AI does not mean that we use it everywhere. We do not use AI support in projects with particularly sensitive customer data or highly sensitive source code. In its impulse paper on GenAI and cyber security, Fraunhofer AISEC explicitly warns of the risk of information leakage through so-called inference attacks: anyone who transfers confidential data to a large language model can lose control of this information.
In strictly regulated environments, i.e. where GDPR, EU AI Act, MDR or ISO standards must be complied with, we work with particular caution. The OWASP Top 10 for LLM applications lists prompt injection, insecure output and data leaks as critical risks. For security-critical components such as cryptography, authentication logic or access control, our general rule is: no automatic generation without dedicated manual checking and explicit approval.
And we do not use AI where customers expressly do not want it. For us, this is not a restriction but a professional attitude. At a time when, according to the study "Future of Application Security in the Era of AI", only 18 percent of companies have clear AI guidelines while 81 percent knowingly ship insecure code, a conscious no to shortcuts is not a step backwards. It is the standard we set for ourselves.
Why AI code is not a free ride
AI-generated code looks clean. Often it is. But it can hard-code secrets or handle them insecurely, use crypto APIs incorrectly, introduce libraries without version or vulnerability checks, and implement input validation and authorization checks incompletely. Fraunhofer AISEC states it clearly in its impulse paper: Generative AI can improve development quality and speed, but the unchecked use of AI responses can create security vulnerabilities, especially if generated code is used in an uncontrolled manner.
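What such a deeply hidden weakness can look like is easiest to show in code. The following Python sketch is purely illustrative and comes from no real codebase: the commented-out variant mirrors a pattern AI assistants frequently produce, the version below it a hardened equivalent.

```python
import hashlib
import hmac
import os

# Typical AI-generated pattern: a hard-coded secret and a plain comparison.
# API_KEY = "sk-live-abc123"          # secret committed to the repository
# def check_token(token):
#     return token == API_KEY         # timing-unsafe string comparison

# Hardened version: secret comes from the environment, comparison is
# constant-time, and a missing configuration fails loudly instead of silently.
def check_token(token: str) -> bool:
    api_key = os.environ.get("API_KEY")
    if not api_key:
        raise RuntimeError("API_KEY is not configured")
    return hmac.compare_digest(
        hashlib.sha256(token.encode()).digest(),
        hashlib.sha256(api_key.encode()).digest(),
    )
```

Both versions "work" and look equally clean in a diff, which is exactly why this class of defect tends to survive a superficial review.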
For this reason, we treat AI-generated code internally as “unaudited code”. This means that it goes through the same secure coding gates as any other code, sometimes with additional steps. In addition, we have formulated our own secure coding rules that explicitly address AI-specific failure patterns: insecure defaults, questionable library recommendations, missing boundary checks.
This is not a mistrust of technology. It is quality assurance.

Our AI governance: who decides what, and how
Analysis is not responsibility. That sounds simple, but it describes the core of the problem very precisely. AI can find weak points, make suggestions and prioritize risks. But risk appetite, policies and approval processes cannot be delegated. These are human decisions.
At BAYOOTEC, we have established clear governance structures for this purpose:
This approach is in line with what international governance frameworks such as the NIST AI Risk Management Framework or the OWASP LLM Top 10 recommend: human oversight for high-risk decisions, clear responsibilities, structured documentation. And from August 2, 2026, the EU AI Act will be mandatory for the majority of AI systems. Those who have not established governance by now are already running late.
Training courses: What “AI-competent” means for us in concrete terms
Regular internal AI training is not a box-ticking exercise for us; it is the practical lever we use to ensure that efficiency gains from AI do not come at the expense of security or code quality.
Theodor W. Adorno once wrote: "The half-understood and half-experienced is not the precursor of education, but its enemy." This applies to AI tools in everyday development more than to almost any other technology.
Our training courses therefore do not start with rules, but with understanding: How do large language models work? What can they do, what can't they do, and why? Anyone who understands AI not as a magical black box but as a tool with known weaknesses can use it safely. Only on this basis do the other topics follow: typical failure patterns, secure prompting without sensitive data, and legal foundations such as the EU AI Act and the GDPR.
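Secure prompting can be backed by tooling. Purely as an illustration, and with pattern coverage that is our assumption rather than any fixed standard, a minimal redaction pass in Python could look like this:

```python
import re

# Illustrative redaction rules applied before a prompt leaves the company
# boundary. A production filter would cover far more categories.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"(?i)bearer\s+[a-z0-9._-]+"), "<TOKEN>"),
]

def redact_prompt(prompt: str) -> str:
    """Replace obviously sensitive substrings before sending text to an LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

Such a filter is one layer among several; the training exists precisely so that developers do not rely on it alone.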
Technical guardrails complement the training program: approved tools, restricted access to certain models, automated checks in the pipeline. Training and system must work together, because knowledge alone does not guarantee security.
AI and cybersecurity: two sides of the same coin
There is another dimension that remains underexposed in many blog posts on AI in development: AI is not only changing how software is built, but also how it is attacked.
Cyberattacks are becoming faster, more scalable and more targeted thanks to AI. Today, malware can be generated with a single prompt. AI helps attackers identify and exploit vulnerabilities in code, including AI-generated code. This connection is not a hypothetical risk: according to Apiiro, companies using AI coding assistants report over 10,000 new security findings from AI-generated code every month.
For BAYOOTEC, this means that AI support in development and cybersecurity expertise are not separate topics that are dealt with in separate silos. They belong together. Anyone using AI must also keep an eye on the attack surface that this creates. This is the core of our approach: responsible use not as the antithesis of efficiency, but as a prerequisite for efficiency to work in the long term.
Conclusion: AI yes, but with attitude
AI is here to stay. It is becoming more powerful, more accessible and more deeply integrated into development processes. This is not a threat, but it is not a sure-fire success either. The decisive factor is how companies and teams deal with it.
At BAYOOTEC, we have deliberately decided against accepting AI as a black box or using it as a shortcut. Instead, we rely on clear policies, human approval processes, secure prompting, structured logging and regular training. That takes time, and in places it costs speed. But it is the basis on which we can deliver code we stand behind, for customers who rightly expect it.
Analysis is not responsibility. Responsibility lies with us.