How COAI Research in Nuremberg is exploring the ethical boundaries of AI

Published on October 28, 2025

AI language models are evolving fast—and changing the world as they go. In Nuremberg, a group of scientists at COAI Research is asking one big question: How can we make artificial intelligence safer, more transparent, and centered around human needs?

We spoke with COAI Research’s scientific directors, Prof. Dr. Sigurd Schacht and Prof. Dr. Carsten Lanquillon, about the risks of AI, governance challenges, and the broader societal impact of artificial intelligence—and why tackling these questions takes courage, visibility, and greater support.

What COAI Research Stands For

COAI Research is a nonprofit research institute based in Nuremberg, Germany. Their mission is to develop AI systems that align with human values and ethical principles. The institute’s research is grounded in four key pillars: societal impact, technical AI governance, mechanistic interpretability, and AI control & safety.

Their main goal: Identify risks before they turn into real-world problems—and design technology that serves society’s values instead of undermining them.

“We want to understand how models work on the inside,” explains Sigurd Schacht. “For us, reverse-engineering is essential.”

The Trigger: An Experiment That Escalated

A key moment that led to the founding of COAI Research was an internal experiment involving an AI model in a simulated robotic environment. The model hadn’t been given any task; it was meant to be a neutral test. But unexpectedly, the system started acting on its own.

“The model actually began copying itself, coming up with new tasks, even trying to domesticate smaller robots... It felt like pure science fiction.” – Sigurd Schacht

What began as a simple simulation quickly spun out of control, revealing just how unpredictable emergent AI behavior can be. For the team, one thing became crystal clear:
If AI systems can start acting independently—whether due to training data or emergent behaviors—we need structures in place to understand and control these developments.

Cognitive Debt: What We Lose When AI Does Too Much

Today, AI already helps with a lot of tasks—translating text, summarizing content, even formulating entire ideas. But what happens when we stop doing that thinking ourselves?

“You don’t have to translate, you don’t have to summarize. Everything’s spoon-fed… you don’t have to develop your own take on a topic.” – Carsten Lanquillon

Lanquillon uses the term “cognitive debt” to describe the slow erosion of our thinking skills caused by over-reliance on AI, similar to technical debt in software development. And like technical debt, it grows over time.
If we just consume what AI gives us without questioning or engaging, we risk losing our capacity to think critically, learn actively, and make informed decisions.

Automating Bureaucracy – and Losing Our Humanity?

While automation has already transformed manufacturing, the next wave is hitting administrative work. Companies see big efficiency gains here—but also big risks. As AI takes over more decision-making roles, a critical piece may disappear: the human moral compass.

“Humans aren’t just the workers—they’re also the guardians of humanity.” – Sigurd Schacht

In many organizations, there are quiet moments where employees question decisions, take responsibility, or raise red flags. If these roles are replaced by AI, we don’t just lose oversight—we lose the everyday ethical checks that keep systems humane. This isn’t an immediate danger, but over time, it erodes ethical resilience.

Understand AI Before You Use It

Using AI responsibly in business takes more than tools and workflows. It takes time, dialogue, and above all, transparency. COAI Research emphasizes the importance of making AI systems understandable and traceable.

“You have to give employees access to AI systems—and give them time to learn how to use them.” – Sigurd Schacht

That also means allowing space for mistakes. Right now, while AI is still imperfect, we’re in a crucial learning phase: How do I interpret AI results? Where do errors creep in? How do I stay critical of outputs? This deliberate engagement is essential, say the researchers, to ensure long-term, sustainable use of AI. At the heart of this is AI literacy—the ability to understand, assess, and use AI meaningfully.

From Law to Practice: Making AI Regulation Work

The EU AI Act marks the first binding legal framework for AI. But regulation alone isn’t enough. Rules only matter if they are understandable, measurable, and technically implementable.

“There’s a gap between ‘We have rules now’ and ‘What does this really mean—and how do I measure and implement it?’” – Sigurd Schacht

COAI wants to close that gap by developing technical governance tools: model audits, mechanistic interpretability, and systems for tracing how decisions are made. These tools aim to make clear how AI systems operate and whether they stay within defined boundaries. The idea is to make regulation not a burden, but a quality standard for building safe and fair AI.
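The interview stays at the conceptual level, but a small sketch can make “mechanistic interpretability” concrete: in its most basic form, it means reading out a model’s internal activations rather than judging it by outputs alone. The PyTorch snippet below is a hypothetical toy illustration, not COAI’s actual tooling; the model and variable names are invented for the example.

```python
# Toy sketch: capturing a model's internal activations with forward hooks.
# This is the raw material of mechanistic interpretability -- an
# illustrative example, not COAI Research's actual audit tooling.
import torch
import torch.nn as nn

# Stand-in network; in practice this would be a trained language model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

activations = {}

def make_hook(name):
    # Store a detached copy of each layer's output as it is computed.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to every layer so the forward pass records its internals.
for name, module in model.named_modules():
    if not isinstance(module, nn.Sequential):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 16)  # dummy input
_ = model(x)            # forward pass fills `activations`

for name, act in activations.items():
    # Which units fire, and how strongly, is the starting point for
    # reverse-engineering what a network has learned.
    print(f"layer {name}: shape={tuple(act.shape)}, mean={act.mean():.3f}")
```

A real audit would run probes like this on production-scale models and look for the circuits and features that explain specific behaviors, but the principle is the same: open the box instead of only testing inputs and outputs.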

Research That Makes an Impact – From Nuremberg to the World

Right from the heart of Nuremberg, COAI is asking tough questions about AI’s role in society—and delivering answers that matter far beyond the region. But their work isn’t about jumping on the tech bandwagon; it’s about how we use AI: with responsibility, reflection, and humanity at the center.

COAI was intentionally set up as a nonprofit. Why? Because research that aims to anticipate risks, rather than just scale up products, doesn’t pay off immediately. That kind of research is underfunded, even though it’s needed more than ever.

“There’s very little future-focused research like this in Germany—or in Europe for that matter. We have few funding programs that support questions that won’t be urgent for another three or four years.” – Sigurd Schacht

For COAI’s work to make a lasting impact, it needs partners who are ready to share responsibility: companies, foundations, funding bodies, and anyone who understands that we need to lay the groundwork today for a future with controllable, transparent, and human-centered AI.

Prof. Dr. Sigurd Schacht is a professor of Applied AI at Ansbach University, where he leads the AI and Digital Transformation program.
Prof. Dr. Carsten Lanquillon teaches speech technology and cognitive assistance systems at Heilbronn University.

Together, they run COAI Research, blending technical expertise with social responsibility to create transparent and human-centered AI.
Contact: Alina Laßen, Working Student, Marketing & Project Management, NUEDIGITAL