This is a post-mortem on a specific failure mode in AI-powered code audit tooling. It involves tRPC, the EU AI Act, and a word that means two different things depending on context.
tRPC is a TypeScript-first RPC framework. Its transformer components handle data serialisation. Not machine learning. Not neural networks. Data serialisation.
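To make the distinction concrete, here is a minimal sketch of what a tRPC-style transformer actually does. The `DataTransformer` interface is re-declared locally for illustration (the real one ships in `@trpc/server`), and the Date-handling logic is a simplified stand-in for what libraries like superjson provide:

```typescript
// Local re-declaration of the transformer contract for illustration;
// the real interface lives in @trpc/server.
interface DataTransformer {
  serialize(object: unknown): unknown;
  deserialize(object: unknown): unknown;
}

// A minimal transformer that round-trips Date values through JSON-safe
// markers. Pure data-format transformation: no models, no inference.
const dateTransformer: DataTransformer = {
  serialize(obj: unknown): unknown {
    if (obj instanceof Date) return { __type: 'Date', value: obj.toISOString() };
    if (Array.isArray(obj)) return obj.map((v) => dateTransformer.serialize(v));
    if (obj && typeof obj === 'object')
      return Object.fromEntries(
        Object.entries(obj).map(([k, v]) => [k, dateTransformer.serialize(v)]),
      );
    return obj;
  },
  deserialize(obj: unknown): unknown {
    if (obj && typeof obj === 'object' && (obj as any).__type === 'Date')
      return new Date((obj as any).value);
    if (Array.isArray(obj)) return obj.map((v) => dateTransformer.deserialize(v));
    if (obj && typeof obj === 'object')
      return Object.fromEntries(
        Object.entries(obj).map(([k, v]) => [k, dateTransformer.deserialize(v)]),
      );
    return obj;
  },
};
```

That is the entire job description of a "transformer" in this codebase: get rich values across a JSON wire format and back.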
An automated intent audit classified it as EU AI Act High Risk.
Here is exactly what happened.
The classification mechanism
AI-powered audit tools that evaluate codebases against regulatory frameworks need to determine which frameworks apply. For the EU AI Act, the relevant question is whether the system is an AI system and, if so, whether it falls into a high-risk category.
That determination is based on two inputs: the product description the user provides, and any AI or ML components detected in the code.
When the product description contains language that overlaps with EU AI Act high-risk domain vocabulary — health, medical, employment, education, critical infrastructure, transport — the system classifies the codebase as potentially high-risk and evaluates it accordingly.
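A hypothetical sketch of that overlap check makes the failure mode obvious. The domain vocabulary below is an assumption for illustration, not the actual classifier's word list:

```typescript
// Hypothetical high-risk domain vocabulary, loosely modelled on the
// EU AI Act Annex III categories. Illustrative only.
const HIGH_RISK_VOCAB: Record<string, string[]> = {
  'critical infrastructure': ['transport', 'energy', 'water supply'],
  'employment': ['recruitment', 'hiring', 'worker management'],
  'education': ['exam scoring', 'admission'],
};

// Naive substring matching against the product description.
function matchedDomains(description: string): string[] {
  const text = description.toLowerCase();
  return Object.entries(HIGH_RISK_VOCAB)
    .filter(([, words]) => words.some((w) => text.includes(w)))
    .map(([domain]) => domain);
}

matchedDomains('tRPC supports a WebSocket transport layer');
// -> ['critical infrastructure']
```

Substring matching has no way to tell "transport layer" (networking) from "transport" (trains, roads, aviation). The description is technically accurate, and the classification is still wrong.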
The specific overlap
The second tRPC audit used a description that mentioned "transport layer" and "WebSocket transport" — accurate technical descriptions of how tRPC handles communication. The word "transport" overlapped with the critical infrastructure domain in the EU AI Act classifier.
Classification: High Risk. The LLM was then given context that included the High Risk classification and instructed to produce findings about it. Five AI Governance findings appeared in the report, all referencing EU AI Act obligations for high-risk AI systems. A TypeScript serialisation library with no AI components.
The same report stated, separately: "No AI components detected — EU AI Act: Not Applicable."
Both outputs were in the same document.
The fix
The third audit used a description that explicitly disambiguated the transformer components: they are data serialisation utilities, not AI architectures. The codebase contains no machine learning models, no LLM integrations, and no AI decision-making components. The word "transformer" refers to data format transformation (superjson, devalue), not to the attention mechanism in neural networks.
EU AI Act: Not Applicable. ISO 42001: Not Applicable. Genuine findings surfaced.
Three lessons for developers
One: when writing a product description for an AI-powered audit, be explicit about what components are not, not only what they are. Terminology overlap with regulated domain vocabulary is common and non-obvious.
Two: internal contradiction in an AI-generated report is not a minor formatting issue. "No AI components detected" and "EU AI Act High Risk" cannot both be true. If you see a contradiction like this, investigate the description rather than the findings.
Three: the transformer naming collision is a real and growing problem. HuggingFace's transformers library, tRPC's pluggable transformers, and the attention mechanism in ML architectures all use the same word in entirely different contexts. Any audit tool doing keyword-based AI component detection needs to handle this explicitly.
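One way to handle the collision, sketched here as a hypothetical guard rather than any tool's actual implementation: treat "transformer" as an AI signal only when corroborating ML evidence appears in the same source:

```typescript
// Hypothetical corroboration patterns: ML framework imports that would
// justify reading "transformer" as a neural-network component.
const ML_EVIDENCE: RegExp[] = [
  /from\s+transformers\s+import/, // HuggingFace transformers (Python)
  /import\s+torch\b/,             // PyTorch
  /tf\.keras/,                    // TensorFlow
];

// Flag a file as containing an ML transformer only if the keyword is
// backed by at least one corroborating pattern.
function isLikelyMlTransformer(source: string): boolean {
  return /transformer/i.test(source) && ML_EVIDENCE.some((re) => re.test(source));
}

isLikelyMlTransformer('const t = initTRPC.create({ transformer: superjson });');
// -> false: the keyword alone is not evidence
isLikelyMlTransformer('from transformers import AutoModel');
// -> true: keyword plus a known ML import
```

The pattern list is obviously incomplete; the point is the structure. A bare keyword hit should lower confidence, not set a regulatory classification.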
The scores
Audit v1 — 80 Healthy. Original description.
Audit v2 — 47.6 Critical. "transport" triggered High Risk classification.
Audit v3 — 61.5 Needs Attention. Corrected description. Accurate output.
The code did not change across all three audits.