AI Governance for Dev Teams: Practical, Not Paranoid
Your company needs AI policies. But if those policies kill productivity, nobody follows them. Here's the middle ground.
Somewhere between "move fast and break things" and "submit a form in triplicate before using AI" lies a governance approach that actually works. Most teams are stuck at one extreme or the other. Let's find the middle.
AI governance in software development has a perception problem. Developers hear "governance" and think: bureaucracy, approval workflows, and the death of productivity. Leadership hears "no governance" and thinks: leaked secrets, compliance violations, and lawsuits. Both sides are right about the risks they see — and both are wrong about the solution being binary.
Practical AI governance is about creating guardrails that prevent catastrophic mistakes without slowing down everyday work. Think highway barriers, not speed bumps.
The real risks (and the imagined ones)
Let's separate actual risks from FUD:
Real risks you need to address
Sensitive data in prompts. When a developer pastes production database credentials, customer PII, or proprietary algorithms into an AI tool, that data is sent to an external server. Most AI providers have data handling policies, but "most" and "all" aren't the same word, and policies differ between free and paid tiers.
Intellectual property exposure. Code generated by AI may have unclear IP provenance. If an AI tool was trained on GPL-licensed code and reproduces portions of it in your proprietary codebase, you have a legal exposure — however theoretical it may seem today.
Supply chain risks. AI-suggested dependencies may be malicious, abandoned, or vulnerable. The developer who accepts every AI suggestion without checking the packages is introducing unreviewed third-party code into your build pipeline.
Compliance violations. Regulated industries (healthcare, finance, government) have specific requirements about how code is produced, reviewed, and documented. AI-generated code may trigger audit requirements that your current process doesn't satisfy.
Imagined risks you can stop worrying about
"AI will write backdoors." Current AI tools don't have adversarial intent. They produce bugs, not malware. Your existing code review process handles bugs.
"AI-generated code is inherently less secure." Studies show AI-generated code has similar security profiles to human-written code. The issue isn't AI versus human — it's reviewed versus unreviewed.
"We need to approve every AI interaction." This is governance theater. It creates the appearance of control while actually just slowing people down enough that they stop using the tools (or worse, use them secretly).
A practical governance framework
Here's a framework that works for teams ranging from startups to enterprises. Adapt the specifics — but the structure scales.
Layer 1: Automatic guardrails (enforce, don't ask)
These protections should be built into the tooling and CI/CD pipeline so developers don't need to think about them:
Secret scanning. Configure pre-commit hooks and CI checks to catch credentials, API keys, and tokens in any committed code — AI-generated or not.
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```
Dependency auditing. Run automated security scans on any new dependencies. AI tools add packages enthusiastically — your pipeline should verify them automatically.
```yaml
# In your CI pipeline
- name: Audit dependencies
  run: npm audit --audit-level=high
```
License compliance. Scan for license incompatibilities automatically. Tools like FOSSA or license-checker can flag problematic licenses before they reach production.
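As a sketch of what that automation can look like (assuming an npm project and the `license-checker` CLI; the license list is a placeholder you'd tune to your legal requirements), a CI step can fail the build when a dependency carries a copyleft license:

```yaml
# Hypothetical CI step: fail on copyleft licenses in the dependency tree
- name: Check licenses
  run: npx license-checker --failOn "GPL;AGPL;LGPL"
```

The same shape works with FOSSA or any other scanner; the point is that the check runs on every build, not on request.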
Code formatting and linting. Enforce your standards automatically so AI-generated code meets the same bar as human-written code without manual review of style issues.
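A minimal sketch for an npm project, assuming Prettier and ESLint are already configured (swap in whatever your stack uses):

```yaml
# Hypothetical CI step: reject any formatting or lint violation
- name: Lint and format check
  run: |
    npx prettier --check .
    npx eslint . --max-warnings 0
```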
These guardrails have zero friction. Developers don't fill out forms or wait for approvals. The pipeline catches problems automatically. This is governance that protects without impeding.
Layer 2: Configuration-level policies (guide, don't block)
These are standards that shape AI behavior through configuration rather than process:
Approved AI tools list. Maintain a list of AI tools the team is authorized to use. This isn't about restricting choice — it's about ensuring every tool meets your security requirements (data handling, SOC 2 compliance, data retention policies).
```markdown
## Approved AI Coding Tools
- Claude Code (Team plan — code not used for training)
- GitHub Copilot (Business plan — telemetry disabled)
- Cursor (Team plan — privacy mode enabled)

## Not approved
- Free-tier tools that use input for training
- Tools without clear data retention policies
```
Project AI configuration files. As we've discussed in previous posts, CLAUDE.md and .cursorrules files encode team standards. Include governance-relevant rules:
```markdown
## Security rules
- Never include real credentials, tokens, or PII in code
- Never add dependencies without checking the license
- All API endpoints must include authentication middleware
- SQL queries must use parameterized statements, never string concatenation
```
Data classification guidance. Help developers understand what they can and can't share with AI tools:
| Data type | AI tools? | Example |
| --- | --- | --- |
| Public code | Yes | Open-source libraries, public APIs |
| Internal code | Yes (paid tiers) | Your application code, internal APIs |
| Customer data | Never | Database contents, user emails, PII |
| Credentials | Never | API keys, tokens, passwords |
| Regulated data | Check with compliance | HIPAA, PCI, SOX-related code |

Layer 3: Process-level governance (verify, don't prevent)
For regulated environments or high-stakes code, add verification steps that run in parallel with development rather than blocking it:
AI usage logging. For compliance-sensitive projects, maintain a log of AI tool usage — not every keystroke, but which tools were used on which components. This provides an audit trail without creating friction.
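The mechanism is up to you; one low-friction option (an assumed convention, not a feature of any AI tool) is a Git commit trailer such as `AI-Tool: Claude Code`, added with `git commit --trailer`, which CI can later collect into an audit artifact:

```yaml
# Hypothetical CI step: collect AI-Tool commit trailers into an audit log
- name: Collect AI usage log
  run: |
    git log --since=1.week \
      --format='%h %an %(trailers:key=AI-Tool,valueonly)' > ai-usage.log
```

Because trailers live in commit messages, the log survives rebases and requires no extra tooling on developer machines.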
Enhanced review for critical paths. Code that handles authentication, payment processing, or regulated data gets an extra review pass — regardless of whether it was AI-generated. This isn't AI-specific governance; it's critical-path governance that becomes more important when AI accelerates development.
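On GitHub or GitLab, this routing can be encoded mechanically with a CODEOWNERS file; the paths and team name below are placeholders for your own critical components:

```
# .github/CODEOWNERS — hypothetical critical-path routing
# Changes under these paths require review from the security team,
# whether the code was written by a human or an AI tool.
/src/auth/**       @your-org/security-reviewers
/src/payments/**   @your-org/security-reviewers
```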
Quarterly governance review. Every quarter, review your AI policies. Are they still relevant? Has the tool landscape changed? Are developers following the policies or working around them? Governance that doesn't evolve becomes governance that gets ignored.
The compliance conversation
If you're in a regulated industry, you'll need to have specific conversations with your compliance team. Frame them productively:
Don't say: "We want to use AI to write code. Is that okay?"
Do say: "We want to use AI coding tools with these specific controls: [list your guardrails]. Here's how we'll document AI usage for audit purposes. Here's how our review process ensures AI-generated code meets the same standards as human-written code. What additional controls do you need?"
The second framing shows you've thought about governance proactively. Compliance teams are far more receptive to teams that arrive with a plan than teams that arrive with a request.
Governance as enablement
The best AI governance isn't about control — it's about confidence. When developers know the guardrails are in place, they use AI tools more boldly. When leadership knows the risks are managed, they support broader AI adoption. When compliance sees proper controls, they approve faster.
Build your guardrails, encode them in automation, and get back to shipping. That's practical governance.