How to Secure Your Codebase When Using AI Coding Assistants
Protect your code from security risks introduced by AI. Covers secret scanning, code review, and safe AI coding practices.
AI coding assistants generate a lot of code fast. That speed creates blind spots. Here’s how security issues slip in and what to do about them.
The Hardcoded Credentials Problem
AI assistants have a troubling habit of hardcoding secrets. Ask one to connect to an API, and you’ll often get something like:
```javascript
const stripe = new Stripe("sk_live_abc123");
```
That’s a real API key embedded in source code. If you commit this, push it, and deploy, that key is exposed. Depending on what it’s for, you might be looking at unauthorized charges, data breaches, or worse.
This isn’t the AI being malicious. It’s just pattern-matching. Plenty of example code includes hardcoded keys, so the AI reproduces that pattern.
Why This Is Different With AI
Developers have always made security mistakes. What’s different with AI is the volume and velocity.
When you’re writing code manually, you’re thinking about each line. You’re more likely to notice “wait, should I hardcode this?” When you’re accepting AI suggestions rapidly, each suggestion gets less scrutiny.
AI also generates confident-looking code. It presents the hardcoded key as if that’s the correct approach, which it often isn’t.
And the sheer amount of AI-generated code means more surface area for mistakes. If one in a hundred AI suggestions has a security issue and you accept a hundred suggestions a day, you're introducing roughly one vulnerability every day.
Always Use Environment Variables
The fix for hardcoded credentials is simple: use environment variables instead.
When AI generates:

```javascript
const apiKey = "sk_live_abc123";
```

Change it to:

```javascript
const apiKey = process.env.STRIPE_API_KEY;
```
Create a .env file (which should be in your .gitignore):

```
STRIPE_API_KEY=sk_live_abc123
```
This keeps secrets out of your codebase. They’re only present on machines that need them.
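One refinement worth adopting: fail fast at startup when a required variable is missing, instead of discovering it at the first API call. A minimal sketch (the `requireEnv` helper and the `STRIPE_API_KEY` name are illustrative, not from any particular library):

```javascript
// Hypothetical helper: read a required variable from the environment,
// throwing immediately at startup if it is missing.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (names are assumptions for this example):
// const stripe = new Stripe(requireEnv("STRIPE_API_KEY"));
```

This turns a silent misconfiguration into an obvious crash during deployment, long before a request hits the broken code path.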
The challenge is catching the AI’s hardcoded suggestions before you commit them.
Scanning for Secrets
Manual review catches some problems, but automated scanning catches more.
Pre-commit hooks can block commits that contain potential secrets:
```shell
pip install detect-secrets
detect-secrets scan --all-files
```
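If you use the pre-commit framework, the detect-secrets repository ships a ready-made hook. A typical configuration looks like the following (the pinned `rev` and the `.secrets.baseline` filename are common conventions — adjust them to your setup, and generate the baseline first with `detect-secrets scan > .secrets.baseline`):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```

The baseline file records known findings so the hook only blocks commits that introduce new potential secrets.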
GitHub has secret scanning built in for public repositories and as a paid feature for private ones.
mrq includes real-time secret scanning that catches exposed credentials as soon as the AI generates them, before they ever get near Git. This is the most immediate protection because it operates at the moment of creation, not the moment of commit.
Beyond Credentials
Credentials are the most obvious security issue, but not the only one.
AI often generates code that's vulnerable to injection attacks. If you see user input interpolated directly into a database query:

```javascript
db.query(`SELECT * FROM users WHERE id = '${userId}'`)
```

That's a SQL injection vulnerability. Use a parameterized query instead:

```javascript
db.query('SELECT * FROM users WHERE id = $1', [userId])
```
AI may also skip input validation entirely. It generates code that works when given expected input but breaks or becomes exploitable when given malicious input.
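A lightweight sketch of the kind of validation AI tends to skip (the `isValidUserId` helper is illustrative, not a standard API): reject unexpected input at the boundary, before it reaches a query or a file path.

```javascript
// Hypothetical validator: accept only positive integer IDs as strings.
// SQL fragments, path traversal strings, and non-string values are all
// rejected before they reach a query or file system operation.
function isValidUserId(input) {
  return typeof input === "string" && /^[1-9][0-9]{0,17}$/.test(input);
}

// isValidUserId("42")                   -> true
// isValidUserId("1; DROP TABLE users")  -> false
// isValidUserId("../../etc/passwd")     -> false
```

Validating at the boundary means the rest of the code can assume well-formed input, which is exactly the assumption AI-generated code makes without checking.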
What to Review
Before accepting AI-generated code, check for:
- Hardcoded secrets like API keys, tokens, and passwords. If you see strings that look like credentials, they probably are.
- Direct string interpolation in database queries. This is almost always a security issue.
- User input being used without validation. If data from a request goes directly into a database or file system operation, that's a red flag.
- Overly permissive settings. CORS allowing any origin, authentication that's easily bypassed, endpoints that don't verify permissions.
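On the CORS point, a minimal sketch of the safer pattern (the allowlist contents and the `corsOriginFor` helper are assumptions for illustration): echo back only origins you explicitly trust, instead of the `Access-Control-Allow-Origin: *` that generated server code often defaults to.

```javascript
// Hypothetical allowlist check: return the CORS header value for a
// trusted origin, or null for everything else.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}

// In a request handler, set the header only when the check passes:
//   const origin = corsOriginFor(req.headers.origin);
//   if (origin) res.setHeader("Access-Control-Allow-Origin", origin);
```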
The Secure Setup
For AI coding that doesn’t compromise security:
- Use mrq for real-time scanning. It catches exposed credentials the moment AI generates them.
- Add a pre-commit hook as a backup layer. Even if you miss something, it won't get committed.
- Review AI-generated code with security in mind. Spend the extra thirty seconds checking for obvious issues.
- Keep secrets in environment variables. Train yourself to change hardcoded credentials to env references immediately.
- Run dependency scans regularly. AI might suggest outdated packages with known vulnerabilities. `npm audit` or `snyk test` catches these.
Security with AI coding isn’t fundamentally different from security without AI. The difference is volume. More code means more potential issues, which means automation becomes more important.
Set up automated scanning so you don’t have to rely on catching every problem manually.
Related Reading
- How to Audit AI-Generated Code for Security Issues - Detailed audit guide
- The Hidden Security Risk of AI Coding Assistants - Why this matters
- Best Practices for AI Pair Programming - Complete safety guide
Written by mrq team