Security Checklist: Making AI-Generated Code Production-Safe


February 7, 2026 · 13 min read

vibe-coding ai security web-development consulting

Your AI-generated app is working. Users are signing up. Data is flowing.

Then someone on Twitter DMs you: “Hey, found a SQL injection vulnerability in your login form. Thought you’d want to know before the bad guys do.”

Your stomach drops. You check the code. The AI generated a raw SQL query with string interpolation. Every user login for the past three months has been vulnerable.

This is the security wake-up call 43% of vibe coders get the hard way.

According to Snyk’s 2025 State of AI Code Security report, AI-generated code contains security vulnerabilities at 3.7x the rate of human-written code. The reason? AI tools are trained to generate functional code, not secure code. They’ll use patterns that work, even if those patterns haven’t been considered safe since 2010.

This isn’t about AI being bad—it’s about AI not understanding threat models, attack vectors, or compliance requirements. It generates code that works in demos but fails catastrophically under adversarial conditions.

If you’re building anything that handles user data, processes payments, or connects to the internet (so… everything), you need a security review before you go live.

Here’s your complete checklist.


The Seven Deadly Vulnerabilities in AI Code

1. SQL Injection: The Classic That Never Dies

The Problem: AI tools love string concatenation. It’s simple, readable, and works perfectly… until someone types ' OR '1'='1 into your login form and gains access to your entire database.

What AI generates:

// VULNERABLE - Never do this
const email = request.body.email;
const query = `SELECT * FROM users WHERE email = '${email}'`;
const user = await db.query(query);

What it should be:

// SAFE - Parameterized queries
const email = request.body.email;
const query = 'SELECT * FROM users WHERE email = $1';
const user = await db.query(query, [email]);
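To see why interpolation is dangerous, you can build the vulnerable query string yourself with the classic payload (a standalone demonstration; no database involved):

```javascript
// Demonstration only: what the attacker's input does to the SQL text.
// This is what someone types into the email field of the login form.
const emailInput = "' OR '1'='1";

// String interpolation splices the payload straight into the query...
const interpolated = `SELECT * FROM users WHERE email = '${emailInput}'`;

console.log(interpolated);
// SELECT * FROM users WHERE email = '' OR '1'='1'
// ...turning the WHERE clause into a tautology that matches every row.
```

With a parameterized query, the same input is passed as a value, never as SQL text, so the quote characters have no structural meaning.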

Real-world impact: In 2025, a vibe-coded EdTech startup exposed 47,000 student records because their AI-generated admin panel used string-interpolated queries. The breach cost them $340,000 in fines and remediation.

How to audit:

# Search for vulnerable patterns
grep -r "WHERE.*\${" src/
grep -r 'WHERE.*".*+' src/
grep -r "query.*template" src/

# Use automated tools
pip install sqlmap  # sqlmap is a Python tool, not an npm package
sqlmap -u "http://localhost:3000/api/users?id=1" --batch

2. Cross-Site Scripting (XSS): Trusting User Input

The Problem: AI-generated frontends often render user input directly without sanitization. Someone posts <script>alert('hacked')</script> as their username, and now that script runs for everyone who views their profile.

What AI generates:

// VULNERABLE - Direct rendering
function UserProfile({ user }) {
  return (
    <div>
      <h1>{user.name}</h1>
      <div dangerouslySetInnerHTML={{ __html: user.bio }} />
    </div>
  );
}

What it should be:

// SAFE - Sanitize user content
import DOMPurify from 'isomorphic-dompurify';

function UserProfile({ user }) {
  return (
    <div>
      <h1>{user.name}</h1>
      <div dangerouslySetInnerHTML={{
        __html: DOMPurify.sanitize(user.bio, {
          ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a'],
          ALLOWED_ATTR: ['href']
        })
      }} />
    </div>
  );
}
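DOMPurify is the right tool when you must allow some HTML (like a bio field). For fields that should only ever be plain text, a minimal escaper is enough and avoids the dependency entirely; a sketch:

```javascript
// Minimal HTML escaper for plain-text fields - a sketch, not a
// substitute for DOMPurify when you need to allow some HTML.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')  // must run first, or it re-escapes the others
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml("<script>alert('hacked')</script>"));
// &lt;script&gt;alert(&#39;hacked&#39;)&lt;/script&gt; - inert text, not markup
```

Note that React already escapes interpolated values like `{user.name}`; the danger is specifically `dangerouslySetInnerHTML` and any rendering done outside React.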

How to audit:

# Find dangerous patterns
grep -r "dangerouslySetInnerHTML" src/
grep -r "innerHTML" src/
grep -r "document.write" src/

# Test with common XSS payloads
curl -X POST http://localhost:3000/api/profile \
  -d '{"bio": "<script>alert(1)</script>"}'

3. Exposed Secrets: The API Key Gold Mine

The Problem: AI code examples often include placeholder API keys, which developers forget to replace or accidentally commit to version control. GitHub’s secret scanning found 3.2 million exposed secrets in 2025—62% in repositories with AI-generated code.

What AI generates:

// VULNERABLE - Hardcoded in code
const STRIPE_SECRET = 'sk_live_51J4KqBL...';
const openai = new OpenAI({ apiKey: 'sk-proj-...' });

// VULNERABLE - Committed to version control
// .env file in git repository
STRIPE_SECRET_KEY=sk_live_51J4KqBL...
DATABASE_URL=postgresql://admin:password123@...

What it should be:

// SAFE - Environment variables loaded at runtime
const STRIPE_SECRET = process.env.STRIPE_SECRET_KEY;
if (!STRIPE_SECRET) {
  throw new Error('STRIPE_SECRET_KEY environment variable not set');
}

// .env file in .gitignore
// .env.example file in git (without real values)
STRIPE_SECRET_KEY=your_stripe_secret_key_here
DATABASE_URL=your_database_url_here
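Beyond moving secrets into the environment, it helps to fail fast at boot when any of them is missing, rather than crashing mid-request. A small helper (hypothetical, not from any particular codebase) might look like:

```javascript
// Validate required environment variables at startup.
// Pass process.env in production; the env parameter is injectable for testing.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  // Return only the requested values, never the whole environment
  return Object.fromEntries(names.map((name) => [name, env[name]]));
}

// Usage at the top of your entry point:
// const { STRIPE_SECRET_KEY, DATABASE_URL } =
//   requireEnv(['STRIPE_SECRET_KEY', 'DATABASE_URL']);
```

Crashing at startup with a clear message is far cheaper than discovering a missing key when the first payment fails.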

How to audit:

# Check git history for secrets
git log -p | grep -i "api[_-]key\|secret\|password\|token"

# Use automated scanning
pip install truffleHog  # truffleHog is a Python tool, not an npm package
truffleHog git file://. --json

# Check current code
grep -r "sk_live" .
grep -r "Bearer [A-Za-z0-9]" .
grep -r "password.*=.*['\"]" .


4. Broken Authentication: The Session Management Disaster

The Problem: AI tools often implement authentication that “works” but has fundamental flaws: sessions that never expire, predictable tokens, no rate limiting on login attempts, password reset links that don’t expire.

What AI generates:

// VULNERABLE - Weak session management
app.post('/login', async (req, res) => {
  const user = await findUser(req.body.email, req.body.password);
  if (user) {
    req.session.userId = user.id; // No expiration
    res.json({ success: true });
  }
});

// VULNERABLE - Predictable tokens
function generateResetToken() {
  return Math.random().toString(36).substring(7); // Not cryptographically secure
}

What it should be:

// SAFE - Secure session management
import { randomBytes } from 'crypto';
import { rateLimit } from 'express-rate-limit';

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts per window
  message: 'Too many login attempts, please try again later'
});

app.post('/login', loginLimiter, async (req, res) => {
  const user = await findUser(req.body.email, req.body.password);

  if (user) {
    const sessionToken = randomBytes(32).toString('hex');
    await sessions.create({
      token: sessionToken,
      userId: user.id,
      expiresAt: new Date(Date.now() + 30 * 60 * 1000) // 30 min
    });

    res.cookie('session', sessionToken, {
      httpOnly: true,  // Prevents XSS
      secure: true,    // HTTPS only
      sameSite: 'strict', // CSRF protection
      maxAge: 30 * 60 * 1000
    });

    res.json({ success: true });
  } else {
    // Don't reveal whether email exists
    res.status(401).json({ error: 'Invalid credentials' });
  }
});

function generateResetToken() {
  return randomBytes(32).toString('hex'); // Cryptographically secure
}

How to audit:

# Check for weak randomness
grep -r "Math.random" src/
grep -r "Date.now()" src/ | grep -i "token\|session"

# Test session behavior
# 1. Login and copy session cookie
# 2. Wait 24 hours
# 3. Try to use old cookie - should fail
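The manual expiry test above can also be captured as a unit check on whatever record your session store returns. A sketch, assuming the `expiresAt` field shape from the login handler earlier:

```javascript
// Returns true only for sessions that exist and have not expired.
// The "now" parameter is injectable so expiry logic is testable.
function isSessionValid(session, now = new Date()) {
  if (!session || !session.expiresAt) return false;
  return session.expiresAt.getTime() > now.getTime();
}

const now = new Date('2026-02-07T12:00:00Z');
const live = { expiresAt: new Date('2026-02-07T12:29:00Z') };
const stale = { expiresAt: new Date('2026-02-06T12:00:00Z') };

console.log(isSessionValid(live, now));  // true
console.log(isSessionValid(stale, now)); // false
console.log(isSessionValid(null, now));  // false - missing session is invalid
```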

5. Insecure Direct Object References (IDOR)

The Problem: AI-generated API routes often trust user-supplied IDs without verifying ownership. Change userId=123 to userId=124 in the URL, and you’re suddenly viewing someone else’s data.

What AI generates:

// VULNERABLE - No ownership check
app.get('/api/invoices/:id', async (req, res) => {
  const invoice = await db.invoices.findById(req.params.id);
  res.json(invoice); // Returns anyone's invoice
});

What it should be:

// SAFE - Verify ownership
app.get('/api/invoices/:id', requireAuth, async (req, res) => {
  const invoice = await db.invoices.findOne({
    id: req.params.id,
    userId: req.user.id // Only return if user owns it
  });

  if (!invoice) {
    return res.status(404).json({ error: 'Invoice not found' });
  }

  res.json(invoice);
});
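The ownership rule generalizes to every resource type: never look up a record by ID alone; always scope the lookup by owner. The pattern can be sketched against an in-memory array (in production this is a WHERE clause, but the logic is identical):

```javascript
// Scoped lookup: a record is only visible to its owner.
function findOwnedResource(records, id, userId) {
  return records.find((r) => r.id === id && r.userId === userId) ?? null;
}

const invoices = [
  { id: 1, userId: 'alice', total: 100 },
  { id: 2, userId: 'bob', total: 250 },
];

console.log(findOwnedResource(invoices, 2, 'bob'));   // bob's own invoice
console.log(findOwnedResource(invoices, 2, 'alice')); // null - IDOR blocked
```

Returning 404 (not 403) for records the user doesn't own also avoids confirming that the resource exists at all.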

How to audit:

# Find routes that use IDs without checks
grep -r "req.params.id" src/ | grep -v "userId"
grep -r "findById" src/ | grep -v "where.*userId"

# Manual testing
# 1. Create two test accounts
# 2. Create resource with account A
# 3. Try to access it from account B using direct ID


6. Missing Rate Limiting: The DDoS Invitation

The Problem: AI code rarely includes rate limiting. Your “contact us” form can be hammered with 10,000 requests per second, your API can be scraped without limits, and your server bills can skyrocket.

What AI generates:

// VULNERABLE - No rate limiting
app.post('/api/contact', async (req, res) => {
  await sendEmail(req.body);
  res.json({ success: true });
});

What it should be:

// SAFE - Rate limiting on all public endpoints
import rateLimit from 'express-rate-limit';

const contactLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 5, // 5 emails per hour per IP
  standardHeaders: true,
  legacyHeaders: false,
  message: 'Too many requests, please try again later'
});

app.post('/api/contact', contactLimiter, async (req, res) => {
  await sendEmail(req.body);
  res.json({ success: true });
});

// API rate limiting
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
  keyGenerator: (req) => req.user?.id || req.ip
});

app.use('/api/', apiLimiter);
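Under the hood, a fixed-window limiter is little more than a counter per key per time window. A stripped-down in-memory sketch (the real middleware adds response headers, pluggable stores, and cleanup):

```javascript
// Minimal fixed-window rate limiter - in-memory sketch only.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // new window starts
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // deny once the window's budget is spent
  };
}

const allow = createLimiter({ windowMs: 60_000, max: 5 });
const results = [];
for (let i = 0; i < 6; i++) results.push(allow('1.2.3.4', 1000));
console.log(results); // [true, true, true, true, true, false]
```

Note this state lives in one process's memory; once you run multiple instances you need a shared store (Redis is the usual choice), which is exactly what express-rate-limit's store option is for.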

How to audit:

# Test without rate limiting
for i in {1..100}; do
  curl -X POST http://localhost:3000/api/contact \
    -H "Content-Type: application/json" \
    -d '{"email":"test@test.com","message":"spam"}' &
done

# Should see rate limit errors after threshold

7. Insecure File Uploads

The Problem: AI generates file upload code that accepts anything, stores files without validation, and serves them directly—creating vectors for malware distribution, XSS, and remote code execution.

What AI generates:

// VULNERABLE - Accepts any file, dangerous storage
app.post('/upload', upload.single('file'), (req, res) => {
  const filename = req.file.originalname;
  fs.writeFileSync(`./uploads/${filename}`, req.file.buffer);
  res.json({ url: `/uploads/${filename}` });
});

What it should be:

// SAFE - Validation, sanitization, secure storage
import { randomUUID } from 'crypto';
import path from 'path';
import fs from 'fs';
import { fileTypeFromBuffer } from 'file-type'; // named export in file-type v17+

const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
const MAX_SIZE = 5 * 1024 * 1024; // 5MB

app.post('/upload', upload.single('file'), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: 'No file provided' });
  }

  // Validate size
  if (req.file.size > MAX_SIZE) {
    return res.status(400).json({ error: 'File too large' });
  }

  // Validate actual file type (not just extension)
  const type = await fileTypeFromBuffer(req.file.buffer);
  if (!type || !ALLOWED_TYPES.includes(type.mime)) {
    return res.status(400).json({ error: 'Invalid file type' });
  }

  // Generate secure random filename
  const safeFilename = `${randomUUID()}.${type.ext}`;
  const filepath = path.join('./uploads', safeFilename);

  // Store outside web root, never execute
  fs.writeFileSync(filepath, req.file.buffer, { mode: 0o644 });

  // Store metadata separately
  await db.files.create({
    userId: req.user.id,
    filename: req.file.originalname, // Original name for display
    storedAs: safeFilename,
    mimeType: type.mime,
    size: req.file.size,
    uploadedAt: new Date()
  });

  // Serve via separate endpoint with proper headers
  res.json({ fileId: safeFilename });
});

// Separate secure download endpoint
app.get('/files/:fileId', requireAuth, async (req, res) => {
  const file = await db.files.findOne({
    storedAs: req.params.fileId,
    userId: req.user.id
  });

  if (!file) return res.status(404).json({ error: 'Not found' });

  res.setHeader('Content-Type', file.mimeType);
  res.setHeader('Content-Disposition', `attachment; filename="${file.filename}"`);
  res.setHeader('X-Content-Type-Options', 'nosniff');

  const filepath = path.join('./uploads', file.storedAs);
  res.sendFile(filepath);
});
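The "validate actual file type" step works by reading magic bytes from the file's contents, not its name. The core idea (what the file-type package does, greatly simplified and covering only two formats here) can be sketched:

```javascript
// Simplified magic-byte sniffing - illustrative only; use the file-type
// package in production, which covers many more formats and edge cases.
function sniffMime(buffer) {
  // PNG files begin with the bytes 89 50 4E 47
  if (buffer.length >= 4 &&
      buffer[0] === 0x89 && buffer[1] === 0x50 &&
      buffer[2] === 0x4e && buffer[3] === 0x47) {
    return 'image/png';
  }
  // JPEG files begin with FF D8 FF
  if (buffer.length >= 3 &&
      buffer[0] === 0xff && buffer[1] === 0xd8 && buffer[2] === 0xff) {
    return 'image/jpeg';
  }
  return null; // unknown - reject
}

const realPngHeader = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
const disguisedScript = Buffer.from('<?php system($_GET["cmd"]); ?>'); // uploaded as "photo.png"

console.log(sniffMime(realPngHeader));   // 'image/png'
console.log(sniffMime(disguisedScript)); // null - rejected despite the .png extension
```

This is why checking `req.file.originalname` or the client-supplied MIME type is never enough: both are attacker-controlled.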


The Complete Security Audit Checklist

Use this checklist before deploying any AI-generated code to production:

Authentication & Authorization

  • Sessions expire and use cryptographically secure tokens (never Math.random)
  • Login, signup, and password-reset endpoints are rate limited
  • Every resource lookup is scoped to the requesting user (no IDOR)
  • Error responses don't reveal whether an account exists

Input Validation

  • All SQL queries are parameterized; no string-interpolated queries anywhere
  • User-supplied HTML is sanitized (or escaped) before rendering
  • File uploads validate size and actual file type, not just the extension

Data Protection

  • Secrets live in environment variables, never hardcoded or in git history
  • .env is in .gitignore; .env.example documents required variables without values
  • Session cookies are httpOnly, secure, and sameSite

API Security

  • Rate limiting on every public endpoint
  • Authentication middleware on every non-public route
  • Security headers set (e.g., via Helmet)

Database Security

  • Credentials are not committed to version control
  • The application connects with a least-privilege account, not an admin user

Error Handling

  • Stack traces and internal error details are never returned to clients
  • Missing or unowned resources return 404 without leaking their existence

Dependencies & Supply Chain

  • npm audit / Snyk runs clean, or known issues are explicitly triaged
  • Dependencies are pinned and updated on a regular schedule

Infrastructure

  • HTTPS everywhere; HTTP redirects to HTTPS
  • Uploaded files are stored outside the web root and never executed

Compliance (if applicable)

  • PII handling reviewed against GDPR/CCPA or other relevant regulations
  • Payment flows meet PCI DSS requirements (or are delegated to a processor like Stripe)

Automated Security Tools

Don’t audit manually—use automated tools to catch 80% of issues:

Dependency Scanning

# JavaScript/Node.js
npm audit
npm audit fix

# Use Snyk for deeper scanning
npm install -g snyk
snyk test

# Python
pip install safety
safety check

# Ruby
gem install bundler-audit
bundle-audit

Static Code Analysis

# JavaScript - ESLint with security rules
npm install --save-dev eslint eslint-plugin-security
# enable the plugin's recommended rules in your ESLint config
# (e.g. extends: ['plugin:security/recommended']), then run:
npx eslint .

# Semgrep - multi-language security scanner
pip install semgrep
semgrep --config=auto .

# Python - Bandit
pip install bandit
bandit -r src/

Secret Scanning

# TruffleHog - find secrets in git history
pip install truffleHog
truffleHog git file://. --json

# GitLeaks - alternative secret scanner
brew install gitleaks
gitleaks detect --source . --verbose

# GitHub's secret scanning (if using GitHub)
# Automatically enabled for public repos

Vulnerability Scanning

# OWASP ZAP - full web app security scanner
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
  -t http://localhost:3000

# SQLMap - SQL injection testing
sqlmap -u "http://localhost:3000/api/user?id=1" --batch

# Nuclei - vulnerability scanner with templates
go install github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
nuclei -u http://localhost:3000

Runtime Protection

# Helmet.js - security headers for Express
npm install helmet
# then in app.js:
#   import helmet from 'helmet';
#   app.use(helmet());

# CSRF protection (note: the csurf package is deprecated and unmaintained;
# consider the csrf-csrf package or a double-submit cookie pattern instead)
npm install csrf-csrf


Security-First Development Workflow

Before You Code

  1. Define threat model: What are you protecting? From whom?
  2. List sensitive data: User info, payments, API keys, PII
  3. Identify attack vectors: Public APIs, file uploads, user input
  4. Set security requirements: Authentication, encryption, compliance

While Coding with AI

  1. Review every AI suggestion: Don’t accept blindly
  2. Ask AI to explain security implications: “Is this SQL query safe from injection?”
  3. Request secure alternatives: “Rewrite this with parameterized queries”
  4. Add validation immediately: Don’t defer input validation

Before Deploying

  1. Run automated scans: npm audit, Semgrep, OWASP ZAP
  2. Manual security review: Use the checklist above
  3. Penetration testing: Try to break your own app
  4. Security staging environment: Test with production-like data

After Deploying

  1. Monitor for attacks: Log analysis, intrusion detection
  2. Regular security updates: Dependencies, OS, frameworks
  3. Incident response plan: What to do when (not if) breached
  4. Bug bounty program: Let ethical hackers help (HackerOne, Bugcrowd)

Real-World Security Incident: The $180K Lesson

Case Study: TaskFlow SaaS Breach (2025)

A solo founder built a project management SaaS using Bolt.new. Launched in September, gained 400 paying customers by December.

The Breach: The AI-generated /api/projects/:id endpoint performed no authentication or ownership check, so anyone could enumerate sequential project IDs and read every customer's project data.

The Damage: Roughly $180,000 in incident response and remediation costs, plus lasting damage to customer trust.

The Fix That Would Have Prevented It:

// Original vulnerable code (AI-generated)
app.get('/api/projects/:id', async (req, res) => {
  const project = await db.projects.findById(req.params.id);
  res.json(project);
});

// Fixed code (15 seconds to add)
app.get('/api/projects/:id', requireAuth, async (req, res) => {
  const project = await db.projects.findOne({
    id: req.params.id,
    userId: req.user.id  // ← This one line would have prevented $180K in damages
  });

  if (!project) return res.status(404).json({ error: 'Not found' });
  res.json(project);
});

Cost of prevention: 15 seconds per endpoint.
Cost of breach: $180,000 + business reputation.

The Bottom Line

Security isn’t a feature you add later—it’s a foundation you build from day one.

AI tools are incredible for velocity, but they don't understand adversarial thinking. They don't know that:

  • a string-interpolated query is an open invitation to injection
  • Math.random() is not a source of secure tokens
  • any ID in a URL will eventually be tampered with
  • every public endpoint will eventually be hammered by bots

You need to be the security layer AI can't provide.

The good news: Most security issues are preventable with checklists and automated tools. You don’t need a security PhD—you need systematic processes.

The question isn’t whether your AI-generated code has vulnerabilities. It’s whether you’ll find them before the bad guys do.

Professional AI Code Security Audit

Our AI Code Security Service includes:

  • Complete vulnerability assessment using automated tools
  • Manual security review by experienced developers
  • Penetration testing of your application
  • Detailed report with severity ratings and fixes
  • Implementation of critical security patches
  • Ongoing monitoring and security updates

Starting at $3,500 for applications up to 10,000 lines.

Emergency security response available 24/7 for active breaches.

Get Security Audit →


Need an urgent security review? Contact our emergency support team for same-day security assessments.
