AI Detection
SnapBack monitors your coding session in real time to detect AI tool activity and analyze code risks. This dual approach triggers protection when AI makes substantial changes or introduces common risks.
How It Works
AI Tool Detection
SnapBack detects when AI coding assistants are active and tracks their contribution to your code:
Tool Detection
Identifies which AI tools are running: Cursor, GitHub Copilot, Claude Code, Windsurf, and others.
~89% accuracy
Contribution Tracking
Measures how much of your recent work came from AI suggestions vs. manual typing.
Adapts over time
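To make the idea of contribution tracking concrete, here is a hypothetical sketch (not SnapBack's actual algorithm): a simple heuristic that treats large single-event insertions as likely AI completions, since manual typing arrives keystroke by keystroke. The function names and the 50-character threshold are illustrative assumptions.

```javascript
// Hypothetical heuristic: large multi-line insertions arriving in a single
// edit event are more likely AI completions than manual typing.
function classifyEdit(event) {
  const chars = event.text.length;
  const lines = event.text.split("\n").length;
  // Manual typing rarely inserts more than a few dozen characters at once.
  if (chars > 50 || lines > 2) return "ai-suggestion";
  return "manual";
}

// Fraction of recently inserted characters attributed to AI suggestions.
function aiContribution(events) {
  let aiChars = 0;
  let total = 0;
  for (const e of events) {
    total += e.text.length;
    if (classifyEdit(e) === "ai-suggestion") aiChars += e.text.length;
  }
  return total === 0 ? 0 : aiChars / total;
}
```

A real tracker would also weight recency and adapt its thresholds over time, as noted above.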
Guardian AI: Code Analysis
Separately, Guardian AI scans your code for risks that AI tools sometimes introduce:
Secrets Detection
Identifies API keys, JWT tokens, database strings, and other sensitive information.
Mocks Detection
Catches test artifacts and mock data that accidentally make it into production code.
Phantom Dependencies
Flags modules that are imported in your code but missing from package.json.
Two Systems Working Together:
AI tool detection tells SnapBack when to pay attention. Guardian AI tells it what risks to look for. Together, they decide when to protect your work.
Real-Time Analysis
Guardian AI performs real-time analysis on every save, providing immediate feedback on secrets, mocks, and phantom dependencies.
Detection Speed
Both AI tool detection and Guardian AI operate in real-time:
- AI Tool Detection: Instant (passive monitoring)
- Guardian AI Scan: <50ms average
- Complex Pattern Analysis: <200ms
What Gets Detected
Secrets
Guardian AI identifies various types of secrets that should never be committed:
```javascript
// ❌ Detected as secret risk
const config = {
  apiKey: "sk-abcdefghijklmnopqrstuvwxyz1234567890", // API key
  dbPassword: "supersecretpassword123", // database password
  jwtSecret: "my-jwt-secret-key", // JWT secret
  awsAccessKey: "AKIAIOSFODNN7EXAMPLE", // AWS access key
};
```
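To illustrate how pattern-based secret detection works in general (this is a minimal sketch, not SnapBack's actual scanner; the pattern names and the pattern list are assumptions), a scanner can match known key formats against the source text:

```javascript
// Illustrative patterns only; a production scanner combines many more
// patterns with entropy checks and context analysis.
const SECRET_PATTERNS = [
  { type: "generic-api-key", regex: /\bsk-[A-Za-z0-9]{20,}/ },
  { type: "aws-access-key", regex: /\bAKIA[0-9A-Z]{16}\b/ },
  { type: "jwt", regex: /\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/ },
];

function scanForSecrets(source) {
  const findings = [];
  for (const { type, regex } of SECRET_PATTERNS) {
    const match = source.match(regex);
    if (match) findings.push({ type, value: match[0] });
  }
  return findings;
}
```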
Mocks
AI assistants sometimes generate test artifacts that shouldn’t be in production code:
```javascript
// ❌ Detected as mock risk
const userData = {
  id: 1,
  name: "John Doe",
  email: "john.doe@example.com", // realistic but fake data
  role: "admin"
};

// This looks like mock data that shouldn't be in production
function processPayment() {
  console.log("Processing payment of $100.00"); // mock payment
  return { success: true, transactionId: "txn_1234567890" }; // fake transaction
}
```
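As a rough sketch of how mock detection can work (a real analyzer would use AST context and confidence scoring; the signal list here is an illustrative assumption), common "generated fixture" literals can be flagged with simple signals:

```javascript
// Hypothetical signals that often indicate generated mock data.
const MOCK_SIGNALS = [
  /John Doe|Jane Doe/,       // placeholder names
  /@example\.(com|org)/,     // reserved example domains
  /\btxn_\d+\b/,             // fake transaction IDs
  /lorem ipsum/i,            // filler text
];

function looksLikeMock(source) {
  return MOCK_SIGNALS.some((rx) => rx.test(source));
}
```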
Phantom Dependencies
Modules that are used in code but not declared in package.json:
```javascript
// ❌ Detected as phantom dependency risk
import { format } from 'date-fns'; // used but not in package.json
import lodash from 'lodash'; // used but not declared

export function formatDate(date) {
  return format(date, 'yyyy-MM-dd');
}

export function deepClone(obj) {
  return lodash.cloneDeep(obj);
}
```
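Conceptually, phantom-dependency detection diffs imported module specifiers against declared dependencies. The sketch below shows the idea under simplifying assumptions (regex-based import extraction instead of a real parser; side-effect and dynamic imports are ignored):

```javascript
// Sketch: find bare module specifiers imported in source but absent from
// package.json's dependencies and devDependencies.
function findPhantomDeps(source, packageJson) {
  const declared = new Set([
    ...Object.keys(packageJson.dependencies || {}),
    ...Object.keys(packageJson.devDependencies || {}),
  ]);
  const importRe = /import\s+.+?\s+from\s+['"]([^'"]+)['"]/g;
  const phantoms = new Set();
  for (const m of source.matchAll(importRe)) {
    const spec = m[1];
    if (spec.startsWith(".") || spec.startsWith("/")) continue; // relative path
    // Scoped packages keep two path segments; others keep one.
    const pkg = spec.startsWith("@")
      ? spec.split("/").slice(0, 2).join("/")
      : spec.split("/")[0];
    if (!declared.has(pkg)) phantoms.add(pkg);
  }
  return [...phantoms];
}
```

Running it on the example above with a package.json that declares only `lodash` would report `date-fns` as a phantom dependency.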
False Positive Handling
We understand that not every detection is a real risk. Guardian AI is designed to minimize false positives while maintaining high accuracy:
💡 False Positive Management: If Guardian AI flags something that isn’t actually a risk, you can:
- Add it to your ignore list in .snapbackignore
- Adjust sensitivity levels for specific file patterns
- Provide feedback through the VS Code extension
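As a sketch of what an ignore list might look like (assuming gitignore-style glob syntax, which is an assumption about the file format), using the same patterns shown in the configuration below:

```
# .snapbackignore (hypothetical example)
**/*.test.js
**/mocks/**
```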
Configuration
You can customize Guardian AI’s behavior to match your project’s needs:
Sensitivity Levels
```jsonc
// .snapbackconfig
{
  "aiDetection": {
    "sensitivity": "high", // "low", "medium", or "high"
    "ignorePatterns": [
      "**/*.test.js",
      "**/mocks/**"
    ],
    "customRules": [
      {
        "pattern": "API_KEY",
        "type": "secret",
        "severity": "high"
      }
    ]
  }
}
```
VS Code Integration
In the VS Code extension, you can:
- View real-time detection results in the Problems panel
- See inline decorations for detected risks
- Configure detection settings through the UI
- Provide feedback on detections
Accuracy Metrics
Our continuous testing shows Guardian AI maintains high accuracy across different codebases:
Precision
The proportion of detections that are actual risks requiring attention
Recall
The proportion of actual risks that are successfully detected
Best Practices
🔍 Review Detections Promptly
Don’t ignore Guardian AI warnings. Take time to understand what’s being flagged and why.
⚙️ Customize for Your Project
Adjust sensitivity levels and ignore patterns based on your specific project needs.
📚 Educate Your Team
Make sure all team members understand what Guardian AI detects and how to respond.
🔄 Provide Feedback
Help us improve by reporting false positives or missed detections.