AI Coding Integration
Integrate DryRun Security with AI coding tools, IDEs, and AI agents.
AI Coding Tool Integrations
Security in Your Editor
The earlier in the development process a vulnerability is caught, the cheaper it is to fix. DryRun Security's IDE integration brings security analysis into the development environment itself - the place where developers spend most of their time writing and reviewing code.
Rather than waiting for a PR to be opened to receive security feedback, developers with the IDE integration can get security context inline as they work - understanding the security implications of the code they're writing and the codebase they're modifying without leaving their editor.
AI Coding Integrations
DryRun Security integrates with the most popular AI coding tools. Each integration is available from the Settings > Integrations page in the DryRun Security dashboard. Every AI coding integration provides two connection options:
- Connect - Connects the DryRun Insights MCP to the tool, giving its AI assistant access to your organization's security data for context-aware code analysis
- Add Skill - Installs the DryRun remediation skill/plugin, enabling the tool to discover and fix security findings directly
Supported AI Coding Tools
| Tool | Description |
|---|---|
| Cursor | Connect DryRun Insights MCP to Cursor IDE for AI-powered code analysis |
| Codex | Integrate DryRun Insights MCP with OpenAI Codex for enhanced code review |
| Claude Code | Use DryRun Insights MCP with Claude Code for security-aware coding assistance |
| Windsurf | Integrate DryRun Insights MCP with Windsurf IDE for AI-assisted code review |
| VS Code | Connect DryRun Insights MCP to Visual Studio Code for AI-powered security analysis |
Connecting an AI Coding Tool
Clicking Connect on a tool card provides the setup command or configuration for that tool. For example, connecting Claude Code provides this command:
claude mcp add --transport http dryrun-security https://insights-mcp.dryrun.security/api/insights/mcp --header "Authorization: Bearer <dryrunsec_token>"
Replace <dryrunsec_token> with your API token from Settings > Access Keys. See API Usage Guide for how to generate an access key.
Adding the Remediation Skill
Clicking Add Skill installs the DryRun remediation plugin into your coding tool. For example, in Claude Code:
/plugin marketplace add DryRunSecurity/external-plugin-marketplace
/plugin install dryrun-remediation@dryrunsecurity
Once the skill is installed, the AI assistant can discover security findings from DryRun Security and generate fixes directly within your coding session.
Desktop Integrations
For desktop AI applications that support MCP, DryRun Security offers dedicated integration cards:
- Claude Desktop - Connect the DryRun Insights MCP to Claude Desktop for security-aware conversations about your codebase
See MCP Integration for detailed configuration instructions for all supported clients.
AI-Native IDE Workflows
For teams using AI coding assistants, the DryRun Security integration is particularly valuable. It allows the AI assistant to query DryRun Security's security intelligence as part of code generation - helping AI assistants write more secure code by understanding what vulnerabilities have been found in the codebase and what security patterns are in use.
This is especially relevant as teams adopt AGENTS.md to guide AI coding agents. See AGENTS.md for how to configure security guidelines that AI agents and DryRun Security both use.
AI Tool Integrations
AI-Generated Code Coverage
DryRun Security reviews all code in every pull request, regardless of whether it was written by a human or generated by an AI coding tool. No special configuration or setup is needed - if the code reaches a PR, DryRun analyzes it with the same Contextual Security Analysis applied to all changes.
This is important because AI coding assistants are generating an increasing share of production code, and AI-generated code carries its own patterns of security risk.
Compatible AI Coding Tools
DryRun Security works with any tool that produces code submitted through a pull request or merge request:
- GitHub Copilot - inline code suggestions and chat-based generation
- Cursor - AI-native code editor with multi-file generation
- Windsurf - AI coding assistant
- OpenAI Codex - code generation API and CLI
- Claude Code - Anthropic's coding assistant
- Amazon CodeWhisperer - AWS coding companion
- Any other tool that generates code committed to a Git repository
Because DryRun operates at the SCM level (analyzing PRs), compatibility with new AI tools is automatic. There is no integration required on the AI tool side.
Common AI-Generated Code Risks
AI coding tools tend to produce specific patterns of security issues that DryRun's analyzers are particularly effective at catching:
- Missing input validation - AI-generated endpoints that accept and use user input without sanitization
- Hardcoded credentials - example API keys and tokens that should have been replaced with environment variables
- Incomplete authorization - CRUD operations generated without access control checks
- Outdated patterns - AI models trained on older code that uses deprecated or insecure APIs
- Copy-paste vulnerabilities - code generated from training data that contains known vulnerability patterns
Visibility into AI-Generated Changes
DryRun Security's AI Coding Visibility feature provides observability into how AI tools are being used across your codebase - which repositories have the most AI-generated code, what types of changes are being made, and where security findings correlate with AI-generated contributions.
MCP for Agentic Workflows
For teams using AI coding agents that operate autonomously (creating PRs, making multi-file changes), DryRun Security's MCP integration enables the agent to query security status, check findings, and respond to security feedback programmatically. This creates a closed loop where AI agents can fix their own security issues before a human reviews the PR.
Related Pages
- Securing AI-Generated Code - DryRun's approach to AI code security
- AI Coding Visibility - observability into AI-generated changes
- Malicious Agent Detection - detecting adversarial AI behavior
- MCP Integration - programmatic access for AI agents
Securing AI-Generated Code
The AI Code Security Challenge
AI coding assistants - GitHub Copilot, Cursor, Claude Code, and similar tools - have dramatically changed how software is written. Developers using these tools can produce working code faster than ever before. But AI-generated code introduces a new and underappreciated security challenge: AI models can produce code that is functionally correct and syntactically sound while containing security vulnerabilities that the developer who accepted the suggestion didn't introduce and may not recognize.
Traditional code review processes assume the developer is responsible for the code they write. AI-generated code muddies this: the developer accepted a suggestion but didn't reason through every security implication of the code that was generated. The responsibility is shared - and the security tooling needs to account for this new dynamic.
How DryRun Security Handles AI-Generated Code
DryRun Security applies additional analytical scrutiny to code that exhibits characteristics of AI generation. This isn't about penalizing AI-assisted development - it's about recognizing that AI-generated code patterns, particularly around security-sensitive operations, warrant extra care in review.
AI coding assistants sometimes:
- Generate code that uses deprecated or insecure API patterns that were common in their training data
- Produce authentication and authorization logic that is structurally plausible but subtly flawed
- Include hardcoded credentials or placeholder values that developers inadvertently ship
- Generate SQL queries or shell commands that are vulnerable to injection in the specific context of the application
DryRun Security's contextual analysis is particularly effective at catching these issues because it evaluates AI-generated code in the same way it evaluates human-written code: with full understanding of the surrounding context, data flows, and security-relevant patterns.
Organizational Visibility
Beyond per-PR security analysis, DryRun Security provides visibility into AI coding activity across your organization - tracking where AI-generated code is being introduced and what security implications it carries. See AI Coding Visibility for details.
AI Coding Visibility
Understanding AI in Your Codebase
When AI coding assistants are widely adopted across an engineering organization, a natural question emerges: how much of our codebase was written by AI, and does that matter for security? The answer to the second question is increasingly yes - and answering the first requires dedicated tooling.
DryRun Security's AI Coding Visibility capability gives security teams and engineering leadership an organizational view of AI coding activity: where AI-generated code is being introduced, at what rate, in which repositories and by which teams, and what the security characteristics of that code are.
What AI Coding Visibility Tracks
AI Coding Visibility provides insight across several dimensions:
- AI code volume - What percentage of new code being committed exhibits characteristics of AI generation? How is this changing over time as AI adoption grows or changes in your organization?
- Distribution across repositories - Are some teams or projects using AI coding assistants more than others? Are security findings concentrated in AI-heavy repositories?
- Finding rates by code origin - Do AI-generated code sections have systematically different security finding rates compared to human-written code? Understanding this helps calibrate review processes and training investments.
- Agent activity patterns - In environments using autonomous AI coding agents (not just suggestion-based assistants), visibility into what the agents are doing, what files they're modifying, and what patterns emerge in their changes.
Security Implications for Security Teams
This visibility serves several practical security use cases:
- Risk concentration - Identify whether certain areas of the codebase or certain development patterns are producing disproportionate security risk from AI-generated code.
- Audit trail - For regulated industries, maintaining a record of AI involvement in code production is increasingly an audit requirement.
- Supply chain transparency - AI-BOM generation (see SBOM Generation) provides a formal record of AI involvement in software production for compliance purposes.
- Policy enforcement - Custom Code Policies can be configured specifically for AI-generated code sections, enforcing stricter review criteria where AI involvement is detected.
Malicious Agent Detection
The Malicious Agent Threat
As AI coding agents become more capable and more autonomous, they introduce a novel threat vector: an AI agent that has been compromised, manipulated via prompt injection, or is operating outside its intended parameters can introduce malicious code directly into a codebase. Unlike a human developer inserting malicious code, a compromised AI agent can do so at scale, across multiple repositories, in ways that may be difficult to distinguish from legitimate AI-assisted development.
This is not a theoretical concern. Prompt injection attacks against coding agents have been demonstrated in research settings, and as AI agents gain broader permissions in development environments, the potential impact of such attacks grows.
What DryRun Security Detects
DryRun Security's malicious agent detection capability is designed to identify code changes that exhibit patterns consistent with malicious intent, regardless of whether they originate from a human or an AI agent:
- Backdoor patterns - Code that creates covert access mechanisms, such as hardcoded credential bypass paths, undocumented administrative endpoints, or logic that behaves differently based on hidden trigger conditions.
- Data exfiltration patterns - Code that transmits data to unexpected external endpoints or stores data in ways inconsistent with the application's intended behavior.
- Permission escalation - Changes that expand the permissions available to the application beyond what its function requires.
- Obfuscated logic - Code structured to obscure its intent - unusual encoding, unnecessarily complex indirection, or logic that accomplishes a simple operation through unnecessarily convoluted means.
Behavioral Context
Malicious agent detection is strengthened by DryRun Security's git behavioral analysis capability. Code changes arriving through unusual patterns - outside normal working hours, from unexpected contributors, making atypical modifications to security-sensitive files - are evaluated with elevated scrutiny. Behavioral anomalies don't trigger automatic findings, but they raise the signal strength of other analysis.
Defense in Depth
Malicious agent detection is one layer in a defense-in-depth approach to AI coding security. Combined with Custom Code Policies that enforce organizational coding standards, the Secrets Analyzer detecting credential introduction, and the code security knowledge graph tracking behavioral patterns over time, DryRun Security provides comprehensive coverage against AI-specific security risks in the development pipeline.
AI Red Teaming
The AI Development Threat Landscape
AI-assisted development introduces new categories of security risk that traditional tools are not designed to detect. When AI agents write code, review code, or interact with development infrastructure, they create attack surfaces that adversaries can exploit through prompt injection, supply chain manipulation, and behavioral subversion.
AI-Specific Attack Vectors
DryRun Security's AI Agent Security capabilities address several categories of threats:
- Prompt injection via code - malicious instructions embedded in code comments, documentation, or dependency files that manipulate AI coding assistants into generating insecure code
- Malicious agent skills - AI agents with tool access (file system, network, shell) that can be manipulated into performing unintended actions. See Malicious Agent Detection for details
- Training data poisoning - AI models generating code patterns derived from intentionally vulnerable training examples
- Supply chain attacks via AI - adversaries using AI-generated PRs to introduce subtle backdoors that pass human review
Behavioral Analysis
DryRun Security applies Git Behavioral Analysis to detect anomalous patterns in AI-generated contributions. This includes:
- Unusual commit patterns - timing, frequency, or volume that deviates from established baselines
- Code style anomalies - changes that do not match the repository's established patterns
- Scope creep - AI-generated changes that modify files or systems outside the stated scope of a task
- Privilege escalation attempts - changes to authorization, permissions, or access control that were not part of the original request
Continuous Monitoring
Rather than point-in-time assessments, DryRun Security provides continuous monitoring of AI-assisted development activity. Every PR - whether authored by a human, an AI assistant, or an autonomous agent - receives the same depth of security analysis. This means adversarial patterns are detected at the moment they appear, not during a periodic review.
Threat Modeling Support
DryRun Security's intelligence index capabilities support threat modeling exercises by answering questions like:
- "Which repositories have the most AI-generated code changes this month?"
- "What new API endpoints were introduced by AI-generated PRs?"
- "Show findings correlated with AI-generated commits across all repos"
This data helps security teams prioritize review efforts and identify repositories where AI-generated code may need additional scrutiny.
Related Pages
- Malicious Agent Detection - detecting adversarial AI agent behavior
- Git Behavioral Analysis - anomaly detection in commit patterns
- AI Coding Visibility - observability into AI-generated changes
- Securing AI-Generated Code - security analysis for AI-written code