
Stop Grepping, Start Querying: MCP Server for Code-Pathfinder

AI agents like Claude Code, Windsurf Cascade, and OpenAI Codex have taken the world by storm, iteratively using tools to append, edit, and delete code. Having used Claude Code for over a year now, I can confidently say it has been a game changer. It helps me write better code, faster. More importantly, it lets me rapidly prototype multiple approaches, draft RFC specs, identify gaps in my implementation, and apply the Pareto principle to pick the 20% of features that deliver 80% of the value for complex security scan capabilities.

But there's one annoying pattern I keep running into. These tools rely heavily on grep and on reading entire files, often via sub-agents that help with editing or appending code. Repeat this across dozens of files per session, and it quickly burns time and tokens. More importantly, grepping and reading whole files to spot patterns (finding the needle in the haystack) often reduce precision as the context window grows larger, a phenomenon known as context rot. Picture handling multiple Python repos where one acts as an SDK, another as a gRPC server, and a third as a frontend BFF server. Each search and grep becomes costlier and more time consuming.

This is where optimization comes in. I started asking:

  1. What if we could model the whole codebase by indexing it based on attributes like functions, classes, and variables?
  2. What if we could query this index directly from the AI agent?
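To make the first idea concrete, here's a minimal sketch of indexing a module's functions and classes so they can be looked up by name. It uses Python's standard ast module and is purely illustrative, not Code-Pathfinder's actual implementation:

```python
import ast
from collections import defaultdict

def build_symbol_index(source: str, path: str = "<memory>") -> dict:
    """Walk a module's AST and record where each function/class is defined."""
    index = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            index[node.name].append((path, node.lineno))
    return index

# Hypothetical module contents for illustration.
source = """
class PaymentGateway:
    def process_payment(self, amount):
        return charge(amount)

def charge(amount):
    return amount
"""

index = build_symbol_index(source, "payments.py")
print(index["process_payment"])  # [('payments.py', 3)]
```

Once the index exists, answering "where is process_payment defined?" is a dictionary lookup instead of a repo-wide grep, which is the core of the second idea: let the agent query the index directly.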

The Language Server Protocol (LSP) already provides much of this functionality. LSP powers IDEs like VS Code and JetBrains, delivering intelligent code completion, navigation, and refactoring. I've used LSP heavily in the past and found it transformative for code navigation and refactoring. However, in my opinion, it lacks scalability and isn't designed for AI agents that need sub-millisecond response times for a better user experience. LSP does a terrible job with monorepos and large codebases. For instance, method resolution on the sourcegraph/sourcegraph repo (back when it was open source) took 30 to 40 seconds per lookup. While I could have optimized gopls (which powers Go LSP under the hood) to exclude test files and irrelevant files from indexing, that still wouldn't solve the sub-millisecond response time problem.

I've been building Code-Pathfinder, an open-source static code analysis tool for security purposes, parallel to this journey. One thing I found exciting is how blazing fast Pathfinder parses, indexes, resolves definitions, generates call graphs, and infers types. Blazing fast here means monorepo scale: 50,000+ Python files indexed in 8 minutes on Apple M2 Max. After indexing, it takes less than 100 milliseconds to query any function or class (symbols included). Running a security scan on top of these indices, including source-to-sink analysis, takes less than 10 seconds.

This immediately prompted me to build a local MCP server and integrate it with monorepo-like projects. Even more interesting, I can connect multiple instances of Pathfinder to different projects for precise querying and security analysis. Today, I'm open sourcing the MCP server for Code-Pathfinder, which you can run locally, connect to multiple projects, and use for instant code queries and security analysis.

What's Available in the MCP Server?

The Code-Pathfinder MCP server exposes 6 powerful tools that AI agents can use to intelligently query your codebase:

| Tool | Purpose |
| --- | --- |
| `get_index_info` | Get project statistics, indexing status, and build performance metrics |
| `find_symbol` | Locate functions, classes, or methods by name with fuzzy matching support |
| `get_callers` | Find all functions that call a target function (reverse call graph) |
| `get_callees` | List all functions a given function depends on (forward call graph) |
| `get_call_details` | Get granular information about specific calls between two functions |
| `resolve_import` | Map Python import paths to actual file locations |

These tools enable AI agents to navigate your codebase intelligently without reading entire files or running expensive grep operations.
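Under the hood, an MCP client invokes these tools with JSON-RPC `tools/call` requests. Here's a rough sketch of what such a request could look like; the `"function"` argument name is an assumption for illustration, not the tool's documented schema:

```python
import json

# Hypothetical request an MCP client might send to the server.
# "tools/call" is the standard MCP method name; the argument key
# "function" is an illustrative guess, not get_callers' actual schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_callers",
        "arguments": {"function": "process_payment"},
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

Your AI assistant constructs these requests for you; you never write them by hand.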

Getting Started

MCP server support is available from Code-Pathfinder v1.1.6 onwards. Here's how to set it up:

1. Install Code-Pathfinder

Follow the Quickstart Guide for detailed installation instructions. Quick install:

# macOS/Linux via Homebrew
brew install shivasurya/tap/pathfinder

# Or via pip
pip install codepathfinder

2. Configure Your AI Assistant

Add Code-Pathfinder to your AI assistant's MCP configuration file.

For Claude Code (~/.claude.json):

{
  "mcpServers": {
    "code-pathfinder": {
      "command": "pathfinder",
      "args": ["serve", "--project", "/absolute/path/to/your/project"]
    }
  }
}

3. Test It Out

Ask your AI assistant questions like:

  • "Find all functions that call process_payment"
  • "Show me what functions validate_user calls"
  • "Get project statistics"

Your AI agent will drive Code-Pathfinder's MCP tools to query the indexed call graph directly, delivering instant answers without reading files or running grep operations.

Check out the full documentation and setup guide for more configuration options, multiple project setup, and HTTP mode.


Get Involved

Found a bug or have a feature request? We'd love to hear from you: open an issue on the Code-Pathfinder repository.

Try Code Pathfinder Today

Eliminate false positives and find real security vulnerabilities in your code. Get started in minutes with AI-powered SAST.

Free and open source • AGPL-3.0 License
