MCP Server

Serve your Airweave collection through an MCP server so that clients such as Cursor, Claude, or VS Code can query it directly.

Prerequisites

Before you start, you’ll need:

  • A collection with data: at least one source connection must have completed its initial sync. See the Quickstart if you need to set this up.
  • An API key: Create one in the Airweave dashboard under API Keys.

Set-up

Requirement: Cursor version 0.45.6 or later

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Add this configuration:
Cursor Configuration

```json
{
  "mcpServers": {
    "airweave-search": {
      "command": "npx",
      "args": ["-y", "airweave-mcp-search"],
      "env": {
        "AIRWEAVE_API_KEY": "your-api-key",
        "AIRWEAVE_COLLECTION": "your-collection-id"
      }
    }
  }
}
```

The following environment variables are available:

  • AIRWEAVE_API_KEY (Required): Authenticates the MCP server with the Airweave API so it can run searches on your behalf.
  • AIRWEAVE_COLLECTION (Required): Readable ID of the collection to query. All search calls are scoped to this collection.
  • AIRWEAVE_BASE_URL (Optional): Override if you’re running a self-hosted Airweave instance (default: https://api.airweave.ai).
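
For a self-hosted instance, the configuration is the same with `AIRWEAVE_BASE_URL` added to the `env` block. The URL below is a placeholder for your own deployment:

```json
{
  "mcpServers": {
    "airweave-search": {
      "command": "npx",
      "args": ["-y", "airweave-mcp-search"],
      "env": {
        "AIRWEAVE_API_KEY": "your-api-key",
        "AIRWEAVE_COLLECTION": "your-collection-id",
        "AIRWEAVE_BASE_URL": "https://airweave.example.com"
      }
    }
  }
}
```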

Available tools

The MCP server provides two tools:

  • search: Enhanced search across all source connections in the collection with full parameter control and optional AI completion.
  • get-config: View current configuration and connection status so you can verify everything is working correctly. No parameters required.

Advanced Search Features

The search tool uses the POST endpoint for full control over search parameters.

Available Parameters:

  • retrieval_strategy: Choose between "hybrid", "neural", or "keyword" search (default: "hybrid")
  • expand_query: Generate query variations for better recall (default: true)
  • interpret_filters: Extract filters from natural language (default: false)
  • rerank: LLM-based result reranking for improved relevance (default: true)
  • generate_answer: AI-generated completion from results (default: true)
  • temporal_relevance: Recency weighting from 0.0 to 1.0 (default: 0.3)
  • limit: Maximum number of results (default: 1000)
  • offset: Pagination offset (default: 0)
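
Putting the defaults together, a search request body might look like the following sketch. The field names come from the parameter list above; the `query` field is assumed here for illustration:

```json
{
  "query": "What changed in the billing service last quarter?",
  "retrieval_strategy": "hybrid",
  "expand_query": true,
  "interpret_filters": false,
  "rerank": true,
  "generate_answer": true,
  "temporal_relevance": 0.3,
  "limit": 1000,
  "offset": 0
}
```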

Natural Language Examples:

  • “Use neural search for semantic similarity” → retrieval_strategy: "neural"
  • “Search without expanding the query” → expand_query: false
  • “Extract filters from my query” → interpret_filters: true
  • “Disable reranking for faster results” → rerank: false
  • “Just return results, no AI summary” → generate_answer: false
  • “Prioritize recent documents” → temporal_relevance: 0.8
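
These phrases can be combined in a single request. For example, "use neural search, skip the AI summary, and prioritize recent documents" would map to overrides like the following sketch (the `query` value is a placeholder):

```json
{
  "query": "recent incident reports",
  "retrieval_strategy": "neural",
  "generate_answer": false,
  "temporal_relevance": 0.8
}
```

Any parameter not overridden keeps the default listed above.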