AI Integration Guide
Learn how to combine MCP tools with AI models for intelligent, autonomous tool usage and natural language interactions.
Overview
OmniMCP's AI integration allows language models to discover and use MCP tools automatically based on natural language queries. This enables building sophisticated AI assistants that can interact with external systems.
Setting Up AI Providers
OpenAI Integration
```typescript
import { MCPClientWithAI } from '@omnimcp/client/providers';

const client = new MCPClientWithAI('ai-assistant', '1.0.0', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.openai.com/v1' // Optional custom endpoint
});

// Connect to MCP server
await client.connect({
  type: 'stdio',
  options: { command: 'mcp-server' }
});

// Let AI handle the request
const response = await client.queryAI(
  "What's the weather in Tokyo?",
  { model: 'gpt-4-turbo-preview' }
);
```
Anthropic Integration
```typescript
const client = new MCPClientWithAI('claude-assistant', '1.0.0', {
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY
});

await client.connect(mcpConfig);

const response = await client.queryAI(
  "Analyze the sales data for last quarter",
  { model: 'claude-3-opus-20240229', maxTokens: 4096 }
);
```
How AI Integration Works
The AI integration follows this workflow:
1. Tool Discovery - AI receives the list of available MCP tools
2. Query Analysis - AI determines which tools to use
3. Tool Execution - AI calls tools with appropriate parameters
4. Response Generation - AI formats results for the user
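To make the workflow concrete, here is a minimal sketch of the four steps with plain TypeScript. The names (`Tool`, `analyze`, `answer`) and the regex-based "model" are illustrative stand-ins, not part of the OmniMCP API:

```typescript
// Hypothetical sketch of the four-step loop; a real model replaces `analyze`.
type Tool = { name: string; run: (args: Record<string, string>) => string };

// 1. Tool Discovery: the client exposes its MCP tools to the model.
const tools: Tool[] = [
  { name: 'get_weather', run: (args) => `Sunny in ${args.city}` },
];

// 2. Query Analysis: a stand-in "model" picks a tool and its arguments.
function analyze(query: string): { tool: string; args: Record<string, string> } {
  const city = query.match(/weather in (\w+)/i)?.[1] ?? 'Tokyo';
  return { tool: 'get_weather', args: { city } };
}

// 3. Tool Execution and 4. Response Generation.
function answer(query: string): string {
  const { tool, args } = analyze(query);
  const result = tools.find((t) => t.name === tool)!.run(args);
  return `Tool ${tool} says: ${result}`;
}

console.log(answer("What's the weather in Tokyo?"));
// → Tool get_weather says: Sunny in Tokyo
```

In a real integration the model, not a regex, decides which tool to call and with what arguments; the client's job is to run steps 1, 3, and 4.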
Advanced AI Patterns
Multi-Step Tool Usage
AI can chain multiple tool calls to complete complex tasks:
```typescript
const response = await client.queryAI(
  "Find all customers in California, check their account status, " +
  "and send renewal reminders to those expiring this month",
  {
    model: 'gpt-4',
    systemPrompt: `You have access to these tools:
- list_customers: Get customers by location
- check_account_status: Check account details
- send_email: Send emails to customers

Break down complex tasks into steps and use tools sequentially.`
  }
);
```
Custom System Prompts
Customize AI behavior with detailed system prompts:
```typescript
const client = new MCPClientWithAI('support-bot', '1.0.0', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  defaultSystemPrompt: `You are a helpful customer support assistant.

Guidelines:
- Be concise and friendly
- Always verify customer identity first
- Escalate complex issues to human agents
- Use available tools to check order status, process refunds, etc.

Available tools will be provided for each query.`
});
```
Streaming Responses
Stream AI responses for real-time updates:
```typescript
const stream = await client.streamAI(
  "Generate a detailed report on system performance",
  {
    model: 'gpt-4',
    onToken: (token) => {
      process.stdout.write(token);
    },
    onToolCall: (tool, args) => {
      console.log(`\n🔧 Calling ${tool} with ${JSON.stringify(args)}\n`);
    }
  }
);

await stream.complete();
```
Tool Selection Strategies
Explicit Tool Hints
Guide AI to use specific tools:
```typescript
const response = await client.queryAI(
  "Check the weather using the weather_api tool",
  {
    toolHints: ['weather_api'], // Prefer these tools
    model: 'gpt-4'
  }
);
```
Tool Filtering
Limit which tools AI can access:
```typescript
const response = await client.queryAI(
  "Help me with my order",
  {
    allowedTools: ['get_order', 'track_shipment'], // Only these tools
    blockedTools: ['delete_order', 'modify_price'], // Never these
    model: 'gpt-4'
  }
);
```
Error Handling in AI Calls
Handle tool failures gracefully:
```typescript
const response = await client.queryAI(
  "Get the latest metrics",
  {
    model: 'gpt-4',
    onError: (error, toolName) => {
      console.error(`Tool ${toolName} failed: ${error.message}`);
      // Return a fallback message to the AI
      return "Tool temporarily unavailable, please try again later";
    },
    maxToolRetries: 2
  }
);
```
Conversation Management
Maintaining Context
```typescript
class AIConversation {
  private history: Message[] = [];

  constructor(private client: MCPClientWithAI) {}

  async query(userMessage: string) {
    // Add user message to history
    this.history.push({ role: 'user', content: userMessage });

    // Query with full context
    const response = await this.client.queryAI(userMessage, {
      messages: this.history,
      model: 'gpt-4'
    });

    // Add AI response to history
    this.history.push({ role: 'assistant', content: response });

    return response;
  }

  clearHistory() {
    this.history = [];
  }
}
```
Tool Usage Logging
```typescript
const response = await client.queryAI(query, {
  model: 'gpt-4',
  onToolUse: (toolCall) => {
    // Log tool usage for analytics
    analytics.track('ai_tool_use', {
      tool: toolCall.name,
      parameters: toolCall.arguments,
      timestamp: new Date(),
      queryId: toolCall.id
    });
  }
});
```
Performance Optimization
Caching AI Responses
```typescript
import { LRUCache } from 'lru-cache';

const aiCache = new LRUCache<string, string>({
  max: 500,
  ttl: 1000 * 60 * 60 // 1 hour
});

async function cachedQueryAI(query: string, options: AIQueryOptions) {
  const cacheKey = `${query}:${JSON.stringify(options)}`;

  if (aiCache.has(cacheKey)) {
    return aiCache.get(cacheKey);
  }

  const response = await client.queryAI(query, options);
  aiCache.set(cacheKey, response);
  return response;
}
```
Parallel Tool Execution
Enable AI to execute independent tools in parallel:
```typescript
const response = await client.queryAI(
  "Get weather for Tokyo, London, and New York",
  {
    model: 'gpt-4',
    parallelTools: true, // Execute independent tools simultaneously
    maxParallelCalls: 3
  }
);
```
Security Considerations
Tool Permission Control
```typescript
const client = new MCPClientWithAI('secure-ai', '1.0.0', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  toolPermissions: {
    // Define permissions per tool
    'delete_user': { requireConfirmation: true },
    'send_email': { rateLimit: 10, rateLimitWindow: 3600 },
    'execute_sql': { blocked: true }
  }
});
```
Input Sanitization
```typescript
const response = await client.queryAI(
  userInput,
  {
    model: 'gpt-4',
    preprocessInput: (input) => {
      // Sanitize user input
      return input
        .replace(/<script>/gi, '')
        .replace(/[\x00-\x1F\x7F]/g, '') // Remove control characters
        .trim();
    },
    maxInputLength: 1000
  }
);
```
Real-World Example
Here's a complete example of an AI-powered customer service bot:
```typescript
import { MCPClientWithAI } from '@omnimcp/client/providers';
import express from 'express';

// Initialize AI-enabled MCP client
const client = new MCPClientWithAI('support-bot', '1.0.0', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY
});

// Connect to customer service MCP server
await client.connect({
  type: 'http',
  options: { url: 'https://api.example.com/mcp' }
});

// Create Express app
const app = express();
app.use(express.json());

// Chat endpoint
app.post('/api/chat', async (req, res) => {
  const { message, sessionId } = req.body;

  try {
    // Query AI with customer message
    const response = await client.queryAI(message, {
      model: 'gpt-4',
      systemPrompt: `You are a helpful customer service agent.
Use available tools to:
- Look up order information
- Check shipping status
- Process returns
- Answer product questions
Always be polite and helpful.`,
      context: { sessionId },
      allowedTools: [
        'get_order',
        'track_shipment',
        'initiate_return',
        'search_products'
      ]
    });

    res.json({ response });
  } catch (error) {
    console.error('AI query failed:', error);
    res.status(500).json({
      error: 'Sorry, I encountered an error. Please try again.'
    });
  }
});

app.listen(3000, () => {
  console.log('AI support bot running on http://localhost:3000');
});
```
Next Steps
- Explore advanced patterns
- Learn about error handling
- See complete API reference