Error Handling
OmniMCP provides a robust error handling system with helpful error messages, automatic retries, and clear recovery strategies.
Error Types
All OmniMCP errors extend the base MCPError class and provide rich context for debugging and recovery:
Base Error Class
```typescript
class MCPError extends Error {
  errorCode: string;         // Unique error identifier
  context: ErrorContext;     // Additional error context
  retryable: boolean;        // Whether retry might succeed

  getUserMessage(): string;  // User-friendly message
  getDebugInfo(): object;    // Detailed debug information
}
```
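Because every error shares this base class, a single instanceof check can handle any OmniMCP failure generically. A minimal sketch, assuming the error classes are exported from @omnimcp/core alongside the retry helper shown later; the tool name and arguments are placeholders:

```typescript
import { MCPError } from '@omnimcp/core'; // assumed export path

try {
  await client.tools.call({ name: 'get_weather', arguments: { city: 'Berlin' } });
} catch (error) {
  if (error instanceof MCPError) {
    // Show the friendly message to users, keep full details for logs
    console.error(error.getUserMessage());
    console.debug(error.getDebugInfo());

    if (error.retryable) {
      console.info('This operation may succeed on retry');
    }
  } else {
    throw error; // Not an OmniMCP error, rethrow it
  }
}
```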
Common Error Types
ConnectionError
Thrown when the connection to an MCP server fails.
```typescript
try {
  await client.connect(config);
} catch (error) {
  if (error instanceof ConnectionError) {
    console.error('Connection failed:', error.getUserMessage());
    // Error codes: CONNECTION_REFUSED, CONNECTION_TIMEOUT, etc.
  }
}
```
TimeoutError
Operation exceeded the configured timeout.
```typescript
try {
  await client.tools.call({ name: 'slow_operation' });
} catch (error) {
  if (error instanceof TimeoutError) {
    console.error('Operation timed out after', error.context.timeout, 'ms');
  }
}
```
ValidationError
Invalid parameters or schema validation failure.
```typescript
try {
  await client.tools.call({
    name: 'get_weather',
    arguments: { invalid: 'params' }
  });
} catch (error) {
  if (error instanceof ValidationError) {
    console.error('Validation failed:', error.context.validationErrors);
  }
}
```
AuthenticationError
Authentication or authorization failures.
```typescript
try {
  await client.connect({
    type: 'http',
    options: { url: 'https://api.example.com' }
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Auth failed:', error.errorCode);
    // Codes: INVALID_TOKEN, EXPIRED_TOKEN, INSUFFICIENT_PERMISSIONS
  }
}
```
RateLimitError
Rate limit exceeded with retry information.
```typescript
try {
  await client.tools.call({ name: 'api_operation' });
} catch (error) {
  if (error instanceof RateLimitError) {
    const retryAfter = error.context.retryAfter; // seconds
    console.log(`Rate limited. Retry after ${retryAfter}s`);
  }
}
```
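One way to respect that hint is to wait for the reported interval before a single follow-up attempt. A sketch, assuming retryAfter is reported in seconds as above; the sleep helper and wrapper function are local code, not part of the SDK:

```typescript
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRateLimitBackoff(name: string, args: object) {
  try {
    return await client.tools.call({ name, arguments: args });
  } catch (error) {
    if (error instanceof RateLimitError) {
      await sleep(error.context.retryAfter * 1000); // retryAfter is in seconds
      return client.tools.call({ name, arguments: args }); // one follow-up attempt
    }
    throw error;
  }
}
```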
Error Context
Every error includes rich context for debugging:
```typescript
interface ErrorContext {
  timestamp: Date;      // When error occurred
  operation: string;    // What operation failed
  transport: string;    // Which transport was used
  serverInfo?: object;  // Server information if available
  request?: object;     // Request that caused error
  response?: object;    // Response if available
  retryCount?: number;  // Number of retry attempts
  [key: string]: any;   // Additional context
}
```
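Since the context travels with the error, handlers can branch on it or fold it into their own log lines. A small sketch using only the fields defined above (the helper itself is illustrative, not part of the SDK):

```typescript
import { MCPError } from '@omnimcp/core'; // assumed export path

function describeFailure(error: MCPError): string {
  const { operation, transport, timestamp, retryCount } = error.context;
  // e.g. "tools/call failed over http at 2024-01-01T12:00:00.000Z after 2 retries"
  return `${operation} failed over ${transport} at ${timestamp.toISOString()}` +
    (retryCount ? ` after ${retryCount} retries` : '');
}
```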
Retry Strategies
OmniMCP includes built-in retry functionality with exponential backoff:
Basic Retry
```typescript
import { retry } from '@omnimcp/core';

const result = await retry(
  () => client.connect(config),
  {
    maxAttempts: 3,
    initialDelay: 1000,
    maxDelay: 10000,
    backoffFactor: 2
  }
);
```
Custom Retry Logic
```typescript
const result = await retry(
  () => client.tools.call({ name: 'flaky_operation' }),
  {
    maxAttempts: 5,
    shouldRetry: (error) => {
      // Only retry on specific errors
      return error.retryable && error.errorCode !== 'INVALID_PARAMETERS';
    },
    onRetry: (attempt, error) => {
      console.log(`Retry attempt ${attempt}: ${error.message}`);
    }
  }
);
```
Error Recovery Patterns
Graceful Degradation
```typescript
async function getWeatherWithFallback(city: string) {
  try {
    // Try primary MCP server
    return await primaryClient.tools.call({
      name: 'get_weather',
      arguments: { city }
    });
  } catch (error) {
    console.warn('Primary failed, trying fallback:', error.getUserMessage());

    try {
      // Fall back to secondary server
      return await secondaryClient.tools.call({
        name: 'weather_backup',
        arguments: { location: city }
      });
    } catch (fallbackError) {
      // Return cached or default data
      return getCachedWeather(city) || {
        temperature: 'Unknown',
        conditions: 'Data unavailable'
      };
    }
  }
}
```
Circuit Breaker Pattern
```typescript
class CircuitBreaker {
  private failures = 0;
  private lastFailure?: Date;
  private state: 'closed' | 'open' | 'half-open' = 'closed';

  async call<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (Date.now() - this.lastFailure!.getTime() > 60000) {
        this.state = 'half-open';
      } else {
        throw new Error('Circuit breaker is open');
      }
    }

    try {
      const result = await operation();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess() {
    this.failures = 0;
    this.state = 'closed';
  }

  private onFailure() {
    this.failures++;
    this.lastFailure = new Date();
    if (this.failures >= 5) {
      this.state = 'open';
    }
  }
}
```
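Routing calls through a shared breaker instance then looks like the sketch below; the tool name and wrapper function are placeholders. While the breaker is open, calls fail immediately instead of piling load onto a struggling server:

```typescript
const breaker = new CircuitBreaker();

async function getWeather(city: string) {
  // Fails fast while the breaker is open; resumes after the cool-down window
  return breaker.call(() =>
    client.tools.call({ name: 'get_weather', arguments: { city } })
  );
}
```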
Logging and Monitoring
Best practices for error logging:
```typescript
client.on('error', (error: MCPError) => {
  // Log to monitoring service
  logger.error({
    message: error.getUserMessage(),
    errorCode: error.errorCode,
    context: error.context,
    stack: error.stack,
    debugInfo: error.getDebugInfo()
  });

  // Track metrics
  metrics.increment('mcp.errors', {
    errorCode: error.errorCode,
    operation: error.context.operation,
    retryable: error.retryable
  });
});
```
Best Practices
- Always handle errors explicitly - Don't let them bubble up unhandled
- Use type guards - Check error types with instanceof
- Log context - Include error context in logs
- Implement retries - But respect rate limits
- Fail fast - Don't retry non-retryable errors (see the sketch after this list)
- Provide fallbacks - Have backup strategies
- Monitor errors - Track error rates and patterns
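For example, a fail-fast guard can combine instanceof type guards with the retryable flag so that only recoverable errors are retried. A sketch building on the retry helper shown earlier; the assumption that the error classes are exported from @omnimcp/core is ours:

```typescript
import { retry, MCPError, ValidationError } from '@omnimcp/core'; // assumed exports

const result = await retry(
  () => client.tools.call({ name: 'api_operation' }),
  {
    maxAttempts: 3,
    shouldRetry: (error) =>
      error instanceof MCPError &&
      error.retryable &&
      !(error instanceof ValidationError) // invalid input never succeeds on retry
  }
);
```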
Next Steps
- See advanced error handling examples
- Learn about client configuration
- Explore AI integration patterns