Multi-provider AI routing system that bridges external AI assistants (OpenAI, Gemini, Claude) to the MCP Orchestrator with health monitoring and feedback collection.
GPT-4 integration with streaming support, function calling, and error handling.
Google Gemini integration with multimodal support and safety settings.
Main integration bridge with intent mapping, tool conversion, and feedback collection.
Health checking, latency tracking, and availability monitoring for all providers.
Pydantic schemas for provider configuration, rate limits, and API keys.
Each request is mapped to capabilities and routed to the healthiest available provider; success/failure feedback is collected afterwards to improve future routing decisions.
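The "healthiest provider" selection could be sketched as below. `HealthStats` and the scoring weights are illustrative assumptions about the bridge's internals, not its actual implementation:

```python
from dataclasses import dataclass

@dataclass
class HealthStats:
    # Illustrative per-provider health snapshot.
    success_rate: float    # 0.0 .. 1.0
    avg_latency_ms: float
    available: bool

def pick_healthiest(stats: dict[str, HealthStats]) -> str:
    """Choose the available provider with the best health score.

    The score favours a high success rate and penalises latency;
    the 10_000 ms normaliser is an arbitrary illustrative weight.
    """
    candidates = {name: s for name, s in stats.items() if s.available}
    if not candidates:
        raise RuntimeError("no available providers")
    return max(
        candidates,
        key=lambda n: candidates[n].success_rate
        - candidates[n].avg_latency_ms / 10_000,
    )

stats = {
    "openai": HealthStats(success_rate=0.99, avg_latency_ms=1200, available=True),
    "gemini": HealthStats(success_rate=0.97, avg_latency_ms=600, available=True),
    "claude": HealthStats(success_rate=0.99, avg_latency_ms=900, available=False),
}
print(pick_healthiest(stats))  # gemini: lower latency outweighs the small success-rate gap
```

Unavailable providers are filtered out before scoring, which is what makes automatic fallback possible: if the top provider's health check fails, it simply stops being a candidate.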
OpenAI GPT-4, Google Gemini, and Anthropic Claude behind a unified interface.
Map natural language intents to capabilities using pattern matching.
Convert tool schemas between providers (OpenAI, Anthropic, Gemini formats).
Track latency, success rates, and availability for each provider.
Collect success/failure feedback to improve future routing decisions.
Built-in rate limit tracking and automatic provider fallback.
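Tool-schema conversion largely amounts to renaming envelope keys, since all three providers accept JSON Schema for parameters: OpenAI nests the spec under a `function` key with `parameters`, while Anthropic uses a flat object with `input_schema`. The converter below uses these real wire formats, but the function name itself is a hypothetical helper, not one of the project's modules:

```python
def openai_to_anthropic(tool: dict) -> dict:
    # OpenAI wraps the tool spec under "function"; Anthropic's format is
    # flat and names the JSON Schema "input_schema" instead of "parameters".
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn["parameters"],
    }

openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

anthropic_tool = openai_to_anthropic(openai_tool)
print(anthropic_tool["input_schema"]["required"])  # ['city']
```

Because the parameter schema itself passes through unchanged, converters like this stay small; only the envelope differs per provider.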
```python
from assistant_bridge import AssistantBridge
from provider_monitor import ProviderMonitor

# Initialize bridge with health monitoring
monitor = ProviderMonitor()
bridge = AssistantBridge(monitor=monitor)

# Configure providers
bridge.add_provider("openai", api_key="sk-...")
bridge.add_provider("gemini", api_key="AI...")

# Execute task - automatically routes to healthiest provider
response = await bridge.execute(
    task="Generate a sorting algorithm",
    capabilities=["code_generation"],
)

print(response.provider)  # openai or gemini
print(response.latency)   # 1234ms
```
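The success/failure feedback loop could be backed by a store like the following. This is a hedged sketch of the idea, with a hypothetical `FeedbackStore` class rather than the bridge's actual feedback API:

```python
from collections import defaultdict

class FeedbackStore:
    # Illustrative success/failure tracker used to bias future routing.
    def __init__(self) -> None:
        self._ok: dict[str, int] = defaultdict(int)
        self._fail: dict[str, int] = defaultdict(int)

    def record(self, provider: str, success: bool) -> None:
        """Record one task outcome for a provider."""
        if success:
            self._ok[provider] += 1
        else:
            self._fail[provider] += 1

    def success_rate(self, provider: str) -> float:
        """Observed success rate, with an optimistic prior for unseen providers."""
        total = self._ok[provider] + self._fail[provider]
        return self._ok[provider] / total if total else 1.0

store = FeedbackStore()
store.record("openai", True)
store.record("openai", False)
print(store.success_rate("openai"))  # 0.5
```

Feeding these rates back into the health score is what lets the router learn: a provider that keeps failing a capability gradually loses routing priority even while its health checks still pass.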