How AI Assistant Integration Works

Understanding multi-provider routing in projects02

1. The Problem

Different AI providers (OpenAI, Google Gemini, Anthropic Claude) have different APIs, rate limits, and capabilities. Switching between providers or handling failover requires significant code changes.

Solution: A unified bridge that abstracts provider differences and routes requests to the healthiest available provider automatically.

2. Architecture Overview

User Request → Intent Mapper → Provider Monitor → Provider Adapter → AI API

Intent Mapper

Maps natural language intents to capabilities using pattern matching.

Tool Converter

Converts tool schemas between provider formats (OpenAI, Anthropic, Gemini).

Provider Monitor

Tracks health, latency, and availability of each provider.

Feedback Collector

Collects success/failure feedback to improve routing.

3. Intent Mapping

The Intent Mapper converts natural language task descriptions into capability requirements. It uses keyword patterns to identify what capabilities are needed.

# Example intent mapping patterns
PATTERNS = {
    "code": ["code_generation", "coding"],
    "analyze": ["code_analysis", "text_analysis"],
    "summarize": ["text_summarization"],
    "translate": ["translation"],
}

# Input: "Write code to sort a list"
# Output: ["code_generation", "coding"]
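The lookup itself can be sketched as a substring scan over these patterns. This is a minimal sketch; the `map_intent` helper name is illustrative, not the bridge's actual API:

```python
# Example intent mapping patterns (from the snippet above)
PATTERNS = {
    "code": ["code_generation", "coding"],
    "analyze": ["code_analysis", "text_analysis"],
    "summarize": ["text_summarization"],
    "translate": ["translation"],
}

def map_intent(task: str) -> list[str]:
    """Return the capabilities implied by keywords found in the task text."""
    capabilities = []
    lowered = task.lower()
    for keyword, caps in PATTERNS.items():
        if keyword in lowered:
            capabilities.extend(caps)
    return capabilities

# map_intent("Write code to sort a list") -> ["code_generation", "coding"]
```

Keyword matching like this is cheap and predictable, at the cost of missing paraphrases ("implement a sorter" matches nothing); a production mapper would likely layer fuzzier matching on top.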

4. Provider Health Monitoring

The Provider Monitor continuously tracks the health of each provider:

OpenAI: 98% success, 450ms avg

Gemini: 95% success, 380ms avg

Claude: 99% success, 520ms avg
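One way to fold these two metrics into a single score is to weight the success rate by a latency penalty. The exact formula the Provider Monitor uses is not specified here; this is an illustrative sketch:

```python
def health_score(success_rate: float, avg_latency_ms: float) -> float:
    """Illustrative health score: success rate discounted by latency.
    A provider with zero latency keeps its full success rate; slower
    providers are scaled down smoothly."""
    return success_rate * (1000.0 / (1000.0 + avg_latency_ms))

# Using the sample numbers above:
scores = {
    "OpenAI": health_score(0.98, 450),  # ~0.676
    "Gemini": health_score(0.95, 380),  # ~0.688
    "Claude": health_score(0.99, 520),  # ~0.651
}
```

Note how the weighting shifts the ranking: Claude has the best raw success rate, but under this particular formula Gemini's lower latency puts it on top.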

5. Provider Selection

When a request comes in, the bridge selects the best provider based on:

  1. Capability support (does the provider support the required capability?)
  2. Current health score (latency + success rate)
  3. Rate limit status (has quota remaining?)
  4. User preference (if specified)

def select_provider(capabilities, preference=None):
    candidates = []
    for provider in available_providers:
        if not provider.supports(capabilities):
            continue
        if not provider.has_quota():
            continue  # rate-limit check (criterion 3; has_quota is an assumed helper)
        score = provider.health_score()
        if preference == provider.name:
            score *= 1.5  # boost the user-preferred provider (criterion 4)
        candidates.append((provider, score))
    if not candidates:
        raise RuntimeError("no provider supports the requested capabilities")
    return max(candidates, key=lambda x: x[1])
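
Running the selection logic against stubbed providers shows the preference boost in action. The stub classes, names, and scores below are illustrative, and this copy of the selection loop is simplified (no rate-limit check):

```python
from dataclasses import dataclass

@dataclass
class StubProvider:
    name: str
    caps: set
    score: float

    def supports(self, capabilities):
        return set(capabilities) <= self.caps

    def health_score(self):
        return self.score

available_providers = [
    StubProvider("openai", {"code_generation"}, 0.68),
    StubProvider("claude", {"code_generation", "text_summarization"}, 0.65),
]

def select_provider(capabilities, preference=None):
    # Simplified copy of the selection logic above.
    candidates = []
    for p in available_providers:
        if p.supports(capabilities):
            score = p.health_score()
            if preference == p.name:
                score *= 1.5
            candidates.append((p, score))
    return max(candidates, key=lambda x: x[1])

# Without a preference, openai wins on health score (0.68 > 0.65).
# With preference="claude", the 1.5x boost (0.65 * 1.5 = 0.975) flips the choice.
```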

6. Tool Format Conversion

Different providers expect different tool/function schemas. The Tool Converter handles this:

# OpenAI format
{"type": "function", "function": {"name": "search", ...}}

# Anthropic format
{"name": "search", "input_schema": {...}}

# Gemini format
{"name": "search", "parameters": {...}}
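Since all three formats wrap the same JSON-Schema payload, conversion is mostly key renaming. A minimal sketch, assuming the inner `parameters` schema carries over unchanged (the function names here are illustrative):

```python
def openai_to_anthropic(tool: dict) -> dict:
    """Convert an OpenAI-style tool schema to Anthropic's format."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn.get("parameters", {}),  # renamed key, same schema
    }

def openai_to_gemini(tool: dict) -> dict:
    """Convert an OpenAI-style tool schema to Gemini's format."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "parameters": fn.get("parameters", {}),  # Gemini keeps the key name
    }
```

Keeping OpenAI's format as the internal canonical representation and converting outward at the adapter boundary is one common design; the reverse direction works the same way with the renames inverted.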

7. Feedback Loop

After each request, feedback is collected and sent to the Learning Layer (M5):

Response → Feedback Collector → M5 Learning → Score Update

Successful requests boost the provider's score; failures reduce it. This creates a self-improving system that learns which providers work best for which tasks.
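
One simple way to implement such a score update is an exponential moving average over outcomes. The actual M5 update rule is not documented here; this is an illustrative sketch:

```python
def update_score(current: float, success: bool, alpha: float = 0.1) -> float:
    """Blend the latest outcome (1.0 for success, 0.0 for failure) into
    the running score. Higher alpha reacts faster but is noisier."""
    outcome = 1.0 if success else 0.0
    return (1 - alpha) * current + alpha * outcome

# A success nudges a 0.9 score up to 0.91; a failure would pull it down to 0.81.
```

An EMA gives recent outcomes more weight than old ones, so a provider that recovers after an outage regains its score within a bounded number of requests instead of being penalized forever.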