Original Implementation v1

Project Structure

The BILL agent follows a modular architecture that separates concerns and enables easy extension. Here's how the codebase is organized:

bill-agent/
├── src/
│   ├── core/                 # Core business logic and orchestration
│   │   ├── agent.ts          # Main agent runtime - orchestrates all components
│   │   ├── character.ts      # Character system - defines BILL's personality
│   │   ├── memory/           # Memory management - handles context and knowledge
│   │   │   ├── manager.ts    # Memory coordinator - routes between memory types
│   │   │   ├── platform.ts   # Platform-specific memory - isolated conversations
│   │   │   ├── shared.ts     # Shared memory - cross-platform knowledge
│   │   │   └── types.ts      # Memory interfaces and type definitions
│   │   ├── context.ts        # Context building - assembles conversation context
│   │   └── llm-router.ts     # LLM provider routing - selects optimal models
│   ├── plugins/              # Platform integrations - handle external APIs
│   │   ├── base.ts           # Base plugin interface - common plugin contract
│   │   ├── twitter.ts        # Twitter integration - OAuth 2.0 PKCE + API
│   │   └── telegram.ts       # Telegram integration - bot API wrapper
│   ├── providers/            # External service providers - AI and data services
│   │   ├── llm.ts           # LLM provider interface - common AI contract
│   │   ├── openai.ts        # OpenAI implementation - GPT models
│   │   ├── openrouter.ts    # OpenRouter implementation - multiple models
│   │   ├── embedding.ts     # Embedding service - text vectorization
│   │   └── image.ts         # Image generation service - DALL-E integration
│   ├── database/            # Data persistence layer - handles all storage
│   │   ├── supabase.ts      # Supabase client - primary database
│   │   ├── pinecone.ts      # Pinecone client - vector database
│   │   └── migrations/      # Database migrations - schema evolution
│   ├── config/              # Configuration management - settings and setup
│   │   ├── character.ts     # Character configuration - BILL's personality
│   │   └── environment.ts   # Environment setup - validation and defaults
│   └── main.ts              # Application entry point - startup and initialization
├── package.json
└── .env.example

Core Interfaces

Message Interface

The Message interface is the fundamental data structure that flows through the entire system. It provides a unified way to represent user input from any platform, ensuring that the core agent logic doesn't need to know about platform-specific message formats.
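
For illustration, a unified message type might look roughly like the sketch below; the field names are assumptions rather than the project's actual definitions:

// Sketch of a unified message shape (illustrative field names).
interface Message {
  id: string;                          // Platform-assigned or generated message ID
  platform: 'twitter' | 'telegram';    // Source platform
  userId: string;                      // Author of the message on that platform
  text: string;                        // Normalized message text
  imageUrls?: string[];                // Attached images, if any
  timestamp: Date;                     // When the message was received
  metadata?: Record<string, unknown>;  // Platform-specific extras (thread IDs, chat IDs, ...)
}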

Why this matters: By standardizing the message format, we can add new platforms (Discord, Slack, etc.) without changing the core agent logic. The metadata field allows platform-specific features while keeping the interface clean.

Memory Interfaces

The memory system is the brain of BILL - it determines what context to provide for generating responses. We use a dual-layer approach to balance personalization with privacy.

Platform Memory keeps conversations isolated - your Twitter conversations don't bleed into Telegram chats. This maintains privacy and context appropriateness.

Shared Memory captures valuable knowledge that applies across platforms - like facts about cryptocurrency, coding solutions, or general knowledge that BILL learns.
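
Expressed as interfaces, the two layers might look something like the sketch below; the names and method signatures are assumptions, not the actual contents of types.ts:

// Sketch of the dual-layer memory contracts (illustrative).
interface MemoryEntry {
  content: string;
  timestamp: Date;
  metadata?: Record<string, unknown>;
}

interface PlatformMemory {
  platform: string;                                                          // 'twitter', 'telegram', ...
  storeMessage(conversationId: string, entry: MemoryEntry): Promise<void>;
  getThread(conversationId: string, limit?: number): Promise<MemoryEntry[]>; // chronological history
  searchRelevant(query: string, limit?: number): Promise<MemoryEntry[]>;     // semantic search within the platform
}

interface SharedMemory {
  storeKnowledge(category: string, entry: MemoryEntry, importance: number): Promise<void>;
  searchKnowledge(query: string, limit?: number): Promise<MemoryEntry[]>;    // cross-platform recall
}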

Plugin Interface

Plugins are how BILL connects to external platforms. Each plugin handles the messy details of platform APIs, authentication, and message formatting, presenting a clean interface to the core agent.
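
One plausible shape for that common contract, reusing the Message sketch above (the method names are assumptions):

// Sketch of a base plugin contract (illustrative).
interface Plugin {
  name: string;                                                        // e.g. 'twitter', 'telegram'
  start(onMessage: (msg: Message) => Promise<string>): Promise<void>;  // begin listening and hand messages to the agent
  sendResponse(original: Message, response: string): Promise<void>;    // deliver the agent's reply on the right platform
  stop(): Promise<void>;                                               // clean shutdown
}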

Why plugins matter: Each platform has different APIs, authentication methods, and message formats. Plugins isolate this complexity, making it easy to add new platforms or update existing ones without touching the core agent logic.

Character Interface

The Character system defines BILL's personality, expertise, and how he adapts to different platforms. This ensures consistent personality while allowing platform-appropriate communication styles.
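
A character definition along these lines could be modeled roughly as follows (field names are illustrative):

// Sketch of a character definition (illustrative fields).
interface Character {
  name: string;                          // 'BILL'
  bio: string;                           // Core personality description
  expertise: string[];                   // Topics BILL speaks on confidently
  styleRules: string[];                  // Tone and voice guidelines
  platformStyles: Record<string, {
    maxLength?: number;                  // e.g. stay under Twitter's character limit
    tone?: string;                       // e.g. more conversational on Telegram
  }>;
}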

Platform adaptation: BILL might be more concise on Twitter due to character limits, but more detailed on Telegram where longer messages are welcome.

LLM Router Interfaces

The LLM Router intelligently selects the best AI model for each task, balancing cost, capability, and performance. Different tasks benefit from different models.

Smart routing examples:

  • Simple questions → Cheaper, faster models (Llama 3.1)

  • Complex coding → High-capability models (Claude 3.5 Sonnet)

  • Image analysis → Vision-capable models (GPT-4o)

  • Creative writing → Models optimized for creativity (GPT-4)
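
Reduced to code, that routing logic can be little more than a lookup table; the task types and model identifiers below are illustrative:

// Sketch of task-to-model routing (illustrative task types and model IDs).
type TaskType = 'simple' | 'coding' | 'vision' | 'creative';

interface RoutingDecision {
  provider: 'openai' | 'openrouter';
  model: string;
}

const MODEL_TABLE: Record<TaskType, RoutingDecision> = {
  simple:   { provider: 'openrouter', model: 'meta-llama/llama-3.1-70b-instruct' },
  coding:   { provider: 'openrouter', model: 'anthropic/claude-3.5-sonnet' },
  vision:   { provider: 'openai',     model: 'gpt-4o' },
  creative: { provider: 'openai',     model: 'gpt-4' },
};

function routeTask(task: TaskType): RoutingDecision {
  return MODEL_TABLE[task];
}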

Image Generation Interfaces

BILL can generate and analyze images to enhance conversations. This adds visual communication capabilities while managing costs and usage.
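
A minimal provider contract for that capability might look like this (the method names are assumptions):

// Sketch of an image provider contract (illustrative).
interface ImageProvider {
  generateImage(prompt: string): Promise<{ url: string; costUsd: number }>;  // DALL-E style generation with cost tracking
  analyzeImage(imageUrl: string, question?: string): Promise<string>;        // vision-model description of an image
}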

Memory System Implementation

Memory Manager: The Central Coordinator

The Memory Manager acts as a traffic controller, routing memory operations to the appropriate storage systems. It ensures that platform-specific conversations stay isolated while shared knowledge remains accessible to all platforms.
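
In outline, the coordinator might look something like this sketch, reusing the PlatformMemory and SharedMemory interfaces above (not the actual manager.ts):

// Sketch of a memory coordinator that fans out to the right layer (illustrative).
class MemoryManager {
  constructor(
    private platformMemories: Map<string, PlatformMemory>,  // one isolated store per platform
    private shared: SharedMemory,                            // cross-platform knowledge
  ) {}

  async getContext(platform: string, conversationId: string, query: string) {
    const platformMemory = this.platformMemories.get(platform);
    if (!platformMemory) throw new Error(`No memory registered for platform: ${platform}`);

    const [thread, related, knowledge] = await Promise.all([
      platformMemory.getThread(conversationId),  // recent conversation, this platform only
      platformMemory.searchRelevant(query),      // semantically related history, this platform only
      this.shared.searchKnowledge(query),        // facts shared across platforms
    ]);
    return { thread, related, knowledge };
  }
}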

Why this design: The Memory Manager provides a single interface for the agent while managing the complexity of dual-layer storage. Adding new platforms just requires implementing the PlatformMemory interface.

Platform Memory: Conversation Context

Platform Memory keeps track of conversations within each platform. For Twitter, this means tracking reply threads and mentions. For Telegram, it means tracking chat history and user interactions.

Key benefits:

  • Fast thread retrieval: Supabase queries for chronological conversation history

  • Semantic search: Pinecone finds relevant past interactions even if keywords don't match

  • Platform isolation: Twitter conversations don't interfere with Telegram chats
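
For the chronological side, a supabase-js query might look roughly like the following; the table and column names here are assumptions, and the Pinecone semantic search is omitted:

// Sketch of chronological thread retrieval via supabase-js (illustrative table/column names).
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

async function getThread(platform: string, conversationId: string, limit = 20) {
  const { data, error } = await supabase
    .from('platform_messages')
    .select('*')
    .eq('platform', platform)
    .eq('conversation_id', conversationId)
    .order('created_at', { ascending: false })  // newest first
    .limit(limit);

  if (error) throw error;
  return (data ?? []).reverse();                // hand back in chronological order for prompt building
}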

Shared Memory: Cross-Platform Knowledge

Shared Memory captures valuable knowledge that applies across all platforms. This includes facts BILL learns, successful response patterns, and general knowledge that enhances future conversations.

Intelligence features:

  • Automatic filtering: Only stores high-value interactions to avoid noise

  • Category organization: Groups knowledge by type for better retrieval

  • Importance scoring: Prioritizes technical and educational content
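
The filtering step can reduce to a small heuristic like the one below; the keyword lists and threshold are assumptions made for illustration:

// Sketch of importance scoring used to decide what enters shared memory (illustrative).
function scoreImportance(text: string): number {
  let score = 0;
  if (/\b(code|bug|error|function|deploy)\b/i.test(text)) score += 0.4;  // technical content
  if (/\b(bitcoin|ethereum|defi|token)\b/i.test(text)) score += 0.3;     // crypto knowledge
  if (text.length > 200) score += 0.2;                                   // substantive answers
  return Math.min(score, 1);
}

function shouldStoreInSharedMemory(text: string, threshold = 0.5): boolean {
  return scoreImportance(text) >= threshold;
}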

Agent Runtime Implementation

Core Agent Class: The Orchestrator

The Agent Runtime is the central nervous system of BILL. It coordinates all components to process incoming messages and generate appropriate responses.

Processing flow:

  1. Image Analysis: If the message contains images, analyze them first

  2. Context Gathering: Retrieve relevant conversation history and knowledge

  3. Task Analysis: Determine what type of response is needed

  4. LLM Selection: Choose the best model for this specific task

  5. Context Building: Assemble all information into a comprehensive prompt

  6. Response Generation: Generate the response using the selected LLM

  7. Image Generation: Create images if the response suggests it

  8. Memory Storage: Store the interaction for future learning
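
Condensed into code, the flow might read roughly like the sketch below, reusing the Message, MemoryManager, TaskType, RoutingDecision, and ImageProvider sketches from earlier sections (not the actual agent.ts):

// Sketch of the processing flow (illustrative; dependencies injected for clarity).
async function processMessage(
  msg: Message,
  deps: {
    memory: MemoryManager;
    route: (task: TaskType) => RoutingDecision;
    complete: (model: string, prompt: string) => Promise<string>;
    images: ImageProvider;
    buildPrompt: (msg: Message, context: unknown, imageNotes: string[]) => string;
  },
): Promise<string> {
  // 1. Image analysis
  const imageNotes = msg.imageUrls?.length
    ? await Promise.all(msg.imageUrls.map((url) => deps.images.analyzeImage(url)))
    : [];

  // 2. Context gathering (userId stands in for a conversation ID here)
  const context = await deps.memory.getContext(msg.platform, msg.userId, msg.text);

  // 3-4. Task analysis and model selection
  const task: TaskType = msg.imageUrls?.length ? 'vision' : 'simple';
  const { model } = deps.route(task);

  // 5-6. Context building and response generation
  const prompt = deps.buildPrompt(msg, context, imageNotes);
  const response = await deps.complete(model, prompt);

  // 7-8. Image generation and memory storage would follow here (omitted in this sketch).
  return response;
}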

Context Builder: Assembling the Perfect Prompt

The Context Builder takes all available information and creates a comprehensive prompt that gives the LLM everything it needs to generate an appropriate response as BILL.

Context prioritization:

  1. Character prompt: Establishes BILL's personality and capabilities

  2. Platform context: Recent conversation and relevant memories

  3. Shared knowledge: Cross-platform facts and learnings

  4. Current message: The immediate question or comment

  5. Image analysis: Visual context if images are present
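
Assembled in that order, the prompt builder can stay very small; the section labels below are illustrative:

// Sketch of prompt assembly following the priority order above (illustrative).
function buildContext(sections: {
  characterPrompt: string;   // 1. Personality and capabilities
  platformContext: string;   // 2. Recent thread and relevant memories
  sharedKnowledge: string;   // 3. Cross-platform facts and learnings
  currentMessage: string;    // 4. The message being answered
  imageAnalysis?: string;    // 5. Visual context, if present
}): string {
  return [
    sections.characterPrompt,
    `Conversation context:\n${sections.platformContext}`,
    `Relevant knowledge:\n${sections.sharedKnowledge}`,
    sections.imageAnalysis ? `Image analysis:\n${sections.imageAnalysis}` : '',
    `Current message:\n${sections.currentMessage}`,
  ]
    .filter(Boolean)
    .join('\n\n');
}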

Database Schema

Complete Supabase Schema

The database schema is designed for both performance and flexibility, supporting the dual-layer memory architecture while enabling fast queries and analytics.

Schema design principles:

  • Platform isolation: Separate tables for each platform's conversations

  • Shared knowledge: Central repository for cross-platform learning

  • Performance optimization: Indexes on frequently queried columns

  • Cost tracking: Monitor LLM and image generation expenses

  • Flexibility: JSONB metadata fields for platform-specific data
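
The SQL itself isn't reproduced here, but on the TypeScript side the main tables could be typed roughly as follows (table and column names are assumptions):

// Sketch of row types mirroring the dual-layer schema (illustrative).
interface PlatformMessageRow {
  id: string;                          // uuid primary key
  platform: 'twitter' | 'telegram';    // indexed to keep platforms isolated
  conversation_id: string;             // indexed for fast thread retrieval
  user_id: string;
  content: string;
  metadata: Record<string, unknown>;   // JSONB for platform-specific data
  created_at: string;                  // timestamptz
}

interface SharedKnowledgeRow {
  id: string;
  category: string;                    // grouping for better retrieval
  content: string;
  importance: number;                  // scoring for prioritization
  created_at: string;
}

interface UsageCostRow {
  id: string;
  provider: string;                    // 'openai', 'openrouter', ...
  model: string;
  tokens: number;
  cost_usd: number;                    // LLM and image generation expense tracking
  created_at: string;
}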

Environment Setup

The environment configuration supports the OAuth 2.0 PKCE flow and provides sensible defaults for development and production.
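
In the spirit of environment.ts, validation with defaults might look like this; the variable names are assumptions apart from the obvious provider keys:

// Sketch of environment validation with sensible defaults (illustrative variable names).
interface AppConfig {
  supabaseUrl: string;
  supabaseKey: string;
  twitterClientId: string;   // OAuth 2.0 PKCE client ID
  openaiApiKey: string;
  logLevel: string;
}

function loadConfig(): AppConfig {
  const required = (name: string): string => {
    const value = process.env[name];
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
    return value;
  };

  return {
    supabaseUrl: required('SUPABASE_URL'),
    supabaseKey: required('SUPABASE_SERVICE_KEY'),
    twitterClientId: required('TWITTER_CLIENT_ID'),
    openaiApiKey: required('OPENAI_API_KEY'),
    logLevel: process.env.LOG_LEVEL ?? 'info',  // default suitable for development
  };
}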

Development Workflow

Getting Started

Testing Strategy

The testing approach covers all layers of the system to ensure reliability and performance.

Unit Tests

  • Memory system components: Test storage and retrieval logic

  • Context building logic: Verify prompt assembly

  • Character system: Test personality consistency

  • Plugin message transformation: Validate format conversion
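
As one example, a unit test for the importance-scoring heuristic sketched in the Shared Memory section could look like this, assuming Jest or a compatible runner:

// Sketch of a unit test for the shared-memory filtering heuristic (illustrative).
import { describe, expect, it } from '@jest/globals';
// shouldStoreInSharedMemory is the heuristic sketched earlier; import it from wherever it lives.

describe('shouldStoreInSharedMemory', () => {
  it('keeps substantial technical content', () => {
    const text = 'Fixed the deploy error by pinning the function runtime version'.padEnd(250, '.');
    expect(shouldStoreInSharedMemory(text)).toBe(true);
  });

  it('drops low-value small talk', () => {
    expect(shouldStoreInSharedMemory('lol ok')).toBe(false);
  });
});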

Integration Tests

  • Database operations: Test Supabase and Pinecone integration

  • Vector search functionality: Verify semantic search accuracy

  • End-to-end message processing: Test complete message flow

  • Platform API interactions: Test Twitter and Telegram APIs

Performance Tests

  • Memory retrieval speed: Ensure fast context assembly

  • Context building performance: Optimize prompt generation

  • Concurrent message handling: Test multiple simultaneous conversations

  • Cost optimization: Monitor and optimize LLM usage costs

This implementation guide provides a solid foundation for building and extending the BILL agent system while maintaining code quality and performance.
