The Message interface is the fundamental data structure that flows through the entire system. It provides a unified way to represent user input from any platform, ensuring that the core agent logic doesn't need to know about platform-specific message formats.
Why this matters: By standardizing message format, we can add new platforms (Discord, Slack, etc.) without changing the core agent logic. The metadata field allows platform-specific features while keeping the interface clean.
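A minimal sketch of that normalization (the raw payload below is illustrative, loosely modeled on a Telegram update):

// Illustrative raw payload -- simplified from what a Telegram bot actually receives
const rawTelegramMessage = {
  message_id: 42,
  chat: { id: 12345 },
  from: { id: 67890 },
  text: 'What is a smart contract?',
  date: 1700000000 // Unix seconds
};

// Normalized into the standard Message the core agent understands
const message: Message = {
  id: String(rawTelegramMessage.message_id),
  content: rawTelegramMessage.text,
  platform: 'telegram',
  authorId: String(rawTelegramMessage.from.id),
  timestamp: new Date(rawTelegramMessage.date * 1000),
  metadata: { chatId: String(rawTelegramMessage.chat.id) }
};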
Memory Interfaces
The memory system is the brain of BILL - it determines what context to provide for generating responses. We use a dual-layer approach to balance personalization with privacy.
Platform Memory keeps conversations isolated - your Twitter conversations don't bleed into Telegram chats. This maintains privacy and context appropriateness.
Shared Memory captures valuable knowledge that applies across platforms - like facts about cryptocurrency, coding solutions, or general knowledge that BILL learns.
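A sketch of what each layer contributes when a message arrives, using the memory interfaces defined later in this section:

// Platform memory answers "what has this conversation been about?";
// shared memory answers "what does BILL know that could help?"
async function gatherBothLayers(manager: MemoryManager, message: Message) {
  const [platformContext, sharedKnowledge] = await Promise.all([
    manager.getPlatformMemory(message.platform).getRelevantContext(message),
    manager.getSharedMemory().searchKnowledge(message.content)
  ]);
  return { platformContext, sharedKnowledge };
}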
Plugin Interface
Plugins are how BILL connects to external platforms. Each plugin handles the messy details of platform APIs, authentication, and message formatting, presenting a clean interface to the core agent.
Why plugins matter: Each platform has different APIs, authentication methods, and message formats. Plugins isolate this complexity, making it easy to add new platforms or update existing ones without touching the core agent logic.
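A skeleton makes that division of labor concrete. This sketch assumes a hypothetical TelegramClient wrapper rather than any real Bot API library, and hands messages to the AgentRuntime shown later in this document:

// Hypothetical thin wrapper over the Telegram Bot API -- illustrative only
interface TelegramClient {
  connect(): Promise<void>;
  onMessage(handler: (raw: any) => void): void;
  send(chatId: string, text: string): Promise<void>;
}

class TelegramPlugin implements IPlugin {
  platform = 'telegram';

  constructor(private client: TelegramClient, private agent: AgentRuntime) {}

  async initialize(): Promise<void> {
    await this.client.connect();
    this.client.onMessage(raw => this.processMessage(raw));
  }

  async processMessage(raw: any): Promise<void> {
    const message = this.transformMessage(raw);
    const result = await this.agent.processMessage(message);
    if (result.shouldReply) await this.sendResponse(message, result.content);
  }

  async sendResponse(message: Message, response: string): Promise<void> {
    await this.client.send(message.metadata.chatId, this.formatResponse(response, message));
  }

  transformMessage(raw: any): Message {
    return {
      id: String(raw.message_id),
      content: raw.text ?? '',
      platform: 'telegram',
      authorId: String(raw.from?.id),
      timestamp: new Date(raw.date * 1000),
      metadata: { chatId: String(raw.chat?.id) }
    };
  }

  formatResponse(response: string, _context: any): string {
    return response; // Telegram tolerates long messages; no trimming needed here
  }
}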
Character Interface
The Character system defines BILL's personality, expertise, and how he adapts to different platforms. This ensures consistent personality while allowing platform-appropriate communication styles.
Platform adaptation: BILL might be more concise on Twitter due to character limits, but more detailed on Telegram where longer messages are welcome.
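Concretely, a single character definition can carry per-platform style rules. The values below are illustrative, not BILL's actual configuration:

const billCharacter: CharacterConfig = {
  name: 'BILL',
  description: 'A crypto-native assistant that chats across platforms',
  personality: {
    traits: ['helpful', 'knowledgeable', 'witty'],
    tone: 'friendly and direct',
    expertise: ['cryptocurrency', 'blockchain', 'programming']
  },
  platforms: {
    twitter: { maxLength: 280, useHashtags: true, style: 'punchy, one idea per tweet' },
    telegram: { useMarkdown: true, style: 'conversational, detail welcome' }
  },
  limitations: ['no financial advice', 'no private key handling']
};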
LLM Router Interfaces
The LLM Router intelligently selects the best AI model for each task, balancing cost, capability, and performance. Different tasks benefit from different models.
Creative writing, for example, is routed to models optimized for creativity (such as GPT-4).
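A minimal routing sketch against the LLMRouter interface defined below; the provider names and the task-to-model policy are illustrative assumptions, not a fixed rule:

class SimpleLLMRouter implements LLMRouter {
  constructor(private providers: Map<string, LLMProvider>) {}

  async selectProvider(task: LLMTask): Promise<LLMProvider> {
    // Illustrative policy: vision and creative work go to a stronger model,
    // code goes to a code-capable model, everything else to a cheap default
    let name = 'default';
    if (task.requiresVision || task.type === 'creative') name = 'gpt-4';
    else if (task.type === 'code') name = 'code-model';
    const provider = this.providers.get(name) ?? this.providers.get('default');
    if (!provider) throw new Error(`No provider registered for: ${name}`);
    return provider;
  }

  async trackUsage(provider: string, tokens: number, cost: number): Promise<void> {
    // A real implementation would write to the llm_usage table shown later
    console.log(`[llm] ${provider}: ${tokens} tokens, $${cost.toFixed(4)}`);
  }
}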
Image Generation Interfaces
BILL can generate and analyze images to enhance conversations. This adds visual communication capabilities while managing costs and usage.
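A brief usage sketch against the ImageGenerator interface defined below (the prompt and settings are illustrative):

async function illustrateReply(generator: ImageGenerator): Promise<string> {
  const result = await generator.generateImage({
    prompt: 'A simple diagram of how a blockchain links blocks together',
    style: 'diagram',
    size: '1024x1024',
    platform: 'telegram'
  });
  // Persist the image so it can be referenced in later conversations
  return generator.storeImage(result.url, 'telegram');
}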
Memory System Implementation
Memory Manager: The Central Coordinator
The Memory Manager acts as a traffic controller, routing memory operations to the appropriate storage systems. It ensures that platform-specific conversations stay isolated while shared knowledge remains accessible to all platforms.
Why this design: The Memory Manager provides a single interface for the agent while managing the complexity of dual-layer storage. Adding new platforms just requires implementing the PlatformMemory interface.
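To make that claim concrete, here is a stubbed sketch of adding a hypothetical Discord platform. PlatformContext is assumed to be { conversationHistory, relevantMemories }, matching how the ContextBuilder consumes it later in this document:

// Hypothetical new platform: this stub shape is all the core agent requires
class DiscordMemory implements PlatformMemory {
  async storeInteraction(message: Message, response: string): Promise<void> {
    // Would write to a discord_conversations table mirroring the existing ones
  }
  async getRelevantContext(message: Message): Promise<PlatformContext> {
    return { conversationHistory: [], relevantMemories: [] };
  }
  async getConversationHistory(threadId: string, limit = 10): Promise<Message[]> {
    return [];
  }
  async searchInteractions(query: string, limit = 5): Promise<MemoryItem[]> {
    return [];
  }
}

// Then register it once in the manager's map -- nothing else in the agent changes:
// this.platformMemories.set('discord', new DiscordMemory());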
Platform Memory: Conversation Context
Platform Memory keeps track of conversations within each platform. For Twitter, this means tracking reply threads and mentions. For Telegram, it means tracking chat history and user interactions.
Key benefits:
Fast thread retrieval: Supabase queries for chronological conversation history
Semantic search: Pinecone finds relevant past interactions even when keywords don't match (see the sketch after this list)
Platform isolation: Twitter conversations don't interfere with Telegram chats
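A sketch of the two retrieval paths, written as standalone functions. The table and namespace names follow the schema later in this document, and the Pinecone client usage mirrors the SharedMemory implementation shown below:

// Fast path: chronological thread history straight from Supabase
async function getTwitterThread(
  supabase: SupabaseClient,
  threadId: string,
  limit = 10
): Promise<Message[]> {
  const { data, error } = await supabase
    .from('twitter_conversations')
    .select('*')
    .eq('thread_id', threadId)
    .order('timestamp', { ascending: true })
    .limit(limit);
  if (error) throw error;
  return (data ?? []).map(row => ({
    id: row.message_id,
    content: row.content,
    platform: 'twitter' as const,
    authorId: row.author_id,
    threadId: row.thread_id ?? undefined,
    timestamp: new Date(row.timestamp),
    metadata: row.metadata ?? {}
  }));
}

// Semantic path: embed the query, then search only this platform's namespace
async function searchTwitterMemories(
  pinecone: PineconeClient,
  embeddings: EmbeddingService,
  query: string,
  limit = 5
): Promise<MemoryItem[]> {
  const vector = await embeddings.generateEmbedding(query);
  const results = await pinecone.query({
    vector,
    topK: limit,
    namespace: 'twitter',
    includeMetadata: true
  });
  return (results.matches ?? []).map(m => ({
    id: m.id,
    content: (m.metadata?.content as string) ?? '',
    timestamp: new Date((m.metadata?.timestamp as string) ?? Date.now()),
    score: m.score,
    metadata: (m.metadata as Record<string, any>) ?? {}
  }));
}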
Shared Memory: Cross-Platform Knowledge
Shared Memory captures valuable knowledge that applies across all platforms. This includes facts BILL learns, successful response patterns, and general knowledge that enhances future conversations.
Intelligence features:
Automatic filtering: Only stores high-value interactions to avoid noise
Category organization: Groups knowledge by type for better retrieval (usage sketch below)
Importance scoring: Prioritizes technical and educational content
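Usage, in brief (a sketch; the importance threshold and scoring themselves live in the SharedMemory implementation shown later):

async function demoSharedKnowledge(shared: SharedMemory) {
  // Manually seed a categorized, tagged fact
  await shared.addKnowledge(
    'EIP-1559 burns the base fee instead of paying it to miners',
    'crypto',
    ['ethereum', 'fees']
  );
  // Later, from any platform, semantically related questions find it
  return shared.searchKnowledge('how do Ethereum gas fees work?');
}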
Agent Runtime Implementation
Core Agent Class: The Orchestrator
The Agent Runtime is the central nervous system of BILL. It coordinates all components to process incoming messages and generate appropriate responses.
Processing flow:
Image Analysis: If the message contains images, analyze them first
Context Gathering: Retrieve relevant conversation history and knowledge
Task Analysis: Determine what type of response is needed
LLM Selection: Choose the best model for this specific task
Context Building: Assemble all information into a comprehensive prompt
Response Generation: Generate the response using the selected LLM
Image Generation: Create images if the response suggests it
Memory Storage: Store the interaction for future learning
Context Builder: Assembling the Perfect Prompt
The Context Builder takes all available information and creates a comprehensive prompt that gives the LLM everything it needs to generate an appropriate response as BILL.
Context prioritization:
Character prompt: Establishes BILL's personality and capabilities
Platform context: Recent conversation and relevant memories
Shared knowledge: Cross-platform facts and learnings
Current message: The immediate question or comment
Image analysis: Visual context if images are present
Database Schema
Complete Supabase Schema
The database schema is designed for both performance and flexibility, supporting the dual-layer memory architecture while enabling fast queries and analytics.
Schema design principles:
Platform isolation: Separate tables for each platform's conversations
Shared knowledge: Central repository for cross-platform learning
Performance optimization: Indexes on frequently queried columns
Cost tracking: Monitor LLM and image generation expenses
Flexibility: JSONB metadata fields for platform-specific data
Environment Setup
The environment configuration supports the OAuth 2.0 PKCE flow and provides sensible defaults for development and production.
Development Workflow
Getting Started
Testing Strategy
The testing approach covers all layers of the system to ensure reliability and performance.
Unit Tests
Memory system components: Test storage and retrieval logic
Context building logic: Verify prompt assembly
Character system: Test personality consistency
Plugin message transformation: Validate format conversion
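As one example, a unit test for message transformation might look like this, using Bun's built-in bun:test runner and the illustrative TelegramPlugin sketched earlier:

import { describe, expect, test } from 'bun:test';

describe('TelegramPlugin.transformMessage', () => {
  test('maps a raw payload to the standard Message shape', () => {
    // Minimal test doubles; transformMessage touches neither of them
    const fakeClient = { connect: async () => {}, onMessage: () => {}, send: async () => {} };
    const plugin = new TelegramPlugin(fakeClient, {} as AgentRuntime);

    const message = plugin.transformMessage({
      message_id: 42,
      chat: { id: 12345 },
      from: { id: 67890 },
      text: 'gm',
      date: 1700000000
    });

    expect(message.platform).toBe('telegram');
    expect(message.authorId).toBe('67890');
    expect(message.content).toBe('gm');
  });
});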
Integration Tests
Database operations: Test Supabase and Pinecone integration
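A sketch of one such test, assuming a SharedMemory instance wired to throwaway test credentials:

import { expect, test } from 'bun:test';

// Provided by test setup against a dedicated test project -- not shown here
declare const sharedMemory: SharedMemory;

test('shared knowledge round-trips through Supabase and Pinecone', async () => {
  await sharedMemory.addKnowledge('test fact about bitcoin', 'crypto', ['test']);
  // Note: vector writes can take a moment to become queryable;
  // a real test would poll or wait briefly before asserting
  const hits = await sharedMemory.searchKnowledge('test fact about bitcoin', 1);
  expect(hits.length).toBeGreaterThan(0);
  expect(hits[0].category).toBe('crypto');
});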
interface Message {
id: string; // Unique identifier for the message
content: string; // The actual text content from the user
platform: 'twitter' | 'telegram'; // Which platform this came from
authorId: string; // Platform-specific user identifier
threadId?: string; // For threaded conversations (Twitter replies)
timestamp: Date; // When the message was sent
metadata: Record<string, any>; // Platform-specific data (images, mentions, etc.)
}
interface AgentResponse {
content: string; // The generated response text
shouldReply: boolean; // Whether to actually send this response
imageUrl?: string; // Optional generated image URL
metadata?: Record<string, any>; // Additional response data
}
interface PlatformMemory {
// Store a conversation interaction for future reference
storeInteraction(message: Message, response: string): Promise<void>;
// Get relevant context for generating a response to this message
getRelevantContext(message: Message): Promise<PlatformContext>;
// Retrieve conversation history for threaded discussions
getConversationHistory(threadId: string, limit?: number): Promise<Message[]>;
// Search past interactions using semantic similarity
searchInteractions(query: string, limit?: number): Promise<MemoryItem[]>;
}
interface SharedMemory {
// Store interactions that contain valuable cross-platform knowledge
storeInteraction(message: Message, response: string): Promise<void>;
// Search for relevant knowledge across all platforms
searchKnowledge(query: string, limit?: number): Promise<KnowledgeItem[]>;
// Manually add important facts or information
addKnowledge(content: string, category: string, tags?: string[]): Promise<void>;
// Get analytics about interaction patterns
getInteractionStats(): Promise<InteractionStats>;
}
interface MemoryItem {
id: string; // Unique identifier
content: string; // Original message content
response?: string; // BILL's response (if any)
timestamp: Date; // When this happened
score?: number; // Relevance score from vector search
metadata: Record<string, any>; // Additional context
}
interface KnowledgeItem {
id: string; // Unique identifier
content: string; // The knowledge content
category: string; // Type of knowledge (tech, crypto, etc.)
tags: string[]; // Searchable tags
importance: number; // How important this knowledge is (1-10)
score?: number; // Relevance score from search
}
interface IPlugin {
platform: string; // Platform identifier ('twitter', 'telegram')
// Set up connections, authenticate, start listening for messages
initialize(): Promise<void>;
// Handle an incoming message from the platform
processMessage(rawMessage: any): Promise<void>;
// Send BILL's response back to the platform
sendResponse(message: Message, response: string): Promise<void>;
// Convert platform-specific message format to our standard Message
transformMessage(rawMessage: any): Message;
// Convert our response to platform-specific format (hashtags, markdown, etc.)
formatResponse(response: string, context: any): string;
}
interface CharacterConfig {
name: string; // Character name
description: string; // What this character does
personality: {
traits: string[]; // Personality traits (helpful, knowledgeable, etc.)
tone: string; // Overall communication tone
expertise: string[]; // Areas of expertise
};
platforms: {
twitter: {
maxLength: number; // Character limit for tweets
useHashtags: boolean; // Whether to include hashtags
style: string; // Platform-specific style guide
};
telegram: {
useMarkdown: boolean; // Whether to use markdown formatting
style: string; // Platform-specific style guide
};
};
limitations: string[]; // What the character won't/can't do
}
interface LLMTask {
type: 'text' | 'code' | 'analysis' | 'creative'; // What kind of task
complexity: 'simple' | 'medium' | 'complex'; // How complex
platform: string; // Where it's going
requiresVision?: boolean; // Needs image analysis
}
interface LLMProvider {
name: string; // Provider identifier
complete(prompt: string): Promise<string>; // Generate text completion
getCost(tokens: number): number; // Calculate cost for tokens
getMaxTokens(): number; // Maximum context length
}
interface LLMRouter {
// Choose the best model for this specific task
selectProvider(task: LLMTask): Promise<LLMProvider>;
// Track usage for cost monitoring and optimization
trackUsage(provider: string, tokens: number, cost: number): Promise<void>;
}
interface ImageRequest {
prompt: string; // What to generate
style?: 'realistic' | 'artistic' | 'diagram' | 'meme'; // Visual style
size?: '1024x1024' | '1792x1024' | '1024x1792'; // Image dimensions
platform: string; // Where this will be used
}
interface ImageResult {
url: string; // Where the generated image is stored
prompt: string; // Original prompt
revisedPrompt?: string; // AI-improved prompt
cost?: number; // Cost of generation
}
interface ImageGenerator {
// Create new images from text descriptions
generateImage(request: ImageRequest): Promise<ImageResult>;
// Analyze existing images and answer questions about them
analyzeImage(imageUrl: string, question?: string): Promise<string>;
// Store generated images for future reference
storeImage(imageUrl: string, platform: string): Promise<string>;
}
class MemoryManager {
private platformMemories: Map<string, PlatformMemory>;
private sharedMemory: SharedMemory;
constructor(
private supabase: SupabaseClient,
private pinecone: PineconeClient,
private embeddingService: EmbeddingService
) {
// Initialize platform-specific memory systems
this.platformMemories = new Map([
['twitter', new TwitterMemory(supabase, pinecone, embeddingService)],
['telegram', new TelegramMemory(supabase, pinecone, embeddingService)]
]);
// Initialize shared knowledge system
this.sharedMemory = new SharedMemoryImpl(supabase, pinecone, embeddingService);
}
getPlatformMemory(platform: string): PlatformMemory {
const memory = this.platformMemories.get(platform);
if (!memory) {
throw new Error(`No memory implementation for platform: ${platform}`);
}
return memory;
}
getSharedMemory(): SharedMemory {
return this.sharedMemory;
}
  // Store an interaction in both platform-specific and shared memory;
  // optional metadata (LLM provider, generated images, etc.) is merged into the message
  async storeInteraction(
    message: Message,
    response: string,
    metadata: Record<string, any> = {}
  ): Promise<void> {
    const enriched: Message = { ...message, metadata: { ...message.metadata, ...metadata } };
    await Promise.all([
      this.getPlatformMemory(message.platform).storeInteraction(enriched, response),
      this.sharedMemory.storeInteraction(enriched, response)
    ]);
  }
}
class SharedMemoryImpl implements SharedMemory {
constructor(
private supabase: SupabaseClient,
private pinecone: PineconeClient,
private embeddingService: EmbeddingService
) {}
async storeInteraction(message: Message, response: string): Promise<void> {
// Only store high-value interactions in shared memory
const importance = this.calculateImportance(message, response);
if (importance >= 7) { // Threshold for shared knowledge
await this.addKnowledge(
`${message.content} -> ${response}`,
'interaction',
[message.platform, 'conversation']
);
}
}
async searchKnowledge(query: string, limit: number = 5): Promise<KnowledgeItem[]> {
// Search across all shared knowledge using semantic similarity
const queryEmbedding = await this.embeddingService.generateEmbedding(query);
const results = await this.pinecone.query({
vector: queryEmbedding,
topK: limit,
namespace: 'shared',
includeMetadata: true
});
    return (results.matches ?? []).map(match => ({
      id: match.id,
      content: (match.metadata?.content as string) ?? '',
      category: (match.metadata?.category as string) ?? '',
      tags: (match.metadata?.tags as string[]) ?? [],
      importance: (match.metadata?.importance as number) ?? 5,
      score: match.score
    }));
}
async addKnowledge(
content: string,
category: string,
tags: string[] = []
): Promise<void> {
// Store structured knowledge in Supabase
    const { data, error } = await this.supabase
      .from('shared_knowledge')
      .insert({
        content,
        category,
        tags,
        importance: 5
      })
      .select()
      .single();
    if (error) throw error;
    if (data) {
// Create searchable embedding
const embedding = await this.embeddingService.generateEmbedding(content);
// Store in vector database for semantic search
await this.pinecone.upsert({
vectors: [{
id: `knowledge_${data.id}`,
values: embedding,
metadata: {
type: 'knowledge',
content,
category,
tags,
importance: data.importance
}
}]
});
}
}
private calculateImportance(message: Message, response: string): number {
// Intelligent importance scoring for knowledge extraction
let importance = 5; // Base importance
// Longer, more detailed responses are often more valuable
if (response.length > 200) importance += 1;
// Questions often lead to valuable knowledge
if (message.content.includes('?')) importance += 1;
// Technical discussions are high-value
const techTerms = ['bitcoin', 'ethereum', 'blockchain', 'smart contract', 'defi'];
const hasTechTerms = techTerms.some(term =>
message.content.toLowerCase().includes(term) ||
response.toLowerCase().includes(term)
);
if (hasTechTerms) importance += 2;
// Code-related discussions are valuable
if (response.includes('```') || message.content.toLowerCase().includes('code')) {
importance += 2;
}
return Math.min(importance, 10); // Cap at maximum importance
}
}
class AgentRuntime {
private character: Character;
private memoryManager: MemoryManager;
private llmRouter: LLMRouter;
private imageGenerator: ImageGenerator;
private contextBuilder: ContextBuilder;
private plugins: Map<string, IPlugin>;
constructor(
character: CharacterConfig,
memoryManager: MemoryManager,
llmRouter: LLMRouter,
imageGenerator: ImageGenerator
) {
this.character = new Character(character);
this.memoryManager = memoryManager;
this.llmRouter = llmRouter;
this.imageGenerator = imageGenerator;
this.contextBuilder = new ContextBuilder(this.character);
this.plugins = new Map();
}
async processMessage(message: Message): Promise<AgentResponse> {
try {
// Step 1: Analyze any images in the message
let imageAnalysis = '';
if (message.metadata.imageUrls?.length) {
imageAnalysis = await this.analyzeImages(message.metadata.imageUrls);
}
// Step 2: Gather context from both memory systems
const [platformContext, sharedContext] = await Promise.all([
this.memoryManager.getPlatformMemory(message.platform).getRelevantContext(message),
this.memoryManager.getSharedMemory().searchKnowledge(message.content)
]);
// Step 3: Analyze the task and select the best LLM
const task = this.analyzeTask(message, imageAnalysis);
const provider = await this.llmRouter.selectProvider(task);
// Step 4: Build comprehensive context for the LLM
const context = this.contextBuilder.buildContext(
message,
platformContext,
sharedContext,
imageAnalysis
);
// Step 5: Generate the response using the selected LLM
const response = await provider.complete(context);
// Step 6: Check if we should generate an image to accompany the response
let generatedImage: ImageResult | null = null;
if (this.shouldGenerateImage(response, message.platform)) {
const imagePrompt = this.extractImagePrompt(response);
generatedImage = await this.imageGenerator.generateImage({
prompt: imagePrompt,
platform: message.platform
});
}
// Step 7: Store the interaction for future learning
await this.memoryManager.storeInteraction(message, response, {
llmProvider: provider.name,
imageAnalysis,
generatedImage: generatedImage?.url
});
return {
content: response,
shouldReply: true,
imageUrl: generatedImage?.url,
metadata: {
llmProvider: provider.name,
hasImage: !!generatedImage,
taskType: task.type
}
};
} catch (error) {
console.error('Error processing message:', error);
return this.handleError(error, message);
}
}
private analyzeTask(message: Message, imageAnalysis: string): LLMTask {
// Intelligent task classification for optimal LLM selection
const content = `${message.content} ${imageAnalysis}`.toLowerCase();
// Image-related tasks need vision capabilities
if (imageAnalysis) {
return {
type: 'analysis',
complexity: 'medium',
platform: message.platform,
requiresVision: true
};
}
// Code-related tasks benefit from specialized models
if (content.includes('code') || content.includes('programming')) {
return { type: 'code', complexity: 'medium', platform: message.platform };
}
// Creative tasks need models optimized for creativity
if (content.includes('create') || content.includes('generate') || content.includes('make')) {
return { type: 'creative', complexity: 'medium', platform: message.platform };
}
// Default to simple text generation
return { type: 'text', complexity: 'simple', platform: message.platform };
}
private async analyzeImages(imageUrls: string[]): Promise<string> {
// Process multiple images and combine analyses
const analyses = await Promise.all(
imageUrls.map(url => this.imageGenerator.analyzeImage(url))
);
return analyses.join('\n');
}
private shouldGenerateImage(response: string, platform: string): boolean {
// Detect when the response suggests creating an image
const imageKeywords = ['create image', 'generate image', 'make picture', 'draw', 'visualize'];
return imageKeywords.some(keyword => response.toLowerCase().includes(keyword));
}
private extractImagePrompt(response: string): string {
// Extract image generation instructions from the response
const match = response.match(/(?:create|generate|make).*?image.*?(?:of|showing|with)?\s*([^.!?]+)/i);
return match ? match[1].trim() : 'A helpful illustration';
}
registerPlugin(plugin: IPlugin): void {
this.plugins.set(plugin.platform, plugin);
}
async initialize(): Promise<void> {
// Initialize all registered plugins
for (const plugin of this.plugins.values()) {
await plugin.initialize();
}
}
}
class ContextBuilder {
constructor(private character: Character) {}
buildContext(
message: Message,
platformContext: PlatformContext,
sharedContext: KnowledgeItem[],
imageAnalysis: string
): string {
// Assemble context in order of importance
const sections = [
this.character.getSystemPrompt(message.platform), // Who BILL is
this.formatPlatformContext(platformContext), // Recent conversation
this.formatSharedContext(sharedContext), // Relevant knowledge
this.formatCurrentMessage(message), // Current message
this.formatImageAnalysis(imageAnalysis) // Image context
];
return sections.filter(Boolean).join('\n\n');
}
private formatPlatformContext(context: PlatformContext): string {
// Format conversation history and relevant memories
if (!context.conversationHistory.length && !context.relevantMemories.length) {
return '';
}
let formatted = '## Platform Context\n';
// Recent conversation provides immediate context
if (context.conversationHistory.length > 0) {
formatted += '### Recent Conversation:\n';
formatted += context.conversationHistory
.map(msg => `${msg.authorId}: ${msg.content}`)
.join('\n');
formatted += '\n';
}
// Relevant memories provide broader context
if (context.relevantMemories.length > 0) {
formatted += '### Relevant Past Interactions:\n';
formatted += context.relevantMemories
.map(memory => `- ${memory.content} (relevance: ${memory.score?.toFixed(2)})`)
.join('\n');
}
return formatted;
}
private formatSharedContext(knowledge: KnowledgeItem[]): string {
// Format cross-platform knowledge
if (!knowledge.length) return '';
return `## Shared Knowledge\n${knowledge
.map(item => `- ${item.content} (${item.category})`)
.join('\n')}`;
}
private formatCurrentMessage(message: Message): string {
return `## Current Message\nUser: ${message.content}`;
}
private formatImageAnalysis(imageAnalysis: string): string {
// Include image analysis if available
if (!imageAnalysis) return '';
return `## Image Analysis\n${imageAnalysis}`;
}
}
-- gen_random_uuid() is built into PostgreSQL 13+; older versions need pgcrypto
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Twitter conversations - platform-specific storage
CREATE TABLE twitter_conversations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
message_id TEXT UNIQUE NOT NULL, -- Twitter message ID
author_id TEXT NOT NULL, -- Twitter user ID
thread_id TEXT, -- Twitter thread/conversation ID
content TEXT NOT NULL, -- Message content
response TEXT, -- BILL's response
llm_provider TEXT, -- Which LLM generated the response
image_urls TEXT[] DEFAULT '{}', -- URLs of images in message
generated_images TEXT[] DEFAULT '{}', -- URLs of images BILL generated
timestamp TIMESTAMPTZ NOT NULL, -- When message was sent
metadata JSONB DEFAULT '{}', -- Additional platform data
created_at TIMESTAMPTZ DEFAULT NOW() -- When stored in database
);
-- Telegram conversations - platform-specific storage
CREATE TABLE telegram_conversations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
message_id TEXT UNIQUE NOT NULL, -- Telegram message ID
chat_id TEXT NOT NULL, -- Telegram chat ID
user_id TEXT NOT NULL, -- Telegram user ID
content TEXT NOT NULL, -- Message content
response TEXT, -- BILL's response
llm_provider TEXT, -- Which LLM generated the response
image_urls TEXT[] DEFAULT '{}', -- URLs of images in message
generated_images TEXT[] DEFAULT '{}', -- URLs of images BILL generated
timestamp TIMESTAMPTZ NOT NULL, -- When message was sent
metadata JSONB DEFAULT '{}', -- Additional platform data
created_at TIMESTAMPTZ DEFAULT NOW() -- When stored in database
);
-- Shared knowledge base - cross-platform storage
CREATE TABLE shared_knowledge (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
content TEXT NOT NULL, -- Knowledge content
category TEXT NOT NULL, -- Knowledge category
tags TEXT[] DEFAULT '{}', -- Searchable tags
importance INTEGER DEFAULT 5, -- Importance score (1-10)
source_platform TEXT, -- Where this knowledge came from
created_at TIMESTAMPTZ DEFAULT NOW(), -- When added
updated_at TIMESTAMPTZ DEFAULT NOW() -- Last updated
);
-- User profiles - cross-platform user tracking
CREATE TABLE user_profiles (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
platform TEXT NOT NULL, -- Platform name
platform_user_id TEXT NOT NULL, -- Platform-specific user ID
username TEXT, -- Display username
interaction_count INTEGER DEFAULT 0, -- Number of interactions
first_seen TIMESTAMPTZ DEFAULT NOW(), -- First interaction
last_seen TIMESTAMPTZ DEFAULT NOW(), -- Most recent interaction
preferences JSONB DEFAULT '{}', -- User preferences and settings
UNIQUE(platform, platform_user_id) -- One profile per platform per user
);
-- Image generation tracking - cost and usage monitoring
CREATE TABLE image_generations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
platform TEXT NOT NULL, -- Where image was generated
message_id TEXT NOT NULL, -- Associated message
prompt TEXT NOT NULL, -- Generation prompt
revised_prompt TEXT, -- AI-improved prompt
image_url TEXT NOT NULL, -- Generated image URL
style TEXT, -- Image style
size TEXT, -- Image dimensions
cost_usd DECIMAL(10,6), -- Generation cost
created_at TIMESTAMPTZ DEFAULT NOW() -- When generated
);
-- LLM usage tracking - cost and performance monitoring
CREATE TABLE llm_usage (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
provider TEXT NOT NULL, -- LLM provider name
model TEXT NOT NULL, -- Specific model used
platform TEXT NOT NULL, -- Where response was generated
task_type TEXT NOT NULL, -- Type of task
tokens_used INTEGER, -- Tokens consumed
cost_usd DECIMAL(10,6), -- Cost in USD
response_time_ms INTEGER, -- Response time
created_at TIMESTAMPTZ DEFAULT NOW() -- When used
);
-- Performance indexes for fast queries
CREATE INDEX idx_twitter_conversations_author_id ON twitter_conversations(author_id);
CREATE INDEX idx_twitter_conversations_thread_id ON twitter_conversations(thread_id);
CREATE INDEX idx_twitter_conversations_timestamp ON twitter_conversations(timestamp DESC);
CREATE INDEX idx_telegram_conversations_chat_id ON telegram_conversations(chat_id);
CREATE INDEX idx_telegram_conversations_user_id ON telegram_conversations(user_id);
CREATE INDEX idx_telegram_conversations_timestamp ON telegram_conversations(timestamp DESC);
CREATE INDEX idx_shared_knowledge_category ON shared_knowledge(category);
CREATE INDEX idx_shared_knowledge_tags ON shared_knowledge USING GIN(tags);
CREATE INDEX idx_shared_knowledge_created_at ON shared_knowledge(created_at DESC);
-- (platform, platform_user_id) is already indexed by the UNIQUE constraint on user_profiles
CREATE INDEX idx_image_generations_platform ON image_generations(platform);
CREATE INDEX idx_llm_usage_provider_model ON llm_usage(provider, model);
# Twitter OAuth 2.0 PKCE Configuration
TWITTER_CLIENT_ID=your_client_id_here
TWITTER_CLIENT_SECRET=your_client_secret_here
TWITTER_REDIRECT_URI=http://localhost:3000/callback
TWITTER_USERNAME=shillbillai
# Authentication Security
AUTH_SECRET_KEY=your_secret_key_here # Generate with: openssl rand -hex 32
AUTH_ALLOWED_IPS=127.0.0.1,::1 # Optional IP whitelist
# Database Configuration
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_SECRET_KEY=your_secret_key_here
# Vector Database
PINECONE_API_KEY=your-pinecone-key
PINECONE_ENVIRONMENT=your-environment
PINECONE_INDEX=bill-agent
# LLM Providers
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
# Optional: Rate Limiting and Timing
POST_INTERVAL_MIN=240 # Post every 4 hours
REPLY_CHECK_INTERVAL_MIN=2 # Check replies every 2 minutes
POST_LIMIT=20 # Max posts per day
REPLY_LIMIT=100 # Max replies per day
# Install dependencies using Bun
bun install
# Copy environment template
cp .env-example .env
# Edit .env with your actual credentials
# Set up database schema
bun run db:migrate
# Authenticate with Twitter
bun run auth
# Run in development mode
bun run dev
# Build for production
bun run build
# Deploy to production
bun run deploy