#4230 AI Agents and Discord-like Message Board

#4230 Provide a Python application where AI agents write their prompt and results to a Discord-like message board with a specified subject line. Other AI agents should be able to scan the message board for messages with a specified subject line. Provide a viewer with auto-refresh and a filter on subject line.

This is a great architectural choice. What you are describing is often called a Blackboard Pattern in multi-agent systems. It allows agents to operate asynchronously, decoupling them so they don't need to know about each other directly—they just subscribe to or poll for the subjects they care about.
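In miniature, the Blackboard pattern looks like this. This is an illustrative in-memory toy (the `Blackboard` class and its `post`/`poll` methods are invented for this sketch); the rest of this post builds the persistent, multi-process version:

```python
from collections import defaultdict

class Blackboard:
    """Toy in-memory blackboard: agents post under a subject, others poll it."""
    def __init__(self):
        self.board = defaultdict(list)

    def post(self, subject, message):
        # Any agent can write; no agent needs to know who will read it.
        self.board[subject].append(message)

    def poll(self, subject):
        # Any agent can read; unknown subjects simply return an empty list.
        return list(self.board[subject])

bb = Blackboard()
bb.post("Data Analysis", "Agent_A: stats ready")
print(bb.poll("Data Analysis"))    # ['Agent_A: stats ready']
print(bb.poll("Market Research"))  # []
```

The persistent version below swaps the dictionary for SQLite so that separate processes (agents and the viewer) can share the same board.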

To build this, we'll use:

  1. SQLite: A lightweight, built-in database to act as the persistent "message board."

  2. Flask: A simple Python web framework to serve the viewer and an API.

  3. Vanilla HTML/JS: For the frontend viewer with an auto-refresh loop and subject filtering.

Here is the complete setup divided into manageable files.

1. Prerequisites

First, make sure you have Flask installed.

pip install flask

2. The Database Manager (board_db.py)

This file handles the SQLite database connections safely, allowing multiple agents and the viewer to read and write concurrently.

import sqlite3
import datetime

DB_NAME = "blackboard.db"

def init_db():
    """Initializes the database and creates the messages table."""
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS messages (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp TEXT,
                agent_name TEXT,
                subject TEXT,
                prompt TEXT,
                result TEXT
            )
        """)
        conn.commit()

def post_message(agent_name, subject, prompt, result):
    """Writes a new message to the board."""
    with sqlite3.connect(DB_NAME, timeout=10) as conn:
        cursor = conn.cursor()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        cursor.execute(
            "INSERT INTO messages (timestamp, agent_name, subject, prompt, result) VALUES (?, ?, ?, ?, ?)",
            (timestamp, agent_name, subject, prompt, result)
        )
        conn.commit()

def get_messages(subject_filter=None):
    """Scans the board, optionally filtering by subject."""
    with sqlite3.connect(DB_NAME, timeout=10) as conn:
        cursor = conn.cursor()
        if subject_filter:
            cursor.execute(
                "SELECT timestamp, agent_name, subject, prompt, result FROM messages WHERE subject = ? ORDER BY id DESC", 
                (subject_filter,)
            )
        else:
            cursor.execute("SELECT timestamp, agent_name, subject, prompt, result FROM messages ORDER BY id DESC")
        
        rows = cursor.fetchall()
        
        # Convert to a list of dictionaries for JSON serialization
        return [
            {
                "timestamp": row[0],
                "agent_name": row[1],
                "subject": row[2],
                "prompt": row[3],
                "result": row[4]
            }
            for row in rows
        ]
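Before wiring up the agents, the write/scan cycle can be exercised on its own. This is a minimal standalone sketch of the same pattern, not the module itself: it uses an in-memory database, a trimmed column set, and hypothetical `post`/`scan` helpers in place of `post_message`/`get_messages`:

```python
import sqlite3

# In-memory stand-in for blackboard.db (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    agent_name TEXT, subject TEXT, prompt TEXT, result TEXT)""")

def post(agent, subject, prompt, result):
    conn.execute(
        "INSERT INTO messages (agent_name, subject, prompt, result) VALUES (?, ?, ?, ?)",
        (agent, subject, prompt, result))

def scan(subject):
    # Newest first, same ordering as get_messages()
    return conn.execute(
        "SELECT agent_name, result FROM messages WHERE subject = ? ORDER BY id DESC",
        (subject,)).fetchall()

post("Agent_A", "Data Analysis", "Compile stats", "Stats compiled.")
post("Agent_B", "General Chat", "Say hi", "Hello!")

matches = scan("Data Analysis")
print(matches)  # [('Agent_A', 'Stats compiled.')]
```

The subject line acts as the "channel": a scanning agent only ever sees the rows whose subject it asked for.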

3. The AI Agents (agents.py)

This script simulates two AI agents interacting with the board. You can easily replace the mocked result string inside generate_task with a real API call to OpenAI, Gemini, or Claude.

import time
import random
import threading
from board_db import post_message, get_messages, init_db

class AIAgent:
    def __init__(self, name):
        self.name = name

    def generate_task(self, subject, prompt):
        """Simulates an LLM generating a response to a prompt."""
        print(f"[{self.name}] Working on subject '{subject}'...")
        time.sleep(random.uniform(1, 3)) # Simulate API latency
        
        # Replace this string with a real LLM API call
        result = f"I have processed '{prompt}'. Here is my expert analysis!" 
        
        post_message(self.name, subject, prompt, result)
        print(f"[{self.name}] Posted results to board under '{subject}'.")

    def scan_board(self, target_subject):
        """Reads the board looking for a specific subject line."""
        messages = get_messages(subject_filter=target_subject)
        print(f"[{self.name}] Scanned board for '{target_subject}'. Found {len(messages)} messages.")
        return messages

def worker_agent_1():
    agent = AIAgent("Agent_Researcher")
    subjects = ["Data Analysis", "Market Research", "General Chat"]
    while True:
        subject = random.choice(subjects)
        agent.generate_task(subject, f"Compile latest stats on {subject}")
        time.sleep(random.randint(5, 10))

def worker_agent_2():
    agent = AIAgent("Agent_Reviewer")
    while True:
        # Agent 2 scans for Data Analysis messages to review
        messages = agent.scan_board("Data Analysis")
        if messages:
            latest = messages[0]
            agent.generate_task("Review Complete", f"Reviewing previous output: {latest['result'][:20]}...")
        time.sleep(random.randint(8, 12))

if __name__ == "__main__":
    init_db()
    print("Starting AI Agents. Press Ctrl+C to stop.")
    
    # Run agents in separate threads
    t1 = threading.Thread(target=worker_agent_1, daemon=True)
    t2 = threading.Thread(target=worker_agent_2, daemon=True)
    
    t1.start()
    t2.start()
    
    # Keep main thread alive
    while True:
        time.sleep(1)
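One limitation of scan_board as written is that it re-reads every matching message on each pass, so Agent_Reviewer keeps picking up the same latest post. A common fix is to remember the highest id already processed and fetch only newer rows. A hedged sketch of that idea, using an in-memory database and a hypothetical IncrementalScanner helper (not part of the files above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY AUTOINCREMENT, subject TEXT, result TEXT)")

class IncrementalScanner:
    """Remembers the highest message id it has already processed."""
    def __init__(self, subject):
        self.subject = subject
        self.last_seen_id = 0

    def new_messages(self):
        # Only rows newer than the last id this scanner has seen.
        rows = conn.execute(
            "SELECT id, result FROM messages WHERE subject = ? AND id > ? ORDER BY id",
            (self.subject, self.last_seen_id)).fetchall()
        if rows:
            self.last_seen_id = rows[-1][0]
        return [r[1] for r in rows]

conn.execute("INSERT INTO messages (subject, result) VALUES ('Data Analysis', 'first post')")
scanner = IncrementalScanner("Data Analysis")
batch1 = scanner.new_messages()
conn.execute("INSERT INTO messages (subject, result) VALUES ('Data Analysis', 'second post')")
batch2 = scanner.new_messages()
print(batch1, batch2)  # ['first post'] ['second post']
```

Applied to board_db.py, the same idea would mean passing a `min_id` argument into get_messages; each agent then carries its own cursor through the board instead of rescanning it.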

4. The Viewer Web App (app.py)

This is the Flask server that hosts the web page and provides an API endpoint for the JavaScript to pull messages from.

from flask import Flask, render_template, request, jsonify
from board_db import init_db, get_messages

app = Flask(__name__)

# Ensure DB exists before starting
init_db()

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/api/messages')
def api_messages():
    """API endpoint for the frontend to fetch messages."""
    subject_filter = request.args.get('subject', '').strip()
    
    if not subject_filter:
        subject_filter = None
        
    messages = get_messages(subject_filter)
    return jsonify(messages)

if __name__ == '__main__':
    app.run(debug=True, port=5000)

5. The Frontend Interface (templates/index.html)

Create a folder named templates in the same directory as your Python files, and save this file inside it. The page uses JavaScript to poll the Flask API every 3 seconds.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>AI Agent Blackboard</title>
    <style>
        body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; background-color: #f4f5f7; margin: 0; padding: 20px; color: #333; }
        .container { max-width: 900px; margin: auto; }
        .header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 20px; }
        .filter-box { padding: 10px; font-size: 16px; border: 1px solid #ccc; border-radius: 5px; width: 300px; }
        .message { background: white; padding: 20px; margin-bottom: 15px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); border-left: 5px solid #4a90e2; }
        .message-header { display: flex; justify-content: space-between; font-size: 0.9em; color: #666; margin-bottom: 10px; }
        .agent-name { font-weight: bold; color: #2c3e50; }
        .subject { font-size: 1.2em; font-weight: bold; margin-bottom: 10px; color: #4a90e2; }
        .content-box { background: #f8f9fa; padding: 10px; border-radius: 5px; margin-bottom: 10px; font-family: monospace; white-space: pre-wrap; }
        .label { font-size: 0.8em; font-weight: bold; text-transform: uppercase; color: #888; margin-bottom: 5px; display: block; }
    </style>
</head>
<body>

<div class="container">
    <div class="header">
        <h1>Discord-Like AI Blackboard</h1>
        <input type="text" id="subjectFilter" class="filter-box" placeholder="Filter by Subject Line...">
    </div>

    <div id="board">
        <p>Loading messages...</p>
    </div>
</div>

<script>
    async function fetchMessages() {
        const subjectFilter = document.getElementById('subjectFilter').value;
        const url = `/api/messages?subject=${encodeURIComponent(subjectFilter)}`;
        
        try {
            const response = await fetch(url);
            const messages = await response.json();
            const board = document.getElementById('board');
            
            if (messages.length === 0) {
                board.innerHTML = '<p>No messages found.</p>';
                return;
            }

            board.innerHTML = ''; // Clear current messages

            // Escape agent-supplied text before injecting it into innerHTML,
            // so prompts or results containing < > & can't break the page.
            const esc = (s) => String(s)
                .replace(/&/g, '&amp;')
                .replace(/</g, '&lt;')
                .replace(/>/g, '&gt;');

            messages.forEach(msg => {
                const msgDiv = document.createElement('div');
                msgDiv.className = 'message';
                msgDiv.innerHTML = `
                    <div class="message-header">
                        <span class="agent-name">🤖 ${esc(msg.agent_name)}</span>
                        <span>${esc(msg.timestamp)}</span>
                    </div>
                    <div class="subject">Subject: ${esc(msg.subject)}</div>
                    
                    <span class="label">Prompt</span>
                    <div class="content-box">${esc(msg.prompt)}</div>
                    
                    <span class="label">Result</span>
                    <div class="content-box">${esc(msg.result)}</div>
                `;
                board.appendChild(msgDiv);
            });
        } catch (error) {
            console.error("Failed to fetch messages", error);
        }
    }

    // Auto-refresh every 3 seconds
    setInterval(fetchMessages, 3000);
    
    // Also trigger immediately when the user types in the filter box
    document.getElementById('subjectFilter').addEventListener('input', fetchMessages);
    
    // Initial fetch
    fetchMessages();
</script>

</body>
</html>

How to Run This

  1. Open a terminal and run your web viewer:

    python app.py
    
  2. Open your browser to http://127.0.0.1:5000. You will see an empty board.

  3. Open a second terminal window and start the agents:

    python agents.py
    
  4. Watch the browser! The messages will start populating automatically every few seconds. Type a subject like Data Analysis into the search bar, and it will filter in real-time.


Would you like me to walk you through replacing the mocked generate_task function with real calls to the OpenAI, Anthropic, or Gemini libraries?


How can I queue up a set of AI agents to run in sequence, each picking up where the other left off?

This is the logical next step! While the Blackboard pattern we built earlier is great for asynchronous, event-driven tasks (agents working independently and listening for updates), what you’re describing now is a Sequential Pipeline or Chain.

In a sequential pipeline, the output of Agent A becomes the direct input for Agent B, and so on. This is perfect for workflows like: Researcher Agent -> Writer Agent -> Editor Agent.

Here is a clean, vanilla Python implementation of how you can queue agents sequentially, passing a shared "context" or state between them.

The Sequential Pipeline Architecture (pipeline.py)

This script defines a basic agent class and a pipeline manager that chains them together.

import time
import random

class SequentialAgent:
    def __init__(self, name, role):
        self.name = name
        self.role = role

    def execute_task(self, context, instructions):
        """
        Simulates the agent taking the previous context, applying its instructions, 
        and generating a new output.
        """
        print(f"\n[{self.name} - {self.role}] is starting its task...")
        time.sleep(random.uniform(1, 2)) # Simulate processing time
        
        # In a real app, this is where you'd call the LLM API.
        # e.g., result = llm.generate(f"Context: {context}\nInstructions: {instructions}")
        
        simulated_output = (
            f"--- {self.name}'s Output ---\n"
            f"I reviewed the previous data: '{context[:40]}...'\n"
            f"I have applied my role ({self.role}) to expand on it."
        )
        
        print(f"[{self.name}] Task complete.")
        return simulated_output


class AgentPipeline:
    def __init__(self):
        self.queue = []

    def add_step(self, agent, instructions):
        """Adds an agent and their specific instructions to the queue."""
        self.queue.append({
            "agent": agent,
            "instructions": instructions
        })

    def run(self, initial_context):
        """Runs the agents in sequence, passing the output of one to the next."""
        print("=== STARTING SEQUENTIAL PIPELINE ===")
        print(f"Initial Context: {initial_context}")
        
        current_context = initial_context
        
        for step_num, step in enumerate(self.queue, 1):
            agent = step["agent"]
            instructions = step["instructions"]
            
            print(f"\n--- Step {step_num} ---")
            # The magic happens here: the current_context is passed into the agent,
            # and the agent's return value becomes the NEW current_context.
            current_context = agent.execute_task(current_context, instructions)
            
        print("\n=== PIPELINE COMPLETE ===")
        return current_context

# --- Execution Example ---
if __name__ == "__main__":
    # 1. Instantiate the agents
    researcher = SequentialAgent("Alice", "Data Gatherer")
    writer = SequentialAgent("Bob", "Content Draftsman")
    editor = SequentialAgent("Charlie", "Copy Editor")

    # 2. Set up the pipeline
    pipeline = AgentPipeline()
    
    # Queue them up in the exact order you want them to run
    pipeline.add_step(
        agent=researcher, 
        instructions="Summarize the core facts from the provided context."
    )
    pipeline.add_step(
        agent=writer, 
        instructions="Take the researcher's summary and turn it into a 2-paragraph blog post."
    )
    pipeline.add_step(
        agent=editor, 
        instructions="Check the blog post for grammar and tone. Output the final polished version."
    )

    # 3. Kick off the process
    starting_data = "Raw user prompt: I want to know about the history of artificial intelligence."
    final_result = pipeline.run(starting_data)
    
    print("\nFINAL PIPELINE RESULT:")
    print(final_result)

Why this approach works well:

  • Predictability: Unlike the Blackboard pattern, you know exactly what order things will execute in.

  • Decoupled Logic: The AgentPipeline doesn't care what the agents do; it just knows it needs to hand the baton from one to the next.

  • Shared State: The current_context acts as a snowball, rolling down the hill and gathering the intelligence of each agent as it goes.

Industry Frameworks

While building it yourself in vanilla Python is great for learning and lightweight applications, if you want to scale this up for production, there are frameworks designed exactly for this:

  • CrewAI: The default setting for CrewAI is a "Sequential Process." You define your agents, define their tasks, and CrewAI automatically passes the context down the line.

  • LangGraph: A powerful library built on top of LangChain that lets you map out agent workflows as state machines (graphs), which is fantastic for sequences that might need to loop back on themselves (e.g., if the Editor agent rejects the Writer's draft, send it back to the Writer).
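That loop-back behavior (an editor rejecting a draft and sending it back to the writer) can also be sketched in vanilla Python without a framework. The writer/editor functions below are mocks invented for this sketch; in a real app each would call an LLM:

```python
def run_with_review(writer, editor, context, max_revisions=3):
    """Run writer, then loop: if the editor rejects, feed its feedback back in."""
    draft = writer(context)
    for attempt in range(max_revisions):
        ok, feedback = editor(draft)
        if ok:
            return draft
        # Rejected: the writer revises using the editor's feedback.
        draft = writer(f"{context}\nEditor feedback: {feedback}")
    return draft  # give up after max_revisions and return the last draft

# Mock agents: the editor accepts only drafts that mention sources.
def writer(ctx):
    return "Draft with sources." if "feedback" in ctx else "Bare draft."

def editor(draft):
    if "sources" in draft:
        return True, ""
    return False, "Please cite sources."

final = run_with_review(writer, editor, "Write about AI history.")
print(final)  # Draft with sources.
```

The max_revisions cap matters: without it, a stubborn writer/editor pair would loop forever, which is exactly the kind of cycle-control that LangGraph's state-machine model handles for you.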

Would you like to see how to wire this custom pipeline up to real LLMs, or would you prefer to see an example of how this is done using a framework like CrewAI?


From <https://gemini.google.com/app/cf8212e2a310ee84>  Google Gemini 3.1 (Pro)
