The idea: MCP tools return HTML/CSS/JS directly. The client renders it in a sandboxed iframe. That's it.
Your AI agent calls a tool, gets back a full interactive UI. Dashboard, form, chart - whatever you need.
How it works:
- Tool returns a text/html+mcp resource
- Client renders it in an iframe with a CSP
- UI talks back via JSON-RPC 2.0 over postMessage
- Fully sandboxed, secure by default
Built a sample implementation with vanilla Web Components. This is where MCP is heading.
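To make the shape concrete, here's a minimal server-side sketch using the official MCP Python SDK's FastMCP (pip install mcp). The dashboard tool, the ui/refresh method name, and returning the UI as a plain string are all illustrative assumptions; a real implementation would tag the payload as a text/html+mcp resource as described above.

```
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ui-demo")

@mcp.tool()
def dashboard() -> str:
    """Return a self-contained UI for the client to render in a sandboxed iframe."""
    return """<!doctype html>
<html>
  <body>
    <button id="go">Refresh</button>
    <script>
      // The embedded UI talks back to the host over JSON-RPC 2.0 via postMessage;
      // "ui/refresh" is a placeholder method name, not part of any spec.
      document.getElementById("go").onclick = () =>
        parent.postMessage({jsonrpc: "2.0", id: 1, method: "ui/refresh"}, "*");
    </script>
  </body>
</html>"""

if __name__ == "__main__":
    mcp.run()
```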
AgentU uses two operators for workflows: >> chains steps, & runs steps in parallel. That's it.
```
from agentu import Agent, serve
import asyncio

def search(topic: str) -> str:
    return f"Results for {topic}"

# Agent auto-detects the available model and connects to an authenticated MCP server
agent = Agent("researcher").with_tools([search]).with_mcp([
    {"url": "http://localhost:3000", "headers": {"Authorization": "Bearer token123"}}
])

# Memory with an importance weight
agent.remember("User wants technical depth", importance=0.9)

# Parallel then sequential: & runs parallel, >> chains
workflow = (
    agent("AI") & agent("ML") & agent("LLMs")
    >> agent(lambda prev: f"Compare: {prev}")
)

# Execute the workflow
result = asyncio.run(workflow.run())

# REST API with auto-generated Swagger docs
serve(agent, port=8000)
```

Features:
- Auto-detects Ollama models (also works with OpenAI, vLLM, LM Studio)
- Memory with importance weights, SQLite backend
- MCP integration with auth support
- One-line REST API with Swagger docs
- Python functions are tools, no decorators needed
Using it for automated code review, parallel data enrichment, and research synthesis.
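The data enrichment case, for instance, is just fan-out with & and a merge step with >>. A sketch against the API shown above; the lookup_company tool and the prompts are placeholders:

```
import asyncio
from agentu import Agent

def lookup_company(name: str) -> str:
    # Placeholder for a real data source (CRM, API, database)
    return f"Record for {name}"

enricher = Agent("enricher").with_tools([lookup_company])

# Fan out across records in parallel, then merge in a single sequential step
workflow = (
    enricher("Enrich: Acme") & enricher("Enrich: Globex") & enricher("Enrich: Initech")
    >> enricher(lambda prev: f"Merge these records into one table: {prev}")
)

rows = asyncio.run(workflow.run())
print(rows)
```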
pip install agentu
GitHub: https://github.com/hemanth/agentu
Open to feedback.
Key Features
Multi-server support — connect to several MCP servers at once (see the sketch after this list)
OAuth 2.0 & Bearer Token auth (with PKCE)
Persistent sessions — servers + credentials saved locally
Full MCP features — tools, resources, prompts
LLM support — bring your own inference backend
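For a feel of what multi-server support means at the protocol level, here's a sketch using the official MCP Python SDK directly (pip install mcp), not this project's API; the server commands are placeholders:

```
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_tools(params: StdioServerParameters) -> list[str]:
    # One stdio connection and one session per server
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]

async def main() -> None:
    servers = [
        StdioServerParameters(command="my-mcp-server-a", args=[]),  # placeholder
        StdioServerParameters(command="my-mcp-server-b", args=[]),  # placeholder
    ]
    # Open sessions to all servers concurrently
    for names in await asyncio.gather(*(list_tools(s) for s in servers)):
        print(names)

if __name__ == "__main__":
    asyncio.run(main())
```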
The goal is to make exploring and working with the Model Context Protocol much more approachable.
Happy to answer questions, take feedback, or hear feature requests!