---
title: Meridian - Historical Image Generator
emoji: 🌍
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 6.0.0
app_file: app.py
pinned: false
tags:
  - building-mcp-track-consumer
  - building-mcp-track-provider
  - mcp-in-action-track
  - mcp
  - historical
  - image-generation
  - flux
  - gemini
  - geospatial
  - coordinates
  - agentic-chatbots
  - rag
  - context-engineering
  - gradio-6
license: mit
---

# 🌍 Meridian: Historical Image Generator

**Turn coordinates and dates into historical images. See what happened at any location, any time.**

## What It Does

Meridian takes geographic coordinates and a date, finds historical events that happened there, and generates images of those moments.

It solves a real problem: image generators don't understand coordinates. Ask DALL-E to show you "52.5163°N, 13.3777°E on November 9, 1989" and you'll get random landscapes. But that's exactly where the Berlin Wall fell.

## The Problem

Image models break coordinates into separate tokens ("52", ".", "5163") and treat them like unrelated symbols. They don't know that 52.5°N and 52.6°N are close together; to the model, they're as different as "apple" and "orange". Training data rarely includes GPS coordinates, so models never learn the connection between coordinates and what you'd actually see there.

DALL-E 3 works around this by using GPT-4 as a middleman: GPT recognizes the coordinates, looks them up, rewrites them to "Berlin Wall", then DALL-E generates the image. But that approach is fragile and limited to what GPT knows.

## How Meridian Works

Instead of trying to teach image models about coordinates (which would need billions of coordinate-tagged images), Meridian builds a bridge: it maintains a database of historical events with their exact coordinates and dates, then converts those into descriptions that image models understand.

1. You enter coordinates and a date
2. Meridian searches its curated database and Wikidata for events at that location and time
3. It builds a focused prompt emphasizing the event name, location, year, and time of day
4. FLUX generates a historically accurate image

The system prioritizes events that match your exact year, so "Rome, 44 BCE" gives you Julius Caesar's assassination, not a medieval event at the same spot.

## Features

**Multiple Styles**: Documentary, Cinematic, Photojournalistic, Dramatic, Cartoon, Minecraft, Retro, and Glitch styles let you visualize history in different ways.

**Time-Aware**: The hour you choose affects lighting (dawn vs. midday vs. night). Month and latitude determine season and atmosphere.

**Period-Accurate**: Negative prompts automatically prevent anachronisms (see the prompt-shaping sketch below). Ancient scenes won't have modern tech, and 19th-century scenes won't have smartphones.

**Dual Data Sources**: Curated database for quality, Wikidata for breadth. Events are scored by how close they are to your query, geographically and temporally, and by how reliable the source is (see the scoring sketch below).

**MCP Integration**: Works as an MCP server, so Claude and other AI assistants can call it programmatically. Ask Claude to "show me what happened at these coordinates" and it just works.
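How do steps 2 and 3 above pick the right event? The scoring combines geographic distance, temporal distance, and source reliability. Here is a minimal Python sketch of that idea; the function names, weights, and event fields are illustrative assumptions, not Meridian's actual code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two coordinate pairs."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score_event(event, query_lat, query_lon, query_year):
    """Lower is better: combine distance, year gap, and source reliability.
    The weights here are illustrative, not Meridian's real values."""
    dist_km = haversine_km(event["lat"], event["lon"], query_lat, query_lon)
    year_gap = abs(event["year"] - query_year)
    source_penalty = 0.0 if event["source"] == "curated" else 5.0
    return dist_km / 10.0 + 2.0 * year_gap + source_penalty

# The Rome example from above: an exact-year match wins over a medieval event nearby
events = [
    {"name": "Assassination of Julius Caesar", "lat": 41.8902, "lon": 12.4922,
     "year": -44, "source": "curated"},
    {"name": "Coronation of Charlemagne", "lat": 41.9022, "lon": 12.4539,
     "year": 800, "source": "wikidata"},
]
best = min(events, key=lambda e: score_event(e, 41.8902, 12.4922, -44))
print(best["name"])  # -> Assassination of Julius Caesar
```

In a real query, the candidate list comes from the local SQLite database and a Wikidata lookup rather than a hard-coded list.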
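Similarly, the time-aware lighting and period-accurate negative prompts can be pictured as simple lookups keyed on the hour, month, latitude, and year. The thresholds and phrases below are assumptions for illustration, not the project's actual tables:

```python
def lighting_for(hour: int) -> str:
    """Map the chosen hour to a lighting phrase for the prompt."""
    if 5 <= hour < 9:
        return "soft dawn light, long shadows"
    if 9 <= hour < 17:
        return "bright midday light"
    if 17 <= hour < 21:
        return "warm dusk light"
    return "night scene, lit by moonlight, torches or street lamps"

def season_for(month: int, latitude: float) -> str:
    """Month plus hemisphere gives a rough seasonal atmosphere."""
    winter = {12, 1, 2} if latitude >= 0 else {6, 7, 8}
    summer = {6, 7, 8} if latitude >= 0 else {12, 1, 2}
    if month in winter:
        return "cold winter atmosphere"
    if month in summer:
        return "warm summer atmosphere"
    return "mild spring or autumn atmosphere"

def negative_prompt_for(year: int) -> str:
    """Ban anything invented after the scene's year to avoid anachronisms."""
    inventions = {
        "photographs": 1839,
        "electric street lights": 1880,
        "cars": 1886,
        "glass-and-steel skyscrapers": 1950,
        "smartphones": 2007,
    }
    banned = [item for item, invented in inventions.items() if invented > year]
    return ", ".join(["watermark", "text overlay"] + banned)

# November 9, 1989, 23:00, Berlin (latitude 52.5)
print(lighting_for(23))
print(season_for(11, 52.5))
print(negative_prompt_for(1989))  # bans smartphones, allows cars
```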
## Quick Start

```bash
# Install dependencies
pip install -r requirements.txt

# Set API keys
export HF_TOKEN="your_huggingface_token"   # Required for image generation
export GEMINI_API_KEY="your_gemini_key"    # Optional, for better prompts

# Run it
python app.py
```

Visit http://localhost:7860 (or whatever port is shown in your terminal).

## Examples to Try

- **Julius Caesar's Assassination**: 41.8902°N, 12.4922°E, March 15, 44 BCE, 11:00
- **Fall of the Berlin Wall**: 52.5163°N, 13.3777°E, November 9, 1989, 23:00
- **Apollo 11 Launch**: 28.5729°N, 80.6490°W, July 16, 1969, 13:32
- **D-Day**: 49.3414°N, 0.8322°W, June 6, 1944, 06:30

## MCP Server

To use Meridian with Claude Desktop, add it to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "meridian": {
      "command": "uv",
      "args": ["run", "/absolute/path/to/MCP/Meridian/app.py"]
    }
  }
}
```

Then ask Claude, for example: "Show me what happened at 52.5163, 13.3777 on November 9, 1989". A minimal Python client sketch appears in the appendix at the end of this README.

## Tech Stack

- **FLUX.1-dev**: Image generation via Hugging Face
- **Gemini 1.5 Flash**: Optional prompt enhancement
- **Wikidata**: Historical event database
- **SQLite**: Local curated events with Haversine distance queries (about 50 events so far, with more planned)
- **Gradio**: Web interface
- **MCP**: Model Context Protocol for AI assistant integration

## Limitations

The curated database has about 50 events, though Wikidata adds much broader coverage. Prompt quality depends on whether Gemini is available (the app falls back to templates if it isn't). Images are historically inspired but may not capture every detail perfectly.

## Built For

The [MCP 1st Birthday Hackathon](https://huggingface.co/MCP-1st-Birthday) by Anthropic and Gradio.

**Competing in ALL Tracks:**

- **Track 1: Building MCP** - Full MCP server implementation with 6+ tools for historical event search and image generation
- **Track 2: MCP in Action** - Practical application using MCP for agentic workflows, RAG, and context engineering

**Key MCP Features:**

- ✅ MCP server with multiple tools (`mcp_generate_historical_image`, `mcp_get_events_by_coordinates`, `mcp_search_wikidata_events`, and more)
- ✅ Consumer integration (works with Claude Desktop and other MCP clients)
- ✅ Agentic capabilities (AI assistants can programmatically generate historical images)
- ✅ RAG implementation (searches the curated database and Wikidata for historical context)
- ✅ Context engineering (builds rich prompts from coordinates, dates, and event data)
- ✅ Gradio 6.0 with native MCP support

## Social Media

**X/Twitter Post**: https://x.com/osamaamoftah/status/1995280953116065956?s=20

**Demo Video**: https://x.com/osamaamoftah/status/1995280953116065956?s=20

Made for the MCP community ❤️
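## Appendix: Calling the Tools from Python

For clients other than Claude Desktop, the same MCP tools can be driven from Python. Below is a minimal sketch using the official `mcp` SDK, launching the server over stdio exactly as in the config above. The argument names passed to `call_tool` are assumptions, not the server's documented schema, so check the `list_tools()` output for the real one:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the Meridian MCP server the same way the Claude Desktop config does
server = StdioServerParameters(command="uv", args=["run", "/absolute/path/to/MCP/Meridian/app.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # The Berlin Wall query from the examples; argument names are illustrative
            result = await session.call_tool(
                "mcp_get_events_by_coordinates",
                {"latitude": 52.5163, "longitude": 13.3777, "year": 1989},
            )
            print(result.content)

asyncio.run(main())
```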