
# Visurai — Visual Learning Copilot
🏆 Built at the Good Vibes Only AI/ML Buildathon @ USC (2025)
## Service Link
https://visurai-story-maker.lovable.app/
## Overview
Project Demo: https://drive.google.com/file/d/16_YFVfVJoDPQqLkXXaRXSv_Dyr98bxey/view?usp=sharing
Visurai helps dyslexic and visual learners comprehend material by converting text into a sequence of AI-generated images with optional narration.
Paste any text and get back a scene-by-scene set of generated images, with optional audio narration.
## Features
## Architecture
## Repository Structure
## Prerequisites
## Backend — Quick Start
From the repo root:
### backend/.env (example)
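The example file itself did not survive in this copy. A minimal sketch, using only the one variable this README actually references (`PUBLIC_BASE_URL`); the port is an assumption, and any model or API credentials the backend needs are deliberately omitted rather than guessed:

```
# Base URL embedded into the image_url / audio_url fields of responses.
# Port 8000 is an assumption; use whatever your backend listens on.
PUBLIC_BASE_URL=http://localhost:8000
```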
### Verify
## Frontend — Quick Start (pnpm)
Configure your frontend to call the backend base URL (e.g., `PUBLIC_BASE_URL`).
Typical React workflow:
Ensure your frontend uses the absolute URLs returned by the backend (e.g., `image_url`, `audio_url`); these already include `PUBLIC_BASE_URL` when it is set.
If your frontend needs an explicit base URL, set it (e.g., Vite):
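The snippet itself is missing from this copy. A minimal sketch for Vite: the variable name `VITE_API_BASE_URL` is an assumption, but the `VITE_` prefix is genuinely required for Vite to expose a variable to client code via `import.meta.env`.

```
# frontend .env — read in code as import.meta.env.VITE_API_BASE_URL
VITE_API_BASE_URL=http://localhost:8000
```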
## Engine Switch: LangGraph vs Imperative
The backend can run either the LangGraph pipeline or the imperative pipeline.
Enable LangGraph by setting an env var and restarting the server:
Endpoints are the same (e.g., `POST /generate_visuals`), but execution uses the graph.
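The README does not name the variable, so as a sketch only: a boolean-style flag (here called `USE_LANGGRAPH`, a hypothetical name) can be read once at startup to pick the engine.

```python
import os

def pick_engine() -> str:
    """Return which pipeline to run based on an env flag.

    "USE_LANGGRAPH" is a hypothetical variable name; the actual flag
    is whatever the backend's .env documents.
    """
    if os.environ.get("USE_LANGGRAPH", "").strip().lower() in {"1", "true", "yes"}:
        return "langgraph"
    return "imperative"
```

Since the flag is read at startup, restart the server after changing it, as noted above.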
## API Highlights
## Troubleshooting
## License
MIT License © 2025 Visurai Team
Made with care for learners who think in pictures.
