
Peeko — Real-time AI Lecture Recovery System


A real-time lecture companion that helps students instantly recover when they zone out.

2026-04-04

Claude Code · Vercel · Supabase · React · Vite

Hackathon · AI · Human-Centered AI · EdTech · Real-time Systems · LLM Applications · HCI · Product Design · Attention Recovery


🏆 1st Place Overall — Build4SC Hackathon (USC, GRIDS x Viterbi)


🧠 Problem

When students lose focus during a lecture, the hardest part is not the distraction itself — it’s getting back in.

The professor has already moved on, and the only option is to scroll through long transcripts or wait until the lecture ends.

Existing tools like Otter or Notion AI help after the lecture, but not in the moment.

From personal experience, even missing 5–10 minutes creates a gap that is surprisingly hard to recover from.


💡 Insight

We initially explored detecting distraction — tracking whether a student is paying attention.

However, we quickly realized:

  • Reliable distraction detection is difficult in real time
  • It introduces privacy concerns and noise
  • It doesn’t actually solve the core problem

The real problem is recovery, not detection.

So we shifted our focus:

Instead of trying to stop distraction, help users recover instantly.

🚀 Solution


Peeko is a real-time recovery system that helps students rejoin a lecture immediately.

It provides:

  • Rolling summary cards generated every few minutes
  • A “Catch Me Up” feature that explains what is happening now and what was missed
  • A lightweight companion interface that keeps users connected to the lecture

✨ Key Features

🃏 Live Flashcard Timeline

  • Generates structured summary cards every ~5 minutes
  • Cards include topic, key points, and keywords
  • Builds a chronological timeline of the lecture
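The timeline logic can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the actual Peeko code; `TranscriptSegment`, `FlashcardInput`, and `bucketByWindow` are hypothetical names, and the ~5-minute window is the only detail taken from the description above.

```typescript
// Hypothetical data shapes for the flashcard timeline (not Peeko's real code).
interface TranscriptSegment {
  atSec: number; // seconds since lecture start
  text: string;
}

interface FlashcardInput {
  windowStartSec: number;
  windowEndSec: number;
  transcript: string; // concatenated text for this ~5-minute window
}

// Group timestamped transcript segments into fixed windows (default ~5 min),
// producing one summarization input per window, in chronological order.
function bucketByWindow(
  segments: TranscriptSegment[],
  windowSec = 300,
): FlashcardInput[] {
  const buckets = new Map<number, TranscriptSegment[]>();
  for (const seg of segments) {
    const idx = Math.floor(seg.atSec / windowSec);
    const list = buckets.get(idx) ?? [];
    list.push(seg);
    buckets.set(idx, list);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([idx, segs]) => ({
      windowStartSec: idx * windowSec,
      windowEndSec: (idx + 1) * windowSec,
      transcript: segs.map((s) => s.text).join(" "),
    }));
}
```

Each `FlashcardInput` would then be handed to the summarizer, which turns it into one card on the timeline.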

⚡ Catch Me Up (Recovery Navigation)

  • One tap to instantly rejoin the lecture
  • Shows what is happening right now and a recap of what was missed
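A minimal sketch of the recovery lookup, assuming the timeline is an ordered array of cards and the app knows roughly when the user tuned out; all names here are illustrative, not Peeko's actual API.

```typescript
// Hypothetical shapes for the "Catch Me Up" lookup.
interface SummaryCard {
  startSec: number; // window start, seconds since lecture start
  topic: string;
  keyPoints: string[];
}

interface CatchUp {
  missed: SummaryCard[];       // cards covering the gap
  current: SummaryCard | null; // "what is happening now"
}

// Split the timeline at the moment the user zoned out: everything after that
// point is "missed", and the most recent card is "current".
function catchMeUp(timeline: SummaryCard[], zonedOutAtSec: number): CatchUp {
  const gap = timeline.filter((c) => c.startSec >= zonedOutAtSec);
  const current =
    gap.length > 0
      ? gap[gap.length - 1]
      : timeline.length > 0
        ? timeline[timeline.length - 1]
        : null;
  return { missed: gap.slice(0, -1), current };
}
```

One tap maps to one call: no scrolling through a raw transcript, just the missed cards plus the current one.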

🧠 Context-Aware Summarization

  • Each card builds on previous cards (rolling memory)
  • Maintains continuity across the lecture
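The rolling-memory idea can be illustrated like this: each summarization call carries only the last few card summaries instead of the full transcript. The prompt wording and the `memoryDepth` parameter are assumptions for illustration, not the prompt Peeko actually uses.

```typescript
// Hypothetical card shape; `summary` is the model's output for a past window.
interface CardSummary {
  topic: string;
  summary: string;
}

// Build the next summarization prompt from a bounded window of prior cards,
// keeping context size roughly constant over the whole lecture.
function buildSummarizationPrompt(
  previousCards: CardSummary[],
  windowTranscript: string,
  memoryDepth = 3, // how many prior cards to carry forward
): string {
  const memory = previousCards
    .slice(-memoryDepth)
    .map((c, i) => `[${i + 1}] ${c.topic}: ${c.summary}`)
    .join("\n");
  return [
    "You are summarizing a live lecture into a flashcard.",
    memory
      ? `Context from earlier cards:\n${memory}`
      : "This is the first card of the lecture.",
    `New transcript window:\n${windowTranscript}`,
    "Return a topic, 3 key points, and keywords.",
  ].join("\n\n");
}
```

Because only the last `memoryDepth` summaries are forwarded, the prompt stays small no matter how long the lecture runs, while each card still builds on what came before.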

🦊 Ambient Companion UI

  • A lightweight fennec fox companion
  • Provides subtle, always-visible feedback without interrupting the user

🏗 Architecture

Audio (Browser)
  ↓ Web Speech API (real-time STT, in-browser)
  ↓ WebSocket
Backend
  ↓ Claude API (summarization / catch-up)
  ↓ Supabase (storage)
Frontend (React + Vite)
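The WebSocket hop between browser and backend can be sketched as a small typed message protocol; these shapes are hypothetical, not Peeko's real wire format.

```typescript
// Illustrative frames for the browser ↔ backend WebSocket (not the real protocol).
type ClientMessage =
  | { kind: "transcript"; atSec: number; text: string; final: boolean }
  | { kind: "catch_me_up"; zonedOutAtSec: number };

type ServerMessage =
  | { kind: "card"; startSec: number; topic: string; keyPoints: string[] }
  | { kind: "catch_up"; summary: string };

// Encode/decode so both ends agree on the frame format (plain JSON here).
const encode = (m: ClientMessage | ServerMessage): string => JSON.stringify(m);

function decodeClient(frame: string): ClientMessage {
  const m = JSON.parse(frame) as ClientMessage;
  if (m.kind !== "transcript" && m.kind !== "catch_me_up") {
    throw new Error(`unknown message kind: ${(m as { kind: string }).kind}`);
  }
  return m;
}
```

A discriminated union like this keeps the real-time path simple: the browser streams `transcript` frames as the Web Speech API emits results, and a `catch_me_up` frame triggers the recovery summary.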


⚙️ What We Built in 12 Hours

  • Real-time audio streaming and transcription pipeline
  • Rolling LLM-based summarization system
  • Catch Me Up recovery logic
  • Timeline-based UI for lecture navigation
  • End-to-end working demo

🧠 Key Decisions & Tradeoffs

  • Dropped distraction detection due to reliability and complexity
  • Focused on recovery instead of prevention
  • Used rolling summary cards to avoid large context windows
  • Prioritized speed and clarity over feature completeness

👥 Team

Built with an amazing team:

  • Jimin Lee — Concept, product direction, backend architecture, LLM integration
  • Astrid — UX design, interaction design, product concept contribution
  • Atharva — Front end, real-time audio pipeline, Web Speech API integration, Picture-in-Picture (PiP)
  • Vatsal — Claude Code integration

🏆 Result

🥇 1st Place Overall — Build4SC Hackathon

(47 participants, USC GRIDS x Viterbi)



🧭 Reflection

This project reinforced an important lesson:

In a world rapidly changing with AI, what matters most is still identifying real problems and solving them well.

Rather than starting from technology, we focused on a simple, real pain point: how hard it is to recover after losing focus.

That focus led to a product that felt immediately useful.

Moving forward, I want to keep building systems that start from real user problems and turn them into practical, usable solutions.