We’ve always made videos to explain concepts, and they’ve felt like the clearest way to communicate. But making good videos was time-consuming and tedious: it required planning, scripting, recording, editing, and syncing voice with visuals. Even a 2-minute video could take hours.
AI video tools are impressive at generating cinematic scenes and flashy content, but struggle to explain a product demo, walk through a complex workflow, or teach a technical topic. People still spend hours making explainer videos manually because existing AI tools aren’t built for learning or clarity.
Our solution is Golpo. Our video engine generates time-aligned graphics with spoken narration, well suited to onboarding, training, product walkthroughs, and education. It’s fast, scalable, and built from the ground up to help people understand complex ideas through simple storytelling.
Here’s a demo: https://www.youtube.com/watch?v=C_LGM0dEyDA#t=7.
Golpo is built specifically for use cases involving explaining, learning, and onboarding. In our (obviously biased!) opinion, it feels authentic and engaging in a way no other AI video generator does.
Golpo can generate videos in over 190 languages. After it generates a video, you can fully customize the animations by describing, in natural language, the changes you want to see in each motion graphic.
It was challenging to get this to work! Initially, we used a code-generation approach with Manim, where we fine-tuned a language model to emit Python animation scripts directly from the input text. While promising for small examples, this quickly became brittle, and the generated code usually contained broken imports, unsupported transforms, and poor timing alignment between narration and visuals. Debugging and regenerating these scripts was often slower than creating them manually.
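For context, here’s a rough sketch of the kind of script we were asking the model to emit (Manim Community syntax; the scene contents here are made up for illustration, not actual model output):

    from manim import Scene, Circle, Arrow, Text, Create, Write, LEFT, RIGHT

    class PointerDiagram(Scene):
        def construct(self):
            node = Circle().shift(LEFT * 2)       # a node in the diagram
            label = Text("ptr").shift(RIGHT * 2)  # its caption
            arrow = Arrow(label.get_left(), node.get_right())
            # Every play() call has to line up with a narration segment;
            # these run_time values are what the model got wrong most often.
            self.play(Write(label), run_time=1.5)
            self.play(Create(node), Create(arrow), run_time=2.0)
            self.wait(0.5)

Multiply that by dozens of scenes per video, and a single broken import or unsupported transform meant regenerating the whole script.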
We also explored training a custom diffusion-based video model, but found it impractical for our needs. Diffusion could produce high-fidelity cinematic scenes, but generating coherent sequences beyond about 30 seconds was unreliable without complex stitching; edits required regenerating large portions of the video; and visuals frequently drifted from the instructional intent, especially for abstract or technical topics. Also, we did not have the compute to scale this.
Existing state-of-the-art systems like Sora and Veo 3 face similar limitations: they are optimized for cinematic storytelling, not step-by-step educational content, and they lack both the deterministic control needed for time-aligned narration and the scalability for 5–10 minute explainers.
In the end, we took a different path: training a reinforcement learning agent to “draw” whiteboard strokes, step by step, optimized for clear, human-like explanations. This worked well because the action space was simple and the environment was not overly complex, allowing the agent to learn efficient, precise, and consistent drawing behaviors.
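To give a sense of what “simple action space” means here, a toy version of the setup looks something like this (illustrative names and reward only, not our actual training code):

    import numpy as np

    class WhiteboardEnv:
        # The agent draws one straight stroke per step on a square
        # canvas and is rewarded for covering a target drawing.
        def __init__(self, target):
            self.target = target                  # binary target image
            self.canvas = np.zeros_like(target, dtype=float)
            self.size = target.shape[0]

        def step(self, action):
            # action: (x0, y0, x1, y1) in [0, 1] -- stroke endpoints
            x0, y0, x1, y1 = (np.clip(action, 0, 1) * (self.size - 1)).astype(int)
            n = max(abs(x1 - x0), abs(y1 - y0)) + 1
            xs = np.linspace(x0, x1, n).astype(int)
            ys = np.linspace(y0, y1, n).astype(int)
            self.canvas[ys, xs] = 1.0             # rasterize the stroke
            # reward: fraction of the target covered so far
            reward = float((self.canvas * self.target).sum() / self.target.sum())
            return self.canvas.copy(), reward

With a state that’s just pixels and actions that are just stroke endpoints, there’s far less surface area for things to go wrong than with free-form code generation.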
Here are some sample videos that Golpo generated:
https://www.youtube.com/watch?v=33xNoWHYZGA (Whiteboard Gym - the tech behind Golpo itself)
https://www.youtube.com/watch?v=w_ZwKhptUqI (How do RNNs work?)
https://www.youtube.com/watch?v=RxFKo-2sWCM (function pointers in C)
https://golpo-podcast-inputs.s3.us-east-2.amazonaws.com/file... (basic intro to Gödel's theorem)
You can try Golpo here: https://video.golpoai.com, and we will set you up with 2 credits. We’d love your feedback, especially on what feels off, what you’d want to control, and how you might use it. Comments welcome!
Edit: I've used it. It's amazing. I'm going to be using this a lot.
I agree. Rather than (what I assume is) E2E text -> video/audio output, it seems like training a model to use the community fork of manim [1], the library 3blue1brown uses for his videos, would produce a better result.
[1] https://github.com/ManimCommunity/manim/
Congrats! Cool product.
Feedback: I tried making a product explainer video for a tree-planting rover I’m working on. The rover looked different in every scene; I can imagine this kind of consistency is hard to get right. Maybe uploading a photo of the rover would have helped. In one scene the rover looks like an actual rover; in another, it looks like a humanoid robot.
But still, super impressed!
Signed up and waiting on a video :)
Edit: here's a 58s explainer video for the concept of body doubling: https://video.golpoai.com/share/448557cc-cf06-4cad-9fb2-f56b...
Have you tried a "filled line" approach, rather than "outlined" strokes? Might feel more like individual marker strokes.
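Something like this, roughly (matplotlib, purely to illustrate the visual difference I mean):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, np.pi, 200)
    x, y = t, np.sin(t)

    fig, (a, b) = plt.subplots(1, 2, figsize=(7, 3))
    # "outlined" stroke: the border of the path traced as thin lines
    a.plot(x, y + 0.04, "k", lw=1)
    a.plot(x, y - 0.04, "k", lw=1)
    a.set_title("outlined")
    # "filled" stroke: one solid ribbon, closer to a real marker pass
    b.fill_between(x, y - 0.04, y + 0.04, color="k")
    b.set_title("filled")
    plt.show()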
I made a demo video on the free tier, and it did a great job explaining acoustic delay lines in an accessible fashion after I fed it a catalog PDF with an overview of the historical artefact and a photograph of an example unit. Unfortunately, the service invented its own idea of what the artefact looked like. Could you offer a storyboard view and let users erase the incorrect parts and sketch their own shapes? Or split the drawing into logical elements the user could redraw as needed, and reuse those elements wherever they appear in other frames?
Keep refining the generated demos! Best of luck.