A weekend experiment: I wrote a VS Code extension that uses LLMs plus native TTS to voice-explain your code (especially useful for LLM-generated code after a long night of coding).
1. Quickly grok unfamiliar code - auto-splits it into well-named, logical sections that you can traverse, hear explained, or break down further.
2. Can use reasoning models for better output (slower).
3. Vercel AI SDK wrapper for multi-provider support.
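The multi-provider wrapper in point 3 can be sketched roughly like this. This is a minimal, hypothetical illustration of the idea, not the extension's actual code: the names (`Provider`, `explainCode`) are made up, and the stub providers stand in for real Vercel AI SDK model calls.

```typescript
// Hypothetical sketch of a multi-provider wrapper. In the real extension,
// each entry would wrap a Vercel AI SDK model (e.g. an OpenAI or Anthropic
// provider package) instead of a stub.
type Provider = (prompt: string) => Promise<string>;

const providers: Record<string, Provider> = {
  // Stubs standing in for actual LLM calls.
  openai: async (prompt) => `[openai] ${prompt}`,
  anthropic: async (prompt) => `[anthropic] ${prompt}`,
};

async function explainCode(providerName: string, code: string): Promise<string> {
  const provider = providers[providerName];
  if (!provider) {
    throw new Error(`Unknown provider: ${providerName}`);
  }
  // The real prompt would ask the model to split the code into named
  // sections and explain each one for TTS playback.
  return provider(`Explain this code:\n${code}`);
}
```

Swapping providers then comes down to changing a single string, which is the main appeal of routing everything through one SDK interface.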
This was fun to build, and I see promise in more accessible designs like this for future copilots. I'm really keen on the learning tools of the future.