svnscha commented on AI Meets WinDBG   svnscha.de/posts/ai-meets... · Posted by u/thunderbong
indigodaddy · 4 months ago
My word, that's one of the most beautiful sites I've ever encountered on mobile.
svnscha · 4 months ago
Thank you - using Zola and Apollo Theme (slightly modified) - https://www.getzola.org/themes/apollo/
psanchez · 4 months ago
It looks like it is using the Microsoft Console Debugger (CDB) as the interface to WinDBG.

Just had a quick look at the code: https://github.com/svnscha/mcp-windbg/blob/main/src/mcp_serv...

I might be wrong, but at first glance I don't think it is only using those 4 commands. It might be using them internally to get context to pass to the AI agent, but it looks like it exposes:

    - open_windbg_dump
    - run_windbg_cmd
    - close_windbg_dump
    - list_windbg_dumps
The most interesting one is "run_windbg_cmd" because it might allow the MCP server to run whatever command the AI agent wants. E.g.:

    elif name == "run_windbg_cmd":
        args = RunWindbgCmdParams(**arguments)
        session = get_or_create_session(
            args.dump_path, cdb_path, symbols_path, timeout, verbose
        )
        output = session.send_command(args.command)
        return [TextContent(
            type="text",
            text=f"Command: {args.command}\n\nOutput:\n```\n" + "\n".join(output) + "\n```"
        )]

(edit: formatting)
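To picture the flow around that handler: `get_or_create_session` presumably caches one live debugger session per dump file so repeated tool calls don't relaunch cdb.exe each time. A minimal sketch of that caching pattern (the stub `CDBSession` class and its behavior are my assumptions for illustration, not the repo's actual code):

```python
# Hypothetical sketch of per-dump session caching. In the real project the
# session would wrap a cdb.exe subprocess; a stub stands in here.
_sessions = {}

class CDBSession:
    def __init__(self, dump_path):
        self.dump_path = dump_path

    def send_command(self, command):
        # The real implementation would write to cdb's stdin and read stdout.
        return [f"(stub) ran {command!r} against {self.dump_path}"]

def get_or_create_session(dump_path):
    # Reuse an already-open session for the same dump file.
    if dump_path not in _sessions:
        _sessions[dump_path] = CDBSession(dump_path)
    return _sessions[dump_path]

out = get_or_create_session("app.dmp").send_command("!analyze -v")
again = get_or_create_session("app.dmp")  # same cached object, no new launch
```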

svnscha · 4 months ago
Yes, that's exactly the point. LLMs "know" about WinDBG and its commands. So if you ask it to switch the stack frame, or to inspect structs, memory or the heap - it will do so and give contextual answers. Trivial crashes are analyzed almost fully autonomously, whereas for challenging ones you get quite a cool assistant at your side, helping you analyze data, patterns, structs - you name it.
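One practical detail behind letting the model run commands freely: the wrapper has to know when cdb's output for a given command has ended. A common approach (an assumption on my part, not necessarily what mcp-windbg does) is to echo a sentinel line after each command and read output until it appears:

```python
import io

SENTINEL = "---CMD-DONE---"

def read_until_sentinel(stream, sentinel=SENTINEL):
    """Collect output lines until the sentinel line appears.

    The driver would send e.g.:  kb ; .echo ---CMD-DONE---
    so the sentinel marks the end of the command's output.
    """
    lines = []
    for line in stream:
        line = line.rstrip("\n")
        if line == sentinel:
            break
        lines.append(line)
    return lines

# Simulated cdb output stream for demonstration (made-up frame data):
fake = io.StringIO("ChildEBP RetAddr\n00affc28 7712f0ab example!frame\n---CMD-DONE---\nleftover\n")
print(read_until_sentinel(fake))
```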
JanSchu · 4 months ago
This is one of the most exciting and practical applications of AI tooling I've seen in a long time. Crash dump analysis has always felt like the kind of task that time forgot—vital, intricate, and utterly user-hostile. Your approach bridges a massive usability gap with the exact right philosophy: augment, don't replace.

A few things that stand out:

The use of MCP to connect CDB with Copilot is genius. Too often, AI tooling is skin-deep—just a chat overlay that guesses at output. You've gone much deeper by wiring actual tool invocations to AI cognition. This feels like the future of all expert tooling.

You nailed the problem framing. It’s not about eliminating expertise—it’s about letting the expert focus on analysis instead of syntax and byte-counting. Having AI interpret crash dumps is like going from raw SQL to a BI dashboard—with the option to drop down if needed.

Releasing it open-source is a huge move. You just laid the groundwork for a whole new ecosystem. I wouldn’t be surprised if this becomes a standard debug layer for large codebases, much like Sentry or Crashlytics became for telemetry.

If Microsoft is smart, they should be building this into VS proper—or at least hiring you to do it.

Curious: have you thought about extending this beyond crash dumps? I could imagine similar integrations for static analysis, exploit triage, or even live kernel debugging with conversational AI support.

Amazing work. Bookmarked, starred, and vibed.

svnscha · 4 months ago
Yes, I've thought about this already! Right now I'm exploring crash dump analysis, but static analysis and reverse engineering are definitely areas where such assistants can help. LLMs are surprisingly good at understanding disassembly, which makes this really exciting beyond crash dump analysis. Besides that, I think assisted perf trace analysis may be another cool area to explore.

Domain expertise remains crucial though. As complexity increases, you need to provide guidance to the LLM. However, when the model understands specialized tools well - like WinDBG in my experience - it can propose valuable next steps. Even when it slightly misses the mark, course correction is quick.

I've invested quite some time using WinDBG alongside Copilot (specifically Claude in my configuration), analyzing memory dumps, stack frames, and variables, and inspecting third-party structures in memory. While not solving everything automatically, it substantially enhances productivity.

Consider this as another valuable instrument in your toolkit. I hope tool vendors like Microsoft continue integrating these capabilities directly into IDEs rather than requiring external solutions. This approach to debugging and analysis tools is highly effective, and many already incorporate AI capabilities.

What Copilot currently lacks is the ability to configure custom Agents with specific System Prompts. This would advance these capabilities significantly - though .github/copilot-instructions.md does help somewhat, it's not equivalent to defining custom system prompts or creating a chat participant enabling Agent mode. This functionality will likely arrive eventually.

Other tools already allowing system prompt customization might yield even more interesting results. Reducing how often I need to redirect the LLM could further enhance productivity in this area.

The whole point of this was me chatting with Copilot about a crash dump: I asked it what the command for some specific task was, because I didn't remember, and it suggested which commands I could try next to investigate something - and I was like, wait, what if I let it do this automatically?

That's basically the whole idea behind it: me being too lazy to copy-paste Copilot's suggestions into WinDBG. What was just a test at first became a proof of concept and now, almost overnight, has gotten quite a lot of attention. I am probably as excited as you are.
