Posted by u/d-yoda 2 months ago
Show HN: Pyscn – Python code quality analyzer for vibe coders (github.com/ludo-technolog...)
Hi HN! I built pyscn for Python developers in the vibe coding era. If you're using Cursor, Claude, or ChatGPT to ship Python code fast, you know the feeling: features work, tests pass, but the codebase feels... messy.

Common vibe coding artifacts:

• Code duplication (from copy-pasted snippets)

• Dead code from quick iterations

• Over-engineered solutions for simple problems

• Inconsistent patterns across modules

pyscn performs structural analysis:

• APTED tree edit distance + LSH

• Control-Flow Graph (CFG) analysis

• Coupling Between Objects (CBO)

• Cyclomatic Complexity

Try it without installation:

  uvx pyscn analyze .          # Using uv (fastest)
  pipx run pyscn analyze .     # Using pipx
  (Or install: pip install pyscn)
Built with Go + tree-sitter. Happy to dive into the implementation details!
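
To give a rough feel for one of the checks above, here is a minimal sketch of the cyclomatic complexity metric using Python's ast module. It's only an illustration of the idea; the real implementation is Go + tree-sitter and builds a proper CFG, so the counts and thresholds will differ:

  import ast
  import textwrap

  SAMPLE = textwrap.dedent("""
      def risky(x, y):
          if x > 0:
              for i in range(y):
                  if i % 2 and x > i:
                      return i
          return -1
  """)

  # Each decision point adds a branch to the control-flow graph;
  # cyclomatic complexity is roughly (number of decision points) + 1.
  DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
               ast.BoolOp, ast.IfExp, ast.comprehension)

  def cyclomatic_complexity(func):
      return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(func))

  tree = ast.parse(SAMPLE)
  for node in ast.walk(tree):
      if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
          print(node.name, cyclomatic_complexity(node))  # risky 5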

brynary · 2 months ago
This looks great! Duplication and dead code are especially tricky to catch because they are not visible in diffs.

Since you mentioned the implementation details, a couple questions come to mind:

1. Are there any research papers you found helpful or influential when building this? For example, I need to read up on using tree edit distance for code duplication.

2. How hard do you think this would be to generalize to support other programming languages?

I see you are using tree-sitter which supports many languages, but I imagine a challenge might be CFGs and dependencies.

I’ll add a Qlty plugin for this (https://github.com/qltysh/qlty) so it can be run with other code quality tools and reported back to GitHub as pass/fail commit statuses and comments. That way, the AI coding agents can take action based on the issues that pyscn finds directly in a cloud dev env.

d-yoda · 2 months ago
Thank you!

1. For tree edit distance, I referred to "APTED: A Fast Tree Edit Distance Algorithm" (Pawlik & Augsten, 2016), but the algorithm is O(n²), so I also implemented a classic LSH scheme for large codebases (rough sketch below). The other analyses use classical compiler theory and techniques.

2. Should be straightforward! tree-sitter gives us parsers for 40+ languages. CFG construction is just tracking control flow, and the core algorithms stay the same.
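
On the LSH part, the classic MinHash bucketing looks roughly like this (a toy Python sketch of the idea only; the real implementation is Go and the details differ):

  import hashlib

  def shingles(tokens, k=5):
      # k-token windows of a code fragment (tokens as strings)
      if len(tokens) < k:
          return {" ".join(tokens)}
      return {" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

  def minhash(shingle_set, num_hashes=64):
      # MinHash signature: for each seeded hash, keep the smallest value
      return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                  for s in shingle_set)
              for seed in range(num_hashes)]

  def lsh_buckets(signatures, bands=16):
      # group fragments whose signatures agree on at least one band
      rows = len(next(iter(signatures.values()))) // bands
      buckets = {}
      for frag_id, sig in signatures.items():
          for b in range(bands):
              key = (b, tuple(sig[b * rows:(b + 1) * rows]))
              buckets.setdefault(key, []).append(frag_id)
      return [ids for ids in buckets.values() if len(ids) > 1]

Only fragments that end up sharing a bucket go on to the expensive pairwise APTED comparison, so the pass stays roughly linear in the number of fragments.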

I focused on Python first because vibe coding with Python tends to accumulate more structural issues. But the same techniques should apply to other languages as well.

Excited about the Qlty integration - that would make pyscn much more accessible and would be amazing!

amacbride · 2 months ago
I'm going to push back hard on the folks dunking on "vibe coders" -- I have been programming longer than most of you have been alive, and there are times when I absolutely do vibe coding:

1) unfamiliar framework 2) just need to build a throwaway utility to help with a main task (and I don't want to split my attention) 3) for fun: I think of it as "code sculpting" rather than writing

So this is absolutely a utility I would use. (Kudos to the OP.)

Remember the second-best advice for internet interactions (after Wheaton's Law): "Ssssshh. Let people enjoy things."

kelnos · 2 months ago
I too have probably been programming longer than most people here, and I'll vibe code on occasion for your #2 reason. (Recently I needed to take an OpenAPI spec file and transform/reduce it in some mechanical ways; didn't feel like writing the code for it, didn't care if it was maintainable, and it was easily verifiably correct after a quick manual skim of its output.)

I don't think #1 is a good place to vibe code; if it's code that I'll have to maintain, I want to understand it. In that case I'll sometimes use an LLM to write code incrementally in the new framework, but I'll be reading every line of it and using the LLM's work to help me understand and learn how it works.

A utility like pyscn that determines code quality wouldn't be useful for me with #1: even in an unfamiliar framework, I'm perfectly capable of judging code quality on my own, and I still need and want to examine the generated code anyway.

(I'm assuming we're using what I think is the most reasonable definition of "vibe coding": having an LLM do the work, and -- critically -- not inspecting or reviewing the LLM's output.)

amacbride · 2 months ago
I was using the definition of “let the LLM take the lead in writing the code, but review it afterwards“ so I don’t think our opinions are in conflict.

I think of coding agents as “talented junior engineers with no fatigue, but sometimes questionable judgment.”

convolvatron · 2 months ago
we can have a pissing contest. I don't begrudge anyone their fun, but when my job becomes taking hundreds of thousands of lines of vibe code and just finding that one little change that will make it all work, we have a serious problem with expectations.
amacbride · 2 months ago
I don't think we're at odds: I think "vibe coding" is strictly for fun and for prototypes. However, people will misuse any tool, so having utilities to mitigate the risk isn't a bad thing.
scuff3d · 2 months ago
This is an interesting idea but you might be better off marketing it as a tool for software engineers, maybe to help with old code bases. Or even for someone stuck cleaning up vibe coded nonsense.

Vibe coders don't care about quality and wouldn't understand why any of these things are a problem in the first place.

ryandrake · 2 months ago
I agree with this. I've been pretty critical of AI coding, but at the urging of some other HN posters, I shelled out a few bucks and started giving Claude Code a chance. After about 2 months of using it for various personal Python and C++ projects, my current problem with it is 1. how much babysitting you need to do to keep it on track and writing code the way you'd like it written, and 2. how much effort you need to spend after it writes the code, to clean it up and fix it. This tool would probably help quite a bit with 2.

I find for every 5 minutes of Claude writing code, I need to spend about 55 minutes cleaning up the various messes. Removing dead code that Claude left there because it was confused and "trying things". Finding opportunities for code reuse, refactoring, reusing functions. Removing a LOT of scaffolding and unnecessary cruft (e.g. this class with no member variables and no state could have just been a local function). And trivial stylistic things that add up, like variable naming, lint errors, formatting.

It takes 5 minutes to make some ugly thing that works, but an hour to have an actual finished product that's sanded and polished. Would it have taken an hour just to write the code myself without assistance? Maybe? Probably? Jury is still out for me.

Wowfunhappy · 2 months ago
Have you experimented with using a Claude.md file that describes your preferred coding style, including a few examples of what not to do and the corrected version? I haven't had complete success with this but it does seem to help.
scuff3d · 2 months ago
Yeah, in general I think agents are a mistake. People are desperately trying to make these things more useful than they are.

It's more useful as a research assistant, documentation search, and writing code a few lines at a time.

Or yesterday for work I had to generate a bunch of json schemas from Python classes. Friggin great for that. Highly structured input, highly structured output, repetitious and boring.

CuriouslyC · 2 months ago
Vibe coders do care about quality, at least the ones that try to ship and get burned by a mountain of tech debt. People aren't as stupid and one dimensional as you assume.
scuff3d · 2 months ago
Given that an entire industry is cropping up to fix the mess these people make, I think fewer of them care than you think.
flare_blitz · 2 months ago
And where, exactly, did this commenter say that vibe coders are "stupid and one dimensional"? Stop putting words in people's mouths.
lacy_tinpot · 2 months ago
This kind of weird disdain towards "vibe coders" is hilarious to me.

There was a time when hand-soldered boards were not just seen as superior to automated soldering; machine-soldered boards were actively looked down on. People went gaga over a good hand-soldered board and the craft.

People that are using AI to assist them to code today, the "vibe coders", I think would also appreciate tooling that assists in maintaining code quality across their project.

scuff3d · 2 months ago
Whether the board is hand soldered or not, the person designing it still has to know what they're doing.

I think a comparison that fits better is probably PCB/circuit design software. Back in the day, engineering firms had rooms full of people drafting and doing calculations by hand. Today a single engineer can do more in an hour than 50 engineers in a day could back then.

The critical difference is, you still have to know what you are doing. The tool helps, but you still have to have foundational understanding to take advantage of it.

If someone wants to use AI to learn and improve, that's fine. If they want to use it to improve their workflow or speed them up that's fine too. But those aren't "vibe coders".

People who just want the AI to shit something out they can use with absolutely no concern for how or why it works aren't going to be a group who care to use a tool like this. It goes against the whole idea.

d-yoda · 2 months ago
"You're absolutely right!" - the messaging could be clearer. I built pyscn because more engineers than expected are using AI assistants these days (to varying degrees), and I wanted to give them a tool to check code quality. But the real value might be for engineers who inherit or maintain AI-generated codebases as you say, rather than those actively vibe coding.
aDyslecticCrow · 2 months ago
Current AI is most proficient in JavaScript and Python because of the vast training data. But in the long run, I feel like languages with good static analysis, static type checks, clear language rules, memory leak detection, fuzzing, test-oriented code, and any number of other similar tools are gonna be the true game-changer. Directed learning using this tooling could improve the models beyond their training set, or simply allow humans to constrain AI output within certain bounds.
d-yoda · 2 months ago
Great point! Golang is indeed one of those languages with strong "vibe coding resistance" - it's personally one of my favorites for that reason. On the flip side, I think there's a future where tools like pyscn work alongside AI to make languages with large communities like Python even more dominant.
buremba · 2 months ago
I was more optimistic before, but if 95% of all software is written in these two languages, it will be very hard for any (better) alternative to disrupt them. The only way will likely be to make better profiling & debugging tools to help maintain existing codebases.
d-yoda · 2 months ago
I'm actually more optimistic. While Python/JS have huge ecosystems, there are still things only Go/Rust can achieve.
aDyslecticCrow · 2 months ago
Personally, I think we would need to see more specialized models: RustGPT, ClangGPT, or GoGPT. A model tuned for a specific language, with proper tooling integration beyond MCP tools. The current general-purpose models will always be biased towards what they have training data for, and we're clearly seeing them struggle with these languages. (My personal experience with AI-written C or Rust code is quite atrocious.)

But specialization restricts the target market and requires time to develop. It's currently far more lucrative to build a general-purpose model and attract VC funding for market capture.

xrd · 2 months ago
I absolutely love this. Tests and code coverage metrics are still important, but so easy to leave behind as you are running toward the vibe. This is a nice addition to the toolbox.
smoe · 2 months ago
I’d argue that those kinds of automated tools are much more important much earlier in a project than they used to be.

Personally, I can deal with quite a lot of jank and a lack of tests or other quality control tools in the early stages, but LLMs get lost so quickly. It’s like onboarding someone new to the codebase every hour or so.

You want to put them into a feedback loop with something or someone that isn’t you.

d-yoda · 2 months ago
Thank you! I'll keep improving it more and more!
stuaxo · 2 months ago
Here are some anti patterns a colleague put in their code.

Prescriptive comment: Comment describes exactly what the following code does without adding useful context. (Usually this is for the LLM to direct itself and should be removed).

Inconsistent style: you expect this across modules, but here it shows up within the same file.

Inconsistent calling style: A function or method should return one kind of thing.

(In the worst case, the LLM has generated a load of special cases in the caller to handle the different styles it made).

Unneeded "Service" class: I saw a few instances where something that should have been simple function calls resulted in a class with Service in the name being added, I'm not sure why, but it did happen adding extra complications.

Those are the ones off the top of my head.
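
To make the last two concrete, a toy example (invented, not my colleague's actual code):

  CACHE = {}

  def fetch(user_id):                      # stand-in for a DB call
      return {"id": user_id}

  # Inconsistent calling style: sometimes a dict, sometimes a tuple,
  # so the caller grows special cases for each shape it might get back.
  def lookup_user(user_id):
      if user_id in CACHE:
          return CACHE[user_id]
      return fetch(user_id), "from_db"

  # Unneeded "Service" class: no state, one method...
  class GreetingService:
      def greet(self, name):
          return f"Hello, {name}"

  # ...when a plain function would have done.
  def greet(name):
      return f"Hello, {name}"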

As a senior dev, I think use of these tools can be fine, as long as people are happy to go and fix the issues and learn. Anyone can go from vibe coder to coder if they accept the need to learn and improve.

The output of the LLM is a starting point: however much we engineer prompts, we can't know what else we need to say until we see the (somewhat) wrong output and iterate on it.

d-yoda · 2 months ago
This is super insightful, thank you for sharing. It's a great list of common LLM-generated anti-patterns.

I'd love to look into incorporating checks for these into pyscn. This is exactly the kind of stuff I want it to catch.

incomingpain · 2 months ago
I saw this project show up in some newsletters. Awesome project idea. I've been using 'vulture' so far.

But when I try to run analyze or check:

  Running quality check...
  Complexity analysis failed: [INVALID_INPUT] no Python files found in the specified paths
  Dead code analysis failed: [INVALID_INPUT] no Python files found in the specified paths
  Clone detection failed: no Python files found in the specified paths
  Error: analysis failed with errors

I'm certainly in a folder with python files.

d-yoda · 2 months ago
Wow, was it really in some newsletters? That's awesome to hear, and would definitely explain the recent spike on GitHub!

Thanks a lot for the bug report and for providing the details. I have a hunch: depending on your directory structure, you may need to specify the path explicitly. For example, if your Python files are under a src directory, could you try running pyscn analyze src/?

If that still doesn't solve the problem, it would be a huge help if you could open a quick issue on GitHub for this.

Thanks again for your feedback!

incomingpain · 2 months ago
https://imgur.com/a/382mtPr

It was linked in the TLDR newsletter on Monday.

  (myglobalenv) steve@bird:~/PycharmProjects/netflow$ ls
  aggregator.py  data  netflow  settings.py  assets  database.py  notifications.py
  sniper.py  config.py  Dockerfile  opencode.json  start.sh  context_manager.py
  integration.py  __pycache__  tcpports.py  context.py  largflow.py  README.md
  dashboard.py  main.py  requirements.txt

  (myglobalenv) steve@bird:~/PycharmProjects/netflow$ pyscn check .
  Running quality check...
  Complexity analysis failed: [INVALID_INPUT] no Python files found in the specified paths
  Dead code analysis failed: [INVALID_INPUT] no Python files found in the specified paths
  Clone detection failed: no Python files found in the specified paths
  Error: analysis failed with errors
  Usage: pyscn check [files...] [flags]

  Flags:
        --allow-dead-code      Allow dead code (don't fail)
    -c, --config string        Configuration file path
    -h, --help                 help for check
        --max-complexity int   Maximum allowed complexity (default 10)
    -q, --quiet                Suppress output unless issues found
        --skip-clones          Skip clone detection

  Global Flags:
    -v, --verbose   Enable verbose output

-v doesn't give me anything either.

FergusArgyll · 2 months ago
Very cool! I've never seen a CLI that opens an HTML file when it's finished. I kinda like it, hope to see more of that in the future.
d-yoda · 2 months ago
Glad you like it! Trying to make it as user-friendly as possible.
johtso · 2 months ago
This is fairly common with linting/test coverage tools