jehna1 commented on Show HN: AI in SolidWorks   trylad.com... · Posted by u/WillNickols
jehna1 · a month ago
I've been experimenting with Claude Code and different code-to-CAD tools, and the best workflow so far has been with Replicad. It allows real-time rendering in a browser window as Claude makes changes to a single code file.

Here's an example I finished just a few minutes ago:

https://github.com/jehna/plant-light-holder/blob/main/src/pl...
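
For flavor, a single-file Replicad model roughly looks like this (a minimal made-up sketch, not the file from the repo above; the shape and dimensions are assumed):

```js
// Minimal Replicad model: the browser viewer re-renders main() on every save,
// so Claude can iterate on this one file and see the result immediately.
import { drawRoundedRectangle, drawCircle } from "replicad";

export function main() {
  const plate = drawRoundedRectangle(60, 40, 5) // width, height, corner radius
    .sketchOnPlane("XY")
    .extrude(4);

  const hole = drawCircle(8).sketchOnPlane("XY").extrude(4);

  // Boolean subtraction: cut the cylinder out of the plate
  return plate.cut(hole);
}
```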

jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
punkpeye · a year ago
Looks useful! I will update the article to link to this tool. Thanks for sharing!
jehna1 · a year ago
Super, thank you for adding the link! It really helps people find the tool.
jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
anticensor · a year ago
Thanks for creating this megafier, can you add support for local LLMs?
jehna1 · a year ago
Better yet, it already does have support for local LLMs! You can use them via `humanify local`
jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
bgirard · a year ago
How well does the result compare to the original un-minified code if you run minify + humanify? Would be neat if it could improve mediocre code.
jehna1 · a year ago
On a structural level it's exactly 1:1: HumanifyJS only does renames, no refactoring. It may come up with better names for variables than the original code, though.
jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
cryptoz · a year ago
Finally someone else using ASTs while working with LLMs and modifying code! This is such an under-utilized area. I am also doing this with good results: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
jehna1 · a year ago
Super interesting! Since you're generating code with LLMs, you should check out this paper:

https://arxiv.org/pdf/2405.15793

It uses smart feedback to fix the code when LLMs occasionally hiccup. You could also have a "supervisor LLM" that asserts that the resulting code matches the specification and gives feedback if it doesn't.
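
In pseudocode, that supervisor idea is just a generate-check-retry loop (a hedged sketch; `generator` and `supervisor` are hypothetical async wrappers around LLM calls, not an existing API):

```js
// Hypothetical generate → check → retry loop; both "generator" and "supervisor"
// are assumed to be async functions that wrap LLM calls.
async function generateWithSupervisor(spec, generator, supervisor, maxRounds = 3) {
  let feedback = "";
  for (let round = 0; round < maxRounds; round++) {
    const code = await generator(spec, feedback);
    const verdict = await supervisor(spec, code); // e.g. { ok: boolean, critique: string }
    if (verdict.ok) return code;
    feedback = verdict.critique; // feed the critique back into the next attempt
  }
  throw new Error("Supervisor was not satisfied within the round limit");
}
```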

jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
KolmogorovComp · a year ago
Thanks for your tool. Have you been able to quantify the gap between your local model and chatgpt in terms of ‘unminification performance’?
jehna1 · a year ago
At the moment I haven't found a good way of measuring quality differences between models. Please share if you have any ideas!

For small scripts I've found the output to be very similar between small local models and GPT-4o (judging by eye).

jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
boltzmann-brain · a year ago
how do you make an LLM work on the AST level? do you just feed a normal LLM a text representation of the AST, or do you make an LLM where the basic data structure is an AST node rather than a character string (human-language word)?
jehna1 · a year ago
I'm using both a custom Babel plugin and LLMs to achieve this.

Babel first parses the code to AST, and for each variable the tool:

1. Gets the variable name and surrounding scope as code

2. Asks the LLM to come up with a good name for the given variable, based on the scope where the variable is used

3. Uses Babel to make the context-aware rename in the AST based on the LLM's response (a rough sketch is below)
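
Very roughly, the Babel side looks something like this (a simplified sketch, not the actual HumanifyJS source; `suggestName` stands in for the LLM round-trip):

```js
// Simplified sketch of the approach (not the real HumanifyJS code).
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";
import generate from "@babel/generator";

export async function renameAll(code, suggestName) {
  const ast = parse(code);
  const bindings = [];

  // Collect every declared variable together with the scope it lives in
  traverse(ast, {
    Scopable(path) {
      for (const name of Object.keys(path.scope.bindings)) {
        bindings.push({ name, scope: path.scope });
      }
    },
  });

  for (const { name, scope } of bindings) {
    const scopeCode = generate(scope.block).code; // surrounding code as context
    const newName = await suggestName(name, scopeCode); // LLM only decides the name
    // scope.rename() is Babel's scope-aware rename: it only touches references
    // bound to this declaration, so shadowing elsewhere stays intact
    if (newName && newName !== name) scope.rename(name, newName);
  }

  return generate(ast).code;
}
```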

jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
thrdbndndn · a year ago
Does it work with huge files? I'm talking about something like 50k lines.

Edit: I'm currently trying it with a mere 1.2k-line JS file (openai mode) and it's only 70% done after 20 minutes. Even if it theoretically works with a 50k LOC file, I don't think you should try.

jehna1 · a year ago
It does work with files of any size, although it is quite slow if you're using the OpenAI API. HumanifyJS processes each variable name separately, which keeps the context size manageable for an LLM.

I'm currently working on parallelizing the rename process, which should give orders of magnitude faster processing times for large files.
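
The idea is simply to fire off several independent name suggestions at once instead of awaiting them one by one, something along these lines (an assumed sketch, not the shipped implementation; the AST renames themselves still happen sequentially afterwards):

```js
// Hypothetical batching of LLM calls: the suggestions for different variables
// are independent, so a batch can be resolved together rather than serially.
const CONCURRENCY = 10;

async function suggestNamesInParallel(variables, suggestName) {
  const results = [];
  for (let i = 0; i < variables.length; i += CONCURRENCY) {
    const batch = variables.slice(i, i + CONCURRENCY);
    results.push(...(await Promise.all(batch.map((v) => suggestName(v)))));
  }
  return results;
}
```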

jehna1 commented on OpenAI is good at unminifying code   glama.ai/blog/2024-08-29-... · Posted by u/punkpeye
sebstefan · a year ago
What kind of question does it ask the LLM? Giving it a whole function and asking "What should we rename <variable 1>?" repeatedly until everything has been renamed?

Asking it to do it on the whole thing, then parsing the output and checking that the AST still matches?

jehna1 · a year ago
For each variable:

1. It asks the LLM to write a description of what the variable does

2. It asks for a good variable name based on the description from 1.

3. It uses a custom Babel plugin to do a scope-aware rename

This way the LLM only decides the name, but the actual renaming is done with traditional and reliable tools.
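
As a rough illustration of that two-step chain (the prompts here are made up, not the ones HumanifyJS actually sends; `llm` is an assumed async completion call):

```js
// Hypothetical two-step prompt chain: describe the variable, then name it.
async function suggestName(variableName, scopeCode, llm) {
  const description = await llm(
    `Here is some minified JavaScript:\n${scopeCode}\n` +
      `Briefly describe what the variable "${variableName}" is used for.`
  );
  const name = await llm(
    `Based on this description, suggest one descriptive camelCase JavaScript ` +
      `identifier for the variable:\n${description}\nAnswer with the name only.`
  );
  return name.trim(); // only the name comes from the LLM; the rename is done via Babel
}
```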
