I have a command named "decolour", which strips (most) ANSI escape codes. Clear as day what it does, almost nobody uses this spelling when naming commands that later land as part of a distribution.
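A minimal sketch of what such a filter might look like (the regex and function name here are my assumptions, not the actual implementation):

```typescript
// Hypothetical "decolour"-style filter. The regex handles CSI sequences
// (colours, styles, cursor movement), which covers "most" ANSI escapes
// but deliberately not every possible escape type.
const ANSI_RE = /\x1b\[[0-9;]*[A-Za-z]/g;

export function decolour(input: string): string {
  return input.replace(ANSI_RE, "");
}
```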
I think you misunderstood this. tsx in this context is/was a way to run TypeScript files locally without running tsc yourself first, i.e. to make them run like a script. You can just use Node now, but for a long time it couldn't natively run TypeScript files.
The only limitation I run into using Node natively is you need to do import types as type imports, which I doubt would be an issue in practice for agents.
I wouldn't call it running TS natively - what they're doing is either using an external tool or just stripping types, so several things, most notably enums, don't work by default.
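To illustrate the distinction, here's a sketch (my own example) of what plain type stripping can and can't handle under Node's type-stripping behaviour:

```typescript
// Type-only syntax is erased to whitespace, so these are fine:
import type { Stats } from "node:fs";

interface Described {
  kind: string;
}

export function describe(s: Stats): Described {
  return { kind: s.isFile() ? "file" : "other" };
}

// Constructs that emit real JavaScript, like the enum below, are rejected
// by plain stripping and need actual transformation (tsc, tsx, ts-node):
//
//   enum Colour { Red, Green }
```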
I mean, that's more than enough for my use cases and I'm happy that the feature exists, but I don't think we'll ever see a native TypeScript engine. Would have been cool, though, considering JS engines define their own internal types anyway.
Similarly: TypeScript, despite what Node people might want you to believe, is not part of the JavaScript language.
I've always used ts-node, so I forgot about tsx's existence, but still those are just tools used for convenience.
Nothing currently actually runs TypeScript natively and the blessed way was always to compile it to JS and run that.
1. Large built-in standard library (CSV, sqlite3, xml/json, zipfile).
2. In Python, whatever the LLM is likely to do will probably work. In JS, you have the Node / Deno split, far too many libraries that do the same thing (XMLHttpRequest / Axios / fetch), many mutually incompatible import syntaxes (e.g. compare tsx versus Node's native TS execution), and features like top-level await (very important for small scripts, and something that an LLM is likely to use!), which only work if you pray three times on the day of the full moon.
3. Much better ecosystem for data processing (particularly csv/pandas), partially resulting from operator overloading being a thing.
You do? Deno is maybe a single digit percentage of the market, just hyped tremendously.
> e.g. compare tsx versus Node's native TS execution
JSX/TSX, despite what React people might want you to believe, are not part of the language.
> which only work if you pray three times on the day of the full moon.
It only doesn't work in some contexts due to legacy reasons. Otherwise it's just elaborate syntax sugar for `Promise`.
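For example, in an ES module the sugar is just this (my sketch):

```typescript
// Top-level await: legal at module scope in ESM, illegal in CommonJS
// scripts - which is the "legacy reasons" context in question.
const value = await Promise.resolve(42);

// Roughly desugars to:
// Promise.resolve(42).then((value) => { /* rest of module */ });
export { value };
```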
My hierarchy of static analysis looks like this (hierarchy below is Typescript focused but in principle translatable to other languages):
1. Typesafe compiler (tsc)
2. Basic lint rules (eslint)
3. Cyclomatic complexity rules (eslint, sonarjs)
4. Max line length enforcement (via eslint)
5. Max file length enforcement (via eslint)
6. Unused code/export analyser (knip)
7. Code duplication analyser (jscpd)
8. Modularisation enforcement (dependency-cruiser)
9. Custom script to ensure shared/util directories are not overstuffed (built this using dependency-cruiser as a library rather than an exec)
10. Security check (semgrep)
I stitch all of the above into a single `pnpm check` command and define an agent rule to run it before marking a task as complete.
Finally, I make sure `pnpm check` is run as part of a pre-commit hook to make sure that the agent has indeed addressed all the issues.
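A sketch of how the stitching might look in `package.json` (the exact tool flags and wiring are my assumptions, not the author's actual setup):

```json
{
  "scripts": {
    "check": "tsc --noEmit && eslint . && knip && jscpd src && depcruise src && semgrep scan"
  }
}
```

The pre-commit side would typically be wired up with something like husky or simple-git-hooks pointing at `pnpm check`.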
This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.
(Edit: added the pre-commit hook, which I missed in the initial comment)
One current idea of mine is to iteratively make things more and more specific; this is the approach I take with psuedocode-expander ([0]) and it has proven generally useful. I think there's a lot of value in the LLM building from the top down with human feedback, for instance, instead of one-shot generating something linearly. I give a lot more examples on the repo for this project, and I encourage any feedback or thoughts on LLM-driven code generation in a way that's more sustainable than vibe-coding.
[0]: https://github.com/explosion-Scratch/psuedocode-expander/
Well, you can always set temperature to 0, but that doesn't remove hallucinations.
If they can muster defiance, it's only because you weren't violent enough. If someone is defiant enough to play probability games with you, just punish them 100% of the time instead, even if they did nothing. He was probably doing it some other time where you didn't catch him, so it's warranted.
There's always someone willing to escalate things further. Things will escalate until someone discovers their limits and backs down. Consequences range from being quietly hated, to being ostracized, to being actively fucked with, to being beaten up, to being straight up killed.
Smart people don't fuck around and find out. They check their behavior so that they don't step on other people's toes for no reason. Violence very often comes with instructions on how to avoid it. Don't do this, and I won't do that. All they have to do is listen and follow the instructions.
The outcome where the obnoxious neighbor learns his lesson and stops his bad behavior is the good ending. The behavior stops, the situation de-escalates and peace is restored. If they keep up their defiance, things will only keep escalating further. Somebody could get hurt.
Of course they do. Some of the smartest people out there are habitual risk takers. We wouldn't have organised crime if it weren't for people smart enough to not get caught or killed early on.
Your method doesn't take into account that the person you're targeting also has a brain, which they will use against you, and that they have as much power as you do.
Overall you're describing a power fantasy, not reality.
I just enjoy writing my own software. If I have a tool that will help me to lubricate the tight bits, I’ll use it.
Occasionally of course it's way off, in which case I have to tell it to stfu ("snooze").
Also, it's great at presenting other people's knowledge, even though it doesn't actually know facts - just which token should come after a sequence of others. The other day I pasted an error message from a system I wasn't familiar with and it explained in detail what the problem was and how to solve it - brilliant, just what I wanted.