If ChatGPT is delivering that, they should have no problem raising money.
Yeah, the DOS-to-Windows transition was a big deal, but it was a pretty ripe time for innovation then.
>>> int(float('348555896224571969'))
348555896224571968
It exceeds the 53 mantissa bits of doubles; the number needs 59 bits:
>>> (348555896224571969).bit_length()
59
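A minimal REPL sketch (the numbers are illustrative) of where the round trip through a double first breaks; 2**53 is the boundary, which is also why JavaScript's Number.MAX_SAFE_INTEGER is 2**53 - 1:

>>> 2**53
9007199254740992
>>> int(float(2**53))      # exactly representable
9007199254740992
>>> int(float(2**53 + 1))  # rounds back down: the first silent error
9007199254740992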
JavaScript (in)famously stores all numbers as floating point, resulting in silent errors with user-perceived integers too, so this might be an indication that Claude Code's number handling uses JS native numbers for this.

It forcibly installs itself to ~/.local/bin. Do you already have a file at that location? Not anymore. When typing into the prompt, EACH KEYSTROKE results in the ENTIRE conversation scrollback being cleared and replayed, meaning one byte of new data results in kilobytes of data transferred when using Claude over SSH. The tab completion for @-mentioning is so bad it's worthless, and it's also async, so not even deterministic. You cannot disable their request for feedback. Apparently it lies in tool output.
It truly is a testament to the dangers of vibe coding, proudly displayed for everyone to take an example from.
> The redemption period ends August 31, 2026. For full details, visit https://www.asus.com/content/asus-offers-adobe-creative-clou....
Well, the monitor is €8,999, so maybe it’d be more than two taps for me:
> The monitor is scheduled to be available by October 2025 and will cost €8,999 in Europe (including VAT)
The days of single- and double-DIN stereo swapping are slipping away fast. You're pretty much stuck with what you get when you buy the car, so it had better be what you want.
My general procedure for using an LLM to write code, which is in the spirit of what is advocated here, is:
1) First, feed the existing relevant code into the LLM. This is usually just a few source files from a larger project.
2) Describe what I want to do, either giving an architecture or letting the LLM generate one. I tell it not to write code at this point.
3) Let it talk through the plan, and make sure that I like it. I converse to address any deficiencies I see, and I almost always find some.
4) I then tell it to generate the code.
5) I skim & test the code to see if it's generally correct, and have it make corrections as needed.
6) Closely read the entire generated artifact and make manual corrections (occasionally automated ones, like "replace all C-style casts with the appropriate C++-style casts", followed by a review of the diff).
The hardest part for me is #6: I feel a strong emotional bias against doing it, since at that point I am not yet aware of any errors compelling such action.
This allows me to operate at a higher level of abstraction (architecture) and removes the drudgery of turning an architectural idea into precise, written code. But in doing so, you are abandoning those details to a non-deterministic system. This is different from, for example, using a compiler or a higher-level VM language: with those tools, you can understand how they work, quickly develop a good idea of what you're going to get, and rely on robust assurances. Understanding LLMs helps, but not to the same degree.
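For concreteness, here is a rough sketch of steps 1-4 as a script. Everything in it is an assumption for illustration: the OpenAI Python SDK as the client, the model string, the file paths, and the task description; the review gates in steps 3, 5, and 6 stay manual.

from pathlib import Path
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: any capable chat model

# Step 1: feed in the relevant source files (hypothetical paths).
context = "\n\n".join(
    f"--- {p} ---\n{Path(p).read_text()}"
    for p in ["src/parser.py", "src/ast_nodes.py"]
)

# Steps 2-3: describe the goal, ask for a plan, and forbid code for now.
messages = [{
    "role": "user",
    "content": context + "\n\nI want to add error recovery to the parser. "
               "Propose an architecture and discuss trade-offs. Do NOT write code yet.",
}]
plan = client.chat.completions.create(model=MODEL, messages=messages)
print(plan.choices[0].message.content)  # read it; loop here until the plan looks right

# Step 4: only then ask for the implementation.
messages.append({"role": "assistant", "content": plan.choices[0].message.content})
messages.append({"role": "user", "content": "The plan is good. Now generate the code."})
code = client.chat.completions.create(model=MODEL, messages=messages)
print(code.choices[0].message.content)  # steps 5-6 (skim, test, close read) are manual

The important part is the hard gate between the plan response and the code request; nothing after step 4 is automated.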
If you keep all edits driven by the LLM, it can use that knowledge later in the session, or you can ask your model to commit the guidelines to long-term memory.