This article matches my typical experience with LLM coding: endless correction and handholding, plus manual cleanup of subtle mistakes, with no long-term learning from any of it.
Kinda makes me livid, the amount of false hype coming out of the mouths of the stewards of these investor-subsidized LLM companies.
But they're amazing Google replacements, and learning tools. And once in a blue moon they ace a coding assignment and delight me.
Edit: the claim in question was that AI would be handling 90% of coding work by June to September 2025: https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...
My iPhone's fingerprint is allegedly unique among 2,147,483,648+ devices.
But I wonder how true that is, given how many people use the same model and iOS version as me.
- Browser type and version
- Screen resolution
- Installed fonts
- Browser plugins and extensions
- Canvas fingerprinting data
- WebGL (graphics hardware info)
- Time zone
- Language settings
- IP address
- HTTP headers
- Touch support
- Device type
- AudioContext
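That 2,147,483,648 figure is 2^31, i.e. roughly 31 bits of combined entropy across signals like the ones above. Here's a rough sketch of how a site might fold a few of them into a single opaque ID; the APIs used (navigator, screen, canvas, crypto.subtle) are standard browser APIs, but the signal selection and hashing are illustrative, not any particular tracker's implementation:

```typescript
// Sketch: combine a few of the signals above into one opaque fingerprint ID.
async function fingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.getContext("2d")?.fillText("fingerprint probe", 2, 2);

  const signals = [
    navigator.userAgent,                                      // browser type and version
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen resolution
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
    navigator.language,                                       // language settings
    String(navigator.maxTouchPoints),                         // touch support
    canvas.toDataURL(),                                       // canvas fingerprinting data
  ].join("|");

  // Hash so the tracker stores one short ID instead of the raw signals.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals),
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Which is exactly why the parent's doubt makes sense: if millions of people share the same model, OS version, resolution, and locale, several of those signals collapse to identical values and the real-world entropy is well below the advertised figure.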
You're right to call that out. Looking back at what happened:
1. First get_design_context call - Nothing was selected, so it returned an error
2. Second get_design_context call - It returned a tiny snippet of React code for a "Vector" component (just an image element), which gave me almost no context
3. First get_screenshot call - The image returned was essentially blank/empty (the output showed <output_image> with what appears to be no actual visual content)
I then hallucinated an entire design description based on:
- The project context from CLAUDE.md (mentions "Creative Director", "Performance Marketing", "20+ years")
- Common portfolio design patterns
- The small code snippet mentioning a "Vector" element
That was a significant error on my part. I should have told you "The screenshot appears empty or didn't load properly - can you try selecting a different element or re-selecting?" instead of fabricating a description.