Can mods change the linked article away from the thin blog post?
But I don’t think CSS can leverage the GPU in most (any?) cases. Apple has almost certainly baked something into the silicon to help handle the UI.
From experience, borders/cards help communicate conceptual boundaries, while whitespace helps communicate information hierarchy - Gestalt principles don't really address that distinction. For product or data-driven UI where a lot of loosely-related information/topics are shown in discrete parts of the page, cards are effective at high-level grouping. For content-driven UI, whitespace can be sufficient, and I think the article makes this clear.
Other than 'The Ultimate Developer Toolkit' (where type size is more of an issue than the card layout), I actually think the card-based version of each example layout is more compelling - easier to scan, and easier to 'chunk' - despite wanting the typography-and-whitespace alternative to be sufficient.
But there are other things, including awesome-and-dangerous nuclear waste sites, with warning messages/symbols designed to last beyond the collapse of modern civilization https://en.wikipedia.org/wiki/Long-term_nuclear_waste_warnin...
But what if all the packages had automatic CI/CD? When libinsecure 0.2.1 is published, libuseful automatically tests a new version of itself against 0.2.1, and if the tests pass it publishes a new version. Consumers of libuseful do the same, and so on.
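A minimal sketch of what that per-package CI step could look like, in Python. Everything here is hypothetical for illustration: the package names, the "my-publish-tool" CLI, and the trigger mechanism are all made up.

    import subprocess
    import sys

    DEPENDENCY = "libinsecure"
    NEW_VERSION = "0.2.1"  # e.g. discovered by a registry webhook or a polling job

    def bump_and_test(dep: str, version: str) -> bool:
        """Pin the new dependency version, then run the test suite."""
        subprocess.run(
            [sys.executable, "-m", "pip", "install", f"{dep}=={version}"],
            check=True,
        )
        return subprocess.run([sys.executable, "-m", "pytest"]).returncode == 0

    if bump_and_test(DEPENDENCY, NEW_VERSION):
        # Cut a new patch release of libuseful; downstream consumers'
        # pipelines would pick that release up and repeat the same dance.
        subprocess.run(["my-publish-tool", "release", "--patch"], check=True)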
"....Rozells’ composite visually echoes pleas from astronomers, who warn that although satellites collect essential data, the staggering amount filling our skies will only worsen light pollution and our ability to study what lies beyond. Because this industry has little regulation, the problem could go unchecked....."
Agents have their place for trivial and non-critical fixes/features, but the reality is that unless agents can act deterministically across LLMs, you really are coding with a loaded gun. Worst of all, agents can really dull your senses over time.
I do believe in a future where we can trust agents 99% of the time, but the reality is that we are not training on the thought process needed for this to become a reality. That is, we are not focused on conversation-to-code training data. I would say 98% of my code is AI generated, and it is certainly not vibe coding. I don't have a term for it, but I am literally dictating to the LLM what I want done and having it fill in the pieces. Sometimes it misses the mark, sometimes it aligns, and sometimes it introduces whole new ideas that I had never thought of, which leads to a better solution. The instructions I provide are based on my domain knowledge, and I think people are missing the mark when they talk about vibe coding in a professional context.
Full Disclosure: I'm working on improving the "conversation to code" process, so my opinions are obviously biased, but I strongly believe we need to first focus on better capturing our thought process.
Think about how differently a current agent behaves when you say "here is the spec, implement a solution" vs "here is the spec, here is my solution, make refinements" - you get very different output, and I would argue that the 'check my work' approach tends to have better results.
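As a rough illustration of the two framings, here is a Python sketch; hypothetical_llm and its complete() function are made-up stand-ins, not a real API.

    # Illustrative only: hypothetical_llm is a made-up stand-in, not a real client.
    from hypothetical_llm import complete

    spec = open("spec.md").read()
    draft = open("my_solution.py").read()

    # Framing 1: open-ended; the agent owns the whole design.
    generated = complete(f"Here is the spec:\n{spec}\nImplement a solution.")

    # Framing 2: 'check my work'; the agent critiques and refines an existing design.
    refined = complete(
        f"Here is the spec:\n{spec}\nHere is my solution:\n{draft}\nMake refinements."
    )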
Take your idea further. Now I've got 100 agents, and 100 PRs, and some small percentage of them are decent. The task went from "implement a feature" to "review 100 PRs and select the best one".
Even assuming you can ditch 50 percent right off the bat as trash... Reviewing 50 potentially buggy implementations of a feature and selecting the best genuinely sounds worse than just writing the solution.
Worse... If you haven't solved the problem before anyways, you're woefully unqualified as a reviewer.
From my experience with AI agents, this feels intuitively possible - current agents seem to be ok (though not yet 'great') at critiquing solutions, and such supervisor agents could help keep the broader system in alignment.
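A sketch of what such a supervisor loop might look like, assuming made-up worker/critic agents (hypothetical_agents and its methods do not exist; this is only the shape of the idea):

    # Sketch only: hypothetical_agents (worker/critic) is invented for illustration.
    from hypothetical_agents import worker, critic

    def supervised_solve(task: str, max_rounds: int = 3) -> str:
        solution = worker.solve(task)
        for _ in range(max_rounds):
            review = critic.review(task, solution)
            if review.approved:
                break
            # Feed the critique back so the worker refines rather than restarts.
            solution = worker.solve(task, feedback=review.notes)
        return solution  # best effort after max_rounds of supervision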
Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations they've had that have influenced the particular code solution. LLMs are trained on far more sources that influence their code suggestions, but it seems like we actually want a higher standard of attribution because they (arguably) are incapable of original thought.