Readit News
lee_ars commented on So yeah, I vibe-coded a log colorizer–and I feel good about it   arstechnica.com/features/... · Posted by u/lee_ars
lee_ars · 6 days ago
> Have you had to go back and fix any of your vibe coded projects yet?

Not yet, but you're absolutely right. Once a tool like this stops being front of mind, it'll fall right out of my head. It's a bit like driving somewhere versus being driven—I'm a lot more likely to remember how to get to a place if I have to actively navigate to it. If I'm in the passenger seat, all bets are off!

lee_ars commented on When will CSS Grid Lanes arrive?   webkit.org/blog/17758/whe... · Posted by u/feross
lee_ars · 9 days ago
Looking at the comparison image between CSS grid lanes and CSS grid 1, the grid lanes example looks... horrifying. It looks like pinterest cancer. It makes the page look like a ragged assortment of random shit. Scannability is grossly impaired. How are you supposed to approach this content? What objective does this mess of a presentation accomplish? What kind of information lends itself to this kind of "masonry-style waterfall layout"?
lee_ars commented on Sometimes your job is to stay the hell out of the way   randsinrepose.com/archive... · Posted by u/ohjeez
prinny_ · 9 days ago
This is an incredibly cringe article. From using “wolf” in a completely forced way, to fully quoting a conversation that seemingly only misses “and that testing framework’s name? Albert Einstein”.
lee_ars · 9 days ago
It's like a linkedin article that has escaped from its cage.
lee_ars commented on Don't fall into the anti-AI hype   antirez.com/news/158... · Posted by u/todsacerdoti
ux266478 · a month ago
The problem is that this is completely false. LLMs are actually deterministic. There are a lot more input parameters than just the prompt. If you're using a piece of shit corpo cloud model, you're locked out of managing your inputs because of UX or whatever.
lee_ars · a month ago
> The problem is that this is completely false. LLMs are actually deterministic. There are a lot more input parameters than just the prompt. If you're using a piece of shit corpo cloud model, you're locked out of managing your inputs because of UX or whatever.

When you decide to make up your own definition of determinism, you can win any argument. Good job.

lee_ars commented on Iran is likely jamming Starlink   timesofisrael.com/iran-ap... · Posted by u/ukblewis
kennykartman · a month ago
Playing devil's advocate a bit here, but

> This is why all of those "national great firewalls" shouldn't exist in the first place

This is a kind of colonialist thinking that is, IMO, a problem in Western society. There are indeed drawbacks to a lack of freedom, but assuming that a government should not be able to filter the content diffused to the population is wrong in principle. You don't get to choose what is right or wrong in every part of the world: that is a very USA-centric way to view society and easily leads to "export freedom and democracy" acts. It's a very USA-friendly way to frame things. Not necessarily the right way to frame things.

lee_ars · a month ago
> There are indeed drawbacks in a lack of freedom, but assuming that a government should not be able to filter the content diffused to the population is wrong in principle.

Why?

lee_ars commented on Don't fall into the anti-AI hype   antirez.com/news/158... · Posted by u/todsacerdoti
AuryGlenz · a month ago
Have you seen the way some people google/prompt? It can be a murder scene.

Not coding related, but my wife is certainly better than most, and yet I’ve had to reprompt certain questions she’s asked ChatGPT because she gave it inadequate context. People are awful at that. We coders are probably better off than most, but just as with human communication, if you’re not explaining things correctly you’re going to get garbage back.

lee_ars · a month ago
People are "awful at that" because when two people communicate, we're using a lot more than words. Each person participating in a conversation is doing a lot of active bridge-building. We're supplying and looking for extra nonverbal context; we're leaning on basic assumptions about the other speaker, their mood, their tone, their meanings; we're looking at not just syntax but the pragmatics of the convo (https://en.wikipedia.org/wiki/Pragmatics). The communication of meaning is a multi-dimensional thing that everyone in the conversation is continually contributing to and pushing on.

In a way, LLMs are heavily exploitative of human linguistic abilities and expectations. We're wired so hard to actively engage and seek meaning in conversational exchanges that we tend to "helpfully" supply that meaning even when it's absent. We are "vulnerable" to LLMs because they supply all the "I'm talking to a person" linguistic cues, but without any form of underlying mind.

Folks like your wife aren't necessarily "bad" at LLM prompting—they're simply responding to the signals they get. The LLM "seems smart." It seems like it "knows" things, so many folks engage with them naturally, as they would with another person, without painstakingly feeding in context and precisely defining all the edges. If anything, it speaks to just how good LLMs are at being LLMs.

lee_ars commented on Don't fall into the anti-AI hype   antirez.com/news/158... · Posted by u/todsacerdoti
jonas21 · a month ago
Operating a car (i.e. driving) is certainly not deterministic. Even if you take the same route over and over, you never know exactly what other drivers or pedestrians are going to do, or whether there will be unexpected road conditions, construction, inclement weather, etc. But through experience, you build up intuition and rules of thumb that allow you to drive safely, even in the face of uncertainty.

It's the same when programming with LLMs. Through experience, you build up intuition and rules of thumb that allow you to get good results, even if you don't get exactly the same result every time.

lee_ars · a month ago
> It's the same when programming with LLMs. Through experience, you build up intuition and rules of thumb that allow you to get good results, even if you don't get exactly the same result every time.

Friend, you have literally described a nondeterministic system. LLM output is nondeterministic. Identical input conditions result in variable output conditions. Even if those variable output conditions cluster around similar ideas or methods, they are not identical.
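The disagreement in this thread mostly comes down to whether the sampler's random state counts as part of the "input." A minimal toy sketch (not a real LLM; the vocabulary, logits, and function names here are all made up for illustration) shows both readings: greedy decoding is a pure function of the logits, while temperature sampling also depends on RNG state, so identical prompts can yield different outputs unless that state is pinned too.

```python
import math
import random

# Toy next-token "model": fixed logits over a tiny vocabulary.
# (Hypothetical stand-in for an LLM's output layer, for illustration only.)
VOCAB = ["cat", "dog", "fish"]
LOGITS = [2.0, 1.5, 0.1]

def softmax(xs):
    """Convert logits into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pick_greedy(logits):
    # Greedy decoding: argmax is a pure function of the logits,
    # so identical inputs always yield identical output.
    return VOCAB[logits.index(max(logits))]

def pick_sampled(logits, rng):
    # Temperature sampling: the chosen token also depends on the RNG
    # state, so identical logits can yield different outputs across calls.
    probs = softmax(logits)
    return rng.choices(VOCAB, weights=probs, k=1)[0]

# Greedy is reproducible from the input alone.
assert all(pick_greedy(LOGITS) == "cat" for _ in range(100))

# Sampling is reproducible only if you also pin the seed,
# i.e. treat the RNG state as one more input parameter.
a = pick_sampled(LOGITS, random.Random(42))
b = pick_sampled(LOGITS, random.Random(42))
assert a == b
```

In practice most hosted chat interfaces sample with nonzero temperature and don't expose the seed, which is why the same prompt produces varying answers from the user's point of view, whichever definition of "deterministic" you prefer.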

lee_ars commented on CLI agents make self-hosting on a home server easier and fun   fulghum.io/self-hosting... · Posted by u/websku
wasmitnetzen · a month ago
Conversely, what do you gain by using a standard port?

Now, I do agree a non-standard port is not a security tool, but it doesn't hurt running a random high-number port.

lee_ars · a month ago
> Conversely, what do you gain by using a standard port?

One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.
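For what it's worth, the "one less thing to remember" cost of a nonstandard port can usually be bought back with a one-time client-side config entry. A hypothetical example, assuming the service is SSH moved to port 2222 (the host alias, address, and port below are made up, not from the thread):

```
# ~/.ssh/config (hypothetical entry)
Host homelab
    HostName 192.168.1.50
    Port 2222
```

With that in place, a plain `ssh homelab` works again, so the nonstandard port costs one config line in the runbook rather than a fact you have to keep in your head.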

lee_ars commented on CLI agents make self-hosting on a home server easier and fun   fulghum.io/self-hosting... · Posted by u/websku
visageunknown · a month ago
I find LLMs remove all the fun for me. When I build my homelab, I want the satisfaction of knowing that I did it. And the learning gains that only come from doing it manually. I don't mind using an LLM to shortcut areas that are just pure pain with no reward, but I abstain from using it as much as possible. It gives you the illusion that you've accomplished something.
lee_ars · a month ago
>I don't mind using an LLM to shortcut areas that are just pure pain with no reward...

Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.

u/lee_ars

Karma: 202 · Cake day: April 20, 2013
About
Senior Technology Editor @ Ars Technica