Readit News
jerf commented on Valve Software handbook for new employees [pdf] (2012)   cdn.akamai.steamstatic.co... · Posted by u/Michelangelo11
Hendrikto · 4 hours ago
> I'm not sure why Steam always seems to be exempt from the "perils of digital ownership" arguments

Because they have been consistently good citizens for more than 2 decades. They built a reputation. Something other companies are eager to piss away at the first opportunity to sell out or squeeze their customers.

It’s not surprising that Valve is successful and trusted with this approach. What is surprising is that it is apparently so incredibly hard for other companies to understand this very simple fact.

1. Build a good product.

2. Consistently act in good faith.

3. Profit.

jerf · 41 minutes ago
The dominant business school philosophy in the West is that 1. any reputation you have with your customers is a monetary asset, and 2. therefore you should sell it for profit, because the immediate payoff is greater than the long-term expected monetary value according to a simple time-value-of-money calculation, especially given the lag before your customers figure out you've sold them out.

#1 on its own isn't so bad, you should indeed treat reputation as a valuable asset, but the way their style of logic invariably jumps to "and therefore you should sell, sell, sell it!" is the source of the problems we see. Especially because they're likely to jump jobs before the consequences occur. We really ought to have a culture of looking askance at executives and decision makers who never spend more than 2 years at a job, rather than celebrating them. If they've never had to live with the effects of their decisions, they're really just a fresh-out-of-college person with the same two years of experience repeated 10 times.
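Spelled out, the calculation they run looks something like this toy sketch (every number here is invented purely for illustration):

  package main

  import (
      "fmt"
      "math"
  )

  func main() {
      // Illustrative numbers only: what a trusted brand keeps earning
      // each year, vs. a one-time payoff from squeezing customers.
      const annual = 10.0   // yearly profit from the reputation
      const sellout = 120.0 // immediate payoff from selling it out
      const discount = 0.15 // an aggressively high discount rate

      pv := 0.0
      for t := 1; t <= 30; t++ {
          pv += annual / math.Pow(1+discount, float64(t))
      }
      fmt.Printf("PV of 30 years of reputation: %.1f; sell out now: %.1f\n", pv, sellout)
      // Pick a high enough discount rate and the sellout "wins" on
      // paper, which is exactly the leap from premise 1 to premise 2.
  }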

jerf commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
sayamqazi · 2 days ago
Wouldn't you need the T_zero configuration of the universe for this to work?

Given different T_zero configs of matter and energy, T_current would be different. And there are many pathways that could lead to the same physical configuration (positions + energies, etc.) with different (universe minus cake) configurations.

Also, we are assuming there are no non-deterministic processes happening at all.

jerf · 2 days ago
The real problem is you need a real-number-valued universe for this to work, where the measurer needs access to the full real values [1]. In our universe, which has a Planck size and Planck time and related limits, the statement is simply untrue. Even if you knew every last detail about a piece of fairy cake, whatever "every last detail" may actually be, and even if the universe is for some reason deterministic, you still could not derive the entire rest of the universe from it correctly. Some sort of perfect intelligence with access to massive amounts of computation may be able to derive a great deal more than you realize, especially about the environment in the vicinity of the cake, but it couldn't derive the entire universe.

[1]: Arguments are ongoing about whether the universe has "real" numbers (in the mathematical sense) or not. However, it is undeniable that the Planck constants provide a practical barrier to any hypothetical real-valued numbers in the universe, making them inaccessible in practice.
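To put rough numbers on the scale mismatch (an order-of-magnitude sketch; both volumes are coarse estimates, not measurements):

  package main

  import (
      "fmt"
      "math"
  )

  func main() {
      const planckLength = 1.616e-35 // metres
      planckVol := math.Pow(planckLength, 3)

      const cakeVol = 1e-3       // m^3: a generous piece of fairy cake
      const universeVol = 3.6e80 // m^3: observable universe, roughly

      // Even granting one distinguishable state per Planck volume, which
      // is wildly generous, the cake has far fewer cells than the
      // universe whose state it would need to encode.
      fmt.Printf("cake:     ~10^%.0f Planck volumes\n", math.Log10(cakeVol/planckVol))
      fmt.Printf("universe: ~10^%.0f Planck volumes\n", math.Log10(universeVol/planckVol))
  }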

jerf commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
bfrog · 2 days ago
The question to ask is... why have the junior be a ChatGPT interface if this is the case.
jerf · 2 days ago
That's a question every current junior should be asking themselves.

If you want to be well-paid, you need to be able to distinguish yourself in some economically-useful manner from other people. That was true before AI and AI isn't going to make it go away. It may even in some sense sharpen it.

In another few years there are going to be large numbers of people who can be plopped down in front of a code base and just start firing prompts at an AI. If you're just another one of the crowd, you're going to get mediocre career results, potentially including not having a career at all.

However, here in 2025 I'm not sure what that "standing out in the crowd" will be. It could well be "exceptional skill in prompting". It could be that deeper understanding of what the code is really doing. It could be the ability to debug deeply yourself with an old-school debugger when something goes wrong and the AI just can't work. It could be non-coding skills entirely. In reality it'll be more than just one thing anyhow and the results will vary. I don't know what to tell you juniors except to keep your eyes peeled for whatever this will be, and when you think you have an idea, don't let the cognitively-lazy appeal of just letting the AI do everything stop you from pursuing it. I don't know specifically what this will be, but you don't have to be right the first time, you have time to get several licks at this.

But I do know that we aren't going to need very many people who are only capable of firing prompts at an AI and blindly saying "yes" to whatever it says, not because of the level of utility that may or may not have, but because that's not going to distinguish you at all.

If all you are is a proxy to AI, I don't need you. I've got an AI of my own, and I've got lower latency and higher bandwidth to it.

Correspondingly, if you detect that you are falling into the pattern of being on the junior programmer end of what this article is complaining about, where you interact with your coworkers as nothing but an AI proxy, you need to course correct and you need to course correct now. Unfortunately, again, I don't have a recipe for that correction. Ask me in 2030.

"Just a proxy to an AI" may lead to great things for the AI but it isn't going to lead you anywhere good!

jerf commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
adastra22 · 3 days ago
Yes, but:

(1) The process of creating "a very, very precise and detailed understanding of the actual problem" is something AI is really good at, when partnered with a human. My use of AI tools got immensely better when I figured out that I should be prompting the AI to turn my vague short request into a detailed prompt, and then I spend a few iteration cycles fixing it up before asking the agent to do it.

(2) The other problem of managing context is a search and indexing problem, which we are really, really good at and have lots of tools for; AI is just so new that these tools haven't been adapted or seen wide use yet. If the limitation of the AI were its internal reasoning or training or something, I would be more skeptical. But the limitation seems to be managing, indexing, compressing, searching, and distilling appropriate context, which is firmly in the domain of solvable, albeit nontrivial, problems.
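The shape of that solution is something like this toy sketch, with keyword overlap standing in for the embedding search a real tool would use (all file names and the task string are invented):

  package main

  import (
      "fmt"
      "sort"
      "strings"
  )

  type chunk struct {
      path string
      text string
  }

  // score counts task keywords that appear in a chunk: a crude stand-in
  // for semantic retrieval over an indexed code base.
  func score(c chunk, task string) int {
      n := 0
      body := strings.ToLower(c.text)
      for _, w := range strings.Fields(strings.ToLower(task)) {
          if strings.Contains(body, w) {
              n++
          }
      }
      return n
  }

  func main() {
      task := "add retry logic to the payment client"
      chunks := []chunk{
          {"payment/client.go", "func (c *Client) Charge(...) error { ... }"},
          {"ui/theme.go", "colors, fonts, spacing constants"},
          {"payment/retry.go", "backoff helpers for retrying failed requests"},
      }
      const budget = 2 // however many chunks fit in the context window

      // Rank chunks by relevance and pack the best into the prompt.
      sort.Slice(chunks, func(i, j int) bool {
          return score(chunks[i], task) > score(chunks[j], task)
      })
      for _, c := range chunks[:budget] {
          fmt.Println("include in prompt:", c.path)
      }
  }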

I don't see the information theoretic barrier you refer to. The amount of information an AI can keep in its context window far exceeds what I have easily accessible to my working memory.

jerf · 2 days ago
The information theoretic barrier is in the information content of your prompt, not the ability of the AI to expand it.

But then I suppose I should learn from my own experiences and not try to make information theoretic arguments on HN, since it is in that most terrible state where everyone thinks they understand it because they use "bits" all the time, but in fact the average HN denizen knows less than nothing about it because even their definition of "bit" actively misleads them and that's about all they know.
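For the record, the counting argument itself is tiny. A sketch; the only fact it leans on is Shannon's classic estimate of roughly 1 bit of information per character of English, and the example prompt is invented:

  package main

  import "fmt"

  func main() {
      // Shannon estimated English text at roughly 1 bit of information
      // per character; the exact constant doesn't change the argument.
      prompt := "store user settings with a UI to set the user's name"
      const bitsPerChar = 1.0

      bits := float64(len(prompt)) * bitsPerChar
      fmt.Printf("~%.0f bits: this prompt can distinguish at most ~2^%.0f intents,\n", bits, bits)
      fmt.Println("no matter how capable the model that expands it is.")
  }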

jerf commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
xyzzyz · 2 days ago
Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.

I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).

Go's filesystem API is the perfect example. You need to open files? Great, we'll create

  func Open(name string) (*File, error)
function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares; it hasn't happened to me in the first 5 years I used Go.

jerf · 2 days ago
While the general question about string encoding is fine, unfortunately in a general-purpose, cross-platform language, a file interface that enforces Unicode correctness is actively broken, in that there are files out in the world it will be unable to interact with. If your language is enforcing that, and it doesn't have a fallback to a bag of bytes, it is broken; you just haven't encountered it yet. Go is correct on this specific API. I'm not celebrating that fact here, and I don't expect the Go designers are either, but it's still correct.
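A minimal demonstration of why the bag of bytes matters. This assumes a filesystem that treats names as raw bytes, e.g. most Linux filesystems; macOS's APFS and Windows are stricter:

  package main

  import (
      "fmt"
      "os"
  )

  func main() {
      // A Go string is an arbitrary byte sequence, not necessarily valid
      // UTF-8, so a filename containing invalid UTF-8 still round-trips.
      name := "legacy-\xff\xfe.dat" // not valid UTF-8

      if err := os.WriteFile(name, []byte("hello"), 0o644); err != nil {
          fmt.Println("create failed:", err)
          return
      }
      defer os.Remove(name)

      data, err := os.ReadFile(name)
      if err != nil {
          fmt.Println("open failed:", err)
          return
      }
      fmt.Printf("read %d bytes back from %q\n", len(data), name)
  }

A language that insists on valid Unicode at this boundary can never create, list, or delete that file.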
jerf commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
adastra22 · 3 days ago
It's a matter of the tools not being there yet, though. If there were a summarization system that could compress down the structure and history of the system you are working on, in a way that could then extract out a half-filled context window of the relevant bits of the code base and architecture for the task (in other words, generate that massive prompt for you), then you might see the same results that you get with Android apps.

The reason being that the boilerplate Android stuff is effectively given for free and not part of the context, as it is so heavily represented in the training set, whereas the unique details of your work project are not. But finding a way to provide that context, or better yet fine-tune the model on your codebase, would put you in the same situation, and there's no reason for it not to deliver the same results.

That it is not working for you now at your complex work projects is a limitation of tooling, not something fundamental about how AI works.

Aside: Your recommendation is right on. It clicked for me when I took a project that I had spent months of full-time work creating in C++, and rewrote it in idiomatic Go, a language I had never used and knew nothing about. It took only a weekend, and at the end of the project I had reviewed and understood every line of generated code & was now competent enough to write my own simple Go projects without AI help. I went from skeptic to convert right then and there.

jerf · 3 days ago
I agree that the level of complexity of task it can do is likely to rise over time. I often talk about the "next generation" of AI that will actually be what we were promised LLMs would be, but that LLMs architecturally are just not suited for. I think the time is coming when AIs "truly" (for some definition of truly) will understand architecture and systems in a way that LLMs don't and really can't, and will be able to do a lot more things than they can now, though when that will be is hard to guess. Could be next year, or AI could stall out where it is now for the next 10. Nobody knows.

However, the information-theoretic limitation of expressing what you want and how anyone, AI or otherwise, could turn that into commits, is going to be quite the barrier, because that's fundamental to communication itself. I don't think the skill of "having a very, very precise and detailed understanding of the actual problem" is going anywhere any time soon.

jerf commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
tick_tock_tick · 3 days ago
> AI is only as smart as the human handling it.

I think I'm slowly coming around to this viewpoint too. I really just couldn't understand how so many people were having widely different experiences. AI isn't magic; how could I have expected all the people I've worked with who struggle to explain stuff to team members with near-perfect context to manage to get anything valuable across to an AI?

I was originally pretty optimistic that AI would allow most engineers to operate at a higher level, but it really seems like instead it's going to massively exacerbate the difference between an OK engineer and a great engineer. Not really sure how I feel about that yet, but at least I understand now why some people think the stuff is useless.

jerf · 3 days ago
I've been struggling to apply AI on any large scale at work. I was beginning to wonder if it was me.

But then my wife sort of handed me a project that previously I would have just said no to, a particular Android app for the family. I have analogues of all the various Android technologies under my belt; that is, I've used GUI toolkits, I've used general-purpose programming languages, I've used databases, etc. But with the possible exception of SQLite (and even that is accessed through an ORM), I don't know any of the specific technologies involved with Android now. I have never used Kotlin; I've got enough experience that I can pretty much piece it together when I'm reading it, but I can't write it. Never used the Android UI toolkit, services, permissions, media APIs, ORMs, build system, etc.

I know from many previous experiences that A: I could definitely learn how to do this but B: it would be a many-week project and in the end I wouldn't really be able to leverage any of the Android knowledge I would get for much else.

So I figured this was a good chance to take this stuff for a spin in a really hard way.

I'm about eight hours in and nearly done enough for the family; I need about another 2 hours to hit that mark, maybe 4 to really polish it. Probably another 8-12 hours and I'd have it brushed up to a rough commercial product level for a simple, single-purpose app. It's really impressive.

And I'm now convinced it's not just that I'm too old a fogey to pick it up, which is, you know, a bit of a relief.

It's just that it works really well in some domains, and not so much in others. My current work project is working through decades of organically-grown cruft owned by 5 different teams, most of which don't even have a person on them who understands the cruft in question, and trying to pull it all together into one system where it belongs. I've been able to use AI here and there for some stuff that is still pretty impressive, like translating some stuff into pseudocode for my reference, and AI-powered autocomplete is definitely impressive when it correctly guesses the next 10 lines I was going to type, effectively letter-for-letter. But I haven't gotten that large-scale win where I just type a tiny prompt in and see the outsized results from it.

I think that's because I'm working in a domain where the code I'm writing is already roughly the size of the prompt I'd have to give, at least in terms of the "payload" of the work I'm trying to do, because of the level of detail and maturity of the code base. There's no single sentence I can type that an AI can essentially decompress into 250 lines of code, pulling in the correct 4 new libraries, and adding it all to the build system the way that Gemini in Android Studio could decompress "I would like to store user settings with a UI to set the user's name, and then display it on the home page".

I recommend this approach to anyone who wants to give this stuff a fair shake - try it in a language and environment you know nothing about, so you aren't tempted to keep taking the wheel. The AI is almost the only tool I have in that environment, certainly the only one for writing code, so I'm forced to really exercise the AI.

jerf commented on Marines managed to get past an AI powered camera "undetected" by hiding in boxes   rudevulture.com/marines-m... · Posted by u/voxadam
lazide · 3 days ago
Why would the model know trees can’t walk?

Therein lies the rub.

jerf · 3 days ago
English breaks down here, but the model probably does "know" something more like "if the tree is here in this frame, in the next frame it will be there, give or take some waving in the wind". It doesn't know that "trees don't walk", just as it doesn't know that "trees don't levitate", "trees don't spontaneously turn into clowns", or an effectively infinite number of other things that trees don't do. What it possibly can do is realize that in frame 1 there was a tree, and then in frame 2 there was something the model didn't predict as a high-probability output of the next frame.

It isn't about knowing that trees don't walk, but that trees do behave in certain ways, and noticing that it is "surprised" when they fail to behave in the predicted ways, where "surprise" is something like "this is a very low probability output of my model of the next frame". It isn't necessary to enumerate all the ways the next frame was low-probability; it is enough to observe that it was not high-probability.
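"Surprise" here is just the negative log-probability of what was actually observed. A toy sketch, with invented frame probabilities and an invented alerting threshold:

  package main

  import (
      "fmt"
      "math"
  )

  // surprisal converts the model's probability for the frame it actually
  // saw into bits: the rarer the observation, the higher the score.
  func surprisal(p float64) float64 {
      return -math.Log2(p)
  }

  func main() {
      // Hypothetical next-frame probabilities from a video model.
      frames := []struct {
          desc string
          p    float64
      }{
          {"tree sways in the wind", 0.90},
          {"tree briefly occluded by a bird", 0.05},
          {"tree moves two metres sideways", 1e-9},
      }
      const threshold = 20.0 // bits; tuned against the false-positive rate

      for _, f := range frames {
          s := surprisal(f.p)
          note := ""
          if s > threshold {
              note = "  <- send a human to look"
          }
          fmt.Printf("%-32s %6.1f bits%s\n", f.desc, s, note)
      }
  }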

In a lot of cases this isn't necessarily that useful, but in a security context having a human take a look at a "very low probability series of video frames" will, if nothing else, teach the developers a lot about the real capability of the model. If it spits out a lot of false positives, that is itself very informative about what the model is "really" doing.

jerf commented on I did 98,000 Anki reviews. Anki is already dead   miguelconner.substack.com... · Posted by u/dothereading
NewsaHackO · 3 days ago
Hallucinations in LLMs are dangerous when learning; if you have some background, you can usually tell when LLMs go off the rails, but it would be unfortunate for you to commit an incorrect fact to memory at such a vulnerable time. It will be difficult to "uncommit" it at that point.
jerf · 3 days ago
I don't think the LLM value prop here is to build lots of cards. It's just to interact with the model conversationally. If the model is wrong in this one translation, I'm not going to exactly "commit it to memory", I'm just going to keep on carrying on. I don't know that the LLM's mistakes are particularly worse than the many and sundry other mistakes I'm already continuously making as a language learner anyhow.

If I could speak a foreign language as well as an 8B-parameter LLM, hallucinations and all, I'd be immensely ahead of where I am now. It's not like second languages aren't themselves often broken in somewhat similar ways.

jerf commented on Marines managed to get past an AI powered camera "undetected" by hiding in boxes   rudevulture.com/marines-m... · Posted by u/voxadam
jerf · 3 days ago
I wonder if one could extract a "surprisedness" value out of the AI, basically, "the extent to which my current input is not modeled successfully by my internal models". Giving the model a metaphorical "WTF, human, come look at this" might be pretty powerful for those walking cardboard boxes and trees, to add to the cases where the model knows something is wrong. Or it might false positive all the darned time. Hard to tell without trying.

u/jerf

Karma: 89538 · Cake day: October 13, 2008
About
http://www.jerf.org/iri , though infrequently updated

jerf@jerf.org , though be aware that I only really check email every few days now.

Permission for comment republication in HN collections granted, though please do drop a line to jerf@jerf.org so I know. :)

my public key: https://keybase.io/jerf; my proof: https://keybase.io/jerf/sigs/vL9FeVDSGtiDMBmXC4f_rCikI0n4jNfB-1PsNgUN-Is
