Readit News
xpe commented on It seems that OpenAI is scraping [certificate transparency] logs   benjojo.co.uk/u/benjojo/h... · Posted by u/pavel_lishin
47282847 · 10 hours ago
I agree with your analysis but try not to agree with your conclusion, purely for my own mental hygiene: I believe one can retrain the pattern matching of one's brain for happier outcomes. If I let my brain judge this as a "failure" (judgment: "it is wrong"), I will either get sad about it (judgment: "... and I can't change it") or angry (judgment: "... and I can do something about it"). In cases such as this I prefer to accept it as is, so I try to rewrite my brain's rule to consider it a necessary part of life (judgment: "true/good/correct").
xpe · 9 hours ago
Ah, in case it didn't come across clearly, my conclusion isn't to blame the individuals. My assessment is to seek out better communication patterns, which is partly about "technology" and partly about culture (expectations). People could indeed learn not to act this way with a bit of subtle nudging, feedback, and mechanism design.

I'm also pretty content accepting the unpleasant parts of reality without spin or optimism. Sometimes the better choice is still crappy, after all ;) I think Oliver Burkeman makes a fun and thoughtful case in "The Antidote: Happiness for People Who Can't Stand Positive Thinking" https://www.goodreads.com/book/show/13721709-the-antidote

xpe commented on It seems that OpenAI is scraping [certificate transparency] logs   benjojo.co.uk/u/benjojo/h... · Posted by u/pavel_lishin
xpe · 10 hours ago
Looking around at the comments, I have a bird's-eye view. People are quite skilled at jumping to conclusions or assuming their POV is the only one. Consider this simplified scenario as an illustration:

    - X happened
    - Person P says "Ah, X happened."
    - Person Q interprets this in a particular way
        and says "Stop saying X is BAD!"
    - Person R, who already knows about X...
        (and indifferent to what others notice
         or might know or be interested in)
        ...says "(yawn)".
    - Person S narrowly looks at Person R and says
        "Oh, so you think Repugnant-X is ok?"
What a train wreck. Such failure modes are incredibly common. And preventable.* What a waste of collective hours of attention and thinking that we could be spending somewhere else.

See also: the difference between positive and normative; charitable interpretations; not jumping to conclusions; not yucking someone else's yum

* So preventable that I am questioning the wisdom of spending time with any communication technology that doesn't actively address these failure modes. There is no point in blaming individuals when such failures are a near statistical certainty.

xpe commented on It seems that OpenAI is scraping [certificate transparency] logs   benjojo.co.uk/u/benjojo/h... · Posted by u/pavel_lishin
pavel_lishin · 11 hours ago
What's the yawn for?
xpe · 11 hours ago
Presumably this is well-known among people that already know about this.

P.S. In the hopes of making this more than just a sarcastic comment, the question of "How do people bootstrap knowledge?" is kind of interesting. [1]

> To tackle a hard problem, it is often wise to reuse and recombine existing knowledge. Such an ability to bootstrap enables us to grow rich mental concepts despite limited cognitive resources. Here we present a computational model of conceptual bootstrapping. This model uses a dynamic conceptual repertoire that can cache and later reuse elements of earlier insights in principled ways, modelling learning as a series of compositional generalizations. This model predicts systematically different learned concepts when the same evidence is processed in different orders, without any extra assumptions about previous beliefs or background knowledge. Across four behavioural experiments (total n = 570), we demonstrate strong curriculum-order and conceptual garden-pathing effects that closely resemble our model predictions and differ from those of alternative accounts. Taken together, this work offers a computational account of how past experiences shape future conceptual discoveries and showcases the importance of curriculum design in human inductive concept inferences.

[1]: https://www.nature.com/articles/s41562-023-01719-1

xpe commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
mrwrong · a day ago
your suggestion is to treat it like a person but (surprise surprise) you don't have any specific ideas of how and why that works. your idea just sounds like marketing
xpe · a day ago
> your suggestion is to treat it like a person but (surprise surprise) you don't have any specific ideas of how and why that works. your idea just sounds like marketing

This is unnecessarily mean. Please review https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

xpe commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
brian_spiering · 5 days ago
That is interesting because of the Halo effect: a cognitive bias whereby, if a person is right in one area, we assume they will be right in another, unrelated area.

I try to temper my tendency to believe the Halo effect with Warren Buffett's notion of the Circle of Competence; there is often a very narrow domain where any person can be significantly knowledgeable.

xpe · 2 days ago
> A circle of competence is the subject area which matches a person's skills or expertise. The concept was developed by Warren Buffett and Charlie Munger as what they call a mental model, a codified form of business acumen, concerning the investment strategy of limiting one's financial investments in areas where an individual may have limited understanding or experience, while concentrating in areas where one has the greatest familiarity. -Wikipedia

> I try to temper my tendency to believe the Halo effect with Warren Buffett's notion of the Circle of Competence; there is often a very narrow domain where any person can be significantly knowledgeable. (commenter above)

Putting aside Buffett in particular, I'm wary of claims like "there is often a very narrow domain where any person can be significantly knowledgeable". How often? How narrow of a domain? Doesn't it depend on arbitrary definitions of what qualifies as a category? Is this a testable theory? Is it a predictive theory? What does empirical research and careful analysis show?

Putting that aside, there are useful mathematical ways to get an idea of some of the backing concepts without making assumptions about people, culture, education, etc. I'll cook one up now...

Start with 70K balls split evenly across seven colors: red, orange, yellow, green, blue, indigo, and violet. 1,000 people show up demanding balls. So we mix them up and randomly distribute 10 balls to every person. What does the distribution tend to look like? Which particulars would you tune, and which definitions would you choose, to make this problem map, at least roughly, onto assessing the diversity of human competence across different areas?

Note the colored balls example assumes independence between colors (subjects or skills or something). But in real life, there are often causally significant links between skills. For example, general reasoning ability improves performance in a lot of other subjects.
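The thought experiment above is easy to simulate. Here is a minimal sketch (the function name and parameters are my own illustration, not anything from the original comment): deal 10 balls each, without replacement, from an urn of 70,000 balls in seven equal colors, then tally how many distinct colors each person ends up holding.

```python
import random
from collections import Counter

COLORS = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

def deal(n_people=1000, per_person=10, per_color=10_000, seed=0):
    """Deal `per_person` balls (without replacement) to each of
    `n_people` people from an urn with `per_color` balls of each color."""
    rng = random.Random(seed)
    urn = [c for c in COLORS for _ in range(per_color)]
    rng.shuffle(urn)
    hands = [urn[i * per_person:(i + 1) * per_person]
             for i in range(n_people)]
    # Distribution of "breadth": distinct colors per hand.
    return Counter(len(set(hand)) for hand in hands)

print(deal())
```

With 10 draws over 7 equally likely colors, most hands end up holding 5 or 6 distinct colors; hands of only 1 or 2 colors are rare. Whether "breadth of colors held" is a fair stand-in for "breadth of human competence" is exactly the definitional question the comment raises.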

Then a goat exploded, because I don't know how to end this comment gracefully.


xpe commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
rr808 · 2 days ago
Thanks yes of course you're right, still frustrating. I'm nearing retirement so not really worried about job loss, just want to make use of the tools.
xpe · 2 days ago
I get that. I probably could have been much more succinct by saying this: We can consciously act in ways that reduce the frustration level even if the environment itself doesn't change. It usually takes time and patience, but not always. Sometimes a particular mindset shift is sufficient to make a frustration completely vanish almost immediately.

Some examples from my experience: (1) Many particular frustrations with LLMs vanish the more I learn about their internals. (2) Frustration with the cacophony of various RAG/graph-database tooling vanishes once I realize that there is an entire slice of VC money chasing these problems precisely because it is uncertain: the victors are not pre-ordained and ... [insert bad joke about vectors here]

xpe commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
rr808 · 2 days ago
It's super frustrating that there is no official guide. I hear lots of suggestions all the time, and who knows if they help or not. The best one recently is to tell the LLM to "act like a senior dev"; surely that is expected by default? Crazy times.
xpe · 2 days ago
When the world is complicated, entangled, and rapidly changing, would you expect there to be one centralized official guide?*

At the risk of sounding glib or paternalistic -- but I'm going to say it anyway, because once you "see it" it won't feel like a foreign idea being imposed on you -- there are ways that help to lower and even drop expectations.

How? To mention just one: good reading. Read "Be a new homunculus" [1]. To summarize: visualize yourself as the "thing that lives in your brain". Yes, this is nonsense, but try it anyway.

If accepting that "the world is changing faster than ever before" feels like too much, don't deny it. Maybe you are pissed off or anxious about AI. Maybe AI is being "heavily encouraged" for you (on you?) at work. Maybe you feel like we're living in an unsustainable state of affairs. Dig into that feeling, talk about it. See where it leads you. Burying these things isn't a viable long-term strategy.**

* There is an "awesome-*" GitHub repository collecting recommended resources for Claude Code [2], but it still requires a lot of curation and end-user experimentation. There are few easy answers in a dynamic, uncertain world.

** Yes I'm intentionally cracking the door open to "Job loss is scary. It is time to get real on this, including political activism."

[1]: https://mindingourway.com/be-a-new-homunculus/

[2]: https://github.com/hesreallyhim/awesome-claude-code

xpe commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
sigmar · 5 days ago
> The conclusion was that superforecasters' ability to filter out "noise" played a more significant role in improving accuracy than bias reduction or the efficient extraction of information.

>In February 2023, Superforecasters made better forecasts than readers of the Financial Times on eight out of nine questions that were resolved at the end of the year.[19] In July 2024, the Financial Times reported that Superforecasters "have consistently outperformed financial markets in predicting the Fed's next move"

>In particular, a 2015 study found that key predictors of forecasting accuracy were "cognitive ability [IQ], political knowledge, and open-mindedness".[23] Superforecasters "were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness".

I'm really not sure what you want me to take from this article? Do you contend that everyone has the same competency at forecasting stock movements?

xpe · 4 days ago
> I'm really not sure what you want me to take from this article?

I linked to the Wikipedia page as a way of pointing to the book Superforecasting by Tetlock and Gardner. If forecasting interests you, I recommend using it as a jumping-off point.

> Do you contend that everyone has the same competency at forecasting stock movements?

No, and I'm not sure why you are asking me this. Superforecasting does not make that claim.

> I'm really not sure what you want me to take from this article?

If you read the book and properly process and internalize its lessons, I predict you will view what you wrote above in a different light:

> Gotta auto grade every HN comment for how good it is at predicting stock market movement then check what the "most frequently correct" user is saying about the next 6 months.

Namely, you would have many reasons to doubt such a project from the outset and would pursue other more fruitful directions.

xpe commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
samdoesnothing · 5 days ago
That's the whole point of democracy: to prevent the ruling parties from doing wildly unpopular things. Unlike a dictatorship, where they can do anything (including good things that otherwise wouldn't happen in a democracy).

I know that "X is destroying democracy, vote for Y" has been a prevalent narrative lately, but is there any evidence that it's true? I get that it's death by a thousand cuts, or "one step at a time" as they say.

xpe · 4 days ago
> I know that "X is destroying democracy, vote for Y" has been a prevalent narrative lately, but is there any evidence that it's true? I get that it's death by a thousand cuts, or "one step at a time" as they say.

I suggest reading [1], [2], and [3]. From there, you'll probably have lots of background to pose your own research questions. According to [4], until you write about something, your thinking will be incomplete, and I tend to agree nearly all of the time.

[1]: https://en.wikipedia.org/wiki/Democratic_backsliding

[2]: https://hub.jhu.edu/2024/08/12/anne-applebaum-autocracy-inc/

[3]: https://carnegieendowment.org/research/2025/08/us-democratic...

[4]: "Neuroscientists, psychologists and other experts on thinking have very different ideas about how our brains work, but, as Levy writes: “no matter how internal processes are implemented, (you) need to understand the extent to which the mind is reliant upon external scaffolding.” (2011, 270) If there is one thing the experts agree on, then it is this: You have to externalise your ideas, you have to write. Richard Feynman stresses it as much as Benjamin Franklin. If we write, it is more likely that we understand what we read, remember what we learn and that our thoughts make sense." - Sönke Ahrens, "How to Take Smart Notes", p. 30

u/xpe

Karma: 4167 · Cake day: August 27, 2012
About
Comments and responses licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) Attribution requirement waived if quoting as a descendant comment on the Hacker News platform only.

^ scrapers, bots, etc., ... this applies to you.

Hopes and beliefs: (a) Many people are capable of speaking and thinking clearly; (b) We have better discussions when people explain where they are coming from and their points of view.

Note: I borrowed the license comment from /user?id=kelseyfrog.
