Readit News
golly_ned commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
golly_ned · 4 days ago
Of all the languages one could accuse of being hermetically designed in an ivory tower, Go would be the second-least likely.
golly_ned commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
Lerc · 11 days ago
>if you think AI could have moral status in the future

I think the negative reactions are because they see this and want to make their pre-emptive attack now.

The depth of feeling from so many on this issue suggests that they find even the suggestion of machine intelligence offensive.

I have seen so many people complaining about AI hype and the dangers of big tech show their hand by declaring that thinking algorithms are outright impossible. There are legitimate issues with corporate control of AI, information, and the ability to automate determinations about individuals, but I don't think they are being addressed, because of this driving assertion that they cannot be thinking.

Few people are saying they are thinking. Some are saying they might be, in some way. Anthropic are not (despite their name) anthropomorphising the AI in the sense where anthropomorphism implies mistaking actions that resemble human behaviour for actions driven by the same intentional forces. Anthropic's claims state more explicitly that they have enough evidence to say they cannot rule out concerns for its welfare. They are not misinterpreting signs; they are interpreting them and claiming that you can't definitively rule it out.

golly_ned · 10 days ago
You'd have to commit yourself to believing a massive number of implausible things in order to address the remote possibility that AI consciousness is plausible.

If there weren't a long history of science-fiction going back to the ancients about humans creating intelligent human-like things, we wouldn't be taking this possibility seriously. Couching language in uncertainty and addressing possibility still implies such a possibility is worth addressing.

It's not right to assume that the negative reactions are due to offense (over, say, the uniqueness of humanity) rather than from recognizing that the idea of AI consciousness is absurdly improbable, and that otherwise intelligent people are fooling themselves into believing a fiction to explain this technology's emergent behavior, which we can't currently fully explain.

It's a kind of religion taking itself too seriously -- model welfare, long-termism, the existential threat of AI -- it's enormously flattering to AI technologists to believe humanity's existence or non-existence, and the existence or non-existence of trillions of future persons, rests almost entirely on the work this small group of people do over the course of their lifetimes.

golly_ned commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
conscion · 10 days ago
I don't think they're confused, I think they're approaching it as general AI research due to the uncertainty of how the models might improve in the future.

They even call this out a couple times during the intro:

> This feature was developed primarily as part of our exploratory work on potential AI welfare

> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future

golly_ned · 10 days ago
I take good care of my pet rock for the same reason. In case it comes alive I don't want it to bash my skull in.
golly_ned commented on Ask HN: How do you tune your personality to get better at interviews?    · Posted by u/_swfb
golly_ned · 12 days ago
Speaking from my experiences in dozens of debriefs: I don't think I can recall a case where an otherwise good candidate was rejected for something like not liking someone's personality.

There might be a middle ground between "technical skills" and "personality", though, which we do take into account, falling under "soft skills", which may be affected by certain personality traits. Things like polling the interviewer for their thoughts, asking thoughtful questions, being curious about the source of disagreement or misunderstanding, not being dogmatic, and so on. I think it can be harder to demonstrate these kinds of skills with certain personality types -- I used to be very nervous in interviews, and it wasn't always easy to have the presence of mind to exercise these soft skills.

But even still, at least in technical interviews like programming or system design (as opposed to cross-functional/manager/tech leadership interviews), I've found it relatively rare for a candidate to be rejected for 'soft skill' failures when the right signals are there for technical strength.

golly_ned commented on POML: Prompt Orchestration Markup Language   github.com/microsoft/poml... · Posted by u/avestura
ultmaster · 16 days ago
I'm the sole code contributor of POML, except maybe for Codex and cc. I think I've found where all those GitHub stars suddenly came from. :)

I'm from a small group under Microsoft Research. POML originally came from a research idea: a prompt should have a view layer, like the traditional MVC architecture in frontend systems. The view layer should take care of the data, the styles, and the rendering logic, so that the user no longer needs to care how some table gets rendered, how to present few-shot examples, or how to reformat the whole prompt with another syntax (e.g., from Markdown to XML).

I have to admit that I spent so much time making POML work well with VSCode, building all the auto-completion, preview, and hover features, that the codebase is almost becoming a monster for an individual developer to handle. The outside environment is also changing drastically: the rise of agentic AI, tool calls, response formats. The models today are no longer as sensitive to small changes in prompt format as they used to be. AI-aided programming can simply give you code to read PDFs and Excel files and render them in any style you want. With all that in mind, I used to feel hopeless about POML.

Nevertheless, after several months of working on another project, I recently noticed that the view layer can be more than just a view layer. With the proper user interface (e.g., a VSCode live preview), it can deliver a very smooth experience in prompt debugging, especially in a multi-prompt agent workflow. I also noticed that the "orchestration" idea can go beyond XML-like code. I'll share more details when I have a tutorial / screenshots to share.

Going through this thread, I saw a lot of thoughts that once went through my mind. We love Markdown. We love template engines like Jinja. We need those response formats. I'm wondering what the missing piece is here. I've spent so much time writing prompts and building agents in the past few months. What are my biggest pain points?

I'm quite surprised that the news hit me before I was ready to hit the news. If you have tried POML, please send me feedback. I'll see what I can do; or maybe we'll end up not needing a prompt language at all.

golly_ned · 16 days ago
It strikes me as a massive anti-pattern to have one developer be the sole contributor of an open-source project sponsored by a $3T company. It doesn't speak well to its longevity or the strength of the sponsorship Microsoft's putting behind it in practice.
golly_ned commented on How AI conquered the US economy: A visual FAQ   derekthompson.org/p/how-a... · Posted by u/rbanffy
OtherShrezzing · 19 days ago
>Without AI, US economic growth would be meager.

The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic. The article follows on with:

>In the last two years, about 60 percent of the stock market’s growth has come from AI-related companies, such as Microsoft, Nvidia, and Meta.

Which is a statement that's been broadly true since 2020, long before ChatGPT started the current boom. We had the Magnificent Seven, and before that the FAANG group. The US stock market has been tightly concentrated around a few small groups for decades now.

>You see it in the business data. According to Stripe, firms that self-describe as “AI companies” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.

The current Venn diagram of "startups" and "AI companies" is two mostly concentric circles. Again, you could have written the following statement at any time in the last four decades:

> According to [datasource], firms that self-describe as “startups” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.

golly_ned · 19 days ago
Derek Thompson is not well suited to this kind of work. He is much better suited to his usual lame, predictable, centrist, lukewarm commentary on tired political and social topics.
golly_ned commented on Build Your Own Lisp   buildyourownlisp.com/... · Posted by u/lemonberry
golly_ned · 21 days ago
Are there any such build-a-lisp books/guides using modern c++?
golly_ned commented on A Python dict that can report which keys you did not use   peterbe.com/plog/a-python... · Posted by u/gilad
golly_ned · a month ago
I have a similar use case and this idea also occurred to me.

However: the dict in this case would also contain dataclasses, and I'd be interested in finding out exactly which attributes within those dataclasses were accessed. I'd also want to mark all attributes of a dataclass as accessed whenever the parent dataclass itself is accessed, and, since those dataclasses are config objects, do the same for their own children, so that the topmost dictionary ends up with a tree of all accessed keys.

I couldn’t figure out how to do that, but I’m open to ideas.
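For what it's worth, one possible sketch of the idea (hypothetical names, not a complete solution): a dict that logs dotted access paths and wraps any dataclass value in a proxy that keeps logging as you drill down.

```python
from dataclasses import dataclass, is_dataclass

class _Tracked:
    """Proxy that records dotted attribute paths read from a dataclass tree."""
    def __init__(self, obj, paths, prefix):
        object.__setattr__(self, "_obj", obj)
        object.__setattr__(self, "_paths", paths)
        object.__setattr__(self, "_prefix", prefix)

    def __getattr__(self, name):
        value = getattr(self._obj, name)
        path = self._prefix + name
        self._paths.add(path)
        if is_dataclass(value):
            # Drilling into a nested dataclass extends the dotted path.
            return _Tracked(value, self._paths, path + ".")
        return value

class TrackingDict(dict):
    """Top-level dict that records which keys (and nested attributes) were read."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.paths = set()

    def __getitem__(self, key):
        value = super().__getitem__(key)
        self.paths.add(key)
        if is_dataclass(value):
            return _Tracked(value, self.paths, key + ".")
        return value

# Hypothetical config shapes for illustration:
@dataclass
class Db:
    host: str
    port: int

@dataclass
class Config:
    db: Db
    debug: bool

cfg = TrackingDict(config=Config(Db("localhost", 5432), True))
cfg["config"].db.host
cfg["config"].debug
# cfg.paths → {'config', 'config.db', 'config.db.host', 'config.debug'}
```

The "mark the whole subtree as accessed" part could then be a post-processing step: whenever a path like `config.db` appears without deeper paths, expand it into all of that dataclass's field paths via `dataclasses.fields`.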

golly_ned commented on Study mode   openai.com/index/chatgpt-... · Posted by u/meetpateltech
marcosdumay · a month ago
> and you want it to find reputable sources

Ask it for sources. The two things LLMs excel at are filling in sources for a claim you give them (lots will be made up, but there isn't anything better out there) and giving you queries you can search for a description you give them.

golly_ned · a month ago
It often invents sources. At least for me.
golly_ned commented on Keep Pydantic out of your Domain Layer   coderik.nl/posts/keep-pyd... · Posted by u/erikvdven
golly_ned · a month ago
I still don’t quite get the motivation for “don’t use pydantic except at the border” — it sounds like “you don’t need it”, which might be true. But the article then adds dacite to translate between pydantic at the border and plain Python objects internally. What exactly is wrong with using pydantic internally too?
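For context, the pattern being questioned looks roughly like this. The names here are hypothetical, and plain stdlib code stands in for what pydantic would do at the border and what dacite automates internally:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class User:
    """Domain object: a plain dataclass, with no pydantic dependency."""
    name: str
    age: int

def to_domain(cls, data):
    """Rough stand-in for dacite.from_dict: keep only the fields cls declares."""
    allowed = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in allowed})

# At the border, a pydantic model would validate the raw payload;
# here a dict stands in for the already-parsed request body.
payload = {"name": "Ada", "age": 36, "signup_source": "api"}
user = to_domain(User, payload)
# user → User(name='Ada', age=36)
```

The argument for the split is that the domain layer stays importable without pydantic; the question above is whether that extra translation step buys anything in practice.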
