Readit News
khafra commented on Julia   borretti.me/fiction/julia... · Posted by u/ashergill
Tarq0n · 7 days ago
I adore Peter Watts, so I'll be checking out Smith then!

Watts is like brain candy, keeps my mind buzzing from all the ideas for weeks. Charles Stross can have the same effect, a sort of future shock.

khafra · 7 days ago
Cordwainer Smith's style and subject matter are quite different from Watts's; I felt this story was like a combination of the two. So, if you would have liked this story even if it were less eschatologically cynical and had more of a golden-age setting, you'll probably like Smith!
khafra commented on Julia   borretti.me/fiction/julia... · Posted by u/ashergill
NuclearPM · 8 days ago
Needs a twist or a reason to care about the characters.
khafra · 8 days ago
They're the last living humans, and the last human-derived mind?

I like Cordwainer Smith and Peter Watts; so I really liked this blend of their styles and subjects.

khafra commented on AISLE’s autonomous analyzer found all CVEs in the January OpenSSL release   aisle.com/blog/aisle-disc... · Posted by u/mmsc
bsimpson · 14 days ago
I wonder what adoption would actually look like.

It reminds me of Google Dart, which was originally pitched as an alternate language that enabled web programming in the style Google likes (strong types etc.). There was a loud cry of scope creep from implementors and undo market influence in places like Hacker News. It was so poorly received that Google rescinded the proposal to make it a peer language to JavaScript.

Granted, the interests point in different directions for security software vs. a mainstream platform. Still, audiences are quick to question the motives of companies that have the scale to invest in something like a net-new security runtime.

khafra · 14 days ago
> undo market influence

Pointless nitpick, but you want "undue market influence." "Undo market influence" is what the FTC orders when they decide there's monopolistic practices going on.

khafra commented on Don't fall into the anti-AI hype   antirez.com/news/158... · Posted by u/todsacerdoti
totallykvothe · a month ago
I don't understand the stance that AI currently is able to automate away non-trivial coding tasks. I've tried this consistently since GPT 3.5 came out, with every single SOTA model up to GPT 5.1 Codex Max and Opus 4.5. Every single time, I get something that works, yes, but then when I start self-reviewing the code, preparing to submit it to coworkers, I end up rewriting about 70% of the thing. So many important details are subpar about the AI solution, and many times fundamental architectural issues cripple any attempt at prompting my way out of it, even though I've been quite involved step-by-step through the whole prototyping phase.

I just have to conclude 1 of 2 things:

1) I'm not good at prompting, even though I am one of the earliest adopters of AI for coding that I know, and have been consistent for years. So I find this hard to accept.

2) Other people are just less picky than I am, or they have a less thorough review culture that lets subpar code slide more often.

I'm not sure what else I can take from the situation. For context, I work on a 15-year-old Java Spring + React (with some old pages still in Thymeleaf) web application. There are many sub-services, two separate databases, and the application also needs a two-way interface with customer hardware. So, not a simple project, but still. I can't imagine it's way more complicated than most enterprise/legacy projects...

khafra · a month ago
> Non-trivial coding tasks

A coding agent just beat every human in the AtCoder Heuristic Contest, an optimization competition. It also beat the solution that the contest's production team put together. https://sakana.ai/ahc058/

It's not enterprise-grade software, but it's not a CRUD app with thousands of examples on GitHub, either.

khafra commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
Workaccount2 · 2 months ago
Using my parents as a reference: they just thought it was neat when I showed them GPT-4 years ago. My jaw was on the floor for weeks, but most regular folks I showed it to had a pretty "oh, that's kinda neat" response.

Technology is already so insane and advanced that most people just take it as magic inside boxes, so nothing is surprising anymore. It's all equally incomprehensible already.

khafra · 2 months ago
LLMs are an especially tough case, because the field of AI had to spend sixty years telling people that real AI was nothing like what you saw in the comics and movies; and now we have real AI that presents pretty much exactly like what you used to see in the comics and movies.
khafra commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
popinman322 · 2 months ago
It doesn't look like the code anonymizes usernames when sending the thread for grading. This likely induces bias in the grades based on past/current prevailing opinions of certain users. It would be interesting to see the whole thing done again but this time randomly re-assigning usernames, to assess bias, and also with procedurally generated pseudonyms, to see whether the bias can be removed that way.

I'd expect de-biasing would deflate grades for well known users.

It might also be interesting to use a search-grounded model that provides citations for its grading claims. Gemini models have access to this via their API, for example.

khafra · 2 months ago
You can't anonymize well-known users' comments from an LLM: https://gwern.net/doc/statistics/stylometry/truesight/index
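For what it's worth, the re-assignment experiment proposed above is easy to sketch. This is a hypothetical illustration (the grading script's real data structures are unknown, and the comment data here is invented): each distinct username in a thread is mapped to a stable procedural pseudonym before the thread is sent off for grading, so reply structure stays readable while surface identity cues are removed.

```python
import hashlib


def pseudonymize(comments, salt="run-1"):
    """Replace each distinct username with a stable procedural pseudonym.

    `comments` is a list of (username, text) pairs. Within one run, the
    same username always maps to the same pseudonym, so the grader can
    still follow who is replying to whom. Changing `salt` re-randomizes
    the mapping for a fresh de-biasing run.
    """
    mapping = {}
    anonymized = []
    for user, text in comments:
        if user not in mapping:
            digest = hashlib.sha256((salt + user).encode()).hexdigest()[:8]
            mapping[user] = f"user_{digest}"
        anonymized.append((mapping[user], text))
    return anonymized


# Invented example thread: same user appears twice, mapping stays stable.
thread = [("alice", "Looks promising."),
          ("bob", "Disagree, here's why..."),
          ("alice", "Fair point.")]
print(pseudonymize(thread))
```

Of course, per the truesight result linked above, this only strips the literal username; a sufficiently capable model may still re-identify prolific authors from style alone.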
khafra commented on How elites could shape mass preferences as AI reduces persuasion costs   arxiv.org/abs/2512.04047... · Posted by u/50kIters
PaulHoule · 2 months ago
An essay by Converse in this volume

https://www.amazon.com/Ideology-Discontent-Clifford-Geertz/d... [1]

calls into question whether the public has an opinion at all. I was thinking about the example of tariffs: most people are going on bellyfeel, so you see maybe 38% net positive on tariffs

https://www.pewresearch.org/politics/2025/08/14/trumps-tarif...

If you broke it down in terms of interest groups on a "one dollar, one vote" basis, the net positive has to be a lot worse: to the retail, services, and construction sectors, tariffs are just a cost without any benefits; even most manufacturers are on the fence because they import intermediate goods and want access to foreign markets. The only sectors strongly for it that I can suss out are steel and aluminum manufacturers, who are 2% or so of GDP.

The public and the interest groups are on the same side of 50%, so there is no contradiction, but in this particular case I think the interest groups collectively have a more rational understanding of how tariffs affect the economy than "the people" do. As Habermas points out, it's quite problematic to give people who don't really know a lot a say about things, even though it is absolutely necessary that people feel heard.

[1] Interestingly, this book came out in 1964, just before all hell broke loose in terms of Vietnam, counterculture, black nationalism, etc. -- right when discontent went from hypothetical to very real.

khafra · 2 months ago
The natural solution is futarchy: vote on values, bet on beliefs. Everybody knows that, all else being equal, they want higher GDP per capita, a better Gini coefficient, a higher happiness index. Only the experts know whether tariffs will help produce this.

So, instead of having everyone vote on tariffs (or vote for a whimsical strongman who will implement tariffs), have everyone vote for the package of metrics they want to hit. Then let experts propose policy packages to achieve those metrics, and let everyone bet on which policies will actually hit them.

Bullshit gets heavily taxed, and the beliefs of people who actually know the likely outcomes will be what guide the nation.
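As a toy illustration (policy names and forecast numbers are invented, not from any real market), the futarchy decision rule reduces to: voters fix a welfare metric, conditional prediction markets price each proposed policy's expected effect on that metric, and the highest-priced policy is enacted.

```python
def choose_policy(market_forecasts):
    """Pick the policy whose conditional prediction market forecasts
    the best value of the voted-on metric.

    `market_forecasts` maps policy name -> market-implied value of the
    metric (here: GDP-per-capita growth, %) conditional on enactment.
    """
    return max(market_forecasts, key=market_forecasts.get)


# Hypothetical market-implied GDP-per-capita growth for each policy.
forecasts = {"broad tariffs": -0.4, "status quo": 1.1, "trade deal": 1.8}
print(choose_policy(forecasts))  # -> trade deal
```

The taxing of bullshit happens in the market itself: anyone who bets on a wrong forecast loses money to better-informed traders, so persistent mispricing is self-correcting in a way that one-person-one-vote polling is not.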

khafra commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
echelon · 2 months ago
Why is profit bad? You can be open source, ethical, and for-profit.
khafra · 2 months ago
If you start out as a non-profit, and pull a bunch of shady shenanigans in order to convert to a for-profit, claiming to be ethical after that is a bit of a hard sell.
khafra commented on AI Is Breaking the Moral Foundation of Modern Society   eyeofthesquid.com/ai-is-b... · Posted by u/TinyBig
khafra · 2 months ago
This piece is wildly optimistic about the outcomes likely from AI on par with the smartest humans, let alone smarter than that. The author seems to think that widespread disbelief in the legitimacy of the system could make a difference, in such a world.
khafra commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
nullbio · 3 months ago
When will folks stop trusting Palantir-partnered Anthropic is probably a better question.

Anthropic has weaponized the safety narrative into a marketing and political tool, and it is quite clear that they're pushing this narrative both for publicity from media that love the doomer narrative because it brings in ad-revenue, and for regulatory capture reasons.

Their intentions are obviously self-motivated, or they wouldn't be partnering with a company that openly prides itself on dystopian-level spying and surveillance of the world.

OpenAI aren't the good guys either, but I wish people would stop pretending like Anthropic are.

khafra · 3 months ago
All of the leading labs are on track to kill everyone, even Anthropic. Unlike the other labs, Anthropic takes reasonable precautions, and strives for reasonable transparency when it doesn't conflict with their precautions; which is wholly inadequate for the danger and will get everyone killed. But if reality graded on a curve, Anthropic would be a solid B+ to A-.

u/khafra

Karma: 3530 · Cake day: May 23, 2008
About
Morituri nolumus mori