AIPedant commented on Making games in Go: 3 months without LLMs vs. 3 days with LLMs   marianogappa.github.io/so... · Posted by u/maloga
danjl · a day ago
The LLM started with a three-month head start, both in terms of code (using the previous game as a template) and, more importantly, all of the lessons and mistakes you made in the hand-coded pass.
AIPedant · a day ago
Yeah, I figured this was clickbait but my jaw still dropped a bit when I saw this:

  I cloned the backend for Truco and gave Claude a long prompt explaining the rules of Escoba and asking it to refactor the code to implement it.
How long would it take the human dev to refactor the code themselves? I think it's plausible that it would be longer than 3 days, but maybe not!

AIPedant commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
tshaddox · 2 days ago
> We don't know if AGI is even possible outside of a biological construct yet. This is key.

A discovery that AGI is impossible in principle to implement in an electronic computer would require a major fundamental discovery in physics that answers the question “what is the brain doing in order to implement general intelligence?”

AIPedant · 2 days ago
It is trivially true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment. Obviously this is cost-prohibitive and we don’t have even 0.1% of the data required to make the simulation. Maybe we could simulate every single neuron instead, but again it’ll take many decades to gather that data from living human brains, and it would still be extremely expensive computationally since we would need to simulate every protein and mRNA molecule across billions of neurons and glial cells.

So the question is whether human intelligence has higher-level primitives that can be implemented more efficiently. It's a bit like solving differential equations: is there a “symbolic solution,” or are we forced to go “numerical” no matter how clever we are?
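
To make the analogy concrete, here's a minimal sketch of the two routes for dy/dt = -y (SymPy and a hand-rolled Euler loop are just illustrative choices on my part, not anything from the thread):

  import sympy as sp

  # Symbolic route: exploit structure once and get a closed-form answer.
  t = sp.symbols("t")
  y = sp.Function("y")
  print(sp.dsolve(sp.Eq(y(t).diff(t), -y(t)), y(t), ics={y(0): 1}))  # Eq(y(t), exp(-t))

  # Numerical route: no insight required, but you pay per step, forever.
  def euler(f, y0, t0, t1, steps):
      """Crude forward-Euler integration of dy/dt = f(t, y)."""
      y_val, dt = y0, (t1 - t0) / steps
      for i in range(steps):
          y_val += dt * f(t0 + i * dt, y_val)
      return y_val

  print(euler(lambda t, y: -y, 1.0, 0.0, 5.0, 100_000))  # ~exp(-5), about 0.0067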

AIPedant commented on Optimizing our way through Metroid   antithesis.com/blog/2025/... · Posted by u/eatonphil
wwilson · 2 days ago
Haven’t tried Castlevania II, but here’s the first one: https://antithesis.com/blog/castlevania/
AIPedant · 2 days ago
This seems like a cool company and I don't want to nitpick too much, but gamers have no respect for history:

  Castlevania... [so] called because it is a Metroidvania game set in a Castle.
Ouch - this is precisely backwards. Metroidvanias are named after Metroid and Castlevania because those series practically defined the genre.

Also a bit frustrating because the first Castlevania itself isn't actually a metroidvania; it's a more conventional action-platformer. Castlevania II has non-linear exploration, lots of items to collect, and puzzle-solving, all like Metroid. So it's not too surprising that Antithesis had to do a lot of work to adapt their system to Metroid - but I wonder if that work means it can now handle Castlevania II without much extra development.

AIPedant commented on Being “Confidently Wrong” is holding AI back   promptql.io/blog/being-co... · Posted by u/tango12
j-krieger · 3 days ago
Never before have we had a combination of well and poison where polluting the former was this instantaneous and this easily achieved.

I've yet to see a convincing article for artificial training data.

AIPedant · 3 days ago
It does seem like it helps with math, but in a way that demonstrates the futility of the enterprise: "after training the LLM on 10,000,000 examples of K-8 arithmetic it is now superhuman up to 12 digits, after which it falls off a cliff. Also it demonstrably doesn't understand what 'four' means conceptually and it still fails on many trivial counting problems."
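
For a sense of how cheap that kind of data is to manufacture, here is a hypothetical generator (the function name and prompt format are invented for illustration; this isn't from any of the linked work):

  import random

  def make_addition_example(max_digits=12):
      """One synthetic grade-school addition problem as a prompt/answer pair."""
      n = random.randint(1, max_digits)
      a = random.randint(0, 10**n - 1)
      b = random.randint(0, 10**n - 1)
      return {"prompt": f"What is {a} + {b}?", "answer": str(a + b)}

  # Ten million of these are trivial to produce; generalizing past the
  # digit range you sampled is the part that doesn't come for free.
  dataset = [make_addition_example() for _ in range(10_000)]
  print(dataset[0])
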
AIPedant commented on AI Mode in Search gets new agentic features and expands globally   blog.google/products/sear... · Posted by u/meetpateltech
apwell23 · 4 days ago
You don't really need to be 100% sure that it's true in the vast majority of cases.

Edit for comment below: It's not about laziness for me. It's the displeasure of wading through the junk the internet has become. I just don't have the brain capacity or the smarts to outwit the scammers.

AIPedant · 4 days ago
I just don't understand being so cynical and lazy that you'll accept a meaningfully higher chance of being misinformed if it saves a few minutes of searching and reading[1]. Nobody is that busy.

[1] If the search takes more than a few minutes then the AI overview is almost guaranteed to be wrong or useless.

AIPedant commented on Tech, chip stock sell-off continues as AI bubble fears mount   finance.yahoo.com/news/te... · Posted by u/pera
mgh2 · 5 days ago
The unpredictable thing about bubbles is that you never know when they're going to pop, or how irrational the masses will get; until then it's a hot-potato game.
AIPedant · 5 days ago
This is true - the most compelling evidence we are in a bubble is not the content of this story (maybe it's just a day in the markets) but the triviality of the cause for hand-wringing. A somewhat disappointing product release from a single company should not strike investor dread across the entire sector. The tenor of the conversation changed dramatically over the weekend because bubbles are very thin and pop quickly.

That said, "GPT-5 will not be any better than competitors' products, demonstrating OpenAI was bluffing about AGI and destroying investor exuberance" was a very specific prediction made by (for example) Gary Marcus.

AIPedant commented on As Alaska's salmon plummet, scientists home in on the killer   science.org/content/artic... · Posted by u/rbanffy
tzs · 6 days ago
Has the headline been changed since you commented?

The headline on HN at the moment is "As Alaska's salmon plummet, scientists home in on the killer". The headline on the article itself is "As salmon in Alaska plummet, scientists home in on a killer". I don't see any way to read those as suggesting science killed the salmon.

AIPedant · 6 days ago
Yes - the submission title used to be

  As Alaska's salmon plummet, scientists home in on the killer - Science - AAAS
seemingly a goofy copy-paste thing.

AIPedant commented on Anna's Archive: An Update from the Team   annas-archive.org/blog/an... · Posted by u/jerheinze
Vektorceraptor · 6 days ago
Then show me the easily available "information on the language tree" that solves the unsolved problems in science. Btw, books are not mere information; they are also products of effort, sacrifice, and intention. They are also embedded in an economic system of producers of paper, books, ink, transport, and what not.

So you are either poor or too lazy to buy a book from the store. But this doesn't justify mind theft or its distribution.

AIPedant · 6 days ago
My comment was sarcastic.
AIPedant commented on A general Fortran code for solutions of problems in space mechanics [pdf]   jonathanadams.pro/blog-ar... · Posted by u/keepamovin
nyc111 · 7 days ago
It looks like they chose to use the "universal gravitational constant" "k" instead of Newton's constant, "G": p.23, "k^2 = universal gravitational constant, 1.32452139x10^20, m^3/(sec^2)(sun mass units)"

I think "k" was also known as "Gaussian gravitational constant" https://en.wikipedia.org/wiki/Gaussian_gravitational_constan...

But the value and unit of "k" given on the Wikipedia page are different. Do you know what the NASA document means by "universal gravitational constant" in the modern sense?

AIPedant · 6 days ago
It's just regular old G, defined in mass-of-sun units: https://en.m.wikipedia.org/wiki/Gravitational_constant (fourth item in the first table: NASA also uses meters whereas Wiki uses km)

Gauss's constant k is defined as sqrt(G), but for a while the international standard was to define k and then compute G as k^2, which is why NASA refers to it that way.
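
As a rough sanity check of that relationship (my own back-of-the-envelope numbers, using the standard defined values of k and the AU rather than anything from the NASA report): squaring k and converting AU^3/day^2 to m^3/s^2 lands within a few tenths of a percent of the value quoted on p.23, the small gap presumably reflecting the older astronomical constants of the report's era.

  # Gauss's k is defined in AU, days, and solar masses; squaring it and
  # converting units should recover "G in mass-of-sun units", i.e. GM_sun.
  K_GAUSS = 0.01720209895    # AU^(3/2) / (day * sqrt(solar mass)), defined value
  AU_M = 1.495978707e11      # metres per astronomical unit (IAU 2012)
  DAY_S = 86400.0            # seconds per day

  gm_sun = K_GAUSS**2 * AU_M**3 / DAY_S**2
  print(f"{gm_sun:.6e}")     # ~1.3271e20 m^3/s^2, vs. 1.32452139e20 in the report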

AIPedant commented on Anna's Archive: An Update from the Team   annas-archive.org/blog/an... · Posted by u/jerheinze
Vektorceraptor · 7 days ago
So what about the authors and creators of the works? They did it for free?
AIPedant · 7 days ago
Information and well-crafted sentences are available on the Language Tree, easily plucked by anyone at zero cost. It's greedy for those so-called novelists and subject matter experts to expect a living wage.

"Information wants to be free," which means that any cost of producing that information can be abstracted away due to ideological inconvenience.
