npinsker commented on Godot 4.6 Release: It's all about your flow   godotengine.org/releases/... · Posted by u/makepanic
socalgal2 · 12 days ago
> My son had tried Unity first, but the C# compile cycle was so slow that he kept getting out of the flow.

I'd like to know more about this. Were you comparing similar sized projects? I've only done very small projects in Unity and the cycle was near instant. Loading up some of their 3gig+ samples, there was an initial build that took 40+ mins but that's because it had 3gig of assets to process.

npinsker · 11 days ago
It’s probably not compilation, but “Domain Reloading” (https://docs.unity3d.com/2022.2/Documentation/Manual/DomainR...) which is laughably slow and on by default.

I think Unity does this because the same process is re-used for Play and Editor modes, whereas Godot does the normal thing and spawns a new process when testing.
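For anyone hitting this: Unity exposes "Enter Play Mode Settings" (Project Settings > Editor) to skip the domain reload, which makes entering Play mode near-instant at the cost of static state no longer being reset between runs. A minimal editor-script sketch using the `EditorSettings` API (assuming Unity 2019.3+; the menu path and defaults may differ between versions):

```csharp
// Editor-only: disable domain/scene reload when entering Play mode.
// Caveat: static fields and event subscriptions now persist across Play
// sessions, so code relying on them must reset state manually, e.g. via
// a [RuntimeInitializeOnLoadMethod]-annotated method.
#if UNITY_EDITOR
using UnityEditor;

public static class FastPlayMode
{
    [MenuItem("Tools/Enable Fast Play Mode")]
    public static void Enable()
    {
        EditorSettings.enterPlayModeOptionsEnabled = true;
        EditorSettings.enterPlayModeOptions =
            EnterPlayModeOptions.DisableDomainReload |
            EnterPlayModeOptions.DisableSceneReload;
    }
}
#endif
```

The `FastPlayMode` class and menu path are just illustrative names; the `EditorSettings.enterPlayModeOptions` flags are the documented knob.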

npinsker commented on How to turn 'sfo-jfk' into a suitable photo   approachwithalacrity.com/... · Posted by u/bblcla
dang · 13 days ago
[stub for offtopicness]

[sorry I messed up with that title! perils of not reading the articles closely]

npinsker · 13 days ago
The headline (currently: “Trying to craft AI images that are worth displaying to end users”) is misleading and changed from the original. Author isn’t crafting any AI images; they’re using AI in tandem with manual work to help choose from a set of human-authored images.
npinsker commented on GenAI, the snake eating its own tail   ybrikman.com/blog/2026/01... · Posted by u/brikis98
furyofantares · 21 days ago
The article feels very confused to me.

Example 1 is bad; StackOverflow had clearly plateaued and was well into freefall by the time ChatGPT was released.

Example 2 is apparently "open source" but it's actually just Tailwind which unfortunately had a very susceptible business model.

And I don't really think the framing here that it's eating its own tail makes sense.

It's also confusing to me why they're trying to solve the problem of it eating its own tail - there's a LOT of money being poured into the AI companies. They can try to solve that problem.

What I mean is - a snake eating its own tail is bad for the snake. It will kill it. But in this case the tail is something we humans valued and don't want eaten, regardless of the health of the snake. And the snake will probably find a way to become independent of the tail after it ate it, rather than die, which sucks for us if we valued the stuff the tail was made of, and of course makes the analogy totally nonsensical.

The actual solutions suggested here are not related to it eating its own tail anyway. They're related to the sentiment that the greed of AI companies needs to be reined in, they need to give back, and we need solutions to the fact that we're getting spammed with slop.

I guess the last part is the part that ties into it "eating its own tail", but really, why frame it that way? Framing it that way means it's a problem for AI companies. Let's be honest and say it's a problem for us and we want it solved for our own reasons.

npinsker · 21 days ago
“Well, Reddit is growing, which contradicts my point, but I really feel like it’s not”


npinsker commented on I doubt that anything resembling genuine AGI is within reach of current AI tools   mathstodon.xyz/@tao/11572... · Posted by u/gmays
mindcrime · 2 months ago
Terry Tao is a genius, and I am not. So I probably have no standing to claim to disagree with him. But I find this post less than fulfilling.

For starters, I think we can rightly ask what it means to say "genuine artificial general intelligence", as opposed to just "artificial general intelligence". Actually, I think it's fair to ask what "genuine artificial" $ANYTHING would be.

I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence". Something like that seems to be what a lot of people are saying when they talk about AI and make claims like "that's not real AI". But for myself, I reject the notion that we need "genuine artificial general intelligence" that works like human intelligence in order to say we have artificial general intelligence. Human intelligence is a nice existence proof that some sort of "general intelligence" is possible, and a nice example to model after, but the marquee sign does say artificial at the end of the day.

Beyond that... I know, I know - it's the oldest cliche in the world, but I will fall back on it because it's still valid, no matter how trite. We don't say "airplanes don't really fly" because they don't use the exact same mechanism as birds. And I don't see any reason to say that an AI system isn't "really intelligent" if it doesn't use the same mechanism as humans.

Now maybe I'm wrong and Terry meant something altogether different, and all of this is moot. But it felt worth writing this out, because I feel like a lot of commenters on this subject engage in a line of thinking like what is described above, and I think it's a poor way of viewing the issue no matter who is doing it.

npinsker · 2 months ago
> I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence".

I think he means "something that can discover new areas of mathematics".


u/npinsker

Karma: 665 · Cake day: May 22, 2018
About
I'm working on an indie game called Galactic Diner: https://www.dinergame.com/

Hoping to release in 2026 :)
