Readit News
ghostzilla commented on The impact of competition and DeepSeek on Nvidia   youtubetranscriptoptimize... · Posted by u/eigenvalue
rightbyte · 8 months ago
Unique, yes, but isn't their method open? I read something about a group replicating a smaller variant of their main model.
ghostzilla · 8 months ago
Which raises the question: if LLMs are an asset of such strategic value, why did China allow DeepSeek to be released?

I see two possibilities here: either the CCP is not as all-reaching as we think, or the value of the technology isn't critical, the release was cleared with the CCP, and it was maybe even timed to come right after Trump's announcement of American AI supremacy.

ghostzilla commented on Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers   apnews.com/article/trump-... · Posted by u/tedsanders
roenxi · 8 months ago
That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.

That said, this does look like dreadful policy at the first headline. There is a lot of money going into AI; adding more money from the US taxpayer is gratuitous. Although in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.

ghostzilla · 8 months ago
This seems more like a move designed to frighten China -- or force them to spend money making LLMs -- than an actual threat. The clues are that Trump ceremonially blessed the deal but did not promise money (SoftBank et al. will, supposedly), and then Musk said it's all fake because SoftBank doesn't have the money, and Altman countered that Musk should not be butthurt and should put America first. Who does that? I'm thinking, no one who has something real on his hands.
ghostzilla commented on What I Learned Failing to Finish a Game in 2024   georgeallen.dev/posts/202... · Posted by u/grgaln
snapcaster · 8 months ago
I have tons of friends who took his classes at CMU. As much as everything he says sounds good, I don't know a single person who has ever enjoyed a game he made. Because of that, I have to assume what he says is either fluff or wrong, even if I can't perceive why exactly.
ghostzilla · 8 months ago
That's interesting, it hadn't occurred to me to check his games. That said, I remember reading that Machiavelli was once given a territory to govern and he was terrible at it, despite The Prince. It may be a thing about teachers vs. doers.

THAT said, there are a lot of interesting things one can learn from John Carmack, so there's an exception to every rule.

ghostzilla commented on What I Learned Failing to Finish a Game in 2024   georgeallen.dev/posts/202... · Posted by u/grgaln
hypertexthero · 8 months ago
While thinking of making a game I’ve found these helpful:

1. The Art of Game Design, A Book of Lenses by Jesse Schell - https://schellgames.com/art-of-game-design

2. 20 Tips on Making Games by Jordan Mechner - https://www.jordanmechner.com/downloads/library/20tips.pdf

3. Liz England’s blog - https://lizengland.com/blog/

ghostzilla · 8 months ago
Jesse Schell's book is a great read beyond game design.

Thanks for the other links.

To leave something in return, here's something I read the other day and have kept thinking about (I'm designing a PvP motion-based game):

"In competitive games, there is little more valuable than knowing the mind of the opponent, which the Japanese call “yomi.”

As a side note, I would even argue that the “strategic depth” of a game should be defined almost entirely on its ability to support and reward yomi."

The Yomi Layer concept is a reminder that moves need to have counters. If you know what the opponent will do, you should generally have some way of dealing with that.

https://www.sirlin.net/ptw-book/7-spies-of-the-mind
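To make that concrete for myself, here's a toy sketch (hypothetical moves, nothing from the book) of a counter table as data, with a check that no move is an uncounterable dead end:

    // Toy counter table for a hypothetical move set.
    // Every move should have at least one answer, otherwise knowing
    // the opponent's mind (yomi) gives the defender nothing.
    const counters: Record<string, string[]> = {
      overhead: ["stand-block", "backdash"],
      sweep: ["jump", "low-block"],
      grab: ["jump", "throw-tech"],
      chargedFireball: [], // no answer yet: a yomi dead end, worth redesigning
    };

    for (const [move, answers] of Object.entries(counters)) {
      if (answers.length === 0) {
        console.warn(`"${move}" has no counter; reading it coming doesn't help.`);
      }
    }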

ghostzilla commented on Can LLMs write better code if you keep asking them to “write better code”?   minimaxir.com/2025/01/wri... · Posted by u/rcarmo
nextaccountic · 8 months ago
> In my experience the trouble with LLMs at the professional level is that they're almost as much work to prompt to get the right output as it would be to simply write the code.

Yeah. It's often said that reading (and understanding) code is harder than writing new code, but with LLMs you always have to read code written by someone else (something else).

There is also the adage that you should never write the most clever code you can, because understanding it later might prove too hard. So it's probably for the best that LLM code often isn't too clever, or else novices unable to write the solution from scratch will also be unable to understand it and assess whether it actually works.

ghostzilla · 8 months ago
Another adage is "code should be written for people to read, and only incidentally for machines to execute". This goes directly against code being written by machines.

I still use ChatGPT for small self-contained functions (e.g. the intersection of a line and a triangle), but I clearly mark the inside of the function as ChatGPT-made and record what the prompt was.
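For illustration, here's roughly what that looks like -- a standard Möller-Trumbore-style sketch written for this comment, not the actual ChatGPT output, and the prompt shown is hypothetical:

    type Vec3 = [number, number, number];

    const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    const cross = (a: Vec3, b: Vec3): Vec3 => [
      a[1] * b[2] - a[2] * b[1],
      a[2] * b[0] - a[0] * b[2],
      a[0] * b[1] - a[1] * b[0],
    ];
    const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    // --- BEGIN ChatGPT-generated (prompt: "Moller-Trumbore ray/triangle
    // intersection in TypeScript, return the distance t or null") ---
    function rayTriangle(orig: Vec3, dir: Vec3, v0: Vec3, v1: Vec3, v2: Vec3): number | null {
      const EPS = 1e-8;
      const e1 = sub(v1, v0);
      const e2 = sub(v2, v0);
      const p = cross(dir, e2);
      const det = dot(e1, p);
      if (Math.abs(det) < EPS) return null; // ray is parallel to the triangle
      const inv = 1 / det;
      const s = sub(orig, v0);
      const u = dot(s, p) * inv;
      if (u < 0 || u > 1) return null;
      const q = cross(s, e1);
      const v = dot(dir, q) * inv;
      if (v < 0 || u + v > 1) return null;
      const t = dot(e2, q) * inv;
      return t > EPS ? t : null; // distance along the ray, null if behind the origin
    }
    // --- END ChatGPT-generated ---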

ghostzilla commented on Please don't mention AI again   ludic.mataroa.blog/blog/i... · Posted by u/ludicity
immibis · a year ago
No. The mouse would just be a mouse. It wouldn't learn anything, because it's a mouse. It might chew on some of the books. Meanwhile, transformers do learn things, so there is obviously more to it than just the quantity of data.

(Why spend a mouse? Just sit a strawberry in a library, and if the hypothesis that the quantity of data is the only thing that matters holds, you'll have a super intelligent strawberry.)

ghostzilla · a year ago
Or a pebble; for a super intelligent pebble.

“God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man.” ― Ibn Arabi

ghostzilla commented on AI-powered conversion from Enzyme to React Testing Library   slack.engineering/balanci... · Posted by u/GavCo
ghostzilla · a year ago
> You don’t need unit tests if you have integration tests.

Which is why, as per Jim Coplien, most unit testing is waste.

But converting one type of unit test into another is a perfect showcase for AI-generated code. They could have even kept just the prompts in the source and regenerated the tests on every run, were it not for inaccuracy, temperature, and the high cost of running.
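For concreteness, here's roughly what such a conversion looks like for a hypothetical Counter component (not taken from the Slack post):

    import React from "react";
    import { mount } from "enzyme"; // assumes the usual Enzyme adapter setup
    import { render, screen, fireEvent } from "@testing-library/react";
    import "@testing-library/jest-dom";
    import Counter from "./Counter"; // hypothetical: a "+1" button and a count label

    // Before: the Enzyme test is coupled to DOM structure and class names.
    it("increments on click (Enzyme)", () => {
      const wrapper = mount(<Counter />);
      wrapper.find("button").simulate("click");
      expect(wrapper.find(".count").text()).toBe("1");
    });

    // After: the React Testing Library version asserts what the user actually sees.
    it("increments on click (RTL)", () => {
      render(<Counter />);
      fireEvent.click(screen.getByRole("button", { name: "+1" }));
      expect(screen.getByText("1")).toBeInTheDocument();
    });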

ghostzilla commented on Building an AI game studio: what we've learned so far   braindump.me/blog-posts/b... · Posted by u/FredrikNoren
hluska · a year ago
You come across as being extremely condescending. And I’m sure you make some good points, but I can’t find them behind the tone. It’s a shame because again, I’m sure you make good points.
ghostzilla · a year ago
On the internet, no one hears you being subtle. (Torvalds)

I'll add my own view: when you watch a movie, read a book, listen to a song, play a game... you CONNECT with the mind of the person who made it. When there is no mind, or the source is a dead, statistical amalgamation of countless fragments of other minds, there is nothing you'll want to connect to, nothing you'll want to squander precious hours of your life on.

And while you may be curious to see, once maybe, a movie that such an imaginary AGI-LLM has created from your prompt, no one else will have the slightest interest in seeing it. And vice versa. Which means there would be absolutely NO MONEY in that market. There would be no market.

ghostzilla commented on Today's regional conflicts resemble the ones that produced World War II   foreignaffairs.com/united... · Posted by u/keepamovin
drewcoo · 2 years ago
Agree.

Without the USSR, things look very different. Russia is not the USSR.

Also, the US is the main aggressor in the world today, as opposed to a distant secondary power in the early 20th century.

A lot of the comparisons in the article just don't make much sense. Except to a neocon with a neocon's reading of history and the present.

ghostzilla · 2 years ago
Certainly agree with you on that one.
ghostzilla commented on Conversational AI is a great tool for education   twitter.com/vishnuhx/stat... · Posted by u/vishnuharidas
makach · 2 years ago
I completely agree. The power of LLMs and conversational AI will give each student their own private tutor. Of course there are issues today, but the future will be wonderful for whoever is able to embrace this technology.

I've already used GPT to help me tutor my kids, and my kids use it themselves when they get stuck. They get unstuck faster. They are critical but also more willing to accept responses as fact; we discuss this regularly and they seem to be getting the point.

So many kids get left behind because a teacher is unable to spend time with them. How amazing will it be for each student to have their own supporting teacher?

Hopefully, we will be able to harness AI for the better.

ghostzilla · 2 years ago
I find it draining to have to be on the lookout constantly for hallucinations, or omissions a person wouldn't make. I imagine as long as I'm walking well known paths -- well known to many but not me -- I'm safe, but the moment I need nuances I can expect that one of those nuances is completely and convincingly made up, except I don't know which one.
