Readit News
kartoffelsaft commented on Librebox: An open source, Roblox-compatible game engine   github.com/librebox-devs/... · Posted by u/dedicateddev_hn
Wowfunhappy · 2 days ago
> I hope it won't get slaughtered by Roblox's legal team.

I'm not saying Roblox won't try, but this project strikes me as very obviously legal.

If legality was a spectrum, I'd rank this higher than VLC Media Player (patents) and way above an NES emulator. I suppose it'd be below Android, and Oracle did sue over Android.

(Disclaimer, I am not a lawyer, etc.)

kartoffelsaft · 2 days ago
Curious what makes you say it'd be less legally dubious than an emulator? To me, it seems to be on the same legal footing as an NES emulator, because you're basically 'emulating' the environment Roblox game code runs in. To be fair, if that intuition is correct, it would still be legal the way emulators are, provided the developers are careful.

(also not a lawyer)

kartoffelsaft commented on April Fools 2014: The *Real* Test Driven Development (2014)   testing.googleblog.com/20... · Posted by u/omot
NitpickLawyer · 11 days ago
To put things into perspective: DeepMind was founded in 2010 and bought by Google in 2014, the year of this "prank". 11 years later and ... here we are.

Also, a look at how our expectations / goalposts have moved. In 2010, one of the first "presentations" Hassabis gave at DeepMind had a few slides on AGI (from the movie/documentary "The Thinking Game"):

Quote from Shane Legg: "Our mission was to build an AGI - an artificial general intelligence, and so that means that we need a system which is general - it doesn't learn to do one specific thing. That's really key part of human intelligence, learn to do many many things".

Quote from Hassabis: "So, what is our mission? We summarise it as <Build the world's first general learning machine>. So we always stress the word general and learning here the key things."

And the key slide (that I think cements the difference between what AGI stood for then, vs. now):

AI - one task vs. AGI - many tasks

at human level intelligence.

----

I'm pretty sure that if we go by that definition, we're already there. I wish I had a magic time machine to show Legg and Hassabis Gemini 2.5/o3/whatever the top model is today, trained on "next token prediction" and performing on so many different levels: gold at the IMO, gold at the IOI, playing chess, writing code, debugging code, "solving" NLP, etc. I'm curious if they'd think the same.

But having had a slow ramp-up, seeing small models get bigger, getting to play with GPT-2, then GPT-3, then ChatGPT, I think it has changed our expectations and our views on what truly counts as AGI. And there's a bit of that famous quote, "AI is everything that hasn't been done before"...

kartoffelsaft · 11 days ago
I don't think what we have now fits that definition. LLMs are still narrowly good at language generation, and the "many" things they're good at are things that have canonical textual / linguistic representations (code, chess notation, etc.). Much of the existing AI that appears more general is really more specific models hooked together; for example, taking the output of an LLM and piping it into a TTS model. Since these pieces are easily replaceable, I struggle to call it one AI that can do many tasks.

Consider the human equivalent of that LLM->TTS example: when you're talking, you naturally emphasize certain words, and part of that is knowing not just what you want to say but why you want to say it. If you had a machine learning model where the speech module had insight into why the language model picked the words it did, plus vision so it knows who it's talking to and can pick the right tone, and the motor system had access to all of that too for gesturing, etc., then you'd have a single AI that was indeed generally solving a large variety of tasks. We have a little of that in some domains, but as it stands, most of what we have is lots of specific models that we've got talking to each other, falling a little short of human level wherever the interface between them is incomplete.

kartoffelsaft commented on Why is GitHub UI getting slower?   yoyo-code.com/why-is-gith... · Posted by u/lr0
kartoffelsaft · 20 days ago
Reminds me of a discussion Casey Muratori and Robert Martin had over clean code and its impact on performance... but not because of the subject matter. They were using GitHub as the medium for their discussion, and they ran into serious lag when typing paragraphs of just a few hundred characters (ctrl+f "emoji", ~1/4 of the way through):

https://github.com/unclebob/cmuratori-discussion/blob/main/c...

kartoffelsaft commented on Dear valued user, You have reached the error page for the error page   imgur.com/a/2H7HVcU... · Posted by u/Alex3917
hnlmorg · a month ago
You don’t know when that was written.
kartoffelsaft · a month ago
It had been written by (at the latest) September 2008:

https://googlesystem.blogspot.com/2008/09/best-gmail-error-m...

kartoffelsaft commented on 'Gwada negative': French scientists find new blood type in woman   lemonde.fr/en/science/art... · Posted by u/spidersouris
newsbinator · 2 months ago
Younger generations are now heavily into MBTI. And I mean heavily: you won't find a person under 35 or so who doesn't know their MBTI letters.
kartoffelsaft · 2 months ago
I am well below that age and I don't know my MBTI letters.
kartoffelsaft commented on LLMs are mirrors of operator skill   ghuntley.com/mirrors/... · Posted by u/ghuntley
ghuntley · 3 months ago
No, it is actually a critical skill. Employers will be looking for software engineers that can orchestrate their job function and these are the two key primitives to do that.
kartoffelsaft · 3 months ago
The way it's written suggests this is an important interview question for any software engineering position, and I'm guessing you agree, given that you call it critical.

But by the same logic, shouldn't we ask for the same knowledge of the Language Server Protocol and tools like tree-sitter? They're integral right now in the same way these new tools are expected to become (and already have become for many).

As I see it, knowing the internals of these tools might be the thing that makes the hire, but it isn't something you'd screen every candidate who comes through the door with. It's worth asking, but not "critical." Usage of these tools? Sure. But knowing how they're implemented is just one indicator of whether a developer is curious and willing to learn about their tools, and you need many such indicators to get an accurate assessment.

kartoffelsaft commented on Why Use Structured Errors in Rust Applications?   home.expurple.me/posts/wh... · Posted by u/todsacerdoti
kbolino · 3 months ago
> one must define a new error enum that exists only to wrap the values returned by two different fallible functions belonging to different libraries

Saying this must be done is awfully strong here. There are other tools at your disposal, like dyn Error and the anyhow crate.

kartoffelsaft · 3 months ago
It "must" be done in the sense that `dyn Error`, anyhow, and (if we include Zig's equivalent) inferred error sets mean something different from setting up an error enum. An error enum makes the possible failures explicit, which (most of) the other options discard by design, because they're built to handle any error.

Your options effectively come down to writing that enum yourself or using a macro that expands to it. Unless, of course, you're willing to have the function signature say "eh, this function just breaks sometimes. Could be anything, really."
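A minimal sketch of the trade-off, using only std types (the `ConfigError` enum and the `read_port_file`/`read_port_file_dyn` names are hypothetical, not from the article): the enum route names every possible failure in the signature, while the `Box<dyn Error>` route (which anyhow generalizes) erases that information.

```rust
use std::error::Error;
use std::fmt;
use std::num::ParseIntError;

// The enum exists only to wrap the error types of the two fallible
// operations this function composes: file I/O and integer parsing.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Io(e) => write!(f, "could not read config: {e}"),
            ConfigError::Parse(e) => write!(f, "bad port number: {e}"),
        }
    }
}

impl Error for ConfigError {}

// From impls let `?` convert each underlying error into our enum.
impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self { ConfigError::Io(e) }
}
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
}

// Explicit: the signature says exactly which failures can occur.
fn read_port_file(path: &str) -> Result<u16, ConfigError> {
    let raw = std::fs::read_to_string(path)?; // io::Error -> ConfigError
    let port = raw.trim().parse::<u16>()?;    // ParseIntError -> ConfigError
    Ok(port)
}

// Opaque: "this function just breaks sometimes. Could be anything."
fn read_port_file_dyn(path: &str) -> Result<u16, Box<dyn Error>> {
    Ok(std::fs::read_to_string(path)?.trim().parse::<u16>()?)
}
```

The caller of `read_port_file` can `match` on the variants and handle a missing file differently from a malformed number; the caller of `read_port_file_dyn` can only downcast or propagate.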

kartoffelsaft commented on After months of coding with LLMs, I'm going back to using my brain   albertofortin.com/writing... · Posted by u/a7fort
stevepotter · 3 months ago
I do a variety of things, including iOS and web. Like you mentioned, LLM results between the two are very different. I can't trust LLM output to even compile, much less work. Just last night, it told me to use an API called `CMVideoFormatDescriptionGetCameraIntrinsicMatrix`. That API is very interesting because it doesn't exist. It also did a great job of digging some deep holes when dealing with some tricky Swift 6 concurrency stuff. Meanwhile it generated an entire nextjs app that worked great on the first shot. It's all about that training data baby
kartoffelsaft · 3 months ago
Honestly, with all the HN debate over the merits of LLMs for generating code, I wish it were an unwritten rule that everyone state the stack they're using. It seems the people who rave about it creating a whole product line in a weekend are asking it to write a web interface using [popular JS framework] that connects to [ubiquitous database], and their app is a step or two away from being CRUD. Meanwhile, the people who say it's done nothing for them are writing against [proprietary in-house library from 2005].

The worst is the middle ground: stacks popular enough to be known but not popular enough for an LLM to know them well. I say worst because in these cases the facade that the LLM understands how to build your product will fall before the software's lifecycle ends (at least if you're vibe-coding).

For what it's worth, I've mostly been a hobbyist, but I'm getting close to graduating with a CS degree. I've avoided using LLMs for classwork because I don't want to rob myself of an education, but I've occasionally used them (or tried to) for personal, weird projects. I always give up on them because I like trying out niche languages that the LLM will just assume work like Python (e.g., most LLMs struggle with Zig in my experience).

kartoffelsaft commented on Why Bell Labs Worked   1517.substack.com/p/why-b... · Posted by u/areoform
jve · 3 months ago
They look like high-hanging fruits when you haven't yet reached them.

They look like low-hanging fruits when you have risen above them.

kartoffelsaft · 3 months ago
This seems written to sound like a profound piece of wisdom, but I find it hard not to interpret it as a very flowery way of saying "git gud." If that is indeed what you mean, that's fine, but it's still worth acknowledging that today's greater competition for funding means scientists today are not playing the same game scientists at Bell Labs were.
kartoffelsaft commented on I asked police to send me their public surveillance footage of my car   cardinalnews.org/2025/03/... · Posted by u/bookofjoe
dylan604 · 5 months ago
Then don't make the 3rd party a for-profit private company. Make it a new branch of the government. Make it part of the DoT or whichever agency is responsible for collecting taxes in your locale. Intra-agency operations would still be possible while removing direct access by LEOs. If you want your own information, pay the typical $25-type fee, after proving who you are, for printing/research/etc. when requesting official gov't records. If you're another gov't agency, provide a warrant granting access to civilian records.
kartoffelsaft · 5 months ago
The argument is not that this theoretical 3rd party would be a for-profit company, but that for-profit companies already exist that could serve that purpose, and that the new 3rd party wouldn't see much use because of it.

They almost certainly are willing to buy hordes of data off of Google/Facebook/etc.; it's useful data they're already negotiating to get. Why, then, would they want to put in the effort for a warrant on CCTV footage when the suspect's Google search and Maps history, which they're already getting, contains the same info and more? At best you're creating a small amount of competition for data brokers, and I question whether it's even that much.

u/kartoffelsaft

Karma: 38 · Cake day: October 28, 2024