Readit News
ItsMattyG commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
pavlov · 16 days ago
A very interesting read.

My idea of these self-proclaimed rationalists was fifteen years out of date. I thought they were people who write wordy fan fiction, but it turns out they’ve reached the point of having subgroups that kill people and exorcise demons.

This must be how people who had read one Hubbard pulp novel in the 1950s felt decades later when they find out he’s running a full-blown religion now.

The article seems to try very hard to find something positive to say about these groups, and comes up with:

“Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.”

There’s nothing very unique about agreeing with the WHO, or thinking that building Skynet might be bad… (The rationalist Moses/Hubbard was 12 when that movie came out — the most impressionable age.) In the wider picture painted by the article, these presumed successes sound more like a case of a stopped clock being right twice a day.

ItsMattyG · 16 days ago
FWIW, my rationalist friends were warning about Covid before I had heard about it from anyone else, and talking about AI before it was on others' radar.
ItsMattyG commented on Operator research preview   openai.com/index/introduc... · Posted by u/meetpateltech
thrtythreeforty · 7 months ago
APIs have an MxN problem. N tools each need to implement M different APIs.

In nearly every case (that an end user cares about), an API will also have a GUI frontend. The GUI is discoverable, able to be authenticated against, definitely exists, and is generally usable by the lowest common denominator. Teaching the AI to use this generically solves the same problem as implementing support for a bunch of APIs, without the discoverability and existence problems. In many ways this is a horrific waste of compute, but it's also a generic MxN solution.

ItsMattyG · 7 months ago
But if you have an AI, then all that's needed to implement an API is documentation.
ItsMattyG commented on Operator research preview   openai.com/index/introduc... · Posted by u/meetpateltech
krapp · 7 months ago
McDonald's already tried having AI take orders and stopped when the AI did things like randomly add $250 of McNuggets or mistake ketchup for butter.

Note - because this is something which needs to be pointed out in any discussion of AI now - even though human beings also make mistakes this is still markedly less accurate than the average human employee.

ItsMattyG · 7 months ago
For now
ItsMattyG commented on OpenAI O3 breakthrough high score on ARC-AGI-PUB   arcprize.org/blog/oai-o3-... · Posted by u/maurycy
figure8 · 8 months ago
I have a very naive question.

Why is the ARC challenge difficult but coding problems are easy? The two examples they give for ARC (border width and square filling) are much simpler than pattern awareness I see simple models find in code everyday.

What am I misunderstanding? Is it that one is a visual grid context which is unfamiliar?

ItsMattyG · 8 months ago
Francois' (the creator of the ARC-AGI benchmark) whole point was that while they look the same, they're not. Coding is solving a familiar pattern in the same way (and fails when it's NOT doing that; it just looks like that doesn't happen because it's seen SO MANY patterns in code). But the point of ARC-AGI is to make each problem require generalizing in some new way.
ItsMattyG commented on How I build and run behavioral interviews   benkuhn.net/behavioral/... · Posted by u/surprisetalk
neilv · 2 years ago
> The second sentence (context/problem/solution) is important for helping the candidate keep their initial answer focused—otherwise, they are more likely to ramble for a long time and leave less time for you to... Dig into details

How about, instead of asking "Tell me about a time you...", and you presuming to understand the situation so well that you can make judgments about nuance based on the off-the-cuff example you asked for, and trying to cut them short from "rambling"...

You instead ask them "Say you have a situation like X; how would you approach it?" And you can change the situation by adding information, "What if the report responds Y?"

Then both people are operating from closer to similar information about the situation. (Though it's still possible that the interviewee understands something about these situations in general that the interviewer doesn't.)

This also avoids dredging up past unpleasant situations (someone with more experience will have handled more unpleasant situations, but that doesn't mean it doesn't invoke a somber mood, if they're not acting or oblivious).

It also means they don't have to also think about how much they can say under NDA and being discreet about personnel matters (while a poor interviewer might take hesitation or choosing words carefully as interviewee trying to put themselves in the best light or keep a fabricated story straight).

An expected objection to this approach of spinning a hypothetical situation is that candidates might just say what they think are the correct answers. But knowing the correct answers is at least half the problem. And what do you think many candidates are doing in the interview anyway, if they are the kind to know the correct answers but not follow them in real life?

ItsMattyG · 2 years ago
Because it empirically doesn't lead to as good hires as asking about past experiences.
ItsMattyG commented on Show HN: I made an app to use local AI as daily driver   recurse.chat/... · Posted by u/xyc
wodow · 2 years ago
> The main thing that makes ChatGPT's UI useful to me is the ability to change any of my prompts in the conversation & it will then go back to that part of the conversation and regenerate, while removing the rest of the conversation after that point.

Agreed, but what I would also really like (from this and ChatGPT) would be branching: take a conversation in two different directions from some point and retain the separate and shared history.

I'm not sure what the UI should be. Threads? (like mail or Usenet)

ItsMattyG · 2 years ago
ChatGPT does this. You just click an arrow and it will show you other branches.
ItsMattyG commented on Google pays publishers to test AI tool that scrapes sites to craft new content   adweek.com/media/google-p... · Posted by u/vincent_s
beezlebroxxxxxx · 2 years ago
"Handmade" or "manmade" is about to have a whole new relevance on the internet.

I'm sure I'm in the minority, but the mere suggestion that some purportedly creative "content" (ugh, that word sounds like saying "sausage meat") is AI generated makes me completely lose interest. Soon enough that might make me lose interest in whole swathes of the internet, but I can't deny that's been an ongoing process regardless for a while anyways.

ItsMattyG · 2 years ago
Interesting, for me this is only true as long as the AI generated content is worse.

There are some things where I really care about the human experience, but for much content it just matters to me whether it's a joy to read.

ItsMattyG commented on Sora: Creating video from text   openai.com/sora... · Posted by u/davidbarker
Hoasi · 2 years ago
> As of now, the models still need large amounts of human produced creative works for training.

That will likely always be the case. Even 100% synthetic data has to come from somewhere. Great synopsis! Working for hire to feed a machine that regurgitates variations of the missing data sounds dystopian. But here we are, almost there.

ItsMattyG · 2 years ago
Eventually models will likely get their creativity by:

1. Interacting with the randomness of the world

and

2. Thinking a lot, running in long thought loops, and seeing what they discover.

I don't expect them to need humans forever.

ItsMattyG commented on How Lego builds a new Lego set   theverge.com/c/23991049/l... · Posted by u/sohkamyung
ryukoposting · 2 years ago
> offering both fame and a small fortune — 1 percent of net sales — to anyone who can convince 10,000 peers and The Lego Group that their set deserves to exist

This isn't entirely true. Plenty of LEGO Ideas designs get to the 10k threshold, then LEGO vetoes them for one reason or another. The decision process is completely opaque; more often than not, they basically just say "the design didn't pass internal review." Never mind that most Ideas sets get a significant design overhaul before reaching production anyway.

ItsMattyG · 2 years ago
This is literally what the whole article was about. Not only does the quote itself contain that context ("and The Lego Group"), but the very next paragraph is: "And then… nothing. The Tintin votes dried up, and Lego rejected both his fan-favorite Avatar and Polar Express ideas. The company never says why it rejects an Ideas submission, only that deciding factors include everything from “playability” and “brand fit” to the difficulties in licensing another company’s IP."
ItsMattyG commented on Visual Anagrams: Generating optical illusions with diffusion models   dangeng.github.io/visual_... · Posted by u/beefman
dwighttk · 2 years ago
Every single one of the examples is like "yeah... I mean, I guess... sorta"

the penguin/giraffe is probably the best one. The old lady/dress barely looks like either.

ItsMattyG · 2 years ago
oh hmm, when I first saw the penguin/giraffe one, I thought "that looks like an upside-down penguin; where's the giraffe?" Whereas with others I immediately saw what it was trying to be.

u/ItsMattyG
Karma: 148 · Cake day: August 14, 2014