Readit News logoReadit News
iliane5 commented on Xzbot: Notes, honeypot, and exploit demo for the xz backdoor   github.com/amlweems/xzbot... · Posted by u/q3k
anarazel · 2 years ago
> It was reported by an MS engineer who happens to be involved in another OSS project.

I view it as being OSS or postgresql dev that happens to work at microsoft. I've been doing the former for much longer (starting somewhere between 2005 and 2008, depending on how you count) than the latter (2019-12).

iliane5 · 2 years ago
Just wanted to say thank you for your work and attention to detail, it's immensely valuable and we're all very grateful for it.
iliane5 commented on Video generation models as world simulators   openai.com/research/video... · Posted by u/linksbro
iliane5 · 2 years ago
Watching an entirely generated video of someone painting is crazy.

I can't wait to play with this but I can't even imagine how expensive it must be. They're training in full resolution and can generate up to a minute of video.

Seeing how bad video generation was, I expected it would take a few more years to get to this but it seems like this is another case of "Add data & compute"(TM) where transformers prove once again they'll learn everything and be great at it

iliane5 commented on OpenAI is too cheap to beat   generatingconversation.su... · Posted by u/cgwu
jonplackett · 2 years ago
Is this a reflection of OpenAI’s massive scale making it so cheap for them?

Or is it the deal with Microsoft for cloud services making it cheap?

Or are they just operating at a massive loss to kill off other competition?

Or something else?

iliane5 · 2 years ago
I think it's mostly the scale. Once you have a consistent user base and tons of GPUs, batching inference/training across your cluster allows you to process requests much faster and for a lower marginal cost.
iliane5 commented on Why are LLMs general learners?   intuitiveai.substack.com/... · Posted by u/pgspaintbrush
rcme · 3 years ago
It doesn't have anything to do with tokenization. You can define binary addition using symbols, e.g. a and b, and provide properly tokenized strings to GPT-4. GPT-4 appears to solve the arithmetic puzzles for a few bits, but quickly falls apart on larger examples.
iliane5 · 3 years ago
What I was saying is that because you need to go out of your way to make sure it's tokenized properly, I wouldn't be surprised if there are enough non properly tokenized examples in the dataset.

If that was the case, it would make it difficult to generalize these concepts.

iliane5 commented on Why are LLMs general learners?   intuitiveai.substack.com/... · Posted by u/pgspaintbrush
dgreensp · 3 years ago
LLMs are not particularly good at arithmetic, counting syllables, or recognizing haikus, though, because (contrary to the thesis of the article) they don’t magically acquire whatever ability would “simplify” predicting the next token.

I don’t feel like the points made here align with any insight about the workings of LLMs. The fact that, as a human, I “wouldn’t know where to start” when asked to add two numbers without doing any addition doesn’t apply to computers (running predictive models). They would start with statistics over lots of similar examples in the training data. It’s still remarkable LLMs do so well on these problems, while at the same time doing somewhat poorly because they can’t do arithmetic!

iliane5 · 3 years ago
> LLMs are not particularly good at arithmetic, counting syllables, or recognizing haikus

I suspect most of this is due to tokenization making it difficult to generalize these concepts.

There are some weird edge cases though, for example GPT-4 will almost always be able to add two 40 digits number but it is also almost always wrong when adding a 40 digit and 35 digit number.

iliane5 commented on OpenAI Employee: GPT-4 has been static since March   twitter.com/OfficialLogan... · Posted by u/behnamoh
flir · 3 years ago
Hm. Maybe they've backported some nanny code from Sydney?
iliane5 · 3 years ago
AFAIK it's pretty standard practice not to expose the "raw" LLM directly to the user. You need a "sanity loop" where user input and the output of the LLM is checked by another LLM to actually enforce rules and mitigate prompt injections, etc.
iliane5 commented on Superintelligence: An idea that eats smart people (2016)   idlewords.com/talks/super... · Posted by u/aebtebeten
civilized · 3 years ago
That's true and relevant, but doesn't prove much. What's the big plan, what's the next step in the plan?

Maciej's larger point is that the AI faces tons of very difficult problems in escaping its physical constraints. It's simplistic to wave hands and say "the AI is super duper smart and will have no difficulty hacking all computer systems, inventing and manufacturing swarms of unbeatable nanobots, etc. without being detected or resisted".

It seems to me that the core conceit of AI doomerism is that sufficient intelligence can overcome all barriers with some plan that is so smart, people would never think of it. This is much less plausible than believers take it to be. In mathematics alone, it is very easy to come up with a problem that the collective mathematical ingenuity of the entire human race is helpless before for decades, centuries, or longer.

iliane5 · 3 years ago
100% agree.

However, seeing how excited Palantir is with their war assistant LLM , the US testing autonomous fighter jets a few months ago, etc. I think there's a decent chance that AI won't even have to break out of its constraints. It's pretty much guaranteed people are going to do the obviously dumb thing and give it capabilities it shouldn't have or is not equipped to deal with safely.

Dead Comment

iliane5 commented on Sam Altman goes before US Congress to propose licenses for building AI   reuters.com/technology/op... · Posted by u/vforgione
api · 3 years ago
LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.

LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.

We may create actual sentient independent AI someday. Maybe we're getting closer. But not only is this not it, but I fail to see how trying to license it will prevent that from happening.

iliane5 · 3 years ago
I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines and weak knowledge engines. Cognition on its own does not allow them to be autonomous, but because they can use tools (APIs, etc.) they are able to have some degree of autonomy when given a task and can use basic logic to follow them through/correct their mistakes.

AutoGPTs and the likes are much overhyped (it's early tech experiments after all) and have not produced anything of value yet but having dabbled with autonomous agents, I definitely see a not so distant future when you can outsource valuable tasks to such systems.

iliane5 commented on Sam Altman goes before US Congress to propose licenses for building AI   reuters.com/technology/op... · Posted by u/vforgione
api · 3 years ago
I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.

We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?

iliane5 · 3 years ago
> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?

We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.

> Did people freak out this much about computers replacing humans when they were shown to be good at math?

Too young but I'm sure they did freak out a little! Computers have changed the world and people have internalized computers as being much better/faster at math but exhibiting creativity, language proficiency and thinking is not something people thought computers were supposed to do.

u/iliane5

KarmaCake day85December 2, 2022View Original