Readit News
ironrabbit commented on Andreessen Horowitz's SB 1047 campaign is as misleading as it gets   transformernews.ai/p/lies... · Posted by u/apsec112
xhkkffbf · a year ago
I'm still blown away by what many of the LLMs can do, but I'm not fooling myself into thinking that they're more than extremely flexible functions. You put in a query and get an answer. How are they that different from any search engine or web site?

When the Internet first bloomed, a hands-off ethic appeared and took hold: let's not regulate or tax any of it, lest we kill it. But now there are significant groups -- often establishment players -- concocting scary what-if narratives that seem pretty unlikely to me.

How about we just try the "slow regulation" approach again? It worked pretty well for the Internet. We've slowly added some laws, taking the utmost care not to destroy it, and overall it's worked. Let's do the same with AI.

ironrabbit · a year ago
Which part of SB1047 do you think is too onerous or not "slow regulation" enough?
ironrabbit commented on The AI industry turns against its favorite philosophy   semafor.com/article/11/21... · Posted by u/spopejoy
PaulHoule · 2 years ago
ironrabbit · 2 years ago
This is a forum post from a student with 18 total karma
ironrabbit commented on Claude 2.1   anthropic.com/index/claud... · Posted by u/technics256
swatcoder · 2 years ago
Because they're ultimately training-data simulators and not actually brilliant artificial programmers, we can expect Microsoft-affiliated models like ChatGPT4 and beyond to have much stronger value for coding, because they have unmediated access to GitHub content.

So it's most useful to look at other capabilities and opportunities when evaluating LLMs with a different heritage.

Not to say we shouldn't evaluate this one for coding or report our evaluations, but we shouldn't be surprised that it's not leading the pack on that particular use case.

ironrabbit · 2 years ago
Zero chance private GitHub repos make it into OpenAI training data; can you imagine the shitshow if GPT-4 started regurgitating your org's internal codebase?
ironrabbit commented on Persimmon-8B   adept.ai/blog/persimmon-8... · Posted by u/jgershen
automatistist · 3 years ago
> The standard practice for achieving fast inference is to rewrite the entire model inference loop in C++, as in FasterTransformer, and call out to special fused kernels in CUDA. But this means that any changes to the model require painfully reimplementing every feature twice: once in Python / PyTorch in the training code and again in C++ in the inference codebase. We found this process too cumbersome and error prone to iterate quickly on the model.

I am an AI novice, but why can't they automate this with AI? I thought the whole point of these tools was to automate tasks that are error prone and require lots of attention to detail. Computers are great at that kind of stuff, so it's surprising they haven't applied AI techniques to automate parts of the AI pipeline, like converting code from Python to C++.

ironrabbit · 3 years ago
Automatic kernel fusion (compilation) is a very active field, and most major frameworks support some easy-to-use compilation (e.g. jax's jit, or torch.compile, which iirc uses OpenAI's Triton under the hood). Often you can still do better than the compiler by writing fused kernels yourself, either in CUDA C++ or in something like Triton (Python that compiles down to CUDA), but compilers are getting pretty good.

edit: not sure why op is getting downvotes, this is a very reasonable question imo; maybe the characterization of kernel compilation as "AI" vs. just "software"?

ironrabbit commented on $900k Median Package for Engineers at OpenAI   levels.fyi/companies/open... · Posted by u/zuhayeer
zerr · 3 years ago
7-figure total comp is common for E6 and very common for E7 engineers at Meta. Stock being a significant part, of course.
ironrabbit · 3 years ago
The median OAI employee with a 900k comp is probably L5, not L6 or L7
ironrabbit commented on $900k Median Package for Engineers at OpenAI   levels.fyi/companies/open... · Posted by u/zuhayeer
Panini_Jones · 3 years ago
Why does levels.fyi say US$542,547 for E6? I count only 6 figures.
ironrabbit · 3 years ago
New hires' comp is much higher than existing employees', especially once you've hit your cliff. 7 figures for E6 can happen if you joined recently, have good counter-offers, and negotiate. It's not super uncommon, but it's also not the median E6 comp.
ironrabbit commented on $900k Median Package for Engineers at OpenAI   levels.fyi/companies/open... · Posted by u/zuhayeer
sidcool · 3 years ago
Isn't it for member of technical staff? It's a high level position.
ironrabbit · 3 years ago
All engineers and researchers, even junior, are "Member of Technical Staff" at OAI

ironrabbit commented on OpenAI Lobbied the E.U. To Water Down AI Regulation   time.com/6288245/openai-e... · Posted by u/jlpcsl
andrewstuart · 3 years ago
So Sam Altman gets in front of governments around the world, alarms everyone with science fiction about how he's created the technology that might be the doom of the human race, asks for regulation, and now doesn't want regulation, even though he's stirred up a hornets' nest of government panic. There's leadership for you.

(Edit: removed criticism of Altman; too negative)

I suppose I should be careful saying stuff about zuckerberg like that. He might want to cage fight me.

ironrabbit · 3 years ago
> the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.” [...] The company argued that it would be sufficient to instead rely on another part of the Act, that mandates AI providers sufficiently label AI-generated content and be clear to users that they are interacting with an AI system.

This sounds pretty reasonable? I don't think it's hypocritical to talk about the doom of humanity while also arguing that GPT-3, a 3-year-old model, should not be classified as "high-risk" in that sense.

Even if you disagree, questioning Altman's leadership and calling him an "empty soul" over this kind of regulatory detail is not adding substance to the discussion imo.

ironrabbit commented on Facebook LLAMA is being openly distributed via torrents   github.com/facebookresear... · Posted by u/micro_charm
Stagnant · 3 years ago
Correct me if I'm wrong but doesn't character.ai use their own model and isn't associated with OpenAI? At least I can't find any information that would claim so.

Anecdotally, as a roleplaying chat experience, char.ai seems to perform way better than anything else publicly available (doesn't get repetitive, very long memory). It also feels different from GPT-3 in how it is affected by prompts.

I've just assumed that char.ai is doing its own thing as it was founded by two engineers who worked on google's LaMDA.

ironrabbit · 3 years ago
Character has their own models, and anecdotally I've heard they have one of the better LM training codebases out there.

u/ironrabbit

Karma: 364 · Cake day: December 10, 2013