https://forum.effectivealtruism.org/posts/xJhtrAxd5QpXi2CdA/...
Here is the Google query it came from:
https://www.google.com/search?q=effective+altruism+climate+c...
So it's most useful to look at other capabilities and opportunities when evaluating LLMs with a different heritage.
Not to say we shouldn't evaluate this one for coding or report our evaluations, but we shouldn't be surprised that it's not leading the pack on that particular use case.
I am an AI novice, but why can't they automate this with AI? I thought the whole point of these tools was to automate tasks that are error-prone and require lots of attention to detail. Computers are great at that kind of stuff, so it's surprising they haven't applied AI techniques to automate parts of the AI pipeline, like converting code from Python to C++.
edit: not sure why op is getting downvotes; this is a very reasonable question imo. Maybe it's the characterization of kernel compilation as "AI" vs. just "software"?
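To make the question concrete, here's a minimal sketch of what automating one such step (Python-to-C++ translation) with an LLM might look like. Everything here is illustrative: it assumes the official OpenAI Python client (openai >= 1.0), and the model name, prompt, and g++ smoke test are my own choices, not anything from an actual vendor's pipeline.

```python
# Illustrative only: prompt an LLM to translate a small Python function to C++,
# then compile the result as a smoke test. Assumes the official OpenAI Python
# client (openai >= 1.0) and OPENAI_API_KEY set in the environment; the model
# name and prompt are hypothetical choices, not any vendor's actual pipeline.
import subprocess
import tempfile

from openai import OpenAI

client = OpenAI()

python_src = """
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable code model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "Translate the user's Python into one self-contained C++17 "
                "file with a main() that tests it. Reply with code only, "
                "no markdown fences."
            ),
        },
        {"role": "user", "content": python_src},
    ],
)
cpp_src = response.choices[0].message.content

# Compile the generated code as a basic sanity check. A real pipeline would go
# further: strip any stray markdown, run the binary, and diff its output
# against the Python reference across many inputs.
with tempfile.NamedTemporaryFile(mode="w", suffix=".cpp", delete=False) as f:
    f.write(cpp_src)
    cpp_path = f.name

result = subprocess.run(
    ["g++", "-std=c++17", cpp_path, "-o", "translated"],
    capture_output=True,
    text=True,
)
print("compiled OK" if result.returncode == 0 else result.stderr)
```

The translation call itself is the easy part; the hard part a real pipeline has to automate is verifying the output is actually equivalent, which is why a compile check alone wouldn't be enough.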
(Edit: removed criticism of Altman; it was too negative.)
I suppose I should be careful saying stuff about Zuckerberg like that. He might want to cage fight me.
This sounds pretty reasonable? I don't think it's hypocritical to be talking about the doom of humanity while also arguing that GPT-3, a 3-year-old model, should not be classified as "high-risk" in that sense.
Even if you disagree, questioning Altman's leadership and calling him an "empty soul" over this kind of regulatory detail is not adding substance to the discussion imo.
Anecdotally, as a roleplaying chat experience, char.ai seems to perform way better than anything else publicly available (it doesn't get repetitive and has very long memory). It also feels different from GPT-3 in how it's affected by prompts.
I've just assumed that char.ai is doing its own thing, as it was founded by two engineers who worked on Google's LaMDA.
When the Internet first bloomed, a hands-off ethic appeared and took hold: let's not regulate or tax any of it lest we kill it. But now there are significant groups, often from the establishment players, who are concocting scary what-if narratives that seem pretty unlikely to me.
How about we just try the "slow regulation" approach again? It worked pretty well for the Internet. We've slowly added some laws with the utmost care not to destroy it, and, overall, it's worked. Let's do the same with AI.