I replied to LeCun's claims about their latest protein structure predictor and he immediately got defensive. The problem is that i'm an expert in that realm and he is not. My statements were factual (pointing out real limitations in their system along with the lack of improvement over AlphaFold) and he responded by regurgitating the same misleading claims everybody in ML who doesn't understand biology makes. I've seen this pattern repeatedly.
It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.
Same thing with making wildly optimistic claims about "obsoleting human radiologists in five years", made more than five years ago by another AI bigwig Geoffrey Hinton. They are doubtless brilliant researchers in their field, but they seem to view AI as a sort of cheat code to skip the stage where you actually have to understand the first thing about the problem domain to which it is applied, before getting to the "predictions about where the field is going".
Very similar to crypto evangelists boldly proclaiming the world of finance as obsolete. Rumours of you understanding how the financial system works were greatly exaggerated, my dudes.
The tribal thesis in the AI world seems to be that AI workers don't need subject matter expertise, as the AI will figure it out during training. In fact, subject matter expertise can be a negative because it's a distraction from making the AI good enough to figure it out on its own.
This assumption has proven to be very fragile, but I don't think the AI bigwigs have accepted that yet. Still flush from the success of things like AlphaZero, where this thesis was more true.
I think all this work could be useful if only those people understood that the technology is not mature enough to remove people from the loop.
For the case of AI analysing x-ray photos, the obvious solution would be a system that can tag photos with information about what AI thinks is going on there. And this information could be passed to the human.
This could save a ton of time and help reduce cases where radiologists missed obvious things.
My son once broke his arm. I brought him to ER, they made the photo but the two people who looked at the photo said there is nothing wrong with the arm. I asked for the copy of the photo.
A week later the swelling did not subside so I took the photo to another doctor and he pointed out an obvious fracture line.
There are many ways to deploy automation and I wonder why everybody tries to shoot for removing humans altogether when most of the time this is literally asking for problems.
The funny thing is, of all the areas where ML could help, radiological image classification definitely is the one where ML could shine (and I think it did). HUmans doing radiological image classification are basically a network service now (IE they can do their job on the other side of the world, and their efforts are extremely carefully evaluated using later data such as disease progression).
There is a personal bias I have observed when I have listened to people talk over the last ~5 years, for example:
- Startup founders without any domain experience in extremely risky ventures
- Crypto bros
- Donald Trump
I consider myself - and others view me as - a hyperrational person (possibly often to a fault), and I must admit that when I hear an outlandish claim like the ones spewed by the above, I am sometimes left in a strange emotional state... a stupor?
Like, I don't quite believe the claim because I'm defensively rational, but I feel a certain dizziness and confusion (thinking to myself, "could this actually be true?") until I come back to my senses. The more outlandish and impassioned the speech, the stronger the effect.
It's made me realise we're all built similarly, from the most rational from the most gullible.
I think HN is skewing heavily against AI and blockchain claims. I pick those to make the point below.
For what it's worth, I agree that blockchain itself is a first-generation technology and sucks relative to other things, like giant vacuum tube computers did. However, the concepts it enables (smart contracts) have as much promise as the idea of software programs running on personal computers back when most people wondered why you need them, since they do very little but play pong.
When I wrote the following article for CoinDesk in 2020, I didn't want to say "blockchain voting", I wanted to say "voting from your phone". Because there are far better decentralized byzantine-fault-tolerant systems, than blockchains. But that's what they ran with:
For every technology we use today, there was a time it was laughably inadequate as a replacement for what came before.
And that's really, the crux of the issue. It happens slowly, and then all at once. Yes we need to listen to guys like Moxie who are skeptical, but we need to also then go and have a discussion from different perspectives, not just one specific perspective. It has even become fashionable in many liberal circles to be against the type "tech bro" typified by HN, including VCs and Web 2.0 tech bros. So before you downvote, realize that most of you would be on the receiving end of it in other echo chambers, due to this phenomenon of thinking there's only one best narrative.
People like Moxie are much more interesting and interested, because they say they' love to be proven wrong. And I am also open to substantive discussion:
I imagine it's the same with AI claims about traditional fields. Where have we heard that before? "Yes it's cute and impressive but these guys don't really understand what the experts know about chess."
TBH I think even a basic "I considered your point, but X and Y factors seemed to mitigate it enough for my standards, defined by P and Q. Let me know if I'm misunderstanding anything" would do a lot. It's important to always show you've considered that you're wrong about something.
I'm not sure what OPs particular point was, but Yan seemed to argue over and over again that testing Galactica with adversial inputs is why "we can't have nice things" which to me seems not just defensive but kind of comical.
Any AI model needs to be designed with adversarial usage in mind. And I don't even think random people trying to abuse the thing for five minutes to get it to output false or vile info counts as a sophisticated attack.
Clearly before they published that demo Facebook had again put zero thought into what bad actors can do with this technology and the appropriate response to people testing that out is certainly not blaming them.
> He's supposed to agree with you, or not express an opinion?
Wow, not sure what to say if that's what you think are the only options. I didn't see the original response to the parent commenter, but this quote in the article, "It’s no longer possible to have some fun by casually misusing it. Happy?" doesn't bode well.
I get that in the post-Twitter world it can be heart to differentiate between valid criticism and toxic bad-faith arguments, but lets not pretend that it's impossible to acknowledge criticism in a way that doesn't immediately try to dismiss it, even if you may not agree in the end.
No, you can disagree with someone without acting defensive. When a person is acting defensive, they're trying to protect or justify themselves. People who are insecure or guilty tend to act defensive. You can have a disagreement and defend your positions without taking things personally.
I feel we are at crypto/blockchain levelof hype int ML and basically the old saying of "if you are a hammer, everything looks like a nail" applies.
For someone who dedicated their career to ML, they'll naturally try to solve everything in that framework. I observe this in every discipline that falls prey to it's own success. If there's a problem, those in the industry will naturally try to solve it with ML, often completely ignoring practical considerations.
Is the engine in your car underperforming? Let's apply ML. Has your kid bruised their knee while skating? Apply ML to his skating patterns.
The one saving grace of ML is that there are genuinely useful applications among the morass.
Without even opening the link I half expected it to be about LeCun and I want wrong.
Him and Grady Booch recently had a back and forth on the same subject on Twitter where to me it seemed like he couldn’t answer Booch’s very basic questions. It’s interesting to see another person with a similar opinion.
> It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.
For sure. If this is how he treats outside experts, I can't imagine what it's like to work for him. Or rather, I can imagine it, and I think it does a lot to explain the release-and-panicked-rollback pattern.
ML people are the ultimate generalists. They claim to make tools which are domain agnostic, but they can't really validate that for themselves because they have no domain knowledge about anything.
Could you share the critical feedback you gave? I am interested as someone who works with biological systems and is curious about how ML can or cannot help.
I told him that: the increased speed but lower accuracy of their protein structure predictor was not useful because the only thing that matters in PSP is absolute prediction quality. And that speeding up that step wouldn't have any impact on pharmaceutical development, which is one of his claims (closed-loop protein design).
I think it makes sense when you realize that the product (Galactica) and all the messaging around it are just PR - they're communicating to shareholders of a company in deep decline trying to say 'look at the new stuff we're doing, the potential for new business here'.
You interrupting the messaging ruins it, so you get some deniable boilerplate response. its not personal.
But we gave the keys to the economy to some vain children who have never had to do real work to make a name for themselves. Straight from uni being librarians assistants to the elders, straight to running the world!
Society is still led by vague mental imagery and promises of forever human prosperity. The engineering is right but no one asks if rockets to Mars are the right output to preserve consciousness. We literally just picked it up because the elders started there and later came to control where the capital goes.
We’re shorting so many other viable experiments to empower same old retailers and rockets to nowhere.
I'm delighted you called out these problems when you came across them, and sorry that he didn't have the grace or maturity to take it on board without getting defensive.
Like many thin-skinned hype merchants with a seven-figure salary to protect, they're going to try and block criticism in case it hits them in the pocket. Simple skin in the game reflex that will only hurt any chances of improvement.
It's tough because I think he has a really difficult job in many regards; Meta catches so much unjustified flak (in addition to justified flak) that being a figurehead for a project like this has to be exhausting.
Being constantly sniped at probably puts you in a default-unreceptive state, which makes you unable to take on valid feedback, as yours sounds like.
At some level he must know (AI) winter is coming along with the recession, which is why he is so defensive, as if a barrage of words will stave off the inevitable.
As a bright-eyed science undergraduate, I went to my first conference thinking how amazing it would be to have all these accomplished and intelligent people in my field all coming together to share their knowledge and make the world a better place.
And my expectations were exceeded by the first speaker. I couldn't wait for 3 full days of this! Then the second speaker got up, and spent his entire presentation telling why the first person was an idiot and totally wrong and his research was garbage because his was better. That's how I found out my field of study was broken into two warring factions, who spent the rest of the conference arguing with each other.
I left the conference somewhat disillusioned, having learned the important life lesson that just because you're a scientist doesn't mean you aren't also a human, with all the lovely human characteristics that entails. And compared to this fellow, the amount of money and fame at stake in my tiny field was miniscule. I can only imagine the kinds of egos you see at play among the scientists in this article.
I worked for a university for many years and I can confirm this. I have never seen such negativity, scheming, and fighting in my many professional years since. All because in the end, they were fighting over nothing and they knew it. But they needed to feel like what they were doing was important.
At the end of the day, all of the noise of negativity and bad press is being drowned out by incredible demos. I don't know what to chalk this up to if not jealousy. Most people in the ML-o-sphere are ignoring it.
At the end of the day, all that matters is: are users using what you built?
I mean.... That matters to someone in the industry, yes. But not necessarily someone in academia.
You chose industry over academia, and that's fine. It lines up with your values. But realize that not everyone shares those values and beliefs. To some, the act of discovering a new thing is much more important than the users using said discovery. And that lines up with academia more so than the industry.
It's fine that you like industry better than academia, so do I, but you'd better count your lucky stars that scientists exist.
> At the end of the day, all that matters is: are users using what you built?
How would you measure Isaac Newton's advances in calculus and mechanics or Einstein's general theory of relativity, against say, a web app with a billion users?
One of the most intense and fun user bases I had was in HPC at an academic healthcare research institute. I've also worked in high energy research.
When most folks think of academia they think faculty, but staff vastly outnumber then. Contrary to popular belief there are legions of cold, level headed, engineers that get shit done.
A lot of the research isn't some random study of something that may or may not be useful in half a century or more, it's often immediately applicable and winds up in products or shaping government policy on a global scale. Especially the well funded ones.
But we don't hear about that stuff. We hear what the media and tech companies are currently trying to cram down our throats.
I was working for somebody once who seemed to think LeCun was an uninspired grind and I'm like no, LeCun won a contest to make a handwritten digit recognizer for the post office. LeCun wrote a review paper on text classification that got me started building successful text classifiers and still influences my practice. LeCun is one of the few academics who I feel almost personally taught me how to do something challenging.
But the A.I. hype is out of hand. "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous. The other day I was almost to write a comment on HN to a post from lesswrong where they apologized at the beginning of an article critical of the intelligence explosion hypothesis because short of Scientology or the LaRouche Youth Movement it is hard to find a place where independent thought is so unwelcome.
Let's hope "longtermism" and other A.I. hype goes the way of "Web3".
AI Safety is important, but the unsafety isn't from the superintelligrnt AI, it's from dumb and cruel people hiding behind it as an excuse for their misbehavior.
Weapons of Math Destruction is good book on the topic. Using ML to do tasks like evaluate employee performance (and trigger firings), issue loans, insurance etc. is affecting peoples lives in the real world today.
A book called Automating Inequality by Virginia Eubanks touches on how for many years now, we've hid unfair policies behind computer complexity. It was really eye opening to me how what seems to be a poorly implemented system is actually working as intended in order to make life difficult for certain people.
Like the post on here a while ago about ai algorithms setting prices for landlords. Most human have a limit to what they will personally do to someone, but for whatever reason if they get told to do something worse and they can blame it on that thing instead people are willing to be monsters. So it only takes a few sociopaths making the software to make a whole industry even worse.
"Nuclear Weapon Safety is important but the unsafety isn't from the Bombs but from dumb and cruel people hiding behind them as an excuse for their misbehavior."
The most aggrivating thing about EA "longtermism" AI Safety stuff is that is takes the oxygen out of the room for actual AI safety research.
Using ML for object detection, object tracking, or prediction on an L2-L5 driver assistant system? AI safety research sounds like a capability you'd really want.
Using ML for object detection, object tracking, or prediction on an industrial robot that is going to work alongside humans or could cost $$$ when it fails? AI safety research sounds like a capability you'd really want.
Using classifiers or any form of optimization for algorithmic trading? AI safety research sounds like a capability you'd really want.
Building decision support systems to optimize resource allocation (in an emergency, in a data center, in a portfolio, ...)? AI safety research sounds like a capability you'd really want.
Hell, want to use an LLM as part of a customer service chatbot? You probably don't want it to be hurling racial slurs at your customers. AI safety research sounds like a capability you'd really want.
Unfortunately, now "AI Safety" no longer means "building real world ML systems for real world problems with bounds on their behavior" and instead means... idk, something really stupid EA longtermism nonsense.
I'm going to add that "AI Safety" (allied with longtermism) is also part of the hype machine for big tech.
Pressing the idea that AI is dangerous makes it seem like these companies are even more powerful than they are and could drive up their stock price. When the AI Safety people get into some conflict and get fired they are really doing their job because now it looks like big tech is in a conspiracy to cover up how dangerous their technology is.
You just listed 5 completely distinct applications of AI safety (and surely can name countless others) and then concluded with a complaint that the concept of AI safety is not well-defined?
The whole point is that the technology itself and its capabilities are not well defined, people are constantly inventing new applications and new methods now at breakneck speed, so the question of how to mitigate its risks is going to be at least as squishy a concept as the underlying tech + applications.
> But the A.I. hype is out of hand. "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous.
It boggles my mind how anyone can think otherwise. Existential dangers of superintelligent or even non-intelligent AI are the long-term result of the dangers of AI being developed and misused over time for human ends.
It's the exact same argument behind why we should be trying to track asteroids, or why we should be trying to tackle climate change: the worst-case scenario is unlikely or in the future, but the path we're on has numerous dangers where suffering and loss of human life is virtually certain unless something is done.
Because it's like cavemen pondering the safety of nuclear fusion after discovering fire. Yes: nuclear fusion could be dangerous. No: there is nothing useful that can come out of such "research".
You know what would be useful for cavemen to ponder? The safety of fire. Or you know, just staying alive because there are more dangerous things out there.
The current state of so-called "AI" is our fire. It's impressive and useful (and there are real dangers associated with it) but it has no bearing on intelligence, let alone superintelligence. It's more likely that a freak super-intelligent human will be born than that we accidentally produce a super-intelligent computer. We produce a lot of intelligent humans, and we've never produced a single intelligent computer.
As it stands, we don't have the understanding or the tools to do anything useful wrt safety from a super-intelligent AI. We do have the understanding and tools to do something useful about asteroids and climate change.
> It boggles my mind how anyone can think otherwise.
Some AI dangers are certainly legitimate - it's easy to foresee how an image recognition system might think all snowboarders are male; or a system trained on unfair sentences handed out to criminals would replicate that unfairness, adding a wrongful veneer of science and objectivity; or a self-driving car trained on data from a country with few mopeds and most pedestrians wearing denim might underperform in a country with many mopeds and few pedestrians wearing denim.
But other AI dangers sound more like the work of philosophers and science fiction authors. The moment people start predicting the end of humans needing to work, or talking about a future evil AI that punishes people who didn't help bring it into existence? That's pretty far down in my list of worries.
"the worst-case scenario is unlikely or in the future ... loss of human life is virtually certain"
These two things are in conflict. We could ignore both asteroids and climate change and according to the best known science there'd be very little impact for vast timespans and possibly no impact ever (before humanity is ended by something else like war).
Yes, also for the climate. Look at the actual predictions and it's like a small reduction in GDP growth spread over a very long period of time, and that's assuming the predictions are actually correct when they have a long track record of being not so.
Really stuff like asteroids and climate is a good counter-argument to caring about AI risk. Intellectuals like to hypothesize world-ending cataclysms that only their far sighted expertise can prevent, but whenever these people's predictions get tested against something concrete they seem to invariably end up being wrong. Our society rewards catastrophising far too generously and penalizes being wrong far too little, especially for academics, NGOs etc. It makes people feel or seem smart in the moment, and they can punt the reputational damage from being wrong far into the future (and then pretend they never made those predictions at all or there were mitigating factors).
The so called "AI safety" researchers issuing breathless warnings about the dangers of superintelligence are nothing but grifters. They are the modern equivalent of shaman, telling the common people that they need ever increasing amounts of resources to continue their vital work in protecting us from wrathful spirits. There is zero actual scientific evidence to support their claims. It is essentially a modern-day secular religion.
(There is value in doing research and ethical analysis into AI/ML statistical algorithms to prevent hidden biases or accidental physical harm. People working in those areas are producing real benefits for the rest of us and I'm not criticizing them.)
>But the A.I. hype is out of hand. "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous. The other day I was almost to write a comment on HN to a post from lesswrong where they apologized at the beginning of an article critical of the intelligence explosion hypothesis because short of Scientology or the LaRouche Youth Movement it is hard to find a place where independent thought is so unwelcome.
I hesitate to say "safe space", but... what if a group of people wants to come together discuss AI safety? If they'd have to regurgitate all the arguments and assumptions for everyone who comes along they'd never get anything done. If you are really interested to know where they are coming from, you can read the introductory materials that already exist. If the 99.9% of the world is hostile towards discussing AI safety (of the superintelligence explosion kind, not the corporate moralitywashing kind) there is some value in a place which is hostile to not discussing it, so that at least those interested can actually discuss it.
> it is hard to find a place where independent thought is so unwelcome.
Is that actually true, though? It's true that a higher fraction of the people in that community give credence to the intelligence explosion hypothesis than pretty much anywhere else. (This is what one would expect, since part of the purpose of LessWrong is to be a forum for discussions about super-intelligent AI.) But even if the intelligence explosion is a terrible, absolutely-wrong theory, that doesn't prevent the people who hold it from being open-minded and tolerant of independent thought. Willingness to consider new and different ideas is something the LessWrong community claims to value, so it would be a little bit weird if they were doing way worse than average at it. And AFAICT, it seems like they're doing fine. Some examples:
- Here [1] is a post critical of the intelligence explosion theory. It has 81 upvotes as of this writing, and the highest upvoted comment goes like: "thanks for writing this post, it makes a lot of good arguments. I agree with these things you wrote" (list of things) "here are some points where I disagree" (list of things). This may even be the original post you were talking about in your comment, except that it doesn't start with an apology.
- LW has 2 different kinds of voting: "Regular upvotes" provide an indication of the quality of a post or comment, and "agree/disagree votes" let people express how much they agree or disagree with a particular comment. Down-voting a high quality comment just because you disagree (instead of giving it a disagree-vote) would be against the culture on LW.
If you're already sure that LW is wrong about superintelligence, and you're trying to explain how they became wrong, then "those LW people were too open minded and fell for that intelligence explosion BS" makes more sense to me than anything about suppression of independent thought.
> "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous
Well that's an interesting way to misrepresent an entire important field of research based on what a few idiots said. There are serious people in that field who aren't addicted to posting on LessWrong.
I studied machine learning at NYU, and from interacting with Yann LeCun, I can say he’s actually a nice guy. Yes, his tweet is grumpy. I still feel as if the implication that Galactica should have been taken down was the worse thing happening here.
I read the MIT Technology Review article, and I was asking myself “what is an example of Galactica making a mistake?” The article could easily have quoted a specific prompt, but doesn’t. It says the model makes mistakes in terms of understanding what’s real/correct or not, but the only concrete example I see in the article is that the model will write about the history of bears in space with the implication that it’s making things up (and I believe the model does make such mistakes). I don’t think it’s a good article because it’s heavy on quoting people who don’t like the work and light on concrete details.
Does the imperfection of a language model really mean the model should not exist? This seems to be what some critics are aiming for.
I saw examples of people using it to generate scientific-sounding fake studies like: the benefits of eating glass, or promoting antisemitism.
That being said, I am very partial to the AI researchers here who feel like their cool demo has to be taken down because some people were misusing it. It's an unfair high standard they're holding AI demos to, compared with other technologies. It's analogous to asking Alexander Graham Bell to shut down an early telephone prototype because some jerks were using it to discuss antisemitic conspiracies.
I agree that the model does make mistakes. Your examples sound realistic, and I hope we make more progress to improve on models that can propagate stereotypes and similarly negative aspects that can arise in training data. I had meant to criticize the journalism, versus saying the mistakes don't exist.
It is the same problem as with all the AI models that are “racist” (note some are actually racist or insensitive): the AI model just does what it’s told, doesn’t know right from wrong, and amplifies fake and actual differences in ways that make us uncomfortable.
So you can get an AI model that hasn’t been hardened against these attacks to write a paper on why <racist thing> or create an image depicting <racist or porny thing> and it just does it. Because the model is just an input:output device and doesn’t have the “wait maybe I shouldn’t do that because it’s bad” feature.
And while teen and young 20 something males will get a huge laugh out of posting screenshots of it, the journalist crowd will freak out and start calling the model, researchers, and company racist.
Personally I have played with large language models and the chutzpah with which they will lie and make things up is indeed astounding (they do a good job at making them sound believable, and lie with utter seriousness and confidence). So I can see where the controversy comes from, although I agree with the other commenters that the researchers should be able to put up a bit fat disclaimer about it
I kinda agree with LeCun here. Why can't companies and people just put out cool things that have faults? Now we have a tool that got pulled, not because any concrete harm, only outrage over theoretical harm. It is not the tool, not the people finding faults, but people reaction's that seem like they have gone too far.
> In the company’s words, Galactica “can summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.”
The first step to successfully publish prototypes is creating realistic expectations. That's being done all the time in papers and other ML projects. Instead Meta listed a set of features in a language model that can be summarized as "magic".
GPT-3 can already most of those things. If you haven’t spoken with GPT yourself, you might think it’s overhyped. But it really is quite amazing. I’m saying this as a 15-year veteran machine learning engineer who is not impressed by much in ML. The “hype” part of what’s happening (the disconnect between reality and the goal) is that these models do make silly mistakes. But they also can actually perform the operations on the list above, and not in some “works 1% of the time” sense. This kind of AI is at a turning point, becoming deeply impressive and significantly expanding what code can do.
>Why can't companies and people just put out cool things that have faults?
Absolutely they can, and his employer could have kept it up. The issue is the phantasmagorical and ridiculous claims about AI-generated scientific research that LeCun peddles. When there's something concrete one can use to test these extremely bold claims, there's a way to at least partially apply a reality check to the claims, and demonstrate their ridiculousness. Which is a very useful and important part of how the scientific field evolves and advances. Feeding non-experts all these wild claims in perpetual future tense only works for so long, and it ought to be that way.
If you put something online, and present it as useful tool, then you have to expect that people are going to try to break it. You can look at that as free testing and open-source bughunting, or you can complain about about misuse and take it offline. The responsible parties took the latter route, which is kind of silly.
What this project created was something sophisticated and powerful, but not something people wanted, and they got (rightfully) pilloried for it. Instead of shaking ones fist at the world for rejecting your brilliance, maybe the really smart ones are making the things that others actually desire, and not merely developing techs that give themselves leverage over others and expecting the world to defer to this demonstration of intellectual prowess.
This whole incident was a case study for product management and startup school 101. I've made this exact same category of error in developing products, where I said, "hey, look at this thing I built that may mean you don't have to do what you do anymore!" and then was surprised when people picked it apart for "dumb" reasons that ignored the elegance of having automated some problem away.
If this model were really good, they would have used it to advance a bunch of new ideas in different disciplines before exposing it to the internet. Reality is, working at Meta/Facebook means they are too disconnected from the world they have influenced so heavily to be able to interpret real desire from people who live in it anymore. When you are making products to respond to data and no actual physical customer muse, you're pushing on a rope. I'd suggest the company has reached a stage of being post-product, where all that is left are "solutions," to the institutional customers who want some kind of leverage over their userbase, but no true source of human desire.
> LeCun also approvingly links to someone else who writes, in response to AI critic Gary Marcus
The article really fails to explain that LeCun and Marcus have been trading insults for the last few years, it's hardly LeCun snapping at some random person.
Why Meta’s latest large language model survived only three days online - https://news.ycombinator.com/item?id=33670124 - Nov 2022 (119 comments)
It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.
Very similar to crypto evangelists boldly proclaiming the world of finance as obsolete. Rumours of you understanding how the financial system works were greatly exaggerated, my dudes.
This assumption has proven to be very fragile, but I don't think the AI bigwigs have accepted that yet. Still flush from the success of things like AlphaZero, where this thesis was more true.
For the case of AI analysing x-ray photos, the obvious solution would be a system that can tag photos with information about what AI thinks is going on there. And this information could be passed to the human.
This could save a ton of time and help reduce cases where radiologists missed obvious things.
My son once broke his arm. I brought him to ER, they made the photo but the two people who looked at the photo said there is nothing wrong with the arm. I asked for the copy of the photo.
A week later the swelling did not subside so I took the photo to another doctor and he pointed out an obvious fracture line.
There are many ways to deploy automation and I wonder why everybody tries to shoot for removing humans altogether when most of the time this is literally asking for problems.
- Startup founders without any domain experience in extremely risky ventures
- Crypto bros
- Donald Trump
I consider myself - and others view me as - a hyperrational person (possibly often to a fault), and I must admit that when I hear an outlandish claim like the ones spewed by the above, I am sometimes left in a strange emotional state... a stupor?
Like, I don't quite believe the claim because I'm defensively rational, but I feel a certain dizziness and confusion (thinking to myself, "could this actually be true?") until I come back to my senses. The more outlandish and impassioned the speech, the stronger the effect.
It's made me realise we're all built similarly, from the most rational from the most gullible.
For what it's worth, I agree that blockchain itself is a first-generation technology and sucks relative to other things, like giant vacuum tube computers did. However, the concepts it enables (smart contracts) have as much promise as the idea of software programs running on personal computers back when most people wondered why you need them, since they do very little but play pong.
When I wrote the following article for CoinDesk in 2020, I didn't want to say "blockchain voting", I wanted to say "voting from your phone". Because there are far better decentralized byzantine-fault-tolerant systems, than blockchains. But that's what they ran with:
https://www.coindesk.com/in-defense-of-blockchain-voting
In it, I say:
For every technology we use today, there was a time it was laughably inadequate as a replacement for what came before.
And that's really, the crux of the issue. It happens slowly, and then all at once. Yes we need to listen to guys like Moxie who are skeptical, but we need to also then go and have a discussion from different perspectives, not just one specific perspective. It has even become fashionable in many liberal circles to be against the type "tech bro" typified by HN, including VCs and Web 2.0 tech bros. So before you downvote, realize that most of you would be on the receiving end of it in other echo chambers, due to this phenomenon of thinking there's only one best narrative.
People like Moxie are much more interesting and interested, because they say they' love to be proven wrong. And I am also open to substantive discussion:
https://community.intercoin.app/t/web3-moxie-signal-telegram...
I imagine it's the same with AI claims about traditional fields. Where have we heard that before? "Yes it's cute and impressive but these guys don't really understand what the experts know about chess."
He's supposed to agree with you, or not express an opinion? Anything else short of this would be "defensive" right?
This whole idea that defending your positions in arguments is somehow a bad thing is a really odd modern development that I never understood.
Any AI model needs to be designed with adversarial usage in mind. And I don't even think random people trying to abuse the thing for five minutes to get it to output false or vile info counts as a sophisticated attack.
Clearly before they published that demo Facebook had again put zero thought into what bad actors can do with this technology and the appropriate response to people testing that out is certainly not blaming them.
Wow, not sure what to say if that's what you think are the only options. I didn't see the original response to the parent commenter, but this quote in the article, "It’s no longer possible to have some fun by casually misusing it. Happy?" doesn't bode well.
I get that in the post-Twitter world it can be heart to differentiate between valid criticism and toxic bad-faith arguments, but lets not pretend that it's impossible to acknowledge criticism in a way that doesn't immediately try to dismiss it, even if you may not agree in the end.
Dead Comment
For someone who dedicated their career to ML, they'll naturally try to solve everything in that framework. I observe this in every discipline that falls prey to it's own success. If there's a problem, those in the industry will naturally try to solve it with ML, often completely ignoring practical considerations.
Is the engine in your car underperforming? Let's apply ML. Has your kid bruised their knee while skating? Apply ML to his skating patterns.
The one saving grace of ML is that there are genuinely useful applications among the morass.
Him and Grady Booch recently had a back and forth on the same subject on Twitter where to me it seemed like he couldn’t answer Booch’s very basic questions. It’s interesting to see another person with a similar opinion.
For sure. If this is how he treats outside experts, I can't imagine what it's like to work for him. Or rather, I can imagine it, and I think it does a lot to explain the release-and-panicked-rollback pattern.
You interrupting the messaging ruins it, so you get some deniable boilerplate response. its not personal.
Deleted Comment
Society is still led by vague mental imagery and promises of forever human prosperity. The engineering is right but no one asks if rockets to Mars are the right output to preserve consciousness. We literally just picked it up because the elders started there and later came to control where the capital goes.
We’re shorting so many other viable experiments to empower same old retailers and rockets to nowhere.
Like many thin-skinned hype merchants with a seven-figure salary to protect, they're going to try and block criticism in case it hits them in the pocket. Simple skin in the game reflex that will only hurt any chances of improvement.
Being constantly sniped at probably puts you in a default-unreceptive state, which makes you unable to take on valid feedback, as yours sounds like.
Ideally scientists would be interested in the truth and engineers would be interested in making the system better.
And my expectations were exceeded by the first speaker. I couldn't wait for 3 full days of this! Then the second speaker got up, and spent his entire presentation telling why the first person was an idiot and totally wrong and his research was garbage because his was better. That's how I found out my field of study was broken into two warring factions, who spent the rest of the conference arguing with each other.
I left the conference somewhat disillusioned, having learned the important life lesson that just because you're a scientist doesn't mean you aren't also a human, with all the lovely human characteristics that entails. And compared to this fellow, the amount of money and fame at stake in my tiny field was miniscule. I can only imagine the kinds of egos you see at play among the scientists in this article.
At the end of the day, all of the noise of negativity and bad press is being drowned out by incredible demos. I don't know what to chalk this up to if not jealousy. Most people in the ML-o-sphere are ignoring it.
At the end of the day, all that matters is: are users using what you built?
You chose industry over academia, and that's fine. It lines up with your values. But realize that not everyone shares those values and beliefs. To some, the act of discovering a new thing is much more important than the users using said discovery. And that lines up with academia more so than the industry.
Both are different. Both are valid.
> At the end of the day, all that matters is: are users using what you built?
How would you measure Isaac Newton's advances in calculus and mechanics or Einstein's general theory of relativity, against say, a web app with a billion users?
When most folks think of academia they think faculty, but staff vastly outnumber then. Contrary to popular belief there are legions of cold, level headed, engineers that get shit done.
A lot of the research isn't some random study of something that may or may not be useful in half a century or more, it's often immediately applicable and winds up in products or shaping government policy on a global scale. Especially the well funded ones.
But we don't hear about that stuff. We hear what the media and tech companies are currently trying to cram down our throats.
Ah yes, the Kardashian model of success
But the A.I. hype is out of hand. "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous. The other day I was almost to write a comment on HN to a post from lesswrong where they apologized at the beginning of an article critical of the intelligence explosion hypothesis because short of Scientology or the LaRouche Youth Movement it is hard to find a place where independent thought is so unwelcome.
Let's hope "longtermism" and other A.I. hype goes the way of "Web3".
Towards the beginning of the pandemic there was a lot of this sort of stuff, say: https://www.nature.com/articles/s42256-021-00338-7
Deleted Comment
Using ML for object detection, object tracking, or prediction on an L2-L5 driver assistant system? AI safety research sounds like a capability you'd really want.
Using ML for object detection, object tracking, or prediction on an industrial robot that is going to work alongside humans or could cost $$$ when it fails? AI safety research sounds like a capability you'd really want.
Using classifiers or any form of optimization for algorithmic trading? AI safety research sounds like a capability you'd really want.
Building decision support systems to optimize resource allocation (in an emergency, in a data center, in a portfolio, ...)? AI safety research sounds like a capability you'd really want.
Hell, want to use an LLM as part of a customer service chatbot? You probably don't want it to be hurling racial slurs at your customers. AI safety research sounds like a capability you'd really want.
Unfortunately, now "AI Safety" no longer means "building real world ML systems for real world problems with bounds on their behavior" and instead means... idk, something really stupid EA longtermism nonsense.
Pressing the idea that AI is dangerous makes it seem like these companies are even more powerful than they are and could drive up their stock price. When the AI Safety people get into some conflict and get fired they are really doing their job because now it looks like big tech is in a conspiracy to cover up how dangerous their technology is.
The whole point is that the technology itself and its capabilities are not well defined, people are constantly inventing new applications and new methods now at breakneck speed, so the question of how to mitigate its risks is going to be at least as squishy a concept as the underlying tech + applications.
It boggles my mind how anyone can think otherwise. Existential dangers of superintelligent or even non-intelligent AI are the long-term result of the dangers of AI being developed and misused over time for human ends.
It's the exact same argument behind why we should be trying to track asteroids, or why we should be trying to tackle climate change: the worst-case scenario is unlikely or in the future, but the path we're on has numerous dangers where suffering and loss of human life is virtually certain unless something is done.
You know what would be useful for cavemen to ponder? The safety of fire. Or you know, just staying alive because there are more dangerous things out there.
The current state of so-called "AI" is our fire. It's impressive and useful (and there are real dangers associated with it) but it has no bearing on intelligence, let alone superintelligence. It's more likely that a freak super-intelligent human will be born than that we accidentally produce a super-intelligent computer. We produce a lot of intelligent humans, and we've never produced a single intelligent computer.
As it stands, we don't have the understanding or the tools to do anything useful wrt safety from a super-intelligent AI. We do have the understanding and tools to do something useful about asteroids and climate change.
Some AI dangers are certainly legitimate - it's easy to foresee how an image recognition system might think all snowboarders are male; or a system trained on unfair sentences handed out to criminals would replicate that unfairness, adding a wrongful veneer of science and objectivity; or a self-driving car trained on data from a country with few mopeds and most pedestrians wearing denim might underperform in a country with many mopeds and few pedestrians wearing denim.
But other AI dangers sound more like the work of philosophers and science fiction authors. The moment people start predicting the end of humans needing to work, or talking about a future evil AI that punishes people who didn't help bring it into existence? That's pretty far down in my list of worries.
These two things are in conflict. We could ignore both asteroids and climate change and according to the best known science there'd be very little impact for vast timespans and possibly no impact ever (before humanity is ended by something else like war).
Yes, also for the climate. Look at the actual predictions and it's like a small reduction in GDP growth spread over a very long period of time, and that's assuming the predictions are actually correct when they have a long track record of being not so.
Really stuff like asteroids and climate is a good counter-argument to caring about AI risk. Intellectuals like to hypothesize world-ending cataclysms that only their far sighted expertise can prevent, but whenever these people's predictions get tested against something concrete they seem to invariably end up being wrong. Our society rewards catastrophising far too generously and penalizes being wrong far too little, especially for academics, NGOs etc. It makes people feel or seem smart in the moment, and they can punt the reputational damage from being wrong far into the future (and then pretend they never made those predictions at all or there were mitigating factors).
(There is value in doing research and ethical analysis into AI/ML statistical algorithms to prevent hidden biases or accidental physical harm. People working in those areas are producing real benefits for the rest of us and I'm not criticizing them.)
What sort of AI catastrophe do you think would happen?
We must do something.
This is something.
Therefore, we must do this.
I hesitate to say "safe space", but... what if a group of people wants to come together discuss AI safety? If they'd have to regurgitate all the arguments and assumptions for everyone who comes along they'd never get anything done. If you are really interested to know where they are coming from, you can read the introductory materials that already exist. If the 99.9% of the world is hostile towards discussing AI safety (of the superintelligence explosion kind, not the corporate moralitywashing kind) there is some value in a place which is hostile to not discussing it, so that at least those interested can actually discuss it.
Is that actually true, though? It's true that a higher fraction of the people in that community give credence to the intelligence explosion hypothesis than pretty much anywhere else. (This is what one would expect, since part of the purpose of LessWrong is to be a forum for discussions about super-intelligent AI.) But even if the intelligence explosion is a terrible, absolutely-wrong theory, that doesn't prevent the people who hold it from being open-minded and tolerant of independent thought. Willingness to consider new and different ideas is something the LessWrong community claims to value, so it would be a little bit weird if they were doing way worse than average at it. And AFAICT, it seems like they're doing fine. Some examples:
- Here [1] is a post critical of the intelligence explosion theory. It has 81 upvotes as of this writing, and the highest upvoted comment goes like: "thanks for writing this post, it makes a lot of good arguments. I agree with these things you wrote" (list of things) "here are some points where I disagree" (list of things). This may even be the original post you were talking about in your comment, except that it doesn't start with an apology.
- LW has 2 different kinds of voting: "Regular upvotes" provide an indication of the quality of a post or comment, and "agree/disagree votes" let people express how much they agree or disagree with a particular comment. Down-voting a high quality comment just because you disagree (instead of giving it a disagree-vote) would be against the culture on LW.
If you're already sure that LW is wrong about superintelligence, and you're trying to explain how they became wrong, then "those LW people were too open minded and fell for that intelligence explosion BS" makes more sense to me than anything about suppression of independent thought.
[1] https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-ch...
Well that's an interesting way to misrepresent an entire important field of research based on what a few idiots said. There are serious people in that field who aren't addicted to posting on LessWrong.
Suffice it to say, ideologues desperately want control over everything AI/ML. That's the real danger.
I read the MIT Technology Review article, and I was asking myself “what is an example of Galactica making a mistake?” The article could easily have quoted a specific prompt, but doesn’t. It says the model makes mistakes in terms of understanding what’s real/correct or not, but the only concrete example I see in the article is that the model will write about the history of bears in space with the implication that it’s making things up (and I believe the model does make such mistakes). I don’t think it’s a good article because it’s heavy on quoting people who don’t like the work and light on concrete details.
Does the imperfection of a language model really mean the model should not exist? This seems to be what some critics are aiming for.
That being said, I am very partial to the AI researchers here who feel like their cool demo has to be taken down because some people were misusing it. It's an unfair high standard they're holding AI demos to, compared with other technologies. It's analogous to asking Alexander Graham Bell to shut down an early telephone prototype because some jerks were using it to discuss antisemitic conspiracies.
Deleted Comment
So you can get an AI model that hasn’t been hardened against these attacks to write a paper on why <racist thing> or create an image depicting <racist or porny thing> and it just does it. Because the model is just an input:output device and doesn’t have the “wait maybe I shouldn’t do that because it’s bad” feature.
And while teen and young 20 something males will get a huge laugh out of posting screenshots of it, the journalist crowd will freak out and start calling the model, researchers, and company racist.
Personally I have played with large language models and the chutzpah with which they will lie and make things up is indeed astounding (they do a good job at making them sound believable, and lie with utter seriousness and confidence). So I can see where the controversy comes from, although I agree with the other commenters that the researchers should be able to put up a bit fat disclaimer about it
Deleted Comment
The first step to successfully publish prototypes is creating realistic expectations. That's being done all the time in papers and other ML projects. Instead Meta listed a set of features in a language model that can be summarized as "magic".
Absolutely they can, and his employer could have kept it up. The issue is the phantasmagorical and ridiculous claims about AI-generated scientific research that LeCun peddles. When there's something concrete one can use to test these extremely bold claims, there's a way to at least partially apply a reality check to the claims, and demonstrate their ridiculousness. Which is a very useful and important part of how the scientific field evolves and advances. Feeding non-experts all these wild claims in perpetual future tense only works for so long, and it ought to be that way.
This whole incident was a case study for product management and startup school 101. I've made this exact same category of error in developing products, where I said, "hey, look at this thing I built that may mean you don't have to do what you do anymore!" and then was surprised when people picked it apart for "dumb" reasons that ignored the elegance of having automated some problem away.
If this model were really good, they would have used it to advance a bunch of new ideas in different disciplines before exposing it to the internet. Reality is, working at Meta/Facebook means they are too disconnected from the world they have influenced so heavily to be able to interpret real desire from people who live in it anymore. When you are making products to respond to data and no actual physical customer muse, you're pushing on a rope. I'd suggest the company has reached a stage of being post-product, where all that is left are "solutions," to the institutional customers who want some kind of leverage over their userbase, but no true source of human desire.
The article really fails to explain that LeCun and Marcus have been trading insults for the last few years, it's hardly LeCun snapping at some random person.