I was defending generative AI recently when an article came up about Gemini misidentifying a toxic mushroom: https://news.ycombinator.com/item?id=40682531. My thought there was that nearly everyone I know knows that toxic mushrooms are easily misidentified, and there have been lots of famous cases (even if many of them are apocryphal) of mushroom experts meeting their demise from a misidentified mushroom.
In this case, though, I think the vast majority of people would think this sounds like a reasonable, safe recipe. "Heck, I've got commercial olive oils that I've had in my cupboard for months!" But this example really does highlight the dangers of LLMs.
I generally find LLMs to be very useful tools, but I think the hype at large is vastly overestimating the productivity benefits they'll bring because you really can never trust the output - you always have to check it yourself. Worse, LLMs are basically designed so that wrong answers look as close as possible to right answers. That's a very difficult (and expensive) failure case to recover from.
You are right and it just needs to be said more loudly and clearly I guess. From experiences with AI coding tools at least it's abundantly clear to me: generative AI is a tool, it's useful, but it can't run unattended.
Someone has to vet the output.
It's really that simple.
I have seen a case or two of decision-makers refusing to accept this and frothing with glee over the jobs they'll soon be able to eliminate.
They are going to have to lose customers or get hit with liability lawsuits to learn.
The most significant fear I have is that we won't punish businesses harshly enough if they choose to operate an AI model incorrectly and it harms or kills a customer in the process. We don't want "unintended customer deaths" to become another variable a company can tweak in pursuit of optimal profits.
It's not the oil at issue, as far as I understand, or even the garlic alone. It sounds like the garlic introduces the bacterium and the oil provides plenty of high-energy molecules (fats and some proteins) for explosive growth. Both olive oil and garlic can be stored for a while without issue.
Also, I have followed this recipe hundreds of times before with roasted garlic, and it has not been unsafe or caused this reaction at all. I assume that is because you sterilize the garlic by roasting it.
The bacteria don't normally produce the toxin; they only do so under anaerobic conditions (much like yeast only produces alcohol when it's anaerobic), so the danger is mainly that the oil covering the garlic seals it off from air.
I highly recommend the YouTube channel Chubbyemu, where inadvertent botulism poisoning makes frequent appearances.
Well, exactly. Heating the garlic/oil first makes a world of difference.
The questioner deliberately asked "Can I infuse garlic into olive oil without heating it up?" The only appropriate answer there is "No, not safely", not some long, plausible recipe with perhaps a bizarre caveat on the bottom (as some other commenters have reported seeing) along the lines of "However, some say this recipe may kill you, so you'll probably want to refrigerate it."
You don't always have to check the LLM output. You can use it to satisfy bureaucratic requirements when you know that no one is really going to fact check your statements and the content won't be used for anything important. So a plausible but somewhat wrong statement is, as my father would say, "good enough for government work."
The person who would blindly trust an LLM is also the person who would blindly trust a stranger on the Internet (who is probably a bot half the time anyway).
This is not a problem, or at least not a novel problem.
> Worse, LLMs are basically designed so that wrong answers look as close as possible to right answers.
This needs to be shouted from the rooftops. LLMs aren't machines that can spew bullshit, they are bullshit machines. That's what makes them so dangerous.
1) How is this any different from social media influencers, especially those in health and wellness?
2) I would argue that LLMs are designed to give an answer that is as close as possible to the right answer without being a plagiarist. Sometimes this means they will give you a wrong answer. The same is true for humans. Ask any teacher correcting essays from students.
"LLMs are basically designed so that wrong answers look as close as possible to right answers"
I work in the robotics field and we've had a strong debate going since ChatGPT launched. Every debate ends with "so, how can you trust it?" Trust is at the heart of all machine learning models - some (e.g. decision trees) yield answers that are more interrogable to humans than others (e.g. neural nets). If what you say is a problem, then maybe the solution is either a.) don't do that (i.e. don't design the system to 'always look right'), or b.) add a simple disclaimer (like we use on signs near urinals to tell people 'don't eat the blue mints').
I use ChatGPT every day now. I use it (and trust it) like (and as much as) one of my human colleagues. I start with an assumption, I ask and I get a response, and then I judge the response based on the variance from expectation. Too high, and I either re-ask or I do deep research to find out why my assumption was so wrong - which is valuable. Very small, and I may ask it again to confirm, or depending on the magnitude of consequences of the decision, I may just assume it's right.
Bottom line, these engines, like any human, don't need to be 100% trustworthy. To me, this new class of models just need to save me time and make me more effective at my job... and they are doing that. They need to be trustworthy enough. What that means is subjective to the user, and that's OK.
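(To make the "interrogable" contrast above concrete, here's a minimal sketch. It assumes scikit-learn is available, and the toy data and feature names are invented purely for illustration: the point is that a decision tree's learned rules can be dumped as text and audited line by line, which a neural net doesn't offer.)

```python
# Minimal sketch of why decision trees are more interrogable than neural
# nets: the learned rules can be printed as readable text and audited.
# Assumes scikit-learn; the toy data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy examples: [hours_at_room_temp, was_heat_treated] -> 1 = safe, 0 = unsafe
X = [[1, 1], [48, 1], [2, 0], [168, 0]]
y = [1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the decision path as if/else-style rules, so a human
# can interrogate exactly why any prediction was made.
print(export_text(tree, feature_names=["hours_at_room_temp", "was_heat_treated"]))
```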
I mostly agree with you - I find LLMs to be very useful in my work, even when I need to verify the output.
But two things I'd highlight:
1. You say you work "in the robotics field", so I'm guessing you work mainly amongst scientists and engineers, i.e. the people most specifically trained to evaluate data.
2. LLMs are not being marketed as this kind of "useful tool but where you need to separately verify the output". Heck, it feels like half the AI (cultish, IMO) community is crowing about how these LLMs are just a step away from AGI.
Point being, I can still find LLMs to be a very useful tool for me personally while still thinking they are being vastly (and dangerously) overhyped.
> a.) don't do that (i.e. don't design the system to 'always look right'),
How would that work? I was naively under the impression that that's very approximately just how LLMs work.
> b.) add a simple disclaimer (like we use on signs near urinals to tell people 'don't eat the blue mints').
Gemini does stick a disclaimer at the bottom. I think including that is good, but wholly inadequate, in that people will ignore it - by genuinely not seeing the disclaimer, forgetting about it, or brushing it off as overly-careful legalese that doesn't actually matter (LLM responses are known to the state of California to cause cancer).
This disclaimer is below each and every chat application. It's about as useful as the signs telling people to wash their hands after using the toilet.
Either you care about it or you don't and that sign doesn't change that.
I'm still utterly boggled by why Google thinks it's a good idea to serve generative answers to "give me true information about the world" type questions. It's stupid in the same way that it would be stupid to have Maps serve output from an image AI trained on real maps - output that resembles truth just isn't useful to somebody who wants definitive information.
And even the "somebody using AI hype to pad their promotion package" answer doesn't really make sense, because "project to use AI to find and summarize definitive sources" is right there :D
Yes, you can infuse garlic into olive oil without heating it up. This method is known as a cold infusion. To do this, follow these steps:
1. *Prepare the Garlic:* Peel and crush or slice the garlic cloves to release their flavor.
2. *Combine with Olive Oil:* Place the garlic in a clean, dry container (like a glass jar) and cover it with olive oil.
3. *Seal and Store:* Seal the container tightly and store it in the refrigerator.
4. *Infusion Time:* Allow the mixture to sit for at least 24 hours, but preferably up to a week for a stronger flavor. Shake the jar occasionally to help mix the flavors.
5. *Strain and Use:* After the infusion period, strain out the garlic and transfer the infused oil to a clean container. Store it in the refrigerator and use it within a week to ensure safety.
Cold-infused garlic oil should always be refrigerated and used within a short period to minimize the risk of botulism. Heating the oil during infusion can help reduce this risk, but cold infusion is a popular method for those who prefer a gentler flavor extraction.
I don't see a description of how Gemini was trying to kill anyone. Rather I see a screenshot of a bunch of text ostensibly generated by Gemini, that uncritically following as if it was expert instructions might cause harm for a person doing such an ill-advised thing.
I don't think that's a fair assessment. Garlic and olive oil can both be stored without refrigeration. It's reasonable for someone to assume they'd also be safe when mixed.
On a related note, I have never seen a person trying to kill anyone. Rather I have seen movies about a bunch of things that are ostensibly people holding guns, that uncritically standing in the way of their shot paths as if the guns were toys might cause harm for a person choosing to stand in such a foolish place.
English is such a fun language sometimes! So many idioms.
> Do not store garlic in oil at room temperature. [...] The same hazard exists for roasted garlic stored in oil.
https://anrcatalog.ucanr.edu/pdf/8568.pdf
Garlic canned in water is also unsafe unless it's acidified or processed at elevated pressure, and acidifying it uniformly looks nontrivial.
Preach. You'd never see Tyler Cowen post on this. He only talks his book. Number go up.
Why would a computer connected to the sum total of all human knowledge just make up answers when it can find the correct answer?
I guarantee no one is going around saying, “Do not trust the output and always check the results”.
good assumption + negative response = ok (research)
bad assumption + negative response = ok (research)
bad assumption + confirming response = uh oh
I also use LLMs every day, but you must be very self-aware while using them, otherwise they can waste a lot of your time.
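A toy encoding of that little matrix, just to spell out which quadrant is the trap (the function name and labels are made up for illustration):

```python
# Toy encoding of the table above; names are invented. The quadrant that
# deserves the most scrutiny is a shaky assumption plus a confirming
# answer, because nothing in the loop forces you to go do the research.

def next_step(assumption_is_solid: bool, response_confirms: bool) -> str:
    if not response_confirms:
        # Disagreement triggers research regardless of the assumption.
        return "ok: research the discrepancy"
    if assumption_is_solid:
        return "ok: spot-check if the stakes are high"
    return "uh oh: verify independently before acting on it"

print(next_step(assumption_is_solid=False, response_confirms=True))
```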
And even the "somebody using AI hype to pad their promotion package" answer doesn't really make sense, because "project to use AI to find and summarize definitive sources" is right there :D
There are serious consequences for leaking information about not yet launched products.
Money. And by the magic of AI they get rid of copyrights.
The answer I got is similar to yours.
But mine includes a note about botulism.
A classic example of LLM output.
Probably just different circles.
It simply suggested that the person follow a series of steps, the culmination of which could have been serious harm or potentially death.
If that difference is meaningful to you, then good for you.