“It’s an ambiguous term with many possible definitions.”
“Does the product actually use AI at all? If you think you can get away with baseless claims…”
Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook. I’ve been on the receiving end of government agency enforcement (SEC) and it felt a lot like a shakedown. This language carries a similar intent: if we decide we don’t like you, watch out!
Yeah, it's pretty laughable that the source they link to for those "possible definitions" right away says this:
> AI is defined in many ways and often in broad terms. The variations stem in part from whether one sees it as a discipline (e.g., a branch of computer science), a concept (e.g., computers performing tasks in ways that simulate human cognition), a set of infrastructures (e.g., the data and computational power needed to train AI systems), or the resulting applications and tools. In a broader sense, it may depend on who is defining it for whom, and who has the power to do so.
I don't see how they can possibly enforce "if it doesn't have AI, it's false advertising to say it does" when they cannot define AI. "I'll know it when I see it" is truly an irksome thorn.
Deterministic if/then statements can simulate a surprising amount of average human cognition, so who's to say a program composed of them is neither artificial nor intelligent? (That's hand-waving over the more mathematical fact that even the most advanced AI of today is all just branching logic in the end; it just happens to have been automatically generated through a convoluted process we call "training", resulting in complicated conditions for each binary decision.)
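To make that concrete, here's a minimal, purely illustrative Python sketch (assuming scikit-learn is available): "train" a tiny model, then dump it back out as the nested if/then rules it ultimately is.

    # Illustrative only: fit a small decision tree, then print it as plain branching logic.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)   # the "convoluted process we call training"
    print(export_text(clf, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
    # Prints nested "|--- feature <= threshold" rules: automatically generated if/then statements.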
In general I like the other bullet points, but I find it really bizarre they'd run with this one.
While I don't disagree with the basic premise ("AI" as a specific, falsifiable term is hard to pin down because of how broadly the term is used), I do think there are specific cut-and-dry circumstances where the FTC could prove your product does not include AI.
For example, using an alternative to Amazon's Mechanical Turk to process data is clearly a case where your product does not use AI - which, I believe, is more likely the kind of scenario the author had in mind when writing that sentence.
On the other end of the spectrum, calling a feature of a product "AI" seems to imply some minimal level of complexity.
If, for example, a company marketed a toaster that "uses AI to toast your bread perfectly", I would expect that language to indicate something more sophisticated than an ordinary mechanical thermostat.
It makes sense to protect investors from falsely investing in new "AI" tech that isn't really new AI tech, but why do consumers need to be protected? If a software product solves their problem equally well with deep learning or with a more basic form of computation, why is the consumer harmed by false claims of AI?
To put it another way, if you found out that ChatGPT was implemented without any machine learning, and was just an elaborate creation of traditional software, would the consumer of the product have been harmed by false claims of AI?
I know of at least one startup that claimed to use AI (including having AI in the company name), but in actuality humans did nearly all of the work. The hope was that once they got enough customers (and supposedly "proved the concept"), they could figure out how to use AI instead. I bet this is/was somewhat common.
I also see many (particularly "legacy") products say they're "AI-driven" or "powered by AI", when in actuality one minor feature uses some AI, even in the broadest sense.
> Are you exaggerating what your AI product can do?
> Are you promising that your AI product does something better than a non-AI product?
> Are you aware of the risks?
I'm guessing everyone here has come across examples of "AI" tossed onto something where either 1) ten years ago it wouldn't have been called AI, or 2) the thought of something matching a more recent interpretation of "AI" being core to the product's function is a little scary and/or feels a little unnecessary.
Maybe it is a shakedown/warning. I think that's fair. We should have better definitions so that these agencies can't overstep, and products should have a better explanation of what "AI" means in their context. Until then yeah, vague threats versus vague promises.
It sounds like you may have missed the stampede of “AI” companies coming out of the woodwork the last few months.
For every legitimate AI project, there have been a thousand “entrepreneurs” who spend 4 hours putting a webflow site on top of GPT APIs and claim they’ve built an “AI product”. There’s no limit on the amount of BS benefits they claim. They seem like the same people who just finished running the crypto scams.
It seems quite obvious to me that this cohort is the target of this statement.
> spend 4 hours putting a webflow site on top of GPT APIs
GPT _is_ AI though, no? I would think that this would count. Might violate "are you exaggerating what your AI product can do" or "are you aware of the risks" instead though.
>In the 2021 Appropriations Act, Congress directed the Federal Trade Commission to study and report on whether and how artificial intelligence (AI) “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms.”
>We assume that Congress is less concerned with whether a given tool fits within a definition of AI than whether it uses computational technology to address a listed harm. In other words, what matters more is output and impact. Thus, some tools mentioned herein are not necessarily AI-powered. Similarly, and when appropriate, we may use terms such as automated detection tool or automated decision system, which may or may not involve actual or claimed use of AI.
I find it hard to sympathize with companies whose websites are full of AI, blockchain, and quantum trash. Honestly, idgaf if they get shaken down. If you have a product that people like, just market your product based on its features, and remove all the BS about using <insert the buzzword of the day>.
If the FTC tells OpenAI to stop mentioning AI, I would be surprised. Even if that happens, I am sure ChatGPT will remain just as popular.
There is also the high-level question of why exactly the government needs to police this. If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly, that is no reason to get the enforcers involved.
If examined closely, the finger wagging in this post is remarkably petty. This guy was likely part of the angry crowd who didn't like Steve Jobs describing the iPhone as "magical". The standard should be "a lie that causes measurable, material harm", not that some company exaggerated in its advertising. Advertisers exaggerate; that is just something people have to live with.
The problem is that this ends with everybody calling their product magic and the word losing its original meaning; soon after it will have a meaning closer to "disappointing" or "lame".
It doesn't really matter what the standard is... What matters is that there aren't some companies that push the limits far harder than others. If there are, then the companies that push the limits of what is allowed harder will be at an advantage, to the detriment of the public and the American economy as a whole.
>If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly; that is no reason to get the enforcers involved.
Well, if the enforcement agency is the SEC, I would think it makes a good deal of difference to the actual value of your company?
I’m sure there’s a company out there who uses some linear equation in their app that they came up with by dumping the data they had into Excel and running the linear regression “data analysis” on it.
> “Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.”
So is y=7.4x+5 an "AI" running inside our app, or is it just the output from an "AI tool", FTC?
Replace x and y with matrices and wrap everything in a non-linearity. Swap the 7.4 and 5 constants for variables a and b, and set their values by taking the partial derivatives, with respect to a and b, of the difference between the ground-truth value and the predicted y.
String together a bunch of these "smart cells" and observe that we can process sequences of data by linking the cells together. Further observe that if we have a separate set of cells (technically it's an attention vector, not quite a group of neurons) whose loss function is with respect to each individual token in the sequences, we can "pay attention" to specific segments in the sequences.
Add a few more gates and feedback loops, scale up the number of cells to 10^12, and you basically have a state of the art chatbot. Capiche?
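For anyone who wants the first step of that recipe spelled out, here's a minimal, purely illustrative Python/numpy sketch (the 7.4 and 5 are just the toy values from upthread): recover a and b by gradient descent on the squared difference between the prediction and the ground truth.

    import numpy as np

    # Toy data generated from the "true" line y = 7.4x + 5, plus a little noise.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 7.4 * x + 5 + rng.normal(0, 0.1, size=200)

    a, b = 0.0, 0.0   # learnable stand-ins for the 7.4 and 5 constants
    lr = 0.1
    for _ in range(500):
        err = (a * x + b) - y              # predicted y minus ground truth
        a -= lr * 2 * np.mean(err * x)     # partial derivative of mean squared error w.r.t. a
        b -= lr * 2 * np.mean(err)         # partial derivative of mean squared error w.r.t. b

    print(a, b)   # converges to roughly 7.4 and 5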
>Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook.
maybe the textbook needs to be investigated
that's meant to sound ironic no matter which side of the issue you're on
Given the ambiguity of the term, it would actually be better if the FTC didn't step in at all - let the term dilute itself in its own marketing, to the point where consumers don't care about it at all or actively avoid products with "AI".
That's true if you use the sci-fi definition ("machines that think as well as humans") but the technical definition is a lot broader than that. In academic terms, a sentient machine would be "strong AI" or "AGI (artificial general intelligence)"; we've had "weak AI" for decades.
You can't use simulated annealing or gradient descent in your product and claim that you have built something intelligent. That would be laughable and would validate this kind of messaging from the government.
AI is indeed a very ambiguous and subjectively defined term. In my own personal subjective opinion anything that does not have survival instinct is not remotely intelligent. By that definition unicellular organisms are more intelligent than a Tesla self driving vehicle.
A person can certainly claim the product uses "AI". The currently used definition of AI might be absurd, but you can't say such a person is lying or deceiving.
wow, i would not have imagined a gov't agency to be agile enough to release something like this. i'm kind of impressed. here's to hoping this isn't just all bark and there's some actual teeth to this.
The language is not bureaucratic; it's elegant, precise, and clear. Like it was written by a passionate person who cares about the topic, rather than a government drone.
Note that for the banking industry various "semi regulations" have been in place, spawning a cottage industry for "Model Risk Management". Basically you have to explain how you (attempt to) keep bias out of your training, re-test often, etc.
The point here is that, as a bank, you don't stop providing loans to certain demographics etc.
Jokes on them, I'll call my AI product Full Self Thinking.
It's $10,000 if you buy now, but we'll be raising prices to $15,000 by June.
And yes, it will just be a worse version of the latest ChatGPT, but I'll hide it in a black box and continually tell you that it's so far ahead of the competition, we can't even see them in the rear view mirror!
On the contrary, the FTC seems to have given up on pressing problems for American consumers (like price gouging in broadband, generics, etc.) and is instead going after "trendy" topics that will get them and their leaders more press coverage.
The FTC wasn’t doing anything for decades. It’s doing a lot now… like it was made to do. Unchecked monopolization of so many markets has been terrible for the economy.
It's not always industry; sometimes agencies are closer to academia, certain types of lawyers (maybe this is considered industry too), etc. But this seems in line with Lina Khan's efforts.
I hope people develop a better understanding of this than the bloody coin fiasco. Society can’t effectively deal with something it considers magic - even a low resolution understanding + accessible expertise will help massively (think law or medicine).
Take ChatGPT. In the case of humans, language is just a way to express knowledge. In ChatGPT it's the other way around - the focus is the correctness of the word chain. It happens to get some things right because other people have chained similar sentences together in expressing that knowledge. Even if it's "lying" to you, as long as the sentence is correct and the paragraph reads nicely, it's done the job it was designed to do.
I have no higher hopes that AI products will be regulated any better than cryptocurrencies. Fundamentally they both move at breakneck speeds, and regulators have proven they can, at best, only react to damage after it's done. Maybe they surprise us all and proactively regulate AI well, but they'd have to prove that instead of being taken at their word like this press release. Technology's speed combined with human greed is what allows it to do damage before regulation knows what's going on, and that fundamental difference isn't likely to change.
We are working on an AI product in a highly regulated industry (Investing).
Recently we have been experimenting with using the GPT-3 API as a junior equity analyst. On an eyeball check, the results of the technology are impressive.
The problem is that there is no way to validate the feedback at scale. I.e., we can't receive statistics about the feedback from the API.
In contrast, for our own Entity Recognition models we can (and do) calculate probabilities that explain why a certain entity is shown.
Hence, I think for API users of GPT-3, OpenAI should return additional statistics about why a certain result is returned the way it is, to make it really useful and, more importantly, compliant.
For now GPT is creating the filler content that moves the Web in 2023.
But, given the results I have seen from our PoCs, it can do more and will do more in the future.
It's almost always correct; otherwise it would be worthless.
Yes, it's possible to construct questions that lead to nonsensical answers, and sometimes nonsensical answers are given even to sensible questions, but saying that ChatGPT's answers are "occasionally" correct is weapons grade BS. ChatGPT is a hair's breadth from being an ultimate answer machine, and is far more likely to be correct on almost any question than the average human.
The FUD that is currently being manufactured around language models is insane. I guess we should all stop using search engines, since those are even less reliable.
>In contrast, for our own Entity Recognition models we can (and do) calculate probabilities that explain why a certain entity is shown.
>Hence, I think for API users of GPT3, OpenAI should return additional statistics why a certain result is returned the way it is to make it really useful and more importantly compliant.
For LLMs, you can get the same thing: the distribution of probabilities for the next token, for each token. But right now we cannot say why the probabilities are the way they are; the same goes for your entity recognition models.
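For what it's worth, the per-token distributions are exposed on the (legacy) Completions endpoint via the logprobs parameter. A rough, illustrative sketch with the pre-1.0 openai Python client - the model choice, prompt, and key are placeholders, and note this surfaces the probabilities without explaining them:

    import openai  # legacy (pre-1.0) client

    openai.api_key = "sk-..."  # placeholder

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Acme Corp reported quarterly revenue of",  # hypothetical prompt
        max_tokens=5,
        logprobs=5,  # return the top-5 token log-probabilities at each step
    )

    lp = resp["choices"][0]["logprobs"]
    for token, top in zip(lp["tokens"], lp["top_logprobs"]):
        print(repr(token), dict(top))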
The problem in a nutshell, and the one the FTC has pointed out, is model explainability. In the past I worked on an AI for automated lending decisions. We were asked to be able to explain every single decision the engine took.
Now, if a news article reaches our AI engine, it will tag, categorize, classify, and rank that article - all based on models that are explainable.
LLMs, at least how I personally implemented them in the past, create a huge black box that is largely non-explainable.
A blogger (an experienced ML professional in fintech) published an excellent write-up of AI in fintech in December - his basic take was that there's a lot of room for the tech to grow before it becomes truly ubiquitous, because answers in finance must be correct 100% of the time. Worth a read!
For most of our models we return more information.
Especially if you look at it from a vendor/customer perspective I believe this to be quite important.
it hasn't been a problem for the social platforms or other companies whose business is hoovering up personal data for profit. the trick is to get too-big-to-fail status before gov't agencies can come shut you down.
I think this is great. As a practitioner, something I've incorporated and demand in all of the contracts I work on is a clear and explicit validation and uncertainty analysis. It was always an issue when I worked at big corp that there was pressure to bury the lede when it came to uncertainty / propagation of error. I considered it to be misleading, and since then I make sure that any work that I'm a part of spells out in explicit language exactly how we'll analyse and approach bias and uncertainty as part of the original work. I also try and make sure that assessments of uncertainty are 'part of the product' so there is no getting around things and every prediction also comes with an 'I told you so' layer.
I'm an absolute proponent of the kinds of products that get labeled as AI. I also think that useful predictions can be made in the face of uncertainty, because let's be real, human assessments, predictions and decisions also come with a degree of uncertainty (we usually just fail to rigorously quantify them).
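Not the author's actual setup - just one way to make "uncertainty as part of the product" concrete. A small Python/numpy sketch that bootstraps the fit so every prediction ships with an interval:

    import numpy as np

    def predict_with_uncertainty(fit_fn, train_x, train_y, new_x, n_boot=200, seed=0):
        """Refit on bootstrap resamples; return the mean prediction and a 95% interval."""
        rng = np.random.default_rng(seed)
        n = len(train_x)
        preds = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)        # resample the training data
            model = fit_fn(train_x[idx], train_y[idx])
            preds.append(model(new_x))
        preds = np.array(preds)
        return preds.mean(axis=0), np.percentile(preds, [2.5, 97.5], axis=0)

    def fit_line(x, y):
        a, b = np.polyfit(x, y, 1)                  # toy model: least-squares line
        return lambda q: a * q + b

    # usage: mean, (lo, hi) = predict_with_uncertainty(fit_line, x, y, np.array([0.5]))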
Just me, or does this read like a Godfather-style warning?
"Nice AI product you got over there. Would be a shame if something happened to it."
But what's different about this AI product/hype cycle compared to every other cycle that tech has had? Why does the FTC feel the need to make proactive enforcement warnings about this one? What about crypto? Or going back further, the dotcom era?
> But what’s different about this AI product/hype cycle compared to every other cycle that tech has had?
That it's a hype cycle that is happening now, with a lot of shady actors overselling things.
FTC has done things like this during other cycles, too. Letting industries know they are aware, and simultaneously letting the public know to be aware (often, they’ll provide multiple messages, some specifically worded toward industry and others toward the public, but even with just one the message works both ways.)
On one hand, the US is surely happy that they're the epicentre; on the other hand, the potential impact on unemployment (fiscal and social unrest) is massive…
Ironically, here in Australia we've just been having a "Royal Commission" (high end public legal investigation) into how the government used an algorithm to pretty much screw over a bunch of people on welfare (several years ago).
> Deterministic if/then statements can simulate a surprising amount of average human cognition, so who's to say a program composed of them is neither artificial nor intelligent?
This was the pinnacle of AI in the 80's. They called them "expert systems".
The FTC is going to get hammered in court if they ever try to test this.
>We assume that Congress is less concerned with whether a given tool fits within a definition of AI than whether it uses computational technology to address a listed harm. In other words, what matters more is output and impact. Thus, some tools mentioned herein are not necessarily AI-powered. Similarly, and when appropriate, we may use terms such as automated detection tool or automated decision system, which may or may not involve actual or claimed use of AI.
Quite hilarious really!
It's just saying the key criterion to evaluate is not whether the software is called AI, but what the software actually does.
Isn’t that just common sense?
Linear regression is an important part of statistics but is still ultimately a couple matmuls and a matrix inversion in its basic form.
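Literally so. A minimal numpy sketch of the closed-form fit (the 7.4 and 5 are just reused from the y=7.4x+5 example upthread):

    import numpy as np

    # Ordinary least squares in its basic closed form: beta = (X^T X)^-1 X^T y
    def ols(x, y):
        X = np.column_stack([np.ones(len(x)), x])    # prepend an intercept column
        return np.linalg.inv(X.T @ X) @ X.T @ y      # a couple of matmuls and a matrix inversion

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = 7.4 * x + 5
    print(ols(x, y))   # -> approximately [5.0, 7.4] (intercept, slope)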
https://www.fdic.gov/news/financial-institution-letters/2017...
https://occ.gov/publications-and-resources/publications/comp...
Let's see what the article says:
> When you talk about AI in your advertising, the FTC may be wondering, among other things: (...)
Just "wondering", no teeth.
The FTC is actually pretty good about doing things like this when a wave of similar types of borderline business practices erupts.
Then, my next thought was: who (in industry) wrote this and got the FTC to publish it, wanting to drive some agenda forward?
Up until very recently, society considered everything magic. Many still do.
> The problem is that there is no way to validate the feedback at scale. I.e., we can't receive statistics about the feedback from the API.
It is a very impressive toy, but still just a toy for now.
Here’s how:
https://github.com/williamcotton/empirical-philosophy/blob/m...
> Hence, I think for API users of GPT-3, OpenAI should return additional statistics about why a certain result is returned the way it is, to make it really useful and, more importantly, compliant.
https://chaosengineering.substack.com/p/artificial-intellige...
If that’s not helpful, were you getting at having the model return some rich data about the attention weights that went into generating some token?
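In case it helps, attention weights are easy to pull out of open models. A rough sketch assuming the HuggingFace transformers library, with GPT-2 as a local stand-in (GPT-3's weights aren't available, so this is illustrative only):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The FTC says", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)

    # out.attentions: one tensor per layer, shaped (batch, heads, seq_len, seq_len)
    last_layer = out.attentions[-1][0]       # last layer, batch dim dropped
    print(last_layer.mean(dim=0))            # per-head average: how much each position attends to the others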
While avoiding getting bought by Elon \o/
> Ironically, here in Australia we've just been having a "Royal Commission" (high end public legal investigation) into how the government used an algorithm to pretty much screw over a bunch of people on welfare (several years ago).
https://robodebt.royalcommission.gov.au
So, an "AI" (or ML, etc) version of said scheme could also impact "unemployment" in some further crappy ways too.