Companies are getting desperate to show AI adoption because right now the numbers just don’t add up.
Not surprisingly, companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.
“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.
Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.
Google has enough money, is still profitable, and still invests in AI and DeepMind.
Google doesn't need to do anything to make any other numbers work.
Gemini 3.1 Pro is really good; Meta just signed a deal with Google for their TPUs.
Nano Banana 2 Pro is also very good.
OpenAI's numbers might not add up, and Anthropic might burn through cash, but not Google.
And it doesn't matter anyway, because as long as Google can afford it, Microsoft HAS TO do this too, and Microsoft can also afford it. The same goes for Amazon.
Microsoft invests in OpenAI and Amazon invests in Anthropic.
"Not surprisingly companies are willing to get into bed with more and more questionable use cases…"
But not all companies as we have seen over the last week or so.
Regardless, all companies doing so will have to balance the ethics of their choices against the public perception of their company, as all of us are free to make choices that align with our own personal ethics.
(In short, they don't get to hide behind "everyone else is doing it".)
Questionable use cases like hyperscalers housing confidential data of military operations? The use case is the same: private companies supporting military operations, as they have for ages.
Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".
I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.
>The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.
So the problem is filling out forms is too onerous, but rather than fix the process, create a device that fills the form with slop and then another device that approves or rejects the slop form.
I could have sworn I signed up for the other future, the one without quite this much stupid.
As someone who moved from software companies to IT management, watching this rush to fully embrace 'everything in Excel', with essentially undefined business use cases and processes moved into software ad hoc and without validation, it's going to be interesting to see how this plays out. Especially for companies that have outsourced IT and expect software to be a defined, tested-out business process in supported systems.
In house IT is going to be huge in a couple of years sorting out this mess. I would have never guessed the future would be all custom Excel spreadsheets, but instead of Excel just random code in random languages with random data stores.
Everyone’s scared that it would be used for war, but how would they break the alignment on LLM models? They won't even let me generate images of Black people with AI. How the hell will it work for war-related tasks? Or would there be a separate model, fine-tuned for government, that allows being used to kill people?
You don’t say “find people to kill and kill them”; you say, “given this list of locations, which ones could be harboring terrorists or hidden military bases?” Etc. Or even more abstract constructs based on domain aliases, where AI assists in pattern matching and automation but isn’t really thinking in terms of moral domains.
War is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives. A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small "inside" group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many. Out of war a few people make huge fortunes - Smedley D. Butler
This should surprise no one. A CIA-backed VC was one of the first investors of Google. Big tech will always serve the powers that be. Employees that think their letters of appeal will do anything live in a fantasy land. That’s not how the real world works.
Engineering Ethics is a standard required class in any engineering discipline and a whole field of discussion. The ethics of working on military stuff (or even just government stuff) is nowhere near as cut and dried as your question seems to imply.
For example:
- What if the country asked you to develop technology to track and hack journalists or political rivals the administration doesn't like?
- What if the country asked you to develop chemical weapons? Is it different if the weapons would be used on their own population or only on external "enemies"?
- What if the country asked you to personally assassinate a civilian of another country? What if they asked you to create a program that would do that? What if they asked you to simply create a list of targets, and you knew they'd be assassinated?
- What if the country asked you to build something in an unsafe way that you're pretty certain will cause harm to people?
- What if the country asked you to make a public statement lying about the purpose behind what you're building?
Given Anthropic is also funded by them, either they are desperate to not lose or they really don't think Anthropic has a moat.
"War is a racket" is as true now as ever.