cmiles8 · 2 days ago
Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.

Not surprisingly companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.

aurareturn · 2 days ago

  Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.
All compute companies say they don't have enough compute to meet demands. Why do you think there isn't enough AI adoption to justify the investment?

cmiles8 · 2 days ago
“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.

Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.

duskdozer · 2 days ago
"enough compute" will be when there is no more hardware for use outside of their walled garden, at which point they can control what they want
Tklaaaalo · 2 days ago
Google has enough money, still has positive revenue, and still invests in AI + DeepMind.

Google doesn't need to do anything to make any other numbers work.

Gemini 3.1 Pro is really good; Meta just signed a deal with Google for their TPUs.

Nano Banana 2 Pro is also very good.

OpenAI numbers might not add up, Anthropic might burn through cash, but not Google.

And it doesn't matter anyway because as long as google can afford it, Microsoft HAS TO do this too and Microsoft also can afford it. The same with Amazon.

Microsoft invests in OpenAI and Amazon invests in Anthropic.

cmiles8 · 2 days ago
Worth remembering that Amazon is now taking out loans to help pay for it all. That says a lot.
cermicelli · 2 days ago
Amazon now most likely has as much invested in OpenAI as Microsoft does.

Given Anthropic is also funded by them, either they are desperate to not lose or they really don't think Anthropic has a moat.

JKCalhoun · 2 days ago
"Not surprisingly companies are willing to get into bed with more and more questionable use cases…"

But not all companies as we have seen over the last week or so.

Regardless, all companies doing so will have to balance the ethics of their choices against the public perception of their company, as all of us are free to make choices that align with our own personal ethics.

(In short, they don't get to hide behind "everyone else is doing it".)

nxobject · 2 days ago
And, in a post-ZIRP era, guess where all of the easy money for growth is coming from? Yup, deficit-funded defense spending.
jasonfrost · 2 days ago
Questionable use cases like hyperscalers housing confidential data of military operations? Use case is the same, private companies supporting military operations, as they have for ages.
dotancohen · 2 days ago
The pentagon is a questionable use case?
pjc50 · 2 days ago
The most questionable of all! You just know it's going to be used for increasingly inappropriate "generate me a list of targets in Iran" stuff.
cmiles8 · 2 days ago
I’m OK with it, but the fact that this is news highlights that many others don’t like it
SecureVillage27 · 2 days ago
Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".

I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.

duskdozer · 2 days ago
It does those things poorly.
free652 · 2 days ago
>The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.
coffeefirst · 2 days ago
Oh this is dumb.

So the problem is filling out forms is too onerous, but rather than fix the process, create a device that fills the form with slop and then another device that approves or rejects the slop form.

I could have sworn I signed up for the other future, the one without quite this much stupid.

JKCalhoun · 2 days ago
Had the film "Brazil" been written today, AI no doubt would be a significant plot element.
_DeadFred_ · 2 days ago
As someone who moved from software companies to IT management, it's going to be interesting to watch this play out: a full embrace of the 'everything in Excel' pattern, where undefined business use cases/processes get moved into software ad hoc and without validation. Especially for companies that have outsourced IT and expect their business processes to be defined and tested in supported systems.

In house IT is going to be huge in a couple of years sorting out this mess. I would have never guessed the future would be all custom Excel spreadsheets, but instead of Excel just random code in random languages with random data stores.

max_ · 2 days ago
Hey ChatGPT, could you bomb all enemies of the USA.

No mistakes,

Thanks.

glemmaPaul · 2 days ago
be short and concise
simianwords · 2 days ago
Everyone’s scared that it would be used for war but how would they break the alignment on llm models? They don’t even allow me to generate black people on AI. How the hell will it work for war related tasks? Or would there be a separate model fine tuned for government that allows being used to kill people?
therealdrag0 · a day ago
You don’t say “find people to kill and kill them” you say, “given this list of locations, which ones could be harboring terrorists or hidden military bases?” Etc. Or even more abstract constructs based on domain aliases where AI assists in pattern matching and automation but isn’t really thinking in terms of moral domains.
CrzyLngPwd · 2 days ago
War is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives. A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small "inside" group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many. Out of war a few people make huge fortunes - Smedley D. Butler

...is as true now as ever.

mattmaroon · 2 days ago
Health care didn’t exist in his day. War’s the second most profitable now.
zthrowaway · 2 days ago
This should surprise no one. A CIA-backed VC was one of the first investors of Google. Big tech will always serve the powers that be. Employees that think their letters of appeal will do anything live in a fantasy land. That’s not how the real world works.
dotancohen · 2 days ago
What is wrong with a company serving the country in which it operates?
claudiulodro · a day ago
Engineering Ethics is a standard required class in any engineering discipline and a whole field of discussion. The ethics of working on military stuff (or even just government stuff) is nowhere near as cut and dry as your question seems to imply.

For example:

- What if the country asked you to develop technology to track and hack journalists or political rivals the administration doesn't like?

- What if the country asked you to develop chemical weapons? Is it different if the weapons would be used on their own population or only on external "enemies"?

- What if the country asked you to personally assassinate a civilian of another country? What if they asked you to create a program that would do that? What if they asked you to simply create a list of targets, and you knew they'd be assassinated?

- What if the country asked you to build something in an unsafe way that you're pretty certain will cause harm to people?

- What if the country asked you to make a public statement lying about the purpose behind what you're building?

Tistron · 2 days ago
Surely that depends heavily on the country.