alexdoesstuff commented on Deloitte to refund the Australian government after using AI in $440k report   theguardian.com/australia... · Posted by u/fforflo
alexdoesstuff · 2 months ago
This is primarily a story of a failure to supervise the creation of the report, rather than anything related to AI.

The role of the outsourced consultancy in such a project is to make sure the findings withstand public scrutiny. They clearly failed on this. It's quite shocking that the only consequence is a partial refund rather than a review of any current and future engagements with the consultancy due to poor performance.

There shouldn't be a meaningful difference whether the error in the report is minor or consequential for the findings, or whether it was introduced by poorly used AI or an over-caffeinated consultant in a late-night session.

alexdoesstuff commented on I only use Google Sheets   mayberay.bearblog.dev/why... · Posted by u/mugamuga
corry · 3 months ago
An always-overlooked point in these pro/anti-spreadsheet discussions:

A spreadsheet gives you a DB, a quickly and easily customized UI, and iterative / easy-to-debug data processing all in a package that everyone in the working world already understands. AND with a freedom that allows the creator to do it however they want. AND it's fairly portable.

You can build incredible things in spreadsheets. I remain convinced that it's the most creative and powerful piece of software we have available, especially so for people who can't code.

With that power and freedom comes downsides, sure; and we can debate the merits of it being online, or whether this or that vendor is preferable; but my deep appreciation for spreadsheets remains undiminished by these mere trifles.

It's the best authoring tool we've ever devised.

EDIT TO ADD: the only other thing that seems to 'rhyme' with spreadsheets in the same way is HyperCard: a flexible workbench that let you stitch together applications, data, UX, etc. RIP HyperCard, may you never be forgotten.

alexdoesstuff · 3 months ago
To expand on the overlooked point: it gives you a DB and a programming environment (however challenged) that you can use without needing sign-off from IT. In any moderately sizeable organization, getting approval to use anything but standard software is slow and painful.

Nobody wants to explain to IT that they need to install Python on their machine, or drivers for SQLite, or - god forbid - get a proper database, because that requires sign-off from several people, a proper justification, and so on.
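To make the contrast concrete, here's a rough sketch (Python with the stdlib sqlite3 module; the table, columns, and sample data are invented for illustration) of what a single spreadsheet lookup formula costs you once it moves into code:

```python
import sqlite3

# In a spreadsheet this is one cell: =VLOOKUP("ACME", A:B, 2, FALSE)
# In code it becomes a database, a schema, inserts, and a query.
def region_for(name):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customers (name TEXT, region TEXT)")
    con.executemany(
        "INSERT INTO customers VALUES (?, ?)",
        [("ACME", "EMEA"), ("Globex", "APAC")],
    )
    # exact-match lookup, the equivalent of VLOOKUP's FALSE flag
    row = con.execute(
        "SELECT region FROM customers WHERE name = ?", (name,)
    ).fetchone()
    con.close()
    return row[0] if row else None

print(region_for("ACME"))  # EMEA
```

None of this is hard, but every line of it is something the spreadsheet user never has to justify to anyone.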

alexdoesstuff commented on OpenAI and Microsoft tensions are reaching a boiling point   wsj.com/tech/ai/openai-an... · Posted by u/jmsflknr
htrp · 6 months ago
>The startup, growing frustrated with its partner, has discussed making antitrust complaints to regulators

Ratting MSFT out to the government doesn't seem like the move of someone with a strong hand.

alexdoesstuff · 6 months ago
The $13bn investment in 2023 was so clearly structured to skirt antitrust concerns that it's unsurprising this avenue is being discussed.

Since then, MSFT has made other regulatory-aggressive investments, and the recent Meta / Scale AI deal is similarly aggressively designed.

alexdoesstuff commented on LLM providers on the cusp of an 'extinction' phase as capex realities bite   theregister.com/2025/03/3... · Posted by u/abawany
siscia · 9 months ago
I find it funny that, for something that is pretty much a commodity at this point, adoption seems to be the most important metric.

Yes, there are differences between the models, and yes some may work better.

But picking the model at this point is just picking the cheapest option. For most use cases any model will do.

alexdoesstuff · 9 months ago
Fully agree!

For those of us close to the edge of AI usage, it's important to realize that most AI use cases are not "fully autonomous AI software engineer" or "deep research into a niche topic" but far more innocuous: improve my blog post, what's the capital of France, what are some nice tourist sites to see around my next vacation destination.

For those non-edge use cases, cost is an issue, but so are inertia and switching costs. A big reason OpenAI and ChatGPT are so huge is that ChatGPT is still the go-to model for all of these non-edge use cases: it's well known, widely adopted, and quite frankly very efficiently priced.

alexdoesstuff commented on LLM providers on the cusp of an 'extinction' phase as capex realities bite   theregister.com/2025/03/3... · Posted by u/abawany
throwup238 · 9 months ago
> As the global tech research company forecasts worldwide generative AI (GenAI) spending will reach $644 billion in 2025, up around 76 percent from 2024

I’m having a hard time squaring the number $644 billion and the phrase “extinction phase.”

I don’t believe their actual estimate of GenAI spending but if it’s even in the same ballpark as the real value, that’s not an extinction.

alexdoesstuff · 9 months ago
Reading through the source [1], they basically get to that huuuuge number by including AI-enabled devices, such as phones that have some AI functionality even if it's not core to their value proposition. That's basically reclassifying a big chunk of smartphones, TVs, and other consumer tech as GenAI spending.

Of the "real" categories, they expect:

Services: $27bn (+162% y/y)
Software: $37bn (+93% y/y)
Servers: $180bn (+33% y/y)

for a total of $245bn (+58% y/y).

Those aren't shabby numbers, but they're far more reasonable. Hyperscaler total capex [2] is expected to be around $330bn in 2025 (up 32% y/y), so that will most likely include a good chunk of the server spend.

[1] https://www.gartner.com/en/newsroom/press-releases/2025-03-3...

[2] https://www.marvin-labs.com/blog/deepseek-impact-of-high-qua...

alexdoesstuff commented on The AI Industry's Business Model Is Cracking–DeepSeek Just Proved It   marvin-labs.com/blog/deep... · Posted by u/alexdoesstuff
jqpabc123 · 10 months ago
Nothing has really been proven just yet.

I say this because all available LLMs are currently running largely on "funny money" --- subsidized by either venture capitalism or government.

We won't know the real costs until they are forced to survive in the marketplace on their own merits. And based on nothing but their energy and hardware needs, they won't be exactly "cheap" and will follow a "computing as a service" model subject to bait and switch tactics.

Basically, LLMs turn traditional computing upside down. Instead of reliable results at low cost, LLMs offer unreliable results at high cost.

And because of this, I expect the real world use cases to be much smaller than many seem to expect. The two prominent early examples are search engines (where accuracy is not essential) and research involving trial and error (where accuracy will be verified).

alexdoesstuff · 10 months ago
Author here

I mostly agree on the first point. Even prior to the price race to the bottom, no AI Lab managed to make any money above marginal cost on inference, let alone recoup its investment in infrastructure or model training. Clearly, investment in infrastructure and model training has been largely subsidized by VCs. It's a bit unclear how subsidized inference costs have been; the fact that AWS runs hosted inference at roughly similar cost to the AI Labs suggests to me that there's at least not a massive subsidy going on at the moment.

I don't subscribe to the narrative that nation states (i.e. China) massively support DeepSeek. So while their core business as a hedge fund is clearly profitable, they have considerably shallower pockets and less willingness to front losses than the investors in VC-backed AI Labs. Consequently, I expect their inference pricing to at least cover their marginal costs (i.e. energy) and maybe some infrastructure investment.

All that suggests they've managed to lower the cost (and with it, presumably, the resource and energy requirements) of inference considerably, which to me is a clear game changer.

alexdoesstuff commented on Reader-LM: Small Language Models for Cleaning and Converting HTML to Markdown   jina.ai/news/reader-lm-sm... · Posted by u/matteogauthier
monacobolid · a year ago
How could it possibly be (a better solution) when there are X different ways to do any single thing in HTML(/CSS/JS)? If you have a website that uses a canvas to showcase the content (think a presentation or something like that), where would you even start? People are still discussing whether the semantic web is important; not every page is UTF-8 encoded, etc. IMHO small LLMs (trained specifically for this), combined with some other (more predictable) techniques, are the best solution we are going to get.
alexdoesstuff · a year ago
Fully agree on the premise: there are X different ways to do anything on the web. But - prior to this - the solution seemed to be that everyone starts from scratch with some ad-hoc regex and plays a game of whack-a-mole to cover the first n of the X different ways to do things.

To the best of my knowledge, there isn't anything more modern than Mozilla's Readability, and that's essentially a tool from the early 2010s.

alexdoesstuff commented on Reader-LM: Small Language Models for Cleaning and Converting HTML to Markdown   jina.ai/news/reader-lm-sm... · Posted by u/matteogauthier
alexdoesstuff · a year ago
It feels surprising that there isn't a modern, best-in-class non-LLM alternative for this task. Even in the post, they describe using a hodgepodge of headless Chrome, Readability, and lots of regex to create content-only HTML.

Best I can tell, everyone is doing something similar, differing only in the amount of custom, situation-specific regex used.
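As an illustration of that ad-hoc approach, here's a deliberately minimal sketch (Python, stdlib only; the tag blocklist and sample HTML are invented, and real pipelines with headless Chrome + Readability go far beyond this) of the kind of extractor everyone ends up rebuilding:

```python
import re
from html.parser import HTMLParser

# Toy content extractor: drop text inside obvious boilerplate tags,
# keep everything else, then do the inevitable regex cleanup pass.
class ContentExtractor(HTMLParser):
    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside boilerplate tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    parser = ContentExtractor()
    parser.feed(html)
    # the regex whack-a-mole stage: here just whitespace collapsing
    return re.sub(r"\s+", " ", " ".join(parser.chunks))

page = ("<html><nav>Menu</nav><article><h1>Title</h1>"
        "<p>Body text.</p></article><footer>(c) 2024</footer></html>")
print(extract_text(page))  # Title Body text.
```

It works until the next site does things the n+1-th way, which is exactly the whack-a-mole dynamic described above.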
