Readit News
aprilfoo commented on I failed to recreate the 1996 Space Jam website with Claude   j0nah.com/i-failed-to-rec... · Posted by u/thecr0w
liampulles · 17 days ago
It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

I'd be curious to hear from experienced agent users whether there is some AGENTS.md guidance that makes the LLM speak more plainly about its results. I wonder whether that would impact the quality of the work.
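For what it's worth, a minimal sketch of what such guidance might look like; the file name follows the AGENTS.md convention, but the wording below is entirely hypothetical and untested:

```markdown
## Reporting results (hypothetical excerpt)

- Describe what you actually verified, and how, before claiming anything works.
- Do not call a result "done", "perfect", or "faithful" without a concrete check.
- When the output differs from the reference, list the differences plainly
  instead of summarizing them away.
```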

aprilfoo · 17 days ago
> It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

It seems this applies to the whole AI industry, not just LLMs.

aprilfoo commented on Why we migrated from Python to Node.js   blog.yakkomajuri.com/blog... · Posted by u/yakkomajuri
aprilfoo · 2 months ago
Posts inviting language/framework flame wars are clickbait... and I fell for it, again.
aprilfoo commented on Rouille – Rust Programming, in French   github.com/bnjbvr/rouille... · Posted by u/mihau
aprilfoo · 2 months ago
Fantastique !

Let's create "Piaf" to see la vie en rose, the French à la https://codewithrockstar.com/.

aprilfoo commented on Keep Android Open   keepandroidopen.org/... · Posted by u/LorenDB
cryptoneo · 2 months ago
The Play Store ID process is ridiculous; their AI makes up BS reasons why it won't let your documents pass. Clearly no human in the loop.

In the EU we can report this to: comp-market-information@ec.europa.eu

State that: Google is abusing its dominant position on the market for Android-app distribution by "denial of access to an essential facility". Google is not complying with its "gatekeeper" obligations under the DMA (Article 5(4), Article 6(12), Article 11, Article 15).

Attach evidence.

Financial penalties are the only way to pressure this company to abide by the law.

aprilfoo · 2 months ago
The EU's DMA team replied to a previous inquiry:

> [...] the Digital Markets Act (‘DMA’) obliges gatekeepers like Google to effectively allow the distribution of apps on their operating system through third party app stores or the web. At the same time, the DMA also permits Google to introduce strictly necessary and proportionate measures to ensure that third-party software apps or app stores do not endanger the integrity of the hardware or operating system or to enable end users to effectively protect security. [...]

They seem to be on it, but no surprise: it's all about Google's claims about "security" and the "ongoing dialogue with gatekeepers".

Freedom to use your own hardware or software? No.

aprilfoo commented on A definition of AGI   arxiv.org/abs/2510.18212... · Posted by u/pegasus
aprilfoo · 2 months ago
Filling in forms is an essentially artificial activity. Forms are also very culturally biased, but that fits well with the material the NNs have been trained on.

So those IQ-style tests might well be acceptable rating tools for machines, and machines might score higher than anyone at some point.

Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or amplify them?

aprilfoo commented on Rule-Based Expert Systems: The Mycin Experiments (1984)   shortliffe.net/Buchanan-S... · Posted by u/mindcrime
ACCount37 · 3 months ago
AI is a broad field because intelligence is a broad field.

And what's remarkable about LLMs is exactly that: they don't reason like machines. They don't use the kind of hard machine logic you see in an if-else chain. They reason using the same type of associative abstract thinking as humans do.

aprilfoo · 3 months ago
Surely "intelligence" is a broad field... i might not be so that great at it, but i hope that's ok.

"[LLMs] reason using the same type of associative abstract thinking as humans do": do you have a reference for this bold statement?

I entered "associative abstract thinking llm" in a good old search engine. The results point to papers rather hinting that they're not so good at it (yet?), for example: https://articles.emp0.com/abstract-reasoning-in-llms/.

aprilfoo commented on Rule-Based Expert Systems: The Mycin Experiments (1984)   shortliffe.net/Buchanan-S... · Posted by u/mindcrime
andrehacker · 3 months ago
Ah, the early days of AI.

If a book or movie is ever made about the history of AI, the script would include this period of AI history and would probably go something like this…

(Some dramatic license here, sure. But not much more than your average "based on true events" script.)

In 1957, Frank Rosenblatt built a physical neural network machine called the Perceptron. It used variable resistors and reconfigurable wiring to simulate brain-like learning. Each resistor had a motor to adjust weights, allowing the system to "learn" from input data. Hook it up to a fridge-sized video camera (20x20 resolution), train it overnight, and it could recognize objects. Pretty wild for the time.

Rosenblatt was a showman—loud, charismatic, and convinced intelligent machines were just around the corner.

Marvin Minsky, a jealous academic peer of Frank, was in favor of a different approach to AI: Expert Systems. He published a book (Perceptrons, 1969) which all but killed research into neural nets. Marvin pointed out that no neural net with a depth of one layer could solve the "XOR" problem.

While the book's findings and mathematical proof were correct, they were based on incorrect assumptions (that the Perceptron only used one layer and that algorithms like backpropagation did not exist).
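Not part of the anecdote, but for anyone curious about the XOR point, here is a minimal NumPy sketch (purely illustrative; every name and hyperparameter here is made up) showing that a single-layer perceptron never fits XOR, while a tiny two-layer network trained with backpropagation does:

```python
# Illustrative sketch: XOR is not linearly separable, so the perceptron
# learning rule never converges on it; a small two-layer network trained
# with backpropagation learns it easily.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# --- Single-layer perceptron (Rosenblatt's learning rule) ---
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (target - pred) * xi
        b += target - pred
print("perceptron:", (X @ w + b > 0).astype(int))  # never matches [0 1 1 0]

# --- Two-layer network trained with backpropagation ---
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    out = sigmoid(h @ W2 + b2).ravel()          # network output
    d_out = (out - y) * out * (1 - out)         # gradient at the output
    d_h = d_out[:, None] @ W2.T * h * (1 - h)   # backpropagated to the hidden layer
    W2 -= 0.5 * h.T @ d_out[:, None]
    b2 -= 0.5 * d_out.sum()
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# Typically prints [0 1 1 0]
print("two-layer net:", np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()).astype(int))
```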

As a result, a lot of academic AI funding was directed towards Expert Systems. The flagship of this was the MYCIN project. Essentially, it was a system to find the correct antibiotic based on the exact bacteria a patient was infected with. The system thus had knowledge about thousands and thousands of different diseases with their associated symptoms. At the time, many different antibiotics existed, and using the wrong one for a given disease could be fatal to the patient.

When the system was finally ready for use... after six years (!), the pharmaceutical industry had developed “broad-spectrum antibiotics,” which did not require any of the detailed analysis MYCIN was developed for.

The period of suppressing Neural Net research is now referred to as (one of) the winter(s) of AI.

--------

As said, that is the fictional treatment. In reality, the facts, motivations, and behavior of the characters are a lot more nuanced.

aprilfoo · 3 months ago
"AI" is too much of a broad umbrella term of competing ideas, from symbolic logic (FOL, expert systems) to statistical operations (NNs). It's clear today that the latter has won the race, but ignoring this history doesn't seem to be a very smart move.

I'm in no way an expert, but I feel that today's LLMs lack some concepts well known in the research on logical reasoning. Something like: semantics.

aprilfoo commented on Personal data storage is an idea whose time has come   blog.muni.town/personal-d... · Posted by u/erlend_sh
InMice · 3 months ago
Among the first and second pages (the top 60) there is always at least one post about how we're gonna "take back the web" or turn it back into some form of our 90s millennial nostalgia memories: self-hosting, federated this or that, etc. etc.

Meanwhile, nothing changes, everything generally gets worse, and younger generations come into the world with no memories of the 90s internet or the world before mobile devices and surveillance everywhere.

Applying for a job or an apartment or anything today means creating endless pointless copies of your personal information in databases across the world that will eventually be neglected, hacked, exploited, sold off, etc.

I don't know the way out, if there is one; I guess we can keep fantasizing and thinking about it. It just feels like it would be easier to get the earth to start spinning the other way sometimes.

aprilfoo · 3 months ago
I think it's about showing that different models are possible for people who do care and are willing to reflect and change the way they operate.

The vast majority goes with the comfort of the mainstream, almost by definition.

aprilfoo commented on Germany must stand firmly against client-side scanning in Chat Control [pdf]   signal.org/blog/pdfs/germ... · Posted by u/greyface-
aprilfoo · 3 months ago
Signal is doing a great job fighting for its survival as a public platform.

It's obvious that "chat-control" cannot be effective in its official purpose: there are already and will be many ways to evade surveillance like CSS for those who really want to.

But it might achieve a devastating side-product, the dream of any authoritarian regime: the criminalization of privacy, which would lead to the end of freedom as we know it. "1984" was supposed to be a warning, not an instruction manual.

aprilfoo commented on Peter Hummelgaard: I believe that more surveillance equates to more freedom   mastodon.social/@chatcont... · Posted by u/nickslaughter02
aprilfoo · 3 months ago
That could be true in a perfect world, where people and systems executing the surveillance are perfectly capable, respectful, honest and accountable, forever.

But in a perfect world surveillance is not necessary anyway: that kind of statement is just fallacious rhetoric.

u/aprilfoo

Karma: 30 · Cake day: October 31, 2024