Let's create "Piaf" to see la vie en rose: a French counterpart à la https://codewithrockstar.com/.
In the EU we can report this to: comp-market-information@ec.europa.eu
State that Google is abusing its dominant position in the market for Android app distribution through "denial of access to an essential facility", and that Google is not complying with its "gatekeeper" obligations under the DMA (Article 5(4), Article 6(12), Article 11, Article 15).
Attach evidence.
Financial penalties are the only way to pressure this company to abide by the law.
> [...] the Digital Markets Act (‘DMA’) obliges gatekeepers like Google to effectively allow the distribution of apps on their operating system through third party app stores or the web. At the same time, the DMA also permits Google to introduce strictly necessary and proportionate measures to ensure that third-party software apps or app stores do not endanger the integrity of the hardware or operating system or to enable end users to effectively protect security. [...]
They seem to be on it, but no surprise: it's all about Google's claims of "security" and an "ongoing dialogue" with gatekeepers.
Freedom to use one's own hardware or software? No.
So perhaps those IQ-style tests could be acceptable rating tools for machines, and at some point machines might score higher than anyone.
Anyway, is the objective of this kind of research to actually measure progress, or just to amplify the buzzwords?
And what's remarkable about LLMs is exactly that: they don't reason like machines. They don't use the kind of hard machine logic you see in an if-else chain. They reason using the same type of associative abstract thinking as humans do.
"[LLMs] reason using the same type of associative abstract thinking as humans do": do you have a reference for this bold statement?
I entered "associative abstract thinking llm" in a good old search engine. The results point to papers rather hinting that they're not so good at it (yet?), for example: https://articles.emp0.com/abstract-reasoning-in-llms/.
If a book or movie is ever made about the history of AI, the script would include this period and would probably go something like this…
(Some dramatic license here, sure. But not much more than your average "based on true events" script.)
In 1957, Frank Rosenblatt built a physical neural network machine called the Perceptron. It used variable resistors and reconfigurable wiring to simulate brain-like learning. Each resistor had a motor to adjust weights, allowing the system to "learn" from input data. Hook it up to a fridge-sized video camera (20x20 resolution), train it overnight, and it could recognize objects. Pretty wild for the time.
Rosenblatt was a showman—loud, charismatic, and convinced intelligent machines were just around the corner.
Marvin Minsky, a jealous academic peer of Frank's, favored a different approach to AI: Expert Systems. He published a book (Perceptrons, 1969) that all but killed research into neural nets. Marvin pointed out that no single-layer neural net could solve the "XOR" problem.
While the book's findings and mathematical proof were correct, they were based on incorrect assumptions (that the Perceptron only used one layer and that algorithms like backpropagation did not exist).
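To make the XOR point concrete, here is a minimal sketch in plain Python, with illustrative data and learning rate (not the original hardware's exact algorithm): Rosenblatt's error-correction rule nudges the weights after every wrong prediction, much like the motorized resistors, and it easily learns a linearly separable function such as OR, but no single-layer perceptron of this form can ever fit XOR.

```python
# Minimal sketch, plain Python, illustrative data and learning rate:
# a single-layer perceptron step(w1*x1 + w2*x2 + b) trained with the
# classic error-correction rule (each update is the software analogue
# of a motor nudging one resistor).
def train(samples, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(samples, w1, w2, b):
    hits = sum((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("OR accuracy: ", accuracy(OR, *train(OR)))    # 1.0: linearly separable
print("XOR accuracy:", accuracy(XOR, *train(XOR)))  # never reaches 1.0
```

Add a hidden layer (trained with backpropagation, which came later) and the same four XOR points become learnable.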
As a result, a lot of academic AI funding was directed towards Expert Systems. The flagship of this was the MYCIN project. Essentially, it was a system to find the correct antibiotic based on the exact bacteria a patient was infected with. The system thus had knowledge about thousands and thousands of different diseases with their associated symptoms. At the time, many different antibiotics existed, and using the wrong one for a given disease could be fatal to the patient.
When the system was finally ready for use... after six years (!), the pharmaceutical industry had developed “broad-spectrum antibiotics,” which did not require any of the detailed analysis MYCIN was developed for.
The period of suppressing Neural Net research is now referred to as (one of) the winter(s) of AI.
--------
As said, that is the fictional treatment. In reality, the facts, motivations, and behavior of the characters are a lot more nuanced.
I'm in no way an expert, but I feel that today's LLMs lack some concepts well known in research on logical reasoning. Something like: semantics.
Meanwhile, nothing changes; everything generally gets worse, and younger generations come into the world with no memory of the 90s internet, or of the world before mobile devices and surveillance everywhere.
Applying for a job or an apartment or anything today means creating endless, pointless copies of your personal information in databases across the world that will eventually be neglected, hacked, exploited, sold off, etc.
I don't know the way out, if there is one; I guess we can keep fantasizing and thinking about it. It just feels like it would sometimes be easier to get the Earth to start spinning the other way.
The big majority goes with the comfort of the mainstream, almost by definition.
It's obvious that "chat control" cannot be effective for its official purpose: there already are, and always will be, many ways to evade surveillance like CSS (client-side scanning) for those who really want to.
But it might deliver a devastating by-product, the dream of any authoritarian regime: the criminalization of privacy, which would lead to the end of freedom as we know it. "1984" was supposed to be a warning, not an instruction manual.
And "in a perfect world surveillance would not be necessary anyway": that kind of statement is just fallacious rhetoric.
I'd be curious to hear from experienced agent users whether there is some AGENTS.md trick to make the LLM speak more clearly. I wonder whether that would impact the quality of its work.
It seems this applies to the whole AI industry, not just LLMs.