Individuals? Most information technology makes us dumber in isolation, but with the tools we end up net faster.
The scary thing is that it's less about making things "better" than about making them cheaper. AI isn't winning on skill; it's winning on being "80% of the quality at 20% of the price."
So if you see "us" as the economic super-organism managed by very powerful people, then it makes us a lot smarter!
1. language features we have and they don't understand
2. language features we both have
3. language features they have and we don't understand
Probably in that order.
Then it's just a question of gathering a few different species that seem intelligent, such as corvids, octopuses, and whales, and seeing whether each species can be reasoned with. If so, you can set up schools where you train them on human things and vice versa. Eventually you can form interspecies groups and really test the hell out of things.
Doing it that way will really challenge human exceptionalism, as well as the exceptionalism of that particular species.
I know it sounds a bit far off, but I figured that we might be able to get there with AI. I mean, we're getting better and better at giving machines tons and tons of data, and it somehow makes some sense of it.
So far, I think it's not necessarily the human species that is exceptional. It's the revolutionary periods it went through in order to become more exceptional hunters, so we could dominate and control the world in the way we want to. Things such as: discovery of fire, agriculture (+ creating defensive settlements) and antibiotics. We couldn't kill bacteria for a long time. We still have trouble with viruses and are getting into trouble with bacteria again. Could dolphins or whales have done it too, if they were land creatures?
Why do people think that is? Have there been any attempts to change this from the inside over the past decade? Where are professional associations like the ACM in all of this? It's a shameful state of affairs and reflects poorly on the whole discipline.
People who design bridges and vehicles have real responsibilities and standards they are held to, yet somehow the software that actually runs these things is exempt.
This is how Boeing negligently murdered hundreds of people with MCAS. By taking responsibility for safety away from actual engineers and misplacing it with people who write software.
It's a lot of work to grass-roots something like that, and I don't have the charisma for it.
Calculators are uncontroversial now. But when they first became cheap and widely available, they were not allowed in math classes. Then only four-function calculators were allowed, then graphing calculators. Even today, programmable calculators are prohibited in many academic contexts.
"When will I use this in real life" is a declaration that you have no expectations of learning the next lesson that builds upon this one.
This extension might make the internet more accessible for you!
LLMs are autoencoders for sequences. If an LLM can write the code, that code's entropy is low. We already knew that: most human communication is low entropy. But the LLMs being good at it implies there is a more efficient structure we could be using. All the embeddings are artifacts of structure, but the ANN model as a whole obfuscates the structures it encodes.
Clearly there are better programming languages, closer fit to our actual intents, than the existing ones. The LLM will never show them to us, we need to go make/find them ourselves.
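A crude way to see the low-entropy claim (a toy sketch, not what any LLM actually computes — real models capture long-range structure, not just character frequencies): compare the Shannon entropy of the character distribution in repetitive, structured code against noise drawn from a similar alphabet.

```python
import math
import random
import string
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy in bits per character of the empirical
    character distribution of `text`."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Highly structured, repetitive "code"...
code = "for i in range(10):\n    print(i)\n" * 20

# ...versus noise drawn uniformly from a 41-symbol alphabet.
random.seed(0)
alphabet = string.ascii_lowercase + string.digits + " \n():"
noise = "".join(random.choice(alphabet) for _ in range(len(code)))

print(f"code:  {char_entropy(code):.2f} bits/char")
print(f"noise: {char_entropy(noise):.2f} bits/char")
```

The code scores noticeably fewer bits per character, and a model that captures syntax and idiom compresses it far further still; the point in the comment above is that the structure enabling that compression is real but hidden inside the weights.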
wasmi, discussed in the post, has a feature called "fuel" that I want in every WASM interpreter. It lets me defend against resource-exhaustion attacks from an untrusted WASM binary.
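The fuel idea generalizes beyond wasmi (the toy interpreter below is a sketch of the mechanism, not wasmi's actual Rust API): charge every instruction against a budget and trap when it runs out, so a hostile guest can't spin forever inside the host.

```python
class OutOfFuel(Exception):
    """Raised when the guest exhausts its instruction budget."""

def run(program, fuel):
    """Toy stack machine where every opcode costs 1 unit of fuel.
    Opcodes: ("push", n), ("add",), ("jump", target) -- just enough
    to show how fuel bounds a non-terminating guest."""
    stack, pc = [], 0
    while pc < len(program):
        if fuel == 0:
            raise OutOfFuel(f"trap at pc={pc}")
        fuel -= 1
        op = program[pc]
        if op[0] == "push":
            stack.append(op[1]); pc += 1
        elif op[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b); pc += 1
        elif op[0] == "jump":
            pc = op[1]
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack

# A well-behaved guest finishes within its budget...
print(run([("push", 2), ("push", 3), ("add",)], fuel=10))  # [5]

# ...while an infinite loop gets cut off instead of hanging the host.
try:
    run([("jump", 0)], fuel=100)
except OutOfFuel as e:
    print("guest trapped:", e)
```

The host decides the budget per call, so a metered invocation can fail cleanly and the embedder stays in control, which is exactly what makes fuel useful against resource-based attacks.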