I'm a strong advocate for a united effort to create a training set of the collected works of humankind, free for any AI company to use if, for instance, it uses its profits to fund UBI or some other program to pay us for what they use.
Anything but AGI profits being used to keep score in the Oligarchy Olympics. I agree that is a bridge too far.
Everything about this piece marks it as opinion (most notably the language used), if that's what you're getting at.
Also, noted below the piece:
> This is an edited version of the Australian Society of Authors 2024 Colin Simpson Memorial Keynote lecture, titled ‘Creative Futures: Imagining a place for creativity in a world of artificial intelligence’
I think I was fair to call this out, and the article has been flagged by others.
I don't see how for-profit AI is a threat to anything but for-profit art.
However, I do know that printing opinion pieces as news is definitely a threat to journalism.
The leader in the field is BeMyEyes, of course. They've been working with Microsoft to integrate GPT-4o vision models into their app, with some great success. What we haven't seen yet is the move to live-video image recognition that could come from something like an OrCam or Meta glasses (they recently announced a partnership with Meta). I'm guessing there are serious safety issues with the model missing important information and leading someone vulnerable astray.
https://www.bemyeyes.com
https://www.bemyeyes.com/blog/be-my-eyes-meta-accessibility-...
OrCam has a new product (woe upon those of us with the paltry OrCam MyEye2) that the Meta glasses will be competing against, at an eye-watering >$4K price point, and it seems to do less.
https://www.orcam.com/en-us/orcam-myeye-3-pro
As with the hearing aid industry, which recently went over-the-counter and saw prices plummet, the vision aid product category is in temporary disarray as inexpensive new technologies make their way into a premium-priced market.
- the language network, which delivers formal linguistic competence
- the multiple demand network, which provides reasoning ability
- the default network, which tracks narratives above the clause level
- the theory of mind network, which infers the mental state of another entity
This leads to their argument that a modular structure would lead to enhanced ability for an LLM to be both formally and functionally competent. (While LLMs currently exhibit human-level formal linguistic competence, their functional competence--the ability to navigate the real world through language--has room for improvement.)
Transformer models, they note, have a degree of emergent modularity through "allowing different attention heads to attend to different input features."
I was wondering, is it possible to characterize the degree of emergent modularity in current systems?
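One rough way to get at that question: extract each head's attention pattern and measure how cleanly the heads cluster into groups. A toy sketch below, with synthetic attention patterns standing in for ones extracted from a real model (the two "specialized" groups of heads are fabricated for illustration):

```python
import numpy as np

# Toy sketch: gauge "emergent modularity" by comparing attention heads'
# patterns. Real use would extract these from a transformer; here we
# fabricate two specialized groups of heads for illustration.
rng = np.random.default_rng(0)
seq_len, n_heads = 8, 6

patterns = []
for h in range(n_heads):
    if h < 3:
        # Heads 0-2: attend mostly to the previous token.
        base = np.eye(seq_len, k=-1)
    else:
        # Heads 3-5: attend mostly to the first token (an "attention sink").
        base = np.zeros((seq_len, seq_len))
        base[:, 0] = 1.0
    base = base + 0.05 * rng.random((seq_len, seq_len))  # small noise
    patterns.append(base / base.sum(axis=1, keepdims=True))
patterns = np.stack(patterns)  # shape: (heads, seq, seq)

# Pairwise cosine similarity between flattened head patterns.
flat = patterns.reshape(n_heads, -1)
flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
sim = flat @ flat.T

# A large within-group vs. between-group gap suggests the heads have
# specialized into distinct "modules".
within = (sim[:3, :3].mean() + sim[3:, 3:].mean()) / 2
between = sim[:3, 3:].mean()
print(f"within-group similarity {within:.2f}, between-group {between:.2f}")
```

With real attention maps, one could run an actual clustering algorithm over `sim` and report something like a silhouette score as the "degree of modularity" — this is just the skeleton of the idea.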
"We also find more abstract features—responding to things like bugs in computer code, discussions of gender bias in professions, and conversations about keeping secrets."
1: https://www.anthropic.com/research/mapping-mind-language-mod...
One thing a high ACE score did for me was make me an irresistible force, even as it seemed to turn the rest of the world into immovable objects.
A human does not do this.
First of all, most questions we have been asked before. We have made mistakes in answering them before, and we remember these, so we don’t repeat them.
Secondly, we (at least some of us) think before we speak. We have an initial reaction to the question, and before expressing it, we relate that thought to other things we know. We may do “sanity checks” internally, often habitually without even realizing it.
Therefore, we should not expect an LLM to generate the correct answer immediately without giving it space for reflection.
In fact, if you observe your thinking, you might notice that your thought process often takes on different roles and personas. Rarely do you answer a question from just one persona. Instead, most of your answers are the result of internal discussion and compromise.
We also create additional context, such as imagining the consequences of saying the answer we have in mind. Thoughts like that are only possible once an initial “draft” answer is formed in your head.
So, to evaluate the intelligence of an LLM based on its first “gut reaction” to a prompt is probably misguided.
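The "draft, then reflect before answering" pattern described above can be sketched as a simple two-pass loop. The `complete()` function here is a stand-in stub (a real implementation would call an actual LLM API; the hardcoded answers are fabricated purely to show the control flow):

```python
# Hypothetical sketch of a draft-then-reflect loop. `complete()` is a
# stub standing in for a real LLM call; its canned answers are made up
# to illustrate the pattern, not real model output.
def complete(prompt: str) -> str:
    if "Check the draft" in prompt:
        return "377"   # pretend the reflection pass corrects the draft
    return "376"       # pretend the first "gut reaction" is off by one

def answer_with_reflection(question: str) -> str:
    # First pass: the model's immediate "gut reaction".
    draft = complete(question)
    # Second pass: give the model space to sanity-check its own draft.
    critique_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check the draft step by step and give a corrected final answer."
    )
    return complete(critique_prompt)

print(answer_with_reflection("What is the 14th Fibonacci number?"))
```

The point is structural: the second call sees the draft as context, which is exactly the "initial reaction related to other things we know" step that a single-shot prompt denies the model.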
Not saying this is ideal, just that it isn't the showstopper you present it as. In fact, when people talk about "human values", it might be worth reflecting on whether this is a thing we're supposed to be protecting or expunging.
"I'm not a textbook player, I'm a gut player.” —President George W. Bush.
https://www.heraldtribune.com/story/news/2003/01/12/going-to...
https://soundcloud.com/wort-fm/christine-wenc-on-the-legacy-...
https://soundcloud.com/hachetteaudio/funny-because-its-true-...
https://www.nytimes.com/2025/03/12/books/new-nonfiction-book...