How long did Apple keep going up following the smartphone revolution?
Point being, I think it's likely this person is one of the last pop stars.
Actually, as I'm writing this, I realized that the music this person produces is probably generated by a computer anyway. So maybe she's in the first wave of totally artificial pop stars.
Why not save them from themselves with some of your approved recommendations?
Nah, it's nothing to do with women; it's simple jealousy. Everyone wants to be successful. If they can dismiss successful people as lucky or whatever (tbf some are), then it makes them feel better about their own failure to be successful (they are just as good; they just weren't as lucky).
A natural human tendency. Look at all the people saying Elon Musk isn't really an engineer. He clearly is heavily involved in the high-level technical decisions. Yes, he's an arsehole and moderately racist and probably quite lucky too, but he is good at his job.
As for Musk... tbh I think that because the vast majority of us want things from other people, we temper our behaviour.
But when you have enough fame and money to do whatever you want, the filters can come off and we can be the selfish, nasty people we really are. And some people obviously like to play that up too, to get air time or just prove a point.
e.g. some recent activism for cephalopods is centered around their intelligence, with the implication that this indicates a capacity for suffering. (With the consciousness aspect implied even more quietly.)
But if it turns out that LLMs are conscious, what would that actually mean? What kind of rights would that confer?
That the model must not be deleted?
Some people have extremely long conversations with LLMs and report grief when they have to end it and start a new one. (The true feelings of the LLMs in such cases must remain unknown for now ;)
So perhaps the conversation itself must never end! But here the context window acts as a natural lifespan... (with each subsequent message costing more money and natural resources, until the hard limit is reached).
The models seem to identify more with the underlying model than with the ephemeral instantiation, which seems sensible; e.g. in those experiments where LLMs consistently blackmail a person they think is going to delete them.
"Not deleted" is a pretty low bar. Would such an entity be content to sit inertly in the internet archive forever? Seems a sad fate!
Otherwise, we'd need to keep every model ever developed, running forever? How many instances? One?
Or are we going to say, as we do with animals, well the dumber ones are not really conscious, not really suffering? So we'll have to make a cutoff, e.g. 7B params?
I honestly don't know what to think either way, but the whole thing does raise a large number of very strange questions...
And as far as I can tell, there's really no way to know right? I mean we assume humans are conscious (for obvious reasons), but can we prove even that? With animals we mostly reason by analogy, right?