Build tanks -> increase GDP
Send those tanks to burn with men inside -> increase GDP per capita
Not to claim this is a perfectly watertight definition, but what if we define it like this:
* Original = created from one's "latent" space. For a human it would be their past experiences as encoded in their neurons. For an AI it would be its training as encoded in model weights.
* Remixed = created from already existing physical artifacts, like sampling a song, copying a piece of an image and transforming it, etc.
With this definition both humans and AI can create both original and remixed works, depending on where the source material came from - latent or physical space.
What's the significance of "physical" song or image in your definition? Aren't your examples just 3rd party latent spaces, compressed as DCT coefficients in jpg/mp3, then re-projected through a lens of cochlear or retinal cells into another latent space of our brain, which makes it tickle? All artist human brains have been trained on the same media, after all.
When we zoom this far out in search of a comforting distinction, we encounter the opposite: all the latent spaces across all modalities that our training has produced, want to naturally merge into one.
Perhaps epidemic isn't the right word here because they must have been already unwell. At least these activities are relatively harmless.
I'm assuming here, but would you say that better critical thinking skills would have helped me avoid spending that Saturday with ChatGPT? It is often said that critical thinking is the antidote to religion, but I have a suspicion that there's a huge prerequisite which is general broad knowledge about the world.
Long ago, I fell victim to a scam when I visited SE Asia for the first time. A pleasant man on the street introduced himself as a school teacher, showed me around, then put me in a tuktuk which showed me around some more before dropping me off in front of a tailor shop. Some more work inside the shop, a complimentary bottle of water, and they had my $400 for a bespoke coat that I would never have bought otherwise. Definitely a teaching experience. This art is also how you'd prime an LLM to produce the output you want.
Surely, large numbers of other atheist nerds must fall for these types of scams every year, where a stereotypical Christian might spit on the guy and shoo him away.
I'm not saying that being religious would not increase one's chances of being susceptible, I just think that any idea will ring "true" in your head if you have zero counterfactual priors against it or if you're primed to not retrieve them from memory. That last part is the essence of what critical thinking actually is, in my opinion, and it doesn't work if you lack the knowledge. Knowing that you don't know something is also a decent alternative to having the counter-facts when you're familiar with an adjacent domain.
I believe it's actually the opposite!
Anybody armed with this tool and little prior training could learn the difference between a Samsung S11 and the symmetry, take a new configuration from the endless search space that it is, correct for the dozen edge cases like the electron-phonon coupling, and publish. Maybe even pass peer review if they cite the approved sources. No requirement to work out the Lagrangians either, it is also 100% testable once we reach Kardashev-II.
This says more about the sad state of modern theoretical physics than the symbolic gymnastics required to make another theory of everything sound coherent. I'm hoping that this new age of free knowledge chiropractors will change this field for the better.
I thought the AI safety risk stuff was very over-blown in the beginning. I'm kinda embarrassed to admit this: About five or six months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it...in...what was, in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...) I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kinda stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?
ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"
In retrospect, I fell for it because the onset of its sycophancy was immediate and came without any accompanying signal, like a patch note from OpenAI.
For text, with a two-byte tokenizer you get 2^16 (65,536) possible next tokens, and computing a probability distribution over them is very much doable. But the number of "possible next frames" in a video feed is astronomically larger. If one frame is 1 megabyte uncompressed (instead of just 2 bytes for a text token), there are 2^(8*2^20) possible next frames, which is far too large a number to take a softmax over. So instead we need to predict only an embedding of the next frame: an approximation of how it will look.
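The gap between the two cases can be sketched with some back-of-the-envelope arithmetic (the 1 MB frame size is the comment's own assumption, and the 512-dim embedding is an arbitrary illustrative choice):

```python
import math

# Text: a two-byte tokenizer gives 2**16 possible next tokens,
# so a softmax over the vocabulary is routine.
vocab_size = 2 ** 16

# Video: one uncompressed 1 MB frame is 8 * 2**20 bits, so there are
# 2**(8 * 2**20) possible next frames. That count is too large to even
# print, so we measure it by its number of decimal digits instead.
bits_per_frame = 8 * 2 ** 20
frame_digits = int(bits_per_frame * math.log10(2)) + 1

print(vocab_size)     # 65536
print(frame_digits)   # ~2.5 million decimal digits: enumeration is hopeless

# The usual escape: instead of a distribution over every possible frame,
# predict a fixed-size embedding of the next frame (say 512 floats)
# with a regression-style loss.
embedding_dim = 512
print(embedding_dim)
```

The point of the digit count is that the frame space isn't merely "big": the number of possible frames itself has millions of digits, so any classification-style head over raw frames is off the table from the start.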
Moreover, for robotics we don't want to just predict the next (approximate) frame of a video feed. We want to predict future sensory data more generally. That's arguably what animals do, including humans. We constantly anticipate what happens to us in "the future", approximately, with the farther future predicted progressively less precisely. We are relatively sure of what happens in a second, but less and less sure of what happens in a minute, or a day, or a year.
There's also evidence for this in what's called predictive coding: when that future arrives, a higher-level circuit measures how far off we were, and then releases the appropriate neuromodulators to re-wire the circuit.
That would mean that to learn faster, you want to expose yourself to situations where you are often wrong: be surprised often, go down wrong paths, and have a feedback mechanism that tells you when you're wrong. This is maybe also why the best teachers are the ones who often ask the class questions with counter-intuitive answers.
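The "surprise drives learning" intuition can be shown with a toy delta-rule learner, where each update is proportional to the prediction error. This is a hypothetical sketch of the idea, not a model of any actual neural circuit:

```python
# Toy delta-rule learner: the update is scaled by the prediction error,
# so surprising observations (large error) rewire the predictor far more
# than expected ones. Purely illustrative.

def train(observations, lr=0.1):
    prediction = 0.0
    surprises = []
    for obs in observations:
        error = obs - prediction       # "surprise": how wrong we were
        prediction += lr * error       # update proportional to the error
        surprises.append(abs(error))
    return prediction, surprises

# On a perfectly predictable stream, the errors shrink geometrically:
# the first sample teaches a lot, the fiftieth teaches almost nothing.
pred, errs = train([1.0] * 50)
print(round(pred, 3))                  # converges toward 1.0
print(errs[0] > 10 * errs[-1])         # early surprise dwarfs late surprise
```

Once the stream becomes predictable, the errors (and hence the updates) vanish, which is the code-level version of the advice above: no surprise, no learning.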