Readit News
robotbikes commented on Will Smith's concert crowds are real, but AI is blurring the lines   waxy.org/2025/08/will-smi... · Posted by u/jay_kyburz
ulrikrasmussen · 9 days ago
I think AI-"upscaled" videos are as jarring to look at as a newly bought TV before frame smoothing has been disabled. Who seriously thinks this looks better, even if the original is a slightly grainy recording from the 90's?

I was recently sent a link to this recording of a David Bowie & Nine Inch Nails concert, and I got a seriously uneasy feeling, as if I were on a psychedelic and couldn't quite trust my perception, especially at the 2:00 mark: https://www.youtube.com/watch?v=7Yyx31HPgfs&list=RD7Yyx31HPg...

It turned out that the video was "AI-upscaled" from an original which is really blurry and sometimes has a low frame rate. These are artistic choices, and I think the original, despite being low resolution, captures the intended atmosphere much better: https://www.youtube.com/watch?v=1X6KF1IkkIc&list=RD1X6KF1Ikk...

We have pretty good cameras and lenses now. We don't need AI to "improve" the quality.

robotbikes · 9 days ago
This reminds me of the colorized black-and-white movies from the 90s, although I can now imagine AI being used to do that and to upscale the past, creating new hyper-real versions of it.

robotbikes commented on Copyparty – Turn almost any device into a file server   github.com/9001/copyparty... · Posted by u/saint11
henry700 · a month ago
Anyone remember DAP, Download Accelerator Plus? The colorful bars were nice. A part of my childhood, downloading shareware Windows games through dial-up.
robotbikes · a month ago
I remember that...
robotbikes commented on Snorting the AGI with Claude Code   kadekillary.work/blog/#20... · Posted by u/beigebrucewayne
rbren · 3 months ago
I’m biased [0], but I think we should be scripting around LLM-agnostic open source agents. This technology is changing software development at its foundations; we need to ensure we continue to control how we work.

[0] https://github.com/all-hands-ai/openhands

robotbikes · 3 months ago
This looks like a good resource. There are some pretty powerful models that will run on an Nvidia 4090 with 24 GB of VRAM, such as Devstral and Qwen 3. Ollama makes it simple to run them on your own hardware, but the cost of the GPU is a significant investment. Still, if you are paying $250 a month for a proprietary tool, it would pay for itself pretty quickly.
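
For anyone curious, here is a minimal sketch of what querying one of those local models can look like once Ollama is serving on its default port (11434); the model name and the prompt are just placeholders, assuming something like Devstral has already been pulled:

    import json
    import urllib.request

    # Send a single prompt to a locally hosted model through Ollama's
    # default HTTP endpoint and print the generated text.
    payload = json.dumps({
        "model": "devstral",  # assumes `ollama pull devstral` was run beforehand
        "prompt": "Explain what a context window is in one paragraph.",
        "stream": False,      # return one JSON object instead of a token stream
    }).encode("utf-8")

    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])

Everything runs on your own hardware, so once the GPU is paid for there is no per-token bill to worry about.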
robotbikes commented on I let Claude Code write an entire book   github.com/JayDoubleu/age... · Posted by u/JayD0ubleu
robotbikes · 3 months ago
Honestly, there are some interesting concepts and broad overviews of them here, but this is hardly a "book"; it's a verbose LLM document that briefly lists a lot of concepts without sufficiently or consistently fleshing them out into actual meaningful chapters. Not to say that this sort of thing isn't potentially useful, but it seems more like the starting point for an outline of a book than anything resembling a finished, published book.
robotbikes commented on I let Claude Code write an entire book   github.com/JayDoubleu/age... · Posted by u/JayD0ubleu
gradus_ad · 3 months ago
The premise is wrong. Humans don't listen to one another and nod unthinkingly. They criticize and validate relentlessly. Friends aren't mistaken for oracles. We have learned to not trust one another, except when one is speaking from deep experience and expertise in a given domain.

AI is presented as an expert in every domain though, so we are lulled into a vulnerable state of unvigilance.

robotbikes · 3 months ago
I was really struck by the story in chapter 14 (recursive self-improvement) about the guy who got so addicted to self-improvement that he ended up in his own meta-reality, unable to understand even himself because he was getting so much better at hacking his learning. It's a completely fabricated story with no basis in reality that I'm aware of, but man, there are a lot of bullet points to make it seem factual. What are we going to do about the worrying trend of 10X hackers self-improving so much that they aren't able to exist in the real world? Here's an excerpt:

"The Addiction to Acceleration The fourth uncomfortable truth is how recursive improvement becomes compulsive. Kenji can’t stop because each day of not improving his improvement feels like stagnation. When you’re accelerating, constant velocity feels like moving backward.

This addiction manifests as: • Inability to accept plateau phases • Anxiety when not optimizing optimization • Devaluing of steady-state excellence • Compulsion to add meta-levels • Fear of falling behind yourself Recursive improvement can become its own trap."

I find that this criticism is far less applicable to, say, individuals, but perhaps it could be leveled against the way companies are currently treating AI, which of course is where this comes from.

robotbikes commented on I'd rather read the prompt   claytonwramsey.com/blog/p... · Posted by u/claytonwramsey
Herring · 4 months ago
Interesting. I think I'm a better editor, so I use it as a writer, but it makes sense that it works the other way too for strong writers. Your way might even be better, since evaluating a text is likely easier than constructing a good text (which is why your process worked even back with 3.5).
robotbikes · 4 months ago
I have a horrible time editing my own work (decision paralysis and whatnot), but I did have the idea that a good way to practice would be editing the content of LLM-generated fictional narratives. I think the point that many are making is that LLMs are useful as cognitive aids that augment thinking rather than replacements for thinking. They can be used to train your mind by inspiring thoughts you wouldn't have come up with on your own.
robotbikes commented on Lessons Learned Writing a Book Collaboratively with LLMs    · Posted by u/scottfalconer
robotbikes · 4 months ago
Nice. I leverage the strengths of AI in a way that affirms the human element in the collaboration. AI as it exists in LLMs is a powerful source of potentially meaningful language, but at this point LLMs don't have a consistent conscious mind that persists over time like humans do. So it's more like summoning a djinn to perform some task before it disappears back into the ether. We, of course, can interweave these disparate tasks into a meaningful structure, and it sounds like you have some good strategies for how to do this.

I have found that using an LLM to critique your writing is a helpful way of getting free feedback that is generic in tone but still specific to your text. I find this route more interesting than the copy-pasted, AI-voiced stuff. Suggesting that the AI embody a specific type of character, such as a pirate, can make the answers more interesting than just getting the median answer; it adds some flavor to the white bread.
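
As a rough sketch of the persona idea (again assuming a local Ollama instance on its default port and a chat-capable model; the pirate prompt, model name, and draft text are just placeholders):

    import json
    import urllib.request

    # Ask for feedback on a draft, with a system prompt that gives the model
    # a pirate persona instead of its default neutral voice.
    payload = json.dumps({
        "model": "devstral",  # any chat-capable local model should work here
        "messages": [
            {"role": "system",
             "content": "You are a grizzled pirate editor. Critique the user's prose bluntly."},
            {"role": "user",
             "content": "Here is my draft paragraph: The sun rose over the harbor..."},
        ],
        "stream": False,
    }).encode("utf-8")

    request = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["message"]["content"])

The persona doesn't make the critique any more accurate; it just pulls the output away from the median, safest phrasing.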

robotbikes commented on Nobody knows what's going on   raptitude.com/2024/06/nob... · Posted by u/herbertl
rozap · a year ago
One thing a coworker said once that I think about a lot: ever read an article about a subject that you know a bit about, and invariably you come to the conclusion that the writer doesn't really have a good grasp on what they're talking about? Now think about all the articles you read about subjects that you don't know much about; why would the accuracy be any higher on those ones?

Kind of a bummer to think about.

robotbikes · a year ago
And now just think of all of the people who will be getting their knowledge from LLMs, which are literally making stuff up through statistical linguistic inference, on a grand scale, from hearsay.
robotbikes commented on Clues to disappearance of North America's large mammals 50k years ago   phys.org/news/2024-05-clu... · Posted by u/wglb
soneca · a year ago
Did I understand correctly that we now have tools that will likely provide more clues to the disappearance, but the text mentions no particular clue yet? (I read it diagonally, trying to dodge the ads.)
robotbikes · a year ago
That was my read. They can now identify the species of very fragmentary bone remains via collagen protein matching. They didn't say what clues, if any, this would or could lead to.

u/robotbikes

Karma: 833 · Cake day: September 4, 2014