earslap commented on Zoom bias: The social costs of having a 'tinny' sound during video conferences   phys.org/news/2025-03-bia... · Posted by u/bookofjoe
__mharrison__ · 5 months ago
Teleprompter!
earslap · 5 months ago
Probably not very relevant in a Zoom meeting, but there are also people who are hypersensitive to the eye movements of someone reading from a teleprompter! When I can tell someone is reading from a script by their eye movements, it really affects my focus for some reason. Here is someone trying to find the ideal setup to prevent it: https://www.youtube.com/watch?v=8LkRMtWfhn4
earslap commented on The mysterious flow of fluid in the brain   quantamagazine.org/the-my... · Posted by u/pseudolus
theGnuMe · 5 months ago
I've felt my brain flush, dunno if anyone else has.
earslap · 5 months ago
Sometimes I hear a 10-20 Hz pulsing sound (so a pretty fast, chirping sound) emanating from around my brainstem / the back of my neck at the level of my ears, along with a slight feeling of liquid movement inside. I always assumed it was connected to CSF movement, but maybe it isn't. If it isn't, I can't see what else it could be, as it happens when I'm completely stationary.
earslap commented on Zoom bias: The social costs of having a 'tinny' sound during video conferences   phys.org/news/2025-03-bia... · Posted by u/bookofjoe
earslap · 5 months ago
There are also documented effects of your camera quality, and even of where your camera is placed (since you tend to look at your screen during a call, your gaze is always relative to it).
earslap commented on Arbitrary-Scale Super-Resolution with Neural Heat Fields   therasr.github.io/... · Posted by u/0x12A
LoganDark · 6 months ago
I wonder if there is a de-artifacting model out there.
earslap · 6 months ago
I think Topaz Labs had a Photoshop plugin to remove "JPEG artifacts" - I don't know whether they use a neural model for it, though.
earslap commented on Gödel, Escher, Bach, and AI (2023)   theatlantic.com/ideas/arc... · Posted by u/pcfwik
zabzonk · 6 months ago
> it makes no sense whatsoever to let the artificial voice of a chatbot, chatting randomly away at dazzling speed, replace the far slower but authentic and reflective voice of a thinking, living human being.

Well, of course this is the basic problem with these systems - how do we resolve it?

earslap · 6 months ago
First you need to find a way to differentiate human thinking from machine thinking: you basically need to show that human thought is not the result of statistical learning and inference but something completely different. If that is not possible, the distinction is moot - something that only appeals to emotion.
earslap commented on OpenAI asks White House for relief from state AI rules   finance.yahoo.com/news/op... · Posted by u/jonbaer
WorldPeas · 6 months ago
gpt-47 costs at least $1m/tok
earslap · 6 months ago
We are working on <impossible problem stumping humanity>. We have considered the following path to a solution. Are we on the right track? Answer only Yes or No.

(1 week of GPUs whirring later)

AI: Your

(that will be $1 million, thank you)

earslap commented on The brain's waste clearing lymphatic system shown in people for first time   nih.gov/news-events/nih-r... · Posted by u/SubiculumCode
teagoat · 10 months ago
Is increased glymphatic clearance good or bad?
earslap · 10 months ago
good
earslap commented on The Languages of English, Math, and Programming   github.com/norvig/pytudes... · Posted by u/stereoabuse
earslap · 10 months ago
It is more obvious when taken to the extreme: with the current feedforward transformer architectures, there is a fixed amount of compute per token. Imagine asking an LLM a very hard question with a yes/no answer. There are an infinite number of cases where the compute available for calculating the next token is not enough to definitively solve that problem, even given "perfect" training.

You can increase the compute by allowing more tokens for the model to use as a "scratch pad", so the total compute available becomes num_tokens * ops_per_token, but there is still an infinite number of problems you can ask that will not be computable within that constraint.

But you can offload computation by asking for a description of the computation instead of asking the LLM to compute it. I'm no mathematician, but I would not be surprised to learn that the above limit applies here as well in some sense (maybe there are solutions to problems that can't be represented in a reasonable number of symbols given our constraints - Kolmogorov complexity and all that). Still, for most practical (and beyond) purposes this is a huge improvement and should be enough for most things we care about. Letting the system describe the computation steps to solve a problem, executing that computation separately offline, and then feeding the result back if necessary is a necessary component if we want to do more useful things, as sketched below.
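A minimal sketch of that offloading loop, assuming a hypothetical ask_llm helper standing in for whatever chat-completion call you use:

    import subprocess
    import sys

    def ask_llm(prompt: str) -> str:
        # Hypothetical: wire up your model/API of choice here.
        raise NotImplementedError

    def solve_by_offloading(problem: str) -> str:
        # 1. Ask for a *description* of the computation, not the answer itself.
        code = ask_llm(
            "Write a self-contained Python program that prints the answer to:\n"
            + problem
        )
        # 2. Execute that computation offline, outside the token budget.
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=60,
        )
        # 3. Feed the result back if necessary.
        return ask_llm(
            f"Problem: {problem}\nComputed result: {result.stdout.strip()}\n"
            "State the final answer."
        )

The compute spent inside subprocess.run is not bounded by num_tokens * ops_per_token, which is the whole point of describing the computation instead of performing it token by token.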

earslap commented on Language is not essential for the cognitive processes that underlie thought   scientificamerican.com/ar... · Posted by u/orcul
shepherdjerred · 10 months ago
> What this tells us for AI is that we need something else besides LLMs.

Humans not taking this approach doesn’t mean that AI cannot.

earslap · 10 months ago
Not only that, but LLMs also "think" in a latent representation that is several layers deep. Sure, the first and last layers make it look like the model is doing token wrangling, but what happens in the middle layers is mostly a mystery. The first layer deals directly with tokens because that is the data we observe (a "shadow" of the world), and the last layer also deals with tokens because we want to understand what the network is "thinking", so it is a human-specific lossy decoder (we can and do remove that translator and plug the latent representations into other networks to train them in tandem). There is no reason to believe that the middle layers are "thinking in language".
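A rough sketch of that shape as a toy decoder-only stack in PyTorch (module names are hypothetical; causal masking is omitted for brevity):

    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab=50_000, d=512, n_layers=12):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)  # first layer: tokens -> latents
            self.blocks = nn.ModuleList(
                nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
                for _ in range(n_layers)
            )                                    # middle layers: latent "thought"
            self.unembed = nn.Linear(d, vocab)   # last layer: latents -> tokens

        def forward(self, tokens, return_latents=False):
            h = self.embed(tokens)
            for block in self.blocks:
                h = block(h)  # nothing token-shaped in here
            if return_latents:
                return h      # hand the latents to another network directly
            return self.unembed(h)  # the human-specific lossy decoder

Calling model(tokens, return_latents=True) skips the decoder entirely, which is what "removing that translator" amounts to when training networks in tandem.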
