Readit News
earslap commented on NIST was 5 μs off UTC after last week's power cut   jeffgeerling.com/blog/202... · Posted by u/jtokoph
V__ · 4 days ago
Has anyone here ever needed microsecond precision? Would love to hear about it.
earslap · 3 days ago
How do you even get usable microsecond-precision sync info from a server thousands of kilometers away? The latency is variable, so the information you get can't be verified / will be stale the moment it arrives. I'm quite ignorant on the topic.
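(From what I gather, an NTP-style exchange timestamps both directions of the round trip and assumes the path is roughly symmetric, so the variable latency largely cancels out of the offset estimate. A rough sketch of the standard offset/delay arithmetic, illustrative only rather than a real NTP client:)

```python
# Illustrative NTP-style offset/delay math (not a real NTP client).
# t1: client transmit, t2: server receive, t3: server transmit, t4: client receive,
# all in seconds. Assumes the network path is roughly symmetric in both directions.

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset of client vs. server
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay, server processing excluded
    return offset, delay

# Example: 40 ms each way, client clock 5 microseconds behind the server.
offset, delay = ntp_offset_and_delay(t1=0.000000, t2=0.040005, t3=0.040105, t4=0.080100)
print(f"offset={offset * 1e6:.1f} us, delay={delay * 1e3:.1f} ms")
```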
earslap commented on Zoom bias: The social costs of having a 'tinny' sound during video conferences   phys.org/news/2025-03-bia... · Posted by u/bookofjoe
__mharrison__ · 9 months ago
Teleprompter!
earslap · 9 months ago
Probably not very relevant in a Zoom meeting, but there are people hypersensitive to the eye movements of someone reading from a teleprompter as well! When I can tell someone is reading from a script by their eye movements, it really affects my focus for some reason. Here is someone trying to find the ideal setup to prevent it: https://www.youtube.com/watch?v=8LkRMtWfhn4
earslap commented on The mysterious flow of fluid in the brain   quantamagazine.org/the-my... · Posted by u/pseudolus
theGnuMe · 9 months ago
I've felt my brain flush, dunno if anyone else has.
earslap · 9 months ago
Sometimes I hear a 10-20 Hz pulsing sound (so a pretty fast, chirping sound) emanating from around my brainstem / the back of my neck at the level of my ears, along with a slight feeling of some liquidy movement inside. I always thought it was connected to CSF movement, but maybe it isn't. If it isn't, though, I can't see what else it could be, as it happens when I'm completely stationary.
earslap commented on Zoom bias: The social costs of having a 'tinny' sound during video conferences   phys.org/news/2025-03-bia... · Posted by u/bookofjoe
earslap · 9 months ago
There are also documented effects of your camera quality, and even of where your camera is placed (you tend to look at your screen during a call, so your apparent gaze direction always depends on where the camera sits relative to the screen).
earslap commented on Arbitrary-Scale Super-Resolution with Neural Heat Fields   therasr.github.io/... · Posted by u/0x12A
LoganDark · 9 months ago
I wonder if there is a de-artifacting model out there.
earslap · 9 months ago
I think the company Topaz had a Photoshop plugin to remove "JPEG artifacts" - I don't know whether they use a neural model for it, though.
earslap commented on Gödel, Escher, Bach, and AI (2023)   theatlantic.com/ideas/arc... · Posted by u/pcfwik
zabzonk · 9 months ago
> it makes no sense whatsoever to let the artificial voice of a chatbot, chatting randomly away at dazzling speed, replace the far slower but authentic and reflective voice of a thinking, living human being.

well, of course this is the basic problem with these systems - how do you resolve it?

earslap · 9 months ago
First you need to find a way to differentiate human thinking from machine thinking: you basically need to show that human thought is not the result of statistical learning and inference but something completely different. If that is not possible, the distinction is moot - something that only appeals to emotion.
earslap commented on OpenAI asks White House for relief from state AI rules   finance.yahoo.com/news/op... · Posted by u/jonbaer
WorldPeas · 9 months ago
gpt-47 costs at least $1m/tok
earslap · 9 months ago
we are working on <impossible problem stumping humanity>. We have considered the following path to find a solution. Are we on the right track? Only answer Yes or No.

(1 week of GPUs whirring later)

AI: Your

(that will be $1 million, thank you)

earslap commented on The brain's waste clearing lymphatic system shown in people for first time   nih.gov/news-events/nih-r... · Posted by u/SubiculumCode
teagoat · a year ago
Is increased glymphatic clearance good or bad?
earslap · a year ago
good
earslap commented on The Languages of English, Math, and Programming   github.com/norvig/pytudes... · Posted by u/stereoabuse
earslap · a year ago
It is more obvious when taken to the extreme: with the current feedforward transformer architectures, there is a fixed amount of compute per token. Imagine asking an LLM a very hard question with a yes/no answer. There is an infinite number of cases where the compute available for calculating the next token is not enough to definitively solve the problem, even given "perfect" training.

You can increase the compute by allowing more tokens for the model to use as a "scratch pad", so the total compute available becomes num_tokens * ops_per_token, but there is still an infinite number of problems you can ask that will not be computable within that constraint.
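(A toy illustration of that budget, with made-up model numbers; the 2-FLOPs-per-parameter-per-token figure is only a common rough approximation for a forward pass:)

```python
# Rough, illustrative scratch-pad budget: compute per generated token is roughly
# fixed, so the total available compute grows only linearly with tokens emitted.

PARAMS = 70e9                 # hypothetical model size (70B parameters)
OPS_PER_TOKEN = 2 * PARAMS    # ~2 FLOPs per parameter per forward pass (rough estimate)

def total_compute(num_tokens: int) -> float:
    return num_tokens * OPS_PER_TOKEN

print(f"{total_compute(1):.2e} FLOPs for a bare yes/no answer")
print(f"{total_compute(4096):.2e} FLOPs with a 4096-token scratch pad")
```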

But you can offload computation by asking for a description of the computation instead of asking the LLM to compute the answer itself. I'm no mathematician, but I would not be surprised to learn that the above limit applies here as well in some sense (maybe there are solutions to problems that can't be represented in a reasonable number of symbols given our constraints - Kolmogorov complexity and all that). Still, for most practical (and beyond) purposes this is a huge improvement and should be enough for most things we care about. Letting the system describe the computation steps needed to solve a problem, executing that computation separately offline, and then feeding the result back if necessary is a necessary component if we want to do more useful things.
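(A minimal sketch of that offload loop, where `ask_llm` is a hypothetical stand-in for whatever model call is being used rather than any particular API, and sandboxing of the generated program is omitted for brevity:)

```python
# Sketch of the "describe the computation, execute it offline, feed it back" pattern.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to some LLM; returns the model's text output."""
    raise NotImplementedError

def solve_by_offloading(problem: str) -> str:
    # 1. Ask for a program that computes the answer, not for the answer itself.
    source = ask_llm(f"Write a Python program that prints the answer to: {problem}")

    # 2. Execute that program separately, outside the model's per-token compute budget.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)

    # 3. Feed the result back into the model if further interpretation is needed.
    return ask_llm(f"The program printed: {result.stdout!r}. Interpret this for: {problem}")
```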

u/earslap

Karma: 198 · Cake day: January 24, 2012