Readit News
kingsleyopara commented on IP blocking the UK is not enough to comply with the Online Safety Act   prestonbyrne.com/2025/11/... · Posted by u/pinkahd
kingsleyopara · a month ago
The most frustrating part here is that this car crash of a policy had cross-party support, so there wasn’t even a way for UK people like me to vote against it.
kingsleyopara commented on iPhone 17 Pro and iPhone 17 Pro Max   apple.com/iphone-17-pro/... · Posted by u/meetpateltech
kingsleyopara · 3 months ago
Surprising there’s no matte-black iPhone 17 Pro - dark, low-reflectance finishes are standard in pro video kit because they minimise specular reflections and stray highlights; keeping a shiny silver finish and skipping a subdued matte black feels like a strange choice and undercuts the “Pro” claim.
kingsleyopara commented on Apple Debuts iPhone 17   apple.com/newsroom/2025/0... · Posted by u/excerionsforte
kingsleyopara · 3 months ago
Surprising there’s no matte-black iPhone 17 Pro - dark, low-reflectance finishes are standard in pro video kit because they minimise specular reflections and stray highlights; keeping a shiny silver finish and skipping a subdued matte black feels like a strange choice and undercuts the “Pro” claim.
kingsleyopara commented on Gemini CLI   blog.google/technology/de... · Posted by u/sync
cperry · 6 months ago
Hi - I work on this. Uptake is a steep curve right now, spare a thought for the TPUs today.

Appreciate all the takes so far, the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests; we'll all be reading.

kingsleyopara · 6 months ago
Thanks so much for this! I’d really appreciate a more consumer-oriented subscription offering, similar to Claude Max, that combines Gemini CLI (with IP compliance) and the Gemini app (extra points for API access too!).
kingsleyopara commented on Gemini CLI   blog.google/technology/de... · Posted by u/sync
bachmeier · 6 months ago
> Google is fumbling the bag so badly with the pricing.

In certain areas, perhaps, but Google Workspace at $14/month not only gives you Gemini Pro, but 2 TB of storage, full privacy, email with a custom domain, and whatever else. College students get the AI pro plan for free. I recently looked over all the options for folks like me and my family. Google is obviously the right choice, and it's not particularly close.

kingsleyopara · 6 months ago
Gemini 2.5 Pro in Workspace was restricted to 32k tokens [0] - do you know if this is still the case?

[0] https://www.reddit.com/r/GoogleGeminiAI/comments/1jrynhk/war...

kingsleyopara commented on SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators   machinelearning.apple.com... · Posted by u/pizza
timschmidt · 8 months ago
> it doesn’t offer any advantages over 3- or 4-bit quantization.

"zero-shot accuracy retention at 4- and 3-bit compression to be on par with or better than state-of-the-art methods, while maintaining performance comparable to FP16 baselines."

My reading of that says FP16 accuracy at Q3 or Q4 size / memory bandwidth. Which is a huge advantage.

kingsleyopara · 8 months ago
For zero-shot accuracy from Table 3:

* LLaMA 3 8B: baseline 72.26, 4-bit 71.31, 3-bit 62.79

* LLaMA 3 70B: baseline 79.51, 4-bit 78.06, 3-bit 74.68

These results seem comparable to modern quantization methods—for example, the ~4-bit results for smaller LLaMA models listed here: https://ai.meta.com/blog/meta-llama-quantized-lightweight-mo...
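For concreteness, the gap to baseline implied by those Table 3 numbers can be worked out in a few lines (a quick sketch; the figures are copied from the comment above):

```python
# Zero-shot accuracy from SeedLM's Table 3, as quoted above.
results = {
    "LLaMA 3 8B":  {"baseline": 72.26, "4-bit": 71.31, "3-bit": 62.79},
    "LLaMA 3 70B": {"baseline": 79.51, "4-bit": 78.06, "3-bit": 74.68},
}

for model, r in results.items():
    for bits in ("4-bit", "3-bit"):
        # Accuracy drop in absolute points relative to the FP16 baseline.
        print(f"{model} {bits}: -{r['baseline'] - r[bits]:.2f} points")
```

The 4-bit settings sit within about 1.5 points of baseline, while 3-bit costs noticeably more, especially for the 8B model.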

kingsleyopara commented on SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators   machinelearning.apple.com... · Posted by u/pizza
visarga · 8 months ago
Very interesting trick, using a dictionary of basis vectors which are quickly computed from a seed without storage. But the result is the same 3 or 4 bit quantization, with only a slight improvement. Their tiles are small, just 8 or 12 weights, it's why compression doesn't go too far. It would have been great if this trick lowered quantization <1 bit/weight, that would require longer tiles. Wondering what are the limits if we use a larger reservoir of cheap entropy as part of neural net architecture, even in training.

Congrats to Apple and Meta, makes sense they did the research, this will go towards efficient serving of LLMs on phones. And it's very easy to implement.

kingsleyopara · 8 months ago
I was about to post something similar. While the research is interesting, it doesn’t offer any advantages over 3- or 4-bit quantization. I also have to assume they explored using longer tiles but found it to be ineffective — which would make sense to me from an information theory perspective.
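The trick discussed in this thread (regenerating basis vectors from a stored seed instead of storing the basis itself) can be sketched roughly as follows. This is a hypothetical illustration assuming numpy, with a generic PRNG standing in for the paper's LFSR and no per-tile seed search; `compress_tile` and `decompress_tile` are illustrative names, not the paper's API:

```python
import numpy as np

def compress_tile(tile, seed, k=4, coeff_bits=4):
    """Fit a small tile of weights as a linear combination of k
    pseudo-random basis vectors generated from `seed`, then quantize
    the coefficients. Only the seed, quantized coefficients, and scale
    need to be stored -- the basis is regenerated on the fly."""
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((tile.size, k))
    coeffs, *_ = np.linalg.lstsq(basis, tile, rcond=None)
    # Uniform symmetric quantization of the coefficients.
    scale = np.abs(coeffs).max() / (2 ** (coeff_bits - 1) - 1)
    q = np.round(coeffs / scale).astype(int)
    return q, scale

def decompress_tile(seed, q, scale, tile_len, k=4):
    """Regenerate the basis from the seed and reconstruct the tile."""
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((tile_len, k))
    return basis @ (q * scale)
```

With tiles of only 8-12 weights and k small, the subspace fit is cheap but the achievable compression is bounded, which matches the observation above that longer tiles would be needed to push below 3-4 bits per weight.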
kingsleyopara commented on James Webb Space Telescope reveals that most galaxies rotate clockwise   smithsonianmag.com/smart-... · Posted by u/instagraham
exe34 · 8 months ago
I think it would be more that the entire universe is in a black hole spinning in a certain direction.
kingsleyopara · 8 months ago
Exactly, and anything new entering the spinning black hole is likely to inherit its spin.
kingsleyopara commented on Framework's first desktop is a strange–but unique–mini ITX gaming PC   arstechnica.com/gadgets/2... · Posted by u/perihelions
adgjlsfhk1 · 10 months ago
On linux, the gpu can go up to 110 GB.
kingsleyopara · 10 months ago
Apologies, I stand corrected. Do you have a reference for this? I'm genuinely curious why the 96GB "limit" is so frequently cited - I assumed it must be a hardware limitation.
kingsleyopara commented on Framework's first desktop is a strange–but unique–mini ITX gaming PC   arstechnica.com/gadgets/2... · Posted by u/perihelions
sunshowers · 10 months ago
True, but less RAM.
kingsleyopara · 10 months ago
One thing to note about the extra RAM: for the 128GB option, my understanding is that the GPU is limited to using only 96GB [1]. In contrast, on Macs, you can safely increase this to, for example, 116GB using `sysctl`.

[1] https://www.tomshardware.com/pc-components/cpus/amds-beastly...
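On Apple Silicon Macs the adjustment mentioned above is commonly done via the `iogpu.wired_limit_mb` sysctl; the exact key name can vary by macOS version, and the value below is an illustrative figure for a 128GB machine, not a recommendation:

```shell
# Allow the GPU to wire up to ~116 GB of unified memory (116 * 1024 MB).
# Takes effect immediately and resets on reboot.
sudo sysctl iogpu.wired_limit_mb=118784

# Setting it back to 0 is commonly reported to restore the default cap.
sudo sysctl iogpu.wired_limit_mb=0
```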

u/kingsleyopara

Karma: 307 · Cake day: January 11, 2021