Readit News
briian commented on Liquid Glass – WWDC25 [video]   developer.apple.com/video... · Posted by u/lnrd
briian · 9 months ago
I feel sorry for Steve Jobs
briian commented on How to Build Conscious Machines   osf.io/preprints/thesisco... · Posted by u/hardmaru
briian · 9 months ago
One thought I have from this:

Is OpenAI funding research into neuroscience?

Artificial neural networks were somewhat based on the human brain.

Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.

Obviously LLMs are somewhat black boxes at the moment.

But if we understood the brain better, would we not be able to imitate consciousness better? If there is a limit to throwing compute at LLMs, then understanding the brain could be the key to unlocking even more intelligence from them.

briian commented on OpenAI dropped the price of o3 by 80%   twitter.com/sama/status/1... · Posted by u/mfiguiere
croes · 9 months ago
Easy doesn’t mean cheap.

They need lots of energy, and customers don't pay much, if they pay at all.

briian · 9 months ago
Exactly,

The developers of AI models do have a moat: the cost of training the model in the first place.

It's the 90% of low-effort AI wrappers with little to no value add who have no moat.

briian commented on Knowledge Management in the Age of AI   ericgardner.info/notes/kn... · Posted by u/katabasis
briian · 9 months ago
I think the key to ensuring you don't lose your own ability to think is to just delay the onset of using AI when solving a problem.

The more deeply you think, the harder you train your brain, and you also improve the utility of the AI systems themselves, because you can prompt better.

briian commented on Focus and Context and LLMs   taras.glek.net/posts/focu... · Posted by u/tarasglek
briian · 9 months ago
The funny thing about vibe coding is that God tier vibe coders think they're in DGAF mode. But people who are actually in DGAF mode and just say "Make instagram for me" think they're in god tier.

But agreed, there needs to be a better way for these agents to figure out what context to select. It doesn't seem like this will be too large an issue to solve, though?

briian commented on Estimating Logarithms   obrhubr.org/logarithm-est... · Posted by u/surprisetalk
briian · 9 months ago
So much of the maths/stats in economics is built on this one little trick.

It's still pretty cool to me that (a) this works and (b) it can be used to do so much.
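Assuming the "one little trick" here is the small-number approximation ln(1 + x) ≈ x, a quick sketch of how it shows up in economics as log returns (the function names are illustrative, not from the linked post):

```python
import math

def log_return(p0: float, p1: float) -> float:
    """Exact log return between two prices."""
    return math.log(p1 / p0)

def approx_return(p0: float, p1: float) -> float:
    """Small-change approximation: ln(1 + x) is roughly x for small x."""
    return (p1 - p0) / p0

# For a 5% price move the approximation is within about 0.12 percentage points.
exact = log_return(100, 105)    # ln(1.05), roughly 0.0488
approx = approx_return(100, 105)  # exactly 0.05
print(exact, approx)
```

This is why growth rates, returns, and inflation figures can be added across periods and treated almost interchangeably with percentage changes, as long as the changes stay small.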

briian commented on Cinematography of “Andor”   pushing-pixels.org/2025/0... · Posted by u/rcarmo
briian · 9 months ago
Unoriginal opinion: Andor is the best Star Wars TV Show/Film since Disney took over.

But the reason it probably did so well is that they let people like Christophe just make something cool instead of something overly commercial.

I'd love to see VCs start funding film production like they fund video games. Maybe then we'd have a genuinely new film of the quality of Andor that was as popular as the original Star Wars, instead of another thing inside of Star Wars.

Something genuinely new; there have only been remakes recently.

I just want a new universe to geek out on.

briian commented on When Fine-Tuning Makes Sense: A Developer's Guide   getkiln.ai/blog/why_fine_... · Posted by u/scosman
briian · 9 months ago
I think fine-tuning is one of the things that makes verticalised agents so much better than general ones at the moment.

If agents aren’t specialised, then every time they do anything they have to figure out what to do, and they don’t know what data matters, so they often just slap entire web pages into their context. General agents use loads of tokens because of this. Vertical agents often have hard-coded steps, know what data matters, and already know what APIs they’re going to call. They’re far more efficient, so they burn less cash.

This also improves the accuracy and quality.

I don't think this effect is as small as people say, especially when combined with the UX and domain specific workflows that verticalised agents allow for.
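The token-usage gap described above can be sketched roughly like this. Everything here is hypothetical and illustrative (there is no real framework or API behind these names); it just contrasts dumping a whole page into context against a hard-coded extraction step:

```python
# Hypothetical sketch: why a vertical agent burns fewer tokens than a general one.
# All names and data are illustrative, not a real agent framework.

FULL_PAGE = "<html>" + "lots of boilerplate markup... " * 200 + "</html>"

def general_agent_context(page_html: str) -> str:
    # A general agent doesn't know which fields matter,
    # so it slaps the entire page into the model's context.
    return f"Here is the page, figure out what to do:\n{page_html}"

def vertical_agent_context(price: float, ticker: str) -> str:
    # A vertical agent has a hard-coded step: it already called a
    # known quotes API and extracted only the two fields it needs.
    return f"Latest quote for {ticker}: {price}. Decide buy/hold/sell."

general = general_agent_context(FULL_PAGE)
vertical = vertical_agent_context(182.52, "AAPL")
print(len(general), len(vertical))  # the vertical context is far smaller
```

Since context length maps roughly to token count and cost, the specialised path is cheaper on every single call, which compounds quickly at scale.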

briian commented on Why DeepSeek is cheap at scale but expensive to run locally   seangoedecke.com/inferenc... · Posted by u/ingve
briian · 9 months ago
This reminded me that the economies of scale in AI, especially inference, are huge.

When people say LLMs will be commoditised, I am not sure that means the market is going to be super competitive. As the economies of scale of AI get even bigger (larger training costs + batch inference etc.), it just seems likely that only around three companies will dominate LLMs.
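A back-of-the-envelope sketch of the batching effect mentioned above. The numbers are assumed purely for illustration, and the linear model ignores the memory and compute limits that cap real batch sizes:

```python
# Hypothetical arithmetic: why batched inference favours scale.
# All figures are illustrative assumptions, not measured costs.

GPU_COST_PER_SECOND = 0.002  # assumed $/s to run one accelerator

def cost_per_token(batch_size: int, tokens_per_request_per_second: float = 50.0) -> float:
    # A bigger batch shares the same fixed GPU cost across more
    # concurrent requests (until memory/compute limits bite).
    total_tokens_per_second = batch_size * tokens_per_request_per_second
    return GPU_COST_PER_SECOND / total_tokens_per_second

small_provider = cost_per_token(batch_size=4)
big_provider = cost_per_token(batch_size=64)
print(small_provider / big_provider)  # under these assumptions, 16x cheaper per token
```

Under these toy assumptions, the provider with enough traffic to fill large batches has a 16x per-token cost advantage, which is the kind of gap that concentrates a market.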

u/briian

Karma: 7 · Cake day: April 11, 2025
About
open sourcing Wall Street's edge at Barebone | ex Goldman Sachs ($15bn in M&A / IPO completed)

https://linktr.ee/barebone.ai
