Readit News logoReadit News
joshuajooste05 commented on Endometriosis is an interesting disease   owlposting.com/p/endometr... · Posted by u/crescit_eundo
joshuajooste05 · 6 months ago
My girlfriend has endometriosis, I hadn't really read much about it until now, thank you for writing!

I think this is a story too common in women's healthcare.

It's often massively underfunded and underesearched, another symptom of the fact our society had not let women into STEM/politics for decades, and continues to erect barriers to encourage them not too.

I like the fact you spelled out the incentives for PhDs to do so at the end ;). Would be great!

joshuajooste05 commented on OpenAI dropped the price of o3 by 80%   twitter.com/sama/status/1... · Posted by u/mfiguiere
candiddevmike · 6 months ago
It's going to be a race to the bottom, they have no moat.
joshuajooste05 · 6 months ago
There was an article on here a week or two ago on batch inference.

Do you not think that batch inference gives at least a bit of a moat whereby unit costs fall with more prompts per unit of time, especially if models get more complicated and larger in the future?

joshuajooste05 commented on Knowledge Management in the Age of AI   ericgardner.info/notes/kn... · Posted by u/katabasis
briian · 7 months ago
I think the key to ensuring you don't lose your own ability to think is to just delay the onset of using AI when solving a problem.

The more deeply you think, you train your brain harder, but also improve the utility of the AI systems themselves because you can prompt better.

joshuajooste05 · 7 months ago
I constantly find myself just jumping to AI whenever I have a question. It is actually scaring me how much I just rely on it.
joshuajooste05 commented on The last six months in LLMs, illustrated by pelicans on bicycles   simonwillison.net/2025/Ju... · Posted by u/swyx
joshuajooste05 · 7 months ago
Does anyone have any thoughts on privacy/safety regarding what he said about GPT memory.

I had heard of prompt injection already. But, this seems different, completely out of humans control. Like even when you consider web search functionality, he is actually right, more and more, users are losing control over context.

Is this dangerous atm? Do you think it will become more dangerous in the future when we chuck even more data into context?

joshuajooste05 commented on Ask HN: How are parents who program teaching their kids today?    · Posted by u/laze00
joshuajooste05 · 7 months ago
Prompting, especially for code, is not too difficult of a skill to pick up, but the ability to A understand syntax and B develop the way of thinking is much harder.

I think the only way to learn to code is to really limit the use of AI (obvs can speed up some things, but never let him copy and paste from it)

I don't really think there is a substitute for encouraging him to push through without AI tbh.

When he eventually start's vibe coding, it will be like putting a v8 in a Ferrari instead of a VW golf.

joshuajooste05 commented on What's working for YC companies since the AI boom   jamesin.substack.com/p/wh... · Posted by u/jseidel
xorcist · 7 months ago
I see a pattern with AI companies. They always try to solve a really hard and not very useful problem. It's the same as with self driving car companies ten years ago: If you believe self driving tech is ripe for commercialization, the reasonable thing to do is something capital intensive and a special case where the technology most likely to succeed. For instance, heavy trucks automatically following others in formations for long drives. Saves gas, money, and potentially personnel.

There is a clear business case and buying large trucks is already a capex play. Then slowly work your way through more complex logistic problems from there. But no! The idea to sell was clearly the general problem including small cars that drive children to school through a suburban ice storm with lots of cyclists. Because that's clearly where the money is?

It's the same with AI. The consumer case is clearly there, people are easily impressed by it, and it is a given that consumers would pay to use it in products such as Illustrator, Logic Pro, modelling software etc. Maybe yet another try in radiology image processing, the death trap of startups for many decades now, but where there is obvious potential. But no! We want to design general purpose software -- in general purpose high level languages intended for human consumption! -- not even generating IR directly or running the model itself interactively.

If the technology really was good enough to do this type of work, why not find a specialized area with a few players limited by capex? Perhaps design a new competitive CPU? That's something we already have both specifications and tests for, and should be something a computer could do better than human. If an LLM could do a decent job there, it would easily be a billion dollar business. But no, let's write Python code and web apps!

joshuajooste05 · 7 months ago
Agreed, the agents people are building are not solving the real issues.

The other thing people have been trying to do is build general agents e.g. Manus.

I just think this misses the key value add that agents can add at the moment.

A general agent would need to match the depth of every vertical agent, which is basically AGI. Until we reach AGI, verticalized agents for specific real issues will be where the money/value is at.

u/joshuajooste05

KarmaCake day11May 27, 2025View Original