Do you not think that batch inference gives at least a bit of a moat whereby unit costs fall with more prompts per unit of time, especially if models get more complicated and larger in the future?
Do you not think that batch inference gives at least a bit of a moat whereby unit costs fall with more prompts per unit of time, especially if models get more complicated and larger in the future?
The more deeply you think, you train your brain harder, but also improve the utility of the AI systems themselves because you can prompt better.
I had heard of prompt injection already. But, this seems different, completely out of humans control. Like even when you consider web search functionality, he is actually right, more and more, users are losing control over context.
Is this dangerous atm? Do you think it will become more dangerous in the future when we chuck even more data into context?
I think the only way to learn to code is to really limit the use of AI (obvs can speed up some things, but never let him copy and paste from it)
I don't really think there is a substitute for encouraging him to push through without AI tbh.
When he eventually start's vibe coding, it will be like putting a v8 in a Ferrari instead of a VW golf.
There is a clear business case and buying large trucks is already a capex play. Then slowly work your way through more complex logistic problems from there. But no! The idea to sell was clearly the general problem including small cars that drive children to school through a suburban ice storm with lots of cyclists. Because that's clearly where the money is?
It's the same with AI. The consumer case is clearly there, people are easily impressed by it, and it is a given that consumers would pay to use it in products such as Illustrator, Logic Pro, modelling software etc. Maybe yet another try in radiology image processing, the death trap of startups for many decades now, but where there is obvious potential. But no! We want to design general purpose software -- in general purpose high level languages intended for human consumption! -- not even generating IR directly or running the model itself interactively.
If the technology really was good enough to do this type of work, why not find a specialized area with a few players limited by capex? Perhaps design a new competitive CPU? That's something we already have both specifications and tests for, and should be something a computer could do better than human. If an LLM could do a decent job there, it would easily be a billion dollar business. But no, let's write Python code and web apps!
The other thing people have been trying to do is build general agents e.g. Manus.
I just think this misses the key value add that agents can add at the moment.
A general agent would need to match the depth of every vertical agent, which is basically AGI. Until we reach AGI, verticalized agents for specific real issues will be where the money/value is at.
I think this is a story too common in women's healthcare.
It's often massively underfunded and underesearched, another symptom of the fact our society had not let women into STEM/politics for decades, and continues to erect barriers to encourage them not too.
I like the fact you spelled out the incentives for PhDs to do so at the end ;). Would be great!