Let me illustrate the point using a different argument with the same structure: 1) The best professional chefs are excellent at cutting onions. 2) Therefore, if we train a model to cut onions using gradient descent, that model will be a very good professional chef.
2) clearly does not follow from 1).
Note that humans generate their own incomplete world model. For example, there are sounds and colors we don't hear or see, odors we don't smell, etc. We have an incomplete model of the world, but it's still a model that proves useful for us.
I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.
In fact, LLMs will be better than humans at learning new frameworks. It could end up being the opposite: frameworks and libraries become more important with LLMs.
All it took was a few years of higher interest rates and a depressed investment environment!
On TV and Reddit. In the real world, you're not getting policy outcomes today on a handshake promise of a payout tomorrow without someone in office to guarantee your end.
That said, Trump also investigated Obama for the Netflix deal. Will he investigate Melania now?