The bigger challenge might be that people with ML expertise need to solve problems of human-AI interaction and alignment: training for the former is uni-disciplinary, while the latter is trans-disciplinary.
Their problem is that the quality of engineering started off being critical (who cares how good the content is if you get endless streaming failures?) and is now not so important.
The same corporate strategy and culture that hired "A-player" engineers for streaming is hiring "A-player" studios for content.
Defining A-players this way means you've set the rules of the game instead of building a culture of adaptive success criteria to meet customer opportunities. The label itself is a symptom of organizational ossification. This is the likely legacy of our tech giants: innovative in only one direction, and unable to change fast enough to avoid becoming brittle, mediocre institutions over time.
As consumers, we can all feel this ossified mediocrity every day.
It's funny, because I do not like the process of software engineering at all! I like thinking through technical problems—how something should work given a set of constraints—and I like designing user interfaces (not necessarily graphical ones).
And I just love using Claude Code! I can tell it what to do and it does the annoying part.
It still takes work, by the way! Even for entirely "vibe coded" apps, I need to think through exactly what I want, and I need to test and iterate, and when the AI gets stuck I need to provide technical guidance to unblock it. But that's the fun part!
No software engineer is good enough to time-efficiently write the whole stack from machine code up; it will always be an arbitrary, idiomatic set of problems, and that is exactly what LLMs are so good at parsing.
Using "Scribe" cycles to define the right problem and carefully review code outputs seems like the way.
I've always enjoyed this question on their FAQ that gives some tips for potential competitors - https://www.listennotes.com/api/faq/#faq2
> There are at least 3,035,027 podcasts and 156,316,374 episodes on the Internet...
Knowing how to ask someone what they are thinking or feeling is a key skill for anyone building a product for someone else. It's nuanced enough that books like "The Mom Test" break it down for entrepreneurs to implement tactically. On the other hand, West's research also suggests that you can comfortably underweight your own instincts, laden as they are with egocentric and culture-centric biases. Further, you can also comfortably underweight the observations of colleagues who might assert empathic abilities.
Perhaps the most interesting segment of this podcast was the story of how the author and her tenured colleague set aside their own intuitions about an acrimonious rivalry with one another and evaluated their relationship scientifically, through hundreds of questions from their own research. They went from disliking one another to getting married.
It's about what he created, not what he didn't create.
They're not acquiring the product he built, they're acquiring the product vision.