Computer music, as it existed a couple of decades ago, still played exactly what you asked it to; it didn't fill in the areas where you underspecified the music using a statistical model of trillions of existing songs. And that's the difference, for me: the ability to underspecify, and to have the details filled in in a way that the audience will perceive as intentional, but which is not.
Whether drawing (writing, etc.) through AI counts as drawing (as making art) is a debate we will have to resolve in the near future.
But this made my mind explode:
> So yes, let’s be bold and assume that AI codevelopers make programmers ten times as productive. (Your mileage may vary, depending on how eager your developers are to learn new skills.)
Has anyone ever seen this hypothetical 10x AI developer? Why do we always fall back on such hand-wavy arguments when talking about the efficiency of AI-assisted software engineering?
Here's what I think is the flaw in all the AI hype's arguments, including the one in this article (I hope Tim O'Reilly can withstand this small amount of debate).
Currently, LLM AIs are stochastic parrots: they don't create new levels of abstraction, i.e. creatively and responsibly package ideas into some higher-level form that can be reused.
All the examples in the article did offer a higher level of abstraction: assembly, high-level programming languages, libraries & frameworks like React, database systems etc.
AIs don't offer abstractions. They are not creative, they don't have "better ideas" than what their training data contains. They don't take responsibility for their work.
We engineers at our company have all tried, and are using, some AI tools, but they don't work nearly as well as management thinks. They make us 10%, maybe 20% in the best case, more efficient, but nowhere near 10x.
The terminology didn't catch on, but the idea is out there. Compare "game modes" in Overwatch, for example:
https://overwatch.blizzard.com/en-us/news/22938941/introduci...
These would be super hard to backfill later, because usually only the developer who implements them knows everything about the units (services, methods, classes etc.) in question.
With a strongly typed language, a suite of fast unit tests can already be at feature parity with a much slower integration test, because even with dependencies mocked out, they essentially test the whole call chain.
They can offer even more, because unit tests are supposed to cover edge cases, all error cases, wrong/malformed/null inputs, etc. With integration tests alone, as the internal call chain grows, it takes an exponentially larger number of integration tests to cover all cases. (E.g. if a call chain contains 3 services with 3 outcomes each, it could theoretically take up to 27 integration test cases to cover them all.)
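A back-of-the-envelope sketch of that combinatorics (the service names are hypothetical, not from the comment): each service only needs its own 3 outcomes covered in isolation, so unit testing costs 3 + 3 + 3 = 9 cases, while exhaustive integration testing costs 3^3 = 27.

```typescript
// Hypothetical chain of 3 services, each with 3 possible outcomes.
const services = ["parse", "validate", "persist"];
const outcomes = ["ok", "recoverable-error", "fatal-error"];

// Unit tests: cover each service's outcomes in isolation.
const unitCases = services.length * outcomes.length; // 3 * 3 = 9

// Integration tests: cover every end-to-end combination.
const integrationCases = outcomes.length ** services.length; // 3^3 = 27

console.log({ unitCases, integrationCases }); // { unitCases: 9, integrationCases: 27 }
```

The gap widens with every service added to the chain: a fourth service with 3 outcomes makes it 12 unit cases versus 81 integration cases.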
Also, ballooning unit test sizes, or resorting to unit testing private methods, give the developer feedback that the service is probably not "single responsibility" enough, providing an incentive to split and refactor it. This leads to a more maintainable service architecture, which integration tests don't help with.
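For illustration, a minimal sketch of that refactoring incentive (ReportService, Aggregator, and ReportFormatter are hypothetical names of mine):

```typescript
// Before: one service, and tests keep reaching into private helpers.
class ReportService {
  generate(data: number[]): string {
    return this.format(this.aggregate(data));
  }
  private aggregate(data: number[]): number {
    return data.reduce((a, b) => a + b, 0);
  }
  private format(total: number): string {
    return `Total: ${total}`;
  }
}

// After: two single-responsibility units, each trivially unit-testable
// through its public interface.
class Aggregator {
  aggregate(data: number[]): number {
    return data.reduce((a, b) => a + b, 0);
  }
}
class ReportFormatter {
  format(total: number): string {
    return `Total: ${total}`;
  }
}
```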
(Of course, let's not forget that this kind of unit testing is probably only reasonable on the backend. On the frontend, component tests from a functional/user perspective probably bring better results - hence the popularity of frameworks like Storybook and Testing Library. I consider these integration rather than unit tests.)
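As a concrete illustration of that user-perspective style, here's a minimal sketch of a Testing Library component test (Counter is a hypothetical component; assumes @testing-library/react plus the jest-dom matchers):

```tsx
import { render, screen, fireEvent } from "@testing-library/react";
import { Counter } from "./Counter"; // hypothetical component under test

test("increments the visible count when the button is clicked", () => {
  render(<Counter />);
  // Query by role and visible text, the way a user perceives the page,
  // rather than by component internals.
  fireEvent.click(screen.getByRole("button", { name: /increment/i }));
  expect(screen.getByText(/count: 1/i)).toBeInTheDocument();
});
```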
I think some functional programming languages solve that by running a good chunk (if not all) of the application in a single trace.
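One way to read that (my interpretation, with made-up names): keep I/O at the edges so the core logic is a pure function of its inputs; a single test call then exercises the whole chain with no mocks.

```typescript
// Pure core: the whole "call chain" is one referentially transparent function.
const parse = (raw: string): number[] => raw.split(",").map(Number);
const validate = (xs: number[]): number[] => xs.filter(Number.isFinite);
const summarize = (xs: number[]): number => xs.reduce((a, b) => a + b, 0);

const pipeline = (raw: string): number => summarize(validate(parse(raw)));

// One assertion traces the entire chain: "oops" parses to NaN and is
// filtered out by validate, so only 1 + 2 + 3 remains.
console.assert(pipeline("1,2,oops,3") === 6);
```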
And, of course, there are async/non-blocking calls: tracing a call across different threads or promises may not always be possible.
You mean, those execs will start to use the "AI" excuse that they have to cut salaries because it's the only way to keep people employed?
https://www.reuters.com/technology/ibm-pause-hiring-plans-re...