These projects are often characterized by very complex functional requirements, yet are undertaken by those who primarily only know (and endlessly argue about) non-functional requirements.
Am I missing something else?
1. We trained it on a fraction of the world's information (e.g. text and media that is explicitly online)
2. It carries all of the biases we humans have and, worse, the biases present in the information we chose to share explicitly online (which may or may not differ from what humans experience in everyday life)
All seem biased toward recent buzzwords and approaches. Discussions will include the same hand-waving about DDD, event sourcing and hexagonal architecture, i.e. the current fashion. Nothing of worth apparently preceded them.
I fear that we are condemned to a future with no genuinely novel progress, just a regurgitation of the current fashions and biases.
Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.
And that analogy works as a glimpse into the future - we’re looking at a fast-approaching world where LLMs are the interface to everything for most of us - except for the wealthy, who have access to more deterministic services or actual human agents. How long before the rich-person car rental service is the only one with staff at the desk, and the cheaper options are all LLM-based agents? Poor people ride the bus, rich people get to drive.
It has always seemed to me that workflows and processes need to be deterministic, not decided by an LLM.
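To make that concrete, here is a minimal sketch of the distinction I mean (Python, with a hypothetical call_llm helper and a made-up refund workflow): the LLM handles one bounded, reviewable step, while the control flow stays in ordinary, deterministic code.

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around whatever model/API you use."""
        raise NotImplementedError

    def classify_ticket(text: str) -> str:
        # The LLM only labels the ticket; it does not decide what happens next.
        label = call_llm(
            f"Classify this support ticket as 'refund', 'bug', or 'other':\n{text}"
        )
        return label.strip().lower()

    def handle_ticket(text: str, order_total: float) -> str:
        # Deterministic workflow: every branch is explicit, testable and auditable.
        category = classify_ticket(text)
        if category == "refund" and order_total <= 50:
            return "auto-refund"
        if category == "refund":
            return "escalate-to-human"
        if category == "bug":
            return "open-bug-report"
        return "route-to-support-queue"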
CS103 Methodologies: Advanced Hack at it ‘til it Works
CS103 History: Fashion, Buzzwords and Reinvention
CS104 AI teaches Software Architecture (CS103 prerequisite)
Accordingly, it seems to imply that we as developers can’t be accountable for anything but effort. It’s a sad condemnation of our industry, and at odds with any (normal) commercial undertaking, with limited resources that must be allocated among competing alternatives.
Any real manager knows the basics of choosing amongst competing alternatives: establish the projected cash flows and calculate the PV (present value) of each - a quick sketch of that calculation is below. But not for software - we’re too special.
(normal) - one that can sustain itself on a commercial basis, rather than just on injected capital or borrowed funds.
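Since the argument leans on PV, here is a rough sketch of the arithmetic (Python, with purely illustrative project names, cash flows and discount rate - not anyone's actual figures):

    def present_value(cashflows: list[float], rate: float) -> float:
        # Discount each cash flow back to today: CF_t / (1 + rate) ** t,
        # where cashflows[0] is received at the end of year 1, and so on.
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

    # Two competing alternatives, each costing 100 up front (hypothetical numbers):
    rebuild  = present_value([20, 40, 60, 60], rate=0.10) - 100  # ~ 37.3
    refactor = present_value([50, 50, 30, 10], rate=0.10) - 100  # ~ 16.1

    print(f"Rebuild NPV:  {rebuild:.1f}")   # the rebuild wins at a 10% discount rate
    print(f"Refactor NPV: {refactor:.1f}")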
No, it neither thinks nor learns. It can give the illusion of thinking, and a trained AI model itself learns nothing at inference time; it produces results based on its training data and the context it is given.
I think it is important that we do not ascribe human characteristics where they are not warranted. I also believe that understanding this can help us make better use of AI.