For example, if you are doing DDD and your repository implementation is committed to SQL, adding another layer of abstraction is not worth it. But if your design is less sophisticated, or you are at an early stage of the project, you may find it appealing to use that abstraction.
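A minimal sketch of the trade-off in Python, assuming a toy `Order` aggregate and sqlite3; all of the names here (`Order`, `OrderRepository`, `SqlOrderRepository`) are hypothetical, not from the comment above:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Order:
    order_id: str
    total_cents: int


class OrderRepository(Protocol):
    """The extra abstraction layer: domain code depends only on this port."""

    def get(self, order_id: str) -> Optional[Order]: ...
    def add(self, order: Order) -> None: ...


class SqlOrderRepository:
    """Concrete SQL adapter. If SQL is a settled, permanent decision,
    the Protocol above may be the layer that is not worth keeping."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self._conn = conn
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total INTEGER)"
        )

    def get(self, order_id: str) -> Optional[Order]:
        row = self._conn.execute(
            "SELECT id, total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return Order(row[0], row[1]) if row else None

    def add(self, order: Order) -> None:
        self._conn.execute(
            "INSERT INTO orders (id, total) VALUES (?, ?)",
            (order.order_id, order.total_cents),
        )
```

Early on, swapping `SqlOrderRepository` for an in-memory fake behind the same `Protocol` is cheap and useful for tests; once the storage choice is final, that indirection is mostly ceremony.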
How to measure the effectiveness of a given prompt seems like a big deal to me right now.
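One crude way to make that measurable is to score a prompt template against a small labeled set. A minimal sketch, assuming you already have some `complete(prompt) -> str` wrapper around your model; the test cases and the exact-match metric are purely illustrative:

```python
from typing import Callable, List, Tuple


def prompt_accuracy(
    complete: Callable[[str], str],
    prompt_template: str,
    cases: List[Tuple[str, str]],
) -> float:
    """Fraction of labeled cases where the model's answer matches exactly."""
    hits = 0
    for question, expected in cases:
        answer = complete(prompt_template.format(question=question))
        if answer.strip().lower() == expected.strip().lower():
            hits += 1
    return hits / len(cases) if cases else 0.0


# Usage: compare two candidate prompts on the same labeled set.
cases = [("2+2", "4"), ("capital of France", "Paris")]
# score_a = prompt_accuracy(complete, "Answer tersely: {question}", cases)
# score_b = prompt_accuracy(complete, "Q: {question}\nA:", cases)
```

Exact match is obviously too blunt for open-ended outputs; what replaces it there is exactly the open question.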
- Big tech monopolizing the models, data, and hardware.
- Copyright concerns.
- Job security.
- AIs becoming sentient and causing harm for their own ends.
- Corporations intentionally using AI to cause harm for their own ends.
- Feedback loops flooding the internet with content of unknown provenance, which then gets included in the next model's training data, and so on.
- AI hallucinations resulting in widespread persistent errors that cause an epistemological crisis.
- The training set is inherently biased; human knowledge and perspectives not represented in this set could be systematically wiped from public discourse.
We can have meaningful discussions on each of these topics. And I'm sure we all have a level of concern assigned to each (personally, I'm far more worried about an epistemological crisis and corporate abuse than some AI singularity).
But we're seeing these topics interact in real time to create a system with huge emergent societal properties. I'm not sure anyone has a handle on the big picture (there is no one driving the bus!), but there are plenty of us sitting in the passenger seats raising the alarm about what we see out our respective little windows.
What a win for Microsoft if they can expose OpenAI as a usable solution...
Home servers, powered by low-cost hardware, may become a real thing.
> Complex problems exist...
Complexity is an attribute of the solution; the problem itself is just solvable or not.