Increasingly I think that "free speech" should apply to humans only, not to humans armed with a gas-powered bullshit spewer.
If they censor something like this, how could we trust platforms with the actually important subjects?
Now OpenAI will be lecturing its own users while expecting them to make it rich. I suspect the users will find that insulting.
Generation for personal use is not illegal, as far as I know.
Let's focus on the tech firms that produce software.
Two things should happen if AI proliferates into software development:
1) Top line increases - more productive labour lets them take on more projects.
2) Operating margin increases - labour input declines and more cost-reduction projects become worthwhile.
If those 2 things don't occur, the AI investment was a waste of money from a financial perspective. And that's before I even discount the cash flows at the cost of capital of these high-risk projects (a high discount rate).
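To make that discounting point concrete, here's a toy NPV check (all figures invented, assuming a simple end-of-year cash-flow model). The same incremental cash flows that look attractive at a low discount rate can be value-destroying once you apply a rate appropriate to a high-risk project:

```python
# Toy NPV check for an AI investment (all figures invented).
# NPV = -initial_cost + sum(cash_flow_t / (1 + r)**t)

def npv(rate: float, initial_cost: float, cash_flows: list[float]) -> float:
    return -initial_cost + sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1)
    )

flows = [300_000] * 5          # extra annual cash flow from AI-driven productivity
cost = 1_000_000               # up-front AI investment

print(npv(0.08, cost, flows))  # low discount rate:  ~ +197,813 -> looks worthwhile
print(npv(0.25, cost, flows))  # high-risk discount: ~ -193,216 -> value destroyed
```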
At some point everyone will be analysed in this manner. As it stands, only Nvidia is winning, and ironically not because of LLMs themselves, but because they sell the hardware that LLMs run on.
Internally, they could actually write 1,000x more software and it would all be absorbed by internal customers. They will buy less packaged software from tech firms (unless it's infrastructure), and internally they could keep the same headcount or more, since AI lets them write more software.
But that prompt leads me to believe you're going to get rather 'random' results, because it leaves SO much room for interpretation.
Also, in my experience, punctuation is important - particularly for pacing and for grouping the logical 'parts' of a task - and your prompt reads like a run-on sentence.
I'm making a lot of assumptions here, but I bet that if I were in your shoes, writing a prompt to start a task of a similar type, my prompt would have been 5 to 20x the length of yours (depending on complexity and importance), with far more detail - including describing the same thing more than once in different ways, in relation to other parts of the task, to establish relationships and hierarchy.
I'm glad you got what you needed - but these types of prompts and approaches are why I believe so many people think these models aren't useful.
You get out of them what you put into them. If you give them structured, well-written requirements and a codebase that follows clear patterns, you're going to get back something of comparable quality. It's no different with a developer: if you gave a junior coder, or a team of developers, the following as a feature requirement: `implement a fuzzy search for conversations and reports either when selecting "Go to Conversation" or "Go to Report" and typing the title or when the user types in the title in the main input field, and none of the standard elements match, a search starts with a 2s delay` then you can't really be mad when you don't get back exactly what you wanted.
edit: To put it another way - spend a few more minutes on the initial task/prompt/description of your needs and you're likely to get back more of what you're expecting.
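For contrast, here's a sketch (details entirely invented) of how that same one-line requirement might be expanded into something a model - or a junior dev - can actually act on:

```
Feature: fuzzy search for conversations and reports

Triggers:
1. User selects "Go to Conversation" or "Go to Report" and types a title.
2. User types a title into the main input field and none of the standard
   elements match; in that case, start the search after a 2s delay.

Behaviour (assumptions - adjust to your app):
- Match against titles only, case-insensitive, tolerant of typos.
- Rank results by similarity; show the top 10 in a dropdown.
- Enter opens the top result; Esc dismisses the dropdown.

Constraints (assumptions):
- Reuse the existing search UI component rather than adding a new one.
- Debounce keystrokes so the 2s delay restarts on each input.
```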