A rewrite based on functional equivalence does not infringe copyright as long as no creative expression was copied. That was the heart of the Google v. Oracle case: whether the API itself was creative expression or pure functionality.
There are many aspects to what can be considered creative expression, including names, organization, and other non-functional aspects. An algorithm would not be protected expression. If an AI can write it without reference to the original source code, using only documented behavior, then it would not be infringing (though proving that it didn't copy anything from its training data might be tough). It also would not itself be copyrightable, except for elements that could be traced back as "authorship" to the humans who worked with the AI.
If LLMs can create GOOD software based only on functionality, not by copying expression, then they could reproduce every piece of GPL software and release it as Public Domain (which it would have to be if no human has any authorship in it). By the same principle that the GPL software wasn't infringing on the programs whose functionality it copied, neither would the AI-written software be. That's a big IF at this point, though: the part about producing GOOD software without copying.
Yes, there is a lot of hype, wailing, and gnashing of teeth, but if it is good enough to be a worry, it is also good enough to empower the individual to survive it.
Ultimately, if it is all hype, it will soon crumble; if it is not, then productivity will increase by leaps and bounds. The one key issue is to make sure that all the gains aren't taken by a small group of people (whether the current rich and powerful, or those who displace them using new paradigms).
I suggest getting comfortable with the idea of a UBI.
Apparently they know better, even though:
1. They didn't issue the prompt, yet they somehow... knew what I meant by the phrase (obviously they didn't)
2. The LLM/AI took my prompt and interpreted it exactly how I meant it, and behaved exactly how I desired.
3. They then claim that it's about "knowing exactly what's going on" ... even though they didn't and they got it wrong.
This is the advantage of an LLM: if it gets it wrong, you can tell it. It might persist with an erroneous assumption, but you can tell it to start over (I proved that).
These "humans" however are convinced that only they can be right, despite overwhelming evidence of their stupidity (and that's why they're only JUNIORS in their fields)
Always starting over and trying to get everything into one single prompt can be much more work, with no better results than iteratively building up a context (which could probably be shown to sometimes produce a "better" result than could have been achieved otherwise).
Just telling it to "forget everything, let's start over" will have significantly different results from actually starting over. Whether that is sufficient, or even better than the alternatives, depends entirely on the problem and the context it is supposed to "forget". If your response had been "try just telling it to start over; it might work and be a lot easier than actually starting over", you might have gotten a better reception. Calling everyone morons, when your own response indicates a degree of misunderstanding of how an LLM operates, is not helpful.
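To make the distinction concrete, here is a minimal sketch, assuming an OpenAI-style chat setup where the client resends the entire message list on every call (the names and structure are illustrative, not tied to any particular vendor or SDK): "actually starting over" means the earlier turns never reach the model at all, while "forget everything" leaves them sitting in the context window.

    # Minimal sketch; message-list structure is an assumption for illustration.

    # Actually starting over: the earlier turns are gone, the model never sees them.
    fresh_start = [
        {"role": "user", "content": "Restated prompt, written from scratch."},
    ]

    # Telling it to "forget everything": the earlier turns remain in the context
    # window, so they still influence the next response even though the model
    # has been instructed to ignore them.
    same_session = [
        {"role": "user", "content": "Original prompt."},
        {"role": "assistant", "content": "Reply built on an erroneous assumption."},
        {"role": "user", "content": "Forget everything, let's start over. Restated prompt."},
    ]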