Why are programs - the result of the ingenuity of people working in the software field - not protected against AI slop?
Why isn't there any kind of narrative out there describing how fake and soulless the code written by AI agents is?
Because soulless code does not matter. In other fields the result is more subjective: I don't like movies with a desaturated color palette, but a lot of people like them. Maybe LLMs can produce a new genre of movies that people who appreciate classic films or music find soulless; they find it sad that the peasants kind of like these films, and see the whole thing as a risk to their careers, their whole craft, and the human effort that goes into making their art.
In code it's objective: either the result works or it doesn't. I guess you can stretch "it works" to include maintainability, where it starts to get more subjective, but at the end of the day you can also reach a point where the whole thing collapses under its own weight.
I think this is the main difference in the reaction to LLMs between fields: in fields that are subjective and more sensitive to the receiver's taste you can notice a rage (rage is probably an overstatement) against them, while in fields where the result is objective people simply say it does or doesn't work.
(The reason I did that is that the anti-crawler protections also unfortunately hit some legit users, and we don't want to block legit users. However, it seems that I turned the knobs down too far.)
In this case, though, we had a secondary failure: PagerDuty woke me up at 5:24am, I checked HN and it seemed fine, so I told PagerDuty the problem was resolved. But the problem wasn't resolved - at that point I was just sleeping through it.
I'll add more as we find out more, but it probably won't be till later this afternoon PST.
What types of protections are used on HN? Rate limiting? IP range blacklists?
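For illustration, here's a minimal sketch of the per-IP token-bucket rate limiting that anti-crawler protections often use. The class names, bucket capacity, and refill rate below are made-up assumptions for the example; nothing here reflects how HN actually implements its protections.

    # Illustrative per-IP token-bucket rate limiter (not HN's real implementation).
    # "Turning the knobs down" corresponds to shrinking capacity / refill_rate,
    # which is when legitimate users start getting blocked too.
    import time
    from dataclasses import dataclass, field


    @dataclass
    class TokenBucket:
        capacity: float = 30.0      # burst allowance, in requests (assumed value)
        refill_rate: float = 0.5    # tokens added per second (assumed value)
        tokens: float = 30.0
        last_refill: float = field(default_factory=time.monotonic)

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.refill_rate)
            self.last_refill = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False


    class RateLimiter:
        """Keeps one bucket per client IP and throttles whichever IPs exceed it."""

        def __init__(self, capacity: float = 30.0, refill_rate: float = 0.5):
            self.capacity = capacity
            self.refill_rate = refill_rate
            self.buckets: dict[str, TokenBucket] = {}

        def allow(self, ip: str) -> bool:
            bucket = self.buckets.setdefault(
                ip, TokenBucket(self.capacity, self.refill_rate, self.capacity))
            return bucket.allow()


    if __name__ == "__main__":
        limiter = RateLimiter(capacity=5, refill_rate=1.0)  # deliberately tight for the demo
        for i in range(8):
            print(i, limiter.allow("203.0.113.7"))  # first 5 pass, the rest are throttled

An IP range blacklist would sit in front of a limiter like this, rejecting requests from known crawler ranges outright before any per-IP accounting happens.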