The controls in the actual proposal are less reasonable: they create finable infractions for any claim in a job ad deemed "misleading" or "inaccurate" (findings of fact that require an expensive trial to resolve) and prohibit "perpetual postings" or postings made 90 days in advance of hiring dates.
The controls might make it harder to post "ghost jobs" (though: firms posting "ghost jobs" simply to check boxes for outsourcing, offshoring, or visa issuance will have no trouble adhering to the letter of this proposal while evading its spirit), but they will also impact firms that don't do anything resembling "ghost job" hiring.
Firms working at their dead level best to be up front with candidates still produce steady feeds of candidates who feel misled or unfairly rejected. There are structural features of hiring that almost guarantee problems: for instance, the interval between making a selection decision about a candidate and actually onboarding them onto the team, during which any number of things can happen to scotch the deal. There's also a basic distributed systems problem of establishing a consensus state between hiring managers, HR teams, and large pools of candidates.
If you're going to go after "ghost job" posters, you should do something much more narrowly targeted at what those abusive firms are actually doing, and raise the stakes past $2500/infraction.
Sure, the people who make the AI scraper bots are going to figure out how to actually do the work. The point is that they hadn't yet, and this worked for quite a while.
As the botmakers circumvent each method, new methods of proof-of-notbot will become available.
It's really as simple as that. If a new method comes out and your site is safe for a month or two, great! That's better than dealing with fifty requests a second, wondering whether you can block whole netblocks and, if so, which ones.
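To make that concrete: the usual shape of these challenges is a hash-based proof of work, where the server hands out a random challenge, the visitor's browser grinds out a nonce whose hash clears a difficulty bar, and the server checks the answer with one cheap hash. A minimal sketch under that assumption (the names and the difficulty knob here are mine, not any particular tool's):

    # Sketch of a hash-based proof-of-work "proof-of-notbot" check.
    # Everything here (function names, the difficulty setting) is
    # illustrative, not any specific tool's implementation.
    import hashlib
    import secrets

    DIFFICULTY = 4  # leading hex zeros required; higher = more client CPU

    def issue_challenge() -> str:
        # Server side: hand the visitor a random challenge string.
        return secrets.token_hex(16)

    def solve(challenge: str) -> int:
        # Client side (normally JavaScript in the browser): brute-force a nonce.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int) -> bool:
        # Server side: a single hash to confirm the visitor did the work.
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        return digest.startswith("0" * DIFFICULTY)

    challenge = issue_challenge()
    nonce = solve(challenge)          # costly for the visitor (or the bot farm)
    assert verify(challenge, nonce)   # cheap for the server

When the botmakers start solving one variant cheaply at scale, you raise the difficulty or swap in a different challenge, which is exactly the cat-and-mouse described above.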
This is like those simple things on submission forms that ask you what 7 + 2 is. Of course everyone knows that a crawler can calculate that! But it takes a human some time and work to tell the crawler HOW.
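Something like this, sketched in Python just to show how little is actually being asked (the field name and the idea of stashing the expected answer server-side are my assumptions, not any real framework's API):

    # Sketch of a "what is 7 + 2?" form check; the field name and the
    # session-style storage of the answer are illustrative assumptions.
    import random

    def make_question():
        # Generate the question to render next to the form, plus the answer
        # to remember server-side (e.g. in the session).
        a, b = random.randint(1, 9), random.randint(1, 9)
        return f"What is {a} + {b}?", a + b

    def check_submission(expected: int, submitted: str) -> bool:
        # Reject the POST unless the visitor typed the right number.
        try:
            return int(submitted.strip()) == expected
        except ValueError:
            return False

    question, expected = make_question()
    # render `question` in the form, keep `expected` server-side, then on POST:
    # accept = check_submission(expected, form_data["notbot_answer"])

Trivial for a crawler to solve, but only after someone sits down and writes the solver, which is the whole point.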
Sure, the program itself is janky in multiple ways, but it solves the problem well enough.
Well, spam is not a technical problem either. It's a social problem, and one day, in some distant future, society will go after spammers and other bad actors and the problem will mostly be gone.
I just learned a brand-new term for this: It's called the "Goomba Fallacy"[1]
Uh huh? This seems more like the author's idea of what he wants children to be than how children are in reality.