For what it's worth, that is precisely how this effort will be gaslit. Divide and conquer.
Let me elaborate on this. Infinite-growth capitalism, combined with technological development, leads to a civilization whose citizens are constantly concerned with more (a nicer way to say this is growth). In essence, this means ever-greater levels of consumption. Their entire existence, their brains, are wired for the hit that comes from growth and progress: buying a new phone, a new car, new clothes, promotions, salary increases, bigger houses, longer vacations, more sexual partners, etc. They are looking to constantly grow. Without that hit, they would very quickly become dejected and depressed.
But this all comes at a great cost. For one thing, people living for that hit don't live in the present; they are always oriented toward the future, giving up their entire present for the promise of a greater future that never comes. All they get is a hit along the way, and the anticipation of the next hit keeps them motivated. Yet it is possible to have a psychology that is not set up to constantly seek this sort of hit. Therefore, for a person shaped by infinite-growth capitalism, the transition away from it is probably highly undesirable.
Most people like to use the same SKU to get more exposure on amazon.com though.
Edit: can't reply due to rate-limiting, but in principle, all they have to do to search for service is power up the receiver briefly, running at a very low duty cycle. This has no noticeable impact on battery life or any other aspect of the phone's operation.
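As a rough illustration of the duty-cycle claim (every number below is an assumed, made-up figure, not a measurement of any actual phone), the average drain works out to a tiny fraction of a typical battery:

```python
# Back-of-the-envelope duty-cycle arithmetic; all numbers are assumptions.
rx_current_ma = 60.0   # assumed receiver draw while listening, in mA
on_time_s = 0.1        # assumed listen window per wake-up
period_s = 60.0        # assumed wake-up interval: once per minute
battery_mah = 4000.0   # assumed phone battery capacity

duty_cycle = on_time_s / period_s            # fraction of time the radio is on
avg_drain_ma = rx_current_ma * duty_cycle    # average extra current draw
extra_mah_per_day = avg_drain_ma * 24        # charge consumed per day

print(f"duty cycle: {duty_cycle:.4%}")                               # ~0.17%
print(f"average drain: {avg_drain_ma:.2f} mA")                       # ~0.10 mA
print(f"daily cost: {extra_mah_per_day / battery_mah:.3%} of battery")
```

With these assumed numbers the receiver costs well under a tenth of a percent of the battery per day, which is consistent with the "no noticeable impact" claim.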
The issue with the P(A) and P(B) argument is that police already use databases heavily, and most people don't have any problem with that. So why, when it comes to facial recognition, is it suddenly too dangerous to use technology to drive efficiency?
If they're looking for somebody named Jane Doe, anybody with that name shows up on a list and the police investigate. Of course, if there are Jane Does within a 2-mile radius, they start with those. So why not just say: if the system delivers a match with accuracy x, the person is within a pool of y residents (plus a variety of other variables), and x/y is below a threshold, then the match can be presented to police for further investigation?
Searching databases for matches is fine for names, fingerprints, shoe prints, tire tracks, and fiber analysis, but not faces? I personally wonder if it's really any different, or if it's just better tailored for the media outrage machine because "China does it", or because "facial recognition targets minorities".
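The threshold rule sketched above could look something like this. This is a hypothetical sketch only: the field names, the expected-false-hits heuristic, and the threshold value are all assumptions, not any deployed system's policy.

```python
from dataclasses import dataclass

@dataclass
class Match:
    confidence: float  # x: match accuracy reported by the system, in [0, 1]
    pool_size: int     # y: residents in the search radius who could match

def may_present_to_police(m: Match, threshold: float = 0.01) -> bool:
    """Hypothetical policy: forward a match only when the system's
    confidence is high AND the candidate pool is small enough that a
    chance look-alike is unlikely. The threshold is a made-up example."""
    # Expected number of chance look-alikes grows with pool size, so
    # require (1 - confidence) * pool_size to stay under the threshold.
    expected_false_hits = (1.0 - m.confidence) * m.pool_size
    return expected_false_hits < threshold

# A strong match against a small local pool passes; the same match
# against a huge nationwide database does not.
print(may_present_to_police(Match(confidence=0.999, pool_size=5)))       # True
print(may_present_to_police(Match(confidence=0.999, pool_size=100_000))) # False
```

The point of the sketch is that the same raw match score means very different things depending on how many people were searched, which is exactly the P(A)-versus-P(B) distinction discussed below.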
This tech can be good if applied to a narrow range of people, as you suggest (e.g., only searching people who live on neighboring blocks), but nobody is actually doing that. We should pass laws requiring a rigorous analysis of these probabilities before such databases may be used, including a conversation about what rate of false positives we are willing to tolerate. Guardrails should be put in place to enforce those limits. If this is too hard, we don't have a strong enough handle on this technology to be using it.
Here’s the scenario that scares me the most:
Police identify a suspect using facial recognition. Then they put that person in a lineup for a witness. Of course the witness is going to say "that's the one!" because the suspect actually looks like the perpetrator. The witness will be sure, the cops will be sure, and a jury will convict. And this outcome is completely determined by the use of the facial recognition database. It will keep happening unless we pass laws to prevent it.
The arguments against facial recognition (that there can be false positives, or that it can affect some groups more than others): don't those also apply when humans are identifying people? If so, isn't the real solution to require more evidence than just a facial match, rather than to ban an effective way of narrowing a suspect pool? That way police can spend less time manually identifying people and more time gathering other evidence.
Suppose a store is robbed, and there’s a video.
The police identify some suspects - the guy who just got out of jail for robbing the same store, and another person the store owner had a dispute with. Neither of them looks like the robber in the video. Then the police take a still from the video and knock on some doors around the block. Somebody recognizes the person in the video, and the police investigate that person. This scenario seems pretty fair to me.
Now suppose the police run it through the facial recognition system. It identifies one person as a 99% match, and the police go investigate this person. This scenario does not seem so fair to me.
Here’s how I see the math:
P(A) = P(robber has a doppelgänger living on the same block) = .01
P(B) = P(robber had a doppelgänger somewhere in the database) = .9
P(X) = P(police screw up investigation, and will convict the suspect whether or not they are guilty) = .2
P(AX) = .002
P(BX) = .18
The exact numbers are made up, but as long as P(A) << P(B), you can see how this tech will result in a huge increase in false convictions. Even if P(X) is low, the number of false convictions increases by a factor of P(B)/P(A).
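The arithmetic above can be checked directly. The numbers are the made-up illustrative values from the comment, and multiplying them treats A and X (and B and X) as independent, as the comment implicitly does:

```python
# Illustrative probabilities from the comment above - made up, not empirical.
p_a = 0.01  # P(A): doppelgänger lives on the same block
p_b = 0.90  # P(B): doppelgänger exists somewhere in a large database
p_x = 0.20  # P(X): investigation convicts the suspect whether or not guilty

# Joint probabilities, assuming independence of the match event and P(X).
p_ax = p_a * p_x  # door-knocking scenario
p_bx = p_b * p_x  # database-search scenario

print(f"P(AX) = {p_ax:.3f}")                       # ≈ 0.002
print(f"P(BX) = {p_bx:.2f}")                       # ≈ 0.18
print(f"increase factor = {p_bx / p_ax:.0f}")      # ≈ 90, i.e. P(B)/P(A)
```

Note that P(X) cancels in the ratio P(BX)/P(AX) = P(B)/P(A), which is why the conclusion holds even if the police err only rarely.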