neilv · 2 months ago
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.

And some teen may be traumatized. Again, unsafe.

Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.

omnipresent12 · 2 months ago
https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...

Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.

These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where they have humans review all the alerts before being forwarded to the school or authorities. This is a paid addon, though.

avidiax · 2 months ago
https://archive.is/DYPBL

> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.

It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.

michaelt · 2 months ago
> They are suing Gaggle, who claims they never intended their system to be used that way.

Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.

reaperducer · 2 months ago
The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.

All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.

HWR_14 · 2 months ago
> They are suing Gaggle, who claims they never intended their system to be used that way.

Is there some legal way to sue a pair of actors (Gaggle and the school) and then let them sue each other over who has to pay what percentage?

RajT88 · 2 months ago
> This is a paid addon, though

Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.

b00ty4breakfast · 2 months ago
>...its purpose is to “prioritize safety and awareness through rapid human verification.”

Oh look, a corporation refusing to take responsibility for literally anything. How passe.

chillingeffect · 2 months ago
The corporation was invented virtually to eliminate responsibility/culpability for any individual.

Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.

JumpCrisscross · 2 months ago
> a corporation refusing to take responsibility for literally anything. How passe

Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?

DrewADesign · 2 months ago
Engineer: hey, I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you save more time than it takes to weed through them.

Someone nearby: well what if they use it to replace human thinking instead of augment it?

Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.

Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.

Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…

::6 months later—some kid is being held at gunpoint over snacks.::

casey2 · 2 months ago
Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it were just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.
JimTheMan · 2 months ago
Refer to the post office scandal in Britain and the robodebt debacle in Australia.

The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.

random3 · 2 months ago
It’s actually “AI swarmed”, since no human reasoning, only execution, was exerted - basically an AI directing resources.
trhway · 2 months ago
Delegating the decision to AI, excluding the human from the "human in the loop", is kind of unexpected as a first step; in general it was expected that exclusion would start from the other end. As an aside, I wonder how that is going to happen on the battlefield.

for this civilian use case the next step is AR goggles worn by police, with that AI projecting onto the goggles where the teenager has his gun (kind of Black Mirror style), and the step after that is obviously excluding the humans from the execution step as well.

actionfromafar · 2 months ago
Reverse Centaur. MANNA.
goopypoop · 2 months ago
when attacked by bees am I hive swarmed?
janalsncm · 2 months ago
In any system, there are false positives and false negatives. In some situations (like high-recall disease screening) false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous follow-up screening.

But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
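
To make that concrete, here is a toy expected-cost sketch in Python. Every number in it is invented for illustration; none of them come from Omnilert or the article:

    # Toy expected-cost model for an alerting system. All numbers are
    # invented for illustration, not taken from the article.
    fp_rate = 0.05        # fraction of scans that fire a false alert
    fn_rate = 0.001       # fraction of real threats that go undetected

    cost_fp_armed_response = 100.0  # harm of sending armed police at a kid
    cost_fp_human_review = 1.0      # a reviewer checks a frame and moves on
    cost_fn = 10_000.0              # harm of missing a real threat

    def expected_cost(fp_cost: float) -> float:
        """Expected cost per scan = false-positive term + false-negative term."""
        return fp_rate * fp_cost + fn_rate * cost_fn

    print(expected_cost(cost_fp_armed_response))  # no check before dispatch
    print(expected_cost(cost_fp_human_review))    # human in the loop first

The point is not the numbers, which are made up, but the structure: a secondary check shrinks the false-positive term without touching the false-negative term.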

nkrisc · 2 months ago
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose to deal with the downstream effects instead.
tacticus · 2 months ago
> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Given the probability of police officers in the USA treating any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.

The system here sent the police off to kill someone.

lelandfe · 2 months ago
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.

We answered the screams at the door to guns pointed at our faces, and countless cops.

It was explained to us that this was the restrained version. We got a knock.

Unfortunately, I understand why these responses can't be neutered too much. You just never know.

trehalose · 2 months ago
False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.
Spooky23 · 2 months ago
I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.

Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at a social event.

More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.

bilbo0s · 2 months ago
>And some teen may be traumatized.

Um. That's not really the danger here.

The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.

This tech is not supposed to be used in this fashion. It's not ready.

neilv · 2 months ago
Did you want to emphasize or clarify the first danger I mentioned?

My read of the "Um" and the quoting was that you thought I missed that first danger, and so were disagreeing in a dismissive way.

When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.

I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.

wat10000 · 2 months ago
I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.

krapp · 2 months ago
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.

Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.

[0] Even though no other free society has to pay that price, but whatever.

akoboldfrying · 2 months ago
> The danger is that it's as clear as day that in the future someone is gonna be killed.

This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.

So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
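
To illustrate why those rates matter so much, here is a back-of-the-envelope Bayes calculation in Python; every input is an assumption, since the real rates are exactly the data we don't have:

    # P(real gun | alert), with invented numbers. Because the base rate of
    # a real gun appearing in any given camera frame is tiny, even a
    # detector with impressive-sounding accuracy yields mostly false alerts.
    sensitivity = 0.99   # assumed P(alert | real gun)
    false_alarm = 0.001  # assumed P(alert | no gun)
    base_rate = 1e-6     # assumed P(real gun) for any given frame

    p_alert = sensitivity * base_rate + false_alarm * (1 - base_rate)
    p_gun_given_alert = sensitivity * base_rate / p_alert
    print(f"{p_gun_given_alert:.2%}")  # about 0.10%: ~999 of 1000 alerts false

Under these made-up inputs almost every alert is a false positive, which is why the value judgment can't be separated from the measured rates.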

tartoran · 2 months ago
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"

Make them pay money for false positives instead of providing direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.

xbar · 2 months ago
Charge the superintendent with swatting.

Decision-maker accountability is the only thing that halts bad decision-making.

nomel · 2 months ago
> Charge the superintendent with swatting.

This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.

dekken_ · 2 months ago
> Make them pay money

It already cost money: the time and resources that were misappropriated.

There needs to be resignations, or jail time.

SAI_Peregrinus · 2 months ago
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).

russdill · 2 months ago
I wonder how much more likely it is to get a false positive from a black student.
vee-kay · 2 months ago
The question is whether that Doritos-carrying kid is still alive only because he is white, instead of having been shot by violent cops over a false positive about a gun (and the cops must have figured it was likely a false positive, since the info came from AI surveillance). These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree; apropos the Uvalde school shooting, hundreds of cops in full body armor milled around the school, refusing to engage the shooter inside, and even prevented parents from going in to rescue their kids.
kelnos · 2 months ago
Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. In fact, I would be surprised if it didn't.
joe_the_user · 2 months ago
I assume they were provided gift cards good for psychotherapy sessions.
akoboldfrying · 2 months ago
> Make them pay money for false positives instead of direct support and counselling.

Agreed.

> This technology is not ready for production

No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).

neuralRiot · 2 months ago
I think I’ve said this too many times already, but the core problem here, and with the “AI craze” in general, is that nobody really wants to solve problems; what they want is a marketable product, and AI seems to be the magic wrench that fits all the nuts. Since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
Zigurd · 2 months ago
In the US, cops kill more people than terrorists do. As long as your quantified values take that into account.
froobius · 2 months ago
Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)

We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?

[1] https://arxiv.org/abs/1506.02640
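
For a sense of how little code the detection side takes, here is a minimal sketch using the open-source ultralytics package. This is a toy under stated assumptions (model choice, threshold, file name are all mine), not Omnilert's actual stack; a stock COCO-pretrained model doesn't even have a "gun" class, which is exactly why the fine-tuning data and its published stats would matter:

    # Minimal off-the-shelf object detection (pip install ultralytics).
    # Purely illustrative: nothing here reflects any vendor's real system.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")               # small pretrained COCO model
    results = model("frame.jpg", conf=0.25)  # confidence threshold is a guess

    for box in results[0].boxes:             # one box per detected object
        label = model.names[int(box.cls)]    # class id -> human-readable name
        print(label, float(box.conf), box.xyxy.tolist())

Getting from this to something you can responsibly point at schoolchildren is all in the training data, the thresholds, and the review process - none of which are public here.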

EdwardDiego · 2 months ago
And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the end of a wrongful death lawsuit.
an0malous · 2 months ago
They’ll likely still be profitable after accounting for those. This is why sociopaths are so successful at business

jawns · 2 months ago
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.

He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.

My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.

But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.

cyanydeez · 2 months ago
Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.
mothballed · 2 months ago
Or they'll tell us police have started shooting because an acorn falls, so they shouldn't be expected to be held to higher standards and are possibly an improvement.
bluGill · 2 months ago
And there needs to be an opposing lawyer ready to tear that argument to pieces.
Terr_ · 2 months ago
You mean in the same fallacious sense of "you can tell cigarettes are good because so many people buy them"?
kelnos · 2 months ago
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.

But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there were no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.

hinkley · 2 months ago
Is use of force without justification automatically excessive force or is there a gray area?
mentalgear · 2 months ago
Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those errors will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them now, before it's too late.
MiiMe19 · 2 months ago
I might be missing something but I don' think this article isn't about palantir or any of their products
wartywhoa23 · 2 months ago
Palantir is but one head of the hydra which has hundreds of them, and all concerns about a single one apply to the whole beast hundredfold.
yifanl · 2 months ago
You're absolutely right, Palantir just needs a different name and then they'd have no issues.
joomla199 · 2 months ago
This comment has a double negative, which makes it a false positive.
seanhunter · 2 months ago
The article is about Omnilert, not Palantir, but don’t let the facts get in the way of your soapbox rant.
mzajc · 2 months ago
Same fallible systems, same end goal of mass surveillance.
courseofaction · 2 months ago
American, please, wake up. The masked border police are on the streets arresting citizens, the military is being paid as a client of the president, corruption is legal, and a mass surveillance machine unfathomable to prior dictatorships is being/has been established. You're fucked. Listen to the soapbox. It is very, very relevant. Wake up.

wartywhoa23 · 2 months ago
I'm pretty sure that some people will continue to apply the term "soapbox ranting" to all opposition against the technofascism even when victims of its false positives will be in need of coroners, not psychologists.
protocolture · 2 months ago
I don't think a guy who knows so much about the Antichrist could be wrong.

rolph · 2 months ago
Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.

the AI "swatted" someone.

bilbo0s · 2 months ago
Calling it today. This company is going to get innocent kids killed.

How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?

The first time it happens, there will be an explosion of protests, especially since the public now knows the system isn't working and the authorities kept using it anyway.

This is a really bad idea right now. The technology is just not there yet.

mothballed · 2 months ago
And then there are plenty of bullies who might put a sticker with a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
mrguyorama · 2 months ago
>First time it happens, there will be an explosion of protests.

Why do you believe this? In the US, cops will cower outside of a school with an armed gunman actively murdering children, forcibly detain parents who wish to go in if the cops won't, and then everyone involved gets re-elected

In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.

Over 50% of the country blamed the protesting students at Kent state for daring to be murdered by the national guard.

Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.

Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.

callalex · 2 months ago
> The technology is just not there yet.

The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.

etothet · 2 months ago
The corporate version of "It's a feature, not a bug."
nyeah · 2 months ago
Clearly it did not prioritize human safety.
tencentshill · 2 months ago
"rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.
palmotea · 2 months ago
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
anigbrowl · 2 months ago
I get that you're being sarcastic and find the police response appalling, but the sad reality of Poe's Law is that there are a lot of people who would unironically say this and would have cheered if the cops had shot this kid, either because they hate black people or because they get off on violence and police shootings are a socially sanctioned way to indulge that taste.
vee-kay · 2 months ago
We all know the cops will go for the easy prey:

* Even hundreds of cops in full body armor, armed with automatic guns, will not dare to engage a single "lone wolf" shooter on a killing spree in a school; the heartless cowards may even prevent the parents from going inside to rescue their kids: the Uvalde school shooting incident

* A cop on an ego trip will shoot down a clearly harmless kid calmly eating a burger in his own car (not a stolen car): the Erik Cantu incident

* Cops are not there to serve the society, they are not there to ensure safety and peace for the neighborhood, they are merely armed militia to protect the rich and powerful elites: https://www.alternet.org/2022/06/supreme-court-cops-protect-...

drak0n1c · 2 months ago
The dispatcher and responding officers should at least have ready access to a screen where they can see the raw video/image that triggered the AI alert. If it is a false alarm, they will be better placed to see that and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.
ggreer · 2 months ago
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:

> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.

What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?

1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...

Etheryte · 2 months ago
Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.
proee · 2 months ago
Walking through TSA scanners, I always get that unnerving feeling that I'll get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - there is nothing in them, but the scanner doesn't like them.

Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.

There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.

How does this not spiral out of control?

mpeg · 2 months ago
To be fair, at least you can choose not to wear the cargo pants.

A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side so we could walk to the gates together, and the agent didn't like that he was "loitering" – guess his ethnicity...

stavros · 2 months ago
How is it fair to say that? That's some "why did you make me hurt you"-level justification.
franktankbank · 2 months ago
> guess his ethnicity...

Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.

malux85 · 2 months ago
Speak up citizens!

Email your state congressman and tell them what you think.

Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.

Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are, which is silent public apathy.

anigbrowl · 2 months ago
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
xp84 · 2 months ago
Get PreCheck or Global Entry. I only go through a scanner every 5 years or so, when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off, you'll be fine. My belt and watch don't.

Note: PreCheck is incredibly quick and easy to get; GE is time-consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.

Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."

hollow-moe · 2 months ago
"Just pay to not be harrassed or have your rights/dignity stepped on" a typical take to find on the orange site.
rkagerer · 2 months ago
...maybe not, but a few bucks could still solve this problem

Sure, can't argue with that. But doesn't it bug you just a little that paying a fee to avoid harassment doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.

fgbarben · 2 months ago
The brownshirts will never get my money.
voidUpdate · 2 months ago
I don't often fly, but back when I went to Germany on a school trip, on the return flight I got pulled aside into a small room by whatever the German equivalent of the TSA is, and they swabbed the skin of my belly and the inside of my bag. I'm guessing it was a drugs check and I must have just looked shifty, because I get nervous in situations like that, but I do find it funny that they pulled me aside instead of the guys with me who almost certainly had something on them.

Also, my partner has told me that apparently my armpits sometimes smell of weed or beer, despite me not having come into contact with either of those for a very long time, and now I definitely don't want to get taken into a small room by a TSA person. (After some googling, apparently those smells can be associated with high stress.)

walkabout · 2 months ago
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.

(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)

hinkley · 2 months ago
I was getting pulled out of line in the 90s for having long hair. I didn't dress in shitty clothes or fancy ones, I didn't look funny; it was just the hair, which got regular compliments from women.

I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.

The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But not getting searched probably would have improved her day more than it improved mine.

JustExAWS · 2 months ago
Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences - especially if the innocent victim is a Black male.

In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.

proee · 2 months ago
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show the escalation in public-facing systems. We have been dealing with the TSA. Now we get AI scanners. What's next?

Also, no need to escalate this into a race issue.

more_corn · 2 months ago
Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.
proee · 2 months ago
I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.
jason-phillips · 2 months ago
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
oceanplexian · 2 months ago
If the system used any kind of logic whatsoever a CCW permit would not only allow you to bypass airport security but also carry in the airport (Speaking as both a pilot and a permit holder)

Would probably eliminate the need for the TSA security theater so that will probably never happen.

dheera · 2 months ago
The TSA scanners also trigger easily on crotch sweat.
hsbauauvhabzb · 2 months ago
I enjoy a good grope, so I’ll keep that in mind the next time I’m heading into the us.