AI Misidentifies Doritos Bag as Gun, Police Detain Teen at Baltimore School

A Baltimore high school student was detained at gunpoint after an AI system mistook his crumpled Doritos bag for a gun. Omnilert acknowledged the false positive but said the system worked as intended, while the student says he received no direct apology and now feels unsafe at school. The case fuels debate over the reliability of AI surveillance as such systems spread into sensitive settings.
Key Points
- An AI gun detection system flagged a Doritos bag as a gun, leading armed police to detain 16-year-old Taki Allen at Kenwood High School.
- Allen was handcuffed at gunpoint and searched; police later showed him the AI image that misidentified the chips bag as a weapon.
- Omnilert admitted a false positive but claimed the system functioned as intended with rapid human verification; the school offered counseling but, per Allen, did not apologize or contact him directly.
- The incident challenges Omnilert’s “near-zero false positives” claim and amplifies debate over the reliability and harms of AI surveillance in schools (the sketch after this list shows why even tiny per-frame error rates add up at scale).
- The article situates the story in a broader context of expanding AI use, including military decision support and flawed age-verification scans, highlighting the risks of misidentification.
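
Why do commenters doubt a “near-zero false positives” claim? The arithmetic of scale works against it: a per-frame error rate that sounds negligible still compounds into routine false alarms across many cameras and hours. The sketch below illustrates this with entirely assumed numbers; the camera count, frame rate, and error rate are hypothetical, not figures from the article or from Omnilert.

```python
# Back-of-the-envelope sketch: all numbers below are assumptions for
# illustration, not figures reported by the article or by Omnilert.

cameras = 100                         # hypothetical cameras in one district
frames_per_camera = 8 * 60 * 60       # one frame/second over an 8-hour day
per_frame_false_positive_rate = 1e-6  # an optimistic "near-zero" error rate

frames_per_day = cameras * frames_per_camera
expected_false_alarms = frames_per_day * per_frame_false_positive_rate

print(f"Frames analyzed per day:       {frames_per_day:,}")
print(f"Expected false alarms per day: {expected_false_alarms:.1f}")
# ~2.9 per day: roughly three potential armed responses every school day in
# this hypothetical district, despite a one-in-a-million per-frame error rate.
```

Under these assumptions, “near-zero” per frame still means multiple alerts a day, which is why the quality of human verification, rather than the raw model error rate, ends up determining how often armed police respond to a bag of chips.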
Sentiment
The overall sentiment of the Hacker News discussion is overwhelmingly negative and highly critical. Commenters strongly agree with the article's core message about the dangers, unreliability, and potential for severe harm of AI gun detection systems, and they express deep cynicism toward corporate accountability, police conduct, and the erosion of civil liberties.
In Agreement
- The AI system actively created an unsafe, traumatizing situation for the student, rather than enhancing safety, and Omnilert's claim that it "functioned as intended" is disingenuous corporate spin.
- The technology is not ready for deployment in public, high-stakes environments like schools, and there is a high certainty that such false positives will eventually lead to someone being killed due to overzealous police response.
- These AI systems lack transparency regarding their accuracy, training data, and false positive rates, and are likely monetizing basic, un-fine-tuned object detection models without proper validation.
- There is significant concern about racial bias in both the AI's threat detection and the police's response, suggesting that Black students are disproportionately at risk.
- The concept of "rapid human verification" at gunpoint is dystopian, effectively turning the AI into a tool for "swatting" individuals based on flawed data.
- There is a severe lack of accountability and liability for false positives, with calls for consequences (financial penalties, resignations, or legal action) for the companies, school administration, and police involved.
- The deployment of real-time AI surveillance for gun threats in schools is indicative of a broader, problematic trend towards authoritarian mass surveillance and the erosion of civil liberties.
- The school's lack of a direct apology and its decision to parrot the corporate line, rather than acknowledging the severity of the mistake, is criticized.
Opposed
- Some argue that all technologies carry inherent risks and benefits that society must weigh, suggesting that preventing even one school shooting might outweigh the cost of a false positive, provided data on false-positive and detection rates is available to make that trade-off explicit.
- The school might be located in a genuinely dangerous area with a history of violence, which could explain why they adopted such a system out of desperation, even if imperfect.
- School administrators might operate on a "better safe than sorry" principle, choosing to call in police even on a questionable alert to avoid legal liability if a real threat were missed.