AI Misidentifies Doritos Bag as Gun, Police Detain Teen at Baltimore School

Added Oct 23, 2025
Article: Negative | Community: Negative/Mixed

A Baltimore high school student was detained at gunpoint after an AI system mistook his crumpled Doritos bag for a gun. Omnilert acknowledged a false positive but said the system worked as intended, while the student says he received no direct apology and now feels unsafe at school. The case fuels debate over the reliability of AI surveillance as such technologies spread into sensitive settings.

Key Points

  • An AI gun detection system flagged a Doritos bag as a gun, leading armed police to detain 16-year-old Taki Allen at Kenwood High School.
  • Allen was handcuffed at gunpoint and searched; police later showed him the AI image that misidentified the chips bag as a weapon.
  • Omnilert admitted a false positive but maintained the system functioned as intended, citing rapid human verification; the school offered counseling but, per Allen, did not apologize or contact him directly.
  • The incident challenges Omnilert’s “near-zero false positives” claim and amplifies debate over the reliability and harms of AI surveillance in schools.
  • The story is placed in a broader context of expanding AI use, including military decision support and flawed age-verification scans, highlighting risks of misidentification.

Sentiment

The community overwhelmingly agrees with the article's critical stance. Nearly every commenter condemns the AI system, the company's dismissive response, and the school's failure to take responsibility. The few voices advocating for measured cost-benefit analysis are heavily downvoted. The consensus view is that this technology is dangerous, unready, and that the institutional incentive structures guarantee escalation regardless of the AI's accuracy.

In Agreement

  • The AI system created a dangerous armed confrontation that didn't exist before — it essentially swatted a teenager over a bag of chips
  • Omnilert's claim that the system 'functioned as intended' is corporate doublespeak that ignores the real-world harm caused
  • The technology is fundamentally not ready for deployment; some argue it can never work since identifying a concealed weapon from a clothing bulge is impossible
  • Perverse incentives mean the human-in-the-loop is useless: administrators always escalate because inaction has career consequences but overreaction doesn't
  • The school's failure to apologize or contact the student directly shows institutional indifference to the harm caused
  • There should be financial penalties, accountability, or even criminal charges (analogous to swatting) for decision-makers who deploy these systems
  • Racial bias in both AI systems and police response is a serious concern, especially given the student was Black

Opposed

  • False negatives (missing an actual gun in a school) have deadly consequences too; the risk calculus isn't as one-sided as critics suggest
  • A single false positive isn't enough data to condemn the entire system — proper cost-benefit analysis requires knowing true and false positive rates
  • A human may have reviewed the AI-flagged image and also believed it looked like a gun, making this partly a human judgment failure, not purely an AI one
  • The school has genuine safety problems (stabbings, robberies) and administrators are grasping at any available tool