
Pentagon Blacklists Anthropic as National Security Risk
The Pentagon has formally blacklisted Anthropic as a security risk, barring it from defense-related work and prompting a likely legal showdown.

Replacing human hesitation with machine-generated confidence in nuclear command systems risks automating our own destruction.

The U.S. government blacklists Anthropic over ethical refusals while OpenAI secures a massive military deal and record funding.

ChatGPT Health's failure to identify over half of medical emergencies and its inconsistent suicide guardrails pose a significant risk of preventable death to users.

Gary Marcus calls for urgent Congressional intervention to stop the Pentagon from forcing AI companies to provide unrestricted access for autonomous warfare and surveillance.

Anthropic is loosening its core AI safety guardrails to remain competitive and navigate increasing pressure from the Pentagon and the broader AI industry.

The Pentagon is threatening to blacklist Anthropic over the AI company's refusal to remove safety guardrails against autonomous weapons and mass surveillance.

Human-curated procedural skills significantly enhance LLM agent performance and allow smaller models to rival larger ones, but models cannot yet effectively author these skills themselves.

Acting CISA chief allegedly uploaded sensitive DHS files to public ChatGPT, prompting a federal review amid a broader government push for AI.

Industry insiders are rallying a crowdsourced data-poisoning campaign to sabotage AI models, arguing it’s a faster check on AI than regulation.

AI is an unregulated force multiplier in U.S. politics that will make the 2026 elections more volatile and unpredictable across campaigns, organizing, citizen action, and state control.

California enacted SB 53 to pair frontier AI transparency and safety with a public compute initiative, cementing state leadership in responsible AI policy.

Better models are making radiologists busier, not redundant, because real-world performance, rules, and elastic demand favor human-in-the-loop care.

California’s appeals court issued a $10,000 sanction and a stark warning: verify AI-generated legal citations or face penalties as AI misuse in law surges.

Ban AI chat surveillance now and make privacy-protective, protected chats the default before manipulation-heavy practices become entrenched.

AI is entering grantmaking as a large-scale screening tool that can speed and potentially democratize funding, but bias and confidentiality concerns mean it should augment—not replace—human reviewers.

Google’s AI wrongly said Benn Jordan made a pro-Israel ‘trip’ video by confusing him with another YouTuber, prompting him to seek legal action.