The Switchboard
AI news, translated for operators.
The Big Take
Justice Department Says Anthropic Can't Be Trusted With Military AI Systems
The Department of Justice defended its supply-chain risk designation against Anthropic, arguing in court filings that the company could "sabotage or subvert" military systems if its corporate ethical "red lines" are crossed. Microsoft and AI researchers filed amicus briefs supporting Anthropic's position. The conflict exposes a fundamental tension between AI ethics commitments and government contracting requirements.
What this means for you: Enterprise AI vendor selection now carries geopolitical implications. If your organization handles sensitive government work, evaluate whether your AI provider's policy positions could create contract conflicts. Diversify provider relationships to maintain flexibility.
For Media & Publishing Leaders
Small Publishers Lost 60% of Search Traffic in Two Years
Chartbeat data reveals that publishers with 1,000-10,000 daily pageviews lost 60% of search referral traffic over two years, while large publishers lost only 22%. ChatGPT referrals grew 200% but still represent less than 1% of total traffic. The asymmetry suggests AI-driven search disruption is hitting small publishers hardest.
What this means for you: If you operate small-to-mid-size publishing properties, Google Search is no longer a reliable growth channel. Prioritize direct audience relationships, email lists, and alternative distribution strategies immediately.
From SEO to GEO: Generative Engine Optimization Emerges
Publishers are shifting strategy from ranking in search results to being cited in AI-generated answers. Generative Engine Optimization (GEO) favors clear structure, credible sources, and early placement of key insights. Condé Nast's experience illustrates the stakes: visibility in AI-generated results is becoming existential for publishers.
What this means for you: Audit your content structure for AI citability. Place key facts and claims early in articles, use clear section headers, and ensure your domain authority signals credibility to AI systems that select sources for synthesis.
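The audit described above can be sketched as a short script. This is a minimal illustration, not an established GEO standard: the heuristics (key terms in the opening words, presence of section headers) and the function name `geo_audit` are assumptions chosen for the example.

```python
# Hypothetical GEO "citability" audit for a Markdown article.
# Assumed heuristics: key terms should appear within the opening
# words of the piece, and the article should use section headers.

def geo_audit(markdown_text, key_terms, opening_words=100):
    """Return simple, illustrative GEO signals for one article."""
    lines = markdown_text.splitlines()
    # Count Markdown-style section headers (lines starting with '#').
    headers = [l for l in lines if l.lstrip().startswith("#")]
    # Take the first N words as the "opening" the advice refers to.
    opening = " ".join(markdown_text.split()[:opening_words]).lower()
    terms_up_front = [t for t in key_terms if t.lower() in opening]
    return {
        "header_count": len(headers),
        "key_terms_in_opening": terms_up_front,
        "all_terms_up_front": len(terms_up_front) == len(key_terms),
    }

article = """# Q3 Traffic Report

Search referrals fell 60% for small publishers over two years.

## Methodology
Details follow.
"""

report = geo_audit(article, ["search referrals", "60%"])
print(report["header_count"])        # 2
print(report["all_terms_up_front"])  # True
```

A real audit would also need to account for domain authority and source credibility signals, which can't be measured from the article text alone.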
For Operations & RevOps Leaders
Pentagon Plans to Have AI Companies Train on Classified Data
The Department of Defense is discussing secure environments for AI companies like OpenAI and xAI to train military-specific models directly on classified data. This would embed sensitive intelligence into the models themselves rather than using AI to query classified databases.
What this means for you: Training AI on proprietary or sensitive data is becoming the standard approach for specialized applications. If your organization has unique data assets, evaluate secure training partnerships rather than relying solely on general-purpose models.
The AI Stack
Mistral AI Launches Forge for Proprietary Model Training
Mistral AI released Forge, a platform covering the full model training lifecycle, including pre-training, RLHF, and continuous improvement using proprietary data. The platform targets enterprises that want to own rather than rent AI infrastructure.
What this means for you: Model ownership is becoming a viable alternative to API-based AI. If you have substantial proprietary data and specialized use cases, evaluate whether the investment in custom model training could give you an edge over competitors relying on generic APIs.