AWS boss explains why investing billions in both Anthropic and OpenAI is an OK conflict
Amazon Web Services (AWS) leadership publicly defended the company’s significant investments in both OpenAI and Anthropic, addressing concerns about potential conflicts of interest.
The News
Amazon Web Services (AWS) leadership publicly defended the company’s significant investments in both OpenAI and Anthropic, addressing concerns about potential conflicts of interest [1]. Executives framed the dual bets as a natural extension of AWS’s long-standing practice of partnering with firms it also competes against, a reality inherent in the cloud computing market [1]. The defense comes amid heightened activity around Anthropic’s new cybersecurity initiative, Project Glasswing, and ongoing legal uncertainty over the company’s ability to work with the U.S. military [2], [3]. Anthropic has restricted access to its Claude Mythos model, a core component of Project Glasswing, to a select group of organizations [4]. The move underscores a cautious approach to deploying advanced AI in sensitive areas like cybersecurity, and highlights the complex interplay between technological advancement, regulatory scrutiny, and national security [2], [3], [4].
The Context
AWS’s dual investment strategy—billions of dollars into OpenAI, creator of the GPT series, and a substantial stake in Anthropic, a competitor developing the Claude family of models—appears contradictory on the surface [1]. However, AWS’s position is rooted in the competitive nature of the cloud services market, where AWS itself frequently partners with companies that also represent rivals [1]. As a provider of on-demand cloud computing platforms and APIs, AWS operates in an ecosystem where collaboration and competition are inextricably linked. OpenAI’s models, including the GPT series, are increasingly deployed on AWS infrastructure, generating revenue for the cloud provider [1]. Simultaneously, Anthropic’s Claude models offer competing, differentiated capabilities, also attracting customers to AWS [1].
Anthropic’s development of Claude Mythos Preview, the foundation of Project Glasswing, is particularly noteworthy. Project Glasswing, launched Tuesday, is a $100 million initiative designed to proactively identify and patch vulnerabilities in critical infrastructure using AI [2]. The model itself is deemed too dangerous for public release, underscoring the risks associated with frontier AI models [2]. Anthropic’s decision to pair Claude Mythos Preview with a coalition of twelve major technology and finance companies (including Amazon, Apple, Broadcom, Cisco, and CrowdStrike) signals a commitment to responsible deployment and shared responsibility for mitigating cybersecurity threats [2]. The initiative is backed by $4 million in initial funding and targets what is described as a $30 billion problem, with the potential to generate $9 billion in revenue [2]. The limited rollout, initially involving only a select group of customers, reflects concerns about the model’s potential misuse and the need for careful oversight [4]. This controlled release contrasts with the broad availability of open-weight models on HuggingFace, such as gpt-oss-20b (5,766,017 downloads), gpt-oss-120b (3,652,094 downloads), and whisper-large-v3 (4,735,324 downloads).
The legal landscape surrounding Anthropic’s operations adds complexity. A recent U.S. appeals court ruling conflicts with a lower court decision from March, creating uncertainty about the legality of the U.S. military’s use of Claude [3]. This “supply-chain risk” [3] could significantly impact Anthropic’s future prospects and potentially influence AWS’s investment strategy. The situation highlights increasing scrutiny of AI’s role in national security and the potential for legal challenges to disrupt the development and deployment of advanced AI systems. The U.S. government is currently in discussions with Anthropic regarding potential collaborations [4].
Why It Matters
AWS’s investment strategy and Anthropic’s restricted release of Claude Mythos Preview have significant implications for developers, enterprises, and the broader AI ecosystem. For developers, the situation introduces technical friction. While both OpenAI and Anthropic models are accessible through AWS, the need to choose between competing platforms adds complexity in model selection and integration [1]. This could lead to increased development costs and slower adoption rates for certain applications [1]. The limited availability of Claude Mythos Preview, particularly for organizations outside the initial partner cohort, restricts access to a potentially powerful cybersecurity tool [4].
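One common way to contain the integration complexity described above is a thin routing layer that keeps application code independent of the underlying model vendor. The sketch below assumes both providers’ models are exposed through Amazon Bedrock’s Converse API; the model IDs are illustrative placeholders, not confirmed identifiers:

```python
# Minimal sketch of vendor-agnostic model routing on Amazon Bedrock.
# Model IDs below are hypothetical placeholders, not real Bedrock identifiers.
MODEL_IDS = {
    "anthropic": "anthropic.claude-example-v1",  # placeholder Claude model ID
    "openai": "openai.gpt-oss-20b-example-v1",   # placeholder open-weight model ID
}


def build_converse_request(vendor: str, prompt: str) -> dict:
    """Build a Bedrock Converse API request for the chosen vendor."""
    if vendor not in MODEL_IDS:
        raise ValueError(f"unknown vendor: {vendor}")
    return {
        "modelId": MODEL_IDS[vendor],
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }


def invoke(vendor: str, prompt: str) -> str:
    """Send the request to Bedrock. Requires AWS credentials and network access."""
    import boto3  # imported here so the routing helpers work without boto3 installed

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(vendor, prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Because only the `MODEL_IDS` table references a vendor, swapping or A/B-testing models becomes a one-line configuration change rather than a code change, which is the kind of abstraction that softens the platform-choice friction for developers.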
Enterprises face similar dilemmas. The dual investment strategy, while offering flexibility, requires careful evaluation of each platform’s strengths and weaknesses [1]. The cost of integrating and maintaining support for both OpenAI and Anthropic models could be substantial, especially for smaller organizations [1]. Project Glasswing’s limited availability initially restricts benefits to a select few, potentially widening the cybersecurity gap between large enterprises and smaller businesses [2]. The $100 million investment in Project Glasswing, while substantial, represents a fraction of the $100 billion annual cybersecurity spending required to protect critical infrastructure [2].
The winners in this scenario appear to be AWS, which benefits from increased demand for its cloud services regardless of model choice [1], and the initial partners in Project Glasswing, who gain early access to an advanced cybersecurity solution [2]. Losers include organizations excluded from the initial Claude Mythos Preview rollout and those facing increased complexity and costs in managing competing AI platforms [1]. The situation also creates uncertainty for OpenAI, which faces heightened competition from Anthropic and potential shifts in customer preference [1].
The Bigger Picture
AWS’s investment in both OpenAI and Anthropic reflects a broader trend of hyperscale cloud providers hedging their bets in the rapidly evolving AI landscape [1]. Microsoft, another major cloud provider, has a similar partnership with OpenAI, demonstrating widespread recognition of AI’s strategic importance [1]. This contrasts with the open-source AI movement, exemplified by the widespread adoption of models like gpt-oss-20b and whisper-large-v3, which offer greater flexibility and accessibility. However, the restricted release of Claude Mythos Preview highlights growing concerns about the responsible deployment of frontier AI models [4]. Conflicting court rulings over Anthropic’s military contracts underscore increasing regulatory scrutiny of AI’s role in national security [3]. This trend is likely to accelerate as governments worldwide grapple with how to regulate AI while fostering innovation [3]. The recent launch of ClawsBench, a new benchmark for evaluating the capability and safety of LLM productivity agents, further emphasizes the need for rigorous testing and evaluation of AI systems; the benchmark is available on HuggingFace.
The cybersecurity landscape is also transforming due to the increasing sophistication of cyberattacks and the potential for AI to be used for both offensive and defensive purposes [2]. Project Glasswing represents a proactive approach to cybersecurity, but its limited availability highlights challenges in deploying AI-powered solutions at scale [2]. Recent cyber incidents, such as vulnerabilities discovered in Veeam Backup & Replication software allowing remote code execution, and flaws in Cisco IMC and SSM allowing remote system compromise, underscore the urgent need for improved cybersecurity measures.
Daily Neural Digest Analysis
The mainstream narrative often frames AI investments as a zero-sum game, pitting OpenAI against Anthropic in a battle for supremacy [1]. However, AWS’s strategy reveals a more nuanced understanding of the market, recognizing that both companies can contribute to the cloud provider’s growth [1]. The focus on Project Glasswing and its restricted release, however, exposes a critical, often overlooked risk: the potential for advanced AI models to be misused, even by their creators [2]. The legal limbo surrounding Anthropic’s military contracts further complicates the picture, highlighting the geopolitical implications of AI development [3]. The fact that Anthropic is limiting access to a potentially innovative cybersecurity tool, while simultaneously being embraced by AWS, suggests deep concerns about the model’s inherent risks—a concern not always transparently communicated to the public [4].
The question remains: as AI models become increasingly powerful and capable, how can we ensure their responsible development and deployment, particularly in areas with significant national security implications? The current approach of restricted releases and controlled partnerships may be a necessary first step, but it is not a sustainable long-term solution. The development of robust governance frameworks and ethical guidelines is crucial to mitigating risks associated with frontier AI and ensuring its benefits are shared broadly [2], [3], [4].
References
[1] TechCrunch — AWS boss explains why investing billions in both Anthropic and OpenAI is an OK conflict — https://techcrunch.com/2026/04/08/aws-boss-explains-why-investing-billions-in-both-anthropic-and-openai-is-an-ok-conflict/
[2] VentureBeat — Anthropic says its most powerful AI cyber model is too dangerous to release publicly — so it built Project Glasswing — https://venturebeat.com/technology/anthropic-says-its-most-powerful-ai-cyber-model-is-too-dangerous-to-release
[3] Wired — Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo — https://www.wired.com/story/anthropic-appeals-court-ruling/
[4] Ars Technica — Anthropic limits access to Mythos, its new cybersecurity AI model — https://arstechnica.com/ai/2026/04/anthropic-limits-access-to-mythos-its-new-cybersecurity-ai-model/