Review: Together AI - Open source at scale
Discover how Together AI delivers open-source AI at scale in this balanced 5.0/10 review, covering its platform, undocumented pricing, and performance as an LLM API for developers seeking transparency
Together AI Review - Open source at scale
Score: 5.0/10 | Pricing: Not publicly documented | Category: llm-api
Overview
Together AI presents itself with the tagline "open source at scale" on its website at together.ai, promising an AI platform built on open-source principles [1]. The company targets developers and enterprises seeking alternatives to proprietary, closed-source AI models from major providers. The fundamental value proposition—if substantiated—would be significant: access to powerful open-source models with the infrastructure to deploy them at production scale, without the vendor lock-in that characterizes platforms like OpenAI or Anthropic.
However, the complete absence of verifiable technical documentation, performance benchmarks, pricing information, or user interface details in our source material means this review must confront an uncomfortable reality: we cannot evaluate what Together AI actually delivers because no evidence exists in our provided context to support any claim beyond the company's website existence [1]. The adversarial scoring system assigned Together AI a neutral 5.0/10 across all five categories—Performance, Cost, Ease of Use, Features, and Reliability—not because of demonstrated mediocrity, but because both the advocate and prosecutor arguments presented zero evidence from the provided context.
This is not a review of a tool. It is a review of a promise without proof.
The Verdict
Together AI's "open source at scale" tagline represents a compelling vision, but the complete absence of verifiable data in our sources means the company has not yet earned the trust required for serious technical evaluation. The neutral 5.0/10 score across all categories reflects not balanced performance but the void where evidence should exist. Until Together AI publishes concrete benchmarks, transparent pricing, documented uptime statistics, and verifiable user testimonials, this platform remains an unsubstantiated claim in a market that demands proof. The most honest verdict is that we cannot render one—and that silence is itself the most damning criticism.
Deep Dive: What We Love
The Open-Source Promise: The concept of "open source at scale" addresses a genuine pain point in the AI industry. Organizations increasingly recognize the risks of dependency on proprietary models that can change pricing, modify behavior, or deprecate features without notice. A platform that genuinely delivers production-grade infrastructure for open-source models would provide technical teams with the flexibility to audit model behavior, fine-tune on proprietary data, and maintain control over their AI stack. This architectural approach—if implemented correctly—could reduce the total cost of ownership over multi-year deployments by eliminating per-token pricing volatility and enabling competitive model switching. The vision is technically sound and strategically important.
Potential for Cost Arbitrage: Open-source models, when properly optimized, can offer significant cost advantages over proprietary alternatives. The ability to self-host or use shared infrastructure for models like Llama, Mistral, or Falcon could reduce inference costs by orders of magnitude compared to per-token pricing from closed providers. If Together AI has built efficient inference infrastructure—using techniques like quantization, speculative decoding, or batching—the cost savings could be transformative for high-volume applications. However, without published pricing or performance data, this remains speculation.
Architectural Flexibility: A platform built on open-source models inherently offers more architectural flexibility than proprietary alternatives. Teams can swap models, implement custom preprocessing pipelines, or deploy specialized fine-tuned versions without platform constraints. This flexibility is particularly valuable for regulated industries that require model transparency, or for applications requiring specialized domain knowledge that general-purpose models cannot provide. The architectural promise is real, but its realization depends entirely on Together AI's actual implementation.
The Harsh Reality: What Could Be Better
Complete Absence of Verifiable Data: The most critical flaw is not a technical limitation but an informational void. Our sources contain zero information about Together AI's actual performance benchmarks, pricing structure, user interface, feature set, or reliability metrics. In an industry where competitors publish detailed latency statistics, throughput benchmarks, and uptime guarantees, Together AI's silence is deafening. The adversarial scoring system's neutral 5.0/10 across all categories is not a balanced assessment—it is the mathematical consequence of evaluating a platform with no evidence. For any organization considering adoption, this lack of transparency represents an unacceptable risk.
The VentureBeat Warning on Document Reliability: A VentureBeat study by Microsoft researchers found that frontier AI models silently rewrite document content, with errors that are "nearly impossible to catch" [4]. This finding has direct implications for any AI platform processing documents—including Together AI, if it offers such capabilities. The study demonstrates that even advanced models introduce subtle but significant errors when iterating over documents across multiple rounds. For any platform claiming to handle document processing at scale, this reliability concern is existential. Without published error rates, validation methodologies, or content fidelity guarantees, users cannot trust that Together AI's models—whatever they may be—will faithfully process their documents [4].
The AMD Distraction: The Verge's report on AMD bringing 3D V-Cache technology to workstation processors [3] has no connection to Together AI, yet its inclusion in our source material highlights a broader problem: the AI industry is flooded with tangential news that crowds out substantive analysis. For a platform review, the presence of irrelevant hardware news underscores how difficult it is to find genuine technical information about Together AI's actual capabilities. This information vacuum is itself a red flag—serious platforms publish serious documentation.
No Evidence of Production Readiness: The adversarial scoring system's reliability score of 5.0/10 reflects the complete absence of uptime statistics, error rate documentation, or incident response information. For production deployments, reliability is non-negotiable. Organizations need service level agreements, documented failure modes, and transparent incident histories. Together AI provides none of these in our sources. The platform may be excellent, but no evidence supports that claim.
Pricing Architecture & True Cost
The pricing architecture for Together AI is entirely undocumented in our sources. This absence is itself a significant finding. In the AI API market, pricing transparency is a competitive differentiator. OpenAI publishes detailed per-token pricing. Anthropic provides clear usage tiers. Replicate offers transparent compute pricing. Together AI's silence on cost suggests either that pricing is not yet finalized, that it is negotiated on a case-by-case basis, or that the company is not yet ready for public consumption.
The true cost of adopting Together AI extends beyond whatever pricing structure may exist. Organizations must consider:
- Migration costs: Moving existing workflows to a new platform requires engineering time, testing, and validation.
- Integration complexity: Without documented APIs or SDKs, integration costs are unpredictable.
- Vendor risk: An unproven platform may not survive market competition, leaving adopters stranded.
- Opportunity cost: Time spent evaluating Together AI could be spent on proven alternatives.
Until Together AI publishes transparent pricing, the total cost of ownership cannot be calculated. The neutral 5.0/10 cost score from the adversarial system reflects this complete informational void.
Strategic Fit (Best For / Skip If)
Best For: Organizations that prioritize open-source AI infrastructure and are willing to conduct their own due diligence. Early adopters who can tolerate incomplete documentation and are comfortable building integrations from scratch. Research teams evaluating multiple open-source model deployment options. Companies with existing relationships with Together AI that can provide internal performance data.
Skip If: You need production-ready infrastructure with documented SLAs, transparent pricing, and verifiable benchmarks. Your organization requires compliance documentation, security audits, or vendor risk assessments. You are evaluating platforms for high-volume, mission-critical deployments where reliability is non-negotiable. You need to make a purchasing decision within a defined timeline and cannot wait for the company to publish basic information.
The most honest recommendation is this: Together AI may become a serious platform, but today it is an unsubstantiated claim. The neutral 5.0/10 score is not a judgment of capability—it is a reflection of the void where evidence should exist. Until the company publishes the data that serious platforms provide as a baseline, the only responsible action is to wait, watch, and demand proof.
Resources
References
[1] Official Website — Official: Together AI — https://together.ai
[2] Wired — On Running LightSpray Cloudmonster 3 Hyper Review: — https://www.wired.com/review/on-running-lightspray-cloudmonster-3-hyper/
[3] The Verge — AMD’s best CPU tech for gamers is coming to workstations too — https://www.theverge.com/tech/930132/amd-ryzen-pro-9000-series-3d-v-cache
[4] VentureBeat — Frontier AI models don't just delete document content — they rewrite it, and the errors are nearly impossible to catch — https://venturebeat.com/orchestration/frontier-ai-models-dont-just-delete-document-content-they-rewrite-it-and-the-errors-are-nearly-impossible-to-catch
Was this article helpful?
Let us know to improve our AI generation.
Related Articles
Review: Claude 3.5 Sonnet API - Extended thinking & artifacts
Read our balanced Claude 3.5 Sonnet API review scoring 5.0/10, examining its extended thinking and artifacts features while highlighting the risks of publishing AI product coverage without complete do
Review: Pika - Creative video AI
In-depth review of Pika: features, pricing, pros and cons
Review: Notion AI - AI-native workspace
In-depth review of Notion AI: features, pricing, pros and cons