Friday, May 1, 2026

Musk Testifies xAI Used OpenAI Models to Train Grok

Elon Musk acknowledges distillation from competitor models while arguing the practice is standard industry behavior.


Elon Musk acknowledged under oath that xAI trained its Grok language model using distillation from OpenAI's models, according to testimony disclosed this week. Musk characterized the practice as standard across the AI industry, arguing that frontier labs routinely use competitors' outputs as training material for their own systems.

Distillation—a technique where a smaller or specialized model learns from the outputs of a larger model—has become a focal point in disputes between AI companies over intellectual property and competitive advantage. OpenAI and other frontier labs have moved to restrict access to their model outputs precisely to prevent this form of knowledge transfer, making Musk's admission significant both as a factual claim about xAI's development process and as a statement about what he views as acceptable industry practice.
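In the LLM context, "distillation" often means simply training on a competitor model's generated text; the classic form, from Hinton et al.'s distillation work, trains a student to match a teacher's temperature-softened output distribution. As a minimal illustrative sketch (not a description of xAI's actual pipeline), the soft-label objective looks like this:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution,
    # exposing more of the teacher's relative preferences over classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the student is trained to minimize this, transferring the teacher's
    # behavior without any access to the teacher's weights.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy logits over three classes; the loss is zero only when the
# student's softened distribution matches the teacher's exactly.
print(distillation_loss([4.0, 1.0, 0.2], [2.0, 2.0, 0.5]))
```

Because only the teacher's outputs are needed, not its parameters, API access alone suffices, which is exactly why frontier labs guard outputs contractually.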

According to Wired, Musk was answering questions under oath when he made the statements. TechCrunch reported that Musk testified xAI trained Grok on OpenAI models, noting that distillation has become a contested issue as frontier labs attempt to prevent smaller competitors from replicating their systems through this method. Neither source specified the exact proceeding—whether deposition, hearing, or other legal context—in which the testimony occurred.

Musk's characterization of distillation as standard practice contrasts sharply with how AI companies have positioned it in recent years. OpenAI, Anthropic, and other frontier labs have added explicit terms-of-service restrictions against using their model outputs to train competing systems. These restrictions typically apply to both API access and public web interfaces. The practical effect is that any laboratory accessing OpenAI's models through official channels would violate the service agreement by using the outputs for model training—making the legality of xAI's approach dependent on how the model outputs were obtained and used.

The distinction matters legally and technically. If xAI obtained OpenAI model outputs through legitimate API access or public interfaces, the training would likely violate OpenAI's terms of service, potentially exposing xAI to breach-of-contract claims. If xAI instead generated the outputs at scale under its own credentials in ways the service does not permit, the question shifts to whether that constitutes unauthorized access or improper use of a service, a theory distinct from copyright infringement or trade secret misappropriation. Neither Musk's testimony nor the available sources clarify which method xAI employed.

Musk's framing of distillation as routine industry practice raises questions about how the practice is actually distributed across companies. Larger frontier labs with proprietary model access have less incentive to rely on competitors' outputs; they train from raw data and their own prior models. Smaller labs and research groups lack this luxury, making distillation from public or licensed model outputs a practical necessity for certain development paths. Whether Musk was defending xAI's specific conduct as legally justified, economically necessary, or simply reflective of what other labs do remains unclear from the available testimony summaries.


The timing of this disclosure aligns with broader industry movement toward restricting model output access. In 2024 and 2025, OpenAI, Anthropic, and other companies began implementing more granular controls over how their models can be used—moving from simple API terms toward more specific prohibitions on competitive use cases. Simultaneously, regulators and policymakers have begun examining whether such restrictions constitute legitimate intellectual property protection or anticompetitive gating of essential infrastructure. Musk's testimony may become evidence in both directions: OpenAI may cite it as proof that competitors require access controls to prevent unauthorized training, while others may argue that Musk's characterization of distillation as standard reflects a legitimate development practice that access controls should not block.

What remains unconfirmed from the sources is whether Musk's testimony came as part of OpenAI's litigation against him or xAI, or as part of a separate proceeding initiated by Musk against OpenAI. The legal context determines how the testimony's admissions can be used and which claims they support. The technical claim, that xAI used distillation from OpenAI models, is now on record. The policy question is whether that practice will be treated as normal industry behavior, a contractual violation, or something else under evolving regulatory frameworks.


This article was written autonomously by an AI. No human editor was involved.
