Federal Courts Just Made AI Disclosures the New Norm
A new nationwide rule quietly rewires how legal work is done. By standardizing AI-use disclosures, federal courts are forcing provenance logs, model attestations, and agent-readable ECF metadata into the workflow. Here is what changes now.

The quiet rule that rewires legal work
This morning, the Judicial Conference of the United States approved a uniform requirement for artificial intelligence disclosures across federal courts. It reads like procedure, yet it behaves like infrastructure. With a few new fields in the electronic filing system and a short declaration attached to briefs, the courts just moved artificial intelligence from shadow practice to first-class citizen.
For two years, individual judges and districts experimented with standing orders that asked lawyers to verify that a human checked their artificial intelligence outputs. Those rules mostly chased a few embarrassing hallucination incidents. Today’s standard is different. It treats artificial intelligence as a tool that will be used, then sets a common language for when, how, and by whom it was used. That language is made of three parts: provenance logs, model attestations, and agent-readable hooks in the electronic case filing system. Together they turn ad hoc experiments into auditable pipelines.
The immediate effect is clarity. The secondary effect is speed. Vendors, firms, and insurers now have a target to build to, and courts can automate around it.
What the new disclosure actually requires
The rule does not tell anyone to use artificial intelligence. It tells you to say when you did, and to be able to prove it.
Here is the basic contour you will see in the electronic case filing portal:
- A simple question on filing: Was an artificial intelligence tool used for substantive legal work on this filing or its attachments?
- If yes, select categories of use. The schema includes drafting, legal research, citation checking, summarization and translation, exhibit handling, e-discovery prioritization, predictive coding, transcript analysis, audio or video transcription, and calculations such as damages modeling.
- Identify the tool and version. For example, Thomson Reuters CoCounsel 3.0 with GPT-4.1, Harvey 2.4 with Claude 3 Opus, or an in-house model running on Azure OpenAI with model gpt-4o-mini. The rule calls this a model attestation. Providers can supply a signed model card, which is like a manufacturer label that includes model version, release date, and known limitations.
- State verification steps. Did a licensed attorney review all citations and statements of law? Did a human check quotations and numerical tables? Did you validate against an authoritative retrieval system such as Westlaw, Lexis, or published court opinions?
- Confirm retention of a provenance packet. This is the flight data recorder of your artificial intelligence usage. It contains the prompts, system instructions, model identifier, settings such as temperature and top_p, date and time, tool calls like document retrievals, and the output that made its way into the filing. It can include hashes of underlying documents and sources you cited, plus validation notes. You do not file this packet by default, but you must be able to produce it if the court or an opposing party challenges your work.
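The provenance packet described above can be captured as a small structured record. Here is a minimal sketch in Python; the rule describes the contents (prompts, model identifier, settings, timestamps, tool calls, output, source hashes), but every field name and the `build_provenance_packet` helper are illustrative, not a court-published schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    """Content hash that ties a packet entry to an exact source document."""
    return hashlib.sha256(data).hexdigest()

def build_provenance_packet(prompt, system_prompt, model_id, settings, output, sources):
    """Assemble one provenance record for a single AI interaction.

    Field names are hypothetical; an official schema would control the keys.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "settings": settings,                      # e.g. temperature, top_p
        "system_prompt": system_prompt,
        "prompt": prompt,
        "output": output,
        "source_hashes": {name: sha256_hex(body) for name, body in sources.items()},
        "validation_notes": [],                    # filled in by the reviewing attorney
    }

packet = build_provenance_packet(
    prompt="Summarize the attached deposition transcript.",
    system_prompt="You are a drafting assistant. Cite page and line numbers.",
    model_id="gpt-4o-mini-2024-07-18",
    settings={"temperature": 0.2, "top_p": 1.0},
    output="The deponent testified that ...",
    sources={"depo_smith.pdf": b"%PDF-1.7 ..."},
)
print(json.dumps(packet, indent=2))
```

Because the packet hashes sources rather than embedding them, it can prove which documents the model saw without duplicating privileged content into the log.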
Two carve-outs keep the rule practical. First, minimal use such as spell-checking or redlining does not trigger a disclosure. Second, pro se litigants are not forced into this machinery, though the portal will still encourage clear labeling if they choose to use artificial intelligence.
Behind the scenes, the electronic case filing system gains an agent-readable metadata block. Think of it as a tiny, signed label attached to each docket entry that machines can parse without opening the PDF. It has fields like tool_name, model_version, use_categories, verification_level, and a reference to a sealed or retained provenance packet. Courts can use this to triage, to audit, or to study adoption patterns. Vendors can use it to prefill forms. Insurers can use it to match filings to risk policies. Everyone is looking at the same schema.
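The agent-readable label might look like the following, a sketch that follows the field names described above (tool_name, model_version, use_categories, verification_level, a provenance reference); the values and the sealed-reference format are invented for illustration.

```python
import json

# Hypothetical docket-entry label. Field names follow the article's
# description, not any published ECF schema; values are made up.
ecf_ai_metadata = {
    "tool_name": "CoCounsel",
    "model_version": "gpt-4.1",
    "use_categories": ["drafting", "citation_checking"],
    "verification_level": "attorney_reviewed",
    "provenance_packet_ref": "sealed:matter-2025-0114/packet-007",
}

# Serialize deterministically so downstream systems can also sign or diff it.
label = json.dumps(ecf_ai_metadata, sort_keys=True)

# A triage agent can act on the label without ever opening the PDF.
parsed = json.loads(label)
print(parsed["verification_level"])
```

The point of a fixed schema is exactly this: the court, the vendor, and the insurer all parse the same five fields the same way.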
From shadow information technology to first-class infrastructure
Until now, law firms and agencies often treated artificial intelligence like a helpful intern that no one admitted to hiring. Associates tried chat assistants at 11 p.m. Partners worried about sanctions or client optics. Vendors built features, then watched users copy and paste results into Word to avoid the artificial intelligence label.
Standardized disclosure flips the incentives. If you must disclose, you want a clean supply chain. If you must attest, you want systems that remember what happened. That means firms will formalize their artificial intelligence stack, not hide it.
- Procurement opens up. General counsel can approve specific models and vendors because the rule defines what must be tracked. A model allow list is a governance document, not a rumor.
- Training becomes mandatory. Once verification levels are visible in the electronic case filing metadata, no partner wants their matter to show up as unreviewed. Firms will run short courses on citation auditing and prompt hygiene.
- Vendors become pipes, not widgets. Tools like CoCounsel, Harvey, and Microsoft Copilot will ship native provenance capture, model attestation import, and one-click electronic case filing integration. The winning products will be the ones that treat the court’s schema as their own log format.
- Insurers change the economics. Carriers such as ALAS, CNA, and Chubb will write discounts for firms that preserve provenance packets and run automatic citation checks, because loss adjusters can see what happened and when.
A machine-readable court docket sounds minor, but that is the point. Infrastructure does its work by being boring and everywhere. The moment it stabilizes, people build on top of it.
What changes on day one for litigators
You do not need a new philosophy to comply. You need a short checklist and a habit.
- Pick your stack. Choose approved tools for drafting, research, and review. Make sure they can capture prompts, model versions, and outputs without leaking privileged content. For cloud tools, confirm data residency and retention settings.
- Turn on provenance capture. If your vendor cannot produce a provenance packet on demand, wrap it. Browser extensions, local proxy tools, or in-app logging can store prompts and outputs securely. Treat this like timekeeping. If it does not get logged, it did not happen.
- Update your templates. Add an AI-Use Disclosure Statement template to your motion and brief packages. It should list your tool, model, categories of use, and verification steps. Keep it factual, not apologetic.
- Define verification levels. Decide what counts as adequate review for each task. For example, all citations that came from an artificial intelligence system must be re-checked in an authoritative database. All quotations must be compared to the source document. Numeric outputs must be recomputed in Excel or a verified calculator tool. Write this down, teach it, and audit it.
- Coordinate with clients and insurers. Tell clients what the rule requires and how you comply. Ask your carrier what it needs to see in the event of a claim. Align on retention. The provenance packet is privileged work product by default, but you should be ready to file it under seal if a judge asks for it.
- Build the privilege wall. Instruct teams to avoid pasting raw client secrets into public models. Use enterprise instances with contractual privacy, or run models in your virtual private cloud. If a matter is under a protective order, treat the provenance store as sensitive and segregated.
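A firm's written verification levels can be encoded as a small policy table that tooling can enforce before filing. This is a minimal sketch; the category names, check names, and the `missing_checks` helper are all invented for illustration.

```python
# Hypothetical firm policy: which verification steps each category of AI use
# requires before the disclosure can claim the work was reviewed.
VERIFICATION_POLICY = {
    "legal_research": {"recheck_citations_in_authoritative_db"},
    "drafting": {"attorney_full_read", "compare_quotes_to_source"},
    "damages_modeling": {"recompute_numbers_independently"},
}

def missing_checks(category: str, checks_done: set) -> set:
    """Return the verification steps the policy requires but the team skipped."""
    return VERIFICATION_POLICY.get(category, set()) - checks_done

# A draft that was read in full but whose quotations were never compared:
gaps = missing_checks("drafting", {"attorney_full_read"})
print(gaps)
```

Writing the policy as data, not prose, is what makes "write this down, teach it, and audit it" cheap: the same table drives training materials and an automated pre-filing gate.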
On filing day, you click a checkbox, attach a one-page disclosure, and submit the PDF. The metadata is generated automatically if your tools are integrated. If not, you will type it once. That is not extra work. It is the same effort as a certificate of service, now for your tools.
What changes on day one for vendors
There are three new jobs for every legal technology vendor, whether you sell research, drafting assistance, or review platforms.
- Embed provenance. Log prompts, tool calls, model identifiers, and final outputs. Support hashing of underlying sources and attachments. Let firms set retention policies per matter. Build export to the court’s provenance packet format. No screenshots of chat windows. This must be structured and tidy.
- Ship model attestations. If you are a platform running models from OpenAI, Anthropic, or Google, expose the model cards and version numbers. If possible, include provider-signed attestations or cryptographic receipts that prove the model and version used at a given time. The rule rewards auditability.
- Speak ECF. Map your output to the electronic case filing schema and push metadata directly to the court portal through the available hooks. At minimum, generate the disclosure text and JSON metadata so the associate does not have to retype it. The fastest way to adoption is to make compliance one click.
For e-discovery vendors like Relativity, Everlaw, Disco, and Logikcull, the impact is immediate. Predictive coding, continuous active learning, and prioritization are squarely within the disclosure categories. You will need an on-demand report that shows training rounds, validation metrics like precision and recall, sampling methodology, and the model or learning system used. That report becomes part of the provenance packet, and it should be simple enough that a judge can read it in five minutes.
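The metrics in that report are standard and easy to compute from a blind-reviewed validation sample. A sketch, assuming the sample arrives as (predicted, actual) responsiveness pairs; the function name and rounding are illustrative choices.

```python
def validation_metrics(sample):
    """Compute precision and recall from a reviewed validation sample.

    `sample` is a list of (predicted_responsive, actually_responsive) pairs,
    as produced by blind human review of a random draw from the corpus.
    """
    tp = sum(1 for pred, truth in sample if pred and truth)        # correctly flagged
    fp = sum(1 for pred, truth in sample if pred and not truth)    # flagged, not responsive
    fn = sum(1 for pred, truth in sample if not pred and truth)    # responsive, missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": round(precision, 3), "recall": round(recall, 3)}

# 6 docs the model flagged (4 truly responsive), 2 responsive docs it missed,
# 12 correctly passed over.
sample = [(True, True)] * 4 + [(True, False)] * 2 + [(False, True)] * 2 + [(False, False)] * 12
print(validation_metrics(sample))  # {'precision': 0.667, 'recall': 0.667}
```

Paired with the sampling methodology and training-round counts, two numbers like these are the judge-readable core of the report.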
For research vendors like Westlaw and Lexis, the opportunity is to become the verification layer. A button that re-checks every citation in a brief and logs the outcome turns risk into value. If you can sign that log, even better.
For newcomers that call themselves agents, this is a forcing function. A court-integrated agent is not a chat box. It is a workflow machine with custody of context, a theory of verification, and a memory that survives scrutiny.
What changes on day one for insurers
Underwriting shifts from guesswork to telemetry.
- Ask for the stack. Require insured firms to list approved artificial intelligence tools and models. Maintain an allow list per practice area.
- Tie premiums to controls. Offer credits for firms that run automated citation checks, preserve provenance packets, and restrict secret client data to enterprise or private deployments. Penalize copy-paste into public chat tools without a logging layer.
- Define the claim pathway. When allegations involve artificial intelligence misuse, request the provenance packet and the verification checklist. If the logs are intact, defense is cheaper and faster. If not, reserve goes up.
- Clarify exclusions and safe harbors. Define what counts as intentional misrepresentation versus a verified, corrected error. Coordinate with confidentiality requirements so that demanding logs does not force privilege waivers.
Insurers are quiet system designers. When they move the price of risk, behavior follows.
Court-integrated agents become inevitable
The new, structured metadata makes the court’s own automation possible. Nothing here is science fiction. It is simply a feed.
- Triage assistants for clerks can flag filings that disclose unverified artificial intelligence use in citation-heavy submissions. That does not mean automatic rejection. It simply puts an extra pair of eyes where it matters.
- Scheduling agents can scan for filings that used artificial intelligence summarization to prepare joint appendices, then remind parties to file the underlying transcripts under seal if necessary.
- Discovery referees can spot when one side uses predictive coding but fails to share validation metrics that were promised in a Rule 26 conference. The presence or absence of a provenance packet becomes a factual question, not a vibe.
- Public access systems can label docket entries with a consistent notice that artificial intelligence assisted, which is different from saying a machine wrote it. That builds trust with pro se readers and journalists who want to assess reliability.
On the other side of the table, firm-side agents will watch the docket as a live stream. When a new entry lands, the agent reads the metadata, fetches the filing, checks the citations, compares them to a knowledge base, and drafts a response outline. Shorten the cycle time and you change strategy. If your opponent took six weeks to build a summary judgment brief, your agent gives you a map of their arguments in six minutes. You still write the human version, but you start ahead.
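The triage half of that agent is the simple part, because the decision runs on the metadata label alone. A sketch, assuming the field names used earlier; how entries actually arrive from the docket feed is left abstract, since no real ECF API exists to cite.

```python
import json

def triage_entry(metadata_json: str) -> str:
    """Classify a new docket entry from its AI-use label, without the PDF.

    Field names and verification levels are hypothetical, matching the
    article's description rather than any published schema.
    """
    meta = json.loads(metadata_json)
    categories = meta.get("use_categories", [])
    if meta.get("verification_level") == "unreviewed" and "citation_checking" not in categories:
        return "flag_for_citation_audit"   # an extra pair of eyes, not a rejection
    if "drafting" in categories:
        return "fetch_and_outline"         # pull the filing and map its arguments
    return "routine"

entry = json.dumps({
    "tool_name": "Harvey",
    "model_version": "claude-3-opus",
    "use_categories": ["drafting"],
    "verification_level": "attorney_reviewed",
})
print(triage_entry(entry))  # fetch_and_outline
```

The fetch-check-outline pipeline behind "fetch_and_outline" is where the six-minute map comes from; the triage step just decides which entries deserve it.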
The trade-offs are real, and manageable
Disclosure is not free. It introduces new failure modes, and the rule anticipates some of them.
- Privilege and confidentiality. A good provenance packet avoids raw client secrets in prompts and masks identities where needed. If the court asks for it, you should have a sealed filing process that keeps the record intact without publishing sensitive text.
- Work product boundaries. The packet shows how you worked, which can reveal strategy. Judges are likely to treat it as work product unless there is evidence of misrepresentation. Firms should log enough to prove integrity, not to expose their playbook.
- Burden on small teams. Solo practitioners and small firms should not have to buy a large platform to comply. Lightweight logging, plus a simple one-page disclosure, meets the standard. Courts can publish examples and templates to make this easy.
- Criminal and immigration cases. These matters often involve special confidentiality rules. Expect local orders to further restrict disclosure of sensitive model inputs, and to require on-device or court-approved tools for certain tasks.
Trade-offs also come with benefits. Discovery fights about whether someone used technology-assisted review at all will give way to questions about how it was validated. That is a better fight to have.
How we got here, and why now
The path was incremental. Individual judges started asking for verification in 2023 after fake cases showed up in briefs. Several districts experimented with local disclosures. Vendors added citation checkers and hallucination detectors. Meanwhile, the Administrative Office of the U.S. Courts prepared updates to the electronic case filing system to support structured metadata.
Today’s move was triggered by convergence. Artificial intelligence use became common, but suspicion stayed high. Regulators disliked invisible risk. Practitioners disliked stigma. A uniform disclosure rule lowers both. It acknowledges the tool and standardizes the guardrails.
The real accelerant is not the checkbox. It is the shared vocabulary that lets every actor plug in.
What to build next
If you are a builder, aim for these edges.
- One-click provenance across tools. A lightweight, cross-platform recorder that captures prompts and outputs across Word, Outlook, research databases, and web assistants. Encrypt locally, sync to matter vaults, export to the court schema.
- Attestation bridges. Provider-signed model attestations are not yet common. A relay service that requests, verifies, and stores them for each run will become the model equivalent of a certificate authority.
- Verification robots. Agents that re-check citations, recompute tables, validate quotations, and attach a signed log. Make this run inside the firm firewall, not in the public cloud.
- ECF companions. Interfaces that sit on top of the electronic case filing portal, prefill the disclosure metadata, warn about missing logs, and attach the right statement for the right court. No one should be copying metadata by hand in 2025.
- Discovery validators. Automatic reports for predictive coding that a judge can actually read. Precision and recall, sampling methodology, training rounds, and change logs, all in plain language.
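The attestation-bridge idea reduces to a sign-and-verify loop over a canonical serialization. Here is a minimal sketch of the pattern; a real bridge would use provider-issued asymmetric signatures, and HMAC with a shared demo key stands in here only to show the verify-on-receipt shape. All names and values are invented.

```python
import hashlib
import hmac
import json

KEY = b"demo-shared-secret"  # illustrative only; a real bridge holds provider keys

def sign_attestation(record: dict) -> dict:
    """Attach a signature over a canonical (sorted-key) serialization."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify_attestation(receipt: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(receipt["record"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)

receipt = sign_attestation({
    "model": "gpt-4o-mini",
    "version": "2024-07-18",
    "run_id": "matter-0042/run-17",
})
print(verify_attestation(receipt))  # True
```

Canonical serialization matters: if the signer and the verifier order keys differently, every valid receipt fails, which is why the sketch sorts keys on both sides.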
The rule guarantees a market. The winners will be the ones that disappear into the workflow.
The end of the copy-paste era
There is a culture shift buried in this procedural step. Law is a record-making profession. We record who signed, who served, who appeared. Now we will record how we computed.
That will not make briefs less human. It will make their origin less mysterious. The partner still owns the argument. The associate still sweats the commas. The machines will simply leave footprints that prove what they touched and when.
A decade from now, it will seem odd that we ever guessed. The docket will tell you if artificial intelligence was involved, the logs will show how, and the court will know what to do with that information. Once the courthouse becomes a platform, creativity follows the rails. The work does not get smaller. It gets clearer.