
March 8, 2026 · Kidd James · Publishing · 7 min read

Why Your Publisher Is Still Using Google Docs (And Why That's a Problem)

Traditional publishing has no provenance story. It has contracts, copyright registrations, and email threads. None of those tell you when the words were written, whether they were changed, or whether they were generated by a model that was trained on your previous work.

Here is a thought experiment. You write a 90,000-word novel. You send it to your publisher. Your publisher sends it to their editorial team. Over the next eighteen months, the text goes through developmental edits, copyedits, line edits, and proofreading. Three different people have had write access to the Google Doc. The published version contains 87,400 of your words and approximately 2,600 words that were rewritten by someone else.

Who owns those 2,600 words? More practically: if someone asks which version of chapter twelve you submitted in March and which version contained the editorial revision, can you prove it? Can your publisher?

The answer, almost certainly, is no. And in 2026, that gap is about to become catastrophic.

The Provenance Gap

Traditional publishing's technological infrastructure is astonishingly primitive. Most manuscripts are submitted as Word documents or PDFs. Major publishers maintain version control via email timestamps and filename conventions like FINAL_DRAFT_edited_v7_ACTUAL_FINAL.docx. The copyright registration at the Library of Congress captures a snapshot, but it happens after the fact, and it tells you nothing about the document's history before that moment.

This was fine for most of publishing history because the attack surface was small. If a publisher tampered with your manuscript after submission, you would notice. Your draft existed on your computer. The comparison was possible.

The AI Problem Changes Everything

Now imagine the same scenario, but instead of 2,600 rewritten words, your publisher's editorial AI has suggested 40,000 revisions, many of which you accepted. Or: a competitor's AI was trained on the unpublished manuscript that sat in the shared editorial system. Or: your publisher claims the chapter you wrote in 2024 is substantially similar to work reportedly written by someone else in 2023, and neither party can establish a timestamp that the other side cannot dispute.

The question is not whether AI will generate books. It already does. The question is whether anyone can prove that a specific human wrote a specific text at a specific time. — From the LPS-1 academic paper, DOI 10.5281/zenodo.18646886

This is new territory. Lawyers are comfortable with copyright in a world where authorship is obvious. They are not comfortable with copyright in a world where an AI trained on your style can produce a statistically indistinguishable novel and the timestamp on the training data predates your submission.

What On-Chain Provenance Actually Does

When you anchor a manuscript on Polygon Mainnet via LPS-1, several things happen simultaneously. A SHA-256 hash of the full manuscript text is computed and stored in a smart contract. A Merkle tree over individual chapters is constructed — meaning any chapter can be proven to have existed in that form at that block height, without revealing the rest of the manuscript. The block timestamp is provided by the Polygon network, not by any party to the dispute.
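The chapter-level Merkle construction can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the LPS-1 reference implementation; the function names and the odd-node duplication rule are assumptions for the sketch, and the on-chain contract interaction is omitted.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root hash."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels (assumed rule)
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove one leaf against the root."""
    proof, level, i = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sibling], i % 2 == 0))   # (sibling hash, leaf-is-left?)
        level = [sha256(level[j] + level[j + 1])
                 for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf plus its proof; no other chapters needed."""
    h = leaf
    for sibling, leaf_is_left in proof:
        h = sha256(h + sibling) if leaf_is_left else sha256(sibling + h)
    return h == root

# Five chapters stand in for a manuscript; only the root would go on-chain.
chapters = [f"Chapter {n} text...".encode() for n in range(1, 6)]
leaves = [sha256(c) for c in chapters]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)       # prove chapter 3 in isolation
assert verify(leaves[2], proof, root)
```

The point of the structure is visible in the last three lines: verifying chapter 3 requires only its own hash and a handful of sibling hashes, never the text of the other chapters.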

The result: if you submitted your manuscript on October 3rd, 2025 at 14:22 UTC, and that submission was anchored on-chain at block 68,441,237, that fact is permanent. The transaction hash is 0x... and it is on a public ledger that neither party controls. Your publisher cannot alter the submission timestamp. Your competitor cannot backdate their similar work. An AI company training on leaked manuscripts cannot claim priority over content that was hashed before they accessed it.

This is not a theoretical benefit. This is what the legal framework of authorship is going to require in the next five years. The publishers who don't build provenance infrastructure will lose disputes they should have won, simply because they couldn't produce verifiable timestamps under adversarial conditions.

The LPS-1 Standard Is Free

Everything described above is available today, for free, on GitHub. The LPS-1 Reference Implementation is MIT licensed. Any author can anchor their work at any level, from L0 (a single hash, one transaction, pennies in MATIC) up to L5 (full Merkle tree, IPFS pin, Bitcoin OpenTimestamps cross-reference, edition freeze with a non-upgradeable contract).
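The L0 level amounts to computing one digest and putting it on-chain. A minimal sketch of the off-chain half, assuming nothing beyond Python's standard library (the function name and "0x"-prefixed payload format are illustrative; the actual contract call is omitted):

```python
import hashlib

def l0_anchor_hash(manuscript_path: str) -> str:
    """Compute the SHA-256 digest an L0 anchor would store on-chain.

    Streams the file in 64 KiB chunks so a full-length manuscript
    never needs to be held in memory at once.
    """
    h = hashlib.sha256()
    with open(manuscript_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return "0x" + h.hexdigest()
```

Anyone holding the original file can recompute this digest later and compare it to the on-chain value; a single changed byte anywhere in the manuscript yields a completely different hash.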

XXXIII was built to demonstrate that this works — that you can write a novel, anchor it, narrate it with AI audio, and make it freely readable on the internet while maintaining a complete, verifiable provenance record that no party can dispute. The 2,500 Donkeys is the proof of concept. Private Placement Programs and Crypto War Room are the expansion pack.

Your publisher is still using Google Docs. That's fine, for now. But the authors who anchor their work today will be in a very different legal and commercial position in five years than the authors who don't. The provenance gap is only going to widen.


Use the LPS-1 Standard — Or Hire Someone Who Already Has

MIT-licensed and open. Anchor your manuscript today. Or, if your catalog needs a full chain-of-custody review, get a Provenance Audit or the $97 field guide.
