Harvard University announced Thursday it's releasing a high-quality dataset of nearly one million public-domain books that could be used by anyone to train large language models and other AI tools. The dataset was created by Harvard's newly formed Institutional Data Initiative with funding from both Microsoft and OpenAI. It contains books scanned as part of the Google Books project that are no longer protected by copyright.
Around five times the size of the notorious Books3 dataset that was used to train AI models like Meta's Llama, the Institutional Data Initiative's database spans genres, decades, and languages, with classics from Shakespeare, Charles Dickens, and Dante included alongside obscure Czech math textbooks and Welsh pocket dictionaries. Greg Leppert, executive director of the Institutional Data Initiative, says the project is an attempt to "level the playing field" by giving the general public, including small players in the AI industry and individual researchers, access to the sort of highly refined and curated content repositories that normally only established tech giants have the resources to assemble. "It's gone through rigorous review," he says.
Leppert believes the new public domain database could be used in conjunction with other licensed materials to build artificial intelligence models. "I think about it a bit like the way that Linux has become a foundational operating system for so much of the world," he says, noting that companies would still need to use additional training data to differentiate their models from those of their competitors.
Burton Davis, Microsoft's vice president and deputy general counsel for intellectual property, emphasized that the company's support for the project was in line with its broader beliefs about the value of creating "pools of accessible data" for AI startups to use that are "managed in the public's interest." In other words, Microsoft isn't necessarily planning to swap out all of the AI training data it has used in its own models with public domain alternatives like the books in the new Harvard database. "We use publicly available data for the purposes of training our models," Davis says.
As dozens of lawsuits filed over the use of copyrighted data for training AI wind their way through the courts, the future of how artificial intelligence tools are built hangs in the balance. If AI companies win their cases, they'll be able to keep scraping the internet without needing to enter into licensing agreements with copyright holders. But if they lose, AI companies could be forced to overhaul how their models get made. A wave of projects like the Harvard database is plowing forward under the assumption that, no matter what happens, there will be an appetite for public domain datasets.
In addition to the trove of books, the Institutional Data Initiative is also working with the Boston Public Library to scan millions of articles from different newspapers that are now in the public domain, and it says it's open to forming similar collaborations down the line. Exactly how the books dataset will be released has not been settled. The Institutional Data Initiative has asked Google to work together on public distribution, but the search giant hasn't publicly agreed to host it yet, though Harvard says it's optimistic it will. (Google did not respond to WIRED's requests for comment.)