According to an article published in The Washington Post, the internet and human behavior on it provide a tremendous reservoir of information for artificial intelligence (AI). Researchers who examined one such training dataset found that it contained more than 500,000 personal blogs, accounting for 3.8% of its total "tokens."
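As a brief aside (an illustrative sketch, not part of the Post's reporting): a "token" is the sub-word unit that language models use to count and process text. The snippet below assumes the Hugging Face transformers package and the publicly available "t5-small" checkpoint of Google's T5, a model mentioned below as having been trained on C4, and shows how a sentence breaks into tokens.

```python
# Illustrative only: how a sentence splits into the "tokens" such analyses count.
# Assumes the Hugging Face `transformers` package and the public "t5-small" checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # T5's SentencePiece tokenizer

text = "Personal blogs make up a measurable share of the dataset."
tokens = tokenizer.tokenize(text)

print(tokens)       # sub-word pieces, e.g. ['▁Personal', '▁blog', 's', ...]
print(len(tokens))  # figures like "3.8% of tokens" are measured in these units
```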
Google’s C4 dataset, which contains the contents of 15 million websites, has been used to train high-profile English-language AIs such as Google’s T5 and Facebook’s LLaMA. The collection spans many fields, including journalism, entertainment, software development, medicine, and content production. However, it also includes at least 27 sites that the US government has identified as markets for piracy and counterfeits.
Among the websites in Google’s C4 dataset that reportedly feed chatbot training are patents.google.com, wikipedia.org, scribd.com, nytimes.com, journals.plos.org, latimes.com, theguardian.com, huffpost.com, patents.com, washingtonpost.com, coursera.org, fool.com, frontiersin.org, instructables.com, ipfs.io, businessinsider.com, chicagotribune.com, booking.com, theatlantic.com, and about 80 others.
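To make that scale concrete, here is a minimal sketch (not from the article) of how one might sample C4 and tally which domains appear. It assumes the "allenai/c4" mirror hosted on the Hugging Face Hub and its "en" configuration, whose records carry the source URL alongside the scraped text; streaming avoids downloading the full corpus, which runs to hundreds of gigabytes.

```python
# A rough sketch of inspecting which domains appear in a sample of C4.
# Assumes the "allenai/c4" dataset on the Hugging Face Hub ("en" config).
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

# Stream the English split so nothing is downloaded in full.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

domain_counts = Counter()
for i, record in enumerate(c4):
    # Each record includes the source URL alongside the scraped text.
    domain_counts[urlparse(record["url"]).netloc] += 1
    if i >= 100_000:  # sample a slice rather than the full corpus
        break

print(domain_counts.most_common(20))
```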
Although C4 is a huge dataset, large language models are believed to need even larger ones. For example, the training data for OpenAI’s GPT-3, released in 2020, began with up to 40 times the amount of web-scraped data found in C4. GPT-3’s training data also includes the entire English-language Wikipedia, a collection of free books by unpublished authors that is widely used by large technology companies, and a compilation of text from Reddit users’ favorite links.
According to experts, many firms do not document the contents of their training data, either internally or externally, because of worries that doing so could reveal personally identifiable information, copyrighted material, and other data obtained without authorization.
The sources for this piece include an article in The Washington Post.