Bloomberg launches BloombergGPT


Bloomberg has created its own large-scale generative artificial intelligence (AI) model, BloombergGPT.

This model is a large language model (LLM) designed to “know” everything the entire company “knows.” BloombergGPT is specifically trained on a wide range of financial data, including the largest domain-specific dataset yet constructed, consisting of more than 363 billion tokens of Bloomberg’s financial data, news, filings, press releases, web-scraped financial documents, and social media.

The company has also trained BloombergGPT on another 345 billion tokens from general-purpose datasets, including hundreds of English news sources; articles written by Bloomberg journalists were excluded to maintain factuality and reduce bias. The Pile, which includes everything from YouTube captions to Project Gutenberg and a complete copy of Wikipedia, has also been included.

BloombergGPT will assist in improving existing financial NLP tasks, such as sentiment analysis, named entity recognition, news classification, and question answering. It can also translate natural language requests into the Bloomberg Query Language, a task tightly connected to Bloomberg’s needs. Additionally, the model can suggest Bloomberg-style headlines for news stories.
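The tasks above are typically posed to a generative LLM as instruction-style prompts. The sketch below is purely illustrative and not Bloomberg’s actual interface: the `build_prompt` helper, the task names, and the instruction wording are all assumptions made for the example.

```python
# Illustrative sketch only (not Bloomberg's API): framing the financial NLP
# tasks listed in the article as instruction-style prompts for a generative
# language model. Task names and wording are hypothetical.

def build_prompt(task: str, text: str) -> str:
    """Wrap a piece of financial text in a task instruction for an LLM."""
    instructions = {
        "sentiment": ("Classify the sentiment of this financial text as "
                      "positive, negative, or neutral."),
        "ner": "List the companies, people, and tickers mentioned in this text.",
        "classification": "Assign this news story to a topic category.",
        "headline": "Suggest a concise, Bloomberg-style headline for this story.",
    }
    return f"{instructions[task]}\n\nText: {text}\nAnswer:"

# Example: a sentiment prompt for a hypothetical headline.
prompt = build_prompt("sentiment", "Acme Corp shares plunge 12% after earnings miss.")
print(prompt)
```

In practice the resulting prompt would be sent to the model, whose completion after `Answer:` is parsed as the task output; domain-specific training is what makes a model like BloombergGPT better at these completions than a general-purpose LLM.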

The new model is the first step in the development and application of this new technology for the financial industry, bringing the full potential of AI to the financial domain. This will unlock new opportunities for marshalling the vast quantities of data available on the Bloomberg Terminal to better help the firm’s customers.

BloombergGPT is trained on a corpus of more than 700 billion tokens, larger than the roughly 500-billion-token corpus used to train OpenAI’s GPT-3 in 2020. The company’s research paper details the development of BloombergGPT and is written by Bloomberg’s Shijie Wu, Ozan İrsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann.

The sources for this piece include an article in NiemanLab.
