New research reveals inner workings of AI

A group of researchers from the Massachusetts Institute of Technology, Stanford University, and Google has made a ground-breaking discovery about how AI language models that power text and image generation tools work.

The study found that large language models can learn to perform new tasks accurately from only a few examples through in-context learning, effectively picking up new skills on the fly. When given a prompt containing a set of example inputs and outputs, a language model can generate new, often accurate predictions for a task it was never explicitly trained on.

In-context learning refers to a model's ability to learn from the examples supplied in its prompt at inference time, without any update to its trained parameters. The examples in the context alone steer the model toward the new task, and its performance on that task tends to improve as more examples are provided.
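
To make the idea concrete, here is a minimal sketch of in-context learning via few-shot prompting. The Hugging Face transformers library and the small "gpt2" model are used purely as stand-ins; the article does not specify which models or tools the researchers actually used.

```python
# A minimal sketch of in-context learning via few-shot prompting.
# "gpt2" is an illustrative stand-in, not the model from the study.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt supplies input/output examples; no model weights are updated.
prompt = (
    "English: cat -> French: chat\n"
    "English: dog -> French: chien\n"
    "English: house -> French:"
)

result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```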

The researchers carried out their experiments by feeding the model synthetic data, prompts the program could not have encountered before. Despite this, Akyürek, the lead researcher, says the language model was still able to generalize and extrapolate from the examples.
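
The article does not describe what these synthetic prompts looked like. The sketch below shows one common construction from the in-context learning literature, offered only as an assumed illustration: each prompt encodes a hidden linear function as a sequence of (input, output) pairs, followed by a query input whose output the model must predict.

```python
# Illustrative sketch of a synthetic in-context task (an assumption,
# not necessarily the study's exact setup): examples of a hidden linear
# function are laid out as a sequence the model reads as its prompt.
import numpy as np

rng = np.random.default_rng(0)

def make_prompt(dim=4, n_examples=8):
    """Build one synthetic task: example pairs from a hidden linear map."""
    w = rng.normal(size=dim)                 # hidden weights defining the task
    xs = rng.normal(size=(n_examples, dim))  # example inputs
    ys = xs @ w                              # corresponding outputs
    query = rng.normal(size=dim)             # input the model must answer for
    # Interleave each input with its output, as a prompt would present them.
    tokens = np.stack([np.r_[x, y] for x, y in zip(xs, ys)])
    return tokens, query, query @ w          # prompt tokens, query, true answer

prompt, query, target = make_prompt()
print(prompt.shape, float(target))           # (8, 5) and the answer to predict
```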

The team hypothesized that AI models that exhibit in-context learning create smaller models within themselves to accomplish new tasks. The researchers put their theory to the test by examining a transformer, a neural network model that uses a concept known as “self-attention” to track relationships in sequential data, such as words in a sentence.
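
As a rough illustration of the self-attention operation mentioned above, the sketch below computes single-head attention in NumPy. Real transformers stack many such layers with learned projection weights and multiple heads, so this is a simplified sketch rather than the researchers' model.

```python
# A minimal single-head self-attention sketch in NumPy: each position
# scores its relevance to every other position, then mixes their values.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) sequence; w_*: projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # relevance-weighted mix

rng = np.random.default_rng(1)
d = 8
x = rng.normal(size=(5, d))                           # 5 tokens, 8 dims each
out = self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)  # (5, 8): each position now reflects the whole sequence
```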

Their findings shed light on how artificial intelligence processes information and makes decisions. A clearer understanding of in-context learning could help experts build more efficient, accurate, and trustworthy AI systems across a range of industries.

The sources for this piece include an article in Vice.
