
New research reveals inner workings of AI

A group of researchers from the Massachusetts Institute of Technology, Stanford University, and Google has made a ground-breaking discovery about how AI language models that power text and image generation tools work.

The study found that AI language models can learn to perform new tasks accurately from only a few examples through a process called in-context learning, effectively picking up new skills on the fly. When given a prompt containing a set of example inputs and outputs, a language model can generate new, often accurate predictions for a task it was never explicitly trained on.

In-context learning is a machine learning phenomenon in which a model picks up a task from examples supplied directly in its prompt at inference time, without any update to its underlying weights. This allows the model to adapt its behavior to the task at hand and improve its responses as it is given more relevant examples, all without retraining.
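To make the idea concrete, here is a minimal sketch in Python of what an in-context prompt looks like. The antonym task, the example pairs, and the note about what a language model would return are illustrative assumptions rather than details from the study; only the structure of the prompt, a list of input-output pairs followed by a new input, reflects the mechanism described above.

    # Minimal sketch of in-context learning: the "training data" lives entirely
    # in the prompt, and the model's weights are never updated.
    # No model API is called here; only the prompt construction is shown.

    examples = [
        ("cold", "hot"),
        ("tall", "short"),
        ("fast", "slow"),
    ]
    query = "bright"

    # Build a few-shot prompt: example input/output pairs, then the new input.
    prompt_lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    prompt_lines.append(f"Input: {query}\nOutput:")
    prompt = "\n\n".join(prompt_lines)

    print(prompt)
    # A capable language model would typically complete this with "dim" or a
    # similar antonym, even though it was never fine-tuned on an antonym task.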

The researchers carried out their experiment by feeding the model synthetic data: prompts the program had never seen before. Despite this, Akyürek, the lead researcher, says the language model was able to generalize from those prompts and extrapolate knowledge from them.

The team hypothesized that AI models that exhibit in-context learning build smaller, simpler models inside their own activations and effectively train those internal models to accomplish new tasks. The researchers put their theory to the test by examining a transformer, a neural network architecture that uses a mechanism known as “self-attention” to track relationships in sequential data, such as words in a sentence.
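The sketch below, in Python with NumPy, illustrates that hypothesis in the simplest setting: an unseen linear function defines the task, the prompt supplies a handful of (x, y) pairs, and the “smaller model” is written out explicitly as a least-squares fit to those pairs. The specific numbers and the choice of least squares are illustrative assumptions; the researchers’ claim is that the transformer carries out a comparable fit implicitly, inside its own activations.

    # Rough sketch of the "smaller model inside the model" idea, assuming a
    # simple linear-regression setting. The in-context examples play the role
    # of the prompt, and the explicit least-squares fit stands in for the
    # internal model the transformer is hypothesized to construct.

    import numpy as np

    rng = np.random.default_rng(0)

    # An unseen "task": a random linear function the model was never trained on.
    w_true = rng.normal(size=3)

    # In-context examples: the (x, y) pairs that would appear in the prompt.
    X_context = rng.normal(size=(8, 3))
    y_context = X_context @ w_true

    # The hypothesized inner computation: a least-squares fit to the prompt
    # examples, standing in for the small model built inside the network.
    w_fit, *_ = np.linalg.lstsq(X_context, y_context, rcond=None)

    # Prediction for a new query point, analogous to the model's next output.
    x_query = rng.normal(size=3)
    print("true y:", x_query @ w_true)
    print("in-context prediction:", x_query @ w_fit)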

Their findings shed light on how these models process information and make decisions. A clearer picture of what happens inside a model during in-context learning could help experts build more efficient, accurate, and trustworthy AI systems across industries.

The sources for this piece include an article in Vice.
