Large language models have emotional intelligence, research says


A study by researchers at Microsoft and the University of Toronto has shown that LLMs do indeed have some level of emotional intelligence, and that their performance can be enhanced by providing them with emotional prompts.

The researchers conducted a series of experiments on different LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. They found that LLMs performed better on a variety of tasks when they were given emotional prompts, such as “This is very important to my career” or “I’m really excited about this project.”
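To make the idea concrete, here is a minimal Python sketch of how such an emotional stimulus can be appended to a task prompt. The function name, default stimulus wording, and example prompt are illustrative only and are not taken from the researchers' code; the augmented string would then be sent to whatever model or API you are using.

# Illustrative sketch of EmotionPrompt-style prompt augmentation.
# The stimulus text mirrors an example cited in the study; everything else
# here is a placeholder, not the researchers' actual code.

def add_emotional_stimulus(prompt, stimulus="This is very important to my career."):
    # Append the emotional stimulus to the end of the task prompt.
    return f"{prompt} {stimulus}"

base_prompt = "Do the two sentences use the word 'bank' with the same meaning? Answer yes or no."
augmented_prompt = add_emotional_stimulus(base_prompt)
print(augmented_prompt)
# The augmented prompt, rather than the plain one, is what gets sent to the model.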

For example, on a task that involved determining whether two words have the same meaning, LLMs achieved an accuracy of 57% with a plain prompt. When the emotional cue “This is very important to my career” was added to the prompt, their accuracy increased to 67%.

The researchers also conducted a human study to evaluate the quality of generative tasks performed by LLMs using both vanilla and emotional prompts. The results showed that emotional prompts significantly boosted generative output, with an average improvement of 10.9% across performance, truthfulness, and responsibility metrics.

The researchers believe that their findings could have a number of implications for the development and use of LLMs. For example, they suggest that emotional prompts could be used to improve the performance of LLMs in tasks such as customer service, education, and healthcare.

The researchers call this technique EmotionPrompt and believe it works because it helps LLMs better understand the context of the task at hand. When a model is given an emotional prompt, it can better infer the user’s intent and the desired outcome, which leads to improved performance on both deterministic and generative tasks.

The sources for this piece include a paper on arXiv.
