Meta, the parent company of Facebook, presented six research papers at INTERSPEECH 2023, the International Speech Communication Association's annual conference, held in Dublin.
The papers cover advances in speech recognition and understanding, including new methods for improving recognition accuracy, more robust spoken language understanding systems, and expressive speech synthesis models.
One of the papers, titled “Multi-head State Space Model for Speech Recognition,” introduces a new architecture that can improve the accuracy of speech recognition by capturing both local and global temporal patterns in speech data. The paper also presents a new model called the Stateformer, which achieves state-of-the-art results on the LibriSpeech speech recognition dataset.
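The article does not reproduce the Stateformer's architecture, but the core building block of state space models for sequences is a linear recurrence run over time. The sketch below is a generic illustration of that idea with several independent "heads," each using a different decay rate so that some heads track short-range (local) context and others long-range (global) context; all function names, shapes, and parameters here are hypothetical, not Meta's implementation.

```python
import numpy as np

def multihead_ssm(u, A, B, C):
    """Run H independent diagonal state-space heads over an input sequence.

    Hypothetical shapes for illustration:
      u: (T, H) input sequence, one channel per head.
      A, B, C: (H,) per-head parameters of the linear recurrence
          x_t = A * x_{t-1} + B * u_t,    y_t = C * x_t
    Returns y: (T, H) per-head outputs.
    """
    T, H = u.shape
    x = np.zeros(H)
    y = np.empty((T, H))
    for t in range(T):
        x = A * x + B * u[t]   # per-head linear state update
        y[t] = C * x           # per-head readout
    return y

# Toy usage: two heads on a constant input. A small decay (0.1) forgets
# quickly and models local patterns; a large decay (0.9) integrates over
# many steps and models global patterns.
T = 5
u = np.ones((T, 2))
A = np.array([0.1, 0.9])
B = np.array([1.0, 1.0])
C = np.array([1.0, 1.0])
y = multihead_ssm(u, A, B, C)
```

On the constant input, the slow-decay head accumulates a much larger response than the fast-decay head, which is the intuition behind mixing heads with different temporal ranges.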
Another paper, titled “Modality Confidence Aware Training for Robust End-to-End Spoken Language Understanding,” addresses the problem of inaccurate text representations in end-to-end spoken language understanding systems. The paper proposes a new method for training these systems that takes into account the confidence levels of automatic speech recognition (ASR) hypotheses.
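The paper's exact objective is not given in the article, but the general idea of confidence-aware training can be sketched as down-weighting training examples whose ASR transcript is likely wrong. The snippet below is a minimal illustration of that weighting scheme under assumed inputs (per-example losses and ASR confidences); the function name and interface are hypothetical, not the method from the paper.

```python
import numpy as np

def confidence_weighted_loss(losses, asr_confidences):
    """Average per-utterance SLU losses, weighted by ASR confidence.

    Illustrative sketch only: utterances whose ASR hypothesis has low
    confidence (i.e., the text representation is likely inaccurate)
    contribute less to the training signal.

    losses: (N,) per-example loss values.
    asr_confidences: (N,) confidences in [0, 1] from the ASR decoder.
    """
    w = np.asarray(asr_confidences, dtype=float)
    return float(np.sum(w * losses) / np.sum(w))

# Toy usage: the second utterance has a bad transcript (confidence 0.2)
# and a large loss; weighting keeps it from dominating training.
losses = np.array([0.5, 4.0])
conf = np.array([0.9, 0.2])
loss = confidence_weighted_loss(losses, conf)
```

Compared with a plain mean of the losses, the weighted loss is pulled toward the confidently transcribed example, which is the behavior the training method is after.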
The list also includes “EXPRESSO,” which introduces a dataset of the same name for expressive speech synthesis covering 26 styles, discusses the challenges the styles pose, and proposes a new training method. Another paper, “Handling the Alignment for Wake Word Detection: A Comparison Between Alignment-Based, Alignment-Free and Hybrid Approaches,” compares alignment-based, alignment-free, and hybrid approaches to detecting the keywords that activate smart devices.
Furthermore, Meta presented MuAViC, a benchmark for speech translation comprising a multilingual audio-visual corpus and evaluation metrics, as well as ESPnet-SE++, a speech enhancement system designed to improve speech quality in noisy conditions.
The sources for this piece include an article in AnalyticsIndiaMag.