
Dubber AI researchers sponsoring innovative university projects

chloe.sawle@dubber.net
30 January 2023

During 2022, senior researchers at the Dubber AI Centre of Excellence worked with leading university groups to co-supervise student projects and internships at Honours, Masters and PhD levels.

“We’re proud to be helping foster emerging talent in speech and language AI in Australia by partnering with local universities. These connections are a great way for us to keep track of the latest academic advances, while giving students the opportunity to work on problems with practical applications.” – Dr Iain McCowan, Director of AI at Dubber

“The School of Electrical Engineering and Telecommunications is pleased to be partnering with innovative Australian industry partners like Dubber to develop new digital signal processing and machine learning capability. These honours thesis topics, jointly supervised by Dubber experts, provide invaluable workplace-relevant skill development for UNSW Engineering students and build industry-university collaboration.” – Professor Julien Epps, Head of School of Electrical Engineering and Telecommunications, UNSW

The projects showcased a range of innovative conversational AI initiatives. As we look forward to welcoming a new cohort in 2023, here is a summary of the projects we enjoyed supporting in 2022:

Darshana Madduma – PhD Internship

University: QUT – Professor Sridha Sridharan

Title: Evaluation of the Applicability of Audio Transformer Networks for Speaker Diarization.

Darshana investigated how emerging pre-trained general acoustic models can be applied to multiple end tasks, with an initial focus on automatically segmenting speaker turns in conversations. These models promise increased accuracy while sharing computational resources across different tasks.
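
To make the general idea concrete (this is not Darshana's actual system), the sketch below uses a pre-trained Wav2Vec2 encoder from the Hugging Face transformers library to produce frame-level embeddings, pools them into short windows, and clusters the windows into speaker turns. The model name, window length and number of speakers are illustrative assumptions.

```python
# A minimal sketch of speaker-turn segmentation on top of a pre-trained audio
# transformer. Model choice, window length and speaker count are illustrative
# assumptions, not the approach developed during the internship.
import numpy as np
import torch
from sklearn.cluster import AgglomerativeClustering
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL_NAME = "facebook/wav2vec2-base"  # assumed pre-trained acoustic model
extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_NAME)
encoder = Wav2Vec2Model.from_pretrained(MODEL_NAME)

def diarize(waveform: np.ndarray, sample_rate: int = 16000,
            window_s: float = 1.0, num_speakers: int = 2) -> list[int]:
    """Assign a speaker label to each fixed-length window of audio."""
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        frames = encoder(**inputs).last_hidden_state.squeeze(0)  # (T, 768)

    # Pool frame embeddings into ~1 s windows (Wav2Vec2 emits roughly 50 frames/s).
    frames_per_window = int(window_s * 50)
    windows = [frames[i:i + frames_per_window].mean(dim=0).numpy()
               for i in range(0, len(frames), frames_per_window)]

    # Cluster window embeddings; each cluster is treated as one speaker.
    labels = AgglomerativeClustering(n_clusters=num_speakers).fit_predict(np.stack(windows))
    return labels.tolist()
```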


Jack Murray – Honours Project

University: UNSW – Dr Vidhyasaharan Sethu

Title: Modern Speech Representation Frameworks for Spoken Language Identification and Segmentation

As speech technologies advance, we face the problem of providing equal access to multilingual speakers, and speakers of less common languages. Some issues with existing systems are that they often support only one language at a time and are usually trained using thousands of hours of labelled data. For less common languages, these large, labelled datasets often don’t exist. Jack’s project implemented a language segmentation algorithm trained on a minimal amount of data by leveraging pre-trained general acoustic models. This could form the basis of a system to segment and transcribe conversations that contain multiple languages.
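
As a rough illustration of this low-data approach (not the exact design from Jack's thesis), the sketch below assumes window-level embeddings have already been extracted from a frozen pre-trained acoustic model; only a small classifier head is trained on the limited labelled data, and consecutive windows with the same predicted language are merged into segments.

```python
# Minimal sketch: spoken language segmentation with little labelled data, by
# training only a small classifier on top of frozen pre-trained embeddings.
# The logistic-regression head and window-level pooling are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_language_head(embeddings: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """embeddings: (N, D) pooled windows from a frozen acoustic model;
    labels: (N,) language id per window. Only this small head is trained."""
    return LogisticRegression(max_iter=1000).fit(embeddings, labels)

def segment_languages(head: LogisticRegression, windows: np.ndarray) -> list[tuple[int, int, int]]:
    """Predict a language per window, then merge consecutive windows with the
    same prediction into (start_window, end_window, language_id) segments."""
    preds = head.predict(windows)
    segments, start = [], 0
    for i in range(1, len(preds) + 1):
        if i == len(preds) or preds[i] != preds[start]:
            segments.append((start, i - 1, int(preds[start])))
            start = i
    return segments
```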

“It was great having Dubber AI experts supporting my project. Not only did I learn a lot more than I would have on my own, but it was also really motivating to see practical applications of this type of research.” – Jack Murray

Jack recently came second (out of around 200 entries) in EE&T’s Thesis Poster Competition, as judged by industry, alumni and academics. Congratulations, Jack!


Pavithra Bharatham Rengarajan – Masters Project

University: University of Adelaide (AIML) – Dr Lingqiao Liu

Title: Unsupervised Dialogue Segmentation with Next Sentence Prediction

Automatic segmentation of a conversation into topics is a key task towards effective presentation or summarisation of meetings. This is a difficult problem, however, and large annotated datasets for this task are not readily available. Pavithra’s project focused on unsupervised techniques that could be used with limited data. The most promising approach used Next Sentence Prediction (NSP) from the Bidirectional Encoder Representations from Transformers (BERT) model to identify changes of context or direction in a conversation transcript.
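
To make the NSP idea concrete, the sketch below scores adjacent utterances with the next-sentence-prediction head of a pre-trained BERT model from the Hugging Face transformers library; where the “sentence B follows sentence A” probability drops below a threshold, a topic boundary is proposed. The model name and threshold are illustrative assumptions, not parameters from Pavithra's project.

```python
# Minimal sketch: unsupervised topic boundary detection with BERT's NSP head.
# Model name and boundary threshold are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def topic_boundaries(utterances: list[str], threshold: float = 0.5) -> list[int]:
    """Return indices i where a topic change is proposed between
    utterances[i] and utterances[i + 1]."""
    boundaries = []
    for i in range(len(utterances) - 1):
        inputs = tokenizer(utterances[i], utterances[i + 1],
                           return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        # Index 0 of the NSP head is the "sentence B follows sentence A" class.
        p_next = torch.softmax(logits, dim=-1)[0, 0].item()
        if p_next < threshold:  # low continuity score -> likely topic shift
            boundaries.append(i)
    return boundaries
```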


Madeline Younes – Honours Project

University: UNSW – Associate Professor Beena Ahmed

Title: Exploring Transfer Learning for Arabic Dialect Identification

The goal of Madeline’s project was to automatically distinguish between different spoken Arabic dialects to help improve transcription accuracy. Given the scarcity of data resources for many of these dialects, a novel approach to dialect identification (DID) was investigated that leverages pre-trained acoustic models. A multi-stage approach was developed to first classify speech into one of four umbrella Arabic dialect groups, and then into one of seventeen fine-grained regional dialects.
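
The sketch below illustrates the two-stage routing structure only, assuming utterance-level embeddings from a pre-trained acoustic model have already been extracted and the classifiers already trained; the classifier choice and interfaces are illustrative placeholders rather than the project's actual design.

```python
# Minimal sketch of two-stage dialect identification: a coarse classifier picks
# one of a few umbrella dialect groups, then a group-specific classifier picks
# the fine-grained regional dialect. Classifier choice is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

class TwoStageDialectID:
    def __init__(self, umbrella_clf: LogisticRegression,
                 regional_clfs: dict[int, LogisticRegression]):
        self.umbrella_clf = umbrella_clf    # e.g. 4 umbrella dialect groups
        self.regional_clfs = regional_clfs  # one fine-grained classifier per group

    def predict(self, embedding: np.ndarray) -> tuple[int, int]:
        """embedding: (D,) utterance vector from a pre-trained acoustic model."""
        group = int(self.umbrella_clf.predict(embedding[None, :])[0])
        region = int(self.regional_clfs[group].predict(embedding[None, :])[0])
        return group, region
```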


Erin Moss – Honours Project

University: UNSW – Associate Professor Beena Ahmed

Title: Domain Specific Sentiment Analysis

Erin’s project investigated how first determining the domain context (topic or subject matter) could improve the training of sentiment classification models for Natural Language Processing. The system developed used a first statistical model to detect the domain, followed by a second stage in which this information was used to select a specialised lexicon on which the sentiment analysis classifier was trained.
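
A minimal sketch of that two-stage structure is shown below: a statistical domain detector routes each text to a sentiment classifier trained on domain-specific data. The bag-of-words features and Naive Bayes models are illustrative assumptions, not the components used in Erin's system.

```python
# Minimal sketch of domain-aware sentiment analysis: a domain detector routes
# each text to a sentiment classifier trained for that domain. Feature and
# model choices here are illustrative assumptions, not the project's design.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def build_system(domain_texts, domain_labels, per_domain_sentiment_data):
    """per_domain_sentiment_data: {domain: (texts, sentiment_labels)}."""
    # Stage 1: statistical domain detector.
    domain_clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(domain_texts, domain_labels)

    # Stage 2: one sentiment classifier per domain, trained on domain-specific text.
    sentiment_clfs = {
        domain: make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
        for domain, (texts, labels) in per_domain_sentiment_data.items()
    }

    def predict_sentiment(text: str) -> str:
        domain = domain_clf.predict([text])[0]
        return sentiment_clfs[domain].predict([text])[0]

    return predict_sentiment
```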


We’re helping service providers and organisations unlock the full potential of conversations. Learn more about the Dubber AI Centre of Excellence.
