Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems)
Full-time · United States · $122k – $229k

Summary

Location: United States
Salary: $122k – $229k
Type: Full-time

About this role

What the Role Entails

Job Responsibilities:

We are building large-scale, native multimodal model systems that jointly support vision, audio, and text to enable comprehensive perception and understanding of the physical world. You will join the core research team focused on speech and audio, contributing to the following key research areas:

  • Develop general-purpose, end-to-end large speech models covering multilingual automatic speech recognition (ASR), speech translation, speech synthesis, paralinguistic understanding, and general audio understanding.
  • Advance research on speech representation learning and encoder/decoder architectures to build unified acoustic representations for multi-task and multimodal applications.
  • Explore representation alignment and fusion mechanisms between audio/speech and other modalities in large multimodal models, enabling joint modeling with image and text.
  • Build and maintain high-quality multimodal speech datasets, including automatic annotation and data synthesis technologies.

Who We Look For

  • Ph.D. in Computer Science, Electrical Engineering, Artificial Intelligence, Linguistics, or a related field; or Master’s degree with several years of relevant experience.
  • Solid understanding of speech and audio signal processing, acoustic modeling, language modeling, and large model architectures.
  • Proficient in one or more core speech system development pipelines such as ASR, TTS, or speech translation; experience with multilingual, multitask, or end-to-end systems is a plus.
  • Candidates with in-depth research or practical experience in the following areas are strongly preferred:
      • Speech representation pretraining (e.g., HuBERT, Wav2Vec, Whisper)
      • Multimodal alignment and cross-modal modeling (e.g., audio–visual–text)
      • Driving state-of-the-art (SOTA) performance on audio understanding tasks with large models
  • Proficient in deep learning frameworks such as PyTorch or TensorFlow; experience with large-scale training and distributed systems is a plus.
  • Familiar with Transformer-based architectures and their applications in speech and multimodal training/inference.

Location State(s)

Bellevue, Washington (US)

The expected base pay range for this position in the location(s) listed above is $122,500.00 to $229,700.00 per year. Actual pay may vary depending on job-related knowledge, skills, and experience. Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis.

Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life, and disability benefits, and participation in the Company’s 401(k) plan. Employees are also eligible for 15 to 25 days of vacation per year (depending on tenure), up to 13 days of holidays throughout the calendar year, and up to 10 days of paid sick leave per year. Benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level, and may be pro-rated for those who start working during the calendar year.

Equal Employment Opportunity at Tencent

As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.

Other facts

Tech stack
Speech Recognition, Speech Translation, Speech Synthesis, Audio Understanding, Deep Learning, PyTorch, TensorFlow, Multimodal Systems, Transformer Architectures, Data Synthesis, Acoustic Modeling, Language Modeling, Representation Learning, Multilingual Systems, End-to-End Systems, Cross-Modal Modeling

About Tencent

Tencent is a world-leading internet and technology company that develops innovative products and services to improve the quality of life of people around the world.

Founded in 1998 with its headquarters in Shenzhen, China, Tencent's guiding principle is to use technology for good. Our communication and social services connect more than one billion people around the world, helping them to keep in touch with friends and family, access transportation, pay for daily necessities, and even be entertained.

Tencent also publishes some of the world's most popular video games and other high-quality digital content, enriching interactive entertainment experiences for people around the globe.

Tencent also offers a range of services such as cloud computing, advertising, FinTech, and other enterprise services to support our clients' digital transformation and business growth.

Tencent has been listed on the Stock Exchange of Hong Kong since 2004.

Team size: 10,001+ employees
Industry: Software Development
Founding Year: 1998

What you'll do

The role involves developing large-scale multimodal model systems that integrate speech, audio, and text for enhanced understanding of the physical world. Key tasks include advancing speech representation learning and building high-quality multimodal speech datasets.

Frequently Asked Questions

What does Tencent pay for a Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems)?

Tencent offers a competitive compensation package for the Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) role. The base salary range is $122,500 to $229,700 USD per year. Apply through Clera to learn more about the full compensation details.

What does a Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) do at Tencent?

As a Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) at Tencent, you will develop large-scale multimodal model systems that integrate speech, audio, and text for enhanced understanding of the physical world. Key tasks include advancing speech representation learning and building high-quality multimodal speech datasets.

Why join Tencent as a Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems)?

Tencent is a leading Software Development company. The Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) role offers competitive compensation.

Is the Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) position at Tencent remote?

The Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) position at Tencent is based in Bellevue, Washington, United States. Contact the company through Clera for specific work arrangement details.

How do I apply for the Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) position at Tencent?

You can apply for the Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems) position at Tencent directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Tencent on their website.