Research Scientist - Voice AI Foundations (San Francisco) Job at Deepgram, San Francisco, CA


Job Description

Research Scientist - Voice AI Foundations

Company Overview

Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity

Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.

The Role

You will pioneer the development of Latent Space Models (LSMs), a new approach that aims to solve the fundamental data, scale, and cost challenges associated with building robust, contextualized voice AI. Your research will focus on solving one or more of the following problems:

  • Build next-generation neural audio codecs that achieve extreme, low bit-rate compression and high fidelity reconstruction across a world-scale corpus of general audio.
  • Pioneer steerable generative models that can synthesize the full diversity of human speech from the codec latent representation, from casual conversation to highly emotional expression to complex multi-speaker scenarios with environmental noise and overlapping speech.
  • Develop embedding systems that cleanly factorize the codec latent space into interpretable dimensions of speaker, content, style, environment, and channel effects -- enabling precise control over each aspect and the ability to massively amplify an existing seed dataset through “latent recombination”.
  • Leverage latent recombination to generate synthetic audio data at previously impossible scales, unlocking joint model and data scaling paradigms for audio. Endeavor to train multimodal speech-to-speech systems that can 1) understand any human irrespective of their demographics, state, or environment and 2) produce empathic, human-like responses that achieve conversational or task-oriented objectives.
  • Design model architectures, training schemes, and inference algorithms that are adapted for hardware at the bare metal enabling cost efficient training on billion-hour datasets and powering real-time inference for hundreds of millions of concurrent conversations.
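The "latent recombination" idea above can be made concrete with a toy sketch. The factor names, dimensions, and encoding below are illustrative assumptions, not Deepgram's actual design: if each seed utterance is encoded into independent factor latents (speaker, content, style, environment), taking the cross-product of factor values across seeds amplifies N originals into N**F combinations.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical seed utterances, each encoded into factorized latents.
# Factor names and dimensions are illustrative placeholders.
seeds = [
    {"speaker": rng.normal(size=8), "content": rng.normal(size=16),
     "style": rng.normal(size=4), "environment": rng.normal(size=4)},
    {"speaker": rng.normal(size=8), "content": rng.normal(size=16),
     "style": rng.normal(size=4), "environment": rng.normal(size=4)},
]

def recombine(seeds):
    """Yield every cross-product of factor values across seed utterances.

    With F independent factors and N seeds this produces N**F latents
    from N originals -- the combinatorial amplification described above.
    """
    factors = list(seeds[0].keys())
    choices = [[s[f] for s in seeds] for f in factors]
    for combo in itertools.product(*choices):
        yield dict(zip(factors, combo))

latents = list(recombine(seeds))
print(len(latents))  # 2 seeds, 4 factors -> 2**4 = 16 recombined latents
```

The amplification is exponential in the number of cleanly factorized dimensions, which is why the factorization quality in the previous bullet matters so much.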

The Challenge

We are seeking researchers who:

  • See unsolved problems as opportunities to pioneer entirely new approaches
  • Can identify the one critical experiment that will validate or kill an idea in days, not months
  • Have the vision to scale successful proofs-of-concept 100x
  • Are obsessed with using AI to automate and amplify your own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

It's Important to Us That You Have

  • Strong mathematical foundation in statistical learning theory, particularly in areas relevant to self-supervised and multimodal learning
  • Deep expertise in foundation model architectures, with an understanding of how to scale training across multiple modalities
  • Proven ability to bridge theory and practice—someone who can both derive novel mathematical formulations and implement them efficiently
  • Demonstrated ability to build data pipelines that can process and curate massive datasets while maintaining quality and diversity
  • Track record of designing controlled experiments that isolate the impact of architectural innovations and validate theoretical insights
  • Experience optimizing models for real-world deployment, including knowledge of hardware constraints and efficiency techniques
  • History of open-source contributions or research publications that have advanced the state of the art in speech/language AI

How We Generated This Job Description

This job description was generated in two parts. The “Opportunity”, “Role”, and “Challenge” sections were generated by a human using Claude-3.5-sonnet as a writing partner. The objective of these sections is to clearly state the problem that Deepgram is attempting to solve, how we intend to solve it, and some guidelines to help you decide if Deepgram is right for you. Therefore, it was important to us that these sections be articulated by a human.

The “It’s Important to Us” section was automatically derived from a multi-stage LLM analysis (using o1) of key foundational deep learning papers related to our research goals. This work was completed as an experiment to test the hypothesis that traits of highly productive and impactful researchers are reflected directly in their work. The analysis focused on understanding how successful researchers approach problems, from mathematical foundations through to practical deployment. The problems Deepgram aims to solve are immensely difficult and span multiple disciplines and specialties. As such, we chose seminal papers that we believe reflect the pioneering work and exemplary human characteristics needed for success. The LLM analysis culminates in an “Ideal Researcher Profile”, which is reproduced below along with the list of foundational papers.

Ideal Researcher Profile

An ideal researcher, as evidenced by the recurring themes across these foundational papers, excels in five key areas: (1) Statistical & Mathematical Foundations, (2) Algorithmic Innovation & Implementation, (3) Data-Driven & Scalable Systems, (4) Hardware & Systems Understanding, and (5) Rigorous Experimental Design. Below is a synthesis of how each paper highlights these qualities, with references illustrating why they matter for building robust, impactful deep learning models.

  • Statistical & Mathematical Foundations

Mastery of Core Concepts

Many papers, like Scaling Laws for Neural Language Models and Neural Discrete Representation Learning (VQ-VAE), reflect the importance of power-law analyses, derivation of novel losses, or adaptation of fundamental equations (e.g., in VQ-VAE's commitment loss or rectified flows in Scaling Rectified Flow Transformers). Such mathematical grounding clarifies why models converge or suffer collapse.
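As a small worked example of the mathematical grounding mentioned here, the VQ-VAE objective (van den Oord et al., 2017) adds a codebook term ||sg[z_e] - e||² and a commitment term β||z_e - sg[e]||², which differ only in where the stop-gradient sg[·] is placed. The numpy sketch below computes the scalar values of both terms for a toy codebook; in an autodiff framework the two terms would be numerically equal but route gradients differently.

```python
import numpy as np

def vq_losses(z_e, codebook, beta=0.25):
    """Codebook and commitment terms of the VQ-VAE objective.

    sg[.] is a stop-gradient, so the two terms share the same scalar
    value (up to beta); only their gradient routing differs. Here we
    just evaluate the scalars for the nearest code to each z_e row.
    """
    # Nearest-neighbour assignment of each encoder vector to a code.
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    e = codebook[d.argmin(axis=1)]
    codebook_loss = ((z_e - e) ** 2).sum(-1).mean()  # moves codes toward z_e
    commitment_loss = beta * codebook_loss           # keeps z_e near its code
    return codebook_loss, commitment_loss

z_e = np.array([[0.1, 0.2], [0.9, 1.1]])        # toy encoder outputs
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy 2-entry codebook
cb, cm = vq_losses(z_e, codebook)
```

Poorly balancing these terms (the β hyperparameter) is one classic route to the codebook collapse the text alludes to.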

Combining Existing Theories in Novel Ways

Papers such as Moshi (combining text modeling, audio codecs, and hierarchical generative modeling) and Finite Scalar Quantization (FSQ's adaptation of classic scalar quantization to replace vector-quantized representations) show how reusing but reimagining known techniques can yield breakthroughs. Many references (e.g., the structured state-space duality in Transformers are SSMs) underscore how unifying previously separate research lines can reveal powerful algorithmic or theoretical insights.
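FSQ's "reimagining of a known technique" is simple enough to sketch in a few lines. Per Mentzer et al. (2023), each latent dimension is bounded and rounded to a small per-dimension grid, so the implicit codebook is the product of the grids and no learned codebook or commitment loss is needed. The sketch below covers only the forward pass and assumes odd level counts for simplicity; training would wrap the rounding in a straight-through estimator.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite Scalar Quantization, forward pass only (odd level counts).

    Each dimension i is squashed by tanh into a symmetric range and
    rounded to one of levels[i] integer values; the implicit codebook
    size is the product of the per-dimension level counts.
    """
    half = (np.asarray(levels) - 1) / 2.0  # e.g. 5 levels -> values in [-2, 2]
    bounded = np.tanh(z) * half            # bound each dimension to its range
    return np.round(bounded)               # training uses a straight-through estimator

z = np.array([0.3, -2.0, 0.0])
codes = fsq_quantize(z, levels=[5, 5, 5])  # implicit codebook of 5*5*5 = 125 codes
```

Contrasted with VQ-VAE above, there is no codebook to learn and no auxiliary losses to balance, which is exactly the kind of simplification-by-reimagining the paragraph describes.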

Logical Reasoning and Assumption Testing

Across all papers—particularly in the problem statements of Whisper or Rectified Flow Transformers—the authors present assumptions (e.g., scaling data leads to zero-shot robustness or straight-line noise injection improves sample efficiency) and systematically verify them with thorough empirical results. An ideal researcher similarly grounds new ideas in well-formed, testable hypotheses.

  • Algorithmic Innovation & Implementation

Creative Solutions to Known Bottlenecks

Each paper puts forth a unique algorithmic contribution—Rectified Flow Transformers redefines standard diffusion paths, FSQ proposes simpler scalar quantizations contrasted with the learned codebooks of vector quantization.
