Overview
At Dolby, science meets art, and high tech means more than computer code. We continue to revolutionize how people create, deliver, and enjoy entertainment worldwide. Currently, our Mobile team is developing a suite of innovative software products powered by Artificial Intelligence (AI) and Neural Networks (NN).
In this role, you will stand at the intersection of AI research and engineering implementation. Your mission is to ensure complex neural network algorithms run efficiently, with a light footprint, on mobile SoCs. You won't just verify model outputs; you will dive deep into the models, optimize training workflows, and push the boundaries of AI robustness. We are looking for engineers with a strong acoustics background, keen auditory perception, and a passion for AI algorithms. If you have deep academic roots in Music Engineering, Music Technology, Music Computing, or ECE (Audio Track) and aspire to apply theory to world-changing consumer products, this is your stage.
Responsibilities
- Algorithm Evaluation & Tuning: Deeply understand the principles of AI audio algorithms (e.g., Noise Reduction, Spatial Audio, Audio Enhancement) and design scientific experimental protocols to evaluate their performance in real-world mobile scenarios.
- Perceptual & Objective Quality Assessment: Conduct subjective evaluations (MOS scoring) using your professional listening skills, and establish a multi-dimensional quality assessment system using objective tools (e.g., POLQA, PEAQ, FFT analysis).
- AI Model Boundary Exploration: Perform stress tests on deep learning models for on-device inference; identify failure points in extreme scenarios to help the research team iterate and improve model robustness.
- Mobile Product Experience: Evaluate the inference latency, memory footprint, and power consumption of AI models on iOS/Android platforms. Participate in full-lifecycle testing of Dolby's latest mobile apps to ensure consistent, high-quality experiences across diverse hardware environments.
- Test Tooling Development: Build automated model verification frameworks in Python to enable regression testing after model iterations. Develop scripts in Python or MATLAB for audio processing, batch analysis, and automated data acquisition.
Qualifications
- Education: Bachelor's degree or higher in ECE (Audio Track), Electrical Engineering, Computer Science, or a related field.
- Theoretical Foundation: Solid grasp of Digital Signal Processing (DSP) fundamentals (filtering, transforms, codecs). Basic understanding of Machine Learning/Deep Learning (CNN, RNN, Transformer) and familiarity with PyTorch or TensorFlow.
- Mobile Awareness: Preliminary understanding of how mobile chipsets (NPU/DSP) handle AI workloads.
- Technical Toolkit:
- Proficiency in Python for processing large-scale datasets and general scripting.
- Familiarity with MATLAB or similar tools for signal modeling and analysis.
- Core Competencies: Strong curiosity and a research-oriented mindset; ability to independently read academic papers and translate them into actionable test cases.
- Communication: Excellent English skills (reading, writing, and speaking) for seamless collaboration with global R&D teams.
Preferred Qualifications
- Internship experience at renowned A/V laboratories, audio chipset companies, or major internet tech firms.
- Familiarity with the multimedia frameworks of Android or iOS.
- Strong interest or project experience in Spatial Audio or Dolby Atmos.
#LI-JZ1
Required Experience:
IC