Location: Los Angeles, CA (Hybrid, 3 Days Onsite)
Type: Full-Time
Experience: 9-14 Years
Job Summary
We are seeking a Lead Data Engineer with strong expertise in Databricks, PySpark, and scalable data engineering to support large-scale media and streaming data environments. The role involves building high-performance ETL/ELT pipelines, optimizing data infrastructure, and supporting analytics and reporting initiatives.
Must-Have Skills
- 9+ years of experience in Data Engineering
- Strong hands-on experience with Databricks
- Expertise in SQL, Python, and PySpark
- Strong experience in ETL/ELT pipeline development
- Experience with Data Modeling & Data Warehousing
- Experience building Lakehouse architectures
- Hands-on experience with Airflow or workflow orchestration tools
- Strong understanding of performance optimization and monitoring
- Experience with AWS cloud platforms
- Excellent communication and stakeholder management skills
Key Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines
- Develop and optimize data models, lakehouse, and warehouse solutions
- Build high-volume batch and near real-time data pipelines
- Monitor, troubleshoot, and optimize production workloads
- Implement orchestration and workflow automation solutions
- Collaborate with analytics, reporting, and engineering teams
- Lead technical discussions and mentor engineering teams
- Support scalable solutions for streaming and subscription datasets
Nice to Have
- Experience in media / streaming / subscription platforms
- Experience handling high-volume consumer datasets
- Exposure to real-time data processing environments