Business Unit/Group: Studio Economics
Requisition Number: 23158-1
Intended Start Date: 9/30/2025
Contract Duration: 4 months
Possibility For Extension / Conversion: Possible
Max Hourly Pay Rate: BR/hr
OT Required / Expected: No
WB Games Resource(s): No
CNN Resource(s): No
What We Do/Project
As part of the Studio Economics transformation, we are evolving how finance, business, and technology collaborate, shifting to lean-agile, user-centric, small product-oriented delivery teams (pods) that deliver integrated, intelligent, scalable solutions and bring together engineers, product owners, designers, data architects, and domain experts.
Each pod is empowered to own outcomes end to end: refining requirements, building solutions, testing, and delivering in iterative increments. We emphasize collaboration over handoffs, working software over documentation alone, and shared accountability for delivery. Engineers contribute not only to code but also to design reviews, backlog refinement, and retrospectives, ensuring decisions are transparent and scalable across pods. We prioritize reusability, automation, and continuous improvement, balancing rapid delivery with long-term maintainability.
The Senior Data Engineer plays a hands-on role within the Platform Pod, ensuring data pipelines, integrations, and services are performant, reliable, and reusable. This role partners closely with Data Architects, Cloud Architects, and application pods to deliver governed, AI/ML-ready data products.
Job Responsibilities / Typical Day in the Role
Design & Build Scalable Data Pipelines
Lead development of batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks (a brief illustrative sketch follows this list).
Implement best practices for monitoring, resilience, and cost optimization in high-scale pipelines.
Collaborate with architects to translate canonical and semantic data models into physical implementations.
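For illustration only, a minimal PySpark sketch of the kind of batch transformation pipeline this role builds; the bucket paths and column names (amount, event_ts, cost_center) are hypothetical placeholders, not project specifics:

```python
# Minimal PySpark batch-transform sketch; paths and column names
# (amount, event_ts, cost_center) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-costs-batch").getOrCreate()

# Read raw landing data (e.g., dropped to S3 by an upstream ingest job).
raw = spark.read.parquet("s3://example-landing/costs/")

# Normalize types and aggregate into an analytics-ready daily grain.
clean = (
    raw
    .withColumn("amount_usd", F.col("amount").cast("decimal(18,2)"))
    .withColumn("cost_date", F.to_date("event_ts"))
    .groupBy("cost_date", "cost_center")
    .agg(F.sum("amount_usd").alias("total_usd"))
)

# Partition by date so downstream readers can prune efficiently.
clean.write.mode("overwrite").partitionBy("cost_date").parquet(
    "s3://example-curated/daily_costs/"
)
```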
Enable Analytics & AI/ML Workflows
Build pipelines that deliver clean, well-structured data to analysts, BI tools, and ML pipelines.
Work with data scientists to enable feature engineering and deployment of ML models into production environments (see the sketch after this list).
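Again for illustration, a minimal feature-engineering sketch over the curated output above; the window logic, paths, and names remain hypothetical:

```python
# Minimal feature-engineering sketch over the curated output above;
# window logic, paths, and names are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cost-features").getOrCreate()
daily = spark.read.parquet("s3://example-curated/daily_costs/")

# Rolling spend over the trailing seven daily records per cost center,
# a typical model input.
w = Window.partitionBy("cost_center").orderBy("cost_date").rowsBetween(-6, 0)
features = daily.withColumn("spend_7d", F.sum("total_usd").over(w))

features.write.mode("overwrite").parquet("s3://example-features/cost_features/")
```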
Ensure Data Quality & Governance
Embed validation, lineage, and anomaly detection into pipelines (a validation sketch follows this list).
Contribute to the enterprise data catalog and enforce schema alignment across pods.
Partner with governance teams to implement role-based access, tagging, and metadata standards.
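A minimal sketch of the kind of validation check that can be embedded before publishing; the expected columns are the hypothetical ones from the sketches above:

```python
# Minimal validation sketch: fail fast on schema drift and null keys
# before publishing. Expected columns follow the hypothetical sketches above.
from pyspark.sql import DataFrame

EXPECTED_COLUMNS = {"cost_date", "cost_center", "total_usd"}

def validate(df: DataFrame) -> DataFrame:
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"schema drift: missing columns {missing}")
    null_keys = df.filter("cost_center IS NULL").count()
    if null_keys:
        raise ValueError(f"{null_keys} rows have a null cost_center")
    return df  # safe to publish and register lineage downstream
```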
Mentor & Collaborate Across Pods
Guide junior data engineers, sharing best practices in pipeline design and coding standards.
Participate in pod ceremonies (backlog refinement, sprint reviews) and program-level architecture syncs.
Promote reusable services and reduce fragmentation by advocating platform-first approaches.
Must Have Skills / Requirements
1) Data Engineering experience with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
a. 7 years of experience
2) Proven ability to optimize pipelines for both batch and streaming use cases.
a. 7 years of experience
3) Knowledge of data governance practices, including lineage, validation, and cataloging.
a. 7 years of experience
Nice to Have Skills / Preferred Requirements
1) Proven ability to optimize pipelines for both batch and streaming use cases.
2) Knowledge of data governance practices, including lineage, validation, and cataloging.
3) Strong collaboration and mentoring skills; ability to influence pods and domains.
Soft Skills:
1) Strong collaboration and mentoring skills; ability to influence pods and domains.
Technology Requirements:
1) Data engineering experience with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
2) Strong skills in SQL, Python, PySpark, and scripting for data transformations.
3) Experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).
4) Proven ability to optimize pipelines for both batch and streaming use cases.
5) Knowledge of data governance practices, including lineage, validation, and cataloging.
Education / Certifications
1) None
Interview Process / Next Steps
1) 1-2 rounds - Manager and VP
Additional Notes
Sourcing in Burbank, CA.
Hybrid - 3 days on-site.