- Use modern tools and technologies to build reliable and performant pipelines and infrastructure with extreme scale requirements
- Solve tough problems across the technology spectrum, including designing, creating, and extending data storage, processing, and analytic solutions
- Automate and optimize existing analytic workloads by recognizing patterns of data and technology usage
- Must be able to work in a rapidly changing environment and perform effectively in a sprint-based agile development environment
- 3 years of professional experience and a background in computer science, mathematics, or a similar quantitative field
- Proficiency in Java as well as other relevant languages and frameworks (Spark, Python, SQL, Trino, Glue)
- Demonstrated ability to implement and extend highly performant and resilient data services
- Experience working in cloud environments and familiarity with object stores and other common cloud-native data storage and processing frameworks
- Experience working with distributed systems (Cassandra, Kubernetes, Docker, etc.)
- Extract, Transform, Load (ETL) and streaming experience using Spark, Kafka, Hive, Iceberg, or similar technologies at petabyte scale
- Experience with workflow scheduling/orchestration tools such as Airflow, DBT, etc.
- Ability to take requirements from design through to implementation, both independently and collaboratively within teams
- Ability to work closely with operational teams on deployment, monitoring, and management concerns
- BS/MS in Computer Science, Distributed Systems, Software Engineering, or a related field, and experience designing, building, maintaining, and extending web-scale production systems
- Ability to design and implement effective testing and operations strategies for data pipelines and data products
- Experience working in CI/CD environments
- Experience applying data encryption and data security standards
- Experience using one or more scripting languages (e.g., Python, Bash)
- Experience supporting and working with cross-functional teams in a dynamic environment
- Understanding of modern data engineering approaches and awareness of what leading players are doing
- Experience implementing machine learning and data science workloads is a plus
- Ability to communicate technical concepts to a business-focused audience
- Experience in AdTech is highly desirable
- Most importantly, a sense of humor and an eagerness to learn