Title: Data Engineer
Location: Cupertino, CA or Austin, TX (Hybrid)
Requirements
10 years of software engineering experience, including a strong SQL and data focus
Expertise in languages such as Python, Java, or Scala, and technologies such as Airflow, Spark, Trino, and Kafka
Ability to analyze complex datasets and design solutions with optimal quality and efficiency
Familiarity with SDLC best practices, version control, and CI/CD
Description
You will architect, develop, and test large-scale, efficient solutions that provide Client leadership with the accurate data required to rapidly understand and adapt to changing business conditions.
Design and implement scalable, efficient, and high-quality methods of consuming data from a diverse set of sources with variable quality and predictability
Create data products that enable self-service and predictability for any consumer
Build libraries and frameworks that drive leverage and productivity for the entire team
Optimize and maintain solutions, driving improvements in efficiency, data quality, and operational excellence
We are a rapidly growing team with plenty of interesting technical and business challenges to solve.
We seek a self-starter who learns fast, adapts well to changing requirements, and works with cross-functional teams.
Preferred Qualifications
BS or MS in Engineering or Computer Science
Experience with cloud services such as AWS GCP or Azure for data infrastructure and storage.
Knowledge of infrastructure as code (e.g. Terraform) and container orchestration tools (e.g. Kubernetes).