Role: Flink Data Engineer
Location: Mountain View/San Diego CA (100% Onsite)
Job Type: Contract
No. of interviews: 2 (first a video interview, second an in-person interview)
Must have:
- Expertise in Apache Flink
- Strong programming skills in Java or Scala, plus SQL
- Knowledge of stream processing and batch processing
- Experience working with Apache Kafka
- Proven experience delivering production Apache Flink projects in Java or Scala
- Knowledge of relational databases
- Knowledge of MPP query engines such as AWS Athena
- Knowledge of Hive
- Experience with AWS cloud services
What You'll Do:
- Write new data pipelines.
- Debug and optimize existing data pipelines.
- Analyze pipelines that consume high resources or have long execution times, and optimize as needed.
- Implement automation for pipeline management and set up metrics for observability.
- Gather requirements, work on high-level design, then implement (code) and deliver efficient, scalable data warehouse (DW) solutions in a high-data-growth environment.
- Oversee team activities related to coding, unit testing, and system testing; resolve defects originating during system testing and deploy fixes when needed.
Good to Have:
- Familiarity with big data technologies such as Spark and Hive
- Familiarity with CI/CD and basic DevOps
- Familiarity with shell scripting