Job Summary
We are seeking a skilled and innovative Data Processing Engineer to join our team. In this role, you will be responsible for the design, development, and validation of software for data processing across both cloud and on-premises environments. You will work closely with a team of senior software developers and a technical director, contributing to the design, development, and testing of code. The software applications you build will be used by our internal product teams, partners, and customers.
We are looking for a hands-on lead engineer who is familiar with databases, Python, Spark, Scala, and Java. Any cloud experience is an advantage. You should be passionate about learning, be creative, and be able to work with and mentor junior engineers.
Job Requirements
- Mastery of data modeling, database and data warehouse design, and schema optimization
- Proficiency with big data frameworks (Spark, Hadoop, Kafka, Flink)
- Hands-on experience with ETL and data pipeline orchestration
- Experience designing and building data processing platforms, with an understanding of scale, performance, and fault tolerance
- Expertise in identifying the right tools to deliver product features through research, POCs, and engagement with open-source communities
- Work experience with NoSQL, SQL, and in-memory databases
- Good understanding of best-in-class monitoring processes that enable data applications to meet SLAs
- Work experience with Python and Java for writing data pipelines and data processing layers
- Strong CS fundamentals, Unix shell scripting, and database concepts
- Working expertise in data processing pipeline implementation: Kafka, Spark, NoSQL databases (especially MongoDB; also Cassandra, TSDB), and SQL
- Hands-on experience writing Oracle procedures, packages, and functions
- Awareness of data governance (data quality, metadata management, security, etc.)
- Knowledge of and experience with Kafka, Cassandra, or MongoDB is an added advantage
- Familiarity with GenAI, Agile concepts, Continuous Integration, and Continuous Delivery
- Experience in a Linux environment with containers (Docker & Kubernetes) is an advantage
Education
- 4 to 6 years of experience; must be hands-on with coding
- A Bachelor of Science degree in Computer Science or a Master's degree, or equivalent experience, is required
Required Experience:
IC