Job Description: Kafka Data Engineer

We are seeking a Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms such as Cloudera Data Lake/Lakehouse, Azure Databricks, and Kafka for both batch and stream data pipelines.

Responsibilities:
- Develop, test, and maintain data pipelines (batch and stream) using Cloudera, Spark, Kafka, and Azure services such as ADF, Cosmos DB, and Databricks, along with NoSQL databases (MongoDB); see the illustrative sketch at the end of this posting.
- Apply strong programming skills in Spark with Python or Scala.
- Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available to data consumers.
- Build ETL pipelines for downstream consumers by transforming data as per business requirements.
- Work closely with Data Architects and Data Analysts to align data solutions with business needs and to ensure the accuracy and accessibility of data.
- Implement data validation checks and error-handling processes to maintain high data quality and consistency across data pipelines.
- Bring strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline.

Qualifications:
- 8 years of IT experience, with at least 5 years in data engineering and cloud-based data platforms.
- Hands-on experience with Cloudera (or any Data Lake), Confluent/Apache Kafka, and Azure Data Services (ADF, Databricks, Cosmos DB).
- Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance.
- Expertise in designing and implementing batch and streaming data pipelines using Databricks/Spark.
- Proven ability to create scalable, reliable, and high-performance data solutions with robust data governance.
- Strong collaboration skills to work with stakeholders, mentor junior Data Engineers, and translate business needs into actionable solutions.
- Bachelor's or master's degree in Computer Science, IT, or a related field.
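For illustration only, here is a minimal sketch of the kind of streaming pipeline this role covers: a PySpark Structured Streaming job that reads from Kafka, applies simple validation checks, and appends clean records to a Delta table. The topic name, brokers, schema, and storage paths are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema; the real contract would come from the business.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream from Kafka (broker address and topic are placeholders).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "orders")
       .load())

# Parse the Kafka value payload from JSON into typed columns.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Simple validation check: drop records missing a key or a positive amount.
valid = parsed.filter(col("order_id").isNotNull() & (col("amount") > 0))

# Append to a Delta table, with checkpointing so the sink can recover
# without duplicating output after a failure (paths are placeholders).
query = (valid.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/orders")
         .outputMode("append")
         .start("/mnt/lake/orders"))

query.awaitTermination()
```

A batch backfill can reuse the same parsing and validation logic by swapping readStream/writeStream for read/write, which is one reason Spark is commonly chosen for pipelines that must serve both batch and streaming consumers.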