Salary Not Disclosed
1 Vacancy
We are seeking an experienced Kafka Streaming Platform Engineer to design, develop, and manage real-time data streaming solutions using Apache Kafka. This role is essential in building and optimizing data pipelines, ensuring system scalability, and integrating Kafka with various systems. The engineer will play a key role in maintaining and optimizing Kafka clusters, working with Kafka Connect, Kafka Streams, and other associated technologies.
You will collaborate with cross-functional teams, including data architects, DBAs, and application developers, and leverage your experience with DevOps practices (CI/CD, Docker, Kubernetes) for automated deployments and infrastructure management. Strong problem-solving skills and a solid understanding of event-driven architectures are essential for this role.
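For a sense of the streaming work involved, the sketch below shows a minimal Kafka Streams application that reads raw events from one topic, filters them, and writes the results to another. The broker address, topic names, and filter predicate are illustrative placeholders, not details of this role.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderFilterApp {
    public static void main(String[] args) {
        // Basic configuration; broker address and topic names are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-filter-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw events, keep only the ones of interest, write them downstream.
        KStream<String, String> orders = builder.stream("orders-raw");
        orders.filter((key, value) -> value != null && value.contains("\"status\":\"PAID\""))
              .to("orders-paid");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```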
Design, develop, and maintain data pipelines using Apache Kafka, including real-time streaming applications.
Build highly scalable data solutions that can handle large volumes of data in real time.
Install, configure, monitor, and optimize Kafka clusters, ensuring high availability and efficient resource utilization.
Manage Kafka Connect and ensure smooth integration with external systems (databases, messaging systems, etc.).
Design and implement event-driven architectures using Kafka Streams, Kafka Connect, KSQL, and other Kafka-related technologies.
Leverage pub/sub patterns to enable real-time event processing and improve system responsiveness.
Integrate Kafka with various databases (relational, NoSQL), applications, and other data processing platforms.
Apply Change Data Capture (CDC) techniques using Kafka Connect, Debezium, or custom connectors to enable real-time data sync across platforms (see the connector sketch after this list).
Ensure data ingestion and transformation pipelines are optimized for performance, reliability, and scalability.
Continuously monitor and improve Kafka cluster performance, minimizing latency and maximizing throughput.
Proactively monitor Kafka clusters to identify and resolve any performance or availability issues.
Troubleshoot complex data streaming issues and ensure high uptime and system stability.
Apply software engineering best practices, such as object-oriented programming (OOP), TDD, and design patterns, to ensure code maintainability and quality.
Write efficient, clean, and well-documented code to meet project requirements.
Use DevOps methodologies (CI/CD, Docker, Kubernetes) for automated deployments and infrastructure management.
Automate infrastructure provisioning, Kafka cluster scaling, and deployment workflows.
Create and maintain technical documentation, including data flow diagrams, design documents, and operational procedures.
Ensure that all processes and architecture are well-documented and easily understood by the team.
Collaborate with cross-functional teams, including data architects, DBAs, and application developers, to align on project goals and technical solutions.
Provide technical support and guidance to team members, ensuring high-quality implementations.
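As an illustration of the CDC responsibility above, the sketch below registers a hypothetical Debezium MySQL connector through the Kafka Connect REST API. All hostnames, credentials, topic prefixes, and table names are placeholders, and the exact set of required connector properties (e.g., schema history settings) varies by Debezium version.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterCdcConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition; all hosts, credentials, and tables are placeholders.
        // Additional version-specific Debezium properties are omitted here.
        String config = """
            {
              "name": "inventory-cdc",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "tasks.max": "1",
                "database.hostname": "mysql.example.internal",
                "database.port": "3306",
                "database.user": "cdc_user",
                "database.password": "cdc_password",
                "database.server.id": "5400",
                "topic.prefix": "inventory",
                "table.include.list": "inventory.orders"
              }
            }""";

        // POST the connector definition to the Kafka Connect REST API.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect.example.internal:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```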
8 years of experience with Apache Kafka, including expertise in Kafka Connect, Kafka Streams, and Kafka brokers.
Strong proficiency in Java, with experience in object-oriented programming (OOP).
Shell scripting and Python experience (desirable).
Solid understanding of event-driven architectures, pub/sub patterns, and real-time data processing.
Database knowledge: experience with MongoDB, relational databases, or other data storage solutions.
Experience with cloud platforms such as AWS, Azure, or GCP.
Solid experience with DevOps practices, including CI/CD, Docker, and Kubernetes.
Strong troubleshooting and problem-solving skills, with the ability to identify and resolve complex issues in Kafka clusters.
Schema registry experience and familiarity with related Kafka tooling (e.g., Confluent Schema Registry, Control Center) are a plus (a producer sketch follows this list).
Experience with Change Data Capture (CDC), Debezium, or similar tools is highly desirable.
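On the schema registry point, the sketch below shows a producer configured with Confluent's Avro serializer, which registers and validates schemas against a Schema Registry instance. The broker and registry URLs, topic name, and schema are placeholders, and the example assumes the io.confluent:kafka-avro-serializer dependency is on the classpath.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AvroOrderProducer {
    public static void main(String[] args) {
        // Broker and registry URLs are placeholders for a real deployment.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        // Confluent's Avro serializer registers/looks up schemas automatically.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // A hypothetical record schema for illustration.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":" +
                "[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "order-1001");
        order.put("amount", 42.50);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1001", order));
        }
    }
}
```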
Full Time