About Us:
Why Join Us:
About the Role:
We are building a real-time analytics platform to improve the uptime of connected devices while minimizing maintenance costs. This involves collecting and analyzing large volumes of sensor data and enabling observability and predictive analytics.
We are seeking a highly skilled Senior Data Engineer with expertise in building and maintaining scalable, high-performance data architectures. The ideal candidate will have hands-on experience with modern data engineering tools and technologies such as Python, Scala, ELT processes, Lakehouse architecture, Spark, Databricks, real-time data streaming systems like Kafka and Flink, and observability tools such as the ELK stack, Prometheus, and Grafana. You will be responsible for developing and optimizing large-scale data pipelines, ensuring the availability and performance of data systems, and supporting advanced analytics and machine learning initiatives.
If you are passionate about building systems from scratch, solving complex data challenges, and pushing the boundaries of what is possible with big data, this role is for you.
What You'll Do:
Develop and implement advanced data models to extract insights from complex datasets.
Collaborate with cross-functional teams to devise data-driven solutions for service optimization.
Conduct experiments and analyses to enhance service predictions and outcomes.
Present findings and actionable recommendations to stakeholders, translating complex data insights into clear, understandable concepts.
Mentor junior data scientists and foster a collaborative team environment.
What We're Looking For:
4 years of experience in data engineering with a focus on building large-scale data systems. In-depth knowledge of ELT processes and Lakehouse architecture.
Data Pipelines: Expertise in Python and Scala for data pipeline development. Proven experience with Apache Spark and Databricks for big data processing and analytics.
Hands-on experience with observability tools like Prometheus, Grafana, and the ELK stack/OpenSearch.
Experience with real-time streaming systems such as Kafka and Flink.
Experience working with large datasets and high-throughput data environments.
Familiarity with cloud platforms like Azure, GCP, or AWS.
Strong problem-solving skills and the ability to work in fast-paced environments.
Excellent communication and collaboration skills.
What We Do and Value:
Company Perks & Benefits:
Competitive salary, equity, and spot bonuses.
Paid sick leave.
Latest MacBook Pro for your work.
Comprehensive health insurance.
Paid parental leave.
Flexible work arrangements: work from home or from our vibrant Bengaluru office.
Our Commitment to Diversity and Inclusion:
If you're excited about using data to drive service intelligence and want to be part of a forward-thinking team, we'd love to hear from you!