Role: Data Engineer
Location: Bangalore, Electronic City - Hybrid - 2 days onsite
Strong proficiency in SQL, Python, and PySpark
These are essential skills and must be evaluated thoroughly.
Minimum 2 years of hands-on experience with Databricks (mandatory)
Experience with AWS or GCP is preferred; either platform is acceptable. The above two requirements are non-negotiable, as the client expects candidates to be productive from Day 1.
Skillset Required
5 years of experience in software development with a strong foundation in distributed systems, cloud-native architectures, and data platforms.
Expertise in big data technologies such as Apache Spark and real-time streaming technologies like Apache Kafka.
Strong programming skills in Python, PySpark, and SQL (mandatory)
Advanced knowledge of a major cloud platform (AWS, Azure, or GCP) and its ecosystem of data services (AWS preferred)
Proficiency with Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Strong understanding of advanced data modeling techniques and modern data warehouses.
Ability to design scalable, fault-tolerant, and maintainable distributed systems.
Excellent communication and stakeholder management skills.
Experience in at least one of these domains: SCM / Marketing / Federated Model (good to have)