About the Role
We are looking for a highly skilled and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate will bring strong technical expertise in building scalable data platforms and pipelines using modern technologies such as Python, Scala, AWS Redshift, Terraform, Jenkins, and Docker. This role demands a hands-on professional who thrives in a fast-paced, collaborative environment and is eager to solve complex data problems.
Key Responsibilities
- Design, build, and optimize robust, scalable, and secure data pipelines and platform components.
- Collaborate with data scientists, analysts, and engineering teams to ensure seamless data flow, integration, and availability across systems.
- Develop infrastructure as code using Terraform to automate provisioning and environment management.
- Manage containerized services and workflows using Docker.
- Set up, manage, and optimize CI/CD pipelines using Jenkins.
- Optimize the performance, scalability, and reliability of large-scale data systems on AWS.
- Write clean, modular, and efficient code in Python and Scala to support ETL, data transformation, and processing tasks.
- Support data architecture planning and participate in technical reviews and design sessions.
Must-Have Skills
- Strong hands-on experience with Python, Scala, SQL, and Amazon Redshift.
- Proven expertise in AWS cloud services and ecosystem (EC2, S3, Redshift, Glue, Lambda, etc.).
- Experience implementing Infrastructure as Code (IaC) with Terraform.
- Proficient in managing and deploying Docker containers in development and production environments.
- Hands-on experience with CI/CD pipelines using Jenkins.
- Strong understanding of data architecture, ETL pipelines, and distributed data processing systems.
- Excellent problem-solving skills and ability to mentor junior engineers.
Nice-to-Have
- Experience working in regulated domains such as healthcare or finance.
- Exposure to Apache Airflow, Spark, or Databricks.
- Familiarity with data quality frameworks and observability tools.