Data Engineer
Job Summary
Job Description:
Business Title
Data Engineer
Years of Experience
Minimum 3 years and maximum 7 years.
Job Description
Looking for a hands-on Senior Data Engineer (AWS) with experience designing, building, and maintaining scalable, secure, and high-performance data platforms on AWS.
This is an individual contributor role focused on data pipeline development, cloud data engineering, and analytics enablement. The role requires strong hands-on skills in AWS data services, SQL, and Python, along with experience building reliable batch and streaming data pipelines in a global delivery environment.
Must have skills
Cloud & Data Engineering (AWS)
Strong hands-on experience with AWS data services, including:
- Amazon S3
- AWS Glue
- Amazon Athena
- Amazon Redshift
- Amazon EMR
Experience designing cloud-native data lake and data warehouse architectures
Solid understanding of batch data pipelines and basic exposure to streaming concepts
SQL & Python (Mandatory)
Strong SQL skills (mandatory)
Writing complex queries, joins, aggregations, and transformations
Experience working with large datasets in Redshift / Athena
Strong Python skills (mandatory)
Python for data engineering and ETL use cases
Experience with PySpark / Spark is a strong plus
Good understanding of data modeling, transformations, and performance tuning
Data Processing & Engineering
Hands-on experience with distributed data processing frameworks (Spark / PySpark)
Experience handling structured and semi-structured data
Understanding of schema evolution, data quality checks, and validation logic
DevOps & Platform Basics
Working knowledge of Infrastructure as Code (Terraform and/or CloudFormation)
Basic experience with CI/CD pipelines for data workloads
Understanding of logging and monitoring using CloudWatch
Collaboration
Ability to work closely with architects, DevOps, QA, and business stakeholders
Good communication skills to explain technical concepts clearly
Good to have skills
Exposure to streaming technologies such as Amazon Kinesis or Kafka
Familiarity with Lakehouse and modern data platform patterns
Experience integrating AWS data platforms with BI / reporting tools
Basic knowledge of data governance data quality and metadata concepts
Awareness of AWS cost optimization best practices
Experience working in Agile delivery models with global clients
Exposure to AI / ML
Key responsibilities
Data Engineering & Development
Design and build scalable ETL / ELT pipelines on AWS
Develop SQL-based data transformations and Python-based data pipelines
Implement data ingestion pipelines using AWS services such as S3, Glue, and EMR
Build data models optimized for analytics, performance, and cost efficiency
Platform & Operations
Support deployment and execution of data pipelines across environments
Monitor pipeline performance, reliability, and data quality
Troubleshoot data pipeline issues and perform root-cause analysis
Apply best practices for security, reliability, and scalability
Collaboration & Delivery
Work closely with architects and product teams to understand requirements
Translate business and analytics needs into working AWS data solutions
Contribute to documentation, code reviews, and engineering standards
Education Qualification
Bachelor's or Master's degree, or equivalent
Certification (if any)
1. Certified Solutions Architect / DevOps Professional
2. Snowflake Core
Shift timing: 12 PM to 9 PM and/or 2 PM to 11 PM (IST)
Location:
DGS India - Pune - Kharadi EON Free Zone
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
Required Experience: IC
About Company
Dentsu is an integrated growth and transformation partner to the world’s leading organizations. Founded in 1901 in Tokyo, Japan, Dentsu is now present in approximately 120 countries.