Job Title: AWS Data Engineer
Experience: 5 to 8 Years
Location: Hyderabad (Hybrid)
Employment Type: Contract
Job Summary: We are seeking an experienced AWS Data Engineer to design, build, and maintain
scalable data pipelines and cloud-based data platforms on Amazon Web Services
(AWS). The ideal candidate will have strong expertise in data integration, data
modeling, ETL development, and cloud data architecture, with a focus on
performance, scalability, and security.
Key Responsibilities:
- Design and implement data ingestion, transformation, and storage pipelines using AWS services such as Glue, Lambda, EMR, Redshift, and S3.
- Develop and optimize ETL/ELT workflows to support analytics, data science, and reporting requirements.
- Collaborate with data scientists, analysts, and business teams to understand data needs and ensure reliable data delivery.
- Build and maintain data lake and data warehouse architectures on AWS.
- Work with both structured and unstructured data, ensuring high quality, consistency, and availability.
- Manage data security, governance, and compliance according to organizational standards.
- Implement data validation, quality checks, and monitoring frameworks for pipelines.
- Optimize performance and cost across storage, compute, and data processing layers.
- Leverage Infrastructure as Code (IaC) tools like Terraform or CloudFormation for environment setup and automation.
- Support DevOps and CI/CD practices for automated data pipeline deployments.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 5-8 years of professional experience in data engineering, ETL development, or cloud data solutions.
- Hands-on expertise in AWS data services such as AWS Glue, S3, Lambda, Redshift, EMR, Athena, Step Functions, and Kinesis.
- Strong proficiency in SQL and Python for data processing and automation.
- Solid understanding of data modeling (OLTP and OLAP), data warehousing concepts, and performance tuning.
- Experience with ETL tools (AWS Glue, Talend, Informatica, dbt, or similar).
- Familiarity with big data technologies such as Spark, Hadoop, or PySpark.
- Knowledge of version control (Git) and CI/CD pipelines for data projects.
- Strong understanding of data security, encryption, and IAM policies in AWS.
Required Skills:
Talend, OLAP, DevOps, OLTP, ETL Tools, Data Security, Python, Data Modeling, CI/CD, Spark, ETL Development, SQL, RESTful, Git, AWS, Informatica, Data Processing