We are seeking a highly skilled AWS Data Engineer with a strong background in Red Hat Linux and expertise in building ETL pipelines to support cloud data migration initiatives. The ideal candidate will have hands-on experience with AWS migration services and will play a critical role in designing, developing, and maintaining scalable data integration solutions in a secure and reliable cloud environment.
Key Responsibilities:
- Design, build, and optimize robust ETL pipelines for data migration and integration using AWS services such as AWS Glue, Database Migration Service (DMS), Lambda, and Step Functions.
- Use AWS migration services to migrate on-premises data sources to AWS cloud-based data lakes and databases.
- Administer and troubleshoot Red Hat Linux environments as part of the data pipeline infrastructure.
- Collaborate with data architects, cloud engineers, and stakeholders to define data migration requirements and strategies.
- Ensure data quality, data governance, and security standards are enforced throughout the ETL lifecycle.
- Monitor and maintain data workflows to ensure performance, scalability, and fault tolerance.
- Automate and orchestrate data workflows using scripting (Python, Bash) and cloud-native tools.
- Support and troubleshoot production data pipelines and perform root cause analysis on failures.
Required Skills & Experience:
- 5 years of experience in Data Engineering or ETL development.
- 3 years of hands-on experience with AWS services including:
  - AWS Glue, AWS Lambda, AWS DMS, S3, Redshift, EMR, and Step Functions.
- Proficiency in Red Hat Linux administration, shell scripting, and system troubleshooting.
- Experience in designing and implementing ETL solutions for cloud migration projects.
- Strong proficiency in Python, SQL, and cloud-native data processing frameworks.
- Solid understanding of data warehousing, data lakes, and cloud-native architectures.
- Familiarity with DevOps practices and tools such as CloudFormation, Terraform, or CI/CD pipelines is a plus.