Job Description: AWS Data Engineer.
Cliff W2
In-person interview
Description:
This position is for an AWS Data Engineer (solid hands-on Python scripting experience is a must).
5-7 years of experience with AWS data engineering, ETL/ELT pipelines, and cloud-native data processing.
Looking for proficiency in the following areas:
- AWS data services (Glue, PySpark, Lambda, Step Functions, S3, DynamoDB, Redshift, RDS, CloudWatch, IAM)
- Building and maintaining ETL/ELT data pipelines using AWS Glue, PySpark, Lambda, and orchestration frameworks
- Python scripting for ETL transformations, automation, and data quality checks
- Experience with distributed data processing (Spark/PySpark)
- Experience integrating data from APIs, AWS services, and external systems
- Familiarity with monitoring and logging tools (CloudWatch, Splunk)
- Experience working with COTS or enterprise platforms as data sources
Other Requirements:
- Exposure to Agile methodology
- Good communication, documentation, and teamwork skills
- Ability to support off-hours production releases and provide on-call support for critical data pipelines
Education:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field
- Relevant professional certifications