Senior AWS Engineer


Job Location:

Torrance, CA - USA

Monthly Salary: Not Disclosed
Posted on: 17 days ago
Vacancies: 1 Vacancy

Job Summary

Description:
Key Responsibilities

  • Design and implement scalable ETL/ELT pipelines using AWS Glue with PySpark for large-scale data processing.
  • Develop and maintain serverless integrations using AWS Lambda for event-driven processing and system integrations.
  • Design and optimize Amazon Redshift data warehouse solutions, including stored procedures, performance tuning, and advanced SQL analytics.
  • Lead implementation of vendor file transfer and ingestion solutions using AWS Transfer Family.
  • Implement database migration and replication pipelines using AWS Database Migration Service (DMS).
  • Design orchestration workflows using Apache Airflow or similar workflow orchestration tools.
  • Analyze data quality, transformation logic, and performance using SQL queries and data analysis techniques.
  • Guide development team members on AWS best practices for data engineering, cost optimization, and performance optimization.
  • Collaborate with enterprise architecture, security, and compliance teams to ensure SOX and regulatory requirements are met.
  • Troubleshoot production data pipelines and integration issues across AWS services.
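For candidates unfamiliar with the event-driven integration pattern the role describes, a minimal sketch of a Lambda handler that routes incoming S3 file notifications to downstream Glue jobs is shown below. The prefixes and job names are purely illustrative, not taken from the actual AHFC environment.

```python
import urllib.parse

# Hypothetical prefix-to-job routing table; names are illustrative only.
ROUTES = {
    "vendor/inbound/": "glue-vendor-ingest",
    "dms/cdc/": "glue-cdc-merge",
}

def route_for_key(key: str):
    """Return the downstream Glue job name for an S3 object key, or None."""
    for prefix, job in ROUTES.items():
        if key.startswith(prefix):
            return job
    return None

def handler(event: dict, context=None) -> dict:
    """Minimal Lambda entry point for S3 ObjectCreated notifications."""
    results = []
    for record in event.get("Records", []):
        # S3 URL-encodes object keys in event payloads (spaces become '+'),
        # so decode before matching prefixes.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append({"key": key, "job": route_for_key(key)})
    return {"routed": results}
```

In production the handler would start the Glue job via boto3 rather than just returning the routing decision; keeping the routing logic in a pure function makes it unit-testable without AWS credentials.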

What Will the Person Be Working On
Support AHFC BI Environments

WANTS

  • Bachelor's degree in Computer Science, Information Technology, Data Engineering, or equivalent experience.
  • 7 years of experience building data platforms or data engineering solutions.
  • 5 years of hands-on experience with AWS cloud services.
  • Strong experience with AWS Glue using PySpark for large-scale ETL processing.
  • Hands-on experience developing serverless applications using AWS Lambda.
  • Deep experience with Amazon Redshift, including performance tuning, SQL analytics, and stored procedures.
  • Experience implementing workflow orchestration using Apache Airflow or similar orchestration tools.
  • Experience implementing secure vendor integrations using AWS Transfer Family.
  • Experience designing data migration and replication pipelines using AWS DMS.
  • Strong SQL skills and experience analyzing data using complex queries.
  • Knowledge of security, encryption, IAM policies, and compliance considerations in AWS environments.
  • Experience working in financial services or regulated environments preferred.
  • Strong troubleshooting, problem-solving, and data analysis capabilities.
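As an illustration of the data-quality analysis the requirements above call for, the sketch below mirrors in plain Python the kind of checks typically written in Redshift SQL (duplicate-key counts via GROUP BY ... HAVING, and null counts on required columns). Field names are hypothetical.

```python
from collections import Counter

def quality_report(rows, key_field, required_fields):
    """Summarize two common data-quality issues in a batch of records:
    duplicate business keys and null/blank values in required fields."""
    key_counts = Counter(r.get(key_field) for r in rows)
    duplicates = {k: c for k, c in key_counts.items() if c > 1}
    null_counts = {
        f: sum(1 for r in rows if r.get(f) in (None, ""))
        for f in required_fields
    }
    return {
        "row_count": len(rows),
        "duplicate_keys": duplicates,
        "null_counts": null_counts,
    }
```

The same checks in a warehouse would run as set-based SQL over the full table; a record-level version like this is useful for spot-checking extracts during pipeline troubleshooting.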