Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. AWS offers over 100 fully featured services to millions of active customers around the world, including the fastest-growing startups, largest enterprises, and leading government agencies, to power their infrastructure. Business Product & Operations (BPO) is a diverse team that supports infrastructure and other foundational initiatives that span and support the Sales, Marketing, and Global Services Operations teams within AWS. We are looking for a hands-on Data Engineer with experience developing and delivering data platforms as we build the next iteration of our data-driven ecosystem, with a focus on enhancing and expanding our Phoenix data product. Come join a team at the forefront of transforming how AWS does business with its key customers.
As a Data Engineer in AWS, you will partner with cross-functional teams, including Business Intelligence Engineers, Analysts, Software Developers, and Product Managers, to develop scalable and maintainable data pipelines on both structured and unstructured data. You will play a crucial role in achieving our ambitious objectives, including establishing Phoenix as AWS's premier order management and orchestration engine, transforming BPO into a fully data-driven organization, and centralizing BPO data assets while enabling self-serve analytics.
The ideal candidate has strong business judgment, a good sense of architectural design, excellent writing and documentation skills, and experience with big data technologies (Spark/Hive, Redshift, EMR, and other AWS technologies). This role involves overseeing existing pipelines as well as developing brand-new ones to support key initiatives. You'll work on implementing comprehensive data governance frameworks, increasing adoption of advanced analytics and AI/ML tools, and migrating data and ETL processes to more efficient systems. Additionally, you'll contribute to implementing self-serve analytics platforms, optimizing data pipeline creation processes, and integrating data from multiple sources to support BPO's growing data needs.
The operating environment is fast-paced and dynamic, with a strong team-oriented and welcoming culture. To thrive, you must be detail-oriented, enthusiastic, and flexible. In return, you will gain tremendous experience with the latest big data technologies and exposure to varied use cases that improve process effectiveness, customer experience, and automation.
Key job responsibilities
Design and develop ETL processes using AWS services such as AWS Glue, Lambda, EMR, and Step Functions, aiming to reduce pipeline creation time and improve efficiency
Implement and maintain a comprehensive data governance framework for Phoenix, ensuring data integrity, security, and compliance
Automate data monitoring, alerting, and incident response processes to ensure the reliability and availability of data pipelines, striving for near real-time data delivery
Collaborate with cross-functional teams, including analysts, business intelligence engineers, and stakeholders, to understand data requirements and design solutions that support BPO's transformation into a data-driven organization
Lead the development and implementation of a self-serve analytics platform, empowering both technical and non-technical users to drive their own analytics and reporting
Explore and implement advanced analytics and AI/ML tools to enhance data processing and insights generation capabilities
Stay up to date with the latest AWS data services, features, and best practices, recommending improvements to the data architecture to support BPO's growing data needs
Provide technical support and troubleshooting for issues related to data pipelines, data quality, and data processing, ensuring Phoenix becomes the trusted source of truth for AWS agreements and order management
About the team
About AWS
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
About Sales, Marketing, and Global Services (SMGS)
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.
3 years of data engineering experience
Experience with SQL
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Experience with data modeling, warehousing, and building ETL pipelines
Knowledge of distributed systems as it pertains to data storage and computing
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with Apache Spark / Elastic MapReduce
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
Experience building and operating highly available distributed systems for data extraction, ingestion, and processing of large data sets
Knowledge of professional software engineering and best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence
Experience working on and delivering end-to-end projects independently
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,900/year in our lowest geographic market up to $205,600/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit
This position will remain posted until filled. Applicants should apply via our internal or external career site.