AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new capabilities that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
Each day, thousands of developers make trillions of transactions worldwide on our cloud. Almost all of them are harnessing the power of Amazon Web Services (AWS) to enable innovative applications, websites, and businesses. We store all these transactions for analysis and reporting.
Amazon Web Services is seeking an outstanding Data Engineer to join the AWS Data Lake team. AWS has a culture of data-driven decision-making and demands business intelligence that is timely, accurate, and actionable.
The AWS Data Platform team's mission is to help customers see and understand their use of the AWS Cloud. We collect and process billions of usage transactions every day into actionable information in the Data Lake and make it available to our internal service owners to analyze their business and serve our external customers.
We are truly leading the way to disrupt the data warehouse industry. We are accomplishing this vision by leveraging relational database technologies like Redshift along with emerging Big Data technologies like Elastic MapReduce (EMR) to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services. The successful candidate will shape and build the AWS data lake and supporting systems for years to come.
You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and communication skills to work with business owners to understand data requirements and to build ETL to ingest the data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data lake. Above all, you should be passionate about working with vast data sets and love bringing datasets together to answer business questions and drive growth.
We have a formal mentor search application that lets you find a mentor that works best for you based on location, job family, job level, etc. Your manager can also help you find a mentor or two, because two is better than one. In addition to formal mentors, we work and train together so that we are always learning from one another, and we celebrate and support the career progression of our team members.
Key job responsibilities
Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
Extract huge volumes of structured and unstructured data from various sources (relational/non-relational/NoSQL databases) and message streams, and construct complex analyses.
Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting.
Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.
Drive programs and mentor team members to build scalable solutions aligned with the team's long-term strategy.
About the team
About AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.
5 years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience with SQL
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Experience mentoring team members on best practices
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Experience operating large data warehouses
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.