Amazon Logistics is looking for a customer-focused, analytically and technically skilled Data Engineer to build advanced data and reporting solutions for AMZL leadership and BI teams.
This position will be responsible for building and managing real-time data pipelines, maintaining reporting infrastructure, working on complex automation pipelines leveraging AWS, and building analytical tools to support our growing Amazon Logistics business in Japan.
The successful candidate will be able to effectively extract, transform, load, and visualize critical data to improve the latency and accuracy of existing data pipelines and drive faster analytics.
This individual will work with business, software development, and science teams to understand their data requirements and ensure all teams have reliable data that drives effective business analytics. This role requires an individual with both software development and data warehousing skills.
Key job responsibilities
- Own the design, development, and maintenance of last-mile data sets
- Manipulate and mine data from database tables (Redshift, Apache Spark, SQL)
- Conduct deep-dive investigations into issues related to incorrect or missing data
- Identify and adopt best practices in developing data pipelines and tables: data integrity, test design, build, validation, and documentation
- Continually improve ongoing reporting and data processes in AMZL
- Work with in-house scientists, global supply chain, transportation and logistics teams, and software teams to identify new features and projects
- Identify ways to automate complex processes through AWS
This is an individual contributor role that will partner with internal stakeholders across multiple teams, gathering requirements and delivering complete solutions.
Basic qualifications
- Undergraduate or graduate students graduating in 2027
- Ability to speak, write, and read fluently in both Japanese and English
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.