Amazon Global Selling has been helping individuals and businesses increase sales and reach new customers around the globe. Today, more than 50% of Amazon's total unit sales come from third-party selection. The Global Selling team in China is responsible for recruiting local businesses to sell on Amazon's 19 overseas marketplaces and supporting local Sellers' success and growth on Amazon. Our vision is to be the first choice for all types of Chinese businesses to go global.
The Amazon Global Selling Analytics Intelligence and Technology (AGS-AIT) team serves as the research, automation, and insight arm of the International Seller Service data hub, enabling rapid delivery of growth insights through strategic investments in regional data foundations, self-service business intelligence solutions, and artificial intelligence tools.
The AGS-AIT team is positioned to establish AI-ready foundational capabilities across the AGS organization while maintaining excellence in business insight generation and self-service BI/AI application development.
AGS-AIT is looking for a Data Engineer to collaborate with cross-functional teams to design and develop data infrastructure and analytics capabilities for AGS AI and Automation initiatives.
Key job responsibilities
Design and implement end-to-end data pipelines (ETL) to ensure efficient data collection, cleansing, transformation, and storage, supporting both real-time and offline analytics needs.
Develop automated data monitoring tools and interactive dashboards to enhance business teams' insight into core metrics (e.g., user behavior, AI model performance).
Collaborate with cross-functional teams (e.g., Product, Operations, Tech) to align data logic, integrate multi-source data (e.g., user behavior, transaction logs, AI outputs), and build a unified data layer.
Establish data standardization and governance policies to ensure consistency, accuracy, and compliance.
Provide structured data inputs for AI model training and inference (e.g., LLM applications, recommendation systems), optimizing feature engineering workflows.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, or Datastage
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.