Paragon is a case management system (CMS) used by 180 business teams within Stores to manage their communications and interactions with their customers. These business teams are our enterprise customers. Paragon CMS includes a customizable workbench UI, a lifecycle manager, a routing layer, tenant configuration, case storage, security, and data insights and analytics.

The successful candidate is expected to contribute to all parts of the data engineering and deployment lifecycle, including design, development, documentation, testing, and maintenance. They must possess good verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. You will thrive in our collaborative environment, working alongside accomplished engineers who value teamwork and technical excellence. We're looking for experienced technical leaders.
Key job responsibilities
1. Design and implement automation, and manage our massive data infrastructure to scale for the analytics needs of case management.
2. Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency, and compliance.
3. Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
4. Design and implement scalable and cost-effective data infrastructure to enable Non-IN (Emerging Marketplaces and WW) use cases on our data platform.
5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies (see the minimal sketch after this list).
6. Drive operational excellence within the team and build automation and mechanisms to reduce operational load.
7. Enjoy working closely with your peers in a group of very smart and talented engineers.
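To give a concrete flavor of the work in responsibility 5, below is a minimal PySpark sketch of such an ETL job. It is an illustration only, not code from the Paragon platform: the S3 paths, column names, and table layout are hypothetical placeholders.

    # Minimal ETL sketch (hypothetical paths and schema, for illustration only).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("case-analytics-etl-sketch").getOrCreate()

    # Extract: read raw case events from an assumed S3 location.
    raw = spark.read.json("s3://example-bucket/case-events/")  # hypothetical path

    # Transform: derive an event date and count distinct cases per tenant per day.
    daily = (
        raw.withColumn("event_date", F.to_date("event_timestamp"))
           .groupBy("tenant_id", "event_date")
           .agg(F.countDistinct("case_id").alias("case_count"))
    )

    # Load: write a partitioned Parquet dataset for downstream analytics.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_case_counts/"  # hypothetical path
    )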
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.