Description
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within the Commercial & Investment Bank Payments Technology team, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
- Collaborate with all of JPMorgan's lines of business and functions to deliver software solutions.
- Experiment with, architect, develop, and productionize efficient data pipelines, data services, and data platforms that contribute to the business.
- Design and implement highly scalable, efficient, and reliable data processing pipelines, and perform analysis to generate insights that drive and optimize business results.
- Organize, update, and maintain gathered data to make it actionable.
- Demonstrate basic knowledge of data system components to determine the controls needed to ensure secure data access.
- Add to the team culture of diversity, equity, inclusion, and respect.
Required qualifications, capabilities, and skills
- Formal training or certification in large-scale technology program concepts and 2 years of applied experience in data technologies.
- Proficient programming skills in Java and Python.
- Experience across the data lifecycle, building data frameworks, and working with data lakes.
- Experience with batch and real-time data processing using Spark or Flink.
- Basic knowledge of the data lifecycle and data management functions
- Advanced SQL skills (e.g., joins and aggregations).
- Working understanding of NoSQL databases
- Significant experience with statistical data analysis and the ability to select the appropriate tools for a given analysis.
- Basic knowledge of data system components and the controls they require.
- Good knowledge of, or experience with, infrastructure as code.
Preferred qualifications, capabilities, and skills
- Cloud computing: Amazon Web Services, Docker, Kubernetes, Terraform.
- Experience with big data technologies: Hadoop, Hive, Spark, Kafka.
- Experience in distributed system design and development.