Description

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
- Lead the design, development, and implementation of scalable data pipelines and batch ETL jobs using Python/PySpark on AWS.
- Execute standard software solution design, development, and technical troubleshooting.
- Use infrastructure as code to build applications that orchestrate and monitor data pipelines, programmatically create and manage on-demand compute resources in the cloud, and create frameworks to ingest and distribute data at scale.
- Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support.
- Collaborate proactively with stakeholders, users, and technology teams to understand business and technical requirements and translate them into technical solutions.
- Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance.
- Implement data governance and best practices to ensure data quality and compliance with organizational standards.
- Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner.
- Stay up-to-date with emerging technologies and industry trends to drive innovation and continuous improvement.
- Add to the team culture of diversity, equity, inclusion, and respect.
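The pipeline responsibilities above follow a standard extract-transform-load shape. As a rough illustration only (all records, field names, and validation rules here are hypothetical, and a production job would use PySpark DataFrames reading from and writing to cloud storage rather than in-memory lists), the stage structure of such a job might be sketched as:

```python
# Minimal ETL sketch: extract -> transform -> load.
# Data and field names are hypothetical stand-ins; a real pipeline
# would use PySpark and cloud storage (e.g. S3) at each stage.

def extract():
    # Stand-in for reading raw records from a source system.
    return [
        {"account": "A1", "balance": "1200.50"},
        {"account": "A2", "balance": "-75.00"},
        {"account": "A3", "balance": "bad-data"},
    ]

def transform(rows):
    # Cast types and drop records that fail validation.
    cleaned = []
    for row in rows:
        try:
            cleaned.append({"account": row["account"],
                            "balance": float(row["balance"])})
        except ValueError:
            continue  # in practice, route to a quarantine table instead
    return cleaned

def load(rows, sink):
    # Stand-in for writing to a warehouse table (e.g. Redshift, Snowflake).
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Keeping the stages as separate functions, as above, is what makes a pipeline straightforward to orchestrate, monitor, and troubleshoot stage by stage.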
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5 years of applied experience
- Experience in software development and data engineering with demonstrable hands-on experience in Python and PySpark.
- Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
- Experience with, or good knowledge of, cloud-native ETL platforms such as Snowflake and/or Databricks.
- Experience with big data technologies and services such as AWS EMR, Redshift, Lambda, and S3.
- Proven experience with efficient cloud DevOps practices and CI/CD tools such as Jenkins or GitLab for data engineering platforms.
- Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
- Experience with declarative infrastructure provisioning tools such as Terraform, Ansible, or CloudFormation.
- Strong analytical skills to troubleshoot issues and optimize data processes, working both independently and collaboratively.
- Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery.
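Programmatically creating on-demand compute, as the role calls for, often comes down to assembling a declarative resource specification in code. A minimal sketch, assuming entirely hypothetical cluster parameters: the dict built below resembles the keyword arguments accepted by boto3's EMR `run_job_flow` call, but no AWS call is made here, and the same resources could equally be declared in Terraform or CloudFormation:

```python
# Sketch: build an on-demand EMR-style cluster specification in code.
# All names, instance types, and release labels are hypothetical examples.

def build_cluster_request(name, instance_count, spot=True):
    market = "SPOT" if spot else "ON_DEMAND"
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.15.0",        # hypothetical EMR release
        "Applications": [{"Name": "Spark"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1, "Market": "ON_DEMAND"},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": instance_count, "Market": market},
            ],
            # Terminate the cluster when the job finishes (on-demand use).
            "KeepJobFlowAliveWhenNoSteps": False,
        },
    }

request = build_cluster_request("nightly-etl", instance_count=4)
```

Generating the specification from a function like this, rather than hand-editing it, is what makes cluster sizing and spot/on-demand choices reviewable and repeatable across environments.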
Preferred qualifications, capabilities, and skills
- Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus.
- Familiarity with data visualization tools and data integration patterns.