Description
As a Data Engineer III at JPMorgan Chase within the Applied AI ML & Data Engineering team, you will join our dynamic team to elevate your software engineering career. You will play a crucial role in designing, developing, and maintaining critical technology products and data pipelines across multiple technical areas, supporting the firm's business objectives in a secure, stable, and scalable manner.
As a seasoned member of an agile team, you will be responsible for creating and deploying machine learning models, developing trusted data collection and analytics solutions, and maintaining critical data pipelines. You will collaborate with cross-functional teams to solve complex problems and contribute to a culture of diversity, opportunity, inclusion, and respect.
Job Responsibilities:
- Create, deploy, and support ML models throughout their lifecycle.
- Develop, test, and maintain critical data pipelines and architectures.
- Deploy and scale distributed systems in a cloud environment (AWS).
- Implement software solutions, including design, development, and technical troubleshooting.
- Collaborate with cross-functional teams to solve complex problems.
- Work with Large Language Models and run them efficiently.
- Support review of controls to ensure sufficient protection of enterprise data.
- Advise on and make custom configuration changes in tools to generate products.
Required Qualifications, Capabilities, and Skills:
- 4+ years of experience as a Software Engineer, with at least 1 year in Machine Learning.
- Bachelor's degree (or advanced student status) in Software Engineering, Computer Science, Data Science, or a similar field.
- Strong knowledge of machine learning techniques and programming skills in Python.
- Experience with machine learning libraries such as TensorFlow, PyTorch, and Scikit-learn.
- Strong knowledge of modern back-end technologies and cloud technologies.
- Advanced SQL skills and understanding of NoSQL databases.
- Experience with software engineering best practices such as version control, testing, and CI/CD.
Preferred Qualifications, Capabilities, and Skills:
- Experience with ETL processes and data engineering tools, primarily Databricks.
- Experience with CI/CD pipelines and tools such as Jenkins, Spinnaker, Git/Bitbucket, Terraform, Kubernetes, and Docker.
- Experience in NLP and working with LLMs.
- Experience with streaming technologies such as Kafka.
- Knowledge of and experience with dashboarding tools such as Grafana.