Description
Be part of a dynamic team where your distinctive skills will contribute to a winning culture.
As a Data Engineer III at JPMorgan Chase within Wealth Management, you will be a seasoned member of an agile team tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's objectives.
Job responsibilities
- Supports the review of controls to ensure sufficient protection of enterprise data
- Reviews and makes customizations in one or two tools to generate a product at the business's or customer's request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Contributes to the team's culture by promoting collaboration and innovation
- Designs, develops, and maintains robust data pipelines that automate the extraction, transformation, and loading (ETL) of data from various sources into data warehouses or data lakes (a minimal sketch follows this list)
- Implements scalable and efficient data architectures that support data processing and analytics
- Integrates data from multiple sources, ensuring consistency, accuracy, and reliability
- Collaborates with data scientists and analysts to understand data requirements and provide solutions
- Monitors and troubleshoots data pipeline performance issues and implements solutions
- Documents data pipeline processes, architectures, and workflows for future reference and training
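To make the ETL responsibility above concrete, here is a minimal sketch in PySpark, chosen because Python and Apache Spark both appear in the qualifications below. It is illustrative only, not a JPMorgan implementation; the S3 paths, column names, and transformation steps are hypothetical.

```python
# Minimal, hypothetical ETL sketch in PySpark; paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw records from a landing zone (path is hypothetical).
raw = spark.read.json("s3://example-landing-zone/trades/")

# Transform: normalize types, drop malformed rows, derive a partition date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("trade_date", F.to_date("executed_at"))
)

# Load: write partitioned Parquet into a curated warehouse/lake zone.
(clean.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-curated-zone/trades/"))
```

In practice a pipeline like this would also carry schema validation, monitoring hooks, and incremental rather than overwrite loads, which the responsibilities above call out separately.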
Required qualifications, capabilities, and skills
- Formal training or certification in data engineering disciplines and 3 years of applied experience
- Advanced proficiency in NoSQL databases and SQL, e.g., joins and aggregations (a join-and-aggregation sketch follows this list)
- Proficiency in programming languages such as Java and Python for data processing tasks
- Proficient in Object-Oriented Programming (OOP) concepts, with a strong ability to design and implement robust, reusable, and maintainable code structures across various programming languages
- Extensive experience with cloud platforms, particularly Amazon Web Services (AWS), including EMR, Glue, Lambda, and ECS, to design, deploy, and manage scalable and efficient cloud-based solutions
- Hands-on experience with frameworks such as Apache Spark, leveraging its capabilities for large-scale data processing and analytics to drive efficient and insightful data solutions
- Proven experience using Cucumber and Gherkin for behavior-driven development (BDD)
- Proficiency in Unix scripting, data structures, data serialization formats such as JSON or Avro, and big-data storage formats such as Parquet
- Strong understanding of data architecture, data modeling, and data warehousing concepts
- Ability to integrate data from various sources, ensuring consistency and accuracy
- Significant experience with statistical data analysis and the ability to determine appropriate tools and data patterns for analysis
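As a small illustration of the SQL join-and-aggregation proficiency listed above, the sketch below uses Spark's SQL interface from Python. The accounts and trades tables, their columns, and the sample rows are invented for the example.

```python
# Hypothetical join-and-aggregation sketch using Spark SQL; all data is invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

# Two tiny example tables registered as temporary views.
accounts = spark.createDataFrame([(1, "alice"), (2, "bob")], ["account_id", "owner"])
trades = spark.createDataFrame([(1, 100.0), (1, 250.0), (2, 75.0)], ["account_id", "amount"])
accounts.createOrReplaceTempView("accounts")
trades.createOrReplaceTempView("trades")

# Join each trade to its account and aggregate trade volume per owner.
totals = spark.sql("""
    SELECT a.owner, SUM(t.amount) AS total_amount, COUNT(*) AS n_trades
    FROM trades t
    JOIN accounts a ON a.account_id = t.account_id
    GROUP BY a.owner
""")
totals.show()
```

The same query would run unchanged against registered warehouse tables; the temporary views here only keep the example self-contained.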
Preferred qualifications, capabilities, and skills
- Familiarity with CI/CD pipelines, Docker, and Kubernetes
- Experience provisioning infrastructure with a high-level configuration language such as Terraform
- Experience using Splunk to monitor and analyze system performance
- Experience with Datadog or Dynatrace for real-time monitoring and performance analysis of applications and infrastructure
- Flexibility and eagerness to learn new technologies and skills