Push the limits of what's possible with us as an experienced member of our Software Engineering team.
As a Data Engineer III - Big Data/Java/Python at JPMorgan Chase, you will join the Financial Planning and Analysis (FP&A) team to design and implement the next-generation buildout of a cloud-native, driver-based FP&A platform for JPMC. The FP&A organization aims to provide comprehensive solutions for managing the firm's planning, forecasting, and budgeting. The program includes the strategic buildout of systematic sourcing (a data lake), driver-based forecasting models, and an AI-first approach to deliver digital-first reporting capabilities. The target platform must process 40-60 million transactions and positions daily, calculate forecasts, and provide a slice-and-dice model that gives users a multidimensional picture of plans, forecasts, and budgets.
Job Responsibilities:
- Design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Implement data mining techniques to extract valuable insights from complex data sets.
- Build and optimize data architectures using big data tools and frameworks (e.g., Databricks, Spark, Python).
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Monitor and troubleshoot data pipeline performance and resolve issues as they arise.
- Document data processes, workflows, and best practices.
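The ingest-and-transform pattern in the responsibilities above can be sketched, in miniature, as a plain-Python extract/transform pass. This is an illustrative sketch only; the record fields (`account`, `amount`, `currency`) are hypothetical, and a production pipeline would use Spark or Databricks rather than plain Python.

```python
from collections import defaultdict

def extract(raw_rows):
    """Parse raw CSV-like rows into records, dropping malformed ones
    (a minimal stand-in for a data-quality gate on ingest)."""
    records = []
    for row in raw_rows:
        parts = row.split(",")
        if len(parts) != 3:
            continue  # wrong field count: skip the row
        account, amount, currency = parts
        try:
            records.append({"account": account,
                            "amount": float(amount),
                            "currency": currency})
        except ValueError:
            continue  # non-numeric amount: skip the row
    return records

def transform(records):
    """Aggregate amounts per account -- one toy 'transform' step."""
    totals = defaultdict(float)
    for r in records:
        totals[r["account"]] += r["amount"]
    return dict(totals)

raw = ["A1,100.0,USD", "A1,50.5,USD", "bad_row", "A2,10.0,USD"]
print(transform(extract(raw)))  # {'A1': 150.5, 'A2': 10.0}
```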
Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- Strong hands-on development experience and in-depth knowledge of Java, Python, Spark, Databricks, and related big data technologies.
- Proven experience in building and maintaining data pipelines and ETL processes.
- Strong understanding of infrastructure as code using Terraform.
- Proficiency in SQL and experience with relational and NoSQL databases.
- Experience with data mining, data wrangling, and data transformation techniques.
- Knowledge of data modeling, data warehousing, and data governance best practices.
- Strong problem-solving skills and attention to detail.
- Strong skills with OLAP (cube-like) systems, e.g., Atoti.
- Excellent communication and collaboration skills.
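The SQL and slice-and-dice skills listed above amount to aggregating a fact table along multiple dimensions. A minimal sketch using Python's built-in `sqlite3` (the table, dimensions, and figures here are invented for illustration; a real OLAP engine such as Atoti would serve interactive cubes):

```python
import sqlite3

# In-memory fact table with two dimensions (region, product) and one measure
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forecast (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO forecast VALUES (?, ?, ?)",
    [("EMEA", "loans", 100.0), ("EMEA", "deposits", 40.0),
     ("AMER", "loans", 75.0), ("AMER", "loans", 25.0)],
)

# Slice & dice: roll the measure up along both dimensions
rows = conn.execute(
    "SELECT region, product, SUM(amount) FROM forecast "
    "GROUP BY region, product ORDER BY region, product"
).fetchall()
print(rows)
```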
Preferred Qualifications, Capabilities, and Skills:
- Experience working on big data solutions, with demonstrated ability to analyze data to drive solutions.
- Familiarity with cloud platforms (e.g. AWS) and their big data services is a plus.
Required Experience:
IC