Description

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. Join our innovative Capital Analytics team at JPMorganChase, where we leverage cutting-edge technology to drive data-driven decision-making and enhance business performance. We are seeking a talented and motivated Software/Data Engineer to join our team and contribute to our mission of transforming data into actionable insights.
As a Lead Software Engineer at JPMorgan Chase within the Capital Technology team, you will play a crucial role in designing, developing, and maintaining scalable data processing solutions using Databricks, Python, and AWS. You will collaborate with cross-functional teams to deliver high-quality data solutions that support our business objectives.
Job responsibilities
- Execute creative data-driven software solutions, including design, development, and technical troubleshooting, with the ability to think beyond routine approaches to solve technical problems.
- Design and implement data pipelines and scalable data processing workflows using Python, PySpark, SQL, and Databricks or Spark for large-scale, complex data environments.
- Develop fact and dimension data models for reporting and analytics.
- Write secure, high-quality production code, and review and debug code written by others.
- Identify and automate remediation of recurring issues to improve the operational stability of software applications and systems.
- Lead evaluation sessions with external vendors, startups, and internal teams to assess architectural designs, technical credentials, and applicability within existing systems.
- Lead communities of practice across Software Engineering to promote awareness and adoption of new technologies. Foster a team culture of diversity, opportunity, inclusion, and respect.
- Collaborate with business stakeholders to develop data management strategies, transforming data into insights that drive strategic decisions.
- Ensure data quality, consistency, security, and lineage throughout all stages of data processing and transformation. Support data migration and modernization initiatives that transition legacy systems to cloud-based data warehouses.
- Document data flows, logic, and transformation rules to maintain transparency and facilitate knowledge sharing.
- Troubleshoot and resolve performance and quality issues in both batch and real-time data pipelines. Deliver comprehensive solutions to data challenges by applying appropriate data strategies and tools.
Required qualifications, capabilities, and skills
- Proven experience in data management, ETL/ELT pipeline development, and large-scale data processing.
- Proficiency in SQL, Python, and PySpark.
- Hands-on experience with data lake platforms (Databricks, Spark, or similar).
- Strong understanding of data quality, security, and lineage best practices.
- Experience with cloud-based data warehouse migration and modernization.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and documentation abilities.
- Ability to collaborate effectively with business and technical stakeholders.
Required Experience: IC