Description
You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you.
As a Software Engineer II - Big Data/PySpark at JPMorgan Chase within the Consumer and Community Banking - Customer Identity and Authentication team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics. Implement ETL transformations on big data platforms utilizing NoSQL databases like MongoDB, DynamoDB, and Cassandra.
- Utilize Python for data processing and transformation tasks, ensuring efficient and reliable data workflows. Work hands-on with Spark to manage and process large datasets efficiently.
- Implement data orchestration and workflow automation using Apache Airflow. Apply understanding of Event-Driven Architecture (EDA) and Event Streaming, with exposure to Apache Kafka.
- Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure. Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS.
- Implement AWS enterprise solutions, including Redshift, S3, EC2, Data Pipeline, and EMR, to enhance data processing capabilities.
- Develop and optimize data models to support business intelligence and analytics requirements. Work with graph databases to model and query complex relationships within data.
- Create and maintain interactive and insightful reports and dashboards using Tableau to support data-driven decision-making.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 2 years of applied experience
- Strong programming skills in Python with basic knowledge of Java
- Experience with Apache Airflow for data orchestration and workflow management
- Familiarity with container orchestration platforms such as Kubernetes (EKS) and Amazon ECS. Experience with Terraform for infrastructure as code and cloud resource management
- Proficiency in data modeling techniques and best practices. Exposure to graph databases and experience in modeling and querying graph data
- Experience in creating reports and dashboards using Tableau
- Experience with AWS enterprise implementations, including Redshift, S3, EC2, Data Pipeline, and EMR
- Hands-on experience with Spark and managing large datasets. Experience in implementing ETL transformations on big data platforms, particularly with NoSQL databases (MongoDB, DynamoDB, Cassandra)
- Understanding of Event-Driven Architecture (EDA) and Event Streaming, with exposure to Apache Kafka
Preferred qualifications, capabilities, and skills
- Strong analytical and problem-solving skills with attention to detail
- Ability to work independently and collaboratively in a team environment
- Good communication skills, with the ability to convey technical concepts to non-technical stakeholders
- A proactive approach to learning and adapting to new technologies and methodologies