
Data Engineer III - Databricks Data Modeling and Python

Job Location

Columbus, OH - USA

Yearly Salary

$104,500 - $140,000

Vacancy

1 Vacancy

Job Description

Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team.

As a Data Engineer III at JPMorgan Chase within the Corporate Technology Global Supply Services team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions, in support of the firm's business objectives.

Job responsibilities

  • Implements data solutions to make high-quality data available for analytics and reporting.
  • Collaborates with data analysts, architects, engineers, and business stakeholders to understand data requirements.
  • Ensures data quality and consistency by identifying and resolving data issues and creating data reconciliations.
  • Optimizes data workflows and processing for performance, scalability, and reliability.
  • Monitors data pipelines and proactively addresses issues to minimize downtime and disruptions.
  • Documents data engineering processes, data lineage, and data dictionaries.
  • Stays current with data engineering technologies, best practices, and industry trends.
  • Designs and develops complex data pipelines, ETL (Extract, Transform, Load) processes, and decision-support data procedures (a minimal sketch of such a pipeline follows this list).
  • Adds to a team culture of diversity, equity, inclusion, and respect.
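
To make the pipeline work concrete, here is a minimal sketch of the kind of Databricks/PySpark ETL job these responsibilities describe. It assumes a Databricks environment with Delta Lake; the S3 path, table name, and column names are illustrative assumptions, not details from this posting.

    # Minimal PySpark ETL sketch: extract raw records, transform, load a Delta table.
    # All paths, tables, and columns below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("supplier-spend-etl").getOrCreate()

    # Extract: read raw invoice records from a hypothetical S3 landing zone.
    raw = spark.read.json("s3://example-landing-zone/invoices/")

    # Transform: deduplicate, drop bad rows, and normalize the invoice date.
    clean = (
        raw.dropDuplicates(["invoice_id"])
           .filter(F.col("amount").isNotNull())
           .withColumn("invoice_date", F.to_date("invoice_date"))
    )

    # Aggregate daily spend per supplier for downstream reporting.
    daily_spend = clean.groupBy("supplier_id", "invoice_date").agg(
        F.sum("amount").alias("total_spend"),
        F.count("invoice_id").alias("invoice_count"),
    )

    # Load: write the curated result as a Delta table for analytics and reporting.
    daily_spend.write.format("delta").mode("overwrite").saveAsTable("curated.daily_supplier_spend")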

Required qualifications, capabilities, and skills

  • Formal training or certification in data engineering concepts and 3+ years of applied experience.
  • 3+ years of experience with technologies such as Databricks, PySpark, and AWS is essential, and building ETL pipelines from scratch is a must.
  • 3+ years of experience working with AWS (Lambda, Step Functions, SQS, SNS, API Gateway, Secrets Manager, and storage services) is a must.
  • 3+ years of experience in software engineering and object-oriented programming, with expertise in Python and Terraform.
  • Familiarity with development tools such as Jenkins, Jira, Git/Stash, and Spinnaker.
  • Hands-on experience with frameworks and schedulers such as Apache NiFi, Apache Airflow, and Autosys.
  • Strong understanding of REST API development using FastAPI or equivalent frameworks (a minimal example follows this list).
  • Familiarity with unit testing frameworks such as pytest or unittest.
  • Advanced SQL skills (e.g., joins and aggregations).
  • Extensive experience in statistical data analysis, with the ability to select appropriate tools and identify data patterns for effective analysis, as well as experience across the data lifecycle.
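
As an illustration of the REST API and unit-testing expectations above, here is a minimal FastAPI endpoint with a pytest test; the route, payload shape, and names are illustrative assumptions, not details from this posting.

    # Minimal FastAPI + pytest sketch; run the test with `pytest`.
    # The endpoint and its payload are hypothetical.
    from fastapi import FastAPI
    from fastapi.testclient import TestClient

    app = FastAPI()

    @app.get("/pipelines/{pipeline_id}/status")
    def pipeline_status(pipeline_id: str) -> dict:
        # A real service would look up pipeline metadata; hard-coded for the sketch.
        return {"pipeline_id": pipeline_id, "status": "healthy"}

    client = TestClient(app)

    def test_pipeline_status():
        response = client.get("/pipelines/daily-spend/status")
        assert response.status_code == 200
        assert response.json()["status"] == "healthy"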

Preferred qualifications, capabilities, and skills

  • Data modeling skills.
  • Familiarity with Kubernetes and Kafka (a minimal consumer sketch follows this list).
  • Experience with containers and container-based deployment environments (Docker, Kubernetes, etc.).
  • Exposure to Oracle Database, PL/SQL programming, and Informatica.
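
For the Kafka item above, the following is a minimal consumer sketch using the kafka-python client; the topic name, broker address, and group id are illustrative assumptions, not details from this posting.

    # Minimal Kafka consumer sketch (kafka-python); all names are hypothetical.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "invoice-events",                    # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        group_id="spend-pipeline",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    # Each record could feed a downstream pipeline; printed here for illustration.
    for message in consumer:
        print(message.topic, message.offset, message.value)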

Employment Type

Full-Time

