Data Pipeline Engineer Remote (Ohio, USA) Contract

Acestack


Job Location:

Columbus, NE - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Job Summary

Job Title: Data Pipeline Engineer
Location: Remote (Ohio, USA)
Employment Type: Contract (Long Term)

Prior experience with Cardinal is highly preferred.

Job Overview: We are seeking a highly skilled Data Pipeline Engineer to design, build, and optimize data pipelines in a cloud-based environment. The ideal candidate will have strong expertise in Databricks (Python & PySpark), Azure SQL, and data modeling, along with a solid understanding of ETL/ELT design patterns. You will be responsible for ensuring the performance, reliability, and scalability of our data solutions while collaborating with cross-functional teams.

Key Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines using Databricks and Azure Data Services.
  • Implement robust ETL/ELT frameworks for structured and unstructured data.
  • Build and optimize data models to support analytics, reporting, and machine learning workloads.
  • Collaborate with data architects, analysts, and DevOps teams to define data requirements and solutions.
  • Ensure data quality, governance, and security across all data layers.
  • Leverage Docker and Azure Kubernetes Service (AKS) for automation, containerization, and scalability.
  • Monitor, troubleshoot, and optimize data pipeline performance and resource utilization.
  • Document technical processes, data flows, and best practices.

Must-Have Skills:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 9 years of overall IT experience is required.
  • Strong experience with Databricks (Python, SQL, PySpark).
  • Proficiency with Azure SQL Server (Managed Instance, Azure SQL DB, SQL Server on VMs).
  • Expertise in data modeling, ETL/ELT frameworks, and data architecture patterns.
  • Hands-on experience with Docker and Azure Kubernetes Service (AKS).
  • Experience in performance optimization, pipeline orchestration, and CI/CD integration.

Nice-to-Have Skills:

  • Experience with Azure Data Factory, Synapse Analytics, or Power BI integration.
  • Familiarity with Delta Lake, Parquet, and Data Lakehouse architectures.
  • Exposure to DevOps practices and infrastructure as code (IaC) in Azure.


Key Skills

  • Continuous Integration
  • Docker
  • Jenkins
  • Kubernetes
  • Build Automation
  • S3
  • Redshift
  • Spark
  • CI/CD
  • Kafka
  • Scala