Location: St. Louis, MO
Title: Data Engineer
Experience: 10 Years (Software/Data Engineering)
Job Description
Core Responsibilities
- Drive adoption and implementation of tools and platforms that support both internal and external customers.
- Serve as a key contributor across the full data/software/platform development lifecycle: design, development, documentation, testing, deployment, and support.
- Operate efficiently within highly secure environments, adhering to PII and PCI-DSS standards.
- Support architects and engineers in designing and building scalable, secure, and agile data applications and platforms.
- Design, develop, and optimize batch and real-time data pipelines using Medallion Architecture, preferably on Snowflake or Databricks (a minimal bronze-to-silver sketch follows this list).
- Build modular, test-driven data transformation workflows using dbt, following strict TDD practices.
- Implement CI/CD pipelines via GitLab and Jenkins for automated testing, deployment, and monitoring.
- Embed DataOps principles across all phases of the pipeline lifecycle: testing, monitoring, versioning, collaboration, and automation.
- Create scalable, reusable data models to support analytics and reporting, including Power BI dashboards.
- Develop, optimize, and support real-time streaming pipelines using technologies such as Kafka and Spark Structured Streaming (see the streaming sketch after this list).
- Establish data observability frameworks for monitoring data quality, freshness, lineage, and anomaly detection (see the observability sketch after this list).
- Lead deployments, migrations, and upgrades for data platforms, ensuring minimal downtime and strong mitigation planning.
- Collaborate with cross-functional stakeholders to translate requirements into reliable, high-impact data solutions.
- Maintain comprehensive documentation covering pipeline architecture, processes, standards, and operating procedures.
- Troubleshoot complex data and system issues using advanced analytical and problem-solving skills.
- Communicate clearly and effectively with both technical and non-technical stakeholders.
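To make the Medallion Architecture responsibility concrete, the following is a minimal bronze-to-silver sketch in PySpark on Delta Lake. All paths, table names, and columns are illustrative assumptions, not details taken from this posting.

```python
# Minimal Medallion sketch: land raw data in bronze, then cleanse into silver.
# Paths and names (/mnt/landing/events, bronze.raw_events, ...) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: ingest source data as-is, stamped for audit and replay.
bronze = (
    spark.read.format("json")
    .load("/mnt/landing/events/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.raw_events")

# Silver: cleanse, conform types, and deduplicate for downstream consumers.
silver = (
    spark.table("bronze.raw_events")
    .where(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```

A gold layer would typically follow the same pattern, aggregating silver tables into business-facing models for reporting.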
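For the Kafka and Spark Structured Streaming responsibility, a minimal read-parse-write sketch looks like the following; the broker address, topic, payload schema, and checkpoint path are all assumptions for illustration.

```python
# Sketch: Kafka topic -> Structured Streaming -> Delta sink.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Hypothetical payload schema; Kafka delivers raw bytes, so the JSON value
# must be parsed explicitly.
payload = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("status", StringType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(F.from_json(F.col("value").cast("string"), payload).alias("e"))
    .select("e.*")
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/events")   # enables fault tolerance
    .outputMode("append")
    .toTable("silver.events_stream")
)
```

The checkpoint location is what lets the stream recover its offsets across restarts, which ties into the minimal-downtime expectations elsewhere in this posting.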
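The data observability responsibility can likewise be grounded with a small example of the signals such a framework computes. This is a sketch assuming a Delta table with a timestamp column; the table name, column, and thresholds are illustrative, and a real framework would persist these metrics and alert on anomalies.

```python
# Sketch: basic observability signals (volume, freshness, completeness).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def check_table_health(table: str, ts_col: str, max_lag_min: int = 60) -> dict:
    """Compute row count, staleness, and null rate for one table."""
    m = spark.table(table).agg(
        F.count("*").alias("row_count"),
        (F.unix_timestamp(F.current_timestamp())
         - F.unix_timestamp(F.max(ts_col))).alias("lag_sec"),
        F.avg(F.col(ts_col).isNull().cast("double")).alias("null_rate"),
    ).first()
    return {
        "table": table,
        "row_count": m["row_count"],
        "fresh": m["lag_sec"] is not None and m["lag_sec"] <= max_lag_min * 60,
        "null_rate": m["null_rate"],
    }

# Fail fast (or page on-call) when a core table breaches its freshness SLA.
health = check_table_health("silver.events", "event_ts", max_lag_min=30)
assert health["fresh"], f"Freshness SLA breached: {health}"
```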
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline.
- 10+ years of proven experience in software development, data engineering, or platform engineering.
- Deep hands-on expertise with Databricks, Python, PySpark, and Spark SQL.
- Strong experience building high-performance transformations involving joins, window functions, aggregations, partitioning, and caching strategies (a sketch follows this list).
- Skilled in developing and managing real-time streaming pipelines.
- Experience with Delta Live Tables (DLT) and Databricks Workflows; knowledge of Lakeflow Declarative Pipelines is a plus.
- Strong DevOps background, including CI/CD pipelines (e.g., GitLab, Jenkins), automated testing, and deployment monitoring.
- Proven experience with dbt for modular, testable, and scalable transformation workflows.
- Solid understanding of cloud database ecosystems (AWS, Azure, or GCP).
- Expertise in designing scalable data models and dashboards using Power BI.
- Advanced SQL development and query optimization skills.
- Demonstrated capability in building and managing data observability frameworks.
- Strong track record in planning and executing large-scale deployments, upgrades, and migrations with minimal operational impact.
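As a closing illustration of the transformation skills listed above (joins, window functions, aggregations, partitioning, caching), the following PySpark sketch combines them in one flow. Table and column names are hypothetical.

```python
# Sketch: broadcast join + window function + partitioned aggregate write.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

orders = spark.table("silver.orders")        # hypothetical fact table
customers = spark.table("silver.customers")  # hypothetical small dimension

# Broadcast the small dimension to skip the shuffle on the join.
enriched = orders.join(F.broadcast(customers), "customer_id")

# Cache only because the result feeds two downstream computations below.
enriched.cache()

# Window function: latest order per customer, without a self-join.
w = Window.partitionBy("customer_id").orderBy(F.col("order_ts").desc())
latest = (
    enriched.withColumn("rn", F.row_number().over(w))
    .where(F.col("rn") == 1)
    .drop("rn")
)

# Daily aggregate, partitioned on write so readers can prune by date.
daily = enriched.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.sum("amount").alias("revenue"),
    F.countDistinct("customer_id").alias("buyers"),
)
(
    daily.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("gold.daily_revenue")
)
```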