Who we are
DigiCert is a global leader in intelligent trust, helping organizations protect the digital interactions people rely on every day. From websites and cloud services to connected devices and critical systems, we make sure digital experiences are secure, private, and authentic.
Our AI-powered DigiCert ONE platform brings together certificates, DNS, and lifecycle management to help organizations stay ahead of risk as technology and threats evolve. Trusted by more than 100,000 organizations, including 90% of the Fortune 500, DigiCert helps businesses operate with confidence today while preparing for what's next, including a quantum-safe future.
Job summary
We are looking for a Senior ETL Engineer to design, build, and maintain scalable data pipelines that power analytics, reporting, and data-driven decision making across the organization. This role requires strong SQL and Python expertise, hands-on Databricks experience, and a deep understanding of data and how it is used to drive analytics and insights. You will work closely with data engineers, analytics teams, and business stakeholders to ensure high-quality, reliable, and well-modeled data that elevates our analytics capabilities.
What you will do
- Design, develop, and maintain robust ETL/ELT pipelines using SQL and Python, working with large-scale datasets
- Build scalable and maintainable data workflows leveraging Spark/PySpark and cloud-based data platforms
- Translate business and analytical requirements into high-quality, well-structured datasets
- Apply strong data modeling and data design principles to support downstream analytics and reporting
- Optimize data pipelines for performance, reliability, and cost efficiency
- Ensure data quality, accuracy, and consistency through validation, testing, and monitoring
- Partner closely with analytics and BI teams to enable trusted, analytics-ready data
- Collaborate with analytics teams on reporting and dashboard needs to ensure data accuracy and performance
- Troubleshoot and resolve data pipeline failures and data issues end-to-end
- Contribute to architectural decisions, best practices, and data engineering standards
- Participate in code reviews, technical design discussions, and platform improvements
- Mentor junior engineers and help elevate overall data engineering practices
- Document data pipelines, workflows, and operational processes
What you will have
- 5 years of experience in data engineering, ETL development, or a related role
- Strong proficiency in SQL and Python, with hands-on experience building production-grade ETL pipelines using both
- Hands-on experience with Databricks, Spark, and PySpark
- Experience working with cloud data platforms (AWS, Azure, or GCP)
- Strong understanding of data warehousing concepts, dimensional modeling, and analytics-friendly data design
- A strong sense of data and how it supports analytics, reporting, and business insights
- Experience building batch and incremental data pipelines
- Familiarity with Git version control and CI/CD practices
- Strong problem-solving skills and ability to work independently on complex data challenges
- Clear communication skills and ability to collaborate across engineering and analytics teams
Nice to have
- Experience with streaming or near real-time data pipelines (Kafka, Kinesis, etc.)
- Familiarity with data quality, observability, and monitoring tools
- Experience supporting BI tools and analytics use cases
- Exposure to machine learning pipelines or feature engineering
- Prior experience mentoring or guiding junior engineers
- Experience working in Agile or Scrum environments
Benefits
- Generous time off policies
- Top-shelf benefits
- Education, wellness, and lifestyle support
#LI-SS1
Required Experience:
Senior IC