Job Summary:
We are hiring an experienced Data Engineer to architect, build, and optimize scalable data pipelines and analytics solutions. The ideal candidate must have hands-on experience with data integration frameworks, data modeling, and distributed data processing, along with a Databricks and/or Palantir certification.
This position is hybrid (approximately 95% remote); the candidate must live within 2 hours of a government facility.
Key Responsibilities:
- Design and implement robust scalable data pipelines using modern data engineering frameworks.
- Build and manage ETL/ELT processes, data lakes, and data warehouses.
- Ensure high performance and availability of enterprise data platforms.
- Collaborate with data scientists and business analysts to deliver analytics-ready data.
- Support data governance, data quality, and security compliance.
- Maintain presence on the program for a minimum of one year.
Minimum Qualifications:
- Bachelor's degree in Science, Math, Engineering, or Information Systems.
- 7 years of data engineering experience in production environments.
- Databricks and/or Palantir certification (required).
- Proficient in Python, SQL, and Spark.
- Strong understanding of cloud-based data platforms (AWS, Azure, or GCP).
- Demonstrated hands-on coding experience in building and deploying data pipelines.
Preferred Qualifications:
- Experience with Delta Lake, Apache Airflow, or Kafka.
- Familiarity with data privacy and compliance frameworks (e.g., GDPR, HIPAA).