Role: Database Modeler
Location: Remote (must be commutable to a Fed headquarters or branch office: San Francisco, CA; Chicago, IL; New York, NY; Richmond, VA; Kansas City, MO; Cleveland, OH; Charlotte, NC; Dallas, TX)
Duration: 12 months with possible extension
Type of Employment: W2 Contract
Job Summary:
We are seeking a Database Modeler with strong Data Mesh and Databricks expertise to design scalable data models and data products. This role focuses on building efficient data pipelines, ensuring data quality, and enabling domain-driven data architecture.
Key Responsibilities:
- Design and implement data models and data products within a Data Mesh architecture
- Build and optimize data pipelines using Databricks and Spark
- Implement data validation and quality checks to ensure data integrity
- Collaborate with data engineers, architects, and business stakeholders
Required Skills:
- Strong experience with Databricks, Spark SQL, and Python
- Experience with Data Mesh architecture and data product modeling
- Expertise in designing and optimizing data pipelines
- Strong problem-solving and communication skills
Preferred:
- Experience with Data Vault methodology and tools like ER/Studio