About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification, who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
We are seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization. You will work at the intersection of data engineering, identity resolution, privacy, and platform architecture. This is an individual contributor role where you will define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Identity Services:
- Design and maintain a scalable identity resolution platform
- Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources
- Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable
- Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets
- Build and operate batch and streaming pipelines using modern data stack tools
- Create clear documentation standards and runbooks for identity and governance systems
- Data Governance & Trust:
- Own data governance foundations, including data lineage, quality checks, schema enforcement, and access controls
- Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)
- Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)
- Establish monitoring and alerting for data quality, freshness, and integrity
What we're looking for:
- At least 5 years of data engineering experience, with a proven track record of building data infrastructure using Spark with Scala
- Experience delivering significant technical initiatives and building reliable, large-scale services
- Experience delivering APIs backed by relationship-heavy datasets
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Successful design and implementation of scalable and efficient data infrastructure
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
Required Experience:
Staff IC