Job Title: Data Engineer
Job Location: Juno Beach, FL
Job Type: Long-term Growth
Job Description:
Data Engineer - HRIT & Corporate Services IT
Information Technology Draft - v0.1
About the Role
The Data Engineer is a foundational role within the HR IT and Corporate Services IT organization, responsible for building and sustaining the data infrastructure that powers analytics, automation, and AI-driven capabilities across the enterprise. This role sits at the intersection of HR/Corporate Services systems and the Google Cloud Platform data stack, ensuring that data flowing from operational platforms is clean, structured, governed, and ready to support intelligent applications.
As the organization advances its AI activation agenda - including Gemini Enterprise, Vertex AI, and the HR Services 2027 initiative - the Data Engineer ensures the semantic layer above core systems remains coherent and trustworthy. Without this foundation, automation becomes brittle and AI outputs become unreliable.
Key Responsibilities
Data Pipeline Development & Maintenance
Design, build, and maintain data pipelines that move and transform data from source systems (SAP SuccessFactors, ServiceNow HRSD, SAP S/4HANA, Fieldglass, SAP IAS) into Google Cloud Platform (BigQuery, Cloud Storage, Pub/Sub)
Ensure pipelines are reliable, observable, and recoverable - with automated alerting for failures or data anomalies
Manage data ingestion patterns for both batch and near-real-time use cases
Semantic Layer & Data Modeling
Define and maintain consistent semantic definitions for core HR and Corporate Services entities: employee, position, organizational unit, cost center, pay grade, job classification, and related constructs
Build and govern dimensional models and data marts that serve reporting, self-service analytics, and AI grounding use cases
Resolve definitional conflicts across systems (e.g., where SuccessFactors and SAP S/4HANA represent the same concept differently)
AI & Automation Enablement
Prepare, label, and structure datasets to support Retrieval-Augmented Generation (RAG) patterns and LLM grounding for Gemini and Vertex AI applications
Partner with solution architects and AI practitioners to ensure data contracts between pipelines and AI models are well-defined and stable
Support agent and automation use cases (ServiceNow Now Assist, Google Agentspace) with clean, structured context data
Data Quality & Governance
Implement data quality rules, validation checks, and monitoring across the HRIT data estate
Identify and escalate data integrity issues before they surface in dashboards, reports, or AI outputs
Support audit and compliance requirements by maintaining data lineage documentation and access controls
Collaborate with HR and Corporate Services data owners to establish and enforce data standards
Analytics & Self-Service Enablement
Build and maintain curated datasets and semantic models in BigQuery/Looker that enable HR leaders, Corporate Services leaders, and HRIT team members to access trusted data without custom IT requests
Partner with reporting and analytics consumers to understand requirements and translate them into reusable data products
Platform & Integration Support
Collaborate with SAP CPI and integration teams to understand data contracts and transformation logic at system boundaries
Contribute to data architecture decisions as part of the broader REWIRE and Google stack activation program
Support data migration efforts (e.g., S/4 transformation, SuccessFactors module expansions) with pipeline and model changes
Required Skills & Experience
Technical
3 years of experience in data engineering, data integration, or a closely related role
Proficiency in SQL and Python for data transformation and pipeline development
Experience with cloud data platforms - Google Cloud Platform (BigQuery) preferred; Azure or AWS considered
Familiarity with data pipeline frameworks (e.g., Apache Beam, dbt, Dataflow, or equivalent)
Working knowledge of REST APIs and data exchange patterns (JSON, XML, flat file)
Understanding of dimensional modeling, data warehousing concepts, and semantic layer design
Domain
Exposure to HR or enterprise business systems (SAP SuccessFactors, ServiceNow, SAP S/4HANA, or similar) strongly preferred
Ability to work with business stakeholders to translate data needs into technical requirements
Mindset
Treats data as a product - thinks about consumers, reliability, and usability, not just pipeline execution
Comfortable operating in an environment where source systems are complex and definitions are inconsistent
Proactive about data quality - finds breaks before users do
Preferred Skills
Experience with SAP CPI or other middleware/integration platforms
Familiarity with dbt for data transformation and semantic modeling
Exposure to LLM grounding, RAG patterns, or AI/ML data preparation
Experience supporting regulated industries (energy, finance, healthcare) where auditability matters
Google Cloud Professional Data Engineer certification (or in progress)