Roles and Responsibilities
In this role, you will:
Architect the data product to be scalable, performant, and well-integrated with the GridOS Data Fabric.
Lead the design and implementation of data ingestion pipelines for real-time and batch data.
Design and implement data models and schemas that support optimal data organization, consistency, and performance.
Ensure that schema design and query performance are optimized to handle increasing data volumes and complexity.
Ensure data governance, security, and quality standards are met.
Monitor the performance of data pipelines, APIs, and queries, and optimize for scalability and reliability.
Collaborate with cross-functional teams to ensure the data product meets business and technical requirements.
Design APIs (REST, GraphQL, etc.) for easy, secure access to the data.
Participate in data domain technical and business discussions relative to future architecture direction.
Gather and analyse data, and develop architectural requirements at the project level.
Research and evaluate emerging data technology, industry, and market trends to assist in project development activities.
Coach and mentor team members.
Education Qualification
Bachelor's Degree in Computer Science or a STEM major (Science, Technology, Engineering, and Math) with advanced experience.
Desired Characteristics
Experience as a Data Product Architect or Data Engineer with a focus on building data products and APIs.
Experience in designing and implementing data ingestion pipelines using technologies like Kafka or ETL frameworks.
Hands-on experience in designing and exposing APIs (REST, GraphQL, gRPC, etc.) for data access and consumption.
Expertise in data modeling, schema design, and data organization to ensure data consistency, integrity, and scalability.
Experience with query optimization techniques to ensure fast and efficient data retrieval while balancing performance with data complexity.
Strong knowledge of data governance practices, including metadata management, data lineage, and compliance with regulatory standards (e.g., GDPR).
Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure) and leveraging cloud-native data services (e.g., S3, Redshift, BigQuery, Azure Data Lake).
In-depth knowledge of data security practices (RBAC, ABAC, encryption, authentication) to ensure secure data access and protection.
Experience working with data catalogs and data quality practices, and implementing data validation techniques.
Familiarity with data orchestration tools (e.g., Apache Airflow, NiFi).
Expertise in optimizing and maintaining high-performance APIs and data pipelines at scale.
Strong understanding of data federation and data virtualization principles for seamless data integration and querying across multiple systems.
Familiarity with microservices architecture and designing APIs that integrate with distributed systems.
Excellent communication skills, with the ability to work effectively with cross-functional teams, including data engineers, product managers, and business stakeholders.
Ability to consult with customers on the alignment of outcomes and desired technical solutions at an enterprise level.
Ability to analyse, design, and develop a software solution roadmap and implementation plan based upon the current vs. future state of the business.
We value building teams diverse in thought and experiences. If you like what you've read and are excited by this opportunity but don't meet all the requirements, we encourage you to make the jump and apply anyway.
Relocation Assistance Provided: No
Required Experience:
Staff IC
Full-Time