It's fun to work in a company where people truly BELIEVE in what they're doing!
We're committed to bringing passion and customer focus to the business.
Kyriba is a global leader in liquidity performance that empowers CFOs, Treasurers, and IT leaders to connect, protect, forecast, and optimize their liquidity. As a secure and scalable SaaS solution, Kyriba brings intelligence and financial automation that enables companies and banks of all sizes to improve their financial performance and increase operational efficiency. Kyriba's real-time data and AI-empowered tools empower its 3,000 customers worldwide to quantify exposures, project cash and liquidity, and take action to protect balance sheets, income statements, and cash flows. Kyriba manages more than 3.5 billion bank transactions and $15 trillion in payments annually and gives customers complete visibility and actionability so they can optimize and fully harness liquidity across the enterprise and outperform their business strategy. For more information, visit.
Position Summary:
We are seeking a versatile and innovative Data Engineer to design, build, and maintain scalable data pipelines and infrastructure that support analytics, reporting, Machine Learning (ML), Generative AI (GenAI), Business Intelligence (BI), and automation initiatives. The ideal candidate will have practical experience with cloud data platforms and big data processing, and a keen interest in enabling advanced analytics and automation throughout the organization.
Key Responsibilities:
Data Engineering
Design, implement, and optimize robust ETL pipelines using Databricks and AWS S3 to support analytics, ML, BI, and automation use cases (see the illustrative sketch after this list).
Build and maintain data architectures for structured and unstructured data, ensuring data quality, lineage, and security.
Integrate data from multiple sources, including external APIs and on-premises systems, to create a unified data environment.
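For illustration only, the sketch below shows the kind of Databricks ETL job this role involves: reading raw files from S3, applying basic cleansing, and writing a curated Delta table. The bucket, column, and table names are hypothetical.

```python
# Hypothetical sketch of an ETL job of the kind described above: read raw
# transaction files from an S3 bucket, apply basic cleansing, and write a
# curated Delta table. All bucket, column, and table names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate_transactions").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/transactions/")   # assumed source path
)

curated = (
    raw
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["transaction_id"])             # assumed key column
    .filter(F.col("amount").isNotNull())
)

# Write as a Delta table so downstream ML, BI, and automation jobs share one source.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.curated_transactions")  # assumed target table
)
```

Writing the curated output as a managed Delta table is one way to give the ML, BI, and automation workloads described below a single governed source.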
Machine Learning & GenAI
Collaborate with Data Scientists and ML Engineers to deliver data sets and features for model training, validation, and inference.
Develop and operationalize ML/GenAI pipelines, automating data preprocessing, feature engineering, model deployment, and monitoring (using tools such as Databricks MLflow; a minimal sketch follows this list).
Support the deployment and maintenance of GenAI models and LLMs (Large Language Models) in production environments.
Stay up to date on emerging ML and GenAI technologies and best practices.
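As a rough illustration of the MLflow-based operationalization mentioned above, the hypothetical sketch below trains a simple scikit-learn model and logs its parameters, metrics, and artifact to an MLflow run. The model type, parameters, and dataset split are assumptions, not part of the role description.

```python
# Hypothetical MLflow sketch: train a baseline scikit-learn model and log
# parameters, metrics, and the model artifact so it can be deployed and
# monitored later. Names and parameter values are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_and_log(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    with mlflow.start_run(run_name="baseline_rf"):
        params = {"n_estimators": 200, "max_depth": 8}
        model = RandomForestClassifier(**params, random_state=42)
        model.fit(X_train, y_train)

        accuracy = accuracy_score(y_test, model.predict(X_test))

        # Track everything needed to reproduce and later serve the model.
        mlflow.log_params(params)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.sklearn.log_model(model, artifact_path="model")

    return model
```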
Business Intelligence & Reporting
Work with BI Developers and Analysts to provide clean, reliable data sources for reporting and dashboarding via QlikView.
Enable data access and transformation for self-service BI and ensure BI solutions are scalable and performant.
Facilitate the integration of advanced analytics and ML/GenAI outputs into BI and reporting solutions.
Automation
Partner with Automation Specialists to design and implement data-driven automated workflows using MuleSoft and other platforms.
Develop and maintain automation scripts and integrations to streamline data flows, improve operational efficiency, and reduce manual effort (an illustrative sketch follows).
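MuleSoft flows are typically configured in Anypoint Studio rather than hand-written, so purely as an illustration of the data-driven automation described above, the hypothetical Python sketch below forwards newly curated records to a downstream REST endpoint. The endpoint, token, query, and watermark handling are placeholders.

```python
# Hypothetical automation sketch: pick up recently curated records and forward
# them to a downstream workflow over REST. Endpoint, token, and query are
# assumptions; incremental watermark logic is omitted for brevity.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("notify_downstream").getOrCreate()

new_rows = spark.sql(
    "SELECT transaction_id, amount FROM analytics.curated_transactions LIMIT 100"
)

payload = [row.asDict() for row in new_rows.collect()]

response = requests.post(
    "https://example.internal/api/workflows/trigger",  # assumed endpoint
    json={"records": payload},
    headers={"Authorization": "Bearer <token>"},        # assumed auth
    timeout=30,
)
response.raise_for_status()
```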
Governance & Collaboration
Implement data governance, security, and compliance best practices across all data assets.
Document data flows, pipelines, and architectures for technical and business stakeholders.
Collaborate across teams (data science, BI, business, IT) to align data engineering efforts with strategic objectives.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
Proven experience (3 years) as a Data Engineer or in a similar role.
Expertise in Databricks and AWS S3; solid knowledge of big data and cloud data platforms.
Strong programming skills in Python (preferred for ML/automation), SQL, and/or Scala.
Experience building data pipelines for analytics, ML, BI, and automation use cases.
Familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch), MLOps tools (Databricks MLflow, AWS SageMaker), and GenAI libraries (e.g. HuggingFace, LangChain) is highly desirable.
Experience supporting BI/reporting solutions preferably with QlikView or similar BI tools.
Hands-on experience with automation/integration platforms such as MuleSoft is a strong plus.
Understanding of data governance, security, quality, and compliance.
Excellent communication, collaboration, and problem-solving skills.
Nice to Have:
Experience deploying and operationalizing GenAI/LLM models at scale.
Experience with API development and integration.
Knowledge of DevOps or CI/CD pipelines for data solutions.
Relevant AWS, Databricks, or QlikView certifications.
Full-Time