About Us:
HugoBank, a leader in digital banking based in Pakistan, is a subsidiary of a renowned Singapore-based consortium. As pioneers in the financial sector, our engagement with the community goes beyond just business. We are powered by a vision to offer innovative financial solutions while fostering community growth and upholding sustainable development principles.

About the role:
As a Platform Data Engineer at HugoBank, you will be responsible for creating and managing a comprehensive data infrastructure that supports all data initiatives, including Transaction Categorization, Spending Pattern Analysis, Credit Risk Assessment, and Report Generation. Your role involves building a scalable and secure platform that enables efficient data processing, storage, and access for various data-driven projects across the organization.

Responsibilities:
- Architect Data Platform: Design and implement a robust data platform that supports diverse data needs for machine learning models, analytics, and reporting.
- Scalable Data Pipelines: Create and maintain scalable data pipelines to handle data ingestion, processing, and distribution for multiple data initiatives concurrently.
- Data Integration: Integrate data from various sources, ensuring that data is consistent, reliable, and ready for analysis.
- Infrastructure Management: Manage cloud-based and on-premise data infrastructure, optimizing for performance, cost, and scalability.
- Data Security & Compliance: Implement security measures to protect sensitive data and ensure compliance with data protection regulations.
- Collaboration: Work closely with Data Scientists, Engineers, Analysts, and other stakeholders to understand their data requirements and provide the necessary data infrastructure.
- Performance Monitoring: Continuously monitor and optimize data systems to ensure high performance and availability.
- Disaster Recovery: Implement backup and disaster recovery solutions to prevent data loss.
- Documentation & Best Practices: Document the data architecture and establish best practices for data management, usage, and quality.

Requirements:
- Big Data Technologies: Proficiency with big data tools such as Hadoop, Spark, and Kafka for handling large-scale data processing.
- Programming: Strong programming skills in languages such as Python, Java, or Scala.
- Cloud Platforms: Experience with cloud services such as AWS, Azure, or GCP, particularly their data-related services.
- SQL/NoSQL Databases: In-depth knowledge of SQL and NoSQL database design and optimization.
- Data Pipeline Tools: Experience with data pipeline and workflow management tools such as Apache Airflow, NiFi, or Luigi.
- Security & Compliance: Knowledge of data security principles and regulations (e.g., GDPR, CCPA) to ensure compliance.
- DevOps Tools: Familiarity with containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
- Data Warehousing: Understanding of data warehousing concepts and experience with platforms such as Redshift, BigQuery, or Snowflake.
- Problem-Solving: Strong problem-solving skills and the ability to work in a fast-paced and evolving environment.
Full Time