Role: Data Engineer
Work Model: Hybrid
Level: Mid-Senior
Introduction
The ideal candidate will use their passion for big data and analytics to provide insights to the business across a range of topics. They will be responsible for conducting both recurring and ad hoc analysis for business users. As a Data Engineer, you will play a critical role in the development and maintenance of our data infrastructure. You will work closely with cross-functional teams to ensure data availability, quality, and accessibility for analysis.
Position Outputs/Competencies
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Design, develop, and maintain data pipelines and ETL processes.
- Implement and maintain data warehousing and data storage solutions.
- Optimize data pipelines for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data infrastructure issues.
- Stay current with emerging technologies and best practices in data engineering.
- Systematically design ETL and data pipeline solutions in line with business user specifications.
- Develop and implement ETL pipelines aligned to the approved solution design.
- Ensure data governance and data quality assurance standards are upheld.
- Deal with customers in a customer-centric manner.
- Demonstrate effective self-management and teamwork.
Minimum Qualification and Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer in a professional setting.
- Proficiency in data engineering technologies and programming languages (e.g. SQL, Python, Scala, Java).
- Strong knowledge of data storage, database design, and data modelling concepts.
- Experience with ETL tools, data integration, and data pipeline orchestration.
- Familiarity with data warehousing solutions (e.g. Snowflake, Redshift).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- 5-10 years' experience in designing and developing data warehouses according to the Kimball methodology.
- Adept at design and development of ETL processes.
- SQL development experience, preferably with SAS Data Studio, plus AWS experience.
- Ability to ingest and output CSV, JSON, and other flat file types, as well as any related data sources.
- Proficiency in Python or R, or a willingness to learn.
- Experience within Retail, Financial Services, and Logistics environments.
- Experience with Redshift technologies.
- Understanding of data security and compliance best practices.
- Relevant certifications (e.g. AWS Certified Data Analytics, Google Cloud Professional Data Engineer).
Benefits
- Centrally Located Offices
- Retirement & Risk Benefits
- Flexible Working Arrangements
- Tenure Bonuses
- Professional Development
- Employee Discounts
- Medical Insurance Options
- Employee Wellness
- On-Site Canteen