We are seeking an experienced Senior Data Engineer with a strong background in designing and building scalable data architectures. You will play a key role in creating and optimising our data pipelines, improving data flow across our organisation, and working closely with cross-functional teams to ensure data accessibility and quality. This role requires deep knowledge of the big data ecosystem and data lake concepts, as well as hands-on expertise in modern big data technologies such as advanced SQL, Spark, Flink, Trino, Iceberg, and Snowflake.

Data Pipeline Development:
* Design, build, and maintain scalable ELT processes using Spark, Flink, Snowflake, and other big data frameworks.
* Implement robust, high-performance data pipelines in cloud environments.
* Deep, hands-on knowledge of at least one programming language such as Python, Java, or Scala.
* Expertise in advanced SQL and knowledge of BI/Analytics.

Datalake & Datawarehouse Architecture:
* Develop and maintain efficient data lake solutions.
* Ensure data lake reliability, consistency, and cost-effectiveness.
* Develop data models and schemas optimised for performance and scalability.
* Experience with modern data warehouses such as Iceberg and Snowflake.

DevOps & CI/CD:
* Comfortable with basic DevOps principles and tools for CI/CD (Jenkins, GitLab CI, or GitHub Actions).
* Familiar with containerisation and orchestration tools (Docker, Kubernetes).
* Familiarity with Infrastructure as Code (Terraform, CloudFormation) is a plus.

Performance Tuning & Optimisation:
* Identify bottlenecks, optimise processes, and improve overall system performance.
* Monitor job performance, troubleshoot issues, and refine long-term solutions for system stability.

Collaboration & Leadership:
* Work closely with data scientists, analysts, and stakeholders to understand data needs and deliver solutions.
* Mentor and guide junior data engineers on best practices and cutting-edge technologies.
Requirements:
* At least 5 years of hands-on experience developing and building data pipelines on cloud and hybrid infrastructure for analytical needs.
* Experience working with cloud-based data warehouse solutions (Snowflake, SingleStore, etc.), along with expertise in SQL and advanced SQL.
* Experience designing and building dimensional data models to improve the accessibility, efficiency, and quality of data.
* Bachelor's degree or equivalent in data engineering, computer science, or a similar field.
* Strong expertise in modern cloud data warehouses and data lakes, with implementation experience on any major cloud platform (preferably AWS).
* Expertise working with data at scale (petabytes) using a big data tech stack and programming languages such as Python, Java, or Scala.
* Database development experience with relational or MPP/distributed systems such as Snowflake and SingleStore.
* Hands-on experience with distributed computing in large-scale data environments.
* Excellent problem-solving and critical-thinking skills, with the ability to evaluate and apply new technologies quickly.
* Experience working with global collaborators, with the ability to influence decision-making.