Position Overview:
We are seeking a highly skilled Senior Data Engineer (Python, PySpark & Azure Databricks) to join our dynamic data engineering team. This role focuses on building scalable, high-performance data pipelines using Python and PySpark within the Azure Databricks environment. While familiarity with broader Azure services is valuable, the emphasis is on distributed data processing and automation using modern big data frameworks. Prior experience in the Property & Casualty (P&C) insurance industry is a strong plus.

Responsibilities:

Data Pipeline Development & Optimization:
- Design, develop, and maintain scalable ETL/ELT data pipelines using Python and Azure Databricks to process large volumes of structured and semi-structured data.
- Implement data quality checks, error handling, and performance tuning across all stages of data processing.

Data Architecture & Modeling:
- Contribute to the design of cloud-based data architectures that support analytics and reporting use cases.
- Build and maintain data models that adhere to industry best practices and support business needs.
- Work with Delta Lake Bronze/Silver/Gold data architecture patterns and metadata management (see the illustrative sketch below).

Integration (Azure):
- Integrate and orchestrate data workflows using Azure Data Factory, Azure Blob Storage, and Event Hub where applicable.
- Optimize cloud compute resources and manage cost-effective data processing at scale.

Collaboration & Stakeholder Engagement:
- Partner with data analysts, data scientists, and business users to understand evolving data needs.
- Collaborate with DevOps and platform teams to ensure reliable, secure, and automated data pipelines.
- Work in Agile and contribute to sprint planning, demos, and retrospectives.

Documentation & Best Practices:
- Maintain clear and comprehensive documentation of code, pipelines, and architectural decisions.
- Adhere to internal data engineering standards and promote best practices for code quality, testing, and CI/CD.
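For context, the Delta Lake Bronze/Silver/Gold work described above typically resembles the minimal PySpark sketch below. This is an illustrative example only, not part of the role description: the storage paths, dataset name (claims), and column names are hypothetical, and a production pipeline would add orchestration, schema enforcement, and monitoring.

```python
# Minimal Bronze -> Silver Delta Lake step in PySpark on Azure Databricks.
# All paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Bronze: land raw semi-structured data as-is, tagged with ingestion metadata.
raw = (spark.read.format("json")
       .load("abfss://landing@examplestorage.dfs.core.windows.net/claims/"))
(raw.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta").mode("append")
    .save("/mnt/datalake/bronze/claims"))

# Silver: apply basic data quality checks (deduplication, null filtering) and typing.
bronze = spark.read.format("delta").load("/mnt/datalake/bronze/claims")
silver = (bronze
          .dropDuplicates(["claim_id"])
          .filter(F.col("claim_id").isNotNull())
          .withColumn("loss_date", F.to_date("loss_date")))
(silver.write.format("delta").mode("overwrite")
       .save("/mnt/datalake/silver/claims"))
```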
Qualifications :
Remote Work :
No
Employment Type :
Full-time