As a Python Data Engineer, you will:
- Analyze and interpret business documents, particularly in the Finance and Risk domain, focusing on corporate credit risk analytical models.
- Collaborate with stakeholders to translate business requirements into technical specifications.
- Design, build, and maintain scalable data pipelines and workflows using Python, PySpark, and Databricks.
- Convert SAS code for analytical assets to Python or PySpark, ensuring accuracy and efficiency (a minimal conversion sketch follows this list).
- Conduct thorough testing, validation, and troubleshooting to meet performance and functionality standards.
- Deploy solutions using Azure Pipelines and monitor their performance post-deployment.
- Develop a comprehensive migration plan, including timelines, resource allocation, and risk management.
- Collaborate with cross-functional teams, including analysts, data scientists, SAS developers, and engineers, to ensure successful program migration and integration.
- Provide training and support on the new Databricks, Python/PySpark-based solutions.
- Maintain detailed documentation of migration processes, code changes, testing procedures, and performance metrics.
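To make the conversion work concrete, here is a minimal, hypothetical sketch of what translating a SAS-style aggregation (for example, a PROC MEANS over a risk segment) into PySpark on Databricks can look like. The dataset path, column names, and grouping key are illustrative assumptions, not part of any actual model.

```python
# Hypothetical sketch: a SAS-style aggregation (e.g., PROC MEANS with
# CLASS segment; VAR exposure default_flag) rewritten in PySpark.
# The path and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas_migration_sketch").getOrCreate()

# Where SAS would read a LIBNAME dataset, Databricks typically reads
# Parquet/Delta from mounted storage.
loans = spark.read.parquet("/mnt/risk/corporate_loans")

summary = (
    loans
    .groupBy("segment")
    .agg(
        F.sum("exposure").alias("total_exposure"),
        F.mean("default_flag").alias("default_rate"),
        F.count("*").alias("n_obligors"),
    )
)

summary.show()
```

The same pattern scales from one-off conversions to full pipeline stages: read, transform with DataFrame operations, and write the result back to Delta for downstream jobs.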
What You Bring to the Table:
- Strong hands-on experience (5+ years) in Python and PySpark, with deep knowledge of data engineering.
- Proficiency in Databricks and Azure Data Factory (ADF) for building and deploying data pipelines.
- Expertise in SQL and its application in data analysis.
- Demonstrated experience working with analytical models, particularly in the Finance & Risk domain.
- A solid understanding of data structures data quality and data migration.
- Strong debugging and troubleshooting skills, with expertise in test automation and frameworks (a parity-test sketch follows this list).
- Excellent communication skills and the ability to align stakeholders around technical solutions under pressure.
- Experience working with APIs and integrating Databricks jobs using AppServices and Python (an API sketch follows this list).
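On the test-automation point: in a SAS-to-PySpark migration, automated testing often means parity checks between legacy and migrated output. Below is a minimal pytest sketch under stated assumptions, namely that both outputs have been exported to Parquet, share an agreed key, and should agree within a numeric tolerance; all paths and column names are hypothetical.

```python
# Hypothetical parity test between legacy SAS output and migrated
# PySpark output. Paths, key column, and tolerance are assumptions.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("parity_tests").getOrCreate()

def test_default_rate_parity(spark):
    legacy = spark.read.parquet("/mnt/validation/sas_summary")
    migrated = spark.read.parquet("/mnt/validation/pyspark_summary")
    joined = legacy.alias("l").join(migrated.alias("m"), on="segment", how="full")

    # Every segment must appear in both outputs.
    assert joined.filter(
        F.col("l.default_rate").isNull() | F.col("m.default_rate").isNull()
    ).count() == 0

    # Values must agree within a small numeric tolerance.
    mismatches = joined.filter(
        F.abs(F.col("l.default_rate") - F.col("m.default_rate")) > 1e-9
    )
    assert mismatches.count() == 0
```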
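And for the API-integration point: triggering a Databricks job from an external Python service (such as one hosted on App Services) commonly goes through the Databricks Jobs REST API. A minimal sketch follows; the workspace URL and job ID are placeholders, and the access token is assumed to be available as an environment variable.

```python
# Hypothetical trigger of a Databricks job from an external service.
# Workspace URL and job ID are placeholders; DATABRICKS_TOKEN is
# assumed to be set in the environment.
import os
import requests

DATABRICKS_HOST = "https://adb-1234567890.12.azuredatabricks.net"  # placeholder
JOB_ID = 42  # placeholder

def trigger_job(job_id: int) -> int:
    """Start a job run via the Jobs API 2.1 run-now endpoint; return its run_id."""
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
        json={"job_id": job_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

if __name__ == "__main__":
    print("Started run:", trigger_job(JOB_ID))
```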
You should possess the ability to:
- Lead end-to-end development, from requirement analysis to deployment, ensuring high-quality, scalable, and performant solutions.
- Work efficiently under tight timelines, solving problems and addressing risks along the way.
- Collaborate and communicate effectively with various stakeholders, including business analysts, modeling teams, and SAS developers.
- Develop comprehensive migration plans and ensure compliance with internal policies and procedures.
- Troubleshoot and resolve technical issues, offering support during the post-deployment phase.
What We Bring to the Table:
- A challenging, dynamic work environment built on a leading-edge technology stack.
- Opportunities to collaborate with crossfunctional teams and stakeholders from the Finance & Risk domain.
- The chance to work with cutting-edge tools like Databricks, PySpark, and Azure Pipelines.
- Competitive compensation and the potential for career growth within the company.