Data Scientist II
Job Summary
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Factory (ADF)
- Configure and manage Linked Services, Datasets, and Pipelines in ADF
- Develop and optimize data transformation workflows using Azure Databricks (PySpark)
- Work across Lakehouse architecture layers (Bronze/Silver/Gold), storage accounts, and Unity Catalog, and support metadata-driven design (control tables, mappings)
- Strong technical expertise across Power BI, SQL, Power Platform tools, and Azure services
- Build and manage dbt models for data transformation, testing, and documentation
- Implement real-time data ingestion using Azure Event Hub
- Integrate data from multiple sources (databases, APIs, cloud storage, on-prem systems)
- Ensure data quality by implementing dbt tests and validations
- Monitor, troubleshoot, and optimize data workflows
- Collaborate with cross-functional teams to understand business data requirements
- Maintain data governance, security, and performance standards
- Strong understanding of data warehousing concepts and ELT methodologies
- Experience working with Azure Data Lake Storage (ADLS) or similar storage solutions
- Knowledge of version control (Git) and CI/CD processes
- Willing to work in production support
Required Experience:
IC
About Company
COVID-19 presents an unprecedented and challenging situation for all of us. At Conduent, our top priority is to protect our associate base and the communities where we live and work, while continuing to serve and help our clients at a time when they need us most.