PRIMARY RESPONSIBILITIES
Develop and maintain Databricks notebooks using Python and SQL.
Configure and manage Databricks clusters and integrate with version control systems such as GitHub.
Enable seamless integration between on-premises databases and Power BI for reporting and analytics.
Design and build large-scale data pipelines using Azure native data processing frameworks.
Collaborate with architects, engineers, analysts, and business stakeholders to deliver enterprise-grade, data-driven solutions.
Provide technical leadership and guidance on cloud architecture and implementation strategies.
Coordinate with platform teams, Azure API Management (APIM), GitHub, and support teams to ensure smooth operations.
Analyze business requirements and design scalable, secure, and efficient solutions on the Azure cloud platform.
Develop, test, and optimize software components to enhance the performance and reliability of data platforms.
Lead end-to-end project execution, working closely with business users, IT teams, data stewards, and third-party vendors.
Integrate and standardize data from diverse sources while ensuring compliance with data quality and accessibility standards.
Implement streaming data solutions and reusable design patterns in a big data environment.
Collaborate with data scientists to operationalize machine learning models and algorithms within automated data workflows.
Apply sound judgment and technical expertise to resolve moderately complex data engineering challenges.
Review and provide feedback on core code changes and support production deployments.
CORE TECHNOLOGIES
Azure: Azure Databricks, Azure Data Factory, Azure Synapse Analytics, Azure Functions, Azure Data Lake Storage Gen2, Azure Event Grid, Azure Event Hubs, Azure Service Bus, Azure Key Vault, Azure Monitor, Azure Log Analytics, Azure API Management (APIM), Azure DevOps.
Scripting: Python, SQL, Bash.
Databases: SQL Server, Oracle, PostgreSQL, Delta Lake.
Big Data: Apache Spark.
Version Control: Git, GitHub, Azure DevOps.
Visualization: Power BI, including integration with REST APIs for custom dashboards.
Data Integration & Workflow Orchestration: Azure Data Factory, Databricks Workflows.
QUALIFICATIONS
IT professional experience in Azure Cloud, with a minimum of 3 years of experience developing and maintaining data pipelines using Azure Databricks, Spark, and other big data technologies.
Proficiency in programming languages such as Python and SQL.
Ability to recreate existing legacy application logic and functionality in an Azure Databricks/Data Lake, SQL Database, and SQL Data Warehouse environment.
Experience with Azure services such as Data Factory, Azure Machine Learning, and Azure DevOps.
Strong understanding of ETL processes and data warehousing concepts.
Excellent interpersonal and communication skills.
Experience with software configuration management tools such as Git and GitHub.
Full-time