Key Responsibilities:
Support data pipelines using Databricks, PySpark, and SQL on Azure
Work with business and technology stakeholders to deliver secure, scalable, and high-performance data solutions in a banking environment
Support data quality, governance, and integration with upstream and downstream systems
Troubleshoot, monitor, and enhance the performance of data workflows
Willingness to work in shifts and weekend rotations
Required Skills:
Proven expertise in Azure Data Services (ADF, Data Lake, Synapse, etc.)
Strong hands-on experience with Databricks and PySpark programming
Proficiency in SQL for data modelling, transformations, and performance tuning
Solid understanding of banking domain data processes
Excellent problem-solving and communication skills
Additional Experience & Capabilities:
3-5 years of experience as a Data Analyst, with skills in data analysis, SQL, and Azure Big Data
Technical experience in data management methodologies and data systems
Skilled in using software applications including MS Office
Skilled in advanced analytical software tools, data analysis methods, and specialized reporting techniques
Basic understanding of databases, ETL, and reporting, including SQL skills using tools such as SQL Developer or TOAD to query data from data warehouses
Skilled in agile methodologies and tools, including Jira and Confluence
Ability to understand architecture and network documents
Ability to work on infrastructure projects involving data encryption, logging, monitoring, and various security tools
Ability to independently identify, assess, and escalate issues requiring senior management attention
Ability to communicate effectively in both oral and written form
Ability to work collaboratively and build relationships
Ability to work successfully both independently and as a member of a team
Ability to exercise sound judgment in making decisions
Ability to analyze, organize, and prioritize work while meeting multiple deadlines
Ability to handle confidential information with discretion