For one of our long-term, multi-year projects, we need a Microsoft Azure Architect based out of New York, NY.
Responsibilities:
Experience in data integration activities, including the architecting, designing, coding, and testing phases
Architect the data warehouse and provide guidance to the team on implementation using Snowflake, SnowSQL, and other big data technologies
Hands-on experience with Snowflake utilities (SnowSQL, Snowpipe) and big data modeling techniques using Python
Experience in performance tuning of Snowflake pipelines and the ability to troubleshoot issues quickly
Extensive experience with relational as well as NoSQL data stores, methods, and approaches (star and snowflake dimensional modeling)
Understanding of data transformation and translation requirements, with the ability to suggest tools to leverage to get the job done
Understanding of data pipelines and modern ways of automating and testing them using cloud-based implementations; clearly document requirements to create technical and functional specs
Possess strong leadership skills with a willingness to lead, create ideas, and be assertive.
Should be able to demonstrate the proposed solution with excellent communication and presentation skills
Engage with the onsite/offshore team for daily activities and status reporting on a weekly and monthly basis.
Qualifications:
Should have a minimum of 12 years of IT experience.
A minimum of 4 years of experience designing and implementing a fully operational solution on the Snowflake Data Warehouse
Experience with Python and a major relational database.
Understanding of RESTful API design
Passion for industry best practices and computer programming
Excellent understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies
Good presentation and communication skills, both written and verbal
Ability to problem-solve and to convert requirements into a design
Ability to troubleshoot issues as and when they arise.
Ability to test developed jobs and prepare test documents
Work experience optimizing the performance of Spark jobs.