100% TELECOMMUTE
Team: 1 Product Owner, 1 PM/Implementation Manager, 1 SME/Analytics Manager, 2 ETL/Algorithm Developers, 2 Report Developers, and 7 Data Analysts.
Project:
- This individual will use a mapping document to write the ETL process that loads the Program Integrity (PI) data mart.
- They will test the load process.
- They will create a balancing routine to be executed after each data load (see the sketch following this list).
- They will work closely with data analysts and QA testers to ensure that the data is loaded correctly.
- As part of the ETL process, there will be procedures for summarizing data.
- They will be responsible for creating the ADF pipelines that move data from SQL Server to Snowflake and from Snowflake to SQL Server.
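For illustration, a minimal sketch of the kind of post-load balancing routine described above, assuming hypothetical table names and connection settings and the standard pyodbc and snowflake-connector-python packages; a real routine would likely reconcile financial totals as well as row counts:

```python
# Illustrative only: compare row counts between the SQL Server source
# and the Snowflake target after a load. Table names, connection
# strings, and the count-only check are assumptions for this sketch.
import pyodbc
import snowflake.connector

def sqlserver_count(conn_str: str, table: str) -> int:
    # Row count from the SQL Server source table.
    conn = pyodbc.connect(conn_str)
    try:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()

def snowflake_count(cfg: dict, table: str) -> int:
    # Row count from the Snowflake target table.
    conn = snowflake.connector.connect(**cfg)
    try:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()

def balance_check(conn_str: str, cfg: dict, source: str, target: str) -> None:
    src, tgt = sqlserver_count(conn_str, source), snowflake_count(cfg, target)
    if src != tgt:
        raise RuntimeError(f"Out of balance: {source}={src} vs {target}={tgt}")
```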
Responsibilities:
- Lead the development and management of ETL processes for Program Integrity data marts in Snowflake and SQL Server.
- Design, build, and maintain data pipelines using Azure Data Factory (ADF) to support ETL operations and Peer Group Profiling procedures and outputs (see the sketch following this list).
- Implement and manage DevOps practices to ensure efficient data loading and workload balancing within the Program Integrity data mart.
- Collaborate with cross-functional teams to ensure seamless data integration and consistent data flow across platforms.
- Monitor, optimize, and troubleshoot data pipelines and workflows to maintain performance and reliability.
- Independently identify and resolve complex technical issues to maintain operational efficiency.
- Communicate effectively and foster collaboration across teams to support project execution and alignment.
- Apply advanced analytical skills and attention to detail to uphold data accuracy and quality in all deliverables.
- Lead cloud-based project delivery, ensuring adherence to timelines, scope, and performance benchmarks.
- Ensure data security and compliance with relevant industry standards and regulatory requirements.
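As a hedged illustration of the ADF work above, one way a pipeline run might be triggered and monitored from Python using the azure-mgmt-datafactory SDK; the resource group, factory, and pipeline names are placeholders, not the project's actual resources:

```python
# Illustrative only: kick off and poll an ADF pipeline run. All resource
# names below are hypothetical placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Trigger a (hypothetical) pipeline that copies PI data SQL Server -> Snowflake.
run = client.pipelines.create_run(
    resource_group_name="rg-pi-datamart",
    factory_name="adf-pi",
    pipeline_name="pl_sqlserver_to_snowflake",
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get("rg-pi-datamart", "adf-pi", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```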
TOP 3 REQUIREMENTS:
- Over 5 years of hands-on expertise in cloud-based data engineering and data warehousing with a strong emphasis on building scalable and efficient data solutions.
- More than 2 years of focused experience in Snowflake, including the design, development, and maintenance of automated data ingestion workflows using Snowflake SQL procedures and Python scripting (see the example following this list).
- Practical experience with Azure data tools such as Azure Data Factory (ADF), SQL Server, and Blob Storage, alongside Snowflake.
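A minimal sketch of the kind of Python-scripted Snowflake ingestion referenced above, assuming a hypothetical internal stage (pi_stage), target table (claims_raw), and local file:

```python
# Illustrative only: stage a local CSV in Snowflake and COPY it into a
# target table. Stage, table, and file names are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
try:
    cur = conn.cursor()
    # Upload the file to an internal stage (auto-compressed to .gz).
    cur.execute("PUT file:///tmp/claims_2024.csv @pi_stage AUTO_COMPRESS=TRUE")
    # Load the staged file into the target table.
    cur.execute("""
        COPY INTO claims_raw
        FROM @pi_stage/claims_2024.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    conn.close()
```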
Additional Requirements:
- Skilled in managing various data formats (CSV, JSON, VARIANT) and executing data loading/exporting tasks using SnowSQL with orchestration via ADF (see the VARIANT example following this list).
- Proficient in using data science and analytics tools like Snowpark, Apache Spark, pandas, NumPy, and scikit-learn for complex data processing and modeling.
- Strong experience with Python and other scripting languages for data manipulation and automation.
- Proven ability to develop cloud-native ETL processes and data pipelines using modern technologies on Azure and Snowflake.
- Demonstrated excellence in analytical thinking and independent problem-solving.
- Strong interpersonal skills with a track record of effective communication and teamwork.
- Consistent success in delivering projects within cloud environments, meeting performance and quality standards.
- Working knowledge of medical claims processing systems, including familiarity with core functionalities, workflows, and data structures used in healthcare claims management.
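To illustrate the VARIANT handling mentioned above, a short sketch of loading JSON into a VARIANT column and querying it with Snowflake path notation; all object and field names are assumptions:

```python
# Illustrative only: semi-structured JSON lands in a VARIANT column and
# is queried back out with typed path expressions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# A one-column VARIANT table receives raw JSON documents as-is.
cur.execute("CREATE TABLE IF NOT EXISTS claims_json (doc VARIANT)")
cur.execute("""
    COPY INTO claims_json
    FROM @pi_stage/claims.json
    FILE_FORMAT = (TYPE = JSON)
""")

# Snowflake path syntax extracts typed fields from the VARIANT.
cur.execute("""
    SELECT doc:claim_id::STRING, doc:billed_amount::NUMBER(12,2)
    FROM claims_json
""")
for claim_id, billed in cur.fetchall():
    print(claim_id, billed)
```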
Preferred:
- Certification in Azure or Snowflake
- Experience with data modeling and database design
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
Experience that will set candidates apart:
- Recent medical claim processing experience.
- Data science and analytics experience.
Ideal Background:
- Experience with setting up DDL, mapping data, and extract, transform, and load (ETL) procedures in Snowflake and SQL Server.
- Experience with Azure services, including Azure Data Factory pipelines for ETL and the movement of data between Snowflake and SQL Server.
- Experience identifying and creating balancing procedures for medical claim data, with the ability to resolve balancing issues.
- Ability to clearly communicate processes to both technical and non-technical staff.
- Experience creating algorithms or models with tools like Snowpark, Apache Spark, pandas, NumPy, and scikit-learn for complex data processing and modeling (see the sketch following this list).
- Experience with Python for data manipulation and automation.
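As a hedged sketch of the kind of peer-group profiling model such tools enable (not the team's actual method), here is a pandas/NumPy example that flags providers whose average billed amount stands out within their specialty; the column names, data, and threshold are illustrative:

```python
# Illustrative only: z-score each provider against its peer group
# (same specialty) and flag outliers. Data and threshold are made up.
import pandas as pd
import numpy as np

claims = pd.DataFrame({
    "provider_id": ["A", "B", "C", "D", "E", "F"],
    "specialty":   ["cardio", "cardio", "cardio", "derm", "derm", "derm"],
    "avg_billed":  [210.0, 195.0, 640.0, 88.0, 92.0, 90.0],
})

# Standardize each provider's average billing within its peer group.
grp = claims.groupby("specialty")["avg_billed"]
claims["z"] = (claims["avg_billed"] - grp.transform("mean")) / grp.transform("std")

# Flag providers more than one standard deviation above their peers
# (an illustrative threshold; here only provider C is flagged).
print(claims[claims["z"] > 1.0])
```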
Required Skills: ETL, SQL
Basic Qualification:
Additional Skills:
This is a high PRIORITY, PROACTIVE requisition.
Background Check: No
Drug Screen: No