We currently have an open requirement for a Data Engineer role and are looking for candidates with the following skill set. If you have any suitable profiles, please share their resumes.
Job Summary:
Data Engineer with strong expertise in PySpark ETL development and Data Warehousing/Business Intelligence (DW/BI) projects. The resource will be responsible for end-to-end development covering Financial Attribution, SCD, Booking and Referring Agreements, Data Aggregations, and SOR Onboarding.
Location: Charlotte (preferred)
Rate: SE4
Key Responsibilities:
Design, develop, and optimize ETL pipelines using PySpark, S3, and Dremio.
Work on ProfitView Modernization, which requires PySpark, Python, Dremio, ETL, and financial domain experience.
Work with large-scale structured and unstructured data from various sources.
Implement data ingestion, transformation, and loading processes into data lakes and data warehouses.
Collaborate with BI developers, data analysts, and business stakeholders to understand data requirements.
Ensure data quality, integrity, and governance across all data pipelines.
Monitor and troubleshoot performance issues.
Participate in code reviews, testing, and deployment processes.
Document technical solutions, data flows, and architecture.
Required Skills & Qualifications:
Strong hands-on experience with PySpark for data processing and transformation.
Proficiency in ETL tooling: Informatica, Oracle PL/SQL, and Teradata.
Experience with enterprise frameworks and UNIX shell scripting.
Experience in job scheduling, batch processing, data analysis, and defect resolution.
Solid understanding of Data Warehousing concepts (e.g., star/snowflake schemas, slowly changing dimensions).
Experience with cloud platforms (Azure or GCP) and services such as S3.
Strong SQL skills for data extraction, transformation, and analysis.
Experience with version control systems (e.g. Git) and CI/CD pipelines.
Excellent problem-solving and communication skills.
Agile/Scrum knowledge is a plus.