Data Engineer
Pittsburgh, PA - USA
Job Summary
Key Responsibilities
- Design, build, and maintain scalable data pipelines using AWS cloud services.
- Develop and optimize ETL/ELT workflows for structured and semi-structured data.
- Write complex SQL queries for data transformation, validation, and analytics.
- Build data processing jobs using Python and PySpark.
- Integrate and manage data in Snowflake cloud data warehouse.
- Support Power BI dashboards by delivering curated analytics-ready datasets.
- Ensure best practices for data quality, governance, performance, and security.
- Collaborate with analytics, reporting, and business teams in an agile environment.
Required Skills
- SQL (advanced querying, performance tuning)
- Data Engineering & Data Pipelines
- Python, PySpark
- Snowflake
- AWS services (S3, Glue, EMR, Lambda, Redshift, in any combination)
- Power BI (data modeling, performance optimization, support)
Good to Have
- Financial services / banking domain experience (BNY preferred)
- CI/CD pipelines for data workloads
- Data quality frameworks
- Agile / Scrum experience