Job Title: Data Analyst (SQL, AWS, Python, PySpark)
Location: Richmond, VA (hybrid); candidates must be local and within commuting distance
Duration: 12 months, with the possibility of extension
Job Overview:
We are seeking a Data Analyst to support BAU (Business-As-Usual) operations, focusing on data analysis, reporting, and ensuring high data quality. The ideal candidate will work closely with business and technical teams to deliver actionable insights and maintain reliable data pipelines.
Required Skills:
- Strong hands-on experience with SQL (complex queries, joins, performance tuning)
- Hands-on experience with PySpark
- Experience working in AWS environments (S3, Redshift, Glue, etc.)
- Proficiency in Python for data analysis (Pandas, NumPy, etc.)
- Solid understanding of data analysis and reporting concepts
- Experience with data quality checks and validation techniques
Preferred:
- Experience with BI tools (Tableau, Power BI, etc.)
- Familiarity with data pipelines and ETL processes
Job Description:
- Develop and maintain reports and dashboards for business stakeholders
- Perform data analysis to identify trends, patterns, and insights
- Ensure data quality, accuracy, and consistency across datasets
- Write and optimize SQL queries for data extraction and reporting
- Work with AWS-based data environments (S3, Redshift, etc.)
- Utilize Python for data processing, automation, and analysis
- Collaborate with cross-functional teams to support ongoing business needs
- Troubleshoot data issues and provide timely resolutions