IN - Senior Associate - Databricks - Data Analytics - Advisory - Gurgaon


Job Location:

Gurgaon - India

Monthly Salary: Not Disclosed
Posted on: 17 hours ago
Vacancies: 1

Job Summary

Line of Service

Advisory

Industry/Sector

Not Applicable

Specialism

Data Analytics & AI

Management Level

Senior Associate

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth.

In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark and a proven ability to optimize performance in Spark job executions.

Key Responsibilities:

- Design, build, and maintain scalable data pipelines for a variety of cloud platforms, including AWS, Azure, Databricks, and GCP.

- Implement data ingestion and transformation processes to facilitate efficient data warehousing.

- Utilize cloud services to enhance data processing capabilities:

- AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.

- Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.

- GCP: Dataflow, BigQuery, Dataproc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.

- Optimize Spark job performance to ensure high efficiency and reliability.

- Stay proactive in learning and implementing new technologies to improve data processing frameworks.

- Collaborate with cross-functional teams to deliver robust data solutions.

- Work on Spark Streaming for real-time data processing as necessary (see the illustrative sketch below).
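
Purely as an illustrative aside, and not part of the role description, the sketch below shows a minimal Spark Structured Streaming job of the kind the real-time responsibility above refers to. The Kafka broker, topic name, and storage paths are hypothetical, and the Kafka connector package is assumed to be available on the cluster.

# Illustrative sketch only; broker, topic, and paths are hypothetical.
# Requires the spark-sql-kafka connector to be on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Read a stream of order events from a hypothetical Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Persist the raw payload to cloud storage with checkpointing so the
# stream can recover from exactly where it left off after a restart.
query = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .writeStream
    .format("parquet")
    .option("path", "s3://raw-zone/orders_stream/")
    .option("checkpointLocation", "s3://raw-zone/_checkpoints/orders_stream/")
    .outputMode("append")
    .start()
)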

Qualifications:

- 3-8 years of experience in data engineering with a strong focus on cloud environments.

- Proficiency in PySpark or Spark is mandatory.

- Proven experience with data ingestion, transformation, and data warehousing.

- In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).

- Demonstrated ability in performance optimization of Spark jobs (a brief illustrative sketch follows this list).

- Strong problem-solving skills and the ability to work independently as well as in a team.

- Cloud certification (AWS, Azure, or GCP) is a plus.

- Familiarity with Spark Streaming is a bonus.
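
As a rough, purely illustrative sketch of the PySpark and performance-optimization expectations above (not part of the role requirements), the example below reads raw data, applies a transformation, and uses a broadcast join as one common Spark tuning technique. All paths, column names, and schemas are hypothetical.

# Illustrative sketch only; paths, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("daily_revenue_pipeline").getOrCreate()

# Ingest raw order events from a hypothetical cloud storage location.
orders = spark.read.json("s3://raw-zone/orders/2025/12/")

# Small dimension table; broadcasting it avoids a shuffle during the join,
# a routine Spark performance optimization.
products = spark.read.parquet("s3://curated-zone/dim_products/")

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .join(broadcast(products), "product_id")
    .groupBy("order_date", "category")
    .agg(F.sum(F.col("quantity") * F.col("unit_price")).alias("revenue"))
)

# Write partitioned output so downstream warehouse queries can prune by date.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-zone/daily_revenue/"
)

Broadcasting the small dimension table and partitioning the output by date are two of the everyday tuning levers an engineer in this role would be expected to reach for.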

Mandatory skill sets:

Python, PySpark, SQL with (AWS or Azure or GCP)

Preferred skill sets:

Python, PySpark, SQL with (AWS or Azure or GCP)

Years of experience required:

3-8 years

Education qualification:

BE/B.Tech, ME/M.Tech, MBA, MCA

Education (if blank, degree and/or field of study not specified)

Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills

Data Engineering, Microsoft Azure

Optional Skills

Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, and 27 more

Desired Languages (if blank, desired languages not specified)

Travel Requirements

Not Specified

Available for Work Visa Sponsorship

No

Government Clearance Required

No

Job Posting End Date

January 5, 2026


Required Experience:

Senior IC


About Company


At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 155 countries with over 284,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell us what matters to you by vis ...
