EY GDS Consulting - AI and Data - AWS Data Engineer, Senior


Job Location: Delhi - India
Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Job Summary

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description: Data Products - AWS Data Engineer

Objectives and Purpose

  • The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineering Leads and partners with the Visualization team on data quality and troubleshooting needs.
  • The Senior Data Engineer will:
    • Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
    • Support the development, testing, and maintenance of data pipelines and platforms so that quality data can be used within business dashboards and tools.
    • Create, maintain, and support the data platform and infrastructure that enables the analytics front end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing systems and databases, with proper verification and validation processes.

Your key responsibilities

  • Strong clinical and operational domain knowledge in the Pharma (Life Sciences) industry
  • Must have 3 to 5 years of experience with Python (or a similar language) and Data Engineering
  • Hands-on experience with EMR
  • Hands-on experience with AWS services (Athena, Redshift)
  • Hands-on experience with Snowflake
  • Hands-on experience with Dataiku
  • Working exposure to Databricks is good to have
  • Hands-on experience with Dremio
  • Expert-level experience in writing and analyzing complex SQL, building data pipelines, and applying data modeling skills and frameworks
  • Strong technical knowledge and extensive application support experience in systems and applications within clinical development
  • A problem-solving, analytical, quality-focused mindset and strong business acumen
  • Ability to translate complex business questions and requirements into effective solutions
  • Understanding of clinical data standards
  • Excellent analytical skills
  • Detail-oriented, with good interpersonal and leadership skills; a team player
  • Self-motivated; works effectively under pressure
  • Effective verbal and written communication skills

Required Skills & Qualifications

  • Experience: 3-5 years of progressive experience in Data Engineering, Data Warehousing, or a related role.
  • Programming: Expertise in Python for data manipulation, scripting, and pipeline development.
  • Data Transformation: Strong proficiency with modern SQL and experience with dbt (data build tool) for implementing data transformations and quality checks.
  • Big Data Ecosystem: Hands-on experience with PySpark and distributed processing frameworks, specifically on AWS EMR.
  • Cloud Data Warehousing: Practical experience working with and optimizing large datasets in cloud data warehouses such as Amazon Redshift and/or Snowflake.
  • Cloud Storage: Experience with Amazon S3 for building and managing data lakes.
  • DevOps/CI/CD: Familiarity with CI/CD tools such as Jenkins and with source control systems, specifically Bitbucket (or similar, such as Git/GitHub/GitLab).
  • Methodology: Proven experience working effectively within an Agile/SCRUM development framework.
  • Communication: Exceptional written and verbal communication skills, with the ability to influence and collaborate across teams.

Desired skillsets

  • Experience in the Pharmaceutical/Biotech industry, particularly with clinical or regulated data.
  • Knowledge of data visualization tools (e.g., Tableau, Power BI).
  • Familiarity with containerization technologies (e.g., Docker, Kubernetes).
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society, and to build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate.

Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.


Required Experience:

IC


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala

About Company


At EY Studio+, we create transformative experiences that move people and shape markets. We combine design, technology, and commercial insight, complemented by EY.ai, a unifying platform, and powered by our full spectrum of services.
