Calling all innovators - find your future at Fiserv.
We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.
Job Title
Data Engineer

Data Engineers are data enthusiasts who use their toolbox of technical talent to deliver data-driven solutions. This person will be part of and support agile teams in the data analytics domain in EMEA by designing and building cutting-edge data migration, data integration, data replication, and data streaming systems to ensure we make data available with amazing quality and speed in Snowflake. They will be responsible for architecting and implementing very large-scale data intelligence solutions around the Snowflake Data Warehouse, so solid experience in architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse is essential.
They will, in collaboration with a multidisciplinary delivery team, be responsible for the deployment, monitoring, troubleshooting, and maintenance of critical data-driven solutions in production.
Primary Responsibilities
Develop ETL pipelines in and out of the data warehouse using a combination of Java/Scala/Python Spark jobs for data transformation and aggregation
Write SQL queries against Snowflake
Provide production support for data warehouse issues such as data load problems and transformation/translation problems
Develop unit tests for transformations and aggregations
Develop production-grade real-time or batch data integrations between systems
Process events from Kafka in real time via stream processing into the data warehouse (see the sketch after this list)
Design and build data pipelines of medium to high complexity
Translate BI and reporting requirements into database design and reporting design
Understand data transformation and translation requirements and which tools to leverage to get the job done
Design and build machine learning pipelines of medium to high complexity
Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code
Deploy production-grade data pipelines, data infrastructure, and data artifacts as code
Develop estimates for data-driven solutions
Communicate technical, product, and project information to stakeholders
Establish standards of good practice such as coding standards and data governance
Peer review code developed by others
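For illustration only, a minimal sketch of the kind of Kafka-to-Snowflake streaming ingestion described in the responsibilities above, assuming a PySpark environment with the Kafka and Spark-Snowflake connectors installed; the broker address, topic, connection options, and table name EVENTS_RAW are hypothetical placeholders, not details from this posting.

    # Minimal sketch, not the team's actual pipeline: read events from Kafka,
    # parse them, and append each micro-batch to a Snowflake table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("kafka-to-snowflake").getOrCreate()

    # Schema of the JSON event payload (hypothetical fields).
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_ts", TimestampType()),
    ])

    # Read the raw event stream from Kafka and parse the JSON value column.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Placeholder Snowflake connection options for the Spark-Snowflake connector.
    sf_options = {
        "sfURL": "account.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "RAW",
        "sfWarehouse": "ETL_WH",
    }

    def write_batch(df, batch_id):
        # Append each micro-batch to Snowflake via the connector's "snowflake" format.
        (df.write.format("snowflake")
           .options(**sf_options)
           .option("dbtable", "EVENTS_RAW")
           .mode("append")
           .save())

    query = (events.writeStream
             .foreachBatch(write_batch)
             .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
             .start())
    query.awaitTermination()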
Knowledge & Skills
Minimum BSc or BTech/BE in Computer Science, Engineering, or a related discipline
Relevant professional qualification such as AWS Certified Big Data, SnowPro Core, or other Data Engineer certifications
Strong hands-on development background in creating Snowpipe pipelines and complex data transformations and manipulations using Snowpipe and SnowSQL
Hands-on experience with Snowflake external table concepts, staging, the Snowflake scheduler, and performance tuning
Good understanding of Snowflake Time Travel concepts, zero-copy cloning, network policies, clustering, and tasks (see the sketch after this list)
5 years' experience working in an enterprise big data environment
Deep knowledge of Spark, Kafka, and data warehouses such as Snowflake, Hive, Redshift, etc.
Hands-on experience in the development, deployment, and operation of data technologies and platforms such as:
Integration using APIs, microservices, and ETL patterns
Low-latency/streaming, batch, and micro-batch processing
Data platforms such as Hadoop, Hive, Redshift, or Snowflake
Cloud Services such as AWS
Cloud query services such as Athena
DevOps Platforms such as Gitlab
Containerisation technologies such as Docker and Kubernetes
Orchestration solutions such as Airflow
Deep knowledge of key non-functional requirements such as availability, scalability, operability, and maintainability
Deep knowledge of SQL
OS knowledge, particularly Linux
Strong social skills, team spirit, and empathy
Willingness to take ownership and strong personal commitment to department and team goals; comfortable being seen as a reliable, proactive, and influential team member who is not afraid to take on responsibility within the team
Ability to communicate clearly with business analysts and stakeholders as well as technical representatives
Very strong and proven communication and coordination skills; open-minded and determined
Responsible for planning, highlighting, and implementing possible improvements for existing and new applications
Good to have:
Experience migrating to Snowflake
Hands-on experience with Oracle RDBMS
Exposure to StreamSets, dbt, or other ETL tools
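For illustration only, a minimal sketch of the Snowflake Time Travel and zero-copy cloning features mentioned in the skills list above, assuming the snowflake-connector-python package; the credentials, database, and table names are hypothetical placeholders.

    # Minimal sketch, not production code: query a table as of an earlier point
    # in time, then create a zero-copy clone of it.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="etl_user",        # placeholder
        password="***",         # placeholder
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    cur = conn.cursor()

    # Time Travel: read the table as it looked one hour (3600 seconds) ago.
    cur.execute("SELECT COUNT(*) FROM EVENTS_RAW AT(OFFSET => -3600)")
    print(cur.fetchone())

    # Zero-copy cloning: create a writable copy without duplicating storage.
    cur.execute("CREATE TABLE EVENTS_RAW_DEV CLONE EVENTS_RAW")

    cur.close()
    conn.close()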
Thank you for considering employment with Fiserv. Please:
Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
Note to agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Full-Time