Position Title: Senior Ab Initio Developer
Location: Charlotte, NC (300 S Brevard St); 3 days onsite, 2 days remote weekly
Only GC/USC/GCEAD/H4EAD/L2EAD
Duration: 12 months, likely to be extended based on project need/performance
3 Junior Profiles
3 Senior Profiles
10 years of overall IT experience; it is okay if this is a combination of skills/technologies
These resources will be converting code from Ab Initio into BigQuery in a GCP environment
The most critical technical experience is Ab Initio ETL and GCP
The bare minimum hands-on experience we will accept is 5 years of Ab Initio development and 2 years of GCP experience; this should be our focus
Looking for hands-on developers
Candidates should have an understanding of Teradata
Important Notes Directly from the Hiring Manager (NO EXCEPTIONS):
Resume Format/Content:
Length - 2 pages max
Summary of overall professional experience
Technical Skills (recently used first)
3 Initiatives highlighting usage of required & desired skills
Education & Certifications with Years completed
Submission Expectations:
Do not send lengthy profiles with all sorts of formats (use only the format above)
Candidate must come in for an in-person interview (1 to 1.5 hour interview)
Must Haves:
MUST have 10 years of overall IT experience
8 years of Ab Initio experience (minimum of 5 years)
4 years of GCP experience (minimum of 2 years)
4 years of experience with BigQuery (minimum of 2 years)
Expertise with SQL/ETL
4 years of Agile and JIRA experience
Experience with technical stakeholder interactions
Enterprise level experience
EXCELLENT written and verbal communication skills
Day to Day:
Designing, coding, and testing new data pipelines using Ab Initio
Implementing ETL/ELT processes
Writing, optimizing, and debugging complex SQL queries for data manipulation, aggregation, and reporting, particularly for data within Teradata and BigQuery
Data ingestion: developing and managing processes to ingest large volumes of data into GCP's BigQuery
Managing and monitoring GCP resources specifically used for data processing and storage
Optimizing cloud data workloads
Desired Experience:
Java experience highly desired
Python experience highly desired
Experience with Spark, Hadoop, MapR, and Data Lake
Background in the Banking/Financial Technology domain (Deposits, Payments, Cards, etc.)