Tredence is a global analytics services and solutions company. We have been one of the fastest-growing private companies in the country for three straight years according to the Inc. 5000, and we continue to set ourselves apart from our competitors by attracting the greatest talent in the data analytics and data science space. Our capabilities range from Data Visualization and Data Management to Advanced Analytics, Big Data, and Machine Learning. Our uniqueness is in building scalable Big Data solutions on on-prem/GCP/Azure cloud in a cost-effective and easily scalable manner for our clients. We also bring strong IP and pre-built analytics solutions in data mining, BI, and Big Data.
Roles and Responsibilities:
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
Provide forward-thinking solutions in the data engineering and analytics space
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues, find gaps in existing pipelines, and fix them
Work with the business to understand reporting-layer needs and develop data models to fulfill them
Help junior team members resolve issues and technical challenges
Drive technical discussions with client architects and team members
Orchestrate data pipelines in a scheduler via Airflow
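The orchestration responsibility above boils down to running pipeline tasks in dependency order, which is the core guarantee a scheduler like Airflow provides. The sketch below illustrates that idea with only the Python standard library; the task names are hypothetical and are not taken from any real Tredence pipeline:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ELT pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}

def run_order(dag):
    """Return one valid execution order that honors every dependency,
    mirroring how a scheduler such as Airflow sequences DAG tasks."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))  # extracts first, load_warehouse last
```

In Airflow itself the same shape is expressed by declaring tasks inside a DAG and wiring dependencies with the `>>` operator; the ordering guarantee is identical.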
Skills and Qualifications:
Bachelor's and/or master's degree in computer science, or equivalent experience
Must have a total of 6 years of IT experience, including 3 years in data warehouse/ETL projects
Deep understanding of star and snowflake dimensional modelling
Strong knowledge of Data Management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Should have hands-on experience in SQL, Python, and Spark (PySpark)
Candidate must have experience in the AWS/Azure stack
Desirable to have ETL experience with batch and streaming (Kinesis)
Experience in building ETL / data warehouse transformation processes
Experience with Apache Kafka for use with streaming / event-based data
Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
Databricks Certified Data Engineer Associate/Professional
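The star-schema modelling and SQL skills listed above come together in queries that join a fact table to its dimensions and aggregate. A minimal, self-contained sketch using sqlite3 follows; the table and column names are illustrative only, not taken from any client model:

```python
import sqlite3

# Toy star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_key INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
INSERT INTO fact_sales  VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# The classic star-schema query shape: join fact to dimension, then aggregate.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d USING (product_key)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('Books', 15.0), ('Games', 7.5)]
```

On Databricks the same pattern runs as Spark SQL against Delta Lake tables; only the dialect and storage layer change, not the modelling idea.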
There is a reason we are one of the fastest-growing private companies in the country! You will have the opportunity to work with some of the smartest and most fun-loving people in the data analytics space. You will work with the latest technologies and interface directly with key decision-making stakeholders at our clients, some of the largest and most innovative businesses in the world. Our people are our greatest asset, and we value every one of them. Come see why we're so successful in one of the most competitive and fastest-growing industries in the world.
Required Experience:
Manager
Full Time