ABOUT KALLES GROUP:
Everyone deserves to be secure. Our mission at Kalles Group is to help secure the future for companies of all shapes and sizes.
While our expertise spans multiple disciplines, our method remains consistent: building trust and relationships with people, whether you are a client, a consultant, or, in this case, a candidate.
No matter what role you come from, whether you're an executive or just starting your career, you can expect our highest level of attention and respect. We want to find the right fit for each role, but we also want you to find the right fit for your career.
We believe the best way to show you what our team is like is to treat you like you're already a part of it. We hope you'll consider joining our team of experienced professionals who are building their careers at Kalles Group, and having fun while doing it.
WHAT YOU WILL DO:
We are looking for an experienced Data Engineer to be a key part of a new, high-energy team dedicated to IT. In this role, you will collaborate closely with Data Architects, Data Scientists, and BI Engineers to design and maintain scalable data pipelines and data models. Your primary focus will be on building and optimizing data pipelines that ensure the availability and quality of data for reports and advanced analytics. You will also work with DevOps Engineers during CI/CD processes to automate and manage data pipelines sourced from both internal and external systems. If you have a passion for data engineering and excel in fast-paced environments, this role might be an excellent fit for you.
Key Responsibilities:
- Develop Data Pipelines: Work with Data Architects to design data models and develop pipelines that store data in structured formats.
- Enhance Data Quality and Reliability: Continuously seek out opportunities to improve the consistency, performance, and overall reliability of data management.
- Ad-hoc Data Analysis: Perform ad-hoc data retrieval to support business intelligence efforts, reporting, and dashboard creation.
- Ensure Data Accuracy: Evaluate the accuracy of data from diverse sources to guarantee integrity and dependability.
- Database Administration: Oversee database setup and upgrades, and ensure proper documentation of configurations.
- Create Enterprise Data Assets: Build and operationalize data pipelines that deliver certified, enterprise-ready data sets for various consumers (business intelligence, analytics, APIs/services).
- Collaborate with Data Experts: Partner with Data Architects, Data Stewards, and Data Quality Engineers to enhance data storage, ingestion, and orchestration practices.
- ETL/ELT Development: Design and implement ETL/ELT processes using Informatica Intelligent Cloud Services (IICS) to transform and load data.
- Cloud Integration: Utilize Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Cosmos DB, Databricks, and Delta Lake to accelerate the delivery of data solutions.
- Implement Big Data Solutions: Create scalable big data and NoSQL platforms to deliver impactful insights across the organization.
- Optimize Internal Processes: Lead efforts to automate manual workflows and optimize data delivery methods for better performance.
- Translate Technical Concepts: Communicate technical ideas and solutions effectively to non-technical stakeholders, both verbally and in writing.
- Peer Collaboration: Conduct peer reviews of other data engineers' work to ensure adherence to best practices and high-quality outcomes.
ABOUT YOU:
- Your values:
- Integrity: You believe in doing the right thing, even when it's uncomfortable, seemingly inefficient, or costly.
- Purposefulness: You have a desire to serve others with your skillset and an openness to continuous learning and growth.
- Ownership: You stick to your commitments, follow up with action, and seek clarity in communication and expectations.
YOUR EXPERIENCE:
- 4 years of experience in engineering and operationalizing data pipelines for large, complex datasets.
- 2 years of hands-on experience with Informatica PowerCenter and/or IICS.
- 3 years of experience working with cloud technologies, including ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB, and other big data solutions.
- Experience with various data sources, including DB2, SQL, Oracle, flat files (CSV, delimited), APIs, XML, and JSON.
- Advanced SQL skills with a solid understanding of relational databases and business data; adept at writing complex SQL queries across diverse data sources.
- 3 years of experience in data modeling, ETL processes, and data warehousing.
- Strong knowledge of database storage concepts, including data lakes, relational databases, NoSQL, graph databases, and data warehousing.
WE WOULD BE THRILLED IF YOU HAVE:
- Experience implementing data integration techniques such as event/message-based integration (e.g., Kafka, Azure Event Hub) and ETL processes.
- Familiarity with Git and Azure DevOps.
- Experience delivering data solutions using Agile software development methodologies.
- Exposure to the retail industry.
- Experience with UC4 Job Scheduler.
- Experience as a consultant.
WHAT WE OFFER:
- The annual salary range for this role is $120,000-$160,000.
- Work/life balance: we know there's more to life than work! We encourage our team to pursue other passions, get outside, and spend time with family. We work with clients and consultants to set expectations for a manageable workload.
LOCATION:
This role requires on-site presence at our client's location in the Seattle, WA area.
HOW TO APPLY:
Please fill out the form below (including uploading your most recent resume) and we'll be in touch! We know imposter syndrome can be a barrier to many great applicants, which is why we've made the application process as short and simple as possible. We hope you'll still consider applying.
Even if you're not a fit for the role, you can expect to hear back from us! We want you to have the best experience as a candidate, so please feel free to share feedback at any stage of the process.
Kalles Group is an equal-opportunity employer and does not discriminate on the basis of creed, nationality, race, ethnicity, disability, gender, or other protected class.