Our Culture:
- At LUMIQ, we strive to create a community of passionate data professionals who aim to transcend the usual corporate dynamics. We offer you the freedom to ideate, commit, and navigate your career trajectory at your own pace.
- A culture of ownership and empowerment that drives outcomes.
- Our culture encourages Tech Poetry, combining creativity and technology to create solutions that revolutionize the industry. We trust our people to manage their responsibilities with minimal policy constraints. Our team is composed of the industry's brightest minds, from PhDs and engineers to industry specialists from Banking, Insurance, NBFCs, and AMCs, who will challenge and inspire you to reach new heights.
Key Responsibilities:
- Solution Design & Implementation
- Understand complex business requirements and translate them into scalable and maintainable cloud-based data architectures.
- Design end-to-end data solutions using AWS services (Redshift, Glue, Lambda, EC2, EKS/Kubernetes, etc.).
- Data Pipeline Development & Automation
- Build and maintain automated ETL/ELT pipelines using Python, PySpark, SQL, Airflow, and AWS.
- Integrate with DBT for data transformation and modeling.
- Data Modeling & Warehousing
- Design and implement efficient data models for Redshift and other cloud-based warehouses.
- Optimize performance and ensure data integrity and governance.
- Monitoring, RCA & Issue Resolution
- Perform Root Cause Analysis (RCA) for issues in existing data warehouse automation and reporting systems.
- Monitor pipeline health and proactively identify and resolve data anomalies or system failures.
- Visualization & Dashboards
- Collaborate with BI and analytics teams to support the creation of dashboards and visualizations.
- Design data models specifically tailored for efficient visual analytics.
- Collaboration & Documentation
- Work cross-functionally with business analysts, product owners, and other engineering teams.
- Maintain technical documentation for solutions and workflows.
Technical Skills Requirements:
- Strong proficiency in SQL, Python, and PySpark
- Hands-on experience with AWS services:
- Redshift, Glue, Lambda, EC2, EKS/Kubernetes
- Experience with Airflow, DBT, and data orchestration frameworks
- Solid understanding of data modeling, warehousing, and data quality practices
- Experience supporting dashboarding and data visualization (Power BI/Tableau/Looker a plus)
- Familiarity with CI/CD and version control systems like Git.