About the Role: Young Graduates Databricks Data & AI Engineer
Are you a recent graduate eager to launch your career at the intersection of Big Data and Artificial Intelligence? We are looking for ambitious Young Grad Data & AI Engineers to join our elite Databricks team (50 FTEs).
Your Journey & Responsibilities
- End-to-End Pipeline Engineering: Design, develop, and deploy robust data pipelines using Databricks and Apache Spark. You'll learn to ingest data from diverse sources (APIs, IoT streams, ERPs).
- Master the Lakehouse: Get hands-on with Delta Lake to provide ACID transactions on top of data lakes, ensuring data is reliable, versioned, and AI-ready.
- Hands-on AI Development: Work on implementing Generative AI solutions, including RAG (Retrieval-Augmented Generation) architectures, vector databases, and fine-tuning data for LLMs.
- Building for AI: Create the specialized data structures required for Generative AI, including vectorizing data for RAG and building feature stores for Machine Learning models.
- Automation & DataOps: Implement Databricks Workflows and CI/CD pipelines to automate data movement, ensuring that high-quality data is always available for business stakeholders and AI agents.
- Cloud & Infrastructure: Learn to integrate Databricks with cloud-native services (such as Azure Data Lake, AWS S3, or Google BigQuery), mastering the art of cost-effective and scalable compute.
- Data Quality & Governance: Use Unity Catalog to implement fine-grained governance, ensuring that the AI solutions we build are secure, compliant, and ethical.
Qualifications
- Educational Background: A recent university degree (Master's preferred) in Computer Science, (Business) Engineering, or a specialized program in Data Science/AI.
- Programming & Logic: A strong academic foundation in Python, Scala, or Java. You should be comfortable writing clean, efficient code.
- Data Foundations: A solid grasp of SQL and an understanding of how databases work (relational vs. NoSQL). Familiarity with the concepts of ETL/ELT is a huge plus.
- AI Passion: A hunger to build with AI: you follow the latest trends in LLMs and want to know how the data under the hood makes them work.
- Communication: Excellent interpersonal skills, with the ability to collaborate in an agile, multi-disciplinary team.
- Languages: Fluency in English and proficiency in Dutch or French are mandatory.
Bonus Points
- Academic projects or internships involving PySpark, Hadoop, or Kafka.
- Experience with Docker or basic cloud certifications (Azure, AWS, or GCP).
- Exposure to dbt (data build tool) or Airflow for orchestration.
- Knowledge of specialized AI libraries (LangChain, LlamaIndex, or Pandas).
Additional Information
Why choose us?
- Thrive in a fast-growing entrepreneurial environment
- Work with innovative and impactful Cloud Data & AI projects for industry-leading clients
- Connect with extraordinary talents in a collaborative and diverse culture
- Enjoy a healthy work-life balance in an inspiring eco-friendly workplace
- Competitive compensation package
At Devoteam, we combine strong values (respect, frankness, ambition, entrepreneurship & collaboration) with a fun and supportive environment that empowers you to innovate, grow, and succeed in the fast-paced world of Cloud Data & AI.
Remote Work: No
Employment Type: Full-time