AI Data Engineer / Data Pipeline Engineer
Remote
Contract
We're looking for a Data Engineer experienced in building modern data pipelines and integrating AI-driven automation workflows. The candidate should be comfortable working with both traditional ETL systems and next-generation AI/agent-based technologies.
Key Responsibilities
- Design and build ETL and data pipelines using modern tools and frameworks.
- Work with real-time/streaming data (Kafka, Pub/Sub, or Kinesis).
- Develop agent-based or chat-driven automations using frameworks like LangGraph.
- Integrate and manage vector databases (Pinecone, Chroma, or Weaviate) for AI/ML workflows.
- Collaborate with teams to define data requirements and deliver scalable data solutions.
- Maintain data quality, reliability, and operational performance across all pipelines.
Required Skills
- Python programming expertise.
- Experience with LangGraph and/or Langfuse.
- Knowledge of ETL processes, data orchestration, and cloud data stacks (Google Cloud, AWS, etc.).
- Proficiency in SQL and familiarity with NoSQL databases.
- Working knowledge of data streaming tools and vector databases.
- Understanding of AI agents, RAG, and AI data pipelines.
Preferred Skills
- Experience with plug-and-play ETL tools.
- Exposure to LLM frameworks or RAG implementations.
#Li-BS1