Wood Wide AI is a Carnegie Mellon spin-out pioneering a neuro-symbolic intelligence layer for enterprise numeric and tabular data. Our platform transforms raw numeric signals into rich embeddings for prediction, anomaly detection, and reasoning, enabling organizations to move beyond brittle ML pipelines toward adaptive, context-aware intelligence.
We're early, fast-moving, and ambitious. Our founders combine deep research experience with hands-on product execution, and we're building a lean, world-class team that thrives on curiosity, ownership, and speed.
About The Role
We're hiring an AI Integrations Backend Engineer to bridge our core numeric intelligence engine with the rapidly evolving agentic ecosystem. You'll design and maintain the interfaces that connect Wood Wide AI's embeddings and reasoning engine to LLM agents, orchestration frameworks, and Model Context Protocol (MCP)-based tools. This role blends backend engineering, AI systems integration, and developer experience to make our product plug-and-play for AI builders and enterprises alike.
You'll work closely with the founding team to turn cutting-edge research into robust, production-grade microservices and SDKs that power agentic workflows.
What You'll Do
Design and build core backend APIs using Python (FastAPI) to serve numeric intelligence functions such as embedding generation, reasoning calls, and model inference endpoints.
Integrate our engine with LLM agents and tool-calling interfaces (OpenAI, Anthropic, Gemini) to enable structured reasoning over numeric data.
Develop microservices and a Model Context Protocol (MCP) server exposing modular tools that agents can securely invoke to process tabular or time-series data.
Orchestrate agentic workflows using frameworks such as the Vercel AI SDK, LangGraph, PydanticAI, or custom planners, and evaluate trade-offs in performance and observability.
Build and maintain a Python SDK with clean abstractions and developer-first ergonomics.
Develop data connectors for major environments such as Databricks, Snowflake, Postgres, and S3/GCS.
Implement auth, rate limiting, usage metering, and structured logging for reliable production operations.
Containerize and deploy microservices via Docker, GitHub Actions, and GCP/AWS, ensuring scalability and maintainability.
Collaborate cross-functionally with ML and DX teammates to ensure seamless data flow and user experience.
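To give a flavor of the tool-serving work described above, here is a minimal sketch of an MCP-style tool registry: a set of typed functions an agent can invoke over numeric data via JSON tool calls. All names are hypothetical illustrations, not Wood Wide AI's actual API or a real MCP implementation.

```python
# Illustrative sketch only: a toy registry of agent-invokable tools, the
# pattern an MCP-style server builds on. Names are hypothetical, not
# Wood Wide AI's API.
import json
import statistics
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(name: str):
    """Register a function under a name that agents can call."""
    def decorator(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("summarize_series")
def summarize_series(values: list[float]) -> dict:
    """Return basic statistics an LLM agent can reason over."""
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

def invoke(request_json: str) -> str:
    """Dispatch a JSON tool call, e.g. forwarded from a tool-calling API."""
    request = json.loads(request_json)
    result = TOOLS[request["tool"]](**request["arguments"])
    return json.dumps(result)
```

For example, `invoke('{"tool": "summarize_series", "arguments": {"values": [1.0, 2.0, 3.0]}}')` dispatches to the registered function and returns its statistics as JSON. A production version would add schemas, auth, and observability, which is the substance of this role.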
What We're Looking For
1-3 years of backend engineering experience (Python, FastAPI, Flask, or similar).
Proven experience designing API-first microservices and integrating with AI or ML systems.
Experience designing and managing scalable data pipelines and storage solutions using tools like Postgres, Redis, Kafka, and Airbyte.
Experience with Docker, GitHub Actions, and cloud providers (e.g., GCP, AWS).
Strong fundamentals in REST/gRPC design, authentication, and CI/CD.
Familiarity with LLM tool-calling APIs, agentic orchestration frameworks, and MCP-based architectures.
Experience with LangGraph, PydanticAI, MCP servers, or equivalent orchestration stacks.
Knowledge of vector databases (FAISS, pgvector, Pinecone, Weaviate).
Experience using LLM APIs such as OpenAI's and Anthropic's.
Clear communication, strong documentation, and a collaborative mindset.
Bonus Points:
Why Join Us
Join a high-caliber founding team defining a new layer of the AI stack.
Work on the next frontier of GenAI at the intersection of symbolic reasoning numeric ML and agentic intelligence.
Ship real systems that power the next generation of AI agents.
Flexible high-ownership work environment with deep technical impact and visibility.
Interested?
Reach out! We value capability curiosity and drive above all.