Who we are
Percepta's mission is to transform critical institutions with applied AI. We care that the industries that power the world (e.g., healthcare, manufacturing, energy) benefit from frontier technology.
To make that happen, we embed with industry-leading customers to drive AI transformation. We bring together:
Forward-deployed expertise in engineering, product, and research
Mosaic, our in-house toolkit for rapidly deploying agentic workflows
Strategic partnerships with Anthropic, McKinsey, AWS, companies within the General Catalyst portfolio, and more
Our team is a quickly growing group of Applied AI Engineers, Embedded Product Managers, and Researchers motivated by diffusing the promise of AI into improvements we can feel in our day-to-day lives.
Percepta is a direct partnership with General Catalyst, a global transformation and investment company.
About the role
We're hiring Machine Learning Engineers who will work directly within customer teams to define and deliver high-impact AI systems. We don't build prototypes or laboratory projects: you'll design, build, and ship production-grade AI agents and workflows that drive millions in business value for customers.
Our Machine Learning Engineers:
Engineer and optimize AI/ML systems: Build end-to-end ML pipelines for data ingestion, training, evaluation, and deployment. Adapt and extend LLMs with fine-tuning, distillation, retrieval systems, and tool use to solve domain-specific problems.
Evaluate AI systems rigorously: Develop custom evaluations to ensure models succeed in real-world environments.
Bring frontier methods into practice: Track the latest techniques in areas like RAG, tool use, multi-step agent orchestration, fine-tuning methods, and evaluation frameworks, and apply them to specific customer challenges.
Collaborate across product and research: Partner with research and product teams to turn frontier techniques into production-ready features and workflows.
Advance our core product: Encode the lessons from our customer engagements into our Mosaic product, consistently contributing reusable ML components, infrastructure abstractions, and performance improvements.
What we're looking for
AI-nativeness: You're excited about the potential for AI to transform businesses and want to play a hands-on role in bringing frontier technology into critical institutions.
Strong ML foundations: Hands-on experience building and deploying production models and AI systems.
Being generative and collaborative: You love constantly jamming on new "what if" ideas with teammates and partners to bridge applied engineering, product, and research efforts.
Extreme ownership: You're willing to jump in and love being the one on the hook. You aren't going to wait to be pointed at a task; you're going to identify what you think we should do next and then do it.
Execution excellence and speed: You can build in messy environments and know how to get code written and shipped quickly. You can balance speed and quality and know when to push the pace versus when to slow down.
Customer obsession and respect: You're motivated by understanding customer pain points and iterating directly with end users to deliver wins quickly.
Bonus if you have
Hands-on experience with LLM tooling (e.g., LangGraph, Mastra, Agents SDK).
Experience fine-tuning, distilling, and deploying LLMs or other foundation models in production.
Background in retrieval/RAG pipelines or multi-step agent design (including tool use and human-in-the-loop systems).
Strong engineering foundations in Python/TypeScript, cloud deployment (AWS/GCP/Azure), and modern MLOps/DevOps tooling.
Prior startup or founding-engineer experience balancing craft, ownership, and speed.
We're working against an incredibly ambitious mission. It won't be easy, but it will likely be the most fulfilling work of your career. If this excites you, let's chat, even if you don't meet all of the qualifications above.
Required Experience:
IC