Job Title: ML Systems Engineer – Interface APIs & Claims Scrubbing (RCM AI Platform)
Role Overview
We are hiring an ML Systems Engineer to own the production interface layer between our denial
prediction models and real-world RCM workflows. This role is responsible for delivering low-latency,
highly reliable inference APIs, integrating AI predictions into claims scrubbing and billing
workflows, and ensuring real-time decisioning at scale.
This is not a pure ML modeling role—it is about making AI usable, fast, and operationally critical
within live healthcare systems. If the model is the brain, this role builds the nervous system.
Key Responsibilities
• Design and own low-latency inference APIs (<200ms) for real-time denial prediction and
claim scoring.
• Build and maintain the claims scrubbing engine that combines payer rules, edits, and
AI-driven risk signals.
• Integrate AI outputs into the Angular-based denial management UI and billing worklists for
actionable workflows.
• Develop robust API layers connecting ML models with RCM systems (billing, coding,
clearinghouses).
• Architect high-throughput, fault-tolerant systems for real-time and batch inference.
• Implement caching, queuing, and optimization strategies to meet strict latency SLAs.
• Own the feedback loop pipeline: capture denial outcomes (835), user corrections, and
resubmission signals to improve model accuracy.
• Ensure correctness, traceability, and auditability of every prediction and rule applied to
a claim.
• Monitor system performance (latency, uptime, drift) and proactively resolve bottlenecks.
• Collaborate closely with ML engineers, frontend teams, and RCM SMEs to align system
behavior with real workflows.
Required Qualifications
• 5–10 years of experience in backend or systems engineering with exposure to ML system
deployment.
• Strong expertise in building high-performance APIs (Python or Go preferred).
• Experience deploying and scaling ML inference systems in production environments.
• Deep understanding of system design: latency optimization, caching, concurrency, and
distributed systems.
• Experience with REST/gRPC APIs, message queues (Kafka, RabbitMQ), and real-time
processing.
• Familiarity with healthcare data formats (X12 837/835, HL7, FHIR) and RCM workflows.
• Experience integrating frontend systems (Angular or similar) with backend APIs.
• Strong debugging and performance tuning skills.
Preferred Qualifications
• Experience building claims scrubbing engines or rule-based validation systems in RCM.
• Exposure to denial prediction or revenue integrity platforms.
• Experience with feature stores and model serving frameworks (FastAPI, TensorFlow Serving,
TorchServe).
• Knowledge of frontend-backend interaction patterns for real-time decision systems.
• Experience with cloud infrastructure (AWS/GCP/Azure) and containers (Docker,
Kubernetes).
• Understanding of payer edits, clearinghouse logic, and claim lifecycle timing constraints.
Key Traits for Success
• Obsessive about latency and system performance (<200ms is a hard constraint, not a
goal).
• Thinks in production systems, not just features.
• Strong ownership of reliability and correctness in high-stakes environments.
• Ability to balance rules-based logic with probabilistic AI outputs.
• Deep respect for workflow timing in RCM operations.
What Success Looks Like
• Real-time denial prediction seamlessly embedded in claim submission workflows.
• Sub-200ms API response times at scale.
• High system uptime with zero disruption to billing operations.
• Claims scrubbed with both rules and AI before submission, improving clean claim rates.
• Continuous feedback loop improving model accuracy without manual intervention.
Why Join PAIX
• Own the most critical layer of an AI-first RCM platform—where predictions become
action.
• Build systems that directly impact hospital cash flow and operational efficiency.
• Work on real-time healthcare AI infrastructure, not offline analytics.
• Opportunity to define performance benchmarks for AI in RCM systems.