Job Title: AI Performance Engineer (Chatbot and AI applications)
Primary skills: Performance engineering for chatbot and AI applications
Location: Edison, NJ
Mode of work: Onsite
Core Responsibilities
- Test Design & Execution: Create and run performance tests to evaluate AI model responsiveness, stability, and scalability under varying loads.
- Bottleneck Identification: Analyze system performance to pinpoint latency and throughput issues and resource consumption (CPU/GPU/memory).
- AI Model Validation: Test Large Language Models (LLMs) or ML applications for accuracy, fairness, and speed, including validating model performance on new, unseen data.
- Infrastructure Testing: Build automation frameworks for cloud and virtualized environments where AI models are deployed.
- Reporting: Document and present actionable insights on performance improvements to development teams and stakeholders.
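As a flavor of the test design work above, the sketch below shows a minimal concurrent load test that measures per-request latency percentiles. It uses only the Python standard library; the `call_model` stub is hypothetical and would be replaced by a real call to the chatbot or LLM endpoint under test (in practice a tool like Locust or JMeter would drive this at scale).

```python
import concurrent.futures
import statistics
import time

def call_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real chatbot/LLM request;
    # the 10 ms sleep simulates model response time.
    time.sleep(0.01)
    return f"echo: {prompt}"

def run_load_test(n_requests: int, concurrency: int) -> dict:
    """Fire n_requests with the given concurrency and report latency stats."""
    def timed_call(i: int) -> float:
        start = time.perf_counter()
        call_model(f"request {i}")
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))

    return {
        "count": len(latencies),
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))] * 1000,
    }
```

Reporting p50/p95 (rather than a mean) is the usual convention for latency, since tail behavior under load is what chatbot users actually feel.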
Required Qualifications & Skills
- Technical Proficiency: Strong knowledge of performance testing tools (e.g., JMeter, Gatling, Locust, LoadRunner).
- Programming & Scripting: Proficiency in Python, Java, or shell scripting.
- AI/ML Knowledge: Experience with AI infrastructure, LLMs, or ML model monitoring.
- Monitoring Tools: Experience with observability tools such as Datadog, Dynatrace, or Splunk.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.