ExpertsHub.ai

Senior Machine Learning Engineer

On site

Charlotte, United States

Senior

Freelance

22-01-2026

Skills

Kubernetes, Monitoring, Change Management, Architecture, OpenShift, Microservices

Job Specifications

Requirements:

Top Must-Haves:

- Managing, operating, and supporting MLOps/LLMOps pipelines.

- Troubleshooting LLMs.

- Model optimization.

Production support engineer who focuses on LLMs/AI and uses TensorRT-LLM and Triton Inference Server.

Note: This position is primarily a hands-on technical troubleshooting role for production solutions, less a role for designing, architecting, or engineering new solutions.

Job Description:

AI Operations Platform Consultant

Experience deploying, managing, operating, and troubleshooting containerized services at scale on Kubernetes (OpenShift) for mission-critical applications
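
To illustrate the kind of day-to-day triage this implies, here is a minimal sketch using the official Kubernetes Python client to flag unhealthy pods. The namespace name "inference" is a placeholder; on a real OpenShift cluster this same inspection would typically also be done with oc/kubectl.

```python
# Sketch: flag pods in a namespace that look unhealthy, using the
# official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="inference").items:
    problems = []
    if pod.status.phase not in ("Running", "Succeeded"):
        problems.append(f"phase={pod.status.phase}")
    for cs in pod.status.container_statuses or []:
        if cs.state.waiting:  # e.g. CrashLoopBackOff, ImagePullBackOff
            problems.append(f"{cs.name}: {cs.state.waiting.reason}")
    if problems:
        print(pod.metadata.name, "->", "; ".join(problems))
```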

Experience with deploying, configuring, and tuning LLMs using TensorRT-LLM and Triton Inference Server.
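
To make the Triton side concrete, below is a minimal health-check sketch against a Triton Inference Server over HTTP, assuming the default endpoint on port 8000; the model name "llama3-trtllm" is hypothetical.

```python
# Sketch: liveness/readiness checks against a Triton Inference Server
# using the tritonclient package (pip install tritonclient[http]).
import tritonclient.http as httpclient

triton = httpclient.InferenceServerClient(url="localhost:8000")

print("server live: ", triton.is_server_live())
print("server ready:", triton.is_server_ready())
print("model ready: ", triton.is_model_ready("llama3-trtllm"))  # placeholder name

# List every model in the repository along with its load state
for model in triton.get_model_repository_index():
    print(model["name"], model.get("state"))
```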

Managing, operating, and supporting MLOps/LLMOps pipelines, using TensorRT-LLM and Triton Inference Server to deploy inference services in production

Setup and operation of AI inference service monitoring for performance and availability.
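
As a rough illustration of such monitoring, Triton exposes Prometheus-format metrics (by default on port 8002 at /metrics), which a probe can scrape; in production these would normally feed Prometheus/Grafana rather than a script, and exact metric names can vary by Triton version.

```python
# Sketch: quick availability/performance probe against Triton's
# built-in Prometheus metrics endpoint (default port 8002).
import urllib.request

METRICS_URL = "http://localhost:8002/metrics"  # placeholder host

with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
    body = resp.read().decode("utf-8")

for line in body.splitlines():
    # Keep only a few counters/gauges for a quick health view
    if line.startswith(("nv_inference_request_success",
                        "nv_inference_request_failure",
                        "nv_gpu_utilization")):
        print(line)
```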

Experience deploying and troubleshooting LLMs on a containerized platform, including monitoring, load balancing, etc.

Experience with standard processes for operation of a mission-critical system – incident management, change management, event management, etc.

Managing scalable infrastructure for deploying and serving LLMs

Deploying models in production environments, including containerization, microservices, and API design
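
For a sense of the API-design aspect, here is an illustrative sketch of a thin microservice fronting an inference backend; the generate_text() helper is hypothetical and stands in for a call to Triton or another serving layer. Such a service would typically run under uvicorn inside a container, with /healthz wired to Kubernetes probes.

```python
# Sketch: a thin FastAPI microservice wrapping an inference backend.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

def generate_text(prompt: str, max_tokens: int) -> str:
    # Placeholder: a real service would call the model server here.
    return f"(echo) {prompt[:max_tokens]}"

@app.get("/healthz")
def healthz():
    # Lightweight liveness endpoint for Kubernetes probes
    return {"status": "ok"}

@app.post("/generate")
def generate(req: GenerateRequest):
    return {"completion": generate_text(req.prompt, req.max_tokens)}
```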

Knowledge of Triton Inference Server, including its architecture, configuration, and deployment.

Model optimization techniques using Triton with TensorRT-LLM

Model optimization techniques, including pruning, quantization, and knowledge distillation
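
As a toy illustration of one such technique, the sketch below applies post-training dynamic quantization in PyTorch. Real LLM quantization (e.g. INT8/FP8 engine builds via TensorRT-LLM) is a separate, more involved workflow; this only shows the general idea on a small model.

```python
# Sketch: post-training dynamic quantization of a toy model in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # quantize Linear weights to int8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original model
```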

WE ARE LOOKING FOR A CANDIDATE WHO:

Brings extensive experience operating large-scale GPU-accelerated AI platforms, deploying and managing LLM inference systems on Kubernetes with strong expertise in Triton Inference Server and TensorRT-LLM. They have repeatedly built and optimized production-grade LLM pipelines with GPU-aware scheduling, load balancing, and real-time performance tuning across multi-node clusters. Their background includes designing containerized microservices, implementing robust deployment workflows, and maintaining operational reliability in mission-critical environments. They have led end-to-end LLMOps processes involving model versioning, engine builds, automated rollouts, and secure runtime controls. The candidate has also developed comprehensive observability for inference systems, using telemetry and custom dashboards to track GPU health, latency, throughput, and service availability. Their work consistently incorporates advanced optimization methods such as mixed precision, quantization, sharding, and batching to improve efficiency. Overall, they bring a strong blend of platform engineering, AI infrastructure, and hands-on operational experience running high-performance LLM systems in production.

About the Company

At ExpertsHub.ai, we bridge the gap between businesses and top-tier AI experts. Our AI-powered platform ensures seamless connections, matching clients with skilled and vetted AI professionals who deliver quality results. Whether you need expert guidance, specialized services, or on-demand AI talent, ExpertsHub.ai makes hiring smarter, faster, and more efficient.