Build, deploy, and operate production AI platforms: productionize LLM/SLM APIs, fine-tune and serve Hugging Face models, containerize services, manage cloud deployments, CI/CD, monitoring, scaling, and cost optimization.
Job Title
AI Platform Specialist (Python, LLMs/SLMs, Cloud)
Job Summary
We are looking for an AI Platform Specialist to build, deploy, and operate AI models in production. This role is deployment-focused, with an emphasis on scalability, reliability, and cost efficiency.
You will work hands-on with Python, LLM/SLM APIs (OpenAI, Gemini), and open-source models (Hugging Face) to turn AI models into production-ready platforms and services.
Key Responsibilities
- Deploy AI/ML models built in Python into production environments
- Productionize OpenAI, Gemini, and other LLM APIs
- Build, fine-tune, and deploy Small Language Models (SLMs)
- Work with Hugging Face models (loading, fine-tuning, inference, optimization)
- Build and maintain REST APIs for AI services
- Containerize and deploy applications using Docker
- Deploy and manage services on cloud platforms (AWS / GCP / Azure)
- Set up and maintain CI/CD pipelines
- Monitor latency, failures, and inference cost
- Manage model versioning, scaling, and rollbacks
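As a flavor of the cost-monitoring responsibility above, here is a minimal sketch of estimating per-request inference cost from token usage. The model name and per-token prices are illustrative placeholders, not real vendor rates.

```python
# Sketch: estimating per-request LLM inference cost from token usage.
# The model name and prices below are illustrative placeholders only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pricing:
    input_per_1k: float   # USD per 1K input tokens
    output_per_1k: float  # USD per 1K output tokens

PRICING = {"example-model": Pricing(input_per_1k=0.00015, output_per_1k=0.0006)}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one completion request."""
    p = PRICING[model]
    return (input_tokens / 1000) * p.input_per_1k + (output_tokens / 1000) * p.output_per_1k
```

In practice these per-request numbers would be aggregated alongside latency and failure metrics to spot cost regressions early.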
Required Skills
- Strong Python skills (production experience)
- Experience deploying AI/ML systems into production
- Hands-on experience with OpenAI / Gemini / LLM APIs
- Experience with the Hugging Face ecosystem
- Understanding of SLMs and model optimization
- REST APIs and backend systems
- Docker and containerized deployments
- Cloud platforms (AWS / GCP / Azure)
- Linux and CI/CD workflows
Good to Have
- Kubernetes or serverless deployments
- MLOps tools (MLflow, DVC, Kubeflow)
- Monitoring and logging tools
- Experience optimizing LLM/SLM cost, latency, and memory
- Basic training or fine-tuning workflows
What We Look For
- Ability to deploy and operate AI platforms reliably
- Strong focus on simplicity, performance, and cost
- Hands-on, ownership-driven mindset
- Comfort working in a lean startup environment
- Experience in Python / AI deployment / MLOps / DevOps
- Prior experience with production AI systems preferred
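To illustrate the versioning-and-rollback work this role covers, below is a toy in-memory model registry; a production platform would typically rely on a tool such as MLflow's model registry instead. All names and URIs here are hypothetical.

```python
# Toy sketch of model version tracking with rollback (in-memory only).
# Real platforms would use a registry such as MLflow; names are hypothetical.
class ModelRegistry:
    def __init__(self) -> None:
        self._versions: list[tuple[str, str]] = []  # (version, artifact_uri)

    def register(self, version: str, artifact_uri: str) -> None:
        """Record a newly deployed model version."""
        self._versions.append((version, artifact_uri))

    def current(self) -> tuple[str, str]:
        """Return the version currently serving traffic."""
        return self._versions[-1]

    def rollback(self) -> tuple[str, str]:
        """Drop the latest version and fall back to the previous one."""
        if len(self._versions) > 1:
            self._versions.pop()
        return self.current()
```

Keeping rollback a single, cheap operation is what makes a bad model deploy recoverable in seconds rather than hours.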