
AI Engineer

  • Remote
  • Bangalore, Karnataka, India
  • Engineering

Job description

About Toku

At Toku, we create enterprise cloud communications and customer engagement solutions that reimagine the customer experience. We provide an end-to-end approach that helps businesses overcome the complexity of digital transformation in APAC markets and enhance their CX with mission-critical cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity, and global reach to serve the diverse needs of enterprises.

About the Role

As Toku continues creating momentum for its products in the APAC region and helping customers with their communications needs, we are looking for an AI Engineer with strong expertise in large language models (LLMs), deep learning, and MLOps. You’ll be working on the design, training, and deployment of state-of-the-art generative models and intelligent systems to power our next-generation contact center and unified communication platforms.

This is an impactful position during a growth phase for the business. You’ll contribute to model development, system design, and applied research in a collaborative and highly visible environment. If you have a strong technical foundation in ML/AI, curiosity for the latest research, and enjoy working across teams to bring intelligent features to life—this role is for you.

What you will be doing

As an AI Engineer, you will collaborate with stakeholders across the organization. Your primary focus will be on training, evaluating, and deploying ML models, especially LLMs and generative systems, while supporting their integration into cloud-native applications. You'll also stay up to date with the latest research, experiment with new architectures, and help build the foundational AI systems that scale with our platform.

Toku Engineering defines competencies across five axes to guide performance evaluations and individual growth. The AI Engineer role profile is structured accordingly:

Delivery

  • Train, fine-tune, evaluate, and deploy deep learning models, including LLMs and generative models

  • Build robust, reusable ML pipelines and APIs using Python (FastAPI), integrated into scalable backend systems

  • Contribute to building cloud-native AI services leveraging AWS tools such as Lambda, SageMaker, S3, and API Gateway

  • Develop and maintain MLOps processes, ensuring reproducibility, automation, and monitoring of deployed models

  • Collaborate with backend engineers to integrate models into production systems with APIs and event-driven workflows

  • Write clean, maintainable, and well-documented code for both model training and deployment pipelines

  • Monitor live models for performance degradation and data drift, and implement feedback loops where appropriate

  • Contribute to prompt design, model evaluation, and performance tuning of generative models used in production

  • Share progress, blockers, and proposed solutions proactively during planning and technical discussions
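As one illustration of the monitoring work described above, a drift check can be sketched in plain Python using the Population Stability Index (PSI). The bucket count, smoothing constant, and the 0.2 alert threshold are illustrative assumptions; a production system would typically rely on a dedicated monitoring stack rather than hand-rolled code.

```python
import math

def psi(reference, live, buckets=10):
    """Population Stability Index between a reference feature distribution
    and a live one, over equal-width buckets spanning the reference range.
    Rule of thumb (an assumption here): PSI > 0.2 suggests significant drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / buckets or 1.0

    def histogram(values):
        counts = [0] * buckets
        for v in values:
            # Clamp out-of-range live values into the edge buckets.
            idx = min(max(int((v - lo) / width), 0), buckets - 1)
            counts[idx] += 1
        total = len(values)
        # Smooth empty buckets so the log term stays defined.
        return [max(c / total, 1e-6) for c in counts]

    ref_pct, live_pct = histogram(reference), histogram(live)
    return sum((r - l) * math.log(r / l) for r, l in zip(ref_pct, live_pct))

# Identical distributions give PSI near zero; a shifted one scores high.
ref = [i / 100 for i in range(1000)]
drifted = [v + 5.0 for v in ref]
assert psi(ref, ref) < 0.01
assert psi(ref, drifted) > 0.2
```

In practice the same check would run on a schedule against logged model inputs, feeding the feedback loops mentioned above.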

Strategic Alignment

  • Read and evaluate recent research papers in LLMs, generative models, and deep learning; implement promising architectures

  • Drive experimentation and benchmarking of different modeling approaches (e.g., transformers, diffusion, RAG, LoRA, quantization)

  • Contribute to selecting tools and frameworks that align with Toku’s technical strategy in AI/ML

  • Support strategic AI initiatives that enable personalization, automation, or intelligent augmentation of our platform

  • Champion best practices for model reproducibility, explainability, and performance measurement

  • Maintain awareness of the evolving AI landscape, proposing ideas for long-term innovation and system enhancement
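Of the modeling approaches named above, retrieval-augmented generation (RAG) is the simplest to sketch: retrieve the documents most similar to a query and prepend them to the prompt. The bag-of-words "embedding" below is a stand-in for a real embedding model, and the corpus strings are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a learned embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Rank corpus documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

corpus = [
    "Call routing sends each inbound call to the best available agent.",
    "Quantization compresses model weights to speed up inference.",
    "Sentiment analysis scores customer messages as positive or negative.",
]
context = retrieve("how do we route inbound calls to agents", corpus, k=1)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: how do we route calls?"
```

A production pipeline would swap in a trained embedding model and a vector database for the retrieval step, then send the assembled prompt to an LLM.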

Talent

  • Communicate effectively with cross-functional teams, including product, backend, frontend, and DevOps

  • Contribute meaningfully to technical discussions, offering insights into model architecture, performance, and design trade-offs

  • Share technical findings, experimental results, and research insights through documentation or internal demos

  • Collaborate respectfully with peers to build AI features that are both innovative and maintainable

  • Be a dependable, constructive teammate with a collaborative, learning-oriented mindset

Culture

  • Engage in cross-team collaboration, working with designers, engineers, and product stakeholders to align AI work with user needs

  • Participate in team meetings, planning sessions, and brainstorming discussions

  • Contribute to a positive, inclusive, and transparent engineering culture

  • Demonstrate curiosity, humility, and a growth mindset in daily work

  • Support a culture of experimentation and continuous improvement in AI development

Technical Excellence

  • Demonstrate strong hands-on expertise in Python (PyTorch, FastAPI); working knowledge of Go or TypeScript is a plus

  • Apply deep knowledge of LLMs and transformer-based architectures (e.g., BERT, GPT, LLaMA, Mistral)

  • Train and fine-tune deep learning models, including optimizing them for inference (quantization, distillation, etc.)

  • Utilize tools such as HuggingFace Transformers, LangChain, MLflow, and vector databases

  • Work fluently with MLOps workflows, containerization (Docker), orchestration (Airflow/Kubernetes), and cloud deployment on AWS

  • Build and deploy scalable, secure AI services as part of a microservice or serverless architecture

  • Demonstrate a strong understanding of evaluation techniques for generative models, NLP, and model performance monitoring

  • Write production-grade code, pipelines, and APIs that are well-structured and testable
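One evaluation technique the list above alludes to is token-overlap F1 between a generated answer and a reference, a lightweight metric popularized by extractive QA benchmarks such as SQuAD. The example strings are invented for illustration.

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a model answer and a reference answer.
    Simple whitespace tokenization; real evaluations usually normalize
    punctuation and articles as well."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

assert token_f1("the call was routed", "the call was routed") == 1.0
assert token_f1("hello world", "goodbye moon") == 0.0
```

Metrics like this are cheap enough to run on every evaluation batch, complementing slower human or LLM-as-judge review of generative output.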

Job requirements

We would love to hear from you if you have:

  • A Bachelor's or Master's degree in Computer Science, AI/ML, Engineering, or a related field

  • 5+ years of experience in software development, with at least 3 years focused on AI/ML or applied deep learning

  • Proven experience training and deploying LLMs or generative models in production environments

  • Strong foundation in Python for AI development; Go or TypeScript/Node.js is a plus

  • Experience with cloud platforms and tools (especially AWS)

  • Familiarity with modern ML frameworks and libraries (e.g., PyTorch, HuggingFace, LangChain)

  • Good understanding of NLP, generative modeling, and vector-based retrieval systems

  • Experience integrating ML models with backend systems via APIs

  • Excellent communication and collaboration skills

  • A curiosity-driven mindset and willingness to continuously learn new techniques and technologies

If you would love to experience working in a fast-paced, growing company and believe you meet most of the requirements, come join us!
