A.I. EXPERT MANAGER
Full-time · Bucharest

Summary

Location

Bucharest

Type

full-time


About this role

At JTI we celebrate differences, and everyone truly belongs. 46,000 people from all over the world are continuously building their unique success story with us. 83% of employees feel happy working at JTI.

To make a difference with us, all you need to do is bring your human best.

What will your story be? Apply now!  

Learn more: jti.com

AI PLATFORM ENGINEER MANAGER

Provides guidance on the design and management of the AI infrastructure and AI Gateway ecosystem, formulates best practices and engineering standards, and organizes processes for LLMOps, governance, and platform evolution. Builds processes and tools to maintain high model availability, inference quality, and infrastructure maintainability. Develops and maintains the architectural roadmap for AI Services (Azure AI Foundry, Azure AI Search, Databricks), ensuring strict alignment with Enterprise Architecture strategies and Security standards.

Position:

AI Platform Governance, LLMOps monitoring, automation, optimization, and support:

  • Institute Engineering Patterns: Define and enforce standard patterns that support secure model consumption, RAG (Retrieval-Augmented Generation) pipeline integration, vector embedding strategies, and standardized agentic interactions (e.g., via MCP).
  • Optimize Inference Architecture: Create optimal inference and routing architectures, ensuring that applications use the most suitable model configurations (context window, temperature, tier) in terms of latency, response quality, and cost efficiency.
  • Infrastructure Design: Design and build infrastructure that allows Generative AI capabilities to be accessed securely and scalably by enterprise applications via the centralized AI Gateway.
  • Refactor & Evolve: Refactor existing AI Gateway policies, throttling rules, and integration frameworks to optimize their functioning, reduce latency, and accommodate new model capabilities.
  • Tech Watch: Stay up to date with GenAI industry standards and rapid technological advancements (e.g., real-time APIs, reasoning models) to improve the operation and capabilities of the platform.
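
To give a flavor of the inference-routing responsibility, the idea can be sketched as follows. This is a minimal illustration only: the model names, tiers, costs, and latency figures are hypothetical placeholders, not JTI's actual gateway configuration.

```python
# Illustrative model-routing sketch: pick a deployment config by request profile.
# All tier names and numbers below are invented for the example.

MODEL_TIERS = [
    {"name": "small-fast", "context_window": 16_000, "cost_per_1k": 0.2, "latency_ms": 300},
    {"name": "large-quality", "context_window": 128_000, "cost_per_1k": 2.0, "latency_ms": 1200},
]

def route_request(prompt_tokens: int, needs_deep_reasoning: bool) -> dict:
    """Return the cheapest config whose context window fits the prompt,
    escalating to the larger tier when deep reasoning is requested."""
    candidates = [m for m in MODEL_TIERS if m["context_window"] >= prompt_tokens]
    if not candidates:
        raise ValueError("prompt exceeds all configured context windows")
    if needs_deep_reasoning:
        return max(candidates, key=lambda m: m["context_window"])
    return min(candidates, key=lambda m: m["cost_per_1k"])
```

In a real gateway this decision would typically live in routing policies rather than application code, so it can be changed centrally without redeploying consumers.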

AI Platform Engineering & Operations:

  • AI Infrastructure Lifecycle: Design, implement, and manage the core AI ecosystem (Azure OpenAI, API Management, AI Search, Databricks) using robust Infrastructure as Code (IaC) frameworks (Terraform/Bicep) to ensure reproducibility and configuration drift control.
  • CI/CD for AI Operations: Develop and maintain pipelines to automate not just infrastructure deployment, but also API Gateway policy updates, prompt engineering versioning, and safe rollouts of model configuration changes.
  • Technical Coordination: Facilitate tight technical collaboration between Security Engineers, Data Architects, and external vendors to streamline the integration of complex AI capabilities, removing silos between operations and development.
  • Deep Observability & LLMOps: Implement advanced monitoring, distributed tracing, and alerting dashboards to track AI-specific metrics (Token usage, TPM/RPM limits, Inference Latency, Model Drift) alongside standard infrastructure health.
  • Operational Automation: Write and maintain scripts (Python/PowerShell) to automate operational toil, such as API key rotation, automated consumer onboarding, and compliance reporting tasks.
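
As an illustration of the operational automation mentioned above, here is a minimal key-rotation sketch. The in-memory dictionary stands in for whatever secret store the real platform would use (e.g., Azure Key Vault), and the function and field names are hypothetical.

```python
import secrets
from datetime import datetime, timedelta, timezone

def rotate_key(store: dict, consumer: str, grace_hours: int = 24) -> str:
    """Rotate a consumer's gateway key: the old primary key is demoted to a
    secondary slot (kept valid for a grace period so in-flight clients keep
    working), and a freshly generated primary key is issued."""
    new_key = secrets.token_urlsafe(32)
    entry = store.setdefault(consumer, {})
    entry["secondary"] = entry.get("primary")  # keep the old key temporarily
    entry["secondary_expires"] = datetime.now(timezone.utc) + timedelta(hours=grace_hours)
    entry["primary"] = new_key
    return new_key
```

The dual-key pattern (primary plus grace-period secondary) is what makes rotation safe to run on a schedule without breaking consumers mid-request.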

Standardization, Innovation & Technical Evolution:

  • Engineering Efficiency: Identify development tasks where implementation time can be reduced by creating standard assets: reusable APIM Policy fragments (for caching, throttling, tracing), modular Terraform/Bicep templates for AI resources, and standardized Python libraries for common interactions (e.g., token counting, logging).
  • Pattern Optimization: Work with Technical Delivery Leads to review and identify integration patterns that could be parameterized in the AI Gateway (e.g., centralized PII masking or unified content filtering) and ensure adoption across projects to optimize time and token costs.
  • Tech Watch: Stay abreast of rapid trends and new capabilities within the Azure AI ecosystem and the broader Generative AI landscape (e.g., Small Language Models, reasoning models, or new vector search algorithms).

Data Integration & RAG Engineering:

  • Vector Infrastructure Management: Manage and optimize the Vector Search infrastructure (Azure AI Search / Databricks Vector Search). Define engineering standards for index configuration, HNSW parameters, and hybrid search ranking to ensure sub-second retrieval latency.
  • Data-to-AI Pipelines: Architect the high-performance pipelines that ingest enterprise data from the Data Lake into the AI context. Define rigorous technical standards for text chunking strategies, embedding generation, and metadata enrichment to maximize retrieval accuracy for consuming applications.
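
The chunking standards referred to above can be illustrated with a minimal sketch. Real pipelines usually chunk on token or sentence boundaries, and the sizes here are arbitrary examples, not a JTI standard.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap, so that context spanning
    a chunk boundary is not lost to retrieval."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Chunk size and overlap are exactly the kind of parameters such a pipeline standard would pin down, since they directly trade off retrieval accuracy against index size and embedding cost.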

Requirements:

  • College or University degree.
  • 10+ years of hands-on experience in a Technical Lead or Platform Engineering Lead role.
  • 5+ years of proven experience managing Azure infrastructure oriented to data services in an enterprise environment.
  • 5+ years of experience with Infrastructure as Code (Terraform or Bicep) and CI/CD pipelines.
  • 5+ years leading technical interactions with vendors/partners (code reviews, technical acceptance).
  • 3+ years of experience managing Azure OpenAI, Azure AI Search, Azure AI Foundry, and RAG implementations.
  • Deep technical expertise in LLM internals and inference mechanics, coupled with practical experience in emerging standards such as the Model Context Protocol (MCP), agentic frameworks (e.g., Semantic Kernel, AutoGen), and the optimization of complex RAG architectures and vector search strategies.
  • Azure Data Lake, Databricks, and Data Factory experience is a must.
  • Solid understanding of cloud services, infrastructure management, and cloud-native applications.
  • Very good understanding of Azure PaaS management, resource monitoring, and security principles.
  • Deep hands-on knowledge of Azure API Management and networking (VNETs, Private Links).
  • Solid scripting skills (Python, PowerShell, or Bash).
  • Fluent English.

Are you ready to join us? Build your success story at JTI. Apply now!

Next Steps:

After applying, if selected, you can expect the following steps within 1-3 weeks of the job posting's closure: phone screening with a Talent Advisor > assessment tests > interviews > offer. Each step is eliminatory, and the exact process may vary by role type.

At JTI, we strive to create a diverse and inclusive work environment. As an equal-opportunity employer, we welcome applicants from all backgrounds. If you need any specific support, alternative formats, or have other access requirements, please let us know.

Other facts

Tech stack
AI Infrastructure, Azure AI Foundry, Azure AI Search, Databricks, Infrastructure as Code, CI/CD Pipelines, LLMOps, Model Optimization, Data Integration, Vector Search, Python, PowerShell, Technical Coordination, Monitoring, Security Standards, Cloud Services

About JT International S.A.

We’re JTI, Japan Tobacco International and we believe in freedom. We think that the possibilities are limitless when you’re free to choose. In fact, we’ve spent over 20 years innovating, creating new and better products for our consumers to choose from. It’s how we’ve grown to be present in 130 countries.

But our business isn’t just business. Our business is our people. Their talent. Their potential. We believe when they’re free to be themselves, grow, travel and develop, amazing things can happen for our business.

That’s why our employees, from around the world, choose to be a part of JTI. 83% of employees feel happy working at JTI. And that's why we’ve been awarded Global Top Employer status, 10 years running.

So when you’re ready to choose a career you’ll love, in a company you’ll love, feel free to #JoinTheIdea.


Team size: 10,001+ employees
LinkedIn: Visit
Industry: Tobacco Manufacturing

What you'll do

  • The AI Platform Engineer Manager will guide the design and management of the AI infrastructure, ensuring high model availability and inference quality. Responsibilities include optimizing inference architecture, managing AI services, and implementing advanced monitoring for AI-specific metrics.

Ready to join JT International S.A.?

Take the next step in your career journey

Frequently Asked Questions

What does an A.I. EXPERT MANAGER do at JT International S.A.?

As an A.I. EXPERT MANAGER at JT International S.A., you will guide the design and management of the AI infrastructure, ensuring high model availability and inference quality. Responsibilities include optimizing inference architecture, managing AI services, and implementing advanced monitoring for AI-specific metrics.

Why join JT International S.A. as an A.I. EXPERT MANAGER?

JT International S.A. is a leading Tobacco Manufacturing company.

Is the A.I. EXPERT MANAGER position at JT International S.A. remote?

The A.I. EXPERT MANAGER position at JT International S.A. is based in Bucharest, Romania. Contact the company through Clera for specific work arrangement details.

How do I apply for the A.I. EXPERT MANAGER position at JT International S.A.?

You can apply for the A.I. EXPERT MANAGER position at JT International S.A. directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about JT International S.A. on their website.