Morgan Consulting is partnering with a data-driven innovator that is shaping the next era of Generative AI, Agentic AI, and Machine Learning. As they scale advanced AI capabilities across products, platforms, and engineering practices, they are seeking a Lead AI Engineer to define technical direction, build high-impact solutions, and mentor a cross-functional squad working across production GenAI, Agentic AI, and ML systems.
About the Role (reference V-66350):
This is a hands-on technical leadership role: you will architect, build, and operate reliable ML/AI solutions, elevate engineering standards, and accelerate the adoption of Generative and Agentic AI across mission-critical, client-facing platforms. The position combines strategic influence with deep technical execution, making it ideal for someone who wants to lead, innovate, and deliver tangible impact.
What You Will Do:
- Set technical vision and roadmap for ML/AI engineering, including GenAI testing, automation, agentic patterns, MLOps, and production GenAI.
- Lead and mentor engineers across automation, frontend testing, and ML through reviews, pairing, and capability uplift.
- Architect end‑to‑end ML/AI systems: feature pipelines, model training/evaluation, inference/agentic services, observability, and scalable continuous delivery.
- Establish GenAI and Agentic AI testing frameworks, including custom LLM test models, and drive best practice in evaluation, safety, and release governance.
- Build and evolve automation frameworks for web/mobile (Playwright, Cypress, Appium/Detox) with GenAI‑driven test generation, prioritisation, and self‑healing.
- Standardise ML/GenAI delivery patterns: feature stores, CI/CD, registries, canary/blue‑green, rollbacks, A/B & back‑testing, agent orchestration.
- Own reliability, performance, resilience, latency, and cost optimisation; ensure SLOs, alerting, and runbooks for ML and production GenAI.
- Close gaps across DevSecOps, guardrails, data governance, secrets, PII handling, and compliance for ML, LLMs, and agents.
- Partner with Data Science to productionise advanced models (LLMs, multimodal, agentic systems) with experimentation, telemetry, and monitoring.
- Champion continuous improvement, automation‑first mindset, developer experience uplift, and platform investments.
- Design and implement feature pipelines, inference services, and agentic execution paths (APIs, streaming, batch) with drift, hallucination, performance, and cost monitoring.
- Build and maintain CI/CD pipelines for ML, GenAI, and automation (GitHub Actions primary; Jenkins/Codemagic as needed).
- Lead test strategy for ML, LLMs, and agentic systems: contract tests, model evals, guardrails, synthetic data, behavioural tests, production validation.
- Perform peer reviews and ensure engineering standards: secure coding, documentation, reproducibility.
- Drive estimation, prioritisation, and delivery alignment with product and platform teams.
- Diagnose and resolve production issues across ML/LLM failures, agentic misbehaviour, data drift, and infra bottlenecks; own incident analysis and remediation.
- Ensure observability and explainability: metrics, logs, traces, model cards, evaluation reports for all ML/AI systems.
- Support adoption of quantitative/price-assurance frameworks and contribute to solver/simulation-based applications.
What You Will Bring:
- Proven experience leading engineering teams or guilds across GenAI, Agentic AI/ML/MLOps, and test automation in production, customer-facing contexts.
- Strong software engineering in Python and one of Rust/C++/Java/Go, with demonstrable knowledge of concurrency, parallelisation, performance tuning, and memory safety.
- Hands-on experience with ML pipelines, model registries, feature stores, containerisation (Docker), and orchestration (Kubernetes).
- Expertise in CI/CD and quality engineering at scale; deep experience integrating GitHub Actions (preferred) or Jenkins/Codemagic.
- Solid track record with frontend/mobile test frameworks (Playwright, Cypress, Appium/Detox, Selenium) and GenAI-enabled testing (test synthesis, self-healing, flake reduction).
- Proficiency in model experimentation and evaluation (A/B testing, back-testing, cross-validation, error analysis), plus real-time prediction use cases (regression, classification, time-series, anomaly detection).
- Strong grasp of DevSecOps for ML: secrets, compliance, access control, PII handling, dependency risk, and secure SDLC.
- Excellent communication, stakeholder management, and the ability to influence standards across multiple teams.
- Experience with quant/price-assurance frameworks or solver/simulation-based applications.
- Familiarity with data models such as Medallion, Kimball, or Data Vault.
- TypeScript/JavaScript for tooling and test frameworks; mobile CI familiarity.
- Knowledge of LLM evaluation frameworks, prompt testing/hardening, and guardrail libraries.
Why Join:
- Opportunity to gain significant GenAI and Agentic AI production experience at enterprise scale.
- Strong career development.
- Amazing culture: a data-driven innovator shaping the next era of Generative AI, Agentic AI, and Machine Learning.
If you're a Lead AI Engineer who thrives in a fast-paced environment and is outcome-focused, we would love to hear from you. Apply now or reach out for a confidential conversation (reference V-66350).
