Senior AI Compute Infrastructure Engineer

Kraken

Employer Active · Posted 11 hrs ago

Experience: 5 - 10 Years
Education: Any Graduation
Nationality: Any Nationality
Gender: Not Mentioned
Vacancy: 1 Vacancy

Job Description

Roles & Responsibilities

The team

Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.

The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.

You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken's AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.

The opportunity

  • Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.

  • Design infrastructure that enables Kraken teams to run models locally on GPUs where it is strategically and economically preferable, reducing unnecessary dependency on external providers and containing compute costs.

  • Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.

  • Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.

  • Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.

  • Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.

  • Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.

  • Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.

  • Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.

  • Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.

Skills you should HODL

  • 5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.

  • Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.

  • Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.

  • Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.

  • Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.

  • Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.

  • Track record of optimizing compute costs while maintaining clear performance, reliability, and availability expectations.

  • Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.

  • Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.

  • Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.

Nice to haves

  • Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.

  • Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.

  • Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.

  • Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.

  • Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.

  • Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.

  • Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.




Kraken

Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.

Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you'll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken's focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.


https://jobs.ashbyhq.com/kraken.com/055adaf0-a15b-40f3-89ab-dc5ab4fa719e