This talk explores building a complete self-hosted LLM stack in Rust: Paddler, a distributed load balancer for serving LLMs at scale, and Poet, a static site generator that uses those models for AI-powered content features.
In this talk, we'll dive deep into what makes concurrency coordination costly and explore practical ways to mitigate that cost.
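To make "coordination cost" concrete ahead of the talk, here is a minimal Rust sketch (illustrative only, not taken from the talk; the thread and iteration counts are arbitrary) that contrasts many threads contending on one shared atomic counter with per-thread shards padded to separate cache lines:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;
use std::time::Instant;

const THREADS: usize = 8;          // arbitrary, for illustration
const OPS_PER_THREAD: u64 = 1_000_000;

/// Pad each shard to its own cache line so the "sharded" case
/// is not silently re-contended through false sharing.
#[repr(align(64))]
struct PaddedCounter(AtomicU64);

fn main() {
    // Case 1: every thread increments a single shared counter,
    // so its cache line ping-pongs between cores.
    let shared = AtomicU64::new(0);
    let start = Instant::now();
    thread::scope(|s| {
        for _ in 0..THREADS {
            s.spawn(|| {
                for _ in 0..OPS_PER_THREAD {
                    shared.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    println!(
        "shared counter:   {:?} (total {})",
        start.elapsed(),
        shared.load(Ordering::Relaxed)
    );

    // Case 2: each thread increments its own shard; the totals
    // are combined once at the end, so threads never contend.
    let shards: Vec<PaddedCounter> = (0..THREADS)
        .map(|_| PaddedCounter(AtomicU64::new(0)))
        .collect();
    let start = Instant::now();
    thread::scope(|s| {
        for shard in &shards {
            s.spawn(move || {
                for _ in 0..OPS_PER_THREAD {
                    shard.0.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    let total: u64 = shards.iter().map(|c| c.0.load(Ordering::Relaxed)).sum();
    println!("sharded counters: {:?} (total {})", start.elapsed(), total);
}
```

On most multi-core machines the sharded version finishes noticeably faster: every `fetch_add` on the shared counter forces its cache line to bounce between cores, and that bounce is the coordination cost the abstract refers to.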
I’ll share what the Rust job market really looks like in 2025 — where companies are hiring, which skills stand out, and how the recruitment process actually works behind the scenes.
For infrastructure engineers, SREs, platform teams, and Rust developers who've felt the pain of configuration drift, failed deployments, and infrastructure code that simply doesn't scale safely.