RLEE - Low-Level Engineering & Kernel Inference Optimization

San Francisco
$14,500–21,700/month
Remote
Full-time

RL Environments · Kernel Optimization · GPU/CUDA · Compilers (LLVM/MLIR) · PyTorch Extensions · Distributed Inference (vLLM/NCCL)

Brief description of the vacancy

We're hiring Low-Level Engineers to design and build RL environments that teach LLMs kernel development, hardware optimization, and systems programming. The goal is to create realistic feedback loops where models learn to write high-performance code across GPU and CPU architectures.

This is a remote contractor role requiring at least 4 hours of overlap with PST and advanced English (C1/C2).

About the company

Preference Model (via XOR.AI)

Preference Model is building the next generation of training data to power the future of AI. Today's models are powerful, but they fail to reach their potential across diverse use cases because many of the tasks we want to use them for are out of distribution. Preference Model creates RL environments where models encounter research and engineering problems, iterate, and learn from realistic feedback loops.

Our founding team previously worked on Anthropic's data team, building the data infrastructure, tokenizers, and datasets behind the Claude models. We partner with leading AI labs to push AI closer to achieving its transformative potential.

The company is backed by a Tier 1 Silicon Valley VC.

Responsibilities

  • Design and build MLE/SWE environments and diverse tasks.
  • Target a specified language model and satisfy the required difficulty distribution.

Requirements

Minimum Qualifications

  • Strong Python (engineering-quality, not notebook-only)
  • Production mindset (debugging, reliability, iteration speed)
  • Clear understanding of LLMs, their current limitations
  • Ability to meet throughput expectations and respond quickly to feedback

You may be a good fit if one or more of the following applies:

  • Deep understanding of memory hierarchies (registers, L1/L2/shared memory, HBM, system RAM) and their performance implications
  • Threading models, synchronization primitives, and concurrent programming (warps, thread blocks, barriers, atomics)
  • Cache coherence, memory access patterns, coalescing, and bank conflicts
  • JIT compilation frameworks (e.g., Triton, JAX/XLA, TorchInductor, Numba)
  • AOT compilation and optimization passes (LLVM, MLIR, TVM)
  • Compiler and kernel frameworks such as CUTLASS, BitBLAS, or JAX/Pallas
  • Modern C++, including templates, concurrency, and build systems
  • Assembly-level programming and low-level optimization across GPU and CPU architectures (e.g., x86, ARM, NVIDIA Hopper, NVIDIA Blackwell)
  • Debugging and optimizing GPU kernels using CUDA and/or HIP/ROCm
  • Developing PyTorch custom operators, backend extensions, or dispatcher integrations (e.g., ATen, TorchScript, or custom backends)
  • Customizing, extending, or optimizing vLLM, including distributed inference workflows
  • GPU communication libraries and collectives, such as NVIDIA NCCL, AMD RCCL, MPI, or UCX
  • Mixed-precision and low-precision kernels (e.g., FP16, BF16, FP8, INT8), including numerical stability and performance trade-offs

Working conditions

Remote contractor role, flexible schedule.

Hourly contractor rate: 90–125 USD/hour, depending on expertise level and the quality of the take-home assignment.
