Reinforcement Learning Environment Engineer

San Francisco
$15,800-24,000/month
Remote
Full-time

RL Environments; MLE; LLM Tasks; Difficulty Distribution; Remote Contractor; PST Overlap (≥4h); Advanced English (C1/C2)

Brief description of the vacancy

We’re hiring RL Environments Engineers to design and build MLE/SWE environments that deliver high-quality, diverse tasks with minimal supervision. You will target a specific language model, meet a defined difficulty distribution, and deliver roughly one task every 8-10 hours. This is a remote contractor role requiring ≥4 hours of overlap with PST and advanced English (C1/C2).

About the company

Preference Model (via XOR.AI)

Preference Model is building the next generation of training data to power the future of AI. Today's models are powerful but fail to reach their potential across diverse use cases because so many of the tasks that we want to use these models for are outside of their training data distribution. Preference Model creates reinforcement learning environments that encapsulate real-world use cases, enabling AI systems to practice, adapt, and learn from feedback grounded in reality. We seek to bring the real world into distribution for the models.

Our founding team has previous experience on Anthropic’s data team building data infrastructure, tokenizers, and datasets behind the Claude model. We are partnering with leading AI labs to push AI closer to achieving its transformative potential.

The company is backed by Tier 1 Silicon Valley VC.

Responsibilities

  • Design and build MLE/SWE environments and diverse tasks.
  • Target a specified language model and satisfy the required difficulty distribution.
  • Deliver roughly one task per 8-10 hours once onboarded.
  • Edit tasks within 24 hours based on customer feedback.
  • Onboard quickly and start delivering on day one with minimal supervision.

Requirements

What we’re looking for (must-haves)

  • Strong Python (engineering-quality, not notebook-only).
  • Hands-on LLM/GenAI work in production: you’ve shipped and operated real systems (not “wrapped an API and called it AI”).
  • Experience designing environments/tasks for RL and/or evaluations.
  • Strong product/engineering ownership: comfortable building, fixing, and scaling end-to-end pipelines.
  • Docker + production mindset (debugging, reliability, iteration speed).
  • ≥4 hours PST overlap and advanced English (C1/C2) for specs, reviews, and feedback.
  • Ability to meet throughput expectations and respond quickly to feedback.

Strong signals (nice-to-have, big plus)

  • Experience in high-stakes or regulated domains (e.g., healthcare, finance, fraud/risk, safety-critical systems).
  • ML systems experience: CI/CD, monitoring, evaluation harnesses, MLOps, scalable pipelines.
  • Systems depth: C++/Rust/Scala/Java, performance/infra optimization, distributed systems.
  • Exposure to RL / bandits / agentic systems (not required, but a strong signal).

Not a fit if

  • You’re primarily a prompt engineer without strong ML/engineering foundations.
  • You’re a research-only / academic-only profile with little or no shipping/production ownership.
  • You’ve only built in notebooks or rely heavily on managed AutoML tools.

Working conditions

  • Remote, independent contractor engagement.
  • Full-time, with ≥4 hours of working-hours overlap with the team in the Pacific time zone.
  • Deliverables-driven; begin shipping on day one.
  • Conversion & relocation: Potential path to FTE and relocation to the Bay Area if performance and mutual fit align.
