Research Scientist - Frontier Data
AfterQuery
San Francisco, California, US
Posted Today · via ashby
Job Description
About AfterQuery
AfterQuery builds the training data and evaluation infrastructure that frontier AI labs use to make their models better. We work with the world's leading labs to design high-signal datasets and run rigorous evaluations that go beyond static benchmarks. We are a small, early team (post-Series A) where individual contributors have a direct impact on how the next generation of models learn and improve.
The Role
You'll design the datasets and evaluation frameworks that shape how frontier models are trained and measured. Working directly with research teams at top AI labs, you'll experiment with data collection strategies, diagnose model failure modes, and develop the metrics that determine whether a model is actually getting better. This is hands-on, high-leverage work: you'll go from hypothesis to live experiment quickly, and your output will directly influence model training runs at scale.
What You'll Do
Design data slices and explore data shapes that expose meaningful model failure modes across domains like finance, code, and enterprise workflows
Build and refine evaluation rubrics and reward signals for RLHF and RLVR training pipelines
Model annotator behavior and run experiments to improve different model capabilities
Develop quantitative frameworks for measuring dataset quality, diversity, and downstream impact on model alignment and capability
Partner with lab research teams to translate their training objectives into concrete data and evaluation specifications
What We're Looking For
Great candidates have done undergraduate or master's-level research (but haven't completed a PhD)
A major plus if you've worked at or interned with an RL environment company, or an AI safety or benchmarking org such as METR or Artificial Analysis
Genuine obsession with how data structure, selection, and quality drive model behavior
Ability to design lightweight experiments, move fast, and extract actionable insights from messy results
Comfort working across domains (you'll touch finance, software engineering, policy, and more)
Strong quantitative instincts and familiarity with LLM training pipelines, RLHF/RLVR, or evaluation methodology
A bias toward building over theorizing
Compensation Structure:
$250K-$450K total compensation + equity
Details
- Department: Research
- Work Type: Onsite
- Locations: San Francisco, California, US
- Salary: $150K - $250K
- Posted: April 14, 2026
- Source: ashby