Senior Machine Learning Infrastructure Engineer
PhysicsX
📍London, GB
Posted Today · via greenhouse
Job Description
About us
Note: We are currently recruiting for multiple positions; please apply only for the role that best aligns with your skillset and career goals.
The Role
The Senior ML Infrastructure Engineer will extend and operate the infrastructure that powers our research model training, fine-tuning, and serving pipelines. You will be embedded within our Research function, partnering directly with ML engineers and research scientists to ensure they can train Large Physics Models efficiently and reliably at scale.
Team Context
In this role, you will be vertically embedded in Research, working daily with:
- Research Scientists who determine the model architectures and methods
- ML Engineers who implement and develop the models
- Simulation Data Engineers who are accountable for upstream data pipelines
You will have end-to-end responsibilities over the research infrastructure, with the autonomy to make architectural decisions and the responsibility to keep data flowing reliably.
Horizontally, you will be part of an infrastructure engineering group responsible for infrastructure across the company.
What you will do
Training Infrastructure
- Design and operate distributed training infrastructure for neural operator architectures (Transolver, Point Cloud Transformer, etc.) on our large NVIDIA DGX B200 platform.
- Optimize training pipelines for throughput, fault tolerance, and cost efficiency, including checkpointing strategies, gradient accumulation, and multi-node synchronization.
- Build and maintain experiment tracking and observability systems that give researchers clear visibility into training runs, hyperparameter sweeps, and model performance.
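As a quick illustration of one technique named above, gradient accumulation lets you reach a large effective batch size with limited per-device memory: micro-batch gradients are summed and the optimizer steps once. The sketch below is a toy scalar example (not PhysicsX's stack, and deliberately framework-free) showing that an accumulated step matches a single large-batch step.

```python
# Toy sketch of gradient accumulation: accumulating scaled micro-batch
# gradients and stepping once is equivalent to one large-batch step.

def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w.
    return (w * x - y) * x

def large_batch_step(w, batch, lr=0.1):
    # One optimizer step on the mean gradient over the full batch.
    g = sum(grad(w, x, y) for x, y in batch) / len(batch)
    return w - lr * g

def accumulated_step(w, batch, micro=2, lr=0.1):
    # Same update, but gradients are accumulated over micro-batches;
    # each contribution is scaled by the *global* batch size.
    acc = 0.0
    for i in range(0, len(batch), micro):
        for x, y in batch[i:i + micro]:
            acc += grad(w, x, y) / len(batch)
    return w - lr * acc  # single optimizer step after accumulation

batch = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.0), (3.0, 4.0)]
assert abs(large_batch_step(1.0, batch) - accumulated_step(1.0, batch)) < 1e-9
```

In a real PyTorch loop the same idea appears as calling `loss.backward()` per micro-batch and `optimizer.step()` only every N micro-batches.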
Data I/O and Performance
- Solve data loading bottlenecks for large-scale mesh datasets.
- Optimize data pipelines for efficient I/O from cloud storage, including prefetching, caching, and format optimization.
- Work with heterogeneous data sources of varying formats and resolutions.
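Prefetching, mentioned above, hides storage latency by loading the next items while the consumer processes the current one. A minimal, framework-free sketch (names here are illustrative, not from any PhysicsX codebase) using a bounded queue and a background thread:

```python
import queue
import threading

def prefetch(iterable, buffer_size=4):
    # Background-thread prefetcher: a worker fills a bounded queue so
    # loading overlaps with consumption; the consumer just iterates.
    q = queue.Queue(maxsize=buffer_size)
    _END = object()  # sentinel marking the end of the stream

    def worker():
        for item in iterable:
            q.put(item)  # blocks when the buffer is full (backpressure)
        q.put(_END)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _END:
            return
        yield item

# Items come out in order, but loading ran ahead of consumption.
assert list(prefetch(range(10))) == list(range(10))
```

Production data loaders (e.g. PyTorch's `DataLoader` with worker processes) apply the same pattern with multiprocessing and pinned memory.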
Model Serving and Deployment
- Build serving infrastructure for pre-trained LPMs, supporting both zero-shot inference and uncertainty quantification (Monte Carlo Dropout).
- Design and implement model packaging pipelines for customer deployment; packaged models must run reliably in customer environments and support fine-tuning.
- Ensure reproducibility: any model checkpoint should be deployable with consistent behaviour.
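Monte Carlo Dropout, named above as the uncertainty method, keeps dropout active at inference and runs the forward pass several times; the spread of the predictions estimates model uncertainty. The sketch below uses a toy two-unit "model" (purely illustrative, no PyTorch, and not PhysicsX's implementation):

```python
import random
import statistics

def mc_dropout_predict(forward, x, samples=50, seed=0):
    # Run the stochastic forward pass `samples` times with dropout active;
    # the mean is the prediction, the stdev an uncertainty estimate.
    rng = random.Random(seed)
    preds = [forward(x, rng) for _ in range(samples)]
    return statistics.mean(preds), statistics.stdev(preds)

def toy_forward(x, rng, p=0.5):
    # Toy two-unit model: each unit is dropped with probability p and
    # surviving activations are scaled by 1/(1-p) (inverted dropout).
    w = [0.8, 1.2]
    keep = [0.0 if rng.random() < p else 1.0 / (1 - p) for _ in w]
    return sum(wi * ki for wi, ki in zip(w, keep)) * x

mean, std = mc_dropout_predict(toy_forward, 1.0)
```

In a real serving stack the same effect comes from leaving dropout layers in training mode (e.g. `model.train()` on just those layers in PyTorch) while everything else stays in eval mode.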
Platform and Tooling
- Improve developer experience for the Research team: fast iteration cycles, reliable CI/CD, and clear debugging tools.
- Collaborate with the broader Infrastructure team on shared patterns and standards.
What you bring to the table
- Ability to scope and effectively deliver projects, prioritising activity as needed.
- Problem-solving skills and the ability to analyse issues, identify causes, and recommend solutions quickly.
- Excellent collaboration and communication skills, especially in a research setting. You can translate "the model isn't converging" into infrastructure hypotheses and solutions, and can bridge technical abstractions with implementations.
- 5+ years of experience building and operating ML infrastructure at scale:
  - Deep expertise in distributed training: you've debugged NCCL hangs, optimized collective communication, and know when to use FSDP vs. DDP vs. pipeline parallelism.
  - Strong systems fundamentals: Linux, networking (including GPU interconnects such as NVLink and InfiniBand), storage I/O, profiling, and performance optimization.
  - Production experience with Kubernetes and SLURM for job orchestration on GPU clusters.
  - Proficiency in Python and ML frameworks (PyTorch strongly preferred).
  - Experience with cloud GPU infrastructure, ideally CoreWeave or similar GPU/HPC-focused clouds.
Ideally
- Experience with geometric deep learning or neural operators: architectures that operate on meshes, point clouds, or graphs
- Background in HPC for simulation engineering and familiarity with how CFD/FEA workflows generate and consume data
- Experience building model serving infrastructure with latency and throughput requirements
- Familiarity with experiment tracking tools (Weights & Biases, MLflow) and observability stacks (Prometheus, Grafana)
- Experience packaging models for deployment into customer environments (containers, model registries, versioning)
What we offer
- Equity options – share in our success and growth.
- 10% employer pension contribution – invest in your future.
- Free office lunches – great food to fuel your workdays.
- Flexible working – balance your work and life in a way that works for you.
- Hybrid setup – enjoy our new Shoreditch office while keeping remote flexibility.
- Enhanced parental leave – support for life’s biggest milestones.
- Private healthcare – comprehensive coverage.
- Personal development – access learning and training to help you grow.
- Work from anywhere – extend your remote setup to enjoy the sun or reconnect with loved ones.
Details
- Department
- Research
- Work Type
- Remote
- Locations
- London, GB
- Posted
- April 13, 2026
- Source
- greenhouse