About us
PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software. We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimisation and automation in design, manufacturing, and operations — empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.
The Role
PhysicsX is building a platform that enables Data Scientists and Simulation Engineers to build, train, and deploy Deep Physics Models. The platform handles massive volumes of complex simulation data and enables high-fidelity multi-physics simulation through AI inference.
We're looking for a Senior Software Engineer with a strong background in building data platforms. You won't just be moving data from A to B - you'll be architecting and building the distributed systems, services, and APIs that form the backbone of our platform. You'll bridge the gap between complex physical simulations and modern data infrastructure, implementing storage solutions for AI/ML pipelines and creating the analytical layers that allow our engineers to visualise and understand their results.
As a senior engineer, you'll shape technical direction by authoring Technical Decision Records, mentoring less experienced engineers, and driving the standards that keep our platform reliable, secure, and performant. This role is for builders who love coding robust software as much as designing efficient data architectures.
What You Will Do
- Design and architect scalable distributed systems, microservices, and APIs for high-dimensional simulation data across the machine learning lifecycle — from data processing and model training to inference services.
- Build and maintain systems that execute user-submitted code safely, robustly, and securely — including sandboxing, resource isolation, and access controls.
- Build tools that enable data scientists and engineers to create automated, robust pipelines for data ingestion and processing — powering active learning loops.
- Build interoperable no-code and pro-code tools for enterprise users with varying skill levels.
- Architect and integrate modern Data Warehouses, Data Lakes, and high-performance storage solutions to handle the unique demands of complex simulations, multimodal data, and deep learning workloads.
- Build internal tools that enable BI dashboards and scientific data visualisations, making large datasets intuitive and accessible.
- Define system architecture for new capabilities, making trade-offs across performance, reliability, cost, and developer experience.
- Own your work end-to-end — from architectural design through deployment and maintenance in a fast-paced, agile environment.
- Define reliability guarantees, quality of service metrics, and performance standards for the services you own. Proactively diagnose and resolve complex performance bottlenecks.
- Develop and enforce API schema standards and schema drift mitigation strategies. Ensure compliance with established patterns for security, data segregation, and access control.
- Drive best practices in CI/CD, automated testing, observability, and infrastructure-as-code. Build and maintain deployment pipelines, including zero-downtime and multi-service deployments.
- Author and review Technical Decision Records. Participate in Technology Radar reviews to evaluate and adopt new tools and approaches.
- Mentor junior and mid-level engineers, facilitate technical discussions, build consensus around architectural decisions, and translate research needs into well-defined technical requirements.
- Influence engineering roadmap and contribute to technical strategy beyond your immediate team.
What you bring to the table
- A passion for the craft — you're driven by engineering excellence and committed to fostering that culture across the team.
- Strong software engineering foundations — solid grasp of algorithms, data structures, and system design. You write clean, maintainable, testable code and have a strong command of Python plus Golang or Rust.
- Distributed systems and data engineering experience — proven track record building big data processing platforms in production, moving beyond scripting to robust engineering solutions (e.g., Databricks/Delta Lake, Snowflake, BigQuery). Hands-on experience architecting Data Warehouses and Data Lakes.
- API and service design maturity — experience designing multi-service systems with attention to schema governance, forward compatibility, and data access patterns.
- User code execution — experience building systems that run user code safely, robustly, and securely, with an understanding of sandboxing, isolation, and threat models.
- Multimodal data handling — experience working with storage engines or databases that handle diverse data types, including relational data, vector embeddings, and large binary blobs.
- Security awareness — familiarity with designing for security requirements and participating in security testing and compliance workflows.
- Reliability and observability mindset — experience providing QoS guarantees, implementing monitoring and alerting, and optimising observability in production.
- CI/CD and deployment expertise — hands-on experience building and optimising CI/CD pipelines, including multi-service and zero-downtime deployments across numerous customer environments.
- Diagnostic and optimisation skills — proactive approach to diagnosing performance bottlenecks in data processing and storage systems.
- Communication and leadership — excellent communication skills to understand data needs from research scientists and translate them into technical specifications. Experience mentoring engineers and facilitating technical decisions.
- Incremental mindset — you work in small steps toward larger goals, driving change through continuous improvement rather than massive redesigns. You can zoom in on details and zoom out to see the big picture.
Ideally
- Polyglot programming — deep expertise in Python and mastery of high-performance compiled languages like Golang, C++, or Rust.
- Big data scale — experience designing and maintaining big data systems, with a track record of running complex analytics on massive datasets in production.
- Domain knowledge — understanding of 3D geometry processing (meshes, point clouds) and data structures used in physics-based simulations.
- Advanced testing — experience with fuzzing, deterministic simulation testing, or fault injection in production systems.
- Kubernetes expertise — ability to leverage resources that extend the Kubernetes API (e.g., CRDs, Operators) and infrastructure configuration tools (Crossplane, ArgoCD, Helm charts).
- Infrastructure flexibility — understanding of what it takes to build software that runs in cloud, on-premises, and air-gapped environments.
What we offer
Build what actually matters
Help shape an AI-native engineering company at a formative stage, tackling problems that genuinely matter for industry and society. This is work with real-world impact - and something you can be proud to stand behind.
Learn alongside exceptional people
Work with a high-calibre, collaborative team of engineers, scientists, and operators who care deeply about doing great work, and about helping each other get better. We come from diverse backgrounds, but we share a commitment to operating at the highest level and addressing some of the most complex challenges out there. If you’re ambitious, thoughtful, and driven by impact, you’ll feel at home.
Influence over hierarchy
We operate with a flat structure: good ideas win - wherever they come from. Questioning assumptions and challenging the status quo isn’t just welcomed, it’s expected.
Sustainable pace, long-term ambition
Building meaningful technology is a marathon, not a sprint. We believe in balancing focused, ambitious work with a life beyond it. Our hybrid model blends time together in our Shoreditch office with work-from-home days, giving you the flexibility to work sustainably while staying connected in person.
And it doesn’t stop there …
🚀Equity options - share meaningfully in the company you’re helping to build.
🏦10% employer pension contribution - because investing in your future matters.
🍽️Free office lunches - to keep you energised and focused.
👶Enhanced parental leave - 3 months' full-pay paternity leave and 6 months' full-pay maternity leave, to provide extra flexibility during the moments that matter most.
🍼YellowNest nursery scheme - to help working parents manage childcare costs.
☀️ 25 days of Annual Leave (+ Public Holidays) - because taking time to rest matters.
🏥Private medical insurance - 100% employee cover, giving you complete peace of mind.
💪Wellhub Subscription - gain access to thousands of gyms, classes and wellness apps, supporting both physical and mental wellbeing.
👀Eye tests - because good work depends on good health.
📈Personal development - dedicated support for learning, development, and leveling up over time.
💛Employee Assistance Programme (EAP) - confidential wellbeing support, available whenever you need it.
🚲Bike2Work scheme and 🚆Season ticket loan - to make getting to work easier and greener.
🚗Octopus EV salary sacrifice - for a simpler, more sustainable way to drive electric.
🔎 Watch this space: we’re continuing to add to this list as we grow…
We value diversity and are committed to equal employment opportunity regardless of sex, race, religion, ethnicity, nationality, disability, age, sexual orientation or gender identity. We strongly encourage individuals from groups traditionally underrepresented in tech to apply. To help make a change, we sponsor bright women from disadvantaged backgrounds through their university degrees in science and mathematics. We collect diversity and inclusion data solely for the purpose of monitoring the effectiveness of our equal opportunities policies and ensuring compliance with UK employment and equality legislation. This information is confidential, used only in aggregate form, and will not influence the outcome of your application.