Principal Software Engineer - Data Platform

Rapid7
Belfast
1 year ago
Applications closed

As a Principal Engineer, you’ll get the opportunity to be a hands-on engineer, learning best-practice engineering processes and approaches whilst receiving ongoing development through coaching, mentoring and pairing with other engineers on your team. From problem-solving to challenging old ways of thinking, you will have the opportunity to unleash your full potential and creativity whilst working with cutting-edge technologies in a dynamic and collaborative team.

About the Team

The Data Platform team is responsible for building the ETL pipelines that fuel the Data Platform at Rapid7, moving product data into the platform so that product teams can develop new features, enhance existing ones and build shared experiences that create value for customers across the world.

We have a cutting-edge data stack including Kafka, Kubernetes, Spark and Iceberg.

About the Role

The Principal Engineer role is part of our Data Platform Engineering team. In this role you will be focused on helping our product teams move data into our Data Platform for in-product experiences and product analytics.

As a Principal Engineer on the Data Platform Engineering team, you will be responsible for architecting and scaling streaming and batch data pipelines, while also designing the CI/CD infrastructure that ensures efficient development and deployment of data services. You will play a key role in shaping the architecture of our data platform, collaborating with cross-functional teams to deliver highly available, performant, and scalable solutions for both real-time and large-scale data processing.

In this role, you will:

Architect and implement a highly scalable Data Platform that supports Change Data Capture (CDC) using Debezium and Kafka for data replication across different databases and services.
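To make the CDC responsibility concrete, here is a minimal sketch of consuming Debezium-style change events. The envelope fields ("op", "before", "after") and operation codes ("c" create, "r" snapshot read, "u" update, "d" delete) follow Debezium's documented event format; the "id" primary-key field and the in-memory replica are illustrative assumptions, standing in for a downstream table fed from Kafka.

```python
# Minimal sketch: applying Debezium-style change events to an in-memory
# replica keyed by primary key. The envelope shape ("op", "before",
# "after") follows Debezium; the "id" key field is an assumption.

def apply_change_event(replica: dict, event: dict) -> None:
    """Apply a single CDC event to a dict acting as the target table."""
    op = event["op"]
    if op in ("c", "r", "u"):       # create, snapshot read, update
        row = event["after"]
        replica[row["id"]] = row
    elif op == "d":                 # delete: "after" is null, key is in "before"
        replica.pop(event["before"]["id"], None)

replica = {}
events = [
    {"op": "c", "before": None, "after": {"id": 1, "name": "alice"}},
    {"op": "u", "before": {"id": 1, "name": "alice"}, "after": {"id": 1, "name": "bob"}},
    {"op": "d", "before": {"id": 1, "name": "bob"}, "after": None},
]
for e in events:
    apply_change_event(replica, e)
# After create, update and delete, the replica is empty again.
```

In production the events would arrive from Kafka topics populated by a Debezium connector; this sketch only shows the apply-side semantics.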

Design and maintain large-scale data lakes using Apache Iceberg, ensuring efficient data partitioning, versioning, and schema evolution to support real-time analytics and historical data access.
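The partitioning mentioned above relies on Iceberg's partition transforms. As a conceptual sketch (plain Python, not the Iceberg API): the `day` transform maps a timestamp to whole days since the Unix epoch, and `truncate(width)` maps an integer to the lower multiple of the width; Iceberg stores these transformed values in table metadata so queries can prune data files without scanning them.

```python
# Conceptual sketch of two Iceberg partition transforms (not the Iceberg
# API itself): `day` and `truncate`. Values shown match the transforms
# defined in the Iceberg table spec.
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def day_transform(ts: datetime) -> int:
    """Whole days since the Unix epoch, as Iceberg's `day` transform computes."""
    return (ts - EPOCH).days

def truncate_transform(value: int, width: int) -> int:
    """Truncate an integer to the lower multiple of `width`."""
    return value - (value % width)

# 2024-01-02 falls 19724 days after the epoch, so every row from that
# day lands in the same partition regardless of its time component.
print(day_transform(datetime(2024, 1, 2, 15, 30, tzinfo=timezone.utc)))  # 19724
print(truncate_transform(107, 10))  # 100
```

Because the transform, not a derived column, defines the partition, Iceberg can evolve the partition spec later without rewriting old data.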

Build and optimize CI/CD pipelines for the deployment and automation of data platform services using tools like Jenkins.

Lead the integration of Apache Spark for large-scale data processing and ensure that both batch and streaming workloads are handled efficiently.
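One pattern that spans both batch and streaming workloads is windowed aggregation. The sketch below is plain Python rather than Spark, but it mirrors what a Structured Streaming job expresses with `groupBy(window(...))`: events are bucketed into fixed-size tumbling windows and counted per window.

```python
# Conceptual sketch (plain Python, not Spark) of a tumbling-window count,
# the aggregation a Structured Streaming job would write as
# groupBy(window("timestamp", "60 seconds")).count().
from collections import defaultdict

def tumbling_window_counts(event_times: list, window_seconds: int) -> dict:
    """Count events per tumbling window, keyed by window start time."""
    counts = defaultdict(int)
    for ts in event_times:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

# Event timestamps in epoch seconds, bucketed into 60-second windows.
print(tumbling_window_counts([5, 30, 61, 125, 130], 60))
# {0: 2, 60: 1, 120: 2}
```

The same grouping logic runs unchanged over a bounded batch or an unbounded stream, which is why Spark can serve both modes with one API.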

Collaborate with our Platform Delivery teams to ensure high availability and performance of the data platform, implementing monitoring, disaster recovery, and automated testing frameworks.

Provide technical leadership and mentoring to junior engineers, promoting best practices in CDC architecture, distributed systems, and CI/CD automation.

Ensure that the platform adheres to data governance principles, including data lineage tracking, auditing, and compliance with regulatory requirements.
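Dataset-level lineage tracking can be sketched very simply: each pipeline step records which datasets it reads and writes, and upstream dependencies are recovered by walking the resulting graph. The step and dataset names below are illustrative, not taken from any Rapid7 system.

```python
# Minimal sketch of dataset-level lineage tracking. Each step records its
# inputs and outputs; `upstream` walks the graph to answer the audit
# question "what does this dataset ultimately depend on?".
def record_step(lineage: dict, step: str, reads: list, writes: list) -> None:
    """Record that `step` produced `writes` from `reads`."""
    for out in writes:
        lineage.setdefault(out, set()).update(reads)

def upstream(lineage: dict, dataset: str) -> set:
    """All datasets that `dataset` transitively depends on."""
    seen, stack = set(), [dataset]
    while stack:
        for parent in lineage.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

lineage = {}
record_step(lineage, "ingest", reads=["raw_events"], writes=["bronze_events"])
record_step(lineage, "clean", reads=["bronze_events"], writes=["silver_events"])
print(upstream(lineage, "silver_events"))  # raw_events and bronze_events
```

Real deployments would persist this graph and attach run metadata (timestamps, job versions) to each edge for auditing, but the traversal is the same.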

Stay informed about the latest advancements in CDC, data engineering, and infrastructure automation to guide future platform improvements.

Work closely with product and data science teams to understand business requirements and translate them into scalable and efficient data platform solutions.


The skills you’ll bring include:

10+ years of experience in software engineering with a focus on data platform engineering, data infrastructure, or distributed systems.

Expertise in building data pipelines using Apache Kafka or similar for ingesting, processing, and distributing high-throughput data.

Strong experience designing and managing CI/CD pipelines for data platform services using tools such as Jenkins.

Experience with Apache Iceberg (or a similar table format such as Delta Lake or Apache Hudi) for managing versioned, partitioned datasets in data lakes, plus an understanding of Apache Spark for both batch and streaming data processing, including optimization strategies for distributed data workloads.

Expertise in designing distributed systems and managing high-throughput, fault-tolerant, and low-latency data architectures.

Strong programming skills in Java, Scala, or Python.

Experience with cloud-based environments (AWS, GCP, Azure) and containerized infrastructure using Kubernetes and Docker.

The attitude and ability to thrive in a high-growth, evolving environment

Collaborative team player who can partner with others and drive toward solutions

Strong creative problem-solving skills

Solid communicator with excellent written and verbal communication skills, both within the team and cross-functionally

Passionate about delighting customers, putting customer needs at the forefront of all decision making

Excellent attention to detail


We know that the best ideas and solutions come from multi-dimensional teams. That’s because these teams reflect a variety of backgrounds and professional experiences. If you are excited about this role and feel your experience can make an impact, please don’t be shy - apply today.



If you are a software engineer, data scientist or analyst looking to move into AI or you are a UK undergraduate or postgraduate in computer science, maths, engineering or a related subject applying for AI roles, the maths can feel like the biggest barrier. Job descriptions say “strong maths” or “solid fundamentals” but rarely spell out what that means day to day. The good news is you do not need a full maths degree worth of theory to start applying. For most UK roles like Machine Learning Engineer, AI Engineer, Data Scientist, Applied Scientist, NLP Engineer or Computer Vision Engineer, the maths you actually use again & again is concentrated in a handful of topics: Linear algebra essentials Probability & statistics for uncertainty & evaluation Calculus essentials for gradients & backprop Optimisation basics for training & tuning A small amount of discrete maths for practical reasoning This guide turns vague requirements into a clear checklist, a 6-week learning plan & portfolio projects that prove you can translate maths into working code.