Position: Data Engineer (Python/Databricks)
Location: Remote
Salary: up to £80,000 + Benefits
Are you passionate about health tech and innovation? Do you want to be at the forefront of transforming clinical research with cutting-edge technology? If so, we have an exciting new role for you!
Join our dynamic and forward-thinking team as a Data Engineer and help us build secure, scalable microservices that operationalise clinical research applications. This is your chance to make a meaningful impact on healthcare while working with some of the most advanced technologies in data engineering.
About Us
We are a pioneering health tech company dedicated to revolutionising clinical research through innovative data solutions. Our cross-functional team, including Frontend Developers, QA Engineers, and DevOps Engineers, collaborates to create high-performance data pipelines and REST APIs that drive AI applications and external data integrations.
Your Role
As a Data Engineer, you will:
Build and Optimise Data Pipelines: Implement high-performance data pipelines for AI applications using Databricks.
Develop REST APIs: Create REST APIs required for seamless external data integrations.
Ensure Data Security: Apply protocols and standards to secure clinical data in motion and at rest.
Shape Data Workflows: Use your expertise with Databricks components such as Delta Lake, Unity Catalog, and MLflow to ensure our data workflows are efficient, secure, and reliable.
Key Responsibilities
Data Engineering with Databricks: Utilise Databricks to design and maintain scalable data infrastructure.
Integration with Azure Data Factory: Leverage Azure Data Factory for orchestrating and automating data movement and transformation.
Python Development: Write clean, efficient code in Python (3.x), using frameworks like FastAPI and Pydantic.
Database Management: Design and manage relational schemas and databases, with a strong focus on SQL and PostgreSQL.
CI/CD and Containerisation: Implement CI/CD pipelines and manage container technologies to support a robust development environment.
Data Modelling and ETL/ELT Processes: Develop and optimise data models, ETL/ELT processes, and data lakes to support data analytics and machine learning.
Requirements
Expertise in Databricks: Proficiency with Databricks components such as Delta Lake, Unity Catalog, and MLflow.
Azure Data Factory Knowledge: Experience with Azure Data Factory for data orchestration.
Clinical Data Security: Understanding of protocols and standards related to securing clinical data.
Python Proficiency: Strong skills in Python (3.x), FastAPI, Pydantic, and Pytest.
SQL and Relational Databases: Knowledge of SQL, relational schema design, and PostgreSQL.
CI/CD and Containers: Familiarity with CI/CD practices and container technologies.
Data Modelling and ETL/ELT: Experience with data modelling, ETL/ELT processes, and data lakes.
Why Join Us?
Innovative Environment: Be part of a team that is pushing the boundaries of health tech and clinical research.
Career Growth: Opportunities for professional development and career advancement.
Cutting-Edge Technology: Work with the latest tools and platforms in data engineering.
Impactful Work: Contribute to projects that have a real-world impact on healthcare and clinical research.
If you are a versatile Data Engineer with a passion for health tech and innovation, we would love to hear from you. This is a unique opportunity to shape the future of clinical research with your expertise in data engineering.
🔬 Shape the Future of Health Tech with Us! Apply Today! 🔬
To find out more about Computer Futures, please visit our website.
Computer Futures, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy | Registered office | 8 Bishopsgate, London, EC2N 4BQ, United Kingdom | Partnership Number | OC(phone number removed) England and Wales