Job Type:
Contract
Job Location:
Wimbledon, UK
Job Description:
This role requires senior experience in Data Engineering and in building automated data pipelines on IBM Datastage & DB2, AWS, and Databricks, from source systems to operational databases through to the curation layer, using modern cloud technologies. Experience of delivering complex pipelines will be significantly valuable in maintaining and delivering world-class data pipelines.
Knowledge in the following areas is essential:
Databricks:
Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases.
AWS Cloud:
Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
IBM Skills:
DB2, Datastage, Tivoli Workload Scheduler, Urban Code.
Programming Languages:
Proficiency in Python and SQL.
Data Warehousing & ETL:
Experience with modern ETL frameworks and data warehousing techniques.
DevOps & CI/CD:
Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
Familiarity with big data technologies such as Apache Spark, Hadoop, or similar.
ETL/ELT tools and creating common data sets across on-prem (IBM Datastage ETL) and cloud data stores.
Leadership & Strategy:
Lead Data Engineering team(s) in designing, developing, and maintaining highly scalable and performant data infrastructures.
Customer Data Platform Development:
Architect and manage our data platforms using IBM (legacy platform) & Databricks on AWS technologies (e.g., S3, Lambda, Glacier, Glue, EventBridge, RDS) to support real-time and batch data processing needs.
Data Governance & Best Practices:
Implement best practices for data governance, security, and data quality across our data platform. Ensure data is well-documented, accessible, and meets compliance standards.
Pipeline Automation & Optimisation:
Drive the automation of data pipelines and workflows to improve efficiency and reliability.
Team Management:
Mentor and grow a team of data engineers, ensuring alignment with business goals, delivery timelines, and technical standards.
Cross Company Collaboration:
Work closely with business stakeholders at all levels, including data scientists, finance analysts, MI, and cross-functional teams, to ensure seamless data access and integration with various tools and systems.
Cloud Management:
Lead efforts to integrate and scale cloud data services on AWS, optimising costs and ensuring the resilience of the platform.
Performance Monitoring:
Establish monitoring and alerting solutions to maintain the high performance and availability of data pipelines and systems, with no impact to downstream consumers.