JOB OBJECTIVE / SUMMARY
To design, build, and maintain robust, scalable, and high-performance ETL / ELT data pipelines for
reporting, business intelligence, and machine learning initiatives. The role is central to ensuring the
quality, lineage, and governance of all critical data assets.
DUTIES & RESPONSIBILITIES
i. Build and optimize data pipelines using tools like Airflow / Prefect to ingest data from core
banking, payment, and third-party sources (see the illustrative sketch after this list).
ii. Design and implement dimensional and denormalized data models within the Data
Warehouse (e.g., Postgres / Oracle / BigQuery).
iii. Utilize streaming technologies like Kafka and transformation tools like dbt to process data in
real-time or near real-time.
iv. Implement data quality checks and maintain data lineage documentation for governance.
v. Leverage Python and SQL extensively for scripting, data manipulation, and pipeline
development.
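
The duties above reference Airflow, data quality checks, and heavy use of Python. The sketch below is a minimal, illustrative example of how those pieces could fit together in a single DAG; the DAG ID, task names, schedule, and placeholder logic are assumptions made for illustration, not requirements taken from this posting.

# Illustrative sketch only (Airflow 2.x): a hypothetical DAG that ingests a
# batch from a source system and gates downstream use on data quality checks.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_transactions(**context):
    # Placeholder: pull a batch from a core banking / payment source and
    # land it in a staging table in the warehouse.
    print("Ingesting batch into staging...")


def run_quality_checks(**context):
    # Placeholder: basic checks (row counts, nulls, duplicate keys) before
    # the data is promoted to reporting models.
    print("Running data quality checks...")


with DAG(
    dag_id="transactions_pipeline",             # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",                          # Airflow 2.4+ keyword
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest = PythonOperator(
        task_id="ingest_transactions",
        python_callable=ingest_transactions,
    )
    quality = PythonOperator(
        task_id="run_quality_checks",
        python_callable=run_quality_checks,
    )

    ingest >> quality  # quality checks run only after ingestion succeeds
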
KEY PERFORMANCE INDICATORS
1. Data Pipeline Reliability
   Measure: Pipeline SLA adherence (uptime) and percentage of automated tests (e.g., dbt tests).
   Target: >99.9% uptime; 90% of critical data models tested.
2. Data Freshness & Delivery
   Measure: Mean latency for critical reports / data sets.
   Target: <1 hour for batch, <5 seconds for streaming data.
3. Efficiency
   Measure: Data Warehouse query run time (p95).
   Target: Reduced by 20% QoQ.
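
As a rough illustration of how the reliability and efficiency measures above might be computed, the short Python sketch below derives SLA adherence and a p95 query runtime from hypothetical log samples; the data and figures are invented placeholders, not numbers from this posting.

# Illustrative sketch only: computing two of the KPI measures from
# hypothetical monitoring data.
from statistics import quantiles

# Hypothetical pipeline run history: True = the run met its SLA window.
run_outcomes = [True] * 999 + [False]

# Hypothetical warehouse query runtimes in seconds.
query_runtimes_s = [0.8, 1.2, 2.5, 0.9, 3.1, 1.7, 0.6, 2.2, 4.0, 1.1]

sla_adherence = 100 * sum(run_outcomes) / len(run_outcomes)

# p95 runtime: the value below which roughly 95% of queries complete
# (the inclusive estimator keeps the result within the observed range).
p95_runtime = quantiles(query_runtimes_s, n=100, method="inclusive")[94]

print(f"Pipeline SLA adherence: {sla_adherence:.1f}% (target: >99.9%)")
print(f"Query runtime p95: {p95_runtime:.2f}s (target: -20% QoQ)")
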
EDUCATION AND EXPERIENCE
environments is highly desirable.
KNOWLEDGE
Quality
SKILLS / COMPETENCIES
Postgres / Oracle / BigQuery, dbt, APIs, Git, CI / CD
Salary: Open to discussion
Data Engineer • Lagos, Lagos, NG