Role: Analytics Engineer
Reports to: Analytics Engineering Senior Manager
Location: Nairobi HQ
Job Summary
Our business is growing rapidly. In this context, we are looking for an enthusiastic, experienced, and highly technical Analytics Engineer to join our Data Analytics department. The successful candidate will take a key role in shaping and establishing our data engineering foundations, helping define best practices, pipeline architecture, and technical standards for the organization.
You will work with some of the best tools and technologies available in the market—including Google Cloud Platform, Datastream, and dbt—enabling you to build high-quality, scalable, and trusted data solutions. Building on this strong technical foundation, you will also have the unique opportunity to define and own our initial expansion into AI, developing the company’s first generation of internal AI tools, agents, and intelligent workflows.
Your primary responsibility will be designing and deploying robust data ingestion pipelines, managing Data Warehouse administration, and creating the processes that transform raw information into clean, organized, and secure data for our analysts. As you stabilize these core systems, you will be expected to explore and implement AI capabilities, taking ownership of the data lifecycle from raw source through to production-ready analytics and automated AI solutions.
Strong experience with advanced SQL, Python (for data engineering and automation), modern Data Warehouse best practices, and ingestion/transformation tools is essential. Experience with AI development is a definite plus.
We are looking for proactive problem-solvers who are ready to roll up their sleeves and, through their technical excellence and dedication, ensure the integrity and quality of the data that drives our impact. If you want to join an ambitious and creative team of smart individuals, help build our data infrastructure from the ground up, and make a meaningful difference, then read on and apply for this exciting opportunity!
Key Responsibilities
- Build & Deploy Pipelines: Design, create, and deploy robust data ingestion pipelines to continuously feed the Data Warehouse with raw data from diverse sources using cloud-hosted ingestion tools.
- Data Transformation: Create and maintain efficient processes to transform raw data into clean, organized, and analyst-ready datasets using dbt, SQL, and Google Cloud tools (Dataflow, Datastream).
- Data Quality & Security: Act as the guardian of the Analytics data, taking full responsibility for data quality, consistency, and the implementation of strict security protocols.
- Governance Implementation: Define and implement data governance rules to ensure data integrity and compliance across the organization.
- Warehouse Administration: Manage the administration of the Data Warehouse, ensuring optimal performance, organization, and accessibility.
- Tooling & Infrastructure: Implement, develop, and maintain the necessary Analytics Engineering tools and infrastructure to support the wider data team.
- Architecture Support: Actively assist in defining and evolving the data architecture to ensure it remains scalable and efficient as the business grows.
- AI Development & Automation: Work on the company’s initial AI initiatives by developing custom agents, tools, and intelligent workflows that leverage our data foundation to automate complex business processes.
Requirements
Knowledge, Skills, and Experience
- Experience: At least 3 years of proven experience working in a Data Engineering or Back-end Engineering role.
- Core Languages: Advanced proficiency in SQL and strong coding skills in Python.
- Data Warehousing: A deep understanding of modern Data Warehouse technologies, architectural patterns, and industry best practices.
- Data Operations: Expertise with the latest tools and processes for data ingestion, transformation, and management (ETL/ELT).
The following technical knowledge would be a plus:
- Cloud Stack: Hands-on experience with Google Cloud solutions, specifically Cloud Storage, BigQuery, Datastream, and Dataflow.
- Modern Transformation: Practical experience with dbt (data build tool).
- AI Engineering Concepts: Familiarity with modern AI patterns, specifically Vector Databases for retrieval, managing context (RAG), and equipping LLMs with tools to perform actions and interact with external APIs.
- Big Data: Experience working with non-relational databases or Big Data technologies.
- Data Streaming: Familiarity with data streaming analytics and real-time data processing.
- Version Control: Proficiency with Git.
- Polyglot: Knowledge of other coding languages beyond Python and SQL.
Non-Technical & Soft Skills
- Precision: Rigorous attention to detail, with a high standard for data quality and accuracy.
- Autonomy: The ability to work independently and proactively; you anticipate problems before they happen.
- Drive: You are a self-starter and target-oriented, capable of managing your own roadmap to meet delivery goals.
- Collaboration: A dedicated team player and good communicator, able to bridge the gap between technical complexity and business needs.
What we offer
- Be part of an international, dynamic, and driven team that sets its aspirations high and works hard to achieve them
- Opportunities to learn and grow together with us
- Competitive compensation package
- Health benefits
We are an equal opportunity employer and value diversity in our workplace.
All qualified applicants will receive consideration for this position without regard to age, gender, disability, religion, or any other status protected under the Employment Act (Kenya).
All personal information provided will be handled in accordance with the Kenya Data Protection Act, 2019 and used solely for recruitment purposes.

