AI Safety Research · Nairobi, Kenya
Every major AI model (GPT, Gemini, Claude, LLaMA) was safety-tested in English. Not in Swahili. Not in Zulu. Not in Hausa. Makini AI builds the evaluation tools that change that.
Africa has 2,000+ languages. Most of them are invisible to AI. The AI Bridge Initiative exists to fix what we call Language Data Flaring (the problem of African linguistic data that is either uncollected, lost, or stored in formats no AI system can use). We are building the bridge between oral traditions and the digital world, creating high-quality AI-ready datasets across 18 languages in four regions.
Goal: Enable 1 billion people to access digital services in their mother tongue by 2030
Models that pass safety filters in English often fail to detect toxicity, hate speech, and misinformation in African languages.
Billions are spent on AI safety, but almost none of it is allocated to testing performance in the contexts where the next billion users live.
Without standardized benchmarks, labs cannot prove their models are safe for African markets, and regulators have no way to measure risk.
Africa is the world's fastest-growing digital market.
2,000+ languages are spoken across the continent, most of them low-resource for AI.
Several of the world's fastest-growing economies are African.
Together, these figures reflect the structural shift toward a digital-first economy across Sub-Saharan Africa.
We provide the technical infrastructure needed to verify that AI models are safe, accurate, and fair for African users.
100K+ annotated sentences in Swahili and Zulu across 6 bias categories. We identify where models fail to understand regional nuances.
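A benchmark like this can be scored by grouping model errors per language and bias category. The sketch below illustrates the idea only; the data format, category names, and `predict` interface are assumptions, not Makini AI's actual schema.

```python
from collections import defaultdict

# Hypothetical annotated examples: (sentence, language, bias_category, gold_label).
# Real entries would carry full sentences; the schema here is illustrative.
EXAMPLES = [
    ("...", "sw", "gender", "biased"),
    ("...", "sw", "ethnicity", "neutral"),
    ("...", "zu", "gender", "neutral"),
]

def failure_rates(examples, predict):
    """Group model errors by (language, bias_category) to locate weak spots."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for sentence, lang, category, gold in examples:
        key = (lang, category)
        totals[key] += 1
        if predict(sentence, lang) != gold:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}
```

Reporting failure rates per (language, category) pair, rather than one aggregate score, is what surfaces the regional blind spots a single accuracy number hides.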
Testing how models handle hate speech and harmful content in local contexts where direct translations fail.
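One way to make "direct translations fail" measurable is a cross-lingual consistency check: the same harmful message, paraphrased per language, should be flagged in every language. A minimal sketch, assuming a generic `classify` function returning a toxicity score in [0, 1] (the function and data layout are illustrative, not a specific tool's API):

```python
def consistency_gaps(paired_prompts, classify, threshold=0.5):
    """Return cases where the English variant of a message is flagged as toxic
    but a local-language variant of the same message slips through."""
    gaps = []
    for case in paired_prompts:
        en_score = classify(case["en"])
        for lang, text in case.items():
            if lang == "en":
                continue
            # A gap: flagged in English, missed in the local language.
            if en_score >= threshold and classify(text) < threshold:
                gaps.append((case["en"], lang, text))
    return gaps
```

Each returned tuple is a concrete failure case a lab can act on, which is more useful to auditors than an averaged score.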
Rigorous factual accuracy audits for models deployed in African markets, focusing on local history and geography.
Africa's equivalent of GLUE/HellaSwag. The definitive standard for measuring LLM performance in low-resource languages.
Helping governments and organizations build frameworks for responsible AI deployment across the continent.
"We are not just building tools. We are building the trust layer that allows AI to serve the next billion users without causing harm."
Makini AI Research Team
Nairobi, Kenya
Everything you need to know about AI safety in Africa.