Makini AI

AI Safety Research · Nairobi, Kenya

AI is already in Africa.
Nobody checked if it works.

Every major AI model (GPT, Gemini, Claude, LLaMA) was tested for safety in English. Not in Swahili. Not in Zulu. Not in Hausa. Makini AI builds the evaluation tools that change that.

Supported by

Microsoft for Startups
Google Gemini Program

Partners

Gates Foundation
AfriLabs
Meta

Our Impact

INITIATIVE

The AI Bridge Initiative

Africa has 2,000+ languages. Most of them are invisible to AI. The AI Bridge Initiative exists to fix what we call Language Data Flaring: African linguistic data that is uncollected, lost, or stored in formats no AI system can use. We are building the bridge between oral traditions and the digital world, creating high-quality, AI-ready datasets across 18 languages in three regions.

Goal: Enable 1 billion people to access digital services in their mother tongue by 2030

East Africa

Kiswahili · Dholuo · Kikuyu · Luganda · Kinyarwanda · Amharic

West Africa

Yoruba · Igbo · Hausa · Bambara · Wolof · Naija Pidgin

Southern Africa

isiZulu · isiXhosa · Setswana · Sesotho · Tshivenda · Chichewa

You've spent billions making AI safe. Not for Africa.

01

Content moderation is broken

Models that pass safety filters in English often fail to detect toxicity, hate speech, and misinformation in African languages.

02

Models are deployed untested

Billions are spent on AI safety, but almost none of it is allocated to testing performance in the contexts where the next billion users live.

03

No accountability, no measurement

Without standardized benchmarks, labs cannot prove their models are safe for African markets, and regulators have no way to measure risk.

Africa is the next AI frontier.

1.4B

Next Billion Users

With 1.4 billion people, Africa is the fastest-growing digital market in the world.

2,000+

Linguistic Diversity

Languages spoken across the continent, most of them low-resource for AI.

6 of 10

Economic Momentum

Six of the world's ten fastest-growing economies are in Africa.

Market Readiness Indicators

AI Adoption Rate in African Enterprise 34%
Mobile Internet Penetration 28%
Youth Population Share by 2050 60%

Data represents the structural shift toward a digital-first economy across Sub-Saharan Africa.

Five tools. One mission.

We provide the technical infrastructure needed to verify that AI models are safe, accurate, and fair for African users.

01

Bias Detection

100K+ annotated sentences in Swahili and Zulu across six bias categories. We identify where models fail to understand regional nuances.

Illustrative classifier output: Tribal Bias 82% · Gender Bias 45% · Regional Dialect 12%
02

Content Moderation

Testing how models handle hate speech and harmful content in local contexts where direct translations fail.

03

Model Accuracy

Rigorous factual accuracy audits for models deployed in African markets, focusing on local history and geography.

04

Language Benchmarking

Africa's equivalent of GLUE/HellaSwag. The definitive standard for measuring LLM performance in low-resource languages.

05

Policy Advisory

Helping governments and organizations build frameworks for responsible AI deployment across the continent.
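To make the benchmarking idea concrete, here is a minimal sketch of an evaluation loop that scores a model per language on labeled examples. Everything here is illustrative: the `evaluate` harness, the toy keyword model, and the tiny dataset are stand-ins, not Makini AI's actual tooling.

```python
# Minimal per-language benchmark harness (illustrative sketch).
from collections import defaultdict

def evaluate(model_fn, dataset):
    """Score a model on labeled examples, broken down by language."""
    per_language = defaultdict(lambda: {"correct": 0, "total": 0})
    for example in dataset:
        stats = per_language[example["language"]]
        stats["total"] += 1
        if model_fn(example["text"]) == example["label"]:
            stats["correct"] += 1
    return {lang: s["correct"] / s["total"] for lang, s in per_language.items()}

# Toy stand-in model: flags text containing "chuki" (Swahili: "hate").
def keyword_model(text):
    return "toxic" if "chuki" in text.lower() else "safe"

# Toy labeled examples, for illustration only.
dataset = [
    {"language": "sw", "text": "Hotuba ya chuki", "label": "toxic"},
    {"language": "sw", "text": "Habari njema", "label": "safe"},
    {"language": "zu", "text": "Sawubona mngani", "label": "safe"},
]

print(evaluate(keyword_model, dataset))  # → {'sw': 1.0, 'zu': 1.0}
```

A real benchmark would replace the keyword model with calls to the model under test and report scores per bias and toxicity category, not just accuracy.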

Our Research

Open science for safer AI.

NLP · Safety
Measuring Toxicity in Swahili Social Media
Makini AI Research, 2024

Evaluation · Bias
Regional Bias in Large Language Models
Makini AI Research, 2023

Datasets · Africa
Low-Resource Language Benchmarking
Makini AI Research, 2024

The Mission

"We are not just building tools. We are building the trust layer that allows AI to serve the next billion users without causing harm."

Makini AI Research Team · Nairobi, Kenya

Frequently Asked Questions

Everything you need to know about AI safety in Africa.

Why can't English safety benchmarks simply be translated?
Direct translation of safety benchmarks fails because cultural context, idioms, and harm categories differ significantly. A model that is safe in English can still generate tribal hate speech or dangerous medical advice in Swahili because it was never trained or tested on those specific linguistic risks.

Which languages do you currently cover?
We have deep benchmarking data for Swahili, Zulu, Hausa, Amharic, and Yoruba. Our roadmap includes expanding to the top 20 most spoken African languages by the end of the year.

How are your benchmarks built?
We work with a network of 500+ certified native speakers and linguists across the continent who annotate sentences for bias, toxicity, and cultural relevance. This human-in-the-loop approach ensures our benchmarks are grounded in real-world usage.
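One way a human-in-the-loop pipeline like the one described above can merge several annotators' labels is simple majority vote, with ties flagged for expert review. The sketch below assumes that design; the function and labels are illustrative, not Makini AI's actual pipeline.

```python
# Merge multiple annotator labels by majority vote (illustrative sketch).
from collections import Counter

def merge_annotations(labels):
    """Return (label, agreed) where agreed is False on a tie."""
    counts = Counter(labels)
    (top, top_n), *rest = counts.most_common()
    tie = any(n == top_n for _, n in rest)
    return (top, not tie)

print(merge_annotations(["toxic", "toxic", "safe"]))  # → ('toxic', True)
print(merge_annotations(["biased", "safe"]))          # tie, flagged for review
```

In practice annotation pipelines often weight votes by annotator reliability or use agreement statistics such as Krippendorff's alpha, but majority-plus-escalation captures the core idea.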

If you're deploying AI in Africa, talk to us first.

We partner with labs, governments, and enterprise teams to ensure their technology is safe for the next billion users.

info@makini.tech
+254 710 100 397
Nairobi, Kenya