“AI Ethics provides a comprehensive set of considerations for responsible AI that combines safety, security, human concerns, and environmental concerns,” the Rwanda Ethical AI Webinar panelists stated. GIZ FAIR Forward and C4IR hosted a webinar on ethical AI, bringing together AI system developers, deployers, and policy-makers to discuss what ethical AI is and how it can help the Rwandan AI ecosystem build responsible and trustworthy AI systems. The panel reminded the audience not to exploit AI users and to stay mindful of users' needs while doing no harm.
As different countries approach AI ethics in their own unique ways, the Rwanda Utilities Regulatory Authority (RURA) is finalizing the Rwanda AI Ethical Guidelines to guide AI developers in Rwanda on AI-related risks and on the principles and requirements for developing ethical AI. GIZ, via its FAIR Forward initiative, together with the Center for the Fourth Industrial Revolution (C4IR) and Saidot, has supported RURA through a process that includes an ongoing AI Ethics Applied Programme and Consultation comprising AI ethics training, individual coaching, and peer-learning exchanges. Recently, the team hosted a public webinar, ‘Ethical AI in Rwanda,’ bringing together AI system developers, deployers, and policy-makers from Rwanda.
The webinar tackled important issues such as the safety, security, and human and environmental impacts of AI systems.
Fostering responsible and trustworthy AI: Jacquelene Mwangi (Harvard Law School)
In her keynote speech on AI ethics in Africa, Jacquelene Mwangi emphasized the importance of kindness, trust, and sustainability in AI business models. She reminded the audience to consider the needs of AI users and avoid exploiting them, while also prioritizing the prevention of harm. She further highlighted the basic ethical principles of respect for human autonomy, prevention of harm, fairness, transparency, and accountability as key considerations in the design, development, and deployment of AI.
Jacquelene noted that AI has the potential to bring significant benefits, such as improved efficiency in sectors like health, education, finance, and agriculture. However, it can also pose risks, particularly for marginalized groups. She stressed that the potential for AI to cause harm or do good is not inherent to the technology itself, but rather results from the interaction between AI ethics, institutions, and technologies, which together shape the actions of economic agents pursuing their goals.
To maximize the benefits and minimize the risks of AI, Jacquelene recommended context-specific AI principles that are institutionalized and enforceable as obligations, rather than mere soft law. She also praised Rwanda for its efforts to develop AI ethical guidelines and offered the following suggestions for their development:
- Ensuring that the guidelines are tailored to the local context and address power dynamics within society.
- Examining data credits and transparency beyond data protection laws.
- Conducting impact assessments before introducing new technologies and considering all possible replication principles.
- Reforming technology regulations to reflect the “do no harm” principle.
- Providing retraining opportunities for workers rather than replacing their jobs with AI.
The conversation continued with a roundtable discussion. During the panel, Dr. Nzisabiara Briand from Babylon Health and Segond Fidens Iragena from the Rwandan company Pick and Pack shared their experiences implementing ethical AI in their own AI projects. Fidens focused on the importance of proper data collection for AI business productivity, illustrating the value of transparency with the example of his business, which handles agricultural data with scalability in mind. He also recommended collaboration when deploying AI in business, so that companies can be guided on AI principles and best practices.
Dr. Briand from Babylon Health described how AI is used at Babylon Health, from insurance check-ups to consultation and diagnostics. The institution has been providing primary care to patients in Rwanda since 2016, and in 2020 it introduced an AI triage tool, the “symptom checker,” to improve patients' digital journey. During the discussion, an audience member asked about the large digital literacy divide and its implications for ethical AI; Dr. Briand responded by assuring the public that Babylon places transparency and inclusion at the core of its processes. He also recommended that AI products strive to be as inclusive as possible.
Fidens added that low-tech solutions such as USSD short-codes and AI voice bots could help in the continuous journey of making AI more inclusive. Although it remains a challenge, he advised AI companies to take transparency and inclusion into account.
Finally, the panelists offered the following recommendations for the development of Rwanda’s AI ethical guidelines:
- Engage all stakeholders in the AI field to ensure a balance between the advancement of AI and the needs of its users.
- Emphasize transparency and the principle of doing no harm.
- Customize the guidelines to the specific context of Rwanda.
You can re-watch the webinar through a video recording here.