Nigerian AI Advocate Inioluwa Raji Named on Time Magazine’s Inaugural Top 100 List

by Ikem Emmanuel

Inioluwa Raji of Nigeria has earned a spot on Time Magazine’s inaugural list of the 100 most influential figures in artificial intelligence (AI). This group represents a comprehensive overview of the key figures and pivotal entities propelling AI’s advancement. They encompass rivals and regulators, scientists and artists, advocates and executives – individuals who both compete and collaborate, whose insights, aspirations, and imperfections will sculpt the trajectory of this increasingly influential technology.


Raji, a fellow with the Mozilla Foundation, a global nonprofit dedicated to internet safety, was categorized as a ‘thinker’ for her curiosity and her dedication to aiding AI companies. While interning at Clarifai, a machine-learning company, the 27-year-old Raji made a striking observation. As she helped the startup train a content-moderation model designed to filter out explicit imagery, she noticed that the model was disproportionately flagging non-explicit content featuring people of color. This led Raji to assert that the program was “altering reality to be less diverse than it truly is.”

Time Magazine notes that Raji’s discovery prompted a shift in her focus from the startup world to AI research. Her emphasis now lies in ensuring that AI models don’t inadvertently cause harm, and her goal is to thoroughly scrutinize and challenge products before they are deployed at scale. In an interview with Time, she remarked, “As a default, many of the models we developed contained data where the explicit imagery meant to represent explicit content was more diverse than the stock images meant to represent safe content.”

Raji expressed frustration that her calls for more diverse data were met with resistance; many argued that obtaining any data at all was challenging enough, let alone adding more complexity. She emphasized, “It became evident to me that this is an issue people in the field are not fully aware of to the extent that it deserves.”


Raji believes that developers should take responsibility for providing transparent evaluations of their products and the potential harms they may pose. She stressed that companies currently present their products in a polished, favorable light, with no obligation to safeguard users’ privacy or to communicate candidly about how well the system actually works for them.

Since making her discovery, Raji has collaborated with Google’s Ethical AI team to implement a more comprehensive internal evaluation process for AI systems. She has also partnered with the Algorithmic Justice League on strategies for external audits, notably through the ‘Gender Shades’ project, which audited commercial facial analysis systems from Microsoft and Face++.


