The opinions expressed by Entrepreneur contributors are their own.
From generating images of the Pope for fun to algorithms that classify job applications and ease the burden on recruiters, artificial intelligence programs have taken the public consciousness and the business world by storm. However, it is important not to overlook the potentially deep ethical issues associated with them.
These breakthrough tools generate content from existing data and other materials, but if those sources are even partially the product of racial or gender bias, for example, AI may reproduce that bias. If we want to live in a world where diversity, equity, and inclusion (DEI) are at the forefront of emerging technologies, we need to understand how AI systems create content and how their outputs impact society.
So whether you're a developer, an AI startup entrepreneur, or just a concerned citizen like me, consider integrating these principles into your AI apps and programs to produce more ethical and fair outputs.
Related: What does it take to build truly ethical AI? These 3 tips can help
1. Create a user-centered design
User-centered design ensures that the people who will actually use a program are considered throughout its development. This may mean features such as voice interaction and screen-reader support to assist people with visual impairments, or speech recognition models trained to handle a wider range of voices, including female voices and accents from around the world.
Simply put, developers need to pay close attention to who their AI systems are serving and try to think beyond the group of engineers who created them. This is especially important for entrepreneurs who want to expand their product globally.
2. Build a diverse team of judges and decision makers
The team behind an AI app or program matters not only for its creation, but also for review and decision-making. A 2023 report published by New York University's AI Now Institute describes the lack of diversity at multiple levels of AI development, including the notable statistics that at least 80% of AI professors are men and that fewer than 20% of AI researchers at the world's top technology companies are women. Without proper checks, balances, and representation during development, you run the serious risk of feeding your AI programs outdated or biased data that perpetuates unfair stereotypes about certain groups.
3. Audit datasets and create accountability structures
If outdated data perpetuates bias, that is not necessarily anyone's direct fault. Failing to audit that data regularly, however, is. With DEI in mind, developers who want AI to produce the highest-quality output must carefully evaluate and analyze the information they are using. They should ask: How old is it? Where did it come from? What does it contain? Is it ethical and accurate today? Perhaps most importantly, does the dataset position AI to build a positive future for DEI, rather than a negative one inherited from the past?
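As a minimal sketch of what the first pass of such an audit might look like, the snippet below counts how each group is represented in a labeled dataset and flags any group that falls below a chosen share. The field name `voice_gender` and the threshold are hypothetical placeholders, not anything prescribed by a specific auditing standard.

```python
from collections import Counter

def audit_representation(records, field, min_share=0.10):
    """Report each group's share of the dataset and flag groups
    whose share falls below `min_share` (a hypothetical threshold)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {group: n / total for group, n in counts.items()}
    flagged = [g for g, share in report.items() if share < min_share]
    return report, flagged

# Toy example: a speech dataset skewed 80/20 by a hypothetical
# 'voice_gender' label.
data = [{"voice_gender": "male"}] * 8 + [{"voice_gender": "female"}] * 2
report, flagged = audit_representation(data, "voice_gender", min_share=0.3)
print(report)   # {'male': 0.8, 'female': 0.2}
print(flagged)  # ['female']
```

A real audit would go further, checking data provenance and age as well as raw counts, but even a simple report like this makes skew visible before it reaches a model.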
Related: These entrepreneurs are embracing bias in artificial intelligence
4. Collect and organize diverse data
After auditing the information your AI program is using, if you notice inconsistencies or biases, try to gather better data. Collecting it can take months or even years, but it's well worth the effort.
To facilitate that process, if you're an entrepreneur running an AI startup with the resources for research and development, you can create projects in which team members produce new data that represents diverse voices, faces, and demographics. The result is better source material for apps and programs we can all benefit from, and a brighter future in which different individuals are seen as multidimensional rather than one-sided or simplistic.
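While new data is being collected, one stopgap some teams use is rebalancing what they already have. The sketch below oversamples underrepresented groups so each value of a field appears equally often; the `accent` field is a hypothetical example, and duplicating records is no substitute for gathering genuinely new, diverse material as the paragraph above recommends.

```python
import random

def rebalance(records, field, seed=0):
    """Oversample underrepresented groups (a simple illustrative
    strategy) so every value of `field` appears equally often."""
    random.seed(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[field], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Pad smaller groups up to the size of the largest one.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Toy example: a speech dataset with a hypothetical 'accent' label.
data = [{"accent": "US"}] * 6 + [{"accent": "Scottish"}] * 2
balanced = rebalance(data, "accent")
# Each accent now appears 6 times (12 records total).
```

Oversampling only repeats existing voices, so it should be treated as a temporary measure while real collection projects are underway.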
Related: Artificial intelligence can be racist, sexist, and creepy. Here are five ways you can combat this within your company
5. Participate in AI ethics training on bias and inclusivity
As a DEI consultant and the proud creator of the LinkedIn course Navigating AI Through an Intersectional DEI Lens, I have learned the power of centering DEI in AI development and the positive ripple effects it can have.
If you or your team are having trouble turning these principles into concrete to-do lists for developers, reviewers, and others, I recommend hosting corresponding ethics training, such as an online course that helps you troubleshoot issues in real time. Sometimes all you need is a trainer who can walk you through the process step by step, troubleshooting each issue one at a time, to help you create lasting results: more inclusive, diverse, and ethical AI data and programs.
Related: 6 traits you need to succeed in an AI-powered workplace
Developers, entrepreneurs, and others interested in reducing bias in AI should harness their collective energy to build diverse teams of reviewers who can check and audit data, and to design programs that are more inclusive and accessible. The result is a landscape that represents a broader audience, and better content for everyone.