To shine a well-deserved and long-overdue spotlight on women academics and others focused on AI, TechCrunch is launching an interview series highlighting notable women who have contributed to the AI revolution.
Sarah Bitamazile is Chief Policy Officer at boutique consulting firm Lumiera and writes the Lumiera Loop, a newsletter focused on AI literacy and responsible AI adoption.
She previously worked as a policy advisor in Sweden, focusing on gender equality, foreign policy, and security and defense policy.
Briefly, how did you get started working in AI? What attracted you to the field?
AI found me! AI is having an ever-increasing impact in the fields I am deeply involved in. Understanding the value of AI and its challenges has become essential to provide sound advice to senior decision makers.
First, in the field of defense and security, AI is being used both in research and development and in real warfare. Second, in the field of arts and culture, creators were among the first groups to recognize both the added value and the challenges of AI. They helped bring to light the copyright issues that have surfaced, including ongoing litigation such as several daily newspapers suing OpenAI.
You know something is having a major impact when leaders from very different backgrounds and fields are increasingly asking their advisors, “Can you explain this to me? Everyone’s talking about it.”
What work in AI are you most proud of?
We recently worked with a client whose attempt to integrate AI into their R&D workstream was unsuccessful. Lumiera developed an AI integration strategy with a roadmap aligned to the client’s specific needs and challenges. The combination of a curated AI project portfolio, a structured change management process, and leadership that recognized the value of interdisciplinary thinking made this project a huge success.
How do you address the challenges of a male-dominated tech industry, and even a male-dominated AI industry?
It’s about getting clear on the why. I’m active in the AI industry because it has a deeper purpose and problems to solve. Lumiera’s mission is to provide comprehensive guidance to leaders, empowering them to make confident and responsible decisions in the age of technology. This sense of purpose remains the same no matter what field you go into. Male-dominated or not, the AI industry is huge and increasingly complex. No one can see the whole picture. We need more perspectives to learn from each other. The challenges that exist are big, and we need all of us to work together.
What advice do you have for women looking to enter the AI field?
Working with AI is like learning a new language or a new skill. AI has great potential to solve challenges across many fields. What problem do you want to solve? Find out how AI can be a solution and focus on solving that problem. Keep learning and interacting with people who inspire you.
What are the most pressing issues facing AI as it evolves?
The rapid evolution of AI is a problem in itself, and I believe asking this question often and regularly is key to navigating the world of AI with integrity. We do this every week in the Lumiera newsletter.
Here are some of the most popular ones right now:
- AI Hardware and Geopolitics: As governments around the world become more AI-savvy and begin to make strategic and geopolitical moves, public sector investment in AI hardware (GPUs) is likely to increase. So far, we have seen movement in countries like the UK, Japan, UAE, and Saudi Arabia. This is an area to watch.
- AI benchmarks: As our reliance on AI increases, it is essential to understand how to measure and compare AI performance. Choosing the right model for a specific use case requires careful consideration; the model that best suits your needs is not necessarily the one at the top of the leaderboard. And because models are changing rapidly, benchmark accuracy will fluctuate.
- Balancing automation and human oversight: Believe it or not, over-automation is real. Decision-making requires human judgement, intuition and situational understanding that cannot be replicated through automation.
- Data Quality and Governance: Where is the good data? Data flows in and out of organizations every second. If that data is not properly managed, organizations won’t be able to reap the benefits of AI. And in the long run, this could be detrimental. Your data strategy is your AI strategy. Data system architecture, governance, and ownership need to be part of the conversation.
What issues should AI users be aware of?
- Algorithms and data are not perfect: As a user, it is important to be critical and not blindly trust the output, especially if you are using off-the-shelf technology. Technology and tools are new and evolving, so keep this in mind and apply common sense.
- Energy consumption: The amount of computation required to train large AI models, combined with the energy needed to run and cool the necessary hardware infrastructure, consumes a lot of power. Gartner predicts that AI could consume up to 3.5% of the world’s electricity by 2030.
- Educate yourself and use a variety of sources of information: AI literacy is key! To make effective use of AI in your life and work, you need to be able to make informed decisions about its use. AI is meant to help you make decisions, not make them for you.
- Perspective density: To understand what solutions can be created with AI and to execute them across the entire AI development lifecycle, you need to involve people who understand the problem domain very well.
- The same can be said about ethics: Ethics is not something that can be added “on top” of an AI product after it’s already built. It needs to start at the research stage, with ethical considerations incorporated early in the building process. This is done by conducting social and ethical impact assessments, mitigating bias, and promoting accountability and transparency.
When building AI, it’s essential to recognize skill limitations within your organization. Gaps are opportunities for growth. They allow you to prioritize areas where you need to seek external expertise and develop robust accountability mechanisms. Factors such as current skill sets, team capabilities, and available funding all need to be evaluated. These factors will impact your AI roadmap, among other things.
How can investors promote responsible AI?
First and foremost, as an investor, you want to be sure that your investment is solid and will last for the long term. Investing in responsible AI protects your financial returns and mitigates risks related to trust, regulation, and privacy concerns.
Investors can promote responsible AI by looking for indicators of responsible AI leadership and use. A clear AI strategy, dedicated responsible AI resources, publicly disclosed responsible AI policies, strong governance practices, and the integration of human feedback are all factors to consider. These indicators should be part of a sound due diligence process: more science, less subjective decision-making. Divesting from unethical AI practices is another way to promote responsible AI solutions.