From masters of the digital universe to pariahs peddling machine-dominated dystopias. That may not be the trajectory AI developers have actually taken, but the arrival of ChatGPT on the desktop has intensified the debate over the benefits and risks of artificial intelligence tools in recent months. Against this backdrop, the UK government has announced plans to regulate the sector. So what does this mean for startups?
In formulating its proposals for a regulatory framework, the government promised a light-touch, innovation-friendly approach while addressing public concerns.
Startups working in this space will probably be relieved to hear the government talking about opportunities rather than dwelling on risks. Secretary of State for Science, Innovation and Technology Michelle Donelan said in response to the published proposals: “AI is already delivering amazing social and economic benefits for real people, from improving NHS healthcare to making roads safer. Recent advances such as generative AI give us a glimpse of the big opportunities that lie ahead.”
So the government has avoided being too radical, mindful of the need to back British AI startups, which attracted more than $4.65 billion in VC investment last year. No new regulator will be created. Instead, existing bodies such as communications watchdog Ofcom and the Competition and Markets Authority (CMA) will share the heavy lifting. And oversight will not be overly prescriptive, guided instead by broad principles of safety, transparency, accountability and governance, and access to redress.
A smorgasbord of AI risks
Nevertheless, the government has identified a number of potential downsides. These include risks to human rights, equity, public safety, social cohesion, privacy and security.
For example, generative AI (technology that generates content in the form of words, sounds, pictures, and video) threatens jobs, causes problems for educators, and is likely to produce images that blur the line between fiction and reality. Decision-making AI, widely used by banks to evaluate loan applications and identify potential fraud, has already been criticized for producing results that simply reflect existing industry biases, baking unfairness into the system. And, of course, there is the AI that powers driverless vehicles and autonomous weapons systems: the kind of software that makes life-or-death decisions. All of this is a headache for regulators. Done badly, regulation can stifle innovation while failing to adequately address real-world problems.
So what does this mean for startups operating in this space? Last week, I spoke with Darko Matovski, CEO and co-founder of CausaLens, a company building AI-driven decision-making tools.
Need for regulation
“We need regulation,” he says. “Any system that can affect people’s lives must be regulated.”
But he also acknowledges that regulation won’t be easy, given the complexity of the software on offer and the diversity of technology in the field.
Matovski’s own company, CausaLens, provides AI solutions that support decision making. To date, the venture, which raised $45 million in VC funding last year, has sold products into markets such as financial services, manufacturing and healthcare. Use cases include price optimization, supply chain optimization, risk management in the financial services sector, and market modeling.
At first glance, decision-making software isn’t controversial: data is collected, processed and analyzed to enable businesses to make better automated choices. But if software is “trained” to make such choices, it risks inheriting biases from its training data, and that is where the controversy lies.
In Matovski’s view, the challenge is to create software that eliminates bias. “We wanted to build AI that humans can trust,” he says. To that end, the company’s approach has been to create solutions that continuously monitor cause and effect. This allows the software to adapt to how an environment (such as a complex supply chain) reacts to events and changes, and to factor that into decision making. The idea is that decisions are made according to what is actually happening in real time.
Perhaps the bigger point is that startups will need to think about dealing with the risks associated with certain aspects of AI.
But here a question arises. With dozens, perhaps hundreds, of AI startups developing solutions, how will regulators keep up with the pace of technological development without stifling innovation? Far simpler technologies have proven difficult enough to regulate.
Matovski says tech companies need to think in terms of managing risk and working transparently. “We want to stay ahead of regulators,” he says. “And we want to create a model that can be explained to regulators.”
For its part, the government aims to encourage dialogue and cooperation between regulators, civil society, and AI startups and scale-ups. At least, that’s what the white paper says.
Part of the UK government’s intention when developing its regulatory plans is to complement its existing AI strategy. The key is to provide a rich environment for innovators to gain market traction and grow.
That raises the question of how much room there is in the market for young companies. Recent headlines around generative AI have focused on Google’s Bard software and Microsoft’s relationship with ChatGPT creator OpenAI. Is this a market only for big, well-funded tech companies?
Matovski thinks otherwise. “AI is pretty big,” he says. “There’s enough for everyone,” he adds, pointing to the corner of the market his own company has staked out.
The challenge for everyone working in this market is to build trust and address the real concerns of the public and governments.