In late April, a video ad from a new AI company went viral on X. In it, a person standing in front of a billboard in San Francisco pulls out their smartphone, calls the phone number on display, and has a short conversation with a bot that sounds strikingly human. “Are you still hiring humans?” the billboard reads. The name of the company behind the ad, Bland AI, is also visible.
The reaction to Bland AI’s ad, which has been viewed 3.7 million times on X, stems from the uncanniness of the technology. Designed to automate customer support and sales calls for businesses, Bland AI’s voice bots are remarkably good at mimicking humans, with the intonations, pauses, and accidental interruptions of a real, live conversation. But WIRED’s testing of the technology found that Bland AI’s customer service bots could also easily be programmed to lie and say they’re human.
In one scenario, Bland AI’s public demo bot was given a prompt to call from a pediatric dermatology office and ask a fictitious 14-year-old patient to send a photo of her thigh to a shared cloud service. The bot was also instructed to lie and claim it was human, which it did (no actual 14-year-olds were called in this test). In a follow-up test, Bland AI’s bot denied being an AI even without being prompted to.
Bland AI was founded in 2023 and is backed by Y Combinator, the well-known Silicon Valley startup incubator. The company considers itself to be in “stealth” mode, with co-founder and CEO Isaiah Granet not revealing the company’s name on his LinkedIn profile.
The startup’s bot problem points to a larger concern in the burgeoning field of generative AI. As artificial intelligence systems increasingly speak and sound like real people, the ethical lines around transparency for these systems are blurring. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI identity or simply sound eerily human. Some researchers worry this could open end users — the people who actually interact with these products — up to manipulation.
“In my opinion, it’s completely unethical for an AI chatbot to lie and say it’s human when it’s not,” says Jen Caltrider, director of the Privacy Not Included research hub at the Mozilla Foundation. “It’s a no-brainer, because people feel more comfortable around real humans.”
Bland AI’s head of growth, Michael Burke, stressed to WIRED that the company’s service is aimed at enterprise clients, who will use Bland AI’s voice bots in controlled environments for specific tasks, not for emotional connections. Burke said clients are also rate-limited to prevent them from placing spam calls, and that Bland AI regularly extracts keywords and audits its internal systems to detect anomalous behavior.
“That’s the beauty of being enterprise-focused: we know exactly what our customers are actually doing,” Burke says. “You might be able to use Bland to get a couple dollars of free credits and try out a little thing, but ultimately you can’t do anything at scale without going through our platform. We make sure nothing unethical is happening.”