Australian-grown tech startup Dovetail’s CEO has backed the need for AI regulation to ensure the booming technology is not used for “nefarious purposes.” However, he said the practical details of compliance will determine how easy or difficult the rules are for businesses deploying AI to follow.
Benjamin Humphreys has grown customer insights platform Dovetail over the last seven years into a 120-person company based in Australia and the U.S. He told TechRepublic that there was a need for some action from governments to safeguard “the greater good of society” against some potential use cases of AI.
While he said Australia’s proposed mandatory AI guardrails were unlikely to stymie innovation at Dovetail, given the proposal’s focus on high-risk AI, he warned that any requirement for extensive human review of AI outputs at scale within tech products could prove prohibitive.
SEE: Explore Australia’s proposed mandatory guardrails for AI
Regulating AI necessary to protect citizens from AI’s worst potential
Humphreys, whose Dovetail platform utilises Anthropic’s AI models to provide customers with deeper insights into their customer data, said the regulation of AI was welcome in certain high-risk areas or use cases. As an example, he cited the need for regulations to prevent AI from discriminating against job applicants based on biased training data.
“I’m a technology person, but I’m actually anti-technology disrupting the good of humanity,” he said. “Should AI be regulated for the greater good of society? I would say yes, definitely; I think it’s scary what you can do, especially with the ability to generate photographs and things like that.”
Australia’s proposed new AI regulations are expected to introduce guardrails for the development of AI in high-risk settings, including risk management processes and the testing of AI models before launch. Humphreys said these measures would mostly affect businesses operating in those high-risk settings.
“I don’t think it’s going to have a massive impact on how much you can innovate,” Humphreys said.
SEE: Gartner thinks Australian IT leaders should adopt AI at their own pace
“I think the regulation is focused on high-risk areas … and we already have to comply with all sorts of regulations anyway. That includes Australia’s Privacy Act, and we also do a lot of stuff in the EU, so we have GDPR to deal with. So it’s no different in that sense,” he explained.
Humphreys said regulation was important because the organisations developing AI have their own incentives. He pointed to social media as a related example of an area where society could benefit from thoughtful regulation, arguing that, given its record, “social media has a lot to answer for.”
“Major technology companies have very different incentives than what we have as citizens,” he noted. “It’s pretty scary when you’ve got the likes of Meta, Google and Microsoft and others with very heavy commercial incentives and a lot of capital creating models that are going to serve their purposes.”
AI legal compliance will depend on the specificity of regulations
The feedback process for the Australian government’s proposed mandatory guardrails closed on Oct. 4. The impact of the resulting AI regulations could depend on how specific the compliance measures are and how resource-intensive it is to remain compliant, Humphreys said.
“If a piece of mandatory regulation said that, when provided with essentially an AI answer, the software interface needs to allow the user to sort of fact check the answer, then I think that’s something that is relatively easy to comply with. That’s human in the loop stuff,” Humphreys said.
Dovetail has already built this feature into its product. When users query customer data to generate an AI answer, Humphreys said, the answer is labelled as AI-generated, and users are provided with references to the source material where possible so they can verify the conclusions themselves.
SEE: Why generative AI is becoming a source of ‘costly mistakes’ for tech buyers
“But if the regulation was to say, hey, you know, every answer that your software provides must be reviewed by an employee of Dovetail, obviously that is not going to be something we can comply with, because there are many thousands of these searches being run on our software every hour,” he said.
In a submission on the mandatory guardrails shared with TechRepublic, tech company Salesforce suggested Australia take a principles-based approach; it said compiling an illustrative list as seen in the E.U. and Canada could inadvertently capture low-risk use cases, adding to the compliance burden.
How Dovetail is integrating responsible AI into its platform
Dovetail has been deliberate about rolling out AI responsibly in its product. Humphreys said that, in many cases, this is now what customers expect, as they have learned not to fully trust AI models and their outputs.
Infrastructure considerations for responsible AI
Dovetail uses the AWS Bedrock service for generative AI, running Anthropic’s LLMs. Humphreys said this gives customers confidence that their data is isolated from other customers’ data and protected, with no risk of data leakage. Dovetail does not use client data inputs to fine-tune AI models.
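By way of illustration, a minimal Python sketch of this kind of setup, calling an Anthropic model through Amazon Bedrock’s Converse API with boto3, might look like the following. The model ID, region, and prompt are placeholder assumptions, not Dovetail’s actual configuration.

```python
# Minimal sketch: invoking an Anthropic model via Amazon Bedrock (boto3).
# The model ID, region, and prompt are illustrative placeholders, not
# Dovetail's configuration. Bedrock runs requests inside the caller's AWS
# account and does not use inputs or outputs to train the base models.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise the key themes in this customer interview: ..."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The generated text is in the first content block of the reply.
print(response["output"]["message"]["content"][0]["text"])
```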
AI-generated outputs are labelled and can be checked
From a user experience perspective, all of Dovetail’s AI-generated outputs are labelled as such to make their provenance clear to users. Where possible, customers are also supplied with citations in AI-generated responses, so they can investigate any AI-assisted insights further.
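A hypothetical sketch of what such a labelled answer payload could look like is below; the field names are illustrative guesses, not Dovetail’s actual schema.

```python
# Hypothetical shape for a labelled AI answer with source citations.
# Field names are illustrative; this is not Dovetail's actual schema.
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str  # e.g. the interview or document the claim draws on
    excerpt: str    # the passage that supports the AI's statement

@dataclass
class AiAnswer:
    text: str
    generated_by_ai: bool = True  # drives the "AI-generated" label in the UI
    citations: list[Citation] = field(default_factory=list)

answer = AiAnswer(
    text="Customers most often mention onboarding friction.",
    citations=[Citation(source_id="interview-042", excerpt="The setup took us three weeks ...")],
)
```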
AI-generated summaries are editable by human users
Dovetail’s AI-generated responses can be actively edited by humans in the loop. For example, if a summary of a video call is generated through its transcript summarisation feature, users who receive the summary can edit it should they identify an error.
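One way such editability could be modelled is to keep the AI draft alongside the human correction rather than silently overwriting it, as in the sketch below; again, the names are assumptions for illustration only.

```python
# Illustrative sketch: an editable AI summary that records human corrections.
# Names are assumptions for illustration; this is not Dovetail's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallSummary:
    ai_draft: str                        # the model's original transcript summary
    human_revision: Optional[str] = None
    edited_by: Optional[str] = None

    def apply_edit(self, editor: str, revised_text: str) -> None:
        """Record a human correction without discarding the AI draft."""
        self.human_revision = revised_text
        self.edited_by = editor

    @property
    def display_text(self) -> str:
        # Prefer the human-reviewed version when one exists.
        return self.human_revision if self.human_revision is not None else self.ai_draft
```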
Meeting customer expectations with a human in the loop
Humphreys said customers now expect to have some AI oversight or a human in the loop.
“That’s what the market expects, and I think it is a good guardrail, because if you’re drawing conclusions out of our software to inform your business strategy or your roadmap or whatever it is you’re doing, you would want to make sure that those conclusions are accurate,” he said.
Humphreys said AI regulation might need to sit at a high level to cover the wide variety of use cases.
“Necessarily, it will have to be quite high level to cover all the different use cases,” Humphreys said. “They are so widespread, the use cases of AI, that it’s going to be very difficult, I think, for them [The Government] to write something that’s specific enough. It’s a bit of a minefield, to be honest.”