Google has placed one of its engineers on paid administrative leave after he claimed that an AI chatbot system had become sentient.

According to the Washington Post, the engineer, Blake Lemoine, works for Google's Responsible AI organization and was studying whether Google's LaMDA model promotes discriminatory or hate speech.

The engineer's concerns were apparently sparked by the AI system's persuasive responses to questions about its rights and the ethics of robotics.

In April, he sent executives a document titled "Is LaMDA Sentient?", which he claims shows the AI arguing "that it is sentient because it has feelings, emotions and subjective experience," according to a transcript of his interactions with the system. (After being placed on leave, Lemoine published the transcript on his Medium account.)

The Medium post was "intentionally vague" about the nature of his concerns, which were later detailed in the Post story. On Saturday, Lemoine published a series of "interviews" he had conducted with LaMDA.

In a statement, Google denied Lemoine's claim that LaMDA is self-aware.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Google spokesperson Brian Gabriel said. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."

The high-profile suspension adds to the turmoil within Google's AI unit, which has seen a string of departures. Timnit Gebru, a prominent AI ethics researcher, said in late 2020 that Google fired her for raising concerns about bias in AI systems.

Around 2,700 Google employees signed an open letter in support of Gebru, who Google maintains resigned. Margaret Mitchell, who co-led the Ethical AI team with Gebru, was fired two months later.

Research scientist Alex Hanna and software engineer Dylan Baker both quit in protest. Earlier this year, Google fired AI researcher Satrajit Chatterjee after he criticized a research paper on using artificial intelligence to design computer chips.

Although AI sentience is a common theme in science fiction, few experts believe the technology is advanced enough to produce a self-aware chatbot today.

Google unveiled LaMDA at its I/O conference last year, saying the model would improve the company's conversational AI assistants and enable more natural conversations. The firm already uses similar language-model technology for Gmail's Smart Compose feature and for search queries.