Google employees highlighted concerns about the company's competitive edge in AI during a recent all-hands meeting, citing the rapid popularity of ChatGPT, which was introduced by OpenAI, a San Francisco-based startup backed by Microsoft.

Some are questioning where Google stands in the race to develop intelligent chatbots that can respond to consumer queries. After all, Google's core business is web search, and the company has long positioned itself as an AI pioneer. Its conversation technology is LaMDA, which stands for Language Model for Dialogue Applications.

"Is this a missed opportunity for Google, considering we've had Lamda for a while?" read one of the most popular questions raised at last week's meeting.

Sundar Pichai, CEO of Alphabet, and Jeff Dean, long-time head of Google's AI division, addressed the question by noting that the company has similar capabilities, but that the cost of something going wrong would be greater because people must be able to trust the answers they get from Google.

Google's search engine is used by billions of people worldwide, while ChatGPT passed 1 million users only in early December.

"This really strikes a need that people seem to have but it's also important to realize these models have certain types of issues," Dean said.

The company has "a lot" planned in the space for 2023, according to Pichai, who added that "this is an area where we need to be bold and responsible so we have to balance that."

In a tweet over the weekend, OpenAI CEO Sam Altman acknowledged ChatGPT's shortcomings and cautioned users against relying too heavily on its responses.

"It's a mistake to be relying on it for anything important right now," Altman wrote. "It's a preview of progress; we have lots of work to do on robustness and truthfulness."

Google, with a market capitalization of more than $1.2 trillion, does not have that luxury. Dean told staff that the company's technology has mostly remained in-house so far, stressing that Google carries far more "reputational risk" and is moving "more conservatively than a small startup."

"We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we've been using them to date," Dean said. "But, it's super important we get this right."

He continued, saying "you can imagine for search-like applications, the factuality issues are really important and for other applications, bias and toxicity and safety issues are also paramount."