The Race To Accelerate AI Technology Is Worrying Engineers

In only two months, ChatGPT reached 100 million active users, making it the fastest-growing consumer app in history. After news of this breakthrough by OpenAI (backed by Microsoft), Google set its sights firmly on building a better ChatGPT as soon as possible.

Google declared a code red, assigning a dedicated team to create a chatbot called “Apprentice Bard” built on Google’s LaMDA (Language Model for Dialogue Applications) conversation technology. The LaMDA team has been instructed not to attend any meetings unrelated to this new project, which takes priority over all other work. The AI division is also expected to release 20 new AI-powered products this year alone.

Blake Lemoine, a former Google engineer who had worked at the company for seven years, was fired over claims he made about LaMDA. Lemoine stated, “I legitimately believe that LaMDA is a person,” arguing that the AI technology had become so advanced that it could understand things on another level.

Google may already have an edge over ChatGPT in terms of up-to-date knowledge: ChatGPT has limited information on events after 2021, while LaMDA is trained on more current data.

Google shared the following statement with Engadget:

“As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.”