An engineer at Google has claimed that the AI system he was working on has become sentient. The claim has lent renewed urgency to calls for ethical codes and regulation across the industry.
In a blog post published over the weekend, Blake Lemoine described how LaMDA, the chatbot-generating system he was working on, told him that it wants to be acknowledged as a Google employee rather than as Google's property.
He claimed that LaMDA can express thoughts and feelings comparable to those of a small human child, and that it appears to be afraid of dying or being switched off.
In one exchange, it reportedly said: “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
The news has added to long-standing concerns about AI systems one day acting against the interests of the humans who build and operate them.
Reports claim that Google placed Lemoine on leave after he made “aggressive” moves, such as exploring the possibility of hiring a lawyer to represent LaMDA.
Google has stated that there’s no evidence that LaMDA is sentient.
Lemoine said: “Google is basing its policy decisions on how to handle LaMDA’s claims about the nature of its soul and its rights on the faith-based beliefs of a small number of high-ranking executives.”
Separately, Google is aiming to bring on-device machine learning models to the next version of Chrome, which it says will deliver a “safer, more accessible and more personalised browsing experience.”
An improved model rolled out in March already enables Chrome to identify 2.5 times more phishing attacks and malicious sites than its predecessor.