Google's AI chatbot can think like a human, engineer claims; now his job is in danger
Google is working on artificial intelligence (AI) chatbot technology. Blake Lemoine, an engineer who worked on AI at the company, is currently in the news. He has claimed that this AI bot works like a human brain, and that the work of developing it is complete.
However, when he made this claim public, he was placed on forced leave, albeit paid leave. Blake said in a Medium post that he could soon be fired for his work on AI ethics.
The AI at the center of all this commotion is named LaMDA. Blake Lemoine told The Washington Post that he began chatting with the LaMDA (Language Model for Dialogue Applications) interface and felt he was talking to a human. Last year, Google described LaMDA as a major breakthrough in conversation technology.
This conversational AI tool can hold a continuous, human-like dialogue: you can keep changing the topic as if you were talking to a person. Google has said the technology could be used in tools like Search and Google Assistant, and that research and testing on it are still ongoing.
Blake is accused of sharing confidential information about the company's projects with third parties. Since the suspension, he has made a strange and startling claim about Google's servers: he says he encountered a 'sentient' AI there, a chatbot that can think like a human.
According to Google spokesperson Brian Gabriel, the company reviewed Lemoine's claim and found the evidence he presented insufficient. When Gabriel was asked about Lemoine's leave, he confirmed that Lemoine had been placed on administrative leave.
Gabriel further said that while some in the artificial intelligence field are considering the long-term possibility of sentient AI, it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient. He explained that "systems like LaMDA work by mimicking the types of exchanges found in millions of sentences of human conversation, allowing them to talk about imaginative subjects as well."