Google Sidelines Engineer Who Claims Its AI Is Sentient


SAN FRANCISCO — Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another controversy over the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI Organization, said in an interview that he was placed on leave on Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a US senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitated conversational exchanges and could cover various topics but had no awareness. “Our team — including ethicists and technologists — reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims,” said Brian Gabriel, a spokesman for Google, in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which aren’t sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

For months, Mr. Lemoine had wrestled with managers, executives and human resources at Google over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had a consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is still a very long way from computing sentience.

Some AI researchers have long made optimistic claims that these technologies will soon reach consciousness, but many others are very quick to dismiss those claims. “If you were using these systems, you would never say things like that,” said Emaad Khwaja, a researcher at UC Berkeley and UC San Francisco who studies similar technologies.

In the hunt for the AI vanguard, Google’s research organization has been mired in scandal and controversy in recent years. The department’s scientists and other staff have regularly feuded over technology and staffing issues in episodes that often leaked to the public. Google in March fired a researcher who tried to publicly contradict the published work of two of his colleagues. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow over the group.

Mr. Lemoine, a military veteran who describes himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been seen by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems aren’t powerful enough to achieve real intelligence.

Google’s technology is what scientists call a neural network, a mathematical system that learns skills by analyzing large amounts of data. For example, by finding patterns in thousands of cat photos, it can learn to recognize a cat.
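As a rough illustration of that idea, here is a minimal sketch of such a training loop, written with the open-source PyTorch library rather than Google’s own systems; the images and labels below are random stand-ins for a real collection of labeled photos:

```python
# Minimal sketch: a small neural network that learns to separate "cat" from
# "not cat" images by finding patterns in labeled examples.
# This is illustrative only; the data here is random noise, not real photos.
import torch
import torch.nn as nn

# A tiny convolutional network: layers of learned filters, then a classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: "cat" vs. "not cat"
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# images: a batch of 64x64 RGB photos; labels: 1 for cat, 0 for not cat.
images = torch.randn(8, 3, 64, 64)   # random stand-in for real photos
labels = torch.randint(0, 2, (8,))   # random stand-in for real labels

for step in range(100):              # repeated exposure to the examples
    logits = model(images)
    loss = loss_fn(logits, labels)   # how wrong the current guesses are
    optimizer.zero_grad()
    loss.backward()                  # nudge the internal weights toward
    optimizer.step()                 # patterns that predict the labels
```

Repeated passes like this push the network’s internal weights toward whatever statistical patterns distinguish the labeled examples; nothing in the process involves understanding what a cat is.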

In recent years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks: they can summarize articles, answer questions, generate tweets and even write blog posts.
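In practice, applying such a model looks less like a conversation and more like calling a library. Here is a minimal sketch using the open-source Hugging Face transformers package with publicly available checkpoints; the model names are illustrative choices, not Google’s internal LaMDA tooling:

```python
# Minimal sketch of applying pretrained language models to two of the tasks
# described above, using public checkpoints (not Google's LaMDA).
from transformers import pipeline

# Summarization: condense a longer article into a few sentences.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = "Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient..."
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])

# Open-ended generation: continue a prompt, e.g. drafting a short post.
generator = pipeline("text-generation", model="gpt2")
print(generator("The debate over machine sentience", max_length=40)[0]["generated_text"])
```

Each pipeline downloads a pretrained model on first use and returns plain text, which is how these systems can be repurposed for summarization, question answering and writing with little task-specific programming.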

But they are extremely flawed. Sometimes they produce perfect prose. Sometimes they generate nonsense. The systems are very good at replicating patterns they’ve seen in the past, but they can’t reason like a human.


