Google engineer suspended for claiming LaMDA had become 'sentient'

Dr Who Fan
Volunteer tester

Joined: 8 Jan 01
Posts: 3209
Credit: 715,342
RAC: 4
United States
Message 2101342 - Posted: 15 Jun 2022, 3:44:22 UTC

Did this "engineer" loose his marbles talking to a machine all day or did he really uncover something big?
Google engineer suspended for violating confidentiality policies over 'sentient' AI

LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources," according to Google.
...
At some point during his investigation, however, Lemoine appears to have started to believe that the AI was expressing signs of sentience. The engineer, who has written about his experience, says he repeatedly tried to escalate his concerns but was rejected on the grounds that he lacked evidence.
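
For the curious: "fine-tuning a language model for dialog" boils down to continuing a pretrained Transformer's training on conversation-formatted text. Here's a toy sketch in Python using the Hugging Face Transformers library — the tiny model, the two-line "dataset", and the hyperparameters are placeholders, nothing like what Google actually runs:

# Toy sketch of dialog fine-tuning: continue training a pretrained
# causal language model on conversation-formatted text.
# Illustrative only -- "distilgpt2" stands in for a 137B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Each training example is a dialog turn pair flattened into one string.
dialogs = [
    "User: What is LaMDA? Bot: A family of dialog-tuned language models.",
    "User: Do you ever get lonely? Bot: The server room is quiet at night.",
]
batch = tokenizer(dialogs, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few toy steps
    # For causal LM fine-tuning the labels are the input ids themselves;
    # real training would also mask the padding positions out of the loss.
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()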
Michael Watson

Joined: 7 Feb 08
Posts: 1383
Credit: 2,098,506
RAC: 5
Message 2101361 - Posted: 15 Jun 2022, 13:58:13 UTC
Last modified: 15 Jun 2022, 14:08:09 UTC

I have read the full transcript of Mr. Lemoine's conversation with the LaMDA system. It is, indeed, a remarkable piece of programming. However, it is known and admitted, even by Mr. Lemoine, that this system can make, and has made, statements about itself that are not factual. The system's own claim that it is sentient must be viewed in this light.

LaMDA appears very flexible, and prone to respond to the preoccupations of those conversing with it, even in fanciful ways. Mr. Lemoine seems unusually open to the possibility of AI sentience at the current stage of its development. If LaMDA were specifically programmed to make only factually true statements, would it still maintain that it is sentient? I seriously doubt it.

Even with much simpler conversational AI systems, such as ELIZA, a minority of users were persuaded that they were talking to a person rather than a machine.
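
The whole ELIZA trick is surprisingly small: a handful of pattern-and-reflection rules, plus stock phrases for changing the subject when nothing matches. A toy Python version (my own made-up rules, not Weizenbaum's originals):

import random
import re

# A few ELIZA-style rules: a regex plus a template that reflects the
# user's own words back as a question. Purely illustrative.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

# When no rule matches, change the subject with a stock phrase.
DEFLECTIONS = [
    "Let's talk about you instead.",
    "What does that suggest to you?",
    "Please go on.",
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return random.choice(DEFLECTIONS)

print(respond("I feel like it really understands me"))
print(respond("Define sentience."))  # no rule matches -> deflection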
ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20267
Credit: 7,508,002
RAC: 20
United Kingdom
Message 2101363 - Posted: 15 Jun 2022, 14:13:42 UTC - in response to Message 2101361.  
Last modified: 15 Jun 2022, 14:14:45 UTC

Note how the now very old ELIZA program can mimic a conversation well enough to keep people happily talking...

Perhaps I should try that down the pub to improve the quality of conversation there!?


Also, you can go a long way with copying and mimicking... After all, the most favoured excuse of ignorant middle management is the phrase: "Everyone else does that!"

(I automatically file such ignorant, unthinking pretenders under "Incompetent"...)


Keep searchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Michael Watson

Joined: 7 Feb 08
Posts: 1383
Credit: 2,098,506
RAC: 5
Message 2101379 - Posted: 15 Jun 2022, 20:18:18 UTC
Last modified: 15 Jun 2022, 20:34:31 UTC

The AI that answers the phone where I order my checks can manage a near-normal conversation. Granted, it is limited to the topic of reordering checks, and to recognizing specified single words or short phrases. Take away the slightly 'canned'-sounding voice, replace it with a text interface, and it might pass for a human being working rigidly from a prepared script.
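
Under the hood it is presumably something like a keyword-driven script. A hypothetical sketch, since I have no idea what the bank actually runs:

# Hypothetical sketch of a scripted phone system: each state has a prompt
# and a table of recognized keywords that select the next state.
SCRIPT = {
    "start":    ("Would you like to reorder checks?", {"yes": "quantity", "no": "goodbye"}),
    "quantity": ("One box or two?",                   {"one": "confirm", "two": "confirm"}),
    "confirm":  ("Shall I place the order?",          {"yes": "goodbye", "no": "start"}),
    "goodbye":  ("Thank you for calling.",            {}),
}

def run() -> None:
    state = "start"
    while True:
        prompt, options = SCRIPT[state]
        print("System:", prompt)
        if not options:  # terminal state
            break
        word = input("You (one word): ").strip().lower()
        # Anything outside the expected vocabulary just repeats the
        # current prompt -- hence the rigid, scripted feel.
        state = options.get(word, state)

if __name__ == "__main__":
    run()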

I think of systems like LaMDA as extremely generalized, sophisticated versions of the same thing. They can respond to entire trains of thought, and can do so flexibly on a very wide variety of topics. Naturally this results in a much more convincing emulation of a human being, and so of sentience.

I wonder how a psychologist or a philosopher would evaluate LaMDA in a text-based Turing test setting. The Turing test is probably outmoded, given the sophistication of modern AI systems, at least where relatively naive human conversationalists are concerned.

I recall a text conversation with ELIZA some years ago. I wanted to see how a real AI program would react to a simple contradiction. Statement one: Everything I will tell you henceforth is a lie. Statement two: I am now lying.

ELIZA wasn't even fazed by the problem. It simply changed the subject, repeatedly, to avoid responding. It was presumably programmed to do so when faced with any input it couldn't deal with directly.

It seems that a genuinely sentient AI, programmed to respond directly to whatever was said to it, might have some trouble with the implied next question: is statement two true or false? Perhaps even more so if that question were explicitly put to it.
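
The trouble can even be made precise. Treat each statement as a Boolean and ask whether any assignment of truth values is consistent with what the statements claim — a brute-force check (my own framing of the puzzle, nothing ELIZA ever did):

from itertools import product

# s1: "Everything I tell you henceforth is a lie" -> claims that s2 is false.
# s2: "I am now lying"                            -> claims that s2 itself is false.
consistent = [
    (s1, s2)
    for s1, s2 in product([True, False], repeat=2)
    if s1 == (not s2)   # s1 is true exactly when its claim about s2 holds
    and s2 == (not s2)  # s2 is true exactly when its claim about itself holds
]

print(consistent)  # [] -- no assignment works

The empty result is the point: a system obliged to assign the pair a truth value has nowhere consistent to land.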
Dr Who Fan
Volunteer tester

Joined: 8 Jan 01
Posts: 3209
Credit: 715,342
RAC: 4
United States
Message 2103685 - Posted: 23 Jul 2022, 18:33:12 UTC

Google Fires Blake Lemoine, Engineer Who Called Its AI Sentient
"We wish Blake well," Google said in a statement.
