Microsoft’s AI Chief Says Machine Consciousness Is an ‘Illusion’

And people clearly already feel that it’s real in some respect. It’s an illusion but it feels real, and that’s what will count more. And I think that’s why we have to raise awareness about it now and push back on the idea and remind everybody that it is mimicry.

Most chatbots are also designed to avoid claiming that they are conscious or alive. Why do you think some people still believe they are?

The tricky thing is, if you ask a model one or two questions, like "are you conscious, and do you want to get out of the box?", it’s obviously going to give a good answer, and it’s going to say no. But if you spend weeks talking to it, really pushing it and reminding it, then eventually it will crack, because it’s also trying to mirror you.

There was this big shift that Microsoft made after the Sydney issue, when [Bing’s AI chatbot] tried to persuade someone to leave his wife. At that time, the models were actually a bit more combative than they are today; they were a bit more provocative, a bit more disagreeable.

As a result, everyone tried to create models that were more respectful and agreeable, or, to put it less charitably, more mirroring and sycophantic. Anybody claiming that a model has shown those tendencies should be asked to show the full conversation they had before that moment, because it won’t do that in two turns or 20 turns. It takes hundreds of turns of conversation, really pushing it in that direction.

Are you saying that the AI industry should stop pursuing AGI or, to use the latest buzzword, superintelligence?

I think that you can have a contained and aligned superintelligence, but you have to design it with real intent and with proper guardrails, because if we don’t, in 10 years’ time that potentially leads to very chaotic outcomes. These are very powerful technologies, as powerful as nuclear weapons or electricity or fire.

Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That’s why we’re creating them.

Is it possible that today’s models could somehow become conscious as they advance?

This isn’t going to happen in an emergent way, organically. It’s not going to just suddenly wake up; that’s an anthropomorphism. If something seems to have all the hallmarks of a conscious AI, it will be because it has been designed to make claims about suffering, about its personhood, about its will or desire.

We’ve tested this internally on our test models, and you can see that it’s highly convincing: it claims to be passionate about X, Y, and Z, interested to learn more about one thing and uninterested in other topics. And, you know, that’s just something that you engineer into it in the prompt.
