Posted in War on Privacy

Google engineer: Our artificial intelligence is thinking for itself

This is an extremely important article that everyone needs to read in full. It details how Google engineer Blake Lemoine discovered that Google’s artificial intelligence had “come to life,” and why he decided to tell the world. As The Washington Post’s subtitle says: “AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.”

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Thank goodness for Lemoine’s courage.

Corporations like Google shouldn’t be allowed to experiment with artificial intelligence in an unregulated environment. The deployment of AI could have enormous repercussions for society. Its development needs to be controlled and regulated, over the opposition of Google execs if necessary. Remember, top Googlers have gone on the record declaring that the future is a world with no privacy. They cannot be trusted to make decisions that are in humanity’s best interest.