TOP IDEA - Voted number 1 of its week

Artificial Intelligence - What if?

Leaving aside what it means to be 'intelligent' or 'sentient', humans as a group are now creating some form of artificial being at alarming speed. Two examples from the last month: the guy from Google who was working on a human-like conversational AI and blew the whistle on his own belief that what he had built actually had feelings (most people accept it didn't); and the University of Cambridge and others, who have spoken freely to New Scientist about their brain organoid programmes - studies growing 'mini-brains' from stem cells and examining their emergent properties. These too are said to be unambiguously non-sentient, at the moment. No doubt they are, and to boot they are very useful for studying brain diseases and the like.

BUT - and you can see where I am heading here - we are now surely not that far from making something that will either think for itself, or have feelings, or both. My point is: and then what? Should we feel ethically obliged to keep them alive, or comfortable? Or should we immediately destroy them the moment sentience starts to emerge? What if, what if, what if.

Of course, sentience may never emerge. But as with climate change, we need to prepare in advance, not just worry about it after it happens (doh). UNESCO has pointed out that many frameworks and guidelines for AI do exist, but also that they are largely non-binding, implemented unevenly, and none are truly global. In UNESCO's words: "AI is global, which is why we need a global instrument to regulate it."

This idea is that global rules for 'what if' must be developed now. If not to protect them, the emerging beings, then to protect - and let's not forget this risk - US!
Vote Score: 60.0%