On Artificial Intelligence
How should you think about the future of AI? This extract from an article published this week might help.
"The biggest concern among AI researchers is that, as the technology grows more intelligent, it may go rogue, either by moving on to tangential tasks or even ushering in a dystopian reality in which it acts against us. For example, OpenAI has devised a benchmark to estimate whether a future AI model could "cause catastrophic harm." When it crunched the numbers, it found about a 16.9% chance of such an outcome.
Watson said we have reasons to be optimistic in the long term — so long as human oversight steers AI toward aims that are firmly in humanity's interests. But that's a herculean task. Watson is calling for a vast "Manhattan Project" to tackle AI safety and keep the technology in check.
"Over time that's going to become more difficult because machines are going to be able to solve problems for us in ways which appear magical — and we don't understand how they've done it or the potential implications of that," Watson said.
To avoid the darkest AI future, we must also be mindful of scientists' behavior and the ethical quandaries they may stumble into. Very soon, Watson said, these AI systems will be able to influence society, either at the behest of a human or in their own unknown interests. Humanity may even build a system capable of suffering, and we cannot discount the possibility that we will inadvertently cause AI to suffer; some think that means it could hit back.
AGI, and by extension the singularity, is inevitable. So, for him, it doesn't make sense to dwell on the worst implications.
"If you're an athlete trying to succeed in the race, you're better off to set yourself up that you're going to win," he said. "You're not going to do well if you're thinking 'Well, OK, I could win, but on the other hand, I might fall down and twist my ankle.' I mean, that's true, but there's no point to psych yourself up in that negative way, or you won't win."