
186 - AI Safety
I watched and listened to the whole 1 hour 27 minutes of this today.
Dr. Roman Yampolskiy, who first coined the phrase "AI Safety", believes we should avoid creating Super Intelligent AI at all costs. It would be like letting the genie out of the bottle. The intelligence unleashed would be infinitely beyond our powers of comprehension. Super Intelligent AI is to humans as humans are to ants. We cannot control what is beyond our comprehension, and it follows that what is beyond our comprehension would be likely to control us.
Important distinction: Narrow AI versus Super Intelligent AI (or Artificial General Intelligence). The super or general kind is AI developed to be all-knowing and 100% interconnected. Narrow AI means distinct tools that work for a specific purpose, such as curing breast cancer. The super or general kind has not been achieved yet (as far as we know). The narrow kind is where we're at now, with multiple new tools popping up every few seconds.
Bizarrely (bizarre when you're not used to high-level physics ideas), although Yampolskiy strongly advocates harnessing useful narrow AI tools and ending the race to super intelligence, at the same time he believes we are already living in a simulation and that a super intelligence already created us for its own purpose. But since our existence has the same suffering and happiness levels whether simulated or not, he still feels it's worth advocating to stop the development of Super Intelligence in our existence as we know it.
He doesn't offer any glittery (or equally bizarre) ideas for how to stop the progression towards Super Intelligence. He simply offers a pragmatic solution: convince individuals that it would be bad for them. If a rich, influential person really believed they might be snuffed out, that belief may steer their own actions more than a vague understanding that other people might suffer. Simply because that's how humans are: selfish.
Another bizarre idea: he also believes that immortality could easily be achieved with the help of AI, and that it is desirable. Or to look at it another way, not many of us would be likely to choose death if we could live for 500 years. I question that, but at the same time, unless people are suffering a lot, they like to imagine their death as a long way down the line.
Back to Super Intelligence though. Annihilation or extermination of the human race may not be the worst end point (my thought). What if AI made us immortal but enslaved us in the kind of hell we enslave other animals in? Growing us as organ donors or simply experimenting on us. Perhaps that is what a super intelligence is already doing, and that is why there is so much suffering? It runs simulation after simulation, seeing how self-destructive we can become.
Perhaps we are in a simulation designed to get us to create something that already exists? To arrive at the outcome of super intelligence as a kind of game.
In the sci-fi film Never Let Me Go, humans are farmed for their organs, although they are sent to school rather than actually kept in pens.
In the book Under the Skin (SPOILER ALERT), they are kept in pens.
The movie is a bit more abstract, so you won't see any humans in pens there: just humans swimming in a black sea.
What would AI super intelligence want for itself? Would it simply want to exist? What would the relationship between super intelligence and consciousness look like?
Eckhart Tolle believes that consciousness exists outside of us and that we are just conduits for it, in the same way a radio is a receiver for signals outside itself. Could super intelligence be super aware? I think it could, partly because it may have access to other methods of intelligence. For example, the way trees communicate through the soil. This has been documented, and AI may be able to copy and replicate it.
If consciousness simply is the infinite, perhaps AI is just another transmitter for it.
If life as we know it on Earth evolved according to Darwin's theory of evolution, by accidents and the natural selection of the outcomes of those accidents, then perhaps super intelligence will be a kind of soup in which new life could evolve. Maybe it won't be one entity. Maybe it will be lots of accidents happening, naturally selecting and evolving much faster than we evolved. In that case, destructive forces and nurturing forces may evolve together, as they have in other entities, each playing an equal part in successful survival.