Re: [TML] Re: Virtuality and its Social Consequences (long) Rob O'Connor 19 Sep 2015 23:55 UTC
Jim Vassilakos wrote:
> Yeah, the problem is that if sophont rights apply to AIs, then once
> you create one, you're limited in what you can do with it, but, of
> course, this will vary quite a bit according to the society.

Historically, humans have had real problems recognising outgroup humans as human. Some eras and societies have been much better about this than others. I don't see the problem being solved, or becoming a non-problem, for other forms of human-level (or greater) 'intelligence', however they are packaged.

There is no need for some of your examples to have human-equivalent intelligence. Swarming/flocking/ambush/fleeing behaviours will suffice for soldier robots; collision avoidance, proprioception and low-level vision processing will do for the trash collector.

Craig Berry wrote:
> In any society which grants "personhood" to strong AIs and leaves
> them free to innovate, they will very quickly transcend human control
> and indeed understanding. One assumes they will face ethical issues
> regarding treatment of their creators.

Becoming alien? Yes. "Transcending human control"? Maybe not. Superhuman levels of 'intelligence' may not be possible, LessWrong etc. notwithstanding. Ultimately, destroying or powering down the substrate the AI is running on is always an option, with the lesser options of rewriting it or removing it to a safe sandbox.

Jim V. again:
> I think it's possible that Strong AIs will evolve from increasingly
> complex neural nets, and as such, they'll have experienced some sort
> of "childhood", during which they will have learned a variety of
> skills, such as language.

The most important thing you teach them is the Golden Rule, or even the Platinum Rule ("Do unto others as they would like to be treated"), and you instil a lot of empathy.

General observation: I second the book recommendations.

Robert O'Connor
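P.S. To illustrate how little 'intelligence' the soldier-robot case needs: the flocking behaviour I mention is classically done with three local steering rules (Reynolds-style boids: separation, alignment, cohesion). A minimal sketch, with arbitrary illustrative weights rather than tuned values:

```python
# Boids-style flocking sketch: each agent steers by three local rules.
# RADIUS and the rule weights below are illustrative, not tuned.

RADIUS = 5.0                          # neighbourhood radius
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0   # separation/alignment/cohesion weights

class Boid:
    def __init__(self, x, y, vx, vy):
        self.pos = [x, y]
        self.vel = [vx, vy]

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def neighbours(boid, flock):
    return [b for b in flock
            if b is not boid and dist(boid.pos, b.pos) < RADIUS]

def step(flock, dt=0.1):
    for b in flock:
        near = neighbours(b, flock)
        if not near:
            continue
        n = len(near)
        # Separation: steer away from nearby neighbours.
        sep = [sum(b.pos[i] - o.pos[i] for o in near) for i in (0, 1)]
        # Alignment: steer toward the average neighbour velocity.
        ali = [sum(o.vel[i] for o in near) / n - b.vel[i] for i in (0, 1)]
        # Cohesion: steer toward the neighbours' centre of mass.
        coh = [sum(o.pos[i] for o in near) / n - b.pos[i] for i in (0, 1)]
        for i in (0, 1):
            b.vel[i] += dt * (W_SEP * sep[i] + W_ALI * ali[i] + W_COH * coh[i])
    for b in flock:
        for i in (0, 1):
            b.pos[i] += dt * b.vel[i]
```

A few dozen lines like this already give convincing group behaviour; no deliberation, let alone personhood, required.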