What if robots finally become sentient? Should we treat robots like humans? Would they need a Bill of Rights to protect them when that time comes? Why or why not?
Detroit: Become Human explored this topic very well. If robots are going to build themselves, they'll build themselves to be better than we are, and thus it would be in humanity's best interest to get along with these robots, or there will be an uprising.
If they become sentient, then yes, they would have to have rights, given the TC's questions. But this subject depends entirely on whether they have feelings or not and how they are programmed. It's one thing to be sentient and another to have emotions. A completely logical being would be horrible, and an emotional mess would be equally horrible. I guess we'll cross this bridge when we get to it, or if they are ever allowed to reach that stage.
Slightly off topic perhaps, but the funny thing about current AI technology is that it's MUCH more impressive in some areas than others.
For example, it can play a mean game of chess (and I mean really mean), but try holding a conversation with it.
Afaik, even the most advanced "conversation bots" are thoroughly unconvincing and unimpressive.
I'd say the value of life that rights stem from isn't based on sentience, but on the intrinsic value of life itself. To grant them rights is to make equivocal that which is unequivocal.
Would we even have the right ourselves to grant them rights? If they're sentient, then why would they need us? They could claim rights on their own, on the basis of their own existence.
An interesting question, though.
I don't see "robot" as equivalent to "a thing that might have artificial life," and I don't think artificial life will be limited to just robots.
I would hope that whatever artificial life we create can tell the difference as well.
Yes, artificial life should have inalienable rights.
That said, I know we're not ready for that as a species, so we should not attempt to create artificial life. I think it's a far cry from the machine learning algorithms we create now anyway.
How are we even defining sentience or consciousness in this sense?
If we can't even clearly define it, how do we judge something to be sentient and deserving of any inherent rights beyond being the property of its owner?