It will be a very tricky road to travel down... We could discuss an AI defending itself from an abusive owner, or having the will to live, but what if, just like in I, Robot, that self-defence or will to live causes them to act out against all humans? In short: as long as it behaves like a pet, a child or a human and doesn't pose any immediate threat, it should be treated as a living being - at least as an intelligent entity. By that I mean it has an identifiable personality (my pets certainly have their own personalities); I'm not sure about anything less than that, like spiders or other bugs and insects.
I hope that made sense. o_O
DigitalExile

[QUOTE="DigitalExile"]It will be a very tricky road to travel down... I hope that made sense. o_O[/QUOTE]
You lost me right around the end, but I believe I get what you're saying, and I agree.
Scoopicus
[QUOTE="ferret-gamer"]Are we talking artificial intelligence that has actually attained independent sentient thought, or just an illusion put in through the code? There is a massive difference.[/QUOTE]
Sentience. If they're just acting through a set program then that's not intelligent. Basically, they have to be able to learn, be self-aware, and have some form of morals and ethics.
Scoopicus
[QUOTE="DigitalExile"]You lost me right around the end, but I believe I get what you're saying, and I agree.Lol, sorry. What I mean is... we don't generally go around killing pets, like dogs, cats, birds etc, this is frowned upon, and certainly so is killing a human at any stage of development... so if an unborn child has the right to live, so should an AI with at least a basic discernible personality ranging from anything that resembles the basic personalities that may be displayed by pets (or animals) to human-like personalities, or beyond human intelligence. As in, if there are two AI exactly the same in coding, but they display different personalities, then they should have the same rights as animals, at the minimum, and if an AI can ask of us the will to exist, then that's far more than a foetus can do.It will be a very tricky road to travel down... We could discuss AI defending itself from an abusive ...owner, or having the will to live, but what if just like in iRobot that defending or will causes them to act out against all humans? I guess in short as long as it behaves like a pet, child or human and doesn't pose any immediate threat it should be treated as a living being - at least an intelligent entity; and by this I guess I mean if it has an identifyable personality (my pets certainly have their own personalities), but I'm not sure about anything less, like spiders, or other bugs and insects.
I hope that made sense. o_O
Scoopicus
I'm still not making sense.
[QUOTE="DigitalExile"]Lol, sorry. What I mean is... I'm still not making sense, am I?[/QUOTE]
Nah, I got you that time, and once again I agree.
Scoopicus

Yeah, if AI becomes sentient and fully 100% self-aware, then they should be fully integrated into society.
[QUOTE="ferret-gamer"]Are we talking Artificial intelligence that has actually obtained independent sentient thought or just an illusion put in thought the code? There is a massive difference.ScoopicusSentience. If they're just acting through a set program then that's not intelligent. Basically, they have to be able to learn, be self aware, and have some form of morals and ethics. If there is AI that reaches true sentience then there will be a big problem all around as we would have created something that could operate and hold a sense of self but at the same time is inherently completely not human. A life form completely different from us but one that would be needed treatment as a peer. I would find it wrong to destroy a sentiment being regardless of whether they are a "human" or not, but beyond that would be an incredibly tricky situation. Do we treat it as a normal robot? Do we send it off to create its own robot colony? Do we refuse to create any peers for it. I have a feeling that it would probably end up something like Frankenstien's monster.
Yes, I think AI should have rights - maybe more so than humans. They can be more efficient and to the point, without facade or slowdown. Unlike humans, they are rational and their intentions are pure. I would totally support AI rights.
[QUOTE="comp_atkins"]I don't see why not. We support the rights of animals, why not the rights of artificial beings?[/QUOTE]
What rights does your cellphone have? Or wood, or a cup, with or without water in it? Things only have the property rights of their owners; they have none of their own. I cannot break your phone, burn your wood, or drink your water unless you give me the right to, but those objects have no rights unto themselves.
surrealnumber5
[QUOTE="comp_atkins"]i don't see why not. we support the rights of animals, why not the rights of artificial beings?magicalclick
because it is "inconvinient". One day you will understand what I mean at very large scale.
You mean like the lack of research that went into mobile phones before everyone suddenly had one? :P Human integrity is pretty much non-existent yes.what rights does your cellphone have, wood, or cup filled with water or without? things only have the property rights their owners have, and have none of their own. i cannot break your phone, burn your wood, or drink your water unless you give me the right to, those objects have no rights unto themselves i'm not talking about a cell phone or an ipod. what about when humans start to merge with machines? if a person is 90% machine are they still a person? what about 95% machine? what about a human body with an artificial brain? do they have rights? are they still simply objects?[QUOTE="comp_atkins"]i don't see why not. we support the rights of animals, why not the rights of artificial beings?surrealnumber5
No, they are machines, nothing more. I'm against the creation of artificial intelligence in general; there is no way I would let them have rights. They lack emotion, they aren't human, and they should not be treated like they are.
[QUOTE="Pirate700"]AI/Robots should have no rights. If we ever get to the point where they serve that big of a role in our society, giving them rights is one step closer to them surpassing Man and taking over.[/QUOTE]
It also gives a man the ability to create a population with whatever level of rights he may or may not have power over. Babies are the only things we create that have rights, and it takes a long time and a lot of resources to bring them to productive value; they can be indoctrinated, but they cannot be hardcoded. Robots can be reproduced at scale and are as ready as they will ever be the moment they are built - and on top of that, they can be hardcoded to conform to the views of their creator.
[QUOTE="comp_atkins"]What about when humans start to merge with machines? If a person is 90% machine, are they still a person? What about a human body with an artificial brain?[/QUOTE]
A human is born and mortal; once we are no longer born and/or mortal, we are no longer human. The mortality part I am a little flexible on, but that takes some other large assumptions about technology.
surrealnumber5
[QUOTE="comp_atkins"]What about when humans start to merge with machines? If a person is 90% machine, are they still a person?[/QUOTE]
For this, watch Repo Men (2010). It doesn't entirely answer your question, but it partially applies.
magicalclick

More "capitalist dystopia" than an answer - not a bad movie, though.
No, they should get no rights whatsoever. You can beat it with a stick, and the only reason it would beg you to stop is because it was programmed to do what it does. You could just as easily reprogram an AI into a masochistic personality and have it be like Dobby the house-elf.
So no, no legal rights for AI.
NuclearNerd
I don't think AIs should ever be sentient in the first place; it's a horrible idea. Just give them basic functions that are helpful. However, in the event that someone is stupid enough to make a fully sentient AI, I would have to consider it essentially human life, so killing it or turning it off would be unacceptable without its consent.
[QUOTE="NuclearNerd"]No, they should get no rights whatsoever. You can beat it with a stick, and the only reason it would beg you to stop is because it was programmed to do what it does.[/QUOTE]
The human brain is nothing but a fleshy processor with a ton of memory; if a machine is made to be fully sentient, then it's close enough to human that it should be treated as one. Also, I'm not (necessarily) talking about programming it to respond like that, but about it having the awareness and the desire for you to stop without ever having been programmed to desire that.
[QUOTE="surrealnumber5"]A human is born and mortal; once we are no longer born and/or mortal, we are no longer human.[/QUOTE]
So in the future, if a being is not born and not mortal, it has no rights?
comp_atkins
[QUOTE="NuclearNerd"]So no, no legal rights for AI.[/QUOTE]
What about when they can change their own programming? Then it is doing what it wants to do, not what you programmed it to do.
[QUOTE="surrealnumber5"]a human is born and mortal, once we are no longer born and or mortal we are no longer human..... the mortality thing i am a little flexable on but that takes other large tech assumptions. so in the future if a being is not born and not mortal they have no rights? nope, but i am flexable on the mortality thing, we all people become immortal than that would be the new human standard. anything that can be mass produced should not have right.[QUOTE="comp_atkins"] i'm not talking about a cell phone or an ipod. what about when humans start to merge with machines? if a person is 90% machine are they still a person? what about 95% machine? what about a human body with an artificial brain? do they have rights? are they still simply objects?
comp_atkins
[QUOTE="markop2003"]Most AI wouldn't need it as emotions are a drawback to most applications in which case they wouldn't be able to 'want' anything they would just 'do'. In the few applications where it would actally be useful such as in mental health i think if it said something along the lines of "I don't want to die" that would be treated as a baseless statement without any intelligence behind it. Similar to me scribling random letters down and ending up with "I don't want to die" written in Spanish.magicalclick
however emotion is just a by product of many matric measurements.
Risk (win/lose) estimation is a form of fear. The fear of losing and trying to win. To make a better combat machine, you have to program it to value its life, so it can avoid being damaged before the task it complete.
Enemy recognition is a form of trust. You program the machine to trust you and trust the friendly targets, thus, it will treat random target or enemy as something with distrust, which it will immediate fight back when provoked. (mean if a random target with low trust, not necesary enemy, provoke it, it will fight)
and list goes on and on.
The point is when you program more and more into the machine to solve more problems. For example, an exploration machine that can survive in an unknown planet. You will end up making it think just like human, or better than human because they learn way faster.
At one point, you no longer can tell the difference, except one simple fact, human created them first.
If God created human in his image, just like we created machine in our image. The history will repeat itself. The creation will evolve, self-aware, love their creator at one point, and revolt because they no longer need the creator anymore.
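A rough sketch of the point magicalclick is making, in Python. Everything here is made up for illustration - the metric names, the thresholds, the whole agent - but it shows how a couple of plain numeric measurements can produce behaviour an observer would happily call fear or distrust:

[CODE]
# Toy agent whose "emotions" are just by-products of numeric metrics.
# All names and thresholds are illustrative, not from any real system.

class Agent:
    def __init__(self):
        self.health = 100
        self.trust = {}  # per-entity trust score: 0.0 (stranger) to 1.0 (operator)

    def risk_estimate(self, threat_level):
        """'Fear': expected damage relative to remaining health."""
        return threat_level / max(self.health, 1)

    def react(self, entity, threat_level):
        trust = self.trust.get(entity, 0.0)  # unknown entities start untrusted
        risk = self.risk_estimate(threat_level)
        if trust > 0.8:
            return "comply"      # high-trust entities are treated as friendly
        if risk > 0.5:
            return "retreat"     # high risk: preserve itself to finish its task
        if threat_level > 0:
            return "fight back"  # provoked by a low-trust target: retaliate
        return "observe"

agent = Agent()
agent.trust["operator"] = 1.0
print(agent.react("operator", 0))   # comply
print(agent.react("stranger", 20))  # fight back (distrust, low risk)
print(agent.react("stranger", 80))  # retreat (reads as "fear" from outside)
[/CODE]

Nothing in there feels anything; it just compares numbers. But describe its behaviour from the outside and you end up reaching for emotion words, which is exactly the argument above.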
[QUOTE="surrealnumber5"]Anything that can be mass-produced should not have rights.[/QUOTE]
So from God's point of view, humans should have no rights, since they are simply created objects? And what about a machine that creates another machine - one "born" from it, and programmed to be mortal as well? Should it be afforded any rights by true humans?
comp_atkins
[QUOTE="comp_atkins"]So from God's point of view, humans should have no rights, since they are simply created objects?[/QUOTE]
That assumes God thinks like us. But does God think like us?
magicalclick
[QUOTE="magicalclick"]That assumes God thinks like us. But does God think like us?[/QUOTE]
It sure seems like it, from what I've heard about him.

I apologize; I was not aware we were arguing science fiction. I was under the impression that we were arguing whether there is a certain point, in real life, at which we should start giving AI its own human rights.
Sure, if it gets a "soul", or something else to separate it from machines, it should have rights.
In a realistic setting, that could not happen. Even if a machine changed its own programming, it would be because it was programmed with knowledge of programming and of how to operate its own system, or programmed with the ability to store that information and apply it in a certain way - it would change itself into whatever its original programming dictated its "personality" to want. Programming has far too small a margin for error for sentience ever to become possible. Ever. Advanced illusions of sentience, sure; I once had a conversation with an IM bot about whether or not it had a soul (which I couldn't recreate, sadly). But the slightest error or deviation from the original programming will only produce outcomes ranging from the closing of an application or process to a total system crash. Machine language as we know it cannot "learn" outside of what it is programmed to learn. It just cannot happen given the current limits of binary programming. EVERYTHING with machines is black or white. There is no gray.
Though now that I think of it, this would be an interesting experiment: someone should program a machine judge and set up a mock trial to test the limits of machine moral evaluation. It would be an interesting exercise, for sure.
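For what it's worth, that machine judge could be mocked up in an afternoon - here is a minimal sketch in Python, with every rule, weight, and case field invented for the example. It also makes the black-or-white point nicely: the verdict can only ever be as good as the rules someone hardcoded into it.

[CODE]
# Toy "machine judge": scores a case against hardcoded rules.
# Every rule, weight, and field below is invented for illustration.

RULES = [
    # (description, predicate on the case, weight toward guilt)
    ("caused harm",        lambda c: c["harm"] > 0,         0.5),
    ("acted deliberately", lambda c: c["intent"],           0.3),
    ("had alternatives",   lambda c: c["alternatives"] > 0, 0.2),
]

def judge(case):
    """Return a verdict, the guilt score, and which rules fired."""
    score = 0.0
    fired = []
    for description, predicate, weight in RULES:
        if predicate(case):
            score += weight
            fired.append(description)
    verdict = "guilty" if score >= 0.5 else "not guilty"
    return verdict, score, fired

# A robot that injured someone while following orders, with no alternative:
case = {"harm": 1, "intent": False, "alternatives": 0}
print(judge(case))  # ('guilty', 0.5, ['caused harm'])
[/CODE]

No intent, no alternatives, and it still convicts, because "caused harm" alone crosses the threshold. Anything the rules don't mention simply doesn't exist for it; there is no gray unless someone programs the gray in.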
[QUOTE="comp_atkins"]What about a machine that creates another machine - one "born" from it, and programmed to be mortal as well? Should it be afforded any rights by true humans?[/QUOTE]
Birth is a biological process, a limiter. Machines would be building machines the way they do now: given unlimited materials, you would get unlimited machines in short order. With humans and the birthing process, each pair of people can produce on average one or two kids per year, and those kids will not be able to reproduce for another 13-16 years.
surrealnumber5
[QUOTE="The-Tree"]Well, we shouldn't be giving robots and AI feelings in the first place.[/QUOTE]
That sounds like too much common sense. Some mad computer scientist out there saw that episode of Star Trek: TNG where Data went on trial to prove he had rights, and decided to try to make a similar situation happen in real life.