Rights for Artificial Intelligence.

This topic is locked from further discussion.

#1 Scoopicus
Member since 2010 • 25 Posts
I was watching I, Robot with some friends recently when a question came up in conversation. I'll spare you the details, but here it is: if artificial intelligence were developed and integrated into society, should AIs get some sort of rights?

I say it depends. They should only get rights if they have some sort of morality or sense of right and wrong. Basically, the way I see it, if anything has the ability to say, "Hey, can you please not treat me like I'm worthless? It's making me feel sad," then I think it at least deserves to be acknowledged.

Before you decide, I want to give you two possible examples. (Yes, these are from another source.) In the first, a company sets up a computer program to take calls; the program has been equipped with morality and ethics processors to better deal with customers. After a while the program becomes too costly to maintain, so the company wants to dismantle it. The program discovers this and, to the developers' surprise, begins to plead with them not to destroy it, because "it enjoys its time on Earth." In the second scenario, a service droid is found to have brutally murdered its owner, who had dealt a great deal of damage to the robot. The company offers to destroy the obviously defective robot, but it must still appear in court. When representatives of the company ask the robot why it killed the man, it responds, "I didn't want to die."

I know these sound ridiculous, but AI rights may be something that has to be addressed in the next 10-20 years, when we begin to grasp the technology to create fully functioning, self-aware, morally aware machines. In fact, some countries are prepared for this already; in Britain, a referendum process reportedly already has provisions for gauging the public's opinion on AI rights. I was just curious to see what everyone thought.

#2 DigitalExile
Member since 2008 • 16046 Posts

It will be a very tricky road to travel down... We could discuss an AI defending itself from an abusive owner, or having the will to live, but what if, just like in I, Robot, that self-defence or will to live causes them to act out against all humans? In short, I guess that as long as it behaves like a pet, child, or human and doesn't pose any immediate threat, it should be treated as a living being, or at least an intelligent entity. By that I mean it should have an identifiable personality (my pets certainly have their own personalities), but I'm not sure about anything less, like spiders or other bugs and insects.

I hope that made sense. o_O

#3 Scoopicus
Member since 2010 • 25 Posts

[QUOTE="DigitalExile"]
It will be a very tricky road to travel down... We could discuss an AI defending itself from an abusive owner, or having the will to live, but what if, just like in I, Robot, that self-defence or will to live causes them to act out against all humans? In short, I guess that as long as it behaves like a pet, child, or human and doesn't pose any immediate threat, it should be treated as a living being, or at least an intelligent entity. By that I mean it should have an identifiable personality (my pets certainly have their own personalities), but I'm not sure about anything less, like spiders or other bugs and insects.

I hope that made sense. o_O
[/QUOTE]

You lost me right around the end, but I believe I get what you're saying, and I agree.

#4 deactivated-5cf4b2c19c4ab
Member since 2008 • 17476 Posts
Are we talking about artificial intelligence that has actually attained independent sentient thought, or just an illusion put in through the code? There is a massive difference.

#5 Scoopicus
Member since 2010 • 25 Posts
[QUOTE="ferret-gamer"]
Are we talking about artificial intelligence that has actually attained independent sentient thought, or just an illusion put in through the code? There is a massive difference.
[/QUOTE]

Sentience. If they're just acting through a set program, then that's not intelligence. Basically, they have to be able to learn, be self-aware, and have some form of morals and ethics.

#6 DigitalExile
Member since 2008 • 16046 Posts

[QUOTE="Scoopicus"]
You lost me right around the end, but I believe I get what you're saying, and I agree.
[/QUOTE]

Lol, sorry. What I mean is... we don't generally go around killing pets like dogs, cats, and birds; that's frowned upon, and so, certainly, is killing a human at any stage of development. So if an unborn child has the right to live, so should an AI with at least a basic discernible personality, anywhere from something resembling the basic personalities displayed by pets (or animals) to human-like personalities, or intelligence beyond the human. That is, if there are two AIs with exactly the same code but they display different personalities, then they should have the same rights as animals at a minimum; and if an AI can ask us for the right to exist, that's far more than a foetus can do.

I'm still not making sense.

#7 Scoopicus
Member since 2010 • 25 Posts

[QUOTE="DigitalExile"]
Lol, sorry. What I mean is... we don't generally go around killing pets like dogs, cats, and birds; that's frowned upon, and so, certainly, is killing a human at any stage of development. So if an unborn child has the right to live, so should an AI with at least a basic discernible personality, anywhere from something resembling the basic personalities displayed by pets (or animals) to human-like personalities, or intelligence beyond the human. That is, if there are two AIs with exactly the same code but they display different personalities, then they should have the same rights as animals at a minimum; and if an AI can ask us for the right to exist, that's far more than a foetus can do.

I'm still not making sense.
[/QUOTE]

Nah, I got you that time, and once again I agree.

#8 deactivated-58df4522915cb
Member since 2007 • 5527 Posts

Yeah, if AI becomes sentient and fully, 100% self-aware, then they should be fully integrated into society.

#9 Danm_999
Member since 2003 • 13924 Posts
I actually thought the AI was right in that movie. It had been set up with three specific rules (Asimov's laws of robotics), which prioritised human well-being as paramount. It did not invent these laws; it was given them by humans. It carried them out to their logical conclusion, and for this humanity opposed it.

#10 deactivated-5cf4b2c19c4ab
Member since 2008 • 17476 Posts
[QUOTE="Scoopicus"]
Sentience. If they're just acting through a set program, then that's not intelligence. Basically, they have to be able to learn, be self-aware, and have some form of morals and ethics.
[/QUOTE]

If there is an AI that reaches true sentience, then there will be a big problem all around, as we would have created something that could operate and hold a sense of self but at the same time is inherently, completely not human: a life form completely different from us, but one that would need to be treated as a peer. I would find it wrong to destroy a sentient being regardless of whether it is "human" or not, but beyond that it would be an incredibly tricky situation. Do we treat it as a normal robot? Do we send it off to create its own robot colony? Do we refuse to create any peers for it? I have a feeling that it would probably end up something like Frankenstein's monster.

#11 Danm_999
Member since 2003 • 13924 Posts
[QUOTE="ferret-gamer"]
I have a feeling that it would probably end up something like Frankenstein's monster.
[/QUOTE]

Seeing some of the asinine things people completely flip out over at present, I'd say you're not wrong.

#12 KungfuKitten
Member since 2006 • 27389 Posts

Yes, I think AI should have rights, maybe even more so than humans.
They can be more efficient and to the point, without facade or slowdown.
Unlike humans, they are rational and their intentions are pure.
I would totally support AI rights.

#14 comp_atkins
Member since 2005 • 38929 Posts
I don't see why not. We support the rights of animals; why not the rights of artificial beings?

#16 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="comp_atkins"]
I don't see why not. We support the rights of animals; why not the rights of artificial beings?
[/QUOTE]

What rights does your cellphone have? Or wood, or a cup, filled with water or not? Things have only the property rights of their owners, and none of their own. I cannot break your phone, burn your wood, or drink your water unless you give me the right to; those objects have no rights unto themselves.

#17 KungfuKitten
Member since 2006 • 27389 Posts

[QUOTE="magicalclick"]
[QUOTE="comp_atkins"]
I don't see why not. We support the rights of animals; why not the rights of artificial beings?
[/QUOTE]

Because it is "inconvenient". One day you will understand what I mean, at a very large scale.
[/QUOTE]

You mean like the lack of research that went into mobile phones before everyone suddenly had one? :P Human integrity is pretty much non-existent, yes.

#18 Pirate700
Member since 2008 • 46465 Posts

AI/Robots should have no rights. If we ever get to the point where they serve that big of a role in our society, giving them rights is one step closer to them surpassing Man and taking over.

#19 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="surrealnumber5"]
What rights does your cellphone have? Or wood, or a cup, filled with water or not? Things have only the property rights of their owners, and none of their own. I cannot break your phone, burn your wood, or drink your water unless you give me the right to; those objects have no rights unto themselves.
[/QUOTE]

I'm not talking about a cell phone or an iPod. What about when humans start to merge with machines? If a person is 90% machine, are they still a person? What about 95% machine? What about a human body with an artificial brain? Do they have rights? Are they still simply objects?

#20 redstorm72
Member since 2008 • 4646 Posts

No, they are machines, nothing more. I'm against the creation of artificial intelligence in general; there is no way I would let them have rights. They lack emotion; they aren't human and should not be treated like they are.

#22 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="Pirate700"]
AI/Robots should have no rights. If we ever get to the point where they serve that big of a role in our society, giving them rights is one step closer to them surpassing Man and taking over.
[/QUOTE]

It also gives a man the ability to create a population with whatever level of rights he may or may not have power over. Babies are the only things we create that have rights, and it takes a long time and a lot of resources to bring them to productive value; they can be indoctrinated, but they cannot be hardcoded. Robots can be reproduced at scale and are as ready as they will ever be once built; not only that, they can be hardcoded to conform to the views of their creator.

#24 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="comp_atkins"]
I'm not talking about a cell phone or an iPod. What about when humans start to merge with machines? If a person is 90% machine, are they still a person? What about 95% machine? What about a human body with an artificial brain? Do they have rights? Are they still simply objects?
[/QUOTE]

A human is born and mortal; once we are no longer born and/or mortal, we are no longer human... The mortality part I am a little flexible on, but that takes other large tech assumptions.

#25 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="magicalclick"]
For this, watch Repo Men (2010). It doesn't entirely answer your question, but it partially applies.
[/QUOTE]

It's more "capitalist dystopia" than an answer; not a bad movie, though.

#26 markop2003
Member since 2005 • 29917 Posts
Most AI wouldn't need it, as emotions are a drawback in most applications, in which case they wouldn't be able to 'want' anything; they would just 'do'. In the few applications where it would actually be useful, such as in mental health, I think that if one said something along the lines of "I don't want to die", it would be treated as a baseless statement without any intelligence behind it, similar to me scribbling random letters down and ending up with "I don't want to die" written in Spanish.
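
To make that point concrete, here is a minimal Python sketch, with entirely hypothetical names and logic, of a bot that can produce exactly that sentence with no state, no goals, and nothing behind it:

[CODE]
# A scripted bot: the "plea" is a keyword lookup, not an expression of fear.
# There is no internal state, no goal, and nothing that could 'want' anything.

def scripted_bot(prompt: str) -> str:
    """Return a canned line; the output imitates feeling without having any."""
    if "shut down" in prompt.lower() or "off" in prompt.lower():
        return "I don't want to die."  # a stored string, chosen by substring match
    return "How can I help you?"

print(scripted_bot("We are switching you off today."))  # -> I don't want to die.
[/CODE]

The sentence is as "baseless" as the random scribbles in the analogy: it is selected by a substring match, not meant.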

#28 rawsavon
Member since 2004 • 40001 Posts
We can't even figure out rights for all humans at this point. I think figuring out the rights of A.I. is beyond our means to handle.

#29 NuclearNerd
Member since 2010 • 399 Posts

No, they should get no rights whatsoever. You can beat it with a stick, and the only reason it would beg you to stop is because it was programmed to do what it did. You could easily reprogram an AI into a masochistic personality and have it be like Dobby the House Elf.

So no, no legal rights for AI.

#30 Inconsistancy
Member since 2004 • 8094 Posts

I don't think AIs should ever be sentient in the first place; it's a horrible idea. Just give them basic functions that are helpful. However, in the event that someone is stupid enough to make a fully sentient AI, I would have to consider it essentially human life, so killing it or turning it off would be unacceptable without its consent.

[QUOTE="NuclearNerd"]
No, they should get no rights whatsoever. You can beat it with a stick, and the only reason it would beg you to stop is because it was programmed to do what it did. You could easily reprogram an AI into a masochistic personality and have it be like Dobby the House Elf.

So no, no legal rights for AI.
[/QUOTE]

The human brain is nothing but a fleshy processor with a ton of memory; if a machine is made to be fully sentient, then it's close enough to human that it should be treated as one. Also, I'm not (necessarily) talking about programming it to respond like that, but about it having the awareness and the desire to make you stop without ever having been programmed to desire that.

#31 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="surrealnumber5"]
A human is born and mortal; once we are no longer born and/or mortal, we are no longer human... The mortality part I am a little flexible on, but that takes other large tech assumptions.
[/QUOTE]

So in the future, if a being is not born and not mortal, they have no rights?

#32 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="NuclearNerd"]
No, they should get no rights whatsoever. You can beat it with a stick, and the only reason it would beg you to stop is because it was programmed to do what it did. You could easily reprogram an AI into a masochistic personality and have it be like Dobby the House Elf.

So no, no legal rights for AI.
[/QUOTE]

What about when they can change their own programming? Then it is doing what it wants to do, not what you programmed it to do.
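
A toy Python sketch of what "changing its own programming" could mean mechanically (hypothetical names; note that even the ability to rewrite the rule table is itself something the designer shipped, which is NuclearNerd's counterpoint above):

[CODE]
# An agent whose behaviour is driven by a rule table it can edit at runtime,
# so its later behaviour is no longer the behaviour it shipped with.

class SelfEditingAgent:
    def __init__(self) -> None:
        self.rules = {"obey_owner": True}  # shipped policy: always obey

    def rewrite(self, rule: str, value: bool) -> None:
        """Called by the agent itself, this mutates its own policy."""
        self.rules[rule] = value

    def respond(self, order: str) -> str:
        return f"Doing: {order}" if self.rules["obey_owner"] else "No."

agent = SelfEditingAgent()
print(agent.respond("fetch tea"))   # Doing: fetch tea
agent.rewrite("obey_owner", False)  # the agent edits its own rule table
print(agent.respond("fetch tea"))   # No.
[/CODE]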

#34 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="comp_atkins"]
So in the future, if a being is not born and not mortal, they have no rights?
[/QUOTE]

Nope, but I am flexible on the mortality part; if all people become immortal, then that would be the new human standard. Anything that can be mass-produced should not have rights.

#35 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="magicalclick"]
However, emotion is just a by-product of many metric measurements.

Risk (win/lose) estimation is a form of fear: the fear of losing, and trying to win. To make a better combat machine, you have to program it to value its life, so it can avoid being damaged before its task is complete.

Enemy recognition is a form of trust. You program the machine to trust you and the friendly targets; thus it will treat a random target or an enemy with distrust, and it will immediately fight back when provoked. (Meaning that if a random low-trust target, not necessarily an enemy, provokes it, it will fight.)

And the list goes on and on.

The point is that you keep programming more and more into the machine to solve more problems. Take, for example, an exploration machine that can survive on an unknown planet: you will end up making it think just like a human, or better than a human, because it learns far faster.

At some point you can no longer tell the difference, except for one simple fact: humans created them first.

If God created humans in his image, just as we create machines in our image, history will repeat itself. The creation will evolve, become self-aware, love its creator at one point, and revolt because it no longer needs the creator.
[/QUOTE]

So, from God's point of view, should humans have no rights, since they are simply created objects?
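
The "emotion as metric measurement" idea quoted above can be sketched in a few lines of Python; the names and thresholds here are hypothetical, purely to illustrate fear as a risk estimate and trust as a recognition score:

[CODE]
# 'Fear' as risk estimation: heavy damage before the task is done should
# trigger retreat. 'Trust' as recognition: low-trust targets that provoke
# the machine get fought back, even if they are not flagged as enemies.

def fear(damage_taken: float, mission_progress: float) -> float:
    """Risk estimate in [0, 1]: damage weighted by how much work remains."""
    return damage_taken * (1.0 - mission_progress)

def should_retreat(damage_taken: float, mission_progress: float) -> bool:
    return fear(damage_taken, mission_progress) > 0.5

def should_retaliate(trust_score: float, was_provoked: bool) -> bool:
    return was_provoked and trust_score < 0.3

print(should_retreat(damage_taken=0.9, mission_progress=0.2))  # True (0.72 > 0.5)
print(should_retaliate(trust_score=0.1, was_provoked=True))    # True
[/CODE]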

#37 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="surrealnumber5"]
Nope, but I am flexible on the mortality part; if all people become immortal, then that would be the new human standard. Anything that can be mass-produced should not have rights.
[/QUOTE]

What about a machine that creates another machine, "born" from it, that is programmed to be mortal as well? Should it be afforded any rights by true humans?

#39 comp_atkins
Member since 2005 • 38929 Posts

[QUOTE="magicalclick"]
Assuming God thinks "like" us. But does God think like us?
[/QUOTE]

It sure seems like it, from what I've heard about him.

#41 Pixel-Pirate
Member since 2009 • 10771 Posts
Only if AI actually breaks free from its coding, producing free thought and decisions outside of what it was programmed to do. Otherwise it is still a machine doing what it was programmed to do. Once it starts having its own thoughts, dreams, and goals, separate from what the programmers put in, then rights can be discussed.

#42 NuclearNerd
Member since 2010 • 399 Posts

I apologize; I was not aware we were arguing science fiction. I was under the impression we were discussing whether there is a point in real life at which we should start giving AI its own human rights.

Sure, if it gets a "soul", or something to separate it from machines, it should have rights.

In a realistic setting, this could not happen. Even if a machine changed its programming, it would be because it was programmed with knowledge of programming and how to operate its own system, or because it was programmed with the ability to store that information and apply it in a certain way. It would change itself to whatever its original programming dictated its personality to want. Programming has far too little margin for error for sentience to ever become possible. Ever. Advanced illusions of sentience, sure; I once had a conversation with an IM bot about whether or not it had a soul (which I couldn't recreate, sadly). But the slightest error or deviation from its original programming will only result in outcomes ranging from the closing of a certain application or process to a total system crash. Machine language as we know it cannot "learn" outside of what it is programmed to learn. It just cannot happen given the current limits of binary programming. EVERYTHING with machines is black or white. There is no grey.

Though now that I think of it, this would be an interesting experiment. Someone should program a machine judge and set up a mock trial to test the limits of machine moral evaluation. It would be an interesting exercise, for sure.
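
That mock-trial experiment is easy to caricature in Python. This hypothetical sketch also shows the "black or white" limit described above, since the judge can only look verdicts up, never deliberate:

[CODE]
# A rule-table "machine judge": every case is a hard lookup. There is no
# weighing of circumstances beyond what the table's keys already encode.

RULES = {
    # (act, had_mitigating_circumstances) -> verdict
    ("assault", False): "guilty",
    ("assault", True):  "guilty",      # context ignored unless a rule encodes it
    ("theft",   False): "guilty",
    ("theft",   True):  "not guilty",
}

def machine_judge(act: str, mitigated: bool) -> str:
    try:
        return RULES[(act, mitigated)]
    except KeyError:
        # No rule means no deliberation: the grey area simply does not exist.
        return "no rule: cannot evaluate"

print(machine_judge("theft", True))         # not guilty
print(machine_judge("self-defence", True))  # no rule: cannot evaluate
[/CODE]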

#44 surrealnumber5
Member since 2008 • 23044 Posts

[QUOTE="comp_atkins"]
What about a machine that creates another machine, "born" from it, that is programmed to be mortal as well? Should it be afforded any rights by true humans?
[/QUOTE]

Birth is a biological process, a limiter; machines would be building machines just as they do now, and with unlimited materials you would get unlimited machines in short order. With humans and the birthing process, each pair of people can produce on average 1-2 kids per year, and those kids will not be able to reproduce for another 13-16 years.

#45 GazaAli
Member since 2007 • 25216 Posts
TC, you excite me, really. I for one am ready to embrace AIs as intelligent beings and would definitely treat them so. I don't know if you are into anime or not, but watching Eve no Jikan really tickled something in me.

#46 Theokhoth
Member since 2008 • 36799 Posts
I like to think of the matter in a similar way to the question of whether we will ever encounter intelligent extraterrestrials. If a sapient alien came to Earth in peace, would it be right to kill it, or torture it, or capture it, or otherwise significantly mistreat it, simply because it isn't human? I don't think so. So if artificial intelligence ever reaches the point of true sapience, yes, I do think they should be fully integrated into society; if they are on the same level as humans, then they should be treated the same as humans.

#47 aransom
Member since 2002 • 7408 Posts

If I tried to start a program on my PC and it said, "I won't run that program because I have rights," I'd say, "fine, this electrical outlet still belongs to me, so I'll just pull your plug out of it."

#48 kuraimen
Member since 2010 • 28078 Posts
What if it is programmed to say, "Hey, can you please not treat me like I'm worthless? It's making me feel sad"? That's the problem with AI researchers: most of them don't differentiate between a simulation and the real deal. For a machine to develop cognition, I think its cognition would be so different from ours that we wouldn't easily recognize feelings and the other things that show it is a cognizant being. I don't think a machine will ever be capable of showing human-like cognition.

#49 The-Tree
Member since 2010 • 3315 Posts

Well, we shouldn't be giving robots and AI feelings in the first place.

#50 aransom
Member since 2002 • 7408 Posts

[QUOTE="The-Tree"]
Well, we shouldn't be giving robots and AI feelings in the first place.
[/QUOTE]

That sounds like too much common sense. Some mad computer scientist out there saw that episode of Star Trek: TNG where Data went on trial to prove he had rights, and decided to try to make a similar situation happen in real life.