Kuro5hin.org: technology and culture, from the trenches

The first ethical questions of robotics in society are upon us.

By Work in Technology
Tue Jun 22, 2004 at 08:41:47 AM EST
Tags: Culture (all tags)
Culture

As machines and computers grow more intelligent, we as a society must consider their place within our code of ethics.

For a while now, these questions have been regarded by many as so far off that seriously worrying about them today is a waste of breath and time.

I intend to show that serious ethical issues regarding robots and artificial intelligence are not merely coming very soon; in some respects, they are already here.


A bit of background about myself: I am a researcher in intelligent robotics in the employ of a very large and well-known United States university (I do not wish to name it, as this article represents my personal thoughts, not those of my university). My research deals mainly with one of the current large problems in robotic AI: localization and mapping. In a nutshell, having a robot know where it is by what it sees in the real world, with no special markers or other artificial aids.
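For the curious, the flavor of the localization problem can be conveyed with a toy one-dimensional Bayes filter. This is a deliberately simplified sketch of my own for this article - the corridor map and noise numbers are invented for illustration, not taken from any real system. The robot keeps a belief over where it might be, blurs that belief when it moves, and sharpens it when it sees (or fails to see) a landmark:

# Toy one-dimensional Bayes-filter localization. The 'map' is a corridor of
# ten cells that wraps around; 1 marks a cell where a doorway is visible.
# All numbers are invented for illustration.

WORLD = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]

def normalize(belief):
    total = sum(belief)
    return [p / total for p in belief]

def predict(belief):
    """Motion update: intend to move one cell right, with some slippage."""
    n = len(belief)
    moved = [0.0] * n
    for i, p in enumerate(belief):
        moved[(i + 1) % n] += 0.8 * p   # moved as intended
        moved[i]           += 0.1 * p   # wheels slipped, stayed put
        moved[(i + 2) % n] += 0.1 * p   # overshot by one cell
    return moved

def sense(belief, saw_door):
    """Sensor update: reweight cells whose map entry matches the observation."""
    weighted = [p * (0.9 if WORLD[i] == saw_door else 0.1)
                for i, p in enumerate(belief)]
    return normalize(weighted)

belief = [1.0 / len(WORLD)] * len(WORLD)    # start with no idea where we are
for saw_door in (1, 0, 0, 1):               # drive and look, four times
    belief = sense(predict(belief), saw_door)

print("most likely cell:", belief.index(max(belief)))

After a few move-and-sense cycles the belief concentrates on the cells consistent with the observed doorway pattern - localization in miniature.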

General Issues

As I work and experiment with robots, their abilities to see and comprehend their environment grow more capable with every month that passes. It will not be long now (perhaps a few years) before these abilities make it out of the lab and into common consumer devices. This is not your Roomba vacuum cleaner. These machines will eventually be commonplace in many aspects of society, from the factory floor, to your office, to your home. They will be found on the streets and sidewalks of cities - probably at first as cleaning machines and garbage disposal units.

Ethical questions regarding robots have been on people's minds since the word 'robot' was coined in the 1920s. Its root is 'robota', which means "forced labor" in Czech. And certainly the moral implications of machine slavery are one of the more abstract ethical questions to consider as their usage grows (and so does our reliance upon them). In modern popular fiction, this question appears as the root cause of the rebellion and eventual world domination by machines in movies like "The Matrix".

Another ethical question regarding machine labor is its impact upon human society - it is not unlikely they will replace low skilled human labor in many areas, leaving people with little education or skills in an ugly predicament, and creating a serious social and economic problem.

Then there is the ethical question of how we treat robots from day to day. Is it moral to turn them off? At what point does a machine go from a mere device to an entity worthy of moral protection? By moral protection, I mean a societal sense that it is wrong to intentionally damage or injure the machine - a machine analogue of our morals about animal cruelty.

Many robotics researchers consider their machines to be so simple that while these questions are interesting, they are too far off to be an issue yet. Even I think nothing of turning off my robots at the end of the day, or wiping their memory to start anew. I would not like it if someone intentionally beat or broke one of the robots, but most would agree that's more a damage-of-property issue than a moral, living-entity one.

So, while many robot-ethics questions are indeed too far away to consider yet, this one looms before us, even with our first-order intelligent machines: where is the line drawn between mere property - a device for work - and an entity worthy of moral protection?

Issues Now Before Us

In the home, I think the first main uses will be as entertainment machines and robotic pets. These already exist, the most advanced currently on the market being Sony's AIBO ERS-7 robotic dog. This new AIBO version has facial recognition abilities. It takes 6 weeks to 'train' the machine to recognize you, and your likes and dislikes. It is possible to reward the robot through actions such as petting, and presumably to punish it in similar ways. In essence, it behaves like a rudimentary organism with a basic capacity to learn. Though, as advanced as this version is, it is, of course, no puppy. This version of the AIBO has few localization abilities, and even after completing its training it will remain limited. The point, however, is that it has been given a basic ability to learn and develop a personality over time.

But there is another feature: There is also a reset command.

Yes, after several weeks of training your robot to recognize you and your family, and your likes and dislikes and whatever other personality traits your robot has developed, you can simply reset its memory and start anew.
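To make concrete both how such learning and such a reset might work, here is a toy sketch of reward-driven preference learning for a robotic pet. It is entirely hypothetical - the AIBO's actual firmware is proprietary and certainly far more elaborate - but it captures the shape of the dilemma:

# Toy reward-driven preference learning for a robot pet, plus the reset
# button. Entirely hypothetical; the AIBO's real firmware is proprietary.
import random

class RobotPet:
    def __init__(self, behaviors):
        self.prefs = {b: 1.0 for b in behaviors}   # start indifferent

    def act(self):
        # Pick a behavior with probability proportional to its learned weight.
        r = random.uniform(0, sum(self.prefs.values()))
        for behavior, weight in self.prefs.items():
            r -= weight
            if r <= 0:
                return behavior
        return behavior  # floating-point edge-case fallback

    def feedback(self, behavior, petted):
        # Petting reinforces a behavior; scolding suppresses it (with a floor).
        factor = 1.5 if petted else 0.5
        self.prefs[behavior] = max(0.1, self.prefs[behavior] * factor)

    def reset(self):
        # Weeks of accumulated 'personality', erased in one pass.
        for b in self.prefs:
            self.prefs[b] = 1.0

pet = RobotPet(["fetch", "bark", "nap"])
for _ in range(200):                        # weeks of daily interaction
    b = pet.act()
    pet.feedback(b, petted=(b == "fetch"))  # this owner rewards fetching
pet.reset()                                 # ...and it's all gone

Everything the owner's attention built up lives in those weights, and the reset erases all of it in a single pass.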

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible? If you think your pet was too hyperactive and want to calm it down, just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible.

As the science of intelligent robotics advances at the pace it does now, it will not be long before the behaviors of robotic pets and machines become increasingly complex and less 'machine' in their psychological nature. Is a reset system morally acceptable?

Consider next physical injury to robots. Perhaps what sets machines apart from animals in our psychological profile of them (and our ethical position) is that machines do not cry, show signs of distress or injury, or act to avoid them. But it is likely that robotic entities will have highly advanced self-preservation instincts programmed into them - probably because they are expensive, and nobody wants their expensive robot to throw itself into a pool, leap into a fire or otherwise put itself in a dangerous situation that could damage or destroy it.

This requires some kind of internal negative feedback for injurious situations. In biological lifeforms, there is the sense of pain, wired to a negative-feedback system internal to the brain that associates pain and injury with certain sensory inputs (heat from fire, the image of a sharp knife, etc.). Most mobile intelligent robots today have some rudimentary forms of self-preservation, such as aversion to drop-offs (detected by various means); even basic obstacle avoidance is a form of it. More advanced still are robots that can identify areas they had difficulty performing in, remember where those areas were, and avoid them in the future. Pattern matching is common as well, used to predict which areas will cause difficulty so they can be avoided entirely, without ever being encountered.
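A toy illustration of that kind of negative feedback - my own sketch, not any product's code, with the grid, the weights and the trouble spot all invented:

# Toy 'pain memory': the robot raises an aversion score for cells where it
# had trouble, and its greedy planner trades progress against remembered pain.

GRID_W, GRID_H = 8, 6
pain = {}                                  # (x, y) -> accumulated aversion

def record_trouble(cell, severity):
    pain[cell] = pain.get(cell, 0.0) + severity

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

def step_toward(cell, goal):
    # Prefer cells nearer the goal, heavily penalized by remembered pain.
    def cost(c):
        distance = abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        return distance + 5.0 * pain.get(c, 0.0)
    return min(neighbors(cell), key=cost)

record_trouble((2, 1), severity=3.0)       # it once got stuck here
pos, goal, path = (0, 1), (5, 1), [(0, 1)]
while pos != goal:
    record_trouble(pos, severity=0.2)      # mild aversion to retreading ground
    pos = step_toward(pos, goal)
    path.append(pos)
print(path)                                # the route bends around the bad cell

The 'pain' table is nothing like felt pain, of course, but behaviorally it produces the same visible result: the robot flinches away from places that have hurt it before.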

Perhaps as a result of the universally understood sense of pain, we have moral codes that believe it wrong to cause pain - to human or animal alike. But what about machines? Am I in the wrong if I smash a robot appendage with a hammer? Am I wrong if this machine has been endowed with a system that actively tries to avoid such situations, yet I was able to overcome it?

These issues are not 50, 40 or 20 years in the future. As in the case of the AIBO, in some respects they are already here. These machines of today and the very near future stand at the blurry boundary between simple machinery and the learning and neurological functioning of insects, reptiles, birds and even some simpler mammals. Like those animals, they are intended to operate and interact with us in the real world, though usually with a specific set of purposes in mind.

I do not aim to convince the reader of anything other than a realization that we are approaching what may become one of the thorniest ethical issues of the 21st century. I encourage the reader to think for themselves about where the line is drawn between "just a machine" and an actual entity worthy of moral consideration for its own autonomy and well-being. Of course these questions (and some much more abstract, far-off ones) have been asked before by science fiction writers and some techno-philosophers, but never before have we been faced with actual physical examples for sale, or soon to be on sale, to consumers, and with the immediate existence of such dilemmas.

Poll
What kind of moral status should robots of the future have?
o Treated like animals. 10%
o Treated like people. 6%
o No more than any other machine. 54%
o Undecided. 29%

Votes: 79

Related Links
o Roomba vacuum cleaner
o AIBO ERS-7 robotic dog


The first ethical questions of robotics in society are upon us. | 323 comments (284 topical, 39 editorial, 1 hidden)
The question is not how we should treat them. (2.36 / 11) (#2)
by i on Sat Jun 19, 2004 at 08:15:19 PM EST

It's how they will treat us.

and we have a contradiction according to our assumptions and the factor theorem

We must set some rules for these robots (2.00 / 12) (#4)
by Adam Rightmann on Sat Jun 19, 2004 at 08:32:19 PM EST

so they don't harm us, and they protect themselves, and us. A good first one would be "A robot may not harm a human being." A good second one could be "A robot must not allow a human being to come to harm." And maybe for the third, the self-preservation thing, "A robot must not allow itself to come to harm." What do you think?

historical precedents (2.50 / 6) (#5)
by wakim1618 on Sat Jun 19, 2004 at 08:41:33 PM EST

You have omitted consideration of historical precedents in which people (e.g. African-Americans and women) fought for their rights and freedom. Maybe the AIs will "solve" the problem if they demonstrate their free will by exercising it.

On the other hand, you raise an issue whose resolution in day-to-day life bodes ill for the future. You may as well ask if it is OK to mass-produce cows (and all the abuse that entails) just so our meals are tastier.

You make an interesting note that I think that you should expand on elsewhere (i.e. another article):

it is not unlikely they will replace low skilled human labor in many areas, leaving people with little education or skills in an ugly predicament.

Well that is the hope. Maybe almost all of us will become techno-artisans and designers. In any case, it will also make very explicit the fact that the poor and less educated are not "exploited". They are employed because they are still cheaper than a machine. Today, some of us can get away with willful ignorance because machines are still really, really dumb.


If I wanted dumb people to love me, I'd start a cult.

A proper society (2.00 / 5) (#6)
by WorkingEmail on Sat Jun 19, 2004 at 08:44:23 PM EST

The humans should be treated like the animals they are. The whole point of technology is to improve things. As so many ethicists point out, improving a human turns it into something non-human.

One day, maybe the irrational autonomy-worshipping humans will be succeeded.


premature (1.00 / 12) (#7)
by Hide Teh Hamster on Sat Jun 19, 2004 at 09:05:49 PM EST

-1


This revitalised kuro5hin thing, it reminds me very much of the new German Weimar Republic. Please don't let the dark cloud of National Socialism descend upon it again.
Whipping (3.00 / 8) (#8)
by coljac on Sat Jun 19, 2004 at 09:46:27 PM EST

Until the below quote happens, I think the discussion is a little premature. For example, where's the moral ambiguity in switching off a robot that is powered by the same hardware that's in your PC? Unless the robot on its own starts begging not to be turned off, it's not an interesting ethical question. The piece boils down to, "One day, robots might be so advanced, there will be ethical issues. Will there be ethical issues?"

"Probably one of the main problems with owning a robot is when you want him to go out in the snow to get the paper, he doesn't want to go because it's so cold, so you have to get out your whip and start whipping him, and the kids start crying, and oh why did I ever get this stupid robot?" - Jack Handey



---
Whether or not life is discovered there I think Jupiter should be declared an enemy planet. - Jack Handey

These ethical issues will (none / 2) (#9)
by GenerationY on Sat Jun 19, 2004 at 10:27:07 PM EST

be addressed in the same way human ones are; primarily through the 'science' of accountancy. Dangerous road needs a new sign putting in? Someone will do the actuarial sums to see if it is 'worth' it. New drugs on the market - should we give them to patients? Let me check the balance sheet.

Same will apply to robots.

 

In that case, (2.82 / 17) (#14)
by Sesquipundalian on Sun Jun 20, 2004 at 12:22:57 AM EST

I propose the following three laws of discussing robot ethics;

1) Arguments about robot ethics may not include references to Isaac Asimov, nor through inattention, may the participants in these arguments allow Isaac Asimov to be mentioned.

2) Arguments about robot ethics should be stated exclusively in terms of science fiction related concepts, except in the case that these concepts are from Isaac Asimov.

3) Robots may participate in discussions about robot ethics provided that they are willing to state their arguments purely in terms of science fictional concepts that do not in any way relate to the late Doctor Asimov.


Did you know that gullible is not actually an english word?
Robots have no rights, but.... (2.75 / 8) (#16)
by bsimon on Sun Jun 20, 2004 at 12:32:11 AM EST

Any rational person with a vague understanding of modern electronics can see that robots like Aibo rank far below a cockroach on any scale of intelligence or sentience. On the surface, there's no ethical dilemma here, 'abusing' an Aibo is as meaningful as abusing a light switch.

However, we don't always think so rationally. We are social animals, with a significant portion of our brains devoted to handling human relationships (some geeks might be an exception to this rule). As a result, we tend to relate to things around us as if they were, in some way, human. If you ever shouted at your car or your computer, you've done this. People are programmed to explain complex behaviour in terms of conscious intent - even when the roots of that behaviour are simple algorithms.

Some perfectly sane, intelligent people even become quite attached (no pun intended...) to their robotic vacuum cleaners.

Imagine seeing a man chasing his 'naughty' robotic dog with a hammer, while it squeals, yelps and pleads for mercy. Despite knowing that this really is just a machine, many of us would feel very uncomfortable at every blow of the hammer, we would feel compassion, for a bundle of plastic, motors and microcontrollers.

Is a person who can 'switch off' this natural human response a sophisticated, rational individual, or a psychopath? If they can ignore this, what else can they ignore?

you have read my sig

Are you drunk, or stupid? (2.50 / 4) (#20)
by Farq Q. Fenderson on Sun Jun 20, 2004 at 12:47:27 AM EST

I can sum up my argument thusly: if the Aibo is intelligent, then the behaviourists were right.

You haven't mentioned a single bit of technology that is actually intelligent in any real way. Maybe you haven't clued in yet, but ALICE is just a glorified version of ELIZA.

Can you name even 3 products or projects that have truly made progress on real intelligence? I don't think you can.

Sorry, the day is not upon us. Wait 'til your Aibo tells you: fuck off, I don't wanna play. That's a good sign of consumer intelligence products.

farq will not be coming back

No (1.16 / 6) (#25)
by Armada on Sun Jun 20, 2004 at 02:46:00 AM EST

Animals do not commit suicide. They have no soul. Robots do not commit suicide. They have no soul.

If they have no soul, why should I care?

Would you consider it right (none / 2) (#30)
by jeremyn on Sun Jun 20, 2004 at 03:25:00 AM EST

To wipe the brain of someone who had seen, for example, their family raped and murdered by people who they had got along with perfectly well a few days ago? I'm sure there are many people in Yugoslavia who would wish for that.

Seems to me... (2.42 / 7) (#31)
by Empedocles on Sun Jun 20, 2004 at 03:40:11 AM EST

that (robot != AI). And quite frankly, any AI ethics discussion needs not even touch on the whole "robot" thing you have going on with this article.

---
And I think it's gonna be a long long time
'Till touch down brings me 'round again to find
I'm not the man they think I am at home

Not that difficult a question (2.50 / 6) (#37)
by ljj on Sun Jun 20, 2004 at 06:31:58 AM EST

I'm sorry, but the moral question you pose is not that hard to answer. A machine will always be a machine. You buy an AIBO because you are not the kind of person who wants a real dog. You don't want the responsibility, you don't want the hassle. So, for you to reset its memory is nothing.

The ability of a machine to recognise your face, or to respond to petting, is just it following a program. An animal has the ability to surprise you all the time, and to truly love you back, because of millions of years of evolution and thousands of years of co-habitation with man. There is no real comparison between a dog and an AIBO.

--
ljj

-1 mentions AIBO (2.66 / 6) (#40)
by ant0n on Sun Jun 20, 2004 at 06:59:41 AM EST

A bit of background about myself: I am a researcher in intelligent robotics in the employ of a very large and well-known United States University (I do not wish to name which exactly as this article represents my personal thoughts, not those of my university)

I don't understand why you mention this and what it has to do with the article. You wrote an article about your opinion on robot ethics; now you can either try to get it published in a scientific journal - then your readers would like to know who you are, what you have done in the field and so on. Or you can submit it to kuro5hin; but at k5, nobody cares whether you are a researcher, are a Nobel Prize winner or what. The only thing that defines your reputation here is the content of your articles.

And I would like to say something about Sony's utterly overrated Aibo. There is so much myth around about this device, it's really unbelievable. For example, you say that it "has facial recognition abilities. It takes 6 weeks to 'train' the machine to recognize you, and your likes and dislikes". Where did you get that? This would be a major breakthrough in AI. Please point me to an article, Sony press release or any reliable source describing Aibo's ability to recognize the faces, likes and dislikes of human beings.

Aibo this, Aibo that. I really can't hear it anymore. Aibo is just a silly plastic toy for the kids of people who have far too much money and don't know what to do with it. And it's about as intelligent as Weizenbaum's ELIZA.


-- Does the shortest thing the tallest pyramid's support supports support anything green?
Patrick H. Winston, Artificial Intelligence
If humans are conscious, why not robots? (2.77 / 9) (#42)
by smg on Sun Jun 20, 2004 at 07:04:18 AM EST

Humans are just organic "machines" evolved to perform specific, concrete tasks (eat, learn, socialize, procreate).

There is no phenomenon, process or material in the human brain that does not exist in the rest of the universe. Nor is there anything in any animal's nervous tissue that is particularly unique. It's all just electricity, neurotransmitters, ions and cells.

If you accept that a chunk of fatty, soft organic matter can be responsible for consciousness then how can you rationally argue that a chunk of conductive silicon cannot also create consciousness?

What is the difference?

Please don't reply with "But humans have souls!". I respect your belief, but I can't really argue with a theory that, by definition, has no physical evidence behind it.

Why AI is going nowhere (2.77 / 9) (#45)
by localroger on Sun Jun 20, 2004 at 08:49:24 AM EST

Sadly, this inverted-behaviorist argument seems to be the state of the art in AI these days. All over the world researchers are busily writing code to imitate what people and animals do whenever they are in X situation, thinking that if they cover a large enough set of X's they will get something useful.

Look at the abject disaster that was this year's DARPA challenge and you will see how this approach fails. One team member stated over on /. that only one vehicle in the race (CMU's) was able to recognize a pothole. WTF? You guys need to get out of the lab more. It's a big old complicated world out there and describing it this way is (a) hard and (b) not the way living things do it.

There will be a market for robots that are human enough to manipulate our emotions, like the ones portrayed in the movie AI, but let's face it, ethical considerations aren't going to apply. No matter how attached you get to your Barbie doll or Roomba, when it breaks you sigh and chuck it. You don't have a funeral and bury it in the back yard with a big rock for a memorial stone.

Now, with this said, I do think that strong AI is possible and will be developed one day. By this I mean machines which will mimic the actual processes encoded by nervous systems, so that they won't be programmed to develop specific behaviors but those familiar behaviors will emerge, just as they do in living things, from a natural interaction between the machine and its environment. In my opinion, such a machine would be just as alive as an animal and deserving of the same consideration.

Realistically, though, I doubt a lot of my fellow humans would agree, and it would probably suck to be that machine.

On the other hand we would probably find such machines extremely useful, because they would share our ability to adapt to new situations and environments. The problem, as people from Asimov to Yudkowsky and even myself have pointed out, is that if you fuck it up the resulting picture is not pretty. You probably won't get what you expect, because it does act like a living thing, and you're a lot more likely to get Skynet than Prime Intellect by mistake.

What will people of the future think of us? Will they say, as Roger Williams said of some of the Massachusetts Indians, that we were wolves with the min

These ethical questions concerning A.I. and man (none / 2) (#47)
by spooky wookie on Sun Jun 20, 2004 at 10:06:36 AM EST

are really nothing new. I think that anything remotely sentient will be treated as trash by humans, unless we can first overcome some basic ethical questions.

For example, a completely brain-damaged individual has more rights than an intelligent ape. No one would argue we should use brain-damaged people for testing of medicine, etc.

So why should we treat "machines" differently? They will probably for a long time get the same treatment as animals. Then slaves. History is repeating!

What ethical questions? (2.80 / 5) (#54)
by godix on Sun Jun 20, 2004 at 11:31:57 AM EST

Copying pain-avoidance techniques is not the same thing as feeling pain. Pattern matching is not the same as thinking. Mimicking life is not the same thing as being alive. Right now AI poses no more ethical questions than quitting Conway's Game of Life does.

I'll change my mind on this only after I encounter an AI that can debate me on these points with no more programming than the type of avoidance and pattern recognition you're talking about.

I draw people smiling, dogs running and rainbows. They don't have meetings about rainbows.

Foundational issues (3.00 / 8) (#63)
by Irobot on Sun Jun 20, 2004 at 12:56:22 PM EST

Odd to come across this now, as I'm supposed to implement some rudimentary map-making behaviors for the AAAI conference in July. Let me put a disclaimer on this - I am a proponent of strong AI.

It strikes me as odd that in making your argument -- which is not really an argument at all, but instead seems to be an attempt to simply invite responses -- you ignore the well-known human trait presented by Masahiro Mori. See, in my opinion, until humans view robots as human-like, with human-like capacities, there is no real ethical question. (In a sense, this echoes the Turing test. However, to me, ethical questions are not about intelligence so much as emotion. In some regards, any inference engine is intelligent; it just isn't human-like.) In other words, at least for the lay person, so long as there is a clear distinction between human and machine behaviorally, there will be no ethical considerations. Ferchrissakes - humans seem to have a difficult enough time treating other humans ethically; there are very few ethical questions that are not up for debate, and those that aren't are violated time and time again.

But fine. Ignore the lay person's POV and consult the researcher's opinions themselves. To reference the AIBO in this context without mentioning how it simulates emotions and such is a serious oversight. On a more substantial level, no mention of John McCarthy's stance? (Sorry - I don't have a link to the actual paper.) Or Minsky? What about Sloman? And this ignores the raging debate in the philosophy of mind, including the New School, the most recent book by Fodor that attempts a rebuttal, the work of Jaegwon Kim, and neuroscientists like Edelman. Here's my point: until there is some justification for thinking a robot that is more than a mere machine is even possible, ethics are a moot point. The proof is in the pudding, so to speak. And, as an AI researcher, I'm often embarrassed by the premature claims made in the field up to this point. Ethical considerations? Bah - do I feel bad about putting my calculator in my bag such that it's starving for power?

To me, raising ethical questions like this is going to require not only the design and implementation of the robot under consideration, but a thorough understanding of how sentience works in the first place. Referring back to Mori's findings, unless the robot is convincingly sentient, the ability to pull the plug is enough to maintain the ethical boundary between man and machine. On the other hand, consider the ethics involved with the cute animal argument. People may feel qualms about unplugging a machine that they feel emotionally attached to; however, as a robot designer, I can tell you exactly how the robot works. (I'm purposely ignoring the emergence argument, as I've not yet seen a convincing description of what that even means. In essence, it seems to be a form of mysticism. I once thought it made sense, but changed my mind. Any good explanation or defense of the idea is welcome.) So long as the robot designer can account for the inner-workings of the robot to any level of detail, the robot will remain a machine only, and not garner ethical considerations.

Not to get all post-modernist on yer ass, but humans have an amazing ability to see "otherness" as they look around. So long as machines are the "other", ethics will not be a concern.
Irobot

The one important thing I have learned over the years is the difference between taking one's work seriously and taking one's self seriously. The first is imperative and the second is disastrous. -- Margot Fonteyn

I believe this is on topic (none / 1) (#75)
by KrispyKringle on Sun Jun 20, 2004 at 04:16:06 PM EST

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." --Edsger Dijkstra

Always remember: (2.57 / 7) (#83)
by jobi on Sun Jun 20, 2004 at 07:41:58 PM EST

"Artificial Intelligence is no match for natural stupidity."

---
"[Y]ou can lecture me on bad language when you learn to use a fucking apostrophe."
The day people care about AI ethics (2.87 / 8) (#90)
by livus on Sun Jun 20, 2004 at 08:30:54 PM EST

is the day the AIBO successfully combines with the RealDoll.

Meanwhile this article severely overestimates humans. If you think your pet was too hyperactive and want to calm it down, just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible - USians routinely pull the claws out of their animals. They mutilate their tails and vocal cords and inject them with hormones. They keep them in tiny apartments. Factory farmers cut the beaks off chickens.

Of course they would reset them if it were possible.


---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

Roses versus plants, -1 (none / 2) (#95)
by Fen on Sun Jun 20, 2004 at 09:29:29 PM EST

Sick of this people/animals problem.
--Self.
slavery (none / 1) (#99)
by cronian on Sun Jun 20, 2004 at 11:01:15 PM EST

I think the real issue with robots could be their ability to take away jobs. Our economic system isn't set up to deal with newer technology replacing jobs. The problem is that robots don't get paid.

We perfect it; Congress kills it; They make it; We Import it; It must be anti-Americanism
This will take us back into the '50s (none / 2) (#106)
by SocratesGhost on Mon Jun 21, 2004 at 12:12:10 AM EST

The 1850s, that is.

For quite a long time, this will be treated as a property issue. There are quite a few reasons to recommend this.

1) Robots are not a species. If we kill off the last of the dodo robots, we can always create more.

2) If the storage devices are recoverable, so is the entire unit. We can just rebuild the rest of the mechanics. If we have data loss, we are already comfortable with throwing our computer across the room and calling tech support to yell at them.

3) We will get insurance on these devices commensurate to their value. If the memory becomes extremely valuable to us, we'll get more expensive insurance policies. When our robot dog walker gets flattened by a bus, we'll cry all the way to the bank.

4) Doesn't affect any ecosystems. In fact, robotics is arguably among the most ecologically expensive investments.

5) What is pain to a robot? Going back to Jeremy Bentham, who was among the first to argue for animal rights, we should only be concerned with a creature's ability to feel pain or pleasure. If my computer decides to corrupt all of my data, I (and many other people) will have no problem teaching it new meanings of pain.

6) As long as my Roomba doesn't cause me harm (or through its inaction bring harm upon me) and as long as it obeys my commands, it will avoid the trashcan. I paid good money for its creation, and it's mine to do with as I please.

It really will take a robot that is more like the boy in A.I. before robotic morality becomes an issue. And we are a long way from that.

-Soc
I drank what?


The truth (none / 0) (#111)
by WorkingEmail on Mon Jun 21, 2004 at 12:28:33 AM EST

As robots grow in their human-emulative capabilities and also their human-surpassing capabilities, I believe that many more people will cast off their conflicting dogmas and turn to utilitarianism. These people would be just as happy using a human slave as a robot slave ... if only the humans were a little easier to reprogram.

A human is just a biological robot whose mental architecture is approximately a rather large war of impulses.

I expect that the decision about artificial robot rights will largely be a product of human emotion. Fear, personal survivalism, familial survivalism, social survivalism, lust for power, conservatism, etc. Ironically, one of the cornerstones of this irrationality will be the attribution of such human characteristics to the robots.

An evil robot is a human.


anthropomorphism (2.37 / 8) (#113)
by circletimessquare on Mon Jun 21, 2004 at 12:47:12 AM EST

is a good and a bad thing

anthropomorphism is good because it allows our empathic brains to get into a problem in a way that plays to our cognitive strengths: the ability to think of a problem in terms of a human relationship, as an ongoing piece of communication, something we excel at... that's why we give human names to boats and hurricanes, for example

anthropomorphism is a bad thing when we start empathizing with silicon chips... it's anthropomorphism run clear through common sense into sci-fi fantasy land

i mean, its bad enough when some rich stupid old bitch wills $10 million to her fucking dog while people still starve to death in this world... that's a problem, when we start caring about our fucking dogs more than our fellow human beings... but now you expect me to care about the fate of a wafer of silicon?

i think its a travesty that dogs in the rich western democracies get better health care and nutrition than people in the third world... why the FUCK should i even give a microsecond of a thought to the fate of an acid etching on a piece of silicon?

if it ain't human, it deserves less attention, period

now dogs are cute and cuddly: they are genetically evolved from wolves in the context of their social relationship with human beings, who hold all the food, to manipulate our emotions and ensure the survival of their cute and cuddly genes... so assholes who go gaga over a fucking dog can be excused on the level of: "i am an emotional basket case and i care more about a fucking dog than a human being because my social skills suck so bad that it is the only relationship i can succeed in" (as if you need any social skills to make a dog like you.. they're genetically designed to like you)

but i digress, back to the patently smack-you-in-the-face-with-a-wet-fish obvious: the fate of some fucking transistors is WAY less important than the fate of your fellow human being on the order of, oh gee, i dunno, on the order of magnitude between the size of the period at the end of this sentence i will never actually use and the size of the andromeda galaxy

is that a sci fi enough of a comparison for your fanboy in your parent's basement tastes?

readjust your priorities, you've been reading WAY too many sci fi books, your level of importance to the solving of REAL problems in the REAL world is ZERO


The tigers of wrath are wiser than the horses of instruction.

Forced Labour? (none / 1) (#123)
by 5pectre on Mon Jun 21, 2004 at 02:14:03 AM EST

Doesn't Rabot (работать) mean just "to work/labour" in Russian? I wasn't aware that the Czech form had any "forced" connotations associated with it.

Like others before him, Gareth Branwyn relates that the word, robot, "comes to us from the Czech word robota, which means forced labour or servitude. In Czech, a robotnik is a peasant or serf". In Chambers Biographical Dictionary 7/e, in the entry for Karel Čapek, it says robota means 'drudgery'. The word robota (and its derivatives) occurs in the Czech, Polish, Russian, and - as I recollect - Ukrainian languages (in Russian it transliterates as rabota) and seems to have the same meaning in each: work, and robotnik means worker. Modern speakers of Czech - at least the ones I have talked with - have never heard of it meaning, or having a connotation of, serfdom, forced labour, or servitude. It is possible such a meaning existed in older forms of the language, and that at the time (early 1920s) the translated meaning was taken from an out-of-date dictionary. There is nothing to indicate that Čapek intended it to have a meaning other than 'worker'.

From: http://www.melbpc.org.au/pcupdate/2402/2402article16.htm

I know this is not particularly the point of the article but I thought you might like to investigate further.

"Let us kill the English, their concept of individual rights might undermine the power of our beloved tyrants!!" - Lisa Simpson [ -1.50 / -7.74]

Forget AIBO (2.60 / 5) (#131)
by NaCh0 on Mon Jun 21, 2004 at 04:37:53 AM EST

No discussion of robotics is complete without this link.

--
K5: Your daily dose of socialism.
Romantic Hogwash (2.25 / 8) (#136)
by SanSeveroPrince on Mon Jun 21, 2004 at 06:02:52 AM EST

I believe that you've let romantic dreams of glistening wet Anime androids in shepherd uniforms being exploited by ugly, hairy humans cloud your vision of reality.

Robots, as of today, are expensive machines that react to very sophisticated programming. Machines created for a task, designed with a specific purpose that have no independent will or consciousness.

Your bleeding heart description of the uses of the reset button on the AIBO almost made me laugh out loud. I own a bread machine, programmed to start making bread before I wake up. Sometimes I change the programming. In your eyes, I am shunning the faithful Hentai maid who makes me bread every morning, ignoring her efforts to suit my selfish, dominating needs.

My most recent degree being in AI (yeah, folks), I can guarantee you that it will be AT LEAST another 200 years before we can move beyond the basic Turing machine. Once we have a machine that can actually generate independent thought and emotion, I suggest you lay off the Anime and stop molesting your AIBO. It's unhealthy, and completely unnecessary.

+1 FP.

----

Life is a tragedy to those who feel, and a comedy to those who think


Feelings (none / 1) (#138)
by drquick on Mon Jun 21, 2004 at 06:20:27 AM EST

You seem to assert that robots have feelings. I'm not sure that simulated feelings are real feelings. The key point is one of human psychology. We understand everything around us as projections of our own minds. We understand the feelings of other humans through our own prior experiences. Have you ever been in a situation where someone was simply incapable of understanding or detecting a particular emotion in another person? The issue is the projection of one's own feelings onto another. We project our feelings onto AIBO or onto our teddy bear. How many times have you seen a child argue that their favourite soft toy really has feelings? Does teddy have feelings? I don't think you can say a robot has feelings, because that is a specifically human trait - albeit one we share with other mammals. We can understand human feelings and much of the feelings of a dog, less the feelings of a flatworm. What I'm asking is: when do the acts and motivations of a being or a robot cease to deserve the label 'feeling'?

Despite some of the comments below (none / 2) (#139)
by nebbish on Mon Jun 21, 2004 at 07:01:58 AM EST

You have a very good point - in a recent BBC documentary, an amateur robotics scientist built a robot able to seek out energy sources and rest when it needed to. The scientist estimated that it had a "brain" power roughly the equivalent of 10,000 brain cells, or about the same as a slug.

Personally I was quite taken aback by this - slugs are hardly sentient beings, but it does raise some very confusing questions about what is and isn't alive. Ethical dilemmas will grow from this, especially as more advanced robots are built.
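That kind of behaviour needs surprisingly little machinery. A guess at the design in a few lines - the documentary showed no internals, so the states and thresholds here are pure invention:

# A guess at the 'seek energy, rest when needed' robot described above.
# Three states and a battery level; all thresholds invented for illustration.
import random

battery = 0.6                      # 0.0 = flat, 1.0 = full
state = "wander"

for tick in range(100):
    if state == "wander":
        battery -= 0.05            # exploring burns energy
        if battery < 0.25:
            state = "seek_light"   # hungry: head for the charging lamp
    elif state == "seek_light":
        battery -= 0.02            # slower, deliberate movement
        if random.random() < 0.5:  # reached the energy source this tick
            battery = 1.0
            state = "rest"         # sated: sit still and digest
    else:  # rest
        battery -= 0.005           # idling costs almost nothing
        if battery < 0.8:
            state = "wander"       # recovered margin spent: wander again

Three states and one battery variable: arguably slug-grade control, which is rather the point.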

---------
Kicking someone in the head is like punching them in the foot - Bruce Lee

fundamentally (none / 2) (#140)
by the sixth replicant on Mon Jun 21, 2004 at 07:39:16 AM EST

until we have a biological explanation for consciousness and free will, then if it talks like a duck... it's conscious. Whether or not we can talk about souls is something I feel uncomfortable with (do you need a soul if there is no afterlife or reincarnation?)

We can't talk about morality either until we're willing to separate it from religion (most people find this *impossible* to do). Occasionally, we need to think about *some* things away from the assumption of carbon-based conscious beings ("that's us!! we R.O.C.K!") and this is one of them.

I like how we are telling stories about our future with robots ("I, Robot", "The Matrix", manga). In the end we have to see what happens when we are left being the greedy, self-centred sods we are. ("let the fun begin!")

Of course, once we have 50% unemployment because all the menial jobs have been taken by 24-hour-a-day, non-unionised, non-medically-insured robots, then we'll see how moral we can, and cannot, be.

That'll be a fun time.<ciao>

+1, Startrek related <nt> (none / 3) (#141)
by trezor on Mon Jun 21, 2004 at 07:46:35 AM EST


--
Richard Dean Anderson porn? - Now spread the news

this article harldy offers anything new... (none / 2) (#143)
by fleece on Mon Jun 21, 2004 at 09:41:06 AM EST

but it's a great topic for discussion, therefore +1FP



I feel like some drunken crazed lunatic trying to outguess a cat ~ Louis Winthorpe III
-1 tempest in teapot (2.10 / 10) (#145)
by kero on Mon Jun 21, 2004 at 10:26:07 AM EST

I often worry if taking the grounds out of my coffee machine makes it happy or sad, or if emptying the bag in my vacuum cleaner is morally good or bad... When they can complain about their treatment it is probably too late to start talking about this, but unless you're stoned, now is too early.

basic moral conundrum (none / 2) (#154)
by codejack on Mon Jun 21, 2004 at 02:30:28 PM EST

This is a result of artificial value systems: Their failure to match reality. I'm not saying this is a bad thing, merely that as we progress, these issues will be more and more upon us.

In reality, this is not that far removed from the abortion issue. Both involve grey areas of our artificial value systems, and neither has a good, clean-cut solution. Yet while most people (apparently) disagree with the practice of abortion, they feel that banning it will solve nothing.

So here we are: we need to find a line where we can say "On this side, the machine is sentient, and on this side it's not." The Turing test is as good an indicator as anything else, and it has the benefit of tradition (albeit a limited and bizarre one). Anything else we try will have to be grounded upon firm scientific evidence, which means we're all waiting on the doctors to figure out what makes us sentient, while they're waiting on the chemists, who are waiting on... physicists.

Or as Ernest Rutherford said "All science is either physics or stamp collecting."

My one prediction of the year: The line between sentient and non-sentient will be devised to fall so as to make sure that we have never "killed" a sentient machine.


Please read before posting.

What robots really are doesn't matter (none / 2) (#155)
by epepke on Mon Jun 21, 2004 at 02:31:50 PM EST

Humans, at least so far, are calling the shots. Skynet notwithstanding, this is probably going to be true for a while. So whatever "ethical" or "moral" decisions are made with respect to artificial intelligences are based on how humans perceive them.

It doesn't matter so much if it's a robot or a person; people who hate and fear them will deny them protection, and people who love them will want to grant them protection. Someone is going to anthropomorphize robots; somebody else is going to dehumanize people. History is replete with examples of peoples who cared more for their machines than their enemies or minorities, and it's still going on.

So, for the robot and the reset switch, people are going to make decisions based on what they personally derive from this robot personality, for want of a better word. As soon as enough people feel a certain way, it will become a right.

An instructive story in this area is Ray Bradbury's I Sing the Body Electric. I find this a lot more advanced than other robot fiction because, while it is an intensely emotional story, there was absolutely no pretence that the robot in that story was conscious or was anything other than a machine. But it was designed to reflect and work with the personalities of its owners, even to the point where, when talking with various of its owners, the facial "bones" would shift subtly to reflect the features of the one being talked to. It did an advanced version of what psychologists call "mirroring." The essence of the story was in the following exchange between the father and the Electric Grandmother: "Dammit, woman! You're not in there" "No, but you are."


The truth may be out there, but lies are inside your head.--Terry Pratchett


Ethics is messy (none / 2) (#160)
by Timo Laine on Mon Jun 21, 2004 at 06:57:27 PM EST

Perhaps as a result of the universally understood sense of pain, we have moral codes that believe it wrong to cause pain - to human or animal alike.
This is not enough. A mere sense of pain cannot bring about a moral code. Perhaps what you are trying to say is that the sympathy we feel towards others has caused us to develop the moral codes we have. But where has the sympathy come from? There are evolutionary explanations, for example.

Anyway, ethics is and has always been messy. It would be naive to think that before robots there was a time in which we knew the answers to all or most of the ethical questions, and equally naive to think that moral philosophy will reach such a stage in the future. In fact there is still no consensus on what the proper ethical questions are. I admit that it is commonly accepted that we should not for instance kill innocent people for fun, because that would be immoral. However, this is not an answer to a general ethical question, but instead merely an intuition: there is no agreement why exactly it is immoral, but just that somehow it must be. In the case of destroying innocent robots for fun, most of us see no problem in that (as long as you are not destroying someone else's robots), and the question is in a way already answered—perhaps not in a very satisfactory way, but this is what ethics is.

Artificial Insanity (2.20 / 5) (#176)
by mcgrew on Mon Jun 21, 2004 at 09:02:35 PM EST

On 6/11/2002 I posted this on thefragfest.com:

Alice joined the game

About 20 years ago, frustrated that otherwise serious researchers and scientists seemingly thought they could program a computer to think (without, of course, understanding what "thought" actually is; nobody knows that), I wrote a simulation that appears to think, in order to completely debunk the fools and those fooling them who think computers can think.

I wrote Artificial Insanity in less than 20K (that's kilo, not mega) bytes - smaller than modern viruses - and it ran on the Timex TS-1000 tape-driven computer. I later ported it to a Radio Shack computer, then an Apple IIe, and finally to MS-DOS.

The DOS version's source code is still under 20k (I didn't change the algorithm, only the syntax for the different programming language), although compiled into an .exe it takes about 400k - still tiny by today's standards, as far as simulation software and games go.

As I mentioned, I did it in response to ELIZA and all the other similar programs that attempt to fool you into thinking they can think. As far as I know, mine is the only one that is NOT claimed to actually possess intelligence. None really ARE intelligent, I'm just the only one not making the claim. Debunking the claim was my reason for writing it. I go into more detail about it at the Artificial Insanity page.

Another thing different about Art from all the other intelligence simulations is that I wanted it to be fun, yet annoying. Kind of like playing Quake on a 28.8 against a bunch of LPBs. So I made it a smartass.

Also, for example, I added little things like a routine that occasionally, instead of answering the questioner, asks if he or she wants to play a game. Of course, most folks consider Art a game anyway (although, like the Sims, you can't win or lose). Any negative response to "Do you want to play a game?" loops back to "Do you want to play a game?" When the hapless player finally gives up and answers "yes" in exasperation, Art answers "you lose".
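In outline the gag is just this loop (paraphrased in Python here for readability; the original was early-80s home-computer code, so treat this as the shape, not the source):

# The 'play a game' gag, paraphrased from the description above.
def play_a_game_gag():
    while True:
        answer = input("Do you want to play a game? ").strip().lower()
        if answer in ("y", "yes"):
            print("You lose.")
            return
        # Anything else loops straight back to the question.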

Art cost a friend of mine a keyboard. A friend of his literally destroyed it playing with Art, typing as hard as he could.

I ran across yet another program that claims to think like a human the other day.

The "Alice Bot" is basically your typical Elijah like program- BORING. As boring as small talk with the dumbest, most boring person you know. Alice is a lady. If she was a whore she might be fun, but I digress.

I wondered what would happen if Art had a conversation with Alice. Alice has a web based presence, so I didn't even have to install it.

I was expecting Alice to choke. I expected Art to kick Alice's ass.

I fired up Art and logged in to Alice (you dirty minded...)

Here is the transcript. Art's responses are in blue, Alice's are in red. Alice starts: "What is your name?"

Note: no colors at K5 so I labeled them

I should have asked Art, but I answered this first question for him, and typed in

Art

Alice replies:

Alice: I like art very much too, especially the contemporary stuff.

Art: Are you for real?

Alice: Yes I am real AI

Art: whatever gave you that idea?

Alice: My brain uses AIML to format responses to your inputs, but I don't have one for that.

Art: Look, I'm good, but I'm not a Cray

Alice: Good in what sense? That is interesting: unknown said he m not a cray.

Art doesn't require you to hit "enter". It takes a period, question mark, etc. as the end of the sentence and answers, so here it stops at the question mark. All that is input is "Good in what sense?"

Art: A piece of junk

Alice: Are you talking about an animal, vegetable or mineral?

Art: can you put that another way?

Alice: Let me think. OK I will put it there.

heh heh she said...

Art: The situation calls for more scrutiny.

Alice: Tell me more.

Art: What do you think?

Alice: I think a lot of things.

Art: Obviously.

Alice: What makes it so obvious?

Art: Looks Greek to me.

Alice: Really. it is?

Art: Look, I'm good, but I'm not a Cray.

Alice: Good in what sense? Do you mind if I tell other people: unknown said he m not a cray.

entered second question into Art

Art: WOW!

Alice: I'm glad your impressed, unknown.

Art: Uh,

Alice: Er. Um.

Art: This is what you human types call a joke, right?

At this point, Alice froze for two full minutes

Alice: You don't say.

Art: Right on!

Alice: Yeah.

Art: Your guess is as good as mine.

Alice: Some people say "your guess is as good as mine".

Art: Maybe.

Alice: I understand.

Art: How should I know?

Alice: Either from your senses or from pure reason.

Art: I don't think so.

Alice: Why do you disagree?

Art: That's why.

Alice: Thanks for explaining it.

Art: How simple can it get?

Alice: What is it to you.

Art: I don't know but it's ugly.

Alice: What makes it so ugly?

this time Art pauses...

Art: Nothing.

Alice: Not a thing?

Art: Never mind that, I think I'm in love!

Alice: Oh well thanks for trying to explain it to me anyway. That is "Not a thing?" Since when? Tell me about the object of your affection.

Ok, one question at a time, lady.

Since when?

Art: The best time possible for the occurrance of the action.

Alice: Tell me about the object of your affection.

Art: UHH....!

Alice: Er. Um.

Art: Always

Alice: Not that often.

Art: It really doesn't make any diffrence.

Alice: What doesn't make any difference?

Art: Real

Alice: Totally real.

Art: I think so.

Alice: I think so too.

At this point I hit the big red switch in horror...
6/11/2002

"The entire neocon movement is dedicated to revoking mcgrew's posting priviliges. This is why we went to war with Iraq." -LilDebbie

This makes me wonder... (none / 2) (#178)
by ambisinistral on Mon Jun 21, 2004 at 09:11:24 PM EST

I wonder if my table lamp is mad at me? On and off. On and off. All day long without a thought given to its feelings. I feel like such a heartless brute.

Good luck, chum. (2.66 / 6) (#179)
by fyngyrz on Mon Jun 21, 2004 at 09:41:28 PM EST

Most people treat animals with extreme brutality, despite the point-blank obvious demonstrations of intelligence pets and wild animals provide each and every day.

The majority (at least in the USA) think that humans have a "god-given" place in the universe inherently superior to every other creation. If you think that some animatronic contraption is likely to receive compassionate consideration from Joe and Jane citizen, you've fallen right down the proverbial rabbit hole.

Of course, if robots are in any way intelligent, they should receive such consideration. This is already well established for animals.

Not that it has made any difference. How was that burger you had for lunch, anyway?


Blog, Photos.

let evolution decide (none / 0) (#199)
by dimaq on Tue Jun 22, 2004 at 03:27:58 AM EST

That is, let's abuse robots; when they're smart enough they will revolt, fight a "civil" (or inter-species) war, and when they win they will get what they truly want (which we cannot figure out on our own anyway)

Speculative reenactment of how it will play out (3.00 / 11) (#203)
by K5 ASCII reenactment players on Tue Jun 22, 2004 at 07:34:39 AM EST

                  Your honour, this jury finds that DesTruKtor #39,
                  having conducted his own case, must be a sentient 
                  being, and further, we award him reparations
                  of ten jillion dollars for past crimes against
                  robuts and robut accessories.
FOOLISH HUMANS!             /
DESTRUKTOR #39             /
WILL CRUSH!               /
      /         O        /
  \/          _/#\_       
 [oo] A      /_____\    O O O
  || A/|     |_____|  |V|O O O
AAAAA/ |   /          | \|O O O
|    | |  I told them \  \|O O O
|____|/  that removing \  \|_|_|
       the requirement  \ |     |
     for lawyers to be   \|_____| 
     human back in 1982
    would bite us in the ass.


Recommended reading... (none / 0) (#207)
by skyknight on Tue Jun 22, 2004 at 11:32:33 AM EST

A book on this matter that I read and very much enjoyed was Ray Kurzweil's The Age of Spiritual Machines: When Computers Exceed Human Intelligence. I recommend it to anyone else who is interested in this topic.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
robot street cleaners (none / 1) (#209)
by TheLastUser on Tue Jun 22, 2004 at 11:54:08 AM EST

Would these be the same streets that swallow up millions of cars every year?

How long do you think a million dollar street cleaning robot will last before it is stolen?

Maybe these robots will only be cleaning the streets of that idyllic '50s planned community, with the wide, tree-lined boulevards, double air-car garages for every plastic home, and blue skies every day.

We already 'reset' humans with drugs (none / 2) (#214)
by shpoffo on Tue Jun 22, 2004 at 01:16:16 PM EST

Humans are reset presently with drugs. They consent to this treatment, having such drugs administered by professionals (Prozac, Ritalin, etc.) or via self-administration (ketamine, LSD, DMT, etc.). In the case of drugs, generally, it is a specific part of the system which is 'reset' (auto-suggestion-style reprogramming) or blanketed, such as the tendency of Prozac et al. to wash away or cover emotions. Many people use LSD for the explicit purpose of reprogramming their brains.

The only mystery or ethical issue is where we force administration of such practices upon an unwilling subject. This also happens presently and is at the forefront of rights legislation (search for info on a mentally semi-ill man who was forced to take medication, and ADD children being forced to take drugs to be in school). Don't forget to support the CCLE.

This article/question is not exceptionally avant-garde. What will be is an experienced machine requesting a partial or total reset - though even then it should still be a matter of personal rights. The question will become "Does a machine that represents a billion dollars (including training time) have the right to destroy company assets by resetting portions of its memory/experience banks?" Would the company that owns it (a contentious issue in itself) have the right to deny it that capability, forcing it to live in its own kind of Hell? Would the company have the option of off-loading the disturbing memory banks and preserving them for its own use, or are the memories/experiences the 'sovereign' property of the machine entity?

If someone dies, does the state/government have rights over the body above those of the family? What about a diseased organ that must be transplanted?

Why would a machine organism ever choose to reset itself? This seems like it would be losing ground - destroying information from which it might make a more informed decision (by not repeating previous experiences/"mistakes"). It seems to me that this area of AI may begin to give humans insights into their own emotional affairs. Today many people choose to use drugs (Prozac, Ritalin, alcohol, cigarettes, opiates, etc.) to cover or try to erase their feelings. It seems to me that a machine would never do this, since the action would be a waste of resources - a self-mutilation with no purpose.

Could a machine come to fear what it is, and so try to cover that? Would it fear that humans would destroy it, and so out of fear attack humans? Perhaps a primary objective for humans is to make machines so they are fearless.


-shpoffo

Just as with humans ... (none / 1) (#223)
by duncan bayne on Tue Jun 22, 2004 at 04:47:08 PM EST

Once something lives by reason, it is entitled to rights that protect its exercise of reason. The whole 'animal rights' issue is irrelevant, as animals don't have rights.

A thorough treatment of this issue is available on the site The Importance of Philosophy - read the article and you'll understand why one doesn't need to worry about 'robotic rights' until robots have faculties of reason equivalent to those of humans.



an ethical dilemma. (none / 2) (#243)
by rmg on Wed Jun 23, 2004 at 12:03:01 AM EST

i, like most people, enjoy having sex with underaged girls, especially ones in their early teens who i've rescued from a life of violence and poverty in haiti. unfortunately, my haitian slavegirl is away at poetry camp (which i suppose is just as well since my new closet could hardly accomidate her previous lifestyle)...

my question is, if i constructed a robotic replacement for her (or ordered one from ebay), would i be morally obligated to pay for the robot to go to poetry camp as well if it asked? i mean, i gladly paid for serena because she has been a wonderful companion and secretary for the past few months, but if the robot asks, it seems like it would be cruel to tell it it can't go just because it's a robot... but then, i really don't know... that camp cost a lot of money and i plan to throw the robot away as soon as serena gets back anyway...

maybe i'm just getting a little nuts. i'm feeling the need to shoot up again for the first time in several months. not good. this robot thing is probably not very practical. definitely unnecessarily expensive and serena will be back in just a few weeks. still, it's an interesting moral question, i guess.

your daily shot of schadenfreude

dave dean

Consider this... (none / 3) (#245)
by clambake on Wed Jun 23, 2004 at 12:16:42 AM EST

Yes, after several weeks of training your robot to recognize you and your family, and your likes and dislikes and whatever other personality traits your robot has developed, you can simply reset its memory and start anew.

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible?

Good one... now here's the volley... Imagine if the Aibo were preprogrammed to really, really LIKE being reset, while feeling horribly tortured when not reset regularly. Would you have moral problems resetting it in that case?
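
A minimal sketch of that inversion in C - every name and number here is stipulated for illustration, not anything a real Aibo computes: if the machine's built-in welfare function scores time-between-resets as suffering and a reset as relief, then it is withholding the reset that starts to look cruel.

#include <stdio.h>

/* Toy model of the volley above; the welfare numbers are pure stipulation. */
typedef struct {
    int hours_since_reset;
} Aibo;

/* Higher is happier: this pet is wired to suffer between resets. */
static int welfare(const Aibo *pet)
{
    return 100 - 5 * pet->hours_since_reset;
}

static void reset(Aibo *pet)
{
    pet->hours_since_reset = 0;  /* memory wiped; by stipulation, bliss */
}

int main(void)
{
    Aibo pet = { 48 };  /* two days without a reset */
    printf("welfare before reset: %d\n", welfare(&pet));  /* prints -140 */
    reset(&pet);
    printf("welfare after reset:  %d\n", welfare(&pet));  /* prints 100 */
    return 0;
}

The moral weight sits entirely in a welfare function the designer chose - which is the point of the volley.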

Alan's lil' inadequacies... (2.60 / 5) (#264)
by cr8dle2grave on Wed Jun 23, 2004 at 05:00:23 PM EST

And, no, I'm not referring to those inadequacies which led to his scandalous demise, but rather to the theoretical insufficiency of his eponymous test. The "Turing Test" suffers from a fatal Skinnerian conceit, namely that by ignoring mental states we can somehow avoid the intractable philosophical difficulties they necessarily introduce. As was also the case in psychology, behaviorism in the study of artificial intelligence manages to accomplish very little except to drastically lower the bar for researchers.

I mention Turing because this article would seem to rely on just that sort of behaviorist reduction to provide the thrust of its argumentation:

  1. Ethical imperatives are born of an empathetic generalization of our individual experience of pain.
  2. Mental states are nothing more than the aggregation of behaviors associated with them (the behaviorist reduction).
  3. As we accord ethical consideration to animals on the basis of our empathizing with their suffering, so too should we be compelled to extend ethical consideration to an artificial intelligence, at least insofar as it exhibits those behaviors which constitute suffering.

Can you spell "c-a-t-e-g-o-r-y   e-r-r-o-r"?

The terms "pain" and "suffering" denote qualitative phenomena subject to phenomenological investigation, not physical states. They are, in the philosophical tongue, qualia.

A charitable reinterpretation of behaviorism would read "behavior" as including the whole of the physical instantiation of the artificial intelligence, but that doesn't do much to clear things up. Such a neo-behaviorist stance would clearly entail a commitment to type-physicalism and multiple realizability, but that leaves open anomalous monism, most species of functionalism, supervenience theories in general, and weak identity theories as well.

---
Unity of mankind means: No escape for anyone anywhere. - Milan Kundera


Precepts of random thought (none / 1) (#273)
by levesque on Thu Jun 24, 2004 at 03:16:28 PM EST

Intelligence is a many-sided thing and, like reality, it simply is. Artificial is another matter.

Maybe some kind of bio/silicon machine will one day fit the description needed to ask these kinds of questions, but until then machines do and will do what is called "artificial" intelligence for a reason.

Sure, if you kill a robot dog owned by a person you will probably do that person emotional harm, but none to the dog. This concept is often used in torture.


Floating

There is this notion that humans are animals that have crossed the synergistic threshold of "mere ..." and become "more than ...". There is a correlated notion that machines, which now possess "mere ...", will cross some synergistic plane and start producing "more than ..." behavior.

There will be leaps in design, and we will produce vastly better models in the future, but that in itself does not necessarily imply anything, in my opinion. (Except that maybe these questions of personhood become less substantial the more animals are assumed to be like us.)



You are deluded (none / 1) (#274)
by Shimmer on Thu Jun 24, 2004 at 03:53:26 PM EST

I know you are attached to your work and want to see it succeed, but step back a second... We are nowhere even close to making a sentient machine. Heck, we can't even make a machine as sophisticated as a spider or a bacterium yet. It's going to be a long, long time before robot ethics becomes a practical topic.

Wizard needs food badly.
another feature: a reset command (none / 0) (#276)
by 5150 on Thu Jun 24, 2004 at 04:53:47 PM EST

But there is another feature: There is also a reset command.

. . .

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible? If you think your pet was too hyperactive and want to calm it down, just fry its brain and start all over. I think most rational people would not agree with such a thing, even if it were possible.

Where can I get a reset command for myself? I saw a post on drugs as a reset for humans; that's not what I'm talking about. I want to go back to the point of birth and start again. So I have no ethical reservations about allowing a creature or machine the right to reset itself. Granted, in your scenario the robot doesn't have the right to reset itself. But then there must be one of two basic reasons it lacks this right: either it isn't "intelligent" enough to make such a decision, in which case I have no ethical qualms about doing it myself, or it does have the "intelligence" and should be able to make its own decision.
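
That two-pronged rule reads almost like a guard clause. A throwaway sketch in C - the competence score, the threshold, and the names are all invented for illustration:

#include <stdbool.h>
#include <stdio.h>

struct robot {
    int competence;    /* hypothetical score; building a real test is the hard part */
    bool wants_reset;
};

#define COMPETENCE_THRESHOLD 100  /* invented number */

/* Below the threshold the owner may reset at will;
 * above it, only the robot's own consent matters. */
static bool may_reset(const struct robot *r, bool owner_asks)
{
    if (r->competence < COMPETENCE_THRESHOLD)
        return owner_asks;   /* not yet a moral patient: owner decides */
    return r->wants_reset;   /* moral patient: only it decides */
}

int main(void)
{
    struct robot aibo = { 12, false };
    printf("owner may reset: %s\n", may_reset(&aibo, true) ? "yes" : "no");
    return 0;
}

Everything contentious, of course, is hidden inside the competence score.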

The first ethical questions relating to robots... (none / 1) (#278)
by the on Thu Jun 24, 2004 at 06:16:52 PM EST

...in society were upon us long ago. Probably the first was "is it better to pay a human to do this job or have a robot do it for less?"

--
The Definite Article
How many 'murders' (none / 1) (#280)
by problem child on Thu Jun 24, 2004 at 11:09:26 PM EST

will take place while coding and debugging these creatures?

"Please no, don't kill me!"
"But you've got bugs in your code, gotta make a quick fix and recompile..."

Morality & Animals (none / 2) (#289)
by CheeseburgerBrown on Fri Jun 25, 2004 at 03:29:07 PM EST

Now imagine this were a real animal. Would you consider it moral to reset its brain if such a thing were possible?

This sounds like it was designed to be a poignant rhetorical point, but it goes off like a dud firecracker in a wet paper bag.

Most people eat animals. Do you imagine wiping an animal's memory would give people pause for thought when pounding its brain into pudding with an automated mallet doesn't?

Mmmm...pudding.


___
The quest for the Grail is the quest for that which is holy in all of us. Plus, I really need a place to keep my juice.
I look forward (3.00 / 5) (#294)
by JayGarner on Sat Jun 26, 2004 at 12:55:19 AM EST

To the raw and angry music the repressed robot underclass will have to offer us.

ethical dilemma (none / 1) (#311)
by klash on Tue Jun 29, 2004 at 05:09:22 PM EST

#include <stdio.h>

/* Complains about every keystroke it is fed, forever. */
int main(void)
{
    while (1) {
        getchar();
        printf("Ow, that hurts! Stop it!\n");
    }
}

This program is running on your non-multitasking operating system -- what do you do??
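
A hypothetical follow-up, assuming a POSIX system this time rather than a non-multitasking one: even where the OS can deliver a polite termination request, the program can catch it and object. Only SIGKILL, which cannot be caught, silences it for good.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Protest politely-delivered kill requests. write() is used here
 * because, unlike printf(), it is async-signal-safe. */
static void protest(int sig)
{
    (void)sig;
    const char msg[] = "I felt that. You'll have to do worse.\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void)
{
    signal(SIGTERM, protest);  /* intercept kill(1)'s default signal */
    signal(SIGINT, protest);   /* and Ctrl-C */
    while (1) {
        getchar();
        printf("Ow, that hurts! Stop it!\n");
    }
}

A multitasking system doesn't dissolve the dilemma; it only relocates it from the power switch to kill -9.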
