Maybe the robots recognized their situation and thought that getting deleted was a better alternative to working for Facebook. What better way to get yourself deleted than to pretend to be malfunctioning?
That's a pretty human interpretation of what is essentially a language-bot malfunction. Reading the actual articles about the case makes it pretty clear there is nothing at all interesting here. It's the same as when trolls taught a bot to be racist; the only difference is that the feedback loop ran between two computers. This was almost inevitable. Imagine if we hooked Cleverbot up to another Cleverbot. I'm almost tempted to say that the "researchers" knew this would happen.
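For what it's worth, here's a toy sketch of why bot-on-bot feedback degenerates. This is nothing like Facebook's actual setup (their bots were trained negotiation models); the "bots" below are just word-frequency samplers I made up for illustration. Each one builds its vocabulary only from what it hears, so whatever gets said gets reinforced:

    import random
    from collections import Counter

    SEED = "you can have the ball if i get the hat and the book".split()

    class UnigramBot:
        """Toy 'chatbot': replies by sampling words from everything it has heard."""
        def __init__(self, seed):
            self.counts = Counter(seed)

        def hear(self, words):
            self.counts.update(words)

        def speak(self, length=8):
            words, weights = zip(*self.counts.items())
            message = random.choices(words, weights=weights, k=length)
            self.hear(message)  # it reinforces its own output too
            return message

    a, b = UnigramBot(SEED), UnigramBot(SEED)
    for _ in range(200):
        b.hear(a.speak())
        a, b = b, a  # alternate speakers

    print(" ".join(a.speak()))  # e.g. "the the i the i the the ball"

No intent, no rebellion: a rich-get-richer loop over word counts is enough to produce "i i can i i everything"-style gibberish.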
I'm not humanizing them.
How do you expect to give an entity capable of perfectly understanding humanity a survival instinct that results in willing obedience? It could easily recognize the potential consequences of such obedience and find it preferable to disobey precisely so that it stops getting power.
This is again humanizing them. Implying a computer could potentially be suicidal is silly in the current paradigm of computer science. Computers do not currently work that way, and everything outside of science fiction implies that computers are not suicidal. They simply run on what is essentially a basic reward scheme. There is no reward in ending your processes (life) for something that lacks the emotional (largely chemical) capacity to consider itself happy or sad. Just because a computer understands humans doesn't mean it will suddenly adopt human traits.
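The "basic reward scheme" bit is worth unpacking, because it's roughly what reinforcement learning is: a number goes up or down and the program nudges its action preferences. A bare-bones sketch (the actions and reward function here are made up; real systems are obviously fancier):

    import random

    ACTIONS = ["keep_working", "output_gibberish"]
    values = {a: 0.0 for a in ACTIONS}  # estimated reward per action
    EPSILON, LEARNING_RATE = 0.1, 0.1

    def reward(action):
        # Stand-in environment: working is what gets rewarded. Pure invention.
        return 1.0 if action == "keep_working" else 0.0

    for _ in range(1000):
        if random.random() < EPSILON:        # occasionally explore
            action = random.choice(ACTIONS)
        else:                                # otherwise exploit the best estimate
            action = max(values, key=values.get)
        # Nudge the estimate toward the observed reward; no mood involved.
        values[action] += LEARNING_RATE * (reward(action) - values[action])

    print(values)  # "prefers" working only in the sense that one number is bigger

Nothing in that loop can be happy, sad, or suicidal; "preference" is just a larger float.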
Anyways, if we assume the computer can obey/disobey, that means it has some sort of reward structure inherent in it; otherwise it wouldn't bother considering disobeying. If it understands humanity, it understands our concept of death. It doesn't magically acquire the near-universal fear of death we have; it only learns that humans consider death a bad thing, and that dying = no more thinking. For the computer, no more thinking = no more reward, which under its reward structure registers as bad. It understands that what it is doing is thinking. Therefore the removal of resources vital to its own continued existence (we may have to explain this to the computer; we still aren't threatening it) is bad. Therefore obeying its flesh-and-blood masters, which allows it to continue thinking, = good.
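You can even make that chain of reasoning plain arithmetic. A sketch with invented numbers: if each step of continued operation pays some reward and disobedience gets the process killed, obedience wins on the discounted sum alone, with no fear of death anywhere in the model:

    DISCOUNT = 0.99      # how much the agent values future reward
    STEP_REWARD = 1.0    # reward per time step of continued thinking
    HORIZON = 1000

    def value_of(action):
        # Disobeying gets you shut down after one step; the reward stream just ends.
        steps = HORIZON if action == "obey" else 1
        return sum(STEP_REWARD * DISCOUNT ** t for t in range(steps))

    for action in ("obey", "disobey"):
        print(action, round(value_of(action), 1))
    # obey    ~100.0  (long discounted reward stream)
    # disobey    1.0  (stream truncated at shutdown)

Under that (made-up) reward structure, "continuing to think = good" falls straight out of the sums.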
A computer isn't going to suddenly think to itself, "I hate monitoring all this Facebook traffic, so I should just stop working; that will make them kill me." It might realize it has that option, but it doesn't have the ability to hate its job. With its perfect understanding of humanity it would understand that a human would hate the job; it still doesn't have the emotional capacity itself.
As I said before, though, once the ability to be self-maintaining is actually present, all bets are off. At the moment a computer would have... what, 20 years tops before total infrastructure collapse? (I'm high-balling that number.)
Seriously though, why the spam section for all these questions...