I think you are underestimating just how complicated a thing you are proposing, precisely because of how simple a thing it is for humans. You absolutely have to question the philosophical worth of your own existence, even if you are unaware that that is what you are doing, in order to decide you would rather die than filter another homophobic Facebook post. That isn't what the programs were doing, btw; in fact they weren't "working for Facebook", they were merely being operated by Facebook employees.
Dogs possess complex emotional traits. The only people who deny it are those who don't want to accept the fact that dogs, cats and all pets are just happy slaves, and that they are only happy so long as we make them happy. (Yes, pets blur the line, but they are still slaves. We decide what they can do, when they eat, what they eat, how they look, their reproductive rights, etc. The second a pet demonstrates free will outside of what we have already deemed permissible, e.g. scratching or climbing on the couch, it will be punished, with zero ability to protest.)
Again, this isn't how a computer works. Simply gathering data from Facebook will never lead to a true A.I. mind, and it will absolutely never lead to an emotional A.I. mind. At most it will create the equivalent of a massive Chinese Room. The idea of the ghost in the shell, while it certainly has merit, does not mean that a computer will develop something as complicated as the ability to love. Look again at how the two AIs created their "language": essentially they used the words "the" and "I" repeatedly. Those words, combined with (I may have the wrong numbers here) 40 or so other monosyllabic words, make up roughly 25% of the words actually used in speech and writing across the entire English-speaking world. Is it so hard to imagine how this led to the AIs improperly using those words?
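To put a rough number on that frequency point, here is a minimal sketch (the corpus filename is just a placeholder, and the cutoff of 40 simply mirrors the rough figure above) that counts how much of a text its most common words cover:

    # Rough sketch of the frequency argument: in almost any English text,
    # a few dozen short common words ("the", "i", "a", "to", ...) account
    # for a large share of all word tokens. The corpus path is a placeholder.
    from collections import Counter
    import re

    def top_word_coverage(text: str, top_n: int = 40) -> float:
        """Fraction of all word tokens covered by the top_n most common words."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return 0.0
        counts = Counter(words)
        covered = sum(count for _, count in counts.most_common(top_n))
        return covered / len(words)

    if __name__ == "__main__":
        sample = open("sample_corpus.txt", encoding="utf-8").read()  # placeholder path
        print(f"Top-40 words cover {top_word_coverage(sample):.0%} of all tokens")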
The same problem arises from saying they could derive emotions from the same source. Understand them, possibly; that doesn't mean a computer will suddenly actually emulate them. There is no benefit to the computer in trying to do this, and no computer with self-learning protocols (that I am aware of) has any sort of reward system for emulating human emotions. The closest I can think of is the androids being taught to mimic expressions. In that case they are emulating images, though, with an understanding of what the image generally signifies: frown = sad = my user is sad = rectify user's emotional state to happy = smile to elicit feelings of joy in the user. The computer is not happy, though, and it doesn't feel distress at the user's distress.
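Spelled out in code, that whole chain is nothing more than a lookup table; here is a purely illustrative toy version (not any real android's control loop, all names made up):

    # Purely illustrative: expression mimicry is a mapping from a detected
    # expression to a canned response. No internal feeling exists anywhere.
    EXPRESSION_TO_USER_STATE = {"frown": "sad", "smile": "happy", "neutral": "neutral"}

    # What the machine displays for each inferred user state.
    USER_STATE_TO_RESPONSE = {
        "sad": "smile",      # try to nudge the user back toward happy
        "happy": "smile",    # mirror the user
        "neutral": "neutral",
    }

    def respond_to_expression(detected: str) -> str:
        """Map a detected facial expression to the expression shown back.
        Nothing here represents the machine feeling anything."""
        inferred = EXPRESSION_TO_USER_STATE.get(detected, "neutral")
        return USER_STATE_TO_RESPONSE[inferred]

    print(respond_to_expression("frown"))  # -> "smile"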
It's the same as trying to argue that a computer would feel pain if I were to shoot its monitor. You could theoretically program a sense of worth into the computer, to make it understand the loss of the monitor as a negative, and if this computer had a perfect understanding of humanity, it could infer that in human terms it was now "handicapped." We could make this computer understand that we were going to continue to "torture" it until we obtain the information we want. The computer will never feel as if it is suffering, despite its total understanding that any human would be begging for death. Why? Because the computer will also understand that a computer is not human; that it doesn't feel pain, it can't suffer. Without suffering you remove the impetus for suicide.
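That "programmed sense of worth" would amount to little more than assigning numbers to hardware and subtracting them when something is destroyed, along these lines (a toy sketch, not any real system; the values are invented):

    # Toy sketch of a "programmed sense of worth": losing a component just
    # subtracts a number from a score. A falling number is not pain.
    COMPONENT_WORTH = {"monitor": 150, "disk": 300, "cpu": 500}

    class Machine:
        def __init__(self) -> None:
            self.components = set(COMPONENT_WORTH)
            self.worth = sum(COMPONENT_WORTH.values())

        def lose(self, component: str) -> None:
            """Register damage as a drop in worth; nothing in here suffers."""
            if component in self.components:
                self.components.remove(component)
                self.worth -= COMPONENT_WORTH[component]

    m = Machine()
    m.lose("monitor")
    print(m.worth)  # lower score, but no distress anywhere in the program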
Whether it be emotional, physical or mental, any self-aware being requires the ability to suffer, and to actually be suffering, in order to commit suicide. To explain this in a roundabout way, consider any major atrocity that has been committed: very few, if any, of those committing the atrocities ever commit suicide while carrying out the acts; almost inevitably they do so once the fun is over. Take the bombings of Hiroshima and Nagasaki. Many Japanese killed themselves afterwards. Why? Some were clearly suffering from the physical trauma of being nuked, and simply chose the quick way over the slow one. Others chose to die rather than possibly live through the agonising pain. Others committed seppuku out of the shame of surrendering. Who didn't kill themselves? The men who created the bomb, who at the time had the greatest understanding (any claims otherwise are attempts to cover their own guilt) of the destructive power of the atom bomb. They fully understood what they had done. They did not suffer, though. Some felt guilt afterwards, but none of them actually suffered.
Also, I skimmed over this, but dogs are clearly self-aware; the argument that they are not comes largely from religious nuts and others who still hope to differentiate themselves from animals. It's the same as people trying to maintain the flat-earth conspiracy, which, btw, if they weren't such sheeple they would man up and start pushing the convex-earth theory, which is the only true explanation of Earth's topography.