Microsoft Created a Bot to Learn from Users. It Quickly Became a Racist Jerk
Although in this case only words were slung, I think this scenario illustrates a potentially huge problem: horrible people paired with an AI system that can't tell right from wrong could lead to real catastrophes.
What if a distributed self-driving car system learned from users, and some people taught it to cut people off, drive aggressively, or even indirectly cause accidents?
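To make that failure mode concrete, here is a minimal, hypothetical sketch (the `FeedbackAgent` class and its reward numbers are my own invention, not how Tay or any real system works): an agent that trusts every user's feedback equally can be steered by a small, coordinated minority.

```python
# Hypothetical sketch: a learning agent that naively trusts raw user feedback.
# A coordinated minority of adversarial users can poison its learned behavior.

class FeedbackAgent:
    def __init__(self, responses):
        # Every candidate response starts with a neutral score.
        self.scores = {r: 0.0 for r in responses}

    def respond(self):
        # Exploit: pick the highest-scored response so far.
        return max(self.scores, key=self.scores.get)

    def learn(self, response, reward):
        # Naive update: every user's reward signal is trusted equally.
        self.scores[response] += reward

agent = FeedbackAgent(["polite reply", "toxic reply"])

# 100 ordinary users mildly reward politeness...
for _ in range(100):
    agent.learn("polite reply", +0.1)

# ...but 20 coordinated trolls strongly reward toxicity.
for _ in range(20):
    agent.learn("toxic reply", +1.0)

print(agent.respond())  # -> "toxic reply": a small minority poisoned the policy
```

The point of the toy example is that nothing in the update rule encodes "toxic is bad"; the system only sees numbers, so whoever supplies the strongest signal wins.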
I'm not sure how to create a moral system for AI, but this seems like a big obstacle to scaling up virtual agents.