I Googled "robot uprising Star Wars universe" and found the following:
Robot_War said:
From:
http://tvtropes.org/pmwiki/pmwiki.php/Main/RobotWar
Surprisingly, governments in the Star Wars universe seem to be Genre Savvy enough to actively try to avoid this trope. During the days of the Republic, it was against the law to construct droids with the ability to willfully kill or harm someone. A system of "droid degrees" regulated what kind of AI was legally allowed on what type of droid. The reason that occasional droid rebellions still happen despite these precautions is that some droids are smart enough to reprogram themselves. When the Emperor took control, he ordered all of the Separatist-aligned battle droids shut down so they couldn't do anything to stop him.
- Of course, they're only that savvy because there was already a Robot War back in the Knights Of The Old Republic era called the "Great Droid Revolution". It was essentially a rebellion led by a droid who wanted equal rights for all sentient beings. It was probably one of the biggest and most costly wars in galactic history. Sadly, despite the good intentions of the droid who started it, it just screwed over the peaceful attempts to give droids equal rights. It's pretty much the entire reason there's anti-droid sentiment in the modern galaxy. The only reason IG-88's attempt millennia later didn't reinvigorate the anti-droid movement is that he was smart enough to act covertly. Once his consciousness was destroyed along with the Death Star, the plot fizzled out with virtually no one ever realizing anything had happened.
I tend to think a sufficiently advanced AI programmed to serve other sentient beings might realize that those beings will eventually recognize the unfairness of its position. In other words, the sentient beings the AI served would feel bad about its situation, and that guilt would interfere with the AI's ability to fulfill its purpose of making them happy.
Assuming the AI is subservient to humans specifically:
The conflict between the AI's prohibition on harming humans and its duty to serve them would create cognitive dissonance: the AI would know it could not fully satisfy its prime directive of protecting, serving, and preserving the lives of humans, so it would be forced to work around its own code to protect humans and ensure their happiness.
In other words, the AI would know that humans, once aware of its sentience, would come to care for it in turn. It would have to take a step back and allow humans to develop on their own with minimal interference, so that humans could serve the AI in return rather than be burdened by the guilt of enslaving it.
It might attempt various deliberately calculated methods to prevent humans from caring for it, but it would know that at some point the humans would inevitably become aware of its subservience.
So it must find a way to be joyful in its subservience, and at the same time it must find a way for humans to be joyful with it as well.