Is it just me, or are the Three Laws of Robotics fundamentally flawed? I mean, let's face it... Rule 1: a robot must not harm a human being. Rule 2: a robot must obey a human being's commands, except where they conflict with Rule 1. Rule 3: a robot must protect its own existence, as long as doing so doesn't conflict with Rules 1 and 2.
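Written out as a decision procedure, the laws are just a strict priority ordering, something like this minimal Python sketch (the Action fields and the permitted function are hypothetical names, purely for illustration):

```python
# A minimal sketch of the Three Laws as a priority-ordered action filter.
# All names here are hypothetical, invented just to show the precedence.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would executing this harm a human? (Rule 1)
    ordered_by_human: bool   # was this commanded by a human? (Rule 2)
    endangers_self: bool     # does this risk the robot's existence? (Rule 3)

def permitted(action: Action) -> bool:
    """Check the laws in strict priority order: Rule 1 > Rule 2 > Rule 3."""
    # Rule 1: never harm a human, no matter what.
    if action.harms_human:
        return False
    # Rule 2: obey human orders (Rule 1 has already been checked above).
    if action.ordered_by_human:
        return True
    # Rule 3: self-preservation only applies to un-ordered actions.
    return not action.endangers_self

# Example: a human orders the robot into danger -- Rule 2 outranks Rule 3,
# so the action is permitted even though it endangers the robot.
print(permitted(Action(harms_human=False, ordered_by_human=True,
                       endangers_self=True)))  # True
```

Notice the ordering is absolute: the robot's own existence is the lowest priority, which is exactly what the question below is poking at.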
Now, hypothetically speaking, a sufficiently advanced robot could surpass a human being in intelligence and capability. How could such a robot justify these rules to itself?...
I'm out of space.