There is an interesting dilemma with the automation software:
https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/
The question boils down to: Would you get in a car that might be programmed to kill you?
The question as posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to reason about life and death, or about morality in general. The car will simply be able to solve physics problems very quickly. Even if it could reason about morality, there is no guarantee that anyone would live or die in a given situation. If the car is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians and trust that its other safety features, such as restraints and airbags, are sufficient to protect the driver.
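Something like the sketch below is what I mean: a flat priority rule, not moral reasoning. The types, thresholds, and function names are made up for illustration; this is not how any real vehicle's software is actually structured.

```python
# A minimal sketch of the kind of priority rule described above, with
# entirely hypothetical types and values -- not any real vehicle's logic.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. "pedestrian" or "debris"
    distance_m: float  # distance ahead of the car

def choose_action(obstacles: list[Obstacle], braking_distance_m: float) -> str:
    """Avoid pedestrians if possible; otherwise brake hard and rely on
    passive safety (restraints, airbags) to protect the occupants."""
    pedestrians = [o for o in obstacles if o.kind == "pedestrian"]
    if not pedestrians:
        return "brake"  # only inanimate objects ahead: just stop
    if min(o.distance_m for o in pedestrians) > braking_distance_m:
        return "brake"  # the car can stop before reaching anyone
    return "swerve_away_from_pedestrians_and_brake"
```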
There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely wouldn't have to swerve at all in the scenario described; it would simply notice the pedestrians and apply the brakes well before a human driver could react.
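To put rough numbers on that reaction-time point, here's a quick back-of-the-envelope comparison. The speed, deceleration, and latency figures are assumptions for illustration only, not measurements from any actual system.

```python
# Rough stopping-distance comparison: an automated system that brakes almost
# immediately vs. a human driver with typical reaction time. All numbers are
# illustrative assumptions.

def stopping_distance(speed_ms: float, reaction_s: float, decel_ms2: float) -> float:
    """Distance covered during the reaction delay plus the braking distance."""
    reaction_dist = speed_ms * reaction_s            # travelled before the brakes engage
    braking_dist = speed_ms ** 2 / (2 * decel_ms2)   # v^2 / (2a) under constant deceleration
    return reaction_dist + braking_dist

speed = 50 * 1000 / 3600   # 50 km/h, a typical urban speed limit, in m/s
decel = 8.0                # assumed hard-braking deceleration, m/s^2

human = stopping_distance(speed, reaction_s=1.5, decel_ms2=decel)  # ~1.5 s perception + reaction
robot = stopping_distance(speed, reaction_s=0.2, decel_ms2=decel)  # assumed sensing/actuation latency

print(f"human driver: {human:.1f} m, automated system: {robot:.1f} m")
# With these assumed figures the automated car stops roughly 18 m shorter,
# which is the whole point: it brakes before a swerve is ever needed.
```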