An example of distributed training for MNIST is available online at https://github.com/ischlag/distributed-tensorflow-example/blob/master/example.py
In addition, note that you can allocate more than one parameter server for efficiency reasons. Using multiple parameter servers provides better network utilization and allows models to scale to more parallel machines. The interested reader can have a look at https://www.tensorflow.org/deploy/distributed
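As a minimal sketch (assuming TensorFlow 1.x; the host names, ports, and role values below are hypothetical placeholders), a cluster with two parameter servers can be declared as follows. Variables created under tf.train.replica_device_setter are sharded round-robin across the "ps" tasks, which is what spreads the communication load:

    import tensorflow as tf

    # Role of this process; in practice parsed from command-line flags.
    job_name, task_index = "worker", 0  # hypothetical values

    # Hypothetical host:port addresses; replace with your machines.
    cluster = tf.train.ClusterSpec({
        "ps": ["ps0.example.com:2222", "ps1.example.com:2222"],
        "worker": ["worker0.example.com:2222",
                   "worker1.example.com:2222"]
    })

    server = tf.train.Server(cluster, job_name=job_name,
                             task_index=task_index)

    if job_name == "ps":
        # Parameter servers only store and serve variables; they block here.
        server.join()
    else:
        # replica_device_setter places each new variable on one of the
        # two "ps" tasks in turn, improving network utilization.
        with tf.device(tf.train.replica_device_setter(cluster=cluster)):
            w = tf.get_variable("w", shape=[784, 10])

Each machine runs the same script with its own job_name and task_index, so the two parameter servers together hold the model's variables while the workers compute gradients against them.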