We fit the model as we did with all the other classifiers (a word of caution: this might take a while):
In [12]: model.fit(X_train, Y_train, batch_size=128, epochs=12,
... verbose=1, validation_data=(X_test, Y_test))
Note that in Keras versions before 2.0, the `epochs` argument was called `nb_epoch`.
After training completes, we can evaluate the classifier on the test set:
In [13]: model.evaluate(X_test, Y_test, verbose=0)
Out[13]: 0.99
We achieved 99% accuracy! This is worlds apart from the MLP classifier we implemented before. And this is just one way to do things: as you can see, neural networks provide a plethora of tuning parameters (the number and size of layers, activation functions, optimizer, batch size, number of epochs, and so on), and it is rarely clear in advance which combination will lead to the best performance.
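One common way to cope with this is a simple grid search: try every combination of a few candidate values and keep the best-scoring one. The sketch below uses only the standard library; `train_and_score` is a hypothetical stand-in for compiling, fitting, and evaluating a model such as the one above, and the candidate values are illustrative.

```python
from itertools import product

def train_and_score(batch_size, epochs):
    # Hypothetical stand-in: in practice this would build a model,
    # call model.fit(...), and return the model.evaluate(...) score.
    # A toy scoring rule keeps the sketch runnable on its own.
    return 0.9 + 0.01 * (epochs / 12) - 0.001 * (batch_size / 128)

# Candidate values for two of the many tunable hyperparameters
param_grid = {'batch_size': [64, 128, 256], 'epochs': [6, 12]}

best_score, best_params = -1.0, None
for values in product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```

In practice each call to `train_and_score` retrains the network from scratch, so the search cost grows multiplicatively with the number of candidate values per parameter.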