Run Keras on Mac OS with GPU
I just started playing with neural networks using software other than Matlab. I wanted a high-level library that makes prototyping fast enough to test out my ideas, and ideally one that can switch between running on CPU and GPU with a simple argument. After testing different tools, I found that Keras best suits my needs!
Keras is a Python deep learning library backed by Theano and TensorFlow. Its design philosophy focuses on minimalism and high modularity, so it is super fast for prototyping and runs seamlessly on both CPU and GPU!
It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
In this post, I'm going to show how to install Keras on Mac OS and run it in GPU mode (an Nvidia graphics card is required).
You will need:
- Python 2.7+
Theano is a Python numerical computation library developed by a machine learning group at the Université de Montréal. For simplicity, let's use the default backend, Theano. You can switch to TensorFlow by following the official documentation.
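For reference, Keras also reads its backend from a config file at ~/.keras/keras.json. A minimal sketch (assuming a Keras version that uses this file; change "theano" to "tensorflow" to switch backends):

```json
{
    "backend": "theano",
    "floatx": "float32"
}
```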
pip install git+git://github.com/Theano/Theano.git
pip install keras
Enable GPU support
Verify Graphic Card
To see whether your graphics card can run CUDA, find your graphics card info under About This Mac -> System Report, then check it against Nvidia's CUDA support list.
CUDA is a parallel computing platform and programming model invented by Nvidia. We will need it to boost our computation speed! Go to the Nvidia download page and select the version matching your OS. After installation, add the CUDA binary path to your ~/.bash_profile.
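As a sketch, the lines below add typical CUDA install locations to your environment. The exact path depends on your CUDA version (check where the installer put it on your machine):

```shell
# In ~/.bash_profile: make the CUDA toolchain and libraries visible
# (assumes the installer created the /usr/local/cuda symlink; verify this path)
export PATH=/usr/local/cuda/bin:$PATH
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
```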
You will also need a GPU-accelerated library of primitives for deep neural networks; Nvidia provides cuDNN here. Extract your downloaded file. If you extract it at your home folder
~/, the extracted path will be
~/cuda, and this is your
<cuda_path>. Then add the following library paths to your ~/.bash_profile:
export CUDA_ROOT=<cuda_path>
export LIBRARY_PATH=$CUDA_ROOT/lib:$CUDA_ROOT/lib64:$LIBRARY_PATH
Run source ~/.bash_profile to make the changes effective. We are all set! Now let's play with some examples.
Let's first download the examples from the Keras repo to your
<project_path>. If you don't have one, simply run
mkdir ~/projects, and
~/projects is your <project_path>.
$ cd <project_path>
$ git clone https://github.com/fchollet/keras.git
$ cd keras/examples
$ # Run in CPU mode
$ THEANO_FLAGS=mode=FAST_RUN python imdb_cnn.py
After some time you will see the result. On CPU it took almost an hour to finish one epoch.
Loading data...
20000 train sequences
5000 test sequences
Pad sequences (samples x time)
X_train shape: (20000, 100)
X_test shape: (5000, 100)
Build model...
Train on 20000 samples, validate on 5000 samples
Epoch 1/2
20000/20000 [==============================] - 3408s - loss: 0.5910 - acc: 1.0000 - val_loss: 0.4226 - val_acc: 1.0000
Epoch 2/2
20000/20000 [==============================] - 3347s - loss: 0.3656 - acc: 1.0000 - val_loss: 0.3576 - val_acc: 1.0000
How about running it in GPU mode? You just need to add two flags (device and floatX). It's that simple!
$ # Run in GPU mode
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python imdb_cnn.py
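If you don't want to type the flags on every run, Theano can also read them from a config file at ~/.theanorc. A sketch with the same settings as the command above:

```ini
[global]
mode = FAST_RUN
device = gpu
floatX = float32
```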
You will see the performance is roughly 100x better: each epoch took about 30 seconds.
Loading data...
20000 train sequences
5000 test sequences
Pad sequences (samples x time)
X_train shape: (20000, 100)
X_test shape: (5000, 100)
Build model...
Train on 20000 samples, validate on 5000 samples
Epoch 1/2
20000/20000 [==============================] - 30s - loss: 0.6004 - acc: 1.0000 - val_loss: 0.4374 - val_acc: 1.0000
Epoch 2/2
20000/20000 [==============================] - 30s - loss: 0.3693 - acc: 1.0000 - val_loss: 0.3603 - val_acc: 1.0000
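A quick sanity check of the speedup claim, using the per-epoch wall-clock times from the two logs above:

```python
# Per-epoch wall-clock times taken from the logs above
cpu_seconds = 3408.0  # first CPU epoch
gpu_seconds = 30.0    # first GPU epoch

speedup = cpu_seconds / gpu_seconds
print("Speedup: %.0fx" % speedup)  # prints: Speedup: 114x
```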