GPU is not used for calculations despite tensorflow-gpu installed
My computer has the following software installed: Anaconda 3, TensorFlow (GPU), and Keras. There are two Anaconda virtual environments, one with TensorFlow for Python 2.7 and one for Python 3.5, both the GPU version, installed according to the TF instructions. (I previously had a CPU version of TensorFlow in a separate environment, but I've deleted it.)

When I run the following:

    source activate tensorflow-gpu-3.5
    python code.py

and then check nvidia-smi, it shows only 3 MiB of GPU memory usage by Python, so it looks like the GPU is not being used for the calculations. (code.py is a simple deep Q-learning algorithm implemented with Keras.)

Any ideas what could be going wrong?
A good way to debug these problems is to check which operations have been allocated to which devices. You can do this by passing a configuration parameter to the session:

    session = tf.Session(config=tf.ConfigProto(log_device_placement=True))

When you run your app, you will see output indicating which devices are being used. You can find more information here: https://www.tensorflow.org/tutorials/using_gpu