GPU is not used for calculations despite tensorflow-gpu being installed
My computer has the following software installed: Anaconda 3, TensorFlow (GPU version), and Keras. There are two Anaconda virtual environments, one with TensorFlow for Python 2.7 and one for Python 3.5, both GPU versions installed according to the TF instructions. (I previously had a CPU version of TensorFlow in a separate environment, but I have deleted it.)

When I run the following:

    source activate tensorflow-gpu-3.5
    python code.py

and then check nvidia-smi, it shows only 3MiB of GPU memory usage by the Python process, so it looks like the GPU is not being used for the calculations. (code.py is a simple deep Q-learning algorithm implemented with Keras.)

Any ideas what could be going wrong?
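One quick way to confirm whether TensorFlow can see the GPU at all is to list the local devices (a minimal sketch, assuming the TF 1.x API in use here):

    # List every device TensorFlow can see; if no GPU entry such as
    # /device:GPU:0 appears, TensorFlow will silently fall back to the CPU.
    from tensorflow.python.client import device_lib

    print(device_lib.list_local_devices())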
A good way to debug these problems is to check which operations have been allocated to which devices. You can do this by passing a configuration parameter to the session:

    session = tf.Session(config=tf.ConfigProto(log_device_placement=True))

When you run your app, you will see output indicating which devices are being used. You can find more information here: https://www.tensorflow.org/tutorials/using_gpu
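As a self-contained illustration (a minimal sketch, assuming the TF 1.x session API; the constants and tensor names are arbitrary):

    import tensorflow as tf

    # A tiny graph: two constant matrices and a matmul. With
    # log_device_placement enabled, TensorFlow prints the device
    # each op is assigned to when the session runs.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
    c = tf.matmul(a, b, name='c')

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))

If the GPU is being used, the placement log shows ops assigned to a device such as /device:GPU:0; if everything lands on /device:CPU:0, the GPU build is not actually in use. Since code.py uses Keras, the configured session can also be handed to Keras (assuming the TensorFlow backend):

    import keras.backend as K
    K.set_session(tf.Session(config=tf.ConfigProto(log_device_placement=True)))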