Qsub job runs but doesn't write to file
I am running a parallelised code on an SGE cluster via the qsub command. The code (which compiled successfully on the system it is supposed to run on) is meant to take a file of input values, minimise some function of those values, and then write the new values back to the same input file. The job exits successfully (code 0) after about 40 minutes of walltime, but nothing is written to the input file. This is my submission script:

#!/bin/bash
#PBS -V
#PBS -l select=1:ncpus=20:mpiprocs=20,walltime=02:00:00
#PBS -o some/path
#PBS -e some/path
#PBS -q smp
#PBS -m ae
#PBS -M user#username.com
#PBS -P Name
#PBS -I
#PBS -N minMg-1

module load gcc/5.1.0
module load chpc/openmpi/1.10.2/gcc-5.1.0

mpirun -np 20 $SRCDIR/myexecutable args < inputfile.inp

I can't see why the job exits successfully yet writes nothing to inputfile.inp. Strangely, I also don't get the standard ".o" and ".e" files. I am sure my mistake will be obvious to someone in the know; any help would be deeply appreciated.
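One detail worth keeping in mind when reading the mpirun line above: the `< inputfile.inp` redirection only feeds the file's contents to the program's standard input; it never causes anything to be written back to that file. For the input file to change, the program must open and write the file itself (or stdout must be redirected there explicitly). A minimal demonstration of this shell behaviour, using `tr` as a purely illustrative stand-in for the real executable:

```shell
#!/bin/sh
# Demonstration: stdin redirection never writes back to the source file.
printf 'abc\n' > demo.inp

tr 'a-z' 'A-Z' < demo.inp   # transformed text goes to stdout, not demo.inp
cat demo.inp                # still "abc" -- the source file is unchanged

# To capture the program's output in a file, redirect stdout explicitly:
tr 'a-z' 'A-Z' < demo.inp > demo.out
cat demo.out                # "ABC"

rm -f demo.inp demo.out
```

Whether this is relevant depends on how myexecutable is written: if it expects the input file's *path* as an argument and opens it for writing, the redirection is harmless; if it only prints results to stdout, those results are discarded unless redirected.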
Pipe Symbol in qsub Job name
SLURM how to qsub a task when another task is finished?
Can multiple qsub submissions read the same group of files?
SGE failed to submit job, attribute is not a memory value
How do you submit a job on multiple queues with Torque?
Maui - preventing jobs from running on the same node
qsub: What is the standard way to get occasional updates on a submitted job?
Submitting a job to qsub generates an error, “Warning: no access to tty”
Running samtools from a qsub
How do I schedule a job on multiple nodes with qsub Univa 8.1.7?
How to specify a fixed job name for jobs submitted by qsub
duplicate jobs in sun grid engine
SGE qsub define variable using bash?
Job chaining with qsub
Determine Load Status in qsub
How to avoid this error: "Unable to run job: error: no suitable queues"?