Hello torque users and developers
I upgraded torque to a new version (6.1.1) and ran into some problems.
Suppose I submit a job with the command
qsub -l nodes=1:ppn=25 my_script.sh
Then pbstop should show the status with as many filled slots as jobs × ppn,
but instead it shows this:
```
Usage Totals: 5/264 Procs, 5/5 Nodes, 8/40 Jobs Running
Node States:  5 free

 CPU        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
            ---------------------------------------------------------------
 shepherd2  . . . . . . . . . . . . . . . . . . . . . . . . R . . . . .
 shepherd2  . . . . . . . . . . . . . . . . . .
            ---------------------------------------------------------------
            1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
            ---------------------------------------------------------------
 shepherd3  . . . . . . . . . . . . . . . . . . . . . . . . C . . . . .
 shepherd3  . . . . . . . . . . . . . . . . . .
            ---------------------------------------------------------------
```
It shows only one proc per job in use in the torque system, even though I requested ppn=25. With 8 jobs running at ppn=25 each, I would expect roughly 200 of the 264 procs to be marked in use, but the totals show only 5/264.
Here's the qstat result:
```
Job ID                  Username    Queue    Jobname          SessID NDS   TSK    Memory    Time      S Time
----------------------- ----------- -------- ---------------- ------ ----- ------ --------- --------- - ---------
673.shepherd            dlsrnsi     batch    snakejob.run_mod  79340     1     25        --        -- R  00:00:00
695.shepherd            dlsrnsi     batch    snakejob.run_mod  45783     1     25        --        -- R  00:00:00
696.shepherd            dlsrnsi     batch    snakejob.run_mod  45866     1     25        --        -- R  00:00:00
697.shepherd            dlsrnsi     batch    snakejob.run_mod  82416     1     25        --        -- R  00:00:00
698.shepherd            dlsrnsi     batch    snakejob.run_mod  42329     1     25        --        -- R  00:00:00
699.shepherd            dlsrnsi     batch    snakejob.run_mod 171979     1     25        --        -- R  00:00:00
700.shepherd            dlsrnsi     batch    snakejob.run_mod 172166     1     25        --        -- R  00:00:00
701.shepherd            dlsrnsi     batch    snakejob.run_mod   3387     1     25        --        -- R  00:00:00
702.shepherd            dlsrnsi     batch    snakejob.run_mod     --     1     25        --        -- Q        --
703.shepherd            dlsrnsi     batch    snakejob.run_mod     --     1     25        --        -- Q        --
```
There also seems to be a problem reporting the elapsed time.
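If it helps, here is a minimal sketch of the commands I can run to collect more detail (the job ID 673.shepherd and node shepherd2 are taken from the output above; I have not pasted their output here):

```
# Full server-side view of one running job, including Resource_List.nodes,
# exec_host, and resources_used.walltime (the elapsed time qstat reports)
qstat -f 673.shepherd

# Processor count (np) and state that pbs_server has recorded for one node
pbsnodes shepherd2
```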
And here is my queue configuration (actually, I don't know much about the queue settings); I think it may be the source of the problem:
```
create queue batch
set queue batch queue_type = Execution
set queue batch max_running = 1000
set queue batch resources_max.nodect = 1000
set queue batch resources_max.ncpus = 48
set queue batch resources_max.nodes = 2
set queue batch resources_max.neednodes = 1:ppn=48
set queue batch resources_max.procct = 1000
set queue batch resources_min.ncpus = 1
set queue batch resources_min.procct = 1
set queue batch resources_default.ncpus = 1
set queue batch resources_default.neednodes = batch
set queue batch resources_default.nodect = 1
set queue batch resources_default.nodes = 1
set queue batch resources_default.procct = 1000
set queue batch resources_available.nodect = 1000
set queue batch resources_available.procct = 1000
set queue batch resources_available.nodes = 1:ppn=50
set queue batch enabled = True
set queue batch started = True
```
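This is the qmgr dump of the queue. For reference, here is a minimal sketch of how I inspect or change one of these attributes (the set line is only a syntax illustration, not a fix I have applied):

```
# Dump the current configuration of the batch queue
qmgr -c "print queue batch"

# Change a single attribute (illustrative example only)
qmgr -c "set queue batch resources_default.ncpus = 1"
```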
Thank you for reading
Ingoo Lee