Jobsub ID 42261.2@dunegpschedd02.fnal.gov
Jobsub ID | 42261.2@dunegpschedd02.fnal.gov |
Workflow ID | 2921 |
Stage ID | 1 |
User name | gpaixao@fnal.gov |
HTCondor Group | group_dune |
Requested | Processors | 1 |
| GPU | No |
| RSS bytes | 4193255424 (3999 MiB) |
| Wall seconds limit | 80000 (22 hours) |
Submitted time | 2025-09-18 15:16:35 |
Site | UK_Manchester |
Entry | UBoone_T2_UK_Manchester_ce02 |
Last heartbeat | 2025-09-18 15:18:05 |
From worker node | Hostname | wn2204251.tier2.hep.manchester.ac.uk |
| cpuinfo | AMD EPYC 7513 32-Core Processor |
| OS release | Scientific Linux release 7.9 (Nitrogen) |
| Processors | 1 |
| RSS bytes | 4194304000 (4000 MiB) |
| Wall seconds limit | 257400 (71 hours) |
| GPU | |
| Inner Apptainer? | True |
Job state | jobscript_error |
Started | 2025-09-18 15:17:33 |
Input files | monte-carlo-002921-000009 |
Jobscript | Exit code | 1 |
| Real time | 0m (0s) |
| CPU time | 0m (0s = 0%) |
| Max RSS bytes | 0 (0 MiB) |
| Outputting started | |
| Output files | |
Finished | 2025-09-18 15:18:05 |
Saved logs | justin-logs:42261.2-dunegpschedd02.fnal.gov.logs.tgz |
Jobscript log (last 10,000 characters)
Setting up larsoft UPS area... /cvmfs/larsoft.opensciencegrid.org
Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/
Justin processors: 1
did_pfn_rse monte-carlo-002921-000009 000009 MONTECARLO
2 42261
usage: hadd [-a A] [-k K] [-T T] [-O O] [-v V] [-j J] [-dbg DBG] [-d D] [-n N]
[-cachesize CACHESIZE]
[-experimental-io-features EXPERIMENTAL_IO_FEATURES] [-f F]
[-fk FK] [-ff FF] [-f0 F0] [-f6 F6]
TARGET SOURCES
OPTIONS:
-a Append to the output
-k Skip corrupt or non-existent files, do not exit
-T Do not merge Trees
-O Re-optimize basket size when merging TTree
-v Explicitly set the verbosity level: 0 request no output, 99 is the default
-j Parallelize the execution in multiple processes
-dbg Parallelize the execution in multiple processes in debug mode (Does not delete partial files stored inside working directory)
-d Carry out the partial multiprocess execution in the specified directory
-n Open at most 'maxopenedfiles' at once (use 0 to request to use the system maximum)
-cachesize Resize the prefetching cache used to speed up I/O operations (use 0 to disable)
-experimental-io-features Used with an argument provided, enables the corresponding experimental feature for output trees
-f Gives the ability to specify the compression level of the target file (by default 4)
-fk Sets the target file to contain the baskets with the same compression
as the input files (unless -O is specified). Compresses the meta data
using the compression level specified in the first input or the
compression setting after fk (for example 206 when using -fk206)
-ff The compression level used is the one specified in the first input
-f0 Do not compress the target file
-f6 Use compression level 6. (See TFile::SetCompressionSettings for the supported range of values.)
TARGET Target file
SOURCES Source files
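The usage dump above is what ROOT's hadd prints when it is invoked without a valid TARGET and SOURCES. As a minimal sketch of assembling a correct invocation (build_hadd_cmd is a hypothetical helper, not part of merge_g4bl.py), the argv list can be built and validated before the call:

```python
def build_hadd_cmd(target, sources, force=True):
    # Hypothetical helper: assemble an argv list for ROOT's hadd.
    # Reject empty targets/sources up front, since hadd itself just
    # prints its usage and exits with code 1 in that case.
    if not target or not sources:
        raise ValueError("hadd needs a target file and at least one source")
    cmd = ["hadd"]
    if force:
        cmd.append("-f")  # overwrite the target file if it already exists
    cmd.append(target)
    cmd.extend(sources)
    return cmd

print(build_hadd_cmd("merged.root", ["a.root", "b.root"]))
```

Passing the resulting list to subprocess.run keeps the arguments unsplit and avoids shell quoting issues.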
Querying usertests:H4_v34b_7GeV_-27.7-fnal-w2718s1p1 for 100 files
Query: files from usertests:H4_v34b_7GeV_-27.7-fnal-w2718s1p1 where dune.output_status=confirmed ordered skip 800 limit 100
Getting names and metadata
done
{'core.runs': [42261], 'core.runs_subruns': [4226100002]}
Getting paths from rucio
Got 0 paths from 0 files
['hadd', '']
Traceback (most recent call last):
File "/cvmfs/fifeuser4.opensciencegrid.org/sw/dune/7f3f3008d21189b6ef9a78409a02e3f793fbbe00/merge_g4bl.py", line 511, in <module>
do_merge(args)
File "/cvmfs/fifeuser4.opensciencegrid.org/sw/dune/7f3f3008d21189b6ef9a78409a02e3f793fbbe00/merge_g4bl.py", line 119, in do_merge
raise Exception('Error in hadd')
Exception: Error in hadd
Exiting with error
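The traceback shows merge_g4bl.py aborting after hadd returned a nonzero exit code, but the earlier lines ("Got 0 paths from 0 files", then ['hadd', '']) suggest the underlying cause is an empty source list from the rucio lookup. A hedged sketch of a fail-fast guard (hypothetical code, not the actual do_merge in merge_g4bl.py) that would surface that condition directly:

```python
import subprocess

def do_merge_sketch(paths, target="merged.root"):
    # Hypothetical sketch, not the actual merge_g4bl.py logic:
    # fail with a clear message when the file query returned nothing,
    # instead of letting hadd print its usage and exit with code 1.
    paths = [p for p in paths if p]  # drop empty strings like the '' seen above
    if not paths:
        raise RuntimeError("no input paths from rucio; refusing to run hadd")
    result = subprocess.run(["hadd", "-f", target] + paths)
    if result.returncode != 0:
        raise RuntimeError("Error in hadd")

# Reproduce the failure mode seen in this log: a single empty path.
try:
    do_merge_sketch([""])
except RuntimeError as e:
    print(e)  # prints: no input paths from rucio; refusing to run hadd
```

With a guard like this, the jobscript log would record the missing-input condition rather than a bare "Error in hadd".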