Started by timer
Running as SYSTEM
Building in workspace /var/lib/jenkins/jobs/pytorch_infer/workspace
[SSH] script:
TARGETNODE=""""
module load anaconda3_gpu/4.13.0
module load cuda/11.7.0

cd pytorch_infer
rm -f infer_results_jenkins.csv

# Slurm Arguments
sargs="--nodes=1 "
sargs+="--ntasks-per-node=1 "
sargs+="--mem=16g "
sargs+="--time=00:10:00 "
sargs+="--account=bbmb-hydro "
sargs+="--gpus-per-node=1 "
sargs+="--gpu-bind=closest "

# Add Target node if it exists
if [[ ! -z ${TARGETNODE} ]]
then
    PARTITION=`sinfo --format="%R,%N" -n hydro61 | grep hydro61 | cut -d',' -f1 | tail -1`
    sargs+="--partition=${PARTITION} "
    sargs+="--nodelist=${TARGETNODE} "
else
    sargs+="--partition=a100 "
fi

# Executable to run
scmd="python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv"

# Run the command
start_time=`date +%s.%N`
echo $"Starting srun with command"
echo "srun $sargs $scmd"
srun $sargs $scmd
end_time=`date +%s.%N`

python transpose_results.py

runtime=$( echo "$end_time - $start_time" | bc -l )
echo "YVALUE=$runtime" > time.txt
printf "Pytorch test completed in %0.3f secs\n" $runtime

[SSH] executing...
Starting srun with command
srun --nodes=1 --ntasks-per-node=1 --mem=16g --time=00:10:00 --account=bbmb-hydro --gpus-per-node=1 --gpu-bind=closest --partition=a100 python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv
srun: job 98875 queued and waiting for resources
srun: job 98875 has been allocated resources
Running benchmark on hydro03
Running bulk validation on these pretrained models: vgg19_bn, resnet18, resnet34, simplenetv1_5m_m1,
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model vgg19_bn created, param count: 143678248
Running inference benchmark on vgg19_bn for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 1433.34 samples/sec. 178.604 ms/step.
Infer [16/40]. 1432.04 samples/sec. 178.766 ms/step.
Infer [24/40]. 1430.78 samples/sec. 178.924 ms/step.
Infer [32/40]. 1430.28 samples/sec. 178.986 ms/step.
Infer [40/40]. 1429.51 samples/sec. 179.082 ms/step.
Inference benchmark of vgg19_bn done. 1429.25 samples/sec, 179.08 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet18 created, param count: 11689512
Running inference benchmark on resnet18 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 10665.78 samples/sec. 24.002 ms/step.
Infer [16/40]. 10659.76 samples/sec. 24.016 ms/step.
Infer [24/40]. 10654.83 samples/sec. 24.027 ms/step.
Infer [32/40]. 10615.60 samples/sec. 24.115 ms/step.
Infer [40/40]. 10621.60 samples/sec. 24.102 ms/step.
Inference benchmark of resnet18 done. 10617.64 samples/sec, 24.10 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet34 created, param count: 21797672
Running inference benchmark on resnet34 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 6508.74 samples/sec. 39.332 ms/step.
Infer [16/40]. 6471.06 samples/sec. 39.561 ms/step.
Infer [24/40]. 6472.41 samples/sec. 39.552 ms/step.
Infer [32/40]. 6479.47 samples/sec. 39.509 ms/step.
Infer [40/40]. 6473.82 samples/sec. 39.544 ms/step.
Inference benchmark of resnet34 done. 6472.35 samples/sec, 39.54 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model simplenetv1_5m_m1 created, param count: 5752808
Running inference benchmark on simplenetv1_5m_m1 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 14015.90 samples/sec. 18.265 ms/step.
Infer [16/40]. 14022.90 samples/sec. 18.256 ms/step.
Infer [24/40]. 14026.71 samples/sec. 18.251 ms/step.
Infer [32/40]. 14024.46 samples/sec. 18.254 ms/step.
Infer [40/40]. 13966.94 samples/sec. 18.329 ms/step.
Inference benchmark of simplenetv1_5m_m1 done. 13960.63 samples/sec, 18.33 ms/step
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='vgg19_bn', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet18', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet34', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='simplenetv1_5m_m1', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
--result
[
    {
        "model": "simplenetv1_5m_m1",
        "infer_samples_per_sec": 13960.63,
        "infer_step_time": 18.329,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 5.75
    },
    {
        "model": "resnet18",
        "infer_samples_per_sec": 10617.64,
        "infer_step_time": 24.102,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 11.69
    },
    {
        "model": "resnet34",
        "infer_samples_per_sec": 6472.35,
        "infer_step_time": 39.544,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 21.8
    },
    {
        "model": "vgg19_bn",
        "infer_samples_per_sec": 1429.25,
        "infer_step_time": 179.082,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 143.68
    }
]
Pytorch test completed in 62.552 secs
[SSH] completed
[SSH] exit-status: 0
[workspace] $ /bin/sh -xe /tmp/jenkins11313049942168199090.sh
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/time.txt' /var/lib/jenkins/jobs/pytorch_infer/workspace
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/infer_results_jenkins.csv' /var/lib/jenkins/jobs/pytorch_infer/workspace
Recording plot data
Saving plot series data from: /var/lib/jenkins/jobs/pytorch_infer/workspace/time.txt
Finished: SUCCESS
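Note on the post-processing step: the transpose_results.py script invoked above is not shown in this log, and the YVALUE=<runtime> line written to time.txt is the single-value properties format that the Jenkins Plot plugin reads in the "Saving plot series data" step. The Python sketch below is only a guess at what a transpose step like this could do, assuming it reads the long-format infer_results_jenkins.csv written by benchmark.py (one row per model, columns matching the keys in the --result JSON above) and rewrites it as a wide two-row CSV so each model can be charted as its own series; the output filename and column choices are assumptions, not the actual script.

#!/usr/bin/env python3
# Hypothetical sketch of a transpose step (not the real transpose_results.py).
# Assumes infer_results_jenkins.csv has columns "model" and "infer_samples_per_sec".
import csv

IN_FILE = "infer_results_jenkins.csv"        # written by benchmark.py --results-file
OUT_FILE = "infer_results_jenkins_wide.csv"  # assumed output name

def main():
    # Read the long-format results: one row per benchmarked model.
    with open(IN_FILE, newline="") as f:
        rows = list(csv.DictReader(f))

    # Build one header row of model names and one data row of throughput values.
    header = [row["model"] for row in rows]
    values = [row["infer_samples_per_sec"] for row in rows]

    with open(OUT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerow(values)

if __name__ == "__main__":
    main()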