Started by timer
Running as SYSTEM
Building in workspace /var/lib/jenkins/jobs/pytorch_infer/workspace
[SSH] script:

TARGETNODE=""""

module load anaconda3_gpu/4.13.0
module load cuda/11.7.0

cd pytorch_infer
rm -f infer_results_jenkins.csv

# Slurm Arguments
sargs="--nodes=1 "
sargs+="--ntasks-per-node=1 "
sargs+="--mem=16g "
sargs+="--time=00:10:00 "
sargs+="--account=bbmb-hydro "
sargs+="--gpus-per-node=1 "
sargs+="--gpu-bind=closest "

# Add Target node if it exists
if [[ ! -z ${TARGETNODE} ]]
then
    PARTITION=`sinfo --format="%R,%N" -n hydro61 | grep hydro61 | cut -d',' -f1 | tail -1`
    sargs+="--partition=${PARTITION} "
    sargs+="--nodelist=${TARGETNODE} "
else
    sargs+="--partition=a100 "
fi

# Executable to run
scmd="python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv"

# Run the command
start_time=`date +%s.%N`
echo $"Starting srun with command"
echo "srun $sargs $scmd"
srun $sargs $scmd
end_time=`date +%s.%N`

python transpose_results.py

runtime=$( echo "$end_time - $start_time" | bc -l )
echo "YVALUE=$runtime" > time.txt
printf "Pytorch test completed in %0.3f secs\n" $runtime

[SSH] executing...
Starting srun with command
srun --nodes=1 --ntasks-per-node=1 --mem=16g --time=00:10:00 --account=bbmb-hydro --gpus-per-node=1 --gpu-bind=closest --partition=a100 python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv
srun: job 96340 queued and waiting for resources
srun: job 96340 has been allocated resources
Running benchmark on hydro04
Running bulk validation on these pretrained models: vgg19_bn, resnet18, resnet34, simplenetv1_5m_m1,
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model vgg19_bn created, param count: 143678248
Running inference benchmark on vgg19_bn for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 1436.60 samples/sec. 178.198 ms/step.
Infer [16/40]. 1435.11 samples/sec. 178.383 ms/step.
Infer [24/40]. 1435.11 samples/sec. 178.383 ms/step.
Infer [32/40]. 1434.54 samples/sec. 178.455 ms/step.
Infer [40/40]. 1433.88 samples/sec. 178.536 ms/step.
Inference benchmark of vgg19_bn done. 1433.76 samples/sec, 178.54 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet18 created, param count: 11689512
Running inference benchmark on resnet18 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 10699.53 samples/sec. 23.926 ms/step.
Infer [16/40]. 10668.81 samples/sec. 23.995 ms/step.
Infer [24/40]. 10680.84 samples/sec. 23.968 ms/step.
Infer [32/40]. 10663.68 samples/sec. 24.007 ms/step.
Infer [40/40]. 10665.55 samples/sec. 24.003 ms/step.
Inference benchmark of resnet18 done. 10661.37 samples/sec, 24.00 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet34 created, param count: 21797672
Running inference benchmark on resnet34 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 6522.22 samples/sec. 39.250 ms/step.
Infer [16/40]. 6500.98 samples/sec. 39.379 ms/step.
Infer [24/40]. 6490.58 samples/sec. 39.442 ms/step.
Infer [32/40]. 6494.50 samples/sec. 39.418 ms/step.
Infer [40/40]. 6489.62 samples/sec. 39.448 ms/step.
Inference benchmark of resnet34 done. 6488.17 samples/sec, 39.45 ms/step
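The two numbers reported per step are the same measurement seen two ways: ms/step is just the batch size divided by samples/sec. A quick stand-alone Python check, not part of the Jenkins job, against the resnet34 summary line above:

# Cross-check (illustrative only, not part of the job):
# step time in ms = batch_size / throughput * 1000
batch_size = 256
samples_per_sec = 6488.17  # resnet34 summary line above
step_time_ms = batch_size / samples_per_sec * 1000
print(f"{step_time_ms:.2f} ms/step")  # prints 39.46, matching the reported 39.45 up to rounding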
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model simplenetv1_5m_m1 created, param count: 5752808
Running inference benchmark on simplenetv1_5m_m1 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 14034.59 samples/sec. 18.241 ms/step.
Infer [16/40]. 14047.78 samples/sec. 18.224 ms/step.
Infer [24/40]. 14046.65 samples/sec. 18.225 ms/step.
Infer [32/40]. 14046.11 samples/sec. 18.226 ms/step.
Infer [40/40]. 14039.63 samples/sec. 18.234 ms/step.
Inference benchmark of simplenetv1_5m_m1 done. 14033.45 samples/sec, 18.23 ms/step
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='vgg19_bn', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet18', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet34', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='simplenetv1_5m_m1', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
--result
[
    {
        "model": "simplenetv1_5m_m1",
        "infer_samples_per_sec": 14033.45,
        "infer_step_time": 18.234,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 5.75
    },
    {
        "model": "resnet18",
        "infer_samples_per_sec": 10661.37,
        "infer_step_time": 24.003,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 11.69
    },
    {
        "model": "resnet34",
        "infer_samples_per_sec": 6488.17,
        "infer_step_time": 39.448,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 21.8
    },
    {
        "model": "vgg19_bn",
        "infer_samples_per_sec": 1433.76,
        "infer_step_time": 178.536,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 143.68
    }
]
Pytorch test completed in 131.895 secs
[SSH] completed
[SSH] exit-status: 0
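The contents of transpose_results.py are not shown in this log; from the workflow above it appears to reshape the one-row-per-model results that benchmark.py writes to infer_results_jenkins.csv into a layout with one column per model for downstream plotting. A minimal sketch of that idea, assuming the CSV carries a model column and an infer_samples_per_sec column as in the --result JSON above (column names and the output filename are assumptions, not taken from the real script):

import csv

# Hypothetical reshape: turn one-row-per-model results into a single row
# keyed by model name, which is easier to plot as one series per model.
with open("infer_results_jenkins.csv", newline="") as f:
    rows = list(csv.DictReader(f))

header = [r["model"] for r in rows]                   # e.g. vgg19_bn, resnet18, ...
values = [r["infer_samples_per_sec"] for r in rows]   # throughput per model

with open("infer_results_transposed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerow(values)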
[workspace] $ /bin/sh -xe /tmp/jenkins8143069655929801044.sh
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/time.txt' /var/lib/jenkins/jobs/pytorch_infer/workspace
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/infer_results_jenkins.csv' /var/lib/jenkins/jobs/pytorch_infer/workspace
Recording plot data
Saving plot series data from: /var/lib/jenkins/jobs/pytorch_infer/workspace/time.txt
Finished: SUCCESS
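The plot step works because time.txt is a properties-style series file for the Jenkins Plot plugin: the shell block earlier times the srun call with date and bc and writes a single YVALUE=<seconds> line. For illustration only, the same timing-and-emit step written as a small Python sketch rather than shell (the command string mirrors the expanded srun line in the log; this is not part of the job):

import subprocess
import time

# Time the srun invocation and write the runtime in the YVALUE=<seconds>
# form that the Jenkins Plot plugin reads from a properties-file series.
cmd = ("srun --nodes=1 --ntasks-per-node=1 --mem=16g --time=00:10:00 "
       "--account=bbmb-hydro --gpus-per-node=1 --gpu-bind=closest --partition=a100 "
       "python benchmark.py --model-list jenkins_list_short.txt --bench inference "
       "--channels-last --results-file infer_results_jenkins.csv")

start = time.monotonic()
subprocess.run(cmd.split(), check=True)
runtime = time.monotonic() - start

with open("time.txt", "w") as f:
    f.write(f"YVALUE={runtime:.3f}\n")
print(f"Pytorch test completed in {runtime:.3f} secs")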