Started by timer
Running as SYSTEM
Building in workspace /var/lib/jenkins/jobs/pytorch_infer/workspace
[SSH] script:
TARGETNODE=""""

module load anaconda3_gpu/4.13.0
module load cuda/11.7.0

cd pytorch_infer
rm -f infer_results_jenkins.csv

# Slurm Arguments
sargs="--nodes=1 "
sargs+="--ntasks-per-node=1 "
sargs+="--mem=16g "
sargs+="--time=00:10:00 "
sargs+="--account=bbmb-hydro "
sargs+="--gpus-per-node=1 "
sargs+="--gpu-bind=closest "

# Add Target node if it exists
if [[ ! -z ${TARGETNODE} ]]
then
    PARTITION=`sinfo --format="%R,%N" -n hydro61 | grep hydro61 | cut -d',' -f1 | tail -1`
    sargs+="--partition=${PARTITION} "
    sargs+="--nodelist=${TARGETNODE} "
else
    sargs+="--partition=a100 "
fi

# Executable to run
scmd="python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv"

# Run the command
start_time=`date +%s.%N`
echo $"Starting srun with command"
echo "srun $sargs $scmd"
srun $sargs $scmd
end_time=`date +%s.%N`

python transpose_results.py

runtime=$( echo "$end_time - $start_time" | bc -l )
echo "YVALUE=$runtime" > time.txt
printf "Pytorch test completed in %0.3f secs\n" $runtime

[SSH] executing...
Starting srun with command
srun --nodes=1 --ntasks-per-node=1 --mem=16g --time=00:10:00 --account=bbmb-hydro --gpus-per-node=1 --gpu-bind=closest --partition=a100 python benchmark.py --model-list jenkins_list_short.txt --bench inference --channels-last --results-file infer_results_jenkins.csv
srun: job 97465 queued and waiting for resources
srun: job 97465 has been allocated resources
Running benchmark on hydro06
Running bulk validation on these pretrained models: vgg19_bn, resnet18, resnet34, simplenetv1_5m_m1,
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model vgg19_bn created, param count: 143678248
Running inference benchmark on vgg19_bn for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 1429.55 samples/sec. 179.077 ms/step.
Infer [16/40]. 1427.35 samples/sec. 179.353 ms/step.
Infer [24/40]. 1425.64 samples/sec. 179.569 ms/step.
Infer [32/40]. 1425.19 samples/sec. 179.625 ms/step.
Infer [40/40]. 1424.72 samples/sec. 179.684 ms/step.
Inference benchmark of vgg19_bn done. 1424.58 samples/sec, 179.68 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet18 created, param count: 11689512
Running inference benchmark on resnet18 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 10631.23 samples/sec. 24.080 ms/step.
Infer [16/40]. 10618.13 samples/sec. 24.110 ms/step.
Infer [24/40]. 10614.73 samples/sec. 24.117 ms/step.
Infer [32/40]. 10592.57 samples/sec. 24.168 ms/step.
Infer [40/40]. 10593.27 samples/sec. 24.166 ms/step.
Inference benchmark of resnet18 done. 10589.52 samples/sec, 24.17 ms/step
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model resnet34 created, param count: 21797672
Running inference benchmark on resnet34 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 6463.42 samples/sec. 39.607 ms/step.
Infer [16/40]. 6440.67 samples/sec. 39.747 ms/step.
Infer [24/40]. 6435.03 samples/sec. 39.782 ms/step.
Infer [32/40]. 6439.48 samples/sec. 39.755 ms/step.
Infer [40/40]. 6440.67 samples/sec. 39.747 ms/step.
Inference benchmark of resnet34 done. 6439.24 samples/sec, 39.75 ms/step
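[Editor's note] The throughput and latency figures in the lines above are two views of the same measurement: samples/sec is simply the batch size divided by the step time. The short Python snippet below is only an illustrative cross-check against the "done" lines for the three models finished so far (batch size 256 throughout); it is not part of benchmark.py, and the numbers match the log to within rounding.

    # Illustrative cross-check (not part of benchmark.py): samples/sec vs. ms/step.
    # Values are copied from the "done" lines above; batch size is 256 for every model.

    BATCH_SIZE = 256

    # (model, reported samples/sec, reported ms/step)
    reported = [
        ("vgg19_bn",  1424.58, 179.68),
        ("resnet18", 10589.52,  24.17),
        ("resnet34",  6439.24,  39.75),
    ]

    for model, samples_per_sec, ms_per_step in reported:
        # throughput implied by the step time: batch_size / (step time in seconds)
        implied = BATCH_SIZE / (ms_per_step / 1000.0)
        print(f"{model:10s} reported {samples_per_sec:9.2f} samples/sec, "
              f"implied {implied:9.2f} samples/sec from {ms_per_step:.2f} ms/step")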
Benchmarking in float32 precision. NHWC layout. torchscript disabled
Model simplenetv1_5m_m1 created, param count: 5752808
Running inference benchmark on simplenetv1_5m_m1 for 40 steps w/ input size (3, 224, 224) and batch size 256.
Infer [8/40]. 12420.56 samples/sec. 20.611 ms/step.
Infer [16/40]. 12380.25 samples/sec. 20.678 ms/step.
Infer [24/40]. 12357.61 samples/sec. 20.716 ms/step.
Infer [32/40]. 12368.71 samples/sec. 20.697 ms/step.
Infer [40/40]. 12342.35 samples/sec. 20.742 ms/step.
Inference benchmark of simplenetv1_5m_m1 done. 12337.59 samples/sec, 20.74 ms/step
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='vgg19_bn', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet18', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='resnet34', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
args: Namespace(model_list='jenkins_list_short.txt', bench='inference', detail=False, results_file='infer_results_jenkins.csv', num_warm_iter=10, num_bench_iter=40, model='simplenetv1_5m_m1', batch_size=256, img_size=None, input_size=None, use_train_size=False, num_classes=None, gp=None, channels_last=True, grad_checkpointing=False, amp=False, precision='float32', torchscript=False, fuser='', opt='sgd', opt_eps=None, opt_betas=None, momentum=0.9, weight_decay=0.0001, clip_grad=None, clip_mode='norm', smoothing=0.1, drop=0.0, drop_path=None, drop_block=None)
--result
[
    {
        "model": "simplenetv1_5m_m1",
        "infer_samples_per_sec": 12337.59,
        "infer_step_time": 20.742,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 5.75
    },
    {
        "model": "resnet18",
        "infer_samples_per_sec": 10589.52,
        "infer_step_time": 24.166,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 11.69
    },
    {
        "model": "resnet34",
        "infer_samples_per_sec": 6439.24,
        "infer_step_time": 39.747,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 21.8
    },
    {
        "model": "vgg19_bn",
        "infer_samples_per_sec": 1424.58,
        "infer_step_time": 179.684,
        "infer_batch_size": 256,
        "infer_img_size": 224,
        "param_count": 143.68
    }
]
Pytorch test completed in 62.685 secs
[SSH] completed
[SSH] exit-status: 0
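[Editor's note] The SSH script runs python transpose_results.py after the benchmark, but that script's contents are not shown in this log. The sketch below is a minimal, hypothetical illustration only, under the assumption that transpose_results.py simply transposes infer_results_jenkins.csv (one row per model, with columns mirroring the fields in the --result dump above) so that each metric becomes a row and each model a column; the real script may differ.

    # Hypothetical sketch of a CSV transpose, NOT the actual transpose_results.py.
    # Assumes infer_results_jenkins.csv has a header row followed by one row per model.
    import csv

    with open("infer_results_jenkins.csv", newline="") as f:
        rows = list(csv.reader(f))      # rows[0]: header, e.g. model, infer_samples_per_sec, ...

    transposed = list(zip(*rows))       # columns become rows, rows become columns

    # Writing back to the same file is an assumption made for this sketch.
    with open("infer_results_jenkins.csv", "w", newline="") as f:
        csv.writer(f).writerows(transposed)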
[workspace] $ /bin/sh -xe /tmp/jenkins6999037555586087061.sh
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/time.txt' /var/lib/jenkins/jobs/pytorch_infer/workspace
+ scp 'HYDRO_REMOTE:~svchydrojenkins/pytorch_infer/infer_results_jenkins.csv' /var/lib/jenkins/jobs/pytorch_infer/workspace
Recording plot data
Saving plot series data from: /var/lib/jenkins/jobs/pytorch_infer/workspace/time.txt
Finished: SUCCESS
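[Editor's note] The job writes the wall-clock runtime as "YVALUE=<seconds>" into time.txt, which the "Recording plot data" step then picks up as a properties-style plot series. The snippet below is a hypothetical local helper, not part of the Jenkins job, showing how such a file could be parsed; the 120-second threshold is an arbitrary value chosen only for illustration.

    # Hypothetical local check, not part of the Jenkins job:
    # parse time.txt ("YVALUE=<seconds>") and flag unusually slow runs.
    THRESHOLD_SECS = 120.0   # assumed limit, for illustration only

    with open("time.txt") as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key == "YVALUE":
                runtime = float(value)
                status = "OK" if runtime <= THRESHOLD_SECS else "SLOW"
                print(f"Pytorch inference benchmark runtime: {runtime:.3f} s ({status})")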