Commit 0e9f5ca (verified) by jemartin, parent 861df73

Upload README.md with huggingface_hub

Files changed (1): README.md (+112, -0)
---
language: en
license: apache-2.0
model_name: bvlcalexnet-3.onnx
tags:
- validated
- vision
- classification
- alexnet
---
<!--- SPDX-License-Identifier: BSD-3-Clause -->

# AlexNet

|Model |Download |Download (with sample test data)| ONNX version |Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
|AlexNet| [238 MB](model/bvlcalexnet-3.onnx) | [225 MB](model/bvlcalexnet-3.tar.gz) | 1.1 | 3| | |
|AlexNet| [238 MB](model/bvlcalexnet-6.onnx) | [225 MB](model/bvlcalexnet-6.tar.gz) | 1.1.2 | 6| | |
|AlexNet| [238 MB](model/bvlcalexnet-7.onnx) | [226 MB](model/bvlcalexnet-7.tar.gz) | 1.2 | 7| | |
|AlexNet| [238 MB](model/bvlcalexnet-8.onnx) | [226 MB](model/bvlcalexnet-8.tar.gz) | 1.3 | 8| | |
|AlexNet| [238 MB](model/bvlcalexnet-9.onnx) | [226 MB](model/bvlcalexnet-9.tar.gz) | 1.4 | 9| | |
|AlexNet| [233 MB](model/bvlcalexnet-12.onnx) | [216 MB](model/bvlcalexnet-12.tar.gz) | 1.9 | 12|54.80|78.23|
|AlexNet-int8| [58 MB](model/bvlcalexnet-12-int8.onnx) | [39 MB](model/bvlcalexnet-12-int8.tar.gz) | 1.9 | 12|54.68|78.23|
|AlexNet-qdq| [59 MB](model/bvlcalexnet-12-qdq.onnx) | [44 MB](model/bvlcalexnet-12-qdq.tar.gz) | 1.9 | 12|54.71|78.22|
> Compared with the fp32 AlexNet, the int8 AlexNet's Top-1 accuracy drop ratio is 0.22%, its Top-5 accuracy drop ratio is 0.05%, and its performance improvement is 2.26x.
>
> **Note**
>
> Different preprocessing methods lead to different accuracies; the accuracy in the table depends on this specific [preprocessing method](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/alexnet/quantization/ptq/main.py).
>
> Performance depends on the test hardware. The performance data here was collected with an Intel® Xeon® Platinum 8280 Processor, 1 socket and 4 cores per instance, CentOS Linux 8.3, with a data batch size of 1.
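The quoted Top-1 drop ratio can be reproduced from the fp32 and int8 entries in the table above:

```python
# Accuracy values taken from the table above (fp32 vs. int8 AlexNet).
fp32_top1 = 54.80
int8_top1 = 54.68

# Drop ratio is the relative accuracy loss, expressed as a percentage.
drop_ratio = (fp32_top1 - int8_top1) / fp32_top1 * 100
print(f"Top-1 accuracy drop ratio: {drop_ratio:.2f}%")  # 0.22%
```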

## Description
AlexNet is a convolutional neural network for image classification that competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.

Differences from the original paper:
- no training with the relighting data augmentation;
- non-zero biases are initialized to 0.1 instead of 1 (found necessary for training, as initialization to 1 gave a flat loss).

### Dataset
[ILSVRC2012](http://www.image-net.org/challenges/LSVRC/2012/)

## Source
Caffe BVLC AlexNet ==> Caffe2 AlexNet ==> ONNX AlexNet

## Model input and output
### Input
```
data_0: float[1, 3, 224, 224]
```
### Output
```
softmaxout_1: float[1, 1000]
```
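A minimal sketch of feeding these tensors through the model with `onnxruntime`, assuming the opset-12 model file has been downloaded into the working directory (the local filename is an assumption); the dummy input only demonstrates the declared shape and dtype:

```python
import os
import numpy as np

# Dummy input matching the declared signature: data_0 is float32 [1, 3, 224, 224].
data_0 = np.random.rand(1, 3, 224, 224).astype(np.float32)

MODEL = "bvlcalexnet-12.onnx"  # assumed local path to the downloaded model
if os.path.exists(MODEL):
    import onnxruntime as ort
    sess = ort.InferenceSession(MODEL)
    # Run with the documented input name; None returns all outputs.
    (softmaxout_1,) = sess.run(None, {"data_0": data_0})
    # The output is already softmax-normalized class probabilities.
    assert softmaxout_1.shape == (1, 1000)
```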
### Pre-processing steps
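This section is empty upstream. As a hedged sketch only: Caffe-style AlexNet pipelines typically resize, center-crop to 224×224, subtract a per-channel mean, and reorder to NCHW; the mean values below are the commonly used ImageNet means and are an assumption here, not taken from this model's official pipeline.

```python
import numpy as np

def preprocess(img_hwc_uint8):
    """Center-crop an HWC uint8 image to 224x224, subtract a per-channel
    mean, and return a float32 NCHW batch of one."""
    h, w, _ = img_hwc_uint8.shape
    top, left = (h - 224) // 2, (w - 224) // 2
    crop = img_hwc_uint8[top:top + 224, left:left + 224, :].astype(np.float32)
    mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)  # assumed means
    crop -= mean
    chw = crop.transpose(2, 0, 1)   # HWC -> CHW
    return chw[np.newaxis, ...]     # add batch dim -> [1, 3, 224, 224]

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 224, 224)
```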
### Post-processing steps
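This section is also empty upstream. Since the graph already ends in a softmax (`softmaxout_1` holds class probabilities), post-processing reduces to picking the top-k class indices; a minimal sketch:

```python
import numpy as np

def top_k(probs, k=5):
    """Return the indices of the k highest-probability classes, best first."""
    return np.argsort(probs)[::-1][:k]

# Toy distribution over 1000 classes, shaped like one row of softmaxout_1.
probs = np.zeros(1000, dtype=np.float32)
probs[[7, 42, 3]] = [0.5, 0.3, 0.2]
print(top_k(probs, 3))  # [ 7 42  3]
```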
### Sample test data
Randomly generated sample test data:
- test_data_0.npz
- test_data_1.npz
- test_data_2.npz
- test_data_set_0
- test_data_set_1
- test_data_set_2

## Results/accuracy on test set
The bundled model is the iteration 360,000 snapshot.
The best validation performance during training was at iteration 358,000, with validation accuracy 57.258% and loss 1.83948.
This model obtains a top-1 accuracy of 57.1% and a top-5 accuracy of 80.2% on the validation set, using just the center crop.
(Using the average of 10 crops, (4 corners + 1 center) * 2 mirrors, should give slightly higher accuracy.)

## Quantization
AlexNet-int8 and AlexNet-qdq are obtained by quantizing the fp32 AlexNet model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/alexnet/quantization/ptq/README.md) to learn how to use Intel® Neural Compressor for quantization.

### Environment
- onnx: 1.9.0
- onnxruntime: 1.8.0

### Prepare model
```shell
wget https://github.com/onnx/models/raw/main/vision/classification/alexnet/model/bvlcalexnet-12.onnx
```

### Model quantize
Make sure to specify the appropriate dataset path in the configuration file.
```bash
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=alexnet.yaml \
                   --data_path=/path/to/imagenet \
                   --label_path=/path/to/imagenet/label \
                   --output_model=path/to/save
```

## References
* [ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
* [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

## Contributors
* [mengniwang95](https://github.com/mengniwang95) (Intel)
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [airMeng](https://github.com/airMeng) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
[BSD-3](LICENSE)