Update README.md
### 3. Build the TensorRT engine.

#### Method 1: Build your own engine (Recommended)

You can build a TensorRT engine matched to your own GPU using the following command.

```shell
sh trt/build_engine.sh 1.0
```

Finally, if you see output like `&&&& PASSED TensorRT.trtexec [TensorRT v10100]`, the engine has been built successfully.
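If you capture the build output in a log file, the success marker can be checked mechanically. This is a sketch, not part of the build script: `check_trt_build` is a hypothetical helper, and the demo log below is fabricated to show the expected marker line.

```shell
# Hypothetical helper: scan a trtexec log for the success marker.
# Redirect the build output to a file yourself, e.g. `sh trt/build_engine.sh 1.0 > build.log 2>&1`.
check_trt_build() {
  grep -q '&&&& PASSED TensorRT.trtexec' "$1" && echo "build OK" || echo "build FAILED"
}

# Self-contained demo with a fake log line (stands in for real trtexec output):
printf '&&&& PASSED TensorRT.trtexec [TensorRT v10100]\n' > /tmp/trt_demo.log
check_trt_build /tmp/trt_demo.log
```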
#### Method 2: Use the prebuilt engine (only for v1.x)

We provide some prebuilt [TensorRT Engines](https://huggingface.co/Tencent-Hunyuan/TensorRT-engine), which need to be downloaded from Hugging Face.

| Supported GPU    | Remote Path                       |
|:----------------:|:---------------------------------:|
| GeForce RTX 3090 | `engines/RTX3090/model_onnx.plan` |
| GeForce RTX 4090 | `engines/RTX4090/model_onnx.plan` |
| A100             | `engines/A100/model_onnx.plan`    |

Use the following command to download the engine and place it in the expected location.

*Note: Replace `<Remote Path>` with the corresponding remote path from the table above.*

```shell
export REMOTE_PATH=<Remote Path>
huggingface-cli download Tencent-Hunyuan/TensorRT-engine ${REMOTE_PATH} --local-dir ./ckpts/t2i/model_trt/engine/
ln -s ${REMOTE_PATH} ./ckpts/t2i/model_trt/engine/model_onnx.plan
```
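The resulting layout can be sketched with a stand-in file, which also shows why the relative symlink target resolves. This is a local simulation only: the `touch` stands in for the actual Hugging Face download, the temporary directory stands in for `./ckpts/t2i/model_trt/engine/`, and `REMOTE_PATH` is one example value from the table.

```shell
# Sketch of the layout produced by the commands above, using a stand-in file
# instead of the real Hugging Face download.
ENGINE_DIR=$(mktemp -d)
REMOTE_PATH=engines/RTX4090/model_onnx.plan
mkdir -p "${ENGINE_DIR}/$(dirname "${REMOTE_PATH}")"
touch "${ENGINE_DIR}/${REMOTE_PATH}"                    # stand-in for the downloaded engine
ln -s "${REMOTE_PATH}" "${ENGINE_DIR}/model_onnx.plan"  # relative link, as in the command above
# The link resolves because the target path is interpreted relative to the
# directory containing the link, where the download places the file.
readlink "${ENGINE_DIR}/model_onnx.plan"                # prints engines/RTX4090/model_onnx.plan
test -e "${ENGINE_DIR}/model_onnx.plan" && echo "engine link OK"
```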
### 4. Run the inference using the TensorRT model.

```shell