diff --git a/README.md b/README.md index 14feecba5..3a05f560b 100644 --- a/README.md +++ b/README.md @@ -295,7 +295,7 @@ You can do MindSpore Lite inference in MindOCR using **MindOCR models** or **Thi For the detailed performance of the trained models, please refer to [https://github.com/mindspore-lab/mindocr/blob/main/configs](./configs). -For details of MindSpore Lite and ACL inference models support, please refer to [MindOCR Models Support List](docs/en/inference/mindocr_models_list.md) and [Third-party Models Support List](docs/en/inference/thirdparty_models_list.md) (PaddleOCR etc.). +For details of MindSpore Lite inference models support, please refer to [MindOCR Models Support List](docs/en/inference/mindocr_models_list.md) and [Third-party Models Support List](docs/en/inference/thirdparty_models_list.md) (PaddleOCR etc.). ## Dataset List diff --git a/configs/cls/mobilenetv3/README.md b/configs/cls/mobilenetv3/README.md index bb0f87f0e..e66f43d9b 100644 --- a/configs/cls/mobilenetv3/README.md +++ b/configs/cls/mobilenetv3/README.md @@ -132,7 +132,7 @@ python tools/train.py -c configs/cls/mobilenetv3/cls_mv3.yaml Please set `distribute` in yaml config file to be `True`. ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 4 python tools/train.py -c configs/cls/mobilenetv3/cls_mv3.yaml ``` diff --git a/configs/cls/mobilenetv3/README_CN.md b/configs/cls/mobilenetv3/README_CN.md index 248e1be20..69b24feae 100644 --- a/configs/cls/mobilenetv3/README_CN.md +++ b/configs/cls/mobilenetv3/README_CN.md @@ -134,7 +134,7 @@ python tools/train.py -c configs/cls/mobilenetv3/cls_mv3.yaml 请确保yaml文件中的`distribute`参数为`True`。 ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 4 python tools/train.py -c configs/cls/mobilenetv3/cls_mv3.yaml yaml ``` diff --git a/configs/det/dbnet/README.md b/configs/det/dbnet/README.md index 161d229fa..062c94898 100644 --- a/configs/det/dbnet/README.md +++ b/configs/det/dbnet/README.md @@ -397,7 +397,7 @@ python tools/train.py -c=configs/det/dbnet/db_r50_icdar15.yaml Please set `distribute` in yaml config file to be True. ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml ``` @@ -418,7 +418,7 @@ Please refer to the tutorial [MindOCR Inference](../../../docs/en/inference/infe - Model Export -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#3-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config dbnet_resnet50 --data_shape 736 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -430,11 +430,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T - Environment Installation -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. 
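As a quick sanity check of the environment set up above, the following sketch verifies that the MindSpore Lite Python runtime is importable and that the `converter_lite` tool is on `PATH` (assuming MindSpore Lite was installed per the Environment Installation tutorial; the exact locations depend on where the Lite package was unpacked):

```shell
# Verify the MindSpore Lite runtime and converter tool are visible (illustrative check only):
python -c "import mindspore_lite as mslite; print(mslite.__file__)"
which converter_lite
```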
- Model Conversion -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. - Inference diff --git a/configs/det/dbnet/README_CN.md b/configs/det/dbnet/README_CN.md index 549400e65..a5da6b43c 100644 --- a/configs/det/dbnet/README_CN.md +++ b/configs/det/dbnet/README_CN.md @@ -375,7 +375,7 @@ python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml 请确保yaml文件中的`distribute`参数为True。 ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml ``` @@ -391,11 +391,11 @@ python tools/eval.py --config configs/det/dbnet/db_r50_icdar15.yaml ## 5. MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: - 模型导出 -请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#3-实验结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config dbnet_resnet50 --data_shape 736 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -407,11 +407,11 @@ python tools/export.py --model_name_or_config configs/det/dbnet/db_r50_icdar15.y - 环境搭建 -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 - 模型转换 -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 - 执行推理 diff --git a/configs/det/dbnet/README_CN_PP-OCRv3.md b/configs/det/dbnet/README_CN_PP-OCRv3.md index d5b3f9a95..baa57ea7c 100644 --- a/configs/det/dbnet/README_CN_PP-OCRv3.md +++ b/configs/det/dbnet/README_CN_PP-OCRv3.md @@ -326,10 +326,10 @@ model: * 分布式训练 -在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备或着GPU卡的分布式训练,请将配置参数`system.distribute`修改为True, 例如: +在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备上的分布式训练,请将配置参数`system.distribute`修改为True, 例如: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/det/dbnet/db_mobilenetv3_ppocrv3.yaml ``` @@ -338,7 +338,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/det/dbnet 如果要在没有分布式训练的情况下在较小的数据集上训练模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/det/dbnet/db_mobilenetv3_ppocrv3.yaml ``` diff --git a/configs/det/east/README.md b/configs/det/east/README.md index 3a79a80f7..50075946f 100644 --- a/configs/det/east/README.md +++ b/configs/det/east/README.md @@ -138,7 +138,7 @@ python tools/train.py --config configs/det/east/east_r50_icdar15.yaml Please set `distribute` in yaml config file to be True.
```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/east/east_r50_icdar15.yaml ``` @@ -158,7 +158,7 @@ Please refer to the tutorial [MindOCR Inference](../../../docs/en/inference/infe - Model Export -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ``` shell python tools/export.py --model_name_or_config east_resnet50 --data_shape 720 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -170,11 +170,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T - Environment Installation -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. - Model Conversion -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. - Inference diff --git a/configs/det/east/README_CN.md b/configs/det/east/README_CN.md index decafbdaa..f19216922 100644 --- a/configs/det/east/README_CN.md +++ b/configs/det/east/README_CN.md @@ -133,7 +133,7 @@ python tools/train.py --config configs/det/east/east_r50_icdar15.yaml 请确保yaml文件中的`distribute`参数为True。 ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/east/east_r50_icdar15.yaml ``` @@ -149,11 +149,11 @@ python tools/eval.py --config configs/det/east/east_r50_icdar15.yaml ### 3.6 MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: - 模型导出 -请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ``` shell python tools/export.py --model_name_or_config east_resnet50 --data_shape 720 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -165,11 +165,11 @@ python tools/export.py --model_name_or_config configs/det/east/east_r50_icdar15. 
- 环境搭建 -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 - 模型转换 -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 - 执行推理 diff --git a/configs/det/fcenet/README.md b/configs/det/fcenet/README.md index 061c04e95..ed9282fa2 100644 --- a/configs/det/fcenet/README.md +++ b/configs/det/fcenet/README.md @@ -157,7 +157,7 @@ python tools/train.py -c=configs/det/fcenet/fce_icdar15.yaml Please set `distribute` in yaml config file to be True. ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/fcenet/fce_icdar15.yaml ``` @@ -174,11 +174,11 @@ python tools/eval.py -c=configs/det/fcenet/fce_icdar15.yaml ### 3.6 MindSpore Lite Inference -Please refer to the tutorial [MindOCR Inference](../../../docs/en/inference/inference_tutorial_en.md) for model inference based on MindSpot Lite on Ascend 310, including the following steps: +Please refer to the tutorial [MindOCR Inference](../../../docs/en/inference/inference_tutorial.md) for model inference based on MindSpore Lite on Ascend 310, including the following steps: - Model Export -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config fcenet_resnet50 --data_shape 736 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -190,11 +190,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T - Environment Installation -Please refer to [Environment Installation](../../../docs/en/inference/environment_en.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. - Model Conversion -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial_en.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file.
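For orientation, below is a minimal sketch of what this offline conversion can look like for the `fcenet_resnet50.mindir` exported above with `--data_shape 736 1280`. The flags follow the `converter_lite` tool; the input tensor name `x` in `config.txt` is an assumption and must match the actual input name of your exported MindIR:

```shell
# Conversion config: input_shape must match the export data_shape (NCHW);
# the tensor name "x" is illustrative.
cat > config.txt << EOF
[ascend_context]
input_format=NCHW
input_shape=x:[1,3,736,1280]
EOF

# Offline conversion of the exported MindIR to a MindSpore Lite MindIR:
converter_lite \
    --saveType=MINDIR \
    --fmk=MINDIR \
    --optimize=ascend_oriented \
    --modelFile=fcenet_resnet50.mindir \
    --outputFile=fcenet_resnet50_lite \
    --configFile=config.txt
```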
- Inference diff --git a/configs/det/fcenet/README_CN.md b/configs/det/fcenet/README_CN.md index cb6897c31..848edf015 100644 --- a/configs/det/fcenet/README_CN.md +++ b/configs/det/fcenet/README_CN.md @@ -165,7 +165,7 @@ python tools/train.py --config configs/det/fcenet/fce_icdar15.yaml 请确保yaml文件中的`distribute`参数为True。 ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/fcenet/fce_icdar15.yaml ``` @@ -181,11 +181,11 @@ python tools/eval.py --config configs/det/fcenet/fce_icdar15.yaml ### 3.6 MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial_cn.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: - 模型导出 -请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config fcenet_resnet50 --data_shape 736 1280 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -197,11 +197,11 @@ python tools/export.py --model_name_or_config configs/det/fcenet/fce_icdar15.yam - 环境搭建 -请参考[环境安装](../../../docs/cn/inference/environment_cn.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 - 模型转换 -请参考[模型转换](../../../docs/cn/inference/convert_tutorial_cn.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 - 执行推理 diff --git a/configs/det/psenet/README.md b/configs/det/psenet/README.md index 9201583bc..0d9caf5ba 100644 --- a/configs/det/psenet/README.md +++ b/configs/det/psenet/README.md @@ -168,7 +168,7 @@ python tools/train.py --config configs/det/psenet/pse_r152_icdar15.yaml Please set `distribute` in yaml config file to be True. ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/psenet/pse_r152_icdar15.yaml ``` @@ -188,7 +188,7 @@ Please refer to the tutorial [MindOCR Inference](../../../docs/en/inference/infe - Model Export -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config psenet_resnet152 --data_shape 1472 2624 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -200,11 +200,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T - Environment Installation -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. 
- Model Conversion -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. - Inference diff --git a/configs/det/psenet/README_CN.md b/configs/det/psenet/README_CN.md index da1d31bc3..0a79fdeea 100644 --- a/configs/det/psenet/README_CN.md +++ b/configs/det/psenet/README_CN.md @@ -168,7 +168,7 @@ python tools/train.py --config configs/det/psenet/pse_r152_icdar15.yaml 请确保yaml文件中的`distribute`参数为True。 ```shell -# n is the number of GPUs/NPUs +# n is the number of NPUs mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/det/psenet/pse_r152_icdar15.yaml ``` @@ -184,11 +184,11 @@ python tools/eval.py --config configs/det/psenet/pse_r152_icdar15.yaml ### 3.6 MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: - 模型导出 -请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-实验结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config psenet_resnet152 --data_shape 1472 2624 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -200,11 +200,11 @@ python tools/export.py --model_name_or_config configs/det/psenet/pse_r152_icdar1 - 环境搭建 -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 - 模型转换 -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 - 执行推理 diff --git a/configs/kie/layoutlmv3/README.md b/configs/kie/layoutlmv3/README.md index cc18fd725..63ef63880 100644 --- a/configs/kie/layoutlmv3/README.md +++ b/configs/kie/layoutlmv3/README.md @@ -183,7 +183,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -193,7 +193,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe. 
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run: ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/layoutlmv3/ser_layoutlmv3_xfund_zh.yaml ``` @@ -203,7 +203,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/layou If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/kie/layoutlmv3/ser_layoutlmv3_xfund_zh.yaml ``` diff --git a/configs/kie/layoutlmv3/README_CN.md b/configs/kie/layoutlmv3/README_CN.md index 1f2628ad9..951acd3e4 100644 --- a/configs/kie/layoutlmv3/README_CN.md +++ b/configs/kie/layoutlmv3/README_CN.md @@ -179,7 +179,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -189,7 +189,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/layoutlmv3/ser_layoutlmv3_xfund_zh.yaml ``` @@ -199,7 +199,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/layou 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/kie/layoutlmv3/ser_layoutlmv3_xfund_zh.yaml ``` diff --git a/configs/kie/vi_layoutxlm/README.md b/configs/kie/vi_layoutxlm/README.md index 5326f11f5..b915aecfa 100644 --- a/configs/kie/vi_layoutxlm/README.md +++ b/configs/kie/vi_layoutxlm/README.md @@ -202,7 +202,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -223,7 +223,7 @@ python tools/param_converter.py \ It is easy to reproduce the reported results with the pre-defined training recipe.
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run: ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yaml ``` @@ -233,7 +233,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/vi_la If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yaml ``` diff --git a/configs/kie/vi_layoutxlm/README_CN.md b/configs/kie/vi_layoutxlm/README_CN.md index f6f843807..4d33d8650 100644 --- a/configs/kie/vi_layoutxlm/README_CN.md +++ b/configs/kie/vi_layoutxlm/README_CN.md @@ -198,7 +198,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -219,7 +219,7 @@ python tools/param_converter.py \ 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yaml ``` @@ -229,7 +229,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/kie/vi_la 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yaml ``` diff --git a/configs/layout/yolov8/README.md b/configs/layout/yolov8/README.md index 2bdd6b8e9..f2036c19f 100644 --- a/configs/layout/yolov8/README.md +++ b/configs/layout/yolov8/README.md @@ -93,7 +93,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -104,7 +104,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe.
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/layout/yolov8/yolov8n.yaml ``` @@ -114,7 +114,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/layout/yo If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/layout/yolov8/yolov8n.yaml ``` @@ -134,7 +134,7 @@ To inference with MindSpot Lite on Ascend 310, please refer to the tutorial [Min **1. Model Export** -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config configs/layout/yolov8/yolov8n.yaml --data_shape 800 800 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -144,11 +144,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T **2. Environment Installation** -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. **3. Model Conversion** -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. **4. Inference** diff --git a/configs/layout/yolov8/README_CN.md b/configs/layout/yolov8/README_CN.md index 92e46baa6..986f500f5 100644 --- a/configs/layout/yolov8/README_CN.md +++ b/configs/layout/yolov8/README_CN.md @@ -106,7 +106,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -117,7 +117,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/layout/yolov8/yolov8n.yaml ``` @@ -127,7 +127,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/layout/yo 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/layout/yolov8/yolov8n.yaml ``` @@ -144,11 +144,11 @@ python tools/eval.py --config configs/layout/yolov8/yolov8n.yaml ## 4. 
MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: **1. 模型导出** -请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config configs/layout/yolov8/yolov8n.yaml --data_shape 800 800 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -158,11 +158,11 @@ python tools/export.py --model_name_or_config configs/layout/yolov8/yolov8n.yaml **2. 环境搭建** -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 **3. 模型转换** -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 **4. 执行推理** diff --git a/configs/rec/abinet/README.md b/configs/rec/abinet/README.md index 19ae0a706..a7cf8ad53 100644 --- a/configs/rec/abinet/README.md +++ b/configs/rec/abinet/README.md @@ -240,7 +240,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. - Dataset: The MJSynth and SynthText datasets come from [ABINet_repo](https://github.com/FangShancheng/ABINet). @@ -252,7 +252,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe. 
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/abinet/abinet_resnet45_en.yaml ``` The pre-trained model needs to be loaded during ABINet model training, and the weight of the pre-trained model is @@ -263,7 +263,7 @@ from https://download.mindspore.cn/toolkits/mindocr/abinet/abinet_pretrain_en-82ca20b.ckpt. If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/abinet/abinet_resnet45_en.yaml ``` diff --git a/configs/rec/abinet/README_CN.md b/configs/rec/abinet/README_CN.md index 527ca0bba..6c4fa72e1 100644 --- a/configs/rec/abinet/README_CN.md +++ b/configs/rec/abinet/README_CN.md @@ -253,7 +253,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或将学习率线性调整为新的全局批大小。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或将学习率线性调整为新的全局批大小。 - 数据集:MJSynth和SynthText数据集来自作者公布的代码仓[ABINet_repo](https://github.com/FangShancheng/ABINet). @@ -265,7 +265,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/abinet/abinet_resnet45_en.yaml ``` ABINet模型训练时需要加载预训练模型,预训练模型的权重来自https://download.mindspore.cn/toolkits/mindocr/abinet/abinet_pretrain_en-821ca20b.ckpt,需要在“configs/rec/abinet/abinet_resnet45_en.yaml”中model的pretrained添加预训练权重的路径。 @@ -275,7 +275,7 @@ ABINet模型训练时需要加载预训练模型,预训练模型的权重来 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/abinet/abinet_resnet45_en.yaml ``` diff --git a/configs/rec/crnn/README.md b/configs/rec/crnn/README.md index ac7a5299b..df938a673 100644 --- a/configs/rec/crnn/README.md +++ b/configs/rec/crnn/README.md @@ -302,7 +302,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -313,7 +313,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe.
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `system.distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml ``` @@ -323,7 +323,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/crnn/ If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`system.distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml ``` @@ -412,7 +412,7 @@ To inference with MindSpot Lite on Ascend 310, please refer to the tutorial [Min **1. Model Export** -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config crnn_resnet34 --data_shape 32 100 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -425,11 +425,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T **2. Environment Installation** -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. **3. Model Conversion** -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. **4. 
Inference** diff --git a/configs/rec/crnn/README_CN.md b/configs/rec/crnn/README_CN.md index bf3b2656f..26406bb9c 100644 --- a/configs/rec/crnn/README_CN.md +++ b/configs/rec/crnn/README_CN.md @@ -301,7 +301,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -312,7 +312,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`system.distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml ``` @@ -322,7 +322,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/crnn/ 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`system.distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/crnn/crnn_resnet34.yaml ``` @@ -378,7 +378,7 @@ Mindocr内置了一部分字典,均放在了 `mindocr/utils/dict/` 位置, 我们采用公开的中文基准数据集[Benchmarking-Chinese-Text-Recognition](https://github.com/FudanVI/benchmarking-chinese-text-recognition)进行CRNN模型的训练和验证。 -详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/cn/datasets/chinese_text_recognition.md) +详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/zh/datasets/chinese_text_recognition.md) ### 模型训练验证 @@ -404,16 +404,16 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/crnn/ ### 使用自定义数据集进行训练 -您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/cn/tutorials/training_recognition_custom_dataset_CN.md)。 +您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/zh/tutorials/training_recognition_custom_dataset_CN.md)。 ## 6. MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: **1. 模型导出** -请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config crnn_resnet34 --data_shape 32 100 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -425,11 +425,11 @@ python tools/export.py --model_name_or_config configs/rec/crnn/crnn_resnet34.yam **2. 环境搭建** -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 **3. 模型转换** -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 **4. 执行推理** diff --git a/configs/rec/master/README.md b/configs/rec/master/README.md index 1e367cf3f..c87cdf199 100644 --- a/configs/rec/master/README.md +++ b/configs/rec/master/README.md @@ -298,7 +298,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size.
+- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -309,7 +309,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/master/master_resnet31.yaml ``` @@ -319,7 +319,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/maste If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/master/master_resnet31.yaml ``` @@ -366,7 +366,7 @@ To inference with MindSpot Lite on Ascend 310, please refer to the tutorial [Min **1. Model Export** -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config master_resnet31 --data_shape 48 160 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -379,11 +379,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T **2. Environment Installation** -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. **3. Model Conversion** -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. **4. 
Inference** diff --git a/configs/rec/master/README_CN.md b/configs/rec/master/README_CN.md index 6bc54222d..fc58683e6 100644 --- a/configs/rec/master/README_CN.md +++ b/configs/rec/master/README_CN.md @@ -297,7 +297,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -308,7 +308,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/master/master_resnet31.yaml ``` @@ -318,7 +318,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/maste 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/master/master_resnet31.yaml ``` @@ -362,11 +362,11 @@ Mindocr内置了一部分字典,均放在了 `mindocr/utils/dict/` 位置, ## 5. MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: **1. 模型导出** -请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config master_resnet31 --data_shape 48 160 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -378,11 +378,11 @@ python tools/export.py --model_name_or_config configs/rec/master/master_resnet31 **2. 环境搭建** -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 **3. 模型转换** -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 **4. 执行推理** diff --git a/configs/rec/rare/README.md b/configs/rec/rare/README.md index 5ca8f206f..fd1b301da 100644 --- a/configs/rec/rare/README.md +++ b/configs/rec/rare/README.md @@ -263,7 +263,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -274,7 +274,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe.
For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/rare/rare_resnet34.yaml ``` @@ -284,7 +284,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/rare/ If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/rare/rare_resnet34.yaml ``` @@ -365,7 +365,7 @@ To inference with MindSpot Lite on Ascend 310, please refer to the tutorial [Min **1. Model Export** -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config rare_resnet34 --data_shape 32 100 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -378,11 +378,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T **2. Environment Installation** -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. **3. Model Conversion** -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. **4. 
Inference** diff --git a/configs/rec/rare/README_CN.md b/configs/rec/rare/README_CN.md index 8e7abc006..7cac7f0a1 100644 --- a/configs/rec/rare/README_CN.md +++ b/configs/rec/rare/README_CN.md @@ -262,7 +262,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 ### 3.2 模型训练 @@ -273,7 +273,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/rare/rare_resnet34.yaml ``` @@ -283,7 +283,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/rare/ 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/rare/rare_resnet34.yaml ``` @@ -332,7 +332,7 @@ Mindocr内置了一部分字典,均放在了 `mindocr/utils/dict/` 位置, 我们采用公开的中文基准数据集[Benchmarking-Chinese-Text-Recognition](https://github.com/FudanVI/benchmarking-chinese-text-recognition)进行RARE模型的训练和验证。 -详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/cn/datasets/chinese_text_recognition.md) +详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/zh/datasets/chinese_text_recognition.md) ### 模型训练验证 @@ -346,7 +346,7 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/rare/ | **语种** | **数据说明** | | :------: | :------: | -| 中文 | [中文识别数据集](../../../docs/cn/datasets/chinese_text_recognition.md) | +| 中文 | [中文识别数据集](../../../docs/zh/datasets/chinese_text_recognition.md) | ### 评估结果和预训练权重 模型训练完成后,在测试集不同场景上的准确率评估结果如下。相应的模型配置和预训练权重可通过表中链接下载。 @@ -361,16 +361,16 @@ mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/rec/rare/ - RARE的MindIR导出时的输入Shape均为(1, 3, 32, 320),只能在昇腾卡上使用。 ### 使用自定义数据集进行训练 -您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/cn/tutorials/training_recognition_custom_dataset_CN.md)。 +您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/zh/tutorials/training_recognition_custom_dataset_CN.md)。 ## 6. MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: **1. 模型导出** -请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config rare_resnet34 --data_shape 32 100 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -382,11 +382,11 @@ python tools/export.py --model_name_or_config configs/rec/rare/rare_resnet34.yam **2. 环境搭建** -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 **3. 模型转换** -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 **4.
执行推理** diff --git a/configs/rec/robustscanner/README.md b/configs/rec/robustscanner/README.md index 2bbb8e3d7..42c73df99 100644 --- a/configs/rec/robustscanner/README.md +++ b/configs/rec/robustscanner/README.md @@ -294,7 +294,7 @@ eval: ... ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -305,7 +305,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/robustscanner/robustscanner_resnet31.yaml ``` @@ -315,7 +315,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/robus If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/robustscanner/robustscanner_resnet31.yaml ``` diff --git a/configs/rec/robustscanner/README_CN.md b/configs/rec/robustscanner/README_CN.md index 43f367bfd..5d4ff505c 100644 --- a/configs/rec/robustscanner/README_CN.md +++ b/configs/rec/robustscanner/README_CN.md @@ -294,10 +294,10 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size * num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 +- 由于全局批大小 (batch_size * num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或根据新的全局批大小线性调整学习率。 **使用自定义数据集进行训练** -- 您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/cn/tutorials/training_recognition_custom_dataset.md)。 +- 您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/zh/tutorials/training_recognition_custom_dataset.md)。 ### 3.2 模型训练 @@ -308,7 +308,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/robustscanner/robustscanner_resnet31.yaml ``` @@ -318,7 +318,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/robus 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/robustscanner/robustscanner_resnet31.yaml ``` diff --git a/configs/rec/svtr/README.md b/configs/rec/svtr/README.md index f7842a32e..bf6cacb2a 100644 --- a/configs/rec/svtr/README.md +++ b/configs/rec/svtr/README.md @@ -285,7 +285,7 @@ eval: ``` **Notes:** -- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different
number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size. +- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size. ### 3.2 Model Training @@ -296,7 +296,7 @@ eval: It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please modify the configuration parameter `distribute` as True and run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple Ascend devices mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/svtr_tiny.yaml ``` @@ -306,7 +306,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/ If you want to train or finetune the model on a smaller dataset without distributed training, please modify the configuration parameter`distribute` as False and run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a CPU/Ascend device python tools/train.py --config configs/rec/svtr/svtr_tiny.yaml ``` @@ -386,7 +386,7 @@ To inference with MindSpot Lite on Ascend 310, please refer to the tutorial [Min **1. Model Export** -Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../README.md) tutorial and use the following command to export the trained ckpt model to MindIR file: +Please [download](#2-results) the exported MindIR file first, or refer to the [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export) tutorial and use the following command to export the trained ckpt model to MindIR file: ```shell python tools/export.py --model_name_or_config svtr_tiny --data_shape 64 256 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -399,11 +399,11 @@ The `data_shape` is the model input shape of height and width for MindIR file. T **2. Environment Installation** -Please refer to [Environment Installation](../../../docs/en/inference/environment.md#2-mindspore-lite-inference) tutorial to configure the MindSpore Lite inference environment. +Please refer to [Environment Installation](../../../docs/en/inference/environment.md) tutorial to configure the MindSpore Lite inference environment. **3. Model Conversion** -Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#1-mindocr-models), +Please refer to [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert), and use the `converter_lite` tool for offline conversion of the MindIR file. **4. 
Inference** diff --git a/configs/rec/svtr/README_CN.md b/configs/rec/svtr/README_CN.md index a0579ac82..cc306abce 100644 --- a/configs/rec/svtr/README_CN.md +++ b/configs/rec/svtr/README_CN.md @@ -283,7 +283,7 @@ eval: ``` **注意:** -- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或将学习率线性调整为新的全局批大小。 +- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,调整`batch_size`以保持全局批大小不变,或将学习率线性调整为新的全局批大小。 ### 3.2 模型训练 @@ -294,7 +294,7 @@ eval: 使用预定义的训练配置可以轻松重现报告的结果。对于在多个昇腾910设备上的分布式训练,请将配置参数`distribute`修改为True,并运行: ```shell -# 在多个 GPU/Ascend 设备上进行分布式训练 +# 在多个 Ascend 设备上进行分布式训练 mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/svtr_tiny.yaml ``` @@ -304,7 +304,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/ 如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`distribute`修改为False 并运行: ```shell -# CPU/GPU/Ascend 设备上的单卡训练 +# CPU/Ascend 设备上的单卡训练 python tools/train.py --config configs/rec/svtr/svtr_tiny.yaml ``` @@ -353,7 +353,7 @@ Mindocr内置了一部分字典,均放在了 `mindocr/utils/dict/` 位置, 我们采用公开的中文基准数据集[Benchmarking-Chinese-Text-Recognition](https://github.com/FudanVI/benchmarking-chinese-text-recognition)进行SVTR模型的训练和验证。 -详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/cn/datasets/chinese_text_recognition.md) +详细的数据准备和config文件配置方式, 请参考 [中文识别数据集准备](../../../docs/zh/datasets/chinese_text_recognition.md) ### 模型训练验证 @@ -374,16 +374,16 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/ ### 使用自定义数据集进行训练 -您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/cn/tutorials/training_recognition_custom_dataset_CN.md)。 +您可以在自定义的数据集基于提供的预训练权重进行微调训练, 以在特定场景获得更高的识别准确率,具体步骤请参考文档 [使用自定义数据集训练识别网络](../../../docs/zh/tutorials/training_recognition_custom_dataset_CN.md)。 ## 6. MindSpore Lite 推理 -请参考[MindOCR 推理](../../../docs/cn/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: +请参考[MindOCR 推理](../../../docs/zh/inference/inference_tutorial.md)教程,基于MindSpore Lite在Ascend 310上进行模型的推理,包括以下步骤: **1. 模型导出** -请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../README.md)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: +请先[下载](#2-评估结果)已导出的MindIR文件,或者参考[模型导出](../../../docs/zh/inference/convert_tutorial.md#1-模型导出)教程,使用以下命令将训练完成的ckpt导出为MindIR文件: ```shell python tools/export.py --model_name_or_config svtr_tiny --data_shape 64 256 --local_ckpt_path /path/to/local_ckpt.ckpt @@ -395,11 +395,11 @@ python tools/export.py --model_name_or_config configs/rec/svtr/svtr_tiny.yaml -- **2. 环境搭建** -请参考[环境安装](../../../docs/cn/inference/environment.md#2-mindspore-lite推理)教程,配置MindSpore Lite推理运行环境。 +请参考[环境安装](../../../docs/zh/inference/environment.md)教程,配置MindSpore Lite推理运行环境。 **3. 模型转换** -请参考[模型转换](../../../docs/cn/inference/convert_tutorial.md#1-mindocr模型)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 +请参考[模型转换](../../../docs/zh/inference/convert_tutorial.md#2-mindspore-lite-mindir-转换)教程,使用`converter_lite`工具对MindIR模型进行离线转换。 **4.
diff --git a/configs/rec/svtr/README_CN_PP-OCRv3.md b/configs/rec/svtr/README_CN_PP-OCRv3.md
index 816a7af93..e32619ce2 100644
--- a/configs/rec/svtr/README_CN_PP-OCRv3.md
+++ b/configs/rec/svtr/README_CN_PP-OCRv3.md
@@ -289,10 +289,10 @@ model:

* 分布式训练

-在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备或着GPU卡的分布式训练,请将配置参数`system.distribute`修改为True, 例如:
+在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备上的分布式训练,请将配置参数`system.distribute`修改为True,例如:

```shell
-# 在多个 GPU/Ascend 设备上进行分布式训练
+# 在多个 Ascend 设备上进行分布式训练
mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/svtr_ppocrv3_ch.yaml
```

@@ -302,7 +302,7 @@ mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/svtr/
如果要在没有分布式训练的情况下在较小的数据集上训练模型,请将配置参数`distribute`修改为False 并运行:

```shell
-# CPU/GPU/Ascend 设备上的单卡训练
+# CPU/Ascend 设备上的单卡训练
python tools/train.py --config configs/rec/svtr/svtr_ppocrv3_ch.yaml
```

diff --git a/configs/rec/visionlan/README.md b/configs/rec/visionlan/README.md
index f49728fa6..1b8ebe50f 100644
--- a/configs/rec/visionlan/README.md
+++ b/configs/rec/visionlan/README.md
@@ -202,7 +202,7 @@ common:
```
**Notes:**
-- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of GPUs/NPUs, or adjust the learning rate linearly to a new global batch size.
+- As the global batch size (batch_size x num_devices) is important for reproducing the result, please adjust `batch_size` accordingly to keep the global batch size unchanged for a different number of NPUs, or adjust the learning rate linearly to a new global batch size.

### 3.4 Training

@@ -267,7 +267,7 @@ python tools/export.py --model_name_or_config visionlan_resnet45 --data_shape 64

This command will save a `visionlan_resnet45.mindir` under the current working directory.

-> Learn more about [Model Export](https://github.com/mindspore-lab/mindocr/blob/main/docs/en/inference/convert_tutorial.md#11-model-export).
+> Learn more about [Model Export](../../../docs/en/inference/convert_tutorial.md#1-model-export).

### 4.2 MindSpore Lite Converter Tool

@@ -288,7 +288,7 @@ converter_lite \

Running this command will save a `visionlan_resnet45_lite.mindir` under the current working directory. This is the MindSpore Lite MindIR file that we can run inference with on the Ascend310 or 310P platform. You can also define a different file name by changing the `--outputFile` argument.

-> Learn more about [Model Conversion](https://github.com/mindspore-lab/mindocr/blob/main/docs/en/inference/convert_tutorial.md#12-model-conversion).
+> Learn more about [Model Conversion](../../../docs/en/inference/convert_tutorial.md#2-mindspore-lite-mindir-convert).

### 4.3 Inference on A Folder of Images
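The folder-inference heading that closes the hunk above pairs naturally with `deploy/py_infer`; a minimal sketch, where the `--rec_model_path`, `--rec_model_name_or_config` and `--res_save_dir` flags and all paths are assumptions rather than content of this diff:

```shell
# sketch: recognition-only inference over a folder of cropped word images
python deploy/py_infer/infer.py \
    --backend=lite \
    --device=Ascend \
    --device_id=0 \
    --input_images_dir=/path/to/word_images \
    --rec_model_path=/path/to/visionlan_resnet45_lite.mindir \
    --rec_model_name_or_config=configs/rec/visionlan/visionlan_resnet45_LA.yaml \
    --res_save_dir=./rec_results
```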
diff --git a/configs/rec/visionlan/README_CN.md b/configs/rec/visionlan/README_CN.md
index 588568826..182d5d9dd 100644
--- a/configs/rec/visionlan/README_CN.md
+++ b/configs/rec/visionlan/README_CN.md
@@ -183,7 +183,7 @@ common:
```
**注意:**
-- 由于全局批大小 (batch_size x num_devices) 是对结果复现很重要,因此当GPU/NPU卡数发生变化时,调整batch_size以保持全局批大小不变,或将学习率线性调整为新的全局批大小。
+- 由于全局批大小 (batch_size x num_devices) 对结果复现很重要,因此当NPU卡数发生变化时,请调整batch_size以保持全局批大小不变,或将学习率线性调整为新的全局批大小。

### 3.4 训练

@@ -195,7 +195,7 @@ LF_2:训练MLM并微调骨干网络和VRM
LA:使用MLM生成的掩码遮挡特征图,训练骨干网络、MLM和VRM
```

-我们接下来使用分布式训练进行这三个步骤。对于单卡训练,请参考[识别教程](../../../docs/cn/tutorials/training_recognition_custom_dataset.md#单卡训练)。
+我们接下来使用分布式训练进行这三个步骤。对于单卡训练,请参考[识别教程](../../../docs/zh/tutorials/training_recognition_custom_dataset.md#单卡训练)。

```shell
mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/rec/visionlan/visionlan_resnet45_LF_1.yaml
diff --git a/configs/table/README.md b/configs/table/README.md
index 3e2192af3..072cb65bc 100644
--- a/configs/table/README.md
+++ b/configs/table/README.md
@@ -147,7 +147,7 @@ python tools/train.py --config configs/table/table_master.yaml
Please set `distribute` in yaml config file to be True.

```shell
-# n is the number of GPUs/NPUs
+# n is the number of NPUs
mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/table/table_master.yaml
```
diff --git a/configs/table/README_CN.md b/configs/table/README_CN.md
index f5a48dd01..b89d97bc6 100644
--- a/configs/table/README_CN.md
+++ b/configs/table/README_CN.md
@@ -142,7 +142,7 @@ python tools/train.py --config configs/table/table_master.yaml
请确保yaml文件中的`distribute`参数为True。

```shell
-# n is the number of GPUs/NPUs
+# n is the number of NPUs
mpirun --allow-run-as-root -n 8 python tools/train.py --config configs/table/table_master.yaml
```
diff --git a/deploy/py_infer/example/infer_args.py b/deploy/py_infer/example/infer_args.py
index 8e6a9df99..c984d0f65 100644
--- a/deploy/py_infer/example/infer_args.py
+++ b/deploy/py_infer/example/infer_args.py
@@ -22,7 +22,7 @@ def get_args():
        type=str.lower,
        default="lite",
        required=False,
-        choices=["acl", "lite"],
+        choices=["lite"],
        help="Inference backend type.",
    )
    parser.add_argument("--device", type=str, default="Ascend", required=False, choices=["Ascend"], help="Device type.")
diff --git a/deploy/py_infer/src/core/model/backend/__init__.py b/deploy/py_infer/src/core/model/backend/__init__.py
index 7509a9417..7383de3bf 100644
--- a/deploy/py_infer/src/core/model/backend/__init__.py
+++ b/deploy/py_infer/src/core/model/backend/__init__.py
@@ -1,4 +1,3 @@
from .lite_model import LiteModel
-from .mindx_model import MindXModel

-__all__ = ["LiteModel", "MindXModel"]
+__all__ = ["LiteModel"]
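The narrowed `choices` list above changes the CLI contract; a sketch of the resulting argparse behavior (remaining required arguments omitted):

```shell
# sketch: argparse now rejects the removed backend at parse time
python deploy/py_infer/infer.py --backend acl
# expected: error: argument --backend: invalid choice: 'acl' (choose from 'lite')
python deploy/py_infer/infer.py --backend lite   # parses; lite is also the default
```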
diff --git a/deploy/py_infer/src/core/model/backend/mindx_model.py b/deploy/py_infer/src/core/model/backend/mindx_model.py
deleted file mode 100644
index 28612b1fb..000000000
--- a/deploy/py_infer/src/core/model/backend/mindx_model.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import List
-
-import numpy as np
-
-from ....utils import suppress_stdout
-from .model_base import ModelBase
-
-
-class MindXModel(ModelBase):
-    def __init__(self, model_path, device, device_id):
-        if device.lower() != "ascend":
-            raise ValueError(f"ACL inference only support Ascend device, but got {device}.")
-
-        super().__init__(model_path, device, device_id)
-
-    def _init_model(self):
-        global base, Tensor
-        with suppress_stdout():
-            from mindx.sdk import Tensor, base
-
-        base.mx_init()
-
-        self.model = base.model(self.model_path, self.device_id)
-        if not self.model:
-            raise ValueError(f"The model file {self.model_path} load failed.")
-
-        # dynamic batch size/image size name: ascend_mbatch_shape_data
-        # dynamic aipp name: ascend_dynamic_aipp_data
-        # TODO: self._input_num remove dynamic aipp input_num 1.
-        self._input_num = self.model.input_num - 1 if self.model.model_gear() else self.model.input_num
-        self._input_shape = [self.model.input_shape(i) for i in range(self._input_num)]
-        self._input_dtype = [self.__dtype_to_nptype(self.model.input_dtype(i)) for i in range(self._input_num)]
-
-    def infer(self, inputs: List[np.ndarray]):
-        inputs = [Tensor(input) for input in inputs]
-        outputs = self.model.infer(inputs)
-        list([output.to_host() for output in outputs])
-        outputs = [np.array(output) for output in outputs]
-        return outputs
-
-    def get_gear(self):
-        gears = self.model.model_gear()
-
-        # TODO: shape gear don't support for multi input
-        if self._input_num > 1 and gears:
-            raise ValueError(
-                f"Shape gear don‘t support model input_num > 1 currently, \
-                but got input_num = {self._input_num} for {self.model_path}!"
-            )
-
-        # dynamic shape or static shape
-        if not gears:
-            return gears
-
-        # TODO: only support NCHW format for shape gear
-        # dynamic_dims
-        if len(gears[0]) == 4:
-            return gears
-
-        # dynamic_batch_size
-        if len(gears[0]) == 1:
-            chw = self.input_shape[1:]
-            return [gear + chw for gear in gears]
-
-        # dynamic_image_size
-        if len(gears[0]) == 2:
-            nc = self.input_shape[:2]
-            return [nc + gear for gear in gears]
-
-        raise ValueError(f"Get gear value failed for {self.model_path}. Please Check ATC conversion process!")
-
-    def __dtype_to_nptype(self, type_):
-        dtype = base.dtype
-
-        return {
-            dtype.bool: np.bool_,
-            dtype.int8: np.int8,
-            dtype.int16: np.int16,
-            dtype.int32: np.int32,
-            dtype.int64: np.int64,
-            dtype.uint8: np.uint8,
-            dtype.uint16: np.uint16,
-            dtype.uint32: np.uint32,
-            dtype.uint64: np.uint64,
-            dtype.float16: np.float16,
-            dtype.float32: np.float32,
-            dtype.double: np.float64,
-        }[type_]
diff --git a/deploy/py_infer/src/core/model/model.py b/deploy/py_infer/src/core/model/model.py
index 9a809533f..adb16b08f 100644
--- a/deploy/py_infer/src/core/model/model.py
+++ b/deploy/py_infer/src/core/model/model.py
@@ -2,12 +2,12 @@

import numpy as np

-from .backend import LiteModel, MindXModel
+from .backend import LiteModel
from .shape import ShapeType

__all__ = ["Model"]

-_INFER_BACKEND_MAP = {"acl": MindXModel, "lite": LiteModel}
+_INFER_BACKEND_MAP = {"lite": LiteModel}


class Model:
diff --git a/deploy/py_infer/src/infer_args.py b/deploy/py_infer/src/infer_args.py
index fc7285939..ba7dd27e3 100644
--- a/deploy/py_infer/src/infer_args.py
+++ b/deploy/py_infer/src/infer_args.py
@@ -26,7 +26,7 @@ def get_args():
        type=str.lower,
        default="lite",
        required=False,
-        choices=["acl", "lite"],
+        choices=["lite"],
        help="Inference backend type.",
    )
    parser.add_argument("--device", type=str, default="Ascend", required=False, choices=["Ascend"], help="Device type.")
diff --git a/docs/en/inference/convert_dynamic.md b/docs/en/inference/convert_dynamic.md
index bcd3d397d..779ede5eb 100644
--- a/docs/en/inference/convert_dynamic.md
+++ b/docs/en/inference/convert_dynamic.md
@@ -110,7 +110,7 @@ The output is a single MindIR model: `model_static.mindir`
| input_shape | None | Y | model input shape, NCHW format |
| data_path | None | N | Path to image folder or annotation file |
| input_name | x | N | model input name |
-| backend | lite | N | converter backend, lite or acl |
+| backend | lite | N | converter backend |
| output_path | ./output | N | Path to output model |
| soc_version | Ascend310P3 | N | soc_version for Ascend,Ascend310P3 or Ascend310 |
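The table above documents the dynamic-shape `converter.py` helper; a heavily hedged sketch, where the script location and the `--model_path` flag are assumptions and only the tabled parameters appear in this diff:

```shell
# sketch: fold dynamic shapes into a static MindIR (assumed location: deploy/models_utils/auto_scaling)
python converter.py \
    --model_path=/path/to/model.mindir \
    --input_shape=-1,3,-1,-1 \
    --data_path=/path/to/images_or_labels \
    --input_name=x \
    --backend=lite \
    --output_path=./output \
    --soc_version=Ascend310P3
```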
diff --git a/docs/en/inference/inference_tutorial.md b/docs/en/inference/inference_tutorial.md
index f718f265d..5ad1787ac 100644
--- a/docs/en/inference/inference_tutorial.md
+++ b/docs/en/inference/inference_tutorial.md
@@ -178,7 +178,7 @@ word_1814.png "cathay"
| input_images_dir | str | None | Image or folder path for inference |
| device | str | Ascend | Device type, support Ascend |
| device_id | int | 0 | Device id |
-| backend | str | lite | Inference backend, support acl, lite |
+| backend | str | lite | Inference backend, support lite |
| parallel_num | int | 1 | Number of parallel in each stage of pipeline parallelism |
| precision_mode | str | None | Precision mode, only supports setting by [Model Conversion](convert_tutorial.md) currently, and it takes no effect here |
diff --git a/docs/en/tutorials/distribute_train.md b/docs/en/tutorials/distribute_train.md
index 4b099bd22..529dbf2cd 100644
--- a/docs/en/tutorials/distribute_train.md
+++ b/docs/en/tutorials/distribute_train.md
@@ -2,7 +2,6 @@

This document provides a tutorial on distributed parallel training. There are two ways to train on the Ascend AI processor: by running scripts with OpenMPI or configuring `RANK_TABLE_FILE` for training.
-On GPU processors, scripts can be run with OpenMPI for training.

> Please ensure that the `distribute` parameter in the yaml file is set to `True` before running the following commands for distributed training.

@@ -12,8 +11,6 @@
- [1.2 Configure RANK\_TABLE\_FILE for training](#12-configure-rank_table_file-for-training)
- [1.2.1 Running on Eight (All) Devices](#121-running-on-eight-all-devices)
- [1.2.2 Running on Four (Partial) Devices](#122-running-on-four-partial-devices)
-  - [2. GPU](#2-gpu)
-  - [2.1 Run scripts with OpenMPI](#21-run-scripts-with-openmpi)

## 1. Ascend

@@ -226,22 +223,3 @@ done

Note that the `DEVICE_ID` and `RANK_ID` should be matched with `hccl_4p_4567_127.0.0.1.json`.
-
-## 2. GPU
-
-### 2.1 Run scripts with OpenMPI
-
-On GPU hardware platform, only OpenMPI's `mpirun` can be used for distributed training. The following command will run training on devices `0` and `1`.
-
-
-```shell
-# n is the number of GPUs used in training
-mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
-
-In the case when users want to run training on `device 2` and `device 3`, users can run `export CUDA_VISIBLE_DEVICES=2,3` before running the command above, or run the following command:
-
-```shell
-# n is the number of GPUs used in training
-CUDA_VISIBLE_DEVICES=2,3 mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
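With the GPU/OpenMPI section removed, the `RANK_TABLE_FILE` route is the one the tutorial above keeps for device selection; a launch sketch reusing the file names that appear elsewhere in this diff (`hccl_tools.py` is assumed to come from the MindSpore models repository):

```shell
# sketch: generate a rank table for devices 4-7, then expose it to the training processes
python hccl_tools.py --device_num "[4,8)"
export RANK_TABLE_FILE=$(pwd)/hccl_4p_4567_127.0.0.1.json
# each worker then sets DEVICE_ID/RANK_ID to match the json before running tools/train.py
```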
diff --git a/docs/en/tutorials/training_detection_custom_dataset.md b/docs/en/tutorials/training_detection_custom_dataset.md
index 5426cbddd..2f69ee689 100644
--- a/docs/en/tutorials/training_detection_custom_dataset.md
+++ b/docs/en/tutorials/training_detection_custom_dataset.md
@@ -18,7 +18,7 @@ This document provides tutorials on how to train text detection networks using c
- [3.3 Inference](#33-inference)
- [3.3.1 Environment Preparation](#331-environment-preparation)
- [3.3.2 Model Conversion](#332-model-conversion)
-  - [3.3.3 Inference (Python)](#333-inference-python)
+  - [3.3.3 Inference](#333-inference)

## 1. Dataset preparation

@@ -254,23 +254,17 @@ python tools/train.py -c=configs/det/dbnet/db_r50_icdar15.yaml

* Distributed training

-In distributed training, `distribute` in yaml config file should be True. On both GPU and Ascend devices, users can use `mpirun` to launch distributed training. For example, using `device:0` and `device:1` to train:
+In distributed training, `distribute` in yaml config file should be True. On Ascend devices, users can use `mpirun` to launch distributed training. For example, using `device:0` and `device:1` to train:

```shell
-# n is the number of GPUs/NPUs
+# n is the number of NPUs
mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```

Sometimes, users may want to specify the device ids to run distributed training, for example, `device:2` and `device:3`.

-
-On GPU devices, before running the `mpirun` command above, users can run the following command:
-
-```shell
-export CUDA_VISIBLE_DEVICES=2,3
-```
-
On Ascend devices, users should create a `rank_table.json` like this:
+
```json
{
  "version": "1.0",
...
}
```
+
To get the `device_ip` of the target device, run `cat /etc/hccn.conf` and look for the value of `address_x`, which is the ip address. More details can be found in [distributed training tutorial](distribute_train.md).

### 3.2 Evaluation

@@ -312,17 +307,15 @@ python tools/eval.py -c=configs/det/dbnet/db_r50_icdar15.yaml \

### 3.3 Inference

-MindOCR inference supports Ascend310/Ascend310P devices, supports [MindSpore Lite](https://www.mindspore.cn/lite) and
-[ACL](https://www.hiascend.com/document/detail/zh/canncommercial/63RC1/inferapplicationdev/aclcppdevg/aclcppdevg_000004.html)
-inference backend. [Inference Tutorial](../inference/inference_tutorial.md) gives detailed steps on how to run inference with MindOCR, which include mainly three steps: environment preparation, model conversion, and inference.
+MindOCR inference supports Ascend310/Ascend310P devices and adopts the [MindSpore Lite](https://www.mindspore.cn/lite) inference backend. The [Inference Tutorial](../inference/inference_tutorial.md) gives detailed steps on how to run inference with MindOCR, which mainly include three steps: environment preparation, model conversion, and inference.

#### 3.3.1 Environment Preparation

-Please refer to the [environment installation](../inference/environment.md) for more information, and pay attention to selecting the ACL/Lite environment based on the model.
+Please refer to the [environment installation](../inference/environment.md) for more information.

#### 3.3.2 Model Conversion

-Before runing infernence, users need to export a MindIR file from the trained checkpoint. [MindSpore IR (MindIR)](https://www.mindspore.cn/docs/en/r2.0/design/mindir.html) is a function-style IR based on graph representation. The MindIR filew stores the model structure and weight parameters needed for inference.
+Before running inference, users need to export a MindIR file from the trained checkpoint. [MindSpore IR (MindIR)](https://www.mindspore.cn/docs/en/r2.2/design/mindir.html) is a function-style IR based on graph representation. The MindIR file stores the model structure and weight parameters needed for inference.

Given the trained dbnet checkpoint file, users can use the following commands to export MindIR:
@@ -336,7 +329,7 @@ The `data_shape` is the model input shape of height and width for MindIR file. I

Please refer to the [Conversion Tutorial](../inference/convert_tutorial.md) for more details about model conversion.

-#### 3.3.3 Inference (Python)
+#### 3.3.3 Inference

After model conversion, the `output.mindir` is obtained. Users can go to the `deploy/py_infer` directory, and use the following command for inference:
diff --git a/docs/en/tutorials/training_recognition_custom_dataset.md b/docs/en/tutorials/training_recognition_custom_dataset.md
index 0d18b0fef..ba6fadedb 100644
--- a/docs/en/tutorials/training_recognition_custom_dataset.md
+++ b/docs/en/tutorials/training_recognition_custom_dataset.md
@@ -215,10 +215,10 @@ If users do not need to use the pre-trained model, they can simply delete `model

#### Distributed Training

-In the case of a large amount of data, we recommend that users use distributed training. For distributed training across multiple Ascend 910 devices or GPU devices, please modify the configuration parameter `system.distribute` to True, for example:
+In the case of a large amount of data, we recommend that users use distributed training. For distributed training across multiple Ascend 910 devices, please modify the configuration parameter `system.distribute` to True, for example:

```shell
-# To perform distributed training on 4 GPU/Ascend devices
+# To perform distributed training on 4 Ascend devices
mpirun -n 4 python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yaml
```

@@ -227,7 +227,7 @@ mpirun -n 4 python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yam
If you want to train or fine-tune the model on a smaller dataset without distributed training, please modify the configuration parameter `system.distribute` to `False` and run:

```shell
-# Training on single CPU/GPU/Ascend devices
+# Training on a single CPU/Ascend device
python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yaml
```
diff --git a/docs/en/tutorials/yaml_configuration.md b/docs/en/tutorials/yaml_configuration.md
index fa1d5e192..8bb044337 100644
--- a/docs/en/tutorials/yaml_configuration.md
+++ b/docs/en/tutorials/yaml_configuration.md
@@ -23,7 +23,7 @@ This document takes `configs/rec/crnn/crnn_icdar15.yaml` as an example to descri
| mode | MindSpore running mode (static graph/dynamic graph) | 0 | 0 / 1 | 0: means running in GRAPH_MODE mode; 1: PYNATIVE_MODE mode |
| distribute | Whether to enable parallel training | True | True / False | \ |
| device_id | Specify the device id while standalone training | 7 | The ids of all devices in the server | Only valid when distribute=False (standalone training) and environment variable 'DEVICE_ID' is NOT set. While standalone training, if both this arg and environment variable 'DEVICE_ID' are NOT set, use device 0 by default. |
-| amp_level | Mixed precision mode | O0 | O0/O1/O2/O3 | 'O0' - no change.<br/> 'O1' - convert the cells and operations in the whitelist to float16 precision, and keep the rest in float32 precision.<br/> 'O2' - Keep the cells and operations in the blacklist with float32 precision, and convert the rest to float16 precision.<br/> 'O3' - Convert all networks to float16 precision.<br/> Notice: Model prediction or evaluation does not support 'O3' on GPU platform. If amp_level is set to 'O3' for model prediction and evaluation on GPU platform, the program will switch it to 'O2' automatically.|
+| amp_level | Mixed precision mode | O0 | O0/O1/O2/O3 | 'O0' - no change.<br/> 'O1' - convert the cells and operations in the whitelist to float16 precision, and keep the rest in float32 precision.<br/> 'O2' - Keep the cells and operations in the blacklist with float32 precision, and convert the rest to float16 precision.<br/> 'O3' - Convert all networks to float16 precision.<br/> |
| seed | Random seed | 42 | Integer | \ |
| ckpt_save_policy | The policy for saving model weights | top_k | "top_k" or "latest_k" | "top_k" means to keep the top k checkpoints according to the metric score; "latest_k" means to keep the last k checkpoints. The value of `k` is set via `ckpt_max_keep` |
| ckpt_max_keep | The maximum number of checkpoints to keep during training | 5 | Integer | \ |
diff --git a/docs/zh/inference/convert_dynamic.md b/docs/zh/inference/convert_dynamic.md
index 8301841ca..8f3a465af 100644
--- a/docs/zh/inference/convert_dynamic.md
+++ b/docs/zh/inference/convert_dynamic.md
@@ -97,7 +97,7 @@ python converter.py \
| input_shape | 无 | 是 | 模型输入shape,NCHW格式 |
| data_path | 无 | 否 | 数据集或标注文件的路径 |
| input_name | x | 否 | 模型的输入名 |
-| backend | lite | 否 | 转换工具, lite或者acl |
+| backend | lite | 否 | 转换工具 |
| output_path | ./output | 否 | 输出模型保存文件夹 |
| soc_version | Ascend310P3 | 否 | Ascend的soc型号,Ascend310P3或Ascend310 |
diff --git a/docs/zh/inference/inference_tutorial.md b/docs/zh/inference/inference_tutorial.md
index ea71e0066..91f055c9a 100644
--- a/docs/zh/inference/inference_tutorial.md
+++ b/docs/zh/inference/inference_tutorial.md
@@ -178,7 +178,7 @@ word_1814.png "cathay"
| input_images_dir | str | 无 | 单张图像或者图片文件夹 |
| device | str | Ascend | 推理设备名称,支持:Ascend |
| device_id | int | 0 | 推理设备id |
-| backend | str | lite | 推理后端,支持:acl, lite |
+| backend | str | lite | 推理后端 |
| parallel_num | int | 1 | 推理流水线中每个节点并行数 |
| precision_mode | str | 无 | 推理的精度模式,暂只支持在[模型转换](convert_tutorial.md)时设置,此处不生效 |
diff --git a/docs/zh/inference/windows_infer.md b/docs/zh/inference/windows_infer.md
deleted file mode 100644
index dd5023980..000000000
--- a/docs/zh/inference/windows_infer.md
+++ /dev/null
@@ -1,55 +0,0 @@
-## Windows C++推理
-### 环境配置
-1. 下载[GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download)并解压;
-2. 将GCC解压后的目录```mingw64/bin```加入到环境变量Path里;
-3. 下载[CMake](https://github.com/Kitware/CMake/releases/download/v3.18.3/cmake-3.18.3-win64-x64.msi)安装,在安装过程中注意勾选Add CMake to the system PATH for the current user,将cmake添加到Path环境变量:
-
-WechatIMG92
-
-
-4. 下载[MindSpore Lite](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.0.0/MindSpore/lite/release/windows/mindspore-lite-2.0.0-win-x64.zip)并解压;
-5. 下载[opencv-mingw预编译3.4.8 x64版本](https://github.com/huihut/OpenCV-MinGW-Build/archive/refs/tags/OpenCV-3.4.8-x64.zip),并解压;
-6. 将OpenCV解压后的```x64/mingw/bin```路径加入到环境变量Path里;
-6. 下载[MindOCR代码](https://codeload.github.com/liangxhao/mindocr/zip/refs/heads/cpp_infer)并解压;
-7. 下载[Clipper](https://udomain.dl.sourceforge.net/project/polyclipping/clipper_ver6.4.2.zip)并解压,将```cpp```目录下的```clipper.cpp```和```clipper.hpp```文件拷贝到MindOCR代码目录```deploy/cpp_infer_ddl/src/data_process/postprocess```里:
-
-WechatIMG91
-
-
-WechatIMG93
-
-8. 打开[下载页面](https://download.mindspore.cn/toolkits/mindocr/windows/):
-
-f0
-
-下载数据集[ic15.zip](https://download.mindspore.cn/toolkits/mindocr/windows/ic15.zip),[文本检测模型](https://download.mindspore.cn/toolkits/mindocr/windows/ch/ch_ppocr_server_v2.0_det_infer_cpu.ms),[文本识别模型](https://download.mindspore.cn/toolkits/mindocr/windows/ch/ch_ppocr_server_v2.0_rec_infer_argmax_cpu.ms)以及[字典文件](https://download.mindspore.cn/toolkits/mindocr/windows/ch/ppocr_keys_v1.txt)。
-
-### 推理方法
-1. 进入MindOCR代码目录```deploy/cpp_infer_ddl/src/```,修改```build.bat```中的MindSpore Lite路径以及OpenCV路径,示例如下:
-```text
-set LITE_HOME=D:\mindocr_windows\mindspore-lite-2.0.0-win-x64
-set OPENCV_DIR=D:\mindocr_windows\OpenCV-MinGW-Build-OpenCV-3.4.8-x64
-```
-**注意:在修改MindSpore_lite路径```LITE_HOME```和OpenCV路径```OPENCV_DIR```时,需要写成上述样例的反斜杠```\```写法。**
-
-2. 运行```build.bat```文件(双击打开或者命令行里输入```build.bat```并回车),等待编译完成后,在```deploy/cpp_infer_ddl/src/dist```目录下会生成```infer.exe```文件;
-
-3. build完成后使用```deploy/cpp_infer_ddl/src/infer.bat```进行推理,注意修改infer.bat里的以下参数:
-```text
-LITE_HOME=D:/mindocr_windows/mindspore-lite-2.0.0-win-x64 # mindspore lite路径
-
-OPENCV_DIR=D:/mindocr_windows/OpenCV-MinGW-Build-OpenCV-3.4.8-x64 # OpenCV路径
-
---input_images_dir D:\ic15\det\test\ch4_test_images # 测试图片目录
---det_model_path D:\models\ch_ppocr_server_v2.0_det_infer_cpu.ms # 文本检测模型目录
---rec_model_path D:\models\ch_ppocr_server_v2.0_rec_infer_argmax_cpu.ms # 文本识别模型目录
---character_dict_path D:\dict\ic15_dict.txt # 字典文件目录
-```
-
-**注意: ```LITE_HOME```和```OPENCV_DIR```需要设置成正斜杠```/```写法,infer.exe里面的路径参数都需要设置成反斜杠```\```,与如上样例保持一致**。
-
-4. 在```deploy/cpp_infer_ddl/src/```目录中,打开cmd终端,使用以下命令执行推理:
-```shell
-infer.bat
-```
-5. 推理结果存在```deploy/cpp_infer_ddl/src/dist/det_rec```目录下;
diff --git a/docs/zh/tutorials/distribute_train.md b/docs/zh/tutorials/distribute_train.md
index e8b470d14..c50a08cbd 100644
--- a/docs/zh/tutorials/distribute_train.md
+++ b/docs/zh/tutorials/distribute_train.md
@@ -1,6 +1,6 @@
# 分布式并行训练

-本文档提供分布式并行训练的教程,在Ascend处理器上有两种方式可以进行单机多卡训练,通过OpenMPI运行脚本或通过配置RANK_TABLE_FILE进行单机多卡训练。在GPU处理器上可通过OpenMPI运行脚本进行单机多卡训练。
+本文档提供分布式并行训练的教程。在Ascend处理器上,有两种方式可以进行单机多卡训练:通过OpenMPI运行脚本,或通过配置RANK_TABLE_FILE进行训练。

> 请确保在运行以下命令进行分布式训练之前,将 `yaml` 文件中的 `distribute` 参数设置为 `True`。

@@ -10,8 +10,6 @@
- [1.2 配置RANK\_TABLE\_FILE进行训练](#12-配置rank_table_file进行训练)
- [1.2.1 使用八个(全部)设备进行训练](#121-使用八个全部设备进行训练)
- [1.2.2 使用四个(部分)设备进行训练](#122-使用四个部分设备进行训练)
-  - [2. GPU](#2-gpu)
-  - [2.1 通过OpenMPI运行脚本进行训练](#21-通过openmpi运行脚本进行训练)

## 1. Ascend

@@ -136,6 +134,7 @@ done
当需要训练其他模型时,只要将脚本中的yaml config文件路径替换即可,即`python -u tools/train.py --config path/to/model_config.yaml`
此时训练已经开始,可在`train.log`中查看训练日志。
+
#### 1.2.2 使用四个(部分)设备进行训练

要在四个设备上运行训练,例如,`{4, 5, 6, 7}`,`RANK_TABLE_FILE`和运行脚本与在八个设备上运行使用的文件有所不同。

@@ -147,6 +146,7 @@ python hccl_tools.py --device_num "[4,8)"

输出为:
+
```
hccl_4p_4567_127.0.0.1.json
```

@@ -218,22 +218,3 @@ done

注意, `DEVICE_ID` 和 `RANK_ID` 的组合关系应该跟 `hccl_4p_4567_127.0.0.1.json` 文件中相吻合.
-
-## 2. GPU
-
-### 2.1 通过OpenMPI运行脚本进行训练
-
-在 GPU 硬件平台上,MindSpore也支持使用 `OpenMPI` 的 `mpirun` 命令来运行分布式训练。以下命令将在 `device 0`和 `device 1` 上运行训练。
-
-
-```shell
-# n 代表训练使用到的GPU数量
-mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
-
-如果用户想在 `device 2` 和 `device 3` 上运行训练,用户可以在运行上面的命令之前运行 `export CUDA_VISIBLE_DEVICES=2,3`,或者直接运行以下命令:
-
-```shell
-# n 代表训练使用到的GPU数量
-CUDA_VISIBLE_DEVICES=2,3 mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
-```
diff --git a/docs/zh/tutorials/frequently_asked_questions.md b/docs/zh/tutorials/frequently_asked_questions.md
index 248c03b66..e0f775598 100644
--- a/docs/zh/tutorials/frequently_asked_questions.md
+++ b/docs/zh/tutorials/frequently_asked_questions.md
@@ -99,7 +99,7 @@
该错误是`mindspore_lite`tar包中的`libascend_kernel_plugin.so`未加入到环境变量`LD_LIBRARY_PATH`导致,解决方法如下

-  1. 查看是否安装了`mindspore_lite`的**云侧推理工具包**。如果未安装,请从 [工具包tar.gz、whl包下载链接](https://gitee.com/link?target=https%3A%2F%2Fwww.mindspore.cn%2Flite%2Fdocs%2Fzh-CN%2Fmaster%2Fuse%2Fdownloads.html),下载Ascend版的云侧版本`tar.gz`包以及`whl`包安装,详细请见 [mindspore lite 安装](https://gitee.com/mindspore-lab/mindocr/blob/main/docs/cn/inference/environment.md)。
+  1. 查看是否安装了`mindspore_lite`的**云侧推理工具包**。如果未安装,请从 [工具包tar.gz、whl包下载链接](https://gitee.com/link?target=https%3A%2F%2Fwww.mindspore.cn%2Flite%2Fdocs%2Fzh-CN%2Fmaster%2Fuse%2Fdownloads.html),下载Ascend版的云侧版本`tar.gz`包以及`whl`包安装,详细请见 [mindspore lite 安装](../inference/environment.md)。

   2. 找到`mindspore_lite`的安装路径,如路径为`/your_path_to/mindspore-lite`,cd到该目录下
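Step 2 of the FAQ answer above boils down to environment variables; a sketch assuming the usual MindSpore Lite tarball layout (`runtime/lib` and `tools/converter/lib`), with the install prefix as a placeholder:

```shell
# sketch: expose libascend_kernel_plugin.so from the unpacked mindspore-lite package
export LITE_HOME=/your_path_to/mindspore-lite
export LD_LIBRARY_PATH=$LITE_HOME/runtime/lib:$LITE_HOME/tools/converter/lib:$LD_LIBRARY_PATH
export PATH=$LITE_HOME/tools/converter/converter:$PATH
```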
diff --git a/docs/zh/tutorials/training_detection_custom_dataset.md b/docs/zh/tutorials/training_detection_custom_dataset.md
index 4f83e59c5..f74c4ad25 100644
--- a/docs/zh/tutorials/training_detection_custom_dataset.md
+++ b/docs/zh/tutorials/training_detection_custom_dataset.md
@@ -17,7 +17,7 @@
- [3.3 推理](#33-推理)
- [3.3.1 环境准备](#331-环境准备)
- [3.3.2 模型转换](#332-模型转换)
-  - [3.3.3 推理 (Python)](#333-推理-python)
+  - [3.3.3 推理](#333-推理)

## 1. 数据集准备
目前,MindOCR检测网络支持两种输入格式,分别是:

@@ -253,18 +253,14 @@ python tools/train.py -c=configs/det/dbnet/db_r50_icdar15.yaml

* 分布式训练

-在分布式训练中,yaml配置文件中的`system.distribute`应该为`True`。在GPU和Ascend设备上,用户可以使用`mpirun`来启动分布式训练。例如,使用`device:0`和`device:1`进行训练:
+在分布式训练中,yaml配置文件中的`system.distribute`应该为`True`。在Ascend设备上,用户可以使用`mpirun`来启动分布式训练。例如,使用`device:0`和`device:1`进行训练:

```Shell
-# n是GPU/NPU的数量
+# n是NPU的数量
mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```

有时,用户可能想要指定设备id来进行分布式训练,例如,`device:2`和`device:3`。

-在GPU设备上,在运行上面的`mpirun`命令之前,用户可以运行以下命令:
-```
-export CUDA_VISIBLE_DEVICES=2,3
-```
在Ascend设备上,用户应该创建一个像这样的`rank_table.json`:
```json
{
@@ -306,15 +302,15 @@ python tools/eval.py -c=configs/det/dbnet/db_r50_icdar15.yaml \

### 3.3 推理

-MindOCR推理支持Ascend310/Ascend310P设备,支持[MindSpore Lite](https://www.mindspore.cn/lite)和 [ACL](https://www.hiascend.com/document/detail/zh/canncommercial/63RC1/inferapplicationdev/aclcppdevg/aclcppdevg_000004.html) 推理后端。推理教程给出了如何使用MindOCR进行推理的详细步骤,主要包括三个步骤:环境准备、模型转换和推理。
+MindOCR推理支持Ascend310/Ascend310P设备,采用[MindSpore Lite](https://www.mindspore.cn/lite)推理后端。推理教程给出了如何使用MindOCR进行推理的详细步骤,主要包括三个步骤:环境准备、模型转换和推理。

#### 3.3.1 环境准备

-请参考[环境安装](../inference/environment.md)获取更多信息,并根据模型注意选择ACL/Lite环境。
+请参考[环境安装](../inference/environment.md)获取更多信息。

#### 3.3.2 模型转换

-在运行推理之前,用户需要从训练得到的checkpoint文件导出一个MindIR文件。MindSpore IR (MindIR)是基于图形表示的函数式IR。MindIR文件存储了推理所需的模型结构和权重参数。
+在运行推理之前,用户需要从训练得到的checkpoint文件导出一个MindIR文件。[MindSpore IR (MindIR)](https://www.mindspore.cn/docs/en/r2.2/design/mindir.html)是基于图形表示的函数式IR。MindIR文件存储了推理所需的模型结构和权重参数。

根据训练好的dbnet checkpoint文件,用户可以使用以下命令导出MindIR:
```Shell

@@ -327,7 +323,7 @@ python tools/export.py --model_name_or_config configs/det/dbnet/db_r50_icdar15.y

请参考[转换教程](../inference/convert_tutorial.md)获取更多关于模型转换的细节。

-#### 3.3.3 推理 (Python)
+#### 3.3.3 推理

经过模型转换后, 用户能得到`output.mindir`文件。用户可以进入到`deploy/py_infer`目录,并使用以下命令进行推理:
diff --git a/docs/zh/tutorials/training_recognition_custom_dataset.md b/docs/zh/tutorials/training_recognition_custom_dataset.md
index 812650577..99d35b2bb 100644
--- a/docs/zh/tutorials/training_recognition_custom_dataset.md
+++ b/docs/zh/tutorials/training_recognition_custom_dataset.md
@@ -213,10 +213,10 @@ model:

#### 分布式训练

-在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备或着GPU卡的分布式训练,请将配置参数`system.distribute`修改为True, 例如:
+在大量数据的情况下,建议用户使用分布式训练。对于在多个昇腾910设备上的分布式训练,请将配置参数`system.distribute`修改为True,例如:

```shell
-# 在4个 GPU/Ascend 设备上进行分布式训练
+# 在4个 Ascend 设备上进行分布式训练
mpirun -n 4 python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yaml
```

@@ -225,7 +225,7 @@ mpirun -n 4 python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yam
如果要在没有分布式训练的情况下在较小的数据集上训练或微调模型,请将配置参数`system.distribute`修改为False 并运行:

```shell
-# CPU/GPU/Ascend 设备上的单卡训练
+# CPU/Ascend 设备上的单卡训练
python tools/train.py --config configs/rec/crnn/crnn_resnet34_ch.yaml
```
diff --git a/docs/zh/tutorials/yaml_configuration.md b/docs/zh/tutorials/yaml_configuration.md
index 192e1e791..00f5c64a9 100644
--- a/docs/zh/tutorials/yaml_configuration.md
+++ b/docs/zh/tutorials/yaml_configuration.md
@@ -24,7 +24,7 @@
| mode | MindSpore运行模式(静态图/动态图) | 0 | 0 / 1 | 0: 表示在GRAPH_MODE模式中运行; 1: PYNATIVE_MODE模式 |
| distribute | 是否开启并行训练 | True | True / False | \ |
| device_id | 指定单卡训练时的卡id | 7 | 机器可用的卡的id | 该参数仅在distribute=False(单卡训练)和环境变量DEVICE_ID未设置时生效。单卡训练时,如该参数和环境变量DEVICE_ID均未设置,则默认使用0卡。 |
-| amp_level | 混合精度模式 | O0 | O0/O1/O2/O3 | 'O0' - 不变化。<br/> 'O1' - 将白名单内的Cell和运算转为float16精度,其余部分保持float32精度。<br/> 'O2' - 将黑名单内的Cell和运算保持float32精度,其余部分转为float16精度。<br/> 'O3' - 将网络全部转为float16精度。<br/> 注意:GPU平台上的模型推理或评估暂不支持'O3'模式,如设置为'O3'模式,程序会自动将其转为'O2'模式。|
+| amp_level | 混合精度模式 | O0 | O0/O1/O2/O3 | 'O0' - 不变化。<br/> 'O1' - 将白名单内的Cell和运算转为float16精度,其余部分保持float32精度。<br/> 'O2' - 将黑名单内的Cell和运算保持float32精度,其余部分转为float16精度。<br/> 'O3' - 将网络全部转为float16精度。<br/> |
| seed | 随机种子 | 42 | Integer | \ |
| ckpt_save_policy | 模型权重保存策略 | top_k | "top_k" 或 "latest_k" | "top_k"表示保存前k个评估指标分数最高的checkpoint;"latest_k"表示保存最新的k个checkpoint。 `k`的数值通过`ckpt_max_keep`参数定义 |
| ckpt_max_keep | 最多保存的checkpoint数量 | 5 | Integer | \ |
diff --git a/mkdocs.yml b/mkdocs.yml
index 6710cccaa..224f92adc 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -9,8 +9,8 @@ nav:
  # - Installation: installation.md
  - Model Zoo:
    - Training: mkdocs/modelzoo_training.md
-    - Inference - MindOCR Models: inference/inference_quickstart.md
-    - Inference - Third-party Models: inference/inference_thirdparty_quickstart.md
+    - Inference - MindOCR Models: inference/mindocr_models_list.md
+    - Inference - Third-party Models: inference/thirdparty_models_list.md
  - Tutorials:
    - 1. Datasets:
      - Dataset Preparation: datasets/converters.md
@@ -23,7 +23,7 @@ nav:
      - Advance Training: tutorials/advanced_train.md
    - 3. Inference and Deployment:
      - Python Online Inference: mkdocs/online_inference.md
-      - Python/C++ Inference on Ascend 310: inference/inference_tutorial.md
+      - MindOCR Offline Inference: inference/inference_tutorial.md
      - MindOCR Models List: inference/mindocr_models_list.md
      - Third-party Models List: inference/thirdparty_models_list.md
      - Model Conversion: inference/convert_tutorial.md
@@ -173,7 +173,7 @@ plugins:
          Advance Training: 进阶训练
          3. Inference and Deployment: 3. 推理和部署
          Python Online Inference: 基于Python的在线推理
-          Python/C++ Inference on Ascend 310: 基于Python/C++和昇腾310的推理
+          MindOCR Offline Inference: MindOCR 离线推理
          MindOCR Models List: MindOCR模型支持列表
          Third-party Models List: 第三方模型支持列表
          Model Conversion: 模型转换
diff --git a/tools/dataset_converters/README_CN.md b/tools/dataset_converters/README_CN.md
index 8fe29b60d..f131832c5 100644
--- a/tools/dataset_converters/README_CN.md
+++ b/tools/dataset_converters/README_CN.md
@@ -1 +1 @@
-请参考[`docs/cn/datasets/converters.md`](../../docs/cn/datasets/converters.md)。
+请参考[`docs/zh/datasets/converters.md`](../../docs/zh/datasets/converters.md)。