Commit 33fa33e: Fix typos (#14936)
1 parent d28cb46
File tree: 11 files changed, +18 -18 lines changed


benchmark/PaddleOCR_DBNet/README.MD (+1 -1)

@@ -64,7 +64,7 @@ x1, y1, x2, y2, x3, y3, x4, y4, annotation
 1. config the `dataset['train']['dataset'['data_path']'`,`dataset['validate']['dataset'['data_path']`in [config/icdar2015_resnet18_fpn_DBhead_polyLR.yaml](cconfig/icdar2015_resnet18_fpn_DBhead_polyLR.yaml)
 * . single gpu train
 ```bash
-bash singlel_gpu_train.sh
+bash single_gpu_train.sh
 ```
 * . Multi-gpu training
 ```bash

benchmark/PaddleOCR_DBNet/utils/__init__.py (+1 -1)

@@ -4,5 +4,5 @@
 from .util import *
 from .metrics import *
 from .schedulers import *
-from .cal_recall.script import cal_recall_precison_f1
+from .cal_recall.script import cal_recall_precision_f1
 from .ocr_metric import get_metric
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # @Time : 1/16/19 6:40 AM
 # @Author : zhoujun
-from .script import cal_recall_precison_f1
+from .script import cal_recall_precision_f1

-__all__ = ["cal_recall_precison_f1"]
+__all__ = ["cal_recall_precision_f1"]
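The hunk above fixes both the import line and `__all__` because a star import re-exports only the names `__all__` lists; correcting one without the other would leave `from utils.cal_recall import *` exposing the misspelled name. A stdlib-only sketch of that behavior, using a hypothetical in-memory module with a dummy function body:

```python
import sys
import types

# Build a throwaway module standing in for cal_recall/__init__.py.
# The module name and function body are illustrative only.
pkg = types.ModuleType("cal_recall_demo")
exec(
    "def cal_recall_precision_f1(gt_path, result_path):\n"
    "    return {'hmean': 0.0}  # dummy result for the demo\n"
    "__all__ = ['cal_recall_precision_f1']\n",
    pkg.__dict__,
)
sys.modules["cal_recall_demo"] = pkg

# A star import copies exactly the names listed in __all__.
from cal_recall_demo import *

assert "cal_recall_precision_f1" in dir()
```

So after the rename, a caller relying on the old misspelled name via a star import would get a `NameError`, which is why the commit also updates every import site.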

benchmark/PaddleOCR_DBNet/utils/cal_recall/script.py (+1 -1)

@@ -394,7 +394,7 @@ def compute_ap(confList, matchList, numGtCare):
     return resDict


-def cal_recall_precison_f1(gt_path, result_path, show_result=False):
+def cal_recall_precision_f1(gt_path, result_path, show_result=False):
     p = {"g": gt_path, "s": result_path}
     result = rrc_evaluation_funcs.main_evaluation(
         p, default_evaluation_params, validate_data, evaluate_method, show_result
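Renaming a public function like this breaks any downstream code that still imports the old name. One transition pattern, not part of this commit and shown here with a dummy body, is to keep the misspelled name as a deprecated alias that forwards to the corrected one:

```python
import warnings

def cal_recall_precision_f1(gt_path, result_path, show_result=False):
    # Dummy body standing in for the real evaluation (illustrative only).
    return {"precision": 1.0, "recall": 1.0, "hmean": 1.0}

def cal_recall_precison_f1(*args, **kwargs):
    # Misspelled legacy name: warn once per call site, then forward.
    warnings.warn(
        "cal_recall_precison_f1 is deprecated; use cal_recall_precision_f1",
        DeprecationWarning,
        stacklevel=2,
    )
    return cal_recall_precision_f1(*args, **kwargs)

result = cal_recall_precison_f1("gt.txt", "result.txt")
print(result["hmean"])  # legacy callers keep working during the transition
```

The alias can be dropped after a deprecation window; this commit instead updates all call sites at once, which is fine inside a single repository.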

deploy/android_demo/README.md (+1 -1)

@@ -66,7 +66,7 @@ The app home page has four buttons, a drop-down list, and a menu button; they

 <img src="https://paddleocr.bj.bcebos.com/PP-OCRv2/lite/imgs/run_det_cls_rec.jpg" width="400">

-After the model finishes running, the `STATUS` field of the model and run-status display area shows the current model state; here it reads `run model successed`, indicating the model ran successfully.
+After the model finishes running, the `STATUS` field of the model and run-status display area shows the current model state; here it reads `run model succeeded`, indicating the model ran successfully.

 The model's results are shown in the result display area, in the following format
 ```text

deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MainActivity.java (+2 -2)

@@ -256,7 +256,7 @@ public void onLoadModelSuccessed() {
         // Load test image from path and run model
         tvInputSetting.setText("Model: " + modelPath.substring(modelPath.lastIndexOf("/") + 1) + "\nOPENCL: " + cbOpencl.isChecked() + "\nCPU Thread Num: " + cpuThreadNum + "\nCPU Power Mode: " + cpuPowerMode);
         tvInputSetting.scrollTo(0, 0);
-        tvStatus.setText("STATUS: load model successed");
+        tvStatus.setText("STATUS: load model succeeded");

     }

@@ -265,7 +265,7 @@ public void onLoadModelFailed() {
     }

     public void onRunModelSuccessed() {
-        tvStatus.setText("STATUS: run model successed");
+        tvStatus.setText("STATUS: run model succeeded");
         // Obtain results and update UI
         tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms");
         Bitmap outputImage = predictor.outputImage();

docs/applications/快速构建卡证类OCR.md (+7 -7)

@@ -314,7 +314,7 @@ class MakeShrinkMap(object):
         padding = pyclipper.PyclipperOffset()
         padding.AddPath(subject, pyclipper.JT_ROUND,
                         pyclipper.ET_CLOSEDPOLYGON)
-        shrinked = []
+        shrunk = []

         # Increase the shrink ratio every time we get multiple polygon returned back
         possible_ratios = np.arange(self.shrink_ratio, 1,

@@ -323,19 +323,19 @@ class MakeShrinkMap(object):
         for ratio in possible_ratios:
             distance = polygon_shape.area * (
                 1 - np.power(ratio, 2)) / polygon_shape.length
-            shrinked = padding.Execute(-distance)
-            if len(shrinked) == 1:
+            shrunk = padding.Execute(-distance)
+            if len(shrunk) == 1:
                 break

-        if shrinked == []:
+        if shrunk == []:
             cv2.fillPoly(mask,
                          polygon.astype(np.int32)[np.newaxis, :, :], 0)
             ignore_tags[i] = True
             continue

-        for each_shirnk in shrinked:
-            shirnk = np.array(each_shirnk).reshape(-1, 2)
-            cv2.fillPoly(gt, [shirnk.astype(np.int32)], 1)
+        for each_shrink in shrunk:
+            shrink = np.array(each_shrink).reshape(-1, 2)
+            cv2.fillPoly(gt, [shrink.astype(np.int32)], 1)
         if self.num_classes > 1:  # draw the per-class mask
             cv2.fillPoly(gt_class, polygon.astype(np.int32)[np.newaxis, :, :], classes[i])
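The `distance` computed in the loop of that hunk is the DB shrink offset d = A(1 - r²)/L, for polygon area A, perimeter L, and shrink ratio r. A stdlib-only sketch of just that arithmetic, replacing `shapely`'s `polygon_shape.area`/`.length` with a shoelace area and summed edge lengths (function name and sample polygon are illustrative):

```python
import math

def shrink_distance(polygon, ratio):
    """Offset d = area * (1 - ratio**2) / perimeter for a closed polygon
    given as a list of (x, y) vertices."""
    n = len(polygon)
    area2 = 0.0      # twice the signed area (shoelace formula)
    perimeter = 0.0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return (abs(area2) / 2.0) * (1 - ratio ** 2) / perimeter

# 10x10 axis-aligned square, shrink ratio 0.4 (DB's common default):
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
d = shrink_distance(square, 0.4)
print(round(d, 2))  # area=100, perimeter=40 -> 100 * 0.84 / 40 = 2.1
```

The code in the diff then passes `-distance` to `pyclipper`'s `Execute`, since a negative offset shrinks the polygon inward.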

docs/infer_deploy/android_demo.en.md (+1 -1)

@@ -64,7 +64,7 @@ When you tap **Run Model**, the demo executes the corresponding model(s) in your

 <img src="./images/run_det_cls_rec.jpg" width="400">

-The status display area shows the current model status (e.g., `run model successed`), indicating that the model ran successfully. The recognition results are formatted as follows:
+The status display area shows the current model status (e.g., `run model succeeded`), indicating that the model ran successfully. The recognition results are formatted as follows:

 ```text
 Serial Number: Det: (x1,y1)(x2,y2)(x3,y3)(x4,y4) Rec: Recognized Text, Confidence Score Cls: Classification Label, Classification Score

docs/infer_deploy/android_demo.md (+1 -1)

@@ -63,7 +63,7 @@ The app home page has four buttons, a drop-down list, and a menu button; they

 <img src="./images/run_det_cls_rec.jpg" width="400">

-After the model finishes running, the `STATUS` field of the model and run-status display area shows the current model state; here it reads `run model successed`, indicating the model ran successfully.
+After the model finishes running, the `STATUS` field of the model and run-status display area shows the current model state; here it reads `run model succeeded`, indicating the model ran successfully.

 The model's results are shown in the result display area, in the following format

docs/paddlex/overview.en.md (+1 -1)

@@ -2,7 +2,7 @@

 The All-in-One development tool [PaddleX](https://github.com/PaddlePaddle/PaddleX/tree/release/3.0-beta1), based on the advanced technology of PaddleOCR, supports **low-code full-process** development capabilities in the OCR field. Through low-code development, simple and efficient model use, combination, and customization can be achieved. This will significantly **reduce the time consumption** of model development, **lower its development difficulty**, and greatly accelerate the application and promotion speed of models in the industry. Features include:

-* 🎨 [**Rich Model One-Click Call**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/quick_start.html): Integrates **48 models** related to text image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 10 pipelines, which can be quickly experienced through a simple **Python API one-click call**. In addition, the same set of APIs also supports a total of **200+ models** in image classification, object detection, image segmentation, and time series forcasting, forming 30+ single-function modules, making it convenient for developers to use **model combinations**.
+* 🎨 [**Rich Model One-Click Call**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/quick_start.html): Integrates **48 models** related to text image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 10 pipelines, which can be quickly experienced through a simple **Python API one-click call**. In addition, the same set of APIs also supports a total of **200+ models** in image classification, object detection, image segmentation, and time series forecasting, forming 30+ single-function modules, making it convenient for developers to use **model combinations**.

 * 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and edge deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.
