This repository was archived by the owner on Nov 19, 2024. It is now read-only.

Commit 3e7d668

Merge branch 'dev'

2 parents 5c8c5ff + 169b81e

15 files changed: +21 -68 lines

README.md (+2 -1)

@@ -75,7 +75,7 @@ For additional info read `cv2.getBuildInformation()` output.
 
 You will need ~7GB RAM and ~10GB disk space
 
-I am using Ubuntu 18.04 [multipass](https://multipass.run/) instance: `multipass launch -c 6 -d 10G -m 7G 18.04`.
+I am using Ubuntu 18.04 (python 3.6) [multipass](https://multipass.run/) instance: `multipass launch -c 6 -d 10G -m 7G 18.04`.
 
 ### Requirements
 
@@ -104,6 +104,7 @@ sudo ln -s /usr/bin/python3 /usr/bin/python
 ```bash
 git clone https://github.com/banderlog/opencv-python-inference-engine
 cd opencv-python-inference-engine
+# git checkout dev
 ./download_all_stuff.sh
 ```

build/opencv/opencv_setup.sh (+1 -1)

@@ -44,7 +44,7 @@ cmake -D CMAKE_BUILD_TYPE=RELEASE \
     -D FFMPEG_INCLUDE_DIRS=$FFMPEG_PATH/include \
     -D INF_ENGINE_INCLUDE_DIRS=$ABS_PORTION/dldt/inference-engine/include \
     -D INF_ENGINE_LIB_DIRS=$ABS_PORTION/dldt/bin/intel64/Release/lib \
-    -D INF_ENGINE_RELEASE=2021030000 \
+    -D INF_ENGINE_RELEASE=2021040000 \
     -D INSTALL_CREATE_DISTRIB=ON \
     -D INSTALL_C_EXAMPLES=OFF \
     -D INSTALL_PYTHON_EXAMPLES=OFF \
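The bumped `INF_ENGINE_RELEASE` follows the dldt submodule moving to OpenVINO 2021.4. To the best of my knowledge OpenCV packs the Inference Engine release into this single integer, roughly as `YYYYUUPPPP` (year, update, patch) — treat that packing as an assumption, not something stated in this diff. A small sketch decoding it:

```python
def decode_inf_engine_release(release: int) -> str:
    """Decode an INF_ENGINE_RELEASE integer into 'year.update.patch'.

    Assumes the conventional YYYYUUPPPP packing (an assumption here,
    not taken from this repository's sources).
    """
    year = release // 1_000_000
    update = (release // 10_000) % 100
    patch = release % 10_000
    return f"{year}.{update}.{patch}"

print(decode_inf_engine_release(2021040000))  # → 2021.4.0
```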

create_wheel/setup.py (+1 -1)

@@ -15,7 +15,7 @@ def __len__(self):
 
 setuptools.setup(
     name='opencv-python-inference-engine',
-    version='2021.04.13',
+    version='2021.07.10',
     url="https://github.com/banderlog/opencv-python-inference-engine",
     maintainer="Kabakov Borys",
     license='MIT, Apache 2.0',
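The wheel appears to use a calendar version (`YYYY.MM.DD`, matching the release date) rather than a semantic one. A minimal sketch of producing such a string, assuming that convention holds (the helper name is illustrative, not from the repo):

```python
from datetime import date


def date_version(d: date) -> str:
    """Format a date as a zero-padded YYYY.MM.DD calendar-version string."""
    return f"{d.year}.{d.month:02d}.{d.day:02d}"


print(date_version(date(2021, 7, 10)))  # → 2021.07.10
```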

dldt (submodule updated, 10007 files)

download_all_stuff.sh (+2 -2)

@@ -23,7 +23,8 @@ if test $(lsb_release -rs) != 18.04; then
 fi
 
 green "RESET GIT SUBMODULES"
-# use `git fetch --unshallow && git checkout tags/<tag>` for update
+# git checkout dev
+# use `git fetch --tags && git checkout tags/<tag>` for update
 git submodule update --init --recursive --depth=1 --jobs=4
 # restore changes command will differ between GIT versions (e.g., `restore`)
 git submodule foreach --recursive git checkout .
@@ -34,7 +35,6 @@ green "CLEAN BUILD DIRS"
 find build/dldt/ -mindepth 1 -not -name 'dldt_setup.sh' -not -name '*.patch' -delete
 find build/opencv/ -mindepth 1 -not -name 'opencv_setup.sh' -delete
 find build/ffmpeg/ -mindepth 1 -not -name 'ffmpeg_*.sh' -delete
-find build/openblas/ -mindepth 1 -not -name 'openblas_setup.sh' -delete
 
 green "CLEAN WHEEL DIR"
 find create_wheel/cv2/ -type f -not -name '__init__.py' -delete
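The `find <dir> -mindepth 1 -not -name <pattern> -delete` idiom above wipes a build directory while sparing the setup scripts. A rough Python analogue for illustration (top-level entries only — the real `find` also recurses; names and patterns are assumptions):

```python
import fnmatch
import shutil
from pathlib import Path


def clean_build_dir(root: Path, keep_patterns: list) -> None:
    """Delete everything directly under `root` except entries whose name
    matches one of `keep_patterns` — a rough analogue of
    `find root -mindepth 1 -not -name <pattern> -delete`."""
    for entry in root.iterdir():
        if any(fnmatch.fnmatch(entry.name, pat) for pat in keep_patterns):
            continue
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
```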

ffmpeg (submodule updated, 1963 files)

opencv (submodule updated, 434 files)

tests/README.md (-1)

@@ -19,7 +19,6 @@ cd tests
 
 Something like below. The general idea is to test only inference speed, without preprocessing and decoding.
 Also, 1st inference must not count, because it will load all stuff into memory.
-I prefer to do such things in `ipython` or `jupyter` with `%timeit`.
 
 **NB:** be strict about Backend and Target
tests/examples.ipynb (+9 -9)
Large diffs are not rendered by default.

tests/helloworld.png (-9.35 KB)

tests/prepare_and_run_tests.sh (+1 -21)

@@ -68,30 +68,10 @@ for i in "${models[@]}"; do
         wget "${url_start}/${i%.*}/FP32/${i}"
     else
         # checksum
-        sha256sum -c "${i}.sha256sum"
-    fi
-done
-
-# for speed test
-# {filename: file_google_drive_id}
-declare -A se_net=(["se_net.bin"]="1vbonFjVyleGRSd_wR-Khc1htsZybiHCG"
-                   ["se_net.xml"]="1Bz3EQwnes_iZ14iKAV6H__JZ2lynLmQz")
-
-# for each key
-for i in "${!se_net[@]}"; do
-    # if file exist
-    if [ -f $i ]; then
-        # checksum
-        sha256sum -c "${i}.sha256sum"
-    else
-        # get fileid from associative array and download file
-        wget --no-check-certificate "https://docs.google.com/uc?export=download&id=${se_net[$i]}" -O $i
+        sha256sum -c "${i}.sha256sum" || red "PROBLEMS ^^^"
     fi
 done
 
 green "For \"$WHEEL\""
 green "RUN TESTS with ./venv_t/bin/python ./tests.py"
 ./venv_t/bin/python ./tests.py
-green "RUN TESTS with ./venv_t/bin/python ./speed_test.py"
-./venv_t/bin/python ./speed_test.py
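`sha256sum -c` reads lines of `<hex digest>  <filename>` and re-hashes each listed file; the added `|| red "PROBLEMS ^^^"` makes a mismatch visible instead of letting the loop continue quietly. An illustrative Python equivalent of the check itself (a sketch, handling only the plain text-mode format):

```python
import hashlib
from pathlib import Path


def verify_sha256sum(sumfile: Path) -> bool:
    """Verify files listed in a `sha256sum`-style file.

    Each line is '<hex digest>  <filename>'; filenames are resolved
    relative to the checksum file's directory.
    """
    ok = True
    for line in sumfile.read_text().splitlines():
        if not line.strip():
            continue
        digest, name = line.split(maxsplit=1)
        actual = hashlib.sha256((sumfile.parent / name).read_bytes()).hexdigest()
        ok &= (actual == digest)
    return ok
```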

tests/se_net.bin.sha256sum (-1)
This file was deleted.

tests/se_net.xml.sha256sum (-1)
This file was deleted.

tests/speed_test.py (-25)
This file was deleted.

tests/text_recognition.py (+2 -2)

@@ -37,7 +37,7 @@ def _get_confidences(self, img: np.ndarray, box: tuple) -> np.ndarray:
         return outs
 
     def do_ocr(self, img: np.ndarray, bboxes: List[tuple]) -> List[str]:
-        """ Run OCR pipeline for a single words
+        """ Run OCR pipeline with greedy decoder for each single word (bbox)
 
         :param img: BGR image
         :param bboxes: list of separate word bboxes (ymin, xmin, ymax, xmax)
@@ -60,7 +60,7 @@ def do_ocr(self, img: np.ndarray, bboxes: List[tuple]) -> List[str]:
         for box in bboxes:
             # confidence distribution across symbols
             confs = self._get_confidences(img, box)
-            # get maximal confidence for the whole beam width
+            # get maximal confidence for the whole beam width, aka greedy decoding
             idxs = confs[:, 0, :].argmax(axis=1)
             # drop blank characters '#' with id == 36 in charvec
             # supposedly we take only separate words as input
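The reworded comments describe greedy CTC-style decoding: take the per-timestep `argmax` over the confidence distribution, then drop the blank symbol (`'#'`, id 36 per the comment). A self-contained sketch of that decode step; collapsing consecutive repeats is added here as standard CTC convention, and the charvec below is an assumption, not read from the repo:

```python
import numpy as np

BLANK_ID = 36  # id of the blank symbol '#', per the comment in the diff


def greedy_ctc_decode(confs: np.ndarray, charvec: str) -> str:
    """Greedy CTC decoding over confidences of shape (T, 1, C):
    per-timestep argmax, collapse consecutive repeats, drop the blank."""
    idxs = confs[:, 0, :].argmax(axis=1)
    out = []
    prev = None
    for i in idxs:
        if i != prev and i != BLANK_ID:
            out.append(charvec[i])
        prev = i
    return "".join(out)
```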

0 commit comments