Merge pull request #22 from clovaai/231016_HugeUpdate
231016 huge update
Showing 63 changed files with 2,368 additions and 6,023 deletions.
@@ -0,0 +1,13 @@
# Description

- Related issues:
- #

# Changes in this PR

# How has this been tested?

# Checklist
- [ ] This PR follows the coding style of this project
- [ ] I have tested these changes
- [ ] I have commented hard-to-understand code
@@ -0,0 +1,59 @@
name: CI

on: pull_request

jobs:
  black:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Upgrade pip
        run: pip install --upgrade pip
      - name: Install black
        run: pip install --upgrade black==23.1.0
      - name: Run black
        run: black --check .

  isort:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Upgrade pip
        run: pip install --upgrade pip
      - name: Install isort
        run: pip install --upgrade isort==5.12.0
      - name: Run isort
        working-directory: ./cleval
        run: isort --profile black --check .

  pytest:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Upgrade pip
        run: pip install --upgrade pip && pip install -U setuptools wheel
      - name: Update apt
        run: sudo apt update
      - name: Install pre-requirements
        run: sudo apt install -y libyajl2 libyajl-dev libleveldb-dev libgl1-mesa-glx libglib2.0-0
      - name: Install cleval
        run: pip install six && pip install --force-reinstall --no-cache-dir cleval opencv-python-headless
      - name: Install pytest
        run: pip install --upgrade pytest
      - name: Run pytest
        run: pytest
@@ -1,8 +1,15 @@
__pycache__/
venv
debug*
tmp*
.vscode
.DS_Store
.idea
gt/image*
output/
.pytest_cache
.mypy_cache
build/
dist/
*.egg-info/
profile.txt
@@ -5,13 +5,15 @@ Official implementation of CLEval | [paper](https://arxiv.org/abs/2006.06244)
## Overview
We propose a Character-Level Evaluation metric (CLEval). To perform a fine-grained assessment of the results, the *instance matching* process handles granularity differences and the *scoring process* conducts character-level evaluation. Please refer to the paper for more details. This code is based on the [ICDAR15 official evaluation code](http://rrc.cvc.uab.es/).

### 2023.10.16 Huge Update
- A **much faster** version of CLEval has been uploaded!
- Support CLI
- Support TorchMetric
- Support scale-wise evaluation
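To make the character-level scoring idea above concrete, here is a toy sketch of the recall/precision/H-mean arithmetic. This is a simplified illustration only, not the CLEval implementation: the real metric credits characters via pseudo character center points and applies granularity (split/merge) penalties, all of which this sketch omits.

```python
# Toy sketch of character-level scoring (NOT the official CLEval code).
# Inputs are plain character counts; the real metric derives them from
# pseudo character center points inside matched boxes.

def char_level_metrics(num_gt_chars, num_det_chars, num_correct_chars):
    """Character-level recall, precision, and their harmonic mean."""
    recall = num_correct_chars / num_gt_chars if num_gt_chars else 0.0
    precision = num_correct_chars / num_det_chars if num_det_chars else 0.0
    denom = recall + precision
    hmean = 2 * recall * precision / denom if denom else 0.0
    return recall, precision, hmean

# Example: 10 GT characters, detector produced 8 characters, all 8 correct.
r, p, h = char_level_metrics(num_gt_chars=10, num_det_chars=8, num_correct_chars=8)
print(f"recall={r:.3f} precision={p:.3f} hmean={h:.3f}")
# recall=0.800 precision=1.000 hmean=0.889
```

Counting per character rather than per box is what lets CLEval reward partially correct detections instead of scoring them as outright misses.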
## Notification
* 15 Jun, 2020 | initial release
  - the reported evaluation results in our paper are measured with the ```CASE_SENSITIVE``` option set to ```False```.

### Simplified Method Description
![Explanation](resources/screenshots/explanation.gif)
## Supported annotation types
* **LTRB** (xmin, ymin, xmax, ymax)
@@ -24,51 +26,83 @@ We propose a Character-Level Evaluation metric (CLEval). To perform fine-grained
* TotalText [Link](https://github.com/cs-chan/Total-Text-Dataset)
* Any other dataset that has a format similar to the datasets mentioned above
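As a small illustration of the annotation types listed above, an LTRB box can be expanded into the four-corner QUAD form. This helper is hypothetical, written only for this README — it is not part of the cleval API:

```python
# Hypothetical helper (not part of cleval): expand an LTRB box
# (xmin, ymin, xmax, ymax) to QUAD form, i.e. four (x, y) corner
# points listed clockwise from the top-left.

def ltrb_to_quad(xmin, ymin, xmax, ymax):
    return [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]

print(ltrb_to_quad(10, 20, 110, 60))
# [(10, 20), (110, 20), (110, 60), (10, 60)]
```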
## Installation

### Build from pip
Download from PyPI:
```bash
$ pip install cleval
```
or build from the repository URL:
```bash
$ pip install git+https://github.com/clovaai/CLEval.git --user
```

### Build from source
```bash
$ git clone https://github.com/clovaai/CLEval.git
$ cd cleval
$ python setup.py install --user
```
## How to use
You can replace `cleval` with `PYTHONPATH=$PWD python cleval/main.py` to run the evaluation from source, e.g.:
```bash
$ PYTHONPATH=$PWD python cleval/main.py -g=gt/gt_IC13.zip -s=[result.zip] --BOX_TYPE=LTRB
```

### Detection evaluation (CLI)
```bash
$ cleval -g=gt/gt_IC13.zip -s=[result.zip] --BOX_TYPE=LTRB       # IC13
$ cleval -g=gt/gt_IC15.zip -s=[result.zip]                       # IC15
$ cleval -g=gt/gt_TotalText.zip -s=[result.zip] --BOX_TYPE=POLY  # TotalText
```
* Notes
  * The default value of ```BOX_TYPE``` is ```QUAD```. It can be set explicitly with ```--BOX_TYPE=QUAD``` when running the evaluation on the IC15 dataset.
  * Add the ```--TRANSCRIPTION``` option if the result file contains transcriptions.
  * Add the ```--CONFIDENCES``` option if the result file contains confidences.
### End-to-end evaluation (CLI)
```bash
$ cleval -g=gt/gt_IC13.zip -s=[result.zip] --E2E --BOX_TYPE=LTRB       # IC13
$ cleval -g=gt/gt_IC15.zip -s=[result.zip] --E2E                       # IC15
$ cleval -g=gt/gt_TotalText.zip -s=[result.zip] --E2E --BOX_TYPE=POLY  # TotalText
```
* Notes
  * Adding ```--E2E``` automatically adds the ```--TRANSCRIPTION``` option as well. Make sure that the transcriptions are included in the result file.
  * Add the ```--CONFIDENCES``` option if the result file contains confidences.
### TorchMetric
```python
from cleval import CLEvalMetric

metric = CLEvalMetric()

for gt, det in zip(gts, dets):
    # your fancy algorithm
    # ...
    # gt_quads = ...
    # det_quads = ...
    # ...
    _ = metric(det_quads, gt_quads, det_letters, gt_letters, gt_is_dcs)

metric_out = metric.compute()
metric.reset()
```
### Profiling
```bash
$ cleval -g=resources/test_data/gt/gt_eval_doc_v1_kr_single.zip -s=resources/test_data/pred/res_eval_doc_v1_kr_single.zip --E2E -v --DEBUG --PPROFILE > profile.txt
$ PYTHONPATH=$PWD python cleval/main.py -g resources/test_data/gt/dummy_dataset_val.json -s resources/test_data/pred/dummy_dataset_val.json --SCALE_WISE --DOMAIN_WISE --ORIENTATION --E2E -v --PROFILE --DEBUG > profile.txt
```
### Parameter list
<!--
### Parameters for evaluation script
| name | type | default | description |
| ---- | ---- | ------- | ---- |
| -g | ```string``` | | path to ground truth zip file |
| -s | ```string``` | | path to result zip file |
| -o | ```string``` | | path to save per-sample result file 'results.zip' | -->

| name | type | default | description |
| ---- | ---- | ------- | ---- |
| -o | ```string``` | | path to save per-sample result file 'results.zip' |
@@ -79,30 +113,8 @@ python script.py -g=gt/gt_TotalText.zip -s=[result.zip] --E2E --BOX_TYPE=POLY
| --CASE_SENSITIVE | ```boolean``` | ```True``` | set True to evaluate case-sensitively (only used in end-to-end evaluation) |
* Note: Please refer to the ```arg_parser.py``` file for additional parameters and the default settings used internally.
## Instructions for the webserver

### Procedure
1. Compress the GT file of the dataset you want to evaluate into a ```gt.zip``` file and the image files into ```images.zip```.
2. Copy the two files to the ```./gt/``` directory.
3. Run web.py with the ```BOX_TYPE``` option.
```
python web.py --BOX_TYPE=[LTRB,QUAD,POLY] --PORT=8080
```
### Parameters for webserver
| name | type | default | description |
| ---- | ---- | ------- | ---- |
| --BOX_TYPE | ```string``` | ```QUAD``` | annotation type of box (LTRB, QUAD, POLY) |
| --PORT | ```integer``` | ```8080``` | port number for web visualization |
### Web Server screenshots
<img src='screenshots/pic1.jpg'>
<img src='screenshots/pic2.jpg'>
## TODO
- [ ] Support running the webserver with the designated GT and image files
- [ ] Calculate the length of text based on graphemes for multi-lingual datasets

* Note: For scale-wise evaluation, we measure the ratio of the shorter side (text height) of the text box to the longer side of the image, which allows the evaluation to be broken down per scale ratio. To adjust the scales, use the ```SCALE_BINS``` argument.
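The ratio-and-binning step from the scale-wise note can be sketched as follows. The helper names and the default bin edges here are hypothetical, chosen only for illustration — the actual bins come from the ```SCALE_BINS``` argument:

```python
import bisect

# Illustrative sketch of scale-wise bucketing (helper names and bin
# edges are hypothetical; real bins come from --SCALE_BINS).

def scale_ratio(box_w, box_h, img_w, img_h):
    """Ratio of the box's shorter side (text height) to the image's longer side."""
    return min(box_w, box_h) / max(img_w, img_h)

def scale_bin(ratio, bin_edges=(0.02, 0.05, 0.1)):
    """Index of the bucket a ratio falls into (len(bin_edges) + 1 buckets)."""
    return bisect.bisect_right(bin_edges, ratio)

# A 200x32 text box in a 1920x1080 image: ratio 32/1920 ~ 0.017 -> bucket 0.
r = scale_ratio(box_w=200, box_h=32, img_w=1920, img_h=1080)
print(f"ratio={r:.4f} bin={scale_bin(r)}")
```

Per-bucket statistics can then be accumulated separately so that small text and large text are each scored within their own scale range.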
## Citation
```
@@ -118,7 +130,6 @@ python web.py --BOX_TYPE=[LTRB,QUAD,POLY] --PORT=8080
CLEval was proposed to enable fair evaluation in the OCR community, so we want to hear from many researchers. We welcome any feedback on our metric, and appreciate pull requests with comments or improvements.
## License

```
Copyright (c) 2020-present NAVER Corp.
@@ -140,3 +151,21 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
### Contribute
Please use pre-commit, which runs Black and isort.
```
$ pip install pre-commit
$ pre-commit install
```
##### Step By Step
1. Write an issue.
2. Match the code style (black, isort).
3. Write test code.
4. Delete the branch after Squash & Merge.

Required approvals: 1
## Code Maintainer
- Donghyun Kim ([email protected])
This file was deleted.