# Lightweight License Plate Recognition with JAX


This repository provides JAX implementations of lightweight South Korean license plate recognition (LPR) models. The default weights are trained on a Korean license plate dataset, and the model can also be trained on any other license plate dataset.

## Demo

Try the model on Hugging Face Spaces!


## Deployment

You can download all of the deployment requirement files, including the model weights, the preprocessing and postprocessing scripts, and the model configuration file, from Hugging Face Spaces, and deploy the model on cloud or edge devices.

## Data Preparation

Labeled data is required to train the model. The data should be organized as follows:

```
- data
  - labels.names
  - train
    - {license_plate_number}_{index}.jpg
    - {license_plate_number}_{index}.txt
    - ...
  - val
    - {license_plate_number}_{index}.jpg
    - {license_plate_number}_{index}.txt
    - ...
```

`license_plate_number` is the license plate number; make sure it is formatted like `12가1234` or `서울12가1234`. Prepare a dict that maps every character of the license plate number to an integer, and save it as the `labels.names` file. `index` is the image number and distinguishes images of the same license plate number. The `.txt` file contains the bounding box of each character in the license plate number, one box per line:

```
x1 y1 x2 y2
...
x1 y1 x2 y2
```

The order of the bounding boxes should be the same as the order of the characters in the license plate number.
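As a sketch of the labeling step above, the character-to-integer dict can be built from the plate numbers themselves. The exact `labels.names` file format is an assumption here (one `<char> <index>` pair per line), and the helper names are ours:

```python
# Build the character-to-integer dict for labels.names (hypothetical helpers).

def build_labels(plate_numbers):
    """Collect every character seen in the plate numbers and index them."""
    chars = sorted({ch for plate in plate_numbers for ch in plate})
    return {ch: i for i, ch in enumerate(chars)}

def save_labels(label_map, path="labels.names"):
    """Write one '<char> <index>' pair per line (assumed format)."""
    with open(path, "w", encoding="utf-8") as f:
        for ch, idx in label_map.items():
            f.write(f"{ch} {idx}\n")

labels = build_labels(["12가1234", "서울12가1234"])
```

With the two example plates above, the dict covers the digits `1`–`4` plus the characters `가`, `서`, and `울`.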

The dataloader parses the data and converts the license plate characters to integers using the `labels.names` file. The license plate images are resized to (96, 192) or any other size you want. In addition, a mask of the license plate number is created from the bounding boxes and used to calculate the loss.
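A minimal sketch of the two dataloader steps, assuming the filename layout described above; `parse_sample` and `make_mask` are hypothetical helper names, not part of this repository's API:

```python
import os
import numpy as np

def parse_sample(image_path, label_map):
    """Split {license_plate_number}_{index}.jpg into integer character ids."""
    stem = os.path.splitext(os.path.basename(image_path))[0]
    plate, _index = stem.rsplit("_", 1)  # index only distinguishes duplicates
    return [label_map[ch] for ch in plate]

def make_mask(boxes, out_shape=(96, 192)):
    """Rasterize per-character bounding boxes into a binary mask."""
    mask = np.zeros(out_shape, dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        mask[int(y1):int(y2), int(x1):int(x2)] = 1.0
    return mask
```

In practice the boxes from the `.txt` file would be scaled to the resized image before rasterizing.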

The losses of the model are as follows:

For CTC:

$$ L_{ctc} = -\frac{1}{n} \sum_{i=1}^{n} \log(p_{i}) $$

$$ L_{center} = \sum_{i=1}^{n} \left(1 - \frac{c_i}{t}\right)^2 $$

$$ L_{Total} = \alpha \cdot L_{ctc} + \beta \cdot L_{center}, \quad \text{where } \beta \cdot L_{center} = \begin{cases} 0 & \text{if } t \leq 20\mathrm{k} \\ L_{center} & \text{otherwise} \end{cases} $$
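The schedule in the formula above can be sketched as follows: the center loss is switched on only after 20k training steps. The function name and default weights are our assumptions:

```python
# Hypothetical sketch of the total-loss schedule implied by the formula:
# the center term contributes nothing for the first 20k steps.
def total_loss(l_ctc, l_center, step, alpha=1.0, beta=1.0):
    beta = beta if step > 20_000 else 0.0
    return alpha * l_ctc + beta * l_center
```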

For Mask:

$$ L_{Mask} = L_{Dice} + L_{BCE} $$
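A minimal jax.numpy sketch of the mask loss $L_{Dice} + L_{BCE}$; the epsilon smoothing and the mean reduction for the BCE term are our assumptions:

```python
import jax.numpy as jnp

def mask_loss(pred, target, eps=1e-6):
    """pred: sigmoid probabilities in [0, 1]; target: binary mask."""
    # Dice loss: 1 - 2|P∩T| / (|P| + |T|), with eps smoothing.
    inter = jnp.sum(pred * target)
    dice = 1.0 - (2.0 * inter + eps) / (jnp.sum(pred) + jnp.sum(target) + eps)
    # Binary cross-entropy, averaged over all mask pixels.
    bce = -jnp.mean(target * jnp.log(pred + eps)
                    + (1.0 - target) * jnp.log(1.0 - pred + eps))
    return dice + bce
```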

## Benchmark

| Model | Input Shape | Size | Accuracy | Speed (ms) |
| --- | --- | --- | --- | --- |
| tinyLPR | (96, 192, 1) | 86 KB | 99.12 % | 0.42 |

Speed is measured on an Apple M2 chip.