
User Manual


We provide a step-by-step user manual here. The user needs to install Python 3.6 or greater.
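As a quick check before starting (the requirements.txt file name is an assumption about the repository layout, so adjust it if your copy differs), the Python version and dependencies can be verified from a terminal:

```
# Confirm the interpreter is Python 3.6 or greater
python --version
# Install QA's dependencies; the requirements file name is assumed here
pip install -r requirements.txt
```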

1. Divide Whole Slide Images into image tiles

Open the cli folder and use extract_tiles_from_wsi_openslide.py to divide a WSI into smaller tiles. Its basic usage is shown below.

```
E:\Study\Research\QA\GithubQA\QuickAnnotator\cli>python extract_tiles_from_wsi_openslide.py --help
usage: extract_tiles_from_wsi_openslide.py [-h] [-p PATCHSIZE]
                                           [-l OPENSLIDELEVEL] [-o OUTDIR]
                                           [-b]
                                           [input_pattern [input_pattern ...]]

Convert image and mask into non-overlapping patches

positional arguments:
  input_pattern         Input filename pattern (try: *.png), or txt file
                        containing list of files

optional arguments:
  -h, --help            show this help message and exit
  -p PATCHSIZE, --patchsize PATCHSIZE
                        Patchsize, default 256
  -l OPENSLIDELEVEL, --openslidelevel OPENSLIDELEVEL
                        openslide level to use
  -o OUTDIR, --outdir OUTDIR
                        Target output directory
  -b, --bgremoved       Don't save patches which are considered background,
                        useful for TMAs
```
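For example, a typical invocation might look like the following (the slide file extension, tile size, pyramid level, and output directory are illustrative values, not prescribed by the tool):

```
# Divide every slide matching the pattern into 256 x 256 tiles read at
# openslide level 0, skip background patches, and write tiles to ./tiles
python extract_tiles_from_wsi_openslide.py -p 256 -l 0 -b -o ./tiles *.svs
```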
2. Open a terminal and go to QA's directory

```
cd quick_annotator
python QA.py
```
3. Open Chrome and go to
http://localhost:5555

The user can change the port number in the config.ini file.
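As a minimal sketch (the exact section and key names inside config.ini may differ, so check the file itself), the change only takes effect after restarting the server:

```
# After editing the port entry in config.ini (e.g. changing 5555 to 8080),
# restart QA and open the new port in Chrome
python QA.py
# then browse to http://localhost:8080
```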

4. Create a new project and add images to it
5. Follow the instructions on the page: Make Patches, (Re)train Model 0, Embed Patches, View Embedding

The image below shows that Make Patches has completed successfully, and the next step is (Re)train Model 0, which trains the autoencoder. When the autoencoder is ready, the user can Embed Patches and then View Embedding, which directs the user to the Embedding Page.

6. Go to the Embedding Page, hover over a dot, and you will be directed to the Annotation Page to make annotations

Users can also decide where to annotate by moving the selection square on the Annotation Page.

7. Make annotations and upload them to the training set

*Demo for steps 6 and 7*
8. Train a classifier, and then check the prediction results when the model is ready

As the red arrow shows, the user clicks Retrain Dl From base to train a new model.

As the blue arrow indicates, the prediction result is shown in red, meaning the prediction layer is not available because no model is available yet.

See Navigation Bar.

9. Select a patch from the image, import the deep learning output, and modify the annotation based on it.

See more details here.

10. Repeat steps 6-9.

11. Download the DL model, output, or annotation files by clicking Download in the top menu bar.
