This repo adapts the Oscar image captioning model to an image-to-SMILES recognition framework for processing chemical structure images. It takes preprocessed image features as input and outputs predicted SMILES strings.
Check the "Original OSCAR Repo README" section for Installation instructions. Additionally, do the following:

    pip install transformers
    pip install pytorch-transformers
    pip install textdistance
    pip install nltk
    conda install -q -y -c conda-forge rdkit=2020.09.2
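As a quick sanity check that the extra dependencies are installed, the following minimal Python snippet confirms they import cleanly and that RDKit can parse and canonicalize a SMILES string (caffeine here):

```python
# Minimal environment check for the dependencies installed above.
import transformers
import textdistance
import nltk
from rdkit import Chem, rdBase

# RDKit should parse and canonicalize a simple SMILES string (caffeine).
mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")
assert mol is not None
print("rdkit:", rdBase.rdkitVersion)
print("canonical SMILES:", Chem.MolToSmiles(mol))
```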
There is a trained model ready to use in the `output_run3/checkpoint-69-82040` folder, trained with the following data and hyperparameters (a quick consistency check on the checkpoint name follows this list):
- 300,000 training images, each represented as a [50, 1024] image-feature matrix generated by Chem-Detectron2
- learning rate 0.00003
- batch size 256
- epochs 70
- num_beams 5
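These settings are consistent with the checkpoint folder name: assuming the data loader keeps the final partial batch, 300,000 images at batch size 256 give ceil(300000 / 256) = 1172 optimizer steps per epoch, so 70 epochs (final epoch index 69, counting from 0) end at global step 82,040:

```python
import math

num_images, batch_size, num_epochs = 300_000, 256, 70

steps_per_epoch = math.ceil(num_images / batch_size)  # 1172 (final partial batch kept)
total_steps = steps_per_epoch * num_epochs            # 82040
print(f"checkpoint-{num_epochs - 1}-{total_steps}")   # -> checkpoint-69-82040
```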
To train Chem-OSCAR on the image captioning task, run:

    python oscar/run_captioning.py \
        --model_name_or_path seyonec/PubChem10M_SMILES_BPE_450k \
        --train_yaml smiles_train.yaml \
        --val_yaml smiles_val.yaml \
        --do_train \
        --evaluate_during_training \
        --learning_rate 0.00003 \
        --per_gpu_train_batch_size 256 \
        --num_train_epochs 70 \
        --save_steps 5000 \
        --output_dir output \
        --scst \
        --seed 11699 \
        --num_beams 5
To generate predicted SMILES with the trained checkpoint, run:

    python oscar/run_captioning.py \
        --eval_model_dir output_run3/checkpoint-69-82040 \
        --test_yaml demo_test.yaml \
        --do_test \
        --learning_rate 0.00003 \
        --per_gpu_train_batch_size 256 \
        --num_train_epochs 70 \
        --save_steps 5000 \
        --output_dir output \
        --scst \
        --num_beams 5 \
        --num_keep_best 1
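Once predictions are written, RDKit and textdistance (installed above) can be used to validate and score them. This is a minimal sketch: the `output/pred.tsv` path and the per-line layout of (image key, JSON list of caption dicts) are assumptions for illustration, so adjust them to the files your run actually produces:

```python
import json
import textdistance
from rdkit import Chem

def canonical(smiles):
    """Canonicalize a SMILES string; return None if RDKit cannot parse it."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

# Hypothetical layout: one TSV line per image, holding the image key and a
# JSON list of predicted captions (SMILES strings).
with open("output/pred.tsv") as f:
    for line in f:
        key, caps = line.rstrip("\n").split("\t", 1)
        pred = json.loads(caps)[0]["caption"]
        print(key, "valid" if canonical(pred) else "invalid", pred)

# String similarity between a prediction and a reference SMILES (1.0 = identical):
score = textdistance.levenshtein.normalized_similarity("CCO", "CC=O")
print(f"similarity: {score:.2f}")  # -> 0.75
```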
04/13/2021: Our Scene Graph Benchmark repo has been released. You are welcome to use the code there to extract image features with VinVL pretrained models.
03/08/2021: Oscar+ pretraining code released; please check the last section of VinVL_MODEL_ZOO.md. All image features and model checkpoints in VinVL are also released; please check VinVL for details.
01/13/2021: Our new work VinVL proposes Oscar+, an improved version of Oscar, and provides a better object-attribute detection model to extract features for V+L tasks. VinVL achieves SOTA performance on all seven V+L tasks here. Please stay tuned for the model and code release.
05/28/2020: Released finetuned models on downstream tasks; please check MODEL_ZOO.md.
05/15/2020: Released pretrained models, datasets, and code for finetuning on downstream tasks.
This repository contains the source code needed to reproduce the results presented in the paper Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. We propose a new cross-modal pre-training method, Oscar (Object-Semantics Aligned Pre-training), which leverages object tags detected in images as anchor points to significantly ease the learning of image-text alignments. We pre-train Oscar on the public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, setting new state-of-the-art results on six well-established vision-language understanding and generation tasks. For more on this project, see the Microsoft Research Blog post.
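Concretely, each Oscar training sample is a word-tag-image triple: the caption tokens, the object tags predicted by the detector (which share the word embedding space and serve as the anchor points), and the detected region features. The sketch below is an illustration of that input layout only, not the repo's actual preprocessing code, and the feature dimension shown is detector-specific:

```python
import numpy as np

# One training sample as a word-tag-image triple (schematic).
caption_tokens = ["a", "dog", "sitting", "on", "a", "couch"]
object_tags = ["dog", "couch"]                     # tags from the object detector
region_features = np.zeros((2, 2054), np.float32)  # one row per region (dim varies by detector)

# Text side of the sequence fed to the BERT-style encoder; the tags act as
# anchors linking caption words to image regions.
tokens = ["[CLS]"] + caption_tokens + ["[SEP]"] + object_tags + ["[SEP]"]
# Region features are linearly projected and appended after the token embeddings.
print(tokens)
```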
Task | t2i | t2i | i2t | i2t | IC | IC | IC | IC | NoCaps | NoCaps | VQA | NLVR2 | GQA |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Metric | R@1 | R@5 | R@1 | R@5 | B@4 | M | C | S | C | S | test-std | test-P | test-std |
SoTA_S | 39.2 | 68.0 | 56.6 | 84.5 | 38.9 | 29.2 | 129.8 | 22.4 | 61.5 | 9.2 | 70.92 | 58.80 | 63.17 |
SoTA_B | 54.0 | 80.8 | 70.0 | 91.1 | 40.5 | 29.7 | 137.6 | 22.8 | 86.58 | 12.38 | 73.67 | 79.30 | - |
SoTA_L | 57.5 | 82.8 | 73.5 | 92.2 | 41.7 | 30.6 | 140.0 | 24.5 | - | - | 74.93 | 81.47 | - |
----- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
Oscar_B | 54.0 | 80.8 | 70.0 | 91.1 | 40.5 | 29.7 | 137.6 | 22.8 | 78.8 | 11.7 | 73.44 | 78.36 | 61.62 |
Oscar_L | 57.5 | 82.8 | 73.5 | 92.2 | 41.7 | 30.6 | 140.0 | 24.5 | 80.9 | 11.3 | 73.82 | 80.05 | - |
----- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
VinVL_B | 58.1 | 83.2 | 74.6 | 92.6 | 40.9 | 30.9 | 140.6 | 25.1 | 92.46 | 13.07 | 76.12 | 83.08 | 64.65 |
VinVL_L | 58.8 | 83.5 | 75.4 | 92.9 | 41.0 | 31.1 | 140.9 | 25.2 | - | - | 76.62 | 83.98 | - |
gain | 1.3 | 0.7 | 1.9 | 0.6 | -0.7 | 0.5 | 0.9 | 0.7 | 5.9 | 0.7 | 1.69 | 2.51 | 1.48 |
t2i: text-to-image retrieval; i2t: image-to-text retrieval; IC: image captioning on COCO.
We have released pre-trained models, datasets, VinVL image features, and the Oscar+ pretraining corpus for downstream tasks. Please check VinVL_DOWNLOAD.md for details.
To download checkpoints for vanilla Oscar, please check DOWNLOAD.md for details.
Check INSTALL.md for installation instructions.
Check MODEL_ZOO.md for scripts to run Oscar downstream finetuning.
Check VinVL_MODEL_ZOO.md for scripts to run Oscar+ pretraining and downstream finetuning.
Please consider citing this paper if you use the code:

    @article{li2020oscar,
      title={Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks},
      author={Li, Xiujun and Yin, Xi and Li, Chunyuan and Hu, Xiaowei and Zhang, Pengchuan and Zhang, Lei and Wang, Lijuan and Hu, Houdong and Dong, Li and Wei, Furu and Choi, Yejin and Gao, Jianfeng},
      journal={ECCV 2020},
      year={2020}
    }

    @article{zhang2021vinvl,
      title={VinVL: Making Visual Representations Matter in Vision-Language Models},
      author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
      journal={CVPR 2021},
      year={2021}
    }
Oscar is released under the MIT license. See LICENSE for details.