This demo shows how to use BLIP for conditional and unconditional image captioning.
```shell
cargo run -r --example blip
```

Example output:

```
[Unconditional]: a group of people walking around a bus
[Conditional]: three man walking in front of a bus
Some(["three man walking in front of a bus"])
```
- Multi-batch inference for image captioning
- VQA (visual question answering)
- Retrieval
- TensorRT support for the text model