Generating FastAPI endpoints from spaCy models with vetiver
#12475
isabelizimm started this conversation in New Features & Project Ideas
Hi all--

I am working on vetiver, an MLOps Python library that helps data scientists version, deploy, and monitor models in a lightweight way (docs, GitHub). I'm interested in supporting spaCy, and I am looking for user feedback to make this as ergonomic as possible for spaCy users. 😄 I would love your ideas! Feel free to install the PR:
The core functionality that I am interested in feedback on is vetiver's generation of FastAPI endpoints. Right now I have made a few assumptions: that people are interested in deploying the nlp object, and that input data will be in some sort of one-column data frame. Does this seem correct for most people?

Example usage
You can interact with the API that will start running at http://127.0.0.1:8000, or, in a new notebook, make predictions from the endpoint using a script like the one below ⬇️
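For example, something along these lines, again treating the "text" column name as a placeholder and assuming vetiver's existing predict and vetiver_endpoint helpers behave for this model type as they do for others:

```python
import pandas as pd
from vetiver.server import predict, vetiver_endpoint

# Point at the /predict route of the API started above.
endpoint = vetiver_endpoint("http://127.0.0.1:8000/predict")

# Same one-column shape as the prototype the API was built with;
# the "text" column name is again a placeholder.
new_data = pd.DataFrame({"text": ["vetiver can serve spaCy pipelines over FastAPI."]})

response = predict(endpoint=endpoint, data=new_data)
print(response)
```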
Any and all feedback is appreciated. I'm keen to learn how spaCy is currently living in production, and how to make that a more pleasant experience ✨

Cheers,
Isabel
Reply:

- Thanks for adding support for spaCy! Deploying the …