This repository has been archived by the owner on Nov 21, 2022. It is now read-only.

Serve model locally #72

Closed
Haimchen opened this issue Nov 24, 2019 · 1 comment

Comments

@Haimchen
Collaborator

Goal

Have a local instance of the model to make predictions about tweets.

Description

**Requires adjustment of the API:**

Instead of calling the /predict endpoint, the extension downloads the latest version of the model. Predictions are then made based on this local instance of the model.
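A minimal sketch of what this could look like in the extension, assuming the model is exported in a TensorFlow.js-compatible format (the issue does not specify the framework, so `MODEL_URL`, the IndexedDB key, and the `loadModel`/`predict` helpers below are all hypothetical names):

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical endpoint serving the latest model in TensorFlow.js format.
const MODEL_URL = 'https://example.org/model/model.json';
// Local cache key; tfjs supports saving/loading models via IndexedDB.
const LOCAL_KEY = 'indexeddb://tweet-classifier';

// Load the model from local storage if present, otherwise download
// the latest version and cache it in the browser's IndexedDB.
async function loadModel(): Promise<tf.LayersModel> {
  const cached = await tf.io.listModels();
  if (LOCAL_KEY in cached) {
    return tf.loadLayersModel(LOCAL_KEY);
  }
  const model = await tf.loadLayersModel(MODEL_URL);
  await model.save(LOCAL_KEY);
  return model;
}

// Run a prediction locally instead of calling the /predict endpoint.
// The feature encoding of a tweet is left out; `tweetFeatures` stands
// in for whatever input the model actually expects.
async function predict(tweetFeatures: number[]): Promise<number> {
  const model = await loadModel();
  const input = tf.tensor2d([tweetFeatures]);
  const output = model.predict(input) as tf.Tensor;
  const [score] = await output.data();
  input.dispose();
  output.dispose();
  return score;
}
```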

First steps would include finding out how much data an extension can store and how large our model is; a rough check is sketched below.
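For a quick estimate, the standard StorageManager API reports the origin's usage and quota. This is only a starting point; the actual limits for extension storage (e.g. `chrome.storage`) are browser-specific and may differ:

```typescript
// Rough check of how much the browser will let this origin store.
// Numbers are browser-dependent and not specific to extension storage.
async function reportQuota(): Promise<void> {
  const { usage, quota } = await navigator.storage.estimate();
  console.log(`Using ${usage} of ${quota} bytes available to this origin.`);
}
```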

@Haimchen Haimchen added this to the 0.3 milestone Nov 24, 2019
@djbusstop
Contributor

Closed: same as #34

@djbusstop djbusstop removed this from the 0.3 milestone Apr 3, 2020