
Deploy!!! How can I serve the exported model with TensorFlow Serving via the REST API? #38

xiaoyangnihao opened this issue Aug 8, 2019 · 1 comment


First, thanks for the author's work. I exported the SavedModel with export.py, and I want to deploy it with TensorFlow Serving and make predictions through the REST API. The deployment works well: I start TensorFlow Serving and open the REST API port, but I don't know how to make the prediction request. Can anyone help? Thanks a lot.

[XXX@pachira-sh1 ptts_serving]$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --rest_api_port=8081 --model_name=tacotron --model_base_path=/home/XXX/ptts_serving/saved_models/tacotron
2019-08-08 14:16:24.073923: I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config:  model_name: tacotron model_base_path: /home/XXX/ptts_serving/saved_models/tacotron
2019-08-08 14:16:24.084761: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-08-08 14:16:24.084845: I tensorflow_serving/model_servers/server_core.cc:573]  (Re-)adding model: tacotron
2019-08-08 14:16:24.192643: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: tacotron version: 1000}
2019-08-08 14:16:24.192704: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: tacotron version: 1000}
2019-08-08 14:16:24.192752: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: tacotron version: 1000}
2019-08-08 14:16:24.192890: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:363] Attempting to load native SavedModelBundle in bundle-shim from: /home/XXX/ptts_serving/saved_models/tacotron/1000
2019-08-08 14:16:24.192931: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/XXX/ptts_serving/saved_models/tacotron/1000
2019-08-08 14:16:24.348842: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-08-08 14:16:24.517981: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-08 14:16:25.142247: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2019-08-08 14:16:26.318462: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 2125501 microseconds.
2019-08-08 14:16:26.318600: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:103] No warmup data file found at /home/XXX/ptts_serving/saved_models/tacotron/1000/assets.extra/tf_serving_warmup_requests
2019-08-08 14:16:26.319257: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: tacotron version: 1000}
2019-08-08 14:16:26.348335: I tensorflow_serving/model_servers/server.cc:326] Running gRPC ModelServer at 0.0.0.0:8500 ...
2019-08-08 14:16:26.365618: I tensorflow_serving/model_servers/server.cc:346] Exporting HTTP/REST API at:localhost:8081 ...
[evhttp_server.cc : 239] RAW: Entering the event loop ...
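
To find out what the request body has to contain, the exported signature can be inspected with saved_model_cli; the input names, dtypes and shapes it prints are what the REST request must supply. The path below is simply the one from the log above:

# Show the signatures of the exported model, including input/output names, dtypes and shapes
saved_model_cli show --dir /home/XXX/ptts_serving/saved_models/tacotron/1000 --all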

The official predict example looks like this:

# 4. Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
    -X POST http://localhost:8501/v1/models/half_plus_two:predict

# Returns => { "predictions": [2.5, 3.0, 4.5] }
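
half_plus_two has a single scalar input, so each instance can be a bare number. A model with several named inputs (Tacotron usually takes a text/ID sequence plus its length) needs each instance to be a JSON object keyed by the input names from the signature. The names inputs and input_lengths below are only an assumption; the real ones come from whatever export.py put into the SavedModel:

# Row format for a multi-input model (input names are assumed; check the signature)
{
  "instances": [
    { "inputs": [12, 5, 44, 7], "input_lengths": 4 }
  ]
}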

I tried the request below, but it doesn't work. I'm not sure whether the format of the prediction request is wrong. I hope someone has a solution, thanks.

[XXX@pachira-sh1 ~]$ curl -d '{"instances": ['hello', 5]}' -X POST http://localhost:8081/v1/models/tacotron:predict
{ "error": "JSON Parse error: Invalid value. at offset: 15" }

xiaoyangnihao commented Aug 8, 2019

@LearnedVector @Suhee05
