# Releases: roboflow/inference
## v0.9.16

### 🚀 Added

#### 🎬 `InferencePipeline` can now process video using your custom logic

Prior to v0.9.16, `InferencePipeline` was only able to run inference against Roboflow models. Now you can inject arbitrary logic of your choice and process videos (both files and streams) using a custom function you create. Just look at the example:
```python
import os
import json

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame

TARGET_DIR = "./my_predictions"


class MyModel:

    def __init__(self, weights_path: str):
        # `your_model_loader` is a placeholder for however you load your model
        self._model = your_model_loader(weights_path)

    def infer(self, video_frame: VideoFrame) -> dict:
        return self._model(video_frame.image)


def save_prediction(prediction: dict, video_frame: VideoFrame) -> None:
    # note the "w" mode - the file is created and written, not read
    with open(os.path.join(TARGET_DIR, f"{video_frame.frame_id}.json"), "w") as f:
        json.dump(prediction, f)


os.makedirs(TARGET_DIR, exist_ok=True)  # make sure the sink's target directory exists
my_model = MyModel("./my_model.pt")

pipeline = InferencePipeline.init_with_custom_logic(
    video_reference="./my_video.mp4",
    on_video_frame=my_model.infer,  # <-- your custom video frame processing function
    on_prediction=save_prediction,  # <-- your custom sink for predictions
)

# start the pipeline
pipeline.start()

# wait for the pipeline to finish
pipeline.join()
```
That's not everything! Remember our `workflows` feature? We've just added `workflows` support to `InferencePipeline` (in experimental mode). Check `InferencePipeline.init_with_workflow(...)` to test the feature - a hedged sketch follows below.
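A minimal sketch of what that could look like. The method name comes from this release note, but the parameter names below (`workflow_specification`, `on_prediction`) are assumptions modeled on the other `init_*` constructors shown here, and the API is experimental:

```python
from inference import InferencePipeline

# MY_WORKFLOW_SPECIFICATION would be a workflows definition dict, like the
# ones shown in the v0.9.14 and v0.9.13 sections below
pipeline = InferencePipeline.init_with_workflow(
    video_reference="./my_video.mp4",
    workflow_specification=MY_WORKFLOW_SPECIFICATION,  # assumed parameter name
    on_prediction=save_prediction,  # reuse any sink, e.g. the one defined above
)
pipeline.start()
pipeline.join()
```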
#### ❗ Breaking change

We've reverted the changes introduced in v0.9.15 to `InferencePipeline.init(...)` that made it compatible with the `YOLOWorld` model. Now you need to use `InferencePipeline.init_with_yolo_world(...)` instead, as shown here:
```python
pipeline = InferencePipeline.init_with_yolo_world(
    video_reference="YOUR-VIDEO",
    on_prediction=...,
    classes=["person", "dog", "car", "truck"],
)
```
We've updated the 📖 docs to make it easy to use the new feature.

Thanks @paulguerrie for the great contribution!
### 🌱 Changed

- Huge changes in the 📖 docs - thanks @capjamesg, @SkalskiP, @SolomonLake for the contributions
- Improved contributor experience by adding a contributor guide and separating the GHA CI, such that the most important tests can run against repository forks
- `OpenVINO` is now the default ONNX Execution Provider for x86-based Docker images, improving inference speed (@probicheaux)
- Camera properties in `InferencePipeline` can now be set by the caller (@sberan) - a hedged sketch follows below
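A sketch of what the camera-properties change might look like in use. The `video_source_properties` parameter name and the property keys below are assumptions, not confirmed by this release note:

```python
from inference import InferencePipeline

# assumption: camera/source properties are passed as a dict at pipeline
# initialization; the parameter and key names below are illustrative only
pipeline = InferencePipeline.init(
    model_id="your-project/1",  # placeholder model id
    video_reference=0,  # e.g. a webcam device index
    on_prediction=print,
    video_source_properties={
        "frame_width": 1280,
        "frame_height": 720,
        "fps": 30,
    },
)
```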
### 🔨 Fixed

- Added missing `structlog` dependency to the package (@paulguerrie)
- Clarified the models' licences (@yeldarby)
- Bugs in lambda HTTP inference
- Fixed a portion of security vulnerabilities
- ❗ breaking: two exceptions (`WorkspaceLoadError`, `MalformedWorkflowResponseError`) will now yield an HTTP 502 error when raised, instead of HTTP 500 as previously
- Bug in `workflows` where the class filter at the level of detection-based model blocks was not being applied
### New Contributors

**Full Changelog**: v0.9.15...v0.9.16
## v0.9.15

### What's Changed

- YOLO-World Inference Pipeline by @paulguerrie in #282
- QR code workflow step by @sberan in #286
- Add structured API logger by @PawelPeczek-Roboflow in #287
- Feature/yolov9 by @probicheaux in #290

**Full Changelog**: v0.9.14...v0.9.15
## v0.9.15rc1

### What's Changed

- YOLO-World Inference Pipeline by @paulguerrie in #282
- QR code workflow step by @sberan in #286
- Add structured API logger by @PawelPeczek-Roboflow in #287
- Feature/yolov9 by @probicheaux in #290

**Full Changelog**: v0.9.14...v0.9.15rc1
## v0.9.14

### 🚀 Added

#### LMMs (GPT-4V and CogVLM) 🤝 `workflows`

Now, with Roboflow `workflows`, LMM integration is made easy 💪. Just look at the demo! 🤯

*(demo video: lmms_in_workflows.mp4)*

As always, we encourage you to visit the `workflows` docs 📖 and examples.

This is how to create a multi-functional app with `workflows` and LMMs. Start the server:

```bash
inference server start
```
Then run:

```python
from inference_sdk import InferenceHTTPClient

LOCAL_CLIENT = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001",
    api_key=ROBOFLOW_API_KEY,
)

FLEXIBLE_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
        {"type": "InferenceParameter", "name": "open_ai_key"},
        {"type": "InferenceParameter", "name": "lmm_type"},
        {"type": "InferenceParameter", "name": "prompt"},
        {"type": "InferenceParameter", "name": "expected_output"},
    ],
    "steps": [
        {
            "type": "LMM",
            "name": "step_1",
            "image": "$inputs.image",
            "lmm_type": "$inputs.lmm_type",
            "prompt": "$inputs.prompt",
            "json_output": "$inputs.expected_output",
            "remote_api_key": "$inputs.open_ai_key",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "structured_output", "selector": "$steps.step_1.structured_output"},
        {"type": "JsonField", "name": "llm_output", "selector": "$steps.step_1.*"},
    ],
}

response_gpt = LOCAL_CLIENT.infer_from_workflow(
    specification=FLEXIBLE_SPECIFICATION,
    images={
        "image": cars_image,
    },
    parameters={
        "open_ai_key": OPEN_AI_KEY,
        "lmm_type": "gpt_4v",
        "prompt": "You are supposed to act as object counting expert. Please provide number of **CARS** visible in the image",
        "expected_output": {
            "objects_count": "Integer value with number of objects",
        },
    },
)
```
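If you want to consume the result, a hypothetical follow-up is sketched below - the exact response shape is an assumption, not confirmed by this note:

```python
# assumption: the response exposes the outputs declared in the specification
# under their declared names ("structured_output", "llm_output")
print(response_gpt["structured_output"])
```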
### 🌱 Changed

- Developer-friendly theming, aka dark theme, by @onuralpszr in #270 (thanks for the contribution 🥇)
- `YoloWorld` docs by @capjamesg in #276
- @ryanjball made their first contribution in #271 with his cookbook for RGB anomaly detection
🔨 Fixed
- turn off instant page for load to cookbook page properly by @onuralpszr in #275 (thanks for contribution 🥇 )
- bug in
workflows
that made cropping in multi-detection set-up
**Full Changelog**: v0.9.13...v0.9.14
## v0.9.13

### 🚀 Added

#### YOLO World 🤝 `workflows`

We've introduced the YOLO World model into `workflows`, making it trivially easy to use the model like any other object-detection model.

To try this out, install the dependencies first:

```bash
pip install inference-sdk inference-cli
```

Start the server:

```bash
inference server start
```

And run the script:
```python
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="YOUR_API_KEY")

YOLO_WORLD = {
    "specification": {
        "version": "1.0",
        "inputs": [
            {"type": "InferenceImage", "name": "image"},
            {"type": "InferenceParameter", "name": "classes"},
            {"type": "InferenceParameter", "name": "confidence", "default_value": 0.003},
        ],
        "steps": [
            {
                "type": "YoloWorld",
                "name": "step_1",
                "image": "$inputs.image",
                "class_names": "$inputs.classes",
                "confidence": "$inputs.confidence",
            },
        ],
        "outputs": [
            {"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"},
        ],
    }
}

response = CLIENT.infer_from_workflow(
    specification=YOLO_WORLD["specification"],
    images={
        "image": frame,
    },
    parameters={
        "classes": ["yellow filling", "black hole"],  # each time you may specify different classes!
    },
)
```
Check the details in the documentation 📖 and discover usage examples.

### 🏆 Contributors

@PawelPeczek-Roboflow (Paweł Pęczek)

**Full Changelog**: v0.9.12...v0.9.13
## v0.9.12

### 🚀 Added

#### `inference` cookbook

Visit our cookbook 🧑‍🍳
### 🔨 Fixed

In this release, we are fixing issues spotted in the `YoloWorld` model released in v0.9.11, in particular:

- Bug with hashing of YOLO World classes that in some cases made it impossible to run inference, due to improper caching of CLIP embeddings
- Bug with YOLO World pre-processing of colour channels, which caused the model to misunderstand prompted colours
### 🏆 Contributors

@capjamesg (James Gallagher), @PawelPeczek-Roboflow (Paweł Pęczek)

**Full Changelog**: v0.9.11...v0.9.12
## v0.9.12rc3

Fixed embeddings hashing.

## v0.9.12rc2

Fixed hashing of text embeddings.

## v0.9.12rc1

Release candidate with a fix to YOLO World pre-processing.
## v0.9.11

### 🚀 Added

#### YOLO World in `inference`

Have you heard about the YOLO World model? 🤔 If not, you will probably be interested in learning something about it! Our blog post 📰 may be a good starting point ❗

The great news is that YOLO World is already integrated with `inference`. The model is capable of performing zero-shot detection of classes specified in an inference parameter. Thanks to that, you can start making videos like this right now 🚀

*(demo video: yellow-filling-output-1280x720.mp4)*

Simply install the dependencies:

```bash
pip install inference-sdk inference-cli
```

Start the server:

```bash
inference server start
```

And run inference against our HTTP server:
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
result = client.infer_from_yolo_world(
    inference_input=YOUR_IMAGE,
    class_names=["dog", "cat"],
)
```
#### Active Learning 🤝 `workflows`

Active Learning data collection made simple with `workflows` 🔥 Now, with just a little bit of configuration, you can start data collection to improve your model over time. Just take a look at how easy it is:

*(demo video: active_learning_in_workflows.mp4)*

Key features:

- Works for all models supported on the Roboflow platform, including the ones from Roboflow Universe - making it trivial to use an off-the-shelf model during the project kick-off stage to collect a dataset while serving meaningful predictions
- Combines well with multiple `workflows` blocks - including `DetectionsConsensus` - making it possible to sample based on the predictions of a model ensemble 💥
- The Active Learning block may use the project-level Active Learning config, or define the Active Learning strategy directly in the block definition (refer to the Active Learning documentation 📖 for details on how to configure data collection)

See the documentation 📖 of the new `ActiveLearningDataCollector` to find detailed info; a hedged sketch follows below.
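A minimal sketch of how the block might slot into a workflow definition. The step type `ActiveLearningDataCollector` comes from this release note, but its field names below (`predictions`, `target_dataset`) are assumptions modeled on the other step definitions in these notes - check the documentation for the real schema:

```python
# hypothetical workflow specification - the Active Learning step's field
# names are assumptions, not confirmed by this release note
AL_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "step_1",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # any Roboflow / Universe model
        },
        {
            "type": "ActiveLearningDataCollector",
            "name": "al_step",
            "image": "$inputs.image",
            "predictions": "$steps.step_1.predictions",  # assumed field
            "target_dataset": "your-project",  # assumed field - dataset collecting samples
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"},
    ],
}
```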
### 🌱 Changed

#### `InferencePipeline` now works with all models supported on the Roboflow platform 🎆

For a long time, `InferencePipeline` worked only with object-detection models. This is no longer the case - from now on, the other types of models supported on the Roboflow platform (including stubs, like `my-project/0`) work under `InferencePipeline`. No changes are required in existing code. Just pass the `model_id` of your model and the pipeline should work. Sinks suited for detection-only models were adjusted to ignore non-compliant formats of predictions and produce warnings notifying about the incompatibility.
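For illustration, a minimal sketch - the `model_id` below is a placeholder, and any model type from your workspace or Roboflow Universe should work the same way:

```python
from inference import InferencePipeline

# works for non-detection models too, e.g. a classification model or a stub
pipeline = InferencePipeline.init(
    model_id="my-project/0",  # placeholder - put your own model_id here
    video_reference="./my_video.mp4",
    on_prediction=print,  # detection-specific sinks warn and skip instead of failing
)
pipeline.start()
pipeline.join()
```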
### 🔨 Fixed

- Bug in the `yolact` model in #266

### 🏆 Contributors

@paulguerrie (Paul Guerrie), @probicheaux (Peter Robicheaux), @PawelPeczek-Roboflow (Paweł Pęczek)

**Full Changelog**: v0.9.10...v0.9.11