Releases: roboflow/inference
v0.16.2
🚀 Added
Segment Anything 2 in workflows 🥳
We prepared a great amount of changes to workflows and could not really decide which update to start with, but in the end we found the onboarding of the SAM 2 model most exciting. Thanks to @hansent's effort we have introduced a SAM 2 workflow block.
You can use SAM2 standalone, or you can ground its predictions with other detection models, which is the true power of workflows. Thanks to grounding, you can generate instance segmentation masks for each bounding box predicted by your object detection model.
❗ We do not support SAM2 on the Roboflow Hosted Platform yet, but it is possible to use the inference server start command to run a local server supporting the SAM2 model and connect it to the workflows UI to run examples.
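Below is a minimal sketch of a workflow definition grounding SAM2 with an object detection model. The SAM2 block identifier and its boxes parameter are assumptions for illustration - check the blocks documentation for the exact names:
```python
# Minimal sketch: detect objects, then ask SAM2 for a mask per bounding box.
# The "roboflow_core/segment_anything@v1" identifier and the "boxes" parameter
# are assumptions - verify them against the blocks documentation.
SAM2_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/segment_anything@v1",  # assumed identifier
            "name": "sam2",
            "images": "$inputs.image",
            "boxes": "$steps.detector.predictions",  # grounding with detections
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "masks", "selector": "$steps.sam2.predictions"},
    ],
}
```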
workflows 🤝 SAHI
We've added a set of blocks that let you apply the SAHI technique, based on utilities provided by supervision.
We are going to work on simplifying SAHI usage in the workflows UI, but for now you need to use three blocks to effectively apply the technique:
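Here is a sketch of the three-block chain: slice the image, run detection on each slice, then stitch predictions back together. The block identifiers and output field names are assumptions for illustration - check the blocks gallery for the exact names:
```python
# Sketch of the SAHI chain in a workflow definition. "image_slicer" and
# "detections_stitch" identifiers (and the "slices" output name) are assumed.
SAHI_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/image_slicer@v1",  # assumed identifier
            "name": "slicer",
            "image": "$inputs.image",
        },
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$steps.slicer.slices",  # assumed output name
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/detections_stitch@v1",  # assumed identifier
            "name": "stitch",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detector.predictions",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "predictions", "selector": "$steps.stitch.predictions"},
    ],
}
```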
Classical Computer Vision methods in workflows 🔥
We have not forgotten about our good old friends - that's why we also added a bunch of blocks with classical Computer Vision algorithms (a sketch using the Dominant Color block follows after the list):
- Dominant Color block by @NickHerrig in #578
- SIFT, SIFT matching, classical pattern matching and others by @ryanjball in #581
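As a taste of the new blocks, here is a minimal sketch using the Dominant Color block. The block identifier and its output field name are assumptions - consult the blocks gallery for the exact names:
```python
# Sketch: extract the dominant color of an input image with the new block.
# "roboflow_core/dominant_color@v1" and the "rgb_color" output are assumed.
DOMINANT_COLOR_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/dominant_color@v1",  # assumed identifier
            "name": "color",
            "image": "$inputs.image",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "dominant_color", "selector": "$steps.color.rgb_color"},  # assumed output
    ],
}
```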
🌱 Changed
- Added encoding='utf-8' to setup.py by @Bhavay-2001 in #556
- Move landing static assets to /static/ namespace by @iurisilvio in #577
- Add exclusion rules for dedicated deployment authorizer by @PacificDou in #576
- Workflow private block properties by @EmilyGavrilenko in #579
🔨 Fixed
- Fix security issues with landing page by @PawelPeczek-Roboflow in #584
- Fixed a regression in the Custom Metadata Block that was introduced in v0.16.0 - PR with fix by @chandlersupple (#573) - we kindly ask clients relying on the Custom Metadata Block and running their workflows on-prem to update inference or the inference server
- Fixed a bug in the workflows Execution Engine that was making it impossible to feed the same block with two identical selectors (fixed in #581)
❗ In release 0.16.0 we introduced a bug impacting workflows and inference_sdk
The mistake was introduced in #565 and fixed in #585 (both by @PawelPeczek-Roboflow 😢) and was causing issues with the order of results for specific workflows blocks:
- blocks with Roboflow models, whenever used with batch input (for instance when a workflow was run against multiple images, or Dynamic Crop was used), were mismatching the order of predictions with respect to the order of images
- the same was true for the OpenAI block and the GPT-4V block
- the problem was also introduced into inference_sdk, so whenever the client was called with multiple images, results may have been mismatched
We advise all our clients to upgrade to the new release and abandon usage of inference==0.16.0.
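Upgrading is a one-liner with pip:
```
pip install --upgrade "inference>=0.16.1"
# or, depending on which package you use
pip install --upgrade "inference-cli>=0.16.1"
pip install --upgrade "inference-sdk>=0.16.1"
```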
🏅 New Contributors
- @Bhavay-2001 made their first contribution in #556
Full Changelog: v0.16.1...v0.16.2
v0.16.0
❗ In release 0.16.0 we introduced a bug impacting workflows and inference_sdk
The mistake was introduced in #565 and fixed in #585 (both by @PawelPeczek-Roboflow 😢) and was causing issues with the order of results for specific workflows blocks:
- blocks with Roboflow models, whenever used with batch input (for instance when a workflow was run against multiple images, or Dynamic Crop was used), were mismatching the order of predictions with respect to the order of images
- the same was true for the OpenAI block and the GPT-4V block
- the problem was also introduced into inference_sdk, so whenever the client was called with multiple images, results may have been mismatched
🚀 Added
The next bunch of updates for workflows 🥳
⚓ Versioning
From now on, both the Execution Engine and workflows blocks are versioned to ensure greater stability across the changes we make to improve the ecosystem. Each workflow definition now declares a version, forcing the app to run against a specific version of the Execution Engine. If the denoted version is 1.1.0, then the workflow requires Execution Engine >=1.1.0,<2.0.0, and we gain the ability to concurrently expose multiple major versions of the EE in the library (doing our best to ensure that within a major version we only add features and support everything that was released earlier within the same major). On top of that:
- the block manifest metadata field name will now be understood as the name of a block family, with an additional tag called version possible to be added; we propose the following naming convention for block names: namespace/family_name@v1. Thanks to those changes, anyone can maintain multiple versions of the same block (appending new implementations to their plugin), ensuring backwards compatibility on breaking changes
- each block manifest class may optionally expose the class method get_execution_engine_compatibility(...), which is used during loading to ensure that the selected Execution Engine is capable of running the specific block
✋ Example block manifest
```python
from typing import Literal, Optional

from pydantic import ConfigDict

from inference.core.workflows.prototypes.block import WorkflowBlockManifest


class BlockManifest(WorkflowBlockManifest):
    model_config = ConfigDict(
        json_schema_extra={
            "name": "My Block",
            "version": "v1",
            # ... other schema metadata
        }
    )
    type: Literal["my_namespace/my_block@v1"]
    # ... other manifest fields

    @classmethod
    def get_execution_engine_compatibility(cls) -> Optional[str]:
        return ">=1.0.0,<2.0.0"
```
🚨 ⚠️ BREAKING ⚠️ 🚨 Got rid of asyncio in Execution Engine
If you were tired of coroutines performing compute-heavy tasks in workflows:
```python
class MyBlock(WorkflowBlock):

    async def run(self):
        pass
```
we have great news. We got rid of asyncio in favour of standard functions and methods, which are much more intuitive in our setup. This change is obviously breaking for existing custom blocks, but worry not. Here is an example of what needs to be changed - usually you just need to remove the async markers, but sometimes, unfortunately, pieces of asyncio code need to be recreated.
```python
class MyBlock(WorkflowBlock):

    def run(self):
        pass
```
Endpoint to expose workflow definition schema
Thanks to @EmilyGavrilenko (#550), the UI is now able to verify syntax errors in workflow definitions automatically. A sketch of how a client could use the new endpoint follows below.
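The exact route is an assumption based on PR #550 - consult your server's OpenAPI docs (e.g. http://localhost:9001/docs) for the real path:
```python
# Sketch: fetch the workflow definition schema from a running inference
# server. The endpoint path below is an assumption, not a documented route.
import requests

response = requests.get("http://localhost:9001/workflows/definition/schema")
response.raise_for_status()
schema = response.json()  # JSON schema usable to validate workflow definitions
```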
Roboflow Dedicated Deployment is closer and closer 😃
Thanks to @PacificDou, the inference server is getting ready to support new functionality nicknamed Dedicated Deployment. Stay tuned for more details - we can tell you that this is something worth waiting for. You may find some hints in the PR.
🔨 Fixed
🚨 ⚠️ BREAKING ⚠️ 🚨 HTTP client of the inference server changes its default behaviour
The default value for the flag client_downsizing_disabled was changed from False to True in release 0.16.0! For clients using models with input size above 1024x1024 on the hosted platform, this should improve prediction quality (the previous default behaviour caused input to be downsized and then artificially upsized on the server side, with worse image quality). Some clients may prefer the previous setting to potentially improve speed (when the internet connection is a bottleneck and large images are submitted despite a small model input size).
If you liked the previous behaviour more - simply:
```python
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

client = InferenceHTTPClient(
    "https://detect.roboflow.com",
    api_key="XXX",
).configure(InferenceConfiguration(
    client_downsizing_disabled=False,
))
```
setuptools was migrated to a version above 70.0.0 to mitigate a security issue
We've updated the rf-clip package to support setuptools>70.0.0 and bumped the version on the inference side.
🌱 Changed
- 📖 Add documentation for ONNXRUNTIME_EXECUTION_PROVIDERS by @grzegorz-roboflow in #562 - see here
- 📖 Update docs for easier quickstart by @komyg in #544
- 📖 Add Inference Windows CUDA documentation by @capjamesg in #502
- Add @capjamesg to CODEOWNERS by @capjamesg in #564
- Add persistent queue to usage collector by @grzegorz-roboflow in #568
🏅 New Contributors
Full Changelog: v0.15.2...v0.16.0
v0.15.2
What's Changed
- Separate the LMM block into OpenAI and CogVLM by @EmilyGavrilenko in #549
- Do not log warning when usage payload is put back into the queue by @grzegorz-roboflow in #551
- Initialize usage_collector._async_lock only if async loop can be obtained by @grzegorz-roboflow in #552
- Cache workflow specification for offline use by @sberan in #555
- Custom Metadata block for Model Monitoring users by @robiscoding in #553
- Allow zone defined as input parameter to be passed to perspective_correction by @grzegorz-roboflow in #559
- Increment num_errors only if pingback initialized by @iurisilvio in #527
- SAM2 by @hansent and @probicheaux in #557
Full Changelog: v0.15.1...v0.15.2
v0.15.1
What's Changed
- Refactor Visualization Workflow Block Inheritance by @yeldarby in #538
- Add florence2 aliases and bugfix TransformersModel by @probicheaux in #525
- Add BackgroundColorAnnotator block by @capjamesg in #542
- Bugfix: keypoint detection model block by @EmilyGavrilenko in #543
Full Changelog: v0.15.0...v0.15.1
v0.15.0
What's Changed
- Fix broken NMS function by @PawelPeczek-Roboflow in #535
- Usage Tracking by @grzegorz-roboflow in #476
- Add workflow benchmark by @grzegorz-roboflow in #536
- Adjust usage collector by @grzegorz-roboflow in #537
- Add python code block to workflows by @PawelPeczek-Roboflow in #509
- Supervision Annotator Blocks by @yeldarby in #533
Full Changelog: v0.14.1...v0.15.0
v0.14.1
🔨 Fixed
We did not remove usage of @deprecated elements of supervision in release v0.14.0, which happened just a moment before supervision v0.22.0. We are sorry for that problem. It is fixed in v0.14.1.
Thanks @probicheaux for spotting the problem and providing a PR with the fix.
What to do if you cannot migrate to inference>=0.14.1?
In the script that resolves your environment (or in your requirements definition), enforce supervision<=0.21.0:
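For example, with pip:
```
pip install "supervision<=0.21.0"
```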
Full Changelog: v0.14.0...v0.14.1
v0.14.0
🚀 Added
inference is ready for Florence-2 🤩
Thanks to @probicheaux, the inference package is ready for Florence-2. It is a Large Multimodal Model capable of processing both image and text input, handling a wide range of generic vision and language-vision tasks.
We are excited to add it to the collection of models offered by inference. Due to the complexity of the build, the model is shipped only within a docker image 🐋, included in our official inference server build for GPU 🤯. To fully utilise the new model you need to wait for the release on the Roboflow platform.
You should be able to spin up your container via inference-cli:
```
inference server start
```
❗ What is required to run the container and what has changed in the build?
We needed to bump the required CUDA version in the docker build for the GPU server from 11.7 to 11.8. That is why you may not be able to run the container on servers with older CUDA. We ran the server experimentally on a machine with CUDA 11.6 and it worked, but we cannot guarantee that it works on older builds.
🤔 How to run the new model?
```python
import requests

server_url = "http://localhost:9001"  # your inference server

payload = {
    "api_key": "<YOUR-ROBOFLOW-API-KEY>",
    "image": {
        "type": "url",
        "value": "https://media.roboflow.com/dog.jpeg",
    },
    "prompt": "<CAPTION>",
    "model_id": "<model-id-available-when-roboflow-platform-starts-the-support>",
}
response = requests.post(
    f"{server_url}/infer/lmm",
    json=payload,
)
print(response.json())
```
New blocks in workflows 🥹
We have added the following blocks to the workflows ecosystem:
- Property Definition, which lets you use a specific attribute of data as an input for the next step or as an output
- Detections Classes Replacement, to replace classes of bounding boxes in the scenario when you first run a general object-detection model, then crop the image based on predictions and apply a secondary classification model. Results of the secondary model replace the originally predicted classes
- ... and a few others - explore our collection of blocks ✨
Blocks that were added are still in refinement - we may improve them over time - so stay tuned! A sketch of the Property Definition block follows below.
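A minimal sketch of pulling class names out of detections with the Property Definition block so they can be exposed as a workflow output. The block type, operation name and output selector are assumptions - check the blocks documentation for the exact identifiers:
```python
# Sketch: detect objects, then extract the class names of the detections.
# "PropertyDefinition", "ExtractDetectionProperty" and the "output" selector
# are assumed names used only for illustration.
PROPERTY_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "PropertyDefinition",  # assumed block identifier
            "name": "classes",
            "data": "$steps.detector.predictions",
            "operations": [
                {"type": "ExtractDetectionProperty", "property_name": "class_name"},  # assumed
            ],
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "classes", "selector": "$steps.classes.output"},
    ],
}
```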
🌱 Changed
🔐 Mitigation for security vulnerabilities ❗ BREAKING 🚧
To mitigate two security vulnerabilities:
- unsafe deserialisation of pickled inputs, enabled by default for self-hosted inference
- Server-Side Request Forgery (SSRF)
we needed to add a couple of changes, one of which is breaking. From now on, the default value for the env variable ALLOW_NUMPY_INPUT is False.
Implications:
- if you rely on pickled numpy images passed to the inference Python package or sent to the inference server - you need to set this env variable explicitly to ALLOW_NUMPY_INPUT=true in your environment or start the server with this env variable (see how); a sketch follows after this list
- there are also other options which you can tune to run the inference server more safely - see our docs 📖
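A minimal sketch of opting back in when using the Python package - the assumption here is that the variable must be present in the environment before inference reads its configuration, so set it at the very top of your script (or export it in the shell / pass it to the docker container instead):
```python
import os

# Must be set before inference loads its configuration (assumption: the
# variable is read at import time), hence it comes before the import below.
os.environ["ALLOW_NUMPY_INPUT"] = "True"

import inference  # noqa: E402 - imported only after the variable is set
```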
🔨 Fixed
❗ Removed a bug in inference post-processing
Some models trained on the Roboflow platform experienced a problem with predictions post-processing when padding was selected as an option while creating the dataset. Thanks to @grzegorz-roboflow, it was fixed in #495
Other minor fixes
- fixed malformed workflow outputs in #499
- replace match statement with if-else for Python 3.9 compatibility by @natserract in #488
- InferencePipeline: allow it to run offline even if Active Learning enabled by @sberan in #491
- Import sky only when required because it is slow by @iurisilvio in #494
- Change GPT-4 default model into GPT-4o by @PawelPeczek-Roboflow in #500
- Monitoring improvements by @robiscoding in #490 and #492
- Extend perspective correction to warp image by @grzegorz-roboflow in #503
- Show block name in error message thrown by steps_initialiser by @grzegorz-roboflow in #504
- Fix issue with workflows blocks after adding request id to response by @PawelPeczek-Roboflow in #505
- Follow config to import core models by @iurisilvio in #508
- Rename workflow blocks by @EmilyGavrilenko in #511
- Update upload weights list by @capjamesg in #512
- Default to Local Workflows Execution by @yeldarby in #515
🏅 New Contributors
- @natserract made their first contribution in #488
- @EmilyGavrilenko made their first contribution in #511
Full Changelog: v0.13.0...v0.14.0
v0.13.0
🚀 Added
🤯 Next-level workflows
Better integration with Roboflow platform
From now on, we have much better alignment with the UI workflow creator available in the Roboflow app. Just take a look how nicely it presents itself, thanks to @hansent, @EmilyGavrilenko, @casmwenger, @kresetar and @jchens.
But great looks are not the only feature - the team has added tons of functionality, including:
- operations on data processed by the workflow Execution Engine - including filtering and conditions - which can now be built with the UI creator
- automatic suggestions of the Roboflow models and projects available to be used
- a preview option to run a workflow that is under development
- ... and much more - check it out yourself!
workflows Universal Query Language (UQL)
We've added Universal Query Language as an extension to the workflows ecosystem. We discovered that it would be extremely helpful for users to be able to build chains of transformations (like filtering, selecting only specific bounding boxes, aggregating results etc.) or expressions evaluating into booleans. UQL powers UI extensions like the one presented below; a small sketch of its shape follows after this paragraph.
Yes, we know that UQL is not the best name, but like most engineers we struggle to find names for the things we create. Please help us in that regard!
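To give a flavour of the language, here is a sketch of an operations chain that filters detections down to a single class. The operation and comparator names are assumptions meant to illustrate the shape of UQL, not its exact vocabulary - the UQL reference lists the real one:
```python
# Sketch of a UQL operations chain: keep only detections of class "person".
# All type names below are illustrative assumptions.
FILTER_PERSONS = [
    {
        "type": "DetectionsFilter",  # assumed operation name
        "filter_operation": {
            "type": "StatementGroup",
            "operator": "and",
            "statements": [
                {
                    "type": "BinaryStatement",
                    "left_operand": {
                        "type": "DynamicOperand",
                        "operations": [{"type": "ExtractDetectionProperty", "property_name": "class_name"}],
                    },
                    "comparator": {"type": "in (Sequence)"},
                    "right_operand": {"type": "StaticOperand", "value": ["person"]},
                }
            ],
        },
    }
]
```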
workflows 🤝 sv.Detections
From now on, the default representation of predictions from object-detection, instance-segmentation and keypoint-detection models is sv.Detections. That has a lot of practical implications for block creators. Take a look how easy it is to add a block that makes predictions with your custom model. This was possible mainly thanks to @grzegorz-roboflow.
👉 Code snippet with your custom model block fitting our ecosystem
```python
from typing import List, Literal, Type, Union

import supervision as sv

from inference.core.workflows.entities.base import (
    Batch,
    OutputDefinition,
    WorkflowImageData,
)
from inference.core.workflows.entities.types import (
    BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND,
    ImageInputField,
    StepOutputImageSelector,
    WorkflowImageSelector,
)
from inference.core.workflows.prototypes.block import (
    BlockResult,
    WorkflowBlock,
    WorkflowBlockManifest,
)


class BlockManifest(WorkflowBlockManifest):
    type: Literal["MyModel"]
    images: Union[WorkflowImageSelector, StepOutputImageSelector] = ImageInputField

    @classmethod
    def describe_outputs(cls) -> List[OutputDefinition]:
        return [
            OutputDefinition(
                name="predictions", kind=[BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND]
            )
        ]


class MyModelBlock(WorkflowBlock):

    def __init__(self):
        self._model = load_my_model(...)  # placeholder for loading your model

    @classmethod
    def get_manifest(cls) -> Type[WorkflowBlockManifest]:
        return BlockManifest

    async def run(self, image: WorkflowImageData) -> BlockResult:
        result = self._model(image)
        # convert results into sv.Detections - a couple of keys need to be
        # added to the .data property; docs covering that will come soon, and
        # in case of questions - do not hesitate to ask
        detections = sv.Detections(...)
        return {"predictions": detections}
```
True conditional branching for SIMD operations in workflows
We had a serious technical limitation in previous iterations of the workflows Execution Engine - the lack of an ability to simulate different execution branches for each element of data processed. This is no longer the case! Now it is possible to detect high-level objects, make crops based on detections and then, for each cropped image, independently decide whether or not to save it in a Roboflow project - based on a condition stated in UQL 🤯 (see the sketch below).
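A sketch of such per-element branching: detect objects, crop each detection, classify every crop and only upload crops whose class meets a UQL condition. All step type names below are assumptions used for illustration - consult the blocks documentation for the exact identifiers and parameters:
```python
# Sketch of a branching workflow; every "type" below is an assumed identifier.
BRANCHING_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "Crop",  # assumed identifier
            "name": "crops",
            "image": "$inputs.image",
            "predictions": "$steps.detector.predictions",
        },
        {
            "type": "ClassificationModel",  # assumed identifier
            "name": "classifier",
            "image": "$steps.crops.crops",
            "model_id": "<your-classification-model>",
        },
        {
            # assumed condition step evaluating a UQL statement independently
            # for every crop and routing only matching elements onwards
            "type": "Condition",
            "name": "gate",
            "condition_statement": "<UQL statement>",
            "next_steps": ["$steps.data_sink"],
        },
        {
            "type": "RoboflowDatasetUpload",  # assumed identifier
            "name": "data_sink",
            "image": "$steps.crops.crops",
            "target_project": "<your-project>",
        },
    ],
    "outputs": [],
}
```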
But this is not everything! As a technical preview we prepared a rock-paper-scissors game in workflows. Check it out here
Advancements in video processing with workflows
This feature is still experimental, but we are making progress - now it is possible to process multiple videos at once with InferencePipeline and workflows:
(Screen recording attached to the release: Screen.Recording.2024-06-27.at.13.22.37.mov)
👉 Code snippet
```python
from typing import List, Optional

import cv2
import supervision as sv

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame
from inference.core.utils.drawing import create_tiles

ANNOTATOR = sv.BoundingBoxAnnotator()


def main() -> None:
    workflow_specification = {
        "version": "1.0",
        "inputs": [
            {"type": "WorkflowImage", "name": "image"},
        ],
        "steps": [
            {
                "type": "ObjectDetectionModel",
                "name": "step_1",
                "image": "$inputs.image",
                "model_id": "yolov8n-640",
                "confidence": 0.5,
            }
        ],
        "outputs": [
            {"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"},
        ],
    }
    pipeline = InferencePipeline.init_with_workflow(
        video_reference=[
            "<YOUR-VIDEO>",
            "<YOUR-VIDEO>",
        ],
        workflow_specification=workflow_specification,
        on_prediction=workflows_sink,
    )
    pipeline.start()
    pipeline.join()


def workflows_sink(
    predictions: List[Optional[dict]],
    video_frames: List[Optional[VideoFrame]],
) -> None:
    # each list element corresponds to one of the processed videos
    images_to_show = []
    for prediction, frame in zip(predictions, video_frames):
        if prediction is None or frame is None:
            continue
        detections: sv.Detections = prediction["predictions"]
        visualised = ANNOTATOR.annotate(frame.image.copy(), detections)
        images_to_show.append(visualised)
    tiles = create_tiles(images=images_to_show)
    cv2.imshow("Predictions", tiles)
    cv2.waitKey(1)


if __name__ == '__main__':
    main()
```
Other changes:
- Step Name Property Copy Changes by @yeldarby in #444
- Abstract ImageInputField and RoboflowModelField + Copy Changes by @yeldarby in #445
- Allow CORS by default by @yeldarby in #485
- Add PerspectiveCorrectionBlock and PolygonSimplificationBlock by @grzegorz-roboflow in #441
List of contributors: @EmilyGavrilenko, @casmwenger, @kresetar, @jchens, @yeldarby, @grzegorz-roboflow, @hansent, @SkalskiP, @PawelPeczek-Roboflow
Predictions JSON ➕ visualisation @ Roboflow hosted platform
Previously clients needed to choose between visualisation of predictions and Predictions JSON returned from inference
server running at Roboflow hosted platform. This is no longer the case thanks to @SolomonLake and #467
```python
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com/",
    api_key="<YOUR-API-KEY>"
).configure(InferenceConfiguration(
    format="image_and_json",
))

response = CLIENT.infer("<your_image>.jpg", model_id="yolov8n-640")

# check out response["predictions"] and response["visualisation"]
```
🌱 Changed
- Fixing yolov10 documentation by @nathan-marraccini in #480
- Supervision updates for Predict on a Video, Webcam or RTSP Stream Page by @nathan-marraccini in #477
- Add paligemma aliases for newly uploaded models by @probicheaux in #463
- Add PaliGemma LoRA by @probicheaux in #464
- Bump braces from 3.0.2 to 3.0.3 in /inference/landing by @dependabot in #466
- Fix security vulnerabilities by @PawelPeczek-Roboflow in #483
🥇 New Contributors
- @nathan-marraccini made their first contribution in #480
Full Changelog: v0.12.1...v0.13.0
v0.12.1
🔨 Fixed
Incompatibility of opencv-python with numpy>=2.0.0 ⚔️
On Jun 16, numpy 2.0 was released, making old builds of opencv-python incompatible with the new numpy.
@grzegorz-roboflow investigated the issue and discovered that inference users can be impacted if the inference-sdk package was used standalone, due to the lack of an upper-bound limit on the numpy dependency in that library.
To support impacted community members and Roboflow clients, we've prepared a release with a bug fix.
Symptoms of the problem:
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead [...]
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
```
To solve the problem, choose one of the following solutions:
👉 Install inference>=0.12.1
pip install "inference>=0.12.1"
# or
pip install "inference-cli>=0.12.1"
# or
pip install "inference-sdk>=0.12.1"
👉 Downgrade numpy
```
# in your Python environment hosting the inference library
pip install "numpy<2.0.0"
```
We are sorry for the inconvenience.
❗ Planned deprecations
- np_image_to_base64(...) to be replaced with encode_image_to_jpeg_bytes(...) in the future - @grzegorz-roboflow in #469
🌱 Changed
- Remove sv.FPSMonitor deprecation warnings by @grzegorz-roboflow in #461
- Loose boto3 requirements by @iurisilvio in #457 - inference should install faster now 🤗
- Fix paligemma generation bug by @probicheaux in #459
- Add support for a tunnel to expose inference server to remote calls by @iurisilvio in #451
- Workflow documentation additions, add YOLOv10 docs by @capjamesg in #475
- fix Docker Getting Started link in docs returns 404 by @grzegorz-roboflow in #478
🏅 New Contributors
- @iurisilvio made their first contribution in #457
Full Changelog: v0.12.0...v0.12.1
v0.12.0
🔨 Fixed
🔥 YOLOv10 in inference now has pre- and post-processing issues solved
Thanks to @jameslahm, we have the inconsistencies in results from the YOLOv10 model in the inference package sorted out. PR #437
🌱 Changed
❗ breaking change ❗ Inference from PaliGemma
PaliGemma models change category from foundation model to Roboflow model. That implies the following change in how they are exposed by the inference server:
Before:
```python
import base64
import requests

PORT = 9001  # port of your local inference server


def encode_base64(image_path: str) -> str:
    # helper encoding the image file into a base64 string
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def do_gemma_request(prompt: str, image_path: str):
    infer_payload = {
        "image": {
            "type": "base64",
            "value": encode_base64(image_path),
        },
        "api_key": "<ROBOFLOW-API-KEY>",
        "prompt": prompt,
    }
    response = requests.post(
        f"http://localhost:{PORT}/llm/paligemma",
        json=infer_payload,
    )
    resp = response.json()
```
Now:
```python
def do_gemma_request(prompt: str, image_path: str):
    infer_payload = {
        "image": {
            "type": "base64",
            "value": encode_base64(image_path),
        },
        "prompt": prompt,
        "model_id": "paligemma-3b-mix-224",
    }
    response = requests.post(
        f"http://localhost:{PORT}/infer/lmm",
        json=infer_payload,
    )
    resp = response.json()
```
PR #436
Other changes
- Replaced sv.BoxAnnotator with sv.BoundingBoxAnnotator combined with sv.LabelAnnotator to be prepared for sv.BoxAnnotator deprecation by @grzegorz-roboflow in #434
- Add PaliGemma documentation, update table of contents by @capjamesg in #429
- Add http get support for legacy model inference by @PacificDou in #449
- Fix dead supported blocks link by @LinasKo in #448
- Docs: Remove banner saying Sv Keypoint annotators are experimental by @LinasKo in #450
🥇 New Contributors
- @jameslahm made their first contribution in #437
Full Changelog: v0.11.2...v0.12.0