Releases: roboflow/inference
v0.9.12rc1
Release candidate with a fix for YOLO-World pre-processing
v0.9.11
🚀 Added
YOLO-World in `inference`
Have you heard about the YOLO-World model? 🤔 If not, you would probably be interested to learn something about it! Our blog post 📰 may be a good starting point❗
The great news is that YOLO-World is already integrated with `inference`. The model can perform zero-shot detection of the classes specified in an inference parameter. Thanks to that, you can start making videos like this right away 🚀
_Video demo: yellow-filling-output-1280x720.mp4_
Simply install the dependencies:
```bash
pip install inference-sdk inference-cli
```
Start the server:
```bash
inference server start
```
And run inference against our HTTP server:
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
result = client.infer_from_yolo_world(
    inference_input=YOUR_IMAGE,
    class_names=["dog", "cat"],
)
```
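To inspect what came back, here is a minimal sketch assuming the response follows the usual `inference` object-detection schema (the `predictions`, `class`, and `confidence` field names are the common ones - verify against your own output):

```python
# The exact return shape is documented in the SDK docs; a single image
# typically yields a single response object, a batch yields a list.
predictions = result["predictions"] if isinstance(result, dict) else result[0]["predictions"]
for prediction in predictions:
    print(prediction["class"], prediction["confidence"])
```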
Active Learning 🤝 workflows
Active Learning data collection made simple with workflows
🔥 Now, with just a little bit of configuration, you can start collecting data to improve your model over time. Just take a look at how easy it is:
_Video demo: active_learning_in_workflows.mp4_
Key features:
- works with all models supported on the Roboflow platform, including the ones from Roboflow Universe - making it trivial to use an off-the-shelf model during the project kick-off stage to collect a dataset while serving meaningful predictions
- combines well with multiple `workflows` blocks - including `DetectionsConsensus` - making it possible to sample based on the predictions of a model ensemble 💥
- the Active Learning block may use the project-level Active Learning config or define the Active Learning strategy directly in the block definition (refer to the Active Learning documentation 📖 for details on how to configure data collection)
See the documentation 📖 of the new `ActiveLearningDataCollector` block for detailed info; a minimal sketch of such a block follows.
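As an illustration only - the field names and selector syntax below are assumptions based on the `workflows` docs of this era, so verify them against the current schema - an `ActiveLearningDataCollector` step wired to a detection model might look roughly like this:

```python
# Hypothetical fragment of a workflows definition (a Python dict standing in
# for the JSON config). "target_dataset" and the "$steps..." selectors are
# assumptions - check the Active Learning documentation for the exact fields.
ACTIVE_LEARNING_STEP = {
    "type": "ActiveLearningDataCollector",
    "name": "data_collection",
    "image": "$inputs.image",
    "predictions": "$steps.detector.predictions",
    "target_dataset": "my_project",  # dataset that collected images land in
}
```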
🌱 Changed
`InferencePipeline` now works with all models supported on the Roboflow platform 🎆
For a long time, `InferencePipeline` worked only with object-detection models. This is no longer the case - from now on, the other types of models supported on the Roboflow platform (including stubs, like `my-project/0`) work with `InferencePipeline`. No changes are required in existing code: just pass the `model_id` of your model and the pipeline should work. Sinks suited for detection-only models were adjusted to ignore non-compliant prediction formats and emit warnings about the incompatibility.
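For instance, a minimal sketch (the model ID and video source are placeholders, and the callback simply prints whatever the model emits):

```python
# InferencePipeline is exposed at the top level of the package (see v0.9.8).
from inference import InferencePipeline

def on_prediction(predictions, video_frame):
    # With non-detection models, predictions arrive in the model's own format.
    print(predictions)

pipeline = InferencePipeline.init(
    model_id="my-project/0",        # any model supported on the platform
    video_reference="video.mp4",    # file path, RTSP URL, or device id
    on_prediction=on_prediction,
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
)
pipeline.start()
pipeline.join()
```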
🔨 Fixed
- Bug in `yolact` model in #266
🏆 Contributors
@paulguerrie (Paul Guerrie), @probicheaux (Peter Robicheaux), @PawelPeczek-Roboflow (Paweł Pęczek)
Full Changelog: v0.9.10...v0.9.11
v0.9.10
🚀 Added
`inference` Benchmarking 🏃‍♂️
A new command has been added to `inference-cli` for benchmarking performance. Now you can test `inference` in different environments with different configurations and measure its performance. Look at us testing the speed and scalability of hosted inference on the Roboflow platform 🤯
_Video demo: scaling_of_hosted_roboflow_platform.mov_
Run your own benchmark with a simple command:
```bash
inference benchmark python-package-speed -m coco/3
```
See the docs for more details.
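If you prefer to measure things by hand against a running server, a rough manual timing loop using the SDK client shown earlier works too (the image path is a placeholder; this measures end-to-end HTTP latency, not pure model speed):

```python
import time

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")

N = 100
start = time.time()
for _ in range(N):
    client.infer("image.jpg", model_id="coco/3")  # placeholder local image
elapsed = time.time() - start
print(f"avg latency: {elapsed / N * 1000:.1f} ms")
```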
🌱 Changed
- Improved serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring
🔨 Fixed
- bug #260 causing `inference` API instability in multiple-workers setups and when shuffling a large number of models - from now on, the API container should not raise strange HTTP 5xx errors due to model management
- faulty logic for getting `request_id`, causing errors in the parallel-http container
🏆 Contributors
@paulguerrie (Paul Guerrie), @SolomonLake (Solomon Lake), @robiscoding (Rob Miller), @PawelPeczek-Roboflow (Paweł Pęczek)
Full Changelog: v0.9.9...v0.9.10
v0.9.10rc3
This is a pre-release version that mainly addresses some instabilities in the model manager.
What's Changed
- Add source to cache serializer by @SolomonLake in #242
- Parse request/response before caching by @robiscoding in #227
- Inference benchmarking by @PawelPeczek-Roboflow in #250
Full Changelog: v0.9.9...v0.9.10rc3
v0.9.9
🚀 Added
Roboflow `workflows` 🤖
A new way to create ML pipelines without writing code. Declare the sequence of models and intermediate processing steps using a JSON config and execute it using an `inference` container (or the hosted Roboflow platform). No Python code needed! 🤯 Just watch our feature preview:
_Video demo: workflows_feature_preview.mp4_
Want to experiment more?
```bash
pip install inference-cli
inference server start --dev
```
Hit http://127.0.0.1:9001 in your browser, then click the Jump Into an Inference Enabled Notebook → button and open the notebook named `workflows.ipynb`:
We encourage you to explore our documentation 📖 to reveal the full potential of Roboflow `workflows`.
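To give a flavour of the declarative format, here is a rough sketch only - the block types and selector syntax follow the workflows docs of this era and may have since evolved, so verify against the current documentation:

```python
# A workflows definition is plain JSON; shown here as a Python dict.
WORKFLOW_SPECIFICATION = {
    "specification": {
        "version": "1.0",
        "inputs": [{"type": "InferenceImage", "name": "image"}],
        "steps": [
            {
                "type": "ObjectDetectionModel",
                "name": "detector",
                "image": "$inputs.image",  # selector pointing at the input
                "model_id": "coco/3",
            }
        ],
        "outputs": [
            {
                "type": "JsonField",
                "name": "predictions",
                "selector": "$steps.detector.predictions",
            }
        ],
    }
}
```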
This feature is still under heavy development. Your feedback is needed to make it better!
Take `inference` to the cloud with one command 🚀
Yes, you got it right. The `inference-cli` package now provides a set of `inference cloud` commands to deploy the required infrastructure without effort.
Just:
```bash
pip install inference-cli
```
And, depending on your needs, use:
```bash
inference cloud deploy --provider aws --compute-type gpu
# or
inference cloud deploy --provider gcp --compute-type cpu
```
With the examples posted here, we are just scratching the surface - visit our docs 📖, where more examples are presented.
🔥 YOLO-NAS is coming!
- We plan to onboard YOLO-NAS to the Roboflow platform. In this release, we are introducing the foundational work to make that happen. Stay tuned!
`supervision` 🤝 `inference`
We've extended the capabilities of the `inference infer` command of the `inference-cli` package. It can now run inference against images, directories of images, and videos, visualise predictions using `supervision`, and save them in a location of your choice.
What does it take to get your predictions?
```bash
pip install inference-cli
# start the server
inference server start
# run inference
inference infer -i {PATH_TO_VIDEO} -m coco/3 -c bounding_boxes_tracing -o {OUTPUT_DIRECTORY} -D
```
There are plenty of configuration options that can alter the visualisation. You can use predefined configs (example: `-c bounding_boxes_tracing`) or create your own. See our docs 📖 to discover all options.
🌱 Changed
- ❗ `breaking`: Pydantic 2: `inference` now depends on `pydantic>=2`.
- ❗ `breaking`: Default values of parameters (like `confidence`, `iou_threshold`, etc.) that were set for newer parts of `inference` (including inference HTTP container endpoints) were aligned with the more reasonable defaults used by the hosted Roboflow platform. That makes the experience of using `inference` consistent with the Roboflow platform. This, however, will alter the behaviour of the package for clients that do not specify their own parameter values when making predictions. Summary: `confidence` now defaults to `0.4` and `iou_threshold` to `0.3`. We encourage clients using self-hosted containers to evaluate results on their end. Changes to be inspected here.
- API calls to HTTP endpoints with Roboflow models now accept a `disable_active_learning` flag that prevents Active Learning from being active for a specific request (see the sketch after this list)
- Documentation 📖 was refreshed. The redesign is meant to make the content easier to comprehend. We would love to get some feedback 🙏
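As a rough illustration of the new flag - the endpoint path and payload shape below are assumptions about the v1 HTTP API, so consult the API reference for the exact schema:

```python
# Hypothetical raw call to a self-hosted inference server; field names are
# assumptions - verify against the HTTP API docs before relying on them.
import requests

response = requests.post(
    "http://127.0.0.1:9001/infer/object_detection",
    json={
        "api_key": "YOUR_API_KEY",  # placeholder
        "model_id": "coco/3",
        "image": {"type": "url", "value": "https://example.com/image.jpg"},
        "disable_active_learning": True,  # skip AL data collection for this request
    },
)
print(response.json())
```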
🔨 Fixed
- ❗ `breaking`: Fixed issue #260, a bug introduced in version v0.9.3 causing classification models with 10 or more classes to assign the wrong `class` name to predictions (despite maintaining correct class ids) - clients relying on the `class` name instead of the `class_id` of predictions were affected.
- ❗ `breaking`: Typo `coglvm -> cogvlm` in the `inference-sdk` HTTP client method name `prompt_cogvlm(...)`
Full Changelog: v0.9.8...v0.9.9
Release candidate of v0.9.9
This is a draft release of `v0.9.9`.
v0.9.8
What's Changed
- Add changes that eliminate mistakes spotted while initial e2e tests by @PawelPeczek-Roboflow in #204
- Add ZoomInfo integration by @capjamesg in #205
- Added Kubernetes helm chart by @bigbitbus in #206
- Wrap lambda deployment with AL model manager by @PawelPeczek-Roboflow in #207
- Enable SSL on Redis connection based on env config (to enable AWS Lambda connectivity) by @PawelPeczek-Roboflow in #209
- Add Grounding DINO to Inference by @capjamesg in #107
- Extend inference SDK with client for (almost all) core models by @PawelPeczek-Roboflow in #212
- API Key Not Required by Methods by @paulguerrie in #211
- Expose InferencePipeline at the top level by @yeldarby in #210
- Built In Jupyter Notebook by @paulguerrie in #213
- Fix problem with keyless access and Active Learning by @PawelPeczek-Roboflow in #214
Highlights
Grounding DINO
Support for a new core model, Grounding DINO, has been added. Grounding DINO is a zero-shot object detection model that you can use to identify objects in images and videos using arbitrary text prompts.
Inference SDK For Core Models
You can now use the Inference SDK with core models (like CLIP). No more complicated request and payload formatting. See the docs here.
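For example, comparing an image against text prompts with CLIP through the SDK might look like this - a sketch only, as `clip_compare` and its parameters are taken from the SDK of this era; check the SDK docs for the exact signature:

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001",
    api_key="YOUR_API_KEY",  # placeholder
)
# Compare a subject image against candidate text prompts with CLIP.
result = client.clip_compare(
    subject="image.jpg",      # placeholder local image path
    prompt=["a dog", "a cat"],
)
print(result)
```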
Built In Jupyter Notebook
Roboflow Inference Server containers now include a built-in Jupyter notebook for development and testing. This notebook can be accessed via the inference server landing page. To use it, go to localhost:9001 in your browser after starting an inference server, then select "Jump Into An Inference Enabled Notebook". This will open a new tab with a JupyterLab session, preloaded with example notebooks and all of the `inference` dependencies.
New Contributors
- @bigbitbus made their first contribution in #206
Full Changelog: v0.9.7...v0.9.8
v0.9.7
What's Changed
- Bump cuda version for parallel by @probicheaux in #191
- Add stream management HTTP api by @PawelPeczek-Roboflow in #180
- Peter/fix orjson by @probicheaux in #192
- Introduce model aliases by @PawelPeczek-Roboflow in #193
- Fix problem with device request not being list but tuple by @PawelPeczek-Roboflow in #197
- Add inference server stop command by @PawelPeczek-Roboflow in #194
- Inference server start takes env file by @PawelPeczek-Roboflow in #195
- Add pull image progress display by @PawelPeczek-Roboflow in #198
- Improve Inference documentation by @capjamesg in #183
- Catch CLI Error When Docker Is Not Running by @paulguerrie in #203
- Introduce unified batching by @PawelPeczek-Roboflow in #199
- Change the default value for 'only_top_classes' option of close-to-threshold sampling strategy of AL by @PawelPeczek-Roboflow in #200
- updated API_KEY to ROBOFLOW_API_KEY for clarity by @josephofiowa in #202
Highlights
Stream Management API (Enterprise)
The Stream Management API is designed for users who need to run inference and generate predictions using Roboflow object-detection models, particularly when dealing with online video streams. It enhances the functionality of the familiar `inference.Stream()` and `InferencePipeline()` interfaces, as found in the open-source version of the library, by introducing a sophisticated management layer. These additional capabilities empower users to remotely manage the state of inference pipelines through the HTTP management interface integrated into this package. More info.
Model Aliases
Some common public models now have convenient aliases! With this release, the COCO base weights for YOLOv8 models can be accessed with user-friendly model IDs like `yolov8n-640`. See all available model aliases here.
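For instance, using the SDK client shown earlier in these notes, an alias drops in anywhere a model ID is expected (the image path is a placeholder):

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
# "yolov8n-640" resolves to the COCO-pretrained YOLOv8n weights at 640px input.
result = client.infer("image.jpg", model_id="yolov8n-640")
```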
Other Improvements
- Improved inference CLI commands
- Unified batching APIs so that all model types can accept batch requests
- Speed improvements for HTTP interface
New Contributors
- @josephofiowa made their first contribution in #202
Full Changelog: v0.9.6...v0.9.7
v0.9.7rc2 - Test release for a fix to the CLI run problem
v0.9.7rc2: Fix Makefile so that ONNX Runtime is installed
v0.9.7rc1 - Test release for a fix to the CLI run problem
v0.9.7rc1: Fix problem with device request not being a list