
Releases: ray-project/ray

Ray-2.6.3

15 Aug 16:40
8a434b4

The Ray 2.6.3 patch release contains fixes for Ray Serve and Ray Core streaming generators.

Ray Core

🔨 Fixes:

  • [Core][Streaming Generator] Fix memory leak from the end of object stream object #38152 (#38206)
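
For context, a minimal sketch (not part of the release notes) of the streaming generator API that this fix concerns, assuming the Ray 2.6-era experimental flag num_returns="streaming"; the exact spelling of this API has evolved across releases.

```python
import ray

# Assumes the Ray 2.6-era experimental flag; later releases changed how
# generator tasks are declared.
@ray.remote(num_returns="streaming")
def stream_chunks(n: int):
    for i in range(n):
        yield i  # each yielded value becomes its own object in the stream


gen = stream_chunks.remote(5)   # a generator of ObjectRefs, produced incrementally
for ref in gen:
    print(ray.get(ref))
```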

Ray Serve

🔨 Fixes:

  • [Serve] Fix serve run help message (#37859) (#38018)
  • [Serve] Decrement ray_serve_deployment_queued_queries when client disconnects (#37965) (#38020)

RLlib

📖 Documentation:

Ray-2.6.2

03 Aug 23:54
f79203d

The Ray 2.6.2 patch release contains a critical fix for Ray's logging setup, as well as fixes for Ray Serve, Ray Data, and Ray Jobs.

Ray Core

🔨 Fixes:

  • [Core] Pass logs through if sphinx-doctest is running (#36306) (#37879)
  • [cluster-launcher] Pick GCP cluster launcher tests and fix (#37797)

Ray Serve

🔨 Fixes:

  • [Serve] Apply request_timeout_s from Serve config to the cluster (#37884) (#37903)

Ray AIR

🔨 Fixes:

Ray-2.6.1

24 Jul 05:07
d68bf04

The Ray 2.6.1 patch release contains a critical fix for the cluster launcher and a compatibility update for the Ray Serve protobuf definition with Python 3.11, as well as documentation improvements.

⚠️ The cluster launcher in Ray 2.6.0 fails to start multi-node clusters. Please update to 2.6.1 if you plan to use the cluster launcher.

Ray Core

🔨 Fixes:

  • [core][autoscaler] Fix env variable overwrite not able to be used if the command itself uses the env #37675

Ray Serve

🔨 Fixes:

  • [serve] Cherry-pick Serve enum to_proto fixes for Python 3.11 #37660

Ray AIR

📖Documentation:

  • [air][doc] Update docs to reflect head node syncing deprecation #37475

Ray-2.6.0

21 Jul 02:53
0db82e3

Release Highlights

  • Serve: Better streaming support -- In this release, support for HTTP streaming responses and WebSockets is now on by default. Also, @serve.batch-decorated methods can stream responses (see the sketch after this list).
  • Train and Tune: Users are now expected to provide a cloud storage or NFS path for distributed training or tuning jobs instead of a local path. This means that results written to different worker machines will not be directly synced to the head node. Instead, this will raise an error telling you to switch to one of the recommended alternatives: cloud storage or NFS. Please see #37177 if you have questions.
  • Data: We are introducing a new streaming integration of Ray Data and Ray Train. This allows streaming data ingestion for model training, and enables per-epoch data preprocessing. The DatasetPipeline API is also being deprecated in favor of Dataset with streaming execution.
  • RLlib: Public alpha release of the new multi-GPU Learner API, which is less complex and more powerful than our previous solution (blogpost). It is used by the PPO algorithm by default.
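
The Serve highlight above can be illustrated with a minimal sketch (not part of the release notes) of an HTTP streaming response; the deployment name and chunk contents are hypothetical.

```python
from ray import serve
from starlette.requests import Request
from starlette.responses import StreamingResponse


@serve.deployment
class Streamer:
    def _chunks(self):
        # Hypothetical generator producing the response body piece by piece.
        for i in range(5):
            yield f"chunk-{i}\n"

    def __call__(self, request: Request) -> StreamingResponse:
        # With streaming on by default in 2.6, chunks are flushed to the client
        # as they are produced instead of being buffered into one response.
        return StreamingResponse(self._chunks(), media_type="text/plain")


app = Streamer.bind()
# serve.run(app)  # then: curl --no-buffer http://127.0.0.1:8000/
```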

Ray Libraries

Ray AIR

🎉 New Features:

  • Added support for restoring Results from local trial directories. (#35406)

💫 Enhancements:

🔨 Fixes:

  • Pass on KMS-related kwargs for s3fs (#35938)
  • Fix infinite recursion in log redirection (#36644)
  • Remove temporary checkpoint directories after restore (#37173)
  • Removed actors that haven't been started shouldn't be tracked (#36020)
  • Fix bug in execution for actor re-use (#36951)
  • Cancel pg.ready() task for pending trials that end up reusing an actor (#35748)
  • Add case for Dict[str, np.array] batches in DummyTrainer read bytes calculation (#36484)

📖 Documentation:

  • Remove experimental features page, add github issue instead (#36950)
  • Fix batch format in dreambooth example (#37102)
  • Fix Checkpoint.from_checkpoint docstring (#35793)

🏗 Architecture refactoring:

  • Remove deprecated mlflow and wandb integrations (#36860, #36899)
  • Move constants from tune/results.py to air/constants.py (#35404)
  • Clean up a few checkpoint related things. (#35321)

Ray Data

🎉 New Features:

  • New streaming integration of Ray Data and Ray Train. This allows streaming data ingestion for model training, and enables per-epoch data preprocessing. (#35236)
  • Enable execution optimizer by default (#36294, #35648, #35621, #35952)
  • Deprecate DatasetPipeline (#35753)
  • Add Dataset.unique() (#36655, #36802)
  • Add option for parallelizing post-collation data batch operations in DataIterator.iter_batches() (#36842) (#37260)
  • Enforce strict mode batch format for DataIterator.iter_batches() (#36686) (see the sketch after this list)
  • Remove ray.data.range_arrow() (#35756)
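
A minimal sketch (not part of the release notes) of DataIterator.iter_batches() under the enforced strict-mode batch format; the dataset and batch size are arbitrary.

```python
import ray

ds = ray.data.range(1_000)   # under strict mode this has a single "id" column
it = ds.iterator()           # a DataIterator; streaming_split() returns these too

for batch in it.iter_batches(batch_size=128, batch_format="numpy"):
    # Each batch is a Dict[str, np.ndarray], e.g. {"id": array([...])}.
    _ = batch["id"].sum()
```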

💫 Enhancements:

  • Optimize block prefetching (#35568)
  • Enable isort for data directory (#35836)
  • Skip writing a file for an empty block in Dataset.write_datasource() (#36134)
  • Remove shutdown logging from StreamingExecutor (#36408)
  • Spread map task stages by default for arg size <50MB (#36290)
  • Read->SplitBlocks to ensure requested read parallelism is always met (#36352)
  • Support partial execution in Dataset.schema() with new execution plan optimizer (#36740)
  • Propagate iter stats for Dataset.streaming_split() (#36908)
  • Cache the computed schema to avoid re-executing (#37103)

🔨 Fixes:

  • Support sub-progress bars on AllToAllOperators with optimizer enabled (#34997)
  • Fix DataContext not propagated properly for Dataset.streaming_split() operator
  • Fix edge case in empty bundles with Dataset.streaming_split() (#36039)
  • Apply Arrow table indices mapping on HuggingFace Dataset prior to reading into Ray Data (#36141)
  • Fix issues with combining use of Dataset.materialize() and Dataset.streaming_split() (#36092)
  • Fix quadratic slowdown when locally shuffling tensor extension types (#36102)
  • Make sure progress bars always finish at 100% (#36679)
  • Fix wrong output order of Dataset.streaming_split() (#36919)
  • Fix the issue that StreamingExecutor is not shutdown when the iterator is not fully consumed (#36933)
  • Calculate stage execution time in StageStatsSummary from BlockMetadata (#37119)

📖 Documentation:

  • Standardize Data API ref (#36432, #36937)
  • Docs for working with PyTorch (#36880)
  • Split "Consuming data" guide (#36121)
  • Revise "Loading data" (#36144)
  • Consolidate Data user guides (#36439)

🏗 Architecture refactoring:

  • Remove simple blocks representation (#36477)

Ray Train

🎉 New Features:

  • LightningTrainer support DeepSpeedStrategy (#36165)

💫 Enhancements:

  • Unify Lightning and AIR CheckpointConfig (#36368)
  • Add support for custom pipeline class in TransformersPredictor (#36494)

🔨 Fixes:

  • Fix Deepspeed device ranks check in Lightning 2.0.5 (#37387)
  • Clear stale lazy checkpointing markers on all workers. (#36291)

📖 Documentation:

  • Migrate Ray Train code-block to testcode. (#36483)

🏗 Architecture refactoring:

Ray Tune

🔨 Fixes:

  • Optuna: Update distributions to use new APIs (#36704)
  • BOHB: Fix nested bracket processing (#36568)
  • Hyperband: Fix scheduler raising an error for good PENDING trials (#35338)
  • Fix param space placeholder injection for numpy/pandas objects (#35763)
  • Fix result restoration with Ray Client (#35742)
  • Fix trial runner/controller whitelist attributes (#35769)

📖 Documentation:

  • Remove missing example from Tune "Other examples" (#36691)

🏗 Architecture refactoring:

  • Remove tune/automl (#35557)
  • Remove hard-deprecated modules from structure refactor (#36984)
  • Remove deprecated mlflow and wandb integrations (#36860, #36899)
  • Move constants from tune/results.py to air/constants.py (#35404)
  • Deprecate redundant syncing related parameters (#36900)
  • Deprecate legacy modules in ray.tune.integration (#35160)

Ray Serve

💫 Enhancements:

  • Support for HTTP streaming response and WebSockets is now on by default.
  • @serve.batch-decorated methods can stream responses.
  • @serve.batch settings can be reconfigured dynamically.
  • Ray Serve now uses “power of two random choices” routing. This improves enforcement of max_concurrent_queries and tail latencies under load.
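
A minimal sketch (not part of the release notes) combining a @serve.batch-decorated method with the max_concurrent_queries option that the new routing enforces; the model logic is hypothetical and the option name follows the 2.6-era API.

```python
from typing import List

from ray import serve
from starlette.requests import Request


@serve.deployment(max_concurrent_queries=16)  # enforced by power-of-two-choices routing
class BatchedModel:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def predict(self, inputs: List[str]) -> List[str]:
        # serve.batch collects individual calls into one list and expects a
        # result list of the same length and order.
        return [text.upper() for text in inputs]

    async def __call__(self, request: Request) -> str:
        text = (await request.body()).decode()
        return await self.predict(text)  # called per request; batching is internal


app = BatchedModel.bind()
```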

🔨 Fixes:

  • Fixed a bug that previously prevented using a custom module named “utils”.
  • Fixed a Serve downscaling issue by adding a new draining state to the HTTP proxy. HTTP proxies now stop accepting new requests when there are no replicas on the node, and ongoing requests are no longer interrupted when the node is downscaled. This also allows downscaling to proceed when requests use Ray’s object store, which previously blocked node downscaling.
  • Fixed non-atomic shutdown logic. Serve shutdown now runs in the background, does not require the client to wait for it to complete, and is not interrupted if the client is force-killed.

RLlib

🎉 New Features:

  • Public alpha release of the new multi-GPU Learner API, which is less complex and more powerful than the old training stack (blogpost). It is used by the PPO algorithm by default.
  • Added RNN support on the new RLModule API
  • Added TF-version of DreamerV3 (link). The comprehensive results will be published soon.
  • Added support for torch 2.x's compile method when sampling from the environment

💫 Enhancements:

  • Added an example of how to do pretraining with BC and then continue fine-tuning with PPO (example)
  • RLlib deprecation Notices (algorithm/, evaluation/, execution/, models/jax/) (#36826)
  • Enable eager_tracing=True by default. (#36556)

🔨 Fixes:

  • Fix bug in Multi-Categorical distribution. It should use logp and not log_p. (#36814)
  • Fix LSTM + Connector bug: StateBuffer restarting states on every in_eval() call. (#36774)

🏗 Architecture refactoring:

  • Multi-GPU Learner API

Ray Core

🎉 New Features:

  • [Core][Streaming Generator] Cpp interfaces and implementation (#35291)
  • [Core][Streaming Generator] Streaming Generator. Support Core worker APIs + cython generator interface. (#35324)
  • [Core][Streaming Generator] Streaming Generator. E2e integration (#35325)
  • [Core][Streaming Generator] Support async actor and async generator interface. (#35584)
  • [Core][Streaming Generator] Streaming Generator. Support the basic retry/lineage reconstruction (#35768)
  • [Core][Streaming Generator] Allow to raise an exception to avoid check failures. (#35766)
  • [Core][Streaming Generator] Fix a reference leak when a stream is deleted with out of order writes. (#35591)
  • [Core][Streaming Generator] Fix a reference leak when pinning requests are received after refs are consumed. (#35712)
  • [Core][Streaming Generator] Handle out of order report when retry (#36069)
  • [Core][Streaming Generator] Make it compatible with wait (#36071)
  • [Core][Streaming Generator] Remove busy waiting (#36070)
  • [Core][Autoscaler v2] add test for ...

Ray-2.5.1

21 Jun 18:09
a03efd9

The Ray 2.5.1 patch release adds wheels for macOS for Python 3.11.
It also contains fixes for multiple components, along with fixes for our documentation.

Ray Train

🔨 Fixes:

  • Don't error on eventual success when running with auto-recovery (#36266)

Ray Core

🎉 New Features:

  • Build Python wheels on Mac OS for Python 3.11 (#36373)

🔨 Fixes:

  • [Autoscaler] Fix a bug that can cause undefined behavior when clusters attempt to scale up aggressively. (#36241)
  • Fix mypy error: Module "ray" does not explicitly export attribute "remote" (#36356)

Ray-2.5.0

08 Jun 16:59
586c376

The Ray 2.5 release focuses on a number of enhancements and improvements across the Ray ecosystem, including:

  • Training LLMs with Ray Train: New support for checkpointing distributed models, and PyTorch Lightning FSDP to enable training large models on Ray Train’s LightningTrainer
  • LLM applications with Ray Serve & Core: New support for streaming responses and model multiplexing
  • Improvements to Ray Data: In 2.5, strict mode is enabled by default. This means that schemas are required for all Datasets, and standalone Python objects are no longer supported. Also, the default batch format is fixed to NumPy, giving better performance for batch inference (see the sketch after this list).
  • RLlib enhancements: New support for multi-GPU training, along with ray-project/rllib-contrib to contain the community-contributed algorithms
  • Core enhancements: Enabled the new lightweight resource broadcasting feature to improve reliability and scalability, and added many enhancements for Core reliability, logging, the scheduler, and worker processes.
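
A minimal sketch (not part of the release notes) of the strict-mode defaults called out above: every Dataset has a schema, and map_batches() receives dictionaries of NumPy arrays by default.

```python
import ray

ds = ray.data.range(8)  # strict mode: schema with a single "id" column


def double(batch: dict) -> dict:
    # The default batch format is NumPy, so batch is a Dict[str, np.ndarray].
    return {"id": batch["id"] * 2}


print(ds.map_batches(double).take(4))
```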

Ray Libraries

Ray AIR

💫Enhancements:

  • Experiment restore stress tests (#33706)
  • Context-aware output engine
    • Add parameter columns to status table (#35388)
    • Context-aware output engine: Add docs, experimental feature docs, prepare default on (#35129)
    • Fix trial status at end (more info + cut off) (#35128)
    • Improve leaked mentions of Tune concepts (#35003)
    • Improve passed time display (#34951)
    • Use flat metrics in results report, use Trainable._progress_metrics (#35035)
    • Print experiment information at experiment start (#34952)
    • Print single trial config + results as table (#34788)
    • Print out worker ip for distributed train workers. (#33807)
    • Minor fix to print configuration on start. (#34575)
    • Check air_verbosity against None. (#33871)
    • Better wording for empty config. (#33811)
  • Flatten config and metrics before passing to mlflow (#35074)
  • Remote_storage: Prefer fsspec filesystems over native pyarrow (#34663)
  • Use filesystem wrapper to exclude files from upload (#34102)
  • GCE test variants for air_benchmark and air_examples (#34466)
  • New storage path configuration
    • Add RunConfig.storage_path to replace SyncConfig.upload_dir and RunConfig.local_dir (see the sketch after this list). (#33463)
    • Use Ray storage URI as default storage path, if configured [no_early_kickoff] (#34470)
    • Move to new storage_path API in tests and examples (#34263)
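
A minimal sketch (not part of the release notes) of the new storage_path configuration mentioned above; the experiment name and bucket are hypothetical.

```python
from ray import tune
from ray.air import RunConfig, session


def objective(config):
    session.report({"score": config["x"] ** 2})


tuner = tune.Tuner(
    objective,
    param_space={"x": tune.grid_search([1, 2, 3])},
    run_config=RunConfig(
        name="my_experiment",
        # Replaces SyncConfig.upload_dir / RunConfig.local_dir; an NFS path works too.
        storage_path="s3://my-bucket/ray-results",
    ),
)
# tuner.fit()
```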

🔨 Fixes:

  • Store unflattened metrics in _TrackedCheckpoint (#35658) (#35706)
  • Fix test_tune_torch_get_device_gpu race condition (#35004)
  • Deflake test_e2e_train_flow.py (#34308)
  • Pin deepspeed version for now to unblock ci. (#34406)
  • Fix AIR benchmark configuration link failure. (#34597)
  • Fix unused config building function in lightning MNIST example.

📖Documentation:

  • Change doc occurrences of ray.data.Dataset to ray.data.Datastream (#34520)
  • DreamBooth example: Fix code for batch size > 1 (#34398)
  • Synced tabs in AIR getting started (#35170)
  • New Ray AIR link for try it out (#34924)
  • Correctly Render the Enumerate Numbers in convert_torch_code_to_ray_air (#35224)

Ray Data Processing

🎉 New Features:

  • Implement Strict Mode and enable it by default.
  • Add column API to Dataset (#35241)
  • Configure progress bars via DataContext (#34638)
  • Support using concurrent actors for ActorPool (#34253)
  • Add take_batch API for collecting data in the same format as iter_batches and map_batches (#34217)

💫Enhancements:

  • Improve map batches error message for strict mode migration (#35368)
  • Improve docstring and warning message for from_huggingface (#35206)
  • Improve notebook widget display (#34359)
  • Implement some operator fusion logic for the new backend (#35178 #34847)
  • Use wait based prefetcher by default (#34871)
  • Implement limit physical operator (#34705 #34844)
  • Require compute spec to be explicitly spelled out #34610
  • Log a warning if the batch size is misconfigured in a way that would grossly reduce parallelism for actor pool. (#34594)
  • Add alias parameters to the aggregate function, and add quantile fn (#34358)
  • Improve repr for Arrow Table and pandas types (#34286 #34502)
  • Defer first block computation when reading a Datasource with schema information in metadata (#34251)
  • Improve handling of KeyboardInterrupt (#34441)
  • Validate aggregation key in Aggregate LogicalOperator (#34292)
  • Add usage tag for which block formats are used (#34384)
  • Validate sort key in Sort LogicalOperator (#34282)
  • Combine_chunks before chunking pyarrow.Table block into batches (#34352)
  • Use read stage name for naming Data-read tasks on Ray Dashboard (#34341)
  • Update path expansion warning (#34221)
  • Improve state initialization for ActorPoolMapOperator (#34037)

🔨 Fixes:

  • Fix ipython representation (#35414)
  • Fix bugs in handling of nested ndarrays (and other complex object types) (#35359)
  • Capture the context when the dataset is first created (#35239)
  • Cooperatively exit producer threads for iter_batches (#34819)
  • Autoshutdown executor threads when deleted (#34811)
  • Fix backpressure when reading directly from input datasource (#34809)
  • Fix backpressure handling of queued actor pool tasks (#34254)
  • Fix row count after applying filter (#34372)
  • Remove unnecessary setting of global logging level to INFO when using Ray Data (#34347)
  • Make sure the tf and tensor iteration work in dataset pipeline (#34248)
  • Fix '_unwrap_protocol' for Windows systems (#31296)

📖Documentation:

  • Add batch inference object detection example (#35143)
  • Refine batch inference doc (#35041)

Ray Train

🎉 New Features:

  • Experimental support for distributed checkpointing (#34709)

💫Enhancements:

  • LightningTrainer: Enable prog bar (#35350)
  • LightningTrainer enable checkpoint full dict with FSDP strategy (#34967)
  • Support FSDP Strategy for LightningTrainer (#34148)

🔨 Fixes:

  • Fix HuggingFace -> Transformers wrapping logic (#35276, #35284)
  • LightningTrainer always resumes from the latest AIR checkpoint during restoration. (#35617) (#35791)
  • Fix lightning trainer devices setting (#34419)
  • TorchCheckpoint: Specifying pickle_protocol in torch.save() (#35615) (#35790)

📖Documentation:

  • Improve visibility of Trainer restore and stateful callback restoration (#34350)
  • Fix rendering of diff code-blocks (#34355)
  • LightningTrainer Dolly V2 FSDP Fine-tuning Example (#34990)
  • Update LightningTrainer MNIST example. (#34867)
  • LightningTrainer Advanced Example (#34082, #34429)

🏗 Architecture refactoring:

  • Restructure ray.train HuggingFace modules (#35270) (#35488)
  • rename _base_dataset to _base_datastream (#34423)

Ray Tune

🎉 New Features:

  • Ray Tune's new execution path is now enabled by default (#34840, #34833)

💫Enhancements:

  • Make Tuner.restore(trainable=...) a required argument (#34982) (see the sketch after this list)
  • Enable tune.ExperimentAnalysis to pull experiment checkpoint files from the cloud if needed (#34461)
  • Add support for nested hyperparams in PB2 (#31502)
  • Release test for durable multifile checkpoints (#34860)
  • GCE variants for remaining Tune tests (#34572)
  • Add tune frequent pausing release test. (#34501)
  • Add PyArrow to ray[tune] dependencies (#34397)
  • Fix new execution backend for BOHB (#34828)
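
A minimal sketch (not part of the release notes) of the now-required trainable argument to Tuner.restore(); the experiment path and trainable are hypothetical.

```python
from ray import tune
from ray.air import session


def objective(config):  # must be the same trainable used in the original run
    session.report({"score": config["x"] ** 2})


tuner = tune.Tuner.restore(
    "~/ray_results/my_experiment",  # hypothetical path of a previous run
    trainable=objective,
)
# results = tuner.get_results()
```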

🔨 Fixes:

  • Set config on trial restore (#35000)
  • Fix test_tune_torch_get_device_gpu race condition (#35004)
  • Fix a typo in tune/execution/checkpoint_manager state serialization. (#34368)
  • Fix tune_scalability_network_overhead by adding --smoke-test. (#34167)
  • Fix lightning_gpu_tune_.* release test (#35193)

📖Documentation:

  • Fix Tune tutorial (#34660)
  • Fix typo in Tune restore guide (#34247)

🏗 Architecture refactoring:

  • Use Ray-provided tabulate package (#34789)

Ray Serve

🎉 New Features:

  • Add support for JSON logging format. (#35118)
  • Add experimental support for model multiplexing (see the sketch after this list). (#35399, #35326)
  • Added experimental support for HTTP StreamingResponses. (#35720)
  • Add support for application builders & arguments (#34584)
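
A minimal sketch (not part of the release notes) of the experimental model multiplexing API, assuming the serve.multiplexed / serve.get_multiplexed_model_id helpers; the model loading logic is hypothetical, and the experimental interface may have changed since 2.5.

```python
from ray import serve


@serve.deployment
class MultiModel:
    @serve.multiplexed(max_num_models_per_replica=3)
    async def get_model(self, model_id: str):
        # Hypothetical loader: fetch and cache the model identified by model_id.
        return f"loaded:{model_id}"

    async def __call__(self, request) -> str:
        # The target model ID is taken from the request (e.g. a multiplexing header).
        model_id = serve.get_multiplexed_model_id()
        model = await self.get_model(model_id)
        return model


app = MultiModel.bind()
```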

💫Enhancements:

  • Add more bucket sizes for histogram metrics. (#35242)
  • Add route information into the custom metrics. (#35246)
  • Add HTTPProxy details to Serve Dashboard UI (#35159)
  • Add status_code to http qps & latency (#35134)
  • Stream Serve logs across different drivers (#35070)
  • Add health checking for http proxy actors (#34944)
  • Better surfacing of errors in serve status (#34773)
  • Enable TLS on gRPCIngress if RAY_USE_TLS is on (#34403)
  • Wait until replicas have finished recovering (with timeout) to broadcast LongPoll updates (#34675)
  • Replace ClassNode and FunctionNode with Application in top-level Serve APIs (#34627)

🔨 Fixes:

  • Set app_msg to empty string by default (#35646)
  • Fix dead replica counts in the stats. (#34761)
  • Add default app name (#34260)
  • gRPC Deployment schema check & minor improvements (#34210)

📖Documentation:

  • Clean up API reference and various docstrings (#34711)
  • Clean up RayServeHandle and RayServeSyncHandle docstrings & typing (#34714)

RLlib

🎉 New Features:

  • Migrating approximately 25 of the 30 algorithms from RLlib into rllib_contrib. You can review the REP here. In this release we have covered A3C and MAML.
  • APPO, IMPALA, and PPO have all been moved to the new Learner and RLModule stack.
  • RLModule now supports checkpointing. (#34717, #34760)

💫Enhancements:

  • Intro...

Ray-2.3.1

27 Mar 17:25
5b99dd9

The Ray 2.3.1 patch release contains fixes for multiple components:

Ray Data Processing

  • Support different number of blocks/rows per block in zip() (#32795)

Ray Serve

  • Revert serve run to use Ray Client instead of Ray Jobs (#32976)
  • Fix issue with max_concurrent_queries being ignored when autoscaling (#32772 and #33022)

Ray Core

  • Write Ray address even if Ray node is started with --block (#32961)
  • Fix Ray on Spark running on layered virtualenv python environment (#32996)

Dashboard

  • Fix disk metric showing double the actual value (#32674)

Ray-2.3.0

24 Feb 15:34
3aa6ede

Release Highlights

  • The streaming backend for Ray Datasets is in Developer Preview. It is designed to enable terabyte-scale ML inference and training workloads. Please contact us if you'd like to try it out on your workload, or you can find the preview guide here: https://docs.google.com/document/d/1BXd1cGexDnqHAIVoxTnV3BV0sklO9UXqPwSdHukExhY/edit
  • New Information Architecture (Beta): We’ve restructured the Ray dashboard to be organized around user personas and workflows instead of entities.
  • Ray-on-Spark is now available (Preview)! You can launch Ray clusters on Databricks and Spark clusters and run Ray applications. Check out the documentation to learn more.
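
A minimal sketch (not part of the release notes) of launching Ray on an existing Spark cluster; the worker count is arbitrary and the parameter name follows the 2.3-era ray.util.spark API.

```python
import ray
from ray.util.spark import setup_ray_cluster, shutdown_ray_cluster

# Starts a Ray head and workers on Spark executors; the argument name here
# follows the 2.3-era API and may differ in newer releases.
setup_ray_cluster(num_worker_nodes=2)
ray.init()  # connects to the cluster set up above

# ... run Ray applications ...

shutdown_ray_cluster()
```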

Ray Libraries

Ray AIR

💫Enhancements:

  • Add set_preprocessor method to Checkpoint (#31721)
  • Rename Keras callback and its parameters to be more descriptive (#31627)
  • Deprecate MlflowTrainableMixin in favor of setup_mlflow() function (#31295)
  • W&B
    • Have train_loop_config logged as a config (#31901)
    • Allow users to exclude config values with WandbLoggerCallback (#31624)
    • Rename WandB save_checkpoints to upload_checkpoints (#31582)
    • Add hook to get project/group for W&B integration (#31035, #31643)
    • Use Ray actors instead of multiprocessing for WandbLoggerCallback (#30847)
    • Update WandbLoggerCallback example (#31625)
  • Predictor
    • Place predictor kwargs in object store (#30932)
    • Delegate BatchPredictor stage fusion to Datasets (#31585)
    • Rename DLPredictor.call_model tensor parameter to inputs (#30574)
    • Add use_gpu to HuggingFacePredictor (#30945)
  • Checkpoints
    • Various Checkpoint improvements (#30948)
    • Implement lazy checkpointing for same-node case (#29824)
    • Automatically strip "module." from state dict (#30705)
    • Allow user to pass model to TensorflowCheckpoint.get_model (#31203)

🔨 Fixes:

  • Fix and improve support for HDFS remote storage. (#31940)
  • Use specified Preprocessor configs when using stream API. (#31725)
  • Support nested Chain in BatchPredictor (#31407)

📖Documentation:

🏗 Architecture refactoring:

  • Use NodeAffinitySchedulingPolicy for scheduling (#32016)
  • Internal resource management refactor (#30777, #30016)

Ray Data Processing

🎉 New Features:

  • Lazy execution by default (#31286)
  • Introduce streaming execution backend (#31579)
  • Introduce DatasetIterator (#31470)
  • Add per-epoch preprocessor (#31739)
  • Add TorchVisionPreprocessor (#30578)
  • Persist Dataset statistics automatically to log file (#30557)

💫Enhancements:

  • Async batch fetching for map_batches (#31576)
  • Add informative progress bar names to map_batches (#31526)
  • Provide a size bytes estimate for mongodb block (#31930)
  • Add support for dynamic block splitting to actor pool (#31715)
  • Improve str/repr of Dataset to include execution plan (#31604)
  • Deal with nested Chain in BatchPredictor (#31407)
  • Allow MultiHotEncoder to encode arrays (#31365)
  • Allow specify batch_size when reading Parquet file (#31165)
  • Add zero-copy batch API for ds.map_batches() (#30000)
  • Text dataset should save texts in ArrowTable format (#30963)
  • Return ndarray dicts for single-column tabular datasets (#30448)
  • Execute randomize_block_order eagerly if it's the last stage for ds.schema() (#30804)

🔨 Fixes:

  • Don't drop first dataset when peeking DatasetPipeline (#31513)
  • Handle np.array(dtype=object) constructor for ragged ndarrays (#31670)
  • Emit warning when starting Dataset execution with no CPU resources available (#31574)
  • Fix the bug of eagerly clearing up input blocks (#31459)
  • Fix Imputer failing with categorical dtype (#31435)
  • Fix schema unification for Datasets with ragged Arrow arrays (#31076)
  • Fix Discretizers transforming ignored cols (#31404)
  • Fix to_tf when the input feature_columns is a list. (#31228)
  • Raise error message if user calls Dataset.iter (#30575)

📖Documentation:

  • Refactor Ray Data API documentation (#31204)
  • Add seealso to map-related methods (#30579)

Ray Train

🎉 New Features:

  • Add option for per-epoch preprocessor (#31739)

💫Enhancements:

  • Change default NCCL_SOCKET_IFNAME to blacklist veth (#31824)
  • Introduce DatasetIterator for bulk and streaming ingest (#31470)
  • Clarify which RunConfig is used when there are multiple places to specify it (#31959)
  • Change ScalingConfig to be optional for DataParallelTrainers if already in Tuner param_space (#30920)

🔨 Fixes:

  • Use specified Preprocessor configs when using stream API. (#31725)
  • Fix off-by-one AIR Trainer checkpoint ID indexing on restore (#31423)
  • Force GBDTTrainer to use distributed loading for Ray Datasets (#31079)
  • Fix bad case in ScalingConfig->RayParams (#30977)
  • Don't raise TuneError on fail_fast="raise" (#30817)
  • Report only once in SklearnTrainer (#30593)
  • Ensure GBDT PGFs match passed ScalingConfig (#30470)

📖Documentation:

🏗 Architecture refactoring:

Ray Tune

💫Enhancements:

  • Improve trainable serialization error (#31070)
  • Add support for Nevergrad optimizer with extra parameters (#31015)
  • Add timeout for experiment checkpoint syncing to cloud (#30855)
  • Move validate_upload_dir to Syncer (#30869)
  • Enable experiment restore from moved cloud uri (#31669)
  • Save and restore stateful callbacks as part of experiment checkpoint (#31957)

🔨 Fixes:

  • Do not default to reuse_actors=True when mixins are used (#31999)
  • Only keep cached actors if search has not ended (#31974)
  • Fix best trial in ProgressReporter with nan (#31276)
  • Make ResultGrid return cloud checkpoints (#31437)
  • Wait for final experiment checkpoint sync to finish (#31131)
  • Fix CheckpointConfig validation for function trainables (#31255)
  • Fix checkpoint directory assignment for new checkpoints created after restoring a function trainable (#31231)
  • Fix AxSearch save and nan/inf result handling (#31147)
  • Fix AxSearch search space conversion for fixed list hyperparameters (#31088)
  • Restore searcher and scheduler properly on Tuner.restore (#30893)
  • Fix progress reporter sort_by_metric with nested metrics (#30906)
  • Don't raise TuneError on fail_fast="raise" (#30817)
  • Fix duplicate printing when trial is done (#30597)

📖Documentation:

🏗 Architecture refactoring:

  • Deprecate passing a custom trial executor (#31792)
  • Move signal handling into separate method (#31004)
  • Update staged resources in a fixed counter for faster lookup (#32087)
  • Rename overwrite_trainable argument in Tuner restore to trainable (#32059)

Ray Serve

🎉 New Features:

  • Serve Python API to support multiple applications (#31589)

💫Enhancements:

  • Add exponential backoff when retrying replicas (#31436)
  • Enable Log Rotation on Serve (#31844)
  • Use tasks/futures for asyncio.wait (#31608)
  • Change target_num_ongoing_requests_per_replica to positive float (#31378)

🔨 Fixes:

  • Upgrade deprecated calls (#31839)
  • Change Gradio integration to take a builder function to avoid serialization issues (#31619)
  • Add initial health check before marking a replica as RUNNING (#31189)

📖Documentation:

  • Document end-to-end timeout in Serve (#31769)
  • Document Gradio visualization (#28310)

RLlib

🎉 New Features:

  • Gymnasium is now supported. (Notes)
  • Connectors are now activated by default (#31693, #30388, #31618, #31444, #31092)
  • Contribution of LeelaChessZero algorithm for playing chess in a MultiAgent env. (#31480)

💫Enhancements:

  • [RLlib] Error out if action_dict is empty in MultiAgentEnv. (#32129)
  • [RLlib] Upgrade tf eager code to no longer use experimental_relax_shapes (but reduce_retracing instead). (#29214)
  • [RLlib] Reduce SampleBatch counting complexity (#30936)
  • [RLlib] Use PyTorch vectorized max() and sum() in SampleBatch.init when possible (#28388)
  • [RLlib] Support multi-gpu CQL for torch (tf already supported). (#31466)
  • [RLlib] Introduce IMPALA off_policyness test with GPU (#31485)
  • [RLlib] Properly serialize and restore StateBufferConnector states for policy stashing (#31372)
  • [RLlib] Clean up deprecated concat_samples calls (#31391)
  • [RLlib] Better support MultiBinary spaces by treating Tuples as superset of them in ComplexInputNet. (#28900)
  • [RLlib] Add backward compatibility to MeanStdFilter to restore from older checkpoints. (#30439)
  • [RLlib] Clean up some signatures for compute_actions. (#31241)
  • [RLlib] Simplify logging configuration. (#30863)
  • [RLlib] Remove native Keras Models. (#30986)
  • [RLlib] Convert PolicySpec to a readable format when converting to_dict(). (#31146)
  • [RLlib] Issue 30394: Add proper __str__() method to PolicyMap. (#31098)
  • [RLlib] Issue 30840: Option to only checkpoint policies that are trainable. (#31133)
  • [RLlib] Deprecate (delete) contrib folder. (#30992)
  • [RLlib] Better behavior if user does not specify stopping condition in RLLib CLI. (#31078)
  • ...

Ray-2.2.0

13 Dec 19:56
840215b

Release Highlights

Ray 2.2 is a stability-focused release, featuring stability improvements across many Ray components.

  • Ray Jobs API is now GA. The Ray Jobs API allows you to submit locally developed applications to a remote Ray Cluster for execution. It simplifies the experience of packaging, deploying, and managing a Ray application (see the sketch after this list).
  • Ray Dashboard has received a number of improvements, such as the ability to see CPU flame graphs of your Ray workers and new metrics for memory usage.
  • The Out-Of-Memory (OOM) Monitor is now enabled by default. This will increase the stability of memory-intensive applications on top of Ray.
  • [Ray Data] We’ve heard numerous users report that when files are too large, Ray Data can have out-of-memory or performance issues. In this release, we’re enabling dynamic block splitting by default, which addresses these issues by avoiding holding too much data in memory.
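
A minimal sketch (not part of the release notes) of submitting a locally developed script through the now-GA Jobs API; the dashboard address, script, and dependencies are hypothetical.

```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")  # Ray dashboard address

job_id = client.submit_job(
    entrypoint="python my_script.py",
    runtime_env={"working_dir": "./", "pip": ["requests"]},
)
print(client.get_job_status(job_id))
```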

Ray Libraries

Ray AIR

🎉 New Features:

  • Add a NumPy first path for Torch and TensorFlow Predictors (#28917)

💫Enhancements:

  • Suppress "NumPy array is not writable" error in torch conversion (#29808)
  • Add node rank and local world size info to session (#29919)

🔨 Fixes:

  • Fix MLflow database integrity error (#29794)
  • Fix ResourceChangingScheduler dropping PlacementGroupFactory args (#30304)
  • Fix bug passing 'raise' to FailureConfig (#30814)
  • Fix reserved CPU warning if no CPUs are used (#30598)

📖Documentation:

  • Fix examples and docs to specify batch_format in BatchMapper (#30438)

🏗 Architecture refactoring:

  • Deprecate Wandb mixin (#29828)
  • Deprecate Checkpoint.to_object_ref and Checkpoint.from_object_ref (#30365)

Ray Data Processing

🎉 New Features:

  • Support all PyArrow versions released by Apache Arrow (#29993, #29999)
  • Add select_columns() to select a subset of columns (#29081)
  • Add write_tfrecords() to write TFRecord files (#29448)
  • Support MongoDB data source (#28550)
  • Enable dynamic block splitting by default (#30284)
  • Add from_torch() to create dataset from Torch dataset (#29588)
  • Add from_tf() to create dataset from TensorFlow dataset (#29591)
  • Allow to set batch_size in BatchMapper (#29193)
  • Support read/write from/to local node file system (#29565)

💫Enhancements:

  • Add include_paths in read_images() to return image file path (#30007)
  • Print out Dataset statistics automatically after execution (#29876)
  • Cast tensor extension type to opaque object dtype in to_pandas() and to_dask() (#29417)
  • Encode number of dimensions in variable-shaped tensor extension type (#29281)
  • Fuse AllToAllStage and OneToOneStage with compatible remote args (#29561)
  • Change read_tfrecords() output from Pandas to Arrow format (#30390)
  • Handle all Ray errors in task compute strategy (#30696)
  • Allow nested Chain preprocessors (#29706)
  • Warn user if missing columns and support str exclude in Concatenator (#29443)
  • Raise ValueError if preprocessor column doesn't exist (#29643)

🔨 Fixes:

  • Support custom resource with remote args for random_shuffle() (#29276)
  • Support custom resource with remote args for random_shuffle_each_window() (#29482)
  • Add PublicAPI annotation to preprocessors (#29434)
  • Tensor extension column concatenation fixes (#29479)
  • Fix iter_batches() to not return empty batch (#29638)
  • Change map_batches() to fetch input blocks on-demand (#29289)
  • Change take_all() to not accept limit argument (#29746)
  • Convert between block and batch correctly for map_groups() (#30172)
  • Fix stats() call causing Dataset schema to be unset (#29635)
  • Raise error when batch_format is not specified for BatchMapper (#30366)
  • Fix ndarray representation of single-element ragged tensor slices (#30514)

📖Documentation:

  • Improve map_batches() documentation about execution model and UDF pickle-ability requirement (#29233)
  • Improve to_tf() docstring (#29464)

Ray Train

🎉 New Features:

💫Enhancements:

  • Fast fail upon single worker failure (#29927)
  • Optimize checkpoint conversion logic (#29785)

🔨 Fixes:

  • Propagate DatasetContext to training workers (#29192)
  • Show correct error message on training failure (#29908)
  • Fix prepare_data_loader with enable_reproducibility (#30266)
  • Fix usage of NCCL_BLOCKING_WAIT (#29562)

📖Documentation:

  • Deduplicate Train examples (#29667)

🏗 Architecture refactoring:

  • Hard deprecate train.report (#29613)
  • Remove deprecated Train modules (#29960)
  • Deprecate old prepare_model DDP args #30364

Ray Tune

🎉 New Features:

  • Make Tuner.restore work with relative experiment paths (#30363)
  • Tuner.restore from a local directory that has moved (#29920)

💫Enhancements:

  • with_resources takes in a ScalingConfig (#30259)
  • Keep resource specifications when nesting with_resources in with_parameters (#29740)
  • Add trial_name_creator and trial_dirname_creator to TuneConfig (#30123)
  • Add option to not override the working directory (#29258)
  • Only convert a BaseTrainer to Trainable once in the Tuner (#30355)
  • Dynamically identify PyTorch Lightning Callback hooks (#30045)
  • Make remote_checkpoint_dir work with query strings (#30125)
  • Make cloud checkpointing retry configurable (#30111)
  • Sync experiment-checkpoints more often (#30187)
  • Update generate_id algorithm (#29900)

🔨 Fixes:

  • Catch SyncerCallback failure with dead node (#29438)
  • Do not warn in BayesOpt w/ Uniform sampler (#30350)
  • Fix ResourceChangingScheduler dropping PGF args (#30304)
  • Fix Jupyter output with Ray Client and Tuner (#29956)
  • Fix tests related to TUNE_ORIG_WORKING_DIR env variable (#30134)

📖Documentation:

  • Add user guide for analyzing results (using ResultGrid and Result) (#29072)
  • Tune checkpointing and Tuner restore docfix (#29411)
  • Fix and clean up PBT examples (#29060)
  • Fix TrialTerminationReporter in docs (#29254)

🏗 Architecture refactoring:

  • Remove hard deprecated SyncClient/Syncer (#30253)
  • Deprecate Wandb mixin, move to setup_wandb() function (#29828)

Ray Serve

🎉 New Features:

  • Guard for high latency requests (#29534)
  • Java API Support (blog)

💫Enhancements:

  • Serve K8s HA benchmarking (#30278)
  • Add method info for http metrics (#29918)

🔨 Fixes:

  • Fix log format error (#28760)
  • Inherit previous deployment num_replicas (#29686)
  • Polish serve run deploy message (#29897)
  • Remove calling of get_event_loop from Python 3.10

RLlib

🎉 New Features:

  • Fault tolerant, elastic WorkerSets: An asynchronous Ray Actor manager class is now used inside all of RLlib’s Algorithms, adding fully flexible fault tolerance to rollout workers and workers used for evaluation. If one or more workers (which are Ray actors) fail - e.g. due to a SPOT instance going down - the RLlib Algorithm will now flexibly wait it out and periodically try to recreate the failed workers. In the meantime, only the remaining healthy workers are used for sampling and evaluation. (#29938, #30118, #30334, #30252, #29703, #30183, #30327, #29953)

💫Enhancements:

  • RLlib CLI: A new and enhanced RLlib command line interface (CLI) has been added, allowing for automatically downloading example configuration files, python-based config files (defining an AlgorithmConfig object to use), better interoperability between training and evaluation runs, and many more. For a detailed overview of what has changed, check out the new CLI documentation. (#29204, #29459, #30526, #29661, #29972)
  • Checkpoint overhaul: Algorithm checkpoints and Policy checkpoints are now more cohesive and transparent. All checkpoints are now characterized by a directory (with files and maybe sub-directories), rather than a single pickle file; Both Algorithm and Policy classes now have a utility static method (from_checkpoint()) for directly instantiating instances from a checkpoint directory w/o knowing the original configuration used or any other information (having the checkpoint is sufficient). For a detailed overview, see here. (#28812, #29772, #29370, #29520, #29328)
  • A new metric for APPO/IMPALA/PPO has been added that measures off-policy’ness: The difference in number of grad-updates the sampler policy has received thus far vs the trained policy’s number of grad-updates thus far. (#29983)

🏗 Architecture refactoring:

  • AlgorithmConfig classes: All of RLlib’s Algorithms, RolloutWorkers, and other important classes now use AlgorithmConfig objects under the hood, instead of python config dicts. It is no longer recommended (however, still supported) to create a new algorithm (or a Tune+RLlib experiment) using a python dict as configuration. For more details on how to convert your scripts to the new AlgorithmConfig design, see here. (#29796, #30020, #29700, #29799, #30096, #29395, #29755, #30053, #29974, #29854, #29546, #30042, #29544, #30079, #30486, #30361)
  • Major progress was made on the new Connector API and making sure it can be used (tentatively) with the “config.rollouts(enable_connectors=True)” flag. Will be fully supported, across all of RLlib’s algorithms, in Ray 2.3. (#30307, #30434, #30459, #303...
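
A minimal sketch (not part of the release notes) of the AlgorithmConfig pattern described above, using PPO; the environment, worker count, and framework are arbitrary.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rollouts(num_rollout_workers=2, enable_connectors=True)  # connector flag per the note above
    .framework("torch")
)

algo = config.build()  # replaces building an Algorithm from a plain python dict
result = algo.train()
print(result["episode_reward_mean"])
```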

Ray-2.1.0

08 Nov 01:55

Release Highlights

  • Ray AI Runtime (AIR)
    • Better support for Image-based workloads.
      • Ray Datasets read_images() API for loading data.
      • Numpy-based API for user-defined functions in Preprocessor.
    • Ability to read TFRecord input.
      • Ray Datasets read_tfrecords() API to read TFRecord files.
  • Ray Serve:
    • Add support for gRPC endpoint (alpha release). Instead of using an HTTP server, Ray Serve supports gRPC protocol and users can bring their own schema for their use case.
  • RLlib:
    • Introduce decision transformer (DT) algorithm.
    • New hook for callbacks with on_episode_created().
    • Learning rate schedule to SimpleQ and PG.
  • Ray Core:
    • Ray OOM prevention (alpha release).
    • Support dynamic generators as task return values.
  • Dashboard:
    • Time series metrics support.
    • Exported configuration files can be used in Prometheus or Grafana instances.
    • New progress bar in job detail view.

Ray Libraries

Ray AIR

💫Enhancements:

  • Improve readability of training failure output (#27946, #28333, #29143)
  • Auto-enable GPU for Predictors (#26549)
  • Add ability to create TorchCheckpoint from state dict (#27970)
  • Add ability to create TensorflowCheckpoint from saved model/h5 format (#28474)
  • Add attribute to retrieve URI from Checkpoint (#28731)
  • Add all allowable types to WandB Callback (#28888)

🔨 Fixes:

  • Handle nested metrics properly as scoring attribute (#27715)
  • Fix serializability of Checkpoints (#28387, #28895, #28935)

📖Documentation:

🏗 Architecture refactoring:

  • Deprecate Checkpoint.to_object_ref and Checkpoint.from_object_ref (#28318)
  • Deprecate legacy train/tune functions in favor of Session (#28856)

Ray Data Processing

🎉 New Features:

  • Add read_images (#29177) (see the sketch after this list)
  • Add read_tfrecords (#28430)
  • Add NumPy batch format to Preprocessor and BatchMapper (#28418)
  • Ragged tensor extension type (#27625)
  • Add KBinsDiscretizer Preprocessor (#28389)
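
A minimal sketch (not part of the release notes) of the new read APIs; the paths are hypothetical.

```python
import ray

images = ray.data.read_images("s3://my-bucket/images/")            # decoded image ndarrays
records = ray.data.read_tfrecords("s3://my-bucket/data.tfrecords")  # one row per TFRecord

print(images.schema())
print(records.take(1))
```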

💫Enhancements:

  • Simplify to_tf interface (#29028)
  • Add metadata override and inference in Dataset.to_dask() (#28625)
  • Prune unused columns before aggregate (#28556)
  • Add Dataset.default_batch_format (#28434)
  • Add partitioning parameter to read_ functions (#28413)
  • Deprecate "native" batch format in favor of "default" (#28489)
  • Support None partition field name (#28417)
  • Re-enable Parquet sampling and add progress bar (#28021)
  • Cap the number of stats kept in StatsActor and purge in FIFO order if the limit is exceeded (#27964)
  • Customized serializer for Arrow JSON ParseOptions in read_json (#27911)
  • Optimize groupby/mapgroups performance (#27805)
  • Improve size estimation of image folder data source (#27219)
  • Use detached lifetime for stats actor (#25271)
  • Pin _StatsActor to the driver node (#27765)
  • Better error message for partition filtering if no file found (#27353)
  • Make Concatenator deterministic (#27575)
  • Change FeatureHasher input schema to expect token counts (#27523)
  • Avoid unnecessary reads when truncating a dataset with ds.limit() (#27343)
  • Hide tensor extension from UDFs (#27019)
  • Add repr to AIR classes (#27006)

🔨 Fixes:

  • Add upper bound to pyarrow version check (#29674) (#29744)
  • Fix map_groups to work with different output type (#29184)
  • read_csv not filter out files by default (#29032)
  • Check columns when adding rows to TableBlockBuilder (#29020)
  • Fix the peak memory usage calculation (#28419)
  • Change sampling to use same API as read Parquet (#28258)
  • Fix column assignment in Concatenator for Pandas 1.2. (#27531)
  • Doing partition filtering in reader constructor (#27156)
  • Fix split ownership (#27149)

📖Documentation:

  • Clarify dataset transformation. (#28482)
  • Update map_batches documentation (#28435)
  • Improve docstring and doctest for read_parquet (#28488)
  • Activate dataset doctests (#28395)
  • Document using a different separator for read_csv (#27850)
  • Convert custom datetime column when reading a CSV file (#27854)
  • Improve preprocessor documentation (#27215)
  • Improve limit() and take() docstrings (#27367)
  • Reorganize the tensor data support docs (#26952)
  • Fix nyc_taxi_basic_processing notebook (#26983)

Ray Train

🎉 New Features:

  • Add FullyShardedDataParallel support to TorchTrainer (#28096)

💫Enhancements:

  • Add rich notebook repr for DataParallelTrainer (#26335)
  • Fast fail if training loop raises an error on any worker (#28314)
  • Use torch.encode_data with HorovodTrainer when torch is imported (#28440)
  • Automatically set NCCL_SOCKET_IFNAME to use ethernet (#28633)
  • Don't add Trainer resources when running on Colab (#28822)
  • Support large checkpoints and other arguments (#28826)

🔨 Fixes:

📖Documentation:

  • Clarify LGBM/XGB Trainer documentation (#28122)
  • Improve Hugging Face notebook example (#28121)
  • Update Train API reference and docs (#28192)
  • Mention FSDP in HuggingFaceTrainer docs (#28217)

🏗 Architecture refactoring:

  • Improve Trainer modularity for extensibility (#28650)

Ray Tune

🎉 New Features:

  • Add Tuner.get_results() to retrieve results after restore (#29083)

💫Enhancements:

  • Exclude files in sync_dir_between_nodes, exclude temporary checkpoints (#27174)
  • Add rich notebook output for Tune progress updates (#26263)
  • Add logdir to W&B run config (#28454)
  • Improve readability for long column names in table output (#28764)
  • Add functionality to recover from latest available checkpoint (#29099)
  • Add retry logic for restoring trials (#29086)

🔨 Fixes:

  • Re-enable progress metric detection (#28130)
  • Add timeout to retry_fn to catch hanging syncs (#28155)
  • Correct PB2’s beta_t parameter implementation (#28342)
  • Ignore directory exists errors to tackle race conditions (#28401)
  • Correctly overwrite files on restore (#28404)
  • Disable pytorch-lightning multiprocessing by default (#28335)
  • Raise error if scheduling an empty PlacementGroupFactory (#28445)
  • Fix trial cleanup after x seconds, set default to 600 (#28449)
  • Fix trial checkpoint syncing after recovery from other node (#28470)
  • Catch empty hyperopt search space, raise better Tuner error message (#28503)
  • Fix and optimize sample search algorithm quantization logic (#28187)
  • Support tune.with_resources for class methods (#28596)
  • Maintain consistent Trial/TrialRunner state when pausing and resuming trial with PBT (#28511)
  • Raise better error for incompatible gcsfs version (#28772)
  • Ensure that exploited in-memory checkpoint is used by trial with PBT (#28509)
  • Fix Tune checkpoint tracking for minimizing metrics (#29145)

📖Documentation:

🏗 Architecture refactoring:

  • Store SyncConfig and CheckpointConfig in Experiment and Trial (#29019)

Ray Serve

🎉 New Features:

  • Added gRPC direct ingress support [alpha version] (#28175)
  • Serve CLI can provide Kubernetes-formatted output (#28918)
  • Serve CLI can provide user config output without default values (#28313)

💫Enhancements:

  • Enrich more benchmarks:
    • Image object detection with resnet50 model with image preprocessing (#29096)
    • gRPC vs HTTP inference performance (#28175)
  • Add health check metrics to reflect the replica health status (#29154)

🔨 Fixes:

  • Fix memory leak issues during inference (#29187)
  • Fix unexpected HTTP options omit warning when using the Serve CLI to start Ray Serve (#28257)
  • Fix unexpected long poll exceptions (#28612)

📖Documentation:

  • Add e2e fault tolerance instructions (#28721)
  • Add Direct Ingress instructions (#29149)
  • A bunch of doc improvements on “dev workflow”, “custom resources”, “serve cli”, etc. (#29147, #28708, #28529, #28527)

RLlib

🎉 New Features:

  • Decision Transformer (DT) Algorithm added (#27890, #27889, #27872, #27829).
  • Callbacks now have a new hook on_episode_created(). (#28600)
  • Added learning rate schedule to SimpleQ and PG. (#28381)

💫Enhancements:

🔨 Fixes:

📖Documentation:

Ray Workflows

🔨 Fixes:

  • Fixed the object loss due to driver exit (#29092)
  • Change the name in step to task_id (#28151)

Ray Core and Ray Clusters

Ray Core

🎉 New Features:

  • Ray OOM prevention feature alpha release! If your Ray jobs suffer from OOM issues, please give it a try.
  • Support dynamic generators as task return values. (#29082 #28864 #28291)
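
A minimal sketch (not part of the release notes) of dynamic generators as task return values; the chunking logic is trivial for illustration.

```python
import ray


@ray.remote(num_returns="dynamic")
def split(n: int):
    for i in range(n):
        yield i  # the number of returned objects need not be known up front


ref_generator = ray.get(split.remote(3))        # an ObjectRefGenerator
print([ray.get(ref) for ref in ref_generator])  # [0, 1, 2]
```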

💫Enhancements:

  • Fix spread scheduling imbalance issues (#28804, #28551)
  • Widening range of grpcio versions al...