Releases: apache/druid

druid-0.14.2-incubating

27 May 21:05

Apache Druid 0.14.2-incubating is a bug fix release that includes important fixes for the druid-datasketches extension and the Broker result-level cache.

Bug Fixes

  • #7607 thetaSketch (with sketches-core-0.13.1) in groupBy always returns values no greater than 16384
  • #6483 Exception during sketch aggregations while using Result level cache
  • #7621 NPE when both populateResultLevelCache and grandTotal are set

Credits

Thanks to everyone who contributed to this release!

@AlexanderSaydakov
@clintropolis
@jihoonson
@jon-wei

Apache Druid (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

druid-0.14.1-incubating

09 May 04:04

Apache Druid 0.14.1-incubating is a small patch release that includes a handful of bug and documentation fixes from 16 contributors.

Important Notice

This release fixes an issue with the druid-datasketches extension's quantile sketches, but introduces another issue with theta sketches that was confirmed after the release was finalized, caused by #7320 and described in #7607. If you use theta sketches, we recommend not upgrading to this release. This will be fixed in the next release of Druid by #7619.

Bug Fixes

  • use latest sketches-core-0.13.1 #7320
  • Adjust BufferAggregator.get() impls to return copies #7464
  • DoublesSketchComplexMetricSerde: Handle empty strings. #7429
  • handle empty sketches #7526
  • Adds backwards-compatible serde for SeekableStreamStartSequenceNumbers. #7512
  • Support Kafka supervisor adopting running tasks between versions #7212
  • Fix time-extraction topN with non-STRING outputType. #7257
  • Fix two issues with Coordinator -> Overlord communication. #7412
  • refactor druid-bloom-filter aggregators #7496
  • Fix encoded taskId check in chatHandlerResource #7520
  • Fix too many dentry cache slab objects (#7508). #7509
  • Fix result-level cache for queries #7325
  • Fix flattening Avro Maps with Utf8 keys #7258
  • Write null byte when indexing numeric dimensions with Hadoop #7020
  • Batch hadoop ingestion job doesn't work correctly with custom segments table #7492
  • Fix aggregatorFactory meta merge exception #7504

Documentation Changes

  • Fix broken link due to Typo. #7513
  • Some docs optimization #6890
  • Updated Javascript Affinity config docs #7441
  • fix expressions docs operator table #7420
  • Fix conflicting information in configuration doc #7299
  • Add missing doc link for operations/http-compression.html #7110

Updating from 0.14.0-incubating and earlier

Kafka Ingestion

Unlike the migration path to 0.14.0-incubating, updating directly from version 0.13.0-incubating or earlier to 0.14.1-incubating does not require downtime, because the issue described in #6958 has been fixed for this release in #7212. Likewise, rolling updates from version 0.13.0-incubating and earlier should also work properly thanks to #7512.

Native Parallel Ingestion

Thanks to the fix in #7520, updating directly from 0.13.0-incubating to 0.14.1-incubating will not encounter the issues with mixed MiddleManager versions during a rolling update that could occur when updating to 0.14.0-incubating.

Credits

Thanks to everyone who contributed to this release!

@AlexanderSaydakov
@b-slim
@benhopp
@chrishardis
@clintropolis
@ferristseng
@es1220
@gianm
@jihoonson
@jon-wei
@justinborromeo
@kaka11chen
@samarthjain
@surekhasaharan
@zhaojiandong
@zhztheplayer

Apache Druid (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

druid-0.14.0-incubating-rc3

09 Apr 21:09
[maven-release-plugin] prepare release druid-0.14.0-incubating

druid-0.14.0-incubating

09 Apr 21:12

Apache Druid (incubating) 0.14.0-incubating contains over 200 new features, performance/stability/documentation improvements, and bug fixes from 54 contributors. Major new features and improvements include:

  • New web console
  • Amazon Kinesis indexing service
  • Decommissioning mode for Historicals
  • Published segment cache in Broker
  • Bloom filter aggregator and expression
  • Updated Apache Parquet extension
  • Force push down option for nested GroupBy queries
  • Better segment handoff and drop rule handling
  • Automatically kill MapReduce jobs when Apache Hadoop ingestion tasks are killed
  • DogStatsD tag support for statsd emitter
  • New API for retrieving all lookup specs
  • New compaction options
  • More efficient cachingCost segment balancing strategy

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+is%3Amerged+milestone%3A0.14.0

Documentation for this release is at: http://druid.io/docs/0.14.0-incubating/

Highlights

New web console

[Screenshot: the new Druid web console]

Druid has a new web console that provides functionality that was previously split between the coordinator and overlord consoles.

The new console allows the user to manage datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.

For more details, please see http://druid.io/docs/0.14.0-incubating/operations/management-uis.html

Added by @vogievetsky in #6923.

Kinesis indexing service

Druid now supports ingestion from Kinesis streams, provided by the new druid-kinesis-indexing-service core extension.

Please see http://druid.io/docs/0.14.0-incubating/development/extensions-core/kinesis-ingestion.html for details.
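
As a rough, illustrative sketch of a Kinesis supervisor spec (the stream name, endpoint, and elided schema fields are placeholders and assumptions here; please consult the linked documentation for the authoritative format):

{
  "type": "kinesis",
  "dataSchema": { "dataSource": "my-kinesis-datasource", ... },
  "ioConfig": {
    "stream": "my-stream",
    "endpoint": "kinesis.us-east-1.amazonaws.com"
  },
  "tuningConfig": { "type": "kinesis" }
}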

Added by @jsun98 in #6431.

Decommissioning mode for Historicals

Historical processes can now be put into a "decommissioning" mode, where the coordinator will no longer consider the Historical process as a target for segment replication. The coordinator will also move segments off the decommissioning Historical.

This is controlled via Coordinator dynamic configuration. For more details, please see http://druid.io/docs/0.14.0-incubating/configuration/index.html#dynamic-configuration.
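
As a sketch of the Coordinator dynamic configuration (the field names decommissioningNodes and decommissioningMaxPercentOfMaxSegmentsToMove and the values below are assumptions for illustration; see the linked documentation for the authoritative fields):

{
  "decommissioningNodes": ["historical-host:8083"],
  "decommissioningMaxPercentOfMaxSegmentsToMove": 70
}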

Added by @egor-ryashin in #6349.

Published segment cache on Broker

The Druid Broker now has the ability to maintain a cache of published segments via polling the Coordinator, which can significantly improve response time for metadata queries on the sys.segments system table.

Please see http://druid.io/docs/0.14.0-incubating/querying/sql.html#retrieving-metadata for details.
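
For example, a metadata query against the sys.segments table that benefits from the published segment cache might look roughly like the following (the column names are a sketch based on the documented system schema):

SELECT "datasource", COUNT(*) AS num_segments, SUM("size") AS total_size
FROM sys.segments
WHERE is_published = 1
GROUP BY 1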

Added by @surekhasaharan in #6901

Bloom filter aggregator and expression

A new aggregator for constructing Bloom filters at query time and support for performing Bloom filter checks within Druid expressions have been added to the druid-bloom-filter extension.

Please see http://druid.io/docs/0.14.0-incubating/development/extensions-core/bloom-filter.html
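
As a hedged sketch of the query-time Bloom filter aggregator (field names and values here are illustrative; see the linked bloom-filter documentation for the exact spec):

{
  "type": "bloom",
  "name": "userBloom",
  "maxNumEntries": 1000,
  "field": { "type": "default", "dimension": "user" }
}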

Added by @clintropolis in #6904 and #6397

Updated Parquet extension

druid-parquet-extensions has been moved into the core extension set from the contrib extensions and now supports flattening and int96 values.

Please see http://druid.io/docs/0.14.0-incubating/development/extensions-core/parquet.html for details.

Added by @clintropolis in #6360

Force push down option for nested GroupBy queries

Outer query execution for nested GroupBy queries can now be pushed down to Historical processes; previously, the outer queries would always be executed on the Broker.

Please see #5471 for details.

Added by @samarthjain in #5471.

Better segment handoff and retention rule handling

Segment handoff will now ignore segments that would be dropped by a datasource's retention rules, avoiding ingestion failures caused by issue #5868.

Period load rules will now include the future by default.

A new "Period Drop Before" rule has been added. Please see http://druid.io/docs/0.14.0-incubating/operations/rule-configuration.html#period-drop-before-rule for details.

Added by @QiuMM in #6676, #6414, and #6415.

Automatically kill MapReduce jobs when Hadoop ingestion tasks are killed

Druid will now automatically terminate MapReduce jobs created by Hadoop batch ingestion tasks when the ingestion task is killed.

Added by @ankit0811 in #6828.

DogStatsD tag support for statsd-emitter

The statsd-emitter extension now supports DogStatsD-style tags. Please see http://druid.io/docs/0.14.0-incubating/development/extensions-contrib/statsd.html

Added by @deiwin in #6605, with support for constant tags added by @glasser in #6791.

New API for retrieving all lookup specs

A new API for retrieving all lookup specs for all tiers has been added. Please see http://druid.io/docs/0.14.0-incubating/querying/lookups.html#get-all-lookups for details.
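
The endpoint below is a sketch based on the "get all lookups" documentation anchor; please confirm the exact path against the linked page:

GET /druid/coordinator/v1/lookups/config/all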

Added by @jihoonson in #7025.

New compaction options

Auto-compaction now supports the maxRowsPerSegment option. Please see http://druid.io/docs/0.14.0-incubating/design/coordinator.html#compacting-segments for details.

The compaction task now supports a new segmentGranularity option, deprecating the older keepSegmentGranularity option for controlling the segment granularity of compacted segments. Please see the segmentGranularity table in http://druid.io/docs/0.14.0-incubating/ingestion/compaction.html for more information on these properties.
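
As a rough example of a compaction task spec using the new segmentGranularity option (the dataSource, interval, and granularity values are placeholders; see the linked compaction documentation for all supported fields):

{
  "type": "compact",
  "dataSource": "wikipedia",
  "interval": "2019-01-01/2019-02-01",
  "segmentGranularity": "DAY"
}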

Added by @jihoonson in #6758 and #6780.

More efficient cachingCost segment balancing strategy

The cachingCost Coordinator segment balancing strategy will now only consider Historical processes for balancing decisions. Previously the strategy would unnecessarily consider active worker tasks as well, which are not targets for segment replication.

Added by @QiuMM in #6879.

New metrics

  • New allocation rate metric jvm/heapAlloc/bytes, added by @egor-ryashin in #6710.
  • New query count metric query/count, added by @QiuMM in #6473.
  • SQL query metrics sqlQuery/bytes and sqlQuery/time, added by @gaodayue in #6302.
  • Apache Kafka ingestion lag metrics ingest/kafka/maxLag and ingest/kafka/avgLag, added by @QiuMM in #6587
  • Task count metrics task/success/count, task/failed/count, task/running/count, task/pending/count, task/waiting/count, added by @QiuMM in #6657

New interfaces for extension developers

RequestLogEvent

It is now possible to control the fields in RequestLogEvent, emitted by EmittingRequestLogger. Please see #6477 for details. Added by @leventov.

Custom TLS certificate checks

An extension point for custom TLS certificate checks has been added. Please see http://druid.io/docs/0.14.0-incubating/operations/tls-support.html#custom-tls-certificate-checks for details. Added by @jon-wei in #6432.

Kafka Indexing Service no longer experimental

The Kafka Indexing Service extension has been moved out of experimental status.

SQL Enhancements

Enhancements to dsql

The dsql command line client now supports CLI history, basic autocomplete, and specifying query timeouts in the query context.

Added in #6929 by @gianm.

Add SQL id, request logs, and metrics

SQL queries now have an ID, and native queries executed as part of a SQL query will have the associated SQL query ID in the native query's request logs. SQL queries will now be logged in the request logs.

Two new metrics, sqlQuery/time and sqlQuery/bytes, are now emitted for SQL queries.

Please see http://druid.io/docs/0.14.0-incubating/configuration/index.html#request-logging and http://druid.io/docs/0.14.0-incubating/querying/sql.html#sql-metrics for details.

Added by @gaodayue in #6302

More SQL aggregator support

The following aggregators are now supported in SQL; a sketch of example usage follows the list:

  • DataSketches HLL sketch
  • DataSketches Theta sketch
  • DataSketches quantiles sketch
  • Fixed bins histogram
  • Bloom filter aggregator
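
For example, the DataSketches aggregators can be used from SQL roughly like this (the function names APPROX_COUNT_DISTINCT_DS_HLL and APPROX_QUANTILE_DS are assumptions based on the SQL documentation for this release, and the table and column names are placeholders):

SELECT
  APPROX_COUNT_DISTINCT_DS_HLL("user") AS unique_users,
  APPROX_QUANTILE_DS("latency", 0.95) AS p95_latency
FROM my_datasource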

Added by @jon-wei in #6951 and @clintropolis in #6502

Other SQL enhancements

  • SQL: Add support for queries with project-after-semijoin. #6756
  • SQL: Support for selecting multi-value dimensions. #6462
  • SQL: Support AVG on system tables. #601
  • SQL: Add "POSITION" function. #6596
  • SQL: Set INFORMATION_SCHEMA catalog name to "druid". #6595
  • SQL: Fix ordering of sort, sortProject in DruidSemiJoin. #6769

Added by @gianm.

Updating from 0.13.0-incubating and earlier

Kafka ingestion downtime when upgrading

Due to the issue described in #6958, existing Kafka indexing tasks can be terminated unnecessarily during a rolling upgrade of the Overlord. The terminated tasks will be restarted by the Overlord and will function correctly after the initial restart.

Parquet extension changes

The druid-parquet-extensions extension has been moved from contrib to core. When deploying 0.14.0-incubating, please ensure that your extensions-contrib directory does not have any older versions of the Parquet extension.

Additionally, there are now two styles of Parquet parsers in the extension:

  • parquet-avro: Converts Parquet to Avro, and then parses the Avro representation. This was the existing parser prior to 0.14.0-incubating.
  • parquet: A new parser that parses the Parquet format directly. Only this new parser supports int96 values.

Prior to 0.14.0-incubating, specifying a parquet type parser would cause a task to use the Avro-converting parser. In 0.14.0-incubating, to continue using the Avro-converting parser, you will need to update your ingestion specs to use parquet-avro instead.
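
For example, an ingestion spec that previously relied on the Avro-converting parser would change roughly as follows (other parser fields are elided; this is a sketch, not a complete spec):

Before (pre-0.14.0 behavior):
"parser": { "type": "parquet", "parseSpec": { ... } }

After (0.14.0-incubating, to keep the Avro-converting parser):
"parser": { "type": "parquet-avro", "parseSpec": { ... } }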

The inputFormat field in the inputSpec for tasks using Parquet ...

druid-0.13.0-incubating

12 Dec 23:52

Druid 0.13.0-incubating contains over 400 new features, performance/stability/documentation improvements, and bug fixes from 81 contributors. It is the first release of Druid in the Apache Incubator program. Major new features and improvements include:

  • native parallel batch indexing
  • automatic segment compaction
  • system schema tables
  • improved indexing task status, statistics, and error reporting
  • SQL-compatible null handling
  • result-level broker caching
  • ingestion from RDBMS
  • Bloom filter support
  • additional SQL result formats
  • additional aggregators (stringFirst/stringLast, ArrayOfDoublesSketch, HllSketch)
  • support for multiple grouping specs in groupBy query
  • mutual TLS support
  • HTTP-based worker management
  • broker backpressure
  • maxBytesInMemory ingestion tuning configuration
  • materialized views (community extension)
  • parser for Influx Line Protocol (community extension)
  • OpenTSDB emitter (community extension)

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+is%3Aclosed+milestone%3A0.13.0

Documentation for this release is at: http://druid.io/docs/0.13.0-incubating/

Highlights

Native parallel batch indexing

Introduces the index_parallel supervisor which manages the parallel batch ingestion of splittable sources without requiring a dependency on Hadoop. See http://druid.io/docs/latest/ingestion/native_tasks.html for more information.

Note: This is the initial single-phase implementation and has limitations on how it expects the input data to be partitioned. Notably, it does not have a shuffle implementation which will be added in the next iteration of this feature. For more details, see the proposal at #5543.
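
As a minimal sketch of a parallel index task spec (the dataSchema, ioConfig, and tuningConfig contents are elided; see the linked native_tasks documentation for the full format):

{
  "type": "index_parallel",
  "spec": {
    "dataSchema": { ... },
    "ioConfig": { "type": "index_parallel", ... },
    "tuningConfig": { "type": "index_parallel", ... }
  }
}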

Added by @jihoonson in #5492.

Automatic segment compaction

Previously, compacting small segments into optimally-sized ones to improve query performance required submitting and running compaction or re-indexing tasks. This was often a manual process or required an external scheduler to handle the periodic submission of tasks. This patch implements automatic segment compaction managed by the coordinator service.

Note: This is the initial implementation and has limitations on interoperability with realtime ingestion tasks. Indexing tasks currently require acquisition of a lock on the portion of the timeline they will be modifying to prevent inconsistencies from concurrent operations. This implementation uses low-priority locks to ensure that it never interrupts realtime ingestion, but this also means that compaction may fail to make any progress if the realtime tasks are continually acquiring locks on the time interval being compacted. This will be improved in the next iteration of this feature with finer-grained locking. For more details, see the proposal at #4479.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/design/coordinator.html#compacting-segments

Added by @jihoonson in #5102.

System schema tables

Adds a system schema to the SQL interface which contains tables exposing information on served and published segments, nodes of the cluster, and information on running and completed indexing tasks.

Note: This implementation contains some known overhead inefficiencies that will be addressed in a future patch.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/querying/sql.html#system-schema

Added by @surekhasaharan in #6094.

Improved indexing task status, statistics, and error reporting

Improves the performance and detail of the ingestion-related APIs, which were previously quite opaque, making it difficult to determine the cause of parse exceptions, task failures, and the actual output from a completed task. Also adds improved ingestion metric reporting, including moving average throughput statistics.

Added by @surekhasaharan and @jon-wei in #5801, #5418, and #5748.

SQL-compatible null handling

Improves Druid's handling of null values by treating them as missing values instead of being equivalent to empty strings or a zero-value. This makes Druid more SQL compatible and improves integration with external BI tools supporting ODBC/JDBC. See #4349 for proposal.

To enable this feature, you will need to set the system-wide property druid.generic.useDefaultValueForNull=false.

Added by @nishantmonu51 in #5278 and #5958.

Results-level broker caching

Implements result-level caching on brokers which can operate concurrently with the traditional segment-level cache. See #4843 for proposal.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/configuration/index.html#broker-caching

Added by @a2l007 in #5028.

Ingestion from RDBMS

Introduces a sql firehose which supports data ingestion directly from an RDBMS.

Added by @a2l007 in #5441.

Bloom filter support

Adds support for optimizing Druid queries by applying a Bloom filter generated by an external system such as Apache Hive. In the future, #6397 will support generation of Bloom filters as the result of Druid queries which can then be used to optimize future queries.

Added by @nishantmonu51 in #6222.

Additional SQL result formats

Adds result formats for line-based JSON and CSV, as well as X-Druid-Column-Names and X-Druid-Column-Types response headers containing a list of the columns in the result.
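
For example, posting a query to the SQL endpoint with a CSV result format might look roughly like this (the resultFormat value shown is a sketch; see the SQL documentation for the supported values):

POST /druid/v2/sql/
{
  "query": "SELECT COUNT(*) FROM my_datasource",
  "resultFormat": "csv"
}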

Added by @gianm in #6191.

'stringLast' and 'stringFirst' aggregators

Introduces two complementary aggregators, stringLast and stringFirst, which operate on string columns and return the value with the maximum and minimum timestamp, respectively.
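
A minimal aggregator spec sketch (the name and fieldName are placeholders, and the maxStringBytes field is an assumption and may be optional):

{
  "type": "stringLast",
  "name": "lastStatus",
  "fieldName": "status",
  "maxStringBytes": 1024
}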

Added by @andresgomezfrr in #5789.

ArrayOfDoublesSketch

Adds support for numeric Tuple sketches, which extend the functionality of the count distinct Theta sketches by adding arrays of double values associated with unique keys.

Added by @AlexanderSaydakov in #5148.

HllSketch

Adds a configurable implementation of a count distinct aggregator based on HllSketch from https://github.com/DataSketches. Comparison to Druid's native HyperLogLogCollector shows improved accuracy, efficiency, and speed: https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html

Added by @AlexanderSaydakov in #5712.

Support for multiple grouping specs in groupBy query

Adds support for the subtotalsSpec groupBy parameter which allows Druid to be efficient by reusing intermediate results at the broker level when running multiple queries that group by subsets of the same set of columns. See proposal in #5179 for more information.
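
As a sketch, a groupBy query grouping on dimensions d1 and d2 could compute subtotals for ["d1","d2"], ["d1"], and the grand total in a single pass (dimension names here are placeholders):

"dimensions": ["d1", "d2"],
"subtotalsSpec": [
  ["d1", "d2"],
  ["d1"],
  []
]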

Added by @himanshug in #5280.

Mutual TLS support

Adds support for mutual TLS (server certificate validation + client certificate validation). See: https://en.wikipedia.org/wiki/Mutual_authentication

Added by @jon-wei in #6076.

HTTP based worker management

Adds an HTTP-based indexing task management implementation to replace the previous one based on ZooKeeper. Part of a set of improvements to reduce and eventually eliminate Druid's dependency on ZooKeeper. See #4996 for proposal.

Added by @himanshug in #5104.

Broker backpressure

Allows the broker to exert backpressure on data-serving nodes to prevent the broker from crashing under memory pressure when results are coming in faster than they are being read by clients.

Added by @gianm in #6313.

'maxBytesInMemory' ingestion tuning configuration

Previously, a major tuning parameter for indexing task memory management was the maxRowsInMemory configuration, which determined the threshold for spilling the contents of memory to disk. This was difficult to properly configure since the 'size' of a row varied based on multiple factors. maxBytesInMemory makes this configuration byte-based instead of row-based.
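
For example, a task tuningConfig might set the byte-based limit roughly like this (the value is illustrative, and maxRowsInMemory can still be set alongside it):

"tuningConfig": {
  "type": "index",
  "maxBytesInMemory": 250000000
}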

Added by @surekhasaharan in #5583.

Materialized views

Supports the creation of materialized views which can improve query performance in certain situations at the cost of additional storage. See http://druid.io/docs/latest/development/extensions-contrib/materialized-view.html for more information.

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @zhangxinyu1 in #5556.

Parser for Influx Line Protocol

Adds support for ingesting the Influx Line Protocol data format. For more information, see: https://docs.influxdata.com/influxdb/v1.6/write_protocols/line_protocol_tutorial/

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @njhartwell in #5440.

OpenTSDB emitter

Adds support for emitting Druid metrics to OpenTSDB.

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @QiuMM in #5380.

Updating from 0.12.3 and earlier

Please see below for changes between 0.12.3 and 0.13.0 that you should be aware of before upgrading. If you're updating from an earlier version than 0.12.3, please see release notes of the relevant intermediate versions for additional notes.

MySQL metadata storage extension no longer includes JDBC driver

The MySQL metadata storage extension is now packaged together...

druid-0.12.3

18 Sep 21:05

Druid 0.12.3 contains stability improvements and bug fixes from 6 contributors. Major improvements include:

  • More stable Kafka indexing service
  • Several query bug fixes

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+milestone%3A0.12.3+is%3Aclosed

Documentation for this release is at: http://druid.io/docs/0.12.3

Highlights

More stable Kafka indexing service

0.12.3 fixes a serious issue where the Kafka indexing service would incorrectly delete published segments in certain situations. Please see #6155, contributed by @gianm, for details.

Other stability improvements and bug fixes are described in the sections below.

Query bug fixes

0.12.3 includes a memory allocation adjustment for the GroupBy query that should reduce heap usage. Added by @gaodayue in #6256.

This release also contains several fixes for the TopN and GroupBy queries when numeric dimension output types are used. Added by @gianm in #6220.

SQL fixes

Additionally, 0.12.3 includes the following Druid SQL bug fixes:

  • Fix post-aggregator naming logic for sort-project: #6250
  • Fix precision of TIMESTAMP types: #5464
  • Fix assumption that AND, OR have two arguments: #5470
  • Remove useless boolean CASTs in filters: #5619
  • Fix selecting BOOLEAN type in JDBC: #5401
  • Support projection after sorting in SQL: #5788
  • Fix missing postAggregations for Timeseries and TopN: #5912
  • Finalize aggregations for inner queries when necessary #6221

Other

0.12.3 fixes an issue with the druid-basic-security extension where non-coordinator nodes would sometimes fail when updating their cached views of the authentication and/or authorization tables from the coordinator. Fixed by @gaodayue in #6270.

Full change list

The full change list can be found here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+milestone%3A0.12.3+is%3Aclosed

Updating from 0.12.2 and earlier

0.12.3 is a minor release and compatible with 0.12.2. If you're updating from an earlier version than 0.12.2, please see release notes of the relevant intermediate versions for additional notes.

Credits

Thanks to everyone who contributed to this release!

@gaodayue
@gianm
@jihoonson
@jon-wei
@QiuMM
@vogievetsky
@wpcnjupt

druid-0.12.2

10 Aug 05:13

Druid 0.12.2 contains stability improvements and bug fixes from 13 contributors. Major improvements include:

  • More stable Kafka indexing service
  • More stable data ingestion
  • More stable segment balancing
  • Bug fixes in querying and result caching

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+milestone%3A0.12.2+is%3Aclosed

Documentation for this release is at: http://druid.io/docs/0.12.2-rc1

Highlights

More stable Kafka indexing service

We have fixed a number of bugs in the Kafka indexing service, mostly race conditions that occurred when incrementally publishing segments.

Added by @jihoonson in #5805.
Added by @surekhasaharan in #5899.
Added by @surekhasaharan in #5900.
Added by @jihoonson in #5905.
Added by @jihoonson in #5907.
Added by @jihoonson in #5996.

More stable data ingestion

We have also fixed some bugs in general data ingestion logic. In particular, we fixed a bug that produced incorrect segment data when using auto-encoded long columns with compression.

Added by @jihoonson in #5932.
Added by @clintropolis in #6045.

More stable segment balancing

The Coordinator now manages segments more reliably, especially during segment balancing. We have fixed unexpected segment imbalance caused by conflicting decisions between the Coordinator rule runner and the balancer.

Added by @clintropolis in #5528.
Added by @clintropolis in #5529.
Added by @clintropolis in #5532.
Added by @clintropolis in #5555.
Added by @clintropolis in #5591.
Added by @clintropolis in #5888.
Added by @clintropolis in #5928.

Bug fixes in querying and result caching

We've fixed incorrect lexicographic sorting in topN queries and incorrect filter application for nested queries. A ClassCastException that occurred when caching topN queries with Float dimensions has also been fixed.

Added by @drcrallen in #5650.
Added by @gianm in #5653.

And much more!

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+milestone%3A0.12.2+is%3Aclosed

Updating from 0.12.1 and earlier

0.12.2 is a minor release and compatible with 0.12.1. If you're updating from an earlier version than 0.12.1, please see release notes of the relevant intermediate versions for additional notes.

Credits

Thanks to everyone who contributed to this release!

@acdn-ekeddy
@awelsh93
@clintropolis
@drcrallen
@gianm
@jihoonson
@jon-wei
@kaijianding
@leventov
@michas2
@Oooocean
@samarthjain
@surekhasaharan

druid-0.12.1

08 Jun 20:13

Druid 0.12.1 contains stability improvements and bug fixes from 10 contributors. Major improvements include:

  • Large performance improvements for coordinator's loadstatus API
  • More memory limiting for HttpPostEmitter
  • Fix several issues with Kerberos authentication
  • Fix SQLMetadataSegmentManager to allow successive start and stop
  • Fix default interval handling in SegmentMetadataQuery
  • Support HTTP OPTIONS request
  • Fix a bug where the same segment ID could refer to different segments in Kafka indexing

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+milestone%3A0.12.1

Documentation for this release is at: http://druid.io/docs/0.12.1

Highlights

Large performance improvements for coordinator's loadstatus API

The Coordinator's loadstatus API returns the percentage of segments actually loaded in the cluster versus segments that should be loaded in the cluster. The performance of this API has been greatly improved.

Added by @jon-wei in #5632.

More memory limiting for HttpPostEmitter

Druid can now limit the amount of memory used by HttpPostEmitter to 10% of the available JVM heap, thereby avoiding OutOfMemory errors from buffered events.

Added by @jon-wei in #5300.

Fix several issues with Kerberos authentication

There were several bugs in Kerberos authentication, such as authentication failures when cookies were missing and broken authentication when the Router is used. See #5596, #5706, and #5766 for more details.

Added by @nishantmonu51 in #5596.
Added by @b-slim in #5706.
Added by @jon-wei in #5766.

Fix SQLMetadataSegmentManager to allow successive start and stop

A Coordinator could become stuck if it lost leadership while starting. This bug has now been fixed.

Added by @jihoonson in #5554.

Fix default interval handling in SegmentMetadataQuery

SegmentMetadataQuery is supposed to use the interval of druid.query.segmentMetadata.defaultHistory when no interval is specified, but it queried all segments instead, which incurred an unexpected performance hit. SegmentMetadataQuery now respects the defaultHistory option again.

Added by @gianm in #5489.

Support HTTP OPTIONS request

Druid now supports HTTP OPTIONS requests by fixing their authentication handling.

Added by @jon-wei in #5615.

Fix a bug where the same segment ID could refer to different segments in Kafka indexing

The Kafka indexing service allowed retried tasks to overwrite segments in deep storage that were written by previously failed tasks. However, this caused another bug where the same segment ID could have different data on Historicals and in deep storage. This has now been fixed by using unique segment paths for each Kafka indexing task.

Added by @dclim in #5692.

And much more!

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+milestone%3A0.12.1

Updating from 0.12.0 and earlier

0.12.1 is a minor release and compatible with 0.12.0. If you're updating from an earlier version than 0.12.0, please see release notes of the relevant intermediate versions for additional notes.

Credits

Thanks to everyone who contributed to this release!

@dclim
@gianm
@JeKuOrdina
@jihoonson
@jon-wei
@leventov
@niketh
@nishantmonu51
@pdeva

druid-0.12.0

09 Mar 03:07

Druid 0.12.0 contains over a hundred performance improvements, stability improvements, and bug fixes from almost 40 contributors. This release adds major improvements to the Kafka indexing service.

Other major new features include:

  • Prioritized task locking
  • Improved automatic segment management
  • Test stats post-aggregators
  • Numeric quantiles sketch aggregator
  • Basic auth extension
  • Query request queuing improvements
  • Parse batch support
  • Various performance improvements
  • Various improvements to Druid SQL

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.12.0

Documentation for this release is at: http://druid.io/docs/0.12.0/

Highlights

Kafka indexing incremental handoffs and decoupled partitioning

The Kafka indexing service now supports incremental handoffs, as well as decoupling the number of segments created by a Kafka indexing task from the number of Kafka partitions. Please see the discussion in #4815 for more information.

Added by @pjain1 in #4815.

Prioritized task locking

Druid now supports priorities for indexing task locks. When an indexing task needs to acquire a lock on a datasource + interval, higher-priority tasks can now preempt lower-priority tasks. Please see http://druid.io/docs/0.12.0-rc1/ingestion/tasks.html#task-priority for more information.

Added by @jihoonson in #4550.

Improved automatic segment management

Automatic pending segments cleanup

Indexing tasks create entries in the "pendingSegments" table in the metadata store; prior to 0.12.0, these temporary entries were not automatically cleaned up, leading to possible cluster performance degradation over time. Druid 0.12.0 allows the coordinator to automatically clean up unused entries in the pending segments table. This feature is enabled by setting druid.coordinator.kill.pendingSegments.on=true in coordinator properties.

Added by @jihoonson in #5149.

Compaction task

Compacting segments (merging a set of segments within a given interval to create a set with larger but fewer segments) is a common Druid batch ingestion use case. Druid 0.12.0 now supports a Compaction Task that merges all segments within a given interval into a single segment. Please see http://druid.io/docs/0.12.0-rc1/ingestion/tasks.html#compaction-task for more details.

Added by @jihoonson in #4985.

Test stats post-aggregators

New z-score and p-value test statistics post-aggregators have been added to the druid-stats extension. Please see http://druid.io/docs/0.12.0-rc1/development/extensions-core/test-stats.html for more details.

Added by @chunghochen in #4532.

Numeric quantiles sketch aggregator

A numeric quantiles sketch aggregator has been added to the druid-datasketches extension.

Added by @AlexanderSaydakov in #5002.

Basic auth extension

Druid 0.12.0 includes a new authentication/authorization extension that provides Basic HTTP authentication and simple role-based access control. Please see http://druid.io/docs/0.12.0-rc1/development/extensions-core/druid-basic-security.html for more information.

Added by @jon-wei in #5099.

Query request queuing improvements

Previously, clients could inadvertently overwhelm a Broker by sending too many requests, which were queued in an unbounded Jetty worker pool queue. Clients typically close the connection after a client-side timeout, but the Broker would continue processing these requests, giving the appearance of being unresponsive. Meanwhile, clients would continue retrying, adding even more requests to an already overloaded Broker.

The newly introduced properties druid.server.http.queueSize and druid.server.http.enableRequestLimit in the broker configuration and historical configuration allow users to configure request rejection to prevent clients from overwhelming brokers and historicals with queries.
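
For example, in the Broker or Historical runtime properties (the values shown are illustrative only):

druid.server.http.enableRequestLimit=true
druid.server.http.queueSize=20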

Added by @himanshug in #4540.

Parse batch support

For developers of custom ingestion parsing extensions, it is now possible for InputRowParsers to return multiple InputRows from a single input row. This can simplify ingestion pipelines by reducing the need for input transformations outside of Druid.

Added by @pjain1 in #5081.

Performance improvements

SegmentWriteOutMedium

When creating new segments, Druid stores some pre-processed data in temporary buffers. Prior to 0.12.0, these buffers were always kept in temporary files on disk. In 0.12.0, PR #4762 by @leventov allows these temporary buffers to be stored in off-heap memory, thus reducing the number of disk I/O operations during ingestion. To enable using off-heap memory for these buffers, the druid.peon.defaultSegmentWriteOutMediumFactory property needs to be configured accordingly. If using off-heap memory for the temporary buffers, please ensure that -XX:MaxDirectMemorySize is increased to accommodate the higher direct memory usage.

Please see http://druid.io/docs/0.12.0-rc1/configuration/indexing-service.html#SegmentWriteOutMediumFactory for configuration details.

Parallel merging of intermediate GroupBy results

PR #4704 by @jihoonson allows the user to configure a number of processing threads to be used for parallel merging of intermediate GroupBy results that have been spilled to disk. Prior to 0.12.0, this merging step would always take place within a single thread.

Please see http://druid.io/docs/0.12.0-rc1/configuration/querying/groupbyquery.html#parallel-combine for configuration details.

Other performance improvements

SQL improvements

Various improvements and features have been added to Druid SQL, by @gianm in the following PRs:

  • Improve translation of time floor expressions: #5107
  • Add TIMESTAMPADD: #5079
  • Add rule to prune unused aggregations: #5049
  • Support CASE-style filtered count distinct: #5047
  • Add "array" result format, and document result formats: #5032
  • Fix havingSpec on complex aggregators: #5024
  • Improved behavior when implicitly casting strings to date/time literals: #5023
  • Add Router connection balancers for Avatica queries: #4983
  • Fix incorrect filter simplification: #4945
  • Fix Router handling of SQL queries: #4851

And much more!

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.12.0

Updating from 0.11.0 and earlier

Please see below for changes between 0.11.0 and 0.12.0 that you should be aware of before upgrading. If you're updating from an earlier version than 0.11.0, please see release notes of the relevant intermediate versions for additional notes.

Rollback restrictions

Please note that after upgrading to 0.12.0, it is no longer possible to downgrade to a version older than 0.11.0, due to changes made in #4762. It is still possible to roll back to version 0.11.0.

com.metamx.java-util library migration

The Metamarkets java-util library has been brought into Druid. As a result, the following package references have changed:

com.metamx.common -> io.druid.java.util.common
com.metamx.emitter -> io.druid.java.util.emitter
com.metamx.http -> io.druid.java.util.http
com.metamx.metrics -> io.druid.java.util.metrics

This will affect the druid.monitoring.monitors configuration. References to monitor classes under the old com.metamx.metrics.* package will need to be updated to reference io.druid.java.util.metrics.* instead, e.g. io.druid.java.util.metrics.JvmMonitor.
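
For example, a druid.monitoring.monitors entry referencing the JVM monitor would change roughly as follows:

Before: druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
After:  druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]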

If classes under the com.metamx packages shown above are referenced in other configurations such as log4j2.xml, those references will need to be updated as well.

Extension developers will need to update their code to use the new Druid packages as well.

Caffeine cache extension

The Caffeine cache has been moved from an extension into core Druid. In addition, the Caffeine cache is now the default cache implementation. Please remove druid-caffeine-cache, if present, from the extension list when upgrading to 0.12.0. More information can be found at #4810.

Kafka indexing service changes

earlyMessageRejectPeriod

The semantics of the earlyMessageRejectPeriod configuration have changed. The earlyMessageRejectPeriod will now be added to (task start time + task duration) instead of just (task start time) when determining the bounds of the message window. Please see #4990 for more information.
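
As an illustrative example (the timestamps below are made up): with a task start time of 2018-01-01T00:00Z, a taskDuration of PT1H, and an earlyMessageRejectPeriod of PT10M, messages with timestamps later than 2018-01-01T01:10Z (start time + duration + earlyMessageRejectPeriod) are now rejected, whereas previously the cutoff would have been 2018-01-01T00:10Z (start time + earlyMessageRejectPeriod).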

Rolling upgrade

In 0.12.0, there are protocol changes between the Kafka supervisor and the Kafka indexing tasks, as well as changes to the metadata formats persisted on disk. Therefore, to support a rolling upgrade, all MiddleManagers must be upgraded before the Overlord. Note that this ordering differs from the standard upgrade order, and that it is only necessary when using the Kafka Indexing Service. If you are not using the Kafka Indexing Service, or can tolerate downtime for the Kafka supervisor, you can upgrade in any order.

Until the Overlord is upgraded, all Kafka indexing tasks will behave as before (even if they have been upgraded), which means no decoupling and no incremental hand-offs. Once the Overlord is upgraded, new tasks started by the upgraded Overlord will support the new features.

Please see https://gi...

druid-0.11.0

05 Dec 03:56

Druid 0.11.0 contains over a hundred performance improvements, stability improvements, and bug fixes from almost 40 contributors. This release adds two major security features, TLS support and extension points for authentication and authorization.

Major new features include:

  • TLS (a.k.a. SSL) support
  • Extension points for authentication and authorization
  • Double columns support
  • cachingCost Balancer Strategy
  • jq expression support in JSON parser
  • Redis cache extension
  • GroupBy performance improvements
  • Various improvements to Druid SQL

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.11.0

Documentation for this release is at: http://druid.io/docs/0.11.0/

Highlights

TLS support

Druid now supports TLS, enabling encrypted client and inter-node communications. Please see http://druid.io/docs/0.11.0/operations/tls-support.html for details on configuration and related extensions.

Added by @pjain1 in #4270.

Authentication/authorization extension points

Extension points for authenticating and authorizing requests have been added to Druid. Please see http://druid.io/docs/0.11.0/configuration/auth.html for information on configuration and extension implementation.

The existing Kerberos authentication extension has been updated to implement the new Authenticator interface. If you are using the Kerberos extension, please see the "Kerberos configuration changes" section under "Updating from 0.10.1 and earlier" for more information.

Added by @jon-wei in #4271

Double columns support

Druid now supports Double type aggregator columns. Please see http://druid.io/docs/0.11.0/querying/aggregations.html for documentation on the new Double aggregators.

Added by @b-slim in #4491.

cachingCost Balancer Strategy

Users upgrading to 0.11.0 are encouraged to try the new cachingCost segment balancing strategy on their coordinators. This strategy offers large performance improvements over the existing cost balancer strategy, and it is planned to become the default strategy in the release following 0.11.0.

This strategy can be selected by setting the following property on coordinators:

druid.coordinator.balancer.strategy=cachingCost

Added by @dgolitsyn in #4731

jq expression support in JSON parser

Druid's JSON input parser now supports jq expressions using jackson-jq, enabling more input transforms before ingestion. Please see http://druid.io/docs/0.11.0/ingestion/flatten-json.html for more details.

Added by @knoguchi in #4171.

Redis cache extension

A new Redis-based cache implementation has been added as an extension by @QiuMM in #4615. Please refer to that pull request for more details.

GroupBy performance improvements

Several new performance optimizations have been added to the GroupBy query by @jihoonson in the following PRs:

#4660 Parallel sort for ConcurrentGrouper
#4576 Array-based aggregation for groupBy query
#4668 Add IntGrouper to avoid unnecessary boxing/unboxing in array-based aggregation

PR #4660 offers a general improvement by parallelizing partial result sorting, while PR #4576 and #4668 offer significant improvements when grouping on a single String column.

SQL improvements

Various improvements and features have been added to Druid SQL, by @gianm in the following PRs:

#4750 - TRIM support
#4720 - Rounding for count distinct
#4561 - Metrics for SQL queries
#4360 - SQL expressions support

And much more!

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.11.0

Updating from 0.10.1 and earlier

Please see below for changes between 0.10.1 and 0.11.0 that you should be aware of before upgrading. If you're updating from an earlier version than 0.10.1, please see release notes of the relevant intermediate versions for additional notes.

Upgrading coordinators and overlords

The following patch changes the way coordinator->overlord redirects are handled:
#5037

The overlord leader election algorithm has changed in 0.11.0: #4699.

As a result of the two patches above, special care is needed when upgrading Coordinator or Overlord to 0.11.0. All coordinators and overlords must be shut down and upgraded together.

For example, to upgrade Coordinators, you would shutdown all coordinators, upgrade them to 0.11.0 and then start them. Overlords should be upgraded in a similar way.

During the upgrade process, there must not be any time period where a non-0.11.0 coordinator or overlord is running simultaneously with an 0.11.0 coordinator or overlord.

Note that at least one Overlord should be brought up as quickly as possible after shutting them all down, so that peons, Tranquility, etc. continue to work after some retries.

Also note that the druid.zk.paths.indexer.leaderLatchPath property is no longer used.

Service name changes

In earlier versions of Druid, / characters in service names defined by druid.service would be replaced by : characters because these service names were used in Zookeeper paths. Druid 0.11.0 no longer performs these character replacements.

Example 1: if the old configuration had a Broker with service name test/broker:
druid.service=test/broker

and a Router was configured assuming that / would be replaced with : in the Broker service name:
druid.router.tierToBrokerMap={"hot":"test:broker","_default_tier":"test:broker"}

the Router configuration should be updated to remove that assumption:
druid.router.tierToBrokerMap={"hot":"test/broker","_default_tier":"test/broker"}

Example 2: if the old configuration had an Overlord with service name test/overlord, then the value of druid.coordinator.asOverlord.overlordService or druid.selectors.indexing.serviceName should be test/overlord, not test:overlord.

Example 3: if the old configuration had an Overlord with service name test:overlord, then the value of druid.coordinator.asOverlord.overlordService or druid.selectors.indexing.serviceName should be test:overlord, not test/overlord.

The following service-name-related configurations are also affected and should be updated to exactly match the value of the druid.service property on the node being discovered.

druid.coordinator.asOverlord.overlordService
druid.selectors.coordinator.serviceName
druid.selectors.indexing.serviceName
druid.router.defaultBrokerServiceName
druid.router.coordinatorServiceName
druid.router.tierToBrokerMap

Please see #4992 for more details.

Kerberos configuration changes

The Kerberos authentication configuration format has changed as a result of the new interfaces introduced by #4271. Please refer to http://druid.io/docs/0.11.0/development/extensions-core/druid-kerberos.html for the new configuration properties.

Users can point the Kerberos authenticator's authorizerName to an instance of an "allowAll" authorizer to replicate the pre-0.11.0 behavior of a cluster using Kerberos authentication with no authorization.

Lookups API path changes

The paths for the lookups configuration API have changed due to #5058.

Configuration paths that had the form /druid/coordinator/v1/lookups now have the form /druid/coordinator/v1/lookups/config.
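
For example, a request that previously used the old path would change roughly as follows (the host and port are placeholders):

Old: GET http://coordinator:8081/druid/coordinator/v1/lookups
New: GET http://coordinator:8081/druid/coordinator/v1/lookups/config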

Please see http://druid.io/docs/0.11.0/querying/lookups.html for the current API.

Migrating to Double columns

Prior to 0.11.0, the Double* aggregators would store column values on disk as Float while performing aggregations using Double representations.

PR #4491 allows the Double aggregators to store column values on disk as Doubles. Due to concerns related to rolling updates and version downgrades, this behavior is disabled by default and Druid will continue to store Double aggregators on disk as floats.

To enable Double column storage, set the following property in the common runtime properties:

druid.indexing.doubleStorage=double

Users should not set this property during an initial rolling upgrade to 0.11.0, as any nodes running pre-0.11.0 Druid will not be able to handle Double columns created during the upgrade period. Users will also need to reindex any segments with Double columns if downgrading from 0.11.0 to an older version. Please see #4944 and #4605 for more information.

Scan query changes

The Scan query has been moved from extensions-contrib to core Druid. As part of this migration: #4751, the scan query's handling of the time column has changed.

The time column is now returned as "__time" rather than "timestamp"; it is no longer included unless you specifically ask for it in your "columns", and it is returned as a long rather than a string.

Users can revert the Scan query's time handling to the legacy extension behavior by setting "legacy" : true in their queries, or setting the property druid.query.scan.legacy = true. This is meant to provide a migration path for users that were formerly using the contrib extension.

Extension Interface Changes

Aggregator double column support

The Aggregator interface has gained a getDouble() method, whi...
