
Cluster Feature Gate in etcd #4662

Draft · wants to merge 1 commit into master from cluster-fg

Conversation

siyuanfoundation (Contributor):

  • One-line PR description: Cluster Feature Gate in etcd
  • Other comments:

@siyuanfoundation siyuanfoundation marked this pull request as draft May 24, 2024 17:40
@k8s-ci-robot:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: siyuanfoundation
Once this PR has been reviewed and has the lgtm label, please assign wenjiaswe for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label May 24, 2024
@k8s-ci-robot k8s-ci-robot requested a review from ahrtr May 24, 2024 17:40
@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. labels May 24, 2024
@k8s-ci-robot k8s-ci-robot requested a review from jmhbnz May 24, 2024 17:40
@k8s-ci-robot k8s-ci-robot added sig/etcd Categorizes an issue or PR as relevant to SIG Etcd. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels May 24, 2024

Users of an etcd cluster would be able to find out which features are enabled in the cluster and decide how to use them downstream.

Currently Kubernetes uses [`FeatureSupportChecker`](https://github.com/kubernetes/kubernetes/blob/db82fd1604ebf327ab74cde0a7158a8d95d46202/staging/src/k8s.io/apiserver/pkg/storage/etcd3/feature/feature_support_checker.go#L42) to check whether a feature is supported by etcd; it basically reads the etcd version and compares it against a hard-coded map of feature availability for different etcd versions.
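Roughly, that version-map approach looks like the following sketch (hypothetical feature name and version threshold, not the actual `FeatureSupportChecker` code):

```go
package main

import (
	"fmt"

	"github.com/coreos/go-semver/semver"
)

// Hypothetical map of the minimum etcd version at which each feature is assumed available.
var minVersionForFeature = map[string]*semver.Version{
	"FeatureA": semver.New("3.4.25"),
}

// supported guesses feature availability purely from the reported etcd version;
// it cannot tell whether the feature is actually enabled on the server.
func supported(feature string, etcdVersion *semver.Version) bool {
	min, ok := minVersionForFeature[feature]
	if !ok {
		return false
	}
	return !etcdVersion.LessThan(*min)
}

func main() {
	fmt.Println(supported("FeatureA", semver.New("3.5.13"))) // true
	fmt.Println(supported("FeatureA", semver.New("3.4.20"))) // false
}
```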

Reviewer comment:

Kubernetes means kube-apiserver, right? (Just to see if there are any other clients on k8s that read from etcd.)

siyuanfoundation (PR author):

correct.


#### Backport Risks

To support the feature gate as early as 3.6, we are backporting some proto changes into 3.5. There is a potential risk of changing a stable release, but we do not think this is a real risk, because it does not involve any changes other than adding a new proto field to an existing proto, and the 3.5 server does not write or use this new field. Protos are inherently backward compatible. This change should also not affect upgrade/downgrade of 3.5.

Reviewer comment:

I think it's also worthwhile to highlight the benefit of backporting. Since proto3 fields are optional (and unknown fields are ignored), we want to provide rationale for backporting this change.

siyuanfoundation (PR author):

Actually, because we are introducing a new raft message type, we have to backport. Changed the paragraph.

There are cases when a developer wants to change how an existing feature works. This would make things complicated in a mixed version cluster. One member could have the old implementation while another member uses the latest implementation. How can we make sure the cluster works consistently?

Similar to [how Kubernetes handles feature gating changes for compatibility versions](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/4330-compatibility-versions#feature-gating-changes), if the feature change affects data in any way, we need to keep the implementation history for at least 2 minor versions (the allowed version skew).
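For example, a minimal sketch of keeping version-keyed implementations, assuming hypothetical names and versions (not the KEP's actual code):

```go
package main

import (
	"fmt"

	"github.com/coreos/go-semver/semver"
)

// Hypothetical request type, for illustration only.
type request struct{ key string }

// Behavior from the version that introduced the feature (hypothetical).
func applyOld(r request) string { return "old:" + r.key }

// Changed behavior introduced in a later version (hypothetical).
func applyNew(r request) string { return "new:" + r.key }

// apply keeps both implementations and dispatches on the negotiated cluster
// version, so members within the allowed version skew behave consistently.
func apply(clusterVersion *semver.Version, r request) string {
	if clusterVersion.LessThan(*semver.New("3.7.0")) {
		return applyOld(r)
	}
	return applyNew(r)
}

func main() {
	fmt.Println(apply(semver.New("3.6.0"), request{key: "a"})) // old:a
	fmt.Println(apply(semver.New("3.7.0"), request{key: "a"})) // new:a
}
```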

Reviewer comment:

Just to clarify, we want to keep 2 implementations (if there's delta), not 2 previous minor versions (3 implementations), correct?

Also, I wonder if we can make it easier for developers to make sure they keep the implementation. I am thinking of some interface such as (very roughly):

```go
type FeatureGateRunner struct {
	featuregate.Feature
	Handlers map[Version]func() error
}

func (r FeatureGateRunner) HandleAtVersion(v Version) error {
	return r.Handlers[v]()
}
```

siyuanfoundation (PR author):

I don't think the additional interface is going to be beneficial. The main benefit of a feature gate is that it is easy to use; I just insert the following wherever I want to add new code:

```go
if featureGate.Enabled(feature) {
	// my fancy new code
}
```

With the wrapper, it is much more cumbersome to use and to remove in the future.

I would expect changes of feature direction to happen rarely. It is fine for it to be a little bit harder to keep multiple implementations.

Contributor:

Agree with @siyuanfoundation, let's not overdesign the interface until we need it.

@siyuanfoundation siyuanfoundation force-pushed the cluster-fg branch 2 times, most recently from 6933592 to ebe1a2e Compare May 28, 2024 21:55

### Feature Gate

A feature can be registered as a server level feature or a cluster level feature, but not both.
Contributor:

Is there a possibility of converting or migrating a server level feature to a cluster level feature in later releases?

Contributor:

Migration would mean removing it from the server level feature gate, which would be a breaking change in the CLI.
So the default answer should be no. We might still consider having a process for migrating by deprecating a server feature and creating a similarly named cluster level feature, but we don't need to discuss that in this KEP.


The `func (s *EtcdServer) FeatureEnabled(key Feature) bool` interface would return whether the feature is enabled for the whole cluster if it is registered as a cluster level feature.

The feature gates for a server can only be set with the `--feature-gates` flag or in the `config-file` during startup. We do not support dynamically changing the feature gates when the server is running.
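For illustration only (the feature names are hypothetical), the flag presumably follows the Kubernetes-style comma-separated `Name=bool` syntax:

```
etcd --feature-gates=FeatureA=true,FeatureB=false
```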
Contributor:

nit: there should be a space between `--feature-gates` and "flag"...

@siyuanfoundation siyuanfoundation force-pushed the cluster-fg branch 3 times, most recently from 123689b to 16aea9b Compare June 5, 2024 22:35
#### Should we prevent a new member with an inconsistent feature gate from joining the cluster?

One way to prevent feature gate inconsistencies between different members is to prevent a new member with an inconsistent feature gate from joining the cluster in the first place.
We do not think this is the right way because we want to keep the ability to turn on/off a feature in a rolling sequence without bringing down the whole cluster.

Reviewer comment:

Would there be a use case where a specific feature requires all the nodes to enable/disable it at the same time in order to function correctly? (thus we have to block inconsistent nodes from joining)

siyuanfoundation (PR author):

There are cases like that. That's why we need the consensus algorithm to set the final values of feature gates. It should not block inconsistent nodes from joining; the leader will decide what value to use.

@siyuanfoundation siyuanfoundation force-pushed the cluster-fg branch 4 times, most recently from 2d0a993 to 17e18ee Compare June 20, 2024 20:49
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 20, 2024
Comment on lines +144 to +152
Because we are adding a new type of raft request, if we want to support the feature gate as early as 3.6, we have to backport the proto changes and add no-op apply logic in 3.5.
There is a potential risk of changing a stable release, but we do not think this is a real risk because it should be a no-op change in 3.5.
When upgrading an etcd cluster from 3.5 to 3.6, users would need to upgrade to the latest 3.5 release first.

If we do not backport to 3.5, we would need to add the no-op changes in 3.6 and wait until 3.7 to add the real feature gate capabilities, which could be a long cycle of several years.
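A rough, self-contained illustration of the backported no-op apply idea, using hypothetical simplified types rather than etcd's actual apply code:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the raft request types; illustration only.
type clusterFeatureGateSetRequest struct{ features map[string]bool }

type internalRaftRequest struct {
	put                   *string
	clusterFeatureGateSet *clusterFeatureGateSetRequest
}

// apply sketches the backported 3.5 behavior: the new request type is
// recognized, so an old server does not choke on the entry, but applying
// it is a deliberate no-op.
func apply(r internalRaftRequest) {
	switch {
	case r.clusterFeatureGateSet != nil:
		// no-op in 3.5: the request is neither stored nor acted upon
	case r.put != nil:
		fmt.Println("put:", *r.put)
	}
}

func main() {
	apply(internalRaftRequest{clusterFeatureGateSet: &clusterFeatureGateSetRequest{}})
	v := "value"
	apply(internalRaftRequest{put: &v})
}
```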
Contributor:

I don't understand this section. Can you clarify what backport you mean: of the proposed mechanism, or of a feature gate?



#### Data Compatibility Risks during Feature Value Change
@serathius (Contributor), Jun 28, 2024:

Data compatibility should be handled by cluster version, not feature gates. Don't understand why we touch it in this KEP.

This way the cluster would work consistently because there is a single ClusterVersion across the whole cluster.

## Design Details
Contributor:

Please start by describing how features are enabled on the command line. Do cluster feature gates share the same command line flag as server features? I don't think they should. We should clarify this at the top.

@serathius (Contributor), Jun 28, 2024:

Also, we should clarify one important thing: server level features represent the stability of the feature and can still have separate enablement. For cluster level features we focus on consistent enablement. Both share the same name, "feature gate", but the gating motivation is different.

The `etcdctl` commands could look like:
* `etcdctl endpoint featuregate $featureName` returns true if the feature is enabled

Because the feature gate of a cluster could change at any time, even if the client has queried the etcd cluster for feature availability before sending a request, feature availability may have changed by the time the request is sent. So we are proposing to add a new `required_features` field to all gRPC requests. For example:
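(The concrete example from the KEP is not reproduced here; a hypothetical sketch of what such a field might look like on an existing request proto, with an arbitrary field number:)

```
message RangeRequest {
  bytes key = 1;
  // ... existing fields ...

  // Hypothetical: features the client requires; the server would reject the
  // request if any of them is not enabled cluster-wide.
  repeated string required_features = 100;
}
```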
Contributor:

What about using the grpc metadata to control non-API semantics?


The feature gates for a server can only be set with the `--feature-gates` flag or in the `config-file` during startup. We do not support dynamically changing the feature gates when the server is running.

### Cluster Level Feature Enablement
Contributor:

Please discuss the process in a top down approach. I don't care about the proto details if I don't know how you plan to use it. Suggested order of topics:

  • Properties that feature enablement needs
  • How we plan to guarantee those properties
  • Detail of the process.
  • How proto will look.

Signed-off-by: Siyuan Zhang <[email protected]>

Co-authored-by: Marek Siarkowicz <[email protected]>
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Jul 1, 2024
Comment on lines +108 to +113
#### Story 1

A developer is adding a new feature that introduces a new write type. Because it requires a consensus write, the request should be rejected unless the feature is enabled and the request can be processed on all cluster members.
Today, each change in the etcd API requires development spanning at least 2 minor versions. First, a raft proto change needs to be added and must wait a release before it can be used. Only a release later can it actually be used, to prevent an incompatible proto from being used during the upgrade/downgrade process.

With a cluster level feature gate, a server would know whether a feature is enabled on all members and decide whether or not to accept the new write request.
Contributor:

Oops, I think the description of this user story is not fully relevant, sorry. What I described should be solved using cluster version.

Suggested change
#### Story 1
A developer is adding a new feature of a new write type. Because it requires a consensus write, the request should be rejected if the feature is not enabled or the request can be processed on all cluster members.
Today, each change in etcd API, requires development spenning at least 2 minor versions. First a raft proto change needs to be added and wait a release before it can be used. Only a release later it can be actually used to prevent a incompatible proto being used during upgrade/downgrade process.
With cluster level feature gate, a server would know if a feature is enabled on all members, and decide whether or not to accept the new write request.
#### Story 1: Developing Features Impacting the Apply Loop
A developer is adding a new feature that modifies the etcd apply loop. This change could impact how data is processed and replicated across the cluster. An example of such a feature is [persisted lease checkpoints](https://github.com/etcd-io/etcd/pull/13508). Currently, enabling such a feature requires careful coordination and potentially taking the entire cluster down to ensure all members apply the changes consistently. This lack of flexibility restricts development and can lead to operational disruptions.
With cluster-wide feature enablement, we can enable or disable such features in a controlled manner, ensuring consistent behavior across all members during a rolling upgrade, while allowing users to enable/disable the feature as they wish. This empowers developers to iterate more quickly and safely, while also providing operators with greater flexibility to manage feature rollouts and upgrades without disrupting service.


A Kubernetes developer would like to use a great new etcd feature, but it is not available on all supported etcd versions. What can they do?
Historically, Kubernetes avoided using any features that were not available on all supported versions of etcd.
From K8s 1.31, Kubernetes added a very complicated and fragile system, [`FeatureSupportChecker`](https://github.com/kubernetes/kubernetes/blob/db82fd1604ebf327ab74cde0a7158a8d95d46202/staging/src/k8s.io/apiserver/pkg/storage/etcd3/feature/feature_support_checker.go#L42), to detect the etcd version, parse it, and guess whether the feature is supported based on a set of hardcoded etcd versions. It does not really know whether the feature is enabled by an `--experimental` flag.
@serathius (Contributor), Jul 2, 2024:

Might be worth noting that the FeatureSupportChecker checks only the member version at the start, and assumes that the cluster will never stop supporting the feature, which is a gross oversimplification.

A client-side mechanism to reliably determine whether a cluster-wide feature is enabled would allow Kubernetes to immediately utilize new features.

#### Story 3

@serathius (Contributor), Jul 2, 2024:

It would be nice to give some background on why users would like to disable a feature. Let's say that after some qualification a feature graduated from experimental and was enabled by default in a new release. However, it quickly turned out that the feature is bugged. Users would like to disable it ASAP to protect their production.


#### Story 3

In an HA cluster, users would like to turn off an enabled feature. They need to restart each server node one by one with the feature changed from enabled to disabled. Today, after the restart process begins and before all nodes are restarted, the behavior of the feature is undefined: it is enabled on some nodes and disabled on others.
Contributor:

It would be good to note that it's impossible for etcd contributors to ensure that a feature maintains correctness during the undefined state. Automated testing would not be able to cover all possible edge cases in distributed systems; it's even too much for robustness tests.

#### Could it be done on the client side instead of inside etcd?
One could argue that it is easy to know whether a feature is enabled for the whole cluster by having the client query each member. But there are several caveats with this approach:
* not all members are available all the time. If some member is not available, it is questionable whether the feature can be used, and that could break the HA assumption.
* some features might change how a raft message is sent and applied, and the order of messages relative to the index at which the feature is enabled is critical. Here the consistent index matters more than wall-clock time, and it is hard to get that outside etcd.
Contributor:

I think this point is a little unclear. I think it's about the fact that the order of operations in a distributed system can be hard to predict, as operations can be arbitrarily re-ordered by the leader, which can itself be arbitrarily changed. In short, you cannot check whether a feature is enabled and then assume it will still be enabled when you send a following request.


#### Should we prevent a new member with an inconsistent feature gate from joining the cluster?
Contributor:

I would state the problem differently, there are two alternative approaches:

  • configure cluster feature gates globally (either on cluster initialization or via command line flags) and reject members reconnecting with different feature flags.
  • configure cluster feature gates based on member proposed feature gates.

I stand for the second option as it allows us to maintain the same way of configuring the cluster: the user just changes the etcd manifests/config file. With the first option, a user changing the manifests could cause an unexpected crashloop.

The only downside of the second option is that it requires a member restart, which might be too slow for some critical bugs. However, I would be careful with allowing clients to change etcd configuration through etcd endpoints; it opens a security pandora's box.


#### Backport Risks

Because we are adding a new type of raft request, if we want to support the feature gate as early as 3.6, we have to backport the proto changes and add no-op apply logic in 3.5.
Contributor:

Not sure I understand the risk, it should be mitigated by not using the new proto if the ClusterVersion is low.


Overall, when developers introduce new data with a feature, they should be careful to include data cleaning in `schema.UnsafeMigrate`. But we prefer not to have a mechanism that disallows turning such features off after they are turned on: new features could be buggy, and we need to keep the option of turning them off without bringing down the whole cluster.
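A rough illustration of the cleanup idea, using entirely hypothetical storage helpers rather than etcd's actual `schema` package:

```go
package main

import (
	"fmt"
	"strings"
)

// store is a hypothetical stand-in for the backend bucket a feature writes to.
type store map[string][]byte

// cleanupFeatureData sketches the cleanup a developer would register for their
// feature (e.g. as part of a schema migration): remove any data the feature
// introduced so that servers with the feature disabled never see it.
func cleanupFeatureData(s store, featureKeyPrefix string) {
	for k := range s {
		if strings.HasPrefix(k, featureKeyPrefix) {
			delete(s, k)
		}
	}
}

func main() {
	s := store{"feature/checkpoint/1": []byte("x"), "key/a": []byte("y")}
	cleanupFeatureData(s, "feature/")
	fmt.Println(len(s)) // 1: only "key/a" remains
}
```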

#### Feature Implementation Change Risks
@serathius (Contributor), Jul 2, 2024:

Can you cover the broader issue of how the feature gates will work with downgrade/upgrade? What happens if a cluster is being downgraded to a version that doesn't support a feature gate? The solution should be the same, using Cluster Version.

Also, do we even plan to remove feature gates? In Kubernetes we remove feature gates a couple of releases after they go GA and become the default.


When a user starts up a new etcd cluster, it is unnecessary and confusing to require them to know the difference between server level and cluster level features. It is really up to the developers to register them as the proper type in the code.

So the cluster feature gates for a server can be set in the same way as the server level feature gates: with the `--feature-gates` flag or in the `config-file` during startup. We do not support dynamically changing the feature gates when the server is running.
Contributor:

I would prefer to separate the configuration of server and cluster features into different command line flags. The reason is user expectation: users expect consistent behavior of flags, whether they are enabled immediately or require reconfiguring the whole cluster. They shouldn't need to check the type of a flag.

What do you think?


### Set the Feature Gates
Conceptually, a feature is a new piece of functionality etcd provides, and it has to work on individual servers before working on the cluster. The main difference between server level and cluster level features is whether their usage requires a consistent value across all members.
@serathius (Contributor), Jul 2, 2024:

What about lifecycle? For server level feature gates we expect they will go to GA and be enabled by default, but sometimes we would still like to keep the option for users to disable them. For example, if we were developing a feature like strict-reconfig-check using a cluster feature gate, we would still leave an option for the user to disable it.

I'm thinking that we might want to use a different separation for feature development. Instead of having "cluster feature gates" and "server feature gates", I would propose:

  • "Feature gate" - tracks the lifecycle of a feature across etcd releases (alpha, beta, GA, deprecated) and defines its default enablement (true or false).
  • "Cluster feature" - a mechanism used by some features (can be gated with a feature flag, but doesn't need to be) to ensure consistent enablement across the cluster.


To guarantee a consistent view of whether a feature is enabled in the whole cluster, the leader would decide whether the feature is enabled for all cluster members and propagate the information to all members.

1. When an etcd server starts, the values of all cluster level features in its `ServerFeatureGate` would be saved in the [`member.Attributes`](https://github.com/etcd-io/etcd/blob/e37a67e40b3f5ff8ef81f9de6e7f475f17fda32b/server/etcdserver/api/membership/member.go#L38) as the field `Attributes.feature_gates` (see the [discussion](#push-vs-poll-when-leader-decides-cluster-feature-gate) about why we choose to push the information through raft).
Contributor:

As discussed before, we should avoid "feature_gates" as the name, as it implies that those are the enabled feature gates. They are not; they are the feature gates that the member is proposing to enable. Recommended name: "proposed_features".


1. The attributes would then be [published through raft](https://github.com/etcd-io/etcd/blob/e37a67e40b3f5ff8ef81f9de6e7f475f17fda32b/server/etcdserver/server.go#L1745) and stored in the `members` bucket in the backend of all members. Whenever a new member joins or an existing member restarts, its feature gate attributes would be automatically updated in the start up process.

1. After a leader is elected, the leader will decide the values for the cluster level features based on the cluster version and server feature values of all members it receives.
Contributor:

That doesn't clarify how we can maintain cluster level flags during the lifecycle of the cluster, e.g. a member restarting with different flags. After a leader is elected is definitely not a good place to do that. Please clarify when exactly this code is executed. I see two options:

  • [preferred] On the leader's apply loop when executing the member set-attribute request.
  • Periodic execution by the leader, as in the case of cluster version.


1. After a leader is elected, the leader will decide the values for the cluster level features based on the cluster version and server feature values of all members it receives.

1. The leader sends the final feature values in `ClusterFeatureGateSetRequest` through raft.
Contributor:

If the cluster flags should be changed, the leader sends a `ClusterFeatureGateSetRequest` proposal through raft.

```
string name = 1;
repeated string client_urls = 2;
// the values of all cluster level feature gates set by the configuration of the member server.
Feature proposed_feature_gates = 3;
```
Contributor:

Suggested change
Feature proposed_feature_gates = 3;
repeated Feature proposed_feature_gates = 3;

?

```
option (versionpb.etcd_version_msg) = "3.5";

// the values of all cluster level feature gates for the whole cluster.
Feature feature_gates = 1;
```
Contributor:

Suggested change
Feature feature_gates = 1;
repeated Feature feature_gates = 1;

We will leverage the consensus `ClusterVersion` to negotiate the cluster level feature gate values:
1. if `ClusterVersion` is not set, the cluster feature gate would be `nil`, and all features would be considered disabled.
1. when the `ClusterVersion` is set or updated, initialize the cluster feature gate with the `ClusterVersion`. At this point,
* if the `member.Attributes` of any member has not been set, use the default values of the feature gate at version `ClusterVersion`.
Contributor:

I don't agree with that; we should not depend on the local defaults of the leader. I would reserve the right to backport disabling the feature by default into a patch version. I imagine a situation where a feature is graduated to beta too early and made the default in a new minor version. If the feature is bugged, we should be able to change the default enablement in a patch version. Might be a good user story to add.


![setting cluster feature gate for a new cluster](./cluster_feature_gate_new.png "setting cluster feature gate for a new cluster")

We will leverage the consensus `ClusterVersion` to negotiate the cluster level feature gate values:
@serathius (Contributor), Jul 2, 2024:

I think we should consider reconciling the cluster version and its features in one proto to ensure they are consistent with each other.

1. if `ClusterVersion` is not set, the cluster feature gate would be `nil`, and all features would be considered disabled.
1. when the `ClusterVersion` is set or updated, initialize the cluster feature gate with the `ClusterVersion`. At this point,
* if the `member.Attributes` of any member has not been set, use the default values of the feature gate at version `ClusterVersion`.
* if the `member.Attributes` of all members have been set, take the common set of all the member `--feature-gates` flags and set the values of the cluster feature gate: discard any features not recognized at the `ClusterVersion`, and set a feature to false if any member sets it to false (see the sketch below).
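A rough, self-contained sketch of that merge rule (hypothetical feature names, not the KEP's actual code): features unknown at the `ClusterVersion` are discarded, and a feature ends up enabled only if every member proposes it as enabled:

```go
package main

import "fmt"

// knownAtClusterVersion lists the features recognized at the negotiated
// ClusterVersion (hypothetical names, illustration only).
var knownAtClusterVersion = map[string]bool{
	"FeatureA": true,
	"FeatureB": true,
}

// mergeClusterFeatures derives the cluster feature gate from each member's
// proposed feature gates: unknown features are discarded, and a feature is
// enabled only if no member disables it.
func mergeClusterFeatures(proposed []map[string]bool) map[string]bool {
	cluster := map[string]bool{}
	for name := range knownAtClusterVersion {
		enabled := true
		for _, member := range proposed {
			if !member[name] {
				enabled = false
				break
			}
		}
		cluster[name] = enabled
	}
	return cluster
}

func main() {
	members := []map[string]bool{
		{"FeatureA": true, "FeatureB": true, "FeatureC": true}, // FeatureC unknown at ClusterVersion
		{"FeatureA": true, "FeatureB": false},
	}
	fmt.Println(mergeClusterFeatures(members)) // map[FeatureA:true FeatureB:false]
}
```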
Contributor:

I think we should decouple the decision about feature enablement from the cluster version. If we implement the "Feature Implementation Change Risks" section with an option for older cluster versions, it should be compatible.
