Configure Renovate #30
Open
renovate wants to merge 180 commits into main from renovate/configure
Conversation
renovate bot force-pushed the renovate/configure branch 2 times, most recently from e798d75 to ff02f9d on May 11, 2022 08:37
renovate bot force-pushed the renovate/configure branch 3 times, most recently from f9b1ea3 to 42b8a92 on May 18, 2022 07:05
renovate bot force-pushed the renovate/configure branch from 42b8a92 to e0ae912 on May 19, 2022 08:38
In keeping with the tradition of our other drivers we will have one branch for each Neutron release we're currently maintaining or have maintained, in the format stable/$release-m3. Whatever we are currently running in prod is the default branch on GitHub. We can currently do it this way because we're the only user of this driver and do not need to maintain multiple releases, except when we do a migration ourselves. So, in short, this is what we're doing now, until a better release/branch management model is needed.
renovate bot force-pushed the renovate/configure branch from e0ae912 to dc96399 on May 20, 2022 09:10
Some objects we might need to ignore for various reasons, such as interfaces used for backbone connectivity that we cannot identify by any means other than tagging them. Furthermore, we only include connected devices with a converged-cloud tenant, as other devices might be network backbone servers.
renovate bot force-pushed the renovate/configure branch from dc96399 to a61e703 on May 21, 2022 08:11
We don't want to use synchronous calls from within the ml2 driver, because an error on the agent side should never interrupt an OpenStack operation. All errors inside the agents should be recoverable, either by the sync loop, by retrying or by manual intervention (in the worst case). Examples are unreachable devices, broken config or driver bugs. The calls we don't want to interrupt are port bindings and the network create/delete postcommit hooks.
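To illustrate the asynchronous pattern, here is a minimal sketch using oslo.messaging's cast(), which returns immediately instead of waiting for the agent. The class, topic and method names are assumptions for illustration, not the driver's actual API.

```python
from oslo_config import cfg
import oslo_messaging


class CCFabricAgentNotifyAPI:
    """Fire-and-forget notifications towards the switch agents (sketch)."""

    def __init__(self, topic='cc-fabric-agent'):  # hypothetical topic name
        transport = oslo_messaging.get_rpc_transport(cfg.CONF)
        target = oslo_messaging.Target(topic=topic, version='1.0')
        self._client = oslo_messaging.RPCClient(transport, target)

    def apply_switch_config(self, context, switch_config):
        # cast() does not wait for a result, so an unreachable device or a
        # bug in the agent never bubbles back into port binding or the
        # network create/delete postcommit hooks; the agent's sync loop is
        # responsible for retrying.
        self._client.prepare().cast(context, 'apply_switch_config',
                                    switch_config=switch_config)
```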
On switch sync I want to know what the failing operation was and on which device it happened. With this, the error is far more informative than a plain "internal server error".
When implementing the check that a network needs to have bound ports I forgot about interconnects. Indeed, we need to be able to sync networks that have no bound ports, so we can properly test interconnect allocations. Hence... the check is gone now.
renovate bot force-pushed the renovate/configure branch from a61e703 to 70f67d2 on May 24, 2022 09:04
In the future each AZ will need a number and a suffix. As we don't want to require a specific AZ format, we allow this to be configured. This also allows somewhat easier AZ handling, as we don't need to iterate over all switchgroups and can also define AZs for which we don't have a switchgroup. Adaptation of the config generator for this is not part of this commit. (A sketch of such a config entry follows below.)
We don't yet use the VRFs, but they will be required for the L3 implementation and infra networks. Adaptation of the config generator for this is not part of this commit.
Pulling InfraNetworks from netbox using VLANs assigned to the server-facing ports, as well as the SVIs modelled on the ToRs for the anycast gateway. Also made VNI a mandatory field, as we potentially want to stretch every VLAN across ToR borders. VRF is also mandatory if the network is L3 enabled.
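A minimal sketch of what such a configurable AZ entry could look like, assuming a pydantic-based config schema; field and class names are illustrative, not the actual config format.

```python
from typing import List

import pydantic


class AvailabilityZone(pydantic.BaseModel):
    name: str     # full AZ name as used by Nova/Neutron
    number: int   # numeric component, usable for deriving per-AZ values
    suffix: str   # suffix component, decoupled from any fixed naming format


class GlobalConfig(pydantic.BaseModel):
    availability_zones: List[AvailabilityZone] = []

    def get_availability_zones(self) -> List[str]:
        # AZs are listed explicitly, so no iteration over switchgroups is
        # needed and AZs without a switchgroup can still be defined
        return [az.name for az in self.availability_zones]
```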
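A hedged sketch of the "VNI mandatory, VRF mandatory only for L3" rule, written in pydantic v1 style; the field names are assumptions.

```python
from typing import Optional

import pydantic


class InfraNetwork(pydantic.BaseModel):
    name: str
    vlan: int
    vni: int                    # always required: any VLAN may be stretched across ToRs
    l3_enabled: bool = False
    vrf: Optional[str] = None   # required only when the network is L3 enabled

    @pydantic.root_validator
    def _require_vrf_for_l3(cls, values):
        if values.get('l3_enabled') and not values.get('vrf'):
            raise ValueError('vrf is mandatory for L3-enabled infra networks')
        return values
```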
renovate bot force-pushed the renovate/configure branch from 70f67d2 to bea2697 on May 26, 2022 09:30
We now put the Route Targets that should be exported/imported directly into the agent message / SwitchConfig format. This means that we'll have less business logic inside the agent and more in the neutron-server part of the driver (which is a good thing, as we will have multiple agents and therefore will not need to duplicate business logic). It will also help when we implement the config diff functionality, as we can then just fill the SwitchConfig object with what's on the switch and diff it with the "should" version from OpenStack.
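A rough sketch of the idea, assuming a pydantic-based SwitchConfig message (all names here are illustrative): route targets are computed on the neutron-server side and shipped to the agent, and because both the desired and the on-device state can be expressed in the same model, a diff becomes a plain field comparison.

```python
from typing import Dict, List, Tuple

import pydantic


class VXLANMapping(pydantic.BaseModel):
    vlan: int
    vni: int
    rt_imports: List[str] = []   # route targets to import, precomputed by neutron-server
    rt_exports: List[str] = []   # route targets to export, precomputed by neutron-server


class SwitchConfig(pydantic.BaseModel):
    switch_name: str
    vxlan_mappings: List[VXLANMapping] = []

    def diff(self, device_state: "SwitchConfig") -> Dict[str, Tuple]:
        # compare the "should" state (from OpenStack) with the "is" state
        # (read back from the switch), field by field
        ours, theirs = self.dict(), device_state.dict()
        return {key: (ours[key], theirs[key])
                for key in ours if ours[key] != theirs[key]}
```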
To better debug and test RPC calls we can now pass kwargs encoded as json via the command line, e.g. --kwargs '{"switches": ["foo"]}'.
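A minimal sketch of how such a CLI option could be wired up with argparse; the script and method names are hypothetical.

```python
import argparse
import json

parser = argparse.ArgumentParser(description='Debug/test driver RPC calls')
parser.add_argument('method', help='RPC method to invoke, e.g. sync_switches')
parser.add_argument('--kwargs', type=json.loads, default={},
                    help='JSON-encoded keyword arguments for the RPC call')
args = parser.parse_args()

# the decoded dict can be splatted straight into the RPC call, e.g.
#   client.call(context, args.method, **args.kwargs)
# invoked as: <debug-cli> sync_switches --kwargs '{"switches": ["foo"]}'
```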
renovate bot force-pushed the renovate/configure branch from 3497c85 to f413d0c on March 24, 2023 18:50
We need to configure MTUs, as otherwise the networks are not created. The default MTU is 1500; for the VXLAN segment OpenStack subtracts space for the header, so our standard networks cannot be created anymore. Therefore I'm hardcoding the MTU in the tests. Network segments need to be specified for the current allocation logic.

oslo.config's set_override() does not seem to be cleaned up after a test is run, which means an override from one test can influence tests run after it. To avoid this we don't override tenant_network_types in our config validation tests, as they are not really required for the test. The other option would be to use oslo.config's clear_override(). Before this commit the problematic behaviour could be triggered with this command: tox -e py38 -- --serial TestConfigValidation.test_validate_ml2_vlan_ranges_success networking_ccloud.tests.unit.db.test_db_plugin.TestDBPluginNetworkSyncData.test_get_hosts_on_segments_with_segment_with_wrong_level
It's the future now!
A common test base is recommended, I heard. It also configures the logging for better test debuggability.
See tox-dev/tox#2730 and also the Neutron CI bug report https://bugs.launchpad.net/neutron/+bug/1999558
Config set with oslo.config's set_override() is not necessarily removed when a test finishes and can cause other tests in the same worker to fail due to leftover overrides. We now use a fixture to reset overrides done by the tests, as is done in the tests of some Neutron ml2 drivers and also in Nova in general, which is where we got the inspiration from. https://github.com/sapcc/nova/blob/6d0d19c069d1ea70fbed225ddb9f68f78d271a03/nova/test.py#L210
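For reference, a minimal sketch of that fixture pattern using oslo.config's own Config fixture; the test base class name is illustrative.

```python
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import base


class TestCase(base.BaseTestCase):
    def setUp(self):
        super().setUp()
        # the fixture resets CONF on cleanup, so set_override() calls made
        # during a test no longer leak into later tests in the same worker
        self.cfg_fixture = self.useFixture(config_fixture.Config(cfg.CONF))
```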
Whenever we encounter a device that does not fit our naming convention, we raise an exception and terminate. That's tedious, as we need to poke people to correct all their switch names whenever we run this script. Instead, we should just choose to ignore every device that does not comply with the naming scheme and move on.
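A small sketch of the "skip and warn instead of raise" behaviour; the regex is a made-up placeholder, not the actual naming convention.

```python
import logging
import re

LOG = logging.getLogger(__name__)

# placeholder pattern, the real naming convention differs
SWITCH_NAME_RE = re.compile(r'^[a-z]+\d+-sw\d+$')


def filter_conforming_devices(devices):
    conforming = []
    for device in devices:
        if not SWITCH_NAME_RE.match(device.name):
            LOG.warning("Ignoring device %s: name does not match the "
                        "expected naming scheme", device.name)
            continue
        conforming.append(device)
    return conforming
```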
EOS was the only agent where we planned on using GNMI, but now it looks like we're going to use GNMI with NXOS as well. Therefore our GNMI client abstraction moves into common/, where all agents that want to use it can do so. The agents will keep their vendor-specific parts (like paths or regexes) in their own code base, though.
A lot has changed in pydantic 2. At some point we need to upgrade and also check whether anything in our dependency list depends on pydantic<2 as well (and then check if we are backwards compatible). For now we'll just pin this dependency.
Since this change[0] the MTU set by _make_network() is set to the maximum, not to nl_const.DEFAULT_NETWORK_MTU anymore. As some TypeDrivers reserve some bytes for their tunneling, they don't offer the full maximum MTU. This is only a problem for tests not instantiating an ml2 driver, meaning our MechanismDriver tests work fine, but the other tests that create networks do not. In these other tests we also don't have the option to set the MTU via API, as the API does not recognize the attribute, most likely due to a missing default plugin. The easiest way for me to get the tests running again is to directly update the MTU in the DB after network creation. This might change in the future, when we find a better way to do it. [0] Ica21e891cd2559942abb0ab2b12132e7f6cdd835 or openstack/neutron@fec0286
…ity support Adaptation of https://opendev.org/openstack/neutron/commit/0fe6c0b8ca8a5704242766472d94d5ca86832363 for 3rd party drivers. The base class "MechanismDriver" now has a property called "connectivity". This patch overrides the default value in the out-of-tree drivers. The method "_check_drivers_connectivity" now uses this property, which is available in all drivers.
When a port is updated we try to clean up after it. As there is no explicit delete message, only an update_port_postcommit() call, we handle deletion of the old host there. For this, the delete call also needs the network, but in certain scenarios there is no original network - most likely in cases where the port only gets its binding host updated but stays in the same network. Therefore in these cases we now use the network_id given in the original binding levels. As segments are not always fully specified (or rather: this is what I have seen in some places in the code, but I am unsure whether segments are always fully specified within a NetworkContext; not fully specified here means the network_id entry is missing), I chose the safe way: use the original network first, then look at the segment dict, and if that fails log a warning and return, i.e. don't try to clean up after the segment. This fixes a TypeError where, when there was no original network, we'd try to look up ['id'] on the original network, which was None.
In some cases update_port_postcommit() gets two old segments, but the second segment can apparently be None. This, again, leads to a TypeError, as we try to find out if 'network_id' is in None. We now check if the segment is None beforehand and also make the code a bit shorter by giving the original binding levels a name.
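A hedged sketch of the resulting lookup order, based on my reading of ml2's PortContext; attribute and key names should be treated as assumptions. Original network first, then the original binding levels, then give up with a warning.

```python
def _get_original_network_id(self, context):
    """Best-effort lookup of the network the old binding belonged to."""
    if context.original_network is not None:
        return context.original_network['id']
    for level in (context.original_binding_levels or []):
        segment = level.get('bound_segment')
        # the bottom-level segment can apparently be None, so guard for it
        if segment and 'network_id' in segment:
            return segment['network_id']
    return None  # caller logs a warning and skips the cleanup
```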
VLAN names on EOS can be at most 32 characters long. As we're using the network id as VLAN name in many places, we now convert UUIDs to 32 chars by removing the "-" in them. To be able to diff the config later on we also transform this back to a UUID when we read the config, by adding "-" in the appropriate places when the VLAN name is a 32-char hex string. As OpenStack ids are lower-case and we are mainly interested in converting back names that we put there ourselves, we currently only do the conversion for lower-case UUIDs/characters. Let's see if that becomes problematic in the future.
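A minimal sketch of that conversion, using the standard UUID grouping 8-4-4-4-12 and only touching lower-case hex names:

```python
import re

LOWER_HEX32_RE = re.compile(r'^[0-9a-f]{32}$')


def network_id_to_vlan_name(network_id: str) -> str:
    # e.g. "0b65b217-8b86-4fc5-a936-6a8b46c1e777" -> 32 chars without dashes
    return network_id.replace('-', '')


def vlan_name_to_network_id(vlan_name: str) -> str:
    # only convert names that look like one of our own lower-case UUIDs
    if not LOWER_HEX32_RE.match(vlan_name):
        return vlan_name
    return '-'.join([vlan_name[0:8], vlan_name[8:12], vlan_name[12:16],
                     vlan_name[16:20], vlan_name[20:32]])
```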
With netbox 3.0 there were two API changes that affect us:
1. Interfaces can now carry multiple connections; I believe that is to support broken-out interfaces. We do not plan to have breakouts in the future, but we have them now. Yet we model them by creating a new interface (which could carry individual VLANs and so on) and then modelling the connection. Hence, there is no need for us to support multiple connected endpoints under an interface; if we encounter that, we fail.
2. Sites can have multiple ASNs now, and ASNs are objects, not plain ints anymore. This is not an issue so far, as a site currently only carries one ASN, which we simply unpack. Yet I can imagine us using multiple ASNs in a site in the future, in which case we would currently throw an exception. That's fine for now. If the time comes, we probably need to tag the ASN we want to use for the fabric.
Since netbox now supports tagging of VLAN groups, we can tag these groups and no longer have to rely on some loose convention for if and when a VLAN group should be considered to always be an extra VLAN.
When generating the config we usually only pick up extra VLANs from the switch side, not from the host side, because the device side is modelled by the device owners while the switch side is modelled by us. However, we have agreed that for future VLAN assignments the device-side modelling of apod nodes and swift devices will be done by their team. Any VLAN updates will hence propagate to the driver config.
Implement the trunk service extension. A trunk can be created on any port that is in a direct binding hostgroup. Only one trunk per hostgroup is possible. If the host is part of a metagroup, only VLANs not used by the metagroup VLAN pool can be used as target, to not interfere with VMs put on the host(s). The trunk plugin modifies the subports by setting binding host, vnic type and binding profile. The binding profile contains details about the specific VLAN translation, but this is only informational, as the "real" info will be fetched from the respective trunk/subport tables in the DB.
Our code configured VLANs inside the interface's switched-vlan config, but they need to be on the same level as that config, so we now move them to the proper location. The code for getting the config from the device looks fine, though.
When querying segment info from the DB we also query the trunk id for subports. Trunking for the OpenStack trunking extension always needs to happen at the last driver, so in this case we must ignore subports that belong to other drivers. We will only handle trunking for our own ports, i.e. ports that are bound by us as the last driver. We identify these ports by their vif_type, cc-fabric.
If a hostgroup has allow_multiple_trunk_ports set, it can now have multiple trunks on it. This allows multiple domains/projects to set VLAN translations for the same device without the networks needing to be shared into the same project. As we don't know in this case which port should dictate the native VLAN, we don't set the native VLAN at all for these groups.
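In essence the filter boils down to something like the following sketch; the attribute holding the parent port's vif_type is an assumption.

```python
CC_FABRIC_VIF_TYPE = 'cc-fabric'


def relevant_subports(subports):
    """Keep only subports whose parent port we bound as the last driver."""
    return [sp for sp in subports if sp.parent_vif_type == CC_FABRIC_VIF_TYPE]
```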
As the driver is fully responsible for its own switchports, we use a replace config call via GNMI to configure those ports. This means that there is - at the moment - no option for manual configuration of these interfaces. It also means that we now need to be able to configure interface port speeds. Config-wise this is just a string, which might be vendor-specific, but must at least be understandable by the platform's agent. In the EOS case we translate speeds like 10/100g to their respective API values (SPEED_10GB, SPEED_100GB).
We need to force the speed on some breakout/QSA interfaces. As port-channel members are just plain strings, we add the speed to the port-channel interface itself. That is fine, as members can only be of the same speed anyway. For physical interfaces, we can annotate it directly on the interface.
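A sketch of the speed translation on the EOS agent side; only SPEED_10GB and SPEED_100GB are mentioned above, the other map entries are assumptions that follow the same scheme.

```python
EOS_SPEED_MAP = {
    '10g': 'SPEED_10GB',
    '100g': 'SPEED_100GB',
    # assumed additional entries, following the same naming scheme
    '25g': 'SPEED_25GB',
    '40g': 'SPEED_40GB',
}


def to_eos_speed(speed: str) -> str:
    try:
        return EOS_SPEED_MAP[speed.lower()]
    except KeyError:
        raise ValueError(f"Unsupported port speed '{speed}' for EOS")
```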
[config-gen] Fix variable reference
renovate bot force-pushed the renovate/configure branch from f413d0c to bdbc2da on December 15, 2023 05:35
Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.
🚦 To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.
Detected Package Files

- .github/workflows/build-docs.yml (github-actions)
- .github/workflows/run-tox.yml (github-actions)
- doc/requirements.txt (pip_requirements)
- requirements.txt (pip_requirements)
- test-requirements.txt (pip_requirements)

Configuration Summary
Based on the default config's presets, Renovate will:

- Use semantic commit type fix for dependencies and chore for all others if semantic commits are in use.
- Ignore node_modules, bower_components, vendor and various test/tests directories.

🔡 Do you want to change how Renovate upgrades your dependencies? Add your custom config to renovate.json in this branch. Renovate will update the Pull Request description the next time it runs.

What to Expect
With your current configuration, Renovate will create 7 Pull Requests:

- Update dependency pygnmi to v0.8.14
  - Branch name: renovate/pygnmi-0.x
  - Merge into: stable/yoga-m3
  - Upgrade pygnmi to ==0.8.14
- Update dependency hacking to >=3.2,<3.3
  - Branch name: renovate/hacking-3.x
  - Merge into: stable/yoga-m3
  - Upgrade hacking to >=3.2,<3.3
- Update actions/checkout action to v4
  - Branch name: renovate/actions-checkout-4.x
  - Merge into: stable/yoga-m3
  - Upgrade actions/checkout to v4
- Update actions/setup-python action to v5
  - Branch name: renovate/actions-setup-python-5.x
  - Merge into: stable/yoga-m3
  - Upgrade actions/setup-python to v5
- Update dependency hacking to v7
  - Branch name: renovate/hacking-7.x
  - Merge into: stable/yoga-m3
  - Upgrade hacking to >=7,<7.1
- Update dependency pydantic to v2
  - Branch name: renovate/pydantic-2.x
  - Merge into: stable/yoga-m3
  - Upgrade pydantic to <3
- Update peaceiris/actions-gh-pages action to v4
  - Branch name: renovate/peaceiris-actions-gh-pages-4.x
  - Merge into: stable/yoga-m3
  - Upgrade peaceiris/actions-gh-pages to v4
🚸 Branch creation will be limited to a maximum of 2 per hour, so it doesn't swamp any CI resources or overwhelm the project. See docs for prHourlyLimit for details.

❓ Got questions? Check out Renovate's Docs, particularly the Getting Started section.
If you need any further assistance then you can also request help here.
This PR was generated by Mend Renovate. View the repository job log.