Glossary
ACL – Access Control List
Appliance – Server with SmartNICs installed (for a CPS performance increase) to accept traffic redirected from the destination host.
ARP – Address Resolution Protocol
ASN – Autonomous System Number
AZ – Availability Zones are unique physical locations within an Azure region, designed to provide a software and networking solution that protects against datacenter failures and provides increased high availability (HA) to our customers. See Build solutions for high availability using availability zones.
BGP – Border Gateway Protocol
BM (Bare Metal) – Entire node not running any custom SDN software (connected to the SDN/VNET via an intelligent router/appliance).
Bump in the wire – A processing engine with two bidirectional Ethernet interfaces (one for each side of the processing engine). Each direction provides half the capacity of the NIC; for example, a 400G NIC provides 200G bump-in-the-wire.
Cluster – Collection of Nodes across different Racks.
Connection – Control-plane communication between sender and receiver (usually involving a handshake); any packet that does not hit the 'flow'.
CA (Customer Address) – Address visible inside the Customer VM.
DC – Data Center
DHCP – Dynamic Host Configuration Protocol (IPv4)
DHCPv6 – Dynamic Host Configuration Protocol (IPv6)
DSR – Direct Server Return
Virtual IP used for Direct Server Return
ELB – External Load Balancer
ENI – Elastic Network Interface. ENI, VNIC, and VPort are used interchangeably; in the general sense, they all mean a VM's NIC.
Flow – A single transposition: a data-plane stream of packets between sender and receiver that shares key IP header information. A TCP conversation is entered into the device flow table as defined by the tuple (source IP, destination IP, source port, destination port, protocol) between source and destination. Tuple attributes may also be modified in the future.
In the case of a DPU failure, with one appliance passive and one active, traffic will switch over and the second appliance will take over.
Flow table – A view into the memory space of a device, capturing established TCP connections, tuple information, and current TCP state.
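To make the flow/flow-table relationship concrete, here is a minimal sketch in Python (purely illustrative; FlowKey and FlowTable are hypothetical names, not DASH or SAI APIs) of a table keyed by the 5-tuple described above:

```python
# Illustrative only: a minimal flow-table sketch keyed by the 5-tuple.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int  # e.g., 6 = TCP

class FlowTable:
    def __init__(self):
        self._flows = {}  # FlowKey -> current TCP state

    def lookup(self, key: FlowKey):
        """Fast path: return state if the packet hits an established flow."""
        return self._flows.get(key)

    def insert(self, key: FlowKey, state: str = "SYN_SEEN"):
        """Slow path: a packet that misses the table creates a new entry."""
        self._flows[key] = state

# The first SYN misses the table (slow path) and installs a flow;
# later packets of the same conversation hit it (fast path).
table = FlowTable()
key = FlowKey("10.0.0.4", "10.0.0.5", 49152, 443, 6)
if table.lookup(key) is None:
    table.insert(key)
assert table.lookup(key) == "SYN_SEEN"
```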
gRPC – Google RPC
GW – Gateway
HA – High Availability
IPSec – IPSec tunnel or IPSec device
ILB – Internal Load Balancer
IPv4 – IP protocol version 4 (ex. 10.1.2.3)
IPv6 – IP protocol version 6 (ex. 2001:1234:abcd::1)
JSON – JavaScript Object Notation
LB – Load Balancer
LPM – Longest-Prefix-Match algorithm, commonly used in routing.
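As an illustration of the idea (not the implementation used by any particular device), the following sketch performs a longest-prefix match over a made-up route table using Python's standard ipaddress module:

```python
# Illustrative only: longest-prefix match over a small, invented route table.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
}

def lpm(dst: str):
    """Return the next hop of the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lpm("10.1.2.3"))   # next-hop-B (the /16 wins over the /8)
print(lpm("192.0.2.1"))  # default-gw (only the /0 matches)
```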
MAC – MAC Address (Media Access Control)
Mapping – Mapping transformation between CA:PA:MAC; the integration point between overlay and underlay. Mappings can be shared: they will likely be a set of different global objects that different policies can refer to. For example, multiple high-CPS VMs in the same VNET share the same VNET address space, so they can use one mapping table, and multiple policies can refer to that same mapping table.
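As a sketch of the sharing described above (all names and addresses are hypothetical), the following shows one CA-to-PA/MAC mapping table referenced by two policies rather than copied into each:

```python
# Illustrative only: a shared CA -> (PA, MAC) mapping table.
vnet_mapping = {
    # Customer Address : (Provider Address, destination MAC)
    "10.1.0.4": ("100.64.1.10", "00:11:22:33:44:55"),
    "10.1.0.5": ("100.64.2.20", "00:11:22:33:44:66"),
}

# Two ENI policies in the same VNET refer to the *same* mapping object.
policy_eni_1 = {"acls": [], "routes": [], "mappings": vnet_mapping}
policy_eni_2 = {"acls": [], "routes": [], "mappings": vnet_mapping}

def encap_target(policy, ca: str):
    """Resolve an overlay destination (CA) to its underlay PA and MAC."""
    return policy["mappings"].get(ca)

print(encap_target(policy_eni_1, "10.1.0.5"))  # ('100.64.2.20', '00:11:22:33:44:66')
assert policy_eni_1["mappings"] is policy_eni_2["mappings"]  # shared, not copied
```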
NA – Neighbor Advertisement
Used to translate APIs to SAI
Node – Single physical machine in a Rack.
Northbound interfaces – Define the way the SDN controller interacts with the application plane. Applications and services are components such as load balancers, firewalls, security services, and cloud resources. The idea is to abstract the network so that application developers can connect to the network and make changes to accommodate the needs of the applications, without having to understand exactly what that means for the network.
See also Southbound interfaces
NS – Neighbor Solicitation
NVA – Network Virtual Appliance (a VM that might have forwarding or filtering functionality – ex. a router or firewall deployed as a Linux/Windows VM or bare-metal appliance).
Generic Routing Encapsulation (Protocol)
Overlay – The overlay network is a virtual network that is built on top of the underlay network infrastructure. In DASH, the overlay is generated automatically; it is the SDN pipeline – tunneling, decapsulation, ST, and overlay ACLs, for example. The overlay folder is designated for the DASH APIs. (Note that the overlay router is not the same as the SONiC router.) The DASH overlay API will be exposed for flexibility (and to refrain from introducing customer attributes into the SAI community). There will be no overlap between the SAI APIs and the DASH APIs. For now, we consider the DASH API to be northbound (NB) and SAI to be southbound (SB). The overlay is visible to, and configured by, our Azure end customers (CA).
P4 – Programming Protocol-independent Packet Processors (P4) is a domain-specific language for network devices, specifying how data-plane devices (switches, NICs, routers, filters, etc.) process packets. For more information, see P4 Open-Source Programming Language.
PA – Provider Address (internal Azure Datacenter address used for routing)
Peering – Network relationship between two entities (usually between two VNETs – ex. VNET peering).
Policy – ACLs + Routes
Prefix – For IPv4: (0-32) – example: 10.0.0.0/8. For IPv6: (0-128) – example: 2001:1234:abcd::/48.
Private Link – Before we encap, we transpose the packet; we transform and uplevel it so that it becomes IPv6 to IPv6.
Private Link Service – Destination side of Private Link; a packet generated in Private Link arrives at the destination transposed and NAT'd.
RA – Router Advertisement
Rack – Standard-size DC rack: a physical unit of containment for DC equipment, dependent on the rack SKU. Contains varying equipment such as blades, switches (T0, MGMT, console), PDUs, rack managers, etc.
Region – A set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. See Regions and Availability Zones.
RS – Router Solicitation
SDN – Software Defined Networking (high-level name for the Virtual Network and its elements)
SONiC – Software for Open Networking in the Cloud. See SONiC.
Rule put in place to prevent a VM from spoofing traffic.
SAI – The Switch Abstraction Interface (SAI) is a hardware abstraction model for switching silicon (ASICs). It is an open-source framework that allows ASICs to be represented in software; this means you can use a Broadcom ASIC the same way as one from Mellanox or Cavium XPliant. The framework lets developers target switching platforms in an agnostic way: as long as you have the necessary ASIC driver, you're good to go. SAI locates the abstraction in user space, while other frameworks, such as switchdev, locate it in kernel space. Microsoft open-sourced SAI in 2015.
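SAI itself is a C API; purely to illustrate the abstraction idea (one vendor-neutral interface, many vendor-specific drivers behind it), here is a hypothetical sketch in Python:

```python
# Illustrative only: the abstraction idea behind SAI, not the real SAI C API.
from abc import ABC, abstractmethod

class SwitchAsic(ABC):
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> None: ...

class BroadcomAsic(SwitchAsic):
    def create_route(self, prefix: str, next_hop: str) -> None:
        print(f"[broadcom SDK] program {prefix} -> {next_hop}")

class MellanoxAsic(SwitchAsic):
    def create_route(self, prefix: str, next_hop: str) -> None:
        print(f"[mellanox SDK] program {prefix} -> {next_hop}")

def install_default_route(asic: SwitchAsic):
    # The caller is vendor-agnostic: any ASIC with a driver behind
    # the interface is programmed the same way.
    asic.create_route("0.0.0.0/0", "10.0.0.1")

install_default_route(BroadcomAsic())
install_default_route(MellanoxAsic())
```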
Software-defined networking (SDN) is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management. SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex, while current networks require more flexibility and easy troubleshooting. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (the data plane) from the routing process (the control plane).
The control plane consists of one or more controllers, which are considered the brain of the SDN network, where the whole intelligence is incorporated. However, this intelligent centralization has its own drawbacks when it comes to security, scalability, and elasticity, and this is the main issue of SDN.
SDN was commonly associated with the OpenFlow protocol (for remote communication with network-plane elements to determine the path of network packets across network switches) after the latter's emergence in 2011. Since 2012, however, OpenFlow is no longer an exclusive solution for many companies, which have added proprietary techniques; these include Cisco Systems' Open Network Environment and Nicira's network virtualization platform. For more information, see Software-defined networking.
SmartToR – SmartToR (Smart Top-of-Rack) provides fully programmable switching, routing, and L4-L7 services.
Merchant silicon has evolved from basic L2 switching to high-performance switching and routing solutions. SmartToR supports high-performance, advanced network services. For example, the Trident SmartToR by Broadcom enables a 100x performance increase with massive capacity: tracking 3 million connections with 3 million connection-level policies, as well as 1 million tunnels and over 1 million stateful counters for metering and telemetry.
See also ToR.
Southbound interfaces – Define the way the SDN controller interacts with the data plane (aka the forwarding plane) to make adjustments to the network, so it can better adapt to changing requirements.
See also Northbound interfaces
TCP – Transmission Control Protocol
ToR – Top of Rack switch (aka ToR or T0).
A data center has racks, each containing several computing devices. The Top of Rack (ToR) design places network switches on every rack so that the computing devices in the rack can connect to them; these network switches are in turn connected to aggregation switches using fewer cables.
ToR switches also handle operations such as Layer 2 and Layer 3 frame and packet forwarding, data center bridging, and the transport of Fibre Channel over Ethernet for the racks of servers connected to them.
The following figure shows a simplified ToR network layout.
Underlay – The underlay network is the physical infrastructure on which the overlay network is built. The underlay network is responsible for the delivery of packets across networks, and it is controlled by SONiC via SAI. The underlay deals with, for example, packet forwarding, correctly setting the destination MAC, setup initializations in a switch (such as how many ports are available, port speed, and FEC settings), router interfaces, the default route, and so on. These header files are NOT automatically generated.
VFP – Virtual Filtering Platform
VIP – Virtual IP (IP exposed on the Load Balancer)
VM – Virtual Machine
VNET – Virtual Network
VNI (VNET Identifier) – equals the VXLAN ID or GRE key.
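For illustration, in the VXLAN case the 24-bit VNI sits in bytes 4-6 of the 8-byte VXLAN header (RFC 7348 layout: 1 flags byte, 3 reserved bytes, a 3-byte VNI, 1 reserved byte); a minimal parser sketch:

```python
# Illustrative only: extracting the 24-bit VNI from an 8-byte VXLAN header.
import struct

def parse_vni(vxlan_header: bytes) -> int:
    flags, _r1, _r2, _r3, vni_hi, vni_mid, vni_lo, _r4 = struct.unpack("!8B", vxlan_header)
    assert flags & 0x08, "I flag must be set for a valid VNI"
    return (vni_hi << 16) | (vni_mid << 8) | vni_lo

# A header carrying VNI 5000 (0x001388):
header = bytes([0x08, 0, 0, 0, 0x00, 0x13, 0x88, 0])
print(parse_vni(header))  # 5000
```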
VNIC – ENI, VNIC, and VPort are used interchangeably; in the general sense, they all mean a VM's NIC.
VXLAN – Virtual eXtensible Local Area Network (protocol)
XML – Extensible Markup Language (format)