ENCOR 350-401 Category

Architecture Questions

February 7th, 2021 digitaltut

Question 1

Explanation

For campus designs requiring simplified configuration, common end-to-end troubleshooting tools, and the fastest convergence, a design using Layer 3 switches in the access layer (routed access) in combination with Layer 3 switching at the distribution layer and core layers provides the most rapid convergence of data and control plane traffic flows.

Routed_access.jpg

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-design-guide.html#Layer_3_Routed_Access_Introduction

Campus fabric runs over arbitrary topologies:
+ Traditional 3-tier hierarchical network
+ Collapsed core/aggregation designs
+ Routed access
+ U-topology

The ideal design is routed access, which allows the fabric to extend to the very edge of the campus network.

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2017/pdf/BRKCRS-2812.pdf

From the above references, we see that the listed campus fabric topologies do not include a two-tier topology.

Question 2

Question 3

Explanation

The difference between on-premises and cloud is essentially where the hardware and software reside. On-premises means a company keeps its entire IT environment onsite, managed either by itself or by a third party. Cloud means it is hosted offsite, with someone else responsible for monitoring and maintaining it.

Question 4

Question 5

Explanation

Stateful Switchover (SSO) provides protection for network edge devices with dual Route Processors (RPs) that represent a single point of failure in the network design, and where an outage might result in loss of service for customers.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SY/configuration/guide/sy_swcg/stateful_switchover.html
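As a hedged sketch, SSO is typically enabled on a dual-RP IOS device with the redundancy mode commands (exact syntax varies by platform):

```
redundancy
 mode sso
```

After the standby RP synchronizes, "show redundancy states" should report the peer RP in a standby-hot state.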

Etherchannel Questions

February 7th, 2021 digitaltut

Quick overview of EtherChannel:

EtherChannel bundles physical links into one logical link with their combined bandwidth. STP sees this bundle as a single link, so STP will not block any of the member links. EtherChannel also load-balances traffic among the links in the channel automatically. If a link within the EtherChannel bundle fails, traffic previously carried over the failed link is carried over the remaining links; as long as at least one of the links in the channel is up, the logical link (EtherChannel link) remains up.

EtherChannel also works well for router connections:

EtherChannel_router.jpg

When an EtherChannel is created, a logical interface is created on the switches or routers representing that EtherChannel. You can configure this logical interface the way you want, for example, assign access/trunk mode on switches or assign an IP address to the logical interface on routers.

Note: A maximum of 8 Fast Ethernet or 8 Gigabit Ethernet ports can be grouped together when forming an EtherChannel.

There are three mechanisms you can choose from to configure EtherChannel:
+ Link Aggregation Control Protocol (LACP)
+ Port Aggregation Protocol (PAgP)
+ Static (“On”)

The Port Aggregation Protocol (PAgP) and Link Aggregation Control Protocol (LACP) facilitate the automatic creation of EtherChannels by exchanging packets between Ethernet interfaces. The Port Aggregation Protocol (PAgP) is a Cisco-proprietary solution, and the Link Aggregation Control Protocol (LACP) is standards based.

LACP modes:

LACP dynamically negotiates the formation of a channel. There are two LACP modes:

+ passive: the switch does not initiate the channel but responds to incoming LACP packets
+ active: the switch sends LACP packets and is willing to form a port-channel

The table below lists if an EtherChannel will be formed or not for LACP:

LACP Active Passive
Active Yes Yes
Passive Yes No
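A minimal LACP configuration sketch (interface numbers are illustrative):

```
! Bundle two ports into Port-channel 1 using LACP
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
```

If one side is set to active, the other side may be active or passive; with passive on both sides, neither switch initiates negotiation and no channel forms.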

PAgP modes:

+ desirable: the switch sends PAgP packets and is willing to form a port-channel
+ auto: the switch does not start PAgP negotiation but responds to PAgP packets it receives

The table below lists if an EtherChannel will be formed or not for PAgP:

PAgP Desirable Auto
Desirable Yes Yes
Auto Yes No
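A minimal PAgP configuration sketch (interface numbers are illustrative):

```
! Bundle two ports into Port-channel 1 using PAgP
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode desirable
```

With auto on both sides, neither switch initiates negotiation, so no channel forms.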

An EtherChannel in Cisco can be defined as a Layer 2 EtherChannel or a Layer 3 EtherChannel.
+ For Layer 2 EtherChannel, physical ports are placed into an EtherChannel group. A logical port-channel interface will be created automatically. An example of configuring Layer 2 EtherChannel can be found in Question 1 in this article.

+ For Layer 3 EtherChannel, the port-channel interface is configured as a routed interface (with the “no switchport” command) and assigned an IP address; the physical ports are then bound into this Layer 3 port-channel.
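One common way to build a Layer 3 EtherChannel on Catalyst switches uses routed ports; a hedged sketch (interfaces and addressing are illustrative):

```
! Make the member ports routed and bundle them
interface range GigabitEthernet1/0/3 - 4
 no switchport
 channel-group 2 mode active
! Assign the IP address to the logical port-channel interface
interface Port-channel 2
 no switchport
 ip address
```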

Static (“On”)

In this mode, no negotiation is needed; the interfaces become members of the EtherChannel immediately. When using this mode, make sure the other end uses it too, because the switches will not check whether port parameters match. Otherwise the EtherChannel will not come up and may cause problems such as switching loops.

Note: All interfaces in an EtherChannel must be configured identically to form an EtherChannel. Specific settings that must be identical include:
+ Speed settings
+ Duplex settings
+ STP settings
+ VLAN membership (for access ports)
+ Native VLAN (for trunk ports)
+ Allowed VLANs (for trunk ports)
+ Trunking Encapsulation (ISL or 802.1Q, for trunk ports)

Note: EtherChannels will not form if either dynamic VLANs or port security are enabled on the participating EtherChannel interfaces.

For more information about EtherChannel, please read our EtherChannel tutorial.

Question 1

Explanation

There are two PAgP modes:

+ Auto: Responds to PAgP messages but does not aggressively negotiate a PAgP EtherChannel. A channel is formed only if the port on the other end is set to Desirable. This is the default mode.
+ Desirable: The port actively negotiates channeling status with the interface on the other end of the link. A channel is formed if the other side is set to Auto or Desirable.

The table below lists if an EtherChannel will be formed or not for PAgP:

PAgP Desirable Auto
Desirable Yes Yes
Auto Yes No

Question 2

Explanation

The Cisco switch was configured with PAgP, which is a Cisco-proprietary protocol, so the non-Cisco switch could not negotiate the EtherChannel.

Question 3

Explanation

In the exhibit we see that no interfaces are shown in the “Ports” field. The member interfaces would be listed under the “Ports” field even if they were shut down. For example:

Etherchannel_interface_down.jpg

So the most likely cause of this problem is no port members have been defined for Po1 of SW1.

Trunking Questions

February 6th, 2021 digitaltut

Question 1

Explanation

SW3 does not have VLAN 60, so it should not receive traffic for this VLAN (sent from SW2). Therefore we should enable VTP pruning so that SW2 does not forward VLAN 60 traffic to SW3. Notice that pruning must be enabled on SW1 (the VTP server), not SW2; the setting then propagates to the entire VTP domain.
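A sketch of the required configuration; pruning is enabled once on the VTP server and propagates to the whole domain:

```
! On SW1 (the VTP server)
vtp pruning
```

"show vtp status" should then report VTP pruning mode as enabled on the switches in the domain.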

Question 2

Explanation

From the “show vlan brief” output we learn that Finance belongs to VLAN 110 and that all VLANs (from 1 to 1005) are allowed to traverse the trunk (Port-channel 1). Therefore we have to remove VLAN 110 from the allowed VLAN list with the “switchport trunk allowed vlan remove” command. The pruning feature cannot do this job because the Finance VLAN is active.
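A sketch of the fix (the port-channel number is taken from the exhibit):

```
interface Port-channel 1
 switchport trunk allowed vlan remove 110
```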

Question 3

SD-WAN & SD-Access Solutions

February 6th, 2021 digitaltut 38 comments

SD-Access Quick summary

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

Three major building blocks that make up SDA: the control plane, the data plane and the policy plane.

+ Control-Plane based on LISP
+ Data-Plane based on VXLAN
+ Policy-Plane based on TrustSec

SD-WAN Quick Summary

The primary components for the Cisco SD-WAN solution consist of the vManage network management system (management plane), the vSmart controller (control plane), the vBond orchestrator (orchestration plane), and the vEdge router (data plane).

+ vManage – This centralized network management system provides a GUI interface to easily monitor, configure, and maintain all Cisco SD-WAN devices and links in the underlay and overlay network.

+ vSmart controller – This software-based component is responsible for the centralized control plane of the SD-WAN network. It establishes a secure connection to each vEdge router and distributes routes and policy information via the Overlay Management Protocol (OMP), acting as a route reflector. It also orchestrates the secure data plane connectivity between the vEdge routers by distributing crypto key information, allowing for a very scalable, IKE-less architecture.

+ vBond orchestrator – This software-based component performs the initial authentication of vEdge devices and orchestrates vSmart and vEdge connectivity. It also has an important role in enabling the communication of devices that sit behind Network Address Translation (NAT).

+ vEdge router – This device, available as either a hardware appliance or software-based router, sits at a physical site or in the cloud and provides secure data plane connectivity among the sites over one or more WAN transports. It is responsible for traffic forwarding, security, encryption, Quality of Service (QoS), routing protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF), and more.

SD_WAN_Physical_Architecture.jpg

Cisco SD-WAN uses Overlay Management Protocol (OMP) which manages the overlay network. OMP runs between the vSmart controllers and WAN Edge routers (and among vSmarts themselves) where control plane information, such as the routing, policy, and management information, is exchanged over a secure connection.

VPNs in SD-WAN

In the SD-WAN overlay, virtual private networks (VPNs) provide segmentation. Each VPN is equivalent to a VRF: VPNs are isolated from one another, and each has its own forwarding table. An interface or subinterface is explicitly configured under a single VPN and cannot be part of more than one VPN. Devices attached to an interface in one VPN cannot communicate with devices in another VPN unless a policy is put in place to allow it. VPN numbers range from 0 to 65535, but several VPNs are reserved for internal use.

The Transport & Management VPNs

There are two implicitly configured VPNs in the WAN Edge devices and controllers: VPN 0 and VPN 512.

VPN 0 is the transport VPN. It contains all the interfaces that connect to the WAN links. Secure DTLS/TLS connections to the controllers are initiated from this VPN. Static or default routes or a dynamic routing protocol needs to be configured inside this VPN in order to get appropriate next-hop information so the control plane can be established and IPsec tunnel traffic can reach remote sites.

VPN 0 connects the WAN Edge to the WAN transport and creates control plane and data plane connections. The WAN Edge device can connect to multiple WAN transports on different interfaces within the same VPN 0 transport segment. At least one interface needs to be configured initially to reach the SD-WAN controllers for onboarding.

VPN 512 is the management VPN. It carries the out-of-band management traffic to and from the Cisco SD-WAN devices. This VPN is ignored by OMP and not carried across the overlay network.

SDWAN_VPNs.jpg
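A hedged vEdge CLI sketch of the two implicit VPNs (interface names, color, and addressing are illustrative):

```
vpn 0
 interface ge0/0
  ip address
  tunnel-interface
   encapsulation ipsec
   color biz-internet
  no shutdown
 ip route
vpn 512
 interface eth0
  ip dhcp-client
  no shutdown
```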

Question 1

Explanation

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

Question 2

Explanation

+ Orchestration plane (vBond) assists in securely onboarding the SD-WAN WAN Edge routers into the SD-WAN overlay (-> Therefore answer A is about vBond). The vBond controller, or orchestrator, authenticates and authorizes the SD-WAN components onto the network. The vBond orchestrator takes an added responsibility to distribute the list of vSmart and vManage controller information to the WAN Edge routers. vBond is the only device in SD-WAN that requires a public IP address as it is the first point of contact and authentication for all SD-WAN components to join the SD-WAN fabric. All other components need to know the vBond IP or DNS information.

+ Management plane (vManage) is responsible for central configuration and monitoring. The vManage controller is the centralized network management system that provides a single pane of glass GUI interface to easily deploy, configure, monitor and troubleshoot all Cisco SD-WAN components in the network. (-> Answer C and answer D are about vManage)

+ Control plane (vSmart) builds and maintains the network topology and makes decisions on traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies, and distributes data plane policies to network devices for enforcement (-> Answer B is about vSmart)

Question 3

Explanation

The southbound protocol used by APIC is OpFlex, which is pushed by Cisco as the protocol for policy enablement across physical and virtual switches.

Southbound interfaces are implemented with something called the Service Abstraction Layer (SAL), which talks to the network elements via SNMP and CLI.

Note: Cisco OpFlex is a southbound protocol in a software-defined network (SDN).

Question 4

Explanation

Today the Dynamic Network Architecture Software Defined Access (DNA-SDA) solution requires a fusion router to perform VRF route leaking between user VRFs and shared services, which may be in the global routing table (GRT) or another VRF. Shared services may consist of DHCP, Domain Name System (DNS), Network Time Protocol (NTP), Wireless LAN Controller (WLC), Identity Services Engine (ISE), and Cisco DNA Center components, which must be made available to other virtual networks (VNs) in the campus.

Reference: https://www.cisco.com/c/en/us/support/docs/cloud-systems-management/dna-center/213525-sda-steps-to-configure-fusion-router.html

Question 5

Explanation

Fabric mode APs continue to support the same wireless media services that traditional APs support; apply AVC, quality of service (QoS), and other wireless policies; and establish the CAPWAP control plane to the fabric WLC. Fabric APs join as local-mode APs and must be directly connected to the fabric edge node switch to enable fabric registration events, including RLOC assignment via the fabric WLC. The fabric edge nodes use CDP to recognize APs as special wired hosts, applying special port configurations and assigning the APs to a unique overlay network within a common EID space across a fabric. The assignment allows management simplification by using a single subnet to cover the AP infrastructure at a fabric site.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/sda-sdg-2019oct.html

Question 6

Explanation

The tunneling technology used for the fabric data plane is based on Virtual Extensible LAN (VXLAN). VXLAN encapsulation is UDP based, meaning that it can be forwarded by any IP-based network (legacy or third party) and creates the overlay network for the SD-Access fabric. Although LISP is the control plane for the SD-Access fabric, it does not use LISP data encapsulation for the data plane; instead, it uses VXLAN encapsulation because it is capable of encapsulating the original Ethernet header to perform MAC-in-IP encapsulation, while LISP does not. Using VXLAN allows the SD-Access fabric to support Layer 2 and Layer 3 virtual topologies (overlays) and the ability to operate over any IP-based network with built-in network segmentation (VRF instance/VN) and built-in group-based policy.

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

Question 7

Explanation

Access Points
+ AP is directly connected to FE (or to an extended node switch)
+ AP is part of Fabric overlay

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2018/pdf/BRKEWN-2020.pdf

Question 8

Explanation

The primary components for the Cisco SD-WAN solution consist of the vManage network management system (management plane), the vSmart controller (control plane), the vBond orchestrator (orchestration plane), and the vEdge router (data plane).

+ vManage – This centralized network management system provides a GUI interface to easily monitor, configure, and maintain all Cisco SD-WAN devices and links in the underlay and overlay network.

+ vSmart controller – This software-based component is responsible for the centralized control plane of the SD-WAN network. It establishes a secure connection to each vEdge router and distributes routes and policy information via the Overlay Management Protocol (OMP), acting as a route reflector. It also orchestrates the secure data plane connectivity between the vEdge routers by distributing crypto key information, allowing for a very scalable, IKE-less architecture.

+ vBond orchestrator – This software-based component performs the initial authentication of vEdge devices and orchestrates vSmart and vEdge connectivity. It also has an important role in enabling the communication of devices that sit behind Network Address Translation (NAT).

+ vEdge router – This device, available as either a hardware appliance or software-based router, sits at a physical site or in the cloud and provides secure data plane connectivity among the sites over one or more WAN transports. It is responsible for traffic forwarding, security, encryption, Quality of Service (QoS), routing protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF), and more.

Reference: https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/SDWAN/CVD-SD-WAN-Design-2018OCT.pdf

Question 9

Question 10

Explanation

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

SD-WAN & SD-Access Solutions 2

February 6th, 2021 digitaltut

Question 1

Explanation

+ Orchestration plane (vBond) assists in securely onboarding the SD-WAN WAN Edge routers into the SD-WAN overlay. The vBond controller, or orchestrator, authenticates and authorizes the SD-WAN components onto the network. The vBond orchestrator takes an added responsibility to distribute the list of vSmart and vManage controller information to the WAN Edge routers. vBond is the only device in SD-WAN that requires a public IP address as it is the first point of contact and authentication for all SD-WAN components to join the SD-WAN fabric. All other components need to know the vBond IP or DNS information.

Question 2

Explanation

+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.

SD_Access_Fabric.jpg

Question 3

Explanation

+ Control plane (vSmart) builds and maintains the network topology and makes decisions on traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies, and distributes data plane policies to network devices for enforcement.

Question 4

Explanation

BFD (Bidirectional Forwarding Detection) is a protocol that detects link failures as part of the Cisco SD-WAN (Viptela) high-availability solution. It is enabled by default on all vEdge routers and cannot be disabled.

Question 5

Explanation

An overlay network creates a logical topology used to virtually connect devices; it is built on top of an arbitrary physical underlay topology.

An overlay network is created on top of the underlay network through virtualization (virtual networks). The data plane traffic and control plane signaling are contained within each virtualized network, maintaining isolation among the networks and an independence from the underlay network.

SD-Access allows for the extension of Layer 2 and Layer 3 connectivity across the overlay through the services provided by LISP.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-design-guide.html

Question 6

Explanation

Control plane (vSmart) builds and maintains the network topology and makes decisions on traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies, and distributes data plane policies to network devices for enforcement.

Question 7

Question 8

Explanation

Access Points
+ AP is directly connected to FE (or to an extended node switch)
+ AP is part of Fabric overlay

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2018/pdf/BRKEWN-2020.pdf

Question 9

Explanation

Fabric control plane node (C): One or more network elements that implement the LISP Map-Server (MS) and Map-Resolver (MR) functionality. The control plane node’s host tracking database keeps track of all endpoints in a fabric site and associates the endpoints with fabric nodes in what is known as an EID-to-RLOC binding in LISP.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-macro-segmentation-deploy-guide.html

Question 10

Explanation

SD-Access Wireless Architecture Control Plane Node - A Closer Look

Fabric Control-Plane Node is based on a LISP Map Server / Resolver

Runs the LISP Endpoint ID Database to provide overlay reachability information
+ A simple Host Database, that tracks Endpoint ID to Edge Node bindings (RLOCs)
+ Host Database supports multiple types of Endpoint ID (EID), such as IPv4 /32, IPv6 /128* or MAC/48
+ Receives prefix registrations from Edge Nodes for wired clients, and from Fabric mode WLCs for wireless clients
+ Resolves lookup requests from FE to locate Endpoints
+ Updates Fabric Edge nodes, Border nodes with wireless client mobility and RLOC information

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/latam/docs/2018/pdf/BRKEWN-2020.pdf

SD-WAN & SD-Access Solutions 3

February 6th, 2021 digitaltut

Question 1

Explanation

Cisco SD-WAN uses Overlay Management Protocol (OMP) which manages the overlay network. OMP runs between the vSmart controllers and WAN Edge routers (and among vSmarts themselves) where control plane information, such as the routing, policy, and management information, is exchanged over a secure connection.

Question 2

Explanation

SDA supports two additional types of roaming, which are Intra-xTR and Inter-xTR. In SDA, xTR stands for an access-switch that is a fabric edge node. It serves both as an ingress tunnel router as well as an egress tunnel router.

When a client on a fabric-enabled WLAN roams from one access point to another access point on the same access-switch, it is called Intra-xTR. Here, the local client database and client history table are updated with the information of the newly associated access point.

When a client on a fabric-enabled WLAN roams from one access point to another access point on a different access-switch, it is called Inter-xTR. Here, the map server is also updated with the client location (RLOC) information. Also, the local client database is updated with the information of the newly associated access point.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/mobility.html

Question 3

Question 4

Explanation

The tunneling technology used for the fabric data plane is based on Virtual Extensible LAN (VXLAN). VXLAN encapsulation is UDP based, meaning that it can be forwarded by any IP-based network (legacy or third party) and creates the overlay network for the SD-Access fabric. Although LISP is the control plane for the SD-Access fabric, it does not use LISP data encapsulation for the data plane; instead, it uses VXLAN encapsulation because it is capable of encapsulating the original Ethernet header to perform MAC-in-IP encapsulation, while LISP does not. Using VXLAN allows the SD-Access fabric to support Layer 2 and Layer 3 virtual topologies (overlays) and the ability to operate over any IP-based network with built-in network segmentation (VRF instance/VN) and built-in group-based policy.

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

QoS Questions

February 5th, 2021 digitaltut

QoS quick summary:

1. Network factors:
+ Bandwidth: the speed of the link (or the capacity available on the link), usually measured in bits per second (bps)
+ Delay (or latency): how long a packet takes to get from the sender to the receiver. The greater the delay, the slower the network. Delay is usually measured in milliseconds (ms)
+ Jitter: a measure of the variation in delay between packets. For example, if one packet needs 50ms to travel from A to B while another takes 40ms, the jitter is 10ms
+ Loss: as packets travel to the destination, some of them may get lost.

2. QoS Models:
+ Best Effort: No QoS policies applied
+ Integrated Services (IntServ): Resource Reservation Protocol (RSVP) is used to reserve bandwidth
+ Differentiated Services (DiffServ): Packets are classified and marked individually; policy decisions are made independently by each node in a path.

3. QoS Markings:

Marking is a method that you use to modify the QoS fields of the incoming and outgoing packets. The QoS fields that you can mark are CoS in Layer 2 (both access and trunk traffic), and IP precedence and Differentiated Service Code Point (DSCP) in Layer 3.

Layer 2 marking is of two types:
+ Traffic on a trunk link with an 802.1Q header: the 802.1Q tag adds 4 bytes to the Layer 2 header to carry a VLAN ID. Within those 4 bytes, 3 bits are called the Priority (PRI) bits. These bits provide 8 combinations (2^3) and are termed CoS (Class of Service) tags.
+ Traffic on an access link without 802.1Q: we need to use 802.1p marking instead. An 802.1p tag is similar to an 802.1Q tag (4 bytes); it sets the PRI value according to the CoS marking and leaves the VLAN ID as all zeroes.

Layer-3 marking is accomplished using the 8-bit Type of Service (ToS) field, part of the IP header. A mark in this field will remain unchanged as it travels from hop-to-hop, unless a Layer-3 device is explicitly configured to overwrite this field. There are two marking methods that use the ToS field:
+ IP Precedence: uses the first three bits of the ToS field.
+ Differentiated Service Code Point (DSCP): uses the first six bits of the ToS field. When using DSCP, the ToS field is often referred to as the Differentiated Services (DS) field.

TOS.png
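A hedged MQC sketch of Layer 3 marking (the ACL number and class/policy names are illustrative):

```
! Classify RTP voice traffic by its typical UDP port range
access-list 101 permit udp any any range 16384 32767
class-map match-all VOICE
 match access-group 101
! Mark matching packets with DSCP EF (Expedited Forwarding)
policy-map MARK-IN
 class VOICE
  set dscp ef
interface GigabitEthernet0/1
 service-policy input MARK-IN
```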

4. QoS terms:

+ Classification: This involves categorizing network traffic into different groups based on specific criteria like IP address, protocol, port, or application type.
+ Marking: allows you to mark (set or change) a value (attribute) for the traffic belonging to a specific class
+ Queuing: entails holding packets in a queue and scheduling their transmission based on priority.
+ Policing: is used to control the rate of traffic flowing across an interface. When traffic exceeds the maximum configured rate, the excess traffic is generally dropped or remarked. The result of traffic policing is an output rate that appears as a saw-tooth with crests and troughs. Traffic policing can be applied to inbound and outbound interfaces. Unlike traffic shaping, QoS policing avoids delays due to queuing. Policing is configured in bytes.
+ Congestion: occurs when network bandwidth is insufficient to accommodate all traffic.
+ Shaping: retains excess packets in a queue and schedules the excess for later transmission over increments of time. When traffic reaches the maximum configured rate, additional packets are queued instead of dropped and are sent later. Traffic shaping is applicable only on outbound interfaces, as buffering and queuing happen only on outbound interfaces. Shaping is configured in bits per second.

The primary reasons you would use traffic shaping are to control access to available bandwidth, to ensure that traffic conforms to the policies established for it, and to regulate the flow of traffic in order to avoid congestion that can occur when the sent traffic exceeds the access speed of its remote, target interface.

traffic_policing_vs_shaping.jpg
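A hedged MQC sketch contrasting the two (rates and policy names are illustrative):

```
! Policing: traffic above 8 Mbps is dropped immediately
policy-map POLICE-IN
 class class-default
  police 8000000 conform-action transmit exceed-action drop
! Shaping: traffic above 8 Mbps is buffered and sent later
policy-map SHAPE-OUT
 class class-default
  shape average 8000000
interface GigabitEthernet0/1
 service-policy input POLICE-IN
 service-policy output SHAPE-OUT
```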

+ Tail drop: When the queue is full, the packet is dropped. This is the default behavior.

5. Congestion Management (types of queuing): uses the marking on each packet to determine which queue to place packets in

First-in, first-out (FIFO): FIFO entails no concept of priority or classes of traffic. With FIFO, transmission of packets out the interface occurs in the order the packets arrive, which means no QoS.
Priority Queuing (PQ): This type of queuing places traffic into one of four queues. Each queue has a different level of priority, and higher-priority queues must be emptied before packets are emptied from lower-priority queues. This behavior can “starve out” lower-priority traffic.
Custom Queuing (CQ): provides specific traffic with guaranteed bandwidth at a potential congestion point, assuring the traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic.
Weighted fair queueing (WFQ): allocates bandwidths to flows based on the weight. In addition, to allocate bandwidths fairly to flows, WFQ schedules packets in bits (not bytes). This prevents long packets from preempting bandwidths of short packets and reduces the delay and jitter when both short and long packets wait to be forwarded.

Class-based weighted fair queueing (CBWFQ) extends the standard WFQ functionality to provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria including protocols, access control lists (ACLs), and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. A queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class.

Once a class has been defined according to its match criteria, you can assign it characteristics. To characterize a class, you assign it bandwidth, weight, and maximum packet limit. The bandwidth assigned to a class is the guaranteed bandwidth delivered to the class during congestion.

CBWFQ.jpg
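A rough sketch of the bandwidth-guarantee idea (the class names and figures below are invented, not from any question):

```python
# CBWFQ sketch: during congestion, each user-defined class is guaranteed
# its configured bandwidth; whatever is not reserved remains available
# to class-default and other traffic. Figures are illustrative only.
link_kbps = 10000
classes = {"voice": 2000, "video": 3000, "bulk": 1000}

unreserved = link_kbps - sum(classes.values())
print(classes["video"], "kbps guaranteed for the video class")  # 3000
print(unreserved, "kbps left unreserved")                       # 4000
```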

Low latency queueing (LLQ): brings strict priority queuing (PQ) to CBWFQ. Strict PQ allows delay-sensitive packets such as voice to be sent before packets in other queues. LLQ reduces jitter in voice conversations.

With LLQ, delay-sensitive traffic is placed into a strict-priority queue that is always serviced first. Unlike legacy PQ, the priority queue in LLQ is policed during congestion, so it cannot starve out the other traffic classes.

Low_Latency_Queuing.jpg

The Resource Reservation Protocol (RSVP) allows applications to reserve bandwidth for their data flows. It is used by a host, on behalf of an application data flow, to request a specific amount of bandwidth from the network. RSVP is also used by the routers to forward bandwidth reservation requests.

Question 1

Question 2

Explanation

Weighted Random Early Detection (WRED) is just a congestion avoidance mechanism. WRED drops packets selectively based on IP precedence. Edge routers assign IP precedences to packets as they enter the network. When a packet arrives, the following events occur:

1. The average queue size is calculated.
2. If the average is less than the minimum queue threshold, the arriving packet is queued.
3. If the average is between the minimum queue threshold for that type of traffic and the maximum threshold for the interface, the packet is either dropped or queued, depending on the packet drop probability for that type of traffic.
4. If the average queue size is greater than the maximum threshold, the packet is dropped.
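The four-step decision above can be sketched in Python (the thresholds and drop probability here are illustrative, not IOS defaults):

```python
import random

def wred_action(avg_queue, min_th, max_th, max_drop_prob):
    """WRED decision for one arriving packet, following the four steps
    above. Threshold values are illustrative, not IOS defaults."""
    if avg_queue < min_th:
        return "queue"            # step 2: below the minimum threshold
    if avg_queue >= max_th:
        return "drop"             # step 4: above the maximum threshold
    # step 3: drop probability grows linearly between the two thresholds
    drop_prob = max_drop_prob * (avg_queue - min_th) / (max_th - min_th)
    return "drop" if random.random() < drop_prob else "queue"

# Higher-precedence traffic is configured with a higher minimum threshold,
# so at the same average queue depth it is less likely to be dropped.
print(wred_action(avg_queue=10, min_th=20, max_th=40, max_drop_prob=0.1))  # queue
print(wred_action(avg_queue=45, min_th=20, max_th=40, max_drop_prob=0.1))  # drop
```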

WRED reduces the chances of tail drop (when the queue is full, the packet is dropped) by selectively dropping packets when the output interface begins to show signs of congestion (thus it can mitigate congestion by preventing the queue from filling up). By dropping some packets early rather than waiting until the queue is full, WRED avoids dropping large numbers of packets at once and minimizes the chances of global synchronization. Thus, WRED allows the transmission line to be used fully at all times.

WRED generally drops packets selectively based on IP precedence. Packets with a higher IP precedence are less likely to be dropped than packets with a lower precedence. Thus, the higher the priority of a packet, the higher the probability that the packet will be delivered.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_conavd/configuration/15-mt/qos-conavd-15-mt-book/qos-conavd-cfg-wred.html

WRED is only useful when the bulk of the traffic is TCP/IP traffic. With TCP, dropped packets indicate congestion, so the packet source will reduce its transmission rate. With other protocols, packet sources may not respond or may resend dropped packets at the same rate. Thus, dropping packets does not decrease congestion.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_conavd/configuration/xe-16/qos-conavd-xe-16-book/qos-conavd-oview.html

Note: Global synchronization occurs when multiple TCP hosts reduce their transmission rates in response to congestion. When congestion is relieved, the TCP hosts all try to increase their transmission rates again at the same time (via the TCP slow-start algorithm), which causes congestion again. Global synchronization produces this graph:

TCP_Global_Synchronization.jpg

Question 3

Explanation

QoS Packet Marking refers to changing a field within a packet either at Layer 2 (802.1Q/p CoS, MPLS EXP) or Layer 3 (IP Precedence, DSCP and/or IP ECN).

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-16/qos-mqc-xe-16-book/qos-mrkg.html

Question 4

Explanation

Cisco routers allow you to mark two internal values (qos-group and discard-class) that travel with the packet within the router but do not modify the packet’s contents.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-16-6/qos-mqc-xe-16-6-book/qos-mrkg.html

Question 5

Explanation

Traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate.

traffic_policing_vs_shaping.jpg

Question 6

Explanation

Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs.

Unlike traffic shaping, traffic policing does not cause delay.
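The drop-excess behavior of policing can be sketched with a simple token bucket (the rates and packet sizes below are made up; a shaper would buffer the excess for later instead of dropping it):

```python
def police(packets, rate_bps, bucket_depth, interval=1.0):
    """Single-rate token-bucket policer sketch: tokens refill at the CIR
    each interval; a packet that exceeds the available tokens is dropped.
    Numbers are illustrative only."""
    tokens = bucket_depth
    results = []
    for size_bits in packets:
        tokens = min(bucket_depth, tokens + rate_bps * interval)
        if size_bits <= tokens:
            tokens -= size_bits
            results.append("conform")
        else:
            results.append("drop")          # a shaper would queue instead
    return results

# 8000 bps CIR with an 8000-bit bucket: the burst that exceeds the
# available tokens is dropped, producing policing's saw-tooth output.
print(police([4000, 4000, 12000, 4000], rate_bps=8000, bucket_depth=8000))
```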

Classification and marking should take place at the network edge, as close to the source of the traffic as possible. Likewise, Cisco recommends policing traffic as close to the source as possible.

Question 7

Explanation

CoS value 5 is commonly used for VoIP, and CoS 5 should be mapped to DSCP 46. DSCP 46 is defined as Expedited Forwarding (EF) and is the value usually assigned to all interactive voice and video traffic. This keeps the marking uniform end to end: DSCP EF (mostly voice RTP) maps to CoS 5.

Note:

+ CoS is a Layer 2 marking contained within an 802.1Q tag. The values for CoS are 0 – 7
+ DSCP is a Layer 3 marking and has values 0 – 63
+ The default CoS-to-DSCP mapping maps CoS 5 to DSCP 40, which is why the mapping must be changed to DSCP 46 (EF) for voice

Question 8

Explanation

First-in, first-out (FIFO): FIFO entails no concept of priority or classes of traffic. With FIFO, transmission of packets out the interface occurs in the order the packets arrive, which means no QoS.

Switching Mechanism Questions

February 4th, 2021 digitaltut 13 comments

Packet Switching

Packet Switching is the method of moving a packet from a router’s input interface to an output interface. There are three main packet switching methods: Process Switching, Fast Switching and Cisco Express Forwarding (CEF).

Process switching is the oldest and slowest switching method because, for every incoming packet, the router must examine the routing table, determine which interface the packet should be switched to, and then switch the packet. The router CPU is responsible for choosing the appropriate process to handle the packet and for scheduling the running of that process. With this kind of switching, it was clear that the router could not handle packets fast enough to attain the speeds needed as traffic flows increased at a rapid pace across networks. So an idea was born to solve this problem: why not cache the results of the IP next-hop and Layer 2 lookups and use them to switch the next packet towards a given destination? This concept gave birth to fast switching.

Fast switching relies on the idea of caching the routing decision made for the first packet (via process switching) and then applying it to subsequent packets without recalculating. The first packet is copied to packet memory, and if the destination network is found in the fast-switching cache, the frame is rewritten and sent out the outgoing interface. If the destination address is not present in the fast-switching cache, the packet is returned to the process-switching path, where the processor builds a cache entry that can be used to forward later packets to that destination.
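The cache-then-forward behavior can be sketched like this (the dictionary-based routing table and interface names are stand-ins, not real IOS structures):

```python
# Fast-switching sketch: the first packet to a destination is process
# switched and the result is cached; later packets hit the cache.
route_cache = {}

def process_switch(dst, routing_table):
    """Slow path: full routing-table lookup (simplified to a dict here)."""
    return routing_table[dst]

def fast_switch(dst, routing_table):
    if dst not in route_cache:            # cache miss -> process switching
        route_cache[dst] = process_switch(dst, routing_table)
    return route_cache[dst]               # cache hit -> fast path

rt = {"10.0.0.0/8": "Gi0/1"}
fast_switch("10.0.0.0/8", rt)            # first packet: process switched, cached
print(fast_switch("10.0.0.0/8", rt))     # later packets: served from the cache
```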

CEF switching is a Cisco-proprietary, advanced Layer 3 IP switching mechanism that was designed to tackle the deficiencies associated with fast switching. CEF optimizes performance, scalability, and resiliency for large and complex networks with dynamic traffic patterns. CEF’s retrieval and packet forwarding technique is less CPU intensive than process or fast switching. This results in higher throughput when CEF is enabled.

CEF Quick summary

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB.

RIB vs FIB

Each routing protocol, such as OSPF or EIGRP, has its own Routing Information Base (RIB) and selects its best candidates to install into the global RIB so that they can then be selected for forwarding. To view a protocol’s RIB, use the command “show ip ospf database” for OSPF, “show ip eigrp topology” for EIGRP or “show ip bgp” for BGP. To view the Forwarding Information Base (FIB), use the “show ip cef” command. The RIB is in the control plane while the FIB is in the data plane.

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The FIB maintains next-hop address information based on the information in the IP routing table (RIB). In other words, FIB is a mirror copy of RIB.

RIB is in Control plane (and it is not used for forwarding) while FIB is in Data plane (and it is used for forwarding).
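As a sketch of how the FIB is used in the data plane, here is a minimal longest-prefix-match lookup in Python (the prefixes and next hops are invented):

```python
import ipaddress

# FIB sketch: longest-prefix match over entries mirrored from the RIB.
fib = {
    "0.0.0.0/0":   "Gi0/0",   # default route
    "10.0.0.0/8":  "Gi0/1",
    "10.1.1.0/24": "Gi0/2",
}

def fib_lookup(dst):
    """Return the outgoing interface for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in fib
         if addr in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return fib[str(best)]

print(fib_lookup("10.1.1.5"))   # Gi0/2 -- most specific prefix wins
print(fib_lookup("10.2.0.1"))   # Gi0/1
print(fib_lookup("8.8.8.8"))    # Gi0/0 -- falls back to the default route
```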

Question 1

Explanation

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB. An example of the adjacency table is shown below:

show_adjacency.jpg

Note: A fast cache is only used when fast switching is enabled while CEF is disabled.

Question 2

Explanation

Cisco IOS software basically supports two modes of CEF load balancing: on a per-destination or per-packet basis.

For per-destination load balancing, a hash is computed from the source and destination IP addresses (-> Answer E is correct). This hash points to exactly one of the adjacency entries in the adjacency table (-> Answer D is correct), ensuring that the same path is used for all packets with this source/destination address pair. If per-packet load balancing is used, the packets are distributed round-robin over the available paths. In either case, the information in the FIB and adjacency tables provides all the necessary forwarding information, just as for non-load-balancing operation.

The number of paths used is limited by the number of entries the routing protocol puts in the routing table; the default in IOS is 4 entries for most IP routing protocols, with the exception of BGP, where it is one entry. The maximum number that can be configured is 6 different paths -> Answer A is not correct.

Reference: https://www.cisco.com/en/US/products/hw/modules/ps2033/prod_technical_reference09186a00800afeb7.html
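The per-destination hashing described above can be sketched as follows (the CRC-based hash and the path names are illustrative; Cisco’s actual hash algorithm differs):

```python
import zlib

# Per-destination CEF load sharing sketch: a hash of (source, destination)
# selects one of the equal-cost adjacencies, so a given src/dst pair
# always takes the same path.
paths = ["Gi0/1", "Gi0/2", "Gi0/3", "Gi0/4"]   # 4 paths: the IOS default

def pick_path(src, dst):
    h = zlib.crc32(f"{src}-{dst}".encode())   # stand-in hash function
    return paths[h % len(paths)]

# The same src/dst pair always hashes to the same adjacency entry:
assert pick_path("10.1.1.1", "10.2.2.2") == pick_path("10.1.1.1", "10.2.2.2")
print(pick_path("10.1.1.1", "10.2.2.2"))
```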

Question 3

Explanation

The Forwarding Information Base (FIB) table – CEF uses a FIB to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and these changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table.

Reference: https://www.cisco.com/c/en/us/support/docs/routers/12000-series-routers/47321-ciscoef.html

Question 4

Explanation

CEF uses a Forwarding Information Base (FIB) to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and those changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table. Because there is a one-to-one correlation between FIB entries and routing table entries, the FIB contains all known routes and eliminates the need for route cache maintenance that is associated with earlier switching paths such as fast switching and optimum switching.

Note: In order to view the Routing information base (RIB) table, use the “show ip route” command. To view the Forwarding Information Base (FIB), use the “show ip cef” command. RIB is in Control plane while FIB is in Data plane.

Question 5

Explanation

Both answer A and answer C in this question are correct. It is hard to say which correct answer is better.

Question 6

Explanation

“Punt” is often used to describe the action of moving a packet from the fast path (CEF) to the route processor for handling.

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table.

Process switching is the slowest switching method (compared to fast switching and Cisco Express Forwarding) because it must find the destination in the routing table. Process switching must also construct a new Layer 2 frame header for every packet. With process switching, when a packet comes in, the scheduler calls a process that examines the routing table, determines which interface the packet should be switched to, and then switches the packet. The problem is that this happens for every packet.

Reference: http://www.cisco.com/web/about/security/intelligence/acl-logging.html

Question 7

Explanation

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The FIB maintains next-hop address information based on the information in the IP routing table (RIB).

Note: In order to view the Routing information base (RIB) table, use the “show ip route” command. To view the Forwarding Information Base (FIB), use the “show ip cef” command. RIB is in Control plane while FIB is in Data plane.

Virtualization Questions

February 3rd, 2021 digitaltut 34 comments

Virtualization Quick Summary

A virtual machine (VM) is a software emulation of a physical server with an operating system. From an application’s point of view, the VM provides the look and feel of a real physical server, including all its components, such as CPU, memory, and network interface cards (NICs).

A hypervisor, also known as a virtual machine monitor, is a software that creates and manages virtual machines. A hypervisor allows one physical server to support multiple guest VMs by virtually sharing its resources, such as memory and processing.

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisors have direct access to the hardware resources and are therefore more efficient than hosted architectures. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors_detail.jpg

Comparison Type 1 and Type 2 hypervisors

                                   | Type 1 hypervisor                                 | Type 2 hypervisor
Other name                         | Bare metal hypervisor                             | Hosted hypervisor
Runs on                            | Underlying physical host machine hardware         | Underlying operating system (host OS)
Best suited for                    | Large, resource-intensive, or fixed-use workloads | Desktop and development environments
Can negotiate dedicated resources? | Yes                                               | No
Knowledge required                 | System administrator-level knowledge              | Basic user knowledge
Examples                           | VMware ESXi, Microsoft Hyper-V, KVM               | Oracle VM VirtualBox, VMware Workstation, Microsoft Virtual PC

Structure of virtualization in a hypervisor

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Although a vSwitch does not run Spanning Tree Protocol, it implements other loop prevention mechanisms. For example, a frame that enters from one VMNIC will not go out of the physical host through a different VMNIC card.

Benefits of Virtualizing

Server virtualization and the use of virtual machines is profoundly changing data center dynamics. Most organizations are struggling with the cost and complexity of hosting multiple physical servers in their data centers. The expansion of the data center, a result of both scale-out server architectures and traditional “one application, one server” sprawl, has created problems in housing, powering, and cooling large numbers of underutilized servers. In addition, IT organizations continue to deal with the traditional cost and operational challenges of matching server resources to organizational needs that seem fickle and ever changing.

Virtual machines can significantly mitigate many of these challenges by enabling multiple application and operating system environments to be hosted on a single physical server while maintaining complete isolation between the guest operating systems and their respective applications. Hence, server virtualization facilitates server consolidation by enabling organizations to exchange a number of underutilized servers for a single highly utilized server running multiple virtual machines.

By consolidating multiple physical servers, organizations can gain several benefits:
+ Underutilized servers can be retired or redeployed.
+ Rack space can be reclaimed.
+ Power and cooling loads can be reduced.
+ New virtual servers can be rapidly deployed.
+ CapEx (higher utilization means fewer servers need to be purchased) and OpEx (few servers means a simpler environment and lower maintenance costs) can be reduced.

Para-virtualization

Para-virtualization is an enhancement of virtualization technology in which a guest operating system (guest OS) is modified prior to installation inside a virtual machine. This allows the guest OSes within the system to share resources and successfully collaborate, rather than attempting to emulate an entire hardware environment. The modification also decreases the execution time required to complete operations that can be problematic in virtual environments.

Paravirtualization.jpg

By granting the guest OS access to the underlying hardware, Para-virtualization enables communication between the guest OS and the hypervisor (using API calls), thus improving performance and efficiency within the system. This is the main difference between Para-virtualization and (traditional) full-virtualization.

Question 1

Explanation

There is nothing special about the configuration of Gi0/0 on R1. Only the Gi0/0 interface on R2 is assigned to VRF VPN_A. The default VRF here is similar to the global routing table concept in Cisco IOS.

Question 2

Explanation

Answer C and answer D are not correct, as only the route distinguisher (RD) identifies the customer routing table and “allows customers to be assigned overlapping addresses”.

Answer A is not correct, as “when BGP is configured, route targets are transmitted as BGP extended communities”.

Question 3

Explanation

In VRF-Lite, Route distinguisher (RD) identifies the customer routing table and allows customers to be assigned overlapping addresses. Therefore it can support multiple customers with overlapping addresses -> Answer E is correct.

VRFs are commonly used for MPLS deployments; when we use VRFs without MPLS, it is called VRF-lite -> Answer C is not correct.

– VRF-lite does not support IGRP and ISIS. ( -> Answer B is not correct)
– The capability vrf-lite subcommand under router ospf should be used when configuring OSPF as the routing protocol between the PE and the CE.
– VRF-lite does not affect the packet switching rate. (-> Answer A is not correct)

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/25ew/configuration/guide/conf/vrf.html#wp1045190

Question 4

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisors have direct access to the hardware resources and are therefore more efficient than hosted architectures. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 5

Explanation

Server virtualization and the use of virtual machines is profoundly changing data center dynamics. Most organizations are struggling with the cost and complexity of hosting multiple physical servers in their data centers. The expansion of the data center, a result of both scale-out server architectures and traditional “one application, one server” sprawl, has created problems in housing, powering, and cooling large numbers of underutilized servers. In addition, IT organizations continue to deal with the traditional cost and operational challenges of matching server resources to organizational needs that seem fickle and ever changing.

Virtual machines can significantly mitigate many of these challenges by enabling multiple application and operating system environments to be hosted on a single physical server while maintaining complete isolation between the guest operating systems and their respective applications. Hence, server virtualization facilitates server consolidation by enabling organizations to exchange a number of underutilized servers for a single highly utilized server running multiple virtual machines.

By consolidating multiple physical servers, organizations can gain several benefits:
+ Underutilized servers can be retired or redeployed.
+ Rack space can be reclaimed.
+ Power and cooling loads can be reduced.
+ New virtual servers can be rapidly deployed.
+ CapEx (higher utilization means fewer servers need to be purchased) and OpEx (few servers means a simpler environment and lower maintenance costs) can be reduced.

Reference: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/net_implementation_white_paper0900aecd806a9c05.html

Question 6

Explanation

A virtual machine (VM) is a software emulation of a physical server with an operating system. From an application’s point of view, the VM provides the look
and feel of a real physical server, including all its components, such as CPU, memory, and network interface cards (NICs).

The virtualization software that creates VMs and performs the hardware abstraction that allows multiple VMs to run concurrently is known as a hypervisor.

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisors have direct access to the hardware resources and are therefore more efficient than hosted architectures. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 7

Question 8

Explanation

Because some PE routers might receive routing information they do not require, a basic requirement is to be able to filter the MP-iBGP updates at the ingress to the PE router so that the router does not need to keep this information in memory.

The Automatic Route Filtering feature fulfills this filtering requirement. This feature is available by default on all PE routers, and no additional configuration is necessary to enable it. Its function is to automatically filter VPN-IPv4 routes that contain a route target extended community that does not match any of the PE’s configured VRFs. This effectively discards any unwanted VPN-IPv4 routes silently, thus reducing the amount of information that the PE has to store in memory -> Answer D is correct.

Reference: MPLS and VPN Architectures Book, Volume 1

The reason that PE1 dropped the route is there is no “route-target import 999:999” command on PE1 (so we see the “DENIED due to:extended community not supported” in the debug) so we need to type this command to accept this route -> Answer E is correct.

Question 9

Explanation

Broadcast radiation refers to the processing that is required every time a broadcast is received on a host. Although IP is very efficient from a broadcast perspective when compared to traditional protocols such as Novell Internetwork Packet Exchange (IPX) Service Advertising Protocol (SAP), virtual machines and the vswitch implementation require special consideration. Because the vswitch is software based, as broadcasts are received the vswitch must interrupt the server CPU to change contexts to enable the vswitch to process the packet. After the vswitch has determined that the packet is a broadcast, it copies the packet to all the VMNICs, which then pass the broadcast packet up the stack to process. This processing overhead can have a tangible effect on overall server performance if a single domain is hosting a large number of virtual machines.

Note: This overhead effect is not a limitation of the vswitch implementation. It is a result of the software-based nature of the vswitch embedded in the ESX hypervisor.

Reference: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/net_implementation_white_paper0900aecd806a9c05.html

—————————————————————-

Note about the structure of virtualization in a hypervisor:

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Although a vSwitch does not run Spanning Tree Protocol, it implements other loop prevention mechanisms. For example, a frame that enters from one VMNIC will not go out of the physical host through a different VMNIC card.

Question 10

Explanation

A bare-metal hypervisor (Type 1) is a layer of software we install directly on top of a physical server and its underlying hardware. There is no software or any operating system in between, hence the name bare-metal hypervisor. A Type 1 hypervisor is proven in providing excellent performance and stability since it does not run inside Windows or any other operating system. These are the most common type 1 hypervisors:

+ VMware vSphere with ESX/ESXi
+ KVM (Kernel-Based Virtual Machine)
+ Microsoft Hyper-V
+ Oracle VM
+ Citrix Hypervisor (formerly known as Xen Server)

Virtualization Questions 2

February 3rd, 2021 digitaltut 14 comments

Question 1

Explanation

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 2

Explanation

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

-> Therefore answer B is correct.

Answer E is not correct as besides the virtual switch running as a separate virtual machine, we also need to set up a trunk link so that VMs can communicate.

Answer C is not correct as it is too complex when we want to connect two VMs on the same hypervisor. VXLAN should only be used between two VMs on different physical servers.

Answer D is not correct as it uses Layer 3 network (routed link).

Therefore only answer A is left. We can connect two VMs via a trunk link with an external Layer 2 switch.

Question 3

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisors have direct access to the hardware resources and are therefore more efficient than hosted architectures. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 4

Explanation

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Question 5

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In a type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server, and instances of an operating system (OS) are then installed on the hypervisor. A type 1 hypervisor has direct access to the hardware resources, so it is more efficient than a hosted architecture. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type 1 hypervisors are more efficient and better performing; they are also more secure than type 2 because the flaws and vulnerabilities that are endemic to operating systems are often absent from type 1, bare-metal hypervisors. Type 1 offers better performance, scalability and stability, but is supported by a limited range of hardware.

Type1_Type2_Hypervisors.jpg

Question 6

Explanation

Static routes directly between VRFs are not supported so we cannot configure a direct static route between two VRFs.

The command “ip route vrf Customer1 172.16.1.0 255.255.255.0 172.16.1.1 global” means: in VRF Customer1, in order to reach destination 172.16.1.0/24, use the next-hop IP address 172.16.1.1 in the global routing table. The command “ip route 192.168.1.0 255.255.255.0 Vlan10” tells the router “to reach 192.168.1.0/24, send traffic out interface Vlan10”.
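A fuller sketch of this route-leaking scenario might look like the following. The RD value, interface names and IP addressing are assumptions for illustration only:

R1(config)#ip vrf Customer1
R1(config-vrf)#rd 65000:1
R1(config-vrf)#exit
R1(config)#interface GigabitEthernet0/1
R1(config-if)#ip vrf forwarding Customer1
R1(config-if)#ip address 192.168.1.254 255.255.255.0
R1(config-if)#exit
! Leak a VRF destination via a next hop that lives in the global routing table
R1(config)#ip route vrf Customer1 172.16.1.0 255.255.255.0 172.16.1.1 global
! Point a global route at an interface that belongs to the VRF
R1(config)#ip route 192.168.1.0 255.255.255.0 Vlan10

Leaking in each direction uses a different mechanism here: the “global” keyword for VRF-to-global, and a static route out a VRF interface for global-to-VRF.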

Question 7

Explanation

This question is a bit unclear, but it mentions “dedicated operating systems that provide the virtualization platform” -> it means the hypervisor, so “Type 1 hypervisor” is the best choice here, as a type 1 hypervisor does not require an underlying operating system.

Note: Hosted virtualization is type 2 hypervisor. In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

LISP & VXLAN Questions

February 2nd, 2021 digitaltut 22 comments

Note: If you are not sure about LISP or VXLAN, please read our LISP Tutorial and VXLAN tutorial.

Question 1

Explanation

An Egress Tunnel Router (ETR) connects a site to the LISP-capable part of a core network (such as the Internet), publishes EID-to-RLOC mappings for the site, responds to Map-Request messages, and decapsulates and delivers LISP-encapsulated user data to end systems at the site.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 2

Explanation

Proxy ingress tunnel router (PITR): A PITR is an infrastructure LISP network entity that receives packets from non-LISP sites and encapsulates the packets to LISP sites or natively forwards them to non-LISP sites.

Reference: https://www.ciscopress.com/articles/article.asp?p=2992605

Note: The proxy egress tunnel router (PETR) allows the communication from the LISP sites to the non-LISP sites. The PETR receives LISP encapsulated traffic from ITR.

Question 3

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs)—assigned to end hosts.
+ Routing locators (RLOCs)—assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 4

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs) – assigned to end hosts.
+ Routing locators (RLOCs) – assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 5

Explanation

The 802.1Q VLAN identifier space is only 12 bits, while the VXLAN identifier space is 24 bits. Doubling the number of identifier bits allows the VXLAN ID space to support about 16 million (2^24) Layer 2 segments -> Answer B is not correct.

VXLAN is a MAC-in-UDP encapsulation method that is used in order to extend a Layer 2 or Layer 3 overlay network over a Layer 3 infrastructure that already exists.

Reference: https://www.cisco.com/c/en/us/support/docs/lan-switching/vlan/212682-virtual-extensible-lan-and-ethernet-virt.html

Question 6

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs)—assigned to end hosts.
+ Routing locators (RLOCs)—assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 7

Explanation

VTEPs connect the overlay and underlay networks. They are responsible for encapsulating frames into VXLAN packets to send across the IP network (underlay), then decapsulating the packets when they leave the VXLAN tunnel.

VXLAN_VTEP.jpg

Question 8

Question 9

Explanation

In this question we suppose that we only need to send packets from the LISP site to the non-LISP site successfully. We do not care about the return path (if we did care about the return path, then PETR, PITR, MS & MR would all be needed).

Proxy Egress Tunnel Router (PETR): A LISP device that de-encapsulates packets from LISP sites to deliver them to non-LISP sites.

LISP_PxTR.jpg

When the xTR in LISP Site 1 wants to send traffic to the non-LISP site, the ITR (not the PETR) needs a Map Resolver (MR) to send a Map-Request to. When the ITR (the xTR in LISP Site 1 in the figure above) receives a negative Map-Reply packet from the MR, it caches that prefix and maps it to the PETR.

Good reference: https://netmindblog.com/2019/12/04/lisp-locator-id-separation-protocol-part-ii-pxtr/

Question 10

Explanation

Locator ID Separation Protocol (LISP) solves this issue by separating the location and identity of a device through the Routing locator (RLOC) and Endpoint identifier (EID):

+ Endpoint identifiers (EIDs) – assigned to end hosts.
+ Routing locators (RLOCs) – assigned to devices (primarily routers) that make up the global routing system.

Question 11

Explanation

VXLAN uses an 8-byte VXLAN header that consists of a 24-bit VNID and a few reserved bits. The VXLAN header together with the original Ethernet frame goes in the UDP payload. The 24-bit VNID is used to identify Layer 2 segments and to maintain Layer 2 isolation between the segments.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/vxlan/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x_chapter_010.html

Let’s see the structure of a VXLAN packet to understand how it carries the VNI (note: VNI = VNID)

VXLAN_Packet_Structure.jpg

The key fields for the VXLAN packet in each of the protocol headers are:

+ Outer MAC header (14 bytes with 4 bytes optional) – Contains the MAC address of the source VTEP and the MAC address of the next-hop router. Each router along the packet’s path rewrites this header so that the source address is the router’s MAC address and the destination address is the next-hop router’s MAC address.

+ Outer IP header (20 bytes) – Contains the IP addresses of the source and destination VTEPs.
+ (Outer) UDP header (8 bytes) – Contains source and destination UDP ports:
– Source UDP port: The VXLAN protocol repurposes this standard field in a UDP packet header. Instead of using this field for the source UDP port, the protocol uses it as a numeric identifier for the particular flow between VTEPs. The VXLAN standard does not define how this number is derived, but the source VTEP usually calculates it from a hash of some combination of fields from the inner Layer 2 packet and the Layer 3 or Layer 4 headers of the original frame.
– Destination UDP port: The VXLAN UDP port. The Internet Assigned Numbers Authority (IANA) allocates port 4789 to VXLAN.

+ VXLAN header (8 bytes) – Contains the 24-bit VNI (or VNID).
+ Original Ethernet/L2 frame – Contains the original Layer 2 Ethernet frame.
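To see how a VTEP is tied to a VNI in practice, here is a minimal flood-and-learn VXLAN sketch in NX-OS style (the VLAN, VNI and multicast group values are assumptions for illustration):

feature nv overlay
feature vn-segment-vlan-based
!
vlan 10
 vn-segment 5010
!
interface nve1
 no shutdown
 source-interface loopback0
 member vni 5010 mcast-group 239.1.1.1

Also note that the outer headers above add 14 + 20 + 8 + 8 = 50 bytes of encapsulation overhead, so the underlay MTU should be raised by at least 50 bytes to carry full-size inner frames.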

EIGRP & OSPF Questions

February 1st, 2021 digitaltut 25 comments

Quick OSPF Overview

OSPF router ID selection:

OSPF uses the following criteria to select the router ID:
1. Manual configuration of the router ID (via the “router-id x.x.x.x” command under OSPF router configuration mode).
2. Highest IP address on a loopback interface.
3. Highest IP address on a non-loopback and active (no shutdown) interface.
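To make the router ID deterministic instead of relying on interface addresses, it is common to set it manually. A minimal example (the process number and ID value are arbitrary):

R1(config)#router ospf 1
R1(config-router)#router-id 1.1.1.1

Note that a newly configured router ID only takes effect after the OSPF process restarts (for example via “clear ip ospf process”).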

OSPF forms neighbor relationships with other OSPF routers on the same segment by exchanging hello packets. The hello packets contain various parameters. Some of them should match between neighboring routers. These include:

+ Hello and Dead intervals
+ Area ID
+ Authentication type and password
+ Stub Area flag
+ Subnet ID and Subnet mask

When OSPF neighbor relationship is formed, a router goes through several state changes before it becomes fully adjacent with its neighbor. The states are Down -> Attempt (optional) -> Init -> 2-Way -> Exstart -> Exchange -> Loading -> Full. Short descriptions about these states are listed below:

Down: no information (hellos) has been received from this neighbor

Attempt: only valid for manually configured neighbors in an NBMA environment. In Attempt state, the router sends unicast hello packets every poll interval to the neighbor, from which hellos have not been received within the dead interval

Init: specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet

2-Way: indicates bi-directional communication has been established between two routers

Exstart: Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR

Exchange: OSPF routers exchange and compare database descriptor (DBD) packets

Loading: In this state, the actual exchange of link state information occurs. Outdated or missing entries are also requested to be resent

Full: routers are fully adjacent with each other

When OSPF is run on a network, two important events happen before routing information is exchanged:
+ Neighbors are discovered using multicast hello packets.
+ DR and BDR are elected for every multi-access network to optimize the adjacency building process. All the routers in that segment should be able to communicate directly with the DR and BDR for proper adjacency (in the case of a point-to-point network, DR and BDR are not necessary since there are only two routers in the segment, and hence the election does not take place).
For a successful neighbor discovery on a segment, the network must allow broadcasts or multicast packets to be sent.

In an NBMA network topology, which is inherently nonbroadcast, neighbors are not discovered automatically. OSPF tries to elect a DR and a BDR due to the multi-access nature of the network, but the election fails since neighbors are not discovered. Neighbors must be configured manually to overcome these problems.
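For example, on an NBMA hub router the neighbors can be statically defined under the OSPF process (the addresses here are assumptions for illustration):

R1(config)#router ospf 1
R1(config-router)#network 10.0.0.0 0.0.0.255 area 0
R1(config-router)#neighbor 10.0.0.2
R1(config-router)#neighbor 10.0.0.3

With static neighbors defined, OSPF sends unicast hellos to each one instead of relying on multicast discovery.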

Each OSPF area only allows some specific LSAs to pass through. Below is a summarization of which LSAs are allowed in each OSPF area:

Area – Restriction
+ Normal (Backbone & Non-Backbone): No Type 7 LSAs allowed
+ Stub: No Type 5 AS-external LSAs allowed
+ Totally Stub: No Type 3, 4 or 5 LSAs allowed except the default summary route
+ NSSA: No Type 5 AS-external LSAs allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
+ NSSA Totally Stub: No Type 3, 4 or 5 LSAs except the default summary route, but Type 7 LSAs that convert to Type 5 at the NSSA ABR are allowed

Or this table will help you grasp it:

OSPF_LSAs_Area_types.jpg

OSPF Summarization
OSPF offers two methods of route summarization:
1) Summarization of internal routes performed on the ABRs
2) Summarization of external routes performed on the ASBRs

1) To summarize routes at the area boundary (ABRs), use the command:
area area-id range ip-address mask [advertise | not-advertise] [cost cost]

An internal summary route is generated if at least one subnet within the area falls in the summary address range and the summarized route metric is equal to the lowest cost of all the subnets within the summary address range. Interarea summarization can only be done for the intra-area routes of connected areas, and the ABR creates a route to Null0 to avoid loops in the absence of more specific routes.

2) To summarize external routes on the domain boundary (ASBRs), use the command:
summary-address {{ip-address mask} | {prefix mask}} [not-advertise] [tag tag]
The ASBR will summarize external routes before injecting them into the OSPF domain as type 5 external LSAs.

Note: An exception of using the “summary-address” is at the boundary of a NSSA area.

In both methods of route summarization described above, a summarized route is only generated if at least one subnet in the routing table falls in the summary address range.
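Putting both methods together, a short example (the area numbers and prefixes are assumptions for illustration):

! On the ABR: summarize intra-area routes of area 1 into one advertisement
R2(config)#router ospf 1
R2(config-router)#area 1 range 172.16.0.0 255.255.252.0
!
! On the ASBR: summarize redistributed (external) routes
R3(config)#router ospf 1
R3(config-router)#summary-address 192.168.0.0 255.255.252.0

In both cases the router also installs a discard route to Null0 for the summary to prevent loops.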

OSPF point-to-point network type

Setting OSPF to point-to-point mode results in advertised routes containing the actual subnet mask instead of the default behavior of advertising /32 for a loopback interface.
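For example, to advertise a loopback with its configured /24 mask instead of the default /32 (addresses assumed for illustration):

R1(config)#interface Loopback0
R1(config-if)#ip address 10.1.1.1 255.255.255.0
R1(config-if)#ip ospf network point-to-point

Without the last command, OSPF treats the loopback as a stub host and advertises 10.1.1.1/32.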

Summarization in EIGRP and OSPF

Unlike OSPF where we can summarize only on ABR or ASBR, in EIGRP we can summarize anywhere.

Manual summarization can be applied anywhere in EIGRP domain, on every router, on every interface via the ip summary-address eigrp as-number address mask [administrative-distance ] command (for example: ip summary-address eigrp 1 192.168.16.0 255.255.248.0). Summary route will exist in routing table as long as at least one more specific route exists. If the last specific route disappears, summary route will also fade out. The metric used by EIGRP manual summary route is the minimum metric of the specific routes. The example below shows how to configure EIGRP manual summarization:

Manual_summarization_EIGRP.jpg

R1(config)#interface fa1/0
R1(config-if)#ip summary-address eigrp 10 172.16.0.0 255.255.254.0

If you are not sure about OSPF LSA Types, please read our OSPF LSA Types Tutorial.

OSPF area filtering

The command “area area-number filter-list prefix prefix-list-name in”: prevents prefixes from entering this area (the in keyword here means “into” the area)
The command “area area-number filter-list prefix prefix-list-name out”: prevents prefixes advertised out of this area from reaching the other areas that the ABR is connected to.
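A sketch of Type 3 LSA filtering with a prefix list (the list name and prefixes are assumptions for illustration):

R2(config)#ip prefix-list FILTER-10 deny 10.10.10.0/24
R2(config)#ip prefix-list FILTER-10 permit 0.0.0.0/0 le 32
R2(config)#router ospf 1
R2(config-router)#area 1 filter-list prefix FILTER-10 in

This stops the ABR from injecting the 10.10.10.0/24 summary LSA into area 1 while permitting everything else.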

Question 1

Explanation

The following different OSPF types are compatible with each other:

+ Broadcast and Non-Broadcast (adjust hello/dead timers)
+ Point-to-Point and Point-to-Multipoint (adjust hello/dead timers)

Broadcast and Non-Broadcast networks elect DR/BDR so they are compatible. Point-to-point/multipoint do not elect DR/BDR so they are compatible.

Question 2

Explanation

On Ethernet interfaces the OSPF hello interval is 10 seconds by default, so in this case there would be a hello interval mismatch -> the OSPF adjacency would not be established.

Question 3

Explanation

This combination of commands is known as “conditional debug” and will filter the debug output based on your conditions. Each condition added behaves like an ‘AND’ operator in Boolean logic. Some examples of the “debug ip ospf hello” output are shown below:

*Oct 12 14:03:32.595: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/0 from 192.168.12.2
*Oct 12 14:03:33.227: OSPF: Rcv hello from 1.1.1.1 area 0 on FastEthernet1/0 from 192.168.12.1
*Oct 12 14:03:33.227: OSPF: Mismatched hello parameters from 192.168.12.1
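For example, to limit “debug ip ospf hello” output to a single interface, the condition is set first (the interface name is an assumption for illustration):

R1#debug condition interface FastEthernet1/0
R1#debug ip ospf hello

The condition is removed with “no debug condition interface FastEthernet1/0”, and “undebug all” stops the debugging itself.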

Question 4

Explanation

Suppose we configured an EIGRP stub router so that it only advertises connected and summary routes, but we also want to have an exception to this rule. Then we can configure a leak-map. For example:

R4(config)#router eigrp 1
R4(config-router)#eigrp stub
R4(config-router)#exit
R4(config)#ip access-list standard R4_L0opback0
R4(config-std-nacl)#permit host 4.4.4.4
R4(config-std-nacl)#exit
R4(config)#route-map R4_L0opback0_LEAKMAP permit 10
R4(config-route-map)#match ip address R4_L0opback0
R4(config-route-map)#exit
R4(config)#router eigrp 1
R4(config-router)#eigrp stub leak-map R4_L0opback0_LEAKMAP

As we can see, the leak-map feature goes along with the ‘eigrp stub’ command.

Question 5

Explanation

EIGRP provides a mechanism to load balance over unequal cost paths (called unequal cost load balancing) through the “variance” command. In other words, EIGRP will install all paths with metric < variance * best_metric into the local routing table, provided that they meet the feasibility condition to prevent routing loops. A path that meets this requirement is called a feasible successor. If a path is not a feasible successor, it is not used in load balancing.

Note: The feasibility condition states that, the Advertised Distance (AD) of a route must be lower than the feasible distance of the current successor route.
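For example, assuming a best-path metric of 10000 and a feasible-successor path with metric 18000 (both values assumed for illustration), variance 2 installs both paths:

R1(config)#router eigrp 1
R1(config-router)#variance 2

Any feasible-successor path with metric < 2 × 10000 = 20000 is then installed in the routing table alongside the successor.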

Question 6

Explanation

OTP leverages existing LISP encapsulation which:
+ Allows dynamic multi-point tunneling (-> Answer A is correct)
+ Provides instance ID field to optionally support virtualization across WAN (see EVN WAN Extension section)
OTP does NOT use the LISP control plane (map server/resolver, etc.) (-> therefore answer B is not correct); instead it uses EIGRP to exchange routes and provide the next hop (-> answer C and answer D are not correct), which LISP encapsulation uses to reach remote prefixes.

Reference: https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ip-routing/whitepaper_C11-730404.html

Question 7

Explanation

When OSPF adjacency is formed, a router goes through several state changes before it becomes fully adjacent with its neighbor. The states are Down -> Attempt (optional) -> Init -> 2-Way -> Exstart -> Exchange -> Loading -> Full. Short descriptions about these states are listed below:

Down: no information (hellos) has been received from this neighbor.

Attempt: only valid for manually configured neighbors in an NBMA environment. In Attempt state, the router sends unicast hello packets every poll interval to the neighbor, from which hellos have not been received within the dead interval.

Init: specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet
2-Way: indicates bi-directional communication has been established between two routers.

Exstart: Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR.

Exchange: OSPF routers exchange database descriptor (DBD) packets

Loading: In this state, the actual exchange of link state information occurs

Full: routers are fully adjacent with each other

(Reference: http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080093f0e.shtml)

Neighbors Stuck in Exstart/Exchange State
The problem occurs most frequently when attempting to run OSPF between a Cisco router and another vendor’s router. The problem occurs when the maximum transmission unit (MTU) settings for neighboring router interfaces don’t match. If the router with the higher MTU sends a packet larger than the MTU set on the neighboring router, the neighboring router ignores the packet.

Question 8

Explanation

EIGRP supports unequal-cost load balancing via the “variance” command, while OSPF only supports equal-cost load balancing.

Question 9

Explanation

The Broadcast network type is the default for an OSPF enabled ethernet interface (while Point-to-Point is the default OSPF network type for Serial interface with HDLC and PPP encapsulation).

Reference: https://www.oreilly.com/library/view/cisco-ios-cookbook/0596527225/ch08s15.html

Question 10

Explanation

Summary ASBR LSA (Type 4) – Generated by the ABR to describe an ASBR to routers in other areas so that routers in other areas know how to get to external routes through that ASBR. For example, suppose R8 is redistributing external route (EIGRP, RIP…) to R3. This makes R3 an Autonomous System Boundary Router (ASBR). When R2 (which is an ABR) receives this LSA Type 1 update, R2 will create LSA Type 4 and flood into Area 0 to inform them how to reach R3. When R5 receives this LSA it also floods into Area 2.

In the above example, the only ASBR belongs to area 1 so the two ABRs (R2 & R5) send LSA Type 4 to area 0 & area 2 (not vice versa). This is an indication of the existence of the ASBR in area 1.

OSPF_LSAs_Types_4.jpg

Note:
+ Type 4 LSAs contain the router ID of the ASBR.
+ There are no LSA Type 4 injected into Area 1 because every router inside area 1 knows how to reach R3. R3 only uses LSA Type 1 to inform R2 about R8 and inform R2 that R3 is an ASBR.

EIGRP & OSPF Questions 2

February 1st, 2021 digitaltut 10 comments

Question 1

Explanation

Broadcast and Non-Broadcast networks elect DR/BDR while Point-to-point/multipoint do not elect DR/BDR. Therefore we have to set the two Gi0/0 interfaces to point-to-point or point-to-multipoint network to ensure that a DR/BDR election does not occur.

Question 2

Question 3

Question 4

Explanation

The “network 20.0.0.0 0.0.0.255 area 0” command on R2 did not cover the IP address of Fa1/1 interface of R2 so OSPF did not run on this interface. Therefore we have to use the command “network 20.1.1.2 0.0.255.255 area 0” to turn on OSPF on this interface.

Note: Other “network” statements whose wildcard mask also covers the address 20.1.1.2 would enable OSPF on this interface too, but answer C is the best answer here.

The “network 0.0.0.0 255.255.255.255 area 0” command on R1 will run OSPF on all active interfaces of R1.

Question 5

Question 6

Explanation

We must reconfigure the IP address after assigning an interface to a VRF (or removing it from one). Otherwise that interface does not have an IP address.

Question 7

Explanation

By default, EIGRP metric is calculated:

metric = bandwidth + delay

While OSPF is calculated by:

OSPF metric = Reference bandwidth / Interface bandwidth in bps

(Cisco uses 100 Mbps (10^8 bps) as the default reference bandwidth. With this reference bandwidth, our equation would be:

Cost = 10^8 / interface bandwidth in bps)
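For instance, with the default reference bandwidth a 10 Mbps interface costs 10^8 / 10^7 = 10, while every interface of 100 Mbps and above costs 1. To differentiate faster links, the reference bandwidth can be raised (consistently on every router in the area):

R1(config)#router ospf 1
R1(config-router)#auto-cost reference-bandwidth 10000

The value is in Mbps, so 10000 corresponds to 10 Gbps: a 10 Gbps link then costs 1, a 1 Gbps link costs 10, and a 100 Mbps link costs 100.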

BGP Questions

January 31st, 2021 digitaltut 28 comments

If you are not sure about BGP, please read our BGP tutorial.

BGP Quick Summary:

Protocol type: Path Vector
Type: EGP (External Gateway Protocol)
Packet Types: Open, Update, KeepAlive, Notification
Administrative Distance: eBGP: 20; iBGP: 200
Transport: TCP port 179
Neighbor States: Idle -> Connect -> Active -> OpenSent -> OpenConfirm -> Established (a session falls into Active only when the TCP connection cannot complete from Connect)
1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number, which tells us the number of prefixes the router has received from a neighbor or peer group.
Path Selection Attributes: (highest) Weight > (highest) Local Preference > Originate > (shortest) AS Path > Origin > (lowest) MED > External > IGP Cost > eBGP Peering > (lowest) Router ID
(Originate: prefer routes that it installed into BGP by itself over a route that another router installed in BGP)
Authentication: MD5
BGP Origin codes: i – IGP (injected by “network” statement), e – EGP, ? – Incomplete
AS number range: Private AS range: 64512 – 65535, Globally (unique) AS: 1 – 64511

More information about popular Path Selection Attributes
Weight Attribute:
+ Cisco proprietary
+ First attribute used in Path selection
+ Only used locally in a router (not be exchanged between BGP neighbors)
+ Higher weight is preferred
+ Default value is 0
Weight_BGP_Attribute_Influence.jpg

Local Preference (LocalPrf) Attribute:
+ Sent to all iBGP neighbors (not exchanged between eBGP neighbors)
+ Used to choose the path to external BGP neighbors
+ Higher value is preferred
+ Default value is 100

LocalPreference_BGP_Influence.jpg

Note: Although the Local Preference attribute is only sent to iBGP neighbors and is not exchanged between eBGP neighbors, we can apply it to an eBGP neighbor (inbound direction) to affect our local AS’s choice.

For example in the topology above, we can use Local Preference on R2 (inbound direction) for the R2-R4 eBGP connection to affect routing decision on R1 toward R4.

Unlike Weight attribute, Local Preference attribute is advertised to all iBGP neighbors.

MED Attribute:
+ Optional nontransitive attribute (nontransitive means that we can only advertise MED to routers that are one AS away)
+ Sent to external BGP neighbors (but not propagated beyond the neighboring AS)
+ Lower value is preferred (it can be considered the external metric of a route)
+ Default value is 0

MED_BGP_Attribute_Influence.jpg

Question 1

Explanation

The BGP session may report in the following states

1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number, which tells us the number of prefixes the router has received from a neighbor or peer group.

Question 2

Explanation

With BGP, we must advertise the correct network and subnet mask in the “network” command (in this case network 10.1.1.0/24 on R1 and network 10.2.2.0/24 on R2). BGP is very strict in its routing advertisements: BGP only advertises a network that exists exactly in the routing table. In this case, if you put the command “network x.x.0.0 mask 255.255.0.0” or “network x.0.0.0 mask 255.0.0.0” or “network x.x.x.x mask 255.255.255.255” then BGP will not advertise anything.

It is easy to establish eBGP neighborship via the direct link. But let’s see what are required when we want to establish eBGP neighborship via their loopback interfaces. We will need two commands:
+ The command “neighbor 10.2.2.2 ebgp-multihop 2” on R1 and “neighbor 10.1.1.1 ebgp-multihop 2” on R2. This command increases the TTL value to 2 so that BGP packets can reach the BGP neighbor which is two hops away.
+ A route to the neighbor loopback interface. For example: “ip route 10.2.2.0 255.255.255.0 192.168.10.2” on R1 and “ip route 10.1.1.0 255.255.255.0 192.168.10.1” on R2
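A sketch of R1’s side of the loopback peering, with R2’s configuration mirroring it (the AS numbers are assumptions for illustration):

R1(config)#ip route 10.2.2.0 255.255.255.0 192.168.10.2
R1(config)#router bgp 65001
R1(config-router)#neighbor 10.2.2.2 remote-as 65002
R1(config-router)#neighbor 10.2.2.2 ebgp-multihop 2
R1(config-router)#neighbor 10.2.2.2 update-source Loopback0
R1(config-router)#network 10.1.1.0 mask 255.255.255.0

In practice the “update-source Loopback0” line is also needed so that R1 sources the TCP session from its loopback; otherwise R2 would see the session arrive from R1’s physical interface address and reject it.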

Question 3

Explanation

The ‘>’ shown in the output above indicates that the path with a next hop of 192.168.101.2 is the current best path.

Path Selection Attributes: Weight > Local Preference > Originate > AS Path > Origin > MED > External > IGP Cost > eBGP Peering > Router ID

BGP prefers the path with highest weight but the weights here are all 0 (which indicate all routes that are not originated by the local router) so we need to check the Local Preference. A path without LOCAL_PREF (LocPrf column) means it has the default value of 100. Therefore we can find the two next best paths with the next hop of 192.168.101.18 and 192.168.101.10.

We have to move to the next path selection attribute: Originate. BGP prefers the path that the local router originated (which is indicated with the “next hop 0.0.0.0”). But none of the two best paths is self-originated.

The AS Path of the next hop 192.168.101.18 (one AS away) is shorter than the AS Path of the next hop 192.168.101.10 (two ASes away) so the next hop 192.168.101.18 will be chosen as the next best path.

Question 4

Explanation

Path Selection Attributes: Weight > Local Preference > Originate > AS Path > Origin > MED > External > IGP Cost > eBGP Peering > Router ID

Question 5

Explanation

Local preference is an indication to the AS about which path has preference to exit the AS in order to reach a certain network. A path with a higher local preference is preferred. The default value for local preference is 100.

Unlike the weight attribute, which is only relevant to the local router, local preference is an attribute that routers exchange in the same AS. The local preference is set with the “bgp default local-preference value” command.

In this case, both R3 & R4 have exit links but R4 has higher local-preference so R4 will be chosen as the preferred exit point from AS 200.

(Reference: http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a00800c95bb.shtml#localpref)
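Local preference can be set for all routes with “bgp default local-preference”, or selectively with a route-map. A sketch of the selective approach on R4 (the route-map name and neighbor address are assumptions for illustration):

R4(config)#route-map SET-LP permit 10
R4(config-route-map)#set local-preference 200
R4(config-route-map)#exit
R4(config)#router bgp 200
R4(config-router)#neighbor 192.168.45.5 route-map SET-LP in

Applied inbound on the eBGP session, the higher local preference (200 versus the default 100) is advertised to all iBGP peers in AS 200, making R4 the preferred exit point.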

Question 6

Question 7

Explanation

Unlike EIGRP or OSPF, when a “network” command is declared under BGP, BGP advertises that network without knowing whether it actually owns it. Therefore in this question R2 will advertise network 10.0.0.0/24, which collides with the same network on R1, so we must remove this statement on R2 as it does not have this network. Although answer A is not totally correct (it should be “no network 10.0.0.0 mask 255.255.255.0” instead), it is the best choice here.

Question 8

Explanation

R3 advertises BGP updates to R1 with AS 200 prepended multiple times, so R1 believes the path to reach AS 200 via R3 is longer than via R2, and R1 will choose R2 to forward traffic to AS 200.
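The behavior described is AS-path prepending. A sketch of how R3 might prepend its own AS toward R1 (the route-map name and neighbor address are assumptions for illustration):

R3(config)#route-map PREPEND permit 10
R3(config-route-map)#set as-path prepend 200 200 200
R3(config-route-map)#exit
R3(config)#router bgp 200
R3(config-router)#neighbor 192.168.13.1 route-map PREPEND out

R1 then sees an AS path of 200 200 200 200 via R3 versus 200 via R2, and prefers R2 on AS-path length.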

Wireless Questions

January 30th, 2021 digitaltut 42 comments

Quick Wireless Summary
Cisco Access Points (APs) can operate in one of two modes: autonomous or lightweight
+ Autonomous: self-sufficient and standalone. Used for small wireless networks. Each autonomous AP must be configured with a management IP address so that it can be remotely accessed using Telnet, SSH, or a web interface. Each AP must be individually managed and maintained unless you use a management platform such as Cisco DNA Center.
+ Lightweight: The term ‘lightweight’ refers to the fact that these devices cannot work independently. A Cisco lightweight AP (LAP) has to join a Wireless LAN Controller (WLC) to function. LAP and WLC communicate with each other via a logical pair of CAPWAP tunnels.

Control and Provisioning of Wireless Access Points (CAPWAP) is an IETF standard for control messaging for setup, authentication and operations between APs and WLCs. CAPWAP is similar to LWAPP except for the following differences:

+ CAPWAP uses Datagram Transport Layer Security (DTLS) for authentication and encryption to protect traffic between APs and controllers. LWAPP uses AES.
+ CAPWAP has a dynamic maximum transmission unit (MTU) discovery mechanism.
+ CAPWAP runs on UDP ports 5246 (control messages) and 5247 (data messages)

An LAP operates in one of several modes:
+ Local mode (default mode): It offers one or more basic service sets (BSS) on a specific channel. The AP maintains a tunnel towards its Wireless LAN Controller. When the AP is not transmitting wireless client frames, it measures the noise floor and interference, and scans for intrusion detection (IDS) events every 180 seconds.
+ FlexConnect mode, formerly known as Hybrid Remote Edge AP (H-REAP) mode: allows data traffic to be switched locally instead of going back to the controller if the CAPWAP tunnel to the WLC is down. The FlexConnect AP can perform standalone client authentication and switch VLAN traffic locally even when it is disconnected from the WLC (locally switched). A FlexConnect AP can also tunnel (via CAPWAP) both user wireless data and control traffic to a centralized WLC (centrally switched). The AP can locally switch traffic between a VLAN and an SSID when the CAPWAP tunnel to the WLC is down.
+ Monitor mode: does not handle data traffic between clients and the infrastructure. It acts like a sensor for location-based services (LBS), rogue AP detection, and IDS
+ Rogue detector mode: monitor for rogue APs. It does not handle data at all.
+ Sniffer mode: runs as a sniffer; it captures and forwards all the packets on a particular channel to a remote machine where you can use a protocol analysis tool (Wireshark, Airopeek, etc.) to review the packets and diagnose issues. Strictly used for troubleshooting purposes.
+ Bridge mode: bridge together the WLAN and the wired infrastructure together.
+ Sensor mode: a special mode that is not listed in the books but that you need to know. In this mode, the device functions much like a WLAN client would, associating and identifying client connectivity issues within the network in real time without requiring an IT technician to be on site.

Mobility Express is the ability to use an access point (AP) as a controller instead of a real WLAN controller. But this solution is only suitable for small to midsize, or multi-site branch locations where you might not want to invest in a dedicated WLC. A Mobility Express WLC can support up to 100 APs.

The 2.4 GHz band is subdivided into multiple channels each allotted 22 MHz bandwidth and separated from the next channel by 5 MHz.
-> A best practice for 802.11b/g/n WLANs requiring multiple APs is to use non-overlapping channels such as 1, 6, and 11.
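To make the spacing concrete, here is a small sketch (the helper names are my own) that derives the 2.4 GHz channel center frequencies and checks which channels overlap, using the 5 MHz spacing and the roughly 22 MHz channel width described above:

```python
# 2.4 GHz ISM band: channel centers are 5 MHz apart starting at 2412 MHz
# (channel 1); each channel occupies roughly 22 MHz of bandwidth.
def center_freq_mhz(channel: int) -> int:
    return 2407 + 5 * channel

def channels_overlap(a: int, b: int, width_mhz: int = 22) -> bool:
    # Two channels overlap when their centers are closer than one channel width
    return abs(center_freq_mhz(a) - center_freq_mhz(b)) < width_mhz

print(center_freq_mhz(1), center_freq_mhz(6), center_freq_mhz(11))  # 2412 2437 2462
print(channels_overlap(1, 6))   # False -> 1, 6 and 11 do not overlap
print(channels_overlap(1, 5))   # True  -> closer channels interfere
```

This is why a best-practice multi-AP design reuses only channels 1, 6, and 11.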

wireless_2_4_GHz_band.png

Antenna

An antenna is a device to transmit and/or receive electromagnetic waves. Electromagnetic waves are often referred to as radio waves. Most antennas are resonant devices, which operate efficiently over a relatively narrow frequency band. An antenna must be tuned (matched) to the same frequency band as the radio system to which it is connected otherwise reception and/or transmission will be impaired.

Types of external antennas:
+ Omnidirectional: Provide 360-degree coverage. Ideal in houses and office areas. This type of antenna is used when coverage in all directions from the antenna is required.

ominidirectionl_antenna_direction.jpg

Omnidirectional Antenna Radiation Pattern

+ Directional: Focus the radio signal in a specific direction. Typically, these antennas have one main lobe and several minor lobes. Examples are the Yagi and parabolic dish

Yagi_radiation_pattern.jpg

Yagi Antenna Radiation Pattern

+ Multiple Input Multiple Output (MIMO) – Uses multiple antennas (up to eight) to increase bandwidth

A real example of how the lobes affect the signal strength:

antenna_radiation_real_example.jpg

Common Antennas

Commonly used antennas in a WLAN system are dipoles, omnidirectional antennas, patches and Yagis as shown below:

Antennas.jpg

Antenna Radiation Patterns

Dipole

Dipole_Antenna_Radiation_Pattern.jpg

A more comprehensive explanation of how we get the 2D patterns in (c) and (d) above is shown below:

Dipole_3d.jpg

Omni

Omni_Radiation_Pattern.jpg

Patch

Antenna_Radiation_Pattern_Patch.jpg

4 x 4 Patch

4x4_Patch_Radiation_Pattern.jpg

Yagi

Yagi_Antenna_Radiation_Pattern.jpg

Wireless Terminologies

Decibels

Decibels (dB) are the accepted method of describing a gain or loss relationship in a communication system. If a level is stated in decibels, it compares the current signal level to a previous level or a preset standard level. The beauty of dB values is that they may simply be added and subtracted. A decibel relationship (for power) is calculated using the following formula:

dB_formula.jpg

“A” might be the power applied to the connector on an antenna, the input terminal of an amplifier or one end of a transmission line. “B” might be the power arriving at the opposite end of the transmission line, the amplifier output or the peak power in the main lobe of radiated energy from an antenna. If “A” is larger than “B”, the result will be a positive number or gain. If “A” is smaller than “B”, the result will be a negative number or loss.

You will notice that the “B” is capitalized in dB. This is because it refers to the last name of Alexander Graham Bell.

Note:

+ dBi is a measure of the increase in signal (gain) by your antenna compared to the hypothetical isotropic antenna (which uniformly distributes energy in all directions) -> It is a ratio. The greater the dBi value, the higher the gain and the more acute the angle of coverage.

+ To divide one number by another, simply subtract their equivalent decibel values. For example, to find 100 ÷ 10:

100 → 20 dB and 10 → 10 dB, so 20 dB – 10 dB = 10 dB, which corresponds to 10

+ dBm is a measure of signal power. It is the power ratio in decibels (dB) of the measured power referenced to one milliwatt (mW). The “m” stands for “milliwatt”.

Example:

At 1700 MHz, 1/4 of the power applied to one end of a coax cable arrives at the other end. What is the cable loss in dB?

Solution:

dB_example.jpg

=> Loss = 10 * (- 0.602) = – 6.02 dB

From the formula above we can calculate that when the power is reduced by half, the loss is about 3 dB: Loss = 10 * log (1/2) ≈ -3 dB. This is an important number to remember.
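The dB formula above is easy to check numerically; a minimal sketch (the function name is my own):

```python
import math

# dB relationship for power: dB = 10 * log10(P_a / P_b)
def power_ratio_db(p_a: float, p_b: float) -> float:
    return 10 * math.log10(p_a / p_b)

# 1/4 of the power arrives -> about -6.02 dB, as in the cable-loss example
print(round(power_ratio_db(1, 4), 2))   # -6.02
# Half power -> the well-known -3 dB point
print(round(power_ratio_db(1, 2), 2))   # -3.01
```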

Beamwidth

The angle, in degrees, between the two half-power points (-3 dB) of an antenna beam, where more than 90% of the energy is radiated.

beamwidth.jpg

A radiation pattern defines the variation of the power radiated by an antenna as a function of the direction away from the antenna.

Polarization describes the way the electric field of the radio wave is oriented.

Antenna gain is the ability of the antenna to radiate more or less in any direction compared to a theoretical antenna.

OFDM

OFDM was proposed in the late 1960s, and a US patent was issued in 1970. OFDM encodes a single transmission into multiple subcarriers. All the slow subchannels are then multiplexed into one fast combined channel.

The trouble with traditional FDM is that the guard bands waste bandwidth and thus reduce capacity. OFDM selects channels that overlap but do not interfere with each other.

FDM_OFDM.gif

OFDM works because the frequencies of the subcarriers are selected so that, at each subcarrier frequency, all other subcarriers do not contribute to the overall waveform.

In this example, three subcarriers are overlapped but do not interfere with each other. Notice that only the peaks of each subcarrier carry data. At the peak of each of the subcarriers, the other two subcarriers have zero amplitude.

OFDM.jpg
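The orthogonality idea can also be checked numerically. In this hedged sketch (the function name is my own), cosine subcarriers at different integer frequencies average to zero against each other over one full symbol period, while a subcarrier against itself does not:

```python
import math

# Discrete check of subcarrier orthogonality: average the product of two
# cosine subcarriers over one full period of n samples.
def correlate(k1: int, k2: int, n: int = 1024) -> float:
    return sum(
        math.cos(2 * math.pi * k1 * i / n) * math.cos(2 * math.pi * k2 * i / n)
        for i in range(n)
    ) / n

print(round(correlate(3, 3), 3))  # 0.5 -> same subcarrier: non-zero
print(round(correlate(3, 4), 3))  # 0.0 -> different subcarriers: orthogonal
```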

Basic Service Set (BSS)

A group of stations that share an access point are said to be part of one BSS.

Extended Service Set (ESS)

Some WLANs are large enough to require multiple access points. A group of access points connected to the same WLAN are known as an ESS. Within an ESS, a client can associate with any one of many access points that use the same Extended service set identifier (ESSID). That allows users to roam about an office without losing wireless connection.

Roaming

Roaming is the movement of a client from one AP to another while still transmitting. Roaming can be done across different mobility groups, but must remain inside the same mobility domain. The wireless client makes decisions on whether to change APs or remain connected to the current AP. There are 2 types of roaming:

A client roaming from AP1 to AP2. These two APs are in the same mobility group and mobility domain

Roaming_Same_Mobile_Group.jpg

Roaming in the same Mobility Group

A client roaming from AP1 to AP2. These two APs are in different mobility groups but in the same mobility domain

Roaming_Different_Mobile_Group.jpg

Roaming in different Mobility Groups (but still in the same Mobility Domain)

Layer 2 & Layer 3 Intercontroller Roam

When a mobile client roams from one AP to another, and if those APs are on different WLCs, then the client makes an intercontroller roam.

When a client starts an intercontroller roam, the two WLCs compare the VLAN IDs allocated to their WLAN interfaces.
– If the VLAN IDs are the same, the client performs a Layer 2 intercontroller roam.
– If the VLAN IDs are different, the WLCs will arrange a Layer 3 or local-to-foreign roam.

Both of the roaming above allow the client to continue using its current IP address on the new AP and WLC.

In a Layer 3 roaming, the original WLC is called the anchor controller, and the WLC where the roamed client is reassociated is called the foreign controller. The client is anchored to the original WLC even if it roams to different controllers.
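The VLAN comparison that decides between a Layer 2 and a Layer 3 intercontroller roam can be sketched as simple pseudologic (a simplification of what the WLCs actually negotiate):

```python
# Simplified model of the WLC comparison described above: matching client
# VLAN IDs give an L2 roam; mismatched IDs force an L3 (anchor/foreign) roam.
def intercontroller_roam_type(vlan_on_wlc1: int, vlan_on_wlc2: int) -> str:
    return "layer2" if vlan_on_wlc1 == vlan_on_wlc2 else "layer3"

print(intercontroller_roam_type(10, 10))  # layer2
print(intercontroller_roam_type(10, 20))  # layer3
```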

Wireless Parameters

Noise

There is radio frequency (RF) everywhere, from human activity, earth heat, space… The amount of unwanted RF is called noise.

Effective Isotropic Radiated Power (EIRP)

EIRP tells you what is the actual transmit power of the antenna. EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm. A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is EIRP = 10 dBm – 5 dB + 8 dBi = 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.
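Because all the terms are in the dB domain, the EIRP computation is a plain sum; a minimal sketch of the formula above (the function name is my own):

```python
# EIRP = Tx Power (dBm) - cable loss (dB) + antenna gain (dBi)
def eirp_dbm(tx_power_dbm: float, cable_loss_db: float, antenna_gain_dbi: float) -> float:
    return tx_power_dbm - cable_loss_db + antenna_gain_dbi

# The worked example above: 10 dBm transmitter, 5 dB cable loss, 8 dBi antenna
print(eirp_dbm(10, 5, 8))  # 13
```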

Receive Signal Strength Indicator (RSSI)

RSSI is a measurement of how well your device can hear a signal from an access point or router (useful signal). It’s a value that is useful for determining if you have enough signal to get a good wireless connection.

Signal-to-noise ratio (SNR or S/N)

SNR is the ratio of received signal power (at the wireless client) to the noise power, and it is typically expressed in decibels (dB). If your signal power and noise power are already in decibel form, then you can subtract the noise power from the signal power: SNR = S – N. This works because subtracting logarithms is equivalent to dividing the underlying numbers, so the difference of the dB values equals the SNR. For example, if the noise floor is -80 dBm and the wireless client is receiving a signal of -65 dBm, then SNR = -65 – (-80) = 15 dB.

RSSI_SNR.jpg

Or we can find the SNR from the RSSI with this formula: SNR = RSSI – N, where N is the noise power.

A practical example to calculate SNR

Here is an example to tie together this information to come up with a very simple RF plan calculator for a single AP and a single client.
+ Access Point Power = 20 dBm
+ 50 foot antenna cable = – 3.35 dB Loss
+ External Access Point Antenna = + 5.5 dBi gain
+ Signal attenuation due to glass wall with metal frame = -6 dB
+ RSSI at WLAN Client = -75 dBm at 100ft from the AP
+ Noise level detected by WLAN Client = -85 dBm at 100ft from the AP

Based on the above, we can calculate the following information:
+ EIRP of the AP at source = 20 – 3.35 + 5.5 = 22.15 dBm
+ Transmit power as signal passes through glass wall = 22.15 – 6 = 16.15 dBm
+ SNR at Client = -75 – (-85) = 10 dB (difference between Signal and Noise)
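The whole link budget above can be reproduced directly; a sketch using the example's numbers (variable names are my own):

```python
# SNR in dB: subtracting dB values divides the underlying powers
def snr_db(rssi_dbm: float, noise_dbm: float) -> float:
    return rssi_dbm - noise_dbm

# Link budget from the example above
eirp = 20 - 3.35 + 5.5           # 22.15 dBm EIRP at the antenna
after_wall = eirp - 6            # 16.15 dBm after the glass wall
print(round(eirp, 2), round(after_wall, 2))  # 22.15 16.15
print(snr_db(-75, -85))          # 10 dB SNR at the client
```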

WPA2 and WPA3

WPA2 is classified into two versions to encrypt Wi-Fi networks:
+ WPA2-Personal uses a pre-shared key (PSK) for authentication
+ WPA2-Enterprise uses 802.1X authentication (typically with a RADIUS server); both versions use AES-based encryption

Similar to WPA2, WPA3 includes:
+ WPA3-Personal: applies to small-scale networks (individual and home networks). 
+ WPA3-Enterprise: applies to medium- and large-sized networks with higher requirements on network management, access control, and security, and uses more advanced security protocols to protect sensitive data of users.

WPA3 provides improvements to the general Wi-Fi encryption, thanks to Simultaneous Authentication of Equals (SAE) replacing the Pre-Shared Key (PSK) authentication method used in prior WPA versions. SAE enables individuals or home users to set Wi-Fi passwords that are easier to remember and provide the same security protection even if the passwords are not complex enough.

WPA3 requires the use of Protected Management Frames. These frames help protect against forging and eavesdropping.

Wifi 6 (802.11ax)

Wifi 6 is an IEEE standard for wireless local-area networks (WLANs) and the successor of 802.11ac. Wi-Fi 6 brings several crucial wireless enhancements for IT administrators when compared to Wi-Fi 5. The first significant change is support for 2.4 GHz; Wi-Fi 5 was limited to using only 5 GHz. While 5 GHz is a ‘cleaner’ band of RF, it doesn’t penetrate walls as well as 2.4 GHz and uses more battery power. For Wi-Fi driven IoT devices, 2.4 GHz will likely continue to be the band of choice for the foreseeable future.

Another critical difference between the two standards is the use of Orthogonal Frequency Division Multiple Access (OFDMA) and MU-MIMO. Wi-Fi 5 was limited to downlink only on MU-MIMO, where Wi-Fi 6 includes downlink and uplink. OFDMA, as referenced above, is also only available in Wi-Fi 6.

Question 1

Explanation

The Lightweight AP (LAP) can discover controllers through your domain name server (DNS). For the access point (AP) to do so, you must configure your DNS to return controller IP addresses in response to CISCO-LWAPP-CONTROLLER.localdomain, where localdomain is the AP domain name. When an AP receives an IP address and DNS information from a DHCP server, it contacts the DNS to resolve CISCO-CAPWAP-CONTROLLER.localdomain. When the DNS sends a list of controller IP addresses, the AP sends discovery requests to the controllers.

The AP will attempt to resolve the DNS name CISCO-CAPWAP-CONTROLLER.localdomain. When the AP is able to resolve this name to one or more IP addresses, the AP sends a unicast CAPWAP Discovery Message to the resolved IP address(es). Each WLC that receives the CAPWAP Discovery Request Message replies with a unicast CAPWAP Discovery Response to the AP.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/107606-dns-wlc-config.html

Question 2

Explanation

Signal to Noise Ratio (SNR) is defined as the ratio of the transmitted power from the AP to the ambient (noise floor) energy present. To calculate the SNR value, we subtract the noise value from the signal value (both in dBm). A higher SNR is always better.

Here is an example to tie together this information to come up with a very simple RF plan calculator for a single AP and a single client.
+ Access Point Power = 20 dBm
+ 50 foot antenna cable = – 3.35 dB Loss
+ Signal attenuation due to glass wall with metal frame = -6 dB
+ External Access Point Antenna = + 5.5 dBi gain
+ RSSI at WLAN Client = -75 dBm at 100ft from the AP
+ Noise level detected by WLAN Client = -85 dBm at 100ft from the AP

Based on the above, we can calculate the following information.
+ EIRP of the AP at source = 20 – 3.35 + 5.5 = 22.15 dBm
+ Transmit power as signal passes through glass wall = 22.15 – 6 = 16.15 dBm
+ SNR at Client = -75 – (-85) = 10 dB (difference between Signal and Noise)

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Borderless_Networks/Unified_Access/CMX/CMX_RFFund.html

Receive Signal Strength Indicator (RSSI) is a measurement of how well your device can hear a signal from an access point or router. It’s a value that is useful for determining if you have enough signal to get a good wireless connection.

EIRP tells you the actual transmit power radiated by the antenna, expressed in dBm.

dBm is an abbreviation for “decibels relative to one milliwatt,” where one milliwatt (1 mW) equals 1/1000 of a watt. It follows the same scale as dB. Therefore 0 dBm = 1 mW, 30 dBm = 1 W, and -20 dBm = 0.01 mW
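The dBm-to-milliwatt relationship is just the dB formula referenced to 1 mW; a minimal sketch (function names are my own):

```python
import math

# dBm references power to 1 mW: dBm = 10 * log10(P_mW)
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

def mw_to_dbm(mw: float) -> float:
    return 10 * math.log10(mw)

print(dbm_to_mw(0))    # 1.0    (0 dBm = 1 mW)
print(dbm_to_mw(30))   # 1000.0 (30 dBm = 1 W)
print(round(mw_to_dbm(0.01)))  # -20 (-20 dBm = 0.01 mW)
```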

Question 3

Explanation

The EAP-FAST protocol is a publicly accessible IEEE 802.1X EAP type that Cisco developed to support customers that cannot enforce a strong password policy and want to deploy an 802.1X EAP type that does not require digital certificates.

EAP-FAST is also designed for simplicity of deployment since it does not require a certificate on the wireless LAN client or on the RADIUS infrastructure yet incorporates a built-in provisioning mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-fixed/72788-CSSC-Deployment-Guide.html

Question 4

Explanation

If a client roams between APs registered to different controllers and the client WLAN on the two controllers is on different subnets, it is called an inter-controller L3 roam.

In this situation the controllers also exchange mobility messages, but the client database entry is handled differently than in an L2 roam: instead of being moved, it is copied. The original controller marks the client entry as “Anchor”, while the new controller marks its entry as “Foreign”; the two controllers are now referred to as the “Anchor controller” and the “Foreign controller”, respectively. The client keeps its original IP address, and that is the real advantage.

Note: Inter-Controller (normally layer 2) roaming occurs when a client roam between two APs registered to two different controllers, where each controller has an interface in the client subnet.

Question 5

Question 6

Explanation

According to the Meraki webpage, radar and rogue APs are two sources of wireless interference.

Interference between different WLANs occurs when the access points within range of each other are set to the same RF channel.

Note: Microwave ovens (not conventional oven) emit damaging interfering signals at up to 25 feet or so from an operating oven. Some microwave ovens emit radio signals that occupy only a third of the 2.4-GHz band, whereas others occupy the entire band.

Reference: https://www.ciscopress.com/articles/article.asp?p=2351131&seqNum=2

So answer D is not a correct answer.

Question 7

Explanation

This paragraph was taken from the link https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wlan-security/69340-web-auth-config.html#c5:

“The next step is to configure the WLC for the Internal web authentication. Internal web authentication is the default web authentication type on WLCs.”

In step 4 of the link above, we will configure Security as described in this question. Therefore we can deduce this configuration is for Internal web authentication.

webauth_security_WLC.jpg

Question 8

Explanation

FlexConnect is a wireless solution for branch office and remote office deployments. It enables customers to configure and control access points in a branch or remote office from the corporate office through a wide area network (WAN) link without deploying a controller in each office.

The FlexConnect access points can switch client data traffic locally and perform client authentication locally when their connection to the controller is lost. When they are connected to the controller, they can also send traffic back to the controller. In the connected mode, the FlexConnect access point can also perform local authentication.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-2/configuration/guide/cg/cg_flexconnect.html

Question 9

Explanation

Deploying WPA2-Enterprise requires a RADIUS server, which handles the task of authenticating network user access. The actual authentication process is based on the 802.1X policy and comes in several different systems labeled EAP. Because each device is authenticated before it connects, a personal, encrypted tunnel is effectively created between the device and the network.

Reference: https://www.securew2.com/solutions/wpa2-enterprise-and-802-1x-simplified/

Question 10

Explanation

802.11r Fast Transition (FT) Roaming is an amendment to the IEEE 802.11 standards. It is a new concept for roaming: the initial handshake with the new AP occurs before the client roams to the target AP, hence the name Fast Transition. 802.11r provides two methods of roaming:

+ Over-the-air: With this type of roaming, the client communicates directly with the target AP using IEEE 802.11 authentication with the Fast Transition (FT) authentication algorithm.
+ Over-the-DS (distribution system): With this type of roaming, the client communicates with the target AP through the current AP. The communication between the client and the target AP is carried in FT action frames between the client and the current AP and is then sent through the controller.

However, neither of these methods deals with legacy clients.

802.11k allows 11k-capable clients to request a neighbor report containing information about known neighbor APs that are candidates for roaming.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/80211r-ft/b-80211r-dg.html

IEEE 802.11v is an amendment to the IEEE 802.11 standard which describes numerous enhancements to wireless network management. One such enhancement is Network assisted Power Savings which helps clients to improve the battery life by enabling them to sleep longer. Another enhancement is Network assisted Roaming which enables the WLAN to send requests to associated clients, advising the clients as to better APs to associate to. This is useful for both load balancing and in directing poorly connected clients.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/802-11v.pdf

Cisco 802.11r supports three modes:
+ Pure mode: only allows 802.11r client to connect
+ Mixed mode: allows both clients that do and do not support FT to connect
+ Adaptive mode: does not advertise the FT AKM at all, but will use FT when supported clients connect

Therefore “Adaptive mode” is the best answer here.

Question 11

Explanation

Link aggregation (LAG) is a partial implementation of the 802.3ad port aggregation standard. It bundles all of the controller’s distribution system ports into a single 802.3ad port channel.

Restriction for Link aggregation:

+ LAG requires the EtherChannel to be configured for ‘mode on’ on both the controller and the Catalyst switch.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-5/configuration-guide/b_cg75/b_cg75_chapter_0100010.html
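On the Catalyst side, ‘mode on’ means an unconditional EtherChannel with no PAgP/LACP negotiation. A minimal configuration sketch (interface numbers are hypothetical):

```
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on
```

On an AireOS WLC, LAG is a global toggle (config lag enable) rather than a per-port setting.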

Question 12

Question 13

Explanation

Mobility Express is the ability to use an access point (AP) as a controller instead of a real WLAN controller. But this solution is only suitable for small to midsize, or multi-site branch locations where you might not want to invest in a dedicated WLC. A Mobility Express WLC can support up to 100 APs. Mobility Express WLC also uses CAPWAP to communicate to other APs.

Note: Local mode is the most common mode that an AP operates in. This is also the default mode. In local mode, the LAP maintains a CAPWAP (or LWAPP) tunnel to its associated controller.

Question 14

Explanation

A Cisco lightweight wireless AP needs to be paired with a WLC to function.

An AP must be very diligent to discover any controllers that it can join—all without any preconfiguration on your part. To accomplish this feat, several methods of discovery are used. The goal of discovery is just to build a list of live candidate controllers that are available, using the following methods:
+ Prior knowledge of WLCs
+ DHCP and DNS information to suggest some controllers (DHCP Option 43)
+ Broadcast on the local subnet to solicit controllers

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

If you do not tell the LAP where the controller is via DHCP option 43, DNS resolution of “Cisco-capwap-controller.local_domain”, or statically configure it, the LAP does not know where in the network to find the management interface of the controller.

In addition to these methods, the LAP does automatically look on the local subnet for controllers with a 255.255.255.255 local broadcast.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/5500-series-wireless-controllers/119286-lap-notjoin-wlc-tshoot.html

Question 15

Explanation

A patch antenna, in its simplest form, is just a single rectangular (or circular) conductive plate that is spaced above a ground plane. Patch antennas are attractive due to their low profile and ease of fabrication.

The azimuth and elevation plane patterns are derived by simply slicing through the 3D radiation pattern. In this case, the azimuth plane pattern is obtained by slicing through the x-z plane, and the elevation plane pattern is formed by slicing through the y-z plane. Note that there is one main lobe that is radiated out from the front of the antenna. There are three back lobes in the elevation plane (in this case), the strongest of which happens to be 180 degrees behind the peak of the main lobe, establishing the front-to-back ratio at about 14 dB. That is, the gain of the antenna 180 degrees behind the peak is 14 dB lower than the peak gain.

patch_atenna.jpg

Again, it doesn’t matter if these patterns are shown pointing up, down, to the left or to the right. That is usually an artifact of the measurement system. A patch antenna radiates its energy out from the front of the antenna. That will establish the true direction of the patterns.

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/prod_white_paper0900aecd806a1a3e.html

Wireless Questions 2

January 30th, 2021 digitaltut 3 comments

Question 1

Explanation

A prerequisite for configuring Mobility Groups is “All controllers must be configured with the same virtual interface IP address”. If all the controllers within a mobility group are not using the same virtual interface, inter-controller roaming may appear to work, but the handoff does not complete, and the client loses connectivity for a period of time. -> Answer B is correct.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/config-guide/b_cg85/mobility_groups.html

Answer A is not correct because when the client moves to a different mobility group (with a different mobility group name), that client either stays connected (if the newly connected controller already has information about this client in its mobility list) or is dropped (if it does not). For more information, please read the note below.

Note:

A mobility group is a set of controllers, identified by the same mobility group name, that defines the realm of seamless roaming for wireless clients. By creating a mobility group, you can enable multiple controllers in a network to dynamically share information and forward data traffic when inter-controller or inter-subnet roaming occurs. Controllers in the same mobility group can share the context and state of client devices as well as their list of access points so that they do not consider each other’s access points as rogue devices.

different_mobility_groups.jpg

Let’s take an example:

The controllers in the ABC mobility group share access point and client information with each other. The controllers in the ABC mobility group do not share the access point or client information with the XYZ controllers, which are in a different mobility group. Therefore if a client from ABC mobility group moves to XYZ mobility group, and the new connected controller does not have information about this client in its mobility list, that client will be dropped.

Note: Clients may roam between access points in different mobility groups if the controllers are included in each other’s mobility lists.

Question 2

Explanation

As these wireless networks grow, especially in remote facilities where IT professionals may not always be on site, it becomes even more important to quickly identify and resolve potential connectivity issues, ideally before users notice or complain about connectivity degradation.
To address these issues, Cisco created the Wireless Service Assurance platform and a new AP mode called “sensor” mode. Cisco’s Wireless Service Assurance platform has three components: Wireless Performance Analytics, Real-time Client Troubleshooting, and Proactive Health Assessment. Using a supported AP or dedicated sensor, the device can function much like a WLAN client would, associating and identifying client connectivity issues within the network in real time without requiring an IT technician to be on site.

Reference: https://content.cisco.com/chapter.sjs?uri=/searchable/chapter/content/dam/en/us/td/docs/wireless/controller/technotes/8-5/b_Cisco_Aironet_Sensor_Deployment_Guide.html.xml

Question 3

Explanation

Mobility, or roaming, is a wireless LAN client’s ability to maintain its association seamlessly from one access point to another securely and with as little latency as possible. Three popular types of client roaming are:

Intra-Controller Roaming: Each controller supports same-controller client roaming across access points managed by the same controller. This roaming is transparent to the client as the session is sustained, and the client continues using the same DHCP-assigned or client-assigned IP address.

Inter-Controller Roaming: Multiple-controller deployments support client roaming across access points managed by controllers in the same mobility group and on the same subnet. This roaming is also transparent to the client because the session is sustained and a tunnel between controllers allows the client to continue using the same DHCP- or client-assigned IP address as long as the session remains active.

Inter-Subnet Roaming: Multiple-controller deployments support client roaming across access points managed by controllers in the same mobility group on different subnets. This roaming is transparent to the client because the session is sustained and a tunnel between the controllers allows the client to continue using the same DHCP-assigned or client-assigned IP address as long as the session remains active.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter_01100.html

Question 4

Explanation

The AP will attempt to resolve the DNS name CISCO-CAPWAP-CONTROLLER.localdomain. When the AP is able to resolve this name to one or more IP addresses, the AP sends a unicast CAPWAP Discovery Message to the resolved IP address(es). Each WLC that receives the CAPWAP Discovery Request Message replies with a unicast CAPWAP Discovery Response to the AP.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/107606-dns-wlc-config.html

Question 5

Question 6

Explanation

These message logs indicate that the radio channel has been reset (the AP must be down briefly). With dynamic channel assignment (DCA), the radios can frequently switch from one channel to another, but each change is disruptive. The default DCA interval is 10 minutes, which matches the timing of the message logs. By increasing the DCA interval, we can reduce the number of times our users are disconnected by radio channel changes.
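On an AireOS controller the DCA interval can be raised from the CLI; a hedged sketch (syntax assumed from the AireOS command reference, the 12-hour value is illustrative):

```
(Cisco Controller) > config advanced 802.11b channel dca interval 12
(Cisco Controller) > config advanced 802.11a channel dca interval 12
```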

Question 7

Question 8

Question 9

Explanation

A Yagi antenna is formed by driving a simple antenna, typically a dipole or dipole-like antenna, and shaping the beam using a well-chosen series of non-driven elements whose length and spacing are tightly controlled.

Yagi_antenna_model.jpg

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/prod_white_paper0900aecd806a1a3e.html

Question 10

Explanation

Once you know the complete combination of transmitter power level, the length of cable, and the antenna gain, you can figure out the actual power level that will be radiated from the antenna. This is known as the effective isotropic radiated power (EIRP), measured in dBm.

EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm (10 mW). A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is 10 dBm – 5 dB + 8 dBi, or 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.

Reference: CCNA Wireless 640-722 Official Cert Guide
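The EIRP arithmetic above can be sketched in a few lines. This is just an illustration of the formula; the function name is arbitrary and the values match the worked example in the text:

```python
def eirp_dbm(tx_power_dbm, cable_loss_db, antenna_gain_dbi):
    """EIRP = transmitter power - cable loss + antenna gain.

    All three terms are in the dB "domain" (dBm, dB, dBi),
    so they can be combined by simple addition/subtraction.
    """
    return tx_power_dbm - cable_loss_db + antenna_gain_dbi

# Worked example from the text: 10 dBm transmitter, 5 dB cable loss, 8 dBi antenna
print(eirp_dbm(10, 5, 8))  # -> 13 (dBm)
```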

Wireless Questions 3

January 30th, 2021 digitaltut 20 comments

Question 1

Explanation

Output power is measured in mW (milliwatts). A milliwatt is equal to one thousandth (10−3) of a watt.
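The mW/dBm relationship is logarithmic: dBm = 10 × log10(power in mW). A short sketch of the conversion (function names are illustrative):

```python
import math

def mw_to_dbm(mw):
    """Convert milliwatts to dBm: dBm = 10 * log10(power in mW)."""
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    """Convert dBm back to milliwatts."""
    return 10 ** (dbm / 10)

print(mw_to_dbm(1))    # 1 mW   -> 0 dBm
print(mw_to_dbm(10))   # 10 mW  -> 10 dBm
print(dbm_to_mw(30))   # 30 dBm -> 1000 mW (1 W)
```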

Question 2

Question 3

Explanation

+ From the output of WLC “show interface summary”, we learned that the WLC has four VLANs: 999, 14, 15 and 16.
+ From the “show ap config general FlexAP1” output, we learned that the FlexConnect AP has four VLANs: 10, 11, 12 and 13. Also, the WLAN of the FlexConnect AP is mapped to VLAN 10 (from the line “WLAN 1: …… 10 (AP-Specific)”).

From the reference at: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-1/Enterprise-Mobility-8-1-Design-Guide/Enterprise_Mobility_8-1_Deployment_Guide/ch7_HREA.html

FlexConnect VLAN Central Switching Summary
Traffic flow on WLANs configured for Local Switching when FlexConnect APs are in connected mode are as follows:

+ If the VLAN is returned as one of the AAA attributes and that VLAN is not present in the FlexConnect AP database, traffic will switch centrally and the client is assigned this VLAN/Interface returned from the AAA server provided that the VLAN exists on the WLC. (-> as VLAN 15 exists on the WLC so the client in connected mode would be assigned this VLAN -> Answer G is correct)
+ If the VLAN is returned as one of the AAA attributes and that VLAN is not present in the FlexConnect AP database, traffic will switch centrally. If that VLAN is also not present on the WLC, the client will be assigned a VLAN/Interface mapped to a WLAN on the WLC.
+ If the VLAN is returned as one of the AAA attributes and that VLAN is present in the FlexConnect AP database, traffic will switch locally.
+ If the VLAN is not returned from the AAA server, the client is assigned a WLAN mapped VLAN on that FlexConnect AP and traffic is switched locally.

Traffic flow on WLANs configured for Local Switching when FlexConnect APs are in standalone mode are as follows:

+ If the VLAN returned by the AAA server is not present in the FlexConnect AP database, the client will be put on a default VLAN (that is, a WLAN mapped VLAN on a FlexConnect AP) (-> Therefore answer B is correct). When the AP connects back, this client is de-authenticated (-> Therefore answer C is correct) and will switch traffic centrally.

Question 4

Question 5

Explanation

Once you know the complete combination of transmitter power level, the length of cable, and the antenna gain, you can figure out the actual power level that will be radiated from the antenna. This is known as the effective isotropic radiated power (EIRP), measured in dBm.

EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm (10 mW). A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is 10 dBm – 5 dB + 8 dBi, or 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.

Reference: CCNA Wireless 640-722 Official Cert Guide

Question 6

Explanation

This is called Inter-Controller L2 Roaming. Inter-controller (normally Layer 2) roaming occurs when a client roams between two APs registered to two different controllers, where each controller has an interface in the client subnet. In this instance, the controllers exchange mobility control messages (over UDP port 16666) and the client database entry is moved from the original controller to the new controller.

Question 7

Explanation

Windows can actually block your WiFi signal. How? Because the signals will be reflected by the glass.

Some new windows have transparent films that can block certain wave types, and this can make it harder for your WiFi signal to pass through.

Tinted glass is another problem for the same reason: it sometimes contains metallic films that can completely block out your signal.
Mirrors, like windows, can reflect your signal. They’re also a source of electromagnetic interference because of their metal backings.

Reference: https://dis-dot-dat.net/what-materials-can-block-a-wifi-signal/

An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a wire filament heated until it glows. WiFi operates in the gigahertz microwave band, and the FCC has strict regulations on RFI (radio frequency interference) from all sorts of devices, including light bulbs -> Incandescent lights do not interfere with Wi-Fi networks.

Note:

+ Many baby monitors operate at 900 MHz and won’t interfere with Wi-Fi, which uses the 2.4 GHz band.
+ DECT 6.0 cordless phones are designed to eliminate Wi-Fi interference by operating on a different frequency, so there is essentially no such thing as DECT Wi-Fi interference.

Question 8

Explanation

When the primary controller (WLC-1) goes down, the APs automatically get registered with the secondary controller (WLC-2). The APs register back to the primary controller when the primary controller comes back on line.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/69639-wlc-failover.html

Question 9

Question 10

Explanation

Directional antennas
Directional antennas come in many different styles and shapes. An antenna does not offer any added power to the signal; it simply redirects the energy it receives from the transmitter. By redirecting this energy, it has the effect of providing more energy in one direction and less energy in all other directions. As the gain of a directional antenna increases, the angle of radiation usually decreases, providing a greater coverage distance but with a reduced coverage angle. Directional antennas include patch antennas and parabolic dishes. Parabolic dishes have a very narrow RF energy path, and the installer must be accurate in aiming these types of antennas at each other.

directionl_patch_antenna.jpgDirectional patch antenna

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/product_data_sheet09186a008008883b.html

Omnidirectional antennas

An omnidirectional antenna is designed to provide a 360-degree radiation pattern. This type of antenna is used when coverage in all directions from the antenna is required. The standard 2.14-dBi “rubber duck” is one style of omnidirectional antenna.

ominidirectionl_antenna_direction.jpgOmnidirectional antenna

-> Therefore an omnidirectional antenna is best suited for a high-density wireless network in a lecture hall.

Question 11

Explanation

We will have the answer from this paragraph:

“TLV values for the Option 43 suboption: Type + Length + Value. Type is always the suboption code 0xf1. Length is the number of controller management IP addresses times 4 in hex. Value is the IP address of the controller listed sequentially in hex. For example, suppose there are two controllers with management interface IP addresses, 192.168.10.5 and 192.168.10.20. The type is 0xf1. The length is 2 * 4 = 8 = 0x08. The IP addresses translates to c0a80a05 (192.168.10.5) and c0a80a14 (192.168.10.20). When the string is assembled, it yields f108c0a80a05c0a80a14. The Cisco IOS command that is added to the DHCP scope is option 43 hex f108c0a80a05c0a80a14.”

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/97066-dhcp-option-43-00.html

Therefore, in this question, Option 43 in hex should be “F104.AC10.3205” (the management IP address 172.16.50.5 in hex is AC.10.32.05).
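The TLV assembly described in the quoted paragraph can be reproduced in a few lines. This is just a sketch of the arithmetic; the helper name is illustrative:

```python
import ipaddress

def option43_hex(controller_ips):
    """Build the DHCP Option 43 suboption hex string for Cisco WLC discovery.

    Type is always 0xf1, length is 4 bytes per controller IP address,
    and the value is each management IP address in hex, in sequence.
    """
    length = 4 * len(controller_ips)
    value = "".join(f"{int(ipaddress.IPv4Address(ip)):08x}" for ip in controller_ips)
    return f"f1{length:02x}{value}"

# Example from the Cisco reference: two controllers
print(option43_hex(["192.168.10.5", "192.168.10.20"]))  # f108c0a80a05c0a80a14

# Single controller 172.16.50.5, as in this question
print(option43_hex(["172.16.50.5"]))  # f104ac103205
```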

Question 12

Explanation

At first, we thought “greater severity” meant higher priority (that is, a severity with a smaller numeric value) in the Syslog level table below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

For example, levels 0 to 4 are “greater severity” than level 5. But according to this Cisco link:

“If you set a syslog level, only those messages whose severity is equal to or less than that level are sent to the syslog servers. For example, if you set the syslog level to Notifications (severity level 5), only those messages whose severity is between 0 and 5 are sent to the syslog servers.”

The correct answer for this question should be “equal or less severity”.
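The quoted rule is easy to model: a message is forwarded only if its numeric severity value is less than or equal to the configured level (a minimal sketch; the function name is illustrative):

```python
def forwarded(msg_severity, configured_level):
    """Lower numeric values are MORE severe (0 = emergencies, 7 = debugging).

    A syslog server configured at level N receives messages 0..N,
    i.e. messages of equal or greater severity than level N.
    """
    return msg_severity <= configured_level

# Configured at Notifications (level 5):
print([s for s in range(8) if forwarded(s, 5)])  # [0, 1, 2, 3, 4, 5]
```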

HSRP & VRRP Questions

January 29th, 2021 digitaltut 40 comments

If you are not sure about HSRP, please read our HSRP tutorial (on 9tut.com).

Quick VRRP overview:

+ is IETF RFC 3768 standard
+ supports maximum 255 groups
+ 1 active and some backups
+ Use multicast address 224.0.0.18
+ Tracking via objects
+ 1 sec hello timer, 3 sec hold time
+ Authentication: plaintext or MD5 authentication
+ Preemption is enabled by default
+ Virtual IP address can be the same as physical IP address (which is running VRRP)
+ Default priority is 100
+ Only VRRPv3 supports both IPv4 and IPv6

Question 1

Explanation

When you change the HSRP version, Cisco NX-OS reinitializes the group because it now has a new virtual MAC address. HSRP version 1 uses the MAC address range 0000.0C07.ACxx while HSRP version 2 uses the MAC address range 0000.0C9F.F0xx.

HSRP supports interface tracking, which allows you to specify another interface on the router for the HSRP process to monitor in order to alter the HSRP priority for a given group.

Question 2

Explanation

If you change the version for existing groups, Cisco NX-OS reinitializes HSRP for those groups because the virtual MAC address changes.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus3548/sw/unicast/503_A1_1/l3_nx-os/l3_hsrp.html

Question 3

Question 4

Explanation

The main disadvantage of HSRP and VRRP is that only one gateway is elected as the active gateway and used to forward traffic, while the rest sit unused until the active one fails. Gateway Load Balancing Protocol (GLBP) is a Cisco proprietary protocol that performs a similar function to HSRP and VRRP, but it supports load balancing among the members of a GLBP group.

Note: Although GLBP is not a topic for this exam, this question still appears for some reason!

Question 5

Explanation

HSRP consists of 6 states:

State Description
Initial This is the beginning state. It indicates HSRP is not running. It happens when the configuration changes or the interface is first turned on
Learn The router has not determined the virtual IP address and has not yet seen an authenticated hello message from the active router. In this state, the router still waits to hear from the active router.
Listen The router knows both IP and MAC address of the virtual router but it is not the active or standby router. For example, if there are 3 routers in HSRP group, the router which is not in active or standby state will remain in listen state.
Speak The router sends periodic HSRP hellos and participates in the election of the active or standby router.
Standby In this state, the router monitors hellos from the active router and it will take the active state when the current active router fails (no packets heard from active router)
Active The router forwards packets that are sent to the HSRP group. The router also sends periodic hello messages

Please note that not all routers in an HSRP group go through all the states above. In an HSRP group, only one router reaches the active state and one router reaches the standby state. All other routers stop at the listen state.

Question 6

Explanation

A VRRP router receiving a packet with the TTL not equal to 255 must discard the packet (only one possible hop) -> Answer B is correct.

Currently there are three VRRP versions which are versions 1, 2 and 3 -> Answer E is correct.

VRRP uses multicast address 224.0.0.18 and supports plaintext or MD5 authentication.
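The TTL check can be modeled trivially: because routers decrement the TTL on every hop, a TTL of 255 on arrival proves the advertisement came from a directly connected neighbor (sketch; function name is illustrative):

```python
def accept_vrrp_packet(ttl):
    """Per RFC 3768, a VRRP router MUST discard packets whose IP TTL is
    not 255 -- a TTL of 255 guarantees the packet was never routed."""
    return ttl == 255

print(accept_vrrp_packet(255))  # True  (sent by an on-link neighbor)
print(accept_vrrp_packet(254))  # False (traversed at least one router)
```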

Question 7

Question 8

Explanation

The main disadvantage of HSRP and VRRP is that only one gateway is elected as the active gateway and used to forward traffic, while the rest sit unused until the active one fails. Gateway Load Balancing Protocol (GLBP) is a Cisco proprietary protocol that performs a similar function to HSRP and VRRP, but it supports load balancing among the members of a GLBP group.

Question 9

Explanation

SSO HSRP alters the behavior of HSRP when a device with redundant Route Processors (RPs) is configured for stateful switchover (SSO) redundancy mode. When an RP is active and the other RP is standby, SSO enables the standby RP to take over if the active RP fails.

The SSO HSRP feature enables the Cisco IOS HSRP subsystem software to detect that a standby RP is installed and the system is configured in SSO redundancy mode. Further, if the active RP fails, no change occurs to the HSRP group itself and traffic continues to be forwarded through the current active gateway device.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipapp_fhrp/configuration/15-s/fhp-15-s-book/fhp-hsrp-sso.html

Question 10

Explanation

In fact, VRRP has the preemption enabled by default so we don’t need the “vrrp 10 preempt” command. The default priority is 100 so we don’t need to configure it either. But notice that the correct command to configure the virtual IP address for the group is “vrrp 10 ip {ip-address}” (not “vrrp group 10 ip …”) and this command does not include a subnet mask.

Question 11

Explanation

In fact, when Edge-01 goes down, Edge-02 will not receive “Hello” messages from Edge-01 so it will promote itself to active state automatically. Therefore no answers here are correct. But if we have to choose the best answer then it would be “preempt” command as the “preempt” command enables the HSRP router with the highest priority to immediately become the active router.

Maybe this question wanted to ask if the link to Core router goes down, which command can be used to take over the forwarding role.

HSRP & VRRP Questions 2

January 29th, 2021 digitaltut 6 comments

Question 1

Explanation

The “virtual MAC address” is 0000.0c07.acXX (XX is the hexadecimal group number) so it is using HSRPv1.

Note: HSRP Version 2 uses a new MAC address which ranges from 0000.0C9F.F000 to 0000.0C9F.FFFF.
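The group-to-MAC mapping can be sketched as follows (an illustrative helper, not a real Cisco tool; HSRPv2 shown for IPv4 groups):

```python
def hsrp_virtual_mac(group, version=1):
    """Derive the HSRP virtual MAC address from the group number.

    v1: 0000.0c07.acXX  (XX  = group 0-255 in hex)
    v2: 0000.0c9f.fXXX  (XXX = group 0-4095 in hex)
    """
    if version == 1:
        if not 0 <= group <= 255:
            raise ValueError("HSRPv1 groups are 0-255")
        return f"0000.0c07.ac{group:02x}"
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 groups are 0-4095")
    return f"0000.0c9f.f{group:03x}"

print(hsrp_virtual_mac(10))              # 0000.0c07.ac0a
print(hsrp_virtual_mac(300, version=2))  # 0000.0c9f.f12c
```

Note that group 300 is only possible with version 2, which is why changing the version is required in the later question about HSRP group numbers.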

Question 2

Question 3

Explanation

From the output exhibit, we notice that the key-string of R1 is “Cisco123!” (letter “C” is in capital) while that of R2 is “cisco123!”. This causes a mismatch in the authentication so we have to fix their key-strings.

Note:

key-string [encryption-type] text-string: Configures the text string for the key. The text-string argument is alphanumeric, case-sensitive, and supports special characters.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/security/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_Security_Configuration_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Security_Configuration_Guide_chapter_01111.pdf

Question 4

Explanation

In HSRP version 1, group numbers are restricted to the range from 0 to 255. HSRP version 2 expands the group number range from 0 to 4095 -> We must configure HSRP group 300 so we must change to HSRP version 2.

Network Assurance Questions

January 28th, 2021 digitaltut 4 comments

Question 1

Explanation

Syslog levels are listed below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

Number “5” in “%LINEPROTO-5-UPDOWN” is the severity level of this message, so in this case it is “notification”.
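Extracting the severity from a Cisco syslog mnemonic of the form %FACILITY-SEVERITY-MNEMONIC can be sketched like this (illustrative helper, not a real parser from any product):

```python
import re

SEVERITY_NAMES = {
    0: "emergencies", 1: "alerts", 2: "critical", 3: "errors",
    4: "warnings", 5: "notification", 6: "informational", 7: "debugging",
}

def parse_severity(syslog_msg):
    """Return (level, keyword) from a %FACILITY-SEVERITY-MNEMONIC string."""
    m = re.search(r"%([A-Z0-9_]+)-(\d)-([A-Z0-9_]+)", syslog_msg)
    if not m:
        return None
    level = int(m.group(2))
    return level, SEVERITY_NAMES[level]

msg = "%LINEPROTO-5-UPDOWN: Line protocol on Interface Fa0/1, changed state to up"
print(parse_severity(msg))  # (5, 'notification')
```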

Question 2

Explanation

The TCP port 6514 has been allocated as the default port for syslog over Transport Layer Security (TLS).

Reference: https://tools.ietf.org/html/rfc5425

Question 3

Explanation

The goal of the Cyber Threat Defense solution is to introduce a design and architecture that can help facilitate the discovery, containment, and remediation of threats once they have penetrated into the network interior.

Cisco Cyber Threat Defense version 2.0 makes use of several solutions to accomplish its objectives:

* NetFlow and the Lancope StealthWatch System
– Broad visibility
– User and flow context analysis
– Network behavior and anomaly detection
– Incident response and network forensics

* Cisco FirePOWER and FireSIGHT
– Real-time threat management
– Deeper contextual visibility for threats bypassing the perimeters
– URL control

* Advanced Malware Protection (AMP)
– Endpoint control with AMP for Endpoints
– Malware control with AMP for networks and content

* Content Security Appliances and Services
– Cisco Web Security Appliance (WSA) and Cloud Web Security (CWS)
– Dynamic threat control for web traffic
– Outbound URL analysis and data transfer controls
– Detection of suspicious web activity
– Cisco Email Security Appliance (ESA)
– Dynamic threat control for email traffic
– Detection of suspicious email activity

* Cisco Identity Services Engine (ISE)
– User and device identity integration with Lancope StealthWatch
– Remediation policy actions using pxGrid

Reference: https://www.cisco.com/c/dam/en/us/td/docs/security/network_security/ctd/ctd2-0/design_guides/ctd_2-0_cvd_guide_jul15.pdf

IP SLA Questions

January 28th, 2021 digitaltut 21 comments

Question 1

Explanation

We cannot use the LAN (inside) interface to communicate to the ISP as the ISP surely does not know how to return the reply -> Answer A is correct.

Answer B is not correct as we must track the destination of the primary link, not backup link.

In this question, R1 pings R2 via its LAN Fa0/0 interface, so R2 (the ISP) may not know how to reply back, as an ISP usually does not configure a route to a customer’s LAN -> C is correct.

There is no problem with the default route -> D is not correct.

Note:

This tutorial can help you revise IP SLA tracking topic: http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/813-cisco-router-ipsla-basic.html and http://www.ciscozine.com/using-ip-sla-to-change-routing/

Note: Maybe some of us will wonder why there are these two commands:

R1(config)#ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
R1(config)#no ip route 0.0.0.0 0.0.0.0 172.20.20.2

In fact the two commands:

ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
ip route 0.0.0.0 0.0.0.0 172.20.20.2

are different. These two static routes can co-exist in the routing table. Therefore if the tracking goes down, the first command will be removed but the second one still exists and the backup path is not preferred. So we have to remove the second one.

Question 2

Explanation

The “ip sla 10” will ping the IP 192.168.10.20 every 3 seconds to make sure the connection is still up. We can configure an EEM applet if there is any problem with this IP SLA via the command “event track 10 state down”.

Reference: https://www.theroutingtable.com/ip-sla-and-cisco-eem/

The syntax of tracking command is: track object-number ip sla operation-number [state | reachability]

The track state depends on the tracking type and the IP SLA return code:

+ state: a return code of OK gives a track state of Up; all other return codes give Down
+ reachability: a return code of OK or over threshold gives a track state of Up; all other return codes give Down

Both “state” and “reachability” return the track state of “up” or “down” only (no “unreachable” state) so we have to configure either “up” or “down” in “event track 10 state …” command.

Note:
+ There is no “event sla …” command.
+ There are only three states of an “event track”, which are “any”, “down” and “up”

Router(config-applet)#event track 10 state ?
  any   Any state
  down  Down state
  up    Up state

Question 3

Explanation

IP SLAs allows Cisco customers to analyze IP service levels for IP applications and services, to increase productivity, to lower operational costs, and to reduce the frequency of network outages. IP SLAs uses active traffic monitoring–the generation of traffic in a continuous, reliable, and predictable manner–for measuring network performance.

Being Layer-2 transport independent, IP SLAs can be configured end-to-end over disparate networks to best reflect the metrics that an end-user is likely to experience.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipsla/configuration/15-mt/sla-15-mt-book/sla_overview.html

Question 4

Explanation

Cisco IOS IP SLA Responder is a Cisco IOS Software component whose functionality is to respond to Cisco IOS IP SLA request packets. The IP SLA source sends control packets before the operation starts to establish a connection to the responder. Once the control packet is acknowledged, test packets are sent to the responder. The responder inserts a time-stamp when it receives a packet and factors out the destination processing time and adds time-stamps to the sent packets. This feature allows the calculation of unidirectional packet loss, latency, and jitter measurements with the kind of accuracy that is not possible with ping or other dedicated probe testing.

Reference: https://www.cisco.com/en/US/technologies/tk869/tk769/technologies_white_paper0900aecd806bfb52.html

The IP SLAs responder is a component embedded in the destination Cisco device that allows the system to anticipate and respond to IP SLAs request packets. The responder provides accurate measurements without the need for dedicated probes.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/46sg/configuration/guide/Wrapper-46SG/swipsla.html

UDP jitter measures delay, delay variation (jitter), corruption, misordering and packet loss by generating periodic UDP traffic. This operation always requires an IP SLA responder.

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2017/pdf/BRKNMS-3043.pdf

NetFlow Questions

January 28th, 2021 digitaltut 10 comments

Note: If you are not sure about NetFlow, please read our NetFlow tutorial.

Question 1

Explanation

The “mode random 1 out-of 100” command specifies that the sampler uses random mode and takes only one sample out of every 100 packets.
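As an illustration of what this sampling rate means, here is a minimal simulation (not actual platform behavior; the function name and seed are arbitrary) that picks one random packet from each window of 100:

```python
import random

def sampled_packets(total_packets, rate=100, seed=0):
    """Simulate random 1-out-of-N sampling: within each consecutive
    window of `rate` packets, exactly one packet is chosen at random."""
    rng = random.Random(seed)
    picks = []
    for window_start in range(0, total_packets, rate):
        window = range(window_start, min(window_start + rate, total_packets))
        picks.append(rng.choice(list(window)))
    return picks

samples = sampled_packets(1000)
print(len(samples))  # 10 windows of 100 packets -> 10 samples
```

The point of sampling is exactly this reduction: only 1% of the traffic is examined, which lowers CPU load on the device at the cost of statistical rather than exact flow accounting.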

Question 2

Explanation

Option A is not correct because “flow FLOW-MONITOR-1” cannot be configured under “sampler SAMPLER-1”.
Option B is not correct because it removes the “mode random 1 out-of 2” command.
Option D is not correct because “sampler SAMPLER-1” cannot be put under “flow monitor …”.
Option C is correct: the “ip flow monitor …” command assigns the flow monitor and the flow sampler that you created to the interface to enable sampling.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/fnetflow/configuration/xe-3se/3850/use-fnflow-redce-cpu.html

Question 3

Explanation

To configure multiple NetFlow export destinations to a router, use the following commands in global configuration mode:

Step 1: Router(config)# ip flow-export destination ip-address udp-port
Step 2: Router(config)# ip flow-export destination ip-address udp-port

The following example enables the exporting of information in NetFlow cache entries:

ip flow-export destination 10.42.42.1 9991
ip flow-export destination 10.0.101.254 1999

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/12s_mdnf.html

Question 4

SPAN Questions

January 28th, 2021 digitaltut No comments

Question 1

Explanation

SW1 has been configured with the following commands:

SW1(config)#monitor session 1 source remote vlan 50
SW1(config)#monitor session 2 source interface fa0/14
SW1(config)#monitor session 2 destination interface fa0/15

The session 1 on SW1 was configured for Remote SPAN (RSPAN) while session 2 was configured for local SPAN. For RSPAN we need to configure the destination port to complete the configuration.

Note: In fact we cannot create such a session like session 1 because if we only configure “Source RSPAN VLAN 50” (with the command “monitor session 1 source remote vlan 50”) then we will receive a “Type: Remote Source Session” (not “Remote Destination Session”).

Question 2

Explanation

RSPAN consists of at least one RSPAN source session, an RSPAN VLAN, and at least one RSPAN destination session. You separately configure RSPAN source sessions and RSPAN destination sessions on different network devices. To configure an RSPAN source session on a device, you associate a set of source ports or source VLANs with an RSPAN VLAN.

Traffic monitoring in a SPAN session has these restrictions:
+ Sources can be ports or VLANs, but you cannot mix source ports and source VLANs in the same session.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/12-2_55_se/configuration/guide/3750xscg/swspan.html

Therefore in this question, we cannot configure a source VLAN because we configured source ports for RSPAN session 1 already.

Question 3

Question 4

Question 5

Explanation

Encapsulated Remote SPAN (ERSPAN), as the name says, wraps all captured traffic in generic routing encapsulation (GRE), which allows the mirrored traffic to be carried across Layer 3 domains.

Question 6

Explanation

The traffic for each RSPAN session is carried over a user-specified RSPAN VLAN that is dedicated for that RSPAN session in all participating switches -> This VLAN can be considered a special VLAN type -> Answer C is correct.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/12-2_55_se/configuration/guide/3750xscg/swspan.html

We can configure multiple RSPAN sessions on a switch at a time, then continue configuring multiple RSPAN sessions on the other switch without any problem -> Answer B is not correct.

This is how to configure Remote SPAN (RSPAN) feature on two switches. Traffic on FastEthernet0/1 of Switch 1 will be sent to Fa0/10 of Switch2 via VLAN 40.

+ Configure on both switches
Switch1,2(config)#vlan 40
Switch1,2(config-vlan)#remote-span
+ Configure on Switch1
Switch1(config)# monitor session 1 source interface FastEthernet 0/1
Switch1(config)# monitor session 1 destination remote vlan 40
+ Configure on Switch2
Switch2(config)#monitor session 5 source remote vlan 40
Switch2(config)# monitor session 5 destination interface FastEthernet 0/10

Troubleshooting Questions

January 28th, 2021 digitaltut 5 comments

Question 1

Explanation

If the DF bit is set, routers cannot fragment packets. From the output below, we learn that the MTU toward R2 is 1492 bytes while we sent pings of 1500 bytes. Therefore these ICMP packets were dropped.

Note: Record option displays the address(es) of the hops (up to nine) the packet goes through.

AAA Questions

January 28th, 2021 digitaltut 27 comments

Note: If you are not sure about AAA, please read our AAA TACACS+ and RADIUS Tutorial (on 9tut.com).

Question 1

Question 2

Explanation

The “aaa authentication login default local group tacacs+” command is broken down as follows:

+ The ‘aaa authentication’ part is simply saying we want to configure authentication settings.
+ The ‘login’ is stating that we want to prompt for a username/password when a connection is made to the device.
+ The ‘default’ keyword means we want to apply this to all login connections (such as tty, vty, console and aux). If we use this keyword, we don’t need to configure anything else under the tty, vty and aux lines. If we don’t use it, we have to specify which line(s) the authentication feature applies to.
+ The ‘local group tacacs+’ part means all users are authenticated using the router’s local database (the first method). If the credentials are not found in the local database, then the TACACS+ server is used (the second method).

Question 3


Explanation

According to the requirements (first use TACACS+, then allow login with no authentication), we have to use “aaa authentication login … group tacacs+ none” for AAA command.

The next thing to check is whether “aaa authentication login default” or “aaa authentication login list-name” is used. The ‘default’ keyword means we want to apply it to all login connections (such as tty, vty, console and aux). If we use this keyword, we don’t need to configure anything else under the tty, vty and aux lines. If we don’t use it, we have to specify which line(s) the authentication feature applies to.

From the above information, we can see that answer C is correct, although the “password 7 0202039485748” line under “line vty 0 4” is not necessary.

If you want to learn more about AAA configuration, please read our AAA TACACS+ and RADIUS Tutorial – Part 2.

For your information, answer D would be correct if we add the following command under vty line (“line vty 0 4”): “login authentication telnet” (“telnet” is the name of the AAA list above)

Question 4

Question 5

Explanation

The “autocommand” causes the specified command to be issued automatically after the user logs in. When the command is complete, the session is terminated. Because the command can be any length and can contain embedded spaces, commands using the autocommand keyword must be the last option on the line. In this specific question, we have to enter this line “username CCNP autocommand show running-config”.

Question 6

Explanation

In this question, there are two different passwords for user “tommy”:
+ In the TACACS+ server, the password is “Tommy”
+ In the local database of the router, the password is “Cisco”.

From the line “login authentication local” we know that the router uses the local database for authentication so the password should be “Cisco”.

Note: “… password 0 …” here means unencrypted password.

GRE Tunnel Questions

January 28th, 2021 digitaltut 8 comments

Note: If you are not sure about GRE, please read our GRE Tunnel Tutorial.

Question 1

Explanation

The IP protocol was designed for use on a wide variety of transmission links. Although the maximum length of an IP datagram is 65535, most transmission links enforce a smaller maximum packet length limit, called an MTU. The value of the MTU depends on the type of the transmission link. The design of IP accommodates MTU differences since it allows routers to fragment IP datagrams as necessary. The receiving station is responsible for the reassembly of the fragments back into the original full size IP datagram.

Fragmentation and Path Maximum Transmission Unit Discovery (PMTUD) is a standardized technique to determine the maximum transmission unit (MTU) size on the network path between two hosts, usually with the goal of avoiding IP fragmentation. PMTUD was originally intended for routers in IPv4. However, all modern operating systems use it on endpoints.

The TCP Maximum Segment Size (TCP MSS) defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. This TCP/IP datagram might be fragmented at the IP layer. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

TCP MSS takes care of fragmentation at the two endpoints of a TCP connection, but it does not handle the case where there is a smaller MTU link in the middle between these two endpoints. PMTUD was developed in order to avoid fragmentation in the path between the endpoints. It is used to dynamically determine the lowest MTU along the path from a packet’s source to its destination.
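The relationship between MTU and MSS is simple arithmetic: the MSS a host advertises is normally the outgoing interface MTU minus the 20-byte IPv4 header and the 20-byte TCP header. A minimal illustrative sketch (Python, for checking the numbers only):

```python
# MSS = MTU - 20-byte IPv4 header - 20-byte TCP header (no options assumed)
IP_HEADER = 20
TCP_HEADER = 20

def tcp_mss(mtu: int) -> int:
    """Return the TCP MSS a host would advertise for a given link MTU."""
    return mtu - IP_HEADER - TCP_HEADER

print(tcp_mss(1500))  # standard Ethernet -> 1460
print(tcp_mss(1476))  # default GRE tunnel MTU -> 1436
```

This is why lowering the MSS (for example with “ip tcp adjust-mss” on a tunnel interface) prevents TCP segments from exceeding the smallest MTU on the path.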

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (this link includes some examples of how TCP MSS avoids IP fragmentation; it is rather long, so please visit it if you want to read more)

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.

If the DF bit is set to clear, routers can fragment packets regardless of the original DF bit setting -> Answer D is not correct.

Question 2

Explanation

The TCP Maximum Segment Size (TCP MSS) defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. This TCP/IP datagram might be fragmented at the IP layer. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

TCP MSS takes care of fragmentation at the two endpoints of a TCP connection, but it does not handle the case where there is a smaller MTU link in the middle between these two endpoints. PMTUD was developed in order to avoid fragmentation in the path between the endpoints. It is used to dynamically determine the lowest MTU along the path from a packet’s source to its destination.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (this link includes some examples of how TCP MSS avoids IP fragmentation; it is rather long, so please visit it if you want to read more)

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.

Question 3

Explanation

If the DF bit is clear (not set), routers can fragment packets regardless of the original DF bit setting.

Whenever we create a tunnel interface, the GRE IP MTU is automatically set to 24 bytes less than the outbound physical interface MTU. Ethernet interfaces have an MTU of 1500 bytes, so tunnel interfaces will have a default MTU of 1476 bytes, which is 24 bytes less than the physical interface. The process of sending a 1500-byte IPv4 packet (with the DF bit clear) is shown below:

1. The sender sends a 1500-byte packet (20 byte IPv4 header + 1480 bytes of TCP payload).
2. Since the MTU of the GRE tunnel is 1476, the 1500-byte packet is broken into two IPv4 fragments of 1476 and 44 bytes, each in anticipation of the additional 24 bytes of GRE header.
3. The 24 bytes of GRE header is added to each IPv4 fragment. Now the fragments are 1500 (1476 + 24) and 68 (44 + 24) bytes each.
4. The GRE + IPv4 packets that contain the two IPv4 fragments are forwarded to the GRE tunnel peer router.
5. The GRE tunnel peer router removes the GRE headers from the two packets.
6. This router forwards the two packets to the destination host.
7. The destination host reassembles the IPv4 fragments back into the original IPv4 datagram.
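The sizes in the steps above can be reproduced with simple arithmetic. This sketch (illustrative Python, not router code) fragments an IPv4 packet to fit the 1476-byte tunnel MTU and then adds the 24-byte GRE+IP overhead:

```python
IP_HDR = 20        # IPv4 header carried in every fragment
GRE_OVERHEAD = 24  # new outer IPv4 header (20) + GRE header (4)

def fragment(total_len: int, mtu: int):
    """Split an IPv4 packet into fragment sizes (header included) for a given MTU."""
    payload = total_len - IP_HDR
    per_frag = mtu - IP_HDR  # payload room per fragment; real IP rounds to
                             # 8-byte multiples, which 1456 already is
    frags = []
    while payload > 0:
        chunk = min(per_frag, payload)
        frags.append(IP_HDR + chunk)
        payload -= chunk
    return frags

frags = fragment(1500, 1476)                   # -> [1476, 44]
after_gre = [f + GRE_OVERHEAD for f in frags]  # -> [1500, 68]
print(frags, after_gre)
```

The two fragments of 1476 and 44 bytes become 1500 and 68 bytes after GRE encapsulation, exactly as in steps 2 and 3.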

Reference: https://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (Scenario 5)

Question 4

Question 5

Explanation

The %TUN-5-RECURDOWN: Tunnel0 temporarily disabled due to recursive routing error message means that the generic routing encapsulation (GRE) tunnel router has discovered a recursive routing problem. This condition is usually due to one of these causes:
+ A misconfiguration that causes the router to try to route to the tunnel destination address using the tunnel interface itself (recursive routing)
+ A temporary instability caused by route flapping elsewhere in the network

Reference: https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/22327-gre-flap.html

Question 6

Explanation

From the “Tunnel protocol/transport GRE/IP” line, we can deduce this tunnel is using the default IPv4 Layer-3 tunnel mode. We can return to this default mode with the “tunnel mode gre ip” command.

Question 7

Explanation

In order to make a Point-to-Point GRE Tunnel interface in up/up state, two requirements must be met:
+ A valid tunnel source (which is in up/up state and has an IP address configured on it) and tunnel destination must be configured
+ A valid tunnel destination is one which is routable. However, it does not have to be reachable.

-> In this question we are missing an up/up tunnel source, so we can choose the Loopback0 interface.

Question 8

Explanation

In the above output, the IP address of “209.165.202.130” is the tunnel source IP while the IP 10.111.1.1 is the tunnel IP address.

An example of configuring GRE tunnel is shown below:

R1 (GRE config only)
interface s0/0/0
 ip address 63.1.27.2 255.255.255.0
interface tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel mode gre ip //optional: GRE over IP is the default tunnel mode
 tunnel source s0/0/0
 tunnel destination 85.5.24.10
R2 (GRE config only)
interface s0/0/0
 ip address 85.5.24.10 255.255.255.0
interface tunnel1
 ip address 10.0.0.2 255.255.255.0
 tunnel source 85.5.24.10
 tunnel destination 63.1.27.2

Question 9

Question 10

Explanation

6to4 tunneling is a technique that relies on the reserved address space 2002::/16 (you must remember this range). These tunnels determine the appropriate tunnel destination by embedding the globally unique IPv4 address of the destination 6to4 border router into an IPv6 prefix that begins with 2002::/16, in this format:

2002:border-router-IPv4-address::/48

For example, if the border-router-IPv4-address is 64.101.64.1, the tunnel interface will have an IPv6 prefix of 2002:4065:4001:1::/64, where 4065:4001 is the hexadecimal equivalent of 64.101.64.1. This technique allows IPv6 sites to communicate with each other over the IPv4 network without explicit tunnel setup but we have to implement it on all routers on the path.
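The hexadecimal conversion can be checked programmatically. This sketch (illustrative Python) builds the 2002::/48 prefix from a border router's IPv4 address:

```python
def sixto4_prefix(ipv4: str) -> str:
    """Build the 6to4 /48 prefix 2002:WWXX:YYZZ::/48 from an IPv4 address."""
    a, b, c, d = (int(x) for x in ipv4.split("."))
    # each IPv4 octet becomes two hex digits inside the IPv6 prefix
    return "2002:{:02x}{:02x}:{:02x}{:02x}::/48".format(a, b, c, d)

print(sixto4_prefix("64.101.64.1"))  # -> 2002:4065:4001::/48
```

64 = 0x40 and 101 = 0x65, which is why 64.101.64.1 yields 4065:4001; appending a subnet ID (such as :1) gives the /64 used on the tunnel interface.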

NAT Questions

January 28th, 2021 digitaltut 9 comments

Question 1

Explanation

The command “ip nat inside source list 1 interface gigabitethernet0/0 overload” translates all source addresses that pass access list 1, which means the 172.16.1.0/24 subnet, into the address assigned to the gigabitethernet0/0 interface. The overload keyword allows mapping multiple IP addresses to a single registered IP address (many-to-one) by using different ports, which is why this is called Port Address Translation (PAT).
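The many-to-one behavior can be sketched as a translation table keyed by source port. This toy Python model (a simplification, not IOS behavior verbatim) shows two inside hosts sharing one public address:

```python
class PatTable:
    """Toy PAT model: map (inside_ip, inside_port) -> (public_ip, unique_port)."""
    def __init__(self, public_ip: str, first_port: int = 1024):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}

    def translate(self, inside_ip: str, inside_port: int):
        key = (inside_ip, inside_port)
        if key not in self.table:
            # allocate a fresh public-side port so the return traffic
            # can be mapped back to the correct inside host
            self.table[key] = (self.public_ip, self.next_port)
            self.next_port += 1
        return self.table[key]

pat = PatTable("203.0.113.1")
print(pat.translate("172.16.1.10", 51000))  # ('203.0.113.1', 1024)
print(pat.translate("172.16.1.11", 51000))  # ('203.0.113.1', 1025)
```

Both hosts appear as 203.0.113.1 on the outside; only the translated port distinguishes their sessions.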

Question 2

Explanation

In this question, the inside local addresses of the 10.1.1.0/27 subnet are translated into 209.165.201.0/27 subnet. This is one-to-one NAT translation as the keyword “overload” is missing so in fact answer B is also correct.

Question 3

Explanation

The syntax of NAT command is ip nat inside source static local-ip global-ip so we can deduce the first IP address is the local IP address where we apply “ip nat inside” command and the second IP address is the global IP address where we apply “ip nat outside” command.

Question 4

Explanation

The command “ip nat inside source list 10 interface FastEthernet0/1 overload” configures NAT to overload on the address that is assigned to the Fa0/1 interface.

Question 5

Explanation

The command “ip nat inside source list 10 interface G0/3 overload” configures NAT to overload (PAT) on the address that is assigned to the G0/3 interface.

STP Questions

January 28th, 2021 digitaltut 19 comments

Quick review about BPDUGuard & BPDUFilter:

BPDU Guard feature allows STP to shut an access port in the event of receiving a BPDU and put that port into err-disabled state. BPDU Guard is configured under an interface via this command:

Switch(config-if)#spanning-tree bpduguard enable

Or configured globally via this command (BPDU Guard is enabled on all PortFast interfaces):

Switch(config)#spanning-tree portfast edge bpduguard default

BPDUFilter is designed to suppress the sending and receiving of BPDUs on an interface. There are two ways of configuring BPDUFilter: under global configuration mode or under interface mode but they have subtle difference.

If BPDUFilter is configured globally via this command:

Switch(config)#spanning-tree portfast bpdufilter default

BPDUFilter will be enabled on all PortFast-enabled interfaces and will suppress the interface from sending or receiving BPDUs. This is good if that port is connected to a host because we can enable PortFast on this port to save some start-up time while not allowing BPDU being sent out to that host. Hosts do not participate in STP and hence drop the received BPDUs. As a result, BPDU filtering prevents unnecessary BPDUs from being transmitted to host devices.

If BPDUFilter is configured under interface mode like this:

Switch(config-if)#spanning-tree bpdufilter enable

It will suppress the sending and receiving of BPDUs. This is the same as disabling spanning tree on the interface. This choice is risky and should only be used when you are sure that port only connects to host devices.

Question 1

Explanation

The purpose of PortFast is to minimize the time interfaces must wait for spanning tree to converge; it is effective only when used on interfaces connected to end stations.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3560/software/release/12-2_55_se/configuration/guide/3560_scg/swstpopt.html

Question 2

Question 3

Explanation

SW1 needs to block one of its ports to SW2 to avoid a bridging loop between the two switches. Unfortunately, it blocked the fiber port Link2. But how does SW2 select its blocked port? The answer is based on the BPDUs it receives from SW1. A BPDU is superior to another if it has:
1. A lower Root Bridge ID
2. A lower path cost to the Root
3. A lower Sending Bridge ID
4. A lower Sending Port ID

These four parameters are examined in order. In this specific case, all the BPDUs sent by SW1 have the same Root Bridge ID, the same path cost to the Root and the same Sending Bridge ID. The only parameter left to select the best one is the Sending Port ID (Port ID = port priority + port index). And the port index of Gi0/0 is lower than the port index of Gi0/1 so Link 1 has been chosen as the primary link.

Therefore we must change the port priority to change the primary link. The lower the numerical value of the port priority, the higher the priority that port has. In other words, we must change the port priority on Gi0/1 of SW1 (not on Gi0/1 of SW2) to a lower value than that of Gi0/0.
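Because the four fields are compared in order, a BPDU can be modeled as a tuple and the superior one picked with an ordinary lexicographic comparison. A minimal Python sketch (the bridge IDs and port IDs here are hypothetical):

```python
# BPDU modeled as (root_bridge_id, path_cost, sender_bridge_id, sender_port_id);
# lower tuples are superior, so Python's tuple ordering mirrors STP tie-breaking.
bpdu_gi0_0 = (32768, 4, 32768, (128, 1))  # sent on Gi0/0, port index 1
bpdu_gi0_1 = (32768, 4, 32768, (128, 2))  # sent on Gi0/1, port index 2

superior = min(bpdu_gi0_0, bpdu_gi0_1)
print(superior)  # Gi0/0 wins on the lower sending port ID
```

The first three fields tie, so the comparison falls through to the sending port ID, exactly as described above.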

Question 4

Explanation

Where to Use MST
This diagram shows a common design featuring access switch A with 1000 VLANs redundantly connected to two distribution switches, D1 and D2. In this setup, users connect to switch A, and the network administrator typically seeks to load-balance across the access switch uplinks based on even or odd VLANs, or any other scheme deemed appropriate.

MST_usage.jpg

Reference: https://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24248-147.html

Question 5

Explanation

From the second command output (show spanning-tree mst) we learn that MST instance 1 includes VLANs 10 & 20. Therefore if we want DSW1 to become the root bridge for these VLANs, we need to make it the root for MST instance 1 -> the command “spanning-tree mst 1 root primary” does the trick. In fact, this command runs a macro and sets the priority lower than the current root’s.

We can also see that the current root bridge for these VLANs has a priority of 32769 (default value + sys-id), so we can set the priority of DSW1 to a specific lower value. Notice that the priority must be a multiple of 4096. Therefore D is also a correct answer.

Question 6

Explanation

In the topology above, we see DSW2 has the lowest priority (24576), so it is the root bridge for VLAN 10 and all traffic for this VLAN must go through it. All of DSW2’s ports will be in forwarding state.

The next thing we have to figure out is which port of ALSW1 would be chosen as root port as traffic must go via this port. The root port is chosen via the following sequence of three conditions:

1. Lowest accumulated cost on interfaces towards Root Bridge
2. Lowest Sender Bridge ID
3. Lowest Sender Port ID (= Port Priority + Port Number)

Let’s start with the first condition: lowest accumulated cost on interfaces towards the root bridge. This question does not mention which path-cost method STP is using (short or long), so we will assume the default short method. With this method, the STP cost of a 10Gbps link is 2 while the STP cost of a 1Gbps link is 4.

Therefore the path cost from DSW2 to ALSW1 is:
+ via DSW2 -> DSW1 -> ALSW1: 2 + 2 = 4, and
+ via the direct link DSW2 -> ALSW1: 4, too

-> The first condition is equal so we have to use the second one: Lowest Sender Bridge ID

In this condition, the direct path DSW2 -> ALSW1 wins because the sender Bridge ID of DSW2 is lower.

Therefore ALSW1 will choose Gi0/2 as its root port, and the link between ALSW1 and DSW1 is blocked by STP to prevent a loop.

Therefore traffic from PC1 must go via this path: PC1 -> ALSW1 -> DSW2 -> DSW1.
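The tie and its resolution can be checked numerically. In this sketch (Python, short-method STP costs; the bridge IDs are hypothetical apart from their relative order), the two paths cost the same and the lower sender bridge ID decides:

```python
COST = {"10G": 2, "1G": 4}  # short-method STP port costs

path_via_dsw1 = COST["10G"] + COST["10G"]  # DSW2 -> DSW1 -> ALSW1 = 4
path_direct   = COST["1G"]                 # DSW2 -> ALSW1 direct link = 4
assert path_via_dsw1 == path_direct        # tie on accumulated cost

# Tie-breaker: lowest sender bridge ID. On the direct link ALSW1 hears
# DSW2 itself (priority 24576 + sys-id); via DSW1 it hears DSW1 (32769).
sender_bid = {"via_DSW1": 32769, "direct_DSW2": 24577}
root_port_path = min(sender_bid, key=sender_bid.get)
print(root_port_path)  # -> direct_DSW2, so Gi0/2 becomes the root port
```

Swapping in long-method costs would change the arithmetic but not the method: accumulated cost first, then sender bridge ID.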

Question 7

Explanation

Root guard does not allow the port to become an STP root port, so the port is always STP-designated. If a superior BPDU arrives on this port, root guard ignores it rather than electing a new STP root. Instead, root guard puts the port into the root-inconsistent STP state, which is equivalent to the listening state. No traffic is forwarded across this port.

Below is an example of where to configure Root Guard on the ports. Notice that Root Guard is always configured on designated ports.

Root_Guard_Location.jpg

To configure Root Guard use this command:

Switch(config-if)# spanning-tree guard root

Reference: http://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/10588-74.html

Question 8

DNA Center Questions

January 27th, 2021 digitaltut 17 comments

DNA Center Quick Summary

Software-Defined Access (SDA) uses the software-defined architectural model, with a controller and various APIs. At the center sits the Digital Network Architecture (DNA) Center controller.

DNA_Center.jpg

Note:

 + REST API (Representational State Transfer API) is a web service architecture that provides a standard way for clients to communicate with servers over the web. It is based on the HTTP protocol and uses the concepts of resources (represented by URIs) and HTTP methods (GET, POST, PUT, DELETE) to interact with them.

 + RESTCONF, on the other hand, is a protocol that provides a programmatic interface to network devices using RESTful architecture. It is built on top of RESTful principles and is designed specifically for network management. RESTCONF provides a standardized way to access and manipulate network configuration data, state data, and operational data using HTTP methods.

DNA Center is the controller for SDA networks. The southbound side of the controller contains the fabric, underlay, and overlay:

Overlay: The mechanisms to create VXLAN tunnels between SDA switches, which are then used to transport traffic from one fabric endpoint to another over the fabric.

Overlay.jpg

Underlay: The network of devices and connections (cables and wireless) to provide IP connectivity to all nodes in the fabric, with a goal to support the dynamic discovery of all SDA devices and endpoints as a part of the process to create overlay VXLAN tunnels.

Underlay.jpg

The relationship between Overlay and Underlay is shown below:

VXLAN_VTEP.jpg

Fabric: The combination of overlay and underlay, which together provide all features to deliver data across the network with the desired features and attributes

Cisco DNA Center is a software solution that resides on the Cisco DNA Center appliance. The solution receives data in the form of streaming telemetry from every device (switch, router, access point, and wireless access controller) on the network. This data provides Cisco DNA Center with the real-time information it needs for the many functions it performs.

Cisco DNA Center collects data from several different sources and protocols on the local network, including the following: traceroute; syslog; NetFlow; Authentication, Authorization, and Accounting (AAA); routers; Dynamic Host Configuration Protocol (DHCP); Telnet; wireless devices; Command-Line Interface (CLI); Object IDs (OIDs); IP SLA; DNS; ping; Simple Network Management Protocol (SNMP); IP Address Management (IPAM); MIB; Cisco Connected Mobile Experiences (CMX); and AppDynamics.

Cisco DNA Center offers 360-degree extensibility through four distinct types of platform capabilities:

+ Intent-based APIs leverage the controller and enable business and IT applications to deliver intent to the network and to reap network analytics and insights for IT and business innovation. The Intent API provides policy-based abstraction of business intent, allowing focus on an outcome rather than struggling with individual mechanism steps.

For example, the administrator configures the intent or desired outcome of a set of security policies. DNA Center then communicates with the devices to determine exactly the configuration required to achieve that intent, and the complete configuration is sent down to the devices.
+ Process adapters, built on integration APIs, allow integration with other IT and network systems to streamline IT operations and processes.
+ Domain adapters, built on integration APIs, allow integration with other infrastructure domains such as data center, WAN, and security to deliver a consistent intent-based infrastructure across the entire IT environment.
+ SDKs allow management to be extended to third-party vendor’s network devices to offer support for diverse environments.

DNA Center APIs

Cisco DNA Center APIs are grouped into four categories: northbound, southbound, eastbound and westbound:
+ Northbound (Intent) APIs: enable developers to access Cisco DNA Center Automation and Assurance workflows. For example: provision SSIDs, QoS policies, update software images running on the network devices, and application health.
+ Southbound (Multivendor Support) APIs: allow partners to add support for managing non-Cisco devices directly from Cisco DNA Center
+ Eastbound (Events and Notifications) APIs: publish event notifications that enable third party applications to act on system level, operational or Cisco DNA Assurance notifications. For example, when some of the devices in the network are out of compliance, an eastbound API can enable an application to execute a software upgrade when it receives a notification.
+ Westbound (Integration) APIs: provide the capability to publish the network data, events and notifications to the external systems and consume information in Cisco DNA Center from the connected systems. Through integration APIs, Cisco DNA Center platform can power end-to-end IT processes across the value chain by integrating various domains such as IT Service Management (ITSM), IP address management (IPAM), and reporting. By leveraging the REST-based Integration Adapter APIs, bi-directional interfaces can be built to allow the exchange of contextual information between Cisco DNA Center and the external, third-party IT systems.
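As an illustration of how a northbound call is addressed, the sketch below builds the URL for a typical Intent API request. The host and token are placeholders, no request is actually sent, and while the `/dna/intent/api/v1/<resource>` path shape is the documented convention, verify the exact endpoints against your DNA Center version:

```python
# Hypothetical host and token; this sketch only constructs the request
# pieces and does not talk to a real controller.
DNAC_HOST = "dnac.example.com"
TOKEN = "<x-auth-token>"

def intent_url(resource: str) -> str:
    """Northbound Intent API URLs follow the /dna/intent/api/v1/<resource> pattern."""
    return "https://{}/dna/intent/api/v1/{}".format(DNAC_HOST, resource)

headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}
print(intent_url("network-device"))
# -> https://dnac.example.com/dna/intent/api/v1/network-device
```

A real client would first POST credentials to the token endpoint and then send the returned token in the X-Auth-Token header on every Intent API call.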

Question 1

Question 2

Explanation

A complete Cisco DNA Center upgrade includes “System Update” and “Application Updates”.

DNA_Complete_Upgrade.jpg

Question 3

Explanation

Cisco DNA Center provides an interactive editor called Template Editor to author CLI templates. Template Editor is a centralized CLI management tool to help design a set of device configurations that you need to build devices in a branch. When you have a site, office, or branch that uses a similar set of devices and configurations, you can use Template Editor to build generic configurations and apply the configurations to one or more devices in the branch.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/1-3/user_guide/b_cisco_dna_center_ug_1_3/b_cisco_dna_center_ug_1_3_chapter_0111.html

Question 4

Question 5

Explanation

The Cisco DNA Center open platform for intent-based networking provides 360-degree extensibility across multiple components, including:
+ Intent-based APIs leverage the controller to enable business and IT applications to deliver intent to the network and to reap network analytics and insights for IT and business innovation. These enable APIs that allow Cisco DNA Center to receive input from a variety of sources, both internal to IT and from line-of-business applications, related to application policy, provisioning, software image management, and assurance.

Reference: https://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/dna-center/nb-06-dna-cent-plat-sol-over-cte-en.html

Question 6

Explanation

You can create a network hierarchy that represents your network’s geographical locations. Your network hierarchy can contain sites, which in turn contain buildings and areas. You can create site and building IDs to easily identify where to apply design settings or configurations later.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/1-2-5/user_guide/b_dnac_ug_1_2_5/b_dnac_ug_1_2_4_chapter_0110.html

DNA_Center_Design_Network_Hierarchy.jpg

Question 7

Question 8

Explanation

The Intent API is a Northbound REST API that exposes specific capabilities of the Cisco DNA Center platform.
The Intent API provides policy-based abstraction of business intent, allowing focus on an outcome rather than struggling with individual mechanism steps.

Reference: https://developer.cisco.com/docs/dna-center/#!cisco-dna-center-platform-overview/intent-api-northbound

Question 9

Explanation

Cisco DNA Center allows customers to manage their non-Cisco devices through the use of a Software Development Kit (SDK) that can be used to create Device Packages for third-party devices.

Reference: https://developer.cisco.com/docs/dna-center/#!cisco-dna-center-platform-overview/multivendor-support-southbound

Question 10

Explanation

If your network uses Cisco Identity Services Engine for user authentication, you can configure Assurance for Cisco ISE integration. This enables you to see more information about wired clients, such as the username and operating system, in Assurance.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center-assurance/2-1-2/b_cisco_dna_assurance_2_1_2_ug/b_cisco_dna_assurance_2_1_1_ug_chapter_011.html

Security Questions

January 27th, 2021 digitaltut 14 comments

Question 1

Explanation

Lines (CON, AUX, VTY) default to level 1 privileges.

Question 2

Explanation

MACsec, defined in 802.1AE, provides MAC-layer encryption over wired networks by using out-of-band methods for encryption keying. The MACsec Key Agreement (MKA) Protocol provides the required session keys and manages the required encryption keys. MKA and MACsec are implemented after successful authentication using the 802.1x Extensible Authentication Protocol (EAP-TLS) or Pre-Shared Key (PSK) framework.

A switch using MACsec accepts either MACsec or non-MACsec frames, depending on the policy associated with the MKA peer. MACsec frames are encrypted and protected with an integrity check value (ICV). When the switch receives frames from the MKA peer, it decrypts them and calculates the correct ICV by using session keys provided by MKA. The switch compares that ICV to the ICV within the frame. If they are not identical, the frame is dropped. The switch also encrypts and adds an ICV to any frames sent over the secured port (the access point used to provide the secure MAC service to an MKA peer) using the current session key.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9300/software/release/16-9/configuration_guide/sec/b_169_sec_9300_cg/macsec_encryption.html

Note: Cisco Trustsec is the solution which includes MACsec.

Question 3

Explanation

The ultimate goal of Cisco TrustSec technology is to assign a tag (known as a Security Group Tag, or SGT) to the user’s or device’s traffic at ingress (inbound into the network), and then enforce the access policy based on the tag elsewhere in the infrastructure (in the data center, for example). This SGT is used by switches, routers, and firewalls to make forwarding decisions. For instance, an SGT may be assigned to a Guest user, so that Guest traffic may be isolated from non-Guest traffic throughout the infrastructure.

Reference: https://www.cisco.com/c/dam/en/us/solutions/collateral/borderless-networks/trustsec/C07-730151-00_overview_of_trustSec_og.pdf

Question 4

Explanation

The Cisco TrustSec solution simplifies the provisioning and management of network access control through the use of software-defined segmentation to classify network traffic and enforce policies for more flexible access controls. Traffic classification is based on endpoint identity, not IP address, enabling policy change without network redesign.

Reference: https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Apr2016/User-to-DC_Access_Control_Using_TrustSec_Deployment_April2016.pdf

Question 5

Explanation

The “enable secret” password is always encrypted (independent of the “service password-encryption” command) using the MD5 hash algorithm. The “enable password” command does not encrypt the password, which can be viewed in clear text in the running-config. In order to encrypt the “enable password”, use the “service password-encryption” command. This command encrypts passwords using the Vigenère cipher. Unfortunately, the Vigenère encryption method is cryptographically weak and trivial to reverse.

MD5 is a much stronger algorithm than Vigenère, so answer D is correct.
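The difference can be demonstrated: a Vigenère-style cipher is trivially reversible once the key is known (and the key used for IOS type 7 passwords is public knowledge), while an MD5 digest has no decryption operation at all. A sketch using a toy Vigenère variant (not the exact IOS type 7 algorithm; IOS also salts and iterates MD5 for type 5 secrets rather than using a bare digest):

```python
import hashlib

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Toy Vigenère over printable ASCII: reversible whenever the key is known."""
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) * (-1 if decrypt else 1)
        out.append(chr((ord(ch) - 32 + shift) % 95 + 32))
    return "".join(out)

cipher = vigenere("MySecret1", "tfd")                        # "encrypt"
assert vigenere(cipher, "tfd", decrypt=True) == "MySecret1"  # fully reversible

digest = hashlib.md5(b"MySecret1").hexdigest()  # one-way: cannot be "decrypted"
print(len(digest))  # 32 hex characters
```

Reversibility is exactly why online "type 7 decrypters" exist, and why a hashed secret can only be attacked by guessing candidate passwords and comparing hashes.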

Question 6

Explanation

Firepower Threat Defense (FTD) provides six interface modes: Routed, Switched, Inline Pair, Inline Pair with Tap, Passive, and Passive (ERSPAN).

+ When Inline Pair mode is in use, packets can be blocked since they are processed inline
+ When you use Inline Pair mode, the packet goes mainly through the FTD Snort engine
+ When Tap mode is enabled, a copy of the packet is inspected and dropped internally while the actual traffic goes through FTD unmodified

Reference: https://www.cisco.com/c/en/us/support/docs/security/firepower-ngfw/200924-configuring-firepower-threat-defense-int.html

Question 7

Question 8

Explanation

Ransomware is malicious software that locks up critical resources of its victims. Ransomware uses well-established public/private key cryptography, which leaves paying the ransom, or restoring files from backups, as the only ways of recovering the files.

Cisco Advanced Malware Protection (AMP) for Endpoints Malicious Activity Protection (MAP) engine defends your endpoints by monitoring the system and identifying processes that exhibit malicious activities when they execute and stops them from running. Because the MAP engine detects threats by observing the behavior of the process at run time, it can generically determine if a system is under attack by a new variant of ransomware or malware that may have eluded other security products and detection technology, such as legacy signature-based malware detection. The first release of the MAP engine targets identification, blocking, and quarantine of ransomware attacks on the endpoint.

Reference: https://www.cisco.com/c/dam/en/us/products/collateral/security/amp-for-endpoints/white-paper-c11-740980.pdf

Question 9

Explanation

Clustering lets you group multiple Firepower Threat Defense (FTD) units together as a single logical device. Clustering is only supported for the FTD device on the Firepower 9300 and the Firepower 4100 series. A cluster provides all the convenience of a single device (management, integration into a network) while achieving the increased throughput and redundancy of multiple devices.

Question 10

Explanation

The “exec-timeout” command is used to configure the inactive session timeout on the console port or the virtual terminal. The syntax of this command is:

exec-timeout minutes [seconds]

Therefore we need to use the “exec-timeout 10 0” command to set the user inactivity timer to 600 seconds (10 minutes).
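Converting a required timeout in seconds into the command's minutes/seconds arguments is just a divmod; a quick check (Python, hypothetical helper name):

```python
def exec_timeout_args(total_seconds: int) -> str:
    """Render 'exec-timeout <minutes> <seconds>' for a timeout given in seconds."""
    minutes, seconds = divmod(total_seconds, 60)
    return "exec-timeout {} {}".format(minutes, seconds)

print(exec_timeout_args(600))  # -> exec-timeout 10 0
print(exec_timeout_args(90))   # -> exec-timeout 1 30
```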

Question 11

Explanation

If you’re a website owner and your website displays this error message, then there could be two reasons why the browser says the cert authority is invalid:
+ You’re using a self-signed SSL certificate, OR
+ The certificate authority (CA) that issued your SSL certificate isn’t trusted by your web browser.

Access-list Questions

January 27th, 2021 digitaltut 38 comments

If you are not sure about Access-list, please read our Access-list tutorial.

Question 1

Explanation

Remember, for the wildcard mask, 1’s mean I DON’T CARE and 0’s mean I CARE. Now let’s analyze a simple ACL:

access-list 1 permit 172.23.16.0 0.0.15.255

The first two octets are all 0’s, meaning that we care about the network 172.23.x.x. The third octet of the wildcard mask, 15 (0000 1111 in binary), means that we care about the first 4 bits but don’t care about the last 4 bits, so we allow the third octet in the form 0001xxxx (minimum: 0001 0000 = 16; maximum: 0001 1111 = 31).

wildcard_mask.jpg

The fourth octet is 255 (all 1 bits), which means I don’t care.

Therefore network 172.23.16.0 0.0.15.255 ranges from 172.23.16.0 to 172.23.31.255.

Now let’s consider the wildcard mask 0.0.0.254 (fourth octet: 254 = 1111 1110), which means we only care about the last bit. Therefore if the last bit of the IP address in the statement is a “1” (0000 0001), only odd numbers are allowed; if it is a “0” (0000 0000), only even numbers are allowed.

Note: In binary, odd numbers always end with a “1” while even numbers always end with a “0”.

Therefore in this question, only the statement “permit 10.0.0.1 0.0.0.254” will allow all odd-numbered hosts in the 10.0.0.0/24 subnet.
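Both results above can be verified in a few lines. This sketch (illustrative Python) expands an address/wildcard pair into its range and checks the odd-host mask:

```python
import ipaddress

def acl_range(addr: str, wildcard: str):
    """Return (lowest, highest) addresses matched by addr + wildcard mask."""
    a = int(ipaddress.IPv4Address(addr))
    w = int(ipaddress.IPv4Address(wildcard))
    low = a & ~w & 0xFFFFFFFF   # force the don't-care bits to 0
    high = low | w              # force the don't-care bits to 1
    return str(ipaddress.IPv4Address(low)), str(ipaddress.IPv4Address(high))

print(acl_range("172.23.16.0", "0.0.15.255"))  # ('172.23.16.0', '172.23.31.255')

def matches(addr: str, base: str, wildcard: str) -> bool:
    """True if addr matches the ACL entry: compare only the cared-about (0) bits."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    return (a & ~w) == (b & ~w)

# "permit 10.0.0.1 0.0.0.254": the only cared-about host bit is the last one, set to 1
print(matches("10.0.0.7", "10.0.0.1", "0.0.0.254"))  # True (odd host)
print(matches("10.0.0.8", "10.0.0.1", "0.0.0.254"))  # False (even host)
```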

Question 2

Question 3

Explanation

The syntax of an extended ACL is shown below:

access-list access-list-number {permit | deny} protocol source {source-mask} [eq source-port] destination {destination-mask} [eq destination-port]

Access_list_Inbound.jpg

According to the request in this question, we must apply the ACL inbound on the port connected to the Web Server, so it can only filter traffic sent from the Web Server to the Client. Notice that the Client communicates with the Web Server using a destination port of 80 but a random source port, so the Web Server must answer the Client on that random port (as the destination port) -> therefore the destination port in the required ACL must be ignored. The Web Server also uses port 80 as its source port.

So the structure of the ACL should be: permit tcp host <IP-address-of-Web-Server> eq 80 host <IP-address-of-Client>

-> Answer C is correct.

Question 4

Explanation

Although the statement “permit tcp any any gt … lt …” seems to be correct, in fact it is not. Each ACL statement supports either “gt” or “lt”, but not both:

Access-list_gt_lt.jpg

-> Answer A is not correct.

Answer C would only be correct if its statements were in the reverse order.

Question 5

Explanation

We can insert a line (statement) between entries of an existing ACL by giving it a sequence number that falls between the existing ones.

access_list_add_one_statement.jpg

So what happens if we enter a statement without a number? That statement is added at the bottom of the ACL. But in this case we already have an explicit “deny ip any any” statement, so any line appended below it would never be matched.

Question 6

Explanation

We cannot filter traffic that originates from the local router (R3 in this case), so we can only configure the ACL on R1 or R2. “Weekend hours” means from Saturday morning through Sunday night, so we have to configure: “periodic weekend 00:00 to 23:59”.

Note: The time is specified in 24-hour time (hh:mm), where the hours range from 0 to 23 and the minutes range from 0 to 59.

Question 7

Explanation

The established keyword is only applicable to TCP access list entries; it matches TCP segments that have the ACK and/or RST control bit set (regardless of the source and destination ports), which assumes that a TCP connection has already been established in one direction. Let’s see an example below:

access-list_established.jpg

Suppose you only want to allow the hosts inside your company to telnet to an outside server but not vice versa; you can simply use an “established” access list like this:

access-list 100 permit tcp any any established
access-list 101 permit tcp any any eq telnet
!
interface S0/0
ip access-group 100 in
ip access-group 101 out

Note:

Suppose host A wants to start communicating with host B using TCP. Before they can send real data, a three-way handshake must be established first. Let’s see how this process takes place:

TCP_Three_way_handshake.jpg

1. First, host A sends a SYN message (a TCP segment with the SYN flag set to 1; SYN is short for SYNchronize) to indicate it wants to set up a connection with host B. This message includes a sequence (SEQ) number for tracking purposes. This sequence number can be any 32-bit number (ranging from 0 to 2^32 – 1), so we use “x” to represent it.

2. After receiving the SYN message from host A, host B replies with a SYN-ACK message (some books call it a “SYN/ACK” or “SYN, ACK” message; ACK is short for ACKnowledge). This message includes a SYN sequence number and an ACK number:
+ The SYN sequence number (let’s call it “y”) is a random number and has no relationship with host A’s SYN SEQ number.
+ The ACK number is one more than host A’s SYN sequence number, so we represent it as “x+1”. It means “I received your part. Now send me the next part (x + 1)”.

The SYN-ACK message indicates that host B accepts talking to host A (via the ACK part) and asks whether host A still wants to talk to it as well (via the SYN part).

3. After host A receives the SYN-ACK message from host B, it sends an ACK message with ACK number “y+1” to host B. This confirms that host A still wants to talk to host B.
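The SEQ/ACK arithmetic of the three steps above can be written out as a toy illustration (this is not a real TCP stack, just the numbers exchanged):

```python
import random

# Each side picks a random 32-bit initial sequence number
x = random.randrange(2**32)  # host A's initial SEQ
y = random.randrange(2**32)  # host B's initial SEQ

syn     = {"flags": "SYN",      "SEQ": x}                   # step 1: A -> B
syn_ack = {"flags": "SYN,ACK",  "SEQ": y, "ACK": x + 1}     # step 2: B -> A
ack     = {"flags": "ACK",      "SEQ": x + 1, "ACK": y + 1} # step 3: A -> B

# Each ACK number acknowledges the other side's sequence number plus one
assert syn_ack["ACK"] == syn["SEQ"] + 1
assert ack["ACK"] == syn_ack["SEQ"] + 1
```

(In a real TCP stack, sequence numbers wrap around modulo 2^32; that detail is omitted here.)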

Question 8

Explanation

The inbound direction of G0/0 of SW2 only filters traffic from the Web Server to PC-1, so the source IP address and port are those of the Web Server.

Question 9

Explanation

Note: We cannot assign a number smaller than 100 to an extended access-list so the command “ip access-list extended 10” is not correct.

extended_acl_number.jpg

Question 10

Explanation

We see in the traceroute result that the packet could reach 10.99.69.5 (on R2) but could not go any further, so we can deduce that an ACL on R3 was blocking it.

Note: Record option displays the address(es) of the hops (up to nine) the packet goes through.

Access-list Questions 2

January 27th, 2021 digitaltut 8 comments

Question 1

Explanation

The established keyword is only applicable to TCP access list entries; it matches TCP segments that have the ACK and/or RST control bit set (regardless of the source and destination ports), which assumes that a TCP connection has already been established in one direction. Let’s see an example below:

access-list_established.jpg

Suppose you only want to allow the hosts inside your company to telnet to an outside server but not vice versa; you can simply use an “established” access list like this:

access-list 100 permit tcp any any established
access-list 101 permit tcp any any eq telnet
!
interface S0/0
ip access-group 100 in
ip access-group 101 out

Note:

Suppose host A wants to start communicating with host B using TCP. Before they can send real data, a three-way handshake must be established first. Let’s see how this process takes place:

TCP_Three_way_handshake.jpg

1. First, host A sends a SYN message (a TCP segment with the SYN flag set to 1; SYN is short for SYNchronize) to indicate it wants to set up a connection with host B. This message includes a sequence (SEQ) number for tracking purposes. This sequence number can be any 32-bit number (ranging from 0 to 2^32 – 1), so we use “x” to represent it.

2. After receiving the SYN message from host A, host B replies with a SYN-ACK message (some books call it a “SYN/ACK” or “SYN, ACK” message; ACK is short for ACKnowledge). This message includes a SYN sequence number and an ACK number:
+ The SYN sequence number (let’s call it “y”) is a random number and has no relationship with host A’s SYN SEQ number.
+ The ACK number is one more than host A’s SYN sequence number, so we represent it as “x+1”. It means “I received your part. Now send me the next part (x + 1)”.

The SYN-ACK message indicates that host B accepts talking to host A (via the ACK part) and asks whether host A still wants to talk to it as well (via the SYN part).

3. After host A receives the SYN-ACK message from host B, it sends an ACK message with ACK number “y+1” to host B. This confirms that host A still wants to talk to host B.

Question 2

Multicast Questions

January 27th, 2021 digitaltut 11 comments

Question 1

Explanation

A rendezvous point (RP) is required only in networks running Protocol Independent Multicast sparse mode (PIM-SM).

By default, the RP is needed only to start new sessions with sources and receivers.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_papers/rps.html

For your information: in PIM-SM, only network segments with active receivers that have explicitly requested the multicast data receive the traffic. This method of delivering multicast data is in contrast to the PIM dense mode (PIM-DM) model, in which multicast traffic is initially flooded to all segments of the network, and routers that have no downstream neighbors or directly connected receivers prune back the unwanted traffic.

Question 2

Explanation

PIM dense mode (PIM-DM) uses a push model to flood multicast traffic to every corner of the network. This push model is a brute-force method of delivering data to the receivers. This method would be efficient in certain deployments in which there are active receivers on every subnet in the network. PIM-DM initially floods multicast traffic throughout the network. Routers that have no downstream neighbors prune the unwanted traffic. This process repeats every 3 minutes.

PIM Sparse Mode (PIM-SM) uses a pull model to deliver multicast traffic. Only network segments with active receivers that have explicitly requested the data receive the traffic. PIM-SM distributes information about active sources by forwarding data packets on the shared tree. Because PIM-SM uses shared trees (at least initially), it requires the use of an RP. The RP must be administratively configured in the network.

Answer C seems to be correct but it is not: in PIM sparse mode, sources (not receivers) register with the RP. Sources register with the RP, and data is then forwarded down the shared tree to the receivers.

Reference: Selecting MPLS VPN Services Book, page 193
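Because PIM-SM requires the RP to be administratively configured, a minimal static-RP setup looks roughly like the IOS sketch below (the interface and the RP address 10.1.1.1 are made-up example values):

```
! Enable multicast routing and PIM sparse mode on the relevant interface
ip multicast-routing
!
interface GigabitEthernet0/0
 ip pim sparse-mode
!
! Statically point all routers at the RP (example address)
ip pim rp-address 10.1.1.1
```

Dynamic alternatives such as Auto-RP or BSR can distribute the RP address instead of configuring it statically on every router.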

Question 3

Explanation

The concept of joining the rendezvous point (RP) is called the RPT (rendezvous point tree) or shared distribution tree. The RP is the root of our tree and decides where to forward multicast traffic. Each multicast group might have different sources and receivers, so we might have different RPTs in our network.

Question 4

Explanation

In the figure below, we can see that the RP sent a “join 234.1.1.1” message toward the source.

RP_join_source.jpg

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/apjc/docs/2018/pdf/BRKIPM-1261.pdf

Question 5

Explanation

Query messages are used to elect the IGMP querier as follows:
1. When IGMPv2 devices start, they each multicast a general query message to the all-systems group address of 224.0.0.1 with their interface address in the source IP address field of the message.
2. When an IGMPv2 device receives a general query message, the device compares the source IP address in the message with its own interface address. The device with the lowest IP address on the subnet is elected the IGMP querier.
3. All devices (excluding the querier) start the query timer, which is reset whenever a general query message is received from the IGMP querier. If the query timer expires, it is assumed that the IGMP querier has gone down, and the election process is performed again to elect a new IGMP querier.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/15-2_2_e/multicast/configuration_guide/b_mc_1522e_3750x_3560x_cg/b_ipmc_3750x_3560x_chapter_01000.html
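The election rule in step 2 — lowest interface IP address wins — can be sketched in Python (an illustration only; `elect_querier` is our own helper name):

```python
import ipaddress

# IGMPv2 querier election: the device with the lowest interface IP
# address on the subnet becomes the querier.
def elect_querier(device_ips: list[str]) -> str:
    return min(device_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))

print(elect_querier(["192.168.1.30", "192.168.1.2", "192.168.1.100"]))  # -> 192.168.1.2
```

Note that comparing the addresses numerically (not as strings) matters: as a string, "192.168.1.100" would sort before "192.168.1.30".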

NTP Questions

January 27th, 2021 digitaltut 6 comments

Quick review of NTP

NTP uses the concept of a stratum to describe how many NTP hops away a machine is from an authoritative time source, usually a reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).

– NTP is designed to synchronize the time on a network. NTP runs over the User Datagram Protocol (UDP), using port 123 as both the source and destination.
– An Authoritative NTP Server can distribute time even when it is not synchronized to an existing time server. To configure a Cisco device as an Authoritative NTP Server, use the ntp master [stratum] command.
– To configure the local device to use a remote NTP clock source, use the command ntp server <IP address>. For example: Router(config)#ntp server 192.168.1.1
– The ntp authenticate command is used to enable the NTP authentication feature (NTP authentication is disabled by default).
– The ntp trusted-key command specifies one or more keys that a time source must provide in its NTP packets in order for the device to synchronize to it. This command provides protection against accidentally synchronizing the device to a time source that is not trusted.
– The ntp authentication-key defines the authentication keys. The device does not synchronize to a time source unless the source has one of these authentication keys and the key number is specified by the ntp trusted-key number command.
– Two most popular commands to display time sources statistics: show ntp status and show ntp associations
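Putting the commands above together, an authenticated NTP client configuration looks roughly like this hedged IOS sketch (the key string and addresses are example values):

```
! Define a key, enable authentication, and trust the key
ntp authentication-key 1 md5 MySecretKey
ntp authenticate
ntp trusted-key 1
! Synchronize to a remote server, requiring it to use key 1
ntp server 192.168.1.1 key 1
! Alternatively, act as an authoritative time source at stratum 3:
! ntp master 3
```

After synchronization, show ntp status and show ntp associations confirm the stratum and the selected time source.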

Question 1

Explanation

The stratum levels define the distance from the reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).

NTP uses the concept of a stratum to describe how many NTP hops away a machine is from an authoritative time source. A stratum 1 time server typically has an authoritative time source (such as a radio or atomic clock, or a Global Positioning System (GPS) time source) directly attached, a stratum 2 time server receives its time via NTP from a stratum 1 time server, and so on.

Reference: https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/bsm/16-6-1/b-bsm-xe-16-6-1-asr920/bsm-time-calendar-set.html

Question 2

Explanation

The time kept on a machine is a critical resource, and it is strongly recommended that you use the security features of NTP to avoid the accidental or malicious setting of incorrect time. The two security features available are an access list-based restriction scheme and an encrypted authentication mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/availability/high-availability/19643-ntpm.html

Question 3

Explanation

The time kept on a machine is a critical resource, and it is strongly recommended that you use the security features of NTP to avoid the accidental or malicious setting of incorrect time. The two security features available are an access list-based restriction scheme and an encrypted authentication mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/availability/high-availability/19643-ntpm.html

Question 4

Explanation

If the system clock has not been set, the date and time are preceded by an asterisk (*) to indicate that the date and time are probably not correct.

Reference: https://www.cisco.com/E-Learning/bulk/public/tac/cim/cib/using_cisco_ios_software/cmdrefs/service_timestamps.htm

Moreover, when we use “show clock” on a brand-new router (on which nothing has been configured), we see the clock set to the default value with an asterisk mark.

asterisk_mask_clock.jpg

Therefore we can deduce that this device has not been configured with NTP.

CoPP Questions

January 27th, 2021 digitaltut No comments

If you are not sure about CoPP, please read our Control Plane Policing (CoPP) Tutorial (on networktut.com)

Question 1

Explanation

CoPP protects the route processor on network devices by treating route processor resources as a separate entity with its own ingress interface (and in some implementations, egress also). CoPP is used to police traffic that is destined to the route processor of the router such as:
+ Routing protocols like OSPF, EIGRP, or BGP.
+ Gateway redundancy protocols like HSRP, VRRP, or GLBP.
+ Network management protocols like telnet, SSH, SNMP, or RADIUS.

CoPP_SSH.jpg

Therefore we must apply CoPP to deal with SSH because SSH belongs to the management plane. CoPP must be configured under the “control-plane” command, and we cannot give the control plane a name (like “transit”).
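For context, a CoPP policy that rate-limits SSH destined to the route processor follows the usual MQC class-map/policy-map structure, roughly as in this hedged sketch (the ACL, class, policy names and police rate are example values):

```
! Classify SSH traffic destined to the device itself
ip access-list extended COPP-SSH
 permit tcp any any eq 22
!
class-map match-all COPP-SSH-CLASS
 match access-group name COPP-SSH
!
policy-map COPP-POLICY
 class COPP-SSH-CLASS
  police 8000 conform-action transmit exceed-action drop
!
! Apply the policy to the control plane (note: no name can be given here)
control-plane
 service-policy input COPP-POLICY
```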

Question 2

Automation Questions

January 25th, 2021 digitaltut 29 comments

Note: If you are not sure about Automation, please read our JSON Tutorial, JSON Web Token (JWT) Tutorial, Ansible Tutorial, Chef Tutorial, Puppet Tutorial.

Quick summary about Ansible, Puppet, Chef

Ansible_Puppet_Chef_compare.jpg

JSON Quick summary

JavaScript Object Notation (JSON) is a human readable and very popular format used by web services, programming languages (including Python) and APIs to read/write data.

JSON syntax structure:
+ uses curly braces {} to hold objects and square brackets [] to hold arrays
+ JSON data is written as key/value pairs
+ A key/value pair consists of a key (must be a string in double quotation marks ""), followed by a colon :, followed by a value. For example: “name”:”John”
+ Each key must be unique
+ Values must be of type string, number, object, array, boolean or null
+ Multiple key/value within an object are separated by commas ,

JSON can use arrays. Arrays are used to store multiple values in a single variable. For example:

{
  "name": "John",
  "age": 30,
  "cars": ["Ford", "BMW", "Fiat"]
}

In the above example, “cars” is an array which contains three values “Ford”, “BMW” and “Fiat”.

If we have a JSON string, we can convert (parse) it into Python by using the json.loads() method, which returns a Python dictionary:

import json
myvar = '{"name": "John", "age": 30, "cars": ["Ford", "BMW", "Fiat"]}'
parse_myvar = json.loads(myvar)
print(parse_myvar["cars"][0])

The result:

Ford

Note:
+ The json.dumps() function converts a Python object into a JSON string. For example, we can convert a dictionary into a JSON string: json.dumps({'name': 'John', 'age': '20'})
+ The json.loads() method parses a valid JSON string and converts it into a Python dictionary.

NETCONF Quick summary

NETCONF provides mechanisms to retrieve and manipulate the configuration of network devices. NETCONF is very similar to the Command Line Interface (CLI) we all know, but the main difference is that the CLI is designed for humans while NETCONF is aimed at automation applications.

NETCONF protocol is based on XML messages exchanged via SSH protocol using TCP port 830 (default). Network devices running a NETCONF agent can be managed through five main operations:
get: This operation retrieves the running configuration and device state information.
get-config: This operation retrieves all or part of a specified configuration.
edit-config: This operation loads all or part of a specified configuration to the specified device.
copy-config: This operation creates or replaces an entire configuration with specified contents.
delete-config: This operation deletes a configuration. The running configuration cannot be deleted.

The NETCONF protocol requires messages to always be encoded with XML.
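To give a feel for those XML messages, the sketch below builds (but does not send) the body of a NETCONF &lt;get-config&gt; RPC with the standard library. In practice a library such as ncclient would construct and transport this over SSH on TCP 830; the message-id value is an arbitrary example:

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(source: str = "running") -> str:
    # <rpc><get-config><source><running/></source></get-config></rpc>
    rpc = ET.Element(f"{{{NS}}}rpc", attrib={"message-id": "101"})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config())
```

The same pattern applies to edit-config, copy-config and delete-config: each operation is just a differently shaped XML element inside the &lt;rpc&gt; wrapper.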

RESTCONF Quick summary

RESTCONF exposes NETCONF-style operations over the most popular protocol on the Internet: HTTP/HTTPS. Network devices running a RESTCONF agent can be managed through these HTTP operations:
OPTIONS: Discover which operations are supported by a data resource.
HEAD: The same as GET, but only the response headers are returned (no body).
GET: This method retrieves data and metadata for a resource. It is supported for all resource types, except operation resources.
PATCH: This method partially modifies a resource (the equivalent of the NETCONF merge operation).
PUT: This method creates or replaces the target resource.
POST: This method creates a data resource or invokes an operations resource.
DELETE: This method deletes the target resource.

The RESTCONF protocol allows data to be encoded with either XML or JSON.
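A RESTCONF request is then just an ordinary HTTP call against a /restconf/data/ URL. The sketch below assembles (but does not send) such a request; the host and the YANG path are made-up examples, while "application/yang-data+json" is the RESTCONF media type defined in RFC 8040:

```python
# Assemble a RESTCONF GET request as a plain dict (no network I/O)
def restconf_get(host: str, yang_path: str) -> dict:
    return {
        "method": "GET",
        "url": f"https://{host}/restconf/data/{yang_path}",
        "headers": {"Accept": "application/yang-data+json"},
    }

req = restconf_get("10.0.0.1", "ietf-interfaces:interfaces")
print(req["url"])  # -> https://10.0.0.1/restconf/data/ietf-interfaces:interfaces
```

In a real script this dict would be handed to an HTTP client (e.g. the requests package) together with credentials.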

YANG Quick summary

YANG (Yet Another Next Generation) is a data modelling language, providing a standardized way to model the operational and configuration data of a network device. YANG can then be converted into any encoding format, e.g. XML or JSON. An example of a YANG model is shown below (source: Cisco Live DEVNET-1721):

Yang_example.png

Question 1

Explanation

An Ansible-managed node can be a Juniper device or another vendor’s device as well, so answer A is not correct.

Ansible communicates with managed nodes via SSH -> Answer B is correct.

An Ansible ad-hoc command uses the /usr/bin/ansible command-line tool to automate a single task on one or more managed nodes. Ad-hoc commands are quick and easy, but they are not reusable -> It is not a requirement either -> Answer C is not correct.

Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams of all kinds, but it is not a requirement to run Ansible -> Answer D is not correct.

Ansible_workflow.jpg

Note: Managed Nodes are the network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.

Question 2

Explanation

When a device boots up with the startup configuration, the nginx process will be running. NGINX is an internal webserver that acts as a proxy webserver. It provides Transport Layer Security (TLS)-based HTTPS. RESTCONF request sent via HTTPS is first received by the NGINX proxy web server, and the request is transferred to the confd web server for further syntax/semantics check.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/prog/configuration/168/b_168_programmability_cg/RESTCONF.html

The HTTPS-based protocol RESTCONF (RFC 8040), which is a stateless protocol, uses secure HTTP methods to provide CREATE, READ, UPDATE and DELETE (CRUD) operations on a conceptual datastore containing YANG-defined data -> RESTCONF only uses HTTPS.

Note: In fact answer C is also correct:

RESTCONF servers MUST present an X.509v3-based certificate when establishing a TLS connection with a RESTCONF client. The use of X.509v3-based certificates is consistent with NETCONF over TLS.

Reference: https://tools.ietf.org/html/rfc8040

But answer A is still a better choice.

Question 3

Explanation

RESTCONF operations include OPTIONS, HEAD, GET, POST, PUT, PATCH, DELETE.

+ OPTIONS: Determine which methods are supported by the server.
+ GET: Retrieve data and metadata about a resource.
+ HEAD: The same as GET, but only the response headers are returned.
+ POST: Create a resource or invoke an RPC operation.
+ PUT: Create or replace a resource.
+ PATCH: Create or update (but not delete) various resources.
+ DELETE: Sent by a client to delete a target resource.

Question 4

Question 5

Explanation

An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of EEM policies: an applet or a script. An applet is a simple form of policy that is defined within the CLI configuration. A script is a form of policy that is written in Tool Command Language (Tcl).

There are two ways to manually run an EEM policy. EEM usually schedules and runs policies on the basis of an event specification that is contained within the policy itself. The event none command allows EEM to identify an EEM policy that can be manually triggered. To run the policy, use either the action policy command in applet configuration mode or the event manager run command in privileged EXEC mode.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/eem/configuration/xe-3s/eem-xe-3s-book/eem-policy-cli.html
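A minimal manually triggered applet therefore looks roughly like this hedged IOS sketch (the applet name and syslog message are example values):

```
! An applet with "event none" has no automatic trigger
event manager applet MANUAL-DEMO
 event none
 action 1.0 syslog msg "MANUAL-DEMO was run manually"
!
! Then, from privileged EXEC mode:
! Router# event manager run MANUAL-DEMO
```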

Question 6

Explanation

EEM offers the ability to monitor events and take informational or corrective action when the monitored events occur or reach a threshold. An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of EEM policies: an applet or a script. An applet is a simple form of policy that is defined within the CLI configuration.

To specify the event criteria for an Embedded Event Manager (EEM) applet that is run by sampling Simple Network Management Protocol (SNMP) object identifier values, use the event snmp command in applet configuration mode.
event snmp oid oid-value get-type {exact | next} entry-op operator entry-val entry-value [exit-comb {or | and}] [exit-op operator] [exit-val exit-value] [exit-time exit-time-value] poll-interval poll-int-value

+ oid: Specifies the SNMP object identifier (object ID)
+ get-type: Specifies the type of SNMP get operation to be applied to the object ID specified by the oid-value argument.
— next – Retrieves the object ID that is the alphanumeric successor to the object ID specified by the oid-value argument.
+ entry-op: Compares the contents of the current object ID with the entry value using the specified operator. If there is a match, an event is triggered and event monitoring is disabled until the exit criteria are met.
+ entry-val: Specifies the value with which the contents of the current object ID are compared to decide if an SNMP event should be raised.
+ exit-op: Compares the contents of the current object ID with the exit value using the specified operator. If there is a match, an event is triggered and event monitoring is reenabled.
+ poll-interval: Specifies the time interval between consecutive polls (in seconds)

Reference: https://www.cisco.com/en/US/docs/ios/12_3t/12_3t4/feature/guide/gtioseem.html

In particular, this EEM applet reads the next value of the above OID every 5 seconds and triggers an action if the value is greater than or equal to (ge) 75%.
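Assembled as an applet, that event definition looks roughly like the hedged IOS sketch below (the applet name and syslog message are example values; the OID shown is the commonly used Cisco CPU-utilization object, included here as an example):

```
event manager applet HIGH-CPU
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.7.1 get-type next entry-op ge entry-val 75 poll-interval 5
 action 1.0 syslog msg "CPU utilization is at or above 75 percent"
```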

Question 7

Explanation

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.

JSON Web Tokens are composed of three parts, separated by a dot (.): Header, Payload, Signature. Therefore, a JWT typically looks like the following:

xxxxx.yyyyy.zzzzz

The header typically consists of two parts: the type of the token, which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.
The second part of the token is the payload, which contains the claims. Claims are statements about an entity (typically, the user) and additional data.
To create the signature part you have to take the encoded header, the encoded payload, a secret, the algorithm specified in the header, and sign that.

Reference: https://jwt.io/introduction/
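The xxxxx.yyyyy.zzzzz structure can be reproduced with only the standard library, as in this sketch of an HS256-signed token (real applications should use a maintained JWT library; the payload and secret below are example values):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use URL-safe base64 with the '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # Signing input is base64url(header) + "." + base64url(payload)
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    # Signature is HMAC-SHA256 over the signing input with the shared secret
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_jwt({"sub": "user1", "admin": True}, b"demo-secret")
print(token.count("."))  # -> 2 (three dot-separated parts)
```

Verification is the mirror image: recompute the HMAC over the first two parts and compare it to the third.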

Question 8

Explanation

When you use the sync yes option in the event cli command, the EEM applet runs before the CLI command is executed. The EEM applet should set the _exit_status variable to indicate whether the CLI command should be executed (_exit_status set to one) or not (_exit_status set to zero).

With the sync no option, the EEM applet is executed in background in parallel with the CLI command.

Reference: https://blog.ipspace.net/2011/01/eem-event-cli-command-options-and.html

Question 9

Question 10

Explanation

YANG (Yet Another Next Generation) is a data modeling language for the definition of data sent over network management protocols such as the NETCONF and RESTCONF.

Question 11

Explanation

The REST API accepts and returns HTTP (not enabled by default) or HTTPS messages that contain JavaScript Object Notation (JSON) or Extensible Markup Language (XML) documents. You can use any programming language to generate the messages and the JSON or XML documents that contain the API methods or Managed Object (MO) descriptions.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide/b_Cisco_APIC_REST_API_Configuration_Guide_chapter_01.html

Question 12

Explanation

This JSON can be written as follows:

{
   "switch": {
      "name": "dist1",
      "interfaces": ["gig1", "gig2", "gig3"]
   }
}

Automation Questions 2

January 24th, 2021 digitaltut 17 comments

Question 1

Explanation

The words “try” and “except” are Python keywords and are used to catch exceptions. For example (Python 3):

try:
    print(1 / 0)
except ZeroDivisionError:
    print("Error! We cannot divide by zero!!!")

Question 2

Explanation

If we have a JSON string, we can parse it by using the json.loads() method so we don’t need to have a response server to test this question. Therefore, in order to test the result above, you can try this Python code:

import json
json_string = """
{
 "ins_api": { <!!!Please copy the code above and put here. We omitted it to save some space!!!>
 }
}
""" 
response = json.loads(json_string)

print(response['ins_api']['outputs']['output']['body']['kickstart_ver_str'])

And this is the result:

Python_JSON_print.jpg

Note:
+ If you want to run the full code in this question in Python (with a real HTTP JSON response), you must first install “requests” package before “import requests”.
+ The error “NameError: name ‘json’ is not defined” is only shown if we forgot the line “import json” in Python code -> Answer A is not correct.
+ We only see the “KeyError” message if we try to print out an unknown attribute (key). For example:

print(response['ins_api']['outputs']['output']['body']['unknown_attribute'])

Python_error_unknow_attribute.jpg
+ Triple quotes (“””) in Python allow strings to span multiple lines, including verbatim NEWLINEs, TABs, and any other special characters.

Question 3

Explanation

Cisco IOS XE supports the Yet Another Next Generation (YANG) data modeling language. YANG can be used with the Network Configuration Protocol (NETCONF) to provide the desired solution of automated and programmable network operations. NETCONF (RFC 6241) is an XML-based protocol that client applications use to request information from and make configuration changes to the device. YANG is primarily used to model the configuration and state data used by NETCONF operations.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9500/software/release/16-5/configuration_guide/prog/b_165_prog_9500_cg/data_models.pdf

Note: Although NETCONF uses XML, XML itself is not a data modeling language.

Question 4

Explanation

The 404 (Not Found) error status code indicates that the REST API cannot map the client’s URI to a resource, although the resource may become available in the future. Subsequent requests by the client are permissible.

Reference: https://restfulapi.net/http-status-codes/

Question 5

Explanation

A 401 error response indicates that the client tried to operate on a protected resource without providing the proper authorization. It may have provided the wrong credentials or none at all.

Note: A 4xx code indicates a “client error” while a 5xx code indicates a “server error”.

Reference: https://restfulapi.net/http-status-codes/

Question 6

Question 7

Explanation

The Southbound API is used to communicate with network devices.

Southbound_Northbound_APIs.jpg

Question 8

Explanation

To enable the action of printing data directly to the local tty when an Embedded Event Manager (EEM) applet is triggered, use the action puts command in applet configuration mode.

The following example shows how to print data directly to the local tty:

Router(config)# event manager applet puts
Router(config-applet)# event none
Router(config-applet)# action 1 regexp "(.*) (.*) (.*)" "one two three" _match _sub1
Router(config-applet)# action 2 puts "match is $_match"
Router(config-applet)# action 3 puts "submatch 1 is $_sub1"
Router# event manager run puts
match is one two three
submatch 1 is one
Router#

The action puts command applies to synchronous events. The output of this command for a synchronous applet is directly displayed to the tty, bypassing the syslog.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/eem/command/eem-cr-book/eem-cr-a1.html

Question 9

Explanation

JSON data is written as name/value pairs.
A name/value pair consists of a field name (in double quotes), followed by a colon, followed by a value:
"name":"Mark"

JSON can use arrays. Array values must be of type string, number, object, array, boolean or null. For example:
{
  "name": "John",
  "age": 30,
  "alive": true,
  "cars": ["Ford", "BMW", "Fiat"]
}

Question 10

Explanation

Agentless means that no software or agent needs to be installed on the client machines that are to be managed. Ansible is such an agentless tool. In contrast to an agentless tool, an agent-based tool requires software (an agent) to be installed on the client (-> answer D is not correct).

With an agentless tool, the master and managed nodes can communicate directly without the need for a high-level language interpreter, whereas an agent-based tool requires an interpreter to be installed on both master and slave nodes -> Answer C is not correct.

An agentless tool uses standard protocols, such as SSH, to push configurations down to a device (and it can be considered a “messaging system”).

Agentless tools like Ansible can communicate directly with slave nodes via SSH -> Answer B is not correct.

Therefore only answer A is left. In this answer, “messaging systems” should be understood as additional software packages installed on slave nodes to control them; agentless tools do not require them.

Question 11

Explanation

Yet Another Next Generation (YANG) is a language which is only used to describe data models (structure). It is not XML or JSON.

Question 12

Explanation

With Synchronous (sync yes), the CLI command in question is not executed until the policy exits. Whether or not the command runs depends on the value of the variable _exit_status: if _exit_status is 1, the command runs; if it is 0, the command is skipped.

Automation Questions 3

January 24th, 2021 digitaltut 6 comments

Question 1

Explanation

YANG (Yet Another Next Generation) is a data modeling language for the definition of data sent over network management protocols such as NETCONF and RESTCONF.

Question 2

Explanation

One of the best practices to secure REST APIs is using password hashes. Passwords must always be hashed to protect the system (or minimize the damage) even if it is compromised in some hacking attempt. There are many hashing algorithms that can prove really effective for password security, e.g. the PBKDF2, bcrypt and scrypt algorithms.

Other ways to secure REST APIs are: Always use HTTPS, Never expose information on URLs (Usernames, passwords, session tokens, and API keys should not appear in the URL), Adding Timestamp in Request, Using OAuth, Input Parameter Validation.

Reference: https://restfulapi.net/security-essentials/

We should not use MD5 or any plain SHA (SHA-1, SHA-256, SHA-512…) algorithm to hash passwords: they are fast, general-purpose hashes, which makes brute-force attacks against them much easier than against purpose-built, slow password-hashing algorithms.

Note: A brute-force attack is an attempt to discover a password by systematically trying every possible combination of letters, numbers, and symbols until you discover the one correct combination that works.
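For illustration, a salted PBKDF2 password hash can be produced with Python's standard hashlib module. This is a minimal sketch; the iteration count and salt size shown are example values, not a recommendation for production use:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, derived_key) using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt defeats precomputed (rainbow) tables
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password, salt, expected_key, iterations=100_000):
    """Re-derive the key from the stored salt and compare in constant time."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(key, expected_key)

salt, key = hash_password("s3cret-pass")
print(verify_password("s3cret-pass", salt, key))   # True
print(verify_password("wrong-pass", salt, key))    # False
```

Only the salt and derived key are stored; the plaintext password is never kept, which is exactly why a compromised database does little damage.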

Question 3

Explanation

The example below shows the usage of lock command:

from ncclient import manager

def demo(host, user, names):
    with manager.connect(host=host, port=22, username=user) as m:
        with m.locked(target='running'):
            for n in names:
                # `template` is an XML configuration template defined elsewhere
                m.edit_config(target='running', config=template % n)

The call m.locked(target='running') causes a lock to be acquired on the running datastore.

Question 4

Explanation

The most common implementations of OAuth (OAuth 2.0) use one or both of these tokens:
+ access token: sent like an API key, it allows the application to access a user’s data; optionally, access tokens can expire.
+ refresh token: optionally part of an OAuth flow, refresh tokens retrieve a new access token if they have expired. OAuth2 combines Authentication and Authorization to allow more sophisticated scope and validity control.

Question 5

Explanation

Ansible works by connecting to your nodes and pushing out small programs, called “Ansible modules” to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules (over SSH by default), and removes them when finished.

Chef is a much older, mature solution for configuration management. Unlike Ansible, it requires the installation of an agent, named chef-client, on each server. Also, unlike Ansible, it has a Chef server from which each client pulls its configuration.

Question 6

Explanation

Accept and Content-Type are both headers sent from a client (a browser) to a service.
The Accept header is a way for a client to specify the media type of the response content it is expecting, while Content-Type specifies the media type of the request body being sent from the client to the server.

The response was sent in XML, so we can say the Accept header sent was application/xml.
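As a minimal sketch with Python's standard library (the URL below is hypothetical, not from the question), a client asking for an XML response would set the Accept header like this:

```python
import urllib.request

# Build a request that asks the server to reply with XML.
# The URL is a placeholder for illustration only.
req = urllib.request.Request(
    "https://api.example.com/devices/1",
    headers={"Accept": "application/xml"},
)

# The request object is built locally; nothing has been sent yet.
print(req.get_header("Accept"))   # application/xml
```

If the client were instead POSTing an XML body, it would additionally set Content-Type: application/xml to describe the request payload.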

Question 7

Explanation

Customer needs are fast evolving. Typically, a network center is a heterogeneous mix of various devices at multiple layers of the network, and bulk, automatic configurations need to be accomplished. CLI scraping is neither flexible nor optimal: re-writing scripts many times, even for small configuration changes, is cumbersome, and bulk configuration changes through CLIs are error-prone and may cause system issues.

The solution lies in using data models: a programmatic and standards-based way of writing configurations to any network device, replacing the process of manual configuration. Data models are written in a standard, industry-defined language. Although configurations using CLIs are easier (more human-friendly), automating the configuration using data models results in scalability.

Question 8

Explanation

Missing Data Model RPC Error Reply Message
If a request is made for a data model that doesn’t exist on the Catalyst 3850 or a request is made for a leaf that is not implemented in a data model, the Server (Catalyst 3850) responds with an empty data response. This is expected behavior.

Reference: https://www.cisco.com/c/en/us/support/docs/storage-networking/management/200933-YANG-NETCONF-Configuration-Validation.html

Question 9

Explanation

ncclient is a Python library that facilitates client-side scripting and application development around the NETCONF protocol.

The above Python snippet uses the ncclient to connect and establish a NETCONF session to a Nexus device (which is also a NETCONF server).

Question 10

Question 11

Explanation

JSON syntax structure:
+ uses curly braces {} to hold objects and square brackets [] to hold arrays
+ JSON data is written as key/value pairs
+ A key/value pair consists of a key (must be a string in double quotation marks ""), followed by a colon :, followed by a value. For example: "name":"John"
+ Each key must be unique
+ Values must be of type string, number, object, array, boolean or null
+ Multiple key/value pairs within an object are separated by commas ,

JSON can use arrays. Arrays are used to store multiple values in a single variable. For example:

{
  "name":"John",
  "age":30,
  "cars":[ "Ford", "BMW", "Fiat" ]
}

In the above example, "cars" is an array which contains three values "Ford", "BMW" and "Fiat".

Note: Although the correct answer above does not use curly braces to hold an object, it is still the best choice here.

Miscellaneous Questions

January 23rd, 2021 digitaltut 13 comments

Question 1

Explanation

Although some Cisco webpages (like this one) mention the “logging synchronous” command in global configuration mode (that is, Router(config)#logging synchronous), in fact we cannot use it in global configuration mode; it is only available in line configuration mode. Therefore answer C is better than answer A.

Let’s see how the “logging synchronous” command affects typing at the CLI:

Without this command, a message may pop up while you are typing, and if the message is long you may lose track of what you typed. When you try to erase (backspace) your command, you realize you are erasing the message instead.

without_logging_synchronous.jpg

With this command enabled, when a message pops up, your partially typed command is reprinted on a new line, which is very convenient:

with_logging_synchronous.jpg

Question 2

Explanation

The Link Management Protocol (LMP) performs the following functions:
+ Verifies link integrity by establishing bidirectional traffic forwarding, and rejects any unidirectional links
+ Exchanges periodic hellos to monitor and maintain the health of the links
+ Negotiates the version of the StackWise Virtual header between the switches
+ Performs StackWise Virtual link role resolution

Reference: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9000/nb-06-cat-9k-stack-wp-cte-en.html

Question 3

Explanation

Syslog levels are listed below:

Level | Keyword       | Description
0     | emergencies   | System is unusable
1     | alerts        | Immediate action is needed
2     | critical      | Critical conditions exist
3     | errors        | Error conditions exist
4     | warnings      | Warning conditions exist
5     | notification  | Normal, but significant, conditions exist
6     | informational | Informational messages
7     | debugging     | Debugging messages

Number “3” in “%LINK-3-UPDOWN” is the severity level of this message so in this case it is “errors”.
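The severity embedded in a %FACILITY-SEVERITY-MNEMONIC message can also be extracted programmatically; a minimal sketch in Python (the regular expression is a simplification of the real message format):

```python
import re

# Index in this list corresponds to the numeric syslog severity level
SEVERITIES = ["emergencies", "alerts", "critical", "errors",
              "warnings", "notification", "informational", "debugging"]

def severity_of(message):
    """Pull the severity keyword out of a %FACILITY-SEVERITY-MNEMONIC message."""
    match = re.search(r"%\w+-(\d)-\w+", message)
    if not match:
        raise ValueError("not a syslog-style message")
    return SEVERITIES[int(match.group(1))]

msg = "%LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to down"
print(severity_of(msg))   # errors
```

Running it on the message from the question maps the digit 3 to the “errors” severity, matching the table above.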

Question 4

Explanation

The application residing on Device 1 originates an RSVP message called Path, which is sent to the same destination IP address as the data flow for which a reservation is requested (that is, 10.60.60.60) and is sent with the “router alert” option turned on in the IP header. The Path message contains, among other things, the following objects:

–The “sender T-Spec” (traffic specification) object, which characterizes the data flow for which a reservation will be requested. The T-Spec basically defines the maximum IP bandwidth required for a call flow using a specific codec. The T-Spec is typically defined using values for the data flow’s average bit rate, peak rate, and burst size.

Reference: https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/srnd/9x/uc9x/cac.html

Question 5

Explanation

Cisco IOS Nonstop Forwarding (NSF) always runs with stateful switchover (SSO) and provides redundancy for Layer 3 traffic.

Reference: https://www.cisco.com/en/US/docs/switches/lan/catalyst3850/software/release/3se/consolidated_guide/b_consolidated_3850_3se_cg_chapter_01101110.pdf

Drag Drop Questions

January 22nd, 2021 digitaltut 18 comments

Question 1

Explanation

OSPF metric is only dependent on the interface bandwidth & reference bandwidth while EIGRP metric is dependent on bandwidth and delay by default.

Both OSPF and EIGRP have three tables to operate: neighbor table (store information about OSPF/EIGRP neighbors), topology table (store topology structure of the network) and routing table (store the best routes).
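As a rough illustration of the metric difference, both default calculations can be sketched in a few lines. This assumes OSPF's default 100 Mbps reference bandwidth and the classic (pre-wide-metric) EIGRP formula with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0):

```python
def ospf_cost(interface_bw_kbps, reference_bw_kbps=100_000):
    """OSPF cost = reference bandwidth / interface bandwidth (minimum 1)."""
    return max(1, reference_bw_kbps // interface_bw_kbps)

def eigrp_metric(min_bw_kbps, total_delay_usec):
    """Classic EIGRP metric with default K values:
    256 * (10^7 / minimum-bandwidth-kbps + cumulative-delay / 10)."""
    return 256 * (10_000_000 // min_bw_kbps + total_delay_usec // 10)

print(ospf_cost(10_000))            # 10 Mbps link -> cost 10
print(eigrp_metric(10_000, 1000))   # depends on both bandwidth and delay
```

Note how changing only the delay argument changes the EIGRP metric, while the OSPF cost depends solely on the bandwidths.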

Question 2

Question 3

Explanation

The following diagram illustrates the key difference between traffic policing and traffic shaping. Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate.

traffic_policing_vs_shaping.jpg

Note: Committed information rate (CIR): The minimum guaranteed data transfer rate agreed to by the routing device.
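Policing is commonly implemented with a token bucket. This minimal sketch (a simplified single-rate, drop-only policer, not any specific IOS implementation) shows why bursts propagate up to the bucket size while everything beyond the refill rate is dropped:

```python
class TokenBucketPolicer:
    """Single-rate policer: tokens refill at `rate` bytes/sec, capped at `burst` bytes."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes   # bucket starts full
        self.last = 0.0

    def conform(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # packet conforms: transmit
            return True
        return False                      # packet exceeds: drop (or remark)

policer = TokenBucketPolicer(rate_bytes_per_sec=1000, burst_bytes=1500)
print(policer.conform(1500, now=0.0))   # True  - initial burst fits in the bucket
print(policer.conform(1500, now=0.0))   # False - bucket empty, packet dropped
print(policer.conform(1000, now=1.0))   # True  - one second refilled 1000 tokens
```

A shaper would use the same bucket but queue the non-conforming packet and send it once enough tokens accumulate, which is what smooths the output rate at the cost of delay.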

Question 4

Explanation

ITR is the function that maps the destination EID to a destination RLOC and then encapsulates the original packet with an additional header that has the source IP address of the ITR RLOC and the destination IP address of the RLOC of an Egress Tunnel Router (ETR). After the encapsulation, the original packet becomes a LISP packet.

ETR is the function that receives LISP-encapsulated packets, decapsulates them and forwards them to its local EIDs. This function also requires EID-to-RLOC mappings, so we need to configure a map server IP address and the key (password) for authentication.

A LISP proxy ETR (PETR) implements ETR functions on behalf of non-LISP sites. A PETR is typically used when a LISP site needs to send traffic to non-LISP sites but the LISP site is connected through a service provider that does not accept nonroutable EIDs as packet sources. PETRs act just like ETRs but for EIDs that send traffic to destinations at non-LISP sites.

Map Server (MS): processes the registration of authentication keys and EID-to-RLOC mappings. ETRs send periodic Map-Register messages to all their configured Map Servers.

Map Resolver (MR): a LISP component which accepts LISP-encapsulated Map-Requests, typically from an ITR, and quickly determines whether or not the destination IP address is part of the EID namespace.

Question 5

Explanation

Unlike OSPF where we can summarize only on ABR or ASBR, in EIGRP we can summarize anywhere.

Manual summarization can be applied anywhere in the EIGRP domain, on every router and on every interface, via the ip summary-address eigrp as-number address mask [administrative-distance] command (for example: ip summary-address eigrp 1 192.168.16.0 255.255.248.0). The summary route exists in the routing table as long as at least one more specific route exists; if the last specific route disappears, the summary route fades out as well. The metric used by an EIGRP manual summary route is the minimum metric of the specific routes.

Question 6

Explanation

When Secure Vault is not in use, all information stored in its container is encrypted. When a user wants to use the files and notes stored within the app, they have to first decrypt the database. This happens by filling in a previously determined Security Lock – which could be a PIN or a password of the user’s choosing.

When a user leaves the app, it automatically encrypts everything again. This way all data stored in Secure Vault is decrypted only while a user is actively using the app. In all other instances, it remains locked to any attacker, malware or spyware trying to access the data.

How token-based authentication works: Users log in to a system and – once authenticated – are provided with a token to access other services without having to enter their username and password multiple times. In short, token-based authentication adds a second layer of security to application, network, or service access.

OAuth is an open standard for authorization used by many APIs and modern applications. The simplest example of OAuth is when you go to log onto a website and it offers one or more opportunities to log on using another website’s/service’s logon. You then click on the button linked to the other website, the other website authenticates you, and the website you were originally connecting to logs you on itself afterward using permission gained from the second website.

Question 7

Explanation

To attach a policy map to an input interface, a virtual circuit (VC), an output interface, or a VC that will be used as the service policy for the interface or VC, use the service-policy command in the appropriate configuration mode.

Class of Service (CoS) is a 3-bit field within an Ethernet frame header when we use 802.1Q, which supports virtual LANs on an Ethernet network. This field specifies a priority value between 0 and 7 inclusive, which can be used by Quality of Service (QoS) to differentiate traffic.

The Differentiated Services Code Point (DSCP) is a 6-bit field in the IP header for the classification of packets. Differentiated Services is a technique which is used to classify and manage network traffic and it helps to provide QoS for modern Internet networks. It can provide services to all kinds of networks.

Traffic policing is also known as rate limiting as it propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs.

Traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time -> It causes delay.

Question 8

Question 9

Question 10

Explanation

+ StealthWatch: performs security analytics by collecting network flows via NetFlow
+ ESA: email security solution which protects against email threats like ransomware, business email compromise, phishing, whaling, and many other email-driven attacks
+ AMP for Endpoints (AMP4E): provides malware protection on endpoints
+ Umbrella: provides DNS protection by blocking malicious destinations using DNS
+ Firepower Threat Defense (FTD): provides a comprehensive suite of security features such as firewall capabilities, monitoring, alerts, Intrusion Detection System (IDS) and Intrusion Prevention System (IPS).

Drag Drop Questions 2

January 22nd, 2021 digitaltut 3 comments

Question 1

Explanation

EIGRP maintains alternative loop-free backup paths via feasible successors. To qualify as a feasible successor, a neighbor must report an Advertised Distance (AD) less than the Feasible Distance (FD) of the current successor route.

Advertised distance (AD): the cost from the neighbor to the destination.
Feasible distance (FD): The sum of the AD plus the cost between the local router and the next-hop router
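The feasibility condition can be expressed directly in code. A minimal sketch (the neighbor names and distances below are hypothetical values for illustration):

```python
def feasible_successors(successor_fd, candidates):
    """Return neighbors whose Advertised Distance is strictly below
    the Feasible Distance of the current successor route."""
    return [name for name, ad in candidates.items() if ad < successor_fd]

# Hypothetical EIGRP topology-table entries: neighbor -> Advertised Distance
candidates = {"R2": 2000, "R3": 4000, "R4": 1000}
successor_fd = 3000   # Feasible Distance of the current successor route

print(feasible_successors(successor_fd, candidates))   # ['R2', 'R4']
```

R3 is excluded because its AD (4000) is not below the successor's FD (3000): routing through R3 could loop back through the local router.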

Question 2

Explanation

There are four messages sent between the DHCP Client and DHCP Server: DHCPDISCOVER, DHCPOFFER, DHCPREQUEST and DHCPACKNOWLEDGEMENT. This process is often abbreviated as DORA (for Discover, Offer, Request, Acknowledgement).

Question 3